Today, Prof. Dr. Thomas Weise joined the Editorial Board of the Applied Soft Computing journal, which is published by Elsevier and indexed by SCI and EI, with an impact factor of 3.541 over the last two years and 3.811 over the previous five years. The topics of this journal and the research focus of our institute fit together very closely: we develop tailor-made applications of optimization, Operations Research, Computational Intelligence, Evolutionary Computation, Metaheuristics, as well as Machine Learning and Data Mining for industry partners, to help them become more efficient, faster, and more environmentally friendly while, at the same time, reducing costs and improving product and service quality. Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Soft computing is a collection of methodologies which aim to exploit the tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and low solution cost. The journal's focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and similar techniques to address real-world complexities.
The Black-Box Discrete Optimization Benchmarking Workshop (BB-DOB@PPSN) has just been accepted to be a part of the Fifteenth International Conference on Parallel Problem Solving from Nature (PPSN XV), September 8-12, 2018, in Coimbra, Portugal (http://ppsn2018.dei.uc.pt/).
The Black-Box Optimization Benchmarking (BBOB) methodology introduced by the long-standing and successful BBOB-GECCO workshop series has become a well-established standard for benchmarking continuous optimization algorithms. The aim of this workshop is to develop a similar standard methodology for benchmarking black-box optimization algorithms in discrete and combinatorial domains. The goal of this second edition of the BB-DOB workshop series is to define a suitable set of benchmark functions for discrete and combinatorial optimization problems.
We cordially invite the submission of original and unpublished research papers to our workshop. Here you can download the BB-DOB@PPSN Workshop Call for Papers (CfP) in PDF format and here as a plain text file. The deadline for workshop submissions is June 26, 2018.
The Black-Box Discrete Optimization Benchmarking Workshop (BB-DOB@GECCO) has just been accepted to be a part of the Genetic and Evolutionary Computation Conference (GECCO 2018), taking place on July 15-19, 2018, in Kyoto, Japan (http://gecco-2018.sigevo.org/).
Quantifying and comparing the performance of optimization algorithms is one important aspect of research in search and optimization. The Black-Box Optimization Benchmarking (BBOB) methodology introduced by the long-standing and successful BBOB-GECCO workshop series has become a well-established standard for benchmarking continuous optimization algorithms. The aim of this workshop is to develop a similar standard methodology for benchmarking black-box optimization algorithms in discrete and combinatorial domains. The goal of this first edition of the BB-DOB workshop series is to define a suitable set of benchmark functions for discrete and combinatorial optimization problems.
We cordially invite the submission of original and unpublished research papers to our workshop. Here you can download the BB-DOB@GECCO Workshop Call for Papers (CfP) in PDF format and here as a plain text file. The deadline for workshop submissions is April 3, 2018.
The University Curricula Committee of the IEEE Computational Intelligence Society (IEEE CIS) has created a new subreddit, named /r/CompIntellCourses, in order to build a collection of links to good-quality courses on "everything CI." Teachers all over the world are invited to submit their courses, provided the teaching material is available for free. This is a very nice way to produce an organized catalog of high-quality lectures in the field, making it easier for everybody to learn about CI technologies from scratch, or to keep up to date with new trends.
Our course Metaheuristic Optimization is one of the first courses to be included in this list. Metaheuristic optimization is concerned with solving computationally hard problems using soft computing methods and is therefore a field of Computational Intelligence. Our course is based on our previous course Practical Optimization Algorithm Design, taught by Prof. Weise at the University of Science and Technology of China (USTC) [中国科学技术大学], and on some of the units of the course Evolutionary Computation – Theory and Application, which was also taught by Prof. Weise at USTC until 2016. It is, however, significantly extended, improved, and adapted to serve as an introduction to and overview of the field for both graduate students and research team members in our group at Hefei University [合肥学院].
We are happy to announce that the submissions for our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) will be open for two more weeks, until December 1, 2017 (the notification deadline has moved to December 30 accordingly). The hosting event, the Tenth International Conference on Advanced Computational Intelligence 2018 (ICACI 2018), has extended its submission deadline as well. The ICACI conference is organized by the IEEE and will take place in the beautiful city of Xiamen in China from March 29 to 31, 2018. Besides being a very interesting conference with seven special sessions and workshops, ICACI also features an exciting list of top-level speakers, such as Kay Chen Tan, Jun Wang, and Zhi-Hua Zhou.
All accepted papers of our workshop will be included in the Proceedings of ICACI 2018, published by IEEE Press and indexed by EI. Authors of selected papers will be invited to submit extended versions of their papers to the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing journal by Elsevier B.V., indexed by EI and SCIE. Here you can download the Special Issue Call for Papers (CfP) in PDF format and here as a plain text file.
The BOCIA workshop provides a forum for researchers to discuss all issues related to the benchmarking and performance comparison of Computational Intelligence methods, including algorithms for optimization, Machine Learning, data mining, Operations Research, Big Data, and Deep Learning, as well as Evolutionary Computation and Swarm Intelligence. Most of these fields have in common that the algorithms developed in them need to balance the quality of the solutions they produce with the time they require to discover them. Performance therefore has two dimensions: time and quality. A rule of thumb is that a higher computational budget can hopefully yield better solutions, but this depends strongly both on the algorithm we use and on the problem we try to solve. Some algorithms are better, some are worse; indeed, some setups of the same algorithm may be good while others are bad. Some problems are harder, some are easier; indeed, some instances of the same problem may be harder than others (say, a Traveling Salesman Problem where all cities lie on a circle is easier than one where the cities are randomly distributed). In practice, we want to solve the problems at hand in the most efficient way, i.e., to pick the right algorithm setup for the right problem instance. Since we usually cannot determine which way is most efficient from theoretical considerations alone, experiments are needed: benchmarking. Benchmarking is also necessary for rigorous research in these domains, since we can only improve algorithms if we understand their mutual advantages and disadvantages, understand which features make problems hard or easy for them, and know which setup parameters have which impact on performance.
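The quality-over-time view described above can be illustrated with a minimal benchmarking sketch. The code below is not from any of the benchmark suites discussed here; it is a hypothetical example that runs a simple swap-based hill climber on the two TSP instances mentioned (cities on a circle vs. cities placed uniformly at random) and records, for each objective-function evaluation, the best tour length found so far. Such a trace is exactly the kind of raw data that benchmarking methodologies like BBOB collect and aggregate. The instance sizes, budget, and seeds are arbitrary choices for illustration.

```python
import math
import random

def tour_length(cities, tour):
    """Total length of the closed tour visiting the city coordinates in order."""
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def hill_climb(cities, budget, rng):
    """Simple 2-swap hill climber.

    Returns the 'best tour length so far' after each of the `budget`
    objective-function evaluations, i.e., a quality-over-time trace.
    """
    n = len(cities)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour_length(cities, tour)
    trace = []
    for _ in range(budget):
        i, j = rng.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]      # try swapping two cities
        candidate = tour_length(cities, tour)
        if candidate <= best:
            best = candidate                      # keep the improvement
        else:
            tour[i], tour[j] = tour[j], tour[i]  # undo the worsening swap
        trace.append(best)
    return trace

n = 16
# Instance 1: all cities on a circle (an "easy" instance).
circle = [(math.cos(2 * math.pi * k / n),
           math.sin(2 * math.pi * k / n)) for k in range(n)]
# Instance 2: cities uniformly at random in the unit square (typically harder).
rng = random.Random(42)
random_instance = [(rng.random(), rng.random()) for _ in range(n)]

trace_circle = hill_climb(circle, 2000, random.Random(1))
trace_random = hill_climb(random_instance, 2000, random.Random(1))
# Each trace is monotonically non-increasing: quality never gets worse
# as the consumed budget grows, which is the two-dimensional time/quality
# picture that benchmarking studies compare across algorithms and instances.
```

Comparing such traces across algorithms, parameter setups, and instances (and over many runs, not the single run shown here) is the core activity a standardized benchmarking methodology aims to make rigorous and reproducible.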
There are many interesting issues involved in benchmarking, such as how to design experiments, how to extract useful information from large sets of experimental results, how to visualize results, what should be measured in the first place, how to store and document results, and even questions such as whether we can build models that predict how well an algorithm will likely perform on a new problem based on that problem's features. While this field was underrated for a long time, its importance is increasingly recognized; it is widely considered one of the most important "construction sites" in Computational Intelligence. It is not sufficient to just develop more and more algorithms; we also need an exact understanding of their mutual advantages and disadvantages. This is vital for practical applications as well. This development is manifested in the fact that major parts of large international projects such as the COST Action CA15140: ImAppNIO revolve around it. Also, the best algorithms in many domains now use insights into algorithm performance to automatically select the right strategy for a given problem: algorithm selection, portfolios, and configuration all draw from research on benchmarking (see, e.g., the international COSEAL group). The field is rapidly gaining traction and offers many challenges, because the analysis and benchmarking of Computational Intelligence algorithms is an application of Computational Intelligence as well!
Our workshop will be one of the first general events for this field, and the only one which brings together experts from all fields of Computational Intelligence who are interested in algorithm comparison, evaluation, benchmarking, analysis, configuration, and performance modeling.
We are looking forward to meeting you in March 2018 in Xiamen.