The Black-Box Discrete Optimization Benchmarking Workshop (BB-DOB@PPSN) has just been accepted to be a part of the Fifteenth International Conference on Parallel Problem Solving from Nature (PPSN XV), taking place on September 8-12, 2018, in Coimbra, Portugal (http://ppsn2018.dei.uc.pt/).
Quantifying and comparing the performance of optimization algorithms is one important aspect of research in search and optimization. The Black-Box-Optimization Benchmarking (BBOB) methodology introduced by the long-standing and successful BBOB-GECCO workshop series has become a well-established standard for benchmarking continuous optimization algorithms. The aim of this workshop is to develop a similar standard methodology for the benchmarking of black-box optimization algorithms for discrete and combinatorial domains. The goal of this second edition of the BB-DOB workshop series is to define a suitable set of benchmark functions for discrete and combinatorial optimization problems.
Our workshop cordially invites the submission of original and unpublished research papers. Here you can download the BB-DOB@PPSN Workshop Call for Papers (CfP) in PDF format and here as a plain text file.
The Black-Box Discrete Optimization Benchmarking Workshop (BB-DOB@GECCO) has just been accepted to be a part of the Genetic and Evolutionary Computation Conference (GECCO 2018), taking place on July 15-19, 2018, in Kyoto, Japan (http://gecco-2018.sigevo.org/).
Quantifying and comparing the performance of optimization algorithms is one important aspect of research in search and optimization. The Black-Box-Optimization Benchmarking (BBOB) methodology introduced by the long-standing and successful BBOB-GECCO workshop series has become a well-established standard for benchmarking continuous optimization algorithms. The aim of this workshop is to develop a similar standard methodology for the benchmarking of black-box optimization algorithms for discrete and combinatorial domains. The goal of this first edition of the BB-DOB workshop series is to define a suitable set of benchmark functions for discrete and combinatorial optimization problems.
Our workshop cordially invites the submission of original and unpublished research papers. Here you can download the BB-DOB@GECCO Workshop Call for Papers (CfP) in PDF format and here as a plain text file. The deadline for workshop submissions will be March 27, 2018.
The University Curricula Committee of the IEEE Computational Intelligence Society (IEEE CIS) has created a new subreddit, named /r/CompIntellCourses, in order to build a collection of links to good-quality courses on "everything CI." Teachers all over the world are invited to submit their courses, provided that the teaching material is available for free. This is a very nice way to produce an organized catalog of high-quality lectures in the field, making it easier for everybody to learn about CI technologies from scratch, or to keep up-to-date with new trends.
Our course Metaheuristic Optimization is one of the first courses to be included in this list. Metaheuristic optimization is concerned with solving computationally hard problems using soft computing methods and therefore is a field of Computational Intelligence. Our course is based on our previous course Practical Optimization Algorithm Design, taught by Prof. Weise at the University of Science and Technology of China (USTC) [中国科学技术大学], and on some of the units of the course Evolutionary Computation – Theory and Application, which was also taught by Prof. Weise at USTC until 2016. It is, however, significantly extended, improved, and adapted to serve as an introduction to and overview of the field for both graduate students and research team members in our group at Hefei University [合肥学院].
We are happy to announce that the submissions for our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) will be open for two more weeks, until December 1, 2017 (the notification deadline has moved to December 30 accordingly). The hosting event, the Tenth International Conference on Advanced Computational Intelligence 2018 (ICACI 2018), has extended its submission deadline as well. The ICACI conference is organized by the IEEE and will take place in the beautiful city of Xiamen in China from March 29 to 31, 2018. Besides being a very interesting conference with seven special sessions and workshops, ICACI also features an exciting list of top-level speakers, such as Kay Chen Tan, Jun Wang, and Zhi-Hua Zhou.
All accepted papers of our workshop will be included in the Proceedings of the ICACI 2018, published by IEEE Press and indexed by EI. Authors of selected papers will be invited to submit extended versions of these papers to the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Computational Intelligence journal by Wiley Periodicals Inc., indexed by EI and SCI. Here you can download the BOCIA Workshop Call for Papers (CfP) in PDF format and here as a plain text file, while the Special Issue Call for Papers (CfP) is provided here in PDF format and here as a plain text file.
The BOCIA workshop provides a forum for researchers to discuss all issues related to the benchmarking and performance comparison of Computational Intelligence methods, including algorithms for optimization, Machine Learning, data mining, operations research, Big Data, and Deep Learning, as well as Evolutionary Computation and Swarm Intelligence. Most of these fields have in common that the algorithms developed in them must balance the quality of the solutions they produce against the time needed to discover them. Performance therefore has two dimensions: time and quality. A rule of thumb is that a higher computational budget will hopefully yield better solutions, but this depends strongly on both the algorithm we use and the problem we try to solve. Some algorithms are better, some are worse; indeed, some setups of the same algorithm may be good while others are bad. Some problems are harder, some are easier; indeed, some instances of the same problem may be harder than others (say, a Traveling Salesman Problem where all cities lie on a circle is easier than one where the cities are randomly distributed).

In practice, we want to solve the problems at hand in the most efficient way, i.e., to pick the right algorithm setup for the right problem instance. Since we usually cannot determine which way is most efficient by theoretical considerations alone, experiments are needed: benchmarking. Benchmarking is also necessary for rigorous research in these domains, since we can only improve algorithms if we understand their mutual advantages and disadvantages, understand which features make problems hard or easy for them, and know which setup parameters have which impact on performance.
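The instance-hardness point above can be illustrated with a tiny experiment (a minimal sketch, not part of the workshop materials; all function names are our own): we run a simple nearest-neighbor construction heuristic on two TSP instances of the same size, one with cities on a circle and one with cities placed uniformly at random, and compare the resulting tour lengths against a random-tour baseline.

```python
import math
import random

def tour_length(cities, tour):
    """Total Euclidean length of a closed tour over the given cities."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(cities):
    """Greedy nearest-neighbor construction heuristic, starting at city 0."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(last, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def benchmark(cities, trials=30, seed=1):
    """Return (heuristic tour length, mean length of random tours)."""
    rng = random.Random(seed)
    heuristic = tour_length(cities, nearest_neighbor(cities))
    rand_lengths = []
    for _ in range(trials):
        t = list(range(len(cities)))
        rng.shuffle(t)
        rand_lengths.append(tour_length(cities, t))
    return heuristic, sum(rand_lengths) / trials

n = 50
# Easy instance: cities evenly spaced on the unit circle.
circle = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]
# Harder instance: cities placed uniformly at random in the unit square.
rng = random.Random(42)
randomly_placed = [(rng.random(), rng.random()) for _ in range(n)]

for name, inst in [("circle", circle), ("random", randomly_placed)]:
    h, r = benchmark(inst)
    print(f"{name}: heuristic={h:.2f}, random-tour mean={r:.2f}")
```

On the circle instance the greedy heuristic simply walks around the circle and recovers the optimal tour (the perimeter), while on the random instance it merely does much better than the random baseline; the same algorithm, on instances of the same size, gives very different solution quality.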
There are many interesting issues involved in benchmarking, such as how to design experiments, how to extract useful information from large sets of experimental results, how to visualize results, down to what should be measured and how to store and document results, and again up to questions such as whether we can build models that predict how well an algorithm will likely perform on a new problem based on the features of that problem. While this field was under-rated for a long time, its importance is increasingly recognized: it is now widely considered one of the most important "construction sites" in Computational Intelligence. It is not sufficient to just develop more and more algorithms; we also need an exact understanding of their mutual advantages and disadvantages. This is vital for practical applications as well. This development is manifested in the fact that major parts of large international projects such as the COST Action CA15140: ImAppNIO revolve around it. Also, the best algorithms in many domains now use insights into algorithm performance to automatically select the right strategy for a given problem: algorithm selection, portfolios, and configuration all draw from research on benchmarking (see, e.g., the international COSEAL group). The field is rapidly gaining traction and offers many challenges, because the analysis and benchmarking of Computational Intelligence algorithms is itself an application of Computational Intelligence!
Our workshop will be one of the first general events for this field, and the only one which brings together experts from all fields of Computational Intelligence who are interested in algorithm comparison, evaluation, benchmarking, analysis, configuration, and performance modeling.
We are looking forward to meeting you in March 2018 in Xiamen.
Our institute welcomes Prof. Dr. Jörg Lässig and Mr. Markus Ullrich for a research stay at our group from November 19 to 26, 2017. Prof. Lässig is the head of the Enterprise Application Development (EAD) group of the Faculty of Electrical Engineering and Computer Science of the University of Applied Sciences Zittau/Görlitz (HSZG, Hochschule Zittau/Görlitz), located in Görlitz, Germany, and of an IT security group with the Fraunhofer Society. Mr. Ullrich is a PhD student under his co-supervision and a researcher in the EAD group.
Our team and the EAD group share a history of collaboration dating back quite a few years. Together, we have analyzed several classical optimization problems from logistics and scheduling, including scheduling against due dates and windows as well as the Traveling Salesman Problem (TSP). We have also jointly contributed works on Evolutionary Computation in general. An important aspect of our work has always been the benchmarking of optimization algorithms: we work together on the TSP Suite, a framework for implementing and comparing algorithms for the TSP. Delegations from the EAD already visited us in China in 2013 and 2015, while Prof. Weise visited the EAD in 2016 and 2017.
It is therefore a particular pleasure to be able to host Jörg and Markus at our group, especially since both of them also gave presentations on state-of-the-art developments in their fields:
- "An Application Meta-Model to Support the Execution and Benchmarking of Scientific Applications in Multi-Cloud Environments" by Mr. Markus Ullrich, 2017-11-21 14:00, building 35, room 308 [poster]
- "Understanding Quantum Computing - Computing Model and Algorithms" by Prof. Dr. Jörg Lässig, 2017-11-21 15:10, building 35, room 308 [poster]
Prof. Dr. Jörg Lässig is a Full Professor at the Department of Computer Science at the University of Applied Sciences Zittau/Görlitz (HSZG). He studied Computer Science and Computational Physics and received his Ph.D. for work on efficient algorithms and models for the generation and control of cooperation networks at Chemnitz University of Technology. As postdoc he worked in projects at the International Computer Science Institute at Berkeley, California and at the Università della Svizzera italiana in Lugano, Switzerland. His EAD research group at HSZG and his IT security group with the Fraunhofer Society are focusing on topics concerned with intelligent data driven technologies for state-of-the-art IT infrastructures and services. Prof. Lässig is also a co-chair of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) and a co-guest editor of the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Computational Intelligence Journal with Profs. Thomas Weise, Bin Li (USTC), Markus Wagner (University of Adelaide, Australia) and Xingyi Zhang (Anhui University).
Mr. Markus Ullrich is currently a PhD student at Technische Universität Chemnitz and a research associate at the University of Applied Sciences Zittau/Görlitz, where he received his M.S. and B.S. in Computer Science in 2012 and 2010, respectively. From 2009 to 2012, he worked as a software developer for Decision Optimization GmbH, where he developed and tested data mining algorithms for predictive maintenance. During an internship, he spent three months at the National Institute of Informatics in Tokyo, Japan, where he worked on the modeling of applications and resources in cloud environments. His current research interests are data mining and cloud computing as well as the simulation and modeling of complex distributed systems.