- Written by Thomas Weise
We are happy to announce that the submissions for our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) will be open for two more weeks, until December 1, 2017 (the notification deadline has moved to December 30 accordingly). The hosting event, the Tenth International Conference on Advanced Computational Intelligence 2018 (ICACI 2018), has extended its submission deadline as well. The ICACI conference is organized by the IEEE and will take place in the beautiful city of Xiamen in China from March 29 to 31, 2018. Besides being a very interesting conference with seven special sessions and workshops, ICACI also features an exciting list of top-level speakers, such as Kay Chen Tan, Jun Wang, and Zhi-Hua Zhou.
All accepted papers of our workshop will be included in the Proceedings of ICACI 2018, published by IEEE Press and indexed by EI. Authors of selected papers will be invited to submit extended versions of their papers to the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing journal by Elsevier B.V., indexed by EI and SCIE. The Special Issue Call for Papers (CfP) is available for download both in PDF format and as a plain text file.
The BOCIA workshop provides a forum for researchers to discuss all issues related to the benchmarking and performance comparison of Computational Intelligence methods, including algorithms for optimization, Machine Learning, data mining, operations research, Big Data, and Deep Learning, as well as Evolutionary Computation and Swarm Intelligence. Most of these fields have in common that their algorithms need to balance the quality of the solutions they produce against the time required to discover them. Performance thus has two dimensions: time and quality. A rule of thumb is that a higher computational budget hopefully lets us attain better solutions, but this depends strongly both on the algorithm we use and on the problem we try to solve. Some algorithms are better, some are worse; indeed, some setups of the same algorithm may be good while others are bad. Some problems are harder, some are easier; indeed, some instances of the same problem may be harder than others (say, a Traveling Salesman Problem where all cities lie on a circle is easier than one where the cities are randomly distributed). In practice, we want to solve the problems at hand in the most efficient way, i.e., to pick the right algorithm setup for the right problem instance. Since we usually cannot determine which way is most efficient by theoretical considerations alone, experiments are needed: benchmarking. Benchmarking is also necessary for rigorous research in these domains, since we can only improve algorithms if we understand their mutual advantages and disadvantages, understand which features make problems hard or easy for them, and know which setup parameters have which impact on performance.
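The time-quality trade-off above can be made concrete with a minimal sketch (hypothetical code, not part of the workshop material): pure random search on a TSP instance, logging the best tour length found after each objective function evaluation. The resulting "best quality so far versus budget consumed" curve is exactly the kind of raw data a benchmarking study collects.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed TSP tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def random_search(dist, budget, seed=0):
    """Sample random tours; after each evaluation, record the pair
    (evaluations consumed, best tour length found so far)."""
    rng = random.Random(seed)
    n = len(dist)
    best = float("inf")
    progress = []
    for evals in range(1, budget + 1):
        tour = list(range(n))
        rng.shuffle(tour)          # one random candidate solution
        best = min(best, tour_length(tour, dist))
        progress.append((evals, best))
    return progress
```

The recorded curve is non-increasing by construction: more budget can only improve (or keep) the best quality, which is the "rule of thumb" from the text in its simplest form.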
There are many interesting issues involved in benchmarking, such as how to design experiments, how to extract useful information from large sets of experimental results, how to visualize results, down to what should be measured and how results should be stored and documented, and up again to questions such as whether we can build models that predict how well an algorithm will likely perform on a new problem based on the features of that problem. While this field was underrated for a long time, its importance is increasingly recognized, and it is now widely considered one of the most important "construction sites" in Computational Intelligence. It is not sufficient to just develop more and more algorithms; we also need an exact understanding of their mutual advantages and disadvantages. This is vital for practical applications as well. This development is manifested in the fact that major parts of large international projects such as the COST Action CA15140: ImAppNIO revolve around it. Also, the best algorithms in many domains now use insights into algorithm performance to automatically select the right strategy for a given problem: algorithm selection, portfolios, and configuration all draw from research on benchmarking (see, e.g., the international COSEAL group). The field is rapidly gaining traction and offers many challenges, because the analysis and benchmarking of Computational Intelligence algorithms is itself an application of Computational Intelligence!
Our workshop will be one of the first general events for this field, and the only one which brings together experts from all fields of Computational Intelligence who are interested in algorithm comparison, evaluation, benchmarking, analysis, configuration, and performance modeling.
We are looking forward to meeting you in March 2018 in Xiamen.
- Written by Thomas Weise
Our institute welcomes Prof. Dr. Jörg Lässig and Mr. Markus Ullrich for a research stay at our group from November 19 to 26, 2017. Prof. Lässig is the head of the Enterprise Application Development (EAD) group of the Faculty of Electrical Engineering and Computer Science of the University of Applied Sciences Zittau/Görlitz (HSZG, Hochschule Zittau/Görlitz), located in Görlitz, Germany, and of an IT security group with the Fraunhofer Society. Mr. Ullrich is a PhD student under his co-supervision and a researcher at the EAD group.
Our team and the EAD group share a history of collaboration dating back quite a few years. Together, we have analyzed several classical optimization problems from logistics and scheduling, including scheduling against due dates and windows as well as the Traveling Salesman Problem (TSP). We have also jointly contributed works on Evolutionary Computation in general. An important aspect of our work has always been the benchmarking of optimization algorithms: we work together on the TSP Suite, a framework for implementing and comparing algorithms for the TSP. Delegations from the EAD already visited us in China in 2013 and 2015, while Prof. Weise visited the EAD in 2016 and 2017.
It is therefore a particular pleasure to be able to host Jörg and Markus at our group, especially since they both also gave presentations on state-of-the-art developments in their fields:
- "An Application Meta-Model to Support the Execution and Benchmarking of Scientific Applications in Multi-Cloud Environments" by Mr. Markus Ullrich, 2017-11-21 14:00, building 35, room 308 [poster]
- "Understanding Quantum Computing - Computing Model and Algorithms" by Prof. Dr. Jörg Lässig, 2017-11-21 15:10, building 35, room 308 [poster]
Prof. Dr. Jörg Lässig is a Full Professor at the Department of Computer Science at the University of Applied Sciences Zittau/Görlitz (HSZG). He studied Computer Science and Computational Physics and received his Ph.D. for work on efficient algorithms and models for the generation and control of cooperation networks at Chemnitz University of Technology. As postdoc he worked in projects at the International Computer Science Institute at Berkeley, California and at the Università della Svizzera italiana in Lugano, Switzerland. His EAD research group at HSZG and his IT security group with the Fraunhofer Society are focusing on topics concerned with intelligent data driven technologies for state-of-the-art IT infrastructures and services. Prof. Lässig is also a co-chair of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) and a co-guest editor of the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Computational Intelligence Journal with Profs. Thomas Weise, Bin Li (USTC), Markus Wagner (University of Adelaide, Australia) and Xingyi Zhang (Anhui University).
Mr. Markus Ullrich is currently a PhD student at Technische Universität Chemnitz and a research associate at the University of Applied Sciences Zittau/Görlitz where he received his M.S. and B.S. in Computer Science in 2012 and 2010 respectively. From 2009 to 2012, he worked as a software developer for the Decision Optimization GmbH where he developed and tested data mining algorithms for predictive maintenance. He spent three months at the National Institute of Informatics in Tokyo, Japan during an internship where he worked on the modeling of applications and resources in cloud environments. His current research interests are data mining and cloud computing as well as the simulation and modeling of complex distributed systems.
- Written by Thomas Weise
From October 18 to 27, 2017, our Master's Student Ms. Qi Qi [齐琪] of the University of Science and Technology of China (USTC) [中国科学技术大学] conducted an invited research stay at the Chair of System Simulation of the Department of Informatics of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) in Erlangen, Germany.
The Chair for System Simulation performs research on the modeling and the efficient simulation and optimization of complex systems in science and engineering. Its focus is on the design and analysis of algorithms and tools for these purposes. Since most simulations of complex systems are computationally heavy, the work of the group is centered around diverse high-performance computing (HPC) techniques. They
- research HPC models by developing tailored simulation algorithms for physical applications, three-phase and thermal free flows based on the lattice Boltzmann method, and multi-level algorithms,
- research scientific computing, including computational optics, numerical analysis, and related HPC methods, and
- develop HPC software, including visualisations of simulation results, a framework for the simulation of fluid scenarios based on the lattice Boltzmann method, and software for rigid body dynamics.
Ms. Qi was invited by Prof. Dr. Harald Köstler, whose main research interest lies in the latter point. He works on the ExaStencils project for Advanced Stencil-Code Engineering. Stencil codes are compute-intensive algorithms in which data points in a grid are redefined repeatedly as a combination of the values of neighboring points. The neighborhood pattern used is called a stencil. Stencil codes are used for solving discretized partial differential equations and the resulting linear systems.
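A stencil computation as described above can be illustrated with a small sketch (illustrative only, unrelated to the ExaStencils code base): a Jacobi iteration with the classic 5-point stencil, where each interior grid point is repeatedly replaced by the average of its four neighbors, as used for solving the discrete Laplace equation.

```python
def jacobi_step(grid):
    """One sweep of the 5-point stencil: each interior point becomes
    the average of its four neighbors; boundary values stay fixed."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return new

def solve_laplace(grid, sweeps):
    """Repeat the stencil sweep; the grid converges toward the solution
    of the discrete Laplace equation for the given boundary values."""
    for _ in range(sweeps):
        grid = jacobi_step(grid)
    return grid
```

Because every point depends only on a fixed local neighborhood, such sweeps parallelize well, which is why stencil codes are a core HPC workload.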
- Written by Thomas Weise
Our institute welcomes Dr. Markus Wagner, Senior Lecturer from the Optimisation and Logistics Group of the School of Computer Science of The University of Adelaide, SA, Australia, for a research visit from October 24 to November 2. His stay is supported by the Australia-China Young Scientists Exchange Program 2017 (YSEP) [中澳青年科学家交流计划] organized by the China Science and Technology Exchange Center (CSTEC) [中国科学技术交流中心] and the Australian Academy of Technology and Engineering (ATSE).
The members of the Optimisation and Logistics Group in Adelaide research optimization methods that are frequently used to solve hard and complex optimization problems. These include linear programming, branch and bound, genetic algorithms, evolution strategies, genetic programming, ant colony optimization, local search, and others. The areas of interest of Dr. Wagner are heuristic optimization and applications thereof. His work draws on computational complexity analysis and on performance landscape analysis.
Dr. Wagner and the members of our institute will spend most of his visit on joint research and on developing future joint projects and collaborations. Additionally, he will visit our colleagues Bin Li at USTC and Xingyi Zhang at Anhui University and give two presentations open to any interested listeners:
- Approximation-Guided Many-Objective Optimisation and the Travelling Thief Problem at Anhui University (AHU) [安徽大学], co-invited by the IEEE CIS Hefei Chapter [slides]
- Two Real-World Optimisation Problems Related to Energy at our group [slides (android energy consumption), slides (wave energy)]
Dr. Markus Wagner is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. He did his PhD studies at the Max Planck Institute for Informatics in Saarbrücken, Germany, and at the University of Adelaide, Australia. His research topics range from mathematical runtime analysis of heuristic optimization algorithms and theory-guided algorithm design to applications of heuristic methods to renewable energy production, professional team cycling, and software engineering. So far, he has served as a program committee member 30 times, and he has written over 70 articles with over 70 different co-authors. He has chaired several education-related committees within the IEEE CIS, is Co-Chair of ACALCI 2017 and General Chair of ACALCI 2018. Dr. Wagner is also a co-chair of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) and a co-guest editor of the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Computational Intelligence Journal with Profs. Thomas Weise, Bin Li (USTC), Xingyi Zhang (Anhui University), and Jörg Lässig (University of Applied Sciences Zittau/Görlitz).
- Written by Thomas Weise
Today, Prof. Thomas Weise joined the COnfiguration and SElection of ALgorithms group (COSEAL). COSEAL is an international group of researchers with focus on algorithm selection and algorithm configuration. These two topics are very closely related to our research direction Rigorous Performance Analysis of Algorithms, in particular to the Optimization Benchmarking project.
In order to find the best way to solve an optimization problem, we need to use the best algorithm and the best setup of this algorithm. Algorithm setups have both static and dynamic components, both of which are vital for the algorithm performance. Static parameters do not change during the algorithm run, common examples are (static) population sizes of Evolutionary Algorithms or the tabu tenure of Tabu Search. Dynamic parameters can change all the time, either according to a fixed schedule (like the temperature in Simulated Annealing) or in a self-adaptive way (like the mutation step size in a (1+1) Evolution Strategy with the 1/5th rule). We may also choose the algorithm to apply based on some problem features or dynamically assign runtime to different algorithms based on their progress, i.e., construct some form of meta-algorithm. COSEAL aims to find good algorithm configurations, both dynamic and static, as well as algorithm selection methods.
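The dynamic, self-adaptive parameter mentioned above can be sketched concretely (a minimal illustration, not COSEAL code): a (1+1) Evolution Strategy minimizing the sphere function, where the mutation step size sigma is adapted by the 1/5th success rule - grow sigma when more than one fifth of recent mutations succeeded, shrink it otherwise.

```python
import random

def one_plus_one_es(dim, iterations, seed=0):
    """(1+1) Evolution Strategy on the sphere function f(x) = sum(x_i^2).
    sigma is a dynamic parameter, adapted by the 1/5th success rule."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sum(v * v for v in x)
    sigma = 1.0        # mutation step size, changes during the run
    successes = 0
    for t in range(1, iterations + 1):
        y = [v + rng.gauss(0, sigma) for v in x]   # mutate parent
        fy = sum(v * v for v in y)
        if fy < fx:    # offspring is better: accept it, count a success
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:
            # adapt sigma: success rate above 1/5 (4 of 20) -> grow, else shrink
            sigma *= 1.5 if successes > 4 else 0.82
            successes = 0
    return fx, sigma
```

A static setup would keep sigma fixed for the whole run; here it shrinks as the search closes in on the optimum, which is exactly the kind of configuration choice that benchmarking must evaluate.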
Where does benchmarking come into play? First, better techniques for benchmarking may lead to better policies for dynamic algorithm configuration. Second, in order to know whether static or dynamic algorithm parameters and algorithm selection methods work well over different problems, we need benchmarking. Third, benchmarking may tell us the strengths and weaknesses of a setup or algorithm.
Due to this close relationship, several other members of COSEAL have also joined the program committee and even the chairing team of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA).