Our institute welcomes Dr. Markus Wagner, Senior Lecturer in the Optimisation and Logistics Group at the School of Computer Science of The University of Adelaide, SA, Australia, for a research visit from October 24 to November 2. His stay is supported by the Australia-China Young Scientists Exchange Program 2017 (YSEP) [中澳青年科学家交流计划], organized by the China Science and Technology Exchange Center (CSTEC) [中国科学技术交流中心] and the Australian Academy of Technology and Engineering (ATSE).

The members of the Optimisation and Logistics Group in Adelaide investigate optimization methods for solving hard and complex problems. These include linear programming, branch and bound, genetic algorithms, evolution strategies, genetic programming, ant colony optimization, local search, and others. Dr. Wagner's areas of interest are heuristic optimization and its applications. His work draws on computational complexity analysis and on performance landscape analysis.

Dr. Wagner and the members of our institute will spend most of his visit on joint research and on developing future joint projects and collaborations. Additionally, he will visit our colleagues Bin Li at USTC and Xingyi Zhang at Anhui University and give two presentations that are open to all interested listeners:

  1. Approximation-Guided Many-Objective Optimisation and the Travelling Thief Problem at Anhui University (AHU) [安徽大学], jointly invited by the IEEE CIS Hefei Chapter [slides]
  2. Two Real-World Optimisation Problems Related to Energy at our group [slides (Android energy consumption), slides (wave energy)]

Short Biography

Dr. Markus Wagner is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. He completed his PhD studies at the Max Planck Institute for Informatics in Saarbrücken, Germany, and at the University of Adelaide, Australia. His research topics range from mathematical runtime analysis of heuristic optimization algorithms and theory-guided algorithm design to applications of heuristic methods to renewable energy production, professional team cycling, and software engineering. So far, he has been a program committee member 30 times, and he has written over 70 articles with over 70 different co-authors. He has chaired several education-related committees within the IEEE CIS, is Co-Chair of ACALCI 2017, and is General Chair of ACALCI 2018. Dr. Wagner is also a co-chair of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) and a co-guest editor of the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Computational Intelligence Journal, together with Profs. Thomas Weise, Bin Li (USTC), Xingyi Zhang (Anhui University), and Jörg Lässig (University of Applied Sciences Zittau/Görlitz).

Today, Prof. Thomas Weise joined the COnfiguration and SElection of ALgorithms group (COSEAL). COSEAL is an international group of researchers with a focus on algorithm selection and algorithm configuration. These two topics are very closely related to our research direction Rigorous Performance Analysis of Algorithms, in particular to the Optimization Benchmarking project.

In order to find the best way to solve an optimization problem, we need to use the best algorithm and the best setup of this algorithm. Algorithm setups have both static and dynamic components, both of which are vital for the algorithm's performance. Static parameters do not change during the algorithm run; common examples are the (static) population sizes of Evolutionary Algorithms or the tabu tenure of Tabu Search. Dynamic parameters can change during the run, either according to a fixed schedule (like the temperature in Simulated Annealing) or in a self-adaptive way (like the mutation step size in a (1+1) Evolution Strategy with the 1/5th rule). We may also choose the algorithm to apply based on some problem features, or dynamically assign runtime to different algorithms based on their progress, i.e., construct some form of meta-algorithm. COSEAL aims to find good algorithm configurations, both dynamic and static, as well as algorithm selection methods.
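
To make the difference concrete, here is a minimal Java sketch (the sphere function, the dimension, and the budget are our own illustrative choices): the problem dimension and the iteration budget are static parameters, while the mutation step size sigma is a dynamic parameter that the (1+1) Evolution Strategy adapts on the fly via the 1/5th success rule.

    import java.util.Random;

    /** Minimal sketch: a (1+1) Evolution Strategy minimizing the sphere function.
     *  The dimension n and the budget are static parameters; the mutation step
     *  size sigma is a dynamic parameter adapted via the 1/5th success rule.
     *  Problem, constants, and budget are illustrative assumptions. */
    public class OnePlusOneES {

      static double sphere(double[] x) {          // f(x) = sum of squares, to be minimized
        double s = 0.0;
        for (double xi : x) { s += xi * xi; }
        return s;
      }

      public static void main(String[] args) {
        Random rnd = new Random(123L);
        int n = 10;                               // static: problem dimension
        int budget = 10_000;                      // static: number of iterations
        double sigma = 1.0;                       // dynamic: mutation step size

        double[] parent = new double[n];
        for (int i = 0; i < n; i++) { parent[i] = -5.0 + 10.0 * rnd.nextDouble(); }
        double parentF = sphere(parent);

        for (int t = 0; t < budget; t++) {
          double[] child = new double[n];
          for (int i = 0; i < n; i++) {           // Gaussian mutation with step size sigma
            child[i] = parent[i] + sigma * rnd.nextGaussian();
          }
          double childF = sphere(child);
          boolean success = (childF < parentF);
          if (success) { parent = child; parentF = childF; }
          // 1/5th success rule: grow sigma by a factor on success and shrink it
          // by the fourth root of that factor on failure, so the step size
          // settles where roughly one in five mutations is successful.
          sigma *= success ? 1.5 : Math.pow(1.5, -0.25);
        }
        System.out.println("best objective value found: " + parentF);
      }
    }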

Where does benchmarking come into play? First, better techniques for benchmarking may lead to better policies for dynamic algorithm configuration. Second, in order to know whether static or dynamic algorithm parameters and algorithm selection methods work well over different problems, we need benchmarking. Third, benchmarking may tell us the strengths and weaknesses of a setup or algorithm.

Due to this close relationship, several other members of COSEAL have also joined the program committee and even the chairing team of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA).

Optimization and Operations Research are about finding good solutions for computationally hard problems. Sometimes we cannot guarantee to find the best solutions, because finding them (or proving that they are best) may take too long. Then we look for good approximate solutions that can be found within reasonable time, i.e., our algorithm's performance is measured both in solution quality and in required runtime (and runtime may be measured in different ways). Often we have algorithms that start with an initial (probably bad) solution and improve it over time. This is often even the case in situations where we could actually guarantee to solve the problem perfectly given enough time (and we may stop earlier once we run out of time, losing the guarantee but at least having a solution). In Machine Learning and Datamining, the situation is quite similar, which is natural since many questions there can also be considered as special optimization tasks. In both fields, algorithms are often randomized, meaning that they may behave differently or return different results even if executed twice with the same input data.
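
The following minimal Java sketch (objective, dimension, and budget are illustrative assumptions) shows this anytime behavior: a pure random search keeps improving its best-so-far solution and records a (consumed function evaluations, best quality) pair at every improvement, so both performance dimensions are captured. Stopping earlier only costs quality, not validity, and a different random seed may lead to a different end result.

    import java.util.ArrayList;
    import java.util.Random;

    /** Minimal sketch of "anytime" behavior: pure random search keeps an initial
     *  (probably bad) solution and improves it over time. We log a pair
     *  (consumed function evaluations, best quality so far) at every improvement,
     *  so performance has two dimensions: quality and runtime. The objective,
     *  dimension, and budget are illustrative assumptions. */
    public class AnytimeRandomSearch {

      static double f(double[] x) {               // objective to minimize: sum of |x_i|
        double s = 0.0;
        for (double xi : x) { s += Math.abs(xi); }
        return s;
      }

      public static void main(String[] args) {
        long seed = (args.length > 0) ? Long.parseLong(args[0]) : 1L;
        Random rnd = new Random(seed);            // a different seed may yield a different result
        int n = 5, budget = 100_000;
        double bestF = Double.POSITIVE_INFINITY;
        ArrayList<String> progress = new ArrayList<>();

        for (int evals = 1; evals <= budget; evals++) {
          double[] x = new double[n];
          for (int i = 0; i < n; i++) { x[i] = -5.0 + 10.0 * rnd.nextDouble(); }
          double fx = f(x);
          if (fx < bestF) {                       // improvement: remember quality and "time"
            bestF = fx;
            progress.add(evals + "\t" + bestF);
          }
          // We could stop at any point and still return the best solution so far,
          // only losing any optimality guarantee, not the solution itself.
        }
        progress.forEach(System.out::println);    // best-so-far quality over consumed evaluations
      }
    }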

From the above scenario, it becomes clear that it may not be easy to know which algorithm is actually the best for a given scenario. On the one hand, we need to use some reasonably robust statistics. On the other hand, we may also need to clarify what "best" actually means, since we have at least two dimensions of performance (and reducing the algorithm performance to a single point measurement may lead to wrong conclusions). These are just two of the problems we face when evaluating new methods for optimization or Machine Learning. There are many more issues: How do we compare multi-objective optimization methods (where we have more than one quality dimension)? How do we compare parallel algorithms or programs running in a cloud or cluster? And how can we compare algorithms on a noisy problem, or in the face of uncertainties and the need for robustness?
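
As a small, hedged illustration of the first point, the sketch below (the stand-in randomized procedure and the number of runs are our own assumptions) executes 31 independent runs under the same budget and reports the median and quartiles of the end results instead of a single number:

    import java.util.Arrays;
    import java.util.Random;
    import java.util.function.LongToDoubleFunction;

    /** Minimal sketch: summarize the end results of a randomized algorithm over
     *  several independent runs with robust statistics (median and quartiles)
     *  instead of a single measurement. The "algorithm" here is only a stand-in:
     *  a tiny random search on a one-dimensional objective. */
    public class RobustRunStatistics {

      public static void main(String[] args) {
        // stand-in randomized algorithm: best |x| found by 1000 random samples in [-1, 1]
        LongToDoubleFunction oneRun = seed -> {
          Random rnd = new Random(seed);
          double best = Double.POSITIVE_INFINITY;
          for (int i = 0; i < 1000; i++) {
            best = Math.min(best, Math.abs(-1.0 + 2.0 * rnd.nextDouble()));
          }
          return best;
        };

        int runs = 31;                            // odd number: the median is an actual run result
        double[] results = new double[runs];
        for (int r = 0; r < runs; r++) { results[r] = oneRun.applyAsDouble(r); }

        Arrays.sort(results);                     // simple index-based quartile approximation
        System.out.println("lower quartile: " + results[runs / 4]);
        System.out.println("median:         " + results[runs / 2]);
        System.out.println("upper quartile: " + results[(3 * runs) / 4]);
      }
    }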

There are many more opportunities for interesting and good research: How can we visualize the results when comparing many algorithms on many problems? Can we model algorithm performance to guess how a method would perform on a new problem? Can we have simple theoretical frameworks which give us mathematically supported guidelines, limits, boundaries, or estimates for benchmarking? Or can we build automated approaches to answer high-level questions like: What features make a problem hard for a set of algorithms? Which parameters make an algorithm work best? Are there qualitatively different classes of problems and algorithms for a certain problem type?

With our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA), we try to provide a platform to discuss such topics, a setting where researchers can exchange thoughts on how to compare and analyze the performance of Computational Intelligence (CI) algorithms. We generously consider optimization, Operations Research, Machine Learning, Datamining, and Evolutionary Computation all as sub-fields of CI. The workshop will take place at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018) on March 29-31, 2018, in Xiamen, China.

If you are a researcher working on any related topic, we would be very happy if you would consider submitting a paper by December 1, 2017, via the submission page. Here you can download the Call for Papers (CfP) in PDF format.

Yesterday, we published a new course titled "Metaheuristic Optimization". On the course website, you can find all slides and example algorithm implementations, and you can download the complete course material as a single tar.xz archive (which can be opened under Windows and Linux) from here. As the third completely published course of our institute, after the courses on "Object-Oriented Programming with Java" and "Distributed Computing", this marks another milestone in our striving for teaching excellence.

The new course tries to give a complete yet in-depth overview of optimization from the perspective of metaheuristics, i.e., approximate algorithms that can find good solutions for computationally hard problems within a short time. The course is designed to require little background knowledge. Many of the presented algorithms can be implemented directly during the course in Java by the instructor within a few minutes. This shows that the algorithms we discuss are not scary and can be mastered even with basic programming knowledge. It also closes the gap between research and practice – after all, we are the Institute of Applied Optimization.

In the course, we discuss a broad spectrum of different optimization methods, ranging from local search algorithms such as hill climbers, Simulated Annealing, and Tabu Search to global search methods such as Evolutionary Algorithms, Evolution Strategies, Genetic Programming, Differential Evolution, Particle Swarm Optimization, Ant Colony Optimization, and Estimation of Distribution Algorithms. We also discuss several phenomena that make problems difficult for these algorithms, general concepts such as multi-objective and constrained optimization, and several example applications. All in all, this course aims to give the student the knowledge to recognize an optimization problem when she sees it, the ability to choose the right algorithm for the right problem, and the practical experience to implement and apply said algorithm in a short time.
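
To give an impression of how compact such implementations can be, here is a small sketch of a hill climber with single bit-flip moves on the OneMax problem, written in the same spirit but not taken from the course material itself:

    import java.util.Random;

    /** Minimal sketch of a hill climber with single bit-flip moves on OneMax
     *  (maximize the number of 1 bits). This is our own illustrative example,
     *  not code taken from the course material. */
    public class HillClimber {

      static int oneMax(boolean[] x) {            // objective: count the 1 bits
        int sum = 0;
        for (boolean bit : x) { if (bit) { sum++; } }
        return sum;
      }

      public static void main(String[] args) {
        Random rnd = new Random(42L);
        int n = 100, budget = 100_000;

        boolean[] current = new boolean[n];       // random initial solution
        for (int i = 0; i < n; i++) { current[i] = rnd.nextBoolean(); }
        int currentF = oneMax(current);

        for (int t = 0; t < budget && currentF < n; t++) {
          int i = rnd.nextInt(n);                 // flip one randomly chosen bit
          current[i] = !current[i];
          int newF = oneMax(current);
          if (newF >= currentF) {                 // accept the move if it is not worse ...
            currentF = newF;
          } else {
            current[i] = !current[i];             // ... otherwise undo it
          }
        }
        System.out.println("best number of 1 bits: " + currentF);
      }
    }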

Benchmarking, the empirical comparison of algorithm performance, is usually the only feasible way to find out which algorithm is good for a given problem. Benchmarking consists of two steps: First, the algorithms are applied to the benchmark problems and data is collected. Second, the collected data is evaluated. There is little guidance for the first step and a lack of tool support for the second. Researchers investigating new problems need to implement both data collection and evaluation themselves. In our new paper "From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking", we want to make the case for defining standard directory structures and file formats for the performance data and metadata of experiments with optimization algorithms. Such formats must be easy to read, easy to write, and easy to incorporate into existing setups. If commonly accepted formats existed and researchers actually used them, more general tools could emerge. There would be a real incentive for everyone who builds an evaluation tool to use the common format right away and then, maybe, publish their tool for others to use. Researchers would then no longer need to implement their own evaluation programs. We try to derive suitable formats by analyzing what existing tools do and what information they need. We then present a general tool, our optimizationBenchmarking.org framework, including an open source library for reading and writing data in our format. Since our framework obtains its data from a general file format, it can assess the performance of arbitrary algorithms implemented in arbitrary programming languages on arbitrary single-objective optimization problems.
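
To sketch the idea (the directory layout and column order below are simplified assumptions made for this post, not necessarily the exact format defined in the paper), a run could append one line per improvement, containing the consumed function evaluations, the elapsed milliseconds, and the best objective value so far, to a text file stored under a path that encodes the algorithm setup, the benchmark instance, and the random seed:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    /** Minimal sketch of writing per-run performance data in a simple, text-based
     *  format: one file per run, one line per improvement with the columns
     *  "function evaluations", "elapsed milliseconds", "best objective value".
     *  Directory layout, column order, and the example values are illustrative
     *  assumptions, not necessarily the exact format proposed in the paper. */
    public class RunLogWriter {

      public static void main(String[] args) throws IOException {
        String algorithm = "hillClimber";         // hypothetical algorithm setup name
        String instance = "oneMax_100";           // hypothetical benchmark instance
        long seed = 42L;

        Path dir = Paths.get("results", algorithm, instance);
        Files.createDirectories(dir);
        Path logFile = dir.resolve("run_" + seed + ".txt");

        long startTime = System.currentTimeMillis();
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(logFile))) {
          out.println("# fes\tms\tbestF");        // header comment describing the columns
          // In a real run, the optimization loop would call a logging method like
          // this one whenever the best-so-far solution improves (values are placeholders):
          logImprovement(out, 1, startTime, 57.0);
          logImprovement(out, 104, startTime, 61.0);
          logImprovement(out, 9_871, startTime, 100.0);
        }
        System.out.println("wrote " + logFile);
      }

      static void logImprovement(PrintWriter out, long fes, long startTime, double bestF) {
        out.println(fes + "\t" + (System.currentTimeMillis() - startTime) + "\t" + bestF);
      }
    }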
