One of my fundamental research interests is how we can determine which optimization algorithm is good for which problem.

Unfortunately, answering this question is quite complicated. For most practically relevant problems, we need to trade off the (run)time we can invest in getting a good solution against the quality of said solution. Furthermore, the performance of almost all algorithms cannot be described by a single pair of "solution quality" and "time needed to get a solution of that quality". Instead, these (anytime) algorithms start with an initial (often not-so-good) guess at the solution and then improve it step by step. In other words, their runtime behavior can be described as something like a function relating solution quality to runtime. It is not a real function, though, since a) many algorithms are randomized, meaning that they behave differently every time you use them, even with the same input data, and b) an algorithm will usually behave differently on different instances of an optimization problem type.
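A minimal sketch of such anytime behavior, using a toy randomized hill climber on a made-up sum-of-squares problem (all names here are hypothetical, not from our framework): each run logs its best-so-far quality as (step, quality) pairs, and two runs from the same start produce different progress curves because of the randomness.

```python
import random

def hill_climb(evaluate, neighbor, start, max_steps=1000):
    """A minimal randomized anytime optimizer: keep a best-so-far
    solution and log every improvement as a (step, quality) pair."""
    current = start
    quality = evaluate(current)
    progress = [(0, quality)]          # runtime behavior: quality over "time"
    for step in range(1, max_steps + 1):
        candidate = neighbor(current)
        q = evaluate(candidate)
        if q < quality:                # minimization: smaller is better
            current, quality = candidate, q
            progress.append((step, quality))
    return progress

# Toy problem: minimize the sum of squares of a 5-dimensional vector.
evaluate = lambda x: sum(v * v for v in x)
neighbor = lambda x: [v + random.gauss(0, 0.1) for v in x]

# Two runs from the same start differ because the algorithm is randomized.
random.seed(1)
run_a = hill_climb(evaluate, neighbor, [1.0] * 5)
run_b = hill_climb(evaluate, neighbor, [1.0] * 5)
print(run_a[-1], run_b[-1])  # different trajectories, different end points
```

The logged list of improvement points is exactly the "function-like" runtime behavior discussed above: monotone within one run, but different from run to run.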

This means that we need to do a lot of experiments: We need to apply an optimization algorithm multiple times to a given optimization problem instance in order to "average out" the randomness. Each time, we need to collect data about the whole runtime behavior, not just the final result. Then we need to do this for multiple instances with different features in order to learn how, e.g., the scale of a problem influences the algorithm's behavior. As a result, we obtain quite a lot of data from many algorithm setups on many problem instances. The question researchers face is thus: How can we extract useful information from that data? How can we obtain information that helps us improve our algorithms? How can we get data from which we can learn about the weaknesses of our methods, so that we can fix them?
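One simple way to "average out" the randomness is to combine the logged progress curves of several runs at fixed time checkpoints, e.g., by taking the median best-so-far quality. A small stdlib-only sketch with made-up data (this is an illustration, not our framework's actual evaluation code):

```python
from statistics import median

def quality_at(progress, t):
    """Best-so-far quality of one run at time t, given its logged
    (time, quality) improvement points."""
    best = None
    for time, quality in progress:
        if time > t:
            break
        best = quality
    return best

def median_profile(runs, checkpoints):
    """Median solution quality over several runs at fixed time
    checkpoints, 'averaging out' the randomness of individual runs."""
    return [(t, median(quality_at(r, t) for r in runs)) for t in checkpoints]

runs = [
    [(0, 100.0), (3, 40.0), (9, 10.0)],   # run 1: (time, quality) pairs
    [(0, 100.0), (5, 55.0), (7, 12.0)],   # run 2
    [(0, 100.0), (2, 60.0), (8, 20.0)],   # run 3
]
print(median_profile(runs, [0, 4, 10]))
# -> [(0, 100.0), (4, 60.0), (10, 12.0)]
```

Doing this per instance, and then across instances grouped by their features, is what produces the large amount of data mentioned above.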

In this research presentation, I discuss my take on this subject. I introduce a process for automatically discovering the reasons why a certain algorithm behaves as it does and why one problem instance is harder for a set of algorithms than another. This process has already been implemented in our open source optimizationBenchmarking.org framework and is currently implemented in `R`.

### 1. Talk Abstract

In the fields of heuristic optimization and machine learning, experimentation is the way to assess the performance of an algorithm setup and the hardness of problems. Good experimentation is complicated. Most algorithms in the domain are anytime algorithms, meaning they can improve their approximation quality over time. This means that one algorithm may initially perform better than another one but converge to worse solutions in the end. Instead of single final results, the whole runtime behavior of algorithms needs to be compared (and runtime may be measured in multiple ways). We do not just want to know which algorithm performs best and which problem is the hardest: as researchers, we want to know *why*. We introduce a process which can 1) automatically model the progress of algorithm setups on different problem instances based on data collected in experiments, 2) use these models to discover clusters of algorithm (or problem instance) behaviors, and 3) propose reasons why a certain algorithm setup (or problem instance) belongs to a certain algorithm (or problem instance) behavior cluster. These high-level conclusions are presented in the form of decision trees relating algorithm parameters (or instance features) to cluster ids. We emphasize the duality of analyzing algorithm setups and problem instances. Our process is implemented as open source software and tested in two case studies, on the Maximum Satisfiability Problem and the Traveling Salesman Problem. Besides its basic application to raw experimental data, yielding clusters and explanations of "quantitative" algorithm behavior, our process also allows for "qualitative" conclusions by feeding it with data which is normalized with problem features or algorithm parameters. It can also be applied recursively, e.g., to further investigate the behavior of the algorithms in the cluster with the best-performing setups on the problem instances belonging to the cluster of hardest instances. Both use cases are investigated in the case studies.
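The three steps of the process can be illustrated with a deliberately simplified, stdlib-only sketch: a one-parameter decay model stands in for the actual curve fitting, a 1-D 2-means stands in for the clustering, and a decision stump stands in for the decision tree. The data and the instance feature (problem size `n`) are made up, and the cluster ids are arbitrary, as in any k-means-style clustering.

```python
import math
from statistics import mean

# Step 1: model each run's progress curve. As a stand-in for the real
# model fitting, use a one-parameter decay model q(t) = q(0) * exp(-b*t)
# and estimate the rate b from the first and last logged point.
def fit_rate(progress):
    (t0, q0), (t1, q1) = progress[0], progress[-1]
    return -math.log(q1 / q0) / (t1 - t0)

# Step 2: cluster the fitted model parameters (here: 1-D 2-means).
def two_means(values, iters=20):
    lo, hi = min(values), max(values)
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        lo = mean(v for v, l in zip(values, labels) if l == 0)
        hi = mean(v for v, l in zip(values, labels) if l == 1)
    return labels

# Step 3: explain cluster membership via a decision stump over an
# instance feature (hypothetical feature: problem size n).
def best_split(feature, labels):
    thresholds = sorted(set(feature))[:-1]
    def errs(thr):
        e = sum(l != (f > thr) for f, l in zip(feature, labels))
        return min(e, len(labels) - e)   # either side may be cluster 1
    return min(thresholds, key=errs)

# Synthetic experiments: (instance size n, progress curve of one run).
experiments = [
    (10, [(0, 100.0), (50, 1.0)]),    # small instances: fast convergence
    (12, [(0, 100.0), (50, 2.0)]),
    (200, [(0, 100.0), (50, 60.0)]),  # large instances: slow convergence
    (250, [(0, 100.0), (50, 70.0)]),
]
sizes = [n for n, _ in experiments]
rates = [fit_rate(p) for _, p in experiments]
clusters = two_means(rates)
print("clusters:", clusters, "| split at n >", best_split(sizes, clusters))
# -> clusters: [1, 1, 0, 0] | split at n > 12
```

The stump's threshold is the kind of high-level explanation the process produces: here, instance size separates the fast-converging behavior cluster from the slow-converging one.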

### 2. Slide Versions

- the slides of the 1.5 hour version of this research presentation (~7 MiB)
- the slides of the 1 hour version of this research presentation (~6 MiB)
- the slides of the 30 minute version of this research presentation (~3 MiB)

### 3. Related Publications

- Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, and Ke Tang. Automatically Discovering Clusters of Algorithm and Problem Instance Behaviors as well as their Causes from Experimental Data, Algorithm Setups, and Instance Features. *Applied Soft Computing Journal (ASOC)*, 73:366–382, December 2018. doi:10.1016/j.asoc.2018.08.030 / blog entry. Indexing: EI, WOS:000450124900027, ESCI, CAS Q2.
- Qi Qi, Thomas Weise, and Bin Li. Optimization Algorithm Behavior Modeling: A Study on the Traveling Salesman Problem. In *Proceedings of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018)*, March 29–31, 2018, Xiamen [厦门], Fujian [福建省], China. IEEE, pages 861–866. ISBN: 978-1-5386-4362-4. Appeared in the *International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)* at ICACI 2018. doi:10.1109/ICACI.2018.8377576 / pdf / slides / workshop website.
- Qi Qi, Thomas Weise, and Bin Li. Modeling Optimization Algorithm Runtime Behavior and its Applications. In *Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17) Companion*, July 15–19, 2017, Berlin, Germany. New York, NY, USA: ACM Press, pages 115–116. ISBN: 978-1-4503-4939-0. doi:10.1145/3067695.3076042 / paper / poster / blog entry 1 / blog entry 2. Indexing: EI, CCF-C.
- Thomas Weise. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking. In Newton Howard, Yingxu Wang, Amir Hussain, Freddie Hamdy, Bernard Widrow, and Lotfi A. Zadeh, editors, *Proceedings of the 16th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC'17)*, July 26–28, 2017, University of Oxford, Oxford, UK, pages 490–497. Los Alamitos, CA, USA: IEEE Computer Society Press. ISBN: 978-1-5386-0770-1. doi:10.1109/ICCI-CC.2017.8109794 / paper / slides / blog entry. Indexing: EI.

### 4. Presentations (reverse chronological order)

- at the Goethe University of Frankfurt in Frankfurt, Germany in 2018
- at the Leibniz University Hannover in Hannover, Germany in 2018
- at the Leuphana University of Lüneburg, Germany in 2018
- at the University of Bremen in Bremen, Germany in 2018
- at the Carl von Ossietzky University of Oldenburg, Germany in 2018
- at the University of Kassel in Kassel, Germany in 2018
- at the IAS and IAO in Stuttgart, Germany in 2018
- at the Johannes Gutenberg University Mainz in Mainz, Germany in 2018
- at the Julius-Maximilians-Universität in Würzburg, Germany in 2018
- at the Technical University of Munich in Munich, Germany in 2018
- at the University of Leipzig in Leipzig, Germany in 2018
- at the TU Chemnitz in Chemnitz, Germany in 2018
- at the TU Bergakademie Freiberg in Freiberg, Germany in 2018
- at the TU Dresden in Dresden, Germany in 2018
- at the Otto von Guericke University Magdeburg in Magdeburg, Germany in 2018
- at the Zuse Institute Berlin in Berlin, Germany in 2018
- during the First Institute Workshop on Applied Optimization in our own group in 2018
- at the Anhui University in Hefei, Anhui, China as part of the Symposium on Evolutionary Computation in 2017
- at the Ludwig-Maximilians-Universität Munich in Munich (München), Germany in 2017
- at the Johannes Gutenberg University Mainz in Mainz, Germany in 2017
- at the University of Cologne in Köln (Cologne), Germany in 2017
- at the University of Leipzig in Leipzig, Germany in 2017
- at the University of Applied Sciences Zittau/Görlitz in Görlitz, Germany in 2017
- at the Otto von Guericke University Magdeburg in Magdeburg, Germany in 2017
- at the Clausthal University of Technology in Clausthal-Zellerfeld, Germany in 2017
- at the Technische Universität Ilmenau in Ilmenau, Germany in 2017
- at the MiWoCI Workshop at the University of Applied Sciences Mittweida in Mittweida, Germany in 2017
- at the Chemnitz University of Technology in Chemnitz, Germany in 2017
- at the Friedrich Schiller University in Jena, Germany in 2017

### 5. Further Reading

- What is Optimization? (a description of what optimization means from the metaheuristic perspective)
- Metaheuristics for Smart Manufacturing (a short course on optimization, with all material publicly available on the website)
- Metaheuristic Optimization (a course on optimization, with all material publicly available on the website)
- Optimization Benchmarking (the project in which we implement the above ideas)
- Rigorous Performance Analysis of Algorithms (our research direction dealing with algorithm performance analysis)
- Are Black-Box Global Search Methods, such as Metaheuristics like Evolutionary Algorithms, better than Local Search? (the question whether global optimization algorithms are better than local search methods, from the perspective of anytime algorithms)
- Two Papers accepted at GECCO 2017 (one is about how to dynamically model the runtime behavior of an optimization algorithm)
- From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking (a paper on a data format that can represent all information necessary for the above analysis, as used in the optimizationBenchmarking.org project)
- TSP Suite (the predecessor project of optimizationBenchmarking.org)
- Measuring the Runtime of (Optimization) Algorithms (a post describing the problems that can arise when we measure the runtime of algorithms, which is essential for collecting data on algorithm runtime behavior)
- New Beta Release of our optimizationBenchmarking.org Software for Automating Research in Optimization (the release notes of our tool on automated algorithm performance evaluation)