Today, our paper "An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search," co-authored by Thomas Weise, Zijun Wu, and Markus Wagner, has been accepted at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), to be held January 27 to February 1, 2019, in Honolulu, Hawaii, USA.

Thomas Weise, Zijun Wu, and Markus Wagner. An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search. *Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019),* January 27 – February 1, 2019, Honolulu, Hawaii, USA, pages 2395–2402. Palo Alto, CA, USA: AAAI Press. ISBN: 978-1-57735-809-1

doi:10.1609/aaai.v33i01.33012395 / pdf@IAO / pdf@AAAI / slides / poster

### Abstract

A commonly used strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. Building on the recent success of Bet-and-Run approaches for restarted local search solvers, we introduce in our paper a more generic version that makes use of performance prediction. It is our goal to obtain the best possible results within a given time budget *t* using a given black-box optimization algorithm. If no prior knowledge about problem features and algorithm behavior is available, the question arises of how to use the time budget most efficiently. We first start *k ≥ 1* independent runs of the algorithm during an initialization budget *t₁ < t*, pause these runs, then apply a decision maker *D* to choose *1 ≤ m < k* runs from them (consuming *t₂ ≥ 0* time units in doing so), and then continue these runs for the remaining *t₃ = t − t₁ − t₂* time units. In previous Bet-and-Run strategies, the decision maker *D* = `currentBest` would simply select the run with the best-so-far results at negligible time. We propose using more advanced methods to discriminate between "good" and "bad" sample runs, with the goal of increasing the correlation of the chosen run with the a-posteriori best one. In over 157 million experiments, we test different approaches to predict which run may yield the best results if granted the remaining budget. We show (1) that the `currentBest` method is indeed a very reliable and robust baseline approach, and (2) that our approach can yield better results than the previous methods.
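To make the budget split concrete, here is a minimal Python sketch of the generic scheme (not the paper's actual implementation): `RandomSearchRun`, `bet_and_run`, and `current_best` are hypothetical illustrative names, and a toy random search stands in for the real stochastic local search solver.

```python
import random


class RandomSearchRun:
    """Toy stand-in for a stochastic local search run: random search
    minimizing f(x) = x * x on [-10, 10]. Illustration only."""

    def __init__(self, rng):
        self.rng = rng
        self.best = float("inf")  # best-so-far objective value

    def step(self):  # consume one time unit
        x = self.rng.uniform(-10.0, 10.0)
        self.best = min(self.best, x * x)


def current_best(runs):
    """The classical decision maker D: index of the best-so-far run."""
    return min(range(len(runs)), key=lambda i: runs[i].best)


def bet_and_run(make_run, total_budget, k, init_frac, decide):
    """Generic Bet-and-Run: start k runs, pause them, let the decision
    maker pick one, and spend the rest of the budget on that run."""
    t1 = int(init_frac * total_budget) // k  # per-run initialization budget
    runs = [make_run() for _ in range(k)]
    for r in runs:                           # phase 1: k short runs
        for _ in range(t1):
            r.step()
    chosen = runs[decide(runs)]              # decision maker D (t2 taken as ~0)
    for _ in range(total_budget - k * t1):   # phase 2: continue the chosen run
        chosen.step()
    return chosen.best


rng = random.Random(42)
result = bet_and_run(lambda: RandomSearchRun(rng), total_budget=1000,
                     k=4, init_frac=0.4, decide=current_best)
```

Swapping the `decide` argument is all it takes to plug in a prediction-based decision maker instead of `current_best`.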

### Idea

In this paper, we combine our group's research on algorithm performance modeling and performance prediction with the work of Dr. Markus Wagner on Bet-and-Run strategies for local search. Multiple runs of one algorithm are started, each executed for a short time, and then paused. A normal Bet-and-Run strategy would then continue the so-far best-performing run for the rest of the computational budget. But this need not be the run which will finally yield the best result. It may, for instance, be a run that has already converged to a local optimum, while another run may currently hold a worse approximate solution but could eventually overtake it. So we ask: Can we do better than the generic approach by predicting what result quality each run would reach if we granted it the rest of the execution time? We explore different ways to do that, ranging from non-linear regression and fitting artificial neural networks (ANNs) to the recent progress of a run, down to simple methods which just assume that each further improvement will take longer and yield a smaller quality gain. We find that the latter method and ANNs can indeed be interesting options for outperforming the continue-the-current-best-run approach. Still, the default method is also very robust and often a good (and simple) choice.
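A rough sketch of the prediction idea, under the "diminishing returns" assumption mentioned above: fit a simple `a + b·log(t)` trend to each run's best-so-far trace (minimization), extrapolate to the full budget, and continue the run with the lowest predicted final value. The name `predicted_best` and the log-trend model are illustrative choices, not the paper's exact predictors (which include non-linear regression and ANNs).

```python
import math


def predicted_best(traces, total_budget):
    """Pick the run whose extrapolated final quality is lowest.

    traces: one best-so-far trace (list of floats, minimized) per run.
    Fits best(t) ~ a + b * log(t + 1) by least squares and evaluates
    the fitted trend at the full time budget."""
    def predict(trace):
        xs = [math.log(t + 1.0) for t in range(len(trace))]
        n = len(xs)
        mx, my = sum(xs) / n, sum(trace) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, trace))
        b = sxy / sxx if sxx else 0.0  # slope in log-time
        a = my - b * mx
        return a + b * math.log(total_budget)
    return min(range(len(traces)), key=lambda i: predict(traces[i]))


# A run that went flat after an early drop vs. one still improving steadily:
flat = [10.0, 6.0] + [5.0] * 18                                # best so far: 5.0
improving = [10.0 - 1.5 * math.log(t + 1) for t in range(20)]  # best so far: ~5.51
# currentBest would continue `flat`; the extrapolation prefers `improving`.
choice = predicted_best([flat, improving], total_budget=1000)
```

The example shows exactly the situation described above: the run with the better current value has stalled, while the run that looks worse right now has the steeper improvement trend.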


### Related Publications

Anna V. Kononova, Hao Wang, Thomas Weise, Carola Doerr, Thomas Bäck, Fabio Caraffini, and Johann Dreo.

*Analysing Algorithmic Behaviour of Optimisation Heuristics Workshop* (AABOH'21) at the 2021 Genetic and Evolutionary Computation Conference (GECCO'21), July 10-14, 2021, in Lille, France.

website / Call for Papers (CfP)

Thomas Weise, Yan Chen, Xinlu Li, and Zhize Wu. Selecting a diverse set of benchmark instances from a tunable model problem for black-box discrete optimization algorithms.

*Applied Soft Computing Journal (ASOC)*, 92:106269, June 2020.

doi:10.1016/j.asoc.2020.106269

Indexing: EI, WOS, ESCI, 1区

Thomas Weise, Zijun Wu, and Markus Wagner. An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search.

*Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019),* January 27 – February 1, 2019, Honolulu, Hawaii, USA, pages 2395–2402. Palo Alto, CA, USA: AAAI Press. ISBN: 978-1-57735-809-1

doi:10.1609/aaai.v33i01.33012395 / pdf@IAO / pdf@AAAI / slides / poster / blog 1 / blog 2 / early preprint@arxiv

Indexing: CCF-A类, EI

Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, and Ke Tang. Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features.

*Applied Soft Computing Journal (ASOC)*, 73:366–382, December 2018.

doi:10.1016/j.asoc.2018.08.030 / blog entry

Indexing: EI, WOS:000450124900027, ESCI, 1区

Qi Qi, Thomas Weise, and Bin Li. Optimization Algorithm Behavior Modeling: A Study on the Traveling Salesman Problem. In Proceedings of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, pages 861–866. ISBN: 978-1-5386-4362-4. Appeared in the

*International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)* at the ICACI 2018.

doi:10.1109/ICACI.2018.8377576 / pdf / slides / workshop website

Qi Qi, Thomas Weise, and Bin Li. Modeling Optimization Algorithm Runtime Behavior and its Applications. In

*Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17) Companion*, July 15-19, 2017, Berlin, Germany, pages 115-116. New York, NY, USA: ACM Press. ISBN: 978-1-4503-4939-0

doi:10.1145/3067695.3076042 / paper / poster / blog entry 1 / blog entry 2

Indexing: EI, CCF-C类

Thomas Weise. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking. In Newton Howard, Yingxu Wang, Amir Hussain, Freddie Hamdy, Bernard Widrow, and Lotfi A. Zadeh, editors,

*Proceedings of the 16th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI\*CC'17)*, July 26-28, 2017, University of Oxford, Oxford, UK, pages 490-497. Los Alamitos, CA, USA: IEEE Computer Society Press. ISBN: 978-1-5386-0770-1

doi:10.1109/ICCI-CC.2017.8109794 / paper / slides / blog entry

Indexing: EI