Today, our paper "An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search," co-authored by Thomas Weise, Zijun Wu, and Markus Wagner, has been accepted at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), to be held from January 27 to February 1, 2019, in Honolulu, Hawaii, USA.

Thomas Weise, Zijun Wu, and Markus Wagner. An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), January 27 – February 1, 2019, Honolulu, Hawaii, USA, pages 2395–2402. Palo Alto, CA, USA: AAAI Press. ISBN: 978-1-57735-809-1
doi:10.1609/aaai.v33i01.33012395 / pdf@IAO / pdf@AAAI / slides / poster

Abstract

A commonly used strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. Building on the recent success of Bet-and-Run approaches for restarted local search solvers, we introduce in our paper a more generic version that makes use of performance prediction. Our goal is to obtain the best possible results within a given time budget t using a given black-box optimization algorithm. If no prior knowledge about problem features and algorithm behavior is available, the question arises of how to use the time budget most efficiently. We first start k≥1 independent runs of the algorithm during an initialization budget t1<t, pause these runs, then apply a decision maker D to choose 1≤m<k runs from them (consuming t2≥0 time units in doing so), and then continue these runs for the remaining t3=t-t1-t2 time units. In previous Bet-and-Run strategies, the decision maker D=currentBest would simply select the run with the best-so-far results at negligible time. We propose using more advanced methods to discriminate between "good" and "bad" sample runs, with the goal of increasing the correlation of the chosen run with the a-posteriori best one. In over 157 million experiments, we test different approaches to predict which run may yield the best results if granted the remaining budget. We show (1) that the currentBest method is indeed a very reliable and robust baseline approach, and (2) that our approach can yield better results than the previous methods.
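The following sketch illustrates the budget split described in the abstract. It is a minimal toy example under stated assumptions, not the authors' implementation: the PausableRun class, its step/best interface, the toy improvement process, and the even split of t1 across the k runs are all assumptions made purely for illustration.

```python
import random

class PausableRun:
    """One independent run of a (toy) stochastic local search, minimizing.
    Illustrative assumption: we only require that a run can be advanced for a
    given number of time units and queried for its best-so-far quality."""

    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.best = float("inf")
        self.history = []          # (elapsed time units, best-so-far quality)
        self.elapsed = 0

    def step(self, time_units):
        """Advance the run by `time_units` and record its progress."""
        for _ in range(time_units):
            self.elapsed += 1
            candidate = self.rng.random() / self.elapsed  # toy improvement process
            self.best = min(self.best, candidate)
            self.history.append((self.elapsed, self.best))


def current_best(runs, m):
    """Decision maker D = currentBest: keep the m runs with the best-so-far quality."""
    return sorted(runs, key=lambda r: r.best)[:m]


def bet_and_run(total_budget, k, init_budget, m, decision_maker):
    # Phase 1: start k independent runs, sharing the initialization budget t1.
    runs = [PausableRun(seed) for seed in range(k)]
    for run in runs:
        run.step(init_budget // k)

    # Phase 2: the decision maker selects m < k runs to continue
    # (its own cost t2 is assumed negligible in this sketch).
    chosen = decision_maker(runs, m)

    # Phase 3: spend the remaining budget t3 = t - t1 - t2 on the chosen runs.
    remaining = total_budget - init_budget
    for run in chosen:
        run.step(remaining // m)

    return min(run.best for run in chosen)


if __name__ == "__main__":
    print(bet_and_run(total_budget=1000, k=10, init_budget=200, m=1,
                      decision_maker=current_best))
```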

Idea

In this paper, we combine our group's research on algorithm performance modeling and performance prediction with the work of Dr. Markus Wagner on Bet-and-Run strategies for local search. Multiple runs of one algorithm are started, each executed for a short time, and then paused. A normal Bet-and-Run strategy would then continue the run that has performed best so far for the rest of the computational budget. But this need not be the run that will finally yield the best result. It may, for instance, be a run that has already converged to a local optimum, while another run may currently hold a worse approximate solution but could eventually overtake it. So we ask: can we do better than the generic approach by trying to predict what result quality each run would deliver if we granted it the rest of the execution time? We explore different ways to do this, ranging from non-linear regression and artificial neural networks (ANNs) fitted to the recent progress of a run, to simple methods that just assume that each further improvement will take longer and yield a smaller quality gain. We find that the latter method and the ANNs could indeed be interesting options for outperforming the continue-the-current-best-run approach. Still, the default method is also very robust and often a good (and simple) choice.
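Which model family to fit to a run's progress (polynomials, logistic curves, ANNs, or simple diminishing-returns assumptions) is exactly the question the paper studies. As a hedged illustration only, reusing the PausableRun interface sketched above, a prediction-based decision maker could extrapolate each run's best-so-far trajectory to the full budget and keep the most promising runs; the log-linear fit below is a placeholder model chosen for brevity, not one of the paper's actual predictors.

```python
import math
import numpy as np

def predicted_best(runs, m, total_budget):
    """Illustrative prediction-based decision maker (not the paper's exact models):
    fit a simple log-linear trend to each run's best-so-far trajectory and keep
    the m runs whose extrapolated quality at the full budget looks best."""
    def forecast(run):
        times, qualities = zip(*run.history)
        log_t = np.log(np.asarray(times, dtype=float))
        q = np.asarray(qualities, dtype=float)
        slope, intercept = np.polyfit(log_t, q, 1)  # quality ≈ intercept + slope*log(t)
        return intercept + slope * math.log(total_budget)
    return sorted(runs, key=forecast)[:m]
```

With the skeleton above, swapping strategies is a one-line change, e.g. passing decision_maker=lambda runs, m: predicted_best(runs, m, total_budget=1000) instead of current_best.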

Photos

  • AAAI-19 conference logo
  • Conference reception, January 28, 2019
  • Thomas Weise presenting the poster, January 30, 2019