Today, our paper "An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search," co-authored by Thomas Weise, Zijun Wu, and Markus Wagner, has been accepted at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), to be held from January 27 to February 1, 2019 in Honolulu, Hawaii, USA.
Thomas Weise, Zijun Wu, and Markus Wagner. An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), January 27 – February 1, 2019, Honolulu, Hawaii, USA. Palo Alto, CA, USA: AAAI Press. Accepted for publication.
A commonly used strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. Building on the recent success of Bet-and-Run approaches for restarted local search solvers, in our paper we introduce a more generic version that makes use of performance prediction. Our goal is to obtain the best possible results within a given time budget t using a given black-box optimization algorithm. If no prior knowledge about problem features and algorithm behavior is available, the question arises how to use the time budget most efficiently. We first start k≥1 independent runs of the algorithm during an initialization budget t1<t and pause them, then apply a decision maker D to choose 1≤m<k of these runs (consuming t2≥0 time units in doing so), and finally continue the chosen runs for the remaining t3=t-t1-t2 time units. In previous Bet-and-Run strategies, the decision maker D=currentBest would simply select the run with the best-so-far results at negligible time. We propose using more advanced methods to discriminate between "good" and "bad" sample runs, with the goal of increasing the correlation of the chosen run with the a-posteriori best one. In over 157 million experiments, we test different approaches to predict which run may yield the best results if granted the remaining budget. We show (1) that the currentBest method is indeed a very reliable and robust baseline approach, and (2) that our approach can yield better results than the previous methods.
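To make the three-phase procedure from the abstract concrete, here is a minimal, purely illustrative Python sketch. The solver, its state, and all function names are hypothetical stand-ins (a toy random local search minimizing f(x) = x², with budgets counted in steps rather than wall-clock time); the paper's actual experiments use real stochastic local search solvers and benchmark problems.

```python
import random

def bet_and_run(new_run, step, quality, decide, k, t1, t3):
    """Generic Bet-and-Run (illustrative sketch, minimization)."""
    # Phase 1: start k independent runs, each granted t1 // k steps, then pause.
    runs = [new_run() for _ in range(k)]
    for r in runs:
        for _ in range(t1 // k):
            step(r)
    # Phase 2: the decision maker D chooses 1 <= m < k runs to continue.
    chosen = decide(runs)
    # Phase 3: split the remaining budget t3 among the chosen runs.
    for r in chosen:
        for _ in range(t3 // len(chosen)):
            step(r)
    return min(quality(r) for r in chosen)

def current_best(runs, quality, m=1):
    """Baseline decision maker D = currentBest: keep the m best-so-far runs."""
    return sorted(runs, key=quality)[:m]

# Toy stochastic local search minimizing f(x) = x^2 (hypothetical stand-in).
def new_run():
    return {"x": random.uniform(-10.0, 10.0)}

def step(run):
    cand = run["x"] + random.gauss(0.0, 0.5)
    if cand * cand < run["x"] * run["x"]:
        run["x"] = cand  # greedy: accept only improving moves

def quality(run):
    return run["x"] ** 2

random.seed(42)
best = bet_and_run(new_run, step, quality,
                   lambda rs: current_best(rs, quality), k=8, t1=400, t3=600)
print(best)  # best objective value found within the total budget
```

Swapping the `decide` argument for a prediction-based decision maker is exactly where the paper's proposed methods plug in.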
In this paper, we combine our group's research on algorithm performance modeling and performance prediction with the work of Dr. Markus Wagner on Bet-and-Run strategies for local search. Multiple runs of one algorithm are started, each executed for a short time, and then paused. A normal Bet-and-Run strategy would then continue the so-far best-performing run for the rest of the computational budget. But this need not be the run that will eventually yield the best result. It may, for instance, have already converged to a local optimum, while another run may currently hold a worse approximate solution but could eventually overtake it. So we ask: Can we do better than the generic approach by trying to predict what result quality each run would deliver if we granted it the rest of the execution time? We explore different ways to do this, ranging from non-linear regression, over fitting artificial neural networks (ANNs) to the progress of the run so far, to simple methods that merely assume that each further improvement will take longer and yield a smaller quality gain. We find that the latter method and the ANNs can indeed be interesting options for outperforming the continue-the-current-best-run approach. Still, the default method is also very robust and often a good (and simple) choice.
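The regression-style idea above can be sketched as follows: fit a simple diminishing-returns model to each run's progress curve (pairs of time and best-so-far quality) and continue the run with the best extrapolated final quality. The model q(t) ≈ a + b/t and all names here are illustrative assumptions, not the exact decision makers evaluated in the paper.

```python
def predict_final_quality(history, total_budget):
    """Fit q(t) ~ a + b/t (closed-form least squares on u = 1/t) to a run's
    progress curve and extrapolate to the full budget. Illustrative model only."""
    us = [1.0 / t for t, _ in history]
    qs = [q for _, q in history]
    n = len(history)
    mu, mq = sum(us) / n, sum(qs) / n
    var = sum((u - mu) ** 2 for u in us)
    cov = sum((u - mu) * (q - mq) for u, q in zip(us, qs))
    b = cov / var if var > 0 else 0.0
    a = mq - b * mu
    return a + b / total_budget

def predictive_decider(histories, total_budget, m=1):
    """Decision maker: continue the m runs with the best (lowest)
    predicted final quality."""
    ranked = sorted(range(len(histories)),
                    key=lambda i: predict_final_quality(histories[i], total_budget))
    return ranked[:m]

# Run A has stalled at quality 4.0; run B is currently worse (4.1)
# but still improving steeply.
a_hist = [(10, 4.0), (20, 4.0), (30, 4.0)]
b_hist = [(10, 8.0), (20, 6.0), (30, 4.1)]
print(predictive_decider([a_hist, b_hist], total_budget=1000))
```

In this toy case currentBest would keep run A (4.0 < 4.1), while the extrapolation favors run B, illustrating how prediction can disagree with the best-so-far baseline.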
Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, and Ke Tang. Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features. Applied Soft Computing Journal (ASOC), 73:366–382, December 2018.
doi:10.1016/j.asoc.2018.08.030 / share link (valid until November 6, 2018) / blog entry
Indexing: EI, WOS, ESCI
Qi Qi, Thomas Weise, and Bin Li. Optimization Algorithm Behavior Modeling: A Study on the Traveling Salesman Problem. In Proceedings of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, pages 845–850. ISBN: 978-1-5386-4362-4. Appeared in the International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the ICACI 2018.
pdf / slides / workshop website
Qi Qi, Thomas Weise, and Bin Li. Modeling Optimization Algorithm Runtime Behavior and its Applications. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17) Companion, July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 115-116, ISBN: 978-1-4503-4939-0.
doi:10.1145/3067695.3076042 / paper / poster / blog entry 1 / blog entry 2
Thomas Weise. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking. In Newton Howard, Yingxu Wang, Amir Hussain, Freddie Hamdy, Bernard Widrow, and Lotfi A. Zadeh, editors, Proceedings of the 16th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC'17), July 26-28, 2017, University of Oxford, Oxford, UK, pages 490-497. Los Alamitos, CA, USA: IEEE Computer Society Press, ISBN: 978-1-5386-0770-1.
doi:10.1109/ICCI-CC.2017.8109794 / paper / slides / blog entry