optimizationBenchmarking.org is the successor project of our TSP Suite. It is a software framework for rigorously analyzing and comparing the performance of arbitrary combinatorial and numerical optimization algorithms (or machine learning algorithms), implemented in any programming language, on arbitrary optimization problems. The tool can do more than draw nice diagrams over your full experimental data or over groups of it: it combines several machine learning techniques to find the reasons for algorithm behaviors and problem hardness. It generates reports containing basic statistics and diagrams in HTML and LaTeX (also compiled to PDF and ready for re-use in your publications), or exports the data into formats that can be loaded by MatLab, R, etc. Our goal is to provide one central tool which can
- support researchers in optimization and machine learning in fully understanding their algorithms and optimization/machine learning problems, so that they can develop better and more robust algorithms, as well as
- support practitioners in an application field in understanding which optimization/machine learning methods work best for their particular needs.
Although this open source software is still under development, you can already use an alpha version, either as a Java program or as a Docker container which does not require any additional installation or software. Both versions provide a browser-based graphical user interface (GUI) and require no programming skills on your side. The software allows you to download several example experiments and evaluator setups from the GUI and also provides extensive help on how to use it.
The most recent release of our software is version 0.8.9.
- Article "Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features" appears in Applied Soft Computing Journal
- Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop
- Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop
- International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)
- Workshop on Benchmarking of Computational Intelligence Algorithms approved for ICACI 2018
- New Beta Release of our optimizationBenchmarking.org Software for Automating Research in Optimization
- Two Papers accepted at GECCO 2017
- From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking (paper at ICCI*CC 2017)
2. About the Software
2.1. Quick Start
If you want to directly run our software and see the examples, you can use its dockerized version. Simply perform the following steps:
- Install Docker following the instructions for Linux, Windows, or MacOS.
- Open a normal terminal (Linux), the Docker Quickstart Terminal (Mac OS), or the Docker Toolbox Terminal (Windows).
- Type in
docker run -t -i -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui
and hit return. The first time you do this, our software is downloaded. This may take some time, as the package is about 600 MB. After the download, the software will start.
- Browse to
http://<dockerIP>:9999
under Windows and Mac OS, where dockerIP is the IP address of your Docker container. This address is displayed when you run the container. You can also obtain it with the command
docker-machine ip default
- Enjoy the web-based GUI of our software, which looks quite similar to this web site.
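The steps above can be collected into a single shell session; this is a sketch based only on the commands listed in the quick start (it assumes a Docker Toolbox-style setup where `docker-machine` is available):

```shell
#!/bin/sh
# Quick-start sketch for the dockerized evaluator GUI.
# Image name and port mapping are taken from the steps above.

# Start the GUI container, mapping its internal port 8080 to local port 9999:
docker run -t -i -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui &

# On Toolbox setups (Windows/Mac OS), look up the Docker machine's IP address:
DOCKER_IP="$(docker-machine ip default)"

# The browser-based GUI is then reachable at:
echo "http://${DOCKER_IP}:9999"
```

On a native Linux Docker installation, localhost usually takes the place of the Docker machine IP.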
The optimizationBenchmarking.org framework prescribes the following workflow, which is discussed in more detail in this set of slides:
- Algorithm Implementation: You implement your algorithm in a way that lets you generate log files containing rows such as (best solution quality so far) for each run (execution) of your algorithm. You are free to use any programming language and to run it in any environment you want. We do not care about that; we just want the text files you have generated.
- Choose Benchmark Instances: Choose a set of (well-known) problem instances to apply your algorithm to.
- Experiments: Run your algorithm, i.e., apply it a few times to each benchmark instance, to obtain the log files. You may want to do this several times with different parameter settings of your algorithm, or for different algorithms, so that you have comparison data.
- Use Evaluator: Now you can use our evaluator component to find out how well your method works! For this, you define the dimensions you have measured (such as runtime and solution quality), the features of your benchmark instances (such as the number of cities in a Traveling Salesman Problem, or the scale and symmetry of a numerical problem), the parameter settings of your algorithm (such as the population size of an EA), the information you want to get (ECDF? performance over time?), and how you want to get it (LaTeX, optimized for IEEE Transactions, ACM, or Springer LNCS? Or maybe XHTML for the web?). Our evaluator will create a report with the desired information in the desired format.
- By interpreting the report and the advanced statistics presented to you, you can gain deeper insight into your algorithm's performance as well as into the features and hardness of the benchmark instances you used. You can also directly re-use building blocks from the generated reports in your publications.
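The workflow above can be sketched end-to-end in a few lines. The sketch below is purely illustrative: the two-column log-file layout, the file names, and the toy "optimizer" are assumptions for demonstration, not the tool's exact format. It simulates several runs, writes one best-so-far log per run, and computes an ECDF-style statistic (the fraction of runs reaching a target quality within a time budget), i.e., the kind of evaluation the evaluator component automates:

```python
import os
import random
import tempfile

def run_algorithm(seed, max_steps=100):
    """Toy 'optimizer': returns a best-so-far trace of (step, best quality) rows."""
    rng = random.Random(seed)
    best = 1000.0
    rows = []
    for step in range(1, max_steps + 1):
        candidate = best - rng.random()  # pretend we found a better solution
        if candidate < best:
            best = candidate
        rows.append((step, best))
    return rows

def write_log(rows, path):
    """Write one run's log file: one (step, best quality) row per line."""
    with open(path, "w") as f:
        for step, quality in rows:
            f.write(f"{step}\t{quality}\n")

def ecdf_at(logs, target_quality, time_budget):
    """Fraction of runs whose best quality reached the target within the budget."""
    hits = sum(
        any(step <= time_budget and quality <= target_quality
            for step, quality in rows)
        for rows in logs
    )
    return hits / len(logs)

# Simulate 10 independent runs and write their log files.
logs = [run_algorithm(seed) for seed in range(10)]
log_dir = tempfile.mkdtemp()
for i, rows in enumerate(logs):
    write_log(rows, os.path.join(log_dir, f"run_{i}.txt"))

# Evaluate: how many runs reached quality 950 within 50 steps?
print(ecdf_at(logs, target_quality=950.0, time_budget=50))
```

The real evaluator works on exactly this kind of per-run text data, but lets you declare the measured dimensions and instance features declaratively instead of hard-coding them.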
3. Publications
Thomas Weise, Zijun Wu, and Markus Wagner. An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), January 27 – February 1, 2019, Honolulu, Hawaii, USA. Palo Alto, CA, USA: AAAI Press. Accepted for publication.
Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, and Ke Tang. Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features. Applied Soft Computing Journal (ASOC), 73:366–382, December 2018.
doi:10.1016/j.asoc.2018.08.030 / share link (valid until November 6, 2018) / blog entry
Indexing: EI, WOS, ESCI
Thomas Weise and Zijun Wu. Difficult Features of Combinatorial Optimization Problems and the Tunable W-Model Benchmark Problem for Simulating Them. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1769-1776, ISBN: 978-1-4503-5764-7. ACM.
doi:10.1145/3205651.3208240 / pdf / slides / source codes / workshop website
Additional experimental results with the W-Model, which were not used in this paper, can be found at doi:10.5281/zenodo.1256883.
Markus Ullrich, Thomas Weise, Abhishek Awasthi, and Jörg Lässig. A Generic Problem Instance Generator for Discrete Optimization Problems. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1761-1768, ISBN: 978-1-4503-5764-7. ACM.
doi:10.1145/3205651.3208284 / pdf / slides / source codes / workshop website
Qi Qi, Thomas Weise, and Bin Li. Optimization Algorithm Behavior Modeling: A Study on the Traveling Salesman Problem. In Proceedings of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, pages 845–850. ISBN: 978-1-5386-4362-4. Appeared in the International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the ICACI 2018.
pdf / slides / workshop website
Qi Qi, Thomas Weise, and Bin Li. Modeling Optimization Algorithm Runtime Behavior and its Applications. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17) Companion, July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 115-116, ISBN: 978-1-4503-4939-0.
doi:10.1145/3067695.3076042 / paper / poster / blog entry 1 / blog entry 2
Thomas Weise. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking. In Newton Howard, Yingxu Wang, Amir Hussain, Freddie Hamdy, Bernard Widrow, and Lotfi A. Zadeh, editors, Proceedings of the 16th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC'17), July 26-28, 2017, University of Oxford, Oxford, UK, pages 490-497. Los Alamitos, CA, USA: IEEE Computer Society Press, ISBN: 978-1-5386-0770-1.
doi:10.1109/ICCI-CC.2017.8109794 / paper / slides / blog entry
4. Organized Events
Thomas Weise, Bin Li, Markus Wagner, Xingyi Zhang, and Jörg Lässig, eds. Special Issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing journal, published by Elsevier B.V. and indexed by EI and SCIE. The special issue opens for submissions in July 2018 and closes on April 14, 2019. It is a virtual special issue, where papers are published as soon as they are accepted.
website / Call for Papers (CfP)
Pietro S. Oliveto, Markus Wagner, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop at the Fifteenth International Conference on Parallel Problem Solving from Nature (PPSN XV), 8-9 September 2018, Coimbra, Portugal
website / Call for Papers (CfP)
Pietro S. Oliveto, Markus Wagner, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop at the 2018 Genetic and Evolutionary Computation Conference (GECCO 2018), July 15-19, 2018, Kyoto, Japan
website / Call for Papers (CfP)
Thomas Weise, Bin Li, Markus Wagner, Xingyi Zhang, and Jörg Lässig. International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, ISBN: 978-1-5386-4362-4.
website / Call for Papers (CfP)
Thomas Weise and Jörg Lässig. Special Session on Benchmarking and Testing for Production and Logistics Optimization of the 2014 IEEE Symposium on Computational Intelligence in Production and Logistics at the 2014 IEEE Symposium Series on Computational Intelligence (SSCI 2014), December 9-12, 2014, Orlando, Florida, USA