optimizationBenchmarking.org is the successor project of our TSP Suite. It is software for rigorously analyzing and comparing the performance of arbitrary combinatorial and numerical optimization algorithms (or machine learning algorithms), implemented in arbitrary programming languages, on arbitrary optimization problems. This tool can do more than draw nice diagrams over your full experimental data or over groups of your data: it combines several machine learning techniques to find the reasons for algorithm behaviors and problem hardness. It generates reports containing basic statistics and nice diagrams in HTML and LaTeX (also compiled to PDF and ready for re-use in your publications), or exports the data into formats that can be loaded by MATLAB, R, etc. Our goal is to provide one central tool which can

  • support researchers in optimization and machine learning in fully understanding their algorithms and optimization/machine learning problems, so that they can develop better and more robust algorithms, as well as
  • support practitioners in an application field in understanding which optimization/machine learning methods work best for their particular needs.

Although we are still developing this open source software, you can already use an alpha version, either as a Java program or as a Docker container that requires no additional installation or software. Both versions provide a browser-based graphical user interface (GUI) and require no programming skills on your part. The software allows you to download several example experiments and evaluator setups from the GUI and also provides extensive help on how to use it.

The most recent release of our software is version 0.8.9.

News

  1. Article "Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features" appears in Applied Soft Computing Journal
  2. Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop
  3. Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop
  4. International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)
  5. Workshop on Benchmarking of Computational Intelligence Algorithms approved for ICACI 2018
  6. New Beta Release of our optimizationBenchmarking.org Software for Automating Research in Optimization
  7. Two Papers accepted at GECCO 2017
  8. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking (paper at ICCI*CC 2017)

 About the Software

Quick Start

If you want to directly run our software and see the examples, you can use its dockerized version. Simply perform the following steps:

  1. Install Docker following the instructions for Linux, Windows, or macOS.
  2. Open a normal terminal (Linux), the Docker Quickstart Terminal (macOS), or the Docker Toolbox Terminal (Windows).
  3. Type docker run -t -i -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui and hit return. The first time you do this, Docker downloads our software; this may take some time, as the package is roughly 600 MB. After the download, the software will start.
  4. Browse to
    • http://localhost:9999 under Linux
    • http://<dockerIP>:9999 under Windows and macOS, where dockerIP is the IP address of the Docker virtual machine. This address is displayed when you run the container; you can also obtain it with the command docker-machine ip default.
  5. Enjoy the web-based GUI of our software, which looks quite similar to this web site.

Workflow

The optimizationBenchmarking.org framework prescribes the following workflow, which is discussed in more detail in this set of slides:

  1. Algorithm Implementation: You implement your algorithm. Do it in a way that lets you generate log files containing rows such as (elapsed runtime, best solution quality so far) for each run (execution) of your algorithm; see the code sketch after this list. You are free to use any programming language and to run it in any environment you want. We do not care about that; we just want the text files you have generated.
  2. Choose Benchmark Instances: Choose a set of (well-known) problem instances to apply your algorithm to.
  3. Experiments: Run your algorithm, i.e., apply it a few times to each benchmark instance, and collect the resulting log files. You may want to repeat this with different parameter settings of your algorithm, or with different algorithms entirely, so that you have comparison data.
  4. Use Evaluator: Now you can use our evaluator component to find out how well your method works. For this, you define the dimensions you have measured (such as runtime and solution quality), the features of your benchmark instances (such as the number of cities in a Traveling Salesman Problem, or the scale and symmetry of a numerical problem), the parameter settings of your algorithm (such as the population size of an EA), the information you want to get (ECDF? performance over time?), and how you want to get it (LaTeX, optimized for IEEE Transactions, ACM, or Springer LNCS? or maybe XHTML for the web?). Our evaluator then creates the report with the desired information in the desired format.
  5. Interpretation: By interpreting the report and the advanced statistics presented to you, you gain deeper insight into your algorithm's performance as well as into the features and hardness of the benchmark instances you used. You can also directly use building blocks from the generated reports in your publications.
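
The following is a minimal Java sketch of what step 1 could look like: a single run of a (dummy) algorithm that appends one row (elapsed runtime, best solution quality so far) to a plain-text log file whenever it finds an improvement. The file name, the tab-separated two-column layout, and the random-search stand-in for the algorithm are illustrative assumptions only, not part of the optimizationBenchmarking.org API; any text format whose measured dimensions you can later declare in the evaluator should work similarly.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Random;

    /** Hypothetical example of step 1: one run writes one log file. */
    public class LoggingRunExample {
      public static void main(String[] args) throws IOException {
        // hypothetical file name encoding algorithm, instance, and run index
        final String logFile = "myAlgorithm_instance01_run01.txt";
        final Random random = new Random(42L);

        final long start = System.currentTimeMillis();
        double bestQuality = Double.POSITIVE_INFINITY; // assume smaller = better

        try (PrintWriter out =
            new PrintWriter(Files.newBufferedWriter(Paths.get(logFile)))) {
          for (int step = 0; step < 100_000; step++) {
            // stand-in for one iteration of your actual optimization algorithm
            final double quality = random.nextDouble() * 1000.0;
            if (quality < bestQuality) { // improvement found: log one row
              bestQuality = quality;
              final long elapsedMs = System.currentTimeMillis() - start;
              // one row: (elapsed runtime in ms, best solution quality so far)
              out.println(elapsedMs + "\t" + bestQuality);
            }
          }
        }
      }
    }

Running such a program once per (instance, run) pair, for each parameter setting or algorithm you want to compare, yields exactly the kind of text files that steps 3 and 4 operate on.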

Publications

  • Thomas Weise, Zijun Wu, and Markus Wagner. An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), January 27 – February 1, 2019, Honolulu, Hawaii, USA, pages 2395–2402. Palo Alto, CA, USA: AAAI Press. ISBN: 978-1-57735-809-1
    doi:10.1609/aaai.v33i01.33012395 / pdf@IAO / pdf@AAAI / slides / poster / blog 1 / blog 2 / early preprint@arxiv
    Indexing: CCF Class A, EI

  • Thomas Weise, Yuezhong Wu, Weichen Liu, and Raymond Chiong. Implementation Issues in Optimization Algorithms: Do they matter? Journal of Experimental & Theoretical Artificial Intelligence (JETAI) 31(4):533–554, 2019.
    doi:10.1080/0952813X.2019.1574908
    Indexing: EI, ESCI, Zone 4, CCF Class C

  • Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, and Ke Tang. Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features. Applied Soft Computing Journal (ASOC), 73:366–382, December 2018.
    doi:10.1016/j.asoc.2018.08.030 / blog entry
    Indexing: EI, WOS:000450124900027, ESCI, Zone 1

  • Thomas Weise and Zijun Wu. Difficult Features of Combinatorial Optimization Problems and the Tunable W-Model Benchmark Problem for Simulating them. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15-19, 2018, Kyoto, Japan, pages 1769–1776, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208240 / pdf / slides / source codes / workshop website
    Additional experimental results with the W-Model, which were not used in this paper, can be found at doi:10.5281/zenodo.1256883.
    Indexing: EI, CCF Class C

  • Markus Ullrich, Thomas Weise, Abhishek Awasthi, and Jörg Lässig. A Generic Problem Instance Generator for Discrete Optimization Problems. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15-19, 2018, Kyoto, Japan, pages 1761–1768, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208284 / pdf / slides / source codes / workshop website
    Indexing: EI, CCF Class C

  • Qi Qi, Thomas Weise, and Bin Li. Optimization Algorithm Behavior Modeling: A Study on the Traveling Salesman Problem. In Proceedings of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, pages 861–866. ISBN: 978-1-5386-4362-4. Appeared in the International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the ICACI 2018.
    doi:10.1109/ICACI.2018.8377576 / pdf / slides / workshop website

  • Qi Qi, Thomas Weise, and Bin Li. Modeling Optimization Algorithm Runtime Behavior and its Applications. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17) Companion, July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 115-116, ISBN: 978-1-4503-4939-0.
    doi:10.1145/3067695.3076042 / paper / poster / blog entry 1 / blog entry 2
    Indexing: EI, CCF Class C

  • Thomas Weise. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking. In Newton Howard, Yingxu Wang, Amir Hussain, Freddie Hamdy, Bernard Widrow, and Lotfi A. Zadeh, editors, Proceedings of the 16th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC'17), July 26-28, 2017, University of Oxford, Oxford, UK, pages 490-497. Los Alamitos, CA, USA: IEEE Computer Society Press, ISBN: 978-1-5386-0770-1.
    doi:10.1109/ICCI-CC.2017.8109794 / paper / slides / blog entry
    Indexing: EI

Organized Events

  • Thomas Bäck, Thomas Bartz-Beielstein, Jakob Bossek, Bilel Derbel, Carola Doerr, Tome Eftimov, Pascal Kerschke, William La Cava, Arnaud Liefooghe, Manuel López-Ibáñez, Boris Naujoks, Pietro S. Oliveto, Patryk Orzechowski, Mike Preuss, Jérémy Rapin, Ofer M. Shir, Olivier Teytaud, Heike Trautmann, Ryan J. Urbanowicz, Vanessa Volz, Markus Wagner, Hao Wang, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@PPSN) Workshop at the Sixteenth International Conference on Parallel Problem Solving from Nature (PPSN XVI), September 5-9, 2020 in Leiden, The Netherlands.
    website / Call for Papers (CfP)

  • Thomas Bäck, Carola Doerr, Tome Eftimov, Pascal Kerschke, William La Cava, Manuel López-Ibáñez, Boris Naujoks, Pietro S. Oliveto, Patryk Orzechowski, Mike Preuss, Jérémy Rapin, Ofer M. Shir, Olivier Teytaud, Heike Trautmann, Ryan J. Urbanowicz, Vanessa Volz, Markus Wagner, Hao Wang, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop at the Genetic and Evolutionary Computation Conference (GECCO 2020), July 8-12, 2020, Cancún, Quintana Roo, Mexico.
    website / Call for Papers (CfP) / slides@google: 1,2,3 / slides@iao: 1,3

  • Carola Doerr, Pietro S. Oliveto, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop at the Genetic and Evolutionary Computation Conference (GECCO 2019), July 13-17, 2019, Prague, Czech Republic.
    website / Call for Papers (CfP)

  • Thomas Weise, Bin Li, Markus Wagner, Xingyi Zhang, and Jörg Lässig, eds. Special Issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing journal, published by Elsevier B.V. and indexed by EI and SCIE. The special issue was open for submissions from July 2018 until April 14, 2019. It is a virtual special issue, in which papers are published as soon as they are accepted.
    website / CfP-website / Call for Papers (CfP)

  • Pietro S. Oliveto, Markus Wagner, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop at the Fifteenth International Conference on Parallel Problem Solving from Nature (PPSN XV), 8-9 September 2018, Coimbra, Portugal
    website / Call for Papers (CfP)

  • Pietro S. Oliveto, Markus Wagner, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop at the 2018 Genetic and Evolutionary Computation Conference (GECCO 2018), July 15-19, 2018 in Kyoto, Japan
    website / Call for Papers (CfP)

  • Thomas Weise, Bin Li, Markus Wagner, Xingyi Zhang, and Jörg Lässig. International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, ISBN: 978-1-5386-4362-4.
    website / Call for Papers (CfP)

  • Thomas Weise and Jörg Lässig. SITA-UBRI Joint Workshop on Sustainable Logistics, November 5, 2015, Hefei, Anhui, China

  • Thomas Weise and Jörg Lässig. Special Session on Benchmarking and Testing for Production and Logistics Optimization of the 2014 IEEE Symposium on Computational Intelligence in Production and Logistics at the 2014 IEEE Symposium Series on Computational Intelligence (SSCI 2014), December 9-12, 2014, Orlando, Florida, USA