Today, the program of the Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop has been released. This first edition of the BB-DOB Workshop series takes place at the Genetic and Evolutionary Computation Conference (GECCO'18) in Kyoto, Japan, whose program can be found here and is mirrored here. The aim of our workshop is to develop a standard methodology for benchmarking black-box optimization algorithms in discrete and combinatorial domains, similar to the methodology that the BBOB workshops have established for continuous domains. We want to produce a well-motivated benchmark function testbed, a standardized experimental set-up, rules for measuring performance and generating data output, and standardized post-processing and presentation of the results in graphs and tables. Our workshop has six accepted submissions, will span two sessions, and will take place on Monday, July 16, 09:00-12:40, in Training Room 3 (2F). The schedule will leave room for discussions and socializing.

Session 1 (09:00-10:40)

  1. Ofer M. Shir, Carola Doerr, Thomas Bäck. Compiling a Benchmarking Test-Suite for Combinatorial Black-Box Optimization: A Position Paper. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1753-1760, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208251 / slides

    This contribution focuses on the challenge of formulating a set of benchmark problems and/or a test-suite for Combinatorial Optimization problems when treated as black-box global optimization problems. We discuss the dilemmas involved and the possible obstacles of such a compilation. To this end, we formulate a list of design questions that need to be answered as a first step in this compilation process. We articulate our perspective on these questions by proposing a rough classification of relevant problem classes, answering the posed questions, and suggesting a preliminary set of problems. While this position paper addresses the Evolutionary Computation community, it intends to offer an open-minded Computer Science perspective - by considering the broad definition of Combinatorial Optimization and by accounting for equivalent threads within the Operations Research and Mathematical Programming communities. At the same time, this work capitalizes on prior art in algorithm benchmarking, including the authors' own experience with the continuous BBOB benchmark problem set, as well as a number of discrete black-box optimization challenges frequently encountered in practice.

  2. Sebastian Raggl, Andreas Beham, Viktoria Hauder, Stefan Wagner, Michael Affenzeller. Discrete Real-world Problems in a Black-box Optimization Benchmark. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1745-1752, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208280

    Combinatorial optimization problems come in a wide variety of types, but five common problem components can be identified. This categorization can aid the selection of an interesting and diverse set of problems for inclusion in the combinatorial black-box problem benchmark. We suggest two real-world problems for inclusion into the benchmark: one is a transport-lot building problem, the other the clustered generalized quadratic assignment problem. We look into designing an interface for discrete black-box problems that can accommodate problems belonging to all of the described categories, as well as real-world problems that often feature multiple problem components. We describe three different interfaces for black-box problems: the first uses a general encoding for all types of problems, the second uses specialized encodings per problem type, and the third describes problems in terms of the available operators. We compare the strengths and weaknesses of the three designs.
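
    To make the three designs concrete, here is a small Python sketch of what such interfaces could look like. All class and method names are our own illustration and do not come from the paper or its accompanying code:

        from abc import ABC, abstractmethod
        from typing import List, Sequence


        class GeneralEncodingProblem(ABC):
            """Design 1: every problem is evaluated through one generic
            encoding (here, a vector of integers), whatever its structure."""

            @abstractmethod
            def evaluate(self, genome: Sequence[int]) -> float: ...


        class PermutationProblem(ABC):
            """Design 2: one interface per problem type, each using its
            natural encoding (e.g., permutations for assignment problems)."""

            @abstractmethod
            def evaluate(self, permutation: Sequence[int]) -> float: ...


        class OperatorBasedProblem(ABC):
            """Design 3: the problem is described only by the operators it
            offers; the solver never manipulates the encoding directly."""

            @abstractmethod
            def random_solution(self) -> object: ...

            @abstractmethod
            def neighbors(self, solution: object) -> List[object]: ...

            @abstractmethod
            def evaluate(self, solution: object) -> float: ...

    Roughly, a solver written against the third style works for any problem but cannot exploit encoding-specific knowledge, while the first two trade generality for such exploitation; this is the kind of trade-off the paper compares.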

  3. Thomas Weise, Zijun Wu. Difficult Features of Combinatorial Optimization Problems and the Tunable W-Model Benchmark Problem for Simulating Them. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1769-1776, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208240 / pdf / slides / source / data-doi:10.5281/zenodo.1256883

    The first event of the Black-Box Discrete Optimization Benchmarking (BB-DOB) workshop series aims to establish a set of example problems for benchmarking black-box optimization algorithms for discrete or combinatorial domains. In this paper, we 1) discuss important features that should be embodied by these benchmark functions and 2) present the W-Model problem, which exhibits them. The W-Model follows a layered approach, where each layer can either be omitted or introduce a different characteristic feature, such as neutrality via redundancy, ruggedness and deceptiveness, epistasis, and multi-objectivity, in a tunable way. The model problem is defined over bit-string representations, which allows for extracting some of its layers and stacking them on top of existing problems that use this representation, such as OneMax, the Maximum Satisfiability or Set Covering tasks, and the NK landscape. The ruggedness and deceptiveness layer can be stacked on top of any problem with integer-valued objectives. We put the W-Model into the context of related model problems targeting ruggedness, neutrality, and epistasis. We then present the results of a series of experiments to further substantiate the utility of the W-Model and to give an idea of suitable configurations that could be included in the BB-DOB benchmark suite.
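
    To illustrate the layering idea, here is a minimal Python sketch of a single neutrality-via-redundancy layer stacked on top of OneMax. This is a deliberate simplification under our own assumptions: the actual W-Model defines its layers (including tie-breaking) precisely in the paper, and also offers ruggedness, deceptiveness, epistasis, and multi-objectivity layers not shown here:

        def neutrality_layer(bits, mu):
            # Collapse each block of mu bits into one bit by majority vote;
            # ties go to 1 here (an assumption; the paper's rule may differ)
            # and leftover bits that do not fill a block are ignored.
            out = []
            for i in range(0, len(bits) - len(bits) % mu, mu):
                block = bits[i:i + mu]
                out.append(1 if 2 * sum(block) >= mu else 0)
            return out

        def onemax(bits):
            # Base problem: the number of one-bits (to be maximized).
            return sum(bits)

        def layered_objective(bits, mu=2):
            # Stack the neutrality layer on top of the base problem.
            return onemax(neutrality_layer(bits, mu))

        # A 10-bit candidate is reduced to 5 bits before evaluation:
        x = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
        print(layered_objective(x, mu=2))  # onemax([1, 0, 1, 1, 1]) = 4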

  4. Hao Wang, Furong Ye, Carola Doerr, Sander van Rijn, Thomas Bäck. IOHProfiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics. (presentation without accompanying publication)
    slides

Session 2 (11:00-12:40)

  1. Markus Ullrich, Thomas Weise, Abhishek Awasthi, Jörg Lässig. A Generic Problem Instance Generator for Discrete Optimization Problems. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1761-1768, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208284 / pdf

    Measuring the performance of an optimization algorithm requires benchmark instances of the relevant problems. In the area of discrete optimization, most well-known problems are already covered by a large variety of problem instances. However, when exploring the area of lesser-known optimization problems, there is usually not a sufficient number or variety of such benchmark instances available. The reasons for this lack of data range from privacy or confidentiality issues to technical difficulties that prevent the collection of such data. This results in the inability to evaluate new optimization algorithms on these problems. Ideally, the necessary data for a variety of problem instances can be created randomly in advance to measure the performance of a set of algorithms. Random problem instance generators already exist for specific problems; however, generic tools that are applicable to multiple optimization problems are rare and usually restricted to a small subset of problems. We propose a generic problem instance generator for discrete optimization problems, which is easy to configure and simple to extend to cover a broad variety of optimization problems. We show the capabilities of our tool by creating exemplary configurations for the TSP, Max-SAT, and a real-world load allocation problem to generate random problem instances.
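
    The following Python fragment sketches what such a configuration-driven generator could look like. The configuration keys and generator functions are our own illustration, not the interface of the proposed tool:

        import random

        def generate_tsp(config, rng):
            # Random Euclidean TSP instance: city coordinates in the unit square.
            return [(rng.random(), rng.random()) for _ in range(config["cities"])]

        def generate_maxsat(config, rng):
            # Random Max-SAT instance: m clauses of k signed variable indices
            # (a variable may repeat within a clause in this naive sketch).
            n, m, k = config["variables"], config["clauses"], config["clause_size"]
            return [[rng.choice((-1, 1)) * rng.randint(1, n) for _ in range(k)]
                    for _ in range(m)]

        GENERATORS = {"tsp": generate_tsp, "maxsat": generate_maxsat}

        def generate(config):
            # Dispatch on the problem type named in the configuration;
            # a fixed seed makes the generated instance reproducible.
            rng = random.Random(config.get("seed"))
            return GENERATORS[config["problem"]](config, rng)

        tsp_instance = generate({"problem": "tsp", "cities": 50, "seed": 1})
        sat_instance = generate({"problem": "maxsat", "variables": 20,
                                 "clauses": 80, "clause_size": 3, "seed": 1})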

  2. Pascal Kerschke, Jakob Bossek, Heike Trautmann. Parameterization of State-of-the-Art Performance Indicators: A Robustness Study Based on Inexact TSP Solvers. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1737-1744, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208233 / slides

    Performance comparisons of optimization algorithms are heavily influenced by the underlying indicator(s). In this paper, we investigate commonly used performance indicators for single-objective stochastic solvers, such as the Penalized Average Runtime (e.g., PAR10) or the Expected Running Time (ERT), based on exemplary benchmark performances of state-of-the-art inexact TSP solvers. Thereby, we introduce a methodology for analyzing the effects of (usually heuristically set) indicator parameterizations - such as the penalty factor and the method used for aggregating across multiple runs - w.r.t. the robustness of the considered optimization algorithms.
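
    For reference, the two indicators can be computed from a list of (runtime, success) pairs as in the following Python sketch. It uses the common definitions; the penalty factor and the aggregation method hard-coded here are exactly the parametrization choices whose variation the paper studies:

        import math

        def par_k(runs, cutoff, k=10):
            # Penalized Average Runtime: unsuccessful runs are charged
            # k times the cutoff budget (k = 10 yields PAR10).
            penalized = [t if ok else k * cutoff for t, ok in runs]
            return sum(penalized) / len(penalized)

        def ert(runs):
            # Expected Running Time: total effort over all runs divided by
            # the number of successful runs (infinite if none succeeded).
            successes = sum(1 for _, ok in runs if ok)
            return sum(t for t, _ in runs) / successes if successes else math.inf

        # Three runs with a cutoff budget of 100; one run fails.
        runs = [(12.0, True), (47.0, True), (100.0, False)]
        print(par_k(runs, cutoff=100))  # (12 + 47 + 1000) / 3 = 353.0
        print(ert(runs))                # (12 + 47 + 100) / 2 = 79.5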

  3. Aleš Zamuda, Christine Zarges, Miguel Nicolau. A Black-Box Discrete Optimization Benchmarking (BB-DOB) Pipeline Survey: Taxonomy, Evaluation, and Ranking. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1777-1782, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208307 / slides

    This paper provides a taxonomical survey of the classes of discrete optimization challenges that can be found in the literature, together with a proposed benchmarking pipeline inspired by previous computational optimization competitions. Thereby, a Black-Box Discrete Optimization Benchmarking (BB-DOB) perspective is presented for the BB-DOB@GECCO Workshop. The paper motivates why certain classes, together with their properties (such as deception, separability, or the toy-problem label), should be included in this perspective. Moreover, it discusses guidelines for selecting significant instances within these classes, the design-of-experiments setup, performance measures, and presentation methods and formats.