Call for Papers

2018 Genetic and Evolutionary Computation Conference (GECCO 2018)

July 15-19, 2018 in Kyoto, Japan
http://iao.hfuu.edu.cn/bbdob-gecco18

 

The Black-Box Discrete Optimization Benchmarking Workshop (BB-DOB@GECCO), part of the Genetic and Evolutionary Computation Conference (GECCO 2018), cordially invited the submission of original and unpublished research papers; six papers were accepted. The BB-DOB@GECCO Workshop Call for Papers (CfP) is available in PDF format and as a plain text file (see the Web Links section below). The workshop was held on July 16, 2018, at the GECCO 2018 conference in Kyoto, Japan.

The Black-Box Optimization Benchmarking (BBOB) methodology introduced by the long-standing and successful BBOB-GECCO workshop series has become a well-established standard for benchmarking continuous optimization algorithms. The aim of this workshop is to develop a similar standard methodology for benchmarking black-box optimization algorithms in discrete and combinatorial domains. We want to produce the following (a small illustrative sketch follows the list):

  1. a well-motivated benchmark function testbed,
  2. a standardized experimental set-up,
  3. rules for measuring performance and generating data output, and
  4. standardized post-processing and presentation of the results in graphs and tables.
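
To give a flavor of what such a standard could look like in practice, the following minimal sketch shows a scalable benchmark function with a shiftable optimum, together with a simple run log of the kind a standardized post-processing step could consume. All names (BenchmarkProblem, ShiftedOneMax, run_and_log) and the CSV layout are illustrative assumptions, not part of any agreed BB-DOB specification.

    # Minimal illustrative sketch (not an agreed BB-DOB standard): a scalable
    # pseudo-Boolean benchmark function with a shiftable optimum, plus a simple
    # run log that a standardized post-processing step could consume.
    import csv
    import random


    class BenchmarkProblem:
        """A scalable benchmark function over bit strings of length n."""

        def __init__(self, n, seed=0):
            self.n = n  # problem size: the testbed should be scalable in n
            rng = random.Random(seed)
            # Shift the optimum to a random point so the function is
            # non-trivial in the black-box sense.
            self.shift = [rng.randint(0, 1) for _ in range(n)]

        def evaluate(self, x):
            raise NotImplementedError


    class ShiftedOneMax(BenchmarkProblem):
        def evaluate(self, x):
            # Count agreements with the hidden optimum.
            return sum(1 for xi, si in zip(x, self.shift) if xi == si)


    def run_and_log(problem, budget, path, seed=42):
        """Pure random search, logging (evaluations, best-so-far) to a CSV file."""
        rng = random.Random(seed)
        best = -1
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["evaluations", "best_f"])
            for t in range(1, budget + 1):
                x = [rng.randint(0, 1) for _ in range(problem.n)]
                best = max(best, problem.evaluate(x))
                writer.writerow([t, best])
        return best


    if __name__ == "__main__":
        print(run_and_log(ShiftedOneMax(n=20), budget=1000, path="run_log.csv"))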

All papers accepted at this workshop will be included in the Companion Material Proceedings of the Genetic and Evolutionary Computation Conference 2018, published by ACM and indexed by EI. Authors of selected papers will be invited to submit extended versions of their papers to the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing journal (Elsevier B.V.), indexed by EI and SCIE. The Special Issue Call for Papers (CfP) is available in PDF format and as a plain text file (see the Web Links section below).

Disclaimer: Two BB-DOB workshops will take place in 2018: the first edition as BB-DOB@GECCO and the second edition as BB-DOB@PPSN. Both are independent events of the same series.

For more information, please contact Pietro S. Oliveto.

This workshop is organized as part of the ImAppNIO Cost Action 15140.

Topics of Interest

This first installment of the new BB-DOB workshop series focuses on the debate about which functions should be included in the benchmarking testbed. These benchmark functions should capture the difficulties of combinatorial optimization problems in practice, but at the same time be comprehensible, so that the resulting algorithm behaviors can be understood and interpreted. In this way, the mutual advantages and disadvantages of algorithms can be analyzed in depth. Gaining such insights is vital for the design of improved algorithms. The sought benchmark functions should ideally be scalable with the problem size and non-trivial in the black-box optimization sense, i.e., they should allow shifting the optimum to any point.

This workshop wants to provide a common forum that brings together experts on the benchmarking of optimization algorithms. Interested participants are encouraged to submit papers related to black-box optimization benchmarking of discrete optimizers in the widest sense, in particular papers that suggest or discuss:

  • function classes that should be included in the function collection, with a motivation of the reasons for their inclusion,
  • benchmark function properties that make it possible to capture difficulties occurring in real-world applications (e.g., deception, separability, etc.; a small illustrative example follows this list),
  • which classes of standard combinatorial optimization problems should be included and how to select significant instances,
  • which classes of toy problems should be included, and why, and
  • issues concerning any other aspect of benchmarking methodology for discrete optimizers, such as the design of experiments, performance measures, presentation methods, etc.
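
As a concrete (and deliberately simple) illustration of the kind of properties listed above, the sketch below shows a classic fully deceptive trap function and its separable concatenation. This is a textbook example, not a function already agreed upon for the testbed.

    # Textbook example of two properties mentioned above: a k-bit trap function
    # is deceptive (the fitness gradient points away from the global optimum at
    # the all-ones string), and concatenating independent traps yields a
    # separable problem. Not an agreed testbed function.
    def trap(block):
        k = len(block)
        ones = sum(block)
        # Global optimum k at all ones; deceptive slope towards all zeros.
        return k if ones == k else k - 1 - ones


    def concatenated_traps(x, k=5):
        # Separable: the objective is a sum over independent k-bit blocks.
        return sum(trap(x[i:i + k]) for i in range(0, len(x), k))


    if __name__ == "__main__":
        print(trap([1, 1, 1, 1, 1]))                    # 5 (global optimum)
        print(trap([0, 0, 0, 0, 0]))                    # 4 (deceptive local optimum)
        print(concatenated_traps([1] * 10 + [0] * 10))  # 5 + 5 + 4 + 4 = 18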

Important Dates

Paper Submission Opens: 27 February 2018
Paper Submission Deadline: 3 April 2018 (extended)
Notification of Acceptance: 11 April 2018
Camera-Ready Copy Due: 28 April 2018
Conference Presentation: 16 July 2018

Workshop

The BB-DOB@GECCO workshop was held on July 16, 2018, at the GECCO 2018 conference in Kyoto, Japan. It was a very enjoyable workshop, thanks to the authors, who delivered talks of high quality and interest. We organizers would therefore like to use this chance to thank all contributors to this workshop for their work, as well as the GECCO programme committee, who not only gave us the chance to hold this two-session event at the major conference on Evolutionary Computation, but also provided valuable guidance and support. And, of course, without an interested audience, no workshop can be considered successful. We are therefore also very grateful to the researchers who attended our workshop, asked important questions, and provided great input on future research directions and open problems. Indeed, the audience grew steadily during the two sessions, which underlines that benchmarking of discrete optimization algorithms is an important research direction. We hope that we can also welcome you at the BB-DOB@PPSN workshop or as authors of submissions to our Special Issue on Benchmarking of Computational Intelligence Algorithms. Finally, a big thank you to our reviewers, who did a great and diligent job: every submitted paper received three to four constructive reviews, providing valuable suggestions and guidance.

Thanks again for contributing to this event. It's been great fun, and we will have enough to discuss for many moons.

Program

Our workshop spanned two sessions and took place on Monday, July 16, 09:00-12:40, in Training Room 3 (2F). The schedule left room for discussions and socializing.

Session 1 (09:00-10:40)

  1. Ofer M. Shir, Carola Doerr, Thomas Bäck. Compiling a Benchmarking Test-Suite for Combinatorial Black-Box Optimization: A Position Paper. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1753-1760, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208251 / slides

    This contribution focuses on the challenge of formulating a set of benchmark problems and/or a test-suite for Combinatorial Optimization problems when treated as black-box global optimization problems. We discuss the involved dilemmas and possible obstacles of such a compilation. To this end, we formulate a list of design questions that need to be answered as a first step in this compilation process. We articulate our perspective on these questions by proposing a rough classification of relevant problem classes, answering the posed questions, and suggesting a preliminary set of problems. While this position paper addresses the Evolutionary Computation community, it intends to offer an open-minded Computer Science perspective - by considering the broad definition of Combinatorial Optimization and by accounting for equivalent threads within Operations Research and Mathematical Programming communities. At the same time, this work capitalizes on prior art in algorithms' benchmarking, including the authors' own experience with the continuous BBOB benchmark problem set, as well as a number of discrete black-box optimization challenges frequently encountered in practice.

  2. Sebastian Raggl, Andreas Beham, Viktoria Hauder, Stefan Wagner, Michael Affenzeller. Discrete Real-world Problems in a Black-box Optimization Benchmark. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1745-1752, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208280

    Combinatorial optimization problems come in a wide variety of types, but five common problem components can be identified. This categorization can aid the selection of an interesting and diverse set of problems for inclusion in the combinatorial black-box problem benchmark. We suggest two real-world problems for inclusion into the benchmark: one is a transport-lot building problem, the other is the clustered generalized quadratic assignment problem. We look into designing an interface for discrete black-box problems that can accommodate problems belonging to all of the described categories, as well as real-world problems that often feature multiple problem components. We describe three different interfaces for black-box problems: the first uses a general encoding for all types of problems, the second uses specialized encodings per problem type, and the last describes problems in terms of the available operators. We compare the strengths and weaknesses of the three designs.

  3. Thomas Weise, Zijun Wu. Difficult Features of Combinatorial Optimization Problems and the Tunable W-Model Benchmark Problem for Simulating them. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1769-1776, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208240 / pdf / slides / source / data-doi:10.5281/zenodo.1256883.

    The first event of the Black-Box Discrete Optimization Benchmarking (BB-DOB) workshop series aims to establish a set of example problems for benchmarking black-box optimization algorithms for discrete or combinatorial domains. In this paper, we 1) discuss important features that should be embodied by these benchmark functions and 2) present the W-Model problem which exhibits them. The W-Model follows a layered approach, where each layer can either be omitted or introduce a different characteristic feature such as neutrality via redundancy, ruggedness and deceptiveness, epistasis, and multi-objectivity, in a tunable way. The model problem is defined over bit string representations, which allows for extracting some of its layers and stacking them on top of existing problems that use this representation, such as OneMax, the Maximum Satisfiability or the Set Covering tasks, and the NK landscape. The ruggedness and deceptiveness layer can be stacked on top of any problem with integer-valued objectives. We put the W-Model into the context of related model problems targeting ruggedness, neutrality, and epistasis. We then present the results of a series of experiments to further substantiate the utility of the W-Model and to give an idea about suitable configurations of it that could be included in the BB-DOB benchmark suite.

  4. Hao Wang, Furong Ye, Carola Doerr, Sander van Rijn, Thomas Bäck. IOHProfiler: A Benchmarking and Profiling Tool for Iterative Optimization Heuristics. (presentation without accompanying publication)
    slides
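
To make the interface discussion in talk 2 above more tangible, here is a rough, hypothetical sketch of how the first (general encoding) and third (operator-based) interface styles could differ in code. The class and method names are our own illustration and are not taken from the authors' actual implementation.

    # Hypothetical sketch only: contrasts the "general encoding" and the
    # "operator-based" interface styles discussed in talk 2. The names below
    # are our own and do not reflect the authors' actual API.
    from abc import ABC, abstractmethod


    class GeneralEncodingProblem(ABC):
        """Style 1: every problem is evaluated on one generic encoding,
        e.g., a fixed-length bit string."""

        @abstractmethod
        def dimension(self) -> int:
            ...

        @abstractmethod
        def evaluate(self, bits) -> float:
            ...


    class OperatorBasedProblem(ABC):
        """Style 3: the problem exposes its solutions and the operators an
        optimizer may use, instead of committing to one fixed encoding."""

        @abstractmethod
        def random_solution(self):
            ...

        @abstractmethod
        def neighbors(self, solution):
            ...

        @abstractmethod
        def evaluate(self, solution) -> float:
            ...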
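
As one concrete illustration of the layered construction described in talk 3, the sketch below implements a neutrality-via-redundancy layer in the spirit of the W-Model: blocks of mu bits are collapsed into a single bit, so many genotypes map to the same shorter string that the underlying objective (here OneMax) evaluates. This is our own simplified reading; the tie-breaking rule in particular is an assumption, not the authors' exact definition.

    # Simplified illustration of a neutrality-via-redundancy layer in the
    # spirit of the W-Model (talk 3); our own reading, not the reference code.
    def neutrality_layer(bits, mu=2):
        """Collapse consecutive blocks of mu bits into one bit by majority vote."""
        out = []
        usable = len(bits) - len(bits) % mu  # drop an incomplete trailing block
        for i in range(0, usable, mu):
            block = bits[i:i + mu]
            # Assumption: ties map to 0; the original model fixes this precisely.
            out.append(1 if 2 * sum(block) > len(block) else 0)
        return out


    def onemax(bits):
        return sum(bits)


    if __name__ == "__main__":
        x = [1, 1, 0, 1, 0, 0, 1, 1]
        reduced = neutrality_layer(x, mu=2)  # -> [1, 0, 0, 1] under the tie rule above
        print(reduced, onemax(reduced))      # [1, 0, 0, 1] 2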

Session 2 (11:00-12:40)

  1. Markus Ullrich, Thomas Weise, Abhishek Awasthi, Jörg Lässig. A Generic Problem Instance Generator for Discrete Optimization Problems. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1761-1768, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208284 / pdf / slides / sources

    Measuring the performance of an optimization algorithm involves benchmark instances of related problems. In the area of discrete optimization, most well-known problems are covered by a large variety of problem instances already. However, while exploring the area of lesser-known optimization problems, there is usually not a sufficient amount or variety of such benchmark instances available. The reasons for this lack of data vary from privacy or confidentiality issues to technical difficulties that prevent the collection of such data. This results in the inability to evaluate new optimization algorithms on these problems. Ideally, the necessary data for a variety of problem instances can be created randomly in advance to measure the performance of a set of algorithms. Random problem instance generators exist for specific problems already; however, generic tools that are applicable to multiple optimization problems are rare and usually restricted to a smaller subset of problems. We propose a generic problem instance generator for discrete optimization problems, which is easy to configure and simple to expand to cover a broad variety of optimization problems. We show the capabilities of our tool by creating exemplary configurations for the TSP, Max-SAT, and a real-world load allocation problem to generate random problem instances.

  2. Pascal Kerschke, Jakob Bossek, Heike Trautmann. Parameterization of State-of-the-Art Performance Indicators: A Robustness Study Based on Inexact TSP Solvers. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1737-1744, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208233 / slides

    Performance comparisons of optimization algorithms are heavily influenced by the underlying indicator(s). In this paper we investigate commonly used performance indicators for single-objective stochastic solvers, such as the Penalized Average Runtime (e.g., PAR10) or the Expected Running Time (ERT), based on exemplary benchmark performances of state-of-the-art inexact TSP solvers. Thereby, we introduce a methodology for analyzing the effects of (usually heuristically set) indicator parametrizations - such as the penalty factor and the method used for aggregating across multiple runs - w.r.t. the robustness of the considered optimization algorithms.

  3. Aleš Zamuda, Christine Zarges, Miguel Nicolau. A Black-Box Discrete Optimization Benchmarking (BB-DOB) Pipeline Survey: Taxonomy, Evaluation, and Ranking. In Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop of Companion Material Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2018), July 15th-19th 2018, Kyoto, Japan, pages 1777-1782, ISBN: 978-1-4503-5764-7. ACM.
    doi:10.1145/3205651.3208307 / slides

    This paper provides a taxonomical identification survey of classes of discrete optimization challenges that can be found in the literature, including a proposed pipeline for benchmarking inspired by previous computational optimization competitions. Thereby, a Black-Box Discrete Optimization Benchmarking (BB-DOB) perspective is presented for the BB-DOB@GECCO Workshop. It motivates why certain classes, together with their properties (such as deception and separability, or a toy-problem label), should be included in this perspective. Moreover, guidelines on how to select significant instances within these classes, the design of the experimental setup, performance measures, and presentation methods and formats are discussed.
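
For an idea of what randomly generated benchmark instances can look like (talk 1 of this session), the sketch below creates a tiny random symmetric TSP instance. It is a generic toy example and does not use the authors' generator, its configuration format, or its output format.

    # Toy example only (talk 1 of this session): a random symmetric Euclidean
    # TSP instance; this is not the authors' generator or its file format.
    import random


    def random_tsp_instance(n_cities, seed=0, size=1000.0):
        """Sample city coordinates uniformly and build a symmetric distance matrix."""
        rng = random.Random(seed)
        cities = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n_cities)]
        dist = [[0.0] * n_cities for _ in range(n_cities)]
        for i in range(n_cities):
            for j in range(i + 1, n_cities):
                (x1, y1), (x2, y2) = cities[i], cities[j]
                d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                dist[i][j] = dist[j][i] = d
        return cities, dist


    if __name__ == "__main__":
        cities, dist = random_tsp_instance(5, seed=1)
        print(cities[0], dist[0][1])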
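
For readers unfamiliar with the indicators examined in talk 2, the sketch below shows their commonly used textbook definitions: PAR-k counts unsuccessful runs at k times the cutoff budget (k = 10 gives PAR10), and the ERT divides the total effort spent over all runs by the number of successful runs. The exact parametrization and aggregation choices are precisely what the paper puts under scrutiny.

    # Commonly used textbook definitions of the indicators examined in talk 2;
    # the parametrization (penalty factor, aggregation) is what the paper studies.
    def par_k(runs, cutoff, k=10):
        """Penalized average runtime: unsuccessful runs count as k * cutoff."""
        penalized = [t if ok else k * cutoff for t, ok in runs]
        return sum(penalized) / len(penalized)


    def ert(runs, cutoff):
        """Expected running time: total effort over all runs divided by the
        number of successful runs (infinite if no run succeeded)."""
        successes = sum(1 for _, ok in runs if ok)
        if successes == 0:
            return float("inf")
        total = sum(t if ok else cutoff for t, ok in runs)
        return total / successes


    if __name__ == "__main__":
        # (runtime, succeeded?) pairs for five runs with a cutoff budget of 1000
        runs = [(120, True), (450, True), (1000, False), (300, True), (1000, False)]
        print(par_k(runs, cutoff=1000))  # (120 + 450 + 10000 + 300 + 10000) / 5 = 4174.0
        print(ert(runs, cutoff=1000))    # (120 + 450 + 1000 + 300 + 1000) / 3 ≈ 956.67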


Instructions for Authors

Detailed instructions regarding how to submit papers and the paper format are given at the conference website http://gecco-2018.sigevo.org/index.html/Papers+Submission+Instructions. Some general things to consider are:

  • Submitted papers should be ANONYMIZED, i.e., they must not contain any element that may reveal the identity of their authors, in order to facilitate the double-blind review process.
  • The maximum paper length is eight pages.
  • The paper format is the ACM SIG format.
  • The paper size is US Letter (8-1/2x11 inches).
  • Papers must be submitted in PDF format.
  • Papers must include a correct ACM copyright statement.
  • If the paper is accepted, all sources (LaTeX, Word, all graphics, etc.) need to be submitted alongside the final version.

Chairs

  • Pietro S. Oliveto, University of Sheffield, UK
  • Markus Wagner, University of Adelaide, Adelaide, SA, Australia
  • Thomas Weise, Institute of Applied Optimization, Hefei University, Hefei, China
  • Borys Wróbel, Adam Mickiewicz University, Poland
  • Aleš Zamuda, University of Maribor, Slovenia

International Program Committee

  • Abhishek Awasthi, University of Applied Sciences Zittau/Görlitz, Görlitz, Germany
  • Christian Blum, Artificial Intelligence Research Institute, Bellaterra, Spain
  • Josu Ceberio Uribe, University of the Basque Country, Bilbao, Spain
  • Wenxiang Chen, Colorado State University, Fort Collins, CO, USA
  • Raymond Chiong, The University of Newcastle, Callaghan, Australia
  • Carola Doerr, Université Pierre et Marie Curie - Paris 6, Paris, France
  • Mohamed El Yafrani, Mohammed V University of Rabat, Rabat, Morocco
  • José Manuel García Nieto, Universidad de Málaga, Málaga, Spain
  • Thomas Jansen, Aberystwyth University, UK
  • Michael Kolonko, Clausthal University of Technology, Clausthal-Zellerfeld, Germany
  • Algirdas Lančinskas, Vilnius University, Lithuania
  • William La Cava, University of Pennsylvania, Philadelphia, PA, USA
  • Jörg Lässig, University of Applied Sciences Zittau/Görlitz, Görlitz, Germany
  • Bin Li, University of Science and Technology of China, Hefei, China
  • Jinlong Li, University of Science and Technology of China, Hefei, China
  • Pu Li, Technische Universität Ilmenau, Ilmenau, Germany
  • Zhen Liu, Institute of Applied Optimization, Hefei University, Hefei, China
  • Manuel López-Ibáñez, University of Manchester, Manchester, UK
  • Yi Mei, Victoria University of Wellington, Wellington, New Zealand
  • Martin Middendorf, Leipzig University, Leipzig, Germany
  • Frank Neumann, University of Adelaide, Australia
  • Miguel Nicolau, University College Dublin, Ireland
  • Pietro S. Oliveto, University of Sheffield, UK
  • Qi Qi, University of Science and Technology of China, Hefei, China
  • Ofer Shir, Tel-Hai College, Israel
  • Danilo Sipoli Sanches, Federal University of Technology – Paraná, Cornélio Procópio, Brazil
  • Kate Smith-Miles, The University of Melbourne, Melbourne, VIC, Australia
  • Thomas Stützle, Université Libre de Bruxelles (ULB), IRIDIA, Belgium
  • Markus Ullrich, University of Applied Sciences Zittau/Görlitz, Görlitz, Germany
  • Ryan J. Urbanowicz, University of Pennsylvania, Philadelphia, PA, USA
  • Sébastien Verel, Université du Littoral Côte d'Opale, France
  • Markus Wagner, University of Adelaide, Adelaide, SA, Australia
  • Thomas Weise, Institute of Applied Optimization, Hefei University, Hefei, China
  • Carsten Witt, Technical University of Denmark, Denmark
  • Borys Wróbel, Adam Mickiewicz University, Poland
  • Zijun Wu, Institute of Applied Optimization, Hefei University, Hefei, China
  • Yang Yu, Nanjing University, Nanjing, China
  • Aleš Zamuda, University of Maribor, Slovenia
  • Xingyi Zhang, Anhui University, Hefei, China

Chair Biographies

Portrait of Dr. Pietro S. Oliveto

Pietro S. Oliveto is a Senior Lecturer and an EPSRC Early Career Fellow at the University of Sheffield, UK. He received the Laurea degree in computer science from the University of Catania, Italy, in 2005 and the PhD degree from the University of Birmingham, UK, in 2009. He was an EPSRC PhD+ Fellow (2009-2010) and an EPSRC Postdoctoral Fellow (2010-2013) at Birmingham, and a Vice-Chancellor's Fellow at Sheffield (2013-2016). His main research interest is the performance analysis of bio-inspired computation techniques, including evolutionary algorithms, genetic programming, artificial immune systems, and hyperheuristics. He has won best paper awards at GECCO 2008, ICARIS 2011, and GECCO 2014. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), Associate Editor of the IEEE Transactions on Evolutionary Computation, Chair of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation, Leader of the ImAppNIO Cost Action Working Group on Benchmarking, and a member of the EPSRC Peer Review College. Dr. Oliveto has given tutorials on the runtime complexity analysis of EAs regularly at CEC, GECCO, WCCI, SSCI, and PPSN since 2012.

Portrait of Senior Lecturer Dr. Markus Wagner

Markus Wagner is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. He did his PhD studies at the Max Planck Institute for Informatics in Saarbrücken, Germany, and at the University of Adelaide, Australia. His research topics range from mathematical runtime analysis of heuristic optimization algorithms and theory-guided algorithm design to applications of heuristic methods to renewable energy production, professional team cycling, and software engineering. So far, he has been a program committee member 30 times and has written over 70 articles with over 70 different co-authors. He has chaired several education-related committees within the IEEE CIS, is Co-Chair of ACALCI 2017, and is General Chair of ACALCI 2018.

Portrait of Prof. Dr. Thomas Weise

Thomas Weise obtained the MSc in Computer Science in 2005 from the Chemnitz University of Technology and his PhD from the University of Kassel in 2009. He then joined the University of Science and Technology of China (USTC) as a postdoctoral researcher and subsequently became Associate Professor at the USTC-Birmingham Joint Research Institute in Intelligent Computation and Its Applications (UBRI) at USTC. In 2016, he joined Hefei University as a Full Professor to found the Institute of Applied Optimization at the Faculty of Computer Science and Technology. Prof. Weise has more than seven years of experience as a full-time researcher in China and has contributed significantly to both fundamental and applied research. He has more than 80 scientific publications in international peer-reviewed journals and conferences. His book "Global Optimization Algorithms – Theory and Application" has been cited more than 730 times. He has acted as reviewer, editor, or programme committee member at 70 different venues.

Portrait of Prof. Dr. Borys Wróbel

Borys Wróbel's background is in biology and computer science, and he works at the intersection of the two fields. His current research interest is the computational properties of biologically-inspired models of computation (artificial gene regulatory networks and spiking neural networks), which involves building artificial life software platforms that use high-performance computing and neuromorphic hardware. Borys graduated from the University of Gdansk (Poland) in 1997, was a Fulbright Visiting Researcher at the Salk Institute for Biological Studies in San Diego, CA, and later a FEBS and EMBO Fellow at the Hebrew University of Jerusalem (Israel), a Marie Curie Postdoctoral Fellow at the University of Valencia, and a Sciex Fellow at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich (Switzerland). He is a member of the Global Young Academy, of the Intelligent Systems Applications Technical Committee of the Institute of Electrical and Electronics Engineers (IEEE) Computational Intelligence Society (since 2012), and of the Association for Computing Machinery (ACM) Special Interest Group on Genetic and Evolutionary Computation.

Portrait of Dr. Aleš Zamuda

Aleš Zamuda is an Assistant Professor and Researcher at the University of Maribor (UM), Slovenia. He received his Ph.D. (2012), M.Sc. (2008), and B.Sc. (2006) degrees in computer science from UM. He is the management committee (MC) member for Slovenia in the European Cooperation in Science and Technology (COST) actions CA15140 (ImAppNIO - Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice) and IC1406 (cHiPSet - High-Performance Modelling and Simulation for Big Data Applications). He is an IEEE Senior Member, IEEE Young Professionals Chair for the Slovenia Section, an IEEE CIS member, an ACM SIGEVO member, vice-chair of the ImAppNIO Benchmarks working group, and an editorial board member (associate editor) of Swarm and Evolutionary Computation (2017 IF = 3.893). His areas of computer science applications include ecosystems, evolutionary algorithms, multicriterion optimization, artificial life, and computer animation; he currently has an h-index of 16, 38 publications, and 742 citations on Scopus. He won the IEEE R8 SPC 2007 award, the IEEE CEC 2009 ECiDUE, and the 2016 Danubius Young Scientist Award, and was among the top 1% of reviewers in the 2017 Publons Peer Review Awards, with reviews for 40 journals and 65 conferences.

Hosting Event

The Genetic and Evolutionary Computation Conference (GECCO) 2018, July 15-19, 2018 in Kyoto, Japan: http://gecco-2018.sigevo.org/.

The Genetic and Evolutionary Computation Conference (GECCO 2018) will present the latest high-quality results in genetic and evolutionary computation. Topics include genetic algorithms, genetic programming, evolution strategies, evolutionary programming, memetic algorithms, hyper-heuristics, real-world applications, evolutionary machine learning, evolvable hardware, artificial life, adaptive behavior, ant colony optimization, swarm intelligence, biological applications, evolutionary robotics, coevolution, artificial immune systems, and more.

The GECCO 2018 Program Committee invites the submission of technical papers describing your best work in genetic and evolutionary computation. Full papers of at most 8 pages should present original new work that meets the high-quality standards of GECCO. Accepted full papers appear in the ACM digital library as part of the MAIN PROCEEDINGS of GECCO. For full papers, a separate abstract needs to be submitted first by January 30, 2018. Full papers are then due by the non-extensible deadline of February 6, 2018.

Each paper submitted to GECCO will be rigorously evaluated in a double-blind review process. Evaluation is done on a per-track basis, ensuring high interest and high expertise of the reviewers. Review criteria include the significance of the work, technical soundness, novelty, clarity, writing quality, relevance, and sufficiency of information to permit replication, if applicable.

Besides the traditional GECCO full papers, poster-only papers of at most 2 pages can be submitted. These should present original new work that has not yet reached the stage of the more mature, complete research results typically published in full papers at GECCO. The review of poster-only papers follows the same double-blind process described above. However, accepted poster-only papers appear in the ACM digital library as part of the COMPANION PROCEEDINGS of GECCO. Poster-only papers are due by the non-extensible deadline of February 6, 2018, and no abstract needs to be submitted first.

If an author has more than one paper to present at the conference, there are NO additional registration fees. The author registers one time (pays for 1 registration) and presents as many papers as have been accepted. If a paper has more than one author, and more than one author will attend the conference, then each author in attendance must pay for a separate registration.

Related Events

Web Links

  1. Special Session on Benchmarking of Computational Intelligence Algorithms (BOCIA'21)
  2. Workshop Call for Papers (CfP) in PDF format
  3. Workshop Call for Papers (CfP) as plain text file
  4. WikiCFP Page of the Workshop
  5. Pietro S. Oliveto's BB-DOB @ GECCO page
  6. Sister Workshop: Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop
  7. Related Special Issue: Special Issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing Journal
  8. Related Workshop: International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)
  9. BB-DOB @ GECCO: Submission Deadline Extended to April 3, 2018