Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop
at the Genetic and Evolutionary Computation Conference (GECCO 2020)
July 8-12, 2020, Cancún, Quintana Roo, Mexico
https://sites.google.com/view/benchmarking-network/home/GECCO20
http://iao.hfuu.edu.cn/benchmark-gecco20
The Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop was part of the Genetic and Evolutionary Computation Conference (GECCO) 2020 and took place on July 8 (Cancún time; July 9 in China) as an online meeting with over 70 international participants. We had an incredibly inspiring discussion, and the workshop will likely have a lasting impact and has strengthened our young community effort towards better benchmarking of metaheuristic algorithms.
Here you can download the original BENCHMARK@GECCO Workshop Call for Papers (CfP) in PDF format and here as a plain text file.
Scope and Objectives
Benchmarking aims to illuminate the strengths and weaknesses of algorithms with respect to different problem characteristics. To this end, several benchmarking suites have been designed that target different types of characteristics. Gaining insight into the behavior of algorithms on a wide array of problems has benefits for different stakeholders: it helps engineers new to the field of optimization find an algorithm suitable for their problem, and it allows experts in optimization to develop new algorithms and improve existing ones. Even though benchmarking is a highly researched topic within the evolutionary computation community, a number of open questions and challenges remain to be explored:
- most commonly used benchmarks are small and do not cover the space of meaningful problems,
- benchmarking suites lack the complexity of real-world problems,
- proper statistical analysis techniques that can easily be applied depending on the nature of the data are lacking or seldom used, and
- user-friendly, openly accessible benchmarking techniques and software need to be developed and spread.
We wish to enable a culture of sharing to ensure direct access to resources as well as reproducibility. This helps to avoid common pitfalls in benchmarking such as overfitting to specific test cases. We aim to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
Workshop Description
As the goal of the workshop is to discuss, develop, and improve benchmarking practices in evolutionary computation, we particularly welcome informal position statements addressing or identifying open challenges in benchmarking, as well as any other suggestions and contributions to the discussion. Possible contributions include, but are not limited to:
- lists of open questions/issues in benchmarking,
- examples of good benchmarking, and
- descriptions of common pitfalls in benchmarking and how to avoid them.
For all other information about the workshop, please contact Thomas Weise at
Topics
The topics of interest for this workshop include, but are not limited to:
- the selection of meaningful (real-world) benchmark problems,
- performance measures for comparing algorithm behavior,
- novel statistical approaches for analyzing empirical data,
- landscape analysis,
- data mining approaches for understanding algorithm behavior,
- transfer learning from benchmark experiences to real-world problems, and
- benchmarking tools for executing experiments and analysis of experimental results.
Program
Our workshop took place on Wednesday, July 8, 2020, at 16:10 Cancún time, which was 5:10pm in New York, 22:10 in the UK, 00:10 on July 9 in Moscow, 05:10 on July 9 in Beijing, and 06:10 on July 9 in Tokyo. The program of our workshop was as follows:
- Discussion (30 min approx): What is the purpose of benchmarking?
- Organizer: Boris Naujoks, joint work with Thomas Bartz-Beielstein and Carola Doerr
- Session Chair: Tome Eftimov
- slides@google / slides@iao
- Discussion (30 min approx): What are good benchmarking practices / how to do benchmarking / what not to do?
- Organizer: William La Cava
- Session Chair: Pietro S. Oliveto
- slides@google
- Presentation (20 min: 15 presentation + 5 discussion): Makoto Ohki. “Benchmark with Facile Adjustment of Difficulty for Many-Objective Genetic Programming and Its Reference Set”
- Speaker: Makoto Ohki
- Session Chair: Thomas Weise
- Discussion (30 min approx): Open questions/issues in benchmarking / What to do next?
- Organizer: Vanessa Volz
- Session Chair: Carola Doerr
- slides@google / slides@iao
The workshop was a great success: more than 70 researchers from all over the world took part in it. There were many discussions and talks, which will have a lasting impact and which have likely sparked new efforts in our community. It is clear that all of us strive to improve the way metaheuristic algorithms and optimization problems are investigated. Good research requires sound and reproducible experimentation. Our report Benchmarking in Optimization: Best Practice and Open Issues, published on July 7 on arXiv, was one step in that direction. Together with the input from the workshop, it will be improved further. Other community efforts discussed in this workshop include the benchmarking network and IOHprofiler. All in all, many questions and topics have been raised, so we now have plenty of exciting work ahead to answer, research, and implement them.
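To make the notion of sound and reproducible experimentation concrete, below is a minimal sketch (not part of the workshop materials) of a fixed-target benchmarking run of the kind that tools such as IOHprofiler automate: an algorithm is run several times with fixed random seeds on a test problem, and we record how many evaluations each run needs to reach a target quality. All names in it (sphere, random_search, evals_to_target) are illustrative assumptions, not taken from any particular benchmarking suite or tool.
```python
# Minimal, self-contained sketch of a fixed-target benchmarking experiment.
# The problem (sphere) and algorithm (pure random search) are deliberately
# simple placeholders; real suites and tools (e.g., IOHprofiler) provide these.
import random


def sphere(x):
    """Toy test problem: minimize the sum of squares."""
    return sum(xi * xi for xi in x)


def evals_to_target(problem, dim, budget, target, seed):
    """Fixed-target measure: the first evaluation at which the best-so-far
    value of a pure random search reaches the target, or None on failure."""
    rng = random.Random(seed)  # fixed seed => the run is reproducible
    best = float("inf")
    for evals in range(1, budget + 1):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, problem(x))
        if best <= target:
            return evals
    return None  # target not reached within the evaluation budget


if __name__ == "__main__":
    # 15 independent runs with documented seeds rather than a single run.
    runs = [evals_to_target(sphere, dim=3, budget=10_000, target=0.5, seed=s)
            for s in range(15)]
    hits = sorted(r for r in runs if r is not None)
    print(f"{len(hits)}/{len(runs)} runs reached the target")
    if hits:
        print(f"median evaluations among successful runs: {hits[len(hits) // 2]}")
```
Fixed-target runtimes of this kind can then be compared across algorithms with the statistical approaches discussed in the workshop.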
Important Dates
Paper Submission Opening: 27 February 2020
Paper Submission Deadline: 17 April 2020
Decisions Due: 1 May 2020
Camera-Ready Material Due: 8 May 2020
Author Registration Deadline: 11 May 2020
Conference Presentation: 8 July 2020
Submission Page: https://ssl.linklings.net/conferences/gecco/
Instructions for Authors
All relevant instructions regarding paper submission are available at https://gecco-2020.sigevo.org/index.html/tiki-index.php?page=Workshops.
Related Event @ PPSN
A similar benchmarking best-practices workshop will be held at PPSN 2020, which takes place September 5-9, 2020, in Leiden, The Netherlands: https://sites.google.com/view/benchmarking-network/home/activities/PPSN20 or http://iao.hfuu.edu.cn/benchmark-ppsn20. Contributions to this workshop are welcome in any format until June 8, 2020.
List of Organizers
The workshop is co-organized by the following people (listed alphabetically); all of them contribute equally to the workshop.
- Thomas Bäck, Leiden University, Leiden, The Netherlands
- Carola Doerr, CNRS and Sorbonne University, Paris, France
- Tome Eftimov, Stanford University, CA, USA and Jožef Stefan Institute, Ljubljana, Slovenia
- Pascal Kerschke, University of Münster, Münster, Germany
- William La Cava, University of Pennsylvania, Philadelphia, PA, USA
- Manuel López-Ibáñez, University of Manchester, Manchester, UK
- Boris Naujoks, Technical University of Cologne, Köln (Cologne), Germany
- Pietro S. Oliveto, University of Sheffield, Sheffield, UK
- Patryk Orzechowski, University of Pennsylvania, Philadelphia, PA, USA
- Mike Preuss, LIACS, Leiden University, Leiden, The Netherlands
- Jérémy Rapin, Facebook AI Research, Paris, France
- Ofer M. Shir, Tel-Hai College and Migal Institute, Israel
- Olivier Teytaud, Facebook AI Research, Paris, France
- Heike Trautmann, University of Münster, Münster, Germany
- Ryan J. Urbanowicz, University of Pennsylvania, Philadelphia, PA, USA
- Vanessa Volz, modl.ai, Copenhagen, Denmark
- Markus Wagner, The University of Adelaide, Adelaide, Australia
- Hao Wang, LIACS, Leiden University, Leiden, The Netherlands
- Thomas Weise, Institute of Applied Optimization, Hefei University, Hefei, China
- Borys Wróbel, Adam Mickiewicz University, Poznań, Poland
- Aleš Zamuda, University of Maribor, Maribor, Slovenia
Workshop History
This workshop emerged from an initiative at GECCO 2019 that was launched to consolidate the various activities around benchmarking. It was preceded by the BB-DOB@PPSN, BB-DOB@GECCO, and BOCIA workshops in 2018, as well as the special issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing (ASOC) journal. We are proud that our workshop is organized by a large number of colleagues who jointly cover a broad spectrum of benchmarking aspects. Most of the organizers are experienced in workshop organization and will support the junior colleagues in gaining their first experience with it.
Hosting Event
The Genetic and Evolutionary Computation Conference (GECCO 2020)
July 8-12, 2020, Cancún, Quintana Roo, Mexico
http://gecco-2020.sigevo.org
The Genetic and Evolutionary Computation Conference (GECCO 2020) will present the latest high-quality results in genetic and evolutionary computation. Topics include genetic algorithms, genetic programming, evolution strategies, evolutionary programming, memetic algorithms, hyper-heuristics, real-world applications, evolutionary machine learning, evolvable hardware, artificial life, adaptive behavior, ant colony optimization, swarm intelligence, biological applications, evolutionary robotics, coevolution, artificial immune systems, and more. The full list of tracks is available at: Program Tracks
The GECCO 2020 Program Committee invites the submission of technical papers describing your best work in genetic and evolutionary computation. Full papers of at most 8 pages (excluding references) should present original work that meets the high-quality standards of GECCO. Accepted full papers appear in the ACM digital library as part of the Main Proceedings of GECCO. For full papers, a separate abstract needs to be submitted first by January 30, 2020. Full papers are due by the non-extensible deadline of February 6, 2020.
Each paper submitted to GECCO will be rigorously evaluated in a double-blind review process. Evaluation is done on a per-track basis, ensuring high interest and high expertise of the reviewers. Review criteria include the significance of the work, technical soundness, novelty, clarity, writing quality, relevance and, if applicable, sufficiency of information to permit replication.
Besides full papers, poster-only papers of at most 2 pages may be submitted. Poster-only papers should present original work that has not yet reached the maturity and completeness of research results that are published as full papers at GECCO. The review of poster-only papers follows the same double-blind process described above. Accepted poster-only papers will appear in the ACM digital library as part of the Companion Proceedings of GECCO. Poster-only papers are due by the non-extensible deadline of February 6, 2020, and no abstract needs to be submitted first.
By submitting a paper, the author(s) agree that, if their paper is accepted, they will:
- Submit a final, revised, camera-ready version to the publisher on or before the camera-ready deadline
- Register at least one author before April 17, 2020 to attend the conference
- Attend the conference (at least one author)
- Present the accepted paper at the conference
Related Events
- Special Session on Benchmarking of Computational Intelligence Algorithms (BOCIA'21)
- Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@PPSN) Workshop
- Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop
- Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop
- Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop
- Special Issue on Benchmarking of Computational Intelligence Algorithms in the Applied Soft Computing Journal
- International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA)