
- Written by: Thomas Weise
Yesterday, on December 18, 2019, as a member of a small delegation from our Hefei University [合肥学院], I had the pleasure of participating in the Consultation Symposium and New Year Party for Foreign Experts [外国专家建言献策暨新年联谊会] of Anhui Province, organized by the Department of Science and Technology of Anhui Province [安徽省科学技术厅], the Hefei Municipal Bureau of Science and Technology [合肥市科学技术局], and Anhui University [安徽大学].

- Written by: Thomas Weise
On December 18, 2019, at the Consultation Symposium and New Year Party for Foreign Experts [外国专家建言献策暨新年联谊会], Prof. Dr. Thomas Weise was awarded the title of Hefei Specially Recruited Foreign Expert [合肥市特聘外国专家证书] within the First Hefei Recruitment Program of Talented Foreigners [首批市"引进外国高端人才计划"] by the Hefei Municipal Bureau of Science and Technology [合肥市科学技术局] and the Hefei Municipal Bureau of Foreign Experts Affairs [合肥市外国专家局].
Read more: Prof. Weise receives Title "Hefei Specially Recruited Foreign Expert"
- Written by: Thomas Weise
at the Genetic and Evolutionary Computation Conference (GECCO 2020)
July 8-12, 2020, Cancún, Quintana Roo, Mexico
https://sites.google.com/view/benchmarking-network/home/GECCO20
http://iao.hfuu.edu.cn/benchmark-gecco20
The Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop was a part of the Genetic and Evolutionary Computation Conference (GECCO) 2020 and took place on July 8 (Cancún time; July 9 in China) as an online meeting with over 70 international participants. We had an incredibly inspiring discussion, and it can be said that this workshop has strengthened our young community's effort towards better benchmarking of metaheuristic algorithms and will probably have a lasting impact. Here you can download the original BENCHMARK@GECCO Workshop Call for Papers (CfP) in PDF format and here as a plain text file.
Scope and Objectives
Benchmarking aims to illuminate the strengths and weaknesses of algorithms regarding different problem characteristics. To this end, several benchmarking suites have been designed which target different types of characteristics. Gaining insight into the behavior of algorithms on a wide array of problems has benefits for different stakeholders. It helps engineers new to the field of optimization find an algorithm suitable for their problem. It also allows experts in optimization to develop new algorithms and improve existing ones. Even though benchmarking is a highly-researched topic within the evolutionary computation community, there are still a number of open questions and challenges that should be explored:
- most commonly-used benchmarks are small and do not cover the space of meaningful problems,
- benchmarking suites lack the complexity of real-world problems,
- proper statistical analysis techniques that can easily be applied depending on the nature of the data are lacking or seldom used, and
- user-friendly, openly accessible benchmarking techniques and software need to be developed and spread.
We wish to enable a culture of sharing to ensure direct access to resources as well as reproducibility. This helps to avoid common pitfalls in benchmarking such as overfitting to specific test cases. We aim to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
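To make the idea of a rigorous statistical comparison concrete, the following self-contained sketch compares two simple optimizers (random search and a (1+1)-style hill climber) on the classical sphere function over several independent runs and computes a Mann-Whitney U rank-sum statistic by hand. The benchmark function, algorithms, budgets, and seeds are purely illustrative assumptions for this example; they are not part of the workshop's materials.

```python
import random
import statistics

def sphere(x):
    """Toy continuous minimization problem: f(x) = sum of squares."""
    return sum(v * v for v in x)

def random_search(rng, dim=5, budget=500):
    """Keep the best of `budget` uniformly sampled points."""
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, sphere(x))
    return best

def hill_climber(rng, dim=5, budget=500, sigma=0.5):
    """(1+1)-style local search: Gaussian mutation, accept improvements."""
    x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(budget - 1):
        y = [v + rng.gauss(0.0, sigma) for v in x]
        fy = sphere(y)
        if fy < fx:
            x, fx = y, fy
    return fx

def mann_whitney_u(a, b):
    """Rank-sum U statistic for sample `a` versus sample `b` (no tie handling)."""
    ranked = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    rank_sum_a = sum(i + 1 for i, (_, g) in enumerate(ranked) if g == 0)
    return rank_sum_a - len(a) * (len(a) + 1) / 2

RUNS = 15  # independent repetitions per algorithm, each with its own seed
rs_results = [random_search(random.Random(1000 + i)) for i in range(RUNS)]
hc_results = [hill_climber(random.Random(2000 + i)) for i in range(RUNS)]

median_rs = statistics.median(rs_results)
median_hc = statistics.median(hc_results)
u_stat = mann_whitney_u(hc_results, rs_results)
print(f"random search median: {median_rs:.3f}")
print(f"hill climber  median: {median_hc:.3f}")
print(f"Mann-Whitney U (hill climber vs. random search): {u_stat}")
```

Reporting medians over many independent runs together with a non-parametric rank test, rather than single best results, is exactly the kind of practice such benchmarking guidelines promote.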
Read more: Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop
- Written by: Thomas Weise
at the Sixteenth International Conference on Parallel Problem Solving from Nature (PPSN XVI)
September 5-9, 2020 in Leiden, The Netherlands
https://sites.google.com/view/benchmarking-network/home/PPSN20
http://iao.hfuu.edu.cn/benchmark-ppsn20
The Good Benchmarking Practices for Evolutionary Computation Workshop (BENCHMARK@PPSN), a part of the Sixteenth International Conference on Parallel Problem Solving from Nature (PPSN XVI), cordially invites the submission of contributions. Here you can download the BENCHMARK@PPSN Workshop Call for Papers (CfP) in PDF format and here as a plain text file.
In the era of explainable and interpretable AI, it is increasingly necessary to develop a deep understanding of how algorithms work and how new algorithms compare to existing ones, both in terms of strengths and weaknesses. For this reason, benchmarking plays a vital role for understanding algorithms’ behavior. Even though benchmarking is a highly-researched topic within the evolutionary computation community, there are still a number of open questions and challenges that should be explored:
- most commonly-used benchmarks are too small and cover only a part of the problem space,
- benchmarks lack the complexity of real-world problems, making it difficult to transfer the learned knowledge to work in practice,
- we need to develop proper statistical analysis techniques that can be applied depending on the nature of the data, and
- we need to develop user-friendly, openly accessible benchmarking software.
This enables a culture of sharing resources to ensure reproducibility, which also helps to avoid common pitfalls in benchmarking optimization techniques. As such, we need to establish new standards for benchmarking in evolutionary computation research so that we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
The topics of interest for this workshop include, but are not limited to:
- performance measures for comparing algorithms' behavior,
- novel statistical approaches for analyzing empirical data,
- the selection of meaningful benchmark problems,
- landscape analysis,
- data mining approaches for understanding algorithm behavior,
- transfer learning from benchmark experiences to real-world problems, and
- benchmarking tools for executing experiments and analysis of experimental results.
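As one concrete instance of a performance measure from the list above, the following sketch computes a fixed-target empirical cumulative distribution function (ECDF): the fraction of runs that have reached a given objective target after each function evaluation. The sphere function, random search, target value, budget, and seeds are illustrative assumptions, not workshop material.

```python
import random

def sphere(x):
    """Toy minimization problem: f(x) = sum of squares."""
    return sum(v * v for v in x)

def random_search_trajectory(rng, dim=5, budget=300):
    """Record the best-so-far objective value after each evaluation."""
    best = float("inf")
    traj = []
    for _ in range(budget):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, sphere(x))
        traj.append(best)
    return traj

def ecdf(trajectories, target):
    """Fraction of runs whose best-so-far value reached `target` by each step."""
    n = len(trajectories)
    budget = len(trajectories[0])
    return [sum(1 for tr in trajectories if tr[t] <= target) / n
            for t in range(budget)]

RUNS = 20
trajectories = [random_search_trajectory(random.Random(i)) for i in range(RUNS)]
curve = ecdf(trajectories, target=10.0)
print(f"success rate after 10 evaluations:  {curve[9]:.2f}")
print(f"success rate after 300 evaluations: {curve[-1]:.2f}")
```

Because best-so-far values never increase, the resulting curve is monotone non-decreasing; plotting such curves for several algorithms on shared axes gives an anytime view of performance instead of a single end-of-budget number.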
Read more: Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@PPSN) Workshop

- Written by: Thomas Weise
Together with our colleagues Markus Wagner, Jörg Lässig, Bin Li, and Xingyi Zhang, we are organizing a Special Issue on "Benchmarking of Computational Intelligence Algorithms" in Applied Soft Computing (ASOC). We received quite a lot of very cool submissions by the deadline in April 2019. This is a virtual special issue, which means that papers are published immediately once they have passed the editorial process and may appear in different physical issues and volumes of the journal. This has now happened for the first few of the submissions we received. As a result, the journal website for our special issue is now online, too, at https://www.sciencedirect.com/journal/applied-soft-computing/special-issue/10LQLZBS7T4. We still have quite a few very interesting papers in our queue, which will appear step by step on that page as well.
- Second Institute Workshop on Applied Optimization: Seminar on Operations Research and Optimization
- Prof. Möhring and Prof. Weise attend Dagstuhl Seminar 19431 with the topic "Theory of Randomized Optimization Heuristics"
- Celebration of the 70th Anniversary of the People's Republic of China for People from all Walks of Life
- Prof. Weise attends Forum on Mobility of European Researchers in China in Beijing