- Written by: Thomas Weise
Our friends from the Nevergrad and IOHprofiler teams are organizing the Open Optimization Competition 2020 and welcome contributions.
Nevergrad is an open-source platform for derivative-free optimization developed by Facebook Artificial Intelligence Research, Paris, France. It contains a wide range of optimization algorithm implementations and test cases, supports multi-objective optimization, and handles constraints. It automatically updates the results of all experiments merged into the code base, so users do not need their own computational power to participate and obtain results. IOHprofiler is a tool for benchmarking iterative optimization heuristics such as local search variants, evolutionary algorithms, model-based algorithms, and other sequential optimization techniques. It has two components: the IOHexperimenter for running empirical evaluations and the IOHanalyzer for the statistical analysis and visualization of the experiment data. It is mainly developed by teams at Leiden University in the Netherlands, the Sorbonne University and CNRS in Paris, France, and Tel-Hai College in Israel.
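For readers who would like to try the platform before contributing, here is a minimal sketch of a derivative-free optimization run with the nevergrad Python package; the objective function, dimensionality, and evaluation budget are illustrative assumptions and not part of the competition setup.

```python
import nevergrad as ng

def sphere(x):
    # illustrative test objective: sum of squares, minimum at the origin
    return float(sum(xi ** 2 for xi in x))

# a 3-dimensional continuous search space (assumed for illustration)
param = ng.p.Array(shape=(3,))

# (1+1) evolution strategy with an assumed budget of 200 evaluations
optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=200)
recommendation = optimizer.minimize(sphere)
print(recommendation.value)  # best point found
```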
This competition is part of the wider aim to build open-source, user-friendly, and community-driven platforms for comparing different optimization techniques. The key principles are reproducibility, open source, and ease of access. While some first steps towards such platforms have been taken, the tools can greatly benefit from the contributions of the various communities for whom they are built. Hence, this competition aims to solicit contributions towards these goals from the community.
- Written by: Thomas Weise
Our team warmly congratulates Dr. Xinlu Li on his promotion to Associate Educator! This promotion is very well deserved and the result of seven years of continuous hard work and high performance as a senior lecturer. Additionally, in February of this year, Xinlu completed his PhD degree at TU Dublin in Ireland, and in August he published a great article in an SCI/EI-indexed second-quartile [2区] journal. Congratulations, Xinlu!
- Written by: Thomas Weise
Yesterday, on December 18, 2019, as a member of a small delegation from our Hefei University [合肥学院], I had the pleasure to participate in the Consultation Symposium and New Year Party for Foreign Experts [外国专家建言献策暨新年联谊会] of the Anhui province, organized by the Department of Science and Technology of the Anhui Province [安徽省科学技术厅], the Hefei Municipal Bureau of Science and Technology [合肥市科学技术局], and Anhui University [安徽大学].
- Written by: Thomas Weise
On December 18, 2019, at the Consultation Symposium and New Year Party for Foreign Experts [外国专家建言献策暨新年联谊会], Prof. Dr. Thomas Weise was awarded the title Hefei Specially Recruited Foreign Expert [合肥市特聘外国专家证书] within the First Hefei Recruitment Program of Talented Foreigners [首批市"引进外国高端人才计划"] by the Hefei Municipal Bureau of Science and Technology [合肥市科学技术局], Hefei Municipal Bureau of Foreign Experts Affairs [合肥市外国专家局].
Read more: Prof. Weise receives Title "Hefei Specially Recruited Foreign Expert"
- Written by: Thomas Weise
at the Genetic and Evolutionary Computation Conference (GECCO 2020)
July 8-12, 2020, Cancún, Quintana Roo, Mexico
https://sites.google.com/view/benchmarking-network/home/GECCO20
http://iao.hfuu.edu.cn/benchmark-gecco20
The Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop was part of the Genetic and Evolutionary Computation Conference (GECCO) 2020 and took place on July 8 (Cancún time, July 9 in China) as an online meeting with over 70 international participants! We had an incredibly inspiring discussion, and it can be said that this workshop will probably have a lasting impact and has strengthened our young community effort towards better benchmarking of metaheuristic algorithms. Here you can download the original BENCHMARK@GECCO Workshop Call for Papers (CfP) in PDF format and here as a plain text file.
Scope and Objectives
Benchmarking aims to illuminate the strengths and weaknesses of algorithms regarding different problem characteristics. To this end, several benchmarking suites have been designed which target different types of characteristics. Gaining insight into the behavior of algorithms on a wide array of problems has benefits for different stakeholders. It helps engineers new to the field of optimization find an algorithm suitable for their problem. It also allows experts in optimization to develop new algorithms and improve existing ones. Even though benchmarking is a highly-researched topic within the evolutionary computation community, there are still a number of open questions and challenges that should be explored:
- most commonly-used benchmarks are small and do not cover the space of meaningful problems,
- benchmarking suites lack the complexity of real-world problems,
- proper statistical analysis techniques that can easily be applied depending on the nature of the data are lacking or seldom used, and
- user-friendly, openly accessible benchmarking techniques and software need to be developed and spread.
We wish to enable a culture of sharing to ensure direct access to resources as well as reproducibility. This helps to avoid common pitfalls in benchmarking such as overfitting to specific test cases. We aim to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
Read more: Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop
- Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@PPSN) Workshop
- First Articles of Special Issue on "Benchmarking of Computational Intelligence Algorithms" and SI-website are online!
- Second Institute Workshop on Applied Optimization: Seminar on Operations Research and Optimization
- Prof. Möhring and Prof. Weise attend Dagstuhl Seminar 19431 on the topic "Theory of Randomized Optimization Heuristics"