- Written by: Thomas Weise
On Wednesday, 2019-01-30, we presented our paper "An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search," co-authored by Thomas Weise, Zijun Wu, and Markus Wagner at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
Thomas Weise, Zijun Wu, and Markus Wagner. An Improved Generic Bet-and-Run Strategy with Performance Prediction for Stochastic Local Search. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), January 27 – February 1, 2019, Honolulu, Hawaii, USA, pages 2395–2402. Palo Alto, CA, USA: AAAI Press. ISBN: 978-1-57735-809-1
doi:10.1609/aaai.v33i01.33012395 / pdf@IAO / pdf@AAAI / slides / poster
The conference was very pleasant and interesting. The seven sessions on "Search, Constraint Satisfaction and Optimization" as well as the ten sessions on "Game Theory and Economic Paradigms" showed that our research fields are core topics at one of the most important venues in artificial intelligence. It was a pleasure to present our work in such a context, and we had several interesting discussions with fellow researchers.
- Written by: Thomas Weise
The paper submission deadline of the 2019 IEEE Congress on Evolutionary Computation has been extended until January 31, 2019. As a result, we are happy to announce that there are also two more weeks for authors to submit their papers to our Special Session on Benchmarking of Evolutionary Algorithms for Discrete Optimization (BEADO).
Evolutionary Computation (EC) is a huge and expanding field, attracting more and more interest from both academia and industry. It includes a wide and ever-growing variety of optimization algorithms, which, in turn, are applied to an even wider and faster-growing range of problem domains, including discrete optimization. Benchmarking has been the engine driving research on Evolutionary Algorithms (EAs) for decades, yet its potential has not been fully explored. With our special session, we want to bring together experts on benchmarking, evolutionary computation algorithms, and discrete optimization and provide a platform for them to exchange findings, to explore new paradigms for performance comparison, and to discuss issues such as
- modelling of algorithm behaviors and performance
- visualizations of algorithm behaviors and performance
- statistics for performance comparison (robust statistics, PCA, ANOVA, statistical tests, ROC, …)
- evaluation of real-world goals such as algorithm robustness and reliability
- theoretical results for algorithm performance comparison
- comparison of theoretical and empirical results
- new benchmark problems
- the comparison of algorithms in “non-traditional” scenarios such as
- multi- or many-objective domains
- parallel implementations, e.g., using GPGPUs, MPI, CUDA, clusters, or running in clouds
- large-scale problems or problems where objective function evaluations are costly
- dynamic problems or where the objective functions involve randomized simulations or noise
- comparative surveys with new ideas on
- dos and don'ts, i.e., best and worst practices, for algorithm performance comparison
- tools for experiment execution, result collection, and algorithm comparison
- benchmark sets for certain problem domains and their mutual advantages and weaknesses
Please visit the special session website for more information. Here you can download the BEADO Special Session Call for Papers (CfP) in PDF format and here as a plain text file.
- Written by: Thomas Weise
at the Genetic and Evolutionary Computation Conference (GECCO 2019)
July 13-17, 2019, Prague, Czech Republic
http://iao.hfuu.edu.cn/bbdob-gecco19
The Black-Box Discrete Optimization Benchmarking (BB-DOB) Workshop, a part of the Genetic and Evolutionary Computation Conference (GECCO) 2019, is cordially inviting the submission of original and unpublished research papers.
The Black-Box Optimization Benchmarking (BBOB) methodology associated with the BBOB-GECCO workshops has become a well-established standard for benchmarking stochastic and deterministic continuous optimization algorithms. The aim of the BB-DOB workshop series is to set up a process that will lead to a similar standard methodology for benchmarking black-box optimization algorithms in discrete and combinatorial search spaces.
Here you can download the BB-DOB Workshop Call for Papers (CfP) in PDF format and here as a plain text file.
The long-term aim of our workshop series is to produce, for the domain of discrete optimization:
- a well-motivated benchmark function testbed,
- an experimental set-up,
- methods for the generation of data output for post-processing and
- proper presentations of the results in graphs and tables.
The aims of this GECCO 2019 BB-DOB workshop are to finalize the benchmarking testbed for discrete optimization and to promote a discussion of which performance measures should be used.
The benchmark functions should capture the difficulties of combinatorial optimization problems in practice. They should also be comprehensible, so that algorithm behavior can be understood or interpreted in light of the performance on a given benchmark problem. The goal is that a desired search behavior can be pictured and algorithm deficiencies can be understood in depth. This understanding will lead to the design of improved algorithms. Ideally, we would like the benchmark functions to be scalable with the problem size and non-trivial in the black-box optimization sense (the function may be shifted such that the global optimum may be any point). Achieving this goal would greatly help to bridge the gap between theoreticians and experimentalists.
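To illustrate what "shifted such that the global optimum may be any point" can mean in practice, here is a minimal Python sketch (the names are hypothetical, not from any official benchmark suite) of a OneMax-style function whose optimum is a hidden, randomly drawn bit string:

```python
import random

def shifted_onemax(n, seed=None):
    """Create a OneMax-style objective whose optimum is a hidden,
    randomly drawn bit string z, so that the problem is non-trivial
    in the black-box sense: the global optimum may be any point."""
    rng = random.Random(seed)
    z = [rng.randint(0, 1) for _ in range(n)]  # hidden optimum
    def f(x):
        # number of positions where x agrees with z;
        # maximized (value n) exactly when x == z
        return sum(1 for xi, zi in zip(x, z) if xi == zi)
    return f, z  # z is returned here only for illustration/testing

f, z = shifted_onemax(16, seed=42)
```

The construction is scalable with the problem size n, and an algorithm that exploits the fixed optimum of plain OneMax (the all-ones string) gains no advantage here.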
We also wish to investigate which measures should be used to compare algorithm performance, which statistical tests should be run to compare algorithms, and how to deal with unsuccessful runs.
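As one concrete candidate among the statistical tests under discussion, the Wilcoxon rank-sum (Mann-Whitney U) test is a common non-parametric choice for comparing the end results of two stochastic optimizers, since it makes no normality assumption. The sketch below is a simplified pure-Python implementation (normal approximation, no tie-variance correction); in practice one would rather call a library routine such as scipy.stats.mannwhitneyu:

```python
from statistics import NormalDist

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (no tie-variance correction). Returns (z, p)."""
    combined = sorted((v, src) for src, vals in ((0, a), (1, b)) for v in vals)
    n = len(combined)
    rank = [0.0] * n
    i = 0
    while i < n:                  # assign average ranks to tied values
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2     # average of the 1-based ranks i+1 .. j
        for k in range(i, j):
            rank[k] = avg
        i = j
    w = sum(rank[k] for k in range(n) if combined[k][1] == 0)
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2  # expected rank sum of sample a under H0
    sd = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mean) / sd
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p

# two clearly separated result samples yield a small p-value
z, p = rank_sum_test([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
```

Rank-based tests are also one natural way to handle unsuccessful runs: a failed run can be assigned a worst-possible result and still be ranked consistently against all successful runs.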
This workshop wants to bring together experts on the benchmarking of optimization algorithms. It will provide a common forum for discussions and the exchange of opinions. Interested participants are encouraged to submit a paper related to the black-box optimization benchmarking of discrete optimizers. Topics of interest especially include papers that
- suggest functions to be included in the benchmark and motivate the reasons for inclusion,
- suggest benchmark function properties that capture difficulties occurring in real-world applications (e.g., deception, separability, etc.),
- suggest which classes of standard combinatorial optimization problems should be included and how to select significant instances,
- suggest which classes of toy problems should be included and motivate why,
- suggest which performance measures should be used to analyze and compare algorithms, and comment on related issues, and/or
- tackle any other aspect of benchmarking methodology for discrete optimizers such as design of experiments, presentation methods, benchmarking frameworks, etc.
- conduct performance comparisons, landscape analyses, or discussions of selected benchmark problems and/or of the statistics provided by IOHprofiler, a ready-to-use software tool for the empirical analysis of iterative optimization heuristics
For more information please contact Pietro S. Oliveto at
This workshop is organized as part of ImAppNIO (COST Action CA15140).
Read more: Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop
- Written by: Thomas Weise
at the 2019 IEEE Congress on Evolutionary Computation (CEC'19)
June 10-13, 2019 in Wellington, New Zealand
http://iao.hfuu.edu.cn/beado19
The Special Session on Benchmarking of Evolutionary Algorithms for Discrete Optimization (BEADO), a part of the 2019 IEEE Congress on Evolutionary Computation (CEC'19), is cordially inviting the submission of original and unpublished research papers.
Evolutionary Computation (EC) is a huge and expanding field, attracting more and more interest from both academia and industry. It includes a wide and ever-growing variety of optimization algorithms, which, in turn, are applied to an even wider and faster-growing range of problem domains, including discrete optimization. For the discrete domain and its application scenarios, we want to pick the best algorithms. Actually, we want to do more: we want to improve upon the best algorithms. This requires a deep understanding of the problem at hand, of the performance of the algorithms we have for that problem, of the features that make instances of the problem hard for these algorithms, and of the parameter settings for which the algorithms perform best. Such knowledge can only be obtained empirically, by collecting data from experiments, by analyzing this data statistically, and by mining new information from it. Benchmarking has been the engine driving research on EAs for decades, yet its potential has not been fully explored.
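As a small illustration of such empirical data collection (a sketch only, not tied to any particular benchmark suite), the following Python snippet runs a (1+1) EA with standard bit-flip mutation on OneMax several times and records how many objective function evaluations each run needs:

```python
import random
from statistics import mean, median

def onemax(x):
    return sum(x)  # maximize the number of 1-bits

def one_plus_one_ea(n, budget, rng):
    """(1+1) EA with standard bit-flip mutation (rate 1/n); returns the
    number of objective function evaluations until the optimum is found,
    or the full budget if the run is unsuccessful."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    for t in range(1, budget + 1):
        y = [(1 - xi) if rng.random() < 1.0 / n else xi for xi in x]
        fy = onemax(y)
        if fy >= fx:          # accept the offspring if it is not worse
            x, fx = y, fy
        if fx == n:           # optimum reached
            return t
    return budget

# independent runs produce the raw data for statistical analysis
rng = random.Random(1)
runtimes = [one_plus_one_ea(16, 10_000, rng) for _ in range(20)]
summary = (min(runtimes), median(runtimes), mean(runtimes), max(runtimes))
```

From such raw runtime samples one can then compute robust statistics, draw empirical runtime distributions, or feed the data into statistical tests comparing two algorithms.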
Here you can download the BEADO Special Session Call for Papers (CfP) in PDF format and here as a plain text file.
The goal of this special session is to solicit original works on benchmarking research: works which contribute to the benchmarking of discrete algorithms from the field of Evolutionary Computation by adding new theoretical or practical knowledge. Papers which merely apply benchmarking are not in the scope of the special session.
This special session wants to bring together experts on benchmarking, evolutionary computation algorithms, and discrete optimization. It provides a common forum for them to exchange findings, to explore new paradigms for performance comparison, and to discuss issues such as
- modelling of algorithm behaviors and performance
- visualizations of algorithm behaviors and performance
- statistics for performance comparison (robust statistics, PCA, ANOVA, statistical tests, ROC, …)
- evaluation of real-world goals such as algorithm robustness and reliability
- theoretical results for algorithm performance comparison
- comparison of theoretical and empirical results
- new benchmark problems
- the comparison of algorithms in “non-traditional” scenarios such as
- multi- or many-objective domains
- parallel implementations, e.g., using GPGPUs, MPI, CUDA, clusters, or running in clouds
- large-scale problems or problems where objective function evaluations are costly
- dynamic problems or where the objective functions involve randomized simulations or noise
- comparative surveys with new ideas on
- dos and don'ts, i.e., best and worst practices, for algorithm performance comparison
- tools for experiment execution, result collection, and algorithm comparison
- benchmark sets for certain problem domains and their mutual advantages and weaknesses
- Written by: Thomas Weise
On November 26, 2018, our university was visited by a delegation from the German state of Saxony [萨克森自由州], which, to a large degree, was composed of professors from the Chemnitz University of Technology [Technische Universität Chemnitz] (TUC) in Chemnitz [开姆尼茨], Germany. This made me personally very happy, since Chemnitz is my hometown, I received my Master's degree from that university, and I visited it to give research talks in 2017 and 2018.
The delegation was led by Dr. Peter Homilius, the vice-director of the Economic Development Corporation (WFS) Saxony, and Prof. Dr. Maximilian Eibl, the vice-president of the TUC and chair of Media Informatics in my old faculty there, the Faculty of Computer Science. Further members of the delegation were Prof. Dr. Egon Müller from the Department of Factory Planning and Factory Management and the Chemnitz Automotive Institute (CATI) at the TUC, Prof. Dr. Andreas Schubert, chair of Micromanufacturing Technology, Mr. Claus-Peter Held (CATI), Dr. Frank Löschmann, director of the SisTeam company, as well as Mr. Huaidong Wu and Mr. Chao Ying (SisTeam).
Saxony is a state in the eastern part of Germany. Its capital is Dresden, but its most industrialized city has always been Chemnitz, which, historically, is one of the cradles of German industrialization. The area is also ranked among the top-20 most innovative regions of Europe. The TU Chemnitz has more than 180 years of history and is the motor of innovation in that area. Automation, engineering, lightweight material engineering, the automotive industry, the constant improvement of existing technologies, the improvement of production efficiency, research on new materials – the TU Chemnitz is highly competitive in all of these fields. For instance, it also hosts the MERGE excellence cluster for multifunctional lightweight structure technologies. All of these fields are important for bringing concepts such as Industry 4.0 and Made in China 2025 to life. There were many fruitful discussions with the aim of establishing collaborations between the TUC and our university, centered around such important topics as smart production and the education of engineers. The delegation was impressed by the application-oriented education that our university has developed by adapting German approaches to the Chinese environment. As a result of our talks, I am convinced that our universities will establish successful, long-lasting, and highly productive collaborations.
- First DAAD Meeting "Research and Teaching in China" in Beijing
- Research Talk "Automating Scientific Research in Optimization" by Prof. Thomas Weise at the Goethe University of Frankfurt in Frankfurt, Germany
- Research Talk "Automating Scientific Research in Optimization" by Prof. Thomas Weise at the Leibniz University Hannover in Hannover, Germany
- Research Talk "Automating Scientific Research in Optimization" by Prof. Thomas Weise at the Leuphana University of Lüneburg, Germany