On January 10, 2020, the foreign experts of Hefei University [合肥学院] met with our new president, Prof. Dr. Chunmei WU [吴春梅]. The group of foreign experts taking part in the meeting included Prof. Falk Höhn of the Chinese-German Institute of Applied Sciences, Hefei University [合肥学院中德应用科学学院], Professor of Industrial Design; Prof. Minyong Da [闵泳大], Professor of Korean Language and Literature; Mr. Olaf Rack, Lecturer of Industrial Design; Mrs. Jutta Lahmann and Mr. Adrian Lempert, German language lecturers; and Prof. Thomas Weise [汤卫思], Director of the Institute of Applied Optimization [应用优化研究所]. Prof. WU's keen interest in our work, and in supporting it, was very encouraging, all the more so because meeting and discussing with us was one of the first actions she took after only recently accepting the position of president of Hefei University. The meeting was very pleasant and offered an open and constructive environment in which we discussed our plans for further improving internationalization and teaching at our university. It should be noted that every single expert expressed great satisfaction with the working and living environment provided by our university, as well as gratitude for the support we have received during all our years working here.

On January 6th, 2020, Mr. Kaiqiang HUANG [黄凯强] of TU Dublin, Dublin, Ireland, gave his research talk Human Action Recognition using Zero Shot Learning [零次学习运用于人体动作识别] at 14:30 in our meeting room 920 in our office building 53 [合肥学院53栋920]. Mr. Huang received his Bachelor's degree from our university [合肥学院] and TU Dublin in 2016 before completing his Master's at TU Dublin, where he has now become a fully-funded PhD student. His talk, which introduced a method for combining semantic side information with learned human actions, was highly exciting for the audience. In the more than 15 minutes of discussion that followed, we delved into the depths of this interesting topic, which is directly relevant to the work of several of our teammates. I think our school can be proud of Mr. Huang, who has grown very much and become an excellent researcher in such a short time.

On January 6th, 2020, Dr. Bin CHEN [陈斌] of Eindhoven University of Technology, Eindhoven, The Netherlands, and Hefei University of Technology [合肥工业大学] gave his research talk Multidimensional Modulation Formats for High Speed Optical Communications [面向高速光通信的多维度调制技术] at 15:30 in our meeting room 920 in our office building 53 [合肥学院53栋920]. Today, the internet is a vital part of our infrastructure and subject to an ever-rising demand for bandwidth, much of which is routed through optical-fiber connections. In his highly interesting talk, Dr. Chen discussed the limits of this technology and how we can push them further. If our national and international network connectivity keeps growing and improving in the future, one of the reasons may well be the work of Dr. Chen, which has won two best paper awards. Thanks for this great talk!

Our friends from the Nevergrad and IOHprofiler teams are organizing the Open Optimization Competition 2020 and welcome contributions.

Nevergrad is an open-source platform for derivative-free optimization developed by Facebook Artificial Intelligence Research, Paris, France. It contains a wide range of optimization algorithm implementations and test cases, supports multi-objective optimization, and handles constraints. It automatically updates the results of all experiments merged into the code base, so users do not need their own computational power to participate and obtain results. IOHprofiler is a tool for benchmarking iterative optimization heuristics such as local search variants, evolutionary algorithms, model-based algorithms, and other sequential optimization techniques. It has two components: the IOHexperimenter for running empirical evaluations and the IOHanalyzer for the statistical analysis and visualization of the experiment data. It is mainly developed by teams at Leiden University in the Netherlands, Sorbonne University and CNRS in Paris, France, and Tel-Hai College in Israel.
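To give a concrete flavor of the kind of iterative optimization heuristic that these platforms benchmark, here is a minimal sketch of a (1+1) Evolutionary Algorithm on the classic OneMax problem, written in plain Python. This is an illustrative example only; the function names and parameters are our own and are not part of the Nevergrad or IOHprofiler APIs.

```python
import random

def onemax(bits):
    """OneMax fitness: count the 1-bits; the optimum is the all-ones string."""
    return sum(bits)

def one_plus_one_ea(n=20, budget=10_000, seed=1):
    """A minimal (1+1) EA: keep a single parent solution, flip each bit
    independently with probability 1/n, and accept the offspring whenever
    it is at least as good as the parent."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = onemax(parent)
    for _ in range(budget):
        # Standard bit mutation: each bit flips with probability 1/n.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        child_fitness = onemax(child)
        if child_fitness >= fitness:       # elitist acceptance
            parent, fitness = child, child_fitness
        if fitness == n:                   # optimum found, stop early
            break
    return parent, fitness

if __name__ == "__main__":
    best, value = one_plus_one_ea()
    print("best fitness:", value)
```

Benchmarking tools such as IOHprofiler would run many such heuristics on many problem instances and record, for example, how many objective function evaluations each one needs to reach a given target fitness.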

This competition is part of the wider aim of building open-source, user-friendly, and community-driven platforms for comparing different optimization techniques. The key principles are reproducibility, open source, and ease of access. While some first steps towards such platforms have been taken, the tools can greatly benefit from the contributions of the various communities for whom they are built. Hence, the goal of this competition is to solicit such contributions from the community.

Call for Papers

at the Genetic and Evolutionary Computation Conference (GECCO 2020)

July 8-12, 2020, Cancún, Quintana Roo, Mexico

The Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop was part of the Genetic and Evolutionary Computation Conference (GECCO) 2020 and took place on July 8 (Cancún time; July 9 in China) as an online meeting with over 70 international participants! We had an incredibly inspiring discussion, and it can be said that this workshop will likely have a lasting impact and has strengthened our young community's effort towards better benchmarking of metaheuristic algorithms. Here you can download the original BB-DOB@GECCO Workshop Call for Papers (CfP) in PDF format, and here as a plain text file.


Scope and Objectives

Benchmarking aims to illuminate the strengths and weaknesses of algorithms with regard to different problem characteristics. To this end, several benchmarking suites have been designed that target different types of characteristics. Gaining insight into the behavior of algorithms on a wide array of problems benefits different stakeholders. It helps engineers new to the field of optimization find an algorithm suitable for their problem. It also allows experts in optimization to develop new algorithms and improve existing ones. Even though benchmarking is a highly-researched topic within the evolutionary computation community, there are still a number of open questions and challenges that should be explored:

  1. most commonly-used benchmarks are small and do not cover the space of meaningful problems,
  2. benchmarking suites lack the complexity of real-world problems,
  3. proper statistical analysis techniques that can easily be applied depending on the nature of the data are lacking or seldom used, and
  4. user-friendly, openly accessible benchmarking techniques and software need to be developed and spread.

We wish to enable a culture of sharing to ensure direct access to resources as well as reproducibility. This helps to avoid common pitfalls in benchmarking such as overfitting to specific test cases. We aim to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
