
On January 6, 2020, Dr. Bin CHEN [陈斌] of Eindhoven University of Technology, Eindhoven, The Netherlands, and Hefei University of Technology [合肥工业大学] gave the research talk "Multidimensional Modulation Formats for High Speed Optical Communications" [面向高速光通信的多维度调制技术] at 15:30 in meeting room 920 of our office building 53 [合肥学院53栋920]. Today, the internet is a vital part of the infrastructure and subject to an ever-rising demand for bandwidth, much of which is routed through optical-fiber-based connections. In his highly interesting talk, Dr. Chen discussed the limits of this technology and how we can push them further. If our national and international network connectivity can keep growing and improving in the future, one of the reasons may well be the work of Dr. Chen, which has won two best paper awards. Thanks for this great talk!


Our friends from the Nevergrad and IOHprofiler teams are organizing the Open Optimization Competition 2020 and welcome contributions.

Nevergrad is an open-source platform for derivative-free optimization developed by Facebook Artificial Intelligence Research, Paris, France. It contains a wide range of optimization algorithm implementations and test cases, supports multi-objective optimization, and handles constraints. It automatically updates the results of all experiments merged into the code base, so users do not need their own computational power to participate and obtain results. IOHprofiler is a tool for benchmarking iterative optimization heuristics such as local search variants, evolutionary algorithms, model-based algorithms, and other sequential optimization techniques. It has two components: the IOHexperimenter for running empirical evaluations and the IOHanalyzer for the statistical analysis and visualization of the experiment data. It is mainly developed by teams at Leiden University in the Netherlands, Sorbonne University and CNRS in Paris, France, and Tel Hai College in Israel.
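To give a flavor of the kind of iterative optimization heuristic that such platforms benchmark, here is a minimal (1+1) evolutionary algorithm on a simple continuous test function. This is an illustrative sketch only: the sphere function, the fixed mutation strength, and the budget are our own assumptions and not part of either tool.

```python
import random

def one_plus_one_ea(f, dim, budget, sigma=0.1, seed=1):
    """Minimal (1+1) evolutionary algorithm: keep a single parent,
    perturb it with Gaussian noise, and accept the child if it is
    no worse. Returns the best point found and its objective value."""
    rng = random.Random(seed)
    parent = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    f_parent = f(parent)
    for _ in range(budget):
        child = [x + rng.gauss(0.0, sigma) for x in parent]
        f_child = f(child)
        if f_child <= f_parent:  # minimization: accept if no worse
            parent, f_parent = child, f_child
    return parent, f_parent

# Illustrative benchmark problem: the sphere function f(x) = sum of x_i^2,
# whose optimum is 0 at the origin.
def sphere(x):
    return sum(v * v for v in x)

best, value = one_plus_one_ea(sphere, dim=5, budget=5000)
print(value)  # should end up close to the optimum 0
```

A benchmarking platform would run many such heuristics on many such problems with a fixed evaluation budget and log every improvement, which is exactly the data the IOHanalyzer then visualizes.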

This competition is part of the wider aim to build open-source, user-friendly, and community-driven platforms for comparing different optimization techniques. The key principles are reproducibility, open source, and ease of access. While some first steps towards such platforms have been taken, the tools can greatly benefit from the contributions of the various communities for whom they are built. Hence, this competition solicits contributions towards these aims from the community.

Call for Papers

at the Genetic and Evolutionary Computation Conference (GECCO 2020)

July 8-12, 2020, Cancún, Quintana Roo, Mexico

https://sites.google.com/view/benchmarking-network/home/GECCO20
http://iao.hfuu.edu.cn/benchmark-gecco20

The Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop, a part of the Genetic and Evolutionary Computation Conference (GECCO) 2020, cordially invites the submission of original and unpublished research papers. Here you can download the BENCHMARK@GECCO Workshop Call for Papers (CfP) in PDF format and here as a plain text file.


Scope and Objectives

Benchmarking aims to illuminate the strengths and weaknesses of algorithms regarding different problem characteristics. To this end, several benchmarking suites have been designed which target different types of characteristics. Gaining insight into the behavior of algorithms on a wide array of problems has benefits for different stakeholders. It helps engineers new to the field of optimization find an algorithm suitable for their problem. It also allows experts in optimization to develop new algorithms and improve existing ones. Even though benchmarking is a highly-researched topic within the evolutionary computation community, there are still a number of open questions and challenges that should be explored:

  1. most commonly-used benchmarks are small and do not cover the space of meaningful problems,
  2. benchmarking suites lack the complexity of real-world problems,
  3. proper statistical analysis techniques that can easily be applied depending on the nature of the data are lacking or seldom used, and
  4. user-friendly, openly accessible benchmarking techniques and software need to be developed and spread.

We wish to enable a culture of sharing to ensure direct access to resources as well as reproducibility. This helps to avoid common pitfalls in benchmarking such as overfitting to specific test cases. We aim to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
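The statistical analysis techniques mentioned in the challenges above can be as simple as a paired nonparametric test. As a sketch, here is a two-sided sign test, implemented with only the standard library, comparing two algorithms over paired runs on the same benchmark; the run data and both algorithm names are invented purely for illustration.

```python
from math import comb

def sign_test_p(wins_a, wins_b):
    """Two-sided sign test: probability of a win split at least this
    extreme under the null hypothesis that both algorithms are equally
    likely to win any given paired run (ties discarded beforehand)."""
    n = wins_a + wins_b
    k = max(wins_a, wins_b)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for the two-sided test
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical paired results: best objective value per run (lower is better).
alg_a = [0.12, 0.10, 0.15, 0.09, 0.11, 0.13, 0.10, 0.12, 0.08, 0.14]
alg_b = [0.20, 0.18, 0.14, 0.22, 0.19, 0.21, 0.17, 0.11, 0.23, 0.16]
wins_a = sum(a < b for a, b in zip(alg_a, alg_b))
wins_b = sum(b < a for a, b in zip(alg_a, alg_b))
print(wins_a, wins_b, sign_test_p(wins_a, wins_b))
```

Here algorithm A wins 8 of 10 paired runs, yet the p-value is about 0.11, i.e., not significant at the usual 0.05 level. This is precisely the pitfall the workshop targets: with benchmarks this small, apparent superiority often cannot be statistically distinguished from chance.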

Call for Papers

at the Sixteenth International Conference on Parallel Problem Solving from Nature (PPSN XVI)

September 5-9, 2020 in Leiden, The Netherlands

https://sites.google.com/view/benchmarking-network/home/PPSN20
http://iao.hfuu.edu.cn/benchmark-ppsn20


The Good Benchmarking Practices for Evolutionary Computation Workshop (BENCHMARK@PPSN), a part of the Sixteenth International Conference on Parallel Problem Solving from Nature (PPSN XVI), cordially invites the submission of contributions. Here you can download the BENCHMARK@PPSN Workshop Call for Papers (CfP) in PDF format and here as a plain text file.


In the era of explainable and interpretable AI, it is increasingly necessary to develop a deep understanding of how algorithms work and how new algorithms compare to existing ones, in terms of both strengths and weaknesses. For this reason, benchmarking plays a vital role in understanding algorithms' behavior. Even though benchmarking is a highly-researched topic within the evolutionary computation community, there are still a number of open questions and challenges that should be explored:

  1. most commonly-used benchmarks are too small and cover only a part of the problem space,
  2. benchmarks lack the complexity of real-world problems, making it difficult to transfer the learned knowledge to work in practice,
  3. we need to develop proper statistical analysis techniques that can be applied depending on the nature of the data, and
  4. we need to develop user-friendly, openly accessible benchmarking software.

This enables a culture of sharing resources that ensures reproducibility and helps to avoid common pitfalls in benchmarking optimization techniques. As such, we need to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.

The topics of interest for this workshop include, but are not limited to:

  • performance measures for comparing algorithms' behavior,
  • novel statistical approaches for analyzing empirical data,
  • the selection of meaningful benchmark problems,
  • landscape analysis,
  • data mining approaches for understanding algorithm behavior,
  • transfer learning from benchmark experiences to real-world problems, and
  • benchmarking tools for executing experiments and analysis of experimental results.
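To make the first topic concrete, one widely used performance measure is the expected running time (ERT): the total number of function evaluations spent across all runs, divided by the number of runs that reached the target quality. The sketch below implements this definition; the run data are invented for illustration.

```python
def expected_running_time(evals, successes):
    """ERT: total function evaluations summed over all runs, divided by
    the number of successful runs (those that reached the target).
    Returns None if no run succeeded."""
    total = sum(evals)
    n_success = sum(successes)
    return total / n_success if n_success else None

# Hypothetical runs: evaluations used per run, and whether the run
# reached the target before exhausting its budget of 5000 evaluations.
evals = [1200, 3400, 5000, 800, 5000]
successes = [True, True, False, True, False]
print(expected_running_time(evals, successes))  # 15400 / 3
```

Because failed runs still contribute their evaluations to the numerator, ERT penalizes unreliable algorithms, which is why it is preferred over averaging only the successful runs.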


Second Institute Workshop on Applied Optimization

《合肥学院运筹与优化研讨会》

We are happy to announce that the Second Institute Workshop on Applied Optimization will be held in the form of the Hefei University Operations Research and Optimization Seminar [合肥学院运筹与优化研讨会] in the conference room of the Institute of Applied Optimization on November 5, 2019. It follows the First Institute Workshop on Applied Optimization, which was held in 2018 in the meeting room near our old offices. Talks will be given by:

  • Prof. Dr. Rolf H. MÖHRING, Hefei University [合肥学院] and Technische Universität Berlin [柏林工业大学],
  • Prof. Dachuan XU [徐大川教授], Beijing University of Technology [北京工业大学],
  • Prof. Xiaoyan ZHANG [张晓岩教授], Nanjing Normal University [南京师范大学],
  • Prof. Longkun GUO [郭龙坤教授], Fuzhou University [福州大学],
  • Dr. Yong ZHANG [张涌研究员], Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences [中科院深圳先进技术研究院], and
  • Dr. Zijun WU [吴自军博士], Hefei University [合肥学院].

On the morning of November 5, 2019, our School of Artificial Intelligence and Big Data will hold the first "Hefei University Operations Research and Optimization Seminar" in the conference room of the Institute of Applied Optimization. Prof. Rolf MÖHRING (Hefei University / Technische Universität Berlin), Prof. Dachuan XU (Beijing University of Technology), Prof. Xiaoyan ZHANG (Nanjing Normal University), Prof. Longkun GUO (Fuzhou University), Dr. Yong ZHANG (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences), and Dr. Zijun WU (Hefei University), all experts in operations research and optimization from China and abroad, will give academic talks. All teachers and students are warmly welcome to attend!
