Our institute welcomes Prof. Dr. Jörg Lässig and Mr. Markus Ullrich, who are visiting our group for a research stay from November 19 to 26, 2017. Prof. Lässig is the head of the Enterprise Application Development (EAD) group of the Faculty of Electrical Engineering and Computer Science of the University of Applied Sciences Zittau/Görlitz (HSZG, Hochschule Zittau/Görlitz), located in Görlitz, Germany, as well as of an IT security group with the Fraunhofer Society. Mr. Ullrich is a PhD student under his co-supervision and a researcher in the EAD group.

Our team and the EAD group share a history of collaboration dating back several years. Together, we have analyzed several classical optimization problems from logistics and scheduling, including scheduling against due dates and windows as well as the Traveling Salesman Problem (TSP). We have also jointly contributed works on Evolutionary Computation in general. Benchmarking of optimization algorithms has always been an important aspect of our work: we work together on the TSP Suite, a framework for implementing and comparing algorithms for the TSP. Delegations from the EAD group visited us in China in 2013 and 2015, while Prof. Weise visited the EAD group in 2016 and 2017.

It is therefore a particular pleasure to be able to host Jörg and Markus at our group, especially since they both also gave presentations on state-of-the-art developments in their fields:

Short Biographies

Prof. Dr. Jörg Lässig is a Full Professor at the Department of Computer Science at the University of Applied Sciences Zittau/Görlitz (HSZG). He studied Computer Science and Computational Physics and received his Ph.D. for work on efficient algorithms and models for the generation and control of cooperation networks at Chemnitz University of Technology. As a postdoc, he worked in projects at the International Computer Science Institute in Berkeley, California, and at the Università della Svizzera italiana in Lugano, Switzerland. His EAD research group at HSZG and his IT security group with the Fraunhofer Society focus on topics concerned with intelligent data-driven technologies for state-of-the-art IT infrastructures and services. Prof. Lässig is also a co-chair of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) and a co-guest editor of the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Computational Intelligence Journal, together with Profs. Thomas Weise, Bin Li (USTC), Markus Wagner (University of Adelaide, Australia), and Xingyi Zhang (Anhui University).

Mr. Markus Ullrich is currently a PhD student at Technische Universität Chemnitz and a research associate at the University of Applied Sciences Zittau/Görlitz, where he received his M.S. and B.S. in Computer Science in 2012 and 2010, respectively. From 2009 to 2012, he worked as a software developer for Decision Optimization GmbH, where he developed and tested data mining algorithms for predictive maintenance. During an internship, he spent three months at the National Institute of Informatics in Tokyo, Japan, where he worked on the modeling of applications and resources in cloud environments. His current research interests are data mining and cloud computing as well as the simulation and modeling of complex distributed systems.

From October 18 to 27, 2017, our Master's Student Ms. Qi Qi [齐琪] of the University of Science and Technology of China (USTC) [中国科学技术大学] conducted an invited research stay at the Chair of System Simulation of the Department of Informatics of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) in Erlangen, Germany.

The Chair for System Simulation performs research on the modeling, efficient simulation, and optimization of complex systems in science and engineering. Its focus is on the design and analysis of algorithms and tools for these purposes. Since most simulations of complex systems are computationally heavy, the work of the group is centered around diverse high-performance computing (HPC) techniques. They

  • research HPC models by developing tailored simulation algorithms for physical applications, three-phase and thermal free flows based on the lattice Boltzmann method, and multi-level algorithms,
  • research scientific computing, including computational optics, numerical analysis, and related HPC methods, and
  • develop HPC software, including visualisations of simulation results, a framework for the simulation of fluid scenarios based on the lattice Boltzmann method, and software for rigid body dynamics.

Ms. Qi was invited by Prof. Dr. Harald Köstler, whose main research interest lies in the latter area. He works on the ExaStencils project for Advanced Stencil-Code Engineering. Stencil codes are compute-intensive algorithms in which data points in a grid are redefined repeatedly as a combination of the values of their neighboring points. The neighborhood pattern used is called a stencil. Stencil codes are used for the solution of discrete partial differential equations and the resulting linear systems.
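As an illustration (our own minimal sketch, not code from the ExaStencils project), here is the classic 5-point Jacobi stencil for the discrete Laplace equation: each interior grid point is repeatedly replaced by the average of its four neighbors.

```python
def jacobi_step(grid):
    """One Jacobi sweep of the 5-point stencil; boundary cells stay fixed."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # each interior point becomes the average of its four neighbors
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return new

# 5x5 grid: top boundary held at 1.0, all other boundaries at 0.0
grid = [[1.0] * 5] + [[0.0] * 5 for _ in range(4)]
for _ in range(200):
    grid = jacobi_step(grid)
print(round(grid[2][2], 3))  # the center converges to 0.25 by symmetry
```

Repeated sweeps like this one drive the grid toward the solution of the discrete linear system; production stencil codes apply the same idea on huge grids, which is why generating efficient stencil implementations is an HPC research topic in its own right.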

Our institute welcomes Dr. Markus Wagner, Senior Lecturer from the Optimisation and Logistics Group of the School of Computer Science of The University of Adelaide, SA, Australia, for a research visit from October 24 to November 2. His stay is supported by the Australia-China Young Scientists Exchange Program 2017 (YSEP) [中澳青年科学家交流计划] organized by the China Science and Technology Exchange Center (CSTEC) [中国科学技术交流中心] and the Australian Academy of Technology and Engineering (ATSE).

The members of the Optimisation and Logistics Group in Adelaide research optimization methods that are frequently used to solve hard and complex optimization problems. These include linear programming, branch and bound, genetic algorithms, evolution strategies, genetic programming, ant colony optimization, local search, and others. The areas of interest of Dr. Wagner are heuristic optimization and applications thereof. His work draws on computational complexity analysis and on performance landscape analysis.

Dr. Wagner and the members of our institute will spend most of his visit on joint research and on developing future joint projects and collaborations. Additionally, he will visit our colleagues Bin Li at USTC and Xingyi Zhang at Anhui University and give two presentations open to all interested listeners:

  1. Approximation-Guided Many-Objective Optimisation and the Travelling Thief Problem at Anhui University (AHU) [安徽大学], co-hosted by the IEEE CIS Hefei Chapter [slides]
  2. Two Real-World Optimisation Problems Related to Energy at our group [slides (android energy consumption), slides (wave energy)]

Short Biography

Dr. Markus Wagner is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. He completed his PhD studies at the Max Planck Institute for Informatics in Saarbrücken, Germany, and at the University of Adelaide, Australia. His research topics range from mathematical runtime analysis of heuristic optimization algorithms and theory-guided algorithm design to applications of heuristic methods to renewable energy production, professional team cycling, and software engineering. So far, he has been a program committee member 30 times, and he has written over 70 articles with over 70 different co-authors. He has chaired several education-related committees within the IEEE CIS, is Co-Chair of ACALCI 2017 and General Chair of ACALCI 2018. Dr. Wagner is also a co-chair of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) and a co-guest editor of the Special Issue on Benchmarking of Computational Intelligence Algorithms in the Computational Intelligence Journal with Profs. Thomas Weise, Bin Li (USTC), Xingyi Zhang (Anhui University), and Jörg Lässig (University of Applied Sciences Zittau/Görlitz).

Today, Prof. Thomas Weise joined the COnfiguration and SElection of ALgorithms group (COSEAL). COSEAL is an international group of researchers with a focus on algorithm selection and algorithm configuration. These two topics are closely related to our research direction Rigorous Performance Analysis of Algorithms, in particular to the Optimization Benchmarking project.

In order to find the best way to solve an optimization problem, we need to use the best algorithm and the best setup of this algorithm. Algorithm setups have both static and dynamic components, both of which are vital for the algorithm's performance. Static parameters do not change during the algorithm run; common examples are (static) population sizes of Evolutionary Algorithms or the tabu tenure of Tabu Search. Dynamic parameters can change all the time, either according to a fixed schedule (like the temperature in Simulated Annealing) or in a self-adaptive way (like the mutation step size in a (1+1) Evolution Strategy with the 1/5th rule). We may also choose which algorithm to apply based on problem features, or dynamically assign runtime to different algorithms based on their progress, i.e., construct some form of meta-algorithm. COSEAL aims to find good algorithm configurations, both dynamic and static, as well as algorithm selection methods.
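The 1/5th rule mentioned above can be sketched in a few lines (a minimal toy illustration of a dynamic parameter, not COSEAL code): a (1+1) Evolution Strategy minimizing the sphere function, where the mutation step size sigma grows after successful steps and shrinks after failed ones.

```python
import random

def one_plus_one_es(dim=5, steps=2000, seed=1):
    """(1+1)-ES minimizing the sphere function f(x) = sum(x_i^2)."""
    rnd = random.Random(seed)
    x = [rnd.uniform(-5.0, 5.0) for _ in range(dim)]
    fx = sum(v * v for v in x)
    sigma = 1.0  # the dynamic parameter: mutation step size
    for _ in range(steps):
        y = [v + rnd.gauss(0.0, sigma) for v in x]
        fy = sum(v * v for v in y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5            # success: widen the search
        else:
            sigma *= 1.5 ** -0.25   # failure: narrow it; balances near 1/5 successes
    return fx

print(one_plus_one_es())  # a small value close to 0
```

Unlike a static population size, sigma here is re-tuned in every single iteration based on observed success, which is exactly what distinguishes dynamic from static configuration.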

Where does benchmarking come into play? First, better techniques for benchmarking may lead to better policies for dynamic algorithm configuration. Second, in order to know whether static or dynamic algorithm parameters and algorithm selection methods work well over different problems, we need benchmarking. Third, benchmarking may tell us the strengths and weaknesses of a setup or algorithm.

Due to this close relationship, several other members of COSEAL have also joined the program committee and even the chairing team of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA).

Optimization and Operations Research are about finding good solutions for computationally hard problems. Sometimes we cannot guarantee to find the best solutions, because finding them (or proving that they are the best) may take too long. We then look for good approximate solutions that can be found within reasonable time, i.e., our algorithms' performance is measured both in solution quality and in required runtime (and runtime may be measured in different ways). Often we have algorithms that start with an initial (probably bad) solution and improve it over time. This is often even the case in situations where we can actually guarantee to solve the problem perfectly if given enough time (and we may stop earlier once we run out of time, losing the guarantee but at least having a solution). In Machine Learning and Data Mining, the situation is quite similar, which is natural since many questions there can also be considered as special optimization tasks. In both fields, algorithms are often randomized, meaning that they may behave differently or return different results even if executed twice with the same input data.
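Such an improvement loop can be sketched as follows (a toy illustration on the OneMax bit-counting problem, not taken from any specific system): the algorithm can be stopped after any number of steps and still returns its best solution so far, with quality typically improving the longer it runs; a different random seed may yield a different trajectory.

```python
import random

def hill_climber(n=40, budget=500, seed=42):
    """Single-bit-flip hill climber maximizing OneMax (the number of 1-bits)."""
    rnd = random.Random(seed)
    best = [rnd.randint(0, 1) for _ in range(n)]
    best_f = sum(best)
    trace = []                        # best quality after each step: the anytime curve
    for _ in range(budget):
        cand = best[:]
        cand[rnd.randrange(n)] ^= 1   # flip one random bit
        if sum(cand) >= best_f:       # keep the candidate if it is no worse
            best, best_f = cand, sum(cand)
        trace.append(best_f)
    return best_f, trace

f, trace = hill_climber()
print(trace[0], f)  # quality if stopped after one step vs. after the full budget
```

The recorded trace is monotonically non-decreasing: stopping early costs solution quality, never correctness, which is the anytime trade-off between runtime and quality described above.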

From the above scenario, it becomes clear that it may not be easy to know which algorithm is actually the best for a given situation. On one hand, we need to use reasonably robust statistics. On the other hand, we may also need to clarify what "best" actually means, since we have at least two dimensions of performance (and reducing algorithm performance to a single point measurement may lead to wrong conclusions). These are just two of the problems we face when evaluating new methods for optimization or Machine Learning. There are many more issues: How do we compare multi-objective optimization methods (where we have more than one quality dimension)? How do we compare parallel algorithms or programs running in a cloud or cluster? How can we compare algorithms on a problem that is noisy, or in the face of uncertainties and the need for robustness?
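A deliberately constructed toy example (the numbers below are invented) shows why robust statistics matter when comparing randomized algorithms: ranking by the mean and ranking by the more robust median can disagree.

```python
from statistics import mean, median

# end-of-run objective values (lower is better) from two hypothetical solvers
results_a = [10, 11, 10, 12, 11, 10, 95]   # one badly failed run
results_b = [14, 15, 13, 14, 15, 14, 13]   # consistent, but never excellent

print(mean(results_a), mean(results_b))      # the outlier drags A's mean above B's
print(median(results_a), median(results_b))  # the median says A is typically better
```

Neither summary alone tells the whole story; which solver is "best" depends on whether occasional failures or typical performance matter more, which is exactly why point measurements can mislead.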

There are many more opportunities for interesting and good research: How can we visualize the results when comparing many algorithms on many problems? Can we model algorithm performance to predict how a method would perform on a new problem? Can we develop simple theoretical frameworks which give us mathematically supported guidelines, limits, bounds, or estimates for benchmarking? Or can we build automated approaches to answer high-level questions such as: What features make a problem hard for a set of algorithms? Which parameters make an algorithm work best? Are there qualitatively different classes of problems and algorithms for a certain problem type?

With our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA), we try to provide a platform to discuss such topics, a venue where researchers can exchange thoughts on how to compare and analyze the performance of Computational Intelligence (CI) algorithms. We generously consider optimization, Operations Research, Machine Learning, Data Mining, and Evolutionary Computation all as sub-fields of CI. The workshop will take place at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018) on March 29-31, 2018, in Xiamen, China.

If you are a researcher working on any related topic, we would be very happy if you would consider submitting a paper by December 1, 2017, via the submission page. Here you can download the Call for Papers (CfP) in PDF format.
