On July 21, 2017, I gave the research talk "Automating Scientific Research in Optimization" at the Artificial Intelligence Group of the Faculty of Computer Science, Chemnitz University of Technology [Technische Universität Chemnitz] in Chemnitz, Germany. The host of the talk was Dr. Julien Vitay, who welcomed me also on behalf of Prof. Dr. Fred Hamker. I was particularly happy to come back and present our work here, as TU Chemnitz is my alma mater, the AI Group has a wide variety of very interesting projects, and I even presented in the very same room where I defended my Master's thesis more than twelve years ago.

The group is renowned for its research on model-driven approaches for exploring visual perception and cognition, having contributed significantly to the topics of object recognition, conscious perception, attention, cognitive control of visual perception, and space perception. It also investigates novel deep learning technologies. Besides contributing much to fundamental research, the group also provides the open source software Artificial Neural Networks architect (ANNarchy), a parallel and hybrid simulator for distributed rate-coded or spiking neural networks, written mainly in C++ and parallelized using OpenMP or CUDA. The group is furthermore associated with the Bernstein Center for Computational Neuroscience Berlin (BCCN).

Like my talk yesterday at the Friedrich Schiller University in Jena, the talk was received with broad interest. I am very thankful to Dr. Vitay and Prof. Hamker for publicizing this presentation. The comprehensive exchanges of thoughts before and after the talk were also truly inspiring.


On July 20, 2017, I gave the research talk "Automating Scientific Research in Optimization" at the Mathematical Optimization group of Prof. Dr. Ingo Althöfer and Prof. Dr. Andreas Löhne at the Department of Mathematics of the Faculty of Mathematics and Computer Science of the Friedrich Schiller University Jena (FSU, Friedrich-Schiller-Universität Jena) in Jena, Germany.

The group of Prof. Löhne is well-known for its research on vector optimization and multi-objective linear programming. It provides Bensolve, an open source solver for multi-objective/vector linear programming written in C, as well as the Multi-Objective Problem LIBrary (MOPLIB), a collection of instances of multi-objective linear, multi-objective (mixed) integer, and vector linear programs. The visit to the Mathematical Optimization group offered a very interesting chance for discussion, as linear programming and metaheuristics (our field) are two entirely different approaches to optimization, each with its own advantages and disadvantages.

The talk went well, and I am very thankful to Prof. Löhne for the hospitality he showed me. For me, it is always very inspiring to learn about a topic area with which I am not yet very familiar. The vector optimization of Prof. Löhne is such a field, and I think the exchange of thoughts that we had about both of our research areas (both belonging to the domain of optimization) was very fruitful.


I am currently attending the Genetic and Evolutionary Computation Conference (GECCO'17), taking place from July 15 to 19, 2017 in Berlin, Germany. GECCO is the world's premier event on Evolutionary Computation. I have attended it before in 2008, 2010, 2012, and 2016, and have always enjoyed this conference very much. Our group has two papers there this year, namely:

  • Weichen Liu, Thomas Weise, Yuezhong Wu, and Qi Qi. Combining Two Local Searches with Crossover: An Efficient Hybrid Algorithm for the Traveling Salesman Problem. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17), July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 298-305, ISBN: 978-1-4503-4920-8.
    doi:10.1145/3071178.3071201
  • Qi Qi, Thomas Weise, and Bin Li. Modeling Optimization Algorithm Runtime Behavior and its Applications. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17) Companion, July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 115-116, ISBN: 978-1-4503-4939-0.
    doi:10.1145/3067695.3076042


Today I attended the PhD thesis defense of Mr. Stefan Niemczyk of the Distributed Systems Group, Fachbereich 16: Elektrotechnik/Informatik [Department 16: Electrical Engineering/Computer Science] of my old university, the University of Kassel, Germany, where I served as the second reviewer and referee. First of all, congratulations to Mr. Niemczyk on the successful defense of his thesis, which was rated "very good". The title of the thesis is "Dynamische Konfiguration verteilter Informationsverarbeitung in Gruppen heterogener Agenten," which translates to "Dynamic Configuration of the Distributed Information Processing in Groups of Heterogeneous Agents". I can only recommend this excellent work to everyone working with teams of (autonomous) robots.

I think this research is not just relevant in the search-and-rescue scenarios in which it was tested, but may also be helpful when designing robots for, e.g., home services, Internet of Things applications, automated logistics, warehouse management, or building construction. This technology could thus become an important asset in a later phase of an Industry 4.0 or Made in China 2025 concept, when intelligent production environments are highly automated and exchange goods and material via automated logistic processes and, thus, need to be able to configure their "interface robots" dynamically to each other.


One of my fundamental research interests is how we can determine which optimization algorithm is good for which problem.

Unfortunately, answering this question is quite complicated. For most practically relevant problems, we need to find a trade-off between the (run)time we can invest in getting a good solution and the quality of said solution. Furthermore, the performance of almost all algorithms cannot be described by just a single pair of "solution quality" and "time needed to get a solution of that quality". Instead, these (anytime) algorithms start with an initial (often not-so-good) guess at the solution and then improve it step by step. In other words, their runtime behavior can be described as something like a function relating solution quality to runtime. It is not a true function, though, since a) many algorithms are randomized, meaning that they behave differently every time you run them, even with the same input data, and b) an algorithm will usually behave differently on different instances of an optimization problem type.
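The anytime behavior described above can be made concrete with a minimal sketch: a randomized hill climber on the toy OneMax problem (maximize the number of one-bits) that records a (step, best quality) pair whenever it improves. The function name, the problem, and all parameters here are illustrative assumptions, not part of any particular framework.

```python
import random

def hillclimb_trace(n_bits=32, max_steps=200, seed=None):
    """Randomized (1+1)-style hill climber on a toy OneMax problem.

    Returns the anytime trace: a list of (step, best_quality) pairs,
    recorded each time the best-so-far solution improves.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best = sum(x)                      # quality = number of one-bits
    trace = [(0, best)]
    for step in range(1, max_steps + 1):
        i = rng.randrange(n_bits)      # flip one randomly chosen bit
        x[i] ^= 1
        q = sum(x)
        if q >= best:                  # keep non-worsening moves
            if q > best:
                trace.append((step, q))
            best = q
        else:
            x[i] ^= 1                  # revert a worsening move
    return trace

# Two runs with different seeds yield different quality-over-time
# curves, illustrating point a) above (randomized behavior):
run_a = hillclimb_trace(seed=1)
run_b = hillclimb_trace(seed=2)
```

Each trace is one sample of the "quality as a function of runtime" behavior; the randomness across seeds is exactly why a single run cannot characterize the algorithm.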

This means that we need to do a lot of experiments: We need to apply an optimization algorithm multiple times to a given optimization problem instance in order to "average out" the randomness. Each time, we need to collect data about the whole runtime behavior, not just the final result. Then we need to do this for multiple instances with different features in order to learn how, e.g., the scale of a problem influences the algorithm's behavior. This means that we will obtain quite a lot of data from many algorithm setups on many problem instances. The question that researchers face is thus: How can we extract useful information from that data? How can we obtain information which helps us to improve our algorithms? How can we get data from which we can learn about the weaknesses of our methods so that we can improve them?
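The experimental procedure above can be sketched as follows: run a randomized optimizer several independent times per instance, record the best-so-far quality at every step, and average these curves so that runs become comparable despite the randomness. The toy "optimizer" and all names below are assumptions for illustration only, not the actual benchmarking framework.

```python
import random
from statistics import mean

def random_search(instance, budget, rng):
    """Toy randomized optimizer: returns the best-so-far quality
    (smaller is better) after each of `budget` evaluation steps."""
    best = float("inf")
    curve = []
    for _ in range(budget):
        candidate = sum(abs(x - rng.random()) for x in instance)
        best = min(best, candidate)
        curve.append(best)
    return curve

def average_curves(instance, runs=10, budget=100):
    """Average the best-so-far quality over several independent runs,
    'averaging out' the randomness of the optimizer."""
    rng = random.Random(42)
    curves = [random_search(instance, budget, rng) for _ in range(runs)]
    # Average across runs at each step: one representative curve.
    return [mean(step_values) for step_values in zip(*curves)]

# Repeat over instances with different features (here: different sizes)
# to study how instance scale influences the runtime behavior:
instances = [[0.2, 0.8], [0.1, 0.5, 0.9]]
results = {i: average_curves(inst) for i, inst in enumerate(instances)}
```

Each averaged curve summarizes the full runtime behavior on one instance; comparing such curves across instances and algorithm setups is the raw material from which the questions above must be answered.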

In this research presentation, I discuss my take on this subject. I introduce a process for automatically discovering the reasons why a certain algorithm behaves the way it does and why one problem instance is harder than another for a set of algorithms. This process has already been implemented in our open source optimizationBenchmarking.org framework.
