- Written by: Thomas Weise
I am currently attending the Genetic and Evolutionary Computation Conference (GECCO'17), taking place from July 15 to 19, 2017 in Berlin, Germany. GECCO is the world's premier event on Evolutionary Computation. I have attended it before in 2008, 2010, 2012, and 2016 and have always enjoyed this conference very much. Our group has two papers there this year, namely:
- Weichen Liu, Thomas Weise, Yuezhong Wu, and Qi Qi. Combining Two Local Searches with Crossover: An Efficient Hybrid Algorithm for the Traveling Salesman Problem. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17), July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 298-305, ISBN: 978-1-4503-4920-8.
doi:10.1145/3071178.3071201 / paper / slides
- Qi Qi, Thomas Weise, and Bin Li. Modeling Optimization Algorithm Runtime Behavior and its Applications. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17) Companion, July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 115-116, ISBN: 978-1-4503-4939-0.
doi:10.1145/3067695.3076042 / paper / poster
Read more: Genetic and Evolutionary Computation Conference (GECCO) 2017
- Written by: Thomas Weise
Today I attended the PhD thesis defense of Mr. Stefan Niemczyk of the Distributed Systems Group, Fachbereich 16: Elektrotechnik/Informatik (Department of Electrical Engineering/Computer Science) of my old university, the University of Kassel in Germany, as the second reviewer and referee. First of all, congratulations to Mr. Niemczyk for the successful defense of his thesis, rated with "very good". The title of the thesis is "Dynamische Konfiguration verteilter Informationsverarbeitung in Gruppen heterogener Agenten," which translates to "Dynamic Configuration of the Distributed Information Processing in Groups of Heterogeneous Agents". I can only recommend reading this excellent work to everyone working with teams of (autonomous) robots.
I think this research is not just relevant in the search-and-rescue scenarios in which it was tested, but may also be helpful when designing robots for, e.g., home services, Internet of Things applications, automated logistics, warehouse management, or the construction of buildings. This technology could thus become an important asset in a later phase of an Industry 4.0 or Made in China 2025 concept, when intelligent production environments are highly automated, exchange goods and material via automated logistic processes, and therefore need to be able to dynamically configure their "interface robots" to each other.
Read more: PhD Defense of Mr. Stefan Niemczyk at the University of Kassel in Kassel, Germany
- Written by: Thomas Weise
One of my fundamental research interests is how we can determine which optimization algorithm is good for which problem.
Unfortunately, answering this question is quite complicated. For most practically relevant problems, we need to trade off the (run)time we can invest in getting a good solution against the quality of said solution. Furthermore, the performance of almost all algorithms cannot be described by just a single pair of "solution quality" and "time needed to get a solution of that quality". Instead, these (anytime) algorithms start with an initial (often not-so-good) guess about the solution and then improve it step by step. In other words, their runtime behavior can be described as something like a function relating solution quality to runtime. It is not a real function, though, since a) many algorithms are randomized, meaning that they behave differently every time you use them, even with the same input data, and b) an algorithm will usually behave differently on different instances of an optimization problem type.
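To make this concrete, here is a minimal, illustrative Java sketch (not taken from any of our tools) of such anytime behavior: a simple randomized local search on a toy bit-string problem that records a (runtime, quality) point at every improvement, so that a single run yields a progress curve rather than a single quality value. The problem, class name, and logging format are assumptions for illustration only.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    /** Illustrative "anytime" optimizer: a hill climber on a toy bit-string
     *  problem (maximize the number of 1-bits) that logs a (runtime, quality)
     *  pair whenever it finds an improvement. */
    public class AnytimeLogDemo {
      public static void main(String[] args) {
        final int n = 64;                       // problem scale
        final Random random = new Random(42L);  // fixed seed for reproducibility
        boolean[] x = new boolean[n];           // initial (not-so-good) guess: all zeros
        int quality = evaluate(x);

        final List<long[]> progress = new ArrayList<>(); // (runtime, quality) points
        final long start = System.nanoTime();
        progress.add(new long[] {0L, quality});

        for (int step = 0; step < 100_000; step++) {
          int i = random.nextInt(n);            // neighbor: flip one random bit
          x[i] = !x[i];
          int newQuality = evaluate(x);
          if (newQuality >= quality) {          // keep improvements (and ties)
            if (newQuality > quality) {         // log only strict improvements
              progress.add(new long[] {System.nanoTime() - start, newQuality});
            }
            quality = newQuality;
          } else {
            x[i] = !x[i];                       // undo the move
          }
        }
        for (long[] point : progress) {
          System.out.println(point[0] + " ns -> quality " + point[1]);
        }
      }

      /** toy objective: count the 1-bits (higher is better) */
      static int evaluate(boolean[] x) {
        int sum = 0;
        for (boolean b : x) { if (b) sum++; }
        return sum;
      }
    }

Running this twice with different seeds will, in general, produce two different progress curves: this is exactly the randomness mentioned above.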
This means that we need to do a lot of experiments: We need to apply an optimization algorithm multiple times to a given optimization problem instance in order to "average out" the randomness. Each time, we need to collect data about the whole runtime behavior, not just the final result. Then we need to do this for multiple instances with different features in order to learn how, e.g., the scale of a problem influences the algorithm behavior. This means that we will obtain quite a lot of data from many algorithm setups on many problem instances. The question that researchers face is thus: "How can we extract useful information from that data?" How can we obtain information which helps us to improve our algorithms? How can we get data from which we can learn about the weaknesses of our methods so that we can improve them?
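The experimental setup itself can be sketched in a few lines of Java. The instance sizes, the number of runs, and the toy objective below are illustrative assumptions, not a particular benchmark suite; the point is simply the nesting: several instances, several repeated runs per instance, and one complete progress log per run.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    /** Illustrative experiment harness: repeat a randomized optimizer on
     *  several problem instances and keep one progress log per run. */
    public class ExperimentSketch {
      public static void main(String[] args) {
        final int[] instanceSizes = {32, 64, 128}; // instances of different scale
        final int runsPerInstance = 30;            // repetitions to "average out" randomness

        for (int size : instanceSizes) {
          for (int run = 0; run < runsPerInstance; run++) {
            long seed = (((long) size) << 32) | run;         // reproducible, different per run
            List<long[]> progress = runOnce(size, new Random(seed));
            System.out.println("size=" + size + " run=" + run
                + " improvements logged: " + progress.size());
          }
        }
      }

      /** one run of a toy anytime optimizer: random search maximizing the 1-bits
       *  in a bit string; returns the whole runtime behavior as (step, quality) pairs. */
      static List<long[]> runOnce(int size, Random random) {
        List<long[]> progress = new ArrayList<>();
        int best = -1;
        for (long step = 0; step < 10_000; step++) {
          int quality = 0;                       // sample and evaluate a random candidate
          for (int i = 0; i < size; i++) {
            if (random.nextBoolean()) quality++;
          }
          if (quality > best) {                  // log every strict improvement
            best = quality;
            progress.add(new long[] {step, quality});
          }
        }
        return progress;
      }
    }

Even this tiny setup already produces 3 × 30 progress logs; real benchmark studies with several algorithm setups produce far more, which is where the question of extracting useful information from the data begins.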
In this research presentation, I discuss my take on this subject. I introduce a process for automatically discovering the reasons why a certain algorithm behaves as it does and why one problem instance is harder for a set of algorithms than another. This process has already been implemented in our open source optimizationBenchmarking.org framework and is currently being implemented in R.
Read more: Research Talk: Automating Scientific Research in Optimization
- Written by: Thomas Weise
Today, I attended the LogiMAT China 2017 exhibition in Nanjing, an international trade fair for distribution, materials handling, and information flow. This year, I have already visited three exhibitions on logistics and transportation: The fresh logistics and Intertraffic exhibitions were mainly concerned with "hardware," either indoor or outdoor, while Intermodal focused mainly on container-based, well, intermodal, logistics. (All three, however, also had exhibits and presentations on IT solutions.) LogiMAT puts the logistics inside a factory or warehouse into the center of its attention, but is different in one more aspect: a large percentage of the exhibits concerned aspects of automation and self-organization, i.e., the trending topic of intelligent storage and logistics systems. This was emphasized by forums such as "Industry 4.0 Meets Made in China 2025 – The Benefits of Intelligent Warehousing and Production."
- Written by: Thomas Weise
Yesterday, we published a new course titled "Metaheuristic Optimization". On the course website, you can find all slides and example algorithm implementations, and you can download the complete course material as a single tar.xz archive (which can be opened under both Windows and Linux) from here. As the third completely published course of our institute, after the courses on "Object-Oriented Programming with Java" and "Distributed Computing", this marks another milestone in our striving for teaching excellence.
The new course tries to give a complete yet in-depth overview of the topic of optimization from the perspective of metaheuristics, i.e., approximate algorithms that can find good solutions for computationally hard problems within a short time. The course is designed to require little background knowledge. Many of the presented algorithms can be implemented live in Java by the instructor within a few minutes during the course. This shows that the algorithms we discuss are not scary and can be mastered even with basic programming knowledge. It also closes the gap between research and practice – after all, we are the Institute of Applied Optimization.
In the course, we discuss a broad spectrum of different optimization methods, ranging from local search algorithms such as hill climbers, Simulated Annealing, and Tabu Search to global search methods such as Evolutionary Algorithms, Evolution Strategies, Genetic Programming, Differential Evolution, Particle Swarm Optimization, Ant Colony Optimization, and Estimation of Distribution Algorithms. We also discuss several phenomena that make problems difficult for these algorithms, general concepts such as multi-objective and constrained optimization, and several example applications. All in all, this course aims to give the student the knowledge to recognize an optimization problem when she sees it, the ability to choose the right algorithm for the right problem, and the practical experience to implement and apply said algorithm in a short time.
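As an illustration of the kind of quick live implementation mentioned above, here is a minimal Java sketch of Simulated Annealing, one of the local searches listed. It minimizes the simple sphere function f(x) = x₁² + … + xₙ²; the objective, the neighborhood move, and the cooling schedule are illustrative choices and not necessarily the exact setup used in the course slides.

    import java.util.Random;

    /** Minimal Simulated Annealing sketch minimizing the sphere function. */
    public class SimulatedAnnealingSketch {

      static double f(double[] x) {              // objective to minimize
        double sum = 0.0;
        for (double v : x) { sum += v * v; }
        return sum;
      }

      public static void main(String[] args) {
        final Random random = new Random(123L);
        final int n = 10;

        double[] current = new double[n];        // random start in [-5, 5]^n
        for (int i = 0; i < n; i++) { current[i] = -5.0 + 10.0 * random.nextDouble(); }
        double currentF = f(current);
        double bestF = currentF;

        double temperature = 1.0;                // initial temperature
        final double cooling = 0.999;            // geometric cooling schedule

        for (int step = 0; step < 100_000; step++) {
          double[] candidate = current.clone();  // neighbor: perturb one coordinate
          int i = random.nextInt(n);
          candidate[i] += 0.1 * random.nextGaussian();
          double candidateF = f(candidate);

          double delta = candidateF - currentF;
          // accept improvements always, worse moves with Boltzmann probability
          if (delta <= 0.0 || random.nextDouble() < Math.exp(-delta / temperature)) {
            current = candidate;
            currentF = candidateF;
            if (currentF < bestF) { bestF = currentF; }
          }
          temperature *= cooling;
        }
        System.out.println("best objective value found: " + bestF);
      }
    }

Setting the temperature to zero from the start would turn this into a plain hill climber, which is exactly the kind of relationship between the algorithms that the course explores step by step.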