Today I attended the PhD thesis defense of Mr. Stefan Niemczyk of the Distributed Systems Group, Fachbereich 16: Elektrotechnik/Informatik of my old university, the University of Kassel, in Germany, as the second reviewer and referee. First of all, congratulations to Mr. Niemczyk on the successful defense of his thesis, which was rated "very good". The title of the thesis is "Dynamische Konfiguration verteilter Informationsverarbeitung in Gruppen heterogener Agenten," which translates to "Dynamic Configuration of Distributed Information Processing in Groups of Heterogeneous Agents". I can only recommend this excellent work to everyone working with teams of (autonomous) robots.

I think this research is not just relevant to the search-and-rescue scenarios in which it was tested, but may also be helpful when designing, e.g., home service robots, Internet of Things applications, automated logistics, warehouse management systems, or robots for constructing buildings. This technology could thus become an important asset in a later phase of an Industry 4.0 or Made in China 2025 concept, when intelligent production environments are highly automated, exchange goods and material via automated logistic processes, and thus need to be able to dynamically configure their "interface robots" to each other.

One of my fundamental research interests is how we can determine which optimization algorithm is good for which problem.

Unfortunately, answering this question is quite complicated. For most practically relevant problems, we need to trade off the (run)time we can invest in getting a good solution against the quality of said solution. Furthermore, the performance of almost all algorithms cannot be described by a single pair of "solution quality" and "time needed to get a solution of that quality". Instead, these (anytime) algorithms start with an initial (often not-so-good) guess about the solution and then improve it step by step. In other words, their runtime behavior can be described as something like a function relating solution quality to runtime. It is not a real function, though, since a) many algorithms are randomized, meaning that they behave differently every time you use them, even with the same input data, and b) an algorithm will usually behave differently on different instances of an optimization problem type.
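The anytime behavior described above can be sketched in a few lines of Java. This is a minimal illustration, not any specific algorithm from the text: a hypothetical bit-flip hill climber on a toy OneMax-style objective that logs a (step, quality) point whenever its best solution improves, yielding exactly the kind of quality-over-runtime curve discussed here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** A randomized anytime algorithm: it starts with an initial guess and
 *  improves it step by step, logging one point per improvement.
 *  The hill climber and the toy objective are hypothetical stand-ins. */
public class AnytimeLog {

  /** one logged point of the runtime behavior */
  public static final class LogPoint {
    public final long step;    // consumed objective function evaluations
    public final int quality;  // best objective value so far (to be minimized)
    LogPoint(long step, int quality) { this.step = step; this.quality = quality; }
  }

  /** Minimize the number of 0-bits in an n-bit string with a simple
   *  randomized hill climber; return the quality-over-time log. */
  public static List<LogPoint> hillClimb(int n, long budget, long seed) {
    Random rnd = new Random(seed);              // seeded: runs are repeatable
    boolean[] x = new boolean[n];
    for (int i = 0; i < n; i++) x[i] = rnd.nextBoolean(); // random initial guess
    int quality = countZeros(x);
    List<LogPoint> log = new ArrayList<>();
    log.add(new LogPoint(1, quality));          // log the initial guess
    for (long step = 2; step <= budget && quality > 0; step++) {
      int i = rnd.nextInt(n);                   // flip one random bit
      x[i] = !x[i];
      int newQuality = countZeros(x);
      if (newQuality <= quality) {              // accept if not worse
        if (newQuality < quality) log.add(new LogPoint(step, newQuality));
        quality = newQuality;
      } else x[i] = !x[i];                      // otherwise undo the move
    }
    return log;
  }

  private static int countZeros(boolean[] x) {
    int z = 0;
    for (boolean b : x) if (!b) z++;
    return z;
  }
}
```

Running the same method with a different seed produces a different log — which is precisely why a single (quality, time) pair cannot characterize such an algorithm.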

This means that we need to do a lot of experiments: We need to apply an optimization algorithm multiple times to a given optimization problem instance in order to "average out" the randomness. Each time, we need to collect data about the whole runtime behavior, not just the final results. Then we need to do this for multiple instances with different features in order to learn how, e.g., the scale of a problem influences the algorithm behavior. This means that we will obtain quite a lot of data from many algorithm setups on many problem instances. The question that researchers face is thus: "How can we extract useful information from that data?" How can we obtain information which helps us to improve our algorithms? How can we get data from which we can learn about the weaknesses of our methods so that we can improve them?
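The experimental protocol above — several independent runs per instance, several instances of different scale — can be sketched as follows. Everything here is a hypothetical stand-in: a trivial (1+1)-style bit-flip search on OneMax-like instances, with the instance scale n as the feature we vary.

```java
import java.util.Random;

/** Sketch of the experimental protocol: run a randomized algorithm
 *  several times on each of several problem instances and average
 *  out the randomness. The "algorithm" is a toy stand-in. */
public class ExperimentGrid {

  /** one run; returns the number of steps until the optimum is found */
  static long singleRun(int n, long seed) {
    Random rnd = new Random(seed);
    boolean[] x = new boolean[n];
    int zeros = n;                           // all bits start at false
    long steps = 0;
    while (zeros > 0) {                      // until the optimum is reached
      steps++;
      int i = rnd.nextInt(n);
      if (!x[i]) { x[i] = true; zeros--; }   // accept improving flips only
    }
    return steps;
  }

  /** mean runtime over several independent, differently-seeded runs */
  static double meanSteps(int n, int runs) {
    long total = 0;
    for (int r = 0; r < runs; r++) total += singleRun(n, r);
    return total / (double) runs;
  }

  public static void main(String[] args) {
    // how does the scale n of the instance influence the behavior?
    for (int n : new int[] { 16, 32, 64 })
      System.out.println("n=" + n + " mean steps=" + meanSteps(n, 30));
  }
}
```

Even this toy grid already produces one number per (instance scale, statistic) pair; a real study collecting whole runtime logs over many setups and instances quickly yields the data volume discussed above.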

In this research presentation, I discuss my take on this subject. I introduce a process for automatically discovering the reasons why a certain algorithm behaves as it does and why one problem instance is harder for a set of algorithms than another. This process has already been implemented in our open source optimizationBenchmarking.org framework, which is currently being implemented in R.

Today, I attended the LogiMAT China 2017 exhibition in Nanjing, an international trade fair for distribution, materials handling, and information flow. This year, I have already visited three exhibitions on logistics and transportation: The fresh logistics and Intertraffic exhibitions were mainly concerned with "hardware," either indoor or outdoor, and Intermodal focused mainly on container-based, well, intermodal, logistics. (All three, however, also had exhibits and presentations on IT solutions.) LogiMAT puts the logistics inside a factory or warehouse at the center of its attention, but is different in one more aspect: a large percentage of the exhibits featured aspects of automation and self-organization, i.e., the trending topic of intelligent storage and logistics systems. This is emphasized by forums such as "Industry 4.0 Meets Made in China 2025 – The Benefits of Intelligent Warehousing and Production."

Yesterday, we published a new course titled "Metaheuristic Optimization". On the course website, you can find all slides and example algorithm implementations, and you can download the complete course material as a single tar.xz archive (which can be opened under both Windows and Linux) from here. As the third completely published course of our institute, after the courses on "Object-Oriented Programming with Java" and "Distributed Computing", this marks another milestone in our pursuit of teaching excellence.

The new course tries to give a broad and yet in-depth overview of the topic of optimization from the perspective of metaheuristics, i.e., approximate algorithms that can find good solutions for computationally hard problems within a short time. The course is designed to require little background knowledge. Many of the presented algorithms can be implemented directly during the course, in Java, by the instructor within a few minutes. This shows that the algorithms we discuss are not scary and can be mastered even with basic programming knowledge. It also closes the gap between research and practice – after all, we are the Institute of Applied Optimization.

In the course, we discuss a broad spectrum of different optimization methods, ranging from local search algorithms such as hill climbers, Simulated Annealing, and Tabu Search to global search methods such as Evolutionary Algorithms, Evolution Strategies, Genetic Programming, Differential Evolution, Particle Swarm Optimization, Ant Colony Optimization, and Estimation of Distribution Algorithms. We also discuss several phenomena that make problems difficult for these algorithms, general concepts such as multi-objective and constrained optimization, and several example applications. All in all, this course aims to give the student the knowledge to recognize an optimization problem when she sees it, the ability to choose the right algorithm for the right problem, and the practical experience to implement and apply said algorithm in a short time.
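To give a flavor of how compact these local search methods are in Java, here is a minimal sketch of Simulated Annealing, one of the algorithms listed above: worse moves are accepted with probability exp(-Δf/T) under a temperature T that shrinks over time. The toy objective f(x) = x² and the 1/t cooling schedule are hypothetical choices for illustration, not taken from the course material.

```java
import java.util.Random;

/** A compact Simulated Annealing sketch on a one-dimensional toy problem. */
public class SimulatedAnnealing {

  static double f(double x) { return x * x; }   // toy objective, minimum at 0

  public static double minimize(long steps, long seed) {
    Random rnd = new Random(seed);
    double x = 10.0 + rnd.nextGaussian();       // start far from the optimum
    double fx = f(x);
    double best = fx;                           // best objective value seen
    for (long t = 1; t <= steps; t++) {
      double temperature = 1.0 / t;             // simple 1/t cooling schedule
      double y = x + 0.1 * rnd.nextGaussian();  // small random move
      double fy = f(y);
      // accept improvements always; accept worse moves with
      // Boltzmann probability exp((fx - fy) / temperature)
      if (fy <= fx || rnd.nextDouble() < Math.exp((fx - fy) / temperature)) {
        x = y; fx = fy;
        if (fx < best) best = fx;
      }
    }
    return best;
  }
}
```

With a cold temperature, this degenerates into a hill climber; the early hot phase is what lets it escape local optima on harder, multi-modal objectives.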

Benchmarking, the empirical comparison of algorithm performance, is usually the only feasible way to find out which algorithm is good for a given problem. Benchmarking consists of two steps: First, the algorithms are applied to the benchmark problems and data is collected. Second, the collected data is evaluated. There is little guidance for the first step and a lack of tool support for the second. Researchers investigating new problems need to implement both data collection and evaluation themselves. In our new paper "From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking," we make the case for defining standard directory structures and file formats for the performance data and metadata of experiments with optimization algorithms. Such formats must be easy to read, easy to write, and easy to incorporate into existing setups. If commonly accepted formats existed and researchers actually used them, more general tools could emerge. There would be a real incentive for everyone who builds an evaluation tool to support the common format right away and then, maybe, publish the tool for others to use. Researchers would then no longer need to implement their own evaluation programs. We derive suitable formats by analyzing what existing tools do and what information they need. We then present a general tool, our optimizationBenchmarking.org framework, including an open source library for reading and writing data in our format. Since our framework obtains its data from a general file format, it can assess the performance of arbitrary algorithms implemented in arbitrary programming languages on arbitrary single-objective optimization problems.
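To make the idea of a standardized layout concrete, a directory structure and log format along these lines might look as sketched below. This is purely illustrative — the names, separators, and columns are assumptions for this sketch, not the actual specification from the paper.

```
results/
  hillClimber/              # one directory per algorithm setup
    instance-A/             # one sub-directory per problem instance
      run-01.txt            # one log file per independent run
      run-02.txt
  simulatedAnnealing/
    instance-A/
      run-01.txt

# run-01.txt: one row per improvement, e.g.
# function evaluations ; runtime [ms] ; best objective value so far
1    ; 0  ; 812.5
207  ; 3  ; 334.0
5192 ; 41 ; 97.25
```

The point of any such convention is that an evaluation tool only needs to understand the files, not the algorithm or the language it was written in.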
