One of my fundamental research interests is how we can determine which optimization algorithm is good for which problem.

Unfortunately, answering this question is quite complicated. For most practically relevant problems, we need to find a trade-off between the (run)time we can invest in getting a good solution and the quality of said solution. Furthermore, the performance of almost all algorithms cannot be described by just a single pair of "solution quality" and "time needed to get a solution of that quality". Instead, these (anytime) algorithms start with an initial (often not-so-good) guess about the solution and then improve it step by step. In other words, their runtime behavior can be described as something like a function relating solution quality to runtime. It is not a real function, though, since a) many algorithms are randomized, meaning that they behave differently every time you use them, even with the same input data, and b) an algorithm will usually behave differently on different instances of an optimization problem type.
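
To make this concrete, here is a minimal sketch in Python of how an anytime optimizer produces such a quality-over-time curve. It is a generic randomized hill climber with placeholder callables, not an algorithm from the presentation:

```python
import time

def anytime_optimizer(evaluate, initial_solution, neighbor, budget_s=1.0):
    """Randomized hill climber that logs its own progress curve.

    evaluate, initial_solution, and neighbor are problem-specific
    placeholder callables; minimization is assumed.
    """
    start = time.perf_counter()
    best = initial_solution()
    best_quality = evaluate(best)
    progress = [(0.0, best_quality)]   # the (runtime, quality) "function"
    while time.perf_counter() - start < budget_s:
        candidate = neighbor(best)     # randomized step: two runs will differ
        quality = evaluate(candidate)
        if quality < best_quality:     # keep only improvements
            best, best_quality = candidate, quality
            progress.append((time.perf_counter() - start, best_quality))
    return best, progress
```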

This means that we need to do a lot of experiments: We need to apply an optimization algorithm multiple times to a given optimization problem instance in order to "average out" the randomness. Each time, we need to collect data about the whole runtime behavior, not just the final results. Then we need to do this for multiple instances with different features in order to learn how, e.g., the scale of a problem influences the algorithm's behavior. As a result, we will obtain quite a lot of data from many algorithm setups on many problem instances. The question that researchers face is thus: How can we extract useful information from that data? How can we obtain information which helps us to improve our algorithms? How can we get data from which we can learn about the weaknesses of our methods so that we can improve them?
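
Continuing the sketch above, the resulting experiment protocol could be expressed roughly like this, where `instances` is a hypothetical mapping from instance names to problem-specific callables:

```python
def run_experiments(instances, runs_per_instance=30):
    """Apply the optimizer repeatedly to each instance and keep the
    full progress curves, not just the final results."""
    data = {}
    for name, (evaluate, initial_solution, neighbor) in instances.items():
        data[name] = [
            anytime_optimizer(evaluate, initial_solution, neighbor)[1]
            for _ in range(runs_per_instance)   # "average out" randomness
        ]
    return data  # many curves per instance: the raw material for analysis
```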

In this research presentation, I discuss my take on this subject. I introduce a process for automatically discovering the reasons why a certain algorithm behaves as it does and why one problem instance is harder than another for a set of algorithms. This process has already been implemented in our open-source optimizationBenchmarking.org framework.


Some time ago, I discussed why global optimization with an Evolutionary Algorithm (EA) is not necessarily better than local search. Actually, I am asked the question "Why should I use an EA?" quite often. Thus, today, it is time to write down a few ideas about why you may or may not benefit from using an EA. I have tried to be objective, which is not entirely easy since I work in that domain.


Currently, two of the leading industrial nations, Germany and China, are pushing their industries toward a higher degree of automation. Automation is among the key technologies of concepts such as Industry 4.0 and Made in China 2025 [中国制造2025]. The goal is not automation in the traditional sense, i.e., the fixed and rigid implementation of static processes which are to be repeated millions of times in exactly the same way. Instead, decisions should be automated, i.e., the machinery carrying out processes in production and logistics should dynamically decide what to do based on its environment and its current situation. In other words, these machines should become intelligent.

As a researcher in optimization and operations research, this idea is not new to me. Actually, this is exactly the goal of our field, and it has been for the past seven decades – with one major difference: the level at which the automated, intelligent decision process takes place. In this article, I want to briefly discuss my point of view on this matter.


In an Inductive Program Synthesis (IPS) problem, a set of input/output examples is given and the task is to find a program which produces the desired outputs for the given inputs. Recently, researchers from the University of Cambridge and Microsoft Research submitted a paper to the 5th International Conference on Learning Representations (ICLR'17) on DeepCoder, a new approach to IPS, i.e., to the automatic synthesis of programs. This new technology has goals similar to our work on program synthesis, but achieves them with entirely different means.
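
To illustrate the IPS problem setting itself (this is emphatically not how DeepCoder works), one can imagine a naive synthesizer that enumerates compositions of primitives from a toy, made-up DSL until one reproduces all input/output examples:

```python
from itertools import product

# A toy DSL of unary integer functions; purely illustrative.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def run_program(program, x):
    """Apply a sequence of primitives to the input x."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def synthesize(examples, max_length=3):
    """Return the first primitive sequence consistent with all examples."""
    for length in range(1, max_length + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run_program(program, x) == y for x, y in examples):
                return program
    return None

print(synthesize([(1, 4), (2, 6)]))  # ('inc', 'double')
```

Such brute-force enumeration explodes combinatorially with program length, which is exactly why smarter approaches to IPS are an active research topic.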


The main research area of our institute is metaheuristic optimization, i.e., the study of algorithms that can find good approximate solutions for computationally hard problems in feasible time.

The Traveling Salesman Problem (TSP) is an example of such an optimization task. In a TSP, n cities and the distances between them are given, and the goal is to find the shortest round-trip tour that visits each city exactly once and then returns to its starting point. The TSP is NP-hard, meaning that any currently known algorithm for finding the exact globally best solution of an arbitrary TSP needs a runtime that grows exponentially with n in the worst case. And this is unlikely to change. In other words, if a TSP instance has n cities, then the worst-case runtime of any exact TSP solver is in O(2ⁿ). That said, today we have algorithms that can exactly solve a wide range of instances with tens of thousands of cities and approximate the solution of million-city instances with an error of less than one part per thousand. This is pretty awesome, but the worst-case runtime to find the exact (optimal) solution is still exponential.
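
For illustration: while finding the optimal tour is hard, evaluating a candidate tour is cheap. A tour can be represented as a permutation of the city indices and its length computed from a distance matrix, as in this minimal Python sketch with a made-up toy instance:

```python
def tour_length(tour, dist):
    """Length of the round trip that visits each city once and returns
    to the start; tour is a permutation of 0..n-1 and dist a symmetric
    distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# A toy symmetric distance matrix for 4 cities (illustrative values).
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]

print(tour_length([0, 1, 2, 3], dist))  # 4
print(tour_length([0, 2, 1, 3], dist))  # 6: a worse tour
```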

What researchers try to do is to develop algorithms that find tours which are as short as possible, as quickly as possible. Here, let us look at the meaning of "fast". Fast has something to do with time. In other words, in order to know whether an algorithm A is fast, a common approach is to measure the time that it needs to find a solution of a given quality. If that time is shorter than the time another algorithm B needs, we can say that A is faster than B. This sounds rather trivial, but if we take a closer look, it actually is not: there are multiple ways to measure time, and each has distinctive advantages and disadvantages.
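
As a small illustration of that point, the sketch below (a hypothetical helper, not taken from any particular framework) records two different notions of "time" at once: machine-dependent wall-clock seconds and the machine-independent count of objective function evaluations:

```python
import time

class MeasuredObjective:
    """Wraps an objective function and records two notions of time:
    wall-clock seconds (machine-dependent) and the number of objective
    function evaluations (machine-independent)."""

    def __init__(self, f):
        self.f = f
        self.evaluations = 0
        self.start = time.perf_counter()

    def __call__(self, x):
        self.evaluations += 1  # one tick of the machine-independent clock
        return self.f(x)

    def elapsed_seconds(self):
        return time.perf_counter() - self.start
```

Note that the two clocks can even disagree about which algorithm is "faster", e.g., when one algorithm needs fewer evaluations but each of its evaluations is more expensive.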
