- Written by: Thomas Weise
In an Inductive Program Synthesis (IPS) problem, a set of input/output examples is given and the task is to find a program which produces the desired output for each given input. Recently, researchers from the University of Cambridge and Microsoft Research have submitted a paper to the 5th International Conference on Learning Representations (ICLR'17) on DeepCoder, a new approach to IPS, i.e., to the automatic synthesis of programs. This new technology pursues goals similar to those of our own work on program synthesis, but achieves them with entirely different means.
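To make the setting more concrete, here is a toy sketch of what an IPS task could look like. The examples, the tiny set of primitives, and the brute-force search below are made up for illustration and are not DeepCoder's actual method:

```python
# Toy illustration of an Inductive Program Synthesis task (made up for this
# post, not DeepCoder's approach): given input/output examples, search a tiny
# space of candidate programs for one that is consistent with all of them.
from itertools import product

# input/output examples; the hidden target behavior is "double every element"
examples = [([1, 2, 3], [2, 4, 6]),
            ([0, 5], [0, 10])]

# a minimal "language": a program is a short sequence of these primitives
primitives = {
    "double":  lambda xs: [2 * x for x in xs],
    "inc":     lambda xs: [x + 1 for x in xs],
    "reverse": lambda xs: list(reversed(xs)),
}

def run(program, xs):
    for name in program:          # apply the primitives one after another
        xs = primitives[name](xs)
    return xs

# brute-force enumeration of all programs of length 1 and 2
for length in (1, 2):
    for program in product(primitives, repeat=length):
        if all(run(program, inp) == out for inp, out in examples):
            print("found program:", program)   # e.g. ('double',)
            raise SystemExit
```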
Read more: Algorithm Synthesis: Deep Learning and Genetic Programming
- Written by: Thomas Weise
The main research area of our institute is metaheuristic optimization, i.e., the study of algorithms that can find good approximate solutions for computationally hard problems in feasible time.
The Traveling Salesman Problem (TSP) is an example of such an optimization task. In a TSP, n cities and the distances between them are given, and the goal is to find the shortest round-trip tour that visits each city exactly once and then returns to its starting point. The TSP is NP-hard, meaning that every currently known algorithm which guarantees to find the globally best solution of any given TSP instance needs a time that grows exponentially with n in the worst case, and this is unlikely to change. In other words, if a TSP instance has n cities, then the worst-case runtime of any exact TSP solver grows exponentially, on the order of 2^n. That said, today we have algorithms that can exactly solve a wide range of instances with tens of thousands of nodes and approximate the solution of million-node instances with an error of less than one part per thousand. This is pretty awesome, but the worst-case runtime to find the exact (optimal) solution is still exponential.
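To illustrate why exact solving quickly becomes infeasible, here is a minimal brute-force solver (with a made-up four-city distance matrix) that simply enumerates all possible tours. It checks all (n-1)! tours starting at city 0, so this approach collapses already for a few dozen cities:

```python
# Minimal exact TSP solver by exhaustive enumeration (for illustration only;
# the 4-city distance matrix is made up). It checks all (n-1)! tours that
# start at city 0, so its runtime explodes already for a few dozen cities.
from itertools import permutations

dist = [[ 0,  2,  9, 10],
        [ 2,  0,  6,  4],
        [ 9,  6,  0,  8],
        [10,  4,  8,  0]]
n = len(dist)

def tour_length(tour):
    # length of the round trip through the cities in 'tour', back to the start
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# fix city 0 as the starting point to avoid counting rotated duplicates
length, tour = min((tour_length((0,) + p), (0,) + p)
                   for p in permutations(range(1, n)))
print("shortest tour:", tour, "with length", length)
```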
What researchers try to do is to develop algorithms that can find tours which are as short as possible, and to do so as fast as possible. Here, let us look at the meaning of "fast". Fast has something to do with time. In other words, in order to know whether an algorithm A is fast, a common approach is to measure the time that it needs to find a solution of a given quality. If that time is shorter than the time another algorithm B needs, we can say that A is faster than B. This sounds rather trivial, but if we take a closer look, it is actually not: there are multiple ways to measure time, and each has its own advantages and disadvantages.
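As a sketch of what such a measurement could look like in practice (with a made-up toy objective, plain random search as the "algorithm", and an arbitrary target quality), one can record both the consumed wall-clock time and the number of objective function evaluations (FEs) until a solution of the desired quality is reached:

```python
# Sketch: measure how long an algorithm needs to reach a given solution
# quality, both in wall-clock time and in objective function evaluations
# (FEs). The "algorithm" here is plain random search on a toy problem and
# the target quality is made up for the example.
import random, time

def objective(x):          # toy objective: distance to the (known) optimum 42
    return abs(x - 42)

target_quality = 0.5       # stop as soon as the objective value reaches this
random.seed(1)

start = time.perf_counter()
evaluations = 0
best = float("inf")
while best > target_quality:
    x = random.uniform(0, 100)
    best = min(best, objective(x))
    evaluations += 1
elapsed = time.perf_counter() - start

print(f"reached quality {best:.3f} after {evaluations} FEs "
      f"and {elapsed:.6f} seconds")
```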
Read more: Measuring the Runtime of (Optimization) Algorithms
- Written by: Thomas Weise
Sometimes, when discussing the benefits of Evolutionary Algorithms (EAs), their special variant Genetic Algorithms (GAs) with bit-string based search spaces, or other metaheuristic global search algorithms, statements like the following may be made: "If you use a huge (unlimited) population size with a large running budget (unlimited iterations or generations) for both global search metaheuristics (with enough diversity) and local search metaheuristics, the global search approach will no doubt outperform the local search."
I cannot really agree with this statement, in particular if it is applied to black-box metaheuristics (those that make no use of problem-specific knowledge).
This post here is heavily related to the following paper:
Thomas Weise, Yuezhong Wu, Raymond Chiong, Ke Tang, and Jörg Lässig. Global versus Local Search: The Impact of Population Sizes on Evolutionary Algorithm Performance. Journal of Global Optimization 66(3):511-534, November 2016.
doi:10.1007/s10898-016-0417-5 / pdf
I will begin with a quick introduction to the concepts of local search, EAs, and black-box global search metaheuristics. Then I will take apart the different assumptions in the statement above (highlighted with different colors) step by step. I make what I think is a good case against the assumption that pure global optimization methods such as EAs perform better. That being said, I conclude by pointing out that practitioners rarely use pure EAs – the best-performing methods are hybrid algorithms that combine the speed of a local search with the resilience against getting stuck in local optima offered by a global search.
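To fix the terminology, here is a minimal sketch (on the OneMax toy problem, not an example from the paper) of the two extremes: a (1+1) hill climber as a pure local search and a simple population-based EA as a black-box global search on bit strings:

```python
# Minimal sketch (toy OneMax problem, not from the paper) of the two extremes
# discussed in the post: a (1+1) hill climber as local search and a simple
# population-based EA as global search on bit strings.
import random

N = 40                                # bit-string length
f = sum                               # OneMax: maximize the number of 1 bits

def mutate(x):                        # flip each bit with probability 1/N
    return [b ^ (random.random() < 1 / N) for b in x]

def hill_climber(budget):             # (1+1) local search
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(budget):
        y = mutate(x)
        if f(y) >= f(x):
            x = y
    return f(x)

def simple_ea(budget, mu=20):         # (mu+mu) EA with truncation selection
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(mu)]
    for _ in range(budget // mu):
        pop += [mutate(random.choice(pop)) for _ in range(mu)]
        pop = sorted(pop, key=f, reverse=True)[:mu]
    return f(pop[0])

random.seed(1)
print("hill climber best:", hill_climber(2000))
print("EA best:          ", simple_ea(2000))
```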
- Written by: Thomas Weise
The center of our institute's research is optimization. But what is optimization? Basically, optimization is the art of making good decisions. It provides us with a set of tools, mostly from the areas of computer science and mathematics, which are applicable in virtually all fields, ranging from business, industry, biology, physics, medicine, data mining, and engineering to even art.
Every question that asks for a thing with a superlative feature is an optimization problem. Constructing the fastest car, finding the most profitable investment plan, finding a way to discover diseases as early and reliably as possible, scheduling your work so that you have the most spare time left for gaming – all of these are optimization tasks.
In this article, we will explore what an optimization problem is in more detail. We will distinguish hard from easy problems and discuss what implications come with "hardness". We will discuss how exact and approximate algorithms tackle hard problems, give a slightly more formal definition of an optimization problem, and list some more examples of optimization tasks. (Additionally, we also have a course on "Metaheuristic Optimization" and you can find all its slides here.)
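As a small preview of the more formal view (a sketch with made-up candidate plans and costs): an optimization problem can be seen as a set X of candidate solutions together with an objective function f that rates them, and solving it means finding an element of X with the best objective value:

```python
# Minimal sketch of the formal view (the candidate plans and their costs are
# made up): an optimization problem consists of a set X of candidate solutions
# and an objective function f; we look for the x in X with the best (here:
# smallest) objective value.
X = ["plan A", "plan B", "plan C"]                 # candidate solutions
cost = {"plan A": 120.0, "plan B": 95.5, "plan C": 101.0}

def f(x):                                          # objective function to minimize
    return cost[x]

best = min(X, key=f)
print("best solution:", best, "with objective value", f(best))
```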
- Written by: Thomas Weise
The inspiration gleaned from observing nature has led to several important advances in the field of optimization. Still, it seems to me that a lot of current work is based mainly on such inspiration alone, which might divert attention away from practical and algorithmic concerns. As a result, there is a growing number of specialized terminologies used in the field of Evolutionary Computation (EC) and Swarm Intelligence (SI), which I consider a problem for clarity in research. With this article, I would like to formulate my thoughts in the hope of contributing to a fruitful debate.
Read more: Why research in Computational Intelligence should be less nature-inspired.