- Written by: Thomas Weise
Today, we released version 0.8.9 of our optimizationBenchmarking.org software framework for automating significant parts of the research work in the fields of optimization, machine learning, and operations research. This new version comes much closer to our goal of reducing the workload of researchers and practitioners during algorithm development, so that they can spend more time on thinking.
Besides all the functionality offered by the previous releases, it introduces a new process for obtaining high-level conclusions about problem hardness and algorithm behavior. This process takes as input the raw data from experiments together with meta-information about algorithm setups and problem instances. It applies a sequence of machine learning techniques, namely curve fitting, clustering, and classification, to find out which features make a problem instance hard and which algorithm setup parameters cause which algorithm behavior. We have just submitted an article about this new process for review.
Our software provides a set of very general tools for algorithm performance analysis (e.g., plotting runtime/quality and ECDF charts) as well as our new process. Since it takes its data in the form of text files, it can analyze the results of any optimization algorithm, implemented in any programming language, applied to any optimization problem. It produces human-readable reports either as LaTeX/PDF documents or as XHTML.
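To illustrate what such an ECDF chart shows, here is a minimal, self-contained Java sketch: for each point in time t, the ECDF gives the fraction of independent runs that have reached a target solution quality within time t. All names and numbers below are made up for illustration; this is not the evaluator's own API.

    import java.util.Arrays;

    // Minimal sketch of the points behind an ECDF ("empirical cumulative
    // distribution function") chart of algorithm runtimes: for each time t,
    // the fraction of runs that reached the target quality by time t.
    // All values are made-up examples; this is not the evaluator's API.
    public final class EcdfSketch {
      public static void main(String[] args) {
        // times (e.g., in ms) at which 5 of the runs reached the target
        long[] successTimes = { 120L, 450L, 800L, 900L, 3000L };
        int totalRuns = 8; // 3 further runs never reached the target

        Arrays.sort(successTimes); // ECDF steps in ascending time order
        for (int i = 0; i < successTimes.length; i++) {
          double fraction = (i + 1) / (double) totalRuns;
          System.out.println("t = " + successTimes[i] + " ms: ECDF = " + fraction);
        }
      }
    }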
The software provides a user-friendly, web-based GUI which runs either on your local machine or on a server in your lab. It comes in three flavors:
- as a Java executable, which requires several tools to be installed (Java, R with several packages, and a LaTeX installation),
- as a Docker image, which only requires an installation of Docker and can be started directly under Linux, Windows, and Mac OS with the single command
docker run -t -i -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui
and is then used by browsing to http://localhost:9999 (at the first start, the image is downloaded), and
- as a command line program without GUI for integration into other software environments (with the same installation requirements as the GUI).
- Written by: Thomas Weise
The main area of research of our institute is metaheuristic optimization, i.e., the study of algorithms that can find good approximate solutions for computationally hard problems in feasible time.
The Traveling Salesman Problem (TSP) is an example of such an optimization task. In a TSP, n cities and the distances between them are given, and the goal is to find the shortest round-trip tour that visits each city exactly once and then returns to its starting point. The TSP is NP-hard, meaning that every currently known algorithm for finding the exact globally best solution of an arbitrary TSP instance needs, in the worst case, a runtime that grows exponentially with n. And this is unlikely to change: no known exact TSP solver has a worst-case runtime that is polynomial in n. Still, today we have algorithms that can exactly solve a wide range of problem instances with tens of thousands of nodes and approximate the solutions of million-node instances with an error of less than one part per thousand. This is pretty awesome, but the worst-case runtime needed to find the exact (optimal) solution is still exponential.
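To make the objective concrete, here is a small Java sketch of how the length of a candidate tour could be computed from a distance matrix. The class, method name, and representation are hypothetical, chosen only for illustration:

    // Minimal sketch of the objective function of the TSP: 'dist' is the
    // n*n matrix of inter-city distances and 'tour' is a permutation of
    // the city indices 0..n-1. Names are hypothetical, for illustration.
    public final class TourLengthSketch {
      static double tourLength(double[][] dist, int[] tour) {
        double length = 0d;
        for (int i = 0; i < tour.length; i++) {
          // distance from the current city to the next; the last city
          // wraps around to the starting point of the round trip
          length += dist[tour[i]][tour[(i + 1) % tour.length]];
        }
        return length;
      }

      public static void main(String[] args) {
        double[][] dist = { // toy symmetric distance matrix for 3 cities
            { 0d, 2d, 9d },
            { 2d, 0d, 6d },
            { 9d, 6d, 0d } };
        System.out.println(tourLength(dist, new int[] { 0, 1, 2 })); // 17.0
      }
    }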
What researchers try to do is to develop algorithms that find tours which are as short as possible, and that do so as fast as possible. But let us take a closer look at the meaning of "fast". Fast has something to do with time: in order to know whether an algorithm A is fast, a common approach is to measure the time it needs to find a solution of a given quality. If that time is shorter than the time another algorithm B needs, we can say that A is faster than B. This sounds rather trivial, but on closer inspection it is not: there are multiple ways to measure time, and each has distinct advantages and disadvantages.
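As a very small sketch of this idea, the following Java fragment records two common time measures for the same run: the machine-dependent wall-clock time and the machine-independent number of objective function evaluations (FEs). The toy "algorithm" and objective are placeholders, not from any real benchmark:

    import java.util.function.ToDoubleFunction;

    // Records two time measures for one run: wall-clock milliseconds
    // (machine-dependent) and the number of objective function
    // evaluations, FEs (machine-independent). The objective and the
    // "algorithm step" are toy placeholders for illustration only.
    public final class TimeMeasuresSketch {
      public static void main(String[] args) {
        ToDoubleFunction<double[]> f = x -> x[0] * x[0]; // toy objective
        double[] candidate = { 100d };
        long fes = 0L; // counted objective function evaluations
        long start = System.currentTimeMillis(); // wall-clock start

        double quality;
        do {
          candidate[0] *= 0.5d; // placeholder "algorithm step"
          quality = f.applyAsDouble(candidate); // evaluate the candidate
          fes++; // one more FE consumed
        } while (quality > 1e-6); // stop once the target quality is reached

        long millis = System.currentTimeMillis() - start;
        System.out.println("wall-clock time: " + millis + " ms, FEs: " + fes);
      }
    }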
Read more: Measuring the Runtime of (Optimization) Algorithms
- Written by: Thomas Weise
The Institute of Applied Optimization and the Faculty of Computer Science and Technology of Hefei University welcome Ms. Tatiana Vorontsova [Татьяна Воронцова], an exchange student from Nizhni Novgorod [Ни́жний Но́вгород] in the Russian Federation.
Ms. Vorontsova will join our university for the Summer Semester 2017. She will take several computer science courses, including the course "Object-Oriented Programming with Java", which was newly designed by Prof. Dr. Thomas Weise. Additionally, she will of course also study Chinese culture and language.
We wish Ms. Vorontsova a nice stay in Hefei and at our university.
- Written by: Thomas Weise
Sometimes, when discussing the benefits of Evolutionary Algorithms (EAs), of their special variant, Genetic Algorithms (GAs) with bit-string-based search spaces, or of other metaheuristic global search algorithms, statements like the following may be made: "If you use a huge (unlimited) population size with a large running budget (unlimited iterations or generations) for both global search metaheuristics (with enough diversity) and local search metaheuristics, the global search approach will no doubt outperform the local search."
I cannot really agree with this statement, in particular if it is applied to black-box metaheuristics (those that make no use of problem-specific knowledge).
This post is closely related to the following paper:
Thomas Weise, Yuezhong Wu, Raymond Chiong, Ke Tang, and Jörg Lässig. Global versus Local Search: The Impact of Population Sizes on Evolutionary Algorithm Performance. Journal of Global Optimization 66(3):511-534, November 2016.
doi:10.1007/s10898-016-0417-5 / pdf
I will begin with a quick introduction to the concepts of local search, EAs, and black-box global search metaheuristics. Then I will take apart the different assumptions in the statement above (highlighted with different colors) step by step. I make what I think is a good case against the assumption of superior performance of pure global optimization methods like EAs. That being said, I conclude by pointing out that practitioners rarely use pure EAs anyway: the best-performing methods are hybrid algorithms which combine the speed of a local search with the resilience against getting stuck at local optima offered by a global search.
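For reference, here is a minimal Java sketch of the kind of local search discussed in this post: a hill climber on bit strings that flips one random bit per step and keeps the flip only if it does not worsen the solution. OneMax (maximize the number of 1-bits) serves as a stand-in objective; this is not the benchmark setup from the paper above:

    import java.util.Random;

    // Minimal local search on bit strings: flip one random bit per step
    // and keep the change only if it does not worsen the objective.
    // OneMax (count of 1-bits, to be maximized) is a stand-in objective;
    // this is not the experimental setup from the paper cited above.
    public final class HillClimberSketch {
      static int oneMax(boolean[] x) { // objective: number of true bits
        int sum = 0;
        for (boolean bit : x) { if (bit) { sum++; } }
        return sum;
      }

      public static void main(String[] args) {
        Random random = new Random();
        boolean[] x = new boolean[100]; // random initial solution
        for (int i = 0; i < x.length; i++) { x[i] = random.nextBoolean(); }
        int quality = oneMax(x);

        for (long step = 0; step < 1_000_000L && quality < x.length; step++) {
          int i = random.nextInt(x.length); // pick one bit to flip
          x[i] = !x[i];
          int newQuality = oneMax(x);
          if (newQuality >= quality) {
            quality = newQuality; // accept: not worse than before
          } else {
            x[i] = !x[i]; // reject: undo the flip
          }
        }
        System.out.println("best quality found: " + quality);
      }
    }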
- Written by: Thomas Weise
The center of our institute's research is optimization. But what is optimization? Basically, optimization is the art of making good decisions. It provides us with a set of tools, mostly from the areas of computer science and mathematics, which are applicable in virtually all fields, ranging from business, industry, biology, physics, medicine, data mining, and engineering to even art.
Every question that asks for a thing with a superlative feature is an optimization problem. Constructing the fastest car, finding the most profitable investment plan, finding a way to detect diseases as early and reliably as possible, scheduling your work so that you have the most spare time left for gaming – all of these are optimization tasks.
In this article, we will explore what an optimization problem is in more detail, distinguish hard from easy problems, and discuss the implications of "hardness". We will discuss how exact and approximate algorithms tackle hard problems, give a slightly more formal definition of an optimization problem, and list some more examples of optimization tasks. (Additionally, we also have a course on "Metaheuristic Optimization", and you can find all its slides here.)
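As a small teaser of that slightly more formal view, an optimization problem (in its minimization form) can be written as follows. This is a standard textbook formulation, not a quote from the article:

    % Minimization form of an optimization problem: given a solution
    % space X and an objective function f, find a globally optimal x*.
    \[ \text{Given } f : \mathbb{X} \rightarrow \mathbb{R}, \quad
       \text{find } x^{\star} \in \mathbb{X} \text{ such that }
       f(x^{\star}) \leq f(x) \;\; \forall x \in \mathbb{X}. \]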