One of my fundamental research interests is how we can determine which optimization algorithm is good for which problem.

Unfortunately, answering this question is quite complicated. For most practically relevant problems, we need to find a trade-off between the (run)time we can invest in getting a good solution and the quality of said solution. Furthermore, the performance of almost all algorithms cannot just be described by a single pair of "solution quality" and "time needed to get a solution of that quality". Instead, these (anytime) algorithms start with an initial (often not-so-good) guess about the solution and then improve it step by step. In other words, their runtime behavior can be described as something like a function relating solution quality to runtime. It is not a real function, though, since a) many algorithms are randomized, meaning that they behave differently every time you use them, even with the same input data, and b) an algorithm will usually behave differently on different instances of an optimization problem type.
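
The sketch below illustrates this anytime behavior: a tiny randomized hill climber on a hypothetical bit-string minimization problem that logs its best-so-far solution quality, together with the number of consumed objective function evaluations, whenever it improves. The problem, the parameters, and the logging format are illustrative assumptions, not part of any specific framework.

```java
import java.util.ArrayList;
import java.util.Random;

/**
 * A minimal sketch of an "anytime" randomized optimizer: a simple hill climber
 * on bit strings (hypothetical objective: number of zero bits, to be minimized)
 * that records the best-so-far quality whenever it improves, instead of
 * reporting only a single final result.
 */
public class AnytimeRunDemo {

  /** hypothetical objective function: count the zero bits (smaller is better) */
  static int objective(boolean[] x) {
    int zeros = 0;
    for (boolean bit : x) {
      if (!bit) {
        zeros++;
      }
    }
    return zeros;
  }

  public static void main(String[] args) {
    final int n = 64;                    // problem scale
    final int maxFEs = 10_000;           // budget: objective function evaluations
    final Random random = new Random();  // randomized: every run behaves differently
    final ArrayList<long[]> progress = new ArrayList<>(); // (FEs, best quality) pairs

    boolean[] current = new boolean[n];
    for (int i = 0; i < n; i++) {
      current[i] = random.nextBoolean(); // initial (often not-so-good) guess
    }
    int bestF = objective(current);
    progress.add(new long[] { 1L, bestF });

    for (int fes = 2; (fes <= maxFEs) && (bestF > 0); fes++) {
      boolean[] candidate = current.clone();
      candidate[random.nextInt(n)] ^= true; // flip one randomly chosen bit
      int f = objective(candidate);
      if (f <= bestF) {                     // keep the candidate if it is not worse
        current = candidate;
        if (f < bestF) {                    // log only strict improvements
          bestF = f;
          progress.add(new long[] { fes, bestF });
        }
      }
    }

    for (long[] point : progress) {        // the run's quality-over-time curve
      System.out.println(point[0] + "\t" + point[1]);
    }
  }
}
```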

This means that we need to do a lot of experiments: We need to apply an optimization algorithm multiple times to a given optimization problem instance in order to "average out" the randomness. Each time, we need to collect data about the whole runtime behavior, not just the final result. Then we need to do this for multiple instances with different features in order to learn, e.g., how the scale of a problem influences the algorithm's behavior. As a consequence, we will obtain quite a lot of data from many algorithm setups on many problem instances. The question that researchers face is thus: "How can we extract useful information from that data?" How can we obtain information which helps us to improve our algorithms? How can we get data from which we can learn about the weaknesses of our methods so that we can improve them?
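
As a rough illustration of this experiment design, the following sketch runs a (hypothetical) anytime optimizer several times on instances of different scale and keeps the full progress curve of every run; the instance sizes, the run count, and the runAnytimeAlgorithm placeholder are assumptions made for this example only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * A minimal sketch of the experiment design described above: several
 * independent runs per problem instance (to "average out" the randomness),
 * over instances of different scale, keeping the whole progress curve of
 * every run. The instance sizes, the run count, and runAnytimeAlgorithm
 * are placeholders invented for this example.
 */
public class ExperimentLoopDemo {

  public static void main(String[] args) {
    final int[] instanceSizes = { 16, 64, 256 }; // instances with different features (here: scale)
    final int runsPerInstance = 30;              // repetitions per instance

    for (int size : instanceSizes) {
      List<List<long[]>> runs = new ArrayList<>();
      for (int run = 0; run < runsPerInstance; run++) {
        // a fixed seed per run keeps the randomized experiments repeatable
        runs.add(runAnytimeAlgorithm(size, new Random(run)));
      }
      // later, e.g., estimate the median best quality reached for each budget
      System.out.println("instance size " + size + ": " + runs.size() + " runs collected");
    }
  }

  /** placeholder: would run the optimizer and return its (FEs, best quality) log */
  static List<long[]> runAnytimeAlgorithm(int instanceSize, Random random) {
    List<long[]> progress = new ArrayList<>();
    progress.add(new long[] { 1L, instanceSize }); // dummy: a single point
    return progress;
  }
}
```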

In this research presentation, I discuss my take on this subject. I introduce a process for automatically discovering the reasons why a certain algorithm behaves as it does and why one problem instance is harder for a set of algorithms than another. This process has already been implemented in our open source optimizationBenchmarking.org framework.

Today, I attended the LogiMAT China 2017 exhibition in Nanjing, an international trade fair for distribution, materials handling, and information flow. This year, I have already visited three other exhibitions on logistics and transportation: The fresh logistics and Intertraffic exhibitions were mainly concerned with "hardware", either indoor or outdoor, and Intermodal focused mainly on container-based (well, intermodal) logistics. (All three, however, also had exhibits and presentations on IT solutions.) LogiMAT puts the logistics inside a factory or warehouse at the center of its attention, but is different in one more aspect: a large percentage of the exhibits supported aspects of automation and self-organization, i.e., the trending topic of intelligent storage and logistics systems. This is emphasized by forums such as "Industry 4.0 Meets Made in China 2025 – The Benefits of Intelligent Warehousing and Production."

Yesterday, we published a new course titled "Metaheuristic Optimization". On the course website, you can find all slides and example algorithm implementations, and you can download the complete course material as a single tar.xz archive (which can be opened under both Windows and Linux) from here. As the third completely published course of our institute, after the courses on "Object-Oriented Programming with Java" and "Distributed Computing", this marks another milestone in our striving for teaching excellence.

The new course tries to give a comprehensive and yet in-depth overview of optimization from the perspective of metaheuristics, i.e., approximate algorithms that can find good solutions for computationally hard problems within a short time. The course is designed to require little background knowledge. Many of the presented algorithms can be implemented directly in Java by the instructor within a few minutes during the course. This shows that the algorithms we discuss are not scary and can be mastered even with basic programming knowledge. It also closes the gap between research and practice – after all, we are the Institute of Applied Optimization.

In the course, we discuss a broad spectrum of different optimization methods, ranging from local search algorithms such as hill climbers, Simulated Annealing, and Tabu Search to global search methods such as Evolutionary Algorithms, Evolution Strategies, Genetic Programming, Differential Evolution, Particle Swarm Optimization, Ant Colony Optimization, and Estimation of Distribution Algorithms. We also discuss several phenomena that make problems difficult for these algorithms, general concepts such as multi-objective and constrained optimization, and several example applications. All in all, this course aims to give the student the knowledge to recognize an optimization problem when she sees it, the ability to choose the right algorithm for the problem at hand, and the practical experience to implement and apply said algorithm within a short time.
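
To give a flavor of how compact such implementations can be, here is a minimal sketch of Simulated Annealing, one of the local search methods listed above, applied to a hypothetical bit-string minimization problem; the objective function, cooling schedule, and parameters are illustrative assumptions, not the exact code used in the course.

```java
import java.util.Random;

/**
 * A minimal sketch of Simulated Annealing on a hypothetical bit-string
 * minimization problem (objective: number of zero bits). The objective, the
 * cooling schedule, and all parameters are illustrative assumptions.
 */
public class SimulatedAnnealingDemo {

  /** hypothetical objective function: count the zero bits (smaller is better) */
  static int objective(boolean[] x) {
    int zeros = 0;
    for (boolean bit : x) {
      if (!bit) {
        zeros++;
      }
    }
    return zeros;
  }

  public static void main(String[] args) {
    final int n = 64;
    final int maxSteps = 100_000;
    final double startTemperature = 5.0;
    final Random random = new Random();

    boolean[] current = new boolean[n];
    for (int i = 0; i < n; i++) {
      current[i] = random.nextBoolean();
    }
    int currentF = objective(current);
    int bestF = currentF;

    for (int step = 1; (step <= maxSteps) && (bestF > 0); step++) {
      double temperature = startTemperature / step;    // simple cooling schedule
      boolean[] candidate = current.clone();
      candidate[random.nextInt(n)] ^= true;            // flip one randomly chosen bit
      int candidateF = objective(candidate);
      int delta = candidateF - currentF;
      // always accept improvements; accept worse moves with Boltzmann probability
      if ((delta <= 0) || (random.nextDouble() < Math.exp(-delta / temperature))) {
        current = candidate;
        currentF = candidateF;
        bestF = Math.min(bestF, currentF);
      }
    }
    System.out.println("best objective value found: " + bestF);
  }
}
```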

Benchmarking, the empirical comparison of algorithm performance, is usually the only feasible way to find out which algorithm is good for a given problem. Benchmarking consists of two steps: First, the algorithms are applied to the benchmarking problems and data is collected. Second, the collected data is evaluated. There is little guidance for the first step and a lack of tool support for the second. Researchers investigating new problems need to implement both data collection and evaluation themselves. In our new paper "From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking," we make the case for defining standard directory structures and file formats for the performance data and metadata of experiments with optimization algorithms. Such formats must be easy to read, easy to write, and easy to incorporate into existing setups. If commonly accepted formats existed and researchers actually used them, more general tools could emerge. There would be a real incentive for everyone who builds an evaluation tool to use the common format right away and then, maybe, publish the tool for others to use. Researchers would then no longer need to implement their own evaluation programs. We try to derive suitable formats by analyzing what existing tools do and what information they need. We then present a general tool, our optimizationBenchmarking.org framework, including an open source library for reading and writing data in our format. Since our framework obtains its data from a general file format, it can assess the performance of arbitrary algorithms implemented in arbitrary programming languages on arbitrary single-objective optimization problems.
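
As a rough illustration of the general idea (not of the actual format defined in the paper), the following sketch writes one plain-text log file per run into a directory structure organized by algorithm setup and problem instance, with one row per improvement of the best solution; the directory layout, file names, and columns are assumptions made for this example only.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * A minimal sketch of the general idea: one directory per algorithm setup and
 * per problem instance, one plain-text log file per run, and one
 * (FEs, time, best quality) row per improvement. The layout and columns are
 * assumptions for this example only, not the exact format defined in the paper
 * or used by the optimizationBenchmarking.org framework.
 */
public class RunLogWriterDemo {

  static void writeRunLog(String algorithm, String instance, long seed,
      long[][] progress) throws IOException {
    Path file = Paths.get("results", algorithm, instance, "run_" + seed + ".txt");
    Files.createDirectories(file.getParent());
    StringBuilder log = new StringBuilder("# FEs\ttime_ms\tbestF\n");
    for (long[] row : progress) {
      log.append(row[0]).append('\t').append(row[1]).append('\t')
         .append(row[2]).append('\n');
    }
    Files.writeString(file, log); // plain text: easy to read, write, and parse
  }

  public static void main(String[] args) throws IOException {
    // dummy progress data of a single run: (FEs, time in ms, best objective value)
    long[][] dummyProgress = { { 1, 0, 100 }, { 37, 2, 58 }, { 412, 15, 7 } };
    writeRunLog("hillClimber", "instance_64", 1L, dummyProgress);
  }
}
```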

Yesterday, I had the honor of attending the Chinese-German Future Forum on the Collaboration in Vocational Education [Deutsch-Chinesisches Zukunftsforum zur Zusammenarbeit in der Berufsbildung, 中德职业教育合作研讨会寄语中德职教未来发展], which was one of the parallel sessions of the first meeting of the High-Level Sino-German People-to-People Exchange Dialogue Mechanism [Erste Sitzung des Hochrangigen Deutsch-Chinesischen Dialogs für den gesellschaftlich-kulturellen Austausch, 中德高级别人文交流对话机制首次会议], held at the Diaoyutai State Guesthouse [钓鱼台国宾馆] in Beijing, as part of the delegation of our Hefei University [合肥学院]. The meeting was co-organized by the Central Institute for Vocational and Technical Education (CIVTE) of the Ministry of Education of China [中国教育部职业技术教育中心研究所], the China Education Association for International Exchange [中国教育国际交流协会], and the Hanns Seidel Foundation [德国汉斯·赛德尔基金会]. Hefei University can look back on a history of more than thirty years of highly successful collaboration with German institutions on vocational education, so this meeting was an important venue for us. Here you can find the conference programme in Chinese and German.
