
- Written by: Thomas Weise
Today, my colleagues Professors Rudolf, Xu, Wang, Zhang, and I visited the Hefei Jinglv Environment Equipment Co., Ltd [合肥劲旅环境科技有限公司], a company developing and producing high-quality environmental equipment, such as garbage trucks and garbage containers of all sizes. In the meeting, which lasted more than three hours, several novel technologies and ideas were discussed. I was very impressed not just by the company's modern production facilities, but also by its focus on innovation and new product ideas.
Read more: Visit at Hefei Jinglv Environment Equipment Co., Ltd [合肥劲旅环境科技有限公司]
- Written by: Thomas Weise
Recently, I had the chance to attend the "Chinesisch-Deutsches Symposium zur Ausbildung praxisorientierter Fachkräfte in Intelligenter Produktion" [智能制造应用型人才培养中德论坛], which roughly translates to Chinese-German Symposium on the Education of Practice-Oriented Professionals for Intelligent Production. This symposium took place on March 1st and 2nd, 2017, at the Shanghai Dianji University [上海电机学院] in Shanghai, China. The focus of the symposium was to discuss the requirements imposed by the trend towards a higher degree of automation in the productive industry, fueled by concepts such as Industry 4.0 and Made in China 2025 [中国制造2025]. Although I could only attend the first day of this nice meeting, I think I learned a few interesting things on the topics of Intelligent Production and Education. Here you can find the (Chinese) press release about our university's delegation to this meeting.
Read more: Education of Practice-Oriented Professionals for Intelligent Production
- Written by: Thomas Weise
In an Inductive Program Synthesis (IPS) problem, a set of input/output data examples is given and the task is to find a program which can produce the desired outputs for the given inputs. Recently, researchers from the University of Cambridge and Microsoft Research have submitted a paper to the 5th International Conference on Learning Representations (ICLR'17) on DeepCoder, a new approach to IPS, i.e., to the automatic synthesis of programs. This new technology has goals similar to our work on program synthesis, but achieves them with entirely different means.
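To make the IPS problem statement concrete, here is a minimal brute-force sketch over a tiny made-up DSL of unary integer functions. DeepCoder itself works very differently (it trains a neural network to predict which DSL operations are likely, which then guides the search); the DSL and all names below are illustrative assumptions, not from the paper.

```python
# Toy Inductive Program Synthesis: given input/output examples, search
# for a sequence of DSL primitives that reproduces every example.
from itertools import product

# Hypothetical DSL: each primitive is a named unary integer function.
DSL = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_length=3):
    """Enumerate all primitive sequences up to max_length and return
    the first one that maps every input to its desired output."""
    for length in range(1, max_length + 1):
        for names in product(DSL, repeat=length):
            def run(x, names=names):
                for name in names:   # apply the primitives in order
                    x = DSL[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return names
    return None  # no program of the given length fits the examples

# The target f(x) = (x + 1) * 2 is described only by I/O pairs:
print(synthesize([(1, 4), (2, 6), (5, 12)]))  # -> ('inc', 'double')
```

Even this toy search grows exponentially with program length, which is exactly the cost that learned guidance like DeepCoder's tries to cut down.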
Read more: Algorithm Synthesis: Deep Learning and Genetic Programming
- Written by: Thomas Weise
Today, we released the new version 0.8.9 of our optimizationBenchmarking.org software framework for automating significant parts of the research in the fields of optimization, machine learning, and operations research. This new version comes much closer to our goal of reducing the workload of researchers and practitioners during the development of algorithms, allowing them to spend more time on thinking.
Besides all the functionality offered by the previous releases, it introduces a new process for obtaining high-level conclusions about problem hardness and algorithm behaviors. This process takes the raw data from experiments together with meta-information about algorithm setups and problem instances as input. It applies a sequence of machine learning technologies, namely curve fitting, clustering, and classification, to find which features make a problem instance hard and which algorithm setup parameters cause which algorithm behavior. We just submitted an article about this new process for review.
Our software provides a set of very general tools for algorithm performance analysis (e.g., plotting runtime/quality and ECDF charts) as well as our new process. Since it takes data in the form of text files, it can analyze the results of any optimization algorithm implemented in any programming language applied to any optimization problem. It produces human-readable reports either as LaTeX/PDF documents or as XHTML.
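The ECDF statistic mentioned above can be sketched in a few lines, assuming we log, for each independent run, the time at which it first reached a target quality (infinity if it never did). The function name and data below are illustrative, not the optimizationBenchmarking.org API.

```python
# ECDF ("empirical cumulative distribution function") over runtimes:
# the value at time t is the fraction of runs that reached the target
# quality within time t, so the curve rises from 0 towards 1.
import math

def ecdf(success_times, t):
    """Fraction of runs whose success time is <= t."""
    return sum(1 for s in success_times if s <= t) / len(success_times)

# Times (e.g., in function evaluations) at which five runs first hit
# the target quality; one run never reached it.
times = [120, 300, 300, 4500, math.inf]
print(ecdf(times, 100))    # -> 0.0: no run has succeeded yet
print(ecdf(times, 300))    # -> 0.6: three of five runs succeeded
print(ecdf(times, 10**6))  # -> 0.8: the failed run never counts
```

Plotted over t, such a curve lets you compare algorithms by how quickly (and how often) they solve a whole set of problem instances.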
The software provides a user-friendly, web-based GUI which runs either on your local machine or on a server in your lab. The software comes in three flavors:
- as a Java executable, requiring that several tools are installed (Java, R with several packages, a LaTeX system installation),
- as a Docker image, which only requires an installation of Docker. It can be started directly under Linux, Windows, and Mac OS with the single command
docker run -t -i -p 9999:8080/tcp optimizationbenchmarking/evaluator-gui
and is then used by browsing to http://localhost:9999 (at first start, the image is downloaded), and
- as a command-line program without GUI for integration into other software environments (with the same installation requirements as the GUI).
- Written by: Thomas Weise
The main area of research of our institute is metaheuristic optimization, i.e., the study of algorithms that can find good approximate solutions for computationally hard problems in feasible time.
The Traveling Salesman Problem (TSP) is an example of such an optimization task. In a TSP, n cities and the distances between them are given, and the goal is to find the shortest round-trip tour that goes through each city exactly once and then returns to its starting point. The TSP is NP-hard, meaning that any currently known algorithm for finding the exact globally best solution of an arbitrary TSP instance will need a time which grows exponentially with n in the worst case. And this is unlikely to change. In other words, if a TSP instance has n cities, then the worst-case runtime of every known exact TSP solver grows exponentially with n. Well, today we have algorithms that can exactly solve a wide range of problems with tens of thousands of nodes and approximate the solution of million-node problems with an error of less than one part per thousand. This is pretty awesome, but the worst-case runtime to find the exact (optimal) solution is still exponential.
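To see where the exponential cost comes from, here is a minimal brute-force exact solver: it checks all (n-1)! round trips that start at city 0, so its runtime explodes with n. The distance matrix is made up for illustration; real exact solvers are vastly smarter, but still exponential in the worst case.

```python
# Brute-force exact TSP: enumerate every tour and keep the shortest.
from itertools import permutations

def tour_length(tour, dist):
    """Length of the closed round trip visiting the cities in `tour`."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def exact_tsp(dist):
    """Return the shortest round trip over all cities (brute force)."""
    n = len(dist)
    best = None
    for rest in permutations(range(1, n)):       # fix city 0 as start
        tour = [0] + list(rest)
        if best is None or tour_length(tour, dist) < tour_length(best, dist):
            best = tour
    return best, tour_length(best, dist)

# Four cities with a symmetric, made-up distance matrix:
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(exact_tsp(dist))  # -> ([0, 1, 3, 2], 18)
```

For n = 4 this enumerates only 6 tours, but for n = 20 it would already be about 1.2 * 10^17 tours, which is why metaheuristics settle for good approximate tours instead.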
What researchers try to do is to develop algorithms that can find tours which are as short as possible and do so as fast as possible. Here, let us look at the meaning of "fast". Fast has something to do with time. In other words, in order to know if an algorithm A is fast, a common approach is to measure the time that it needs to find a solution of a given quality. If that time is shorter than the time another algorithm B needs, we can say that A is faster than B. This sounds rather trivial, but if we take a closer look, it is actually not: There are multiple ways to measure time, and each has distinctive advantages and disadvantages.
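Two of the common "clocks" for such comparisons can be sketched side by side, assuming a toy random-search optimizer: wall-clock time, which depends on the hardware and system load, and the number of objective function evaluations (FEs), which is machine-independent but blind to the cost of a single evaluation. The optimizer and objective below are illustrative assumptions, not from the article.

```python
# Measure "how fast" a run reaches a target quality with two clocks:
# wall-clock seconds (time.perf_counter) and counted evaluations.
import random
import time

def sphere(x):
    """Toy objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(target, max_fes, seed=1):
    """Sample random points; report both clocks on success."""
    rng = random.Random(seed)
    start = time.perf_counter()
    for fes in range(1, max_fes + 1):
        x = [rng.uniform(-1, 1) for _ in range(3)]
        if sphere(x) <= target:           # reached the quality target
            wall = time.perf_counter() - start
            return fes, wall
    return None                           # budget exhausted, no success

fes, wall = random_search(target=0.05, max_fes=100_000)
print(f"reached target after {fes} evaluations, {wall:.4f}s wall time")
```

Rerunning this on a faster machine changes the wall-clock figure but not the FE count, while an objective that is cheap to evaluate in one implementation and expensive in another distorts the picture the other way around: exactly the trade-off the article discusses.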
Read more: Measuring the Runtime of (Optimization) Algorithms