On March 21, I visited the Intermodal Asia 2017 exhibition and conference in Shanghai, China. Intermodal logistics is about transporting goods using several different means of transportation. A typical scenario would be to move products from a factory to a train station by truck, from there by train to a port, by ship to a destination port, by train again to the destination city, and finally by truck to the customer. The conference therefore centers on logistics, on containers and the products surrounding them, and on how to transport them by train and ship. There were many exhibiting companies, presenting products such as different container types, anti-corrosive coatings, container locks, container spare parts, sharing and leasing services for containers, management software, wood, rubber, and metal products used in the context of transportation, classification and quality control for containers and their production, as well as intermodal logistics services and transportation-centered newspapers and journals. In the innovation and technology theaters, speakers presented the newest trends and technological advances in these fields. I found this conference very inspiring, as I learned about many new aspects of the intermodal logistics industry and had a few interesting discussions.

Currently, two of the leading industrial nations, Germany and China, are pushing their industries toward a higher degree of automation. Automation is among the key technologies behind concepts such as Industry 4.0 and Made in China 2025. The goal is not automation in the traditional sense, i.e., the fixed and rigid implementation of static processes that are to be repeated millions of times in exactly the same way. Instead, decisions should be automated: the machinery carrying out processes in production and logistics should dynamically decide what to do based on its environment and its current situation. In other words, these machines should become intelligent.

As a researcher in optimization and operations research, this idea is not new to me. Actually, this is exactly the goal of our work, and it has been for the past seven decades – with one major difference: the level at which the automated, intelligent decision process takes place. In this article, I want to briefly discuss my point of view on this matter.

Recently, I had the chance to attend the "Chinesisch-Deutsches Symposium zur Ausbildung praxisorientierter Fachkräfte in Intelligenter Produktion" [智能制造应用型人才培养中德论坛], which roughly translates to Chinese-German Symposium on the Education of Practice-Oriented Professionals for Intelligent Production. This symposium took place on March 1 and 2, 2017, at the Shanghai Dianji University [上海电机学院] in Shanghai, China. Although I could only attend the first day of this nice meeting, I think I learned a few interesting things about the topics of Intelligent Production and education.

In an Inductive Program Synthesis (IPS) problem, a set of input/output examples is given, and the task is to find a program that produces the desired output for each given input. Recently, researchers from the University of Cambridge and Microsoft Research submitted a paper on DeepCoder, a new approach to IPS, i.e., to the automatic synthesis of programs, to the 5th International Conference on Learning Representations (ICLR'17). This new technology has goals similar to our work on program synthesis, but achieves them with entirely different means.
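To make the problem setting concrete, here is a minimal, hypothetical sketch of the simplest possible approach: enumerative synthesis over a tiny, made-up DSL of list-transforming primitives. This is not DeepCoder's method – DeepCoder trains a neural network to predict which DSL primitives the target program likely uses and lets those predictions guide the search – but it illustrates what "finding a program consistent with the examples" means.

```python
# Minimal sketch of inductive program synthesis by enumerative search over a
# tiny, hypothetical DSL of list-to-list primitives. Not DeepCoder's method.
from itertools import product

# Hypothetical DSL: each primitive maps a list of ints to a list of ints.
PRIMITIVES = {
    "reverse":  lambda xs: list(reversed(xs)),
    "sort":     lambda xs: sorted(xs),
    "double":   lambda xs: [2 * x for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def run(names, xs):
    """Apply the primitives named in `names` to `xs`, left to right."""
    for name in names:
        xs = PRIMITIVES[name](xs)
    return xs

def synthesize(examples, max_len=3):
    """Return the first primitive sequence consistent with all examples."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            if all(run(names, inp) == out for inp, out in examples):
                return names
    return None

# Input/output examples: keep only the non-negative numbers, sorted.
examples = [([3, -1, 2], [2, 3]), ([-5, 0, 4, 1], [0, 1, 4])]
print(synthesize(examples))  # a consistent program, e.g. ('sort', 'drop_neg')
```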

The main research area of our institute is metaheuristic optimization, i.e., the development of algorithms that can find good approximate solutions for computationally hard problems within feasible time.

The Traveling Salesman Problem (TSP) is an example of such an optimization task. In a TSP, n cities and the distances between them are given, and the goal is to find the shortest round-trip tour that visits each city exactly once and then returns to its starting point. The TSP is NP-hard, meaning that any currently known algorithm for finding the exact, globally best solution of an arbitrary TSP instance needs a runtime that grows exponentially with n in the worst case. And this is unlikely to change. In other words, if a TSP instance has n cities, then the worst-case runtime of any exact TSP solver is in O(2^n). Still, today we have algorithms that can exactly solve a wide range of instances with tens of thousands of cities and approximate the solutions of million-city instances with an error of less than one part per thousand. This is pretty awesome, but the worst-case runtime to find the exact (optimal) solution is still exponential.
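To illustrate the gap between exact solving and fast approximation, here is a small sketch with made-up city coordinates. It contrasts brute-force enumeration of all tours with the simple nearest-neighbour heuristic; neither is meant to represent the state-of-the-art solvers mentioned above.

```python
# Minimal sketch: exact TSP solving by brute force (factorial time) versus
# the nearest-neighbour heuristic (fast, but only approximate).
# The city coordinates below are made up for illustration.
from itertools import permutations
from math import dist

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]

def tour_length(order):
    """Length of the round trip visiting the cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tsp():
    """Try all (n-1)! tours starting at city 0: optimal, but exponential."""
    best = min(permutations(range(1, len(cities))),
               key=lambda p: tour_length((0,) + p))
    return (0,) + best

def nearest_neighbour():
    """Greedy heuristic: repeatedly move to the closest unvisited city."""
    unvisited, tour = set(range(1, len(cities))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

exact, greedy = exact_tsp(), nearest_neighbour()
print("exact:    ", exact, round(tour_length(exact), 2))
print("heuristic:", greedy, round(tour_length(greedy), 2))
```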

What researchers try to do is develop algorithms that find tours which are as short as possible, and do so as fast as possible. Here, let us look at what "fast" actually means. Fast has something to do with time. In other words, in order to know whether an algorithm A is fast, a common approach is to measure the time it needs to find a solution of a given quality. If that time is shorter than the time another algorithm B needs, we can say that A is faster than B. This sounds rather trivial, but if we take a closer look, it actually is not: there are multiple ways to measure time, and each has distinct advantages and disadvantages.
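As a small illustration of this way of measuring "fast", the following sketch records the wall-clock time an algorithm needs until it first reaches a given target quality. The toy objective function, the target value, and the random-search "algorithm" are made up for the example, and wall-clock time is only one of the possible time measures.

```python
# Minimal sketch of one way to measure "fast": the wall-clock time until an
# algorithm first reaches a given target solution quality. The random-search
# "algorithm", objective, and target below are purely illustrative.
import random
import time

def objective(x):
    # Toy minimization problem: quality of a candidate solution x.
    return (x - 3.0) ** 2

def time_to_target(algorithm, target, max_evals=100_000):
    """Return the wall-clock seconds until the best objective value seen
    drops to `target` or better, or None if that never happens."""
    start = time.perf_counter()
    best = float("inf")
    for candidate in algorithm(max_evals):
        best = min(best, objective(candidate))
        if best <= target:
            return time.perf_counter() - start
    return None

def random_search(max_evals):
    # Stand-in for "algorithm A": sample candidate solutions at random.
    for _ in range(max_evals):
        yield random.uniform(-10.0, 10.0)

print(time_to_target(random_search, target=1e-4))
```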
