
We have two accepted papers at the Genetic and Evolutionary Computation Conference (GECCO'17), which will take place from July 15 to 19, 2017 in Berlin, Germany.

GECCO is the premier event for research in Evolutionary Computation and is organized yearly by SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation. We are looking forward to meeting you at the conference ^_^

Combining Two Local Searches with Crossover: An Efficient Hybrid Algorithm for the Traveling Salesman Problem

The Traveling Salesman Problem (TSP) is one of the most intensively studied optimization problems. Ejection Chain Methods (ECM) and the Lin-Kernighan (LK) heuristic are the state-of-the-art local search (LS) algorithms for solving the TSP. Multi-Neighborhood Search (MNS) is known to be especially suitable for hybridization with Evolutionary Computation (EC) algorithms. It was recently shown that hybridizing two different LS algorithms with each other (LS-LS) can combine their mutual advantages and lead to better overall performance. We introduce the new concept of LS-LS-X hybrids, which combine two different LS algorithms with a crossover operator. We enhance the two best known LS-LS hybrids, namely ECM-LK and LK-MNS, with Order Based Crossover and Heuristic Crossover. We then further hybridize these LS-LS-X algorithms with Evolutionary Algorithms, the most prominent EC method, and obtain highly efficient (memetic) EC-LS-LS-X algorithms. We conduct a large-scale experimental study with many different algorithm setups on all 110 symmetric instances of the TSPLib benchmark set. We find that the new LS-LS-X hybrids perform significantly better than the original LS-LS hybrids as well as their component algorithms, and even outperform several memetic EC-LS-LS and EC-LS algorithm setups. The EC-LS-LS-X hybrids turn out to be the best EA-based TSP solvers by a large margin, both in our experiment and among the very wide range of algorithms available in the TSP Suite.
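To illustrate the crossover component of such a hybrid, here is a minimal sketch of an order-based crossover for TSP tours in permutation encoding. The function name, the random choice of how many positions to sample, and the overall shape are our own assumptions for illustration, not code from the paper:

```python
import random

def order_based_crossover(parent1, parent2, rng=random):
    """Combine two tours (permutations of the same cities) into a child tour.

    A random subset of cities is taken in the order they appear in parent2
    and written into the slots where those cities occur in parent1, so the
    child inherits structure from both parents and remains a valid tour.
    """
    n = len(parent1)
    k = rng.randrange(1, n)                       # how many positions to sample
    positions = sorted(rng.sample(range(n), k))   # random positions in parent2
    selected = [parent2[i] for i in positions]    # those cities, in parent2 order
    sel_set = set(selected)

    child = list(parent1)
    # slots in parent1 that hold one of the selected cities, left to right
    slots = [i for i, city in enumerate(parent1) if city in sel_set]
    for slot, city in zip(slots, selected):
        child[slot] = city                        # impose parent2's order
    return child
```

Since both parents are permutations of the same city set, the child is again a valid tour: the selected cities are merely reordered within the slots they occupied in parent1.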

Modeling Optimization Algorithm Runtime Behavior and its Applications

Assume there were an accurate model that could predict the solution quality an optimization algorithm will reach when granted a certain amount of runtime on a given problem. A number of use cases immediately arise, both dynamic ones, such as algorithm portfolios, and static ones, such as performance comparison. In this paper, we introduce two ways to obtain such models: first by fitting curves and second by training artificial neural networks on benchmark results. We investigate a variety of use cases, including

  1. the interpretation of fitted curves based on the values of their parameters using their fixed semantics,
  2. the classification of performance measurements to algorithms, i.e., the detection of which algorithm was used to solve a given problem just by its runtime behavior,
  3. the prediction of how an algorithm may perform on a yet-unseen problem with yet-unseen features based on its performance on other problems, and
  4. the prediction of future algorithm performance based on models fitted to an (earlier) subset of the measured data.
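To make the curve-fitting idea concrete, here is a minimal sketch under assumptions of our own: the measurements are entirely made up, and the logarithmic model shape is just one plausible choice, not necessarily one of the model families used in the paper:

```python
import numpy as np

# Hypothetical benchmark measurements: best objective value found
# (lower is better) after a given runtime, for one algorithm on one instance.
runtime = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
quality = np.array([100.0, 81.0, 66.0, 55.0, 47.0, 41.0, 36.0])

# Fit the simple model  quality(t) ~= a * log(t) + b  by least squares.
a, b = np.polyfit(np.log(runtime), quality, 1)

def predict(t):
    """Predicted solution quality after runtime t under the fitted model."""
    return a * np.log(t) + b
```

Once fitted, such a model supports the use cases above directly, e.g., extrapolating `predict(128.0)` estimates the quality the algorithm would reach with twice the longest measured runtime.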

We present these use cases using an example experiment on one of the best-known problems, namely the Maximum Satisfiability Problem. We confirm that all use cases are viable, easy to realize, and reliable.

This work was developed as basic research in the context of the Optimization Benchmarking project.