Optimization and Operations Research are about finding good solutions for computationally hard problems. Sometimes we cannot guarantee to find the best solutions, because finding them (or proving that they are best) may take too long. Then we look for good approximate solutions that can be found within reasonable time, i.e., our algorithm's performance is measured both in solution quality and in required runtime (and runtime may be measured in different ways). Often we have algorithms that start with an initial (probably bad) solution and improve it over time. This is often even the case in situations where we could actually guarantee to solve the problem perfectly if given enough time (and we may stop earlier once we run out of time, losing the guarantee but at least having a solution). In Machine Learning and Datamining, the situation is quite similar, which is natural since many questions there can also be considered as special optimization tasks. In both fields, algorithms are often randomized, meaning that they may behave differently or return different results even if executed twice with the same input data.
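As a small illustration of this anytime, randomized setting, consider the sketch below (the toy "OneMax" problem and all function names are just assumptions made for the example, not tied to any particular benchmark): a hill climber that improves a best-so-far solution until its time budget runs out, records when improvements happened, and may return different results for different random seeds.

```python
import random
import time

def anytime_hill_climber(evaluate, n_bits, budget_seconds, seed=None):
    """Toy anytime optimizer: keeps a best-so-far bit string, flips random
    bits, accepts improvements, and stops when the time budget is spent.
    Returns the best solution, its quality, and a (runtime, quality) trace."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    best_quality = evaluate(best)
    trace = [(0.0, best_quality)]
    start = time.perf_counter()
    while time.perf_counter() - start < budget_seconds:
        candidate = best[:]
        i = rng.randrange(n_bits)
        candidate[i] = 1 - candidate[i]        # flip one random bit
        quality = evaluate(candidate)
        if quality > best_quality:             # keep improvements only
            best, best_quality = candidate, quality
            trace.append((time.perf_counter() - start, best_quality))
    return best, best_quality, trace

# Example: maximize the number of ones ("OneMax"). Two runs with different
# seeds may return different results -- the algorithm is randomized.
if __name__ == "__main__":
    onemax = lambda bits: sum(bits)
    for seed in (1, 2):
        _, quality, trace = anytime_hill_climber(onemax, 64, 0.1, seed=seed)
        print(f"seed={seed}: best quality {quality}, {len(trace)} improvements")
```

The (runtime, quality) trace is exactly what makes performance two-dimensional: we can ask how good the solution is after any given amount of time, not just at the end.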

From the above scenario, it becomes clear that it may not be easy to know which algorithm is actually the best for a given scenario. On the one hand, we need to use some reasonably robust statistics. On the other hand, we may also need to clarify what "best" actually means, since we have at least two dimensions of performance (and reducing an algorithm's performance to a single point measurement may lead to wrong conclusions). These are just two of the problems we face when evaluating new methods for optimization or Machine Learning. There are many more issues: How do we compare multi-objective optimization methods, where we have more than one quality dimension? How do we compare parallel algorithms or programs running in a cloud or cluster? How can we compare algorithms on a problem which is noisy, or in the face of uncertainties and the need for robustness?
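To illustrate the first point, one reasonably robust approach (just one possible choice, sketched here with made-up numbers) is to perform several independent runs per algorithm, compare medians rather than means, and apply a non-parametric test such as the Mann-Whitney U test from SciPy:

```python
# Minimal sketch of a robust comparison of two randomized algorithms,
# assuming we have collected the end-of-run solution qualities of several
# independent runs per algorithm (the numbers below are made up).
from statistics import median
from scipy.stats import mannwhitneyu  # non-parametric two-sample test

runs_a = [102.4, 98.7, 105.1, 99.0, 101.3, 97.8, 103.5, 100.2]
runs_b = [110.9, 108.2, 112.5, 95.1, 111.7, 109.4, 113.0, 107.6]

# Medians are less sensitive to outlier runs than means.
print("median A:", median(runs_a))
print("median B:", median(runs_b))

# The Mann-Whitney U test asks whether one algorithm tends to produce
# better values than the other, without assuming normally distributed results.
stat, p_value = mannwhitneyu(runs_a, runs_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

Even such a simple setup already involves choices (number of runs, which statistic, which test, which significance level) that affect the conclusions we draw.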

There are many more opportunities for interesting and good research: How can we visualize the results when comparing many algorithms on many problems? Can we model algorithm performance to predict how a method would perform on a new problem? Can we have simple theoretical frameworks which give us mathematically supported guidelines, limits, boundaries, or estimates for benchmarking? Or can we build automated approaches to answer high-level questions like: Which features make a problem hard for a set of algorithms? Which parameters make an algorithm work best? Are there qualitatively different classes of problems and algorithms for a certain problem type?

With our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA), we try to provide a platform to discuss such topics, a setting where researchers can exchange thoughts on how to compare and analyze the performance of Computational Intelligence (CI) algorithms. We generously consider optimization, Operations Research, Machine Learning, Datamining, and Evolutionary Computation all as sub-fields of CI. The workshop will take place at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018) on March 29-31, 2018 in Xiamen, China.

If you are a researcher working on any related topic, we would be very happy if you would consider submitting a paper by December 1, 2017, via the submission page. Here you can download the Call for Papers (CfP) in PDF format.

See also: