Benchmarking, i.e., the empirical comparison of algorithm performance, is usually the only feasible way to find out which algorithm is suited for a given problem. Benchmarking consists of two steps: first, the algorithms are applied to the benchmark problems and performance data is collected; second, the collected data is evaluated. There is little guidance for the first step and a lack of tool support for the second, so researchers investigating new problems need to implement both data collection and evaluation themselves.

In our new paper "From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking," we make the case for defining standard directory structures and file formats for the performance data and metadata of experiments with optimization algorithms. Such formats must be easy to read and write and easy to incorporate into existing setups. If commonly accepted formats existed and researchers actually used them, more general tools could emerge: everyone who builds an evaluation tool would have a real incentive to support the common format right away and, perhaps, to publish the tool for others to use. Researchers would then no longer need to implement their own evaluation programs.

We derive suitable formats by analyzing what existing tools do and what information they need. We then present a general tool, our optimizationBenchmarking.org framework, including an open-source library for reading and writing data in our format. Since our framework obtains its data from a general file format, it can assess the performance of arbitrary algorithms implemented in arbitrary programming languages on arbitrary single-objective optimization problems.
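To make the idea concrete, here is a minimal sketch of what such a standardized data layout could look like in practice. Everything in it, i.e., the directory layout results/<algorithm>/<instance>/, the file name run_01.txt, and the two-column FEs/best-objective-value log format, is an illustrative assumption and not the exact format specified in the paper; the point is only that a simple, language-agnostic text format is enough for general evaluation tools to consume.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Random;

/**
 * Minimal sketch of a run logger for a hypothetical standardized layout:
 * results/<algorithm>/<instance>/run_<id>.txt, where each data line holds
 * "consumed FEs <tab> best objective value so far". This is an illustration
 * of the concept, not the exact format from the paper.
 */
public class RunLogger {

  public static void main(String[] args) throws IOException {
    Path dir = Paths.get("results", "myAlgorithm", "instance01");
    Files.createDirectories(dir);

    Random rnd = new Random(42L);
    double bestF = Double.POSITIVE_INFINITY;

    try (PrintWriter out = new PrintWriter(
        Files.newBufferedWriter(dir.resolve("run_01.txt")))) {
      // metadata header as comment lines, so evaluation tools can skip it
      out.println("# algorithm: myAlgorithm");
      out.println("# instance: instance01");
      out.println("# columns: FEs\tbestF");

      for (int fes = 1; fes <= 1000; fes++) {
        double f = rnd.nextDouble(); // stand-in for a real objective evaluation
        if (f < bestF) {             // log improvements only, to keep files small
          bestF = f;
          out.println(fes + "\t" + bestF);
        }
      }
    }
  }
}
```

Logging only improvements keeps the files small while still allowing an evaluation tool to reconstruct the full progress curve of each run, regardless of the language or machine the algorithm itself was implemented and executed on.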

Thomas Weise. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking. In Newton Howard, Yingxu Wang, Amir Hussain, Freddie Hamdy, Bernard Widrow, and Lotfi A. Zadeh, editors, Proceedings of the 16th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC'17), July 26-28, 2017, University of Oxford, Oxford, UK, pages 490-497. Los Alamitos, CA, USA: IEEE Computer Society Press, ISBN: 978-1-5386-0770-1.
doi:10.1109/ICCI-CC.2017.8109794