
The paper submission deadline for the 2019 IEEE Congress on Evolutionary Computation has been extended to January 31, 2019. As a result, we are happy to announce that authors also have two more weeks to submit their papers to our Special Session on Benchmarking of Evolutionary Algorithms for Discrete Optimization (BEADO).

Evolutionary Computation (EC) is a large and expanding field that attracts growing interest from both academia and industry. It encompasses a wide and ever-growing variety of optimization algorithms, which, in turn, are applied to an even wider and faster-growing range of problem domains, including discrete optimization. Benchmarking has been the engine driving research on Evolutionary Algorithms (EAs) for decades, yet its potential has not been fully explored. With our special session, we want to bring together experts on benchmarking, evolutionary computation algorithms, and discrete optimization, and provide a platform for them to exchange findings, explore new paradigms for performance comparison, and discuss issues such as

  • modelling of algorithm behaviors and performance
  • visualizations of algorithm behaviors and performance
  • statistics for performance comparison (robust statistics, PCA, ANOVA, statistical tests, ROC, …)
  • evaluation of real-world goals such as algorithm robustness and reliability
  • theoretical results for algorithm performance comparison
  • comparison of theoretical and empirical results
  • new benchmark problems
  • the comparison of algorithms in “non-traditional” scenarios such as
    • multi- or many-objective domains
    • parallel implementations, e.g., using GPUs, MPI, CUDA, clusters, or running in the cloud
    • large-scale problems or problems where objective function evaluations are costly
    • dynamic problems or problems whose objective functions involve randomized simulations or noise
  • comparative surveys with new ideas on
    • dos and don'ts, i.e., best and worst practices, for algorithm performance comparison
    • tools for experiment execution, result collection, and algorithm comparison
    • benchmark sets for certain problem domains and their mutual advantages and weaknesses

Please visit the special session website for more information. There you can download the BEADO Special Session Call for Papers (CfP) in PDF format or as a plain text file.