<p><em>Science Blog, Institute of Applied Optimization</em> · September 17, 2018 · Thomas Weise</p><p><img src="http://iao.hfuu.edu.cn/images/publications/icons/applied_soft_computing.gif" /></p><p>Today, our article "Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features" has appeared in the <em><a href="http://www.sciencedirect.com/journal/applied-soft-computing">Applied Soft Computing</a></em> journal published by Elsevier. The article describes our research topic <a href="http://iao.hfuu.edu.cn/index.php?option=com_content&view=article&id=66&catid=30">Automating Scientific Research in Optimization</a>. It may well mark the first contribution in which a significant part of the high-level work of a researcher in the fields of <a href="http://iao.hfuu.edu.cn/index.php?option=com_content&view=article&id=25&catid=30">optimization</a> and machine learning is automated by a process that applies several machine learning steps.</p>
<p style="margin-left: 3em;">Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, and Ke Tang. Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features. <em>Applied Soft Computing Journal (ASOC)</em>, 73:366–382, December 2018.<br />doi:<a href="http://doi.org/10.1016/j.asoc.2018.08.030">10.1016/j.asoc.2018.08.030</a> / <a href="https://authors.elsevier.com/a/1Xkq85aecSZc67">share link</a> (valid until November 6, 2018)</p>
<p>In the field of heuristic optimization, we aim to find good solutions for computationally hard problems. Solving the <a href="http://iao.hfuu.edu.cn/index.php?option=com_content&view=article&id=16&catid=19">Travelling Salesman Problem</a>, for instance, means finding the shortest tour that visits n cities and returns to the starting point. Due to their complexity, such problems often cannot be solved to optimality in feasible time. Algorithms therefore often start with a more or less random initial guess at the solution and then improve it step by step. Performance thus has two dimensions: the runtime we grant the algorithm before we stop it and take the best-so-far result, and the solution quality of that best-so-far result. Since sufficient theoretical tools to assess the performance of such algorithms do not yet exist, researchers conduct many experiments and compare the results. This often means applying many different setups of an algorithm to many different instances of a problem type. Since optimization algorithms are often randomized, multiple repetitions of the experiments are needed. Evaluating such experimental data is not easy. Moreover, as the result of the evaluation, we do not just want to know which algorithm performs best and which problem is hardest: a researcher wants to know why.</p>
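<p>The "anytime" behavior described above — start from a random guess, keep improving, and report the best-so-far result whenever stopped — can be illustrated with a minimal sketch. The code below is <em>not</em> from the article; it is a toy hill climber on a hypothetical five-city Travelling Salesman instance (the distance matrix is made up for illustration), accepting only improving two-city swaps:</p>

```python
import random

# Hypothetical symmetric distance matrix for 5 cities (illustrative only,
# not taken from the article or any benchmark instance).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Total length of the closed tour visiting the cities in the given order."""
    return sum(DIST[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb(steps=2000, seed=1):
    """Start from a random tour; repeatedly try swapping two cities and
    keep the candidate only if it shortens the tour (best-so-far result)."""
    rng = random.Random(seed)  # fixed seed: one "run" of a randomized algorithm
    best = list(range(len(DIST)))
    rng.shuffle(best)
    best_len = tour_length(best)
    for _ in range(steps):
        i, j = rng.sample(range(len(best)), 2)
        cand = best[:]
        cand[i], cand[j] = cand[j], cand[i]  # swap two cities in the tour
        cand_len = tour_length(cand)
        if cand_len < best_len:  # accept only improvements
            best, best_len = cand, cand_len
    return best, best_len

tour, length = hill_climb()
```

<p>Because the search is randomized, a single run like this says little; as the text notes, one would repeat it with many seeds (and many setups and instances) and then face exactly the evaluation problem the article addresses.</p>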