Call for Papers

Special Issue on Benchmarking of Computational Intelligence Algorithms

Computational Intelligence – An International Journal published by Wiley Periodicals Inc.
http://iao.hfuu.edu.cn/bocia-ci-si

 

Computational Intelligence (CI) is a huge and expanding field which is rapidly gaining importance, attracting more and more interest from both academia and industry. It includes a wide and ever-growing variety of optimization and machine learning algorithms, which, in turn, are applied to an even wider and faster growing range of problem domains. For all of these domains and application scenarios, we want to pick the best algorithms. Actually, we want to do more: we want to improve upon the best algorithm. This requires a deep understanding of the problem at hand, the performance of the algorithms we have for that problem, the features that make instances of the problem hard for these algorithms, and the parameter settings for which the algorithms perform best. Such knowledge can only be obtained empirically, by collecting data from experiments, by analyzing this data statistically, and by mining new information from it. Benchmarking has been the engine driving research in the fields of optimization and machine learning for decades, yet its potential has not been fully explored. Benchmarking the algorithms of Computational Intelligence is an application of Computational Intelligence itself! This special issue of the EI/SCI-indexed Computational Intelligence journal published by Wiley Periodicals Inc. solicits novel contributions from this domain according to the topics of interest listed below.

Here you can download the Call for Papers (CfP) in PDF format and here as a plain text file.


Today, Prof. Thomas Weise joined the COnfiguration and SElection of ALgorithms group (COSEAL). COSEAL is an international group of researchers with a focus on algorithm selection and algorithm configuration. These two topics are very closely related to our research direction Rigorous Performance Analysis of Algorithms, in particular to the Optimization Benchmarking project.

In order to find the best way to solve an optimization problem, we need to use the best algorithm and the best setup of this algorithm. Algorithm setups have both static and dynamic components, both of which are vital for algorithm performance. Static parameters do not change during the algorithm run; common examples are the (static) population sizes of Evolutionary Algorithms or the tabu tenure of Tabu Search. Dynamic parameters can change during the run, either according to a fixed schedule (like the temperature in Simulated Annealing) or in a self-adaptive way (like the mutation step size in a (1+1) Evolution Strategy with the 1/5th rule). We may also choose the algorithm to apply based on some problem features or dynamically assign runtime to different algorithms based on their progress, i.e., construct some form of meta-algorithm. COSEAL aims to find good algorithm configurations, both dynamic and static, as well as algorithm selection methods.
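As an illustration of the difference between static and dynamic parameters, here is a minimal Python sketch (an illustrative example of mine, not code from COSEAL or the Optimization Benchmarking project) of a (1+1) Evolution Strategy using a per-iteration variant of the 1/5th success rule on a toy sphere function: the dimension and the evaluation budget are static parameters fixed before the run, while the mutation step size sigma is adapted dynamically.

```python
import random


def sphere(x):
    """Toy objective to minimize: the sum of squares."""
    return sum(xi * xi for xi in x)


def one_plus_one_es(dim=10, budget=10_000, sigma=1.0, seed=1):
    """(1+1) Evolution Strategy with a per-iteration 1/5th success rule.

    dim and budget are static parameters; sigma is a dynamic parameter
    that is adapted during the run.
    """
    rng = random.Random(seed)
    parent = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    f_parent = sphere(parent)
    for _ in range(budget):
        # Gaussian mutation with the current step size sigma.
        child = [xi + sigma * rng.gauss(0.0, 1.0) for xi in parent]
        f_child = sphere(child)
        if f_child <= f_parent:        # success: accept child, enlarge sigma
            parent, f_parent = child, f_child
            sigma *= 1.5
        else:                          # failure: shrink sigma
            sigma *= 1.5 ** -0.25      # keeps sigma stable at ~1/5 success rate
    return parent, f_parent, sigma


if __name__ == "__main__":
    _, f_best, final_sigma = one_plus_one_es()
    print(f"best f = {f_best:.3e}, final sigma = {final_sigma:.3e}")
```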

Where does benchmarking come into play? First, better techniques for benchmarking may lead to better policies for dynamic algorithm configuration. Second, in order to know whether static or dynamic algorithm parameters and algorithm selection methods work well over different problems, we need benchmarking. Third, benchmarking may tell us the strengths and weaknesses of a setup or algorithm.

Due to this close relationship, several other members of COSEAL have also joined the programme committee and even the chairing team of our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA).


The Institute of Applied Optimization welcomes Dr. Zhen Liu [刘震], who today has officially joined our team as a researcher. Before that, he was an Assistant Researcher at the Hefei Institute of Physical Science [合肥物质科学研究院] of the Chinese Academy of Sciences [中国科学院]. He received his PhD in 2013 from the University of Science and Technology of China (USTC) [中国科学技术大学], which is also located here in Hefei [合肥]. Dr. Liu is an expert in the fields of systems modeling, error analysis, and systems optimization, with application experience in optics and chemistry.

We are very happy to have Dr. Liu in our team and look forward to working together on many interesting applications of optimization methods.


Today, I attended the Shanghai Smart Home Technology (SSHT) [上海国际智能家居展览会] and Shanghai Intelligent Building Technology (SIBT) [上海国际智能建筑展览会] exhibitions in Shanghai. SIBT is an event for innovative intelligent building technologies and solutions related to the Internet of Things (IoT), cloud computing, and big data for building energy efficiency, energy management systems, and intelligent housing in general. SSHT, on the other hand, focuses more on home automation technologies, technical integration, and cross-sector business collaboration in order to make our homes more convenient.

The exhibition was maybe slightly larger than the other fairs that I have visited this year and also had a higher visitor density. Also, almost all presenters at almost all booths were permanently engaged in discussions with an interested audience. It was clearly visible that there currently is a huge market for smart homes and smart buildings with high consumer interest.

Many different companies presented diverse products. In my perception, the largest share of products was automation tools for end consumers, such as automated switches that can be controlled via a mobile phone app, multi-functional sockets and switches with LCD displays, actuators such as automated curtain openers (again, perhaps controlled via an app), intelligent locking and surveillance systems, and integrated home entertainment systems. There were providers of electronic building blocks for IoT applications, which integrate and control other systems and serve as a basis for developing new applications, and there were network infrastructure providers. Fewer, but present, were vendors of building control and management software as well as home robotics.

I would say that the gravitational center of the exhibits was the convenience of the user. If we leave aside the new entertainment systems, I think most of the exhibits were about better interfaces: nicer switches and apps with which you can turn basically everything in your home on and off. The fraction of truly intelligent systems, e.g., systems that try to learn from and predict user behavior, or that try to anticipate future events or changing environment conditions and adjust the home or building behavior to them, seemed to be smaller (but then again, I cannot really understand Chinese, so I may have overlooked some interesting exhibits).

Given the huge audience at the fairs and the market they represent, I would say that there must be a huge future for intelligent homes and buildings. I saw some product frameworks which aim at higher-level control and management of buildings and their components. Given the already existing network infrastructure of the components, especially for large building compounds or office buildings (of which we have many in China), intelligent technologies for water and energy saving, lighting or elevator control, automated cleaning robots, and maintenance scheduling will surely have a great market. And, of course, optimization, operations research, and machine learning could enable and improve them.


Optimization and Operations Research are about finding good solutions for computationally hard problems. Sometimes we cannot guarantee to find the best solutions, because finding these (or the proof that they are the best) may take too long. Then we look for good approximate solutions to be found within reasonable time, i.e., our algorithm's performance is measured both in solution quality and required runtime (and runtime may be measured in different ways). Often we have algorithms that start with an initial (probably bad) solution and improve it over time. This is often even the case in situations where we could actually guarantee to solve the problem perfectly if given enough time (and we may stop earlier once we run out of time, losing the guarantee but at least having a solution). In Machine Learning and Datamining, the situation is quite similar, which is natural since many questions there can also be considered as special optimization tasks. In both fields, algorithms are often randomized, meaning that they may behave differently or return different results even if executed twice with the same input data.
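To make this anytime behavior concrete, below is a minimal Python sketch (purely illustrative; the problem and algorithm are toy choices of mine, not taken from the text above) of a randomized hill climber on the OneMax problem that logs (consumed function evaluations, best quality so far) pairs, i.e., exactly the two performance dimensions, runtime and solution quality, that benchmarking has to consider.

```python
import random


def onemax(bits):
    """Toy maximization problem: count the ones in a bit string."""
    return sum(bits)


def random_local_search(n=100, budget=10_000, seed=1):
    """A randomized hill climber that records its progress over time.

    Returns a list of (consumed evaluations, best quality so far) pairs,
    i.e., the raw data of an "anytime" performance curve.
    """
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n)]
    best_f = onemax(current)
    log = [(1, best_f)]
    for evals in range(2, budget + 1):
        i = rng.randrange(n)
        current[i] ^= 1              # flip one randomly chosen bit
        f = onemax(current)
        if f >= best_f:              # accept improvements and ties
            if f > best_f:
                log.append((evals, f))
            best_f = f
        else:
            current[i] ^= 1          # reject: undo the flip
    return log


if __name__ == "__main__":
    for evals, quality in random_local_search()[:5]:
        print(evals, quality)
```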

From the above scenario, it becomes clear that it may not be easy to know which algorithm is actually the best for a given scenario. On the one hand, we need to use some reasonably robust statistics. On the other hand, we may also need to clarify what "best" actually means, since we have at least two dimensions of performance (and reducing algorithm performance to a single point measurement may lead to wrong conclusions). These are just two of the problems we face when evaluating new methods for optimization or Machine Learning. There are many more issues: How can we compare multi-objective optimization methods (where we have more than one quality dimension)? How can we compare parallel algorithms or programs running in a cloud or on a cluster? How can we compare algorithms on noisy problems, or in the face of uncertainties and the need for robustness?
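As one example of a reasonably robust statistic for such comparisons, the sketch below (Python with SciPy; the two result samples are hypothetical numbers invented purely for illustration) uses the rank-based Mann-Whitney U test on the final objective values of two randomized algorithms, rather than reducing each algorithm to a single mean value.

```python
from scipy.stats import mannwhitneyu

# Hypothetical final objective values (minimization) of two randomized
# algorithms, A and B, each run 20 times on the same problem instance.
results_a = [0.12, 0.10, 0.15, 0.11, 0.14, 0.13, 0.09, 0.16, 0.12, 0.10,
             0.11, 0.13, 0.15, 0.12, 0.14, 0.10, 0.13, 0.11, 0.12, 0.15]
results_b = [0.14, 0.16, 0.13, 0.18, 0.15, 0.17, 0.14, 0.19, 0.16, 0.15,
             0.17, 0.14, 0.18, 0.16, 0.15, 0.17, 0.13, 0.16, 0.18, 0.14]

# The Mann-Whitney U test compares the two samples by ranks, so it does not
# assume normally distributed results and is robust against outliers.
stat, p_value = mannwhitneyu(results_a, results_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The observed difference is unlikely to be due to chance alone.")
else:
    print("No statistically significant difference at the 5% level.")
```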

There are many more opportunities for interesting and good research: How can we visualize the results when comparing many algorithms on many problems? Can we model algorithm performance to guess how a method would perform on a new problem? Can we have simple theoretical frameworks which give us mathematically supported guidelines, limits, boundaries, or estimates for benchmarking? Or can we build automated approaches to answer high-level questions like: What features make a problem hard for a set of algorithms? Which parameters make an algorithm work best? Are there qualitatively different classes of problems and algorithms for a certain problem type?

With our International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA), we try to provide a platform to discuss such topics, a setting where researchers can exchange thoughts on how to compare and analyze the performance of Computational Intelligence (CI) algorithms. We generously consider optimization, Operations Research, Machine Learning, Datamining, and Evolutionary Computation all as sub-fields of CI. The workshop will take place at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018) on March 29-31, 2018 in Xiamen, China.

If you are a researcher working on any related topic, we would be very happy if you would consider submitting a paper by December 1, 2017, via the submission page. Here you can download the Call for Papers (CfP) in PDF format.
