The TSP Suite is a Java project available on GitHub for implementing, testing, benchmarking, evaluating, and comparing algorithms for the Traveling Salesman Problem (TSP). As such, it provides

  1. a set of basic classes to implement TSP solvers,
  2. JUnit tests that can check whether an algorithm produces invalid solutions,
  3. classes for benchmarking the solvers (in a parallel and/or distributed fashion), which collect performance metrics by applying the solvers to the problem instances from the well-known TSPLib benchmark set and store these measurements in text-based log files,
  4. an evaluator component that can load and evaluate these log files to produce summary reports, similar to conference papers or theses (in LaTeX, PDF, or XHTML format), and which can compare the performance of multiple algorithms statistically, and
  5. a large set of implemented TSP solvers that can be extended, modified, or otherwise serve as examples for implementing and developing your own algorithms.

These components make the TSP Suite a useful tool for verifying new ideas for solving TSPs, for testing new algorithmic approaches in general, and for checking whether a new idea (say, a parameter adaptation strategy) actually works well, i.e., whether an algorithm becomes better when using it. The whole project is open-source and licensed under the LGPL.

The evaluation component of the TSP Suite is complemented by the optimizationBenchmarking.org framework, which offers greater flexibility and different features (including more advanced machine learning and deep learning evaluation modules).

Introduction

The Traveling Salesman Problem (TSP) is one of the most well-studied problems in combinatorial optimization [1, 2, 3, 4, 5, 6]. In a TSP, a set of cities and the distances between them are given. The traveling salesman starts his tour in one of the cities and has to visit all other cities exactly once before returning home (to the city he started from). His goal is to visit all cities as quickly as possible, i.e., to find the shortest possible cyclic tour from his origin through all cities and back to his origin. The optimization version of the TSP is NP-hard [4, 7, 8].

The vast amount of literature and solutions available for the TSP as well as the fact that it is quite easy to understand make this problem an ideal candidate to test new algorithms for solving combinatorial problems, to compare algorithms, and to test new "sub-algorithms" like adaptation strategies or search operators. But what do you need to do when developing a new algorithm for TSPs?

Well, first you need an idea. Then you need to implement it in one way or another (and check whether your implementation contains any errors…). But what then? Now the work begins: You run experiments (ideally with well-known benchmark data sets like those from TSPLib [9, 10, 11, 12]) and collect data. You need to be careful about what data to collect and how to collect it without influencing your algorithm. Then you need to evaluate your data, draw diagrams, and use statistical methods to find out what the benefits and drawbacks of your method are. Probably you will do this a couple of times with different configurations of your algorithm. And you will compare it to some results from the literature.

That is a lot of work. Wouldn't it be nice if there was a software environment that can run the experiments for you? That would automatically collect all necessary data from the experiments for you in nice text files? A system that could then also generate a full-fledged report on your experiments that could look like an article or thesis? A report that describes the experimental methodology, includes evaluation results, comparisons with results from the literature, diagrams, and that can even compare results from different experiments? Well, the TSP Suite is doing exactly that.

The goal of the TSP Suite project is to provide an integrated environment that eases the development of TSP solvers. It is an open source Java [13, 14, 15, 16, 17, 18, 19, 20] environment available on GitHub that supports the whole life cycle of research on the TSP, from the implementation of an algorithm up to the moment where a report in one of the common conference or journal formats or a web page is generated that analyzes your algorithm from several different angles. The TSP Suite reduces your work to what really matters: your algorithm and how it performs. You do not need to care about which benchmark instances to choose, how to run your experiments, or what data to collect. You also do not need to care about how to transform the collected data into diagrams or tables.

This document provides a step-by-step guide to using this framework, complemented by the slides of a recent presentation [21] on the TSP Suite. If you are interested in numerical or continuous optimization instead of combinatorial problems (like the TSP), we suggest visiting the COCO website, which presents an approach for COmparing Continuous Optimisers [22, 23, 24]; for Satisfiability problems, you may wish to take a look at UBCSAT [25, 26, 27].

Implementing TSP Solvers

An important step, after developing an idea for an algorithm, is to actually implement it. The TSP Suite ships with a wide variety of algorithms that can serve as examples or be used as the basis for your own method. If you want to implement, e.g., a specialized Evolutionary Algorithm (EA), you may want to extend the existing general EA class.

Generally, when implementing a new TSP solver, there are two basic classes that you will come across:

  1. A TSP solver to be used, tested, executed, and evaluated by the TSP Suite framework must extend the class org.logisticPlanning.tsp.solving.TSPAlgorithm (or any of the pre-defined classes that inherit from it). Detailed information on this class, as well as a detailed guide on how to implement your algorithms, can be found in Section 2.1 below (and here).
  2. Instances of the class org.logisticPlanning.tsp.benchmarking.objective.ObjectiveFunction represent the problem instance information and the objective function. The objective function also automatically collects all performance data of your algorithm, such as progress and runtime information, and writes the log files once a run is completed. Detailed information about this class can be found in Section 2.2 below (and here).

Extending TSPAlgorithm

Here we give a brief introduction to implementing a TSP solver (a more detailed guide is located here). When implementing your algorithm, you proceed as follows (a minimal skeleton combining these steps is sketched after the list):

  1. Create a new class (let's call it MyHeuristic, for instance) that extends org.logisticPlanning.tsp.solving.TSPAlgorithm.

  2. In its constructor, you must call the super-constructor inherited from TSPAlgorithm. It takes a single String parameter: the name of the algorithm.

  3. Your algorithm itself is then implemented by overriding the method solve(ObjectiveFunction). The instance of org.logisticPlanning.tsp.benchmarking.objective.ObjectiveFunction passed into your algorithm represents all problem instance information available to your algorithm and, well, the objective function. We discuss it in a bit more detail in the javadoc documentation and in Section 2.2.

  4. Override the method clone() to perform a deep clone: If your object holds lists or arrays, make sure to clone them, as well as any mutable contents inside.

    The reason is: When conducting experiments with your algorithm, the benchmarking system will use as many processors as available (or permitted) and execute as many runs as possible in parallel (see Section 4.2). It will take the instance of your algorithm that you have provided and it will clone it once for each processor. This way, a separate, independent instance of your algorithm is used for each processor. You do not need to worry about synchronization — just make sure that cloning works properly.

  5. Finally, you need a main method that starts a benchmark run. For this, you may just copy the code below, replacing MyHeuristic with the name of your class.

    public static void main(final String[] args) {
      TSPAlgorithmRunner.benchmark(
          org.logisticPlanning.tsp.benchmarking.instances.Instance.SYMMETRIC_INSTANCES,
          MyHeuristic.class, args);
    }
    An example of the main method that invokes the benchmarking system to test the algorithm MyHeuristic for all symmetric TSPLib instances.
  6. Optional: Your algorithm can have any number and type of parameters that can be set from the command line (and whose values will occur in the log files and in the evaluation documents!). If you want to make use of this option, there are three additional methods you need to override:

    1. The method printParameters should print "parameter-name parameter-description" pairs to the given PrintStream that it gets as a parameter. The user can invoke the benchmarker with the “-help” command line option and will then get this information displayed. You must always call the super-implementation of this method before executing your own code.
    2. The method printConfiguration, too, gets a PrintStream as a parameter. It prints the configuration of the object to that stream in the form of "parameter-name parameter-value" pairs. This method is called by the benchmarking environment when the log files are written. You must always call the super-implementation of this method before executing your own code.
    3. The method configure sets up the values of the parameters of the algorithm. For this purpose, it receives an instance of the class Configuration, which basically is a typed hash map loaded with the parameters from the command line or configuration files. You must always call the super-implementation of this method before executing your own code in order to set the parameters of your algorithm.
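
Putting these steps together, the listing below gives a minimal skeleton of such a class. It is only a sketch under assumptions: the exact signatures (e.g., the return types of solve and clone, the package of the Configuration class, and the getter used in configure) should be checked against the javadoc of TSPAlgorithm, and the parameter myParameter is purely hypothetical.

import java.io.PrintStream;

import org.logisticPlanning.tsp.benchmarking.objective.ObjectiveFunction;
import org.logisticPlanning.tsp.solving.TSPAlgorithm;
import org.logisticPlanning.utils.config.Configuration; // package path assumed

public class MyHeuristic extends TSPAlgorithm {

  /** a purely hypothetical example parameter */
  private int myParameter;

  /** the constructor passes the algorithm's name to the super-constructor */
  public MyHeuristic() {
    super("My Heuristic");
    this.myParameter = 1;
  }

  /** the actual optimization procedure (assumed to return void) */
  @Override
  public void solve(final ObjectiveFunction f) {
    // build and evaluate candidate tours here and check f.shouldTerminate()
    // frequently; see Section 2.2 for a sketch of such a loop
  }

  /** deep clone: one independent copy of the algorithm is used per worker thread */
  @Override
  public MyHeuristic clone() {
    // assumes TSPAlgorithm.clone() does not declare a checked exception
    final MyHeuristic copy = (MyHeuristic) super.clone();
    // deep-copy any arrays, lists, or other mutable members of `copy` here
    return copy;
  }

  /** print "parameter-name parameter-description" pairs; call super first */
  @Override
  public void printParameters(final PrintStream ps) {
    super.printParameters(ps);
    ps.println("myParameter a purely hypothetical example parameter");
  }

  /** print "parameter-name parameter-value" pairs for the log files; call super first */
  @Override
  public void printConfiguration(final PrintStream ps) {
    super.printConfiguration(ps);
    ps.println("myParameter " + this.myParameter);
  }

  /** load parameter values from the command line or configuration file; call super first */
  @Override
  public void configure(final Configuration config) {
    super.configure(config);
    // the exact getter name and signature on Configuration are assumed here
    this.myParameter = config.getInt("myParameter", 1, 1000, this.myParameter);
  }

  // plus the main(String[]) method shown above
}
A minimal skeleton of the class MyHeuristic (signatures partly assumed; see the javadoc for the exact API).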

A more detailed discussion on how to implement TSP solvers is given in the javadoc. The TSP Suite itself provides a wide variety of different solvers (both well-known and experimental algorithms) that can serve as examples when implementing your algorithm, for example an edge greedy heuristic [28, 29], a nearest neighbor heuristic [28, 29], an Evolutionary Algorithm (EA) [30, 31, 32, 33] working directly with permutations and extending an EA blueprint that can be customized in different ways, a population-based Ant Colony Optimization algorithm [34, 35, 36], and a specialized Estimation of Distribution algorithm [37], among others.

After implementing your algorithm, it is vital to test it before conducting any experiment. Testing of algorithms is discussed further down in this document.

Using ObjectiveFunction

The class org.logisticPlanning.tsp.benchmarking.objective.ObjectiveFunction provides a TSP solver with all the information necessary to, well, solve a TSP instance. It can be used to compute the objective values (tour lengths) of complete candidate solutions, given either in path representation (permutations of cities) [38, 39] or adjacency representation [40, 38, 39], via its evaluate and evaluateAdj methods. It also allows you to compute the distance between any two cities and to query the number of cities. ObjectiveFunction internally records the achieved solution quality and the consumed time during a run. It provides this information back to the algorithm via its getCurrentLogPoint and getLastImprovementLogPoint methods as well as via getCopyOfBest. It also represents the termination criterion, by providing the method shouldTerminate that algorithms need to call frequently: when it returns true, their solve method should return. Finally, ObjectiveFunction also provides the random number generator to be used in the algorithms (its seed for each run will be stored in the log files).
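
To make this more concrete, the fragment below sketches the inner loop of a purely hypothetical random-sampling solver driven by ObjectiveFunction. The accessor names n() and getRandom() as well as the long return value of evaluate are assumptions for illustration; the actual names and signatures should be taken from the javadoc.

// fragment of a hypothetical solve(ObjectiveFunction f) implementation
final int n = f.n();                          // number of cities (accessor name assumed)
final java.util.Random rand = f.getRandom();  // framework-provided RNG (accessor name assumed)

final int[] tour = new int[n];
for (int i = 0; i < n; i++) {
  tour[i] = i + 1;                            // canonical tour in path representation (cities 1..n)
}

long best = Long.MAX_VALUE;
while (!f.shouldTerminate()) {                // check the termination criterion frequently
  // shuffle the permutation to obtain a new random candidate tour
  for (int i = n; i > 1; i--) {
    final int j = rand.nextInt(i);
    final int t = tour[i - 1];
    tour[i - 1] = tour[j];
    tour[j] = t;
  }
  final long length = f.evaluate(tour);       // tour length; logging is handled internally
  if (length < best) {
    best = length;
  }
}
// once shouldTerminate() returns true, solve(...) should simply return
A sketch of a solver loop built around ObjectiveFunction (method names partly assumed).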

Testing TSP Solvers

Along with our benchmark code, we also provide a unit testing facility based on JUnit [41, 42, 43]. It is strongly recommended that you run tests with your algorithm before doing real experiments. Of course, our testing facility cannot decide whether your algorithm is right or wrong, but it may be able to detect some programming mistakes. The current version cannot check for concurrency-related errors resulting from a missing or faulty clone() method (see here).

You can easily create a unit test for your algorithm by extending the class TSPAlgorithmSymmetricTest if your algorithm can only solve symmetric TSP instances, or TSPAlgorithmAsymmetricTest if it can also solve asymmetric instances. In your new test, you only need to override the method createAlgorithm to return an instance of your algorithm implementation.

All that this method does is to return an instance of TSPAlgorithm that should be tested – in this case, an instance of your algorithm.

Assume that your algorithm is implemented as class MyHeuristic in package MyPackage. If MyHeuristic is a solver for symmetric TSPs, then the test class will look like the listing below. For solvers that also understand asymmetric instances, just replace TSPAlgorithmSymmetricTest with TSPAlgorithmAsymmetricTest.

package test.junit.MyPackage.MyHeuristic;

import MyPackage.MyHeuristic;

import test.junit.org.logisticPlanning.tsp.solving.algorithms.TSPAlgorithmSymmetricTest;

public class MyHeuristicTest extends TSPAlgorithmSymmetricTest {

  public MyHeuristicTest() {
    super();
  }

  @Override
  protected MyHeuristic createAlgorithm() {
    return new MyHeuristic();
  }
}
A unit test for algorithm MyHeuristic.

A detailed description on the topic is given in the javadoc.

There are two ways to make the TSP Suite classes available in your project: you can either copy all sources into your project or reference the corresponding JARs on its classpath. The latter means either referencing tsp_suite-0.9.8.jar and tsp_suite-0.9.8-tests.jar together with all the JARs in the libs folder provided with this download, or simply including tsp_suite-0.9.8-standalone.jar. How to add a JAR to the classpath is described here.

The test cases for the TSP Suite are included in package test.junit.org.logisticPlanning.tsp. The hierarchy of the packages and classes within this package mirrors the hierarchy of the package org.logisticPlanning.tsp, which contains the true sources of the project. The reports of these unit tests can be found here.

Benchmarking TSP Solvers

Once a TSP algorithm has been implemented and tested, the TSP Suite system can automatically run experiments with it. More precisely, it can automatically apply the algorithm to the problem instances from the well-known TSPLib [9, 10, 11, 12] benchmark set, which you can find documented in this PDF document. During an experiment, your algorithm will be applied to each of the symmetric instances of this benchmark set (and also the asymmetric ones, if you have specified that in the main method) for 30 independent trials taking up to one hour each (under default settings).

Benchmarking Procedure and Parameters

The general way to conduct an experiment is as follows:

  1. First you package all binaries and sources of your algorithm as well as of the TSP Suite into a JAR archive, having your algorithm class (e.g., MyHeuristic) selected as the main class. Let's say this JAR is called myHeuristicJar.jar.
  2. You can run this JAR (and thus, the experiment) by executing the command java -jar myHeuristicJar.jar. As parameters, you can provide, among others (a complete example invocation is shown after this list):
    1. the target directory where to store the log files (“-outputDir=myLogFilesDir”),
    2. the maximum number of threads to use (“-maxThreads=nnn”, limited by the number of available processors),
    3. “-algoLogger="global";ALL” to receive logging information about the benchmarking progress (the collected performance data will always be logged; this parameter just prints the benchmark case names to the console so you can see where the process currently is, see Section 6 for more information on logging),
    4. You may pass in parameters like “-researcherName="first name last name"” to have your name in the log files (see class CreatorInfo for more author-related parameters), as well as
    5. all the parameters that your own algorithm accepts.
    As discussed in Section 6, you can store all parameters in a configuration file and supply them via “-configFile=myConfigFile.txt”.
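
Putting these pieces together, a complete (illustrative) invocation might look like java -jar myHeuristicJar.jar -outputDir=myLogFilesDir -maxThreads=8 -algoLogger="global";ALL -researcherName="first name last name", followed by any parameters defined by your own algorithm.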

You can abort the experiment at any time, if you are only interested in smaller-scale instances. Of course, you can also resume aborted or crashed experiments.

A detailed description on how to run experiments is given in the javadoc.

Parallelization and Distribution

Please be aware that the experiments may take quite some time. Ideally, you should run them on a powerful computer with many processors that you do not need to touch for about a week… …and after that, you will have a nice set of log files. In order to speed up the process, the TSP Suite has parallelization and distribution support for experimentation.

Benchmarking is done by (potentially) multiple threads. Each thread does the following: It iterates through the list of benchmarking instances (from the TSPLib). For each benchmark instance it will define an output folder of the corresponding name inside the overall output folder "outputDir". It will then iterate through the runs that should be done, from 1 to maxRuns, where maxRuns is 30 by default. Each combination of benchmark instance and run index defines a unique path and file name. The thread tries to create the file with the corresponding name via the method createNewFile(), which reports true if a new file was created and false if the file already existed. If the thread managed to create the file as "new", it will immediately begin to perform the run and, after the run is finished, store its results in the file. If the file already existed, the thread will just move to the next run index. If the run index reaches maxRuns, the thread continues with the next benchmark instance. It can be noted that two threads that work on the same benchmark instance in the same process will automatically share the corresponding internal data structures, such as the distance matrix, which reduces memory consumption and cache misses.

This mechanism is a primitive and yet surprisingly robust way to enable parallelization, distribution, and restarting of experiments. For instance, amongst the threads in a single process, no communication is necessary: each thread will automatically find the next run it should perform, and no two threads will try to do the same work.
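
The following small helper illustrates this claiming idea in plain Java. It is a schematic sketch, not the framework's actual code, and the file naming scheme used here is purely illustrative.

import java.io.File;
import java.io.IOException;

/** Schematic sketch of the run-claiming idea (not the framework's actual code). */
final class RunClaimSketch {

  /**
   * The run for (instanceName, runIndex) belongs to whichever thread or process
   * first manages to create the corresponding log file.
   */
  static boolean tryClaimRun(final File outputDir, final String instanceName,
      final int runIndex) throws IOException {
    final File instanceDir = new File(outputDir, instanceName); // one folder per benchmark instance
    instanceDir.mkdirs();
    final File logFile = new File(instanceDir, instanceName + "_" + runIndex + ".txt");
    // createNewFile() atomically creates the file and returns true only if it did
    // not exist before, so no two threads or processes can claim the same run
    return logFile.createNewFile();
  }
}
A schematic sketch of how log-file creation serves as a lock-free way to claim runs.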

Distribution, e.g., when executing the experiments on a cluster, can be achieved in the same way: you only need to let the outputDir parameters of all processes point to the same shared folder. This way, it is possible to let 3 computers with 24 threads each work on the same experiment. Since there are about 110 benchmarking instances for symmetric problems, the total required runtime for a symmetric TSP solver could be reduced to 110 * 30 * 1 hour divided by 3 * 24, i.e., to about two days (unless the solver can solve some problems in less than an hour, in which case the time would be less, too).

Restarting experiments is also easy because of this mechanism: A completed run will have an associated log file of non-zero size. Since the log information is only written once the run is completed (in order to not bias the time measurements), all runs that have been started but are incomplete will have log files of zero size. Thus, if your computer crashes or something, just delete all zero-sized files and you can restart the benchmarker. It will resume its work and not repeat work that has already been done. Under Linux, you could do something like find ~/outputDir/ -type f -empty -delete or find ~/outputDir/ -size 0 -print0 |xargs -0 rm, where outputDir is the output folder you have defined.

Evaluating the Benchmarking Results

Now that you have the log files, you can evaluate them using the evaluator component of the TSP Suite. This component, class org.logisticPlanning.tsp.evaluation.Evaluator, is the main class of the standalone, ready-to-run JAR tsp_suite-0.9.8-evaluator-standalone.jar that can be executed from the command line via java -jar tsp_suite-0.9.8-evaluator-standalone.jar [args].

It will take as arguments the source folder(s) with the log files generated by the benchmarking components and the destination folder to write the report to. It will produce a full-fledged report, consisting of

  1. An introduction, describing the experimental methodology, TSPs in general, the data that we measure, and how we evaluate them.
  2. A second section dedicates one subsection to each experiment found in the source folder(s). The experiments are evaluated separately, which includes comparisons to results from the literature, printing of results, plotting of diagrams displaying different performance measures, etc.
  3. If more than one experiment was detected in the input folder(s), then a third section compares these experiments with each other. We can, for example, use statistical tests to determine which algorithm would produce better end results, or we can check which one would achieve good results faster.

The evaluator accepts a variety of command line parameters; the most important ones are listed below (an example invocation follows the list):

  1. The list of folders with log files is specified as “-source=myLogFilesDir1;myLogFilesDir2;...”.
  2. The folder to receive the report is specified as “-dest=myReportDir”. This should be a new or empty folder (otherwise, you may get errors).
  3. “-documentDriver=…” allows you to choose the output format (driver), i.e., to specify the format in which you want the report to be generated. Currently, we support the following values:
    1. “org.logisticPlanning.utils.document.impl.xhtml.XHTMLDriver” will produce an XHTML [44] document.

    2. “org.logisticPlanning.utils.document.impl.latex.LaTeXDriver” will produce a simple LaTeX [45, 46, 47, 48, 49, 50] report with figures in EPS format [51, 52] and try to compile it to PDF [53, 54].

      Notice that, while the generation of LaTeX documents and the figures should always work without problems, the subsequent translation step to PDF might not. First of all, it may be slow. Second, the TSP Suite system tries to detect an installation of a LaTeX tool chain, such as MiKTeX [55], XeTeX [56], or TeX Live [57]. If you do not have one installed, then it will fail. This translation process, so far, has only been tested with MiKTeX under Windows 7 and may not work with other systems. We are happy about any feedback and bug reports.

      If LaTeX compilation does not work on your computer, or if you are just interested in the LaTeX output and EPS figures and do not wish to compile it to PDF, you can disable the LaTeX compiler by providing parameter “-compileLaTeX=false”.

      The following specialized LaTeX document formats exist:

      1. the “org.logisticPlanning.utils.document.impl.latex.acmConf.ACMConfDriver” using the ACM Conference template [58],
      2. the “org.logisticPlanning.utils.document.impl.latex.ieeeArticle.IEEEArticleDriver” using the IEEE article template [59] – this is the default driver,
      3. the “org.logisticPlanning.utils.document.impl.latex.ieeeConf.IEEEConfDriver” using the IEEE Conference template [59], and
      4. the “org.logisticPlanning.utils.document.impl.latex.springerConf.SpringerConfDriver” using the Springer Lecture Notes in Computer Science-based conference template [60].

      The LaTeX/PDF output may sometimes look a bit ugly. The reason is that LaTeX automatically lays out floating objects (figures and tables), a process that has some technical limitations when it comes to many floats, which are inevitable in our application. This forces us to do things that normally should not be done… …sorry.

  4. If you pass the argument “-evaluatorLogger="global";FINE”, you will receive logging information about the progress regarding loading the data, processing the data, and creating the output (see Section 6 for more information).
  5. If you also want to receive logging information about the progress of the document generation, you should additionally pass in “-documentLogger="global";ALL”.
  6. If your driver is a LaTeX driver and compiling the generated LaTeX document to PDF is enabled, it makes sense to also log the output of the LaTeX tool chain by specifying “-logProcessOutput=true”.
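
For example, a full invocation combining these options might read java -jar tsp_suite-0.9.8-evaluator-standalone.jar -source=myLogFilesDir1;myLogFilesDir2 -dest=myReportDir -documentDriver=org.logisticPlanning.utils.document.impl.xhtml.XHTMLDriver -evaluatorLogger="global";FINE (all on one line).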

Common Command Line Parameters

All programs of the TSP Suite, be it the Experiment Runner, the Evaluator, or the implemented algorithms, support a set of common parameters.

  1. Many modules have specific logging parameters (see, e.g., here and here). If you do not specify these parameters, the programs will look for a global logging setup specified with parameter “-logger” instead.

    Each logger parameter, be it a module-specific one or the global one, follows the format "loggerName";Level;enforceLogLevel (for example, “-logger="global";INFO;true”).

    1. loggerName is a String which is resolved via Logger.getLogger to an instance of java.util.logging.Logger. You can also create loggers programmatically, configure them, and then apply the tools. The default value you should use for this parameter is global.
    2. If Level is specified, it will be parsed with Level.parse. In that case, the configuration will automatically check whether there is any Handler registered with the logger that can log records at that level. If not, the handler's log level will be set appropriately.
    3. If the third parameter is either not specified or evaluates to true, then this process continues at the logger's parent and so on, until at least one handler has been configured to process logging records of that level or until the end of the logger hierarchy through which such records pass has been reached.

    More information is found in the documentation of class _LoggerParser.

  2. Sometimes you have a lot of parameter settings. You can then put them into a configuration file via “-configFile=myConfigFile.txt”, where myConfigFile.txt is the file name (and path) of your configuration file.
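
For illustration, such a configuration file could simply collect the same parameters that would otherwise be given on the command line, one per line (the exact file format is an assumption here; please check the documentation of the Configuration class):

outputDir=myLogFilesDir
maxThreads=8
researcherName=first name last name
algoLogger="global";ALL
A hypothetical configuration file myConfigFile.txt.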

Publications

  • Thomas Weise, Yan Jiang, Qi Qi, and Weichen Liu. A Branch-and-Bound-Based Crossover Operator for the Traveling Salesman Problem. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) 13(3):1-18, Fall 2019
    doi:10.4018/IJCINI.2019070101
    Indexing: ESCI
  • Thomas Weise, Yuezhong Wu, Weichen Liu, and Raymond Chiong. Implementation Issues in Optimization Algorithms: Do they matter? Journal of Experimental & Theoretical Artificial Intelligence (JETAI) 31(4):533–554, 2019.
    doi:10.1080/0952813X.2019.1574908
    Indexing: EI, ESCI, 4区, CCF-C类
  • Thomas Weise, Xiaofeng Wang, Qi Qi, Bin Li, and Ke Tang. Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features. Applied Soft Computing Journal (ASOC), 73:366–382, December 2018.
    doi:10.1016/j.asoc.2018.08.030 / blog entry
    Indexing: EI, WOS:000450124900027, ESCI, 1区
  • Qi Qi, Thomas Weise, and Bin Li. Optimization Algorithm Behavior Modeling: A Study on the Traveling Salesman Problem. In Proceedings of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, pages 861–866. ISBN: 978-1-5386-4362-4. Appeared in the International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the ICACI 2018.
    doi:10.1109/ICACI.2018.8377576 / pdf / slides / workshop website
  • Weichen Liu, Thomas Weise, Yuezhong Wu, and Qi Qi. Combining Two Local Searches with Crossover: An Efficient Hybrid Algorithm for the Traveling Salesman Problem. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'17), July 15-19, 2017, Berlin, Germany, New York, NY, USA: ACM Press, pages 298-305, ISBN: 978-1-4503-4920-8.
    doi:10.1145/3071178.3071201 / paper / slides / blog entry 1 / blog entry 2
    Indexing: EI:20173104005769, CCF-C类
  • Thomas Weise. From Standardized Data Formats to Standardized Tools for Optimization Algorithm Benchmarking. In Newton Howard, Yingxu Wang, Amir Hussain, Freddie Hamdy, Bernard Widrow, and Lotfi A. Zadeh, editors, Proceedings of the 16th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC'17), July 26-28, 2017, University of Oxford, Oxford, UK, pages 490-497. Los Alamitos, CA, USA: IEEE Computer Society Press, ISBN: 978-1-5386-0770-1.
    doi:10.1109/ICCI-CC.2017.8109794 / paper / slides / blog entry
    Indexing: EI
  • Thomas Weise, Yuezhong Wu, Raymond Chiong, Ke Tang, and Jörg Lässig. Global versus Local Search: The Impact of Population Sizes on Evolutionary Algorithm Performance. Journal of Global Optimization 66(3):511-534, November 2016.
    doi:10.1007/s10898-016-0417-5 / pdf
    Indexing: EI:20160902030834, WOS:000386373700007, SCI, 2区
  • Yuezhong Wu, Thomas Weise, and Weichen Liu. Hybridizing Different Local Search Algorithms with Each Other and Evolutionary Computation: Better Performance on the Traveling Salesman Problem. In Proceedings of the 18th Genetic and Evolutionary Computation Conference (GECCO'16), Denver, Colorado, USA, July 20–24, 2016, pages 57-58, New York, NY, USA: Association for Computing Machinery (ACM). ISBN: 978-1-4503-4323-7
    doi:10.1145/2908961.2909001 / pdf / poster
    Indexing: EI:20163702804277, WOS:000383741800029, CCF-C类
  • Weichen Liu, Thomas Weise, Yuezhong Wu, Dan Xu, and Raymond Chiong. An Improved Ejection Chain Method and Its Hybrid Versions for Solving the Traveling Salesman Problem. Journal of Computational and Theoretical Nanoscience 13(6):3601-3610, June 2016
    doi:10.1166/jctn.2016.5189
    Indexing: EI:20164002870059
  • Weichen Liu, Thomas Weise, Yuezhong Wu, and Raymond Chiong. Hybrid Ejection Chain Methods for the Traveling Salesman Problem. In Proceedings of the 10th International Conference on Bio-Inspired Computing – Theories and Applications (BIC-TA'15), Maoguo Gong, Linqiang Pan, Tao Song, Ke Tang, and Xingyi Zhang, editors, September 25-28, 2015, Hefei, Anhui, China, volume 562 of Communications in Computer and Information Science. Berlin/Heidelberg: Springer-Verlag, pages 268-282, ISBN 978-3-662-49013-6.
    doi:10.1007/978-3-662-49014-3_25 / pdf
    Indexing: EI:20160300001732, WOS:000369890300025
  • Dan Xu, Thomas Weise, Yuezhong Wu, Jörg Lässig, and Raymond Chiong. An Investigation of Hybrid Tabu Search for the Traveling Salesman Problem. In Proceedings of the 10th International Conference on Bio-Inspired Computing – Theories and Applications (BIC-TA'15), Maoguo Gong, Linqiang Pan, Tao Song, Ke Tang, and Xingyi Zhang, editors, September 25-28, 2015, Hefei, Anhui, China, volume 562 of Communications in Computer and Information Science. Berlin/Heidelberg: Springer-Verlag, pages 523-537, ISBN 978-3-662-49013-6.
    doi:10.1007/978-3-662-49014-3_47 / pdf
    Indexing: EI:20160300001756, WOS:000369890300047
  • Yuezhong Wu, Thomas Weise, and Raymond Chiong. Local Search for the Traveling Salesman Problem: A Comparative Study. In Proceedings of the 14th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC'15), July 6-8, 2015, Beijing, China, pages 213-220, Los Alamitos, CA, USA: IEEE Computer Society Press, ISBN: 978-1-4673-7289-3.
    doi:10.1109/ICCI-CC.2015.7259388 / pdf
    Indexing: EI:20161202119039, WOS:000380466100034
  • Yan Jiang, Thomas Weise, Jörg Lässig, Raymond Chiong, and Rukshan Athauda. Comparing a Hybrid Branch and Bound Algorithm with Evolutionary Computation Methods, Local Search and their Hybrids on the TSP. In Proceedings of the IEEE Symposium on Computational Intelligence in Production and Logistics Systems (CIPLS'14), Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI'14), Orlando, FL, USA: Caribe Royale All-Suite Hotel and Convention Center, December 9-12, 2014, pages 148-155. Los Alamitos, CA, USA: IEEE Computer Society Press. ISBN 978-1-4799-5375-2.
    doi:10.1109/CIPLS.2014.7007174 / pdf
    Indexing: EI:20150700522268, WOS:000380487400021
  • Thomas Weise, Raymond Chiong, Ke Tang, Jörg Lässig, Shigeyoshi Tsutsui, Wenxiang Chen, Zbigniew Michalewicz, and Xin Yao. Benchmarking Optimization Algorithms: An Open Source Framework for the Traveling Salesman Problem. IEEE Computational Intelligence Magazine (CIM), 9(3):40-52, August 2014.
    Featured article and selected paper at the website of the IEEE Computational Intelligence Society (http://cis.ieee.org/).
    doi:10.1109/MCI.2014.2326101 / pdf
    Indexing: EI:20143117995681, WOS:000340271800004, SCI, 1区, Google Scholar

Organized Events

  • Thomas Bäck, Thomas Bartz-Beielstein, Jakob Bossek, Bilel Derbel, Carola Doerr, Tome Eftimov, Pascal Kerschke, William La Cava, Arnaud Liefooghe, Manuel López-Ibáñez, Boris Naujoks, Pietro S. Oliveto, Patryk Orzechowski, Mike Preuss, Jérémy Rapin, Ofer M. Shir, Olivier Teytaud, Heike Trautmann, Ryan J. Urbanowicz, Vanessa Volz, Markus Wagner, Hao Wang, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@PPSN) Workshop at the Sixteenth International Conference on Parallel Problem Solving from Nature (PPSN XVI), September 5-9, 2020 in Leiden, The Netherlands.
    website / Call for Papers (CfP)
  • Thomas Bäck, Carola Doerr, Tome Eftimov, Pascal Kerschke, William La Cava, Manuel López-Ibáñez, Boris Naujoks, Pietro S. Oliveto, Patryk Orzechowski, Mike Preuss, Jérémy Rapin, Ofer M. Shir, Olivier Teytaud, Heike Trautmann, Ryan J. Urbanowicz, Vanessa Volz, Markus Wagner, Hao Wang, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Good Benchmarking Practices for Evolutionary Computation (BENCHMARK@GECCO) Workshop at the Genetic and Evolutionary Computation Conference (GECCO 2020), July 8-12, 2020, Cancún, Quintana Roo, Mexico.
    website / Call for Papers (CfP) / slides@google: 1,2,3 / slides@iao: 1,3
  • Carola Doerr, Pietro S. Oliveto, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black Box Discrete Optimization Benchmarking (BB-DOB) Workshop at the Genetic and Evolutionary Computation Conference (GECCO 2019), July 13-17, 2019, Prague, Czech Republic.
    website / Call for Papers (CfP)
  • Thomas Weise, Bin Li, Markus Wagner, Xingyi Zhang, and Jörg Lässig, eds. Special Issue on Benchmarking of Computational Intelligence Algorithms published in the Applied Soft Computing journal by Elsevier B.V. and indexed by EI and SCIE. The special issue was open for submissions from July 2018 until April 14, 2019. It is a virtual special issue, where papers are published as soon as they are accepted.
    website / CfP-website / Call for Papers (CfP)
  • Pietro S. Oliveto, Markus Wagner, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black-Box Discrete Optimization Benchmarking (BB-DOB@PPSN) Workshop at the Fifteenth International Conference on Parallel Problem Solving from Nature (PPSN XV), 8-9 September 2018, Coimbra, Portugal
    website / Call for Papers (CfP)
  • Pietro S. Oliveto, Markus Wagner, Thomas Weise, Borys Wróbel, and Aleš Zamuda. Black-Box Discrete Optimization Benchmarking (BB-DOB@GECCO) Workshop at the 2018 Genetic and Evolutionary Computation Conference (GECCO 2018), July 15-19, 2018 in Kyoto, Japan
    website / Call for Papers (CfP)
  • Thomas Weise, Bin Li, Markus Wagner, Xingyi Zhang, and Jörg Lässig. International Workshop on Benchmarking of Computational Intelligence Algorithms (BOCIA) at the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), March 29-31, 2018 in Xiamen [厦门], Fujian [福建省], China, IEEE, ISBN: 978-1-5386-4362-4.
    website / Call for Papers (CfP)
  • Thomas Weise and Jörg Lässig. SITA-UBRI Joint Workshop on Sustainable Logistics, November 5, 2015, Hefei, Anhui, China
  • Thomas Weise and Jörg Lässig. Special Session on Benchmarking and Testing for Production and Logistics Optimization of the 2014 IEEE Symposium on Computational Intelligence in Production and Logistics at the 2014 IEEE Symposium Series on Computational Intelligence (SSCI 2014), December 9-12, 2014, Orlando, Florida, USA

References

  1. David Lee Applegate, Robert E. Bixby, Vašek Chvátal, and William John Cook: “The Traveling Salesman Problem: A Computational Study,” February 2007, Princeton Series in Applied Mathematics, Princeton, NJ, USA: Princeton University Press. ISBN: 0-691-12993-2 and 978-0-691-12993-8; Google Book ID: nmF4rVNJMVsC and vhsJbqomDuIC
  2. “Traveling Salesman Problem,” September 2008, Federico Greco, editor, Vienna, Austria: IN-TECH Education and Publishing. ISBN: 978-953-7619-10-7.
  3. Eugene Leighton (Gene) Lawler, Jan Karel Lenstra, Alexander Hendrik George Rinnooy Kan, and David B. Shmoys: “The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization,” September 1985, Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization, Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-90413-9 and 978-0-471-90413-7; LCCN: 85003158; OCLC: 260186267, 11756468, 631997749, 468006920, 611882247, and 488801427; Google Book ID: BXBGAAAAYAAJ; PPN: 02528360X, 011848790, 042459834, 175957169, 042637112, 043364950, 156527677, 078586313, and 190388226
  4. “The Traveling Salesman Problem and its Variations,” 2002–2004, Gregory Z. Gutin and Abraham P. Punnen, editors, volume 12 of Combinatorial Optimization, Norwell, MA, USA: Kluwer Academic Publishers. ISBN: 0-306-48213-4, 1-4020-0664-0, and 978-1-4020-0664-7; doi: 10.1007/b101971; OCLC: 55085594, 818988416, and 803682928; Google Book ID: TRYkPg_Xf20C, mQ5RxzsuOXAC, 5RuNx2TIbIcC, and pfRSPwAACAAJ
  5. “Traveling Salesman Problem, Theory and Applications,” December 30, 2010, Donald David Davendra, editor, Winchester, Hampshire, UK: InTech. ISBN: 978-953-307-426-9; doi: 10.5772/547.
  6. William John Cook: “In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation,” 2011, Princeton, NJ, USA: Princeton University Press. ISBN: 0691152705 and 9780691152707; Google Book ID: S3bxbr_-qhYC
  7. Alan Gibbons: “Algorithmic Graph Theory,” 1985, Cambridge, UK: Cambridge University Press. ISBN: 0521288819 and 9780521288811; Google Book ID: Be6t04pgggwC
  8. Rod Hilton: “Traveling Salesperson: The Most Misunderstood Problem,” (Website), September 14, 2012.
  9. Gerhard Reinelt: “TSPLIB,” (Website), 1995, Heidelberg, Germany: Office Research Group Discrete Optimization, Ruprecht Karls University of Heidelberg.
  10. Gerhard Reinelt: “TSPLIB 95,” Technical Report 1995; published by Heidelberg, Germany: Universität Heidelberg, Institut für Mathematik.
  11. Gerhard Reinelt: “TSPLIB — A Traveling Salesman Problem Library,” in ORSA Journal on Computing 3(4):376–384, Fall 1991; published by Operations Research Society of America (ORSA). doi: 10.1287/ijoc.3.4.376; OCLC: 60628815; ISSN: 0899-1499
  12. William John Cook: “Results of Concorde for TSPLib Benchmark,” (Website), December 2003, Atlanta, GA, USA: Georgia Institute of Technology, H. Milton Stewart School of Industrial and Systems Engineering.
  13. Redwood Shores, CA, USA: Oracle Corporation: “Java™ Platform, Standard Edition 7 ‒ API Specification,” June 6, 2013, Redwood Shores, CA, USA: Oracle Corporation.
  14. Herbert Schildt: “Java 2: A Beginner's Guide,” 2002, Essential Skills for First-Time Programmers, Maidenhead, England, UK: McGraw-Hill Ltd.. ISBN: 0072225130 and 9780072225136; Google Book ID: YWDJJGYaLG4C
  15. Zbigniew Michael Sikora: “Java: Practical Guide for Programmers,” 2003, Morgan Kaufmann Practical Guides, San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.. ISBN: 1558609091 and 9781558609099; Google Book ID: YQLj_AsVN9QC
  16. “Learning Java,” (Website), 2007–2012, San Francisco, CA, USA: Wikiversity, Wikimedia Foundation, Inc..
  17. Robert Sedgewick: “Algorithms in Java, Parts 1–4 (Fundamentals: Data Structures, Sorting, Searching),” September 2002, Reading, MA, USA: Addison-Wesley Professional. ISBN: 0-201-36120-5 and 978-0-201-36120-9; Google Book ID: hyvdUQUmf2UC; PPN: 595338534, 525555153, 511295804, 356227073, and 247734144
  18. James Gosling, William Nelson Joy, Guy Lewis Steele Jr., and Gilad Bracha: “The Java™ Language Specification,” May 2005, The Java Series, Upper Saddle River, NJ, USA: Prentice Hall International Inc., Santa Clara, CA, USA: Sun Microsystems Press (SMP), and Reading, MA, USA: Addison-Wesley Professional. ISBN: 0-321-24678-0 and 978-0321246783.
  19. James Gosling and Henry McGilton: “The Java Language Environment ‒ A White Paper,” Technical Report May 1996; published by Santa Clara, CA, USA: Sun Microsystems, Inc..
  20. Christian Ullenboom: “Java ist auch eine Insel ‒ Programmieren mit der Java Standard Edition Version 6,” 2007, Bonn, North Rhine-Westphalia, Germany: Galileo-Press. ISBN: 3-89842-838-9 and 978-3-89842-838-5.
  21. Thomas Weise [汤卫思]: “TSP, Benchmarking, and EC,” in The 8th International Workshop on Nature Inspired Computation and Applications (NICaiA'13 Autumn), October 17–19, 2013, Nanjing, Jiangsu, China: Nanjing University, Xianlin Campus, Department of Computer Science and Technology, National Key Laboratory for Novel Software Technology, Learning And Mining from DatA group (LAMBDA) [南京大学 仙林校区 LAMBDA, Zhi-Hua Zhou [周志华] and Yang Yu [俞颺], editors
  22. Nikolaus Hansen, Anne Auger, Raymond Ros, and Dimo Brockhoff: “COCO (COmparing Continuous Optimisers),” (Website), October 17, 2012, Orsay, France: Université Paris Sud, Institut National de Recherche en Informatique et en Automatique (INRIA) Futurs, Équipe TAO.
  23. Nikolaus Hansen, Anne Auger, Steffen Finck, and Raymond Ros: “Real-Parameter Black-Box Optimization Benchmarking: Experimental Setup,” Technical Report March 24, 2012; published by Orsay, France: Université Paris Sud, Institut National de Recherche en Informatique et en Automatique (INRIA) Futurs, Équipe TAO.
  24. Anne Auger and Nikolaus Hansen: “Performance Evaluation of an Advanced Local Search Evolutionary Algorithm,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC'05), September 2–5, 2005, Edinburgh, Scotland, UK, pages 1777–1784, David Wolfe Corne, Zbigniew Michalewicz, Robert Ian McKay, Ágoston E. Eiben aka. Gusz/Guszti, David B. Fogel, Carlos M. Fonseca, Günther R. Raidl, Kay Chen Tan, and Ali M. S. Zalzala, editors, Piscataway, NJ, USA: IEEE Computer Society. ISBN: 0-7803-9363-5 and 978-0-7803-9363-9; doi: 10.1109/CEC.2005.1554903; OCLC: 62773008; Google Book ID: -QIVAAAACAAJ; INSPEC Accession Number: 8715465.
    link: [1]
  25. Dave Andrew Douglas Tompkins and Holger H. Hoos: “UBCSAT: An Implementation and Experimentation Environment for SLS Algorithms for SAT and MAX-SAT,” in Revised Selected Papers from the Seventh International Conference on Theory and Applications of Satisfiability Testing (SAT'04), May 10–13, 2004, Vancouver, BC, Canada, pages 306–320, Holger H. Hoos and David G. Mitchell, editors, volume 3542 of Lecture Notes in Computer Science (LNCS), Berlin, Germany: Springer-Verlag GmbH. ISBN: 978-3-540-27829-0; doi: 10.1007/11527695_24; ISSN: 0302-9743 and 1611-3349.
  26. Dave Andrew Douglas Tompkins and Holger H. Hoos: “UBCSAT,” (Software), 2013, Vancouver, BC, Canada: University of British Columbia.
  27. Dave Andrew Douglas Tompkins: “Dynamic Local Search for SAT: Design, Insights and Analysis,” PhD Thesis, October 2010, Vancouver, BC, Canada: University Of British Columbia, Faculty of Graduate Studies, Computer Science.
  28. David Stifler Johnson and Lyle A. McGeoch: “The Traveling Salesman Problem: A Case Study in Local Optimization,” in Local Search in Combinatorial Optimization, pages 215–310, Emile H. L. Aarts and Jan Karel Lenstra, editors, Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization, Princeton, NJ, USA: Princeton University Press and Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0691115222 and 9780691115221; OCLC: 45733970; Google Book ID: NWghN9G7q9MC.
  29. David Stifler Johnson and Lyle A. McGeoch: “Experimental Analysis of Heuristics for the STSP,” in The Traveling Salesman Problem and its Variations, chapter 369–443, pages 369–443, Gregory Z. Gutin and Abraham P. Punnen, editors, volume 12 of Combinatorial Optimization, Norwell, MA, USA: Kluwer Academic Publishers. ISBN: 0-306-48213-4 and 978-1-4020-0664-7; doi: 10.1007/0-306-48213-4_9; OCLC: 55085594; Google Book ID: TRYkPg_Xf20C.
    link: [1]
  30. Thomas Weise [汤卫思]: “Global Optimization Algorithms ‒ Theory and Application,” 2009, Germany: it-weise.de (self-published).
  31. “Variants of Evolutionary Algorithms for Real-World Applications,” September 30, 2011–2012, Raymond Chiong, Thomas Weise [汤卫思], and Zbigniew Michalewicz, editors, Berlin/Heidelberg: Springer-Verlag. ISBN: 978-3-642-23423-1 and 978-3-642-23424-8; LCCN: 2011935740; doi: 10.1007/978-3-642-23424-8; Google Book ID: B2ONePP40MEC
  32. Kenneth Alan De Jong: “Evolutionary Computation: A Unified Approach,” February 2006, volume 4 of Complex Adaptive Systems, Bradford Books, Cambridge, MA, USA: MIT Press. ISBN: 8120330021, 0262041944, 978-0262041942, and 978-8120330023; OCLC: 276452339, 46500047, and 182530408; Google Book ID: OIRQAAAAMAAJ; ISSN: 1089-4365.
  33. “Handbook of Evolutionary Computation,” January 1, 1997, Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors, Computational Intelligence Library, New York, NY, USA: Oxford University Press, Inc., Dirac House, Temple Back, Bristol, UK: Institute of Physics Publishing Ltd. (IOP), and Boca Raton, FL, USA: CRC Press, Inc.. ISBN: 0-7503-0895-8, 0-7503-0392-1, 978-0-7503-0895-3, and 978-0-7503-0392-7; LCCN: 97004461; OCLC: 327018351, 173074676, and 327018351; Google Book ID: n5nuiIZvmpAC, neKNGAAACAAJ, and kgqGQgAACAAJ; PPN: 224708430 and 364589272
  34. Michael Guntsch: “Ant Algorithms in Stochastic and Multi-Criteria Environments,” PhD Thesis, January 13, 2004, Karlsruhe, Germany: University of Karlsruhe (Friedriciana), Department of Economics and Business Engineering and Karlsruhe, Germany: University of Karlsruhe (Friedriciana), Institute for Applied Computer Science and Formal Description Methods (AIFB). Google Book ID: Lf1ztwAACAAJ.
  35. Michael Guntsch and Martin Middendorf: “A Population Based Approach for ACO,” in Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2002: EvoCOP, EvoIASP, EvoSTIM/EvoPLAN (EvoWorkshops'02), April 2–4, 2002, Kinsale, Ireland, pages 72–81, Stefano Cagnoni, Jens Gottlieb, Emma Hart, Martin Middendorf, and Günther R. Raidl, editors, volume 2279 of Lecture Notes in Computer Science (LNCS), Berlin, Germany: Springer-Verlag GmbH. ISBN: 3-540-43432-1 and 978-3-540-43432-0; doi: 10.1007/3-540-46004-7_8; ISSN: 0302-9743 and 1611-3349.
  36. Michael Guntsch and Martin Middendorf: “Applying Population Based ACO to Dynamic Optimization Problems,” in From Ant Colonies to Artificial Ants ‒ Proceedings of the Third International Workshop on Ant Colony Optimization (ANTS'02), September 12–14, 2002, Brussels, Belgium, pages 111–122, Marco Dorigo, Gianni A. Di Caro, and Michael Samples, editors, volume 2463/2002 of Lecture Notes in Computer Science (LNCS), Berlin, Germany: Springer-Verlag GmbH. ISBN: 3-540-44146-8 and 978-3-540-44146-5; doi: 10.1007/3-540-45724-0_10; ISSN: 0302-9743 and 1611-3349.
  37. Shigeyoshi Tsutsui [筒井 茂義]: “Parallelization of an Evolutionary Algorithm on a Platform with Multi-Core Processors,” in Artificial Evolution: Revised Selected Papers from the 9th International Conference, Evolution Artificielle (EA'09), October 26–28, 2009, Strasbourg, France, pages 61–73, Pierre Collet, Nicolas Monmarché, Pierrick Legrand, Marc Schoenauer, and Evelyne Lutton, editors, volume 5975 of Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS), Berlin, Germany: Springer-Verlag GmbH. ISBN: 3642141552 and 978-3-642-14155-3; doi: 10.1007/978-3-642-14156-0_6; ISSN: 0302-9743 and 1611-3349
  38. Pedro Larrañaga, Cindy M. H. Kuijpers, Roberto H. Murga, Iñaki Inza, and Sejla Dizdarevic: “Genetic Algorithms for the Travelling Salesman Problem: A Review of Representations and Operators,” in Journal of Artificial Intelligence Research (JAIR) 13(2):129–170, April 1999; published by El Segundo, CA, USA: AI Access Foundation, Inc. and Menlo Park, CA, USA: AAAI Press. doi: 10.1023/A:1006529012972; ISSN: 11076-9757.
    link: [1]
  39. Jean-Yves Potvin: “Genetic Algorithms for the Traveling Salesman Problem,” in Annals of Operations Research 63(3):337–370, June 1, 1996; published by Dordrecht, Netherlands: Springer Netherlands and Amsterdam, The Netherlands: J. C. Baltzer AG, Science Publishers. doi: 10.1007/BF02125403; ISSN: 0254-5330 and 1572-9338.
  40. John J. Grefenstette, Rajeev Gopal, Brian J. Rosmaita, and Dirk Van Gucht: “Genetic Algorithms for the TSP,” in Proceedings of the 1st International Conference on Genetic Algorithms and their Applications (ICGA'85), June 24–26, 1985, Pittsburgh, PA, USA: Carnegy Mellon University (CMU), pages 160–168, John J. Grefenstette, editor, Hillsdale, NJ, USA: Lawrence Erlbaum Associates. ISBN: 0-8058-0426-9 and 978-0-8058-0426-3; OCLC: 19702892
  41. Kent Beck: “JUnit Pocket Guide,” 2009, Sebastopol, CA, USA: O'Reilly Media, Inc.. ISBN: 1449379028 and 9781449379025; Google Book ID: Ur_zMK0WQwIC
  42. Joe B. Rainsberger and Scott Stirling: “Junit Recipes: Practical Methods for Programmer Testing,” 2005, Manning Pubs Co, Greenwich, CT, USA: Manning Publications Co.. ISBN: 1932394230 and 9781932394238; Google Book ID: 5h7oDjuY5WYC
  43. Vincent Massol and Ted Husted: “Junit In Action,” 2004, Greenwich, CT, USA: Manning Publications Co.. ISBN: 8177225383 and 9788177225389; Google Book ID: P1mDmZUmje0C
  44. Murray Altheim and Shane McCarron: “XHTML™ 1.1 — Module-based XHTML — Second Edition,” November 23, 2010, W3C Recommendation, MIT/CSAIL (USA), ERCIM (France), Keio University (Japan): World Wide Web Consortium (W3C).
  45. Leslie Lamport: “LaTeX: A Document Preparation System. User's Guide and Reference Manual,” 1994, Reading, MA, USA: Addison-Wesley Publishing Co. Inc.. ISBN: 0201529831 and 9780201529838; Google Book ID: 19pzDwEACAAJ
  46. Michel Goossens, Frank Mittelbach, and Alexander Samarin: “The LaTeX Companion,” 1994, Tools and Techniques for Computer Typesetting, Reading, MA, USA: Addison-Wesley Publishing Co. Inc.. ISBN: 0201541998 and 9780201541991; Google Book ID: 54A3MuBzIrEC
  47. Frank Mittelbach, Michel Goossens, Johannes Braams, David Carlisle, and Chris Rowley: “The LaTeX Companion,” 2004, Reading, MA, USA: Addison-Wesley Publishing Co. Inc.. ISBN: 0-201-36299-6
  48. Tobias Oetiker, Hubert Partl, Irene Hyna, and Elisabeth Schlegl: “The Not So Short Introduction to LaTeX2ε ‒ Or LaTeX2ε in 157 minutes,” April 6, 2011.
  49. Manuela Jürgens and Thomas Feuerstack: “LaTeX ‒ eine Einführung und ein bisschen mehr…,” September 2011, Hagen, North Rhine-Westphalia, Germany: FernUniversität in Hagen, Zentrum für Medien & IT.
  50. Manuela Jürgens: “LaTeX ‒ Fortgeschrittene Anwendungen ‒ oder: neues von den Hobbits…,” 1995, Hagen, North Rhine-Westphalia, Germany: FernUniversität Gesamthochschule in Hagen, Universitätsrechenzentrum, Abteilung Wissenschaftliche Anwendungen.
  51. “Encapsulated PostScript File Format Specification,” May 1, 1992.
  52. Peter J. Weingartner: “A First Guide to PostScript,” (Website), February 24, 2006.
  53. “Adobe® Supplement to the ISO 32000; BaseVersion: 1.7; ExtensionLevel: 3; Adobe® Acrobat® SDK, Version 9.0,” June 2008.
  54. “Document Management ‒ Portable Document Format ‒ Part 1: PDF 1.7,” July 2008
  55. Christian Schenk: “MiKTEX …typesetting beautiful documents…,” (Software), 2013.
  56. Jonathan Kew: “The XeTeX Typesetting System,” (Software), 2013, Dallas, TX, USA: SIL International.
  57. Sebastian Rahtz, Akira Kakuto, Karl Berry, Manuel Pégourié-Gonnard, Norbert Preining, Peter Breitenlohner, Reinhard Kotucha, Siep Kroonenberg, Staszek Wawrykiewicz, and Tomasz Trzeciak: “TeX Live,” (Software), June 30, 2013, Portland, OR, USA: TeX Users Group (TUG).
  58. Gerald Murray and G.K.M. Tobin: “SIG-ALTERNATE.CLS ‒ Version 2.4 (Compatible with the ACM_PROC_ARTICLE-SP.CLS" V3.2SP),” (Website), April 22, 2009, New York, NY, USA: Association for Computing Machinery (ACM).
  59. Gerald Murray, Silvano Balemi, Jon Dixon, Peter Nüchter, Jürgen von Hagen, and Michael Shell: “Official IEEE LaTeX Class for Authors of the Institute of Electrical and Electronics Engineers (IEEE) Transactions Journals and Conferences,” (Website), May 3, 2007, Piscataway, NJ, USA: IEEE (Institute of Electrical and Electronics Engineers).
  60. “LLNCS Document Class ‒ Springer Verlag LaTeX2e support for Lecture Notes in Computer Science,” (Website), June 12, 2010, Berlin, Germany: Springer-Verlag GmbH.
  61. “Eclipse,” (Software), Ottawa, ON, Canada: Eclipse Foundation.
  62. L. Darrell Whitley, Timothy Starkweather, and D'Ann Fuquay: “Scheduling Problems and Traveling Salesman: The Genetic Edge Recombination Operator,” in Proceedings of the Third International Conference on Genetic Algorithms (ICGA'89), June 4–7, 1989, Fairfax, VA, USA: George Mason University (GMU), pages 133–140, James David Schaffer, editor, San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.. ISBN: 1-55860-066-3 and 978-1-55860-066-9; OCLC: 257885359; Google Book ID: BPBQAAAAMAAJ
  63. L. Darrell Whitley, Timothy Starkweather, and Daniel Shaner: “The Travelling Salesman and Sequence Scheduling: Quality Solutions using Genetic Edge Recombination,” in Handbook of Genetic Algorithms, pages 350–372, Lawrence Davis, editor, VNR Computer Library, Stamford, CT, USA: Thomson Publishing Group, Inc. and New York, NY, USA: Van Nostrand Reinhold Co.. ISBN: 0-442-00173-8 and 978-0-442-00173-5; LCCN: 90012823; OCLC: 300431834; Google Book ID: vTG5PAAACAAJ; ASIN: 0442001738; PPN: 01945077X
  64. Timothy Starkweather, S. McDaniel, Keith E. Mathias, L. Darrell Whitley, and C. Whitley: “A Comparison of Genetic Sequencing Operators,” in Proceedings of the Fourth International Conference on Genetic Algorithms (ICGA'91), July 13–16, 1991, San Diego, CA, USA: University of California (UCSD), pages 69–76, Richard K. Belew and Lashon Bernard Booker, editors, San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.. ISBN: 1-55860-208-9 and 978-1-55860-208-3; OCLC: 24132977; Google Book ID: h9nYpwAACAAJ and YvBQAAAAMAAJ.
  65. I. M. Oliver, D. J. Smith, and John Henry Holland: “A Study of Permutation Crossover Operators on the Traveling Salesman Problem,” in Proceedings of the Second International Conference on Genetic Algorithms and their Applications (ICGA'87), July 28–31, 1987, Cambridge, MA, USA: Massachusetts Institute of Technology (MIT), pages 224–230, John J. Grefenstette, editor, Mahwah, NJ, USA: Lawrence Erlbaum Associates, Inc. (LEA). ISBN: 0-8058-0158-8 and 978-0-8058-0158-3; OCLC: 17252562; Google Book ID: bvlQAAAAMAAJ
  66. Zbigniew Michalewicz: “Genetic Algorithms + Data Structures = Evolution Programs,” 1996, Berlin, Germany: Springer-Verlag GmbH. ISBN: 3-540-60676-9, 3-540-58090-5, 978-3-540-60676-5, and 978-3-642-08233-7; Google Book ID: vlhLAobsK68C; ASIN: 3540606769
  67. Wolfgang Banzhaf: “The “Molecular” Traveling Salesman,” in Biological Cybernetics 64(1):7–14, November 1990; published by Berlin/Heidelberg: Springer-Verlag. doi: 10.1007/BF00203625; ISSN: 0340-1200 and 1432-0770.
  68. Balamurali Krishna Ambati, Jayakrishna Ambati, and Mazen Moein Mokhtar: “Heuristic Combinatorial Optimization by Simulated Darwinian Evolution: A Polynomial Time Algorithm for the Traveling Salesman Problem,” in Biological Cybernetics 65(1):31–35, May 1991; published by Berlin/Heidelberg: Springer-Verlag. doi: 10.1007/BF00197287; ISSN: 0340-1200 and 1432-0770
  69. Gilbert Syswerda: “Schedule Optimization Using Genetic Algorithms,” in Handbook of Genetic Algorithms, pages 332–349, Lawrence Davis, editor, VNR Computer Library, Stamford, CT, USA: Thomson Publishing Group, Inc. and New York, NY, USA: Van Nostrand Reinhold Co.. ISBN: 0-442-00173-8 and 978-0-442-00173-5; LCCN: 90012823; OCLC: 300431834; Google Book ID: vTG5PAAACAAJ; ASIN: 0442001738; PPN: 01945077X
  70. John Henry Holland: “Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence,” 1975, Ann Arbor, MI, USA: University of Michigan Press. ISBN: 0-472-08460-7 and 978-0-472-08460-9; LCCN: 74078988; OCLC: 1531617 and 266623839; Google Book ID: JE5RAAAAMAAJ, YE5RAAAAMAAJ, Qk5RAAAAMAAJ, vk5RAAAAMAAJ, and JOv3AQAACAAJ; PPN: 075982293
  71. John J. Grefenstette: “Incorporating Problem Specific Knowledge into Genetic Algorithms,” in Genetic Algorithms and Simulated Annealing, pages 42–60, Lawrence Davis, editor, Research Notes in Artificial Intelligence, London, UK: Pitman and San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.. ISBN: 0934613443 and 978-0934613446; LCCN: 87021357; OCLC: 16355405; Google Book ID: edfSSAAACAAJ
  72. David Stifler Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine A. Schevon: “Optimization by Simulated Annealing: An Experimental Evaluation. Part I, Graph Partitioning,” in Operations Research (Oper. Res.) 37(6), November–December 1989; published by Linthicum, ML, USA: Institute for Operations Research and the Management Sciences (INFORMS) and Cambridge, MA, USA: HighWire Press (Stanford University). LCCN: 66099702; doi: 10.1287/opre.37.6.865; OCLC: 2394608; ISSN: 0030-364X and 1526-5463; CODEN: OPREAI
  73. Scott Kirkpatrick, Charles Daniel Gelatt, Jr., and Mario P. Vecchi: “Optimization by Simulated Annealing,” in Science Magazine 220(4598):671–680, May 13, 1983; published by Washington, DC, USA: American Association for the Advancement of Science (AAAS) and Cambridge, MA, USA: HighWire Press (Stanford University). doi: 10.1126/science.220.4598.671; ISSN: 0036-8075 and 1095-9203; PubMed: 17813860.
  74. David Lee Applegate, Robert E. Bixby, Vašek Chvátal, and William John Cook: “Finding Tours in the TSP: Finding Tours,” Technical Report Number 99885, 1999; published by Bonn, North Rhine-Westphalia, Germany: Rheinische Friedrich-Wilhelms-Universität Bonn. Google Book ID: CR6DNAAACAAJ and gtgGaAEACAAJ.
    link: [1]
  75. David B. Fogel: “An Evolutionary Approach to the Traveling Salesman Problem,” in Biological Cybernetics 60(2):139–144, December 1988; published by Berlin/Heidelberg: Springer-Verlag. doi: 10.1007/BF00202901; ISSN: 0340-1200 and 1432-0770.
  76. Pierre Hansen and Nenad Mladenović: “Variable Neighborhood Search: Principles and Applications,” in European Journal of Operational Research (EJOR) 130(3):449–467, May 1, 2001; published by Amsterdam, The Netherlands: Elsevier Science Publishers B.V. and Amsterdam, The Netherlands: North-Holland Scientific Publishers Ltd.. doi: 10.1016/S0377-2217(00)00100-4; ISSN: 0377-2217
  77. Pierre Hansen, Nenad Mladenović, and José Andrés Moreno Pérez: “Variable Neighbourhood Search: Methods and Applications,” in 4OR 6(4):319–360, December 1, 2008; published by Berlin, Germany: Springer-Verlag GmbH. doi: 10.1007/s10288-008-0089-1; ISSN: 1619-4500 and 1614-2411
  78. Pierre Hansen, Nenad Mladenović, and José Andrés Moreno Pérez: “Variable Neighbourhood Search: Methods and Applications,” in Annals of Operations Research 175(1):367–407, March 1, 2010; published by Dordrecht, Netherlands: Springer Netherlands and Amsterdam, The Netherlands: J. C. Baltzer AG, Science Publishers. doi: 10.1007/s10479-009-0657-6; ISSN: 0254-5330 and 1572-9338
  79. Pierre Hansen, Nenad Mladenović, Jack Brimberg, and José Andrés Moreno Pérez: “Variable Neighborhood Search,” in Handbook of Metaheuristics, chapter 61–86, pages 61–86, Michel Gendrau and Jean-Yves Potvin, editors, volume 146 of International Series in Operations Research & Management Science, Norwell, MA, USA: Kluwer Academic Publishers, Dordrecht, Netherlands: Springer Netherlands, and Boston, MA, USA: Springer US. ISBN: 1461426901 and 978-1-4419-1665-5; doi: 10.1007/978-1-4419-1665-5_3; Google Book ID: xMTS5dyDhwMC; ISSN: 0884-8289; ASIN: 1461426901
  80. David Stifler Johnson and Lyle A. McGeoch: “8th DIMACS Implementation Challenge: The Traveling Salesman Problem,” (Website), December 12, 2008, Florham Park, NJ, USA: AT&T Labs Research — Leading Invention, Driving Innovation.