On constructing alternative benchmark suite for evolutionary algorithms

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

10 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 287–292
Number of pages: 6
Journal / Publication: Swarm and Evolutionary Computation
Volume: 44
Online published: 14 Apr 2018
Publication status: Published - Feb 2019

Abstract

Benchmark testing offers a performance measurement for an evolutionary algorithm before it is put into applications. In this paper, a systematic method to construct a benchmark test suite is proposed. A set of established algorithms is employed. For each algorithm, a uniquely easy problem instance is generated by evolution. The resulting instances constitute a novel benchmark test suite, in which each problem instance is favorable (uniquely easy) to one algorithm only. A hierarchical fitness assignment method, based on statistical test results, is designed to generate uniquely easy (or hard) problem instances for an algorithm. Experimental results show that each algorithm robustly performs best on its uniquely favorable problem, and the testing results are repeatable. The distribution of algorithm performance across the suite is unbiased (uniform), mimicking a uniformly distributed subset of real-world problems. The resulting suite offers 1) an alternative benchmark suite for evolutionary algorithms; 2) a novel method of assessing novel algorithms; and 3) meaningful training and testing problems for evolutionary algorithm selectors and portfolios.
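The abstract compresses the method, so the sketch below illustrates one plausible reading of it, not the paper's exact design: a (1+λ)-style evolution of a problem instance that is uniquely easy for a target algorithm, with a two-level hierarchical fitness whose primary level counts statistically significant wins (here a Mann-Whitney U test is assumed as the statistical test) and whose secondary, tie-breaking level is the mean performance gap. The callables `target`, `rivals`, `new_instance`, and `mutate` are hypothetical stand-ins supplied by the caller; minimisation is assumed.

```python
from statistics import mean
from scipy.stats import mannwhitneyu  # rank-based statistical test

def run_algorithm(algorithm, instance, runs=25):
    """Collect best-of-run results of one algorithm on one problem
    instance, for later statistical comparison."""
    return [algorithm(instance) for _ in range(runs)]

def hierarchical_fitness(target_results, rival_results_list, alpha=0.05):
    """Hierarchical fitness sketch (assumed design, not the paper's
    exact one): level 1 counts how many rival algorithms the target
    beats with statistical significance; level 2 is the mean
    performance gap, used to break ties. Assumes minimisation."""
    wins, gap = 0, 0.0
    for rival_results in rival_results_list:
        stat, p = mannwhitneyu(target_results, rival_results,
                               alternative='less')
        if p < alpha:
            wins += 1  # target is significantly better than this rival
        gap += mean(rival_results) - mean(target_results)
    return (wins, gap)  # tuples compare lexicographically: hierarchy

def evolve_uniquely_easy_instance(target, rivals, new_instance,
                                  mutate, generations=100, offspring=20):
    """Evolve a problem instance that is uniquely easy for `target`:
    keep the candidate whose hierarchical fitness is largest."""
    best = new_instance()
    best_fit = hierarchical_fitness(
        run_algorithm(target, best),
        [run_algorithm(r, best) for r in rivals])
    for _ in range(generations):
        for _ in range(offspring):
            cand = mutate(best)
            fit = hierarchical_fitness(
                run_algorithm(target, cand),
                [run_algorithm(r, cand) for r in rivals])
            if fit > best_fit:  # wins decide first, then the gap
                best, best_fit = cand, fit
    return best
```

Running this once per established algorithm, with the remaining algorithms as rivals, would yield one evolved instance per algorithm; collecting those instances gives a suite in which, by construction, performance is uniformly distributed across the algorithms.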

Research Area(s)

  • Algorithm performance measurement, Evolutionary algorithm, Generating benchmark instance, Hierarchical fitness, Statistical test