Techniques for Improving Online and Offline History-assisted Evolutionary Algorithms


Student thesis: Doctoral Thesis



Award date: 15 Sept 2017


This doctoral thesis presents two techniques for improving the performance of evolutionary algorithms (EAs): 1) a memory control technique for the non-revisiting genetic algorithm (NrGA); and 2) the sequential learnable (SL) framework for algorithm selection. The former uses online search history; the latter uses both online and offline information.
NrGA uses the entire online search history, together with parameter-less adaptive mutation, to significantly enhance search performance. Its memory cost is low when the number of fitness evaluations is small or moderate; when the number of required evaluations is substantial, however, memory management becomes desirable. We propose two pruning mechanisms that keep the memory usage constant: least recently used (LRU) pruning and random (R) pruning. The basic idea is to prune one unit of memory whenever the memory threshold has been reached and new search information must be stored, so that the overall memory used stays constant. Both pruning strategies naturally yield parameter-less adaptive mutation operators. We apply both strategies to NrGA and study the performance impact of losing search history information. Experimental results show that both strategies maintain the performance of NrGA even when, at the empirical limit, 90% of the search history is discarded. This suggests that the applicability of NrGA can be widely extended.
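The bounded-history idea above can be illustrated with a minimal sketch. This is not the thesis implementation; it assumes, for illustration, that the archive maps visited solutions to their fitness values and that one archive entry counts as one unit of memory:

```python
from collections import OrderedDict
import random

class BoundedHistory:
    """Search-history archive with a fixed memory budget (sketch).

    When the archive is full and a new entry must be stored, one
    existing entry is pruned first, keeping total memory constant.
    Two pruning policies are shown: least recently used (LRU)
    and random (R).
    """

    def __init__(self, capacity, policy="lru"):
        self.capacity = capacity
        self.policy = policy
        self.archive = OrderedDict()  # solution -> fitness

    def record(self, solution, fitness):
        if solution in self.archive:
            self.archive.move_to_end(solution)  # refresh recency
            return
        if len(self.archive) >= self.capacity:
            if self.policy == "lru":
                self.archive.popitem(last=False)  # evict least recently used
            else:  # random pruning
                victim = random.choice(list(self.archive))
                del self.archive[victim]
        self.archive[solution] = fitness

    def visited(self, solution):
        """Revisit check, as used by NrGA to avoid re-evaluation."""
        return solution in self.archive
```

Either eviction rule bounds memory at `capacity` entries; which historical information is sacrificed (stale entries under LRU, arbitrary entries under R) is exactly the trade-off studied in the thesis.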
The SL framework uses an algorithm-problem feature to select, from an EA portfolio, the best algorithm to run; the resulting system is called SLEA. SLEA is trained offline. Given a problem, the default algorithm is run first. After this initial run, the algorithm-problem feature is collected and used to map the problem to the most similar training problem; the best algorithm for that training problem is then used in the second run. This process iterates until n runs have been made. Experimental results reveal that the algorithm-problem feature is a good problem identifier. The method works well in both cases: 1) problems SLEA has encountered during training, and 2) entirely unseen problems. SLEA matches or outperforms the best single algorithm in the portfolio and performs better than random selection. Since the best algorithm cannot be known a priori, these results confirm that the algorithm selection mechanism is effective.
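The iterative select-and-run loop can be sketched as follows. The function names, the nearest-neighbour feature matching, and the minimization convention are illustrative assumptions, not the thesis specification:

```python
import math

def sl_select_and_run(problem, portfolio, knowledge_base, default, n_runs):
    """Sequential-learnable selection loop (sketch).

    portfolio: dict mapping algorithm name -> callable run(problem)
        returning (best_fitness, feature_vector).
    knowledge_base: list of (feature_vector, best_algorithm_name)
        pairs collected offline on the training problems.
    The first run uses the default algorithm; each later run maps the
    observed algorithm-problem feature to the most similar training
    problem and reuses that problem's best-known algorithm.
    """
    best_overall = math.inf  # assume minimization
    algo = default
    for _ in range(n_runs):
        fitness, feature = portfolio[algo](problem)
        best_overall = min(best_overall, fitness)
        # nearest training problem by Euclidean distance in feature space
        _, algo = min(
            knowledge_base,
            key=lambda entry: math.dist(entry[0], feature),
        )
    return best_overall
```

The loop embodies the abstract's claim: selection quality hinges on the feature being a good problem identifier, since the mapping step is a pure similarity lookup against the offline knowledge base.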
Algorithm selection in SLEA is based on the knowledge acquired during training. In the training process, some algorithms are eliminated because they do not rank first on any of the employed training problems, even though these algorithms may perform well on other problems. To address this issue, the evolving benchmark generator with hierarchical fitness (HFEBG) is proposed. It works as follows: a set of established algorithms is employed, and for each algorithm a problem instance on which that algorithm wins (a uniquely easy problem for it) is generated by evolution. The knowledge base is thereby enriched in two ways: 1) the eliminated algorithms are added back to the portfolio; and 2) the generated instances are added as training problems. Experimental results verify the effectiveness and statistical robustness of HFEBG. With this method, a more comprehensive portfolio is formed in SLEA.
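The instance-evolution idea can be sketched with a toy (1+1)-style loop and a lexicographically compared hierarchical fitness. Everything here is an illustrative assumption: the thesis evolves instances for established EAs, not for the scoring callables and hill climber shown below:

```python
import random

def evolve_instance(target, rivals, mutate, init, generations=300):
    """Evolve a problem instance uniquely easy for `target` (sketch).

    `target` and each rival are callables scoring an instance (higher
    means that algorithm solves it better). Hierarchical fitness:
    the primary criterion is whether `target` beats every rival on the
    instance; the secondary criterion is the winning margin.
    """
    def hier_fitness(inst):
        t = target(inst)
        r = max(rival(inst) for rival in rivals)
        return (1 if t > r else 0, t - r)  # tuples compare lexicographically

    best = init()
    best_fit = hier_fitness(best)
    for _ in range(generations):
        candidate = mutate(best)
        fit = hier_fitness(candidate)
        if fit > best_fit:  # strict improvement in hierarchical order
            best, best_fit = candidate, fit
    return best
```

Running this once per portfolio algorithm (with the others as rivals) yields one uniquely easy instance per algorithm, which is exactly what HFEBG adds back to the training set.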