Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition

Research output: Publication in refereed journal

92 Scopus Citations


Original language: English
Article number: 6410018
Pages (from-to): 114-130
Journal / Publication: IEEE Transactions on Evolutionary Computation
Issue number: 1
State: Published - Feb 2014


Adaptive operator selection (AOS) determines the application rates of different operators in an online manner, based on their recent performance within an optimization process. This paper proposes a bandit-based AOS method, the fitness-rate-rank-based multiarmed bandit (FRRMAB). To track the dynamics of the search process, it uses a sliding window to record the recent fitness improvement rates achieved by the operators, while employing a decaying mechanism to increase the selection probability of the best operator. Little work has been done on AOS in multiobjective evolutionary computation, since it is very difficult to measure fitness improvements quantitatively in most Pareto-dominance-based multiobjective evolutionary algorithms. The multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Thus, it is natural and feasible to use AOS in MOEA/D. We investigate several important issues in using FRRMAB in MOEA/D. Our experimental results demonstrate that FRRMAB is robust and its operator selection is reasonable. Comparison experiments also indicate that FRRMAB can significantly improve the performance of MOEA/D. © 2013 IEEE.
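The abstract describes the main ingredients of FRRMAB: a sliding window of recent fitness improvement rates per operator, a rank-based decay that boosts the credit of the best operator, and a bandit-style rule that balances exploiting the highest-credit operator against exploring the others. A minimal sketch of this scheme is given below; the window size, decay factor `decay`, scaling constant `scale`, and the exact UCB-style selection formula are illustrative assumptions, not necessarily the paper's exact settings.

```python
import math
import random
from collections import deque

class FRRMAB:
    """Sketch of fitness-rate-rank-based multiarmed bandit operator
    selection, following the abstract; parameter values are assumptions."""

    def __init__(self, n_ops, window_size=50, decay=0.5, scale=5.0):
        self.n_ops = n_ops
        # Sliding window of (operator index, fitness improvement rate).
        self.window = deque(maxlen=window_size)
        self.decay = decay   # rank-based decay factor in (0, 1]
        self.scale = scale   # exploration-exploitation trade-off constant
        self.frr = [0.0] * n_ops   # decayed, normalized credits
        self.counts = [0] * n_ops  # operator usage counts in the window

    def update(self, op, fir):
        """Record the fitness improvement rate achieved by operator `op`
        and recompute the fitness-rate-rank credits."""
        self.window.append((op, fir))
        reward = [0.0] * self.n_ops
        counts = [0] * self.n_ops
        for o, r in self.window:
            reward[o] += r
            counts[o] += 1
        # Rank operators by accumulated reward (rank 0 = best) and decay
        # credits by rank, which favors the best operator.
        order = sorted(range(self.n_ops), key=lambda o: -reward[o])
        decayed = [0.0] * self.n_ops
        for rank, o in enumerate(order):
            decayed[o] = (self.decay ** rank) * reward[o]
        total = sum(decayed)
        self.frr = [d / total if total > 0 else 0.0 for d in decayed]
        self.counts = counts

    def select(self):
        """UCB-style selection over the decayed, normalized credits."""
        unused = [o for o in range(self.n_ops) if self.counts[o] == 0]
        if unused:
            return random.choice(unused)  # try each operator at least once
        total = sum(self.counts)
        return max(
            range(self.n_ops),
            key=lambda o: self.frr[o]
            + self.scale * math.sqrt(2.0 * math.log(total) / self.counts[o]),
        )
```

In a decomposition-based setting such as MOEA/D, the fitness improvement rate passed to `update` can be measured on the scalar subproblem an operator was applied to, which is what makes AOS natural there compared with Pareto-dominance-based algorithms.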

Research Area(s)

  • Adaptive operator selection (AOS), decomposition, multiarmed bandit, multiobjective evolutionary algorithm based on decomposition (MOEA/D), multiobjective optimization