Game-Theoretic Approach and Learning-to-Decompose Paradigm for Decomposition-Based Evolutionary Multi-Objective Optimization


Student thesis: Doctoral Thesis



Award date: 12 Apr 2018


The evolutionary multiobjective optimization (EMO) algorithm has been widely accepted as a major approach to multiobjective optimization due to its ability to approximate the Pareto front (PF) or the Pareto-optimal set in a single run. Convergence and diversity are two cornerstones of EMO, and they often conflict with each other: convergence requires that the population be progressively driven toward the PF, while diversity requires that the population be widely spread in the objective or decision space so that promising offspring can be produced and the entire PF can be well approximated. In the past two decades, decomposition-based EMO algorithms, which decompose the multiobjective optimization problem (MOP) into subproblems using a set of weight vectors or reference points and optimize them simultaneously, have become increasingly popular. In particular, convergence pressure is strengthened by optimizing the single-objective or simplified multiobjective subproblems, while population diversity is promoted by the set of weight vectors or reference points being widely spread in the objective space.
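As an illustrative sketch (not code from the thesis), the decomposition idea can be shown with the classic weighted Tchebycheff scalarization, one common way of turning an MOP into a set of single-objective subproblems; the weight vectors, ideal point, and toy objective values below are assumptions for demonstration only:

```python
import numpy as np

def tchebycheff(f, w, z):
    """Weighted Tchebycheff scalarization: g(x | w, z) = max_i w_i * |f_i(x) - z_i|.
    Each weight vector w defines one single-objective subproblem of the MOP."""
    return np.max(w * np.abs(f - z))

# Toy bi-objective setup (hypothetical values): three weight vectors decompose
# the MOP into three subproblems; z is the ideal point.
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
z = np.zeros(2)
f = np.array([0.4, 0.6])  # objective vector of one candidate solution

# Scalarized value of this solution under each subproblem; the subproblem with
# the smallest value is the one this solution serves best.
scores = [tchebycheff(f, w, z) for w in weights]
best = int(np.argmin(scores))
```

Because the weight vectors are spread over the objective space, each subproblem pulls its best solution toward a different region of the PF, which is how the decomposition promotes diversity while each scalarized subproblem supplies convergence pressure.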

This thesis focuses on developing game-theoretic approaches and a learning-to-decompose (LTD) paradigm to improve the convergence and diversity of decomposition-based EMO algorithms, especially when facing problems with complicated properties, e.g., imbalanced problems and problems with many objectives or complex PF shapes. The contributions of this thesis are summarized as follows. First, by modeling the environmental selection process of a decomposition-based EMO algorithm as a game of stable matching with incomplete preference lists, two adaptive selection mechanisms, i.e., two-level one-one stable matching and many-one stable matching, are developed to enhance population diversity when dealing with imbalanced problems and many-objective optimization problems. Second, an adversarial decomposition method is proposed for many-objective optimization, which co-evolves two populations using different subproblem formulations, i.e., one provides more convergence pressure while the other contributes more to population diversity, following adversarial search directions. To avoid allocating redundant computational resources to the same regions of the PF, a restricted mating selection mechanism is employed, which matches the two populations into solution pairs according to their working regions on the PF using the proposed two-level one-one stable matching. Next, an adaptive method is developed to generate the set of weight vectors for decomposing the MOP according to the PF shape, so that population diversity becomes less sensitive to the PF shape. Specifically, a model of the estimated PF is periodically learned from the current non-dominated solutions. Thereafter, the weight vectors are generated by evenly sampling points on the estimated PF.
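The stable-matching idea underlying the selection mechanisms above can be sketched with the classic Gale-Shapley deferred-acceptance procedure for one-one matching with complete preference lists; this is a simplified illustration, not the thesis's two-level variant with incomplete lists, and the preference data below are hypothetical:

```python
def stable_matching(sub_pref, sol_pref):
    """One-one stable matching via deferred acceptance: subproblems propose
    to solutions in order of preference; a solution provisionally accepts its
    best proposer so far.  Returns a dict {subproblem index: solution index}.
    sub_pref[i] ranks solutions for subproblem i; sol_pref[j] ranks
    subproblems for solution j (complete lists, equal sizes)."""
    n = len(sub_pref)
    # rank[j][i] = position of subproblem i in solution j's preference list
    rank = [{s: r for r, s in enumerate(p)} for p in sol_pref]
    free = list(range(n))          # subproblems not yet matched
    next_prop = [0] * n            # next candidate each subproblem proposes to
    engaged = {}                   # solution -> currently matched subproblem
    while free:
        i = free.pop(0)
        j = sub_pref[i][next_prop[i]]
        next_prop[i] += 1
        if j not in engaged:
            engaged[j] = i                       # solution j was free
        elif rank[j][i] < rank[j][engaged[j]]:
            free.append(engaged[j])              # j prefers i: swap
            engaged[j] = i
        else:
            free.append(i)                       # j rejects i
    return {i: j for j, i in engaged.items()}

# Hypothetical 3x3 instance: preferences of subproblems over solutions
# (e.g., by scalarized fitness) and of solutions over subproblems
# (e.g., by distance to the subproblem's search direction).
match = stable_matching([[0, 1, 2], [0, 2, 1], [1, 0, 2]],
                        [[1, 0, 2], [0, 2, 1], [2, 1, 0]])
```

In the environmental-selection view, a stable matching leaves no subproblem-solution pair that would both prefer each other over their assigned partners, which balances convergence (subproblems get well-fitting solutions) against diversity (each solution is assigned to at most one subproblem).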
Last but not least, an LTD paradigm is proposed to further improve the convergence and, in particular, the diversity of decomposition-based EMO algorithms, especially when solving problems with irregular PF shapes, e.g., PFs with disparate scales, discontinuous segments or other complex shapes. The paradigm consists of two interdependent parts, i.e., a learning module and an optimization module. The learning module periodically learns an analytical model of the estimated PF, from which useful information is extracted to configure the decomposition method, including reference points compliant with the PF shape and subproblem formulations whose contours and search directions suit the current search status. The optimization module, which in principle can be any decomposition-based EMO algorithm, adopts the adaptively determined decomposition method for optimization. The effectiveness of the proposed game-theoretic approaches and the LTD paradigm is verified on various test problems through comparisons with several state-of-the-art EMO algorithms.
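The step of generating shape-compliant reference points from a learned PF model can be sketched as follows for the bi-objective case; this is a minimal illustration assuming a piecewise-linear PF estimate and even resampling by arc length, not the thesis's actual learning module, and the function name and data are hypothetical:

```python
import numpy as np

def resample_reference_points(nd_points, n_ref):
    """Illustrative sketch: from 2-objective non-dominated points, build a
    piecewise-linear estimate of the PF and resample n_ref reference points
    evenly spaced by arc length along that estimate, so the reference set
    follows the PF shape rather than a fixed simplex lattice."""
    pts = nd_points[np.argsort(nd_points[:, 0])]   # order along f1
    seg = np.diff(pts, axis=0)
    arclen = np.concatenate([[0.0],
                             np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.linspace(0.0, arclen[-1], n_ref)  # even spacing by arc length
    f1 = np.interp(targets, arclen, pts[:, 0])
    f2 = np.interp(targets, arclen, pts[:, 1])
    return np.column_stack([f1, f2])

# Hypothetical linear PF estimate from three non-dominated points.
refs = resample_reference_points(
    np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]), 3)
```

Periodically refitting the estimate and resampling in this way is what makes the resulting reference points track disparate scales or discontinuous segments of an irregular PF, instead of wasting subproblems on regions the PF does not cover.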