Learning-Driven Evolutionary Algorithms for Complex Multiobjective Optimization


Student thesis: Doctoral Thesis



Supervisors
  • Ka Chun WONG (Supervisor)
  • Kay Chen Tan (External person) (External Co-Supervisor)

Award date: 21 Jun 2022


Evolutionary algorithms, characterized by a population-based iterative search, have been recognized as effective tools for addressing multiobjective optimization problems (MOPs) in a wide range of scenarios. Multiobjective evolutionary algorithms (MOEAs), however, must be designed to learn and cope with the new challenges posed by complex MOPs with scaled-up dimensionality, including many objectives, large-scale decision variables, and multiple tasks. Designing powerful new MOEAs to solve such problems effectively therefore requires divergent thinking. This thesis focuses on the investigation and customization of learning-driven MOEAs to address these challenging complex MOPs. The main contributions are summarized as follows:

Firstly, the selection abilities of decomposition-based MOEAs (MOEADs) are enhanced by customizing effective environmental selection strategies in the objective space, aiming to solve many-objective optimization problems (MaOPs) effectively. The performance of MOEADs is strongly affected by how well the shapes of the reference vectors (RVs) match the Pareto fronts (PFs). To address this issue, a self-guided learning strategy is proposed for MOEADs: RVs are extracted from the population using a clustering method whose centroids are initialized with an angle-based density measurement and then adjusted to properly reflect the population's distribution; the resulting centroids yield adaptive RVs that self-guide the search process. However, the self-guided RV strategy is sensitive to the similarity metric used for clustering. Thus, a fuzzy prediction is further designed in MOEADs to estimate the population's shape, which improves the matching degree between the extracted RVs and the PF of the target MaOP. In this way, a fuzzy decomposition-based evolutionary algorithm is proposed specifically to handle MaOPs with irregular PF shapes. Experiments have demonstrated that the proposed algorithms are efficient at solving MaOPs with various PFs.
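The RV-extraction idea above can be illustrated with a minimal sketch: cluster the population's (normalized) objective vectors, seed the centroids with an angle-based farthest-point heuristic, and return the unit-length centroids as adaptive reference vectors. This is an illustrative simplification, not the thesis's exact algorithm; the function name and the plain k-means update are assumptions.

```python
import numpy as np

def extract_reference_vectors(objs, k, iters=10):
    """Sketch: extract k adaptive reference vectors from a population's
    objective vectors via k-means-style clustering in angular space.

    Centroid seeding uses an angle-based density heuristic: the first
    seed is the point farthest (by angle) from the unit diagonal, and
    each subsequent seed maximizes the minimum angle to prior seeds.
    """
    # Normalize objective vectors to unit directions
    unit = objs / np.linalg.norm(objs, axis=1, keepdims=True)

    # Angle-based farthest-point seeding
    diag = np.ones(objs.shape[1]) / np.sqrt(objs.shape[1])
    ang = np.arccos(np.clip(unit @ diag, -1.0, 1.0))
    seeds = [int(np.argmax(ang))]
    for _ in range(k - 1):
        nearest_cos = np.max(unit @ unit[seeds].T, axis=1)  # closest seed (max cosine)
        seeds.append(int(np.argmin(nearest_cos)))           # farthest by angle
    centroids = unit[seeds].copy()

    # Standard clustering iterations: assign by max cosine, re-average
    for _ in range(iters):
        labels = np.argmax(unit @ centroids.T, axis=1)
        for j in range(k):
            members = unit[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centroids[j] = c / np.linalg.norm(c)

    return centroids  # unit-length RVs reflecting the population's distribution
```

In a full MOEAD, these extracted RVs would replace or adjust the fixed simplex-lattice vectors each generation, so the decomposition tracks the population's (and hence the PF's) actual shape.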

Secondly, the search capabilities of MOEAs are strengthened by designing dedicated techniques in the variable space, aiming to solve large-scale multiobjective optimization problems (LMOPs) efficiently. Most evolutionary search strategies in existing MOEAs are inefficient when directly handling the variable space of LMOPs, so two complementary learning-driven techniques are proposed. The first accelerates the evolutionary search in the original large-scale variable space via competitive learning or by training a multilayer perceptron. The second learns to divide all variables into multiple groups according to their importance to the target LMOP, and then generates offspring by searching only a low-dimensional subspace formed by the more important variables or by representative variables of these groups (i.e., problem transformation). In addition, a new LMOP test suite is proposed that considers more realistic features, such as mixed formulations of the objective functions, mixed linkages among variables, and imbalanced contributions of variables to the objectives. The experimental results have validated that the new suite can comprehensively evaluate the performance of existing optimizers, and that the proposed large-scale optimizers show distinct advantages on both the existing LMOP suites and the newly proposed one.
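The grouping-then-subspace-search idea can be sketched as follows, under simplifying assumptions: variable importance is estimated by single-variable finite-difference perturbation of one objective, and offspring are generated by mutating only the important group. The thesis's actual importance measure and variation operators are more sophisticated; function names and the Gaussian mutation here are illustrative.

```python
import numpy as np

def group_variables_by_importance(f, x, n_groups=2, eps=1e-2):
    """Sketch: rank variables by how much perturbing each one changes
    the objective f at point x, then split them into n_groups groups
    (most important variables in the first group)."""
    base = f(x)
    impact = np.empty(len(x))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        impact[i] = np.abs(f(xp) - base)   # finite-difference sensitivity
    order = np.argsort(-impact)            # most important first
    return np.array_split(order, n_groups)

def offspring_in_subspace(parent, important_idx, rng, scale=0.1):
    """Generate a child by searching only the low-dimensional subspace
    of important variables; the rest are copied from the parent."""
    child = parent.copy()
    child[important_idx] += rng.normal(0.0, scale, size=len(important_idx))
    return child
```

Restricting variation to the important group shrinks the effective search dimensionality, which is the essence of the problem-transformation trick described above.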

Lastly, the performance of MOEAs is improved by training a neural network that transfers useful knowledge between different problems (tasks) when addressing multitask multiobjective optimization problems (MMOPs). Inspired by adversarial domain adaptation in transfer learning, a discriminative reconstruction network (DRN) model is created for each problem of an MMOP. At each generation, the DRN is trained on the currently obtained non-dominated solutions of all problems via backpropagation with gradient descent. With this well-trained DRN model, the proposed algorithm can transfer solutions of source problems directly to the target problem to assist its optimization, can evaluate the correlation between source and target problems to control the transfer of solutions, and can learn a dimensionality-reduced Pareto-optimal subspace of the target problem to improve the efficiency of transfer optimization in a large-scale search space. Moreover, a real-world MMOP suite is proposed that simulates the training of deep neural networks on multiple different classification tasks. Finally, the effectiveness of the proposed algorithm has been validated on this real-world MMOP suite and on two synthetic MMOP suites.
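To make the reconstruction-based transfer concrete, the sketch below trains a one-hidden-layer autoencoder (standing in for the DRN, which additionally has a discriminative branch) on a task's non-dominated solutions by gradient descent, then maps source solutions through it and uses reconstruction error as a rough inverse proxy for task correlation. All class and method names here are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

class ReconstructionNet:
    """Sketch of a reconstruction network: a one-hidden-layer
    autoencoder trained by backpropagation with gradient descent
    on one task's non-dominated solutions."""

    def __init__(self, dim, hidden, rng, lr=0.05):
        self.W1 = rng.normal(0, 0.1, (dim, hidden))   # encoder weights
        self.W2 = rng.normal(0, 0.1, (hidden, dim))   # decoder weights
        self.lr = lr

    def forward(self, X):
        H = np.tanh(X @ self.W1)                      # hidden code
        return H, H @ self.W2                         # reconstruction

    def train(self, X, epochs=200):
        for _ in range(epochs):
            H, Y = self.forward(X)
            err = Y - X                               # reconstruction error
            gW2 = H.T @ err / len(X)                  # decoder gradient
            gH = err @ self.W2.T * (1 - H ** 2)       # backprop through tanh
            gW1 = X.T @ gH / len(X)                   # encoder gradient
            self.W1 -= self.lr * gW1
            self.W2 -= self.lr * gW2

    def transfer(self, source_X):
        """Map source-task solutions through the target's network;
        low reconstruction error suggests closely related tasks."""
        _, Y = self.forward(source_X)
        corr = -np.mean((Y - source_X) ** 2)          # higher = more related
        return Y, corr
```

In the multitask setting, each problem would hold its own trained network, and the correlation estimate would gate how many source solutions are injected into the target population each generation.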