Study of Search Methods for Multi-objective Optimization and Escaping Saddle Points

Student thesis: Doctoral Thesis

Abstract

This thesis addresses two fundamental challenges in optimization: multi-objective optimization problems (MOPs) and saddle point problems. We present novel search methods for both differentiable and non-differentiable MOPs, along with efficient techniques for escaping saddle points. The main contributions are as follows.

For MOPs with differentiable objectives, we propose a decomposition gradient descent (DGD) method that directly optimizes the subproblems in MOEA/D using the NBI-style Tchebycheff approach. The DGD method combines gradient descent with an improved multi-gradient descent algorithm to handle the non-differentiable subproblems, achieving well-distributed Pareto optimal solutions.
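To make the decomposition idea concrete, the sketch below shows the standard Tchebycheff scalarization that MOEA/D-style methods build on; the thesis uses an NBI-style variant, so this is an illustration of the general approach, not the thesis's exact formulation, and the function names are hypothetical.

```python
import numpy as np

def tchebycheff(f_x, weight, z_star):
    """Standard Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z_i*|.

    f_x    : objective vector f(x) of a candidate solution x
    weight : weight vector w defining one MOEA/D subproblem
    z_star : ideal (reference) point in objective space
    """
    return np.max(weight * np.abs(f_x - z_star))

# Toy bi-objective example: one scalar subproblem of the decomposition.
f_x = np.array([0.6, 0.2])
w = np.array([0.5, 0.5])
z = np.array([0.0, 0.0])
g = tchebycheff(f_x, w, z)  # max(0.3, 0.1) = 0.3
```

Note that the max operator makes each subproblem non-smooth even when every objective is differentiable, which is why a plain gradient step is not enough on its own.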

For MOPs with non-differentiable objectives, we develop a collaboration-based direction estimation (DirE) method that leverages neighborhood information in MOEA/D to estimate search directions efficiently. Experimental results demonstrate the superiority of MOEA/D-DirE over other state-of-the-art multi-objective evolutionary algorithms on most test instances.
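As a rough illustration of gradient-free direction estimation from neighborhood information, the sketch below fits a local linear model of a scalarized subproblem over neighboring solutions and steps against the fitted slope. This is a generic least-squares construction offered as an assumption about the flavor of the idea, not the DirE method itself; all names are hypothetical.

```python
import numpy as np

def estimate_direction(x, g_x, neighbors, g_neighbors):
    """Estimate a descent direction without gradients.

    Fits a linear model g(y) ~ g(x) + a . (y - x) over neighboring
    solutions by least squares, then returns -a as the estimated
    descent direction. Illustrative sketch only.
    """
    D = np.array([y - x for y in neighbors])          # decision-space offsets
    dg = np.array([gy - g_x for gy in g_neighbors])   # scalarized-value offsets
    a, *_ = np.linalg.lstsq(D, dg, rcond=None)        # fitted local slope
    return -a                                         # descent direction estimate

# Toy quadratic g(x) = ||x||^2 with neighbors around x = (1, 1);
# the true gradient there is (2, 2).
g_fun = lambda v: float(np.dot(v, v))
x = np.array([1.0, 1.0])
nbrs = [x + np.array([0.1, 0.0]), x + np.array([0.0, 0.1]), x - np.array([0.1, 0.1])]
d = estimate_direction(x, g_fun(x), nbrs, [g_fun(y) for y in nbrs])
```

The appeal of reusing MOEA/D's neighborhood is that these function values are already computed during the evolutionary search, so the direction estimate comes almost for free.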

In addition, we introduce a new class of multi-objective optimization problems, called multiple multi-objective optimization problems (MMOPs), in which each instance comprises multiple MOPs with different decision spaces but a shared objective space. The proposed algorithm, MOEA/D-MM, effectively solves these MMOPs, as demonstrated through comprehensive benchmark testing.
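The defining structure of an MMOP can be pictured as a small data layout: several component MOPs, each with its own decision space, all mapping into one shared objective space. The sketch below is an illustrative encoding under that reading of the definition, not the thesis's formalism; all class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class MOP:
    """One component MOP: its own decision space, a shared objective space."""
    n_vars: int                                      # decision-space dimension
    lower: np.ndarray                                # variable lower bounds
    upper: np.ndarray                                # variable upper bounds
    objectives: Callable[[np.ndarray], np.ndarray]   # x -> m objective values

@dataclass
class MMOP:
    """Multiple MOPs whose objective vectors live in the same m-dim space."""
    n_objectives: int
    components: List[MOP]

# Toy MMOP: two component MOPs (2-D and 3-D decision spaces),
# both evaluated in the same 2-D objective space.
sphere = lambda x: np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])
mmop = MMOP(
    n_objectives=2,
    components=[
        MOP(2, np.zeros(2), np.ones(2), sphere),
        MOP(3, np.zeros(3), np.ones(3), sphere),
    ],
)
```

Because all components share one objective space, their solutions can be compared and aggregated into a single Pareto front, which is what distinguishes an MMOP from simply solving several unrelated MOPs.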

Finally, we propose a two-step simultaneous perturbation stochastic approximation (2-SPSA) method for escaping saddle points. This method requires only four function evaluations per iteration, significantly reducing computational cost while remaining effective at escaping saddle points.
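For context, the sketch below shows the classical SPSA gradient estimator, which costs two function evaluations per estimate regardless of dimension. The thesis's 2-SPSA budgets four evaluations per iteration; composing two such estimates is one plausible reading, but the exact two-step scheme is not specified here, so treat this as background rather than the thesis's method.

```python
import numpy as np

def spsa_gradient(f, x, c, rng):
    """Classical SPSA gradient estimate: two function evaluations per call.

    All coordinates are perturbed at once by a random +/-1 (Rademacher)
    vector delta, so the evaluation cost does not grow with dimension.
    The estimate is noisy but unbiased to first order.
    """
    delta = rng.choice([-1.0, 1.0], size=x.shape)     # Rademacher perturbation
    g_hat = (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * (1.0 / delta)
    return g_hat

# Toy saddle: f(x) = x0^2 - x1^2 has a saddle point at the origin.
f = lambda v: float(v[0] ** 2 - v[1] ** 2)
rng = np.random.default_rng(0)
x = np.array([0.5, 0.5])
g = spsa_gradient(f, x, c=1e-3, rng=rng)
# the true gradient at x is (1.0, -1.0); individual estimates scatter
# around it, and the average over many draws converges to it
```

A single estimate can be far from the true gradient, which is why averaging or a second perturbation step (as in a two-step scheme) is useful near saddle points, where the true gradient is small.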
Date of Award: 3 Apr 2025
Original language: English
Awarding Institution:
  • City University of Hong Kong
Supervisor: Qingfu ZHANG (Supervisor)
