Collaborative Neurodynamic Optimization Approaches to Global Optimization and Its Applications
(Chinese title: Global Optimization Methods and Applications Based on the Collaboration of Neural Network Groups)
Student thesis: Doctoral Thesis
Detail(s)

Award date: 6 Sep 2019
Permanent Link: https://scholars.cityu.edu.hk/en/theses/theses(b95125ee-149a-4f8f-9e27-3c3d870075d6).html
Abstract
Global optimization seeks to minimize a nonconvex objective function subject to nonconvex constraints. Many problems, such as combinatorial optimization and mixed-integer programming problems, can be formulated equivalently as global optimization problems. Most global optimization problems have numerous local minima, so finding the global optimal solution efficiently is a demanding task for any global optimization approach.
As parallel optimization approaches, various neurodynamic optimization models have been successfully developed for solving constrained convex optimization problems. For global optimization, however, the stability of neurodynamic optimization models is rarely discussed. Additionally, an individual neurodynamic optimization model easily becomes stuck at a local minimum of a global optimization problem.
Motivated by the above discussions, this thesis comprises four parts under a unified framework. The first part develops an effective and efficient approach for solving global optimization problems with nonconvex inequality and equality constraints: a group of neurodynamic optimization models is employed to search for optimal solutions collaboratively, coordinated by meta-heuristic algorithms (e.g., particle swarm optimization). In the second part, to solve biconvex optimization problems, a two-timescale duplex neurodynamic system consisting of two recurrent neural networks operating at different timescales is proposed and proven to converge almost surely to a global optimal solution. In the third part, to solve the optimization problem formulated for nonnegative matrix factorization efficiently, an algorithm based on a discrete-time projection neural network with backtracking step-size adaptation is proposed and proven to reduce the objective function value iteratively until a partial optimum is attained. In the fourth part, sparse coding and sparse nonnegative matrix factorization are formulated as global optimization problems and solved by using multiple neurodynamic optimization models.
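The collaborative search scheme of the first part can be illustrated with a minimal sketch: several local searches (standing in for individual neurodynamic optimization models) run from different initial states, and a particle swarm optimization rule repositions those initial states between rounds. The Rastrigin test function, the plain gradient-descent local search, and all parameter values here are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

def rastrigin(x):
    # Nonconvex benchmark with many local minima; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def local_search(x0, lr=0.001, steps=1000):
    # Euler-discretized gradient flow, standing in for one neurodynamic model
    # converging to a local minimum from its initial state.
    x = x0.copy()
    for _ in range(steps):
        grad = 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)
        x -= lr * grad
    return x

def cno(n_models=10, dim=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_models, dim))  # initial states of the models
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.full(n_models, np.inf)
    gbest, gbest_val = None, np.inf
    for _ in range(iters):
        # Each model converges to a local minimum from its current initial state.
        local = np.array([local_search(p) for p in pos])
        vals = np.array([rastrigin(x) for x in local])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = local[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest_val, gbest = vals.min(), local[vals.argmin()]
        # PSO update rule repositions the initial states for the next round.
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = local + vel
    return gbest, gbest_val
```

Because the best-so-far solution is only ever replaced by a better one, the returned objective value is monotonically non-increasing over rounds, which mirrors how the swarm-coordinated group escapes local minima that would trap a single model.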
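The third part's idea of decreasing the NMF objective with a backtracked step size can be sketched, under assumptions, as alternating projected gradient descent on the two factors: each update shrinks its step until the squared Frobenius objective stops increasing. This generic scheme is not the thesis's discrete-time projection neural network; the backtracking factor and stopping rule are illustrative choices.

```python
import numpy as np

def nmf_backtracking(X, rank, iters=200, beta=0.5, seed=0):
    # Alternating projected-gradient NMF: each factor update takes a gradient
    # step, projects onto the nonnegative orthant, and backtracks the step
    # size until the objective ||X - WH||_F^2 does not increase.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    obj = lambda W, H: np.linalg.norm(X - W @ H) ** 2
    for _ in range(iters):
        for which in ("W", "H"):
            grad = (W @ H - X) @ H.T if which == "W" else W.T @ (W @ H - X)
            step, current = 1.0, obj(W, H)
            while True:
                if which == "W":
                    cand = np.maximum(W - step * grad, 0.0)  # project onto x >= 0
                    new = obj(cand, H)
                else:
                    cand = np.maximum(H - step * grad, 0.0)
                    new = obj(W, cand)
                if new <= current or step < 1e-12:
                    break
                step *= beta  # backtrack: shrink the step until descent
            if which == "W":
                W = cand
            else:
                H = cand
    return W, H
```

For instance, factorizing a random rank-3 nonnegative matrix `X = A @ B` with `nmf_backtracking(X, 3)` returns nonnegative factors whose product approximates `X`; every accepted update satisfies the descent test, so the objective value decreases monotonically across iterations.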