A Proximal Neurodynamic Network With Fixed-Time Convergence for Equilibrium Problems and Its Applications

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

26 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 7500-7514
Number of pages: 15
Journal / Publication: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 10
Online published: 10 Feb 2022
Publication status: Published - Oct 2023

Abstract

This article proposes a novel fixed-time converging proximal neurodynamic network (FXPNN) based on a proximal operator to solve equilibrium problems (EPs). A distinctive feature of the proposed FXPNN is its better transient performance than most existing proximal neurodynamic networks. It is shown that the FXPNN converges to the solution of the corresponding EP in fixed time under some mild conditions. It is also shown that the settling time of the FXPNN is independent of the initial conditions and that the fixed-time interval can be prescribed, unlike existing results with asymptotic or exponential convergence. Moreover, the proposed FXPNN is applied to solve composition optimization problems (COPs), ℓ1-regularized least-squares problems, mixed variational inequalities (MVIs), and variational inequalities (VIs). It is further shown, in the case of solving COPs, that fixed-time convergence can be established via the Polyak-Lojasiewicz condition, which is a relaxation of the more demanding convexity condition. Finally, numerical examples are presented to validate the effectiveness and advantages of the proposed neurodynamic network.
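To give a concrete flavor of the dynamics the abstract describes, the sketch below simulates a generic fixed-time proximal neurodynamic flow for an ℓ1-regularized least-squares problem by forward Euler integration. This is not the authors' exact FXPNN model: the flow ẋ = -(ρ₁‖e‖^{p-1} + ρ₂‖e‖^{q-1})·e with residual e = x - prox(x - α∇f(x)), 0 < p < 1 < q, is a common fixed-time construction from the literature, and every parameter name and value here is an illustrative assumption.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fxpnn_lasso(A, b, lam, alpha=0.1, rho1=1.0, rho2=1.0,
                p=0.5, q=1.5, dt=1e-3, steps=20000):
    """Euler simulation of a fixed-time-style proximal neurodynamic
    flow for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Hypothetical sketch; parameters are illustrative, not tuned."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        # Proximal residual: zero exactly at an optimizer.
        e = x - soft_threshold(x - alpha * grad, alpha * lam)
        n = np.linalg.norm(e)
        if n < 1e-12:
            break
        # Fixed-time gain: the ||e||^(p-1) term dominates near the
        # solution, the ||e||^(q-1) term far from it.
        gain = rho1 * n ** (p - 1.0) + rho2 * n ** (q - 1.0)
        x = x - dt * gain * e
    return x
```

On a trivial instance with A = I, the flow should settle at the soft-thresholded data, e.g. `fxpnn_lasso(np.eye(2), np.array([3.0, 0.5]), lam=1.0)` approaches `[2.0, 0.0]`. In the continuous-time analysis, the exponent p < 1 is what yields a settling time bound independent of the initial condition; the discretization here only approximates that behavior.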

Research Area(s)

  • Composition optimization problems, Control theory, Convergence, equilibrium problems, fixed-time convergence, Learning systems, mixed variational inequalities, Neurodynamics, Numerical stability, Optimization, Polyak-Lojasiewicz condition, Programming, proximal neurodynamic networks
