A Proximal Neurodynamic Network With Fixed-Time Convergence for Equilibrium Problems and Its Applications

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) · Publication in refereed journal · peer-review

2 Scopus Citations


Original language: English
Number of pages: 15
Journal / Publication: IEEE Transactions on Neural Networks and Learning Systems
Online published: 10 Feb 2022
Publication status: Online published - 10 Feb 2022


This article proposes a novel fixed-time converging proximal neurodynamic network (FXPNN), built on a proximal operator, for solving equilibrium problems (EPs). A distinctive feature of the proposed FXPNN is its better transient performance compared with most existing proximal neurodynamic networks. It is shown that the FXPNN converges to the solution of the corresponding EP in fixed time under some mild conditions. It is also shown that the settling time of the FXPNN is independent of the initial conditions and that the fixed-time interval can be prescribed, unlike existing results with asymptotic or exponential convergence. Moreover, the proposed FXPNN is applied to solve composition optimization problems (COPs), ℓ1-regularized least-squares problems, mixed variational inequalities (MVIs), and variational inequalities (VIs). It is further shown, in the case of solving COPs, that fixed-time convergence can be established via the Polyak-Lojasiewicz condition, which is a relaxation of the more demanding convexity condition. Finally, numerical examples are presented to validate the effectiveness and advantages of the proposed neurodynamic network.
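The abstract does not give the FXPNN's dynamics, so the following is only a point of reference for the ℓ1-regularized least-squares application it mentions: a minimal sketch of the *standard* proximal dynamical system (asymptotically, not fixed-time, convergent), whose equilibria coincide with the problem's minimizers. The soft-thresholding proximal operator is standard; the function names, step sizes, and Euler discretization are illustrative assumptions, not the paper's method.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_flow_lasso(A, b, lam=0.1, alpha=None, dt=0.1, steps=2000):
    """Forward-Euler integration of the standard proximal dynamical system
        dx/dt = -x + prox_{alpha*lam*||.||_1}( x - alpha * A^T (A x - b) ),
    whose equilibria are minimizers of 0.5*||Ax - b||^2 + lam*||x||_1.
    Illustrative baseline only: the paper's FXPNN modifies the right-hand
    side to obtain fixed-time (prescribed settling time) convergence.
    """
    m, n = A.shape
    if alpha is None:
        # Step size inside the prox: 1 / ||A||_2^2 keeps the map nonexpansive.
        alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(steps):
        x = x + dt * (-x + soft_threshold(x - alpha * A.T @ (A @ x - b),
                                          alpha * lam))
    return x

# Tiny synthetic example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = proximal_flow_lasso(A, b, lam=0.05)
```

Under these assumptions the flow recovers the sparse vector up to the small shrinkage bias that ℓ1 regularization introduces; the fixed-time variant's appeal is that its settling time is prescribed in advance rather than depending on problem conditioning.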

Research Area(s)

  • Composition optimization problems, Control theory, Convergence, equilibrium problems, fixed-time convergence, Learning systems, mixed variational inequalities, Neurodynamics, Numerical stability, Optimization, Polyak-Lojasiewicz condition, Programming, proximal neurodynamic networks