A recurrent neural network for solving nonconvex optimization problems

Xiaolin Hu, Jun Wang

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

6 Citations (Scopus)

Abstract

An existing recurrent neural network for convex optimization is extended to solve nonconvex optimization problems. One of the prominent features of this neural network is the one-to-one correspondence between its equilibria and the Karush-Kuhn-Tucker (KKT) points of the nonconvex optimization problem. Conditions are derived under which the neural network (locally) converges to the KKT points. Ideally, the neural network should be stable at minimum solutions and unstable at maximum or saddle solutions. It is found that the neural network is most likely unstable at maximum solutions. Moreover, if the derived conditions are not satisfied at a minimum solution, they can be satisfied by transforming the original problem into an equivalent one with the p-power (or partial p-power) method; as a result, the neural network locally converges to that minimum solution. Finally, two illustrative examples are provided to demonstrate the performance of the recurrent neural network. © 2006 IEEE.
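The abstract does not reproduce the network's dynamics. As a rough illustration only, a projection-type recurrent network of the kind the paper builds on (in the style of the standard projection neural network for convex programs) can be simulated by Euler integration on a small nonconvex problem. The objective, bounds, step sizes, and function names below are illustrative assumptions, not the paper's own examples or model.

```python
import numpy as np

# Illustrative problem (an assumption, not from the paper):
# minimize the nonconvex f(x) = x^4 - 3x^2 + x over the box [-2, 2].
def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def project(x, lo=-2.0, hi=2.0):
    # Projection onto the feasible box.
    return np.clip(x, lo, hi)

def rnn_trajectory(x0, alpha=0.1, dt=0.01, steps=20000):
    # Euler-discretized projection-network dynamics:
    #   dx/dt = -x + P(x - alpha * grad_f(x))
    # Equilibria of this flow are exactly the KKT points of the
    # box-constrained problem, mirroring the correspondence the
    # abstract describes.
    x = x0
    for _ in range(steps):
        x += dt * (-x + project(x - alpha * grad_f(x)))
    return x

x_star = rnn_trajectory(x0=1.5)  # settles at a nearby local minimum
```

Starting near a local minimum, the trajectory converges to an interior stationary point where the gradient vanishes; starting near a maximum, the same flow is repelled, consistent with the instability at maximum solutions noted in the abstract.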
Original language: English
Title of host publication: IEEE International Conference on Neural Networks - Conference Proceedings
Publisher: IEEE
Pages: 4522-4528
ISBN (Print): 0780394909, 9780780394902
DOIs
Publication status: Published - 2006
Externally published: Yes
Event: 2006 International Joint Conference on Neural Networks (IJCNN '06) - Vancouver, BC, Canada
Duration: 16 Jul 2006 - 21 Jul 2006

Publication series

Name
ISSN (Print): 2161-4393
ISSN (Electronic): 2161-4407

Conference

Conference: 2006 International Joint Conference on Neural Networks (IJCNN '06)
Place: Canada
City: Vancouver, BC
Period: 16/07/06 - 21/07/06
