A recurrent neural network for solving a class of generalized convex optimization problems

Alireza Hosseini, Jun Wang*, S. Mohammad Hosseini

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

71 Citations (Scopus)

Abstract

In this paper, we propose a penalty-based recurrent neural network for solving a class of constrained optimization problems with generalized convex objective functions. The model has a simple structure, described by a differential inclusion. It is applicable to any nonsmooth optimization problem with affine equality and convex inequality constraints, provided that the objective function is regular and pseudoconvex on the feasible region of the problem. It is proven herein that the state vector of the proposed neural network globally converges to the feasible region in finite time and stays there thereafter, and that it converges to the optimal solution set of the problem. © 2013 Elsevier Ltd.
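For orientation only: penalty-based neurodynamic models of the kind summarized in the abstract are commonly written as a differential inclusion that couples the subdifferential of the objective with a nonsmooth penalty on constraint violation. The sketch below shows one generic form for a problem with affine equality and convex inequality constraints; the penalty function P and the parameter \sigma are illustrative placeholders, not the paper's notation or its exact model.

  % Problem class (as described in the abstract):
  %   minimize   f(x)                    f regular and pseudoconvex on the feasible region
  %   subject to Ax = b,  g_i(x) <= 0    affine equalities, convex inequalities
  \begin{equation*}
    \dot{x}(t) \;\in\; -\,\partial f\bigl(x(t)\bigr) \;-\; \sigma\,\partial P\bigl(x(t)\bigr),
    \qquad
    P(x) \;=\; \lVert Ax - b \rVert_{1} \;+\; \sum_{i} \max\{0,\; g_i(x)\},
  \end{equation*}
  % where \partial denotes the (Clarke) subdifferential and \sigma > 0 is a
  % sufficiently large penalty parameter. This is a generic sketch of the
  % technique, not the exact dynamics proposed in the paper.

With an exact (nonsmooth) penalty term of this type and a sufficiently large \sigma, trajectories of such inclusions typically reach the feasible set in finite time, which is the kind of finite-time feasibility behaviour the abstract refers to.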
Original language: English
Pages (from-to): 78-86
Journal: Neural Networks
Volume: 44
DOIs
Publication status: Published - Aug 2013
Externally published: Yes

Research Keywords

  • Differential inclusion
  • Generalized convex
  • Nonsmooth optimization
  • Pseudoconvexity
  • Recurrent neural networks
