A one-layer recurrent neural network with a discontinuous activation function for linear programming

Qingshan Liu, Jun Wang

Research output: Journal Publications and Reviews, RGC 21 - Publication in refereed journal, peer-reviewed

83 Citations (Scopus)

Abstract

A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network. © 2007 Massachusetts Institute of Technology.
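The abstract does not reproduce the network dynamics, so the following is only a rough illustration of the kind of model it describes: one state variable per decision variable, driven through a discontinuous sign activation with a high penalty gain. The dynamics, the gain sigma, the step size, and the toy problem below are assumptions made for illustration, not the formulation derived in the paper.

# Hypothetical sketch (not the paper's exact model): an exact-penalty subgradient
# flow with a discontinuous sign activation, simulated by forward Euler.
# One state variable per LP decision variable, as in the abstract.
import numpy as np

def lp_network(c, A, b, sigma=10.0, step=1e-3, iters=20000):
    """Simulate dx/dt = -c - sigma * A^T sgn(Ax - b) + sigma * 1[x < 0]."""
    x = np.zeros(len(c))
    for _ in range(iters):
        # Discontinuous activation: sign of the equality-constraint violation,
        # plus an indicator term that re-activates whenever a variable goes negative.
        dx = -c - sigma * (A.T @ np.sign(A @ x - b)) + sigma * (x < 0)
        x = x + step * dx
    return x

# Toy LP: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0 (optimum at (1, 0)).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(lp_network(c, A, b))  # expected to chatter near (1, 0) for large enough sigma

With the gain sigma large relative to the problem data, the simulated trajectory settles into a small neighborhood of the optimum, which loosely mirrors the "sufficiently high gain" condition stated in the abstract.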
Original language: English
Pages (from-to): 1366-1383
Journal: Neural Computation
Volume: 20
Issue number: 5
DOIs
Publication status: Published - May 2008
Externally published: Yes
