TY - GEN
T1 - A recurrent neural network for solving nonconvex optimization problems
AU - Hu, Xiaolin
AU - Wang, Jun
PY - 2006
Y1 - 2006
AB - An existing recurrent neural network for convex optimization is extended to solve nonconvex optimization problems. One of the prominent features of this neural network is the one-to-one correspondence between its equilibria and the Karush-Kuhn-Tucker (KKT) points of the nonconvex optimization problem. Conditions are derived under which the neural network (locally) converges to the KKT points. Ideally, the neural network should be stable at minimum solutions and unstable at maximum or saddle solutions. It is found that the neural network is most likely unstable at maximum solutions. Moreover, if the derived conditions are not satisfied at minimum solutions, they can be satisfied by transforming the original problem into an equivalent one with the p-power (or partial p-power) method; as a result, the neural network will locally converge to a minimum solution. Finally, two illustrative examples are provided to demonstrate the performance of the recurrent neural network. © 2006 IEEE.
UR - http://www.scopus.com/inward/record.url?scp=40649126591&partnerID=8YFLogxK
DO - 10.1109/IJCNN.2006.247077
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 0780394909
SN - 9780780394902
SP - 4522
EP - 4528
BT - IEEE International Conference on Neural Networks - Conference Proceedings
PB - IEEE
T2 - 2006 International Joint Conference on Neural Networks (IJCNN '06)
Y2 - 16 July 2006 through 21 July 2006
ER -