TY - JOUR
T1 - Properties and performance of imperfect dual neural network-based kWTA networks
AU - Feng, Ruibin
AU - Leung, Chi-Sing
AU - Sum, John
AU - Xiao, Yi
PY - 2015/9/1
Y1 - 2015/9/1
N2 - The dual neural network (DNN)-based k-winner-take-all (kWTA) model is an effective approach for finding the k largest inputs from n inputs. Its major assumption is that the threshold logic units (TLUs) can be implemented in a perfect way. However, when differential bipolar pairs are used to implement the TLUs, the transfer function of the TLUs is a logistic function. This brief studies the properties of the DNN-kWTA model under this imperfect situation. We prove that, given any initial state, the network settles at the unique equilibrium point. In addition, the energy function of the model is revealed. Based on the energy function, we propose an efficient method to study the model performance when the inputs are drawn from continuous distributions. Furthermore, for uniformly distributed inputs, we derive a formula to estimate the probability that the model produces the correct outputs. Finally, for the case in which the minimum separation Δmin of the inputs is given, we prove that if the gain of the activation function is greater than 1/(4Δmin) max{ln 2n, 2 ln((1-ε)/ε)}, then the network produces the correct outputs, with winner outputs greater than 1-ε and loser outputs less than ε, where ε is a threshold less than 0.5.
AB - The dual neural network (DNN)-based k-winner-take-all (kWTA) model is an effective approach for finding the k largest inputs from n inputs. Its major assumption is that the threshold logic units (TLUs) can be implemented in a perfect way. However, when differential bipolar pairs are used to implement the TLUs, the transfer function of the TLUs is a logistic function. This brief studies the properties of the DNN-kWTA model under this imperfect situation. We prove that, given any initial state, the network settles at the unique equilibrium point. In addition, the energy function of the model is revealed. Based on the energy function, we propose an efficient method to study the model performance when the inputs are drawn from continuous distributions. Furthermore, for uniformly distributed inputs, we derive a formula to estimate the probability that the model produces the correct outputs. Finally, for the case in which the minimum separation Δmin of the inputs is given, we prove that if the gain of the activation function is greater than 1/(4Δmin) max{ln 2n, 2 ln((1-ε)/ε)}, then the network produces the correct outputs, with winner outputs greater than 1-ε and loser outputs less than ε, where ε is a threshold less than 0.5.
KW - Convergence
KW - dual neural network (DNN)
KW - logistic function
KW - winner-take-all (WTA)
UR - http://www.scopus.com/inward/record.url?scp=84940196127&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-84940196127&origin=recordpage
U2 - 10.1109/TNNLS.2014.2358851
DO - 10.1109/TNNLS.2014.2358851
M3 - RGC 21 - Publication in refereed journal
SN - 2162-237X
VL - 26
SP - 2188
EP - 2193
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 9
M1 - 6945381
ER -
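
The following is a minimal simulation sketch (not the authors' code) of the DNN-kWTA behaviour summarized in the abstract. It assumes the standard single-state DNN-kWTA formulation, dx/dt = Σ_i f(u_i - x) - k with outputs o_i = f(u_i - x), and uses a logistic f in place of the ideal TLU; the gain value, step size, and input vector are illustrative assumptions rather than values taken from the paper.

    import numpy as np

    # Logistic activation standing in for the imperfect TLU.
    def logistic(s, gain):
        return 1.0 / (1.0 + np.exp(-gain * s))

    # Integrate the assumed single-state dynamics dx/dt = sum_i f(u_i - x) - k
    # with forward Euler until they settle, then return the node outputs.
    def dnn_kwta(u, k, gain, dt=0.01, steps=200_000, tol=1e-9):
        x = 0.0
        for _ in range(steps):
            dx = logistic(u - x, gain).sum() - k
            if abs(dx) < tol:       # settled at the equilibrium point
                break
            x += dt * dx
        return logistic(u - x, gain)

    u = np.array([0.90, 0.20, 0.75, 0.40, 0.10])   # example inputs (illustrative)
    o = dnn_kwta(u, k=2, gain=40.0)
    print(np.round(o, 3))   # outputs near 1 for the 2 largest inputs, near 0 otherwise

With a sufficiently large gain, the rounded outputs are close to 1 for the two largest inputs and close to 0 for the rest, which mirrors the correct-output behaviour that the abstract's gain condition describes.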