TY - JOUR
T1 - Influence of Imperfections on the Operational Correctness of DNN-kWTA Model
AU - Lu, Wenhao
AU - Leung, Chi-Sing
AU - Sum, John
PY - 2024/10
Y1 - 2024/10
N2 - The dual neural network (DNN)-based k-winner-take-all (WTA) model is able to identify the k largest numbers from its m input numbers. When there are imperfections in the realization, such as a non-ideal step function and Gaussian input noise, the model may not output the correct result. This brief analyzes the influence of these imperfections on the operational correctness of the model. Because of the imperfections, it is not efficient to use the original DNN-kWTA dynamics to analyze their influence. Hence, this brief first derives an equivalent model that describes the dynamics of the model under the imperfections. From the equivalent model, we derive a sufficient condition under which the model outputs the correct result. We then apply the sufficient condition to design an efficient estimation method for the probability that the model outputs the correct result. Furthermore, for inputs with a uniform distribution, a closed-form expression for this probability is derived. Finally, we extend our analysis to handle non-Gaussian input noise. Simulation results are provided to validate our theoretical results. © 2023 IEEE.
AB - The dual neural network (DNN)-based k-winner-take-all (WTA) model is able to identify the k largest numbers from its m input numbers. When there are imperfections in the realization, such as a non-ideal step function and Gaussian input noise, the model may not output the correct result. This brief analyzes the influence of these imperfections on the operational correctness of the model. Because of the imperfections, it is not efficient to use the original DNN-kWTA dynamics to analyze their influence. Hence, this brief first derives an equivalent model that describes the dynamics of the model under the imperfections. From the equivalent model, we derive a sufficient condition under which the model outputs the correct result. We then apply the sufficient condition to design an efficient estimation method for the probability that the model outputs the correct result. Furthermore, for inputs with a uniform distribution, a closed-form expression for this probability is derived. Finally, we extend our analysis to handle non-Gaussian input noise. Simulation results are provided to validate our theoretical results. © 2023 IEEE.
KW - Dual neural network (DNN)-based k-winner-take-all (WTA)
KW - input noise
KW - logistic activation function
KW - threshold logic units
UR - http://www.scopus.com/inward/record.url?scp=85162724855&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85162724855&origin=recordpage
U2 - 10.1109/TNNLS.2023.3281523
DO - 10.1109/TNNLS.2023.3281523
M3 - RGC 21 - Publication in refereed journal
AN - SCOPUS:85162724855
SN - 2162-237X
VL - 35
SP - 15021
EP - 15029
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 10
ER -
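
The abstract describes the DNN-kWTA dynamics only at a summary level. The sketch below is a minimal, illustrative simulation under assumed dynamics: a single state variable y is driven until the sum of the unit outputs equals k, the ideal step function is replaced by a logistic unit of gain 1/T, and the inputs are perturbed by additive Gaussian noise. The function name dnn_kwta, the Euler integration scheme, and all parameter names (T, noise_std, dt, steps) are hypothetical and are not taken from the paper.

```python
import numpy as np

def dnn_kwta(u, k, T=0.005, noise_std=0.0, dt=0.01, steps=5000, rng=None):
    """Crude simulation of DNN-kWTA dynamics with a logistic activation of
    gain 1/T standing in for the ideal step function, plus optional additive
    Gaussian input noise.  Returns a boolean mask of the reported winners.

    The Euler scheme and the parameter names are illustrative assumptions,
    not the formulation used in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = np.asarray(u, dtype=float)
    if noise_std > 0.0:
        u = u + rng.normal(0.0, noise_std, size=u.shape)  # noisy inputs

    def logistic(s):
        # clip the argument to avoid overflow warnings at high gain
        return 1.0 / (1.0 + np.exp(np.clip(-s / T, -60.0, 60.0)))

    y = 0.0  # single recurrent state variable of the dual network
    for _ in range(steps):
        x = logistic(u - y)            # non-ideal (logistic) threshold units
        y += dt * (np.sum(x) - k)      # drive the sum of outputs towards k
    return logistic(u - y) > 0.5


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.uniform(0.0, 1.0, size=10)      # m = 10 inputs drawn from U(0, 1)
    k = 3
    winners = dnn_kwta(u, k, noise_std=0.001, rng=rng)
    ideal = np.zeros(u.shape, dtype=bool)
    ideal[np.argsort(u)[-k:]] = True        # true k largest inputs
    print("operationally correct:", np.array_equal(winners, ideal))
```

Repeating the call over many noise draws gives a Monte Carlo estimate of the probability of operational correctness; the paper's contribution is an analysis and estimation method that avoids running the dynamics in this brute-force way.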