TY - CHAP
T1 - Fault-tolerant incremental learning for extreme learning machines
AU - Leung, Ho-Chun
AU - Leung, Chi-Sing
AU - Wong, Eric W.M.
PY - 2016/10
Y1 - 2016/10
AB - The extreme learning machine (ELM) framework provides an efficient way to construct single-hidden-layer feedforward networks (SLFNs). Its main idea is that the input weights and bias terms of the hidden nodes are randomly generated, so that during training only the output weights of the hidden nodes need to be adjusted. The existing incremental learning algorithms for ELMs, incremental ELM (I-ELM) and convex I-ELM (CI-ELM), cannot handle the situation in which the trained network is faulty. This paper proposes two fault-tolerant incremental ELM algorithms, namely fault-tolerant I-ELM (FTI-ELM) and fault-tolerant CI-ELM (FTCI-ELM). The FTI-ELM tunes only the output weight of the newly added node to minimize the training set error of faulty networks, keeping all previously learned weights unchanged. Its fault-tolerant performance is better than that of I-ELM and CI-ELM. To further improve performance, the FTCI-ELM is proposed. It tunes the output weight of the newly added node and, via a simple scheme, modifies the existing output weights to maximize the reduction in the training set error of faulty networks.
KW - Extreme learning machines
KW - Fault tolerance
KW - Single hidden layer network
KW - Weight noise
UR - http://www.scopus.com/inward/record.url?scp=84992597249&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-84992597249&origin=recordpage
DO - 10.1007/978-3-319-46672-9_20
M3 - RGC 12 - Chapter in an edited book (Author)
SN - 9783319466712
VL - 9948 LNCS
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 168
EP - 176
BT - Neural Information Processing
A2 - Ozawa, Seiichi
A2 - Ikeda, Kazushi
A2 - Liu, Derong
A2 - Hirose, Akira
A2 - Doya, Kenji
A2 - Lee, Minho
PB - Springer Verlag
T2 - 23rd International Conference on Neural Information Processing, ICONIP 2016
Y2 - 16 October 2016 through 21 October 2016
ER -
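For orientation, the following is a minimal sketch (not the authors' implementation) of the plain I-ELM procedure that the abstract builds on: hidden nodes are added one at a time with randomly generated input weights and biases, and only the new node's output weight is computed in closed form to reduce the residual training error. The fault-tolerant variants proposed in the chapter (FTI-ELM and FTCI-ELM) replace this objective with the training set error of faulty networks; their update rules are given in the chapter and are not reproduced here. The function names, sigmoid activation, and uniform [-1, 1] sampling range below are illustrative assumptions.

import numpy as np

def i_elm(X, y, n_nodes=50, rng=None):
    # Plain I-ELM sketch: grow the hidden layer one node at a time with
    # random input weights/biases; fit only the new node's output weight.
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    e = y.astype(float).copy()            # current residual error on the training set
    W, b, beta = [], [], []               # input weights, biases, output weights
    for _ in range(n_nodes):
        w_new = rng.uniform(-1.0, 1.0, n_features)       # random input weights
        b_new = rng.uniform(-1.0, 1.0)                   # random bias
        h = 1.0 / (1.0 + np.exp(-(X @ w_new + b_new)))   # sigmoid hidden-node output
        beta_new = (e @ h) / (h @ h)      # least-squares weight for the new node only
        e = e - beta_new * h              # shrink residual; previous weights stay fixed
        W.append(w_new); b.append(b_new); beta.append(beta_new)
    return np.array(W), np.array(b), np.array(beta)

def i_elm_predict(X, W, b, beta):
    # Evaluate the trained SLFN on new inputs.
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta

A fault-aware version in the spirit of the chapter would compute beta_new from an objective averaged over the assumed weight-noise or fault statistics rather than from the nominal residual alone.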