Two noise tolerant incremental learning algorithms for single layer feed-forward neural networks

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

3 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 15643-15657
Number of pages: 15
Journal / Publication: Journal of Ambient Intelligence and Humanized Computing
Volume: 14
Issue number: 12
Online published: 19 Sept 2019
Publication status: Published - Dec 2023

Abstract

This paper focuses on noise-resistant incremental learning algorithms for single-layer feed-forward neural networks (SLFNNs). In a physical implementation of a well-trained neural network, faults and noise are unavoidable. Since biological neural networks are able to tolerate noise, we would like a trained artificial neural network to have a similar tolerance. This paper first develops a noise-tolerant objective function that handles multiplicative weight noise. We assume that multiplicative weight noise exists both in the weights between the input layer and the hidden layer and in the weights between the hidden layer and the output layer. Based on this objective function, we propose two noise-tolerant incremental extreme learning machine algorithms: the weight deviation incremental extreme learning machine (WDT-IELM) and the weight deviation convex incremental extreme learning machine (WDTC-IELM). Compared with the original extreme learning machine algorithms, the two proposed algorithms tolerate multiplicative weight noise much better. Several simulations demonstrate the superiority of the two proposed algorithms.
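For readers unfamiliar with the incremental extreme learning machine family that the proposed algorithms build on, the following is a minimal sketch of the plain incremental ELM (I-ELM): random hidden nodes are added one at a time, and each new node's output weight is fitted analytically to the current residual. This is only an illustration of the baseline algorithm; the paper's WDT-IELM and WDTC-IELM replace the standard least-squares objective with the noise-tolerant objective described in the abstract, whose details are not reproduced here. All function and variable names below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ielm_fit(X, y, max_nodes=50):
    """Plain incremental ELM (I-ELM) sketch for a single-output SLFNN.

    Each iteration adds one sigmoid hidden node with random input
    weights; its output weight is the least-squares fit to the
    current residual, so the training error never increases.
    (Hypothetical illustration, not the paper's noise-tolerant variant.)
    """
    n, d = X.shape
    e = y.astype(float).copy()          # residual error vector
    W, b, beta = [], [], []
    for _ in range(max_nodes):
        w = rng.standard_normal(d)      # random input weights for new node
        bias = rng.standard_normal()    # random bias for new node
        h = 1.0 / (1.0 + np.exp(-(X @ w + bias)))  # hidden-node outputs
        bta = (e @ h) / (h @ h)         # optimal output weight for residual
        e = e - bta * h                 # update residual
        W.append(w); b.append(bias); beta.append(bta)
    return np.array(W), np.array(b), np.array(beta)

def ielm_predict(X, W, b, beta):
    """Evaluate the trained SLFNN on new inputs."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # hidden-layer matrix
    return H @ beta
```

Because each added node projects out part of the residual, the training error is monotonically non-increasing in the number of hidden nodes; the noise-tolerant variants trade some of this clean-data fit for robustness when the stored weights are perturbed multiplicatively.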

Research Area(s)

  • Extreme learning machine, Fault tolerance, Neural network, Weight noise
