A Fault Aware Broad Learning System for Concurrent Network Failure Situations

Research output: Journal Publications and Reviews · Publication in refereed journal · Peer-reviewed


Original language: English
Article number: 9380139
Pages (from-to): 46129-46142
Journal / Publication: IEEE Access
Online published: 17 Mar 2021
Publication status: Published - 2021



The broad learning system (BLS) framework provides an efficient way to train flat-structured feedforward networks and flat-structured deep neural networks. However, the classical BLS model and its variants consider the faultless situation only, in which the enhancement nodes, feature-mapped nodes, and output weights of a BLS network are assumed to be realized perfectly. When a trained BLS network suffers from coexisting weight/node failures, its performance degrades greatly unless a countermeasure is taken. To reduce the effect of weight/node failures on the performance of BLS networks, this paper proposes an objective function that enhances their fault-aware performance. The objective function contains a fault-aware regularizer term that handles the weight/node failures, and a learning algorithm is then derived from this objective function. Simulation results show that the proposed fault-aware BLS (FABLS) algorithm outperforms the classical BLS and two state-of-the-art BLS algorithms, namely the correntropy criterion BLS (CBLS) and the weighted BLS (WBLS).
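To make the abstract's setting concrete, the following is a minimal NumPy sketch of a BLS-style flat network whose output weights are trained with a fault-aware ridge penalty. All names, node counts, and the specific penalty `lam * w^T diag(A^T A) w` are illustrative assumptions standing in for the paper's actual fault-aware regularizer, which is not reproduced here.

```python
import numpy as np

def fault_aware_bls(X, Y, n_feature=20, n_enhance=40, lam=1e-2, rng=None):
    """Train a tiny BLS-style flat network with a fault-aware ridge penalty.

    Sketch only: the penalty lam * w^T diag(A^T A) w is a generic stand-in
    for a fault-aware regularizer (it shrinks weights attached to
    large-activation nodes more, mimicking robustness to multiplicative
    weight noise and open node faults); it is NOT the paper's objective.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape
    # Feature-mapped nodes: a random linear map (linear phi for simplicity).
    Wf = rng.standard_normal((d, n_feature))
    Z = X @ Wf
    # Enhancement nodes: a nonlinear map of the feature nodes.
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)
    # Stack feature and enhancement nodes into one flat hidden layer.
    A = np.hstack([Z, H])
    # Fault-aware ridge: per-node penalty scaled by mean squared activation.
    D = np.diag(np.diag(A.T @ A) / n)
    W = np.linalg.solve(A.T @ A + lam * D, A.T @ Y)
    return Wf, We, W

def predict(X, Wf, We, W):
    """Forward pass through the trained flat network."""
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return A @ W
```

As in the classical BLS, the only trained parameters are the output weights, obtained in closed form; the fault-aware term only changes the regularization matrix in that solve.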

Research Area(s)

  • broad learning system, Fault tolerance, incremental learning, multiplicative noise, open fault, regression
