Extreme Learning Machine: Its Fault Tolerant Algorithms and its Application to Telecommunication
極限機器學習：其容錯算法及其在電子通訊的應用
Student thesis: Doctoral Thesis
Award date  15 Apr 2020 
Link(s)
Permanent Link  https://scholars.cityu.edu.hk/en/theses/theses(6cc63fc6dbe14625b57e4aebfa47e39b).html 

Abstract
The extreme learning machine (ELM) framework is widely used in many real-life applications. It can be applied to regression and classification problems, and it has also shown superior performance on some communication-network and facial-recognition problems. In this thesis, several novel algorithms are developed within the ELM framework. Some are designed to solve the regression problem under faulty situations; others are applied to performance estimation for optical communication networks.
As described by many scholars, the ELM algorithm is an efficient way to build single-hidden-layer feedforward networks (SLFNs), because it initializes the hidden nodes randomly and trains only the output weights. However, the fault tolerance of the ELM algorithm is weak: when noise or faults exist in an SLFN trained by the ELM, its performance degrades drastically. Unfortunately, noise and faults are unavoidable in practical realizations of neural networks. Hence, this problem hinders the hardware implementation of the ELM.
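As a concrete illustration of this training scheme, the sketch below builds a tiny SLFN on a toy regression task: the hidden-layer weights and biases are drawn at random and never trained, and only the output weights are solved for in closed form. All names and sizes here (the sine target, 50 hidden nodes, the ridge term) are illustrative choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = sin(x) on [0, 2*pi].
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# ELM: hidden-layer input weights and biases are random and fixed;
# only the linear output layer is fitted.
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
b = rng.normal(size=n_hidden)                 # random biases
H = np.tanh(X @ W + b)                        # hidden-layer output matrix

# Output weights via regularized least squares (small ridge term for stability).
lam = 1e-6
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

y_hat = H @ beta
mse = float(np.mean((y_hat - y) ** 2))
```

Because only the linear output layer is fitted, training reduces to a single regularized least-squares solve, which is what makes the ELM fast; the fault-tolerant variants studied in the thesis build on this same structure.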
To address this problem, this thesis develops two novel ELM-based fault-tolerant incremental learning algorithms: the node-fault-tolerant incremental ELM (NFT-IELM) and the node-fault-tolerant convex incremental ELM (NFT-CIELM). Both insert hidden nodes into the SLFN one by one. The NFT-IELM determines the output weight of the newly inserted hidden node while keeping the output weights of the existing nodes unchanged. To boost performance, the NFT-CIELM not only determines the output weight of the newly inserted node but also updates the output weights of the existing hidden nodes. Both algorithms are locally convergent, and simulation results confirm that their fault-tolerant performance is excellent under faulty situations.
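The insertion step of a generic incremental ELM can be sketched as follows: each new random node's output weight has a closed-form value (the projection of the current residual onto the node's activation vector), so the training residual can never grow. Note this shows only the plain incremental scheme; the fault-tolerant variants in the thesis modify the training objective to account for node faults. The toy target and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = X.ravel() ** 2                    # toy regression target

e = y.copy()                          # current training residual
nodes = []                            # (input weight, bias, output weight)
for _ in range(100):
    w, b = rng.normal(size=X.shape[1]), rng.normal()
    h = np.tanh(X @ w + b)            # activation vector of the new node
    if h @ h < 1e-12:                 # skip a numerically useless node
        continue
    beta = (e @ h) / (h @ h)          # closed-form output weight
    e = e - beta * h                  # residual norm cannot increase
    nodes.append((w, b, beta))

mse = float(np.mean(e ** 2))
```

Each step subtracts the component of the residual along the new node's activation vector, which is why the output weights of earlier nodes can be left untouched.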
To meet the exponentially growing demand for telecommunication traffic, optical network service providers commit to service level agreements (SLAs). An SLA typically requires that the network's blocking probability stay below a specified level, so providers need a method that computes the blocking probability both accurately and quickly. However, traditional network simulations and analytical methods suffer from either long computation times or inaccurate estimates. This thesis proposes an ELM algorithm to estimate the blocking probability of optical networks. Based on the incremental ELM (I-ELM), we train SLFNs in the logarithm domain; hence, the algorithm is named Log-IELM. However, it produces large SLFNs with many redundant hidden nodes, so the alternating direction method of multipliers (ADMM) framework is used to remove these redundant nodes and optimize the SLFNs' performance. The improved algorithm is named ADMM-Log-IELM. It is globally convergent, and simulation results confirm that its performance is promising.
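For context, the classical analytical baseline for a single link is the Erlang B formula, which gives the blocking probability of an M/M/c/c loss system via a numerically stable recursion; network-level blocking, which the thesis targets, has no such simple closed form. The sketch below also hints at why log-domain training is attractive: blocking probabilities span many orders of magnitude, so log10 of the probability is a better-scaled regression label. The traffic and circuit values are illustrative.

```python
import math

def erlang_b(traffic: float, servers: int) -> float:
    """Blocking probability of an M/M/c/c loss system (Erlang B),
    via the standard stable recursion:
    B(0) = 1,  B(c) = A*B(c-1) / (c + A*B(c-1))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = traffic * b / (c + traffic * b)
    return b

# Blocking probabilities vary over orders of magnitude, which is why a
# log-domain target such as log10(B) is a natural regression label.
blocking = erlang_b(10.0, 20)          # 10 Erlangs offered to 20 circuits
log_target = math.log10(blocking)
```

Adding circuits drives the blocking probability down sharply (e.g. `erlang_b(10.0, 20)` is far smaller than `erlang_b(10.0, 10)`), illustrating the wide dynamic range the estimator must cover.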
Furthermore, some scholars have proposed an enhancement approach for ELM algorithms that yields compact SLFN architectures. The idea is to add a random-search-based selection phase: several candidate hidden nodes are generated, and the one yielding the smallest training error is kept. This approach produces relatively small SLFNs and achieves a smaller training error than other ELM algorithms. Combining this enhancement with the error-minimized ELM (EM-ELM), we again train SLFNs in the logarithm domain; the resulting algorithm is named Log-EEM-ELM. After training, the hidden nodes of the SLFN are well chosen. We then use the ADMM framework to optimize the SLFN's performance; the improved algorithm is named ADMM-Log-EEM-ELM, and we prove that it is globally convergent. Simulation results confirm that it provides better estimation performance with a smaller SLFN than the ADMM-Log-IELM.
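The selection phase described above can be sketched generically: at each insertion, draw several candidate random nodes, compute each candidate's closed-form output weight, and keep only the candidate that leaves the smallest residual. This follows the random-search enhancement in spirit only; the candidate count, target function, and seed are illustrative, and the thesis's log-domain algorithms add further machinery (log-domain targets and ADMM refinement) on top.

```python
import numpy as np

rng = np.random.default_rng(2)

X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.exp(-X.ravel() ** 2)           # toy regression target

e = y.copy()                          # current training residual
n_nodes, n_candidates = 30, 20
for _ in range(n_nodes):
    best_err, best_resid = np.inf, e
    for _ in range(n_candidates):     # random search over candidate nodes
        w, b = rng.normal(size=X.shape[1]), rng.normal()
        h = np.tanh(X @ w + b)
        if h @ h < 1e-12:             # skip a numerically useless candidate
            continue
        resid = e - ((e @ h) / (h @ h)) * h   # closed-form output weight
        err = resid @ resid
        if err < best_err:            # keep the candidate with least error
            best_err, best_resid = err, resid
    e = best_resid                    # commit the winning candidate

mse = float(np.mean(e ** 2))
```

Compared with accepting every random node, the selection phase drives the training error down with far fewer hidden nodes, which is exactly the compactness the enhancement approach aims for.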