Fault Tolerance Regularizers and Final Prediction Error
Project: Research
Researcher(s)
Description
In classical training methods for fault tolerance, many potential faulty networks must be considered, so the energy function and the corresponding learning algorithm become computationally complicated. This project will first investigate different kinds of network faults, such as weight removal, node removal, and multiplicative weight noise. Objective functions with appropriate regularizers for handling these network faults will then be developed. With the modified energy functions, online first-order stochastic gradient learning algorithms, such as the back-propagation algorithm (BPA), can be used to estimate the weights. However, BPA learning is often very slow. As far as we are aware, few results have been reported on applying fault-tolerant regularizers to second-order online algorithms. Since the recursive least squares (RLS) approach has been shown to be a fast learning algorithm with a small number of tuning parameters, it is worth investigating whether RLS methods can be combined with fault-tolerant regularizers to speed up the learning process and to improve the fault tolerance of the trained network. Finally, the project extends the analysis of prediction error to faulty networks. The prediction error allows us not only to predict the performance of a faulty network, but also to select a model from various settings.
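To make the regularizer idea concrete, below is a minimal NumPy sketch for one of the fault types named above, multiplicative weight noise, on a linear-in-parameters (RBF-style) model. Under zero-mean multiplicative noise of variance `sigma_b2` on each output weight, the expected mean squared error of the faulty network equals the fault-free error plus a diagonal weighted-decay term, which admits a closed-form regularized solution; a plain RLS recursion with a fixed diagonal penalty is also sketched as a stand-in for the second-order online learning discussed above. The model, the function names, and the parameter choices are illustrative assumptions, not the project's actual formulation.

```python
# Minimal sketch (illustrative, not the project's actual method):
# a fault-tolerant regularizer for multiplicative weight noise on a
# linear-in-parameters RBF model, plus a plain RLS recursion.
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian RBF design matrix Phi with shape (N, M)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fault_tolerant_fit(Phi, y, sigma_b2):
    """Minimize (1/N)||y - Phi w||^2 + sigma_b2 * w^T G w with
    G = diag(mean(Phi**2, axis=0)).  This objective is the expected MSE
    when each weight w_i is corrupted to w_i * (1 + b_i), where
    E[b_i] = 0 and Var[b_i] = sigma_b2 (multiplicative weight noise)."""
    N = Phi.shape[0]
    G = np.diag((Phi ** 2).mean(axis=0))      # diagonal fault-tolerance penalty
    A = Phi.T @ Phi / N + sigma_b2 * G
    return np.linalg.solve(A, Phi.T @ y / N)

def rls_fit(Phi, y, delta):
    """Plain recursive least squares (forgetting factor 1).  Initializing
    P_0 = (delta * I)^{-1} makes the recursion equivalent to ridge
    regression with penalty delta * ||w||^2; a pre-estimated diagonal
    fault-tolerant penalty could be used in the same slot."""
    M = Phi.shape[1]
    P = np.eye(M) / delta                     # P_0 = (delta I)^{-1}
    w = np.zeros(M)
    for phi, t in zip(Phi, y):
        k = P @ phi / (1.0 + phi @ P @ phi)   # gain vector
        w = w + k * (t - phi @ w)             # prediction-error correction
        P = P - np.outer(k, phi @ P)          # covariance update
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
    centers = np.linspace(-1, 1, 15)[:, None]
    Phi = rbf_features(X, centers, width=0.3)
    w_ft = fault_tolerant_fit(Phi, y, sigma_b2=0.01)
    w_rls = rls_fit(Phi, y, delta=1e-3)
```

Note that the diagonal matrix G weights each squared weight by the average squared activation of its basis function, so heavily used weights are penalized more; this data-dependent weighting is what distinguishes the fault-tolerant penalty from plain weight decay.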
Detail(s)

| Project number | 7002480 |
| --- | --- |
| Grant type | SRG |
| Status | Finished |
| Effective start/end date | 1/04/09 → 18/11/10 |