Theoretical Analysis of Multi-Bit Quanta Image Sensors and Node Selection in Fault-Tolerant Neural Network


Student thesis: Doctoral Thesis


Award date: 27 Apr 2022

Abstract

This thesis consists of two parts. The first part formally analyzes the properties of multi-bit quanta image sensor (QIS) systems, referred to as MBQIS systems. The second part develops node selection and training algorithms for fault-tolerant neural networks.

To analyze the properties of MBQIS systems, we first derive the log-likelihood function of the received photon counts in a spatiotemporal jot kernel and introduce the concept of the probability of all jots being saturated.
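To illustrate the kind of quantities involved, the sketch below writes a log-likelihood for clipped Poisson photon counts and the all-jots-saturated probability for a kernel of n-bit jots. Poisson arrivals and saturation at 2^n − 1 are standard MBQIS modelling assumptions, but the thesis's exact kernel likelihood may differ; the function names are illustrative.

```python
import math

def _poisson_log_pmf(k, theta):
    # log P(Y = k) for Y ~ Poisson(theta), computed stably via lgamma.
    return -theta + k * math.log(theta) - math.lgamma(k + 1)

def jot_loglike(theta, counts, n_bits):
    """Log-likelihood of multi-bit jot read-outs for exposure `theta`.

    Assumes i.i.d. Poisson(theta) photon arrivals per jot and an n-bit jot
    that saturates at k_max = 2**n_bits - 1 (a simplified model)."""
    k_max = 2 ** n_bits - 1
    # P(a single jot saturates) = P(Y >= k_max), via the complementary CDF.
    p_sat = 1.0 - sum(math.exp(_poisson_log_pmf(k, theta)) for k in range(k_max))
    ll = 0.0
    for c in counts:
        if c < k_max:
            ll += _poisson_log_pmf(c, theta)      # exact count observed
        else:
            ll += math.log(max(p_sat, 1e-300))    # only ">= k_max" observed
    return ll

def prob_all_saturated(theta, n_bits, num_jots):
    """P(every jot in a kernel of `num_jots` jots saturates), i.i.d. jots."""
    k_max = 2 ** n_bits - 1
    p_sat = 1.0 - sum(math.exp(_poisson_log_pmf(k, theta)) for k in range(k_max))
    return p_sat ** num_jots
```

As expected, the likelihood peaks near the empirical exposure, and the all-saturated probability grows with exposure and shrinks as the kernel gains jots.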

From the likelihood function, we obtain a maximum likelihood (ML) estimate of the exposure level and present an image construction algorithm, namely ML multi-bit (MLM). Based on the Fisher information concept, the thesis also derives the Cramér–Rao bound (CRB) on the variance of the estimated exposure. Since the estimate is ML based, MLM is asymptotically unbiased and the variance of its exposure estimate achieves the CRB asymptotically; the CRB can therefore serve as a performance indicator for all construction algorithms. From the jot saturation analysis, we accurately formulate the relationship between dynamic range and spatiotemporal kernel size. Together, these two analytical results model the relationships between sensor design parameters and performance metrics (the variance of the estimated exposure and the dynamic range). Because both results are independent of the construction algorithm used, they provide guidelines for designing a QIS system.
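As a toy numerical stand-in for an MLM-style estimate (the thesis's actual algorithm and CRB derivation are analytical), the exposure can be estimated by maximizing a clipped-Poisson log-likelihood over a grid; the grid bounds and resolution here are arbitrary assumptions. When no jot saturates, the likelihood reduces to a plain Poisson likelihood, so the estimate approaches the sample mean, whose variance attains the Poisson CRB of θ/N for N jots.

```python
import math

def clipped_loglike(theta, counts, k_max):
    # Log-likelihood under Poisson(theta) arrivals clipped (saturated) at k_max.
    p_sat = 1.0 - sum(math.exp(-theta + k * math.log(theta) - math.lgamma(k + 1))
                      for k in range(k_max))
    ll = 0.0
    for c in counts:
        if c < k_max:
            ll += -theta + c * math.log(theta) - math.lgamma(c + 1)
        else:
            ll += math.log(max(p_sat, 1e-300))
    return ll

def mlm_estimate(counts, k_max, lo=0.01, hi=50.0, steps=5000):
    """Grid-search ML exposure estimate (a crude numerical sketch of the
    MLM idea; bounds `lo`/`hi` and `steps` are arbitrary choices)."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(grid, key=lambda t: clipped_loglike(t, counts, k_max))

# With no saturated jots, the estimate matches the sample mean (here 2.0).
theta_hat = mlm_estimate([2, 3, 2, 1], k_max=7)
```

The grid search is only for illustration; any one-dimensional maximizer would do, since the log-likelihood is smooth in θ.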

The second part of the thesis focuses on developing training algorithms for fault-tolerant neural networks; to further improve their performance, we also develop efficient node selection algorithms. We first formulate a fault-tolerant objective function for extreme learning machines (ELMs). Based on this objective function, we develop two incremental training algorithms, namely the generalized incremental ELM (GI-ELM) and the generalized error minimized ELM (GEM-ELM). In the GI-ELM, k hidden nodes are added to the existing network at each iteration; only the output weights of the newly inserted nodes are computed, while all existing weights remain unchanged. In the GEM-ELM, k hidden nodes are likewise added at each iteration, but all output weights are recomputed via a recursive formula that reduces the computational complexity. Numerical analysis demonstrates that the proposed algorithms outperform conventional incremental training algorithms, which have weak fault-tolerant capability. It also shows that adding k nodes per iteration reduces the training time significantly. Since these algorithms add nodes to a network incrementally and efficiently, multiple sets of nodes can be generated and the optimal set selected by evaluating the training objective.
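The GI-ELM growth step described above, adding k random hidden nodes and solving only for their output weights, can be sketched as follows. This is a plain least-squares version without the thesis's fault-tolerant regularization term; the sigmoid activation and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gi_elm_step(X, residual, k):
    """Add k random sigmoid hidden nodes and fit only their output weights
    to the current residual (a least-squares sketch of the GI-ELM growth
    step; existing weights are left untouched, as in the thesis)."""
    W = rng.normal(size=(X.shape[1], k))          # random input weights
    b = rng.normal(size=k)                        # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # new hidden-node outputs
    beta, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return beta, residual - H @ beta

# Toy regression: the residual norm cannot increase when nodes are added,
# because beta = 0 would already reproduce the previous residual.
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
residual = y.copy()
errs = [np.linalg.norm(residual)]
for _ in range(5):                                # 5 iterations, k = 10 each
    _, residual = gi_elm_step(X, residual, k=10)
    errs.append(np.linalg.norm(residual))
```

Adding k nodes per iteration amortizes the per-iteration least-squares solve over k new weights, which is the source of the training-time saving mentioned above.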

Besides selecting nodes while training a network incrementally, we can perform node selection without incremental learning. We develop a training algorithm that explicitly selects radial basis function (RBF) nodes and trains the network simultaneously. We first define a fault-tolerant objective function for RBF networks. By introducing an indicator function into the training objective to limit the number of non-zero weights, we can explicitly specify the number of nodes in the network. An iterative algorithm is then developed to handle the resulting nonconvex, nonsmooth optimization problem, and we prove that it converges to a local minimum. Whereas existing node selection algorithms fix the number of nodes through time-consuming parameter tuning, our approach reduces the training time significantly.
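One generic way to realize an explicit node budget of this kind is iterative hard thresholding on the RBF output weights. The sketch below is a standard IHT loop under a plain least-squares loss, not the thesis's fault-tolerant algorithm or its convergence analysis; the Gaussian width, step size, and names are assumptions.

```python
import numpy as np

def select_rbf_nodes(X, y, centers, width, s, iters=200):
    """Keep at most s non-zero RBF output weights via iterative hard
    thresholding (a generic sketch of an explicit node budget)."""
    # RBF design matrix: one Gaussian node per candidate center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # safe gradient step size
    w = np.zeros(centers.shape[0])
    for _ in range(iters):
        w = w + step * Phi.T @ (y - Phi @ w)      # gradient step on LS loss
        small = np.argsort(np.abs(w))[:-s]        # all but the s largest
        w[small] = 0.0                            # enforce the node budget
    return w

# Toy example: 20 candidate centers, target built from 2 of them.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
centers = np.linspace(-1, 1, 20)[:, None]
Phi = np.exp(-((X - centers.T) ** 2) / (2.0 * 0.3 ** 2))
y = 2.0 * Phi[:, 5] - 1.5 * Phi[:, 14]
w = select_rbf_nodes(X, y, centers, width=0.3, s=4)
```

The thresholding step plays the role of the indicator function: the returned network has at most s active nodes by construction, with no separate tuning pass to find the node count.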