Neural Network Based Methods for Constrained Optimization


Student thesis: Doctoral Thesis

Award date: 16 Sep 2021


Constrained optimization refers to maximizing or minimizing an objective function subject to some constraints. The constraints often represent a range of requirements or restrictions in a particular practical application. This thesis is dedicated to solving several nonlinear constrained optimization problems arising from robust estimation and feature learning.

First, two robust estimation tasks are addressed, namely robust elliptic localization and robust ellipse fitting. In real-life situations, noise and outliers are unavoidable. The key idea of least squares (LS)-based estimators is to adjust the unknowns to minimize the squared ℓ2-norm of the errors between the noisy observations and the estimates. Although effective for handling Gaussian noise, LS-based methods can degrade greatly when the observed data contain outliers. In this thesis, the robust estimation tasks are formulated as non-smooth constrained optimization problems with ℓ0-norm or ℓ1-norm-based objective functions. Thereafter, robust estimators are developed with the use of the Lagrange programming neural network (LPNN).
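The contrast between the ℓ2 and ℓ1 criteria can be illustrated with a toy scalar example (not from the thesis): for a one-dimensional unknown, the ℓ2 minimizer is the sample mean, which an outlier drags away, while the ℓ1 minimizer is the sample median, which barely moves.

```python
# Illustrative sketch (not the thesis algorithm): for scalar estimation,
# the l2 (least-squares) minimizer is the mean, while the l1 minimizer
# is the median, which is far less sensitive to outliers.
measurements = [4.9, 5.1, 5.0, 4.8, 5.2, 50.0]  # last entry is an outlier

def l2_estimate(data):
    # argmin_x sum (d_i - x)^2  ->  sample mean
    return sum(data) / len(data)

def l1_estimate(data):
    # argmin_x sum |d_i - x|  ->  sample median
    s = sorted(data)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

print(l2_estimate(measurements))  # pulled toward the outlier (12.5)
print(l1_estimate(measurements))  # stays near the true value 5 (5.05)
```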

In a distributed multiple-input multiple-output (MIMO) system, there are a number of transmitters and receivers. The target is located using a set of range measurements from the MIMO system. Each range measurement equals the sum of the transmitter-to-target distance and the target-to-receiver distance, which corresponds to elliptic localization. Nevertheless, the range measurements may contain outliers due to non-line-of-sight (NLOS) propagation or signal interference. To reduce the influence of outliers, the localization problem is formulated as a non-smooth constrained optimization problem with an ℓ1-norm objective function. By using an approximation function of the ℓ1-norm together with the concept of the locally competitive algorithm (LCA), the LPNN circumvents the non-differentiability of the ℓ1-norm. Accordingly, two LPNN-based methods are developed, and their local stability is proved.
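To make the elliptic range model concrete, here is a small hypothetical example (the geometry and numbers are invented, and a brute-force grid search stands in for the LPNN dynamics): even with one NLOS-biased measurement, the ℓ1 cost is still minimized near the true target.

```python
import math

# Hypothetical 2-D geometry (not from the thesis): one transmitter,
# three receivers, and a target at (3, 4).
tx = (0.0, 0.0)
rxs = [(10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
target = (3.0, 4.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def bistatic_range(t, rx, x):
    # Elliptic range: transmitter-to-target plus target-to-receiver distance.
    return dist(t, x) + dist(x, rx)

# Noiseless measurements; corrupt the last one with an NLOS outlier.
ranges = [bistatic_range(tx, rx, target) for rx in rxs]
ranges[-1] += 8.0  # large positive bias, as caused by NLOS propagation

def l1_cost(x):
    # l1-norm of the residuals, robust to the single biased measurement.
    return sum(abs(r - bistatic_range(tx, rx, x))
               for r, rx in zip(ranges, rxs))

# Coarse grid search over [0, 10]^2 as a stand-in for the LPNN dynamics.
best = min(((i * 0.1, j * 0.1) for i in range(101) for j in range(101)),
           key=l1_cost)
print(best)  # close to the true target (3, 4)
```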

Apart from the robust target localization problem, the robust ellipse fitting task is addressed. Ellipse fitting means constructing an ellipse equation that best fits a series of 2-dimensional (2D) points extracted by edge detection techniques. However, the imperfect edge detection process may produce outliers. To achieve robustness against outliers, the fitting problem is formulated as a non-differentiable constrained optimization problem with an ℓ0-norm fitness function. Since the LPNN methodology cannot handle the non-differentiability of the ℓ0-norm term, the locally competitive algorithm (LCA) is utilized to approximate the derivatives of the ℓ0-norm. Moreover, the LPNN-based fitting algorithm is proved to be locally stable.
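The thresholding idea behind the LCA can be sketched as follows. The activation forms below are the standard ones associated with the ℓ1 and ℓ0 penalties in the LCA literature, not necessarily the exact approximations used in the thesis: the non-differentiable penalty is replaced by a neuron activation that thresholds an internal state.

```python
# Sketch of LCA-style thresholding activations (assumed standard forms,
# not the thesis' exact equations). Each activation maps an internal
# neuron state u to an output, sidestepping the non-differentiable norm.
def soft_threshold(u, lam):
    # Activation associated with the l1-norm penalty:
    # shrink the state toward zero by the threshold lam.
    if u > lam:
        return u - lam
    if u < -lam:
        return u + lam
    return 0.0

def hard_threshold(u, lam):
    # Activation associated with the l0-norm penalty: small states
    # are driven to zero, large states pass through unchanged.
    return u if abs(u) > lam else 0.0

print(soft_threshold(2.0, 0.5))   # 1.5
print(hard_threshold(2.0, 0.5))   # 2.0
print(hard_threshold(0.3, 0.5))   # 0.0
```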

Finally, a feature learning task based on the convolutional neural network (CNN) is studied. The constrained center loss (CCL) is proposed to enhance the discriminative power of deep features. The training objective of the CCL-based algorithm consists of two terms, namely, the softmax loss and the CCL. The softmax loss pushes the feature vectors of different classes apart, while the CCL clusters the feature vectors so that those of the same class are close to each other. In this way, the CNN-based feature extractor learns more robust features. During training, the CCL-based algorithm adopts an alternating learning strategy: the first step updates the cluster centers, and the second step updates the connection weights of the feature learning module. In addition, a simplified CCL (SCCL) approach is presented to avoid parameter redundancy. Compared with several state-of-the-art approaches, the proposed algorithms achieve better performance on four benchmark datasets.
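A minimal sketch of the center-loss-style term and the first step of the alternating scheme is given below. The feature vectors, centers, and the mean-based center update are hypothetical illustrations, not the thesis code; in the actual algorithm the second step would update the CNN weights by gradient descent with the centers held fixed.

```python
# Hypothetical sketch (not the thesis code) of a center-loss-style
# objective and the alternating update of the cluster centers.
features = {0: [[1.0, 0.0], [0.8, 0.2]],   # class label -> feature vectors
            1: [[0.0, 1.0], [0.2, 0.8]]}
centers = {0: [0.0, 0.0], 1: [0.0, 0.0]}

def center_loss(features, centers):
    # Sum of squared distances between each feature and its class center.
    return sum((f[0] - c[0]) ** 2 + (f[1] - c[1]) ** 2
               for lbl, fs in features.items()
               for f in fs
               for c in [centers[lbl]])

def update_centers(features):
    # Step 1 of the alternating scheme: set each center to the class mean.
    new = {}
    for lbl, fs in features.items():
        n = len(fs)
        new[lbl] = [sum(f[0] for f in fs) / n, sum(f[1] for f in fs) / n]
    return new

print(center_loss(features, centers))   # loss with zero-initialized centers
centers = update_centers(features)
print(center_loss(features, centers))   # strictly smaller after the update
# (Step 2 would update the CNN weights by gradient descent on
#  softmax loss + center loss, with the centers held fixed.)
```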