Modeling and Optimization in Semi-Supervised Low-Rank Representation


Student thesis: Doctoral Thesis





Award date: 2 Nov 2018


Learning a low-dimensional representation of the input data plays an important role in machine learning and computer vision. In real-world applications, the input data usually have high-dimensional features and may be corrupted by different levels of noise. A low-dimensional representation can suppress noise and outliers, alleviate the curse of dimensionality, and reduce the computational complexity of post-processing. Principal component analysis (PCA) is the best-known dimensionality reduction method; it decomposes the input into the product of an orthogonal basis matrix and a coefficient representation matrix. Since the representation matrix has a lower rank than the input matrix, PCA is essentially a low-rank representation model. Besides PCA, representative low-rank representation models include non-negative matrix factorization (NMF), Laplacian embedding, and robust PCA (RPCA). They are designed under different assumptions and thus have different properties for revealing the intrinsic structure of the input.
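As an illustrative sketch (not code from the thesis), the low-rank view of PCA can be demonstrated with a truncated SVD: the input is approximated by the product of an orthogonal basis matrix and a lower-rank coefficient matrix, and mild noise is largely removed by the truncation.

```python
import numpy as np

# Sketch: PCA as a low-rank factorization X ~ W @ H, where W is an
# orthonormal basis and H the low-dimensional coefficient matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 50))  # rank-3 signal
X += 0.01 * rng.standard_normal(X.shape)                          # small noise

r = 3                                    # target rank
U, s, Vt = np.linalg.svd(X, full_matrices=False)
W = U[:, :r]                             # orthogonal basis   (100 x r)
H = np.diag(s[:r]) @ Vt[:r]              # coefficient matrix (r x 50)

# relative reconstruction error stays at roughly the noise level
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The same "basis times coefficients" template underlies the NMF and Laplacian-embedding models discussed below, with different constraints on the two factors.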

In real-world applications, supervisory information may also be available and can provide valuable extra guidance. How to use it to improve the discriminative ability of low-rank representation models is an important task. To this end, this thesis focuses on modeling and optimization in semi-supervised low-rank representation models. The main contributions are summarized as follows:

As an important low-rank representation model, NMF decomposes a non-negative matrix into the product of two smaller non-negative matrices. Owing to the non-negativity constraints, NMF produces parts-based representations. We propose a novel semi-supervised NMF model that elegantly incorporates the label information. The proposed model generates discriminative low-dimensional representations that improve clustering performance. Specifically, a pair of complementary regularizers, i.e., similarity and dissimilarity regularizers, is incorporated into conventional NMF to guide the factorization; they constrain both the similarity and the dissimilarity of the low-dimensional representations of labeled data samples as well as a small amount of unlabeled ones. Technically, the proposed model is formulated as a well-defined constrained optimization problem and solved with an efficient alternating iterative algorithm. Moreover, we theoretically prove that the proposed algorithm converges to a limit point that satisfies the KKT conditions. Extensive experiments and comprehensive analysis demonstrate that the proposed model outperforms state-of-the-art methods by a large margin on five benchmark datasets in terms of clustering.
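For readers unfamiliar with the underlying factorization, the following sketch shows plain NMF with the classical multiplicative updates; the semi-supervised regularizers proposed in the thesis are additional terms on top of this baseline and are not reproduced here.

```python
import numpy as np

# Plain NMF baseline (multiplicative updates): V ~ W @ H with W, H >= 0.
# The thesis model augments this objective with similarity/dissimilarity
# regularizers; those terms are omitted in this sketch.
rng = np.random.default_rng(1)
V = rng.random((40, 30))        # non-negative input matrix
r = 5                           # inner rank
W = rng.random((40, r))
H = rng.random((r, 30))

eps = 1e-10                     # avoid division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates are multiplicative, non-negativity of W and H is preserved automatically at every iteration, which is also the property exploited by alternating algorithms of this kind.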

However, the above model is essentially a dimensionality reduction method and is not designed for clustering, which aims to partition a given set of data samples into several groups (or clusters) such that samples in the same group are more similar to each other than to those in other groups. To improve clustering ability, we propose a novel model for constrained clustering, in which a new regularizer elegantly incorporates a small amount of weakly supervisory information, in the form of pairwise constraints, to regularize the similarity between the low-dimensional representations of data samples. By exploring both the local and global structures of the data under the guidance of the supervisory information, the low-dimensional representations of the proposed model achieve strong separability and yield excellent clustering performance. Technically, the proposed model is formulated and relaxed as a convex optimization problem, which is solved efficiently with guaranteed global convergence. Experimental results on multiple benchmark datasets show that the proposed model produces higher clustering accuracy than state-of-the-art methods.
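As a simplified illustration of the general idea (not the thesis formulation), pairwise constraints can be injected into an affinity graph, after which a Laplacian embedding gives low-dimensional representations whose similarity reflects both the data structure and the supervision.

```python
import numpy as np

# Hypothetical sketch: fold must-/cannot-link constraints into a Gaussian
# affinity matrix, then use the graph Laplacian's bottom eigenvectors as
# the low-dimensional representation fed to a clustering step.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (10, 2)),
               rng.normal(3, 0.3, (10, 2))])   # two toy clusters

d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))               # Gaussian affinity

# hypothetical constraints: (0,1) must-link, (0,10) cannot-link
W[0, 1] = W[1, 0] = 1.0
W[0, 10] = W[10, 0] = 0.0

D = np.diag(W.sum(1))
L = D - W                                      # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:3]                       # skip the constant eigenvector
```

The thesis model instead formulates the constrained representation as a relaxed convex program with global convergence, rather than the two-stage pipeline sketched here.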

Moreover, we find that dissimilarity relations are important to the proposed constrained clustering model, but their number is usually limited in real-world applications. To remedy this, we propose a pairwise dissimilarity propagation (PDP) model that propagates the limited initial dissimilarity relations to the whole dataset and can further improve the clustering performance of the proposed constrained clustering model. The PDP model is formulated as a non-negativity-constrained trace minimization problem, which is solved by an element-wise iterative optimization algorithm whose convergence is theoretically proven. Experimental results on multiple benchmark datasets demonstrate that PDP promotes the clustering performance of the aforementioned constrained clustering model.

As weakly supervisory information, pairwise constraints composed of must-links and cannot-links are widely used in many semi-supervised tasks. However, the above PDP model only takes dissimilarity relations into consideration. Pairwise constraint propagation (PCP), which aims to augment the initial pairwise constraints, has been proposed for this purpose. However, existing PCP algorithms adopt a single matrix to carry all the information, which overlooks the differences between the two types of links, so that the discriminability of the propagated constraints is compromised. To this end, this thesis proposes a novel PCP model via dual adversarial manifold embeddings to fully exploit the potential of a limited number of constraints. Specifically, we propagate must-links and cannot-links with two separate carriers, called the similarity and dissimilarity matrices, under the guidance of the graph structure constructed from the data samples, while taking the adversarial relationship between the two matrices into consideration. Technically, the proposed model is formulated as a non-negativity-constrained minimization problem, and an efficient iterative algorithm with theoretically proven convergence is proposed to solve it. Experimental results in terms of constrained clustering on six benchmark datasets demonstrate the superior performance of the proposed model over state-of-the-art methods.
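To make the single-carrier baseline that the thesis improves upon concrete, the following hypothetical sketch spreads an initial constraint matrix over a normalized affinity graph; must-links and cannot-links share one matrix here, which is exactly the limitation the dual-carrier model addresses.

```python
import numpy as np

# Hypothetical single-carrier constraint propagation sketch: Z0 holds
# +1 for must-links, -1 for cannot-links, 0 for unknown pairs, and is
# diffused over a symmetrically normalized affinity graph S.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (8, 2)),
               rng.normal(3, 0.3, (8, 2))])   # two toy clusters

d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2)
D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(1)))
S = D_inv_sqrt @ W @ D_inv_sqrt               # normalized affinity

Z0 = np.zeros((16, 16))
Z0[0, 1] = Z0[1, 0] = 1.0                     # must-link inside cluster 1
Z0[0, 8] = Z0[8, 0] = -1.0                    # cannot-link across clusters

alpha, Z = 0.5, Z0.copy()
for _ in range(100):                          # contraction: converges for alpha < 1
    Z = alpha * S @ Z @ S.T + (1 - alpha) * Z0
```

In the dual-carrier setting of the thesis, the positive and negative entries would instead live in two separate non-negative matrices coupled by an adversarial term, so the two link types no longer cancel each other inside one carrier.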