Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent

Research output: Journal Publications and Reviews (RGC: 21, 22, 62), publication in refereed journal, peer-reviewed

Author(s)

Qiyou Duan, Hadi Ghauch, Taejoon Kim

Detail(s)

Original language: English
Pages (from-to): 963-977
Journal / Publication: IEEE Transactions on Signal Processing
Volume: 70
Online published: 14 Feb 2022
Publication status: Published - 2022
Externally published: Yes

Abstract

Data representation techniques have contributed substantially to advances in data processing and machine learning (ML). Previous representation techniques focused on improving predictive power but perform rather poorly in terms of interpretability, that is, in extracting the underlying insights of the data. Recently, the Kolmogorov model (KM) was studied as an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables. However, the existing KM learning algorithms, which use semi-definite relaxation with randomization (SDRwR) or discrete monotonic optimization (DMO), have limited utility for big data applications because they do not scale well computationally. In this paper, we propose a computationally scalable KM learning algorithm based on regularized dual optimization combined with an enhanced gradient descent (GD) method. To make our method more scalable to large-dimensional problems, we propose two acceleration schemes: an eigenvalue decomposition (EVD) elimination strategy and an approximate EVD algorithm. Furthermore, a thresholding technique, which exploits an error bound analysis and the normalized Minkowski ℓ1-norm, is provided for selecting the number of iterations of the approximate EVD algorithm. When applied to big data applications, the proposed method is demonstrated to achieve comparable training/prediction performance with significantly reduced computational complexity: roughly two orders of magnitude less time overhead than the existing KM learning algorithms. Moreover, the accuracy of logical relation mining for interpretability using the proposed KM learning algorithm exceeds 80%.
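The precise dual formulation and acceleration schemes are given in the paper itself; as a rough, hypothetical sketch of the kind of computation involved, the Python snippet below pairs a plain gradient descent loop on a generic dual objective with a power-iteration approximation of the leading eigenpair (one common way to realize an approximate EVD, not necessarily the authors' scheme). All names here (`approx_top_eig`, `dual_gradient_descent`, `grad_dual`, the step size and iteration counts) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def approx_top_eig(A, num_iters=20, tol=1e-8):
    """Approximate the leading eigenpair of a symmetric matrix A by
    power iteration (a stand-in for an approximate EVD)."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v
        lam_new = float(v @ w)        # Rayleigh quotient: v is unit-norm, so v.T A v
        norm_w = np.linalg.norm(w)
        if norm_w == 0.0:             # v landed in the null space; eigenvalue is 0
            return 0.0, v
        v = w / norm_w
        if abs(lam_new - lam) < tol:  # early stop once the estimate stabilizes
            break
        lam = lam_new
    return lam, v

def dual_gradient_descent(grad_dual, y0, step=1e-2, num_steps=500):
    """Plain gradient descent on a (hypothetical) smooth regularized
    dual objective; grad_dual(y) returns the gradient at y."""
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(num_steps):
        y = y - step * grad_dual(y)
    return y

if __name__ == "__main__":
    # Toy check: recover the top eigenpair of a random symmetric PSD matrix.
    rng = np.random.default_rng(1)
    B = rng.standard_normal((200, 200))
    A = B @ B.T
    lam, v = approx_top_eig(A, num_iters=50)
    print(lam, np.linalg.norm(A @ v - lam * v))  # residual shrinks as num_iters grows
```

Per the abstract, each gradient evaluation in the proposed method is dominated by an EVD; truncating it to a few iterations (with the iteration count chosen by the proposed ℓ1-norm thresholding rule) and skipping it entirely when the EVD-elimination condition holds is what the authors credit for the roughly two-orders-of-magnitude reduction in time overhead.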

Research Area(s)

  • approximate eigenvalue decomposition (EVD), Approximation algorithms, big data, Data models, dual optimization, gradient descent (GD), Kolmogorov model (KM), large-dimensional dataset, low latency, Optimization, Prediction algorithms, Predictive models, Random variables, scalability, Signal processing algorithms

Citation Format(s)

Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent. / Duan, Qiyou; Ghauch, Hadi; Kim, Taejoon.
In: IEEE Transactions on Signal Processing, Vol. 70, 2022, p. 963-977.
