TY - JOUR
T1 - An Interpretable Bi-Branch Neural Network for Matrix Completion
AU - Li, Xiao Peng
AU - Wang, Maolin
AU - So, Hing Cheung
PY - 2022/11
Y1 - 2022/11
AB - The task of recovering a low-rank matrix from an incomplete matrix, also termed matrix completion, arises in various applications. Methods for matrix completion can be classified into linear and nonlinear approaches. Although the linear model provides a basic theory guaranteeing recovery of the missing entries with high probability, it has an obvious limitation: the latent factors are restricted to a linear subspace. Nonlinear models have therefore been suggested, mainly implemented using neural networks. In this paper, a novel and interpretable neural network is developed for matrix completion. Unlike existing neural networks whose structure is created by empirical design, the proposed network is devised by unfolding the matrix factorization formulation. Specifically, the two factors produced by matrix factorization form the two branches of the suggested network, called the bi-branch neural network (BiBNN). The row and column indices of each entry serve as the input of the BiBNN, while its output is the estimated value of that entry. The training procedure minimizes the fitting error between all observed entries and their predicted values, and the unknown entries are then estimated by feeding their coordinates into the trained network. The BiBNN is compared with state-of-the-art methods, including linear and nonlinear models, on synthetic data, image inpainting, and recommender systems. Experimental results demonstrate that the BiBNN is superior to the existing approaches in terms of restoration accuracy.
KW - Low rank
KW - Nonlinear matrix completion
KW - Neural network
KW - Image inpainting
KW - Recommender system
UR - http://www.scopus.com/inward/record.url?scp=85131666512&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85131666512&origin=recordpage
U2 - 10.1016/j.sigpro.2022.108640
DO - 10.1016/j.sigpro.2022.108640
M3 - RGC 21 - Publication in refereed journal
SN - 0165-1684
VL - 200
JO - Signal Processing
JF - Signal Processing
M1 - 108640
ER -