Quantized matrix completion through bilinear factorization and approximate message passing
Abstract
Compressed sensing is a rapidly developing field that has attracted considerable attention in electrical engineering, applied mathematics, statistics, and computer science. It also finds applications in computer vision, coding theory, signal processing, image processing, and algorithms for efficient data processing. Compressed sensing provides an elegant framework for recovering signals from compressed measurements. In this regard, the main focus of this thesis is the problem of bilinear factorization of sub-sampled matrices, also known as matrix factorization (MF), which has several variants, such as unconstrained MF, integer MF (IMF), non-negative MF (NMF), and probabilistic MF (PMF), to cite a few. In this thesis, we introduce a new Bayesian quantized MF solution based on the approximate message passing (AMP) framework to solve the discrete matrix completion (MC) problem. Specifically, we combine the bilinear generalized vector AMP (BiG-VAMP) algorithm with a quantizer and an expectation-maximization (EM) learning procedure that optimizes the quantization thresholds. This approach enables our algorithm to jointly recover the two factor matrices, denoted U and V, as well as their real-valued and quantized product matrices, from an incomplete discrete low-rank matrix Y observed through a component-wise selection transformation. Extensive computer simulations on synthetic and real-world discrete datasets show that the proposed method outperforms state-of-the-art techniques for MF with discrete-valued factors in terms of reconstruction performance. Moreover, we conduct a theoretical performance analysis of the proposed method based on the state evolution (SE) framework and show its consistency with the empirical MSE performance obtained by computer simulations.
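To make the observation model concrete, the following is a minimal NumPy sketch of how an incomplete discrete low-rank matrix Y can arise from quantizing a product U V^T and observing only a subset of its entries. The dimensions, quantization thresholds, and observation rate are illustrative assumptions, not values from the thesis; in the proposed method the thresholds would be learned via the EM procedure rather than fixed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem dimensions and rank (hypothetical values)
m, n, r = 50, 40, 3

# Ground-truth factor matrices U (m x r) and V (n x r)
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
Z = U @ V.T  # real-valued low-rank product matrix

# Scalar quantizer: maps each entry of Z to a discrete level.
# These thresholds are placeholders; the thesis learns them via EM.
thresholds = np.array([-1.0, 0.0, 1.0])
Zq = np.digitize(Z, thresholds)  # discrete matrix with levels in {0, 1, 2, 3}

# Component-wise selection: only a random subset of entries is observed
mask = rng.random((m, n)) < 0.5       # roughly 50% of entries observed
Y = np.where(mask, Zq, np.nan)        # incomplete discrete observation

print(Y.shape, np.isnan(Y).mean())
```

The recovery task is then the inverse problem: given only the observed discrete entries of Y, jointly estimate U, V, the real-valued product Z, and its quantized counterpart.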