TY - JOUR
T1 - Effective Learning of a GMRF Mixture Model
AU - Finder, Shahaf E.
AU - Treister, Eran
AU - Freifeld, Oren
N1 - Funding Information:
This work was supported in part by the Israeli Council for Higher Education (CHE) via the Data Science Research Center, Ben-Gurion University of the Negev, Israel; and in part by the Lynn and William Frankel Center for Computer Science at BGU. The work of Shahaf E. Finder was supported by the Kreitman School of Advanced Graduate Studies. The work of Oren Freifeld was supported in part by the Israel Science Foundation under Personal Grant 360/21.
Publisher Copyright:
© 2013 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - Learning a Gaussian Mixture Model (GMM) is hard when the number of parameters is too large given the amount of available data. As a remedy, we propose restricting the GMM to a Gaussian Markov Random Field Mixture Model (GMRF-MM), as well as a new method for estimating the latter's sparse precision (i.e., inverse covariance) matrices. When the sparsity pattern of each matrix is known, we propose an efficient optimization method for the Maximum Likelihood Estimate (MLE) of that matrix. When it is unknown, we utilize the popular Graphical Least Absolute Shrinkage and Selection Operator (GLASSO) to estimate that pattern. However, we show that even for a single Gaussian, when GLASSO is tuned to successfully estimate the sparsity pattern, it does so at the price of a substantial bias in the values of the nonzero entries of the matrix, and we show that this problem only worsens in a mixture setting. To overcome this, we discard the nonzero values estimated by GLASSO, keep only its pattern estimate, and use it within the proposed MLE method. This yields an effective two-step procedure that removes the bias. We show that our 'debiasing' approach outperforms GLASSO in both the single-GMRF and the GMRF-MM cases. We also show that when learning priors for image patches, our method outperforms GLASSO even if we merely use an educated guess about the sparsity pattern, and that our GMRF-MM outperforms the baseline GMM on real and synthetic high-dimensional datasets.
AB - Learning a Gaussian Mixture Model (GMM) is hard when the number of parameters is too large given the amount of available data. As a remedy, we propose restricting the GMM to a Gaussian Markov Random Field Mixture Model (GMRF-MM), as well as a new method for estimating the latter's sparse precision (i.e., inverse covariance) matrices. When the sparsity pattern of each matrix is known, we propose an efficient optimization method for the Maximum Likelihood Estimate (MLE) of that matrix. When it is unknown, we utilize the popular Graphical Least Absolute Shrinkage and Selection Operator (GLASSO) to estimate that pattern. However, we show that even for a single Gaussian, when GLASSO is tuned to successfully estimate the sparsity pattern, it does so at the price of a substantial bias in the values of the nonzero entries of the matrix, and we show that this problem only worsens in a mixture setting. To overcome this, we discard the nonzero values estimated by GLASSO, keep only its pattern estimate, and use it within the proposed MLE method. This yields an effective two-step procedure that removes the bias. We show that our 'debiasing' approach outperforms GLASSO in both the single-GMRF and the GMRF-MM cases. We also show that when learning priors for image patches, our method outperforms GLASSO even if we merely use an educated guess about the sparsity pattern, and that our GMRF-MM outperforms the baseline GMM on real and synthetic high-dimensional datasets.
KW - GMRF
KW - Gaussian mixture model
KW - Probabilistic models
KW - Sparse inverse covariance matrix
UR - http://www.scopus.com/inward/record.url?scp=85123296614&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2022.3141911
DO - 10.1109/ACCESS.2022.3141911
M3 - Article
AN - SCOPUS:85123296614
VL - 10
SP - 7289
EP - 7299
JO - IEEE Access
JF - IEEE Access
SN - 2169-3536
ER -