In probabilistic principal component analysis (PPCA), an observed vector is modeled as a linear transformation of a low-dimensional Gaussian factor plus isotropic noise. We generalize PPCA to tensors by constraining the loading operator to have Tucker structure, yielding a probabilistic multilinear PCA model that enables uncertainty quantification and naturally accommodates multiple, possibly heterogeneous, tensor observations. We develop the associated theory: we establish identifiability of the loadings and noise variance, and we show that, unlike in matrix PPCA, the maximum likelihood estimator (MLE) exists even for a single tensor sample. We then study two estimators. First, we consider the MLE and propose an expectation–maximization (EM) algorithm to compute it. Second, exploiting the fact that Tucker maps correspond to rank-one elements after a Kronecker lifting, we design a computationally efficient estimator for which we provide finite-sample guarantees. Together, these results provide a coherent probabilistic framework and practical algorithms for learning from tensor-valued data.
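As background for the model described above: classical (vector-valued) PPCA posits x = W z + mu + eps with z ~ N(0, I_q) and eps ~ N(0, sigma^2 I_d), and its MLE can be computed by EM. The sketch below implements this standard vector case only (following Tipping and Bishop's updates), not the tensor/Tucker algorithm of the paper; the function name and the simulated data are illustrative.

```python
import numpy as np

def ppca_em(X, q, n_iter=200, seed=0):
    """EM for classical probabilistic PCA:
    x = W z + mu + eps,  z ~ N(0, I_q),  eps ~ N(0, sigma2 * I_d).
    Returns the estimated loading matrix W (d x q), noise variance
    sigma2, and mean mu, fitted from the rows of X (n x d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    S = Xc.T @ Xc / n                       # sample covariance (d x d)
    W = rng.standard_normal((d, q))         # random init of loadings
    sigma2 = 1.0
    for _ in range(n_iter):
        # E-step quantities enter via M = W'W + sigma2 I (q x q)
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        SW = S @ W
        # M-step updates (Tipping & Bishop closed forms)
        W_new = SW @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ SW)
        sigma2 = np.trace(S - SW @ Minv @ W_new.T) / d
        W = W_new
    return W, sigma2, mu
```

At a stationary point the fitted covariance W W' + sigma2 I matches the PPCA MLE; the loadings are identifiable only up to a rotation of the latent space, which is one reason identifiability statements (as in the abstract) are phrased in terms of the loading subspace and noise variance.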
Yaoming Zhen is an Assistant Professor in the School of Data Science at The Chinese University of Hong Kong, Shenzhen. Before joining CUHK-Shenzhen, he was a Postdoctoral Fellow in the Department of Statistical Sciences at the University of Toronto. Yaoming was a Hong Kong PhD Fellowship Scheme awardee and earned his Ph.D. in Data Science from the City University of Hong Kong in 2023. He obtained a B.S. in Mathematics from Sun Yat-sen University in 2019. He was a visiting researcher at the University of California, Berkeley, from the fall of 2022 to the spring of 2023, and also in the spring of 2018. Dr. Zhen's research focuses on tensor-based statistical machine learning, with connections to counterfactual inference, differential privacy, graphical models, higher-order network modeling, and transfer learning.