Title: Non-Negative Matrix Factorization for Multi-View Learning
DNr: Berzelius-2025-235
Project Type: LiU Berzelius
Principal Investigator: Fatemeh Sadjadi <fatemeh.sadjadi@umu.se>
Affiliation: Umeå universitet
Duration: 2025-07-30 – 2026-02-01
Classification: 10201
Keywords:

Abstract

We propose a unified optimization framework for multi-view matrix factorization that jointly learns a shared basis matrix \( \mathbf{U} \) and view-specific latent factors \( \mathbf{S}^\nu \) and \( \mathbf{V}^\nu \) across multiple data views \( \{ \mathbf{X}^\nu \}_{\nu=1}^m \). Our formulation minimizes a composite objective that balances reconstruction accuracy, self-expressiveness consistency, graph-regularized smoothness, and cross-view alignment of latent structures. Specifically, the model enforces orthogonality and non-negativity constraints on all factor matrices and incorporates Laplacian regularization using view-specific row and column graphs \( \mathbf{L}_r^\nu \) and \( \mathbf{L}_c^\nu \), respectively. A key component of the objective is a regularization term weighted by a hyperparameter \( \gamma \), encouraging agreement between learned latent representations and projections of the input data. To promote consistency across views, an additional term penalizes pairwise differences between latent factors \( \mathbf{S}^\nu \). The resulting formulation is well-suited for applications involving multi-relational or heterogeneous data, where joint representation learning across views is crucial.
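To make the composite objective concrete, the sketch below evaluates one plausible instantiation of it in NumPy. It is an illustration under stated assumptions, not the project's actual formulation: it assumes each view is reconstructed as \( \mathbf{X}^\nu \approx \mathbf{U} \mathbf{S}^\nu (\mathbf{V}^\nu)^\top \), uses \( \mathbf{U}^\top \mathbf{X}^\nu \mathbf{V}^\nu \) as the "projection of the input data" in the \( \gamma \)-weighted term, and introduces hypothetical weights `alpha` and `beta` for the Laplacian and cross-view terms; the orthogonality and non-negativity constraints would be handled by the update rules rather than the objective itself.

```python
import numpy as np

def multiview_objective(Xs, U, Ss, Vs, Lrs, Lcs, alpha=1.0, beta=1.0, gamma=1.0):
    """Sketch of a composite multi-view tri-factorization objective.

    Assumes X^v ~ U S^v V^v^T per view; `alpha` and `beta` are
    hypothetical weights not named in the abstract.
    """
    loss = 0.0
    m = len(Xs)
    for v in range(m):
        X, S, V = Xs[v], Ss[v], Vs[v]
        # reconstruction accuracy per view
        loss += np.linalg.norm(X - U @ S @ V.T, "fro") ** 2
        # graph-regularized smoothness via view-specific row/column Laplacians
        loss += alpha * (np.trace(U.T @ Lrs[v] @ U) + np.trace(V.T @ Lcs[v] @ V))
        # gamma-weighted agreement between latent factors and data projections
        loss += gamma * np.linalg.norm(S - U.T @ X @ V, "fro") ** 2
    # cross-view consistency: penalize pairwise differences between the S^v
    for v in range(m):
        for w in range(v + 1, m):
            loss += beta * np.linalg.norm(Ss[v] - Ss[w], "fro") ** 2
    return loss
```

In an alternating-minimization scheme, each factor matrix would be updated in turn (e.g. by multiplicative updates that preserve non-negativity) while this objective is monitored for monotone decrease.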