Matrix Inference in Growing Rank Regimes
The inference of a large symmetric signal matrix 𝐒∈ℝ^N× N corrupted by additive Gaussian noise is considered for two regimes of growth of the rank M as a function of N. For sub-linear ranks M=Θ(N^α) with α∈(0,1), the mutual information and minimum mean-square error (MMSE) are derived for two classes of signal matrices: (a) 𝐒=𝐗𝐗^⊺ with the entries of 𝐗∈ℝ^N× M independent and identically distributed; (b) 𝐒 sampled from a rotationally invariant distribution. Surprisingly, the formulas match the rank-one case. Two efficient algorithms are explored and conjectured to saturate the MMSE when no statistical-to-computational gap is present: (1) Decimation Approximate Message Passing; (2) a spectral algorithm based on a Rotation Invariant Estimator. For linear ranks M=Θ(N), the mutual information is rigorously derived for signal matrices drawn from a rotationally invariant distribution. Close connections with scalar inference in free probability are uncovered, which allow one to deduce a simple formula for the MMSE as an integral involving only the limiting spectral measure of the data matrix. An interesting issue is whether the known information-theoretic phase transitions for rank one, and hence also for sub-linear rank, persist in the linear-rank regime. The analysis suggests that only a smoothed-out trace of the transitions persists. Furthermore, the change of behavior between the low- and truly high-rank regimes occurs only at the linear scale α=1.
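As a minimal, purely illustrative sketch of the setting described above, the snippet below generates a class-(a) signal 𝐒=𝐗𝐗^⊺ with sub-linear rank M=⌊N^α⌋, observes it through an additive Gaussian noise channel, and applies a rotation invariant estimator of the generic form (eigenvectors of the data matrix are kept, eigenvalues are passed through a scalar function). The normalizations, the soft-threshold shrinkage function, and all parameter values are assumptions chosen for illustration; they are not the estimator or the MMSE-achieving denoiser derived in the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): N is the matrix size,
# alpha in (0,1) sets the sub-linear rank M = Theta(N^alpha).
N, alpha, noise_var = 500, 0.5, 0.5
rng = np.random.default_rng(0)
M = int(N ** alpha)

# Class (a) signal: S = X X^T with i.i.d. entries of X in R^{N x M}.
X = rng.standard_normal((N, M)) / np.sqrt(N)
S = X @ X.T

# Additive Gaussian noise channel: Y = S + sqrt(noise_var) * Z,
# with Z a symmetric (GOE-like) Gaussian matrix.
Z = rng.standard_normal((N, N))
Z = (Z + Z.T) / np.sqrt(2 * N)
Y = S + np.sqrt(noise_var) * Z

# A rotation invariant estimator keeps the eigenvectors of the data
# matrix Y and replaces each eigenvalue by a scalar function of it.
# The shrinkage used here (a simple soft threshold with a hypothetical
# bulk-edge cutoff) is only a placeholder for that scalar function.
eigval, eigvec = np.linalg.eigh(Y)
threshold = 2 * np.sqrt(noise_var)
shrunk = np.where(eigval > threshold, eigval - threshold, 0.0)
S_hat = eigvec @ np.diag(shrunk) @ eigvec.T

# Empirical per-entry mean-square error, to be compared with an MMSE formula.
mse = np.mean((S_hat - S) ** 2)
print(f"rank M = {M}, per-entry MSE = {mse:.4f}")
```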