Concentration inequalities for random matrix products

2020 
Abstract

Suppose $\{X_k\}_{k \in \mathbb{Z}}$ is a sequence of bounded independent random matrices with common dimension $d \times d$ and common expectation $\mathbb{E}[X_k] = X$. Under these general assumptions, the normalized random matrix product
$$Z_n = \left(I_d + \tfrac{1}{n} X_n\right)\left(I_d + \tfrac{1}{n} X_{n-1}\right) \cdots \left(I_d + \tfrac{1}{n} X_1\right)$$
converges to $e^X$ as $n \to \infty$. Normalized random matrix products of this form arise naturally in stochastic iterative algorithms, such as Oja's algorithm for streaming Principal Component Analysis. Here, we derive nonasymptotic concentration inequalities for such random matrix products. In particular, we show that the spectral norm error satisfies $\|Z_n - e^X\| = O\big((\log n)^2 \log(d/\delta)/\sqrt{n}\big)$ with probability exceeding $1 - \delta$. This rate is sharp in $n$, $d$, and $\delta$, up to logarithmic factors. The proof relies on two key ingredients: the matrix Bernstein inequality, which controls the concentration of sums of random matrices, and Baranyai's theorem from combinatorial mathematics. Concentration bounds for general classes of random matrix products are hard to come by in the literature, and we hope that our result will inspire further work in this direction.
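A minimal numerical sketch, not taken from the paper: the dimension d, the uniform noise model for the X_k, and the sample sizes below are illustrative assumptions chosen only to satisfy the boundedness and common-mean hypotheses. The script draws one realization of Z_n for increasing n and measures the spectral norm error against e^X; under the stated bound, quadrupling n should roughly halve the error, up to logarithmic factors.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 5                                   # ambient dimension (arbitrary choice)
X = rng.standard_normal((d, d)) / d     # common expectation E[X_k] = X

def sample_Z(n):
    """One draw of Z_n = (I + X_n/n)(I + X_{n-1}/n) ... (I + X_1/n)."""
    Zn = np.eye(d)
    for _ in range(n):
        Xk = X + rng.uniform(-1.0, 1.0, size=(d, d))  # bounded noise with mean X
        Zn = (np.eye(d) + Xk / n) @ Zn                # each new factor enters on the left
    return Zn

eX = expm(X)
for n in [100, 400, 1600, 6400]:
    err = np.linalg.norm(sample_Z(n) - eX, 2)         # spectral (operator 2-) norm error
    print(f"n = {n:5d}   ||Z_n - e^X|| = {err:.4f}")

In a typical run the printed errors decay by roughly a factor of two per line, consistent with the 1/sqrt(n) scaling; averaging over many realizations would make the trend smoother.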