A New Approach to Matrix Perturbation: Beyond Worst-Case Analysis
Matrix-perturbation bounds quantify how the spectral characteristics of a baseline matrix A change under additive noise E. Classical results, including Weyl’s inequality for eigenvalues and the Davis–Kahan theorem for eigenvectors and eigenspaces, have long played a foundational role in mathematics. These bounds are known to be sharp in worst-case analyses.
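As a concrete illustration of the classical bounds mentioned above, here is a minimal numpy sketch (not part of the talk) verifying Weyl's inequality, which states that |λ_i(A + E) − λ_i(A)| ≤ ‖E‖ for every index i, where ‖E‖ is the spectral norm; the matrices and sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Symmetric base matrix A and a symmetric additive perturbation E.
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
F = 0.1 * rng.standard_normal((n, n))
E = (F + F.T) / 2

# Eigenvalues in ascending order, so they are matched index by index.
eig_A = np.linalg.eigvalsh(A)
eig_AE = np.linalg.eigvalsh(A + E)
op_norm_E = np.linalg.norm(E, 2)  # spectral (operator) norm ||E||

# Weyl's inequality: every eigenvalue moves by at most ||E||.
max_shift = np.max(np.abs(eig_AE - eig_A))
print(max_shift <= op_norm_E)  # True
```

The inequality holds deterministically for any symmetric E, which is what makes it a worst-case bound: no structure of E beyond its norm is used.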
In the past decade, we have developed a perturbation framework that leverages the interaction between E and the eigenvectors of A. This perspective yields quantitative improvements over classical bounds, particularly when E is random, a common scenario in real-world applications.
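The gap between worst-case and typical behavior can be seen numerically. The following sketch (an illustrative experiment, not the framework's actual results) compares the rotation of the leading eigenvector under small random noise with the classical Davis–Kahan-type bound sin θ ≤ ‖E‖/(δ − ‖E‖), where δ is the spectral gap of A; the matrix sizes and noise scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# A has top eigenvalue 3 and second eigenvalue 1: spectral gap delta = 2.
A = np.diag([3.0, 1.0] + [0.0] * (n - 2))
F = rng.standard_normal((n, n))
E = 0.02 * (F + F.T) / np.sqrt(2)  # small symmetric Gaussian noise

op_norm_E = np.linalg.norm(E, 2)
gap = 2.0
assert op_norm_E < gap  # needed for the bound below to apply

# Leading unit eigenvectors of A and of A + E (eigh sorts ascending).
v = np.linalg.eigh(A)[1][:, -1]
v_pert = np.linalg.eigh(A + E)[1][:, -1]
sin_theta = np.sqrt(1.0 - min(1.0, abs(v @ v_pert)) ** 2)

# Davis-Kahan combined with Weyl to control the perturbed gap:
# sin(theta) <= ||E|| / (gap - ||E||).
bound = op_norm_E / (gap - op_norm_E)
print(sin_theta <= bound)  # True: the classical bound holds
print(sin_theta / bound)   # typically well below 1 for random E
```

For random E, only the components of E aligned with the relevant eigenvectors rotate the eigenspace to first order, so the observed angle is usually much smaller than the worst-case bound; exploiting this interaction is the source of the quantitative improvements described above.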
This talk surveys these developments and their main ideas, focusing on recent results on eigenspace perturbation. Time permitting, we will discuss extensions to other spectral functionals and applications in several areas.