Computer Science/Discrete Mathematics Seminar II
Algorithms for Overcomplete Tensor Decomposition
Tensor decomposition is the task of writing an n x n x n input tensor T as \sum_{i=1}^r a_i \otimes b_i \otimes c_i for the smallest possible r, called the rank of T. This problem is NP-hard in general, but Jennrich's algorithm succeeds whenever the a_i's, b_i's, and c_i's are generic and r <= n. The overcomplete regime, with r > n generic components, is a major challenge in both algebraic complexity theory and algorithm design, with many applications to statistical estimation. In this talk, I will describe recent joint work with Moitra and Wein that gives an efficient algorithm for overcomplete tensor decomposition whenever r <= (2 - \eps)n, for any constant \eps > 0. A key component is a new rank-detection "gadget" based on Koszul-Young flattenings, which reduces tensor rank certification to matrix rank certification.
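As context for the r <= n regime that the talk moves beyond, here is a minimal NumPy sketch of Jennrich's algorithm (simultaneous diagonalization of two random slices). The function name, the eigenvalue-matching step, and the least-squares recovery of the c_i's are illustrative choices, not details from the abstract; the sketch assumes generic components with r <= n.

```python
import numpy as np

def jennrich(T, r):
    """Sketch of Jennrich's algorithm for T = sum_i a_i (x) b_i (x) c_i,
    assuming generic components and r <= n. Returns factor matrices
    A, B, C whose columns are the recovered components (up to scaling)."""
    n = T.shape[0]
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    # Two random contractions along the third mode:
    # M_x = sum_i <c_i, x> a_i b_i^T, and similarly M_y.
    Mx = np.einsum('ijk,k->ij', T, x)
    My = np.einsum('ijk,k->ij', T, y)
    # Eigenvectors of Mx My^+ are the a_i's, with eigenvalues
    # <c_i, x>/<c_i, y>; eigenvectors of Mx^T (My^T)^+ are the b_i's,
    # with the same eigenvalues -- so we match the two decompositions
    # by eigenvalue.
    wa, A = np.linalg.eig(Mx @ np.linalg.pinv(My))
    wb, B = np.linalg.eig(Mx.T @ np.linalg.pinv(My.T))
    ia = np.argsort(-np.abs(wa))[:r]          # keep r dominant eigenvalues
    ib = np.argsort(-np.abs(wb))[:r]
    ia = ia[np.argsort(np.real(wa[ia]))]      # align the two lists
    ib = ib[np.argsort(np.real(wb[ib]))]
    A, B = np.real(A[:, ia]), np.real(B[:, ib])
    # Recover C by least squares from the unfolding T_(3) = (A ⊙ B) C^T,
    # where ⊙ is the Khatri-Rao (columnwise Kronecker) product.
    K = np.einsum('il,jl->ijl', A, B).reshape(n * n, r)
    C = (np.linalg.pinv(K) @ T.reshape(n * n, n)).T
    return A, B, C
```

The scaling ambiguity in A and B is absorbed into C by the final least-squares step, so the reconstructed tensor matches T exactly in exact arithmetic. The point of the talk's result is that no such spectral trick is known to work directly once r exceeds n, which is where the Koszul-Young flattening gadget enters.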