Video Lectures


The Johnson-Lindenstrauss lemma states that for any n points in Euclidean space and any error parameter 0 < eps < 1, the points can be mapped into k = O(eps^-2 log n) dimensions so that all pairwise distances are preserved up to a factor of 1 +- eps.

All known proofs of the lemma construct a distribution over linear mappings so that a random such mapping suffices with high probability. In this talk, I will present various proofs of the JL lemma satisfying, for example, (1) the support of the distribution is small, so that a random embedding can be selected with few random bits (e.g. O(log n loglog n) bits for constant eps, which is suboptimal by a loglog n factor), and (2) every embedding matrix in the support of the distribution is sparse (only O(eps*k) entries per column are non-zero), to speed up computation. I will also describe some open problems. This talk is based on joint works with Daniel Kane (Harvard).
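To make the sparsity property concrete, here is a minimal sketch of a sparse JL-style embedding matrix in the spirit described above: each column of the k x d matrix has exactly s nonzero entries (with s on the order of eps*k), each equal to +-1/sqrt(s). This is an illustrative sketch only; the function name `sparse_jl_matrix` and the parameter choices are assumptions, and the actual constructions in the talk use limited-independence hash functions rather than fully random choices.

```python
import numpy as np

def sparse_jl_matrix(d, k, s, rng):
    """Sketch of a sparse JL embedding matrix (illustrative, not the
    talk's exact construction).

    Returns a k x d matrix in which every column has exactly s nonzero
    entries, placed in uniformly random rows, with random signs scaled
    by 1/sqrt(s) so that each column has unit Euclidean norm.
    Applying it to a vector costs O(s) work per nonzero coordinate,
    instead of O(k) for a dense matrix.
    """
    A = np.zeros((k, d))
    for j in range(d):
        # pick s distinct target rows for column j
        rows = rng.choice(k, size=s, replace=False)
        # random signs, scaled so the column norm is exactly 1
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

# toy usage: embed 1000-dimensional points into k = 32 dimensions
rng = np.random.default_rng(0)
A = sparse_jl_matrix(d=1000, k=32, s=4, rng=rng)
x = rng.standard_normal(1000)
y = A @ x  # the embedded point, computed via a sparse matrix
```

Because only s entries per column are nonzero, a point with t nonzero coordinates is embedded in O(s*t) time rather than O(k*t), which is the speedup motivation mentioned in the abstract.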

In this talk we will discuss recent progress contributing to the GLS project, the second-generation proof of the Classification of Finite Simple Groups (joint work with R. Lyons, R. Solomon, and Ch. Parker).