The Value of Errors in Proofs
A few months ago, a group of theoretical computer scientists posted a paper on the arXiv with the strange-looking title "MIP* = RE", surprising and impacting not only complexity theory but also some areas of mathematics and physics. Specifically, it resolved, in the negative, the "Connes' embedding conjecture" in the area of von Neumann algebras, and the "Tsirelson problem" in quantum information theory. You can find the paper here.
As it happens, both acronyms, MIP* and RE, represent proof systems of very different natures. To explain them, we'll take a meandering journey through the classical and modern definitions of proof. I hope to explain how the methodology of computational complexity theory, especially the modeling and classification of both problems and proofs by algorithmic efficiency, naturally leads to new such notions and results (and more acronyms, like NP). A special focus will be on notions of proof that allow interaction, randomness, and errors, and on their surprising power and magical properties.
The talk will be non-technical and requires no special background.