Deep learning, the reemergence of artificial neural networks, has recently proven a successful approach to artificial intelligence. In many fields, including computational linguistics, deep learning methods have largely displaced earlier machine learning approaches because of their superior performance.
In this public lecture, Manning will discuss some of the results in computer vision, speech, and language that support these claims. He will also explore bigger questions: why and how deep learning methods manage to be so successful, what new perspectives they suggest about human cognition and the language of thought, and what opportunities exist for deep learning to move beyond its core successes in sensory perception and classification to become a broader tool for artificial intelligence.
This lecture is part of the Theoretical Machine Learning Lecture Series, a new series curated by Sanjeev Arora, Visiting Professor in the School of Mathematics, and is made possible by a gift from Eric and Wendy Schmidt.