Theoretical Machine Learning Seminar

On Langevin Dynamics in Machine Learning

Langevin diffusions are continuous-time stochastic processes driven by the gradient of a potential function. As such they have many connections---some known and many still to be explored---to gradient-based machine learning. I'll discuss several recent results in this vein: (1) the use of Langevin-based algorithms in bandit problems; (2) the acceleration of Langevin diffusions; (3) how to use Langevin Monte Carlo without making smoothness assumptions. I'll present these results in the context of a general argument about the virtues of continuous-time perspectives in the analysis of discrete-time optimization and Monte Carlo algorithms.
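For readers unfamiliar with the connection the abstract alludes to, here is a minimal sketch (not from the talk itself) of the simplest discrete-time scheme in this family, the unadjusted Langevin algorithm: an Euler--Maruyama discretization of the diffusion dX_t = -∇U(X_t) dt + √2 dW_t, whose stationary distribution is proportional to exp(-U). The potential, step size, and iteration count below are illustrative choices, not values from the abstract.

```python
import numpy as np

def ula(grad_U, x0, step=0.05, n_steps=20_000, seed=None):
    """Unadjusted Langevin algorithm (ULA).

    Discretizes dX_t = -grad_U(X_t) dt + sqrt(2) dW_t, whose
    stationary law is proportional to exp(-U). Each step is a
    gradient descent step plus Gaussian noise of scale sqrt(2*step).
    """
    rng = np.random.default_rng(seed)
    x = float(x0)
    samples = np.empty(n_steps)
    for k in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
        samples[k] = x
    return samples

# Illustrative potential U(x) = x^2 / 2, so grad_U(x) = x and the
# target exp(-U) is the standard Gaussian density.
samples = ula(lambda x: x, x0=0.0, step=0.05, n_steps=20_000, seed=0)
```

The gradient-descent term is what ties the sampler to gradient-based optimization: with the noise term removed, the update is exactly gradient descent on U.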

Date & Time

June 11, 2020 | 3:00pm – 4:30pm


Remote Access Only - see link below


Michael I. Jordan


University of California, Berkeley


We welcome broad participation in our seminar series. To receive login details, interested participants will need to fill out a registration form accessible from the link below. Upcoming seminars in this series can be found here.

Register Here