Princeton University Machine Learning for the Sciences

Generalized Lagrangian Networks

Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? Recent work (Greydanus et al. 2019) proposed Hamiltonian Neural Networks (HNNs), which learn the Hamiltonian of a physical system directly from data. A key issue with these models is that they require a priori knowledge of the system's conjugate position and momentum coordinates, and are therefore difficult to train on arbitrary coordinates such as pixels. In this talk, I will introduce Generalized Lagrangian Networks (GLNs), which learn Lagrangians instead of Hamiltonians. These models do not require conjugate position and momentum coordinates and perform well in situations where generalized momentum is difficult to compute (e.g., the double pendulum). This makes them particularly appealing for use with learned latent representations, a case where HNNs struggle. Unlike previous work on learning Lagrangians (Lutter et al. 2019), our approach is fully general and extends to non-holonomic systems such as the 1D wave equation.
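To give a rough sense of the idea (this is an illustration, not the speaker's implementation): a neural network parameterizes a scalar Lagrangian L(q, q̇), and accelerations follow from the Euler-Lagrange equation d/dt (∂L/∂q̇) = ∂L/∂q, which expands to q̈ = (∂²L/∂q̇²)⁻¹ (∂L/∂q − (∂²L/∂q∂q̇) q̇). Below is a minimal JAX sketch under assumed details: the one-hidden-layer MLP, the parameter layout, and the names lagrangian and acceleration are all hypothetical.

import jax
import jax.numpy as jnp

def lagrangian(params, q, q_dot):
    # Hypothetical one-hidden-layer MLP over the concatenated state (q, q_dot);
    # the talk does not specify an architecture.
    x = jnp.concatenate([q, q_dot])
    h = jnp.tanh(params["W1"] @ x + params["b1"])
    return jnp.dot(params["w2"], h) + params["b2"]  # scalar L(q, q_dot)

def acceleration(params, q, q_dot):
    # Solve the expanded Euler-Lagrange equation for q_ddot:
    #   (d²L/dq_dot²) q_ddot = dL/dq − (d²L/(dq dq_dot)) q_dot
    L = lambda q_, qd_: lagrangian(params, q_, qd_)
    grad_q = jax.grad(L, argnums=0)(q, q_dot)                           # dL/dq
    hess_qd = jax.hessian(L, argnums=1)(q, q_dot)                       # d²L/dq_dot²
    mixed = jax.jacobian(jax.grad(L, argnums=1), argnums=0)(q, q_dot)  # d²L/(dq dq_dot)
    # In practice the Hessian may need regularization to stay invertible.
    return jnp.linalg.solve(hess_qd, grad_q - mixed @ q_dot)

# Example usage for a 2-degree-of-freedom system (dimensions illustrative):
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = {"W1": jax.random.normal(k1, (64, 4)),
          "b1": jax.random.normal(k2, (64,)),
          "w2": jax.random.normal(k3, (64,)),
          "b2": 0.0}
q, q_dot = jnp.ones(2), jnp.zeros(2)
print(acceleration(params, q, q_dot))

Because the dynamics come from differentiating a learned scalar, the model conserves the learned energy by construction, which is the inductive bias the abstract refers to.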

Date & Time

January 31, 2020 | 12:00 pm – 1:00 pm

Location

Center for Statistics and Machine Learning (CSML), 26 Prospect Ave, Auditorium 103

Speakers

Sam Greydanus

Affiliation

Google AI

Notes

Lunch will be served.