Theoretical Machine Learning Seminar

Event Sequence Modeling with the Neural Hawkes Process

Suppose you are monitoring discrete events in real time. Can you predict what events will happen in the future, and when? Can you fill in past events that you may have missed? A probability model that supports such reasoning is the neural Hawkes process (NHP), in which the Poisson intensities of K event types at time t depend on the history of past events. This autoregressive architecture can capture complex dependencies. It resembles an LSTM language model over K word types, but allows the LSTM state to evolve in continuous time.
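For concreteness (this sketch is not part of the talk abstract), the intensities of the K event types might be computed from a continuous-time hidden state roughly as follows, assuming the scaled-softplus construction commonly used for the NHP; the names c, c_bar, delta, o, w, and s below are illustrative placeholders for the model's learned quantities.

    import numpy as np

    def intensities(c, c_bar, delta, o, w, s, t_elapsed):
        """Sketch of NHP-style intensities lambda_k(t) for K event types.

        Between events, the memory cell decays exponentially from c toward a
        target c_bar at rate delta (all vectors of size D); w has shape (K, D).
        A scaled softplus keeps each intensity positive while letting it
        approach 0 or grow large.
        """
        # continuous-time decay of the memory cell since the last event
        c_t = c_bar + (c - c_bar) * np.exp(-delta * t_elapsed)
        h_t = o * np.tanh(c_t)                # hidden state at time t
        a = w @ h_t                           # shape (K,): one score per event type
        return s * np.log1p(np.exp(a / s))    # scaled softplus -> positive intensities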

This talk will present the NHP model along with methods for estimating parameters (MLE and NCE), sampling predictions of the future (thinning), and imputing missing events (particle smoothing). I'll then show how to scale the NHP or the LSTM language model to large K, beginning with a temporal deductive database for a real-world domain, which can track how possible event types and other facts change over time. We take the system state to be a collection of vector-space embeddings of these facts, and derive a deep recurrent architecture from the temporal Datalog program that specifies the database. We call this method "neural Datalog through time."
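To illustrate the thinning step mentioned above (a minimal sketch, not the speaker's implementation): one proposes candidate times from a homogeneous Poisson process whose rate lambda_max upper-bounds the total intensity, and accepts each candidate with probability equal to the ratio of the true total intensity to that bound. The callable total_intensity and the bound lambda_max are assumptions supplied by the caller.

    import numpy as np

    def sample_next_event(t_now, total_intensity, lambda_max, rng, t_horizon=np.inf):
        """Thinning: draw the next event time from a point process whose total
        intensity total_intensity(t) is assumed bounded above by lambda_max
        on [t_now, t_horizon]. Returns the accepted time, or None if no event
        occurs before t_horizon.
        """
        t = t_now
        while True:
            t += rng.exponential(1.0 / lambda_max)   # candidate from a rate-lambda_max Poisson process
            if t >= t_horizon:
                return None
            if rng.uniform() * lambda_max <= total_intensity(t):
                return t                             # accept with prob total_intensity(t) / lambda_max

In the multivariate case, the accepted event's type k would then be drawn with probability proportional to lambda_k(t) at the accepted time.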

This work was done with Hongyuan Mei and other collaborators including Guanghui Qin, Minjie Xu, and Tom Wan.

Date & Time

August 20, 2020 | 3:00pm – 4:30pm

Location

Remote Access Only - see link below

Speakers

Jason Eisner

Affiliation

Johns Hopkins University

Notes

We welcome broad participation in our seminar series. To receive login details, interested participants will need to fill out a registration form accessible from the link below. Upcoming seminars in this series can be found here.

Register Here