Theoretical Machine Learning Seminar

Foundations of Intelligent Systems with (Deep) Function Approximators

Function approximators, such as deep neural networks, play a crucial role in building machine-learning-based intelligent systems. This talk covers three core problems concerning function approximators: understanding them, designing new ones, and applying them.

In the first part of the talk, I will discuss understanding deep neural networks. I will show that an over-parameterized neural network is equivalent to a new type of kernel, the Neural Tangent Kernel. Using this equivalence, we show that gradient descent provably finds a global optimum when optimizing an over-parameterized neural network, and that the learned network also generalizes well.

In the second part of the talk, I will focus on designing new function approximators with strong empirical performance. We transform fully-connected, graph, and convolutional neural networks into their fully-connected, graph, and convolutional Neural Tangent Kernel counterparts, which achieve superior performance on standard benchmarks.

In the last part of the talk, I will focus on applying function approximation in the reinforcement learning setting. I will give an exponential sample-complexity lower bound for using function approximation to model the optimal policy, and then discuss which additional structures admit statistically efficient algorithms.
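For context, a minimal statement of the kernel referenced in the first two parts (the notation follows the standard Neural Tangent Kernel literature, not necessarily the speaker's own presentation): for a network f(x; \theta), the Neural Tangent Kernel is

\Theta(x, x') = \langle \nabla_\theta f(x; \theta), \nabla_\theta f(x'; \theta) \rangle,

evaluated at the network's random initialization. In the infinite-width limit this kernel remains essentially fixed throughout training, which is what lets gradient descent on the over-parameterized network be analyzed as kernel regression with \Theta.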

Date & Time

January 16, 2020 | 12:00pm – 1:30pm

Location

Dilworth Room

Speakers

Simon Du

Affiliation

Member, School of Mathematics