Seminars Sorted by Series

Theoretical Machine Learning Seminar

Nov 12, 2019

Fast IRLS Algorithms for p-norm regression
12:00pm|White-Levy

Linear regression in L_p-norm is a canonical optimization problem that arises in several applications, including sparse recovery, semi-supervised learning, and signal processing. Standard linear regression corresponds to p=2, and p=1 or infinity is...
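The IRLS idea behind such algorithms is easy to state: repeatedly solve a weighted least-squares problem with weights derived from the current residuals. Below is a minimal, undamped textbook sketch, not the fast algorithm of the talk; the function name and defaults are illustrative, and undamped IRLS is only known to converge for roughly 1 < p < 3 (fast variants add damping or acceleration).

```python
import numpy as np

def irls_pnorm(A, b, p=2.5, iters=50, eps=1e-8):
    """Textbook IRLS for min_x ||Ax - b||_p.

    Each step solves a weighted least-squares problem with weights
    w_i = |r_i|^(p-2) computed from the current residual r = Ax - b.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # p = 2 warm start
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)  # eps guards zero residuals
        Aw = A * w[:, None]                        # rows of A scaled by w
        # Normal equations of the weighted problem: (A^T W A) x = A^T W b
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

Since the p-norm objective is convex, the returned point should do at least as well as the least-squares warm start in the p-norm.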

Nov 13, 2019

Some Statistical Results on Deep Learning: Interpolation, Optimality and Sparsity
12:00pm|Dilworth Room

This talk discusses three aspects of deep learning from a statistical perspective: interpolation, optimality and sparsity. The first one attempts to interpret the double descent phenomenon by precisely characterizing a U-shaped curve within the...

Nov 20, 2019

Nonconvex Minimax Optimization
12:00pm|Dilworth Room

Minimax optimization, especially in its general nonconvex formulation, has found extensive applications in modern machine learning, in settings such as generative adversarial networks (GANs) and adversarial training. It brings a series of unique...

Nov 26, 2019

A Fourier Analysis Perspective of Training Dynamics of Deep Neural Networks
11:30am|White-Levy

This talk focuses on a general phenomenon, the "Frequency Principle": DNNs often fit target functions from low to high frequencies during training. I will present empirical evidence on real datasets and deep networks in different settings as...

Dec 4, 2019

Uncoupled isotonic regression
12:00pm|Dilworth Room

The classical regression problem seeks to estimate a function f on the basis of independent pairs $(x_i,y_i)$ where $\mathbb E[y_i]=f(x_i)$, $i=1,\dotsc,n$. In this talk, we consider statistical and computational aspects of the "uncoupled" version...
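For context, the classical coupled problem in the isotonic case (f nondecreasing) is solved exactly by the pool-adjacent-violators algorithm (PAVA). A minimal sketch of that classical baseline, distinct from the uncoupled setting of the talk:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing fit to y,
    taken in index order. Maintains blocks as (sum, count) pairs and
    merges adjacent blocks whose means violate monotonicity."""
    sums, counts = [], []
    for v in y:
        sums.append(float(v))
        counts.append(1)
        # Merge while the previous block's mean exceeds the current one's
        while len(sums) > 1 and sums[-2] / counts[-2] > sums[-1] / counts[-1]:
            sums[-2] += sums[-1]
            counts[-2] += counts[-1]
            sums.pop()
            counts.pop()
    out = []
    for s, c in zip(sums, counts):
        out.extend([s / c] * c)  # each block is fit by its mean
    return out
```

For example, `pava([1.0, 3.0, 2.0])` pools the last two points into their mean 2.5, giving `[1.0, 2.5, 2.5]`.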

Dec 17, 2019

How will we do mathematics in 2030?
Michael R. Douglas
12:00pm|White-Levy

We make the case that over the coming decade, computer assisted reasoning will become far more widely used in the mathematical sciences. This includes interactive and automatic theorem verification, symbolic algebra, and emerging technologies such...

Dec 18, 2019

Online Learning in Reactive Environments
12:00pm|Dilworth Room

Online learning is a popular framework for sequential prediction problems. The standard approach to analyzing an algorithm's (learner's) performance in online learning is in terms of its empirical regret defined to be the excess loss suffered by the...

Jan 16, 2020

Foundations of Intelligent Systems with (Deep) Function Approximators
Simon Du
12:00pm|Dilworth Room

Function approximators, like deep neural networks, play a crucial role in building machine-learning based intelligent systems. This talk covers three core problems of function approximators: understanding function approximators, designing new...

Jan 21, 2020

The Blessings of Multiple Causes
David M. Blei
12:00pm|Dilworth Room

Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have...

Jan 28, 2020

What Noisy Convex Quadratics Tell Us about Neural Net Training
12:00pm|Dilworth Room

I’ll discuss the Noisy Quadratic Model, the toy problem of minimizing a convex quadratic function with noisy gradient observations. While the NQM is simple enough to have closed-form dynamics for a variety of optimizers, it gives a surprising amount...
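One reason the NQM admits closed-form dynamics: in a one-dimensional caricature (all names and constants here are illustrative, not the speaker's setup), SGD on f(x) = h x²/2 with additive gradient noise of variance σ² gives an exact linear recursion for the second moment, which can be solved in closed form.

```python
import numpy as np

def nqm_second_moment(h, alpha, sigma, x0, T):
    """E[x_t^2] for SGD x_{t+1} = x_t - alpha*(h*x_t + noise) on the 1-D
    noisy quadratic, via the exact recursion
        m_{t+1} = (1 - alpha*h)^2 * m_t + alpha^2 * sigma^2."""
    m = x0 ** 2
    out = [m]
    for _ in range(T):
        m = (1 - alpha * h) ** 2 * m + alpha ** 2 * sigma ** 2
        out.append(m)
    return np.array(out)

def nqm_closed_form(h, alpha, sigma, x0, T):
    """Closed-form solution of the recursion above: geometric decay toward
    the steady-state second moment alpha^2*sigma^2 / (1 - (1 - alpha*h)^2)."""
    c = (1 - alpha * h) ** 2
    steady = alpha ** 2 * sigma ** 2 / (1 - c)
    t = np.arange(T + 1)
    return c ** t * (x0 ** 2 - steady) + steady
```

The steady-state term makes the usual trade-off explicit: a larger step size α speeds up the decay but raises the noise floor.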

Feb 4, 2020

Algorithm and Hardness for Kernel Matrices in Numerical Linear Algebra and Machine Learning
12:00pm|Dilworth Room

For a function K : R^d x R^d -> R and a set P = {x_1, ..., x_n} in d dimensions, the K graph G_P of P is the complete graph on n nodes where the weight between nodes i and j is given by K(x_i, x_j). In this work, we initiate the study of when...
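The definition is concrete enough to write out directly; note that the naive construction below costs O(n²) kernel evaluations, which is what motivates asking when faster algorithms exist. An illustrative sketch with a Gaussian kernel (function names are hypothetical):

```python
import numpy as np

def k_graph_weights(P, K):
    """Dense weight matrix of the K graph on the point set P:
    W[i, j] = K(x_i, x_j) for i != j (a complete graph, no self-loops)."""
    n = len(P)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i, j] = K(P[i], P[j])
    return W

# Example kernel: Gaussian, K(x, y) = exp(-||x - y||^2)
gauss = lambda x, y: np.exp(-np.sum((x - y) ** 2))
```

For a symmetric kernel such as the Gaussian, the resulting weight matrix is symmetric with a zero diagonal.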

Feb 11, 2020

Geometric Insights into the Convergence of Nonlinear TD Learning
12:00pm|Dilworth Room

While there are convergence guarantees for temporal difference (TD) learning when using linear function approximators, the situation for nonlinear models is far less understood, and divergent examples are known. We take a first step towards...

Feb 13, 2020

The Lottery Ticket Hypothesis: On Sparse, Trainable Neural Networks
Jonathan Frankle
12:00pm|Dilworth Room

We recently proposed the "Lottery Ticket Hypothesis," which conjectures that the dense neural networks we typically train have much smaller subnetworks capable of training in isolation to the same accuracy starting from the original initialization...

Feb 20, 2020

Geometric deep learning for functional protein design
Michael Bronstein
12:00pm|Dilworth Room

Protein-based drugs are becoming some of the most important drugs of the 21st century. The typical mechanism of action of these drugs is a strong protein-protein interaction (PPI) between surfaces with complementary geometry and chemistry. Over the...

Feb 25, 2020

Learning from Multiple Biased Sources
Clayton Scott
12:00pm|Dilworth Room

When high-quality labeled training data are unavailable, an alternative is to learn from training sources that are biased in some way. This talk will cover my group’s recent work on three problems where a learner has access to multiple biased...

Feb 27, 2020

Preference Modeling with Context-Dependent Salient Features
12:00pm|Dilworth Room

This talk considers the preference modeling problem and addresses the fact that pairwise comparison data often reflects irrational choice, e.g. intransitivity. Our key observation is that two items compared in isolation from other items may be...

Mar 3, 2020

What Noisy Convex Quadratics Tell Us about Neural Net Training
12:00pm|White-Levy

I’ll discuss the Noisy Quadratic Model, the toy problem of minimizing a convex quadratic function with noisy gradient observations. While the NQM is simple enough to have closed-form dynamics for a variety of optimizers, it gives a surprising amount...

Mar 5, 2020

Understanding Deep Neural Networks: From Generalization to Interpretability
Gitta Kutyniok
12:00pm|Dilworth Room

Deep neural networks have recently seen an impressive comeback with applications both in the public sector and the sciences. However, despite their outstanding success, a comprehensive theoretical foundation of deep neural networks is still missing...

Mar 10, 2020

Your Brain on Energy-Based Models: Applying and Scaling EBMs to Problems of Interest to the Machine Learning Community Today
Will Grathwohl
12:00pm|Dilworth Room

In this talk, I will discuss my two recent works on Energy-Based Models. In the first work, I discuss how we can reinterpret standard classification architectures as class conditional energy-based models and train them using recently proposed...

Mar 11, 2020

Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance
Blair Bilodeau
4:00pm|Simonyi 101

We study sequential probabilistic prediction on data sequences which are not i.i.d., and even potentially generated by an adversary. At each round, the player assigns a probability distribution to possible outcomes and incurs the log-likelihood of...

Mar 26, 2020

Margins, perceptrons, and deep networks
Matus Telgarsky
12:00pm|https://illinois.zoom.us/j/741628827

This talk surveys the role of margins in the analysis of deep networks. As a concrete highlight, it sketches a perceptron-based analysis establishing that shallow ReLU networks can achieve small test error even when they are quite narrow, sometimes...

Mar 31, 2020

Some Recent Insights on Transfer Learning
12:00pm|https://theias.zoom.us/j/384099138

A common situation in machine learning is one where training data is not fully representative of a target population due to bias in the sampling mechanism or high costs in sampling the target population; in such situations, we aim to 'transfer'...

Apr 2, 2020

Learning Controllable Representations
12:00pm|https://theias.zoom.us/j/384099138

As deep learning systems become more prevalent in real-world applications it is essential to allow users to exert more control over the system. Exerting some structure over the learned representations enables users to manipulate, interpret, and even...

Apr 7, 2020

Interpolation in learning: steps towards understanding when overparameterization is harmless, when it helps, and when it causes harm
Anant Sahai
12:00pm|https://theias.zoom.us/j/384099138

A continuing mystery in understanding the empirical success of deep neural networks has been in their ability to achieve zero training error and yet generalize well, even when the training data is noisy and there are many more parameters than data...

Apr 9, 2020

Meta-Learning: Why It’s Hard and What We Can Do
3:00pm|https://theias.zoom.us/j/384099138

Meta-learning (or learning to learn) studies how to use machine learning to design machine learning methods themselves. We consider an optimization-based formulation of meta-learning that learns to design an optimization algorithm automatically...

Apr 21, 2020

Assumption-free prediction intervals for black-box regression algorithms
Aaditya Ramdas
12:00pm|https://theias.zoom.us/j/384099138

There has been tremendous progress in designing accurate black-box prediction methods (boosting, random forests, bagging, neural nets, etc.) but for deployment in the real world, it is useful to quantify uncertainty beyond making point-predictions...
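A standard assumption-free construction in this area is split conformal prediction, which wraps any black-box point predictor in intervals with finite-sample coverage under exchangeability. A minimal sketch (function names are illustrative, and this may differ in detail from the speaker's construction):

```python
import numpy as np

def split_conformal(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal intervals mu(x) +/- q around a fitted predictor.

    q is the ceil((1-alpha)(n+1))-th smallest absolute residual on a
    held-out calibration set of size n. Under exchangeability, the
    interval covers a fresh y with probability >= 1 - alpha.
    """
    scores = np.sort(np.abs(y_cal - predict(X_cal)))
    n = len(scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))  # 1-based conformal quantile index
    q = scores[min(k, n) - 1]
    mu = predict(X_test)
    return mu - q, mu + q
```

The guarantee holds for any predictor because the model is fit on data disjoint from the calibration set; only the residual quantile depends on calibration.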

Apr 23, 2020

Deep Generative models and Inverse Problems
Alexandros Dimakis
3:00pm|https://theias.zoom.us/j/384099138

Modern deep generative models like GANs, VAEs and invertible flows are showing amazing results on modeling high-dimensional distributions, especially for images. We will show how they can be used to solve inverse problems by generalizing compressed...

Apr 30, 2020

Latent Stochastic Differential Equations for Irregularly-Sampled Time Series
David Duvenaud
3:00pm|Remote Access Only - see link below

Much real-world data is sampled at irregular intervals, but most time series models require regularly-sampled data. Continuous-time models address this problem, but until now only deterministic (ODE) models or linear-Gaussian models were efficiently...

May 5, 2020

Boosting Simple Learners
12:00pm|Remote Access Only - see link below

We study boosting algorithms under the assumption that the given weak learner outputs hypotheses from a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are “rules-of-thumbs” from an “easy-to-learn...

May 7, 2020

Learning probability distributions; What can, What can't be done
Shai Ben-David
3:00pm|Remote Access Only - see link below

A possible high-level description of statistical learning is that it aims to learn about some unknown probability distribution ("environment") from samples it generates ("training data"). In its most general form, assuming no prior knowledge and...

May 12, 2020

Generative Modeling by Estimating Gradients of the Data Distribution
Stefano Ermon
12:00pm|Remote Access Only - see link below

Existing generative models are typically based on explicit representations of probability distributions (e.g., autoregressive or VAEs) or implicit sampling procedures (e.g., GANs). We propose an alternative approach based on modeling directly the...

May 14, 2020

MathZero, The Classification Problem, and Set-Theoretic Type Theory
David McAllester
3:00pm|Remote Access Only - see link below

AlphaZero learns to play Go, chess and shogi at a superhuman level through self-play given only the rules of the game. This raises the question of whether a similar thing could be done for mathematics --- a MathZero. MathZero would require a formal...

May 19, 2020

Neural SDEs: Deep Generative Models in the Diffusion Limit
Maxim Raginsky
12:00pm|Remote Access Only - see link below

In deep generative models, the latent variable is generated by a time-inhomogeneous Markov chain, where at each time step we pass the current state through a parametric nonlinear map, such as a feedforward neural net, and add a small independent...

May 21, 2020

Forecasting Epidemics and Pandemics
Roni Rosenfeld
3:00pm|Remote Access Only - see link below

Epidemiological forecasting is critically needed for decision making by national and local governments, public health officials, healthcare institutions and the general public. The Delphi group at Carnegie Mellon University was founded in 2012 to...

Jun 9, 2020

What Do Our Models Learn?
Aleksander Madry
12:30pm|Remote Access Only - see link below

Large-scale vision benchmarks have driven---and often even defined---progress in machine learning. However, these benchmarks are merely proxies for the real-world tasks we actually care about. How well do our benchmarks capture such tasks?

In this...

Jun 11, 2020

On Langevin Dynamics in Machine Learning
Michael I. Jordan
3:00pm|Remote Access Only - see link below

Langevin diffusions are continuous-time stochastic processes that are based on the gradient of a potential function. As such they have many connections---some known and many still to be explored---to gradient-based machine learning. I'll discuss...
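The simplest gradient-based connection is the unadjusted Langevin algorithm (ULA), the Euler discretization of the diffusion. A minimal illustrative sketch (with the usual caveat that the discretization only approximately targets exp(-U); the function name is hypothetical):

```python
import numpy as np

def ula(grad_U, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm in 1-D: Euler discretization of
        dx = -grad U(x) dt + sqrt(2) dW,
    whose stationary law is proportional to exp(-U(x))."""
    x = x0
    xs = np.empty(n_steps)
    for t in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
        xs[t] = x
    return xs
```

For the standard Gaussian target U(x) = x²/2, the discretized chain is exactly an AR(1) process with stationary variance 1/(1 - step/2), slightly inflating the true variance of 1; this discretization bias is what Metropolis-adjusted variants remove.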

Jun 16, 2020

On learning in the presence of biased data and strategic behavior
Avrim Blum
3:00pm|Remote Access Only - see link below

In this talk I will discuss two lines of work involving learning in the presence of biased data and strategic behavior. In the first, we ask whether fairness constraints on learning algorithms can actually improve the accuracy of the classifier...

Jun 18, 2020

The challenges of model-based reinforcement learning and how to overcome them
Csaba Szepesvari
3:00pm|Remote Access Only - see link below

Some believe that truly effective and efficient reinforcement learning algorithms must explicitly construct and explicitly reason with models that capture the causal structure of the world. In short, model-based reinforcement learning is not...

Jun 23, 2020

Generalizable Adversarial Robustness to Unforeseen Attacks
Soheil Feizi
12:30pm|Remote Access Only - see link below

In the last couple of years, a lot of progress has been made to enhance robustness of models against adversarial attacks. However, two major shortcomings still remain: (i) practical defenses are often vulnerable against strong “adaptive” attack...

Jun 25, 2020

Instance-Hiding Schemes for Private Distributed Learning
3:00pm|Remote Access Only - see link below

An important problem today is how to allow multiple distributed entities to train a shared neural network on their private data while protecting data privacy. Federated learning is a standard framework for distributed deep learning. Federated...

Jul 7, 2020

Machine learning-based design (of proteins, small molecules and beyond)
Jennifer Listgarten
12:30pm|Remote Access Only - see link below

Data-driven design is making headway into a number of application areas, including protein, small-molecule, and materials engineering. The design goal is to construct an object with desired properties, such as a protein that binds to a target more...

Jul 9, 2020

Role of Interaction in Competitive Optimization
Anima Anandkumar
3:00pm|Remote Access Only - see link below

Competitive optimization is needed for many ML problems such as training GANs, robust reinforcement learning, and adversarial learning. Standard approaches to competitive optimization involve each agent independently optimizing their objective...

Jul 14, 2020

Relaxing the I.I.D. Assumption: Adaptive Minimax Optimal Sequential Prediction with Expert Advice
Jeffrey Negrea
12:30pm|Remote Access Only - see link below

We consider sequential prediction with expert advice when the data are generated stochastically, but the distributions generating the data may vary arbitrarily among some constraint set. We quantify relaxations of the classical I.I.D. assumption in...
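The classical baseline that such adaptive minimax results refine is the exponential-weights (Hedge) forecaster, whose regret bound requires no i.i.d. assumption at all. A minimal sketch with a fixed learning rate (names and the rate are illustrative):

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential-weights forecaster.

    loss_matrix[t, k] is the loss of expert k at round t, in [0, 1].
    At each round, play the weighted average of experts, then multiply
    each expert's weight by exp(-eta * loss). Returns total incurred loss.
    """
    T, K = loss_matrix.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                 # current distribution over experts
        total += float(p @ loss_matrix[t])
        w = w * np.exp(-eta * loss_matrix[t])
    return total
```

For losses in [0, 1], the classical guarantee is that the total loss exceeds the best expert's cumulative loss by at most ln(K)/eta + eta*T/8, regardless of how the losses are generated.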

Jul 21, 2020

Graph Nets: The Next Generation
Max Welling
12:30pm|Remote Access Only - see link below

In this talk I will introduce our next generation of graph neural networks. GNNs have the property that they are invariant to permutations of the nodes in the graph and to rotations of the graph as a whole. We claim this is unnecessarily restrictive...

Jul 23, 2020

Priors for Semantic Variables
Yoshua Bengio
3:00pm|Remote Access Only - see link below

Some of the aspects of the world around us are captured in natural language and refer to semantic high-level variables, which often have a causal role (referring to agents, objects, and actions or intentions). These high-level variables also seem to...

Jul 28, 2020

Generalized Energy-Based Models
Arthur Gretton
12:30pm|Remote Access Only - see link below

I will introduce Generalized Energy Based Models (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a...

Jul 30, 2020

Efficient Robot Skill Learning via Grounded Simulation Learning, Imitation Learning from Observation, and Off-Policy Reinforcement Learning
Peter Stone
3:00pm|Remote Access Only - see link below

For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge...