Training a predictor to minimize a loss function fixed in
advance is the dominant paradigm in machine learning. However, loss
minimization by itself might not guarantee desiderata like fairness
and accuracy that one could reasonably expect from a...
The study of nodal sets, i.e., zero sets of eigenfunctions, on
geometric objects can be traced back to da Vinci, Galileo, Hooke,
and Chladni. Today it is a central subject of spectral geometry.
Sturm (1836) showed that the n-th eigenfunction of the...
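Since the abstract is cut off, a quick discrete illustration (my addition, not the speaker's): on a path graph the Laplacian eigenvectors show Sturm-type oscillation, the n-th eigenvector changing sign exactly n-1 times. A minimal NumPy sketch:

# Illustrative sketch (my addition): discrete analogue of Sturm's oscillation
# theorem on a path of N vertices; the n-th Laplacian eigenvector is expected
# to change sign exactly n-1 times.
import numpy as np

N = 50
# Laplacian of the path P_N: degree on the diagonal, -1 on the off-diagonals.
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1  # endpoints have degree 1

eigvals, eigvecs = np.linalg.eigh(L)  # columns sorted by eigenvalue
for n in range(1, 6):
    v = eigvecs[:, n - 1]
    sign_changes = int(np.sum(v[:-1] * v[1:] < 0))
    print(f"eigenvector {n}: {sign_changes} sign changes")  # expect n - 1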
A nodal domain of a Laplacian eigenvector of a graph is a
maximal connected subgraph on which the eigenvector does not change sign.
Sparse random regular graphs have been proposed as discrete toy
models of "quantum chaos", and it has accordingly been
conjectured...
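The definition above translates directly into code. A minimal sketch (assuming NetworkX; the example graph is my own choice) counting nodal domains as connected components of the subgraphs induced on positively and on negatively signed vertices, ignoring zero entries (one common convention):

# Minimal sketch (assumes NetworkX; zero entries ignored, one common
# convention). Nodal domains = connected components of the subgraphs induced
# on {v : f(v) > 0} and on {v : f(v) < 0}.
import networkx as nx
import numpy as np

def nodal_domain_count(G, f):
    """f maps each vertex of G to its eigenvector entry."""
    pos = G.subgraph([v for v in G if f[v] > 0])
    neg = G.subgraph([v for v in G if f[v] < 0])
    return (nx.number_connected_components(pos)
            + nx.number_connected_components(neg))

# Example on a sparse random regular graph:
G = nx.random_regular_graph(3, 20, seed=1)
L = nx.laplacian_matrix(G).toarray().astype(float)
w, U = np.linalg.eigh(L)
f = {v: U[i, 1] for i, v in enumerate(G)}  # second eigenvector
print(nodal_domain_count(G, f))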
The absorption method is a very simple yet surprisingly powerful
idea from extremal combinatorics that allows one to convert
asymptotic results into exact ones. It has produced a remarkable
number of success stories in recent years with...
Proving a 2009 conjecture of Itai Benjamini, we show: For
any C, there is an a > 0 such that for any simple random
walk on an n-vertex graph G, the probability that the first Cn
steps of the walk hit every vertex of G is at most
exp[-an]. ...
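To make the statement concrete, here is a small Monte Carlo sketch (purely illustrative; the cycle graph and all parameters are my choices) estimating the probability that a simple random walk covers the n-cycle within Cn steps:

# Illustrative Monte Carlo sketch (my addition): estimate the probability
# that the first C*n steps of a simple random walk on the n-cycle hit every
# vertex. The theorem bounds this probability by exp[-an] for some a(C) > 0.
import random

def covers_in_Cn_steps(n, C, rng):
    pos, seen = 0, {0}
    for _ in range(C * n):
        pos = (pos + rng.choice((-1, 1))) % n  # one step of the walk
        seen.add(pos)
        if len(seen) == n:
            return True
    return False

rng = random.Random(0)
n, C, trials = 30, 4, 20000
hits = sum(covers_in_Cn_steps(n, C, rng) for _ in range(trials))
print(f"estimated cover probability: {hits / trials:.4f}")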
Two recent and seemingly unrelated techniques for proving mixing
bounds for Markov chains are:
(i) the framework of Spectral Independence, introduced by Anari,
Liu and Oveis Gharan, and its numerous extensions, which have given
rise to several...
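For context (an assumption on my part, since the abstract is truncated here): a canonical chain these frameworks are applied to is Glauber dynamics, e.g. for the hardcore model, where each step resamples one vertex's occupancy conditioned on its neighbors. A minimal sketch:

# Context sketch (an assumption, not from the abstract): Glauber dynamics for
# the hardcore model with fugacity lam. A uniformly random vertex is chosen
# and its occupancy is resampled conditioned on its neighborhood.
import random
import networkx as nx

def glauber_step(G, config, lam, rng):
    v = rng.choice(list(G))
    if any(config[u] for u in G[v]):  # an occupied neighbor forces v empty
        config[v] = 0
    else:                             # otherwise occupied w.p. lam/(1+lam)
        config[v] = 1 if rng.random() < lam / (1 + lam) else 0

G = nx.cycle_graph(10)
config = {v: 0 for v in G}            # start from the empty independent set
rng = random.Random(0)
for _ in range(10_000):
    glauber_step(G, config, 0.5, rng)
print(sorted(v for v in G if config[v]))  # an (approximate) sample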
Over the last three decades, the online bipartite matching (OBM)
problem has emerged as a central problem in the area of Online
Algorithms. Perhaps even more important is its role in the area of
Matching-Based Market Design. The resurgence of this...
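For the flavor of the problem (my addition, not part of this abstract): the classical RANKING algorithm of Karp, Vazirani, and Vazirani fixes a random priority order on the offline side and matches each arriving vertex to its highest-priority free neighbor, achieving competitive ratio 1 - 1/e in expectation. A minimal sketch:

# Sketch of the classical RANKING algorithm for online bipartite matching
# (Karp-Vazirani-Vazirani). Offline vertices get a uniformly random priority
# order; each arriving online vertex is matched to its highest-priority free
# neighbor, if any.
import random

def ranking(offline, arrivals, rng):
    rank = {u: r for r, u in enumerate(rng.sample(offline, len(offline)))}
    matched, matching = set(), []
    for v, neighbors in arrivals:     # online vertices arrive one by one
        free = [u for u in neighbors if u not in matched]
        if free:
            u = min(free, key=rank.__getitem__)  # highest priority available
            matched.add(u)
            matching.append((u, v))
    return matching

rng = random.Random(0)
offline = ["a", "b", "c"]
arrivals = [("x", ["a", "b"]), ("y", ["a"]), ("z", ["b", "c"])]
print(ranking(offline, arrivals, rng))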
As machine learning is widely deployed, it is increasingly
important to ensure that predictors will perform well not only on
average (across the entire population), but also on specific
socially salient subgroups. In this talk, I will present...
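As a tiny illustration of the evaluation this motivates (my own sketch, not the talk's method), one can report a predictor's accuracy on each subgroup rather than only in aggregate:

# Illustrative sketch (not the talk's method): per-subgroup accuracy.
# Aggregate accuracy can hide a subgroup on which the predictor fails badly.
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    correct, total = defaultdict(int), defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(yt == yp)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(subgroup_accuracy(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.0}
# Aggregate accuracy is 0.5, which hides the failure on group B.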
In recent years, a new “fine-grained” theory of computational
hardness has been developed, based on “fine-grained reductions”
that focus on exact running times for problems.
The approach mimics that of NP-hardness theory, but in a more delicate manner.
We...
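One concrete example of such exact-running-time analysis (my illustration; the abstract is truncated): the Orthogonal Vectors problem, whose brute-force O(n^2 d) algorithm is conjectured essentially optimal under SETH and which is a standard source of fine-grained reductions:

# Illustration (my addition): the Orthogonal Vectors problem. The brute-force
# O(n^2 * d) scan below is conjectured essentially optimal under SETH, and
# fine-grained reductions from OV give conditional lower bounds for many
# polynomial-time problems.
def has_orthogonal_pair(A, B):
    for a in A:
        for b in B:
            if all(x * y == 0 for x, y in zip(a, b)):
                return True
    return False

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1,0,1) and (0,1,0) are orthogonal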