Theoretical Machine Learning Seminar

Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance

We study sequential probabilistic prediction on data sequences that are not i.i.d., and may even be generated by an adversary. At each round, the player assigns a probability distribution to the possible outcomes and incurs loss equal to the negative log-likelihood of the observed outcome under that distribution. Without assumptions on the data-generating process, one cannot control the total loss, so it is standard to study the regret, i.e., the difference between the player’s total loss and the total loss of the single best forecaster in some class of “experts”. We present a novel approach to the open problem of bounding the minimax regret under logarithmic loss for arbitrarily large expert classes by exploiting the self-concordance property of the logarithmic loss. Our regret bound depends on the metric entropy of the expert class and matches the best previously known results for arbitrary expert classes. For high-dimensional classes, we obtain a polynomial improvement in the dependence on the time horizon.
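To fix notation, here is a standard formalization of the regret in this setting (the symbols below are ours, not taken verbatim from the talk): if the player predicts a distribution \( \hat{p}_t \) over outcomes at round \( t \), the outcomes are \( y_1, \dots, y_n \), and each expert \( f \) in the class \( \mathcal{F} \) predicts \( f_t \) at round \( t \), then the cumulative regret under logarithmic loss after \( n \) rounds is

\[ \mathrm{Regret}_n \;=\; \sum_{t=1}^{n} -\log \hat{p}_t(y_t) \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} -\log f_t(y_t). \]

The minimax regret is the value of this quantity when the player plays optimally against a worst-case outcome sequence.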

Date & Time

March 11, 2020 | 4:00pm – 5:30pm

Location

Simonyi 101

Speakers

Blair Bilodeau

Affiliation

University of Toronto