Online Omniprediction

A recent line of work has shown a surprising connection between multicalibration, a multi-group fairness notion, and omniprediction, a learning paradigm that provides simultaneous loss-minimization guarantees for a large family of loss functions [GKR+22, GHK+23, GKR23, GHHK+23]. Prior work studies omniprediction in the batch setting; our work initiates its study in the online adversarial setting. In this talk, we will briefly review the definitions of and motivations for these notions, and then survey our new results for online omniprediction.
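
For concreteness, the online guarantee can be stated roughly as follows (the notation here is ours, adapting the batch definition of [GKR+22] to regret): a prediction sequence $p_1, \dots, p_T$ is an online omnipredictor for a loss family $L$ and benchmark class $F$ if for every loss $\ell \in L$ there is a post-processing function $k_\ell$ such that

$$\sum_{t=1}^{T} \ell\big(k_\ell(p_t), y_t\big) \;\le\; \min_{f \in F} \sum_{t=1}^{T} \ell\big(f(x_t), y_t\big) + o(T).$$

That is, a single prediction stream, suitably post-processed per loss, is simultaneously competitive with the best function in $F$ for every loss in $L$.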

Our contributions are twofold:

We develop a new online multicalibration algorithm that is well-defined for infinite benchmark classes F (e.g., the set of all linear functions) and is oracle efficient: for any class F, the algorithm takes the form of an efficient reduction to a no-regret learning algorithm for F (see the schematic sketch after the next item). This yields an oracle-efficient online omnipredictor: an online prediction algorithm that simultaneously achieves no-regret guarantees with respect to all Lipschitz convex loss functions.

We show upper and lower bounds on the extent to which our regret rates can be improved. 
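
To make the oracle-efficiency claim in the first contribution concrete, here is a minimal schematic sketch of the reduction pattern in Python. Everything in it is illustrative: the LinearOGDOracle stand-in, the squared-loss feedback, and the clipping step are our assumptions for exposition, not the algorithm from the paper (which additionally enforces multicalibration of the predictions).

```python
import numpy as np

class LinearOGDOracle:
    """Toy no-regret learner for F = linear functions (online gradient
    descent on squared loss). A hypothetical stand-in for whatever
    no-regret algorithm one has for the benchmark class F."""

    def __init__(self, dim, lr=0.05):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, residual):
        # Gradient of (w.x - y)^2 in w is 2 * (w.x - y) * x.
        self.w -= self.lr * 2.0 * residual * x

def run_reduction(oracle, stream):
    """Oracle-efficient loop: the predictor touches the class F only
    through predict/update calls to the no-regret oracle (O(1) calls
    per round), never enumerating F itself."""
    predictions = []
    for x, y in stream:
        f_x = oracle.predict(x)
        p = min(max(f_x, 0.0), 1.0)   # clip the oracle's value into [0, 1]
        predictions.append(p)
        oracle.update(x, f_x - y)     # feedback after the label is revealed
        # (The paper's algorithm would also apply a calibration step here.)
    return predictions

# Usage on a small synthetic stream.
rng = np.random.default_rng(0)
xs = rng.normal(size=(100, 3))
ys = (xs @ np.array([0.5, -0.2, 0.1]) > 0).astype(float)
preds = run_reduction(LinearOGDOracle(dim=3), zip(xs, ys))
```

The point of the pattern is the interface: any no-regret learner for F can be plugged in as the oracle, so the overall algorithm inherits efficiency from the oracle rather than from enumerating F.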

Joint work with Christopher Jung, Omer Reingold and Aaron Roth.

Speakers

Sumegha Garg

Affiliation

Rutgers University