Theoretical Machine Learning Seminar

Unsupervised Ensemble Learning

In various applications, one is given the advice or predictions of several classifiers of unknown reliability, over multiple questions or queries. This scenario differs from standard supervised learning, where classifier accuracy can be assessed from available labeled training or validation data, and it raises several questions: given only the predictions of several classifiers of unknown accuracies over a large set of unlabeled test data, is it possible to (a) reliably rank them, and (b) construct a meta-classifier more accurate than any individual classifier in the ensemble? In this talk we'll show that under various independence assumptions between classifier errors, this high-dimensional data hides simple low-dimensional structures. Exploiting these, we will present simple spectral methods to address the above questions and derive new unsupervised spectral meta-learners. We'll prove these methods are asymptotically consistent when the model assumptions hold, and present their empirical success on a variety of unsupervised learning problems.
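To make the spectral idea concrete, the following is a minimal illustrative sketch, not the speaker's exact construction: for binary ±1 predictions with conditionally independent errors, the off-diagonal entries of the covariance matrix of the predictions approximately follow a rank-one structure whose leading eigenvector reflects the classifiers' (balanced) accuracies, so that eigenvector can be used both to rank the ensemble and as weights in a meta-classifier vote. The function name `spectral_rank_and_combine`, the simple alternating diagonal imputation, and the toy simulation are assumptions made here for illustration.

```python
import numpy as np

def spectral_rank_and_combine(F, n_iter=100):
    """Rank binary +/-1 classifiers and combine them without labels.

    F : (m, n) array of +/-1 predictions from m classifiers on n unlabeled items.

    Sketch assumption: classifier errors are conditionally independent given
    the true label, so the off-diagonal entries of the m x m covariance of the
    predictions are close to a rank-one matrix whose leading eigenvector is
    larger for more accurate classifiers.
    """
    m, n = F.shape
    Q = np.cov(F)                       # m x m sample covariance of the rows of F

    # Fit a rank-one matrix to the OFF-diagonal entries of Q by repeatedly
    # imputing the diagonal from the current rank-one estimate.
    R = Q.copy()
    v = np.zeros(m)
    for _ in range(n_iter):
        w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
        v = V[:, -1] * np.sqrt(max(w[-1], 0.0))
        np.fill_diagonal(R, v ** 2)     # off-diagonal entries stay equal to Q's

    # The eigenvector's sign is arbitrary; assume most classifiers beat random guessing.
    if v.sum() < 0:
        v = -v

    ranking = np.argsort(-v)            # estimated best classifier first
    y_hat = np.sign(v @ F)              # eigenvector-weighted majority vote
    return ranking, y_hat


# Toy check: 5 classifiers of varying accuracy on 2000 unlabeled items.
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=2000)
acc = np.array([0.55, 0.65, 0.75, 0.85, 0.95])
F = np.where(rng.random((5, 2000)) < acc[:, None], y, -y)
ranking, y_hat = spectral_rank_and_combine(F)
print(ranking)                 # should roughly order the classifiers by accuracy
print((y_hat == y).mean())     # meta-classifier accuracy on the toy data
```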

Date & Time

October 08, 2019 | 12:00pm – 1:30pm

Location

White-Levy

Affiliation

Weizmann Institute of Science; Member, School of Mathematics