Course Descriptions

Terng Lecture Series

Cynthia Rudin, Duke University: Introduction to Interpretable Machine Learning

Pre-Course Videos

Machine learning is now used throughout society and is the driving force behind the accuracy of online recommendation systems, credit-scoring mechanisms, healthcare systems, and beyond. Machine learning models have the reputation of being "black boxes," meaning that their computations are so complicated that no human could understand them. However, machine learning models do not actually need to be black boxes. It is possible, with some mathematical sophistication, to derive machine learning algorithms that produce models interpretable by humans.

I will present a self-contained short course that introduces the basic concepts of supervised machine learning, including loss functions, overfitting, and regularization, and shows how interpretability is itself a form of regularization. I will focus on two important topics in interpretable machine learning: (1) sparse decision trees and (2) interpretable neural networks. For both topics, I will present a brief history of the field, going all the way to the state of the art in current methodology, and introduce the specific technical and mathematical challenges that arise.
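
To make the regularization view concrete, here is a minimal sketch in Python using scikit-learn: capping the number of leaves in a decision tree acts as a sparsity penalty, and the fitted model is small enough to print and read in full. The greedy CART learner and the breast-cancer dataset are stand-ins chosen for illustration; the course concerns exact methods for optimal sparse trees, which this is not.

# A minimal illustrative sketch, not the course's method: a cap on the
# number of leaves acts as a sparsity penalty, trading a little accuracy
# for a model a human can read in full.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_leaf_nodes is the interpretability "regularizer": fewer leaves,
# simpler (sparser) tree.
tree = DecisionTreeClassifier(max_leaf_nodes=5, random_state=0)
tree.fit(X_train, y_train)

print(f"held-out accuracy: {tree.score(X_test, y_test):.3f}")
print(export_text(tree))  # the entire model fits on one screen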

Uhlenbeck Lecture Series

Maria Florina Balcan, Carnegie Mellon University: Foundations for Learning in the Age of Big Data

With the variety of applications of machine learning across science, engineering, and computing in the age of Big Data, re-examining the underlying foundations of the field has become imperative. In these lectures, I will describe new models and algorithms for two important emerging paradigms: interactive learning and distributed learning.

Most classic machine learning methods depend on the assumption that humans can annotate or label all of the data available for training. However, many modern machine learning applications have massive amounts of unannotated or unlabeled data. As a consequence, there has been great recent interest in designing active learning algorithms that intelligently select which data points to request labels for, with the goal of dramatically reducing the human labeling effort. I will discuss recent advances in designing active learning algorithms that are computationally efficient and noise tolerant and that enjoy strong, provable label-efficiency guarantees.
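
As a concrete toy version of this selection process, the following sketch in Python with scikit-learn runs pool-based active learning with uncertainty sampling: at each round the learner requests a label only for the point its current model is least sure about. This simple heuristic illustrates the setting; it carries none of the provable guarantees the lectures are about.

# A toy pool-based active learning loop using uncertainty sampling.
# Illustrative only: this heuristic has none of the provable
# label-efficiency or noise-tolerance guarantees discussed in the lectures.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
labeled = list(range(10))                      # small initial labeled seed
pool = list(range(10, len(X)))                 # unlabeled pool

model = LogisticRegression(max_iter=1000)
for _ in range(20):                            # 20 label requests
    model.fit(X[labeled], y[labeled])
    # ask for the pool point the current model is least certain about,
    # i.e., predicted probability closest to 1/2
    probs = model.predict_proba(X[pool])[:, 1]
    query = pool.pop(int(np.argmin(np.abs(probs - 0.5))))
    labeled.append(query)                      # a "human" supplies y[query]

model.fit(X[labeled], y[labeled])
print(f"accuracy after {len(labeled)} labels: {model.score(X, y):.3f}")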

I will also discuss the problem of learning from distributed data and analyze the fundamental algorithmic and communication-complexity questions involved. Broadly, we consider a framework in which massive amounts of data are distributed among several locations. Our goal is to learn a low-error predictor with respect to the overall distribution of the data using as little communication, and as few rounds of interaction, as possible. We will discuss general upper and lower bounds on the amount of communication needed to learn a given class of predictors, as well as broadly applicable techniques for achieving communication-efficient learning.
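
To fix ideas, here is a minimal Python sketch of perhaps the simplest communication-efficient baseline: each location trains on its own data and ships only its parameter vector to a coordinator, which averages them, so there is one round of interaction and O(d) communication per site. The bounds and techniques in the lectures are considerably sharper than this baseline.

# One-round parameter averaging: a simple communication-efficient baseline
# for distributed learning. Each site sends only its d+1 parameters, never
# its raw data. Illustrative only; the lectures treat general upper and
# lower bounds on the communication actually required.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, random_state=0)
site_indices = np.array_split(np.random.default_rng(0).permutation(len(X)), 3)

coefs, intercepts = [], []
for idx in site_indices:                 # purely local training at each site
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    coefs.append(m.coef_)
    intercepts.append(m.intercept_)

# the coordinator averages the parameters it received
center = LogisticRegression()
center.coef_ = np.mean(coefs, axis=0)
center.intercept_ = np.mean(intercepts, axis=0)
center.classes_ = np.array([0, 1])

print(f"averaged-model accuracy: {center.score(X, y):.3f}")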


Prerequisites (for both courses):

Undergraduate knowledge of linear algebra and probability, as well as an introductory computer science course.