Computer Science/Discrete Mathematics Seminar I
Combinatorial and Geometric Challenges in PAC Learning with Partial Concepts
I will describe a recent extension of the classical PAC (Probably Approximately Correct) learning framework [Vapnik and Chervonenkis, 1970s; Valiant, 1980s]. This extension makes it possible to model a wide range of common and practical data-dependent assumptions, such as margin conditions or low-dimensional structure. The key idea is to move from total function classes to partial function classes, whose functions may be undefined on certain regions of the domain, reflecting assumptions about where the data lie. This perspective reveals phenomena fundamentally different from those in the classical PAC model. For example, the classical empirical risk minimization (ERM) principle, which fully characterizes learnability in the classical PAC framework and is viewed as its central algorithmic paradigm, fails for general partial function classes.
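For concreteness, here is a minimal sketch of the partial-function formalism, in notation of my own choosing (the abstract itself fixes none): a partial concept on a domain $X$ is a map
\[
h \colon X \to \{0, 1, \star\}, \qquad \operatorname{supp}(h) := \{\, x \in X : h(x) \neq \star \,\},
\]
where $\star$ marks the points on which $h$ is undefined, and a learner is only required to succeed on distributions under which some member of the class is (almost surely) defined and correct. The margin condition mentioned above corresponds, for instance, to the partial class of $\gamma$-margin halfspaces on $\mathbb{R}^d$,
\[
h_w(x) =
\begin{cases}
1 & \text{if } \langle w, x \rangle \ge \gamma,\\
0 & \text{if } \langle w, x \rangle \le -\gamma,\\
\star & \text{otherwise},
\end{cases}
\]
which is defined exactly on the points at distance at least $\gamma$ from the separating hyperplane (assuming $\|w\| = 1$).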
I will then turn to the broader question of how much more expressive partial function classes are than traditional total function classes (one way to make this precise is sketched below). This question leads to new open problems at the interface of combinatorics, geometry, and topology, which I will present in the talk. The talk will not assume prior knowledge of learning theory.
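One natural formalization of the expressiveness question, which I am assuming here in the spirit of the framework above, is via disambiguations: a total class $\bar{\mathcal{H}} \subseteq \{0,1\}^X$ disambiguates a partial class $\mathcal{H}$ if every finite sample consistent with some member of $\mathcal{H}$ is also consistent with some member of $\bar{\mathcal{H}}$, that is, for all $(x_1, y_1), \dots, (x_m, y_m) \in X \times \{0,1\}$,
\[
\bigl(\exists\, h \in \mathcal{H}\ \forall\, i \le m :\ h(x_i) = y_i\bigr)
\;\Longrightarrow\;
\bigl(\exists\, \bar{h} \in \bar{\mathcal{H}}\ \forall\, i \le m :\ \bar{h}(x_i) = y_i\bigr).
\]
One then asks how the complexity (say, the VC dimension) of the best disambiguation $\bar{\mathcal{H}}$ can compare with that of $\mathcal{H}$.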