IAS-PNI Seminar on ML and Neuroscience

Compositional generalization in minds and machines

People learn in fast and flexible ways that elude the best artificial neural networks. Once a person learns how to “dax,” they can effortlessly understand how to “dax twice” or “dax vigorously” thanks to their compositional skills. In this talk, we examine how people and machines generalize compositionally in language-like instruction learning tasks. Artificial neural networks have long been criticized for lacking systematic compositionality (Fodor & Pylyshyn, 1988; Marcus, 1998), but new architectures have been tackling increasingly ambitious language tasks. In light of these developments, we reevaluate these classic criticisms and find that artificial neural nets still fail spectacularly when systematic compositionality is required. We then show how people succeed in similar few-shot learning tasks, identifying three inductive biases they rely on that can be incorporated into models. Finally, we show how more structured neural nets can acquire compositional skills and human-like inductive biases through meta-learning.

Featuring

Brenden Lake

Speaker Affiliation

New York University

Affiliation

Mathematics

Date & Time
February 18, 2020 | 4:00–5:30 pm

Location

Princeton Neurosciences Institute: Room A32
