Workshop on New Directions in Optimization, Statistics and Machine Learning

Steps towards more human-like learning in machines

There are several broad insights we can draw from computational models of human cognition in order to build more human-like forms of machine learning. (1) The brain has a great deal of built-in structure, yet still a tremendous need and potential for learning. Instead of seeing built-in structure and learning as being in tension, we should be thinking about how to learn effectively with more and richer forms of structure. (2) The most powerful forms of human knowledge are symbolic and often causal and probabilistic. Symbolic forms allow knowledge to generalize well outside the space of training data and tasks, and to be shared publicly and bootstrapped through collective learning. Probabilistic causal knowledge is actionable for plans, supports counterfactual thinking and hypothetical reasoning, and allows us to make predictions in situations for which we may have no relevant data. Human-like machine learning thus has to be able to build symbolic, causal, probabilistic representations. I will talk about steps towards these goals reflected in some recent work by our group and collaborators, combining techniques from deep learning, program synthesis, hierarchical Bayesian modeling and probabilistic programs.
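To make the idea of a symbolic, causal, probabilistic representation concrete, the sketch below is a minimal probabilistic program in plain Python. It is not code from the talk or from the speaker's group; the model (a textbook rain/sprinkler/wet-lawn example) and the function names `generative_model` and `infer` are hypothetical choices for illustration. The point it shows is that a short generative program encodes causal structure explicitly, and generic inference over that program (here, simple rejection sampling) supports prediction and conditioning even without task-specific training data.

```python
# Illustrative sketch only: a tiny probabilistic program and a generic
# inference routine, assuming a standard rain/sprinkler/wet-lawn causal story.
import random


def generative_model():
    """Forward (causal) simulation: sample causes, then their effects."""
    rain = random.random() < 0.2        # prior probability of rain
    sprinkler = random.random() < 0.3   # prior probability the sprinkler ran
    # The effect depends causally on its parents, plus a small noise term.
    wet_lawn = rain or sprinkler or (random.random() < 0.05)
    return {"rain": rain, "sprinkler": sprinkler, "wet_lawn": wet_lawn}


def infer(condition, query, n=100_000):
    """Estimate P(query | condition) by rejection sampling from the program."""
    hits = total = 0
    for _ in range(n):
        world = generative_model()
        if condition(world):
            total += 1
            hits += query(world)
    return hits / total if total else float("nan")


if __name__ == "__main__":
    # Conditioning the causal program on an observation:
    # how likely is rain, given only that the lawn is wet?
    p = infer(condition=lambda w: w["wet_lawn"],
              query=lambda w: w["rain"])
    print(f"P(rain | wet lawn) ~ {p:.2f}")
```

The same program can be queried in other ways (different conditions or queries) without retraining, which is one sense in which such representations generalize beyond the data and tasks they were written for.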

Date & Time

April 16, 2020 | 4:30pm – 5:30pm

Location

Virtual

Speakers

Josh Tenenbaum

Affiliation

MIT
