Event

MCCHE Precision Convergence Webinar Series with Nathaniel Daw

Wednesday, April 29, 2026, 10:00 to 13:00

Learning --- and Learning to Learn

By Nathaniel Daw

Professor, Princeton University

Date: Wednesday, April 29, 2026
Time: 10:00 a.m. to 1:00 p.m.
Location: Online



Abstract

Classic work on learning from rewards has emphasized a simple error-driven mechanism for incremental averaging, with accompanying behavioral evidence, neural correlates, and connections to normative ideas from statistical machine learning. This talk reviews theory and evidence about how such trial-by-trial learning rules are themselves shaped by more abstract learning processes that adapt them to individual situations. I discuss how our conceptions of such "learning to learn" or "metalearning" have grown in scope and ambition: from adjusting the hyperparameters of a single baseline learning rule; to judiciously trading off several candidate learning rules according to context; to discovering new abstractions or learning rules optimized to a task or environment. This work in biological learning parallels a similar expansion of perspective in machine learning, most recently in "in-context learning" in large language models. Neurally, this research implies a shift from textbook synaptic plasticity rules for incremental learning toward more flexible accounts in which "inner-loop" learning concerns representations maintained in recurrent activity states (e.g., in prefrontal cortex), whose dynamics and implied "learning rules" are themselves shaped by plasticity. Recent behavioral and neural evidence from humans and animals has begun to shed light on these two mechanisms.
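To make the two ideas in the abstract concrete, here is a minimal sketch, not code from the talk: the inner loop is the classic error-driven incremental-averaging rule, and the outer loop is one illustrative "learning to learn" scheme (a Pearce-Hall-style update, chosen here as an assumption) that adapts the learning rate from recent prediction errors.

```python
# Minimal illustrative sketch (assumptions, not the speaker's code):
# inner loop  -- error-driven incremental averaging, V <- V + alpha * delta
# outer loop  -- the learning rate alpha is itself adapted from the size
#                of recent prediction errors (a Pearce-Hall-style rule).

def run(rewards, alpha=0.3, eta=0.1):
    """Track the running estimate of a reward stream with an adaptive rate.

    v     -- current value estimate
    alpha -- inner-loop learning rate, updated by the outer loop
    eta   -- meta-learning rate governing how fast alpha adapts
    """
    v = 0.0
    for r in rewards:
        delta = r - v          # prediction error
        v += alpha * delta     # inner-loop update
        # Outer-loop update: large surprises push the learning rate up,
        # stable stretches let it decay toward small values.
        alpha += eta * (min(abs(delta), 1.0) - alpha)
    return v, alpha

if __name__ == "__main__":
    # Stationary rewards: alpha settles low as errors shrink.
    print(run([1.0] * 50))
    # Volatile rewards: persistent errors keep alpha high.
    print(run([1.0, 0.0] * 25))
```

Under stationary input the estimate converges and the learning rate decays; under volatile input the sustained prediction errors hold the learning rate high, which is the hyperparameter-adaptation flavor of metalearning the abstract describes first.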
