Structured Reinforcement Learning
February 17, 2014 - 12:00pm
William James Hall 765
About the Speaker
Samuel Gershman (MIT)

Humans can make strong inferences on the basis of little (or even no) experience. This capability rests on a rich repertoire of inductive biases that constrain the space of plausible hypotheses. Recent behavioral, neural, and computational research suggests that structured inductive knowledge plays an important role in human reinforcement learning. A new picture of reinforcement learning is emerging from this research, one that emphasizes interactions between simple error-driven learning mechanisms and high-level cognitive processes.