Northwest Labs 243
Speaker: Jay Hennig (Gershman lab). Lunch at 11:50.
Title: Emergence of belief-like representations through reinforcement learning
Abstract: Performing a task well often requires learning a particular state representation. These representations often need to account for uncertainty about the true state of the world. For example, to succeed at a card game like poker, one might need to estimate the hidden state of the other players, such as the cards they hold. From a theoretical perspective, one optimal representation is a “belief,” the posterior estimate of the underlying state given the history of observations. In this talk I will discuss how recurrent neural networks (RNNs) learn a state representation for estimating value in associative learning tasks with and without state uncertainty. I will show evidence that RNNs succeed at these tasks by forming compressed, belief-like representations. These RNN models make predictions about what neural population activity might look like in animals performing the same tasks.
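
For attendees unfamiliar with the term, below is a minimal sketch (not from the talk) of the "belief" representation described in the abstract: a posterior over discrete hidden states, updated by Bayesian filtering as observations arrive. The two-state world, transition matrix T, and observation likelihoods O are hypothetical choices for illustration only.

    import numpy as np

    T = np.array([[0.9, 0.1],    # T[i, j] = P(next state j | current state i)
                  [0.2, 0.8]])
    O = np.array([[0.7, 0.3],    # O[i, k] = P(observation k | state i)
                  [0.1, 0.9]])

    def update_belief(belief, obs):
        """One filtering step: predict with T, weight by the likelihood
        of the new observation, then renormalize."""
        predicted = belief @ T              # prior over the next state
        posterior = predicted * O[:, obs]   # multiply by observation likelihood
        return posterior / posterior.sum()  # normalize to a probability vector

    belief = np.array([0.5, 0.5])           # uniform initial belief
    for obs in [0, 0, 1, 1]:                # an example observation history
        belief = update_belief(belief, obs)
        print(belief)

The running "belief" vector summarizes the entire observation history; the talk concerns whether RNNs trained only to estimate value end up encoding something like this quantity in their hidden activity.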