Title: Mechanistic biases in models for neural dynamics
Abstract: One of the central goals of neuroscience is to gain a mechanistic understanding of how the dynamics of neural circuits give rise to their observed function. A popular approach towards this end is to train recurrent neural networks (RNNs) to reproduce experimental recordings of neural activity. These trained RNNs are then treated as surrogate models of biological neural circuits, and their properties are dissected via dynamical systems analysis. How reliable are the mechanistic insights derived from this procedure? In this talk, I will discuss recent work with Billy Qian and Cengiz Pehlevan in which we show that partial observation and model selection procedures lead to a mechanistic bias towards discovering line attractors in models for temporal integration. I will then discuss ongoing work with Blake Bordelon and Jordan Cotler in which we analyze a solvable model for learning to integrate in recurrent networks, which gives a precise understanding of how such mechanistic biases can arise.
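
As a concrete illustration of the kind of pipeline the abstract refers to (not the authors' code), the sketch below trains a small RNN on a toy one-dimensional evidence-integration task and then probes its autonomous dynamics for line-attractor structure via a Sussillo & Barak-style slow-point search. The task, architecture, hyperparameters, and analysis choices are all assumptions made purely for illustration.

```python
# Illustrative sketch only: train a small RNN to integrate noisy evidence,
# then look for line-attractor structure by finding slow points of the
# autonomous dynamics and inspecting the Jacobian spectrum there.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_HIDDEN, T_STEPS, BATCH = 64, 100, 128   # assumed sizes, for illustration

rnn = nn.RNN(input_size=1, hidden_size=N_HIDDEN, batch_first=True)
readout = nn.Linear(N_HIDDEN, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

def make_trials():
    # Noisy momentary evidence; the target is its running sum (perfect integration).
    u = 0.1 * torch.randn(BATCH, T_STEPS, 1)
    return u, torch.cumsum(u, dim=1)

for step in range(2000):
    u, y = make_trials()
    h, _ = rnn(u)
    loss = ((readout(h) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def step_fn(h):
    # One step of the trained RNN's dynamics under zero input.
    zero_u = torch.zeros(h.shape[0], 1, 1)
    _, h_next = rnn(zero_u, h.unsqueeze(0))
    return h_next.squeeze(0)

# Slow-point search: minimize the "speed" q(h) = ||F(h) - h||^2,
# initialized from hidden states visited during trials.
with torch.no_grad():
    u, _ = make_trials()
    h_traj, _ = rnn(u)
inits = h_traj.reshape(-1, N_HIDDEN)[::200].clone().requires_grad_(True)
fp_opt = torch.optim.Adam([inits], lr=1e-2)
for _ in range(2000):
    q = ((step_fn(inits) - inits) ** 2).sum(dim=1).mean()
    fp_opt.zero_grad(); q.backward(); fp_opt.step()

# A line attractor would appear as slow points tracing out a roughly
# one-dimensional curve, with one Jacobian eigenvalue near 1 at each point
# and the rest well inside the unit circle (in discrete time).
h_star = inits[0].detach()
J = torch.autograd.functional.jacobian(
    lambda h: step_fn(h.unsqueeze(0)).squeeze(0), h_star)
print("largest |eigenvalue| at a slow point:",
      torch.linalg.eigvals(J).abs().max().item())
```

Even on a toy task like this, such an analysis will typically report line-attractor-like structure; the question raised in the abstract is when such a finding reflects the mechanism of the underlying circuit rather than a bias introduced by partial observation and model selection.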