Discovering interpretable structure in neural spike trains with negative binomial generalized linear models


December 3, 2014 - 1:00pm
NW 243
About the Speaker
Scott Linderman (Valiant Lab)

The steady expansion of neural recording capability provides exciting opportunities to discover unexpected patterns and gain new insights into neural computation. Realizing these gains requires statistical methods for extracting interpretable structure from large-scale neural recordings. In this talk I will present our recent work on methods that reveal such structure in simultaneously recorded multi-neuron spike trains. We use generalized linear models (GLMs) with negative binomial observations to describe spike trains, which provide a flexible model for over-dispersed spike counts (i.e., responses with greater-than-Poisson variability). Interpretable properties such as latent cell types and features, hidden states of the network, and unknown synaptic plasticity rules are incorporated into the model as latent variables that mediate the functional connectivity of the GLM. We exploit recent innovations in negative binomial regression to perform efficient, fully Bayesian sampling of the posterior distribution over parameters given the data. We apply our models to neural recordings from primate retina, rat hippocampal place cells, and neural simulators to discover latent structure in population recordings.
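To make the over-dispersion point concrete, the following sketch (not from the talk; parameter values are illustrative) draws spike counts from a negative binomial distribution and checks that the sample variance exceeds the sample mean. For a negative binomial with shape r and success probability p, the mean is r(1-p)/p and the variance is r(1-p)/p², so the variance-to-mean ratio (Fano factor) is 1/p ≥ 1; a Poisson model, by contrast, forces this ratio to 1.

```python
# Illustrative sketch: over-dispersed spike counts under a negative binomial.
# r and p are assumed example parameters, not values from the talk.
import numpy as np

rng = np.random.default_rng(0)
r, p = 5.0, 0.25  # shape and success probability (hypothetical)

# Simulated spike counts for one neuron across many trials.
counts = rng.negative_binomial(r, p, size=100_000)

mean = counts.mean()
var = counts.var()
# Theory: mean = r(1-p)/p = 15, variance = r(1-p)/p^2 = 60, Fano factor = 1/p = 4.
print(f"mean = {mean:.2f}, variance = {var:.2f}, Fano factor = {var / mean:.2f}")
```

A Poisson GLM would be unable to capture this variability without inflating the mean, which is why the negative binomial observation model is attractive for real spike-count data.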