Using modern, deep Bayesian inference to analyse neural data and understand neural systems

Summary

Date: January 16, 2018, 12:00 pm
Location: Northwest Building, Room B103

About the Speaker
Name: Laurence Aitchison
Title: Postdoctoral Fellow
Affiliation: Cambridge

I consider how Bayesian inference can address the analytical and theoretical challenges presented by increasingly complex, high-dimensional neuroscience datasets.

With the advent of Bayesian deep neural networks, GPU computing, and automatic differentiation, it is becoming increasingly possible to perform large-scale Bayesian analyses of data, simultaneously inferring complex biological phenomena and experimental confounds. I present a proof of principle: inferring causal connectivity from an all-optical experiment combining calcium imaging with cell-specific optogenetic stimulation. The model simultaneously infers spikes from fluorescence, models low-rank activity and the extent of off-target optogenetic stimulation, and gives explicit uncertainty estimates for the inferred connection matrix.
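To make that recipe concrete, the sketch below shows factorised (mean-field) variational inference over a connectivity matrix, fit by stochastic gradient ascent on the ELBO using automatic differentiation in PyTorch. It is a minimal toy, not the model described above: the linear-Gaussian likelihood, the dimensions, and every variable name are illustrative assumptions, and a real analysis would additionally model the fluorescence-to-spike mapping, low-rank activity, and off-target stimulation.

```python
# Minimal sketch (illustrative assumptions throughout): mean-field
# variational inference over a connectivity matrix W, trained by
# gradient ascent on the ELBO via automatic differentiation.
import torch

torch.manual_seed(0)
N, T = 20, 500                          # neurons, time bins (toy sizes)
W_true = 0.3 * torch.randn(N, N)        # ground-truth connectivity
stim = torch.randn(T, N)                # optogenetic stimulation per bin
resp = stim @ W_true.T + 0.1 * torch.randn(T, N)   # observed responses

# Factorised Gaussian posterior q(W) with learnable mean and scale.
mu = torch.zeros(N, N, requires_grad=True)
rho = torch.full((N, N), -3.0, requires_grad=True)
opt = torch.optim.Adam([mu, rho], lr=0.02)

for step in range(2000):
    opt.zero_grad()
    sigma = torch.nn.functional.softplus(rho)
    W = mu + sigma * torch.randn(N, N)  # reparameterised sample from q(W)
    # Log-likelihood of the responses under the sampled connectivity.
    log_lik = torch.distributions.Normal(stim @ W.T, 0.1).log_prob(resp).sum()
    # Closed-form KL from q(W) to a standard-normal prior over weights.
    kl = (-torch.log(sigma) + 0.5 * (sigma**2 + mu**2) - 0.5).sum()
    (kl - log_lik).backward()           # minimise the negative ELBO
    opt.step()

# mu now estimates the connection matrix, and softplus(rho) gives
# per-weight posterior uncertainty, the analogue of the explicit
# uncertainty estimates described in the talk.
```

The same ELBO-plus-autodiff pattern scales to far richer generative models; in principle only the likelihood term changes.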

Further, there is considerable evidence that humans and animals use Bayes' theorem to reason optimally about uncertainty. I present two projects addressing how Bayesian reasoning might be instantiated at the level of neural circuits and synapses.

At the circuit level, I show that sampling-based Bayesian inference emerges naturally when classical sparse-coding models are combined with a biophysically motivated energetic cost of achieving reliable responses. We understand these results theoretically by noting that the resulting combined objective approximates that of a classical Bayesian method: variational inference. Given this strong theoretical underpinning, we are able to extend the model to multi-layered networks modelling MNIST digits.

At the synaptic level, I consider how synapses might speed up learning by exploiting Bayes' theorem to reason about uncertainty. The resulting learning rules are simple extensions of classical gradient-based learning rules, with an additional term that uses uncertainty to modulate the learning rate (a toy sketch of such a rule appears below). Further, I consider how synapses might communicate their uncertainties to downstream circuits by coupling uncertainty to EPSP variability. This hypothesis makes a novel prediction: normalised EPSP variability should decrease as the presynaptic firing rate increases. I test this prediction with a reanalysis of existing data.
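As a hedged illustration of what an uncertainty-modulated learning rule can look like (an assumed form, not the speaker's exact derivation), the sketch below implements a diagonal Kalman-filter-style update for a single linear neuron: each weight tracks a posterior mean and variance, and the variance scales the learning rate, so uncertain weights learn faster. The observation-noise and drift parameters are illustrative assumptions.

```python
# Sketch of an uncertainty-modulated delta rule (diagonal Kalman filter).
import numpy as np

rng = np.random.default_rng(0)
D = 10                                 # number of synapses
w_true = rng.normal(size=D)            # slowly drifting "true" weights
m = np.zeros(D)                        # posterior mean of each weight
s2 = np.ones(D)                        # posterior variance of each weight
obs_noise = 0.5                        # assumed output-noise variance
drift = 1e-4                           # assumed weight-drift variance per step

for t in range(5000):
    w_true += np.sqrt(drift) * rng.normal(size=D)       # world changes slowly
    x = rng.normal(size=D)                              # presynaptic activity
    y = w_true @ x + np.sqrt(obs_noise) * rng.normal()  # feedback signal
    s2 += drift                                # uncertainty grows between updates
    err = y - m @ x                            # prediction error (delta rule)
    gain = s2 * x / (obs_noise + (s2 * x**2).sum())  # per-weight Kalman gain
    m += gain * err                            # uncertainty modulates the step size
    s2 -= gain * x * s2                        # each observation shrinks uncertainty

print(np.abs(w_true - m).mean())               # tracking error, typically small
```

Note how `gain` reduces to a fixed-learning-rate gradient step when all the `s2` are equal, which is the sense in which such rules are simple extensions of classical gradient-based learning.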