Using deep learning to unmask functional capabilities of recurrent networks of spiking neurons

Summary

Date: April 24, 2018, 12:00pm
Location: Northwest Building, Room B103

About the Speaker
Name: Wolfgang Maass
Title: Professor
Affiliation: Graz University of Technology

Computing and learning capabilities of hand-constructed recurrent networks of spiking neurons tend to be poor. Their capabilities increase dramatically if one optimizes their parameters and connectivity for a large range of computing and learning tasks through deep learning. In fact, if one includes adapting neurons in the network model, the computational capabilities of recurrent networks of spiking neurons approach the performance level of LSTM networks on some tasks.
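
To give a rough impression of the kind of adapting neuron model involved, here is a minimal sketch of a leaky integrate-and-fire neuron with an adaptive firing threshold. All parameter values, variable names, and the reset convention below are illustrative assumptions, not details taken from the talk:

```python
import numpy as np

# Minimal sketch (assumed parameters): a leaky integrate-and-fire neuron
# whose firing threshold rises after each spike and slowly decays back,
# giving the neuron a slow memory trace of its recent activity.
dt = 1.0                            # time step (ms), illustrative
tau_m, tau_a = 20.0, 200.0          # membrane / adaptation time constants (ms)
alpha = np.exp(-dt / tau_m)         # membrane decay per step
rho = np.exp(-dt / tau_a)           # threshold-adaptation decay per step
b0, beta = 1.0, 1.7                 # baseline threshold, adaptation strength

v, b = 0.0, 0.0                     # membrane potential, adaptation variable
rng = np.random.default_rng(0)
for t in range(1000):
    I = rng.normal(0.5, 0.3)        # synthetic input current
    v = alpha * v + I               # leaky integration of the input
    threshold = b0 + beta * b       # effective, adapting threshold
    z = float(v >= threshold)       # emit a spike if the threshold is crossed
    v -= z * b0                     # reset by subtraction (one common choice)
    b = rho * b + (1.0 - rho) * z   # threshold adaptation driven by spikes
```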

LSTM networks, i.e., artificial neural networks with long short-term memory modules, currently produce some of the best results in machine learning for temporal processing tasks such as speech recognition and video prediction.

By applying learning-to-learn methods in a similar manner to recurrent networks of spiking neurons, one finds new learning methods for networks of spiking neurons, and ways in which they can extract abstract knowledge from a series of learning tasks.
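
To make the two-loop structure of learning-to-learn concrete, the toy sketch below tunes a single meta-parameter (the inner learning rate) across a family of random regression tasks. The task family, the choice of meta-parameter, and all numerical details are hypothetical illustrations, not the method of the cited paper:

```python
import numpy as np

# Toy learning-to-learn sketch under assumed details: an outer loop tunes
# one meta-parameter shared across a task family, while the inner loop
# learns each individual task from scratch.
rng = np.random.default_rng(0)

def sample_task():
    """Draw one task from the family: targets y = w_star * x."""
    w_star = rng.normal()
    x = rng.normal(size=20)
    return x, w_star * x

def inner_loss(eta, task, n_steps=10):
    """Inner loop: learn one task by gradient descent, return final loss."""
    x, y = task
    w = 0.0
    for _ in range(n_steps):
        grad = np.mean(2 * (w * x - y) * x)
        w -= eta * grad
    return np.mean((w * x - y) ** 2)

eta, meta_lr, eps = 0.01, 0.01, 1e-3
for _ in range(200):                     # outer loop over many sampled tasks
    task = sample_task()
    # Finite-difference estimate of how the inner learning rate affects
    # the loss remaining after inner-loop learning on this task.
    g = (inner_loss(eta + eps, task) - inner_loss(eta - eps, task)) / (2 * eps)
    eta = float(np.clip(eta - meta_lr * np.clip(g, -1.0, 1.0), 1e-4, 0.45))
print(f"meta-learned inner learning rate: {eta:.3f}")
```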

The resulting networks of spiking neurons share with networks of neurons in the brain the property that they have undergone long optimization processes and prior learning before they learn a particular task.

Further parallels to computational neuroscience emerge when one starts to investigate HOW the artificially optimized networks attain their superior functional capabilities.

A first report on this work in progress is available on arXiv:
G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass. Long short-term memory and learning-to-learn in networks of spiking neurons. arXiv:1803.09574, 2018.