Publication


A. A. Lazar and Y. Zhou
Massively Parallel Neural Encoding and Decoding of Visual Stimuli
Neural Networks, Volume 32, pp. 303-312, August 2012, Special Issue: IJCNN 2011
The massively parallel nature of Video Time Encoding Machines (TEMs) calls for scalable, massively parallel decoders that are implemented with neural components. The current generation of decoding algorithms is based on computing the pseudo-inverse of a matrix and does not satisfy these requirements. Here we consider Video TEMs with an architecture built using Gabor receptive fields and a population of Integrate-and-Fire neurons. We show how to build a scalable architecture for Video Time Decoding Machines using recurrent neural networks. Furthermore, we extend our architecture to handle the reconstruction of visual stimuli encoded with massively parallel Video TEMs having neurons with random thresholds. Finally, we discuss our algorithms in detail and demonstrate their scalability and performance on a large-scale GPU cluster.

Keywords: neural encoding of visual stimuli, spiking neural models, massively parallel reconstruction of visual stimuli, recurrent neural networks, neural circuits with random thresholds, receptive fields.
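
To make the encoding and decoding steps summarized in the abstract concrete, the following is a minimal one-dimensional sketch, not the paper's video algorithm: an ideal Integrate-and-Fire time encoder, the linear system relating inter-spike intervals to inner products of the stimulus, and two decoders, the explicit pseudo-inverse baseline and a simple recurrent gradient-flow iteration of the general kind that can replace it. All function names, parameter values, and the sinc reconstruction kernel are illustrative assumptions, not details taken from the paper.

import numpy as np

def iaf_encode(u, dt, b, kappa, delta):
    """Spike times of an ideal IAF neuron: integrate (u + b)/kappa, fire at threshold delta."""
    v, spikes = 0.0, []
    for k, uk in enumerate(u):
        v += dt * (uk + b) / kappa
        if v >= delta:
            spikes.append(k * dt)
            v -= delta                      # reset by subtracting the threshold
    return np.asarray(spikes)

def sinc_kernel(t, omega):
    """Reconstruction kernel sin(omega*t)/(pi*t) for an omega-bandlimited signal."""
    return omega / np.pi * np.sinc(omega * t / np.pi)

def build_system(spikes, t_grid, omega, b, kappa, delta):
    """Ideal IAF relation: integral of u over [t_k, t_{k+1}] equals kappa*delta - b*(t_{k+1} - t_k)."""
    s = 0.5 * (spikes[:-1] + spikes[1:])     # representation points (interval midpoints)
    q = kappa * delta - b * np.diff(spikes)  # measurements of the stimulus
    dt = t_grid[1] - t_grid[0]
    G = np.empty((q.size, s.size))
    for k in range(q.size):                  # G[k, l]: integral of kernel l over interval k
        mask = (t_grid >= spikes[k]) & (t_grid < spikes[k + 1])
        G[k, :] = np.sum(sinc_kernel(t_grid[mask][:, None] - s[None, :], omega), axis=0) * dt
    return G, q, s

# Toy bandlimited stimulus and its encoding.
omega = 2 * np.pi * 20.0
t = np.arange(0.0, 0.2, 1e-5)
u = np.sin(2 * np.pi * 15.0 * t) + 0.5 * np.cos(2 * np.pi * 8.0 * t)
spikes = iaf_encode(u, 1e-5, b=2.0, kappa=1.0, delta=0.005)
G, q, s = build_system(spikes, t, omega, b=2.0, kappa=1.0, delta=0.005)

# (a) Pseudo-inverse decoding: the baseline the abstract argues does not scale.
c_pinv = np.linalg.pinv(G) @ q

# (b) Recurrent gradient-flow dynamics dc/dt = G^T (q - G c), discretized with Euler
# steps; this converges toward a least-squares solution of G c = q. It is one standard
# recurrent scheme for such problems, not necessarily the paper's network architecture.
c = np.zeros_like(q)
eta = 1.0 / np.linalg.norm(G, 2) ** 2        # step size below 2 / lambda_max(G^T G)
for _ in range(20000):
    c += eta * (G.T @ (q - G @ c))

u_hat = sinc_kernel(t[:, None] - s[None, :], omega) @ c   # reconstructed stimulus

In this toy setting both decoders recover the stimulus from its spike times; the point of contrast is that (a) requires an explicit matrix pseudo-inverse, while (b) only needs repeated matrix-vector products, the kind of operation a recurrent neural circuit or a GPU can parallelize.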

Reference


@article{LAY12,
  author = "A. A. Lazar and Y. Zhou",
  title = "Massively Parallel Neural Encoding and Decoding of Visual Stimuli",
  year = 2012,
  journal = "Neural Networks",
  volume = 32,
  pages = "303-312",
  month = "Aug"
}