Time Encoding Machines

Massively Parallel Neural Encoding and Decoding

We pioneered formal methods for decoding stimuli (signals) encoded with Time Encoding Machines (TEMs). TEMs are asynchronous signal processors that encode analog information in the time (spike) domain. They play a key role in modeling the representation of the natural world by biological sensory systems and in building sensors for silicon-based information systems. There is also substantial interest in TEMs as front ends of brain-machine interfaces, i.e., as building blocks connecting biological and silicon-based information systems.

We analyzed the representation of stimuli in the spike domain by TEMs realized as single-input single-output, single-input multi-output, multi-input multi-output, and space-time encoding circuits. The neural encoding circuits are built upon classical spiking neuron models, including Integrate-and-Fire, Threshold-and-Fire, and Hodgkin-Huxley; neural encoding circuits with feedback and with random thresholds were also investigated. The "silicon" encoding circuits are built upon Asynchronous Sigma/Delta Modulators and FM modulators in cascade with zero-crossing detectors.
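As a caricature of the encoding step, an ideal Integrate-and-Fire time encoder can be sketched in a few lines of Python. The parameter names (bias, kappa, delta) are illustrative; the bias is chosen larger than the stimulus amplitude so that interspike intervals stay bounded, and spike times are quantized to the simulation grid:

```python
import numpy as np

def iaf_encode(u, dt, bias, kappa, delta):
    """Toy ideal Integrate-and-Fire time encoder (a minimal sketch).

    Integrates (u(t) + bias) / kappa and emits a spike whenever the
    integral crosses the threshold delta, then subtracts delta (reset).
    Returns the spike times (quantized to the simulation grid)."""
    spikes, y = [], 0.0
    for k, uk in enumerate(u):
        y += dt * (uk + bias) / kappa
        if y >= delta:
            spikes.append((k + 1) * dt)
            y -= delta
    return np.array(spikes)

# Encode one second of a 1 Hz sinusoid; bias > max|u| guarantees
# the integrator keeps advancing and spikes keep being produced.
dt = 1e-4
t = np.arange(0, 1, dt)
u = np.sin(2 * np.pi * t)
s = iaf_encode(u, dt, bias=2.0, kappa=1.0, delta=0.01)
print(len(s))   # roughly (integral of u + bias) / delta = 200 spikes
```

The spike density scales with the bias and inversely with kappa and delta, which is what makes Nyquist-type recovery conditions attainable.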

Time Decoding Machines (TDMs) recover the encoded stimulus from the time (spike) sequence. We discovered that arbitrarily precise stimulus recovery can be achieved under Nyquist-type rate conditions. Furthermore, we demonstrated for the first time the faithful recovery of natural video (movies, animation) and auditory scenes (speech, sounds) encoded with neural circuits consisting of canonical spiking neuron models (including Hodgkin-Huxley). The derivation of these analytical results is based on frame theory, the theory of dynamical systems, statistics and machine learning.

  1. Aurel A. Lazar, Population Encoding with Hodgkin-Huxley Neurons, IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February 2010, Special Issue on Molecular Biology and Neuroscience.
  2. Aurel A. Lazar and Yiyin Zhou, Massively Parallel Neural Encoding and Decoding of Visual Stimuli, Neural Networks, Volume 32, pp. 303-312, August 2012, Special Issue: Selected Papers from IJCNN 2011.
  3. Aurel A. Lazar and Yiyin Zhou, Reconstructing Natural Visual Scenes from Spike Times, Proceedings of the IEEE, Volume 102, Number 10, October 2014 (invited paper).
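The recovery result can be sketched in miniature: the t-transform turns each interspike interval into one linear measurement of the stimulus, and a pseudoinverse over shifted sinc kernels recovers a bandlimited signal from those measurements. The toy Python version below (with an inline IAF encoder to generate the spikes; all parameter names are illustrative, and this is a sketch, not the authors' implementation) decodes a 1 Hz sinusoid:

```python
import numpy as np

def iaf_encode(u, dt, bias, kappa, delta):
    """Toy ideal IAF encoder (used here only to generate spikes).
    Spike times are linearly interpolated within a time step."""
    spikes, y = [], 0.0
    for k, uk in enumerate(u):
        inc = dt * (uk + bias) / kappa
        y += inc
        if y >= delta:
            over = (y - delta) / inc          # fraction of step past threshold
            spikes.append((k + 1 - over) * dt)
            y -= delta
    return np.array(spikes)

def iaf_decode(s, dur, dt, bias, kappa, delta, Omega):
    """Recover a bandlimited stimulus from IAF spike times s.
    q[k] is the t-transform measurement over interval k; G c = q is
    solved with a truncated pseudoinverse and the stimulus is
    resynthesized as a sum of shifted sinc kernels."""
    t = np.arange(0, dur, dt)
    q = kappa * delta - bias * np.diff(s)     # integral of u per interval
    tau = (s[:-1] + s[1:]) / 2.0              # sinc centers (midpoints)
    g = lambda x: np.sinc(Omega * x / np.pi)  # kernel with bandwidth Omega
    n = len(q)
    G = np.empty((n, n))
    for l in range(n):  # G[l, k] = integral of g(t - tau_k) over interval l
        seg = t[(t >= s[l]) & (t < s[l + 1])]
        G[l] = g(seg[:, None] - tau[None, :]).sum(axis=0) * dt
    c = np.linalg.pinv(G, rcond=1e-6) @ q     # truncate tiny singular values
    return g(t[:, None] - tau[None, :]) @ c, t

dt, dur = 1e-4, 1.0
t = np.arange(0, dur, dt)
u = np.sin(2 * np.pi * t)                     # 1 Hz test stimulus
s = iaf_encode(u, dt, bias=2.0, kappa=1.0, delta=0.01)
u_rec, t = iaf_decode(s, dur, dt, bias=2.0, kappa=1.0,
                      delta=0.01, Omega=4 * np.pi)
```

Because the spike density (about 200 spikes/s) far exceeds the Nyquist rate of the bandwidth Omega, the reconstruction is accurate away from the interval boundaries, where sinc truncation introduces edge error.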

A visual demonstration of decoding a short video stimulus encoded with a Video Time Encoding Machine is shown below.

The original video (upper left corner above) has a resolution of 640x360 pixels (16:9 format). The RGB channels of the original video were encoded separately. For each channel, the Video Time Encoder consists of 96,544 receptive fields in cascade with a population of Hodgkin-Huxley neurons. About 100 million spikes were generated per channel during the 10 seconds of the video. The reconstruction of the video stimulus is shown in the upper right corner. The visual error of the R channel and its 2D spectrum are shown in the bottom left and bottom right corners, respectively. The simulation was performed on a cluster with 55 Tesla M2050 GPUs.

Multisensory Encoding of Audio and Color Visual Fields

  1. Aurel A. Lazar and Yevgeniy B. Slutskiy, Multisensory Encoding, Decoding, and Identification, Advances in Neural Information Processing Systems 26 (NIPS*2013), edited by C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani and K.Q. Weinberger, December 2013.
  2. Aurel A. Lazar, Yevgeniy B. Slutskiy and Yiyin Zhou, Massively Parallel Neural Circuits for Stereoscopic Color Vision: Encoding, Decoding and Identification, Neural Networks, Volume 63, pp. 254-271, March 2015.
  3. Aurel A. Lazar and Yiyin Zhou, Identifying Multisensory Dendritic Stimulus Processors, IEEE Transactions on Molecular, Biological, and Multi-Scale Communications, Volume 2, Number 2, pp. 183-198, December 2016, Special Issue on Biological Applications of Information Theory in Honor of Claude Shannon's Centennial, Part II (invited paper).

The videos below illustrate that the functional identification of a massively parallel neural circuit can be evaluated intuitively, and in its entirety, in the stimulus space. We quantitatively demonstrate how the quality of reconstruction of the encoded signal depends on the number of spikes used to identify the circuit parameters.

The underlying massively parallel neural circuit for encoding color video consists of 30,000 IAF neurons with color-sensitive receptive fields, covering a screen size of 160x90 pixels. The IAF neurons all have the same parameters, and these parameters are assumed to be known. The underlying receptive fields are spatio-temporally non-separable in each of their color components; spatially they resemble 2D Gabor filters, and they rotate around their spatial centers over time. The color receptive fields, however, are assumed to be unknown and need to be identified. All the receptive fields are functionally identified using the same natural video, whose duration is up to 200 seconds. We identify the entire neural circuit under 7 different settings. In the first setting, each neuron's receptive field is identified using 1,000 spikes (measurements); in the remaining settings, each receptive field is identified using 2,000, 4,000, 6,000, 9,000, 13,000, and 17,000 spikes (measurements), respectively.
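The dependence of identification quality on the number of spikes can be caricatured as a linear inverse problem: each spike contributes one linear measurement of the unknown receptive field against a known stimulus. In the toy sketch below (the dimensions, noise level, and random-stimulus setup are made up for illustration; the actual circuits use spatio-temporal kernels and t-transform measurements), least-squares identification improves as measurements accumulate:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                                   # toy receptive-field dimension
phi = rng.standard_normal(d)
phi /= np.linalg.norm(phi)               # unknown receptive field (unit norm)

def measurements(n_spikes, noise):
    """Each 'spike' yields one linear measurement of phi against a
    known random stimulus snippet (a stand-in for the t-transform)."""
    U = rng.standard_normal((n_spikes, d))
    q = U @ phi + noise * rng.standard_normal(n_spikes)
    return U, q

errs = []
for n in (25, 60, 200):                  # more spikes -> better identification
    U, q = measurements(n, noise=0.05)
    phi_hat, *_ = np.linalg.lstsq(U, q, rcond=None)
    errs.append(np.linalg.norm(phi_hat - phi))
    print(n, round(errs[-1], 3))
```

With fewer measurements than unknowns (n = 25 < d = 50) the problem is underdetermined and the error stays large; once the measurements comfortably exceed the number of unknowns, the error is governed by the noise and keeps shrinking, mirroring the convergence seen across the 7 identification settings.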

A novel stimulus (the bee video shown here) encoded by the underlying circuit is then recovered. Instead of the underlying circuit parameters, the identified circuit parameters are used in decoding. Note that the set of spikes used is still the one generated by the underlying circuit, so the decoding quality depends on how well the circuit has been identified. The reconstructed video and the associated Signal-to-Noise Ratio (SNR) are shown in the above video for each identification setting. As identification quality increases (more spikes are used), the quality of reconstruction converges to that obtained with the underlying circuit parameters (known receptive fields/filters).
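For reference, one common definition of the reconstruction SNR reported in such evaluations is the ratio of signal power to error power, in decibels; a minimal computation:

```python
import numpy as np

def snr_db(u, u_rec):
    """Reconstruction Signal-to-Noise Ratio in dB:
    10 * log10(signal power / error power)."""
    return 10.0 * np.log10(np.sum(u ** 2) / np.sum((u - u_rec) ** 2))

t = np.linspace(0, 1, 1000)
u = np.sin(2 * np.pi * t)
print(round(snr_db(u, u + 0.01), 1))   # a constant 1% offset -> 37.0 dB
```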

Encoding with Dendritic Stimulus Processors

We investigated multi-input multi-output neural circuit architectures, consisting of a bank of dendritic stimulus processors (DSPs), for the nonlinear processing and encoding of stimuli. DSPs execute nonlinear transformations, in the analog domain, of multiple temporal or spatiotemporal signals such as spike trains or auditory and visual stimuli. We demonstrated a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. We also showed that identification algorithms can be effectively and intuitively evaluated in the stimulus space: there, a signal reconstructed from spike trains generated by the identified neural circuit can be compared directly to the original stimulus.
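As an illustrative caricature of such a nonlinear transformation (not the papers' Volterra/RKHS formulation; the kernel names and the scalar cross term are made up), a second-order processor aggregating two inputs might combine linear filtering with a multiplicative cross term:

```python
import numpy as np

def volterra_dsp(u1, u2, h1, h2, w12):
    """Toy second-order Volterra dendritic stimulus processor: two
    first-order (linear) kernels plus a multiplicative cross term.
    w12 is a scalar weight standing in for a full second-order kernel."""
    v = np.convolve(u1, h1, mode="same") + np.convolve(u2, h2, mode="same")
    v += w12 * u1 * u2            # simplest second-order cross term
    return v

t = np.linspace(0, 1, 500)
u1 = np.sin(2 * np.pi * 3 * t)    # two toy input signals
u2 = np.cos(2 * np.pi * 5 * t)
h = np.ones(5) / 5.0              # short smoothing kernels
v = volterra_dsp(u1, u2, h, h, w12=0.5)
```

The cross term makes the map nonlinear in the pair of inputs (scaling both inputs by 2 does not scale the output by 2), which is precisely what the identification machinery for DSPs must handle beyond linear receptive fields.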

  1. Aurel A. Lazar and Yevgeniy B. Slutskiy, Spiking Neural Circuits with Dendritic Stimulus Processors: Encoding, Decoding, and Identification in Reproducing Kernel Hilbert Spaces, Journal of Computational Neuroscience, Volume 38, Number 1, pp. 1-24, February 2015.
  2. Aurel A. Lazar and Yiyin Zhou, Volterra Dendritic Stimulus Processors and Biophysical Spike Generators with Intrinsic Noise Sources, Frontiers in Computational Neuroscience, Volume 8, Number 95, September 2014.
  3. Aurel A. Lazar, Nikul H. Ukani, and Yiyin Zhou, Sparse Functional Identification of Complex Cells from Spike Times and the Decoding of Visual Stimuli, The Journal of Mathematical Neuroscience, Volume 8, Number 2, January 2018.

Examples of the reconstruction of natural visual stimuli. Snapshots of the original videos encoded by a neural circuit with complex cells are shown in the top row; the reconstructions from the spike times are shown in the middle row, and the errors in the bottom row. Note that the color bar indicating the magnitude of the error was set to 10% of the input range. SNR of the recovered natural visual stimuli in each column: (A) 48.85 dB, (B) 46.92 dB, (C) 48.61 dB, (D) 50.76 dB, (E) 48.11 dB.

The Bionet Group is supported by grants from

