Video Time Encoding Machines
This package provides Python/PyCUDA code for encoding and decoding natural and synthetic visual scenes (videos) with Time Encoding Machines consisting of Gabor or center-surround receptive fields followed by integrate-and-fire neurons [1], [2]. The decoder supports both the pseudoinverse algorithm, described in [1], [2], and the recurrent neural network method, described in [3].
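To illustrate the integrate-and-fire stage of such a Time Encoding Machine, here is a minimal sketch of an ideal IAF neuron encoding a one-dimensional sampled signal into spike times. This is not the vtem API; the function name and parameters (`bias`, `kappa`, `threshold`) are illustrative assumptions.

```python
import numpy as np

def iaf_encode(signal, dt, bias, kappa, threshold):
    """Illustrative ideal integrate-and-fire encoder (not the vtem API).

    The membrane integrates (signal + bias) / kappa over time; whenever
    the integral reaches `threshold`, a spike time is recorded and the
    threshold amount is subtracted from the integrator.
    """
    spike_times = []
    integral = 0.0
    for k, u in enumerate(signal):
        integral += dt * (u + bias) / kappa
        if integral >= threshold:
            spike_times.append(k * dt)
            integral -= threshold
    return np.array(spike_times)

# Encode one second of a slow sinusoid sampled at 1 kHz.
t = np.arange(0, 1.0, 1e-3)
u = np.sin(2 * np.pi * 3 * t)
s = iaf_encode(u, dt=1e-3, bias=2.0, kappa=1.0, threshold=0.05)
```

The bias is chosen larger than the signal's amplitude so the integrator input stays positive, which keeps the neuron firing at a rate that tracks the signal; in the full model [1], [2], the signal fed to each neuron is the video filtered by a Gabor or center-surround receptive field.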
Documentation
The latest source code and documentation, written by Nikul H. Ukani and Yiyin Zhou, can be obtained from the vtem GitHub repository.
References
- [1] Aurel A. Lazar and Eftychios A. Pnevmatikakis, Video Time Encoding Machines, IEEE Transactions on Neural Networks, Volume 22, Number 3, pp. 461-473, March 2011
- [2] Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, Volume 50, Number 22, pp. 2200-2212, October 2010, Special Issue on Mathematical Models of Visual Coding
- [3] Aurel A. Lazar and Yiyin Zhou, Massively Parallel Neural Encoding and Decoding of Visual Stimuli, Neural Networks, Volume 32, pp. 303-312, August 2012, Special Issue: IJCNN 2011