NeuroInformation Processing Machines
Time Encoding Machines (TEMs) are asynchronous signal processors that encode analog information in the time (spike) domain. TEMs play a key role in modeling the representation of the natural world by biological sensory systems and in building sensors for silicon-based information systems. There is also a substantial amount of interest in TEMs as front ends of brain-machine interfaces, i.e., as building blocks connecting biological and silicon-based information systems.
We pioneered the representation of stimuli in the spike domain by TEMs realized as single-input single-output, single-input multi-output, multi-input multi-output, and space-time encoding circuits. The encoding neural circuits are built upon classical spiking neuron models, including Integrate-and-Fire, Threshold-and-Fire, and Hodgkin-Huxley. Neural encoding circuits with feedback and with random thresholds have also been investigated. The encoding "silicon" circuits are built upon Asynchronous Sigma/Delta Modulators and FM modulators in cascade with zero-crossing detectors.
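As a minimal sketch of time encoding with the simplest of these neuron models, the snippet below implements an ideal Integrate-and-Fire encoder: the integrator accumulates (bias + u(t))/kappa and emits a spike each time it crosses the threshold delta. The function and parameter names are ours, chosen for illustration; they do not come from a published implementation.

```python
import numpy as np

def iaf_encode(u, dt, bias=1.0, kappa=1.0, delta=0.005):
    """Ideal integrate-and-fire time encoder (illustrative parameter names).

    Accumulates (bias + u(t)) / kappa and emits a spike time whenever the
    integrator crosses the threshold delta, resetting by subtraction.
    """
    spike_times, y = [], 0.0
    for k, uk in enumerate(u):
        y += dt * (bias + uk) / kappa
        if y >= delta:
            spike_times.append(k * dt)
            y -= delta
    return np.array(spike_times)

# Encode a bandlimited test stimulus
dt = 1e-4
t = np.arange(0, 0.2, dt)
u = 0.5 * np.sin(2 * np.pi * 30 * t)
tk = iaf_encode(u, dt)
```

By the t-transform, the integral of bias + u(t) over each interspike interval equals kappa * delta, so the stimulus information is carried entirely by the interspike intervals.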
Time Decoding Machines (TDMs) recover the encoded stimulus from the time (spike) sequence. We discovered that arbitrarily precise stimulus recovery can be achieved under Nyquist-type rate conditions. Furthermore, we demonstrated for the first time the faithful recovery of natural video (movies, animation) and auditory scenes (speech, sounds) encoded with neural circuits consisting of canonical spiking neuron models (including Hodgkin-Huxley). The derivation of these analytical results is based on frame theory, the theory of dynamical systems, statistics, and machine learning.
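To make such recovery concrete, the sketch below encodes a bandlimited signal with an ideal integrate-and-fire neuron and then reconstructs it from the spike times alone: each interspike interval yields one measurement of the stimulus via the t-transform, the estimate is expanded in sinc frames centred at interspike midpoints, and the frame coefficients are found with a pseudoinverse. This is a minimal one-neuron sketch of the general approach, with names and parameters of our own choosing, not the published algorithms.

```python
import numpy as np

def iaf_encode(u, dt, bias, kappa, delta):
    # ideal IAF encoder; crossing times linearly interpolated within a step
    spike_times, y = [], 0.0
    for k, uk in enumerate(u):
        inc = dt * (bias + uk) / kappa
        y += inc
        if y >= delta:
            spike_times.append((k + 1 - (y - delta) / inc) * dt)
            y -= delta
    return np.array(spike_times)

def iaf_decode(tk, t, bias, kappa, delta, omega):
    """Recover a bandlimited stimulus from spike times via the t-transform.

    The integral of u between consecutive spikes equals
    kappa*delta - bias*(t_{k+1} - t_k); the estimate is expanded in sinc
    kernels centred at interspike midpoints and fit by pseudoinverse.
    """
    dt = t[1] - t[0]
    q = kappa * delta - bias * np.diff(tk)           # measurements
    s = (tk[:-1] + tk[1:]) / 2                       # kernel centres
    g = lambda x: omega / np.pi * np.sinc(omega * x / np.pi)
    G = np.empty((len(q), len(s)))
    for j in range(len(q)):                          # quadrature of each row
        mask = (t >= tk[j]) & (t < tk[j + 1])
        G[j] = g(t[mask][:, None] - s[None, :]).sum(axis=0) * dt
    c = np.linalg.pinv(G, rcond=1e-6) @ q            # mildly regularised fit
    return g(t[:, None] - s[None, :]) @ c

dt = 1e-4
t = np.arange(0, 0.2, dt)
omega = 2 * np.pi * 30                               # assumed bandwidth
u = 0.5 * np.sin(2 * np.pi * 25 * t)                 # in-band test stimulus
tk = iaf_encode(u, dt, bias=1.0, kappa=1.0, delta=0.005)
u_hat = iaf_decode(tk, t, 1.0, 1.0, 0.005, omega)    # faithful away from edges
```

The spike rate here (about 200 Hz) comfortably exceeds the Nyquist-type rate omega/pi (60 Hz), which is what makes the recovery well-posed; reconstruction is accurate in the interior and degrades only near the window boundaries.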
We pioneered a novel class of algorithms for the functional identification of spiking neural circuits. Called Channel Identification Machines (CIMs), these algorithms identify receptive fields in circuit models that incorporate biophysical spike-generating mechanisms (e.g., the Hodgkin-Huxley neuron) and admit both continuous sensory signals and multidimensional spike trains as input stimuli. Our neural circuit models explicitly take into account the highly nonlinear nature of spike generation, which has been shown to result in significant interactions between various stimulus features and to fundamentally affect the estimation of receptive fields. Furthermore, and in contrast to many existing methods, our approach estimates receptive fields directly from spike times produced by a neuron, thereby obviating the need to repeat experiments in order to compute the neuron's instantaneous rate of response (e.g., the PSTH). The employed test signals belong to spaces of bandlimited functions and bridge the gap between identification using synthetic and naturalistic stimuli. This makes our methodology particularly attractive in those sensory modalities (most notably olfaction) where it is difficult to produce stimuli that are white and/or have a particular distribution or attributes.
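The core idea of identifying a receptive field directly from spike times can be illustrated with a toy model: a linear filter feeding an ideal integrate-and-fire neuron. Each interspike interval yields, via the t-transform, one linear measurement of the filter, and its coefficients are recovered by least squares. This is an illustrative sketch with invented names and parameters, using a simple IAF model rather than the biophysical models in the published CIM work.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 2.0
t = np.arange(0, T, dt)

# Bandlimited test stimulus: a sum of sinusoids with random phases
freqs = np.arange(2, 18, 2)
u = 0.25 * sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
               for f in freqs)

# Ground-truth receptive field: a combination of Gaussian bumps on [0, 0.1] s
tau = np.arange(0, 0.1, dt)
centres = np.linspace(0.01, 0.09, 9)
basis = np.exp(-(tau[:, None] - centres[None, :]) ** 2 / (2 * 0.01 ** 2))
a_true = rng.uniform(-1, 1, len(centres))
h_true = basis @ a_true

# Drive an ideal integrate-and-fire neuron with the filtered stimulus
bias, kappa, delta = 1.0, 1.0, 0.01
v = np.convolve(u, h_true)[:len(u)] * dt          # v = (h * u)(t)
spike_t, y = [], 0.0
for k in range(len(t)):
    inc = dt * (bias + v[k]) / kappa
    y += inc
    if y >= delta:
        spike_t.append((k + 1 - (y - delta) / inc) * dt)  # interpolated crossing
        y -= delta
spike_t = np.array(spike_t)

# t-transform: the integral of (h * u) over each interspike interval equals
# kappa*delta - bias*(interval length) -- a measurement that is linear in h
q = kappa * delta - bias * np.diff(spike_t)
grid = np.arange(len(t) + 1) * dt                 # integration breakpoints
Phi = np.empty((len(q), basis.shape[1]))
for m in range(basis.shape[1]):
    p = np.convolve(u, basis[:, m])[:len(u)] * dt # basis filtered by stimulus
    cum = np.concatenate(([0.0], np.cumsum(p) * dt))
    Phi[:, m] = np.diff(np.interp(spike_t, grid, cum))
a_est = np.linalg.lstsq(Phi, q, rcond=None)[0]
h_est = basis @ a_est                             # identified receptive field
```

Note that no trial averaging or PSTH is ever computed: a single spike train, produced by a single presentation of a bandlimited stimulus, suffices to pin down the filter coefficients.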
The brain must be capable of forming object representations that are invariant with respect to the large number of fluctuations occurring on the retina. These include object position, scale, pose and illumination, and the presence of clutter. What are some plausible computational or neural mechanisms by which invariance could be achieved in the spike domain? We pioneered the realization of identity-preserving transformations (IPTs) on visual stimuli in the spike domain. The stimuli are encoded with a population of spiking neurons; the resulting spikes are processed and finally decoded. A number of IPTs have been demonstrated, including faithful stimulus recovery as well as simple transformations of the original visual stimulus such as translation, rotation, and zooming.
Although images can be represented by their global phase alone, phase information has largely been ignored in the field of linear signal processing, and for good reason: phase-based information processing is intrinsically nonlinear. Recent research has shown, however, that phase information can be effectively employed in speech processing and visual processing. For example, the spatial phase of an image is indicative of local features such as edges when phase congruency is considered. We have devised a motion detection algorithm based on local phase information and constructed a fast, parallel algorithm for its real-time implementation. Our results suggest that local spatial phase information may provide an efficient alternative for performing many visual tasks in silico as well as in vivo, in biological vision systems.
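The principle can be shown on a toy 1-D example (this is an illustration of phase-based motion detection in general, not the algorithm described above; all names and parameter values are ours): the local phase of each frame is extracted by convolution with a complex Gabor filter, and the temporal change of that phase directly yields the displacement of a drifting grating.

```python
import numpy as np

def local_phase(signal, omega0=0.5, sigma=8.0):
    """Local phase of a 1-D signal via convolution with a complex Gabor."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega0 * x)
    return np.angle(np.convolve(signal, gabor, mode="same"))

n = np.arange(256)
omega0 = 0.5                        # spatial frequency of the grating (rad/px)
shift = 2.0                         # true displacement between frames (pixels)
frame0 = np.cos(omega0 * n)
frame1 = np.cos(omega0 * (n - shift))

# Temporal phase difference, wrapped to (-pi, pi]
dphi = np.angle(np.exp(1j * (local_phase(frame1) - local_phase(frame0))))
# For this grating, the local phase changes by -omega0 * shift, so:
shift_est = -np.median(dphi[32:-32]) / omega0
```

The median over the interior avoids the boundary effects of the finite filter; because only phase is used, the estimate is unchanged if the grating's contrast varies between frames, one reason phase-based measurements are attractive for motion detection.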