CNS*2012 Workshop on


Methods of System Identification for Studying Information Processing in Sensory Systems

Wednesday, July 25, 2012

Atlanta/Decatur, GA




Overview

    A functional characterization of an unknown system typically begins by making observations about the response of that system to input signals. The knowledge obtained from such observations can then be used to derive a quantitative model of the system in a process called system identification. The goal of system identification is to use a given input/output data set to derive a function that maps an arbitrary system input into an appropriate output.
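
    As a minimal illustration of this definition (the example is generic and not drawn from any of the talks below), a linear finite-impulse-response model can be identified from an input/output record by ordinary least squares; the signals, the noise level and the assumed memory length in the sketch are all hypothetical.

        # Minimal sketch: identify a linear FIR model y[n] = sum_k h[k] x[n-k] by least squares.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(2000)                  # measured input record
        h_true = np.array([0.5, 0.3, -0.2])            # "unknown" system, used only to simulate y
        y = np.convolve(x, h_true)[:x.size] + 0.01 * rng.standard_normal(x.size)

        K = 5                                          # assumed memory length of the model
        X = np.column_stack([np.roll(x, k) for k in range(K)])
        X[:K, :] = 0                                   # drop samples contaminated by wrap-around
        h_est, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(np.round(h_est, 3))                      # first three entries are close to h_true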

    In neurobiology, system identification has been applied to a variety of sensory systems, ranging from insects to vertebrates. Depending on the level of abstraction, the identified neural models vary from detailed mechanistic models to purely phenomenological models.

    The workshop will provide a state-of-the-art forum for discussing methods of system identification applied to the visual, auditory, olfactory and somatosensory systems in insects and vertebrates.

    The lack of a deeper understanding of how sensory systems encode stimulus information has hindered progress in understanding sensory signal processing in higher brain centers. Evaluations of various system identification methods and a comparative analysis across insects and vertebrates may reveal common neural encoding principles and future research directions.

    The workshop is targeted towards systems, computational and theoretical neuroscientists with interest in the representation and processing of stimuli in sensory systems in insects and vertebrates.

    References

    Marmarelis, V. Z. (2004). Nonlinear Dynamic Modeling of Physiological Systems. Wiley-IEEE Press, Hoboken, NJ.

    Wu, M., David, S., & Gallant, J. (2006). Complete Functional Characterization of Sensory Neurons by System Identification. Annual Review of Neuroscience, 29, 477–505.

    Ljung, L. (2010). Perspectives on System Identification. Annual Reviews in Control, 34, 1–12.


Program Committee

    Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

    Mikko I. Juusola, Department of Biomedical Science, University of Sheffield.





Program Overview

    Wednesday (9:00 AM - 5:50 PM), July 25, 2012


    Morning Session I (9:00 AM - 10:20 AM)
    Identification of Neural Circuits
    Chair: Daniel Coca




    9:00 AM - 9:40 AM

    System Identification in the Thalamocortical Circuit: From Sensory Stimulation to Electrical Microstimulation

    Garrett B. Stanley, Department of Biomedical Engineering, Georgia Institute of Technology and Emory University.

    Classical system identification techniques have been applied broadly in sensory pathways, providing our current framework for understanding early sensory processing. However, in the context of the natural environment, our sensory pathways process information in a highly nonlinear way in order to extract and represent salient features of the stimulus environment. In a range of in vivo studies in the somatosensory and visual pathways, our laboratory has constructed encoding models at the level of the thalamocortical circuit using a variety of approaches to capture these dynamics. Experiments designed to target these goals will be discussed, including multi-site electrophysiological recordings and voltage-sensitive dye imaging of somatosensory cortex in response to sensory stimulation and thalamic microstimulation. Given such predictive models of sensory processing, decoding approaches that take the perspective of an ideal observer of neural activity help us understand to what degree specific elements of the encoding convey features of the sensory environment; a range of these studies will also be discussed. Finally, the development of strategies for controlling neural systems requires models of the underlying processing. The use of model-based approaches for controlling the activation of neural pathways will be discussed, both in the context of sensory stimulation and artificial electrical microstimulation.
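
    The ideal-observer viewpoint mentioned above can be made concrete with a toy example (this is not the laboratory's actual analysis pipeline): given an encoding model that assigns a Poisson firing rate to each candidate stimulus, the decoder reports the stimulus that maximizes the likelihood of an observed spike count. The stimuli and rates below are hypothetical.

        # Toy ideal-observer decoder for a Poisson encoding model (hypothetical rates).
        import numpy as np

        rates = {"stim_A": 5.0, "stim_B": 12.0}        # fitted firing rates (spikes per bin)

        def decode(count):
            # Ideal observer: maximize the Poisson log likelihood k*log(mu) - mu (constants dropped).
            return max(rates, key=lambda s: count * np.log(rates[s]) - rates[s])

        rng = np.random.default_rng(1)
        true_stim = "stim_B"
        counts = rng.poisson(rates[true_stim], size=100)
        accuracy = np.mean([decode(c) == true_stim for c in counts])
        print(f"decoding accuracy for {true_stim}: {accuracy:.2f}")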




    9:40 AM - 10:20 AM

    Modeling of Feedback from Muscle Contractions onto a Rhythmic Central Pattern Generator in the Crab Heart

    Vladimir Brezina, Department of Neuroscience and Friedman Brain Institute, Mount Sinai School of Medicine.

    The blue crab, Callinectes sapidus, has a neurogenic heart that is controlled by the cardiac ganglion (CG), a simple central pattern generator, which is embedded within the heart muscle itself. We are studying this system both experimentally and mathematically. The CG and heart muscle form a closed-loop network. The autoactive CG generates bursts of spikes that elicit contractions of the muscle. The activity of the CG is, in turn, modified by feedback from these contractions, as well as by its own previous history. To predictively model the activity of the CG, we wish to characterize from experimental data both the peripheral feedback and the central history dependence. The two processes are not fully separable, however, because the feedback is only revealed by its modification of the ongoing CG activity. To characterize both processes, we use a kernel-based system identification method that we have developed (Stern et al., Journal of Neuroscience Methods 184: 337-356, 2009). In each experiment, we first record the activity of the CG alone and then with feedback induced by white-noise waveforms of mechanical stretches of the heart muscle. Alternatively, we apply the white-noise stretch waveforms before and after cutting the dendrites of the CG neurons that carry the stretch information from the muscle back to the CG. Basic statistical analysis of the data shows that, when the white-noise muscle stretches are applied with intact dendrites, the ongoing CG spike bursts become significantly more variable. We use our kernel-based method to characterize first the central history dependence from the CG activity alone, when the stretches are not applied or the dendrites are cut, and then, given the central history dependence, the peripheral feedback from the modification of the CG activity by the white-noise muscle stretches with intact dendrites. Significant difficulties arise from the fact that in all of these computations we do not have access to any time-continuous measure of the CG activity, but only to the discrete spikes. Nevertheless, the method extracts three elementary functions that completely define the processes that generate the CG spikes: H, a kernel that describes the dependence on previous CG spikes; K, a kernel that describes the feedback from the muscle; and F, a static nonlinear function. Pursuing the broader aims of the project, we then incorporate these functions into a complete generative model of the crab cardiac system and study its performance and dynamical properties.

    Joint work with Estee Stern (Mount Sinai School of Medicine), Keyla García-Crescioni and Mark W. Miller (University of Puerto Rico), and Charles S. Peskin (New York University).
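
    To make the role of the three elementary functions concrete, here is a minimal generative sketch in the spirit of the abstract above: the probability of a CG spike in each time bin is a static nonlinearity F applied to a history term (kernel H summed over previous spikes) plus a feedback term (kernel K convolved with the muscle-stretch waveform). The kernel shapes, the nonlinearity and all parameter values are hypothetical placeholders, not fitted values from the study.

        # Sketch of a kernel-based generative spike model (illustrative kernels only).
        import numpy as np

        dt = 0.001                                                    # 1 ms bins
        tau = np.arange(0, 0.5, dt)
        H = -8.0 * np.exp(-tau / 0.05)                                # post-spike history kernel (suppressive)
        K = 4.0 * np.exp(-tau / 0.1) * np.sin(2 * np.pi * tau / 0.2)  # stretch feedback kernel
        F = lambda u: 1.0 / (1.0 + np.exp(-u))                        # static nonlinearity

        rng = np.random.default_rng(2)
        stretch = rng.standard_normal(5000)                           # white-noise stretch waveform
        feedback = np.convolve(stretch, K)[:stretch.size] * dt
        history = np.zeros(stretch.size)
        spikes = np.zeros(stretch.size)
        for n in range(stretch.size):
            p = F(1.0 + history[n] + feedback[n]) * 50 * dt           # spike probability in this bin
            if rng.random() < p:
                spikes[n] = 1
                m = min(H.size, stretch.size - n)
                history[n:n + m] += H[:m]                             # every spike adds the history kernel
        print("generated spikes:", int(spikes.sum()))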




    10:20 AM - 10:50 AM

    Morning Break


    Morning Session II (10:50 AM - 12:10 PM)
    Identification in Sensory Systems I
    Chair: Vladimir Brezina




    10:50 AM - 11:30 AM

    Casting Light on the Interplay between Perception and Decision Making in Drosophila Chemotaxis

    Matthieu Louis, Centre for Genomic Regulation, Barcelona.

    Active sensing couples motor responses to sensory processing in feedback loops that can be challenging to investigate experimentally. We combined high-resolution behavioral analysis, electrophysiology, modeling, and optogenetics to dissect the sensorimotor integration underlying chemotaxis in Drosophila melanogaster larvae.

    When exposed to a static odor gradient, larvae orient through a series of runs punctuated by turns. The timing and directionality of turning events proceed from active sampling. Prior to turning, the local odor gradient is resolved through side-to-side head movements (casts). Larvae genetically engineered to retain function in a single olfactory sensory neuron (OSN) demonstrate the same basic orientation strategy as wild type. We reconstructed the sensory dynamics that the larva experiences at key decision points during unconstrained chemotaxis. Using electrophysiology, we recorded the responses of the single functional OSN to a replay of the stimulus time course. Two types of signals were characterized in detail: rapid fluctuations in concentration associated with sub-second head casts, and slower odor ramps detected during runs lasting several seconds. These neural responses constitute the mechanistic basis for a model of the spatiotemporal integration of dynamic olfactory inputs and their conversion into orientation responses.

    Our sensorimotor model was tested in virtual reality experiments. Using light sequences to trigger controlled activity patterns in a single functional OSN expressing channelrhodopsin, we can induce predictable changes in behavior in response to well-defined sensory inputs delivered during specific behavioral states. We exploited this closed-loop paradigm to establish a relationship between the neural integration of sensory evidence and the probability of implementing stereotyped motor responses. Overall, our work clarifies how a simple brain uses active sensing and decision making to direct behavior.
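
    A purely schematic reading of the sensorimotor integration described above (this is not the authors' fitted model): the odor concentration experienced along a run is filtered by an OSN response kernel, and the probability of triggering a turn grows when the filtered signal indicates that the concentration is falling. The kernel, the threshold and the odor time course below are invented for illustration.

        # Schematic run-and-turn decision rule driven by a filtered odor signal.
        import numpy as np

        dt = 0.05                                                 # 50 ms steps
        t = np.arange(0, 60, dt)
        odor = 1.0 + 0.5 * np.sin(2 * np.pi * t / 20)             # odor concentration along a run (made up)
        k_t = np.arange(0, 2, dt)
        osn_kernel = np.exp(-k_t / 0.3)                           # low-pass OSN response kernel (made up)
        osn = np.convolve(odor, osn_kernel)[:t.size] * dt         # filtered sensory signal

        rng = np.random.default_rng(6)
        evidence = -np.gradient(osn, dt)                          # evidence that the odor is decreasing
        p_turn = 1.0 / (1.0 + np.exp(-(evidence - 0.02) / 0.01))  # turn probability (made-up threshold/gain)
        turn_mask = rng.random(t.size) < p_turn * dt
        print(f"{int(turn_mask.sum())} turns; mean odor slope at turn times:",
              round(float(np.gradient(odor, dt)[turn_mask].mean()), 3))  # negative: turns occur while odor falls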




    11:30 AM - 12:10 PM

    Nonlinear Mechanisms for Phase Congruency Detection in Fly Photoreceptors

    Daniel Coca, Department of Automatic Control and Systems Engineering, University of Sheffield.

    Sensory systems are tuned to detect and enhance specific signal features that are important for survival. In the visual system in particular, the detection of object boundaries or edges is crucial for object segregation, categorization and recognition. As edges correspond to points of maximal local phase alignment of the constituent Fourier components, one expects that sensory neurons in the visual pathway selectively enhance the salience of these signal features. However, as predicted by Barlow (1961), such preliminary signal transformations are unlikely to be revealed by ordinary physiological investigation of the early visual system alone, without a model that can accurately predict the responses of individual neurons to different types of stimuli.

    This talk will show how, by combining electrophysiological experiments, nonlinear system identification and control theory, we were able to develop the most advanced and accurate functional nonlinear photoreceptor model to date (not directly mapped onto known biophysics or anatomy), which incorporates a dynamic gain control mechanism that is essential for predicting the responses to complex, naturalistic stimuli over a wide range of light intensities. By exploiting the recently developed concept of Nonlinear Output Frequency Response Functions (NOFRFs), the photoreceptor responses were decomposed into linear, second- and higher-order responses, and their relative contributions to the overall photoreceptor response were characterized in both the time and frequency domains. The analysis demonstrates for the first time that photoreceptors are tuned to detect points of maximum local phase congruency and shows how photoreceptors encode phase alignment through second-order nonlinear (three-wave) interactions. While this type of processing is often attributed to lateral connectivity in the lamina, analysis of isolated mutant photoreceptors reveals that the enhancement of signal features that give rise to points of high phase congruency is entirely the result of temporal processing and is independent of neighbouring neurons. This analysis also explains why photoreceptor responses to naturalistic stimuli are nonlinear whereas Gaussian white noise stimuli linearize the response.
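
    To illustrate the notion of phase congruency invoked above (independently of the NOFRF analysis itself), the toy computation below measures, at each point of a one-dimensional signal, how well the phases of its Fourier components align; at a step edge the alignment is nearly perfect, while on the flat regions it is much weaker. This is a simplified congruency measure, not the exact formulation used in the talk.

        # Toy phase-congruency measure for a 1-D signal containing a step edge.
        import numpy as np

        N = 256
        x = np.zeros(N)
        x[N // 2:] = 1.0                                   # step edge at sample N//2
        X = np.fft.rfft(x)
        n = np.arange(N)
        # Local phase alignment of the Fourier components at every position:
        #   congruency(n) = |sum_k X_k e^{i 2 pi k n / N}| / sum_k |X_k|
        comps = np.array([X[k] * np.exp(2j * np.pi * k * n / N) for k in range(1, X.size)])
        congruency = np.abs(comps.sum(axis=0)) / (np.abs(comps).sum(axis=0) + 1e-12)
        print("congruency at the edge   :", round(float(congruency[N // 2]), 2))
        print("congruency on the plateau:", round(float(congruency[N // 4]), 2))  # much lower than at the edge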




    12:10 PM - 2:00 PM

    Lunch


    Afternoon Session I (2:00 PM - 3:20 PM)
    Identification in Sensory Systems II
    Chair: Matthieu Louis




    2:00 PM - 2:40 PM

    Stochastic Adaptive Sampling of Visual Information

    Mikko I. Juusola, Department of Biomedical Science, University of Sheffield.

    In fly photoreceptors, light is focused onto a photosensitive waveguide, the rhabdomere, consisting of tens of thousands of microvilli. Each microvillus is capable of generating an elementary response, a quantum bump, to a single photon using a stochastically operating phototransduction cascade. Whereas much is known about the cascade reactions, less is known about how the concerted action of the microvilli population encodes light changes into neural information, and how the ultrastructure and biochemical machinery of photoreceptors of flies and other insects evolved in relation to the information sampling and processing they perform.

    We generated biophysically realistic fly photoreceptor models, which accurately simulate the encoding of visual information. By comparing stochastic simulations with single-cell recordings from Drosophila photoreceptors, we show how adaptive sampling by 30,000 microvilli captures the temporal structure of natural contrast changes. Following each bump, individual microvilli are rendered briefly (∼100–200 ms) refractory, thereby reducing quantum efficiency with increasing intensity. The refractory period opposes saturation, dynamically and stochastically adjusting the availability of microvilli (bump production rate: sample rate), whereas intracellular calcium and voltage adapt bump amplitude and waveform (sample size). These adaptive sampling principles result in robust encoding of natural light changes, which both approximates perceptual contrast constancy and enhances input novelty, such as object edges, under different light conditions, and they predict information processing across a range of species with different visual ecologies.
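
    The refractoriness-based gain control described above can be caricatured in a few lines of stochastic simulation: photons are distributed at random over a fixed pool of microvilli, and a microvillus that has just produced a bump ignores further photons for a random refractory period of 100-200 ms. Apart from the pool size, the parameter values are illustrative rather than fitted.

        # Caricature of stochastic adaptive sampling by a pool of refractory microvilli.
        import numpy as np

        rng = np.random.default_rng(3)
        n_microvilli = 30000
        dt = 0.001                                                # 1 ms time step

        def quantum_efficiency(photon_rate, duration=2.0):
            """Fraction of incident photons that produce bumps at a given photon rate (photons/s)."""
            refractory_until = np.zeros(n_microvilli)             # time when each microvillus recovers
            bumps = photons = 0
            t = 0.0
            while t < duration:
                n_ph = rng.poisson(photon_rate * dt)              # photons arriving in this bin
                hits = rng.integers(0, n_microvilli, size=n_ph)   # each photon lands on a random microvillus
                available = refractory_until[hits] <= t
                bumps += int(available.sum())
                photons += n_ph
                refractory_until[hits[available]] = t + rng.uniform(0.1, 0.2, size=int(available.sum()))
                t += dt
            return bumps / max(photons, 1)

        for rate in (1e4, 1e5, 1e6):                              # dim to bright illumination
            print(f"photon rate {rate:.0e}/s -> quantum efficiency {quantum_efficiency(rate):.2f}")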

    These results clarify why fly photoreceptors are structured the way they are and function as they do, linking sensory information to sensory evolution and revealing benefits of stochasticity for neural information processing. Lastly, we consider the role of stochastic adaptive sampling for routing and processing color and motion information in retinal and brain networks.




    2:40 PM - 3:20 PM

    Channel Identification Machines

    Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

    We present a formal methodology for identifying a channel in a system consisting of a communication channel in cascade with an asynchronous sampler. The channel is modeled as a multidimensional filter, while models of asynchronous samplers are taken from neuroscience and communications and include integrate-and-fire neurons, asynchronous sigma/delta modulators and general oscillators in cascade with zero-crossing detectors. We devise channel identification algorithms that recover, loss-free, a projection of the filter(s) onto a space of input signals, for both scalar and vector-valued test signals. The test signals are modeled as elements of a Reproducing Kernel Hilbert Space (RKHS) with a Dirichlet kernel. Under appropriate limiting conditions on the bandwidth and the order of the test signal space, the filter projection converges to the impulse response of the filter. We show that our results hold for a wide class of RKHSs, including the space of finite-energy bandlimited signals. We also extend our channel identification results to noisy circuits.

    References
    [1] Aurel A. Lazar and Yevgeniy B. Slutskiy, Identifying Dendritic Processing, in Advances in Neural Information Processing Systems 23 (NIPS*2010), J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel and A. Culotta (Eds.), pp. 1261–1269, 2010.
    [2] Aurel A. Lazar and Yevgeniy B. Slutskiy, Channel Identification Machines, Computational Intelligence and Neuroscience, in press (2012).
    [3] Bionet Group, Channel Identification Machines Toolbox.
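
    As background for the cascade described in the abstract above, the sketch below simulates only the forward model: a bandlimited test signal is passed through an unknown filter and then encoded by an ideal integrate-and-fire neuron, whose spike times yield the inter-spike-interval measurements (the t-transform) from which the identification algorithms of [1, 2] recover the filter projection. The recovery step, which uses the RKHS machinery, is omitted here, and all parameter values are illustrative.

        # Forward model only: a filter in cascade with an ideal integrate-and-fire (IAF) sampler.
        import numpy as np

        dt = 1e-5
        t = np.arange(0, 0.5, dt)
        rng = np.random.default_rng(4)
        # Bandlimited test signal: a sum of a few low-frequency sinusoids.
        u = sum(rng.uniform(-1, 1) * np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                for f in (5, 11, 23))
        h = (t / 0.02) * np.exp(-t / 0.02)                     # "unknown" channel filter (illustrative)
        v = np.convolve(u, h)[:t.size] * dt                    # filter output driving the sampler

        b, kappa, delta = 2.0, 1.0, 0.005                      # IAF bias, integration constant, threshold
        integral, spike_times = 0.0, []
        for n in range(t.size):
            integral += (v[n] + b) * dt / kappa
            if integral >= delta:
                spike_times.append(t[n])
                integral -= delta                              # ideal IAF reset
        tk = np.array(spike_times)
        # t-transform: each inter-spike interval measures the integral of v over that interval,
        #   q_k = kappa * delta - b * (t_{k+1} - t_k).
        q = kappa * delta - b * np.diff(tk)
        print(f"{tk.size} spikes; first few t-transform measurements:", np.round(q[:5], 6))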




    3:20 PM - 3:50 PM

    Afternoon Break


    Afternoon Session II (3:50 PM - 5:50 PM)
    Identification of Neural Computation
    Chair: Garrett B. Stanley




    3:50 PM - 4:30 PM

    Identifying the Neural Computations and the Biophysics Underlying Collision Avoidance Behaviors

    Fabrizio Gabbiani, Department of Neuroscience, Baylor College of Medicine.

    General system identification techniques have often proven ineffective at unravelling the neural computations and biophysics of higher-order sensory neurons. In this talk, I will illustrate how more specialized, ad-hoc techniques have helped us make progress in understanding how identified neurons in the visual system of insects process the information necessary for the motor system to implement collision avoidance behaviors. In particular, I will provide evidence suggesting that single neurons are able to carry out a logarithmic-exponential transform within their dendritic tree, thereby implementing a multiplication operation. The significance of this multiplication operation in the context of collision avoidance behaviors will be discussed, illustrating in one example how neural computation and behavior can be tightly integrated to understand how the brain processes information.
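
    The multiplication mentioned above can be summarized in one identity: if two pathways contribute logarithmically compressed signals that are summed in the dendrite, an expansive (exponential-like) output nonlinearity recovers their product, since exp(log a + log b) = a*b. A minimal numeric check with arbitrary input values:

        # exp(log a + log b) = a * b: multiplication via a log-exp transform.
        import numpy as np

        a, b = 4.0, 2.5                          # two positive input signals (arbitrary values)
        dendritic_sum = np.log(a) + np.log(b)    # logarithmic compression, then summation
        print(np.exp(dendritic_sum), a * b)      # both print 10.0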




    4:30 PM - 5:10 PM

    Correlation-Distortion based Analysis and Control of Neural Spike Trains

    Shy Shoham, Department of Biomedical Engineering, Technion.

    Accumulating evidence indicates that information representation and processing at both the network and single-neuron levels are highly dependent on the pair-wise correlation structure of spike trains. Quantitative descriptions and models of these correlations are needed in order to further clarify their role and test various related hypotheses. I will present the “correlation distortion” framework for systematically controlling or analyzing neural spike trains, which is based on analyzing the transformation of signal correlations as they propagate in feed-forward (LNP) and feedback (LN-Hawkes) multivariate neural cascade models. The framework is shown to enable both the generation of synthetic spike trains with a predefined correlation structure and correlation-based parametric Granger causality analysis of information flow in a neural network. I will show how these results can be applied in the context of new tools developed for interfacing with large populations of neurons, and as a bridge between ‘classical’ correlation and spectral signal processing tools and the field of neural spike train analysis. Finally, I will discuss the application of correlation distortions and other approaches for “blind” identification of neuronal functional properties.
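
    A minimal illustration of the correlation-distortion idea (a generic feed-forward LNP sketch, not the specific machinery of the talk): two latent Gaussian signals with a prescribed correlation are passed through a static nonlinearity and Poisson spike generation, and the resulting spike-count correlation is compared with the latent one. Tabulating or inverting this input/output correlation mapping is what allows spike trains with a predefined correlation structure to be synthesized. All parameters below are illustrative.

        # Correlation distortion through a feed-forward LNP cascade (illustrative parameters).
        import numpy as np

        rng = np.random.default_rng(5)
        rho_latent = 0.6                                       # prescribed latent correlation
        n = 200000
        z1 = rng.standard_normal(n)
        z2 = rho_latent * z1 + np.sqrt(1 - rho_latent**2) * rng.standard_normal(n)

        rate1, rate2 = np.exp(0.5 * z1), np.exp(0.5 * z2)      # exponential static nonlinearity (L-N)
        s1, s2 = rng.poisson(rate1), rng.poisson(rate2)        # Poisson spike counts (P)

        rho_spikes = np.corrcoef(s1, s2)[0, 1]
        print(f"latent correlation {rho_latent:.2f} -> spike-count correlation {rho_spikes:.2f}")
        # The output correlation is systematically smaller; inverting this mapping lets one pick a
        # latent correlation that produces a desired spike-train correlation.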




    5:10 PM - 5:50 PM

    Toward a Cognitive-Hippocampal Neural Prosthesis: Implantable Biomimetic Microelectronics to Restore Lost Memory

    Theodore W. Berger, Department of Biomedical Engineering, University of Southern California.

    Dr. Berger leads a multi-disciplinary collaboration with Dr. Sam Deadwyler (Wake Forest Univ.), Dr. John Granacki (USC), Dr. Vasilis Marmarelis (USC), and Dr. Greg Gerhardt (Univ. of Kentucky) that is developing a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for long-term memory. Damage to the hippocampus is frequently associated with epilepsy, stroke, and dementia (Alzheimer's disease), and is considered to underlie the memory deficits characteristic of these neurological conditions.

    The essential goals of Dr. Berger’s multi-laboratory effort include: (1) experimental study of neuron and neural network function -- how does the hippocampus encode information?; (2) formulation of biologically realistic models of neural system dynamics -- can that encoding process be described mathematically to realize a predictive model of how the hippocampus responds to any event?; (3) microchip implementation of neural system models -- can the mathematical model be realized as a set of electronic circuits to achieve parallel processing, rapid computational speed, and miniaturization?; and (4) creation of hybrid neuron-silicon interfaces -- can structural and functional connections between electronic devices and neural tissue be achieved for long-term, bi-directional communication with the brain? By integrating solutions to these component problems, the team is realizing a microchip-based model of hippocampal nonlinear dynamics that can perform the same function as part of the hippocampus. Through bi-directional communication with other neural tissue that normally provides the inputs and outputs to/from a damaged hippocampal area, the biomimetic model can serve as a neural prosthesis.

    A proof-of-concept will be presented using rats that have been chronically implanted with stimulation/recording micro-electrodes throughout the dorso-ventral extent of the hippocampus and that have been trained on a delayed non-match-to-sample task. Normal hippocampal functioning is required for successful delayed non-match-to-sample memory. Memory-behavioral function of the hippocampus is blocked pharmacologically, and then, in the presence of the blockade, hippocampal memory/behavioral function is restored by a multi-input, multi-output model of hippocampal nonlinear dynamics that interacts bi-directionally with the hippocampus. The model is used to predict the output of the hippocampus in the form of spatio-temporal patterns of neural activity in the CA1 region; electrical stimulation of CA1 cells is then used to “drive” the output of the hippocampus to the desired (predicted) state. Using the same procedures in implanted animals with an intact, normally functioning hippocampus substantially enhances memory strength and thus improves learned behavior. These results show for the first time that it is possible to create “hybrid microelectronic-biological” systems that display normal physiological properties and thus may be used as neural prostheses to restore damaged brain regions – even those regions that underlie cognitive function.
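
    As a toy stand-in for the multi-input, multi-output predictive model described above (the actual work fits nonlinear dynamical models to recorded hippocampal data), one can picture predicting each CA1 unit's firing probability from the recent history of a CA3 input population and then treating the predicted pattern as the stimulation target. Every quantity below, including the "fitted" weights, is an illustrative placeholder.

        # Toy MIMO predictor: CA1 firing probabilities from recent CA3 population history.
        import numpy as np

        rng = np.random.default_rng(7)
        n_in, n_out, lags, T = 16, 8, 10, 5000
        ca3 = rng.binomial(1, 0.05, size=(T, n_in))               # simulated CA3 spike raster
        W = 0.5 * rng.standard_normal((n_out, n_in, lags))        # placeholder MIMO kernels

        def predict_ca1(raster, t):
            """Predicted CA1 firing probabilities at bin t from the last `lags` bins of CA3 input."""
            window = raster[t - lags:t][::-1]                     # most recent bin first, shape (lags, n_in)
            drive = np.einsum('oil,li->o', W, window)             # one weighted sum per output unit
            return 1.0 / (1.0 + np.exp(-(drive - 1.0)))           # logistic output nonlinearity

        target = predict_ca1(ca3, T - 1)
        print("predicted CA1 pattern (stimulation target):", np.round(target, 2))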