CNS*2014 Workshop on
Methods of System Identification for Studying Information Processing in Sensory Systems
Wednesday, July 30, 2014
Québec City, Canada
A functional characterization of an unknown system typically begins by making observations about the response of that system to input signals. The knowledge obtained from such observations can then be used to derive a quantitative model of the system in a process called system identification. The goal of system identification is to use a given input/output data set to derive a function that maps an arbitrary system input into an appropriate output.
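As a toy illustration of this input/output fitting step, the sketch below identifies a hypothetical linear (FIR) system by least squares; the "unknown" system, its kernel, and all parameter values are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "unknown" system: a 5-tap FIR filter plus measurement noise.
true_kernel = np.array([0.5, 1.0, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)                      # input signal
y = np.convolve(x, true_kernel)[:len(x)]           # system output
y += 0.01 * rng.standard_normal(len(y))            # measurement noise

# Identification: regress the output on lagged copies of the input.
lags = 5
X = np.column_stack([np.concatenate([np.zeros(k), x[:len(x) - k]])
                     for k in range(lags)])
kernel_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Given enough input/output data, `kernel_hat` converges to the true kernel, and the fitted model then maps arbitrary inputs to predicted outputs.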
In neurobiology, system identification has been applied to a variety of sensory systems, ranging from insects to vertebrates. Depending on the level of abstraction, the identified neural models vary from detailed mechanistic models to purely phenomenological models.
The workshop will provide a state of the art forum for discussing methods of system identification applied to the visual, auditory, olfactory and somatosensory systems in insects and vertebrates.
The lack of a deeper understanding of how sensory systems encode stimulus information has hindered progress in understanding sensory signal processing in higher brain centers. Evaluations of various system identification methods and a comparative analysis across insects and vertebrates may reveal common neural encoding principles and future research directions.
The workshop is targeted towards systems, computational and theoretical neuroscientists with interest in the representation and processing of stimuli in sensory systems in insects and vertebrates.
- Marmarelis, V. Z. (2004). Nonlinear Dynamic Modeling of Physiological Systems. Wiley-IEEE Press, Hoboken, NJ.
- Wu, M., David, S., & Gallant, J. (2006). Complete Functional Characterization of Sensory Neurons by System Identification. Annual Review of Neuroscience, 29, 477–505.
- Ljung, L. (2010). Perspectives on System Identification. Annual Reviews in Control, 34, 1–12.
- Aurel A. Lazar, Department of Electrical Engineering, Columbia University.
- Mikko I. Juusola, Department of Biomedical Science, University of Sheffield.
Wednesday, 9:00 AM - 5:50 PM, July 30, 2014
Morning Session I (9:00 AM - 10:20 AM)
Chair: Glenn C. Turner
9:00 AM - 9:40 AM
Cellular Implementation of the Drosophila Elementary Motion Detector
Claude Desplan, Department of Biology, NYU.
The algorithms and neural circuits that process motion have been the focus of intense research. An influential model, the Reichardt correlator, relies on differential temporal filtering of two different photoreceptor inputs, delaying one input signal with respect to the other. Motion in a particular direction causes these delayed and non-delayed signals to arrive simultaneously at a subsequent processing step in the brain, where they are multiplied to produce a direction-selective response. Recent work has identified two parallel pathways that selectively respond to either moving light or dark edges. Using in vivo patch-clamp recordings, we have shown that four medulla neurons implement the processing steps required for motion vision of light and dark edges: the neurons Mi1 and Tm3 respond selectively to brightness increments, with the response of Mi1 delayed relative to Tm3; conversely, Tm1 and Tm2 respond selectively to brightness decrements, with the response of Tm1 delayed compared to Tm2. Modeling these steps in collaboration with Tom Clandinin's lab at Stanford indicates that they are consistent with the optimal speed at which the fly sees the world. This shows that specific medulla neurons possess response properties that allow them to implement the long-sought algorithmic steps of the Reichardt correlator.
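The delay-and-multiply scheme described above can be written in a few lines; the sketch below is a minimal, discrete-time Hassenstein-Reichardt correlator with invented signals and a made-up delay, intended only to illustrate the algorithm, not the fly's actual filtering.

```python
import numpy as np

def reichardt(left, right, delay=2):
    """Minimal Reichardt correlator sketch: each arm delays one
    photoreceptor input, multiplies it with the undelayed neighbor,
    and the two mirror-symmetric arms are subtracted to yield a
    signed, direction-selective output."""
    d_left = np.concatenate([np.zeros(delay), left[:-delay]])
    d_right = np.concatenate([np.zeros(delay), right[:-delay]])
    return d_left * right - left * d_right

# A brightness edge that passes the left photoreceptor 2 samples
# before the right one (matching the correlator's internal delay).
t = np.arange(100)
left = (t > 40).astype(float)
right = (t > 42).astype(float)
```

When the stimulus delay matches the internal delay, the delayed left signal and the undelayed right signal coincide, so motion in the preferred direction sums to a positive output and the reversed stimulus to a negative one.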
Joint work with Rudy Behnia.
9:40 AM - 10:20 AM
Systems Identification of Visual Feature Detection in Drosophila
Mark A. Frye, Department of Integrative Biology and Physiology, UCLA.
Visual feature detection and figure-ground discrimination are prerequisites of feature-based visual attention. Classical models of motion detection explain how space-time correlations of mean luminance generated by rigid figures and ego-motion panoramas are sensed. Yet these models fail to explain how human subjects and flies readily attend to a figure defined only by a luminance envelope, with no difference in mean luminance between the moving figure and the stationary ground. This amounts to a modulation of the second moment of the luminance distribution, and for this reason this type of stimulus is referred to as second-order motion, whereas signals defined by mean luminance are termed first-order. Figures may also be composed of higher-order spatio-temporal disparities that cannot be defined simply by taking higher moments of the luminance distribution.
To address how the processing streams for first-order and higher-order visual motion are organized in flies, my lab adapted a systems identification approach to deconstruct feature perception. We use an electronic ‘virtual reality’ visual flight simulator to record visually elicited flight optomotor steering kinematics by the wings (the fly equivalent of optokinetics). Cross-correlation of time-varying wing amplitude signals with the trajectory of pseudo white-noise modulated motion steps (m-sequence) provides an estimate of the input-output filter describing the steering response to a visual impulse. We project a closed figure window in the form of a vertical bar upon a randomly textured background panorama. One m-sequence generates first-order elementary motion (EM) by displacing the random surface pattern painted upon the figure, and a second m-sequence generates higher-order motion signals by displacing the figure motion (FM) window itself.
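The kernel-estimation step described above can be illustrated in miniature: for a white, binary (±1) input sequence, cross-correlating the stimulus with the response recovers the first-order impulse response. The signals and kernel below are invented stand-ins, not the actual m-sequence or wing-beat data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pseudo-random binary stimulus (a stand-in for an m-sequence).
stim = rng.choice([-1.0, 1.0], size=4096)

# Hypothetical linear steering response: the stimulus filtered by an
# unknown impulse response, plus measurement noise.
true_kernel = np.exp(-np.arange(20) / 5.0) * np.sin(np.arange(20) / 2.0)
response = np.convolve(stim, true_kernel)[:len(stim)]
response += 0.1 * rng.standard_normal(len(response))

# First-order kernel estimate by cross-correlation: for a white +/-1
# input, E[stim(t) * response(t + k)] equals kernel[k].
est = np.array([np.dot(stim[:len(stim) - k], response[k:]) / (len(stim) - k)
                for k in range(20)])
```

Because the binary input is white, the cross-correlation at lag k isolates the kth kernel tap; with longer recordings the estimate converges to the true filter.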
We estimate EM and FM response kernels for these random 'jitter' stimuli centered at intervals across the entire visual azimuth. We assemble these 1D filter kernels into 2D spatio-temporal action fields (STAFs), named for their analogy to spatio-temporal receptive fields. Unambiguous differences in the topology of the two STAFs indicate distinct visual sub-systems for EM and FM features of visual attention. The STAFs robustly predict behavioral responses to arbitrary visual signals, and recapitulate classical experimental results. The structure of the STAFs motivates predictions about the underlying neurobiology, such as ground-motion invariant figure coding. We test these predictions using two-photon excitation microscopy of a genetically encoded calcium indicator expressed within select visual interneurons of the fourth optic ganglion of the fly visual system. We record from thousands of ROIs for each of four genetically identified projection cell classes, and subject each of the four cell classes to a battery of 8 stimuli built from the STAF results. Covariance analysis demonstrates that each of the four projection classes filters distinct features of first-order and higher-order motion streams, including (i) unfiltered directionally selective retinotopic elementary motion, (ii) selectivity for both first- and higher-order figure motion coupled with inhibition to ground motion, and (iii) selectivity for figure motion independent of ground motion.
Morning Break 10:20 AM - 10:50 AM
Morning Session II (10:50 AM - 12:30 PM)
Chair: Johannes Reisert
10:50 AM - 11:30 AM
Bursts Drive Maximal Visual Encoding
Mikko I. Juusola, Department of Biomedical Science, University of Sheffield.
Sensory neurons integrate information about the world, adapting their sampling to its changes. However, little is understood mechanistically about how this primary encoding process, which ultimately limits perception, depends upon stimulus statistics. Here, we analyze this open question systematically by using intracellular recordings from fly (Drosophila melanogaster and Coenosia attenuata) photoreceptors and corresponding stochastic simulations from biophysically realistic photoreceptor models. Recordings show that photoreceptors can sample more information from naturalistic light intensity time series (NS) than from Gaussian white-noise (GWN), shuffled-NS or Gaussian-1/f stimuli, integrating larger responses with a higher signal-to-noise ratio and greater encoding efficiency in response to large, bursty contrast changes. Simulations reveal how a photoreceptor's information capture depends critically upon the stochastic refractoriness of its 30,000 sampling units (microvilli). In daylight, refractoriness sacrifices sensitivity to enhance intensity changes in neural image representations, with more and faster microvilli improving encoding. But for GWN and other stimuli, which lack the longer dark contrasts of real-world intensity changes that reduce microvilli refractoriness, these performance gains are submaximal and energetically costly. These results provide mechanistic reasons why information sampling is more efficient for natural/naturalistic stimulation, and novel insights into the operation, design, and evolution of signaling and code in sensory neurons.
Finally, I will show that in response to optimally-constructed light bursts the "slow" Drosophila R1-R6 photoreceptors can transmit 800-900 bits/s, which is ~4 times more information than for Gaussian white-noise (GWN) and twice that for typical naturalistic stimuli. I will explain the biophysical mechanisms and dynamics behind this improvement and what it means for spatiotemporal vision and active sensing in general.
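The core intuition of refractory sampling units can be captured in a toy simulation: each absorbed photon is assigned to a random microvillus and elicits a "bump" only if that unit is not refractory. All parameter values below (refractory period, photon rates, time step) are invented for illustration; this is not the biophysically realistic model used in the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_photons(photon_rate, n_microvilli=30_000, refractory=0.1,
                   dt=0.001, duration=1.0):
    """Toy model of photon sampling by stochastically refractory
    microvilli: a hit unit produces a bump, then stays silent for
    `refractory` seconds (illustrative parameters only)."""
    steps = int(duration / dt)
    busy_until = np.zeros(n_microvilli)   # time at which each unit recovers
    bumps = np.zeros(steps)
    for i in range(steps):
        t = i * dt
        n_photons = rng.poisson(photon_rate * dt)
        hits = rng.integers(0, n_microvilli, n_photons)
        ready = busy_until[hits] <= t
        bumps[i] = ready.sum()
        busy_until[hits[ready]] = t + refractory
    return bumps

# At dim rates nearly every photon is sampled; at bright, steady rates
# the refractory pool saturates and per-photon efficiency drops.
dim = sample_photons(1e4).sum()
bright = sample_photons(1e6).sum()
```

Even this crude sketch reproduces the qualitative point: under sustained bright input the bump rate saturates near the pool's recycling limit, which is why dark gaps in naturalistic stimuli, by letting microvilli recover, improve encoding.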
11:30 AM - 11:50 AM
Neural System Prediction and Identification Challenge
Arvind Kumar, Bernstein Center Freiburg, University of Freiburg.
Can we infer the function of a biological neural network (BNN) if we know the connectivity and activity of all its constituent neurons? This question is at the core of neuroscience and, accordingly, various methods have been developed to record the activity and connectivity of as many neurons as possible. Surprisingly, there is no theoretical or computational demonstration that neuronal activity, connectivity and other such descriptors of the system are indeed sufficient to infer the function of a BNN. Therefore, we introduce the Neural Systems Identification and Prediction Challenge (nuSPIC). We provide the connectivity and activity of all neurons and invite participants (1) to infer the functions implemented (hard-wired) in spiking neural networks (SNNs) by stimulating and recording the activity of neurons and, (2) to implement predefined mathematical/biological functions using SNNs. The nuSPICs can be accessed via a web-interface to the NEST simulator and the user is not required to know any specific programming language. Furthermore, the nuSPICs can be used as a teaching tool. Finally, nuSPICs use the crowd-sourcing model to address scientific issues. With this computational approach we aim to identify which functions can be inferred by systematic recordings of neuronal activity and connectivity. In addition, nuSPICs could help the design and application of new experimental paradigms based on the structure of the SNN and the presumed function which is to be discovered.
11:50 AM - 12:30 PM
Spiking Neural Circuits with Dendritic Stimulus Processors: Encoding, Decoding, and Identification
Aurel A. Lazar, Department of Electrical Engineering, Columbia University.
We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.
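A minimal sketch of the encoding stage described above is an ideal integrate-and-fire neuron acting as a time encoding machine: the membrane integrates a biased version of the stimulus and emits a spike each time a threshold is crossed, so each interspike interval measures the stimulus integral. The signal, bias, and threshold below are invented for illustration.

```python
import numpy as np

def iaf_encode(u, dt, bias=1.0, threshold=0.05):
    """Ideal integrate-and-fire time encoding (illustrative sketch):
    integrate (bias + u) until `threshold`, spike, and subtract the
    threshold, so spike times carry the integral of the stimulus."""
    spikes, v = [], 0.0
    for k, uk in enumerate(u):
        v += (bias + uk) * dt
        if v >= threshold:
            spikes.append(k * dt)
            v -= threshold
    return np.array(spikes)

t = np.arange(0, 1, 1e-4)
u = 0.5 * np.sin(2 * np.pi * 3 * t)   # a bandlimited test stimulus
spike_times = iaf_encode(u, dt=1e-4)
```

Spiking is faster where the stimulus is large and slower where it is small; under suitable conditions on the spike density relative to the stimulus bandwidth, the stimulus can be recovered from the spike times alone, which is the starting point for the decoding and identification results in the talk.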
Joint work with Yevgeniy B. Slutskiy.
Lunch 12:30 PM - 2:00 PM
Afternoon Session I (2:00 PM - 3:20 PM)
Chair: Mark A. Frye
2:00 PM - 2:40 PM
Regulation of Drosophila Motor Circuit Synapse Maturation by Miniature Neurotransmission
Brian D. McCabe, Department of Pathology and Cell Biology, Columbia University.
Miniature neurotransmission is the trans-synaptic process whereby single synaptic vesicles are spontaneously released from presynaptic neurons to induce miniature post-synaptic potentials. Since their discovery over sixty years ago, miniature events have been found at every chemical synapse studied. However, the in vivo necessity for these small-amplitude events has remained enigmatic, and they have often been dismissed as synaptic noise. We have found that miniature neurotransmission is required for the normal structural maturation of Drosophila motor circuit glutamatergic synapses, in a developmental role that is not shared by evoked neurotransmission. Conversely, we found that increasing miniature events is sufficient to induce synaptic terminal growth. We further determined that miniature neurotransmission acts locally at terminals to regulate synapse maturation via a Trio GEF and Rac1 GTPase molecular signaling pathway. Our results establish that miniature neurotransmission, a universal but often overlooked feature of synapses, seems to act as a parallel second layer of neuronal communication with a unique and essential role in promoting normal synapse development.
2:40 PM - 3:20 PM
Characterization of Neuronal Types "Functional Connectivity" in Mouse Early Visual Pathways Using Generalized Linear Models (GLMs)
Stefan Mihalas, Allen Institute for Brain Science.
Generalized linear models (GLMs) have been successfully used to describe response characteristics of neurons in early sensory pathways. Such models are desirable because they reduce the problem of parameter fitting to a convex optimization problem. The Linear-Nonlinear-Poisson (LNP) model is an example of a model that can be used to characterize the functional relationship between sensory stimuli and observed neural spike trains (Chichilnisky 2001, Simoncelli et al 2004). These models typically consist of i) a linear filtering of the time dependent stimulus, ii) passing the output of linear filtering through a non-linear function to generate the neuron's instantaneous firing rate, and iii) using the instantaneous rate to generate spikes according to an inhomogeneous Poisson process. When the non-linearity is a fixed invertible function, then the LNP model is a generalized linear model (GLM).
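The three LNP stages enumerated above can be written directly; the sketch below uses an invented temporal filter and an exponential nonlinearity (a standard choice that keeps the GLM likelihood well behaved), with all parameter values made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001   # 1 ms time bins

# (i) Linear stage: filter the stimulus with a temporal kernel.
stimulus = rng.standard_normal(10000)
kernel = np.exp(-np.arange(50) / 10.0)       # hypothetical filter
drive = np.convolve(stimulus, kernel)[:len(stimulus)]

# (ii) Nonlinear stage: map the filtered drive to an instantaneous
# firing rate (spikes/s) through a fixed invertible nonlinearity.
rate = 5.0 * np.exp(0.5 * drive)

# (iii) Poisson stage: draw spike counts per bin from an
# inhomogeneous Poisson process with that rate.
spike_counts = rng.poisson(rate * dt)
```

With a fixed invertible nonlinearity like the exponential here, fitting the filter to observed spike trains by maximum likelihood is exactly the convex GLM estimation problem the abstract refers to.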
Using data obtained from electrophysiological recordings in the LGN and V1 of awake and anesthetized mice at the Allen Institute for Brain Science, we develop a comprehensive characterization of LGN responses to visual stimuli using GLMs. We are able to include the effects of behavioral state-dependent modulation within this framework. GLMs also allow the inclusion of crosstalk filters between individual neurons; however, mapping these filters onto individual-neuron connectivity requires dense recordings. We therefore construct a classification of neurons in the LGN and, based on class responses, use the GLM technique to begin constructing a neuron-type "functional connectivity" between LGN and V1 in mice. This allows the construction of cascade models in which multiple linear-nonlinear layers are included.
Joint work with Ramakrishnan Iyer, Severine Durand, Michael Buice, Kenji Mizuseki, R. Clay Reid.
Afternoon Break 3:20 PM - 3:50 PM
Afternoon Session II (3:50 PM - 5:50 PM)
Chair: Claude Desplan
3:50 PM - 4:30 PM
Basal activity in olfactory receptor neurons: its origin and how it is controlled
Johannes Reisert, Monell Chemical Senses Center, Philadelphia, PA.
Olfactory receptor neurons (ORNs) transduce odorous information via a G protein-coupled transduction cascade, ultimately into action potentials that are conveyed to second-order mitral and tufted neurons in the olfactory bulb. cAMP plays a dual role in these processes: it is not only the second messenger that drives signal transduction, but also a signaling molecule that intricately coordinates the targeting of ORN axons to specific glomeruli in the olfactory bulb. Odorant selectivity and specificity are conveyed to ORNs by the expression of only one of ~1,000 odorant receptors (ORs).
In the absence of stimulation, individual mouse ORNs display a low basal action potential firing rate varying from 0 to 4 Hz. We addressed the origin of this basal firing activity and found that, interestingly, the basal firing rate depends on the OR a given ORN has chosen to express and is driven by the thermal activity of that OR in the absence of stimulation. But since excessively high basal activity would be undesirable, do components of the ORN transduction machinery mitigate high basal OR activity? We investigated the role of PDE1C, the enzyme that degrades cAMP to AMP. PDE1C is currently a protein without an assigned function, since it does not perform its expected role of terminating the response at the end of stimulation. We found that PDE1C limits the amount of current noise generated by highly active ORs, while having little effect on basally quiet ORs. Furthermore, as cAMP is also involved in correct axonal targeting, we investigated bulbar innervation patterns by ORN axons. Axons properly targeted glomeruli not only in wildtype mice but even in mice lacking PDE1C, as long as basal OR activity was low; but axons were greatly mis-targeted in the absence of PDE1C when ORs drove high basal activity in ORNs. Thus PDE1C's role might be to mitigate the effects of highly active ORs, ensuring proper odorant responses and the establishment of a correct glomerular map.
4:30 PM - 5:10 PM
Trade-Off between Feature Complexity and Invariance in Vision
Tatyana O. Sharpee, The Computational Neurobiology Laboratory, Salk Institute.
In this talk I will describe a set of computational tools that make it possible to simultaneously characterize the feature selectivity and the range of invariance of neural responses using natural stimuli. Applying these methods to the responses of neurons in area V4, a mid-level visual area that participates in object recognition, we found that neurons tuned to more curved contours had smaller ranges of position invariance and produced sparser responses to natural stimuli than neurons tuned to shallower contours. The findings obtained with natural stimuli were reproduced in a separate set of experiments involving synthetic stimuli in which curvature was controlled parametrically. The trade-off between invariance and selectivity observed within a given stage of visual processing points to parallel pathways where either different features are combined at one retinotopic position or similar features are combined across positions to achieve position invariance. These findings provide empirical support for recent theories of how the visual system estimates three-dimensional shapes from shading flows, as well as for the hypothesis that visual space is tiled by different curvature values, with increased pooling across spatial positions for shallower curvatures.
5:10 PM - 5:50 PM
Olfactory Signaling in Mushroom Body Output Neurons - Neural Coding as a Circuit Converges
Glenn C. Turner, Cold Spring Harbor Laboratory.
Most neural circuits start with relatively small numbers of neurons at the periphery, expand to large populations at higher levels, and ultimately converge before reaching motor output. I'll talk about our efforts to understand the converging side of the Drosophila olfactory circuit, where 2,000 neurons in the mushroom body converge down onto only 34 MB output neurons. We have characterized the olfactory response properties of this entire population. I will discuss our findings on population-level olfactory processing at this layer, and insights into how uniquely identifiable MB output neurons derive their odor tuning properties.