CNS*2015 Workshop on

Methods of System Identification for Studying Information Processing in Sensory Systems

University of Economics, W. Churchill Sq. 4

Room RB 209, Wednesday, July 22, 2015

Prague, Czech Republic

A functional characterization of an unknown system typically begins by making observations about the response of that system to input signals. The knowledge obtained from such observations can then be used to derive a quantitative model of the system in a process called system identification. The goal of system identification is to use a given input/output data set to derive a function that maps an arbitrary system input into an appropriate output.
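As a minimal illustration of this idea (my own sketch, not drawn from any talk below): for a linear system, the input/output mapping can be identified by regressing the observed output on lagged inputs, which recovers the system's impulse response by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                      # input signal
true_h = np.array([0.5, -0.3, 0.2])               # unknown system: an FIR filter
y = np.convolve(x, true_h, mode="full")[:len(x)]  # observed output

# Design matrix of lagged inputs; solving the least-squares problem
# recovers the filter taps from the input/output data alone.
L = len(true_h)
X = np.column_stack([np.concatenate([np.zeros(k), x[:len(x)-k]]) for k in range(L)])
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Nonlinear systems require richer model classes (e.g. the Volterra and cascade models discussed in the afternoon sessions), but the fitting principle is the same: choose model parameters so that the predicted output matches the measured one.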

In neurobiology, system identification has been applied to a variety of sensory systems, ranging from insects to vertebrates. Depending on the level of abstraction, the identified neural models vary from detailed mechanistic models to purely phenomenological models.

The workshop will provide a state of the art forum for discussing methods of system identification applied to the visual, auditory, olfactory and somatosensory systems in insects and vertebrates.

The lack of a deeper understanding of how sensory systems encode stimulus information has hindered progress in understanding sensory signal processing in higher brain centers. Evaluations of various system identification methods and a comparative analysis across insects and vertebrates may reveal common neural encoding principles and future research directions.

The workshop is targeted towards systems, computational and theoretical neuroscientists with interest in the representation and processing of stimuli in sensory systems in insects and vertebrates.


Program Committee

  • Aurel A. Lazar, Department of Electrical Engineering, Columbia University.
  • Mikko I. Juusola, Department of Biomedical Science, University of Sheffield.

Program Overview

Wednesday, 9:00 AM - 5:50 PM, July 22, 2015

Morning Session I (9:00 AM - 10:20 AM)

Chair: Andrew D. Straw

9:00 AM - 9:40 AM

Motion as a Source of Environmental Information: A Fresh View on Biological Motion Computation and its Role for Solving Spatial Vision Tasks

Martin Egelhaaf, Department of Neurobiology & Center of Cognitive Interaction Technology (CITEC), University of Bielefeld.

Despite their miniature brains, insects such as flies and bees are able to perform highly aerobatic flight maneuvers and thereby solve spatial vision tasks, such as avoiding collisions with stationary obstacles, landing on objects, or localizing previously learned inconspicuous goals on the basis of environmental cues. To accomplish their extraordinary performance, insects rely on spatial information that is contained in the retinal motion patterns (“optic flow”) generated on the eyes during locomotion. The optic flow is shaped by the characteristic dynamics of behavioral actions as well as by the structure of the environment and, thus, has very peculiar spatiotemporal properties. Extraction of spatial information from the optic flow pattern is greatly facilitated by an active flight and gaze strategy of insects that segregates the rotational from the translational optic flow component. Whereas rotations are squeezed into brief saccade-like turns, the gaze direction is kept constant for more than 80% of flight time during the intersaccadic translation phases; information about the distance of the animal to objects in the environment is only contained in the optic flow induced during translational motion.

However, veridical optic flow information is not provided by the insect motion detection system. Rather, the responses of the motion detectors that are widespread in biological systems, especially in insects, reflect – in addition to the retinal velocity – the textural properties of the environment. This characteristic has often been regarded as a deficiency of biological motion detection mechanisms. In contrast to this view, we conclude from analyses challenging the insect motion detection system with image flow as generated during translational locomotion through cluttered natural environments that the neuronal activity profile in the visual motion pathway highlights the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates, in a computationally parsimonious way, the environment into behaviorally relevant nearby contours and – in many behavioral contexts – less relevant distant structures. Behavioral analyses and model simulations reveal that this type of motion-based environmental information is well suited for solving behavioral tasks such as collision avoidance and local navigation. Hence, by making use of the behaviorally generated dynamics of the retinal image flow, insects are capable of performing extraordinarily well with minimal computational effort.
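The texture dependence described above can be demonstrated with a correlation-type motion detector. The sketch below assumes a Hassenstein-Reichardt correlator (the abstract only says "motion detectors", so the specific model is my assumption): two photoreceptors view a drifting sine grating, one arm of each subunit is delayed by a low-pass filter, and the delayed and direct signals are multiplied and subtracted in opponency. At a fixed pattern velocity, the mean output still changes with pattern contrast, i.e. the detector is not a veridical velocity sensor.

```python
import numpy as np

def correlator_output(velocity, contrast, spatial_freq, dt=1e-3, T=2.0, tau=0.05, dx=0.01):
    """Mean response of an opponent correlator to a drifting sine grating."""
    t = np.arange(0, T, dt)
    # luminance seen by two neighbouring photoreceptors spaced dx apart
    s1 = contrast * np.sin(2*np.pi*spatial_freq*(0.0 - velocity*t))
    s2 = contrast * np.sin(2*np.pi*spatial_freq*(dx - velocity*t))
    # first-order low-pass filters act as the delay lines
    lp1 = np.zeros_like(s1)
    lp2 = np.zeros_like(s2)
    a = dt / tau
    for i in range(1, len(t)):
        lp1[i] = lp1[i-1] + a*(s1[i] - lp1[i-1])
        lp2[i] = lp2[i-1] + a*(s2[i] - lp2[i-1])
    # opponent multiplication of the delayed and direct arms
    return float(np.mean(lp1*s2 - lp2*s1))

# Same velocity, different contrast: the detector's "velocity" readout changes.
r_low = correlator_output(velocity=1.0, contrast=0.5, spatial_freq=2.0)
r_high = correlator_output(velocity=1.0, contrast=1.0, spatial_freq=2.0)
```

The output sign still reports motion direction reliably, which is consistent with the view that such detectors carry behaviorally useful, if non-veridical, information.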

9:40 AM - 10:20 AM

Integration of Walking Direction and Speed Sensitivity in Cell-Specific Motion-Sensitive Visual Neurons

Eugenia Chiappe, Laboratory of Sensorimotor Integration, Champalimaud Neuroscience Programme, Champalimaud, Lisbon.

Visually guided locomotion requires an accurate estimate of self-movement to guide oriented actions. Recent work has revealed that locomotion modulates the activity of visual interneurons thought to be involved in ego-motion processing [1-4]. However, the role these locomotor signals play during visually guided locomotion is still unclear. We have started to address this question by first investigating the relation between the canonical receptive fields of visual interneurons and their locomotor modulation.

We performed electrophysiological recordings from walking Drosophila melanogaster, targeting motion-sensitive neurons known to respond to the visual motion stimuli generated by a fly’s walking behavior (HS cells) [6]. Our data show at least two different vision-independent, walking-induced phenomena in HS cells. The first type of modulation is not specific to walking: it depolarizes HS cells preceding the onset of walking or grooming. The second type of modulation is specific to either the translational or rotational components of walking. Forward walking depolarizes HS cells at the onset of walking. Turning modulates the membrane potential in a direction-selective manner that is aligned with the cell’s front-to-back visual direction selectivity: turning to the left depolarizes right HS cells, whereas turning to the right hyperpolarizes them. This walking direction selectivity is opposite in sign in left-side HS cells and scales with walking speed. Interestingly, other motion-sensitive visual interneurons, whose receptive fields are such that they would not be activated by walking-induced visual stimuli, showed neither walking direction selectivity nor a preceding membrane potential depolarization in the dark.

Our results show that HS cells receive walking-related signals in a manner that is vision-independent, cell-type specific, and directionally selective. These motor-specific signals in visual feedback neurons may support rapid gaze stabilization during walking, which is necessary to extract spatial information about the animal’s own trajectory from large-field visual motion. Alternatively, the motor signals we record from HS cells may provide a sensory-calibrated signal to other, yet to be identified, postsynaptic circuits that may internally represent the position of the fly’s body in space [7].


[1] Chiappe, M.E., et al. (2010) Curr Biol 20: 1470-1475. [2] Maimon, G., et al. (2010) Nat Neurosci 13: 393-399. [3] Saleem, A.B., et al. (2013) Nat Neurosci 16: 1864-1869. [4] Niell, C. and Stryker, M.P. (2010) Neuron 65: 472-479. [5] Keller, G.B., et al. (2012) Neuron 74: 809-815. [6] Kern, R., et al. (2001) J Neurosci 21: 1-5. [7] Franklin, D.W. and Wolpert, D.M. (2011) Neuron 72: 425-442.

Joint work with Terufumi Fujiwara and Tomás Cruz.

Morning Break 10:20 AM - 10:50 AM

Morning Session II (10:50 AM - 12:10 PM)

Chair: Thomas Nowotny

10:50 AM - 11:30 AM

Honeybee Odor Processing: Neural Networks for Odor Identity and Evaluation in a World with many Odors and Fast Timescales

C. Giovanni Galizia, Department of Zoology and Neurobiology, University of Konstanz.

Much progress has been made recently in understanding how olfactory coding works in insect brains. In this talk, I will propose a wiring diagram for the major steps from peripheral processing all the way to behavioral readout. In insects, these steps involve the antennal lobe (the first processing network), the mushroom bodies (the most complex brain structure, crucial for learning) and the lateral protocerebrum (containing the premotor control areas). Processing steps include a sequence of: (1) lateral inhibition in the antennal lobe, (2) nonlinear synapses, (3) a threshold-regulating gated spring network, (4) selective lateral inhibitory networks across glomeruli, and (5) feed-forward inhibition to the lateral protocerebrum. We will present recent data about how honeybees segregate complex temporal information in chaotic and fast odor-plume sequences. We find that a disparity of a few milliseconds in the stimulus onsets of two overlapping odor components in a mixture gives the animal access to odor object information. We follow that information throughout the olfactory pathway, from the olfactory receptor neurons and the antennal lobe networks all the way to the mushroom bodies and lateral protocerebrum of the honeybee. We propose that the main difference between the mushroom bodies and the lateral protocerebrum is not about learned vs. innate behavior. Rather, the mushroom bodies perform odor identification, while the lateral protocerebrum performs odor evaluation (both learned and innate). Modulatory networks are proposed as switches between different evaluating systems in the lateral protocerebrum.

Joint work with P. Szyszka, G. Raiser and J.S. Stierle.
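The first of the processing steps listed above, lateral inhibition, can be sketched in a few lines (an illustrative toy, not the authors' model): each glomerulus is suppressed in proportion to the pooled activity of all the others, which sharpens the contrast of the glomerular activity pattern.

```python
import numpy as np

def lateral_inhibition(glomeruli, w=0.1):
    """Each glomerulus is inhibited by the summed activity of the others."""
    total = glomeruli.sum()
    inhibited = glomeruli - w*(total - glomeruli)   # inhibition from all others
    return np.maximum(inhibited, 0.0)               # firing rates cannot go negative

pattern = np.array([5.0, 1.0, 0.5, 4.0])            # raw glomerular activity
sharp = lateral_inhibition(pattern)                 # weak channels are suppressed
```

Weakly activated glomeruli are pushed toward zero while strongly activated ones survive, so the relative contrast between strong channels increases.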

11:30 AM - 12:10 PM

Projection Neurons in Drosophila Antennal Lobes Signal the Acceleration of Odor Concentrations

Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

Temporal experience of odor gradients is important in spatial orientation of animals. The fruit fly Drosophila melanogaster exhibits robust odor-guided behaviors in an odor gradient field. In order to investigate how early olfactory circuits process temporal variation of olfactory stimuli, we subjected flies to precisely defined odor concentration waveforms and examined spike patterns of olfactory sensory neurons (OSNs) and projection neurons (PNs). We found a significant temporal transformation between OSN and PN spike patterns, manifested by the PN output strongly signaling the OSN spike rate and its rate of change. A simple two-dimensional model admitting the OSN spike rate and its rate of change as inputs closely predicted the PN output. When cascaded with the rate-of-change encoding by OSNs, PNs primarily signal the acceleration and the rate-of-change of dynamic odor stimuli to higher brain centers, thereby enabling animals to reliably respond to the onsets of odor concentrations.


[1] Anmo J. Kim, Aurel A. Lazar and Yevgeniy B. Slutskiy (2015) eLife, doi:10.7554/eLife.06651.
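The two-dimensional model described in the abstract can be caricatured as follows (the linear form, rectification, and coefficients here are my illustrative assumptions, not the published model): the PN rate is driven by the OSN spike rate together with its rate of change, so the PN response emphasizes the stimulus onset.

```python
import numpy as np

def pn_rate(osn_rate, dt, a=1.0, b=0.5):
    """Toy 2D model: PN rate from OSN rate and its temporal derivative."""
    d_rate = np.gradient(osn_rate, dt)              # rate of change of the OSN rate
    return np.maximum(a*osn_rate + b*d_rate, 0.0)   # rectified linear readout

t = np.arange(0.0, 1.0, 1e-3)
osn = 50.0*(1.0 - np.exp(-t/0.1))                   # OSN rate rising to a plateau (Hz)
pn = pn_rate(osn, 1e-3)
# The derivative term makes the PN response peak at stimulus onset,
# while the OSN rate is still rising toward its plateau.
```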

Lunch 12:10 PM - 2:00 PM

Afternoon Session I (2:00 PM - 3:20 PM)

Chair: Eugenia Chiappe

2:00 PM - 2:40 PM

How to sample a reliable neural estimate of the variable world?

Mikko I. Juusola, Department of Biomedical Science, University of Sheffield.

The world is variable and dynamic. Its matter/energy is clustered into changing structures and events encompassing macro- and micro-scales. The central problem facing all animals is how to best sample a reliable estimate of the world, when the estimation itself is limited by variations in their neural machineries and by uncertainty of their surroundings. New results suggest that rather than working against variability, evolution works with it, giving rise to reliable and robust information sampling and representation in the nervous tissue. Photoreceptors sample visual information stochastically and weight it against fluctuating responses of their neighbours. Such anti-aliased sampling improves neural estimates of intensity changes in image pixels. Visual interneurones further adaptively sample and integrate synaptic information of photoreceptors to improve their estimates of the structure of the world. I will present new evidence for the hypothesis that variability in animals’ sensory systems is less noise and more a part of a solution to sample reliable estimates of the variable world.

2:40 PM - 3:20 PM

System Identification of Drosophila Optomotor Behavior Using Free Flight Virtual Reality

Andrew D. Straw, Research Institute of Molecular Pathology, Vienna.

In an animal exploring its environment, many behavioral ‘modules’ may be operating concurrently. To what extent can overall behavior be reconciled with our knowledge of the individual behavioral modules such as stabilization reflexes and object approach behaviors?

To address these types of questions, we study vision in Drosophila. To understand potential interactions between behavioral modules, we must first understand the temporal dynamics of the constituent behavioral modules in isolation. Using our novel virtual reality assay for freely flying Drosophila, and a unique type of open-loop perturbation experiment, we are studying the wide-field visual motion responses of flies in free flight. Using control theoretic methods from engineering disciplines, we perturb freely moving animals and generate dynamic models of the behavior of individual flies. By applying physical constraints in the model estimation process, components of the model relating to the plant (the mass and aerodynamics of the fly) can be separated from the behavioral program (e.g. the dynamics of the optomotor response). By comparing these models, one may identify behavioral differences in flies in which specific neural circuit elements have been genetically blocked. By extending this analysis to multiple circuit elements and stimulus conditions, we hope to contribute to a complete picture of how a fly's flight trajectory results from the interplay of neural circuit elements and behavioral modules.

Joint work with J.R. Stowers and A. Kugi. Supported by WWTF CS2011-029 and ERC Starting Grant 281884.
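The plant/controller separation idea can be illustrated with a toy simulation (all parameters and the model form are my assumptions for illustration, not from the talk): a proportional optomotor controller reacts to wide-field retinal slip, while a separate plant stage turns torque into angular velocity against inertia and drag.

```python
import numpy as np

def simulate(kp, inertia, stimulus, dt=0.01):
    """Optomotor response to a pattern-velocity stimulus."""
    omega = np.zeros_like(stimulus)              # fly angular velocity
    for i in range(1, len(stimulus)):
        slip = stimulus[i-1] - omega[i-1]        # wide-field retinal slip
        torque = kp * slip                       # behavioral program (controller)
        # plant: torque accelerates the body against inertia and drag
        omega[i] = omega[i-1] + dt*(torque - 0.5*omega[i-1])/inertia
    return omega

stim = np.ones(500)                              # step in pattern velocity
response = simulate(kp=2.0, inertia=1.0, stimulus=stim)
```

Fitting such a model to perturbation responses lets the plant parameters (inertia, drag), which are constrained by physics, be estimated separately from the controller gain, which reflects the behavioral program.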

Afternoon Break 3:20 PM - 3:50 PM

Afternoon Session II (3:50 PM - 5:50 PM)

Chair: Martin Egelhaaf

3:50 PM - 4:30 PM

High-Level Information Processing of Sensory Signals in Nervous Systems

Chung-Chuan Lo, Institute of Systems Neuroscience and Department of Life Sciences, National Tsing Hua University, Taiwan.

Sensory systems are characterized by their ability to process large amounts of input in parallel. This parallel processing often leads to multiple downstream pathways with diverse functionality. Some information pathways may be used to trigger rapid motor responses, while others are wired for higher cognitive functions. How can we identify these functionally distinct pathways when the connectome of a nervous system is available? Here we propose that, in addition to conventional adjacency matrices, high-order connectivity (indirect connections through multiple synapses) should also be analyzed. This is based on the consideration that multi-synapse pathways may be more important, in terms of high-level functionality, than the direct or shortest pathways. To test the power of our method, we analyzed the connectome of Caenorhabditis elegans and found that the high-order connectivity of the C. elegans neural network is associated with the pathways that mediate its complex social feeding behavior. We further applied our method to the analysis of a partial network of the Drosophila central complex. Our analysis revealed the importance of a pair of atypical neurons in high-level signal processing. The exact function of this pair of neurons is still unknown, but their possible roles in maintaining the integrity of the spatial representation will be discussed in the talk.
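One simple way to count indirect connections (my construction for illustration, not necessarily the authors' measure) uses powers of the adjacency matrix: for a binary directed adjacency matrix A, the entry (A^k)[i, j] counts the directed paths of length k from neuron i to neuron j.

```python
import numpy as np

# Toy directed network of four neurons.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [0, 0, 0, 0]])

A2 = A @ A       # counts of two-synapse pathways
A3 = A2 @ A      # counts of three-synapse pathways

# Neuron 0 has no direct edge to neuron 3, yet reaches it through
# two-synapse (0 -> 1 -> 3) and three-synapse (0 -> 1 -> 2 -> 3) pathways.
```

Weighting or thresholding such higher-order terms is one way to surface multi-synapse pathways that an analysis of direct connections alone would miss.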

4:30 PM - 5:10 PM

Understanding Brain Functions from Spikes: A Nonlinear Dynamical System Identification Approach

Dong Song, Department of Biomedical Engineering, USC.

The brain represents and processes information with point-process signals, i.e., spikes. To understand the biological basis of cognitive functions, it is essential to identify the spike train transformations performed by brain regions. Such a model can also serve as a computational basis for developing cortical prostheses that restore lost cognitive functions by bypassing damaged brain regions. We formulate a three-stage strategy for this system identification problem. First, we formulate a physiologically plausible multiple-input, multiple-output (MIMO) model for representing the nonlinear dynamics underlying spike train transformations. This model is equivalent to a cascade of a Volterra model and a generalized linear model. The model has been successfully applied to the hippocampal CA3-CA1 system during learned behaviors. Second, we extend the model to nonstationary cases using a point-process adaptive filter technique. The resulting time-varying model captures how the MIMO nonlinear dynamics evolve with time while the animal is learning. Lastly, we seek to identify the learning rule that explains how this nonstationarity arises as a consequence of the input-output flow that the brain region has experienced during learning. Such a learning rule is essentially the neurobiological basis of learning and memory formation.
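The cascade structure described above can be sketched in miniature (the kernel shapes and coefficients are illustrative assumptions, and the quadratic term is a crude single-input stand-in for a full second-order Volterra kernel): a Volterra-style feedforward stage transforms an input spike train into a synaptic potential, and a generalized linear model converts that potential into output spike probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 1000, 20
x = (rng.random(T) < 0.05).astype(float)     # input spike train (binary bins)

k1 = np.exp(-np.arange(L)/5.0)               # first-order (linear) kernel
u = np.convolve(x, k1, mode="full")[:T]      # first-order Volterra term
u2 = 0.3 * u**2                              # stand-in for the second-order term

eta = u + u2 - 2.0                           # synaptic potential minus threshold
p = 1.0/(1.0 + np.exp(-eta))                 # GLM link: per-bin spike probability
y = (rng.random(T) < p).astype(float)        # stochastic output spike train
```

Identification then runs in the opposite direction: given recorded input and output trains x and y, the kernels and threshold are estimated, e.g. by maximum likelihood under the point-process model.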

5:10 PM - 5:50 PM

Closed-Loop Computational Electrophysiology

Thomas Nowotny, Department of Informatics, University of Sussex.

The standard method for characterising ion channels in neurons is the voltage clamp in patch clamp recordings. In the classical procedure, however, measurements are performed with constant voltage steps, and chemical channel blockers are used to isolate individual ion channel types. Because chemical blockers can be irreversible, different ion channels of the same neuron type have to be measured in different individual cells, potentially even in separate preparations from different animals. This can be highly problematic, and it has been observed that combining measurements from many different cells can prevent building useful whole-cell models. Here I will introduce a proposal to go beyond the classical constant steps and design optimised stimulation patterns that isolate the effects of different ion channels without blockers. Furthermore, I propose to use closed-loop online parameter estimation methods to build a model of all ionic currents in an individual neuron simultaneously. I will present preliminary simulation results suggesting that this new method may now be feasible using modern GPU acceleration. If successful, these new closed-loop electrophysiology methods could deeply impact our understanding of how individual neurons vary in their ion channel content.
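The simplest instance of parameter estimation from voltage-clamp data is the linear case (a toy of my own, far simpler than the nonlinear, online estimation the talk proposes): for an ohmic leak channel, I = g*(V - E), so the conductance g and reversal potential E fall out of a linear regression of measured current on clamp voltage.

```python
import numpy as np

g_true, E_true = 2.0, -70.0                        # conductance (nS), reversal (mV)
V = np.array([-90.0, -70.0, -50.0, -30.0, -10.0])  # clamp step voltages (mV)
I = g_true * (V - E_true)                          # steady-state currents (pA)

# I = g*V - g*E is linear in V: fit slope and intercept, then recover g and E.
A = np.column_stack([V, np.ones_like(V)])
slope, intercept = np.linalg.lstsq(A, I, rcond=None)[0]
g_est = slope
E_est = -intercept / slope
```

Voltage-gated channels add nonlinear, time-dependent gating variables, which is why richer stimuli and closed-loop online estimators are needed to constrain all parameters at once.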
