CNS*2013 Workshop on


Methods of Information Theory in Computational Neuroscience

Wednesday and Thursday, July 17 and 18, 2013

Paris, France


Overview

Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches driven by problems arising in neuroscience.

A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.

The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.

The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.


Standing Committee

  • Alex G. Dimitrov, Department of Mathematics, Washington State University - Vancouver.
  • Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

Program Committee




Program Overview




Wednesday, 9:00 AM - 5:00 PM, July 17, 2013


Morning Session I (9:00 AM - 10:20 AM)

Chair: Alex G. Dimitrov


9:00 AM - 9:40 AM

Using Information Theory to Add a New Dimension to Eye Design

Simon Laughlin, Department of Zoology, University of Cambridge.

Are resources allocated to a neural system’s components to maximize its overall efficiency? We address this problem in an exceptionally well characterized set of systems, by asking a new question in eye design. How do the benefits of investing in optics, to form a better image, trade off against the benefits of investing in photoreceptors, to extract information from the image? Basic Information Theory allows one to reduce competing factors to a single measure of performance, bits coded per unit solid angle per second [1]. Using this measure we find that there is an optimum division of resources that maximizes an eye’s efficiency in daylight, and this explains why large diurnal eyes with better optical resolution have longer photoreceptors. Efficient resource allocation also explains why optically efficient simple eyes (like our own) are largely filled with optics while optically inefficient compound eyes are largely filled with photoreceptors. Thanks to Information Theory, a structural difference that was too obvious to catch people’s attention turns out to be an efficient design.

Reference

[1] van Hateren, J.H. (1992). Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. J. Comp. Physiology A 171, 157-170.

(joint work with Francisco Hernandez-Heras)
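
As a rough editorial illustration of the performance measure mentioned above (bits coded per unit solid angle per second), the sketch below computes a photoreceptor information rate from a frequency-dependent signal-to-noise ratio using the Gaussian-channel formula R = ∫ log2(1 + SNR(f)) df and then normalizes by an acceptance solid angle. The formula, the SNR curve, and all parameter values are illustrative assumptions, not taken from the talk or from [1].

```python
import numpy as np

def photoreceptor_bit_rate(snr_of_f, f_max, n_points=1000):
    """Shannon rate (bits/s) of a photoreceptor modelled as a Gaussian channel
    with frequency-dependent SNR: R = integral_0^f_max log2(1 + SNR(f)) df."""
    f = np.linspace(0.0, f_max, n_points)
    return np.trapz(np.log2(1.0 + snr_of_f(f)), f)

# Hypothetical SNR falling off with temporal frequency (illustrative only).
snr = lambda f: 100.0 / (1.0 + (f / 50.0) ** 2)

rate = photoreceptor_bit_rate(snr, f_max=200.0)     # bits per second
acceptance_solid_angle = 2.4e-5                     # sr per receptor, hypothetical
print(f"{rate:.1f} bits/s -> {rate / acceptance_solid_angle:.3g} bits per sr per s")
```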


9:40 AM - 10:20 AM

Information Transmission Across the Retinogeniculate Synapse

Lawrence C. Sincich, University of Alabama.

Taking a single connected pair of neurons as an example, I will present a case study of how information is transmitted from one neuron to another in vivo. Drawing on recordings from neurons in the lateral geniculate nucleus of a primate, in which the inputs from single retinal ganglion cells can be recorded simultaneously, I will show how retinogeniculate transfer of information can occasionally be so efficient as to be lossless. In our experiments, the stimuli consisted of a small spot of light modulated with naturalistic temporal frequencies, so the transfer dynamics are very close to what would be expected when primates (including humans!) view natural scenes. I will discuss the empirical pitfalls and opportunities created by physiological recordings of pairs of connected neurons in the living animal.


Morning Break 10:20 AM - 10:50 AM


Morning Session II (10:50 AM - 12:10 PM)

Chair: Alex G. Dimitrov


10:50 AM - 11:30 AM

What Does a Neuron Do? An Online Learning View of Neuronal Computation

Dmitri B. Chklovskii, HHMI Janelia Farm, Ashburn, VA.

The enormous complexity of biological neurons complicates our efforts to model their dynamics. Because the values of the numerous biophysical parameters required to construct detailed models are largely unavailable, there is a need for models at a higher level of abstraction. To help construct such models we propose to view a neuron from a functional, or computational, perspective. Specifically, we propose that leaky integration and a non-linear output function perform online estimation of a non-Gaussian-distributed signal that results from a weighted summation of pre-synaptic inputs. In turn, the synaptic weights for extracting such a non-Gaussian signal can be generated by Hebbian-like learning rules, provided the output function is non-linear. This allows us to map specific biophysical mechanisms onto the steps of a computational algorithm, setting a foundation for modeling neurons as online signal processing devices.
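
The link between a Hebbian-like rule with a non-linear output and the extraction of a non-Gaussian signal can be illustrated with a toy online-learning loop. The sketch below is not the speaker's model: it assumes whitened inputs and a cubic (kurtosis-seeking) output nonlinearity, under which a normalized Hebbian update tends to align the weight vector with a heavy-tailed source hidden among Gaussian inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic presynaptic input: one heavy-tailed (super-Gaussian) source hidden
# among Gaussian distractors, mixed into 10 input channels.
n_inputs, n_samples = 10, 100000
mixing = rng.normal(size=n_inputs)
source = rng.laplace(size=n_samples)                  # non-Gaussian signal of interest
X = np.outer(mixing, source) + rng.normal(size=(n_inputs, n_samples))

# Whiten the inputs (an assumption of this sketch, not of the talk).
X -= X.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(X @ X.T / n_samples)
W_wh = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
Xw = W_wh @ X

# Online neuron: weighted summation, cubic output nonlinearity, Hebbian update.
w = rng.normal(size=n_inputs)
w /= np.linalg.norm(w)
eta = 1e-4
for t in range(n_samples):
    x = Xw[:, t]
    y = (w @ x) ** 3              # non-linear output (kurtosis-seeking)
    w += eta * y * x              # Hebbian-like weight update
    w /= np.linalg.norm(w)        # normalization (homeostasis)

# The learned weights should align with the non-Gaussian direction (whitened coords).
target = W_wh @ mixing
print("alignment with non-Gaussian direction:", abs(w @ target) / np.linalg.norm(target))
```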


11:30 AM - 12:10 PM

Information Theoretic Analysis of Large-scale Neurophysiological Recordings by Manifold Learning

Simon R. Schultz, Department of Bioengineering, Imperial College.


Lunch 12:10 PM - 2:00 PM


Afternoon Session (2:00 PM - 5:00 PM)

Chair: Simon R. Schultz


2:00 PM - 2:50 PM

Resonating Vector Strength: How to Determine Periodicity in a Sequence of Events

J. Leo van Hemmen, Technical University of Munich.

Can periodicity of discrete events be measured and, if so, how? In fact, the two questions are identical, and our goal is to understand why and to see how uncertainty modifies the answer. Typically, we find responses as events at times {t1, t2, …, tn}, say, for a given periodic stimulus of angular frequency ω0 = 2π/T0. The question then is: given a set of times {t1, t2, …, tn} on the real axis, how periodic are they, and do they repeat in "some" sense in accordance with the stimulus period T0? The question and the answer are at least as old as a classical paper due to von Mises, dating back to 1918. The key idea is simply this. We map the events tj onto the unit circle through tj → exp(iωtj) and focus on their center of gravity ρ(ω), a complex number in the unit disk. Its absolute value |ρ(ω0)| with ω ≡ ω0 is what von Mises studied and is now called the vector strength. We prove that the nearer |ρ(ω0)| is to 1, the more periodic the events tj are with respect to T0, a simple topological criterion. Furthermore, we also show why it is useful to study ρ(ω) as a function of ω so as to obtain a 'resonating' vector strength (RVS), an idea strongly deviating from the classical characteristic function. Finally, we discuss how noise, as a means of quantifying our uncertainty regarding the tj, modifies ρ(ω) as a measure of periodicity.
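
A minimal numerical sketch of the vector strength and its resonating variant, following the definitions in the abstract (ρ(ω) as the center of gravity of the events mapped onto the unit circle). The stimulus frequency, jitter level, and scanned frequency range are illustrative choices, not values from the talk.

```python
import numpy as np

def vector_strength(event_times, omega):
    """Center of gravity rho(omega) of events mapped onto the unit circle,
    t_j -> exp(i*omega*t_j); |rho| close to 1 indicates periodicity at omega."""
    return np.mean(np.exp(1j * omega * np.asarray(event_times)))

def resonating_vector_strength(event_times, omegas):
    """|rho(omega)| scanned over candidate angular frequencies."""
    return np.array([abs(vector_strength(event_times, w)) for w in omegas])

# Illustrative example: noisy events locked to a 40 Hz stimulus.
rng = np.random.default_rng(1)
T0 = 1.0 / 40.0                                            # stimulus period (s)
t = np.arange(200) * T0 + rng.normal(0, 0.1 * T0, 200)     # jittered event times

omegas = 2 * np.pi * np.linspace(5, 100, 2000)             # scan 5-100 Hz
rvs = resonating_vector_strength(t, omegas)
f_peak = omegas[np.argmax(rvs)] / (2 * np.pi)
print(f"peak vector strength {rvs.max():.2f} near {f_peak:.1f} Hz")
```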


2:50 PM - 3:40 PM

Neurons and Surprise

Israel Nelken, ICNC and the Edmond and Lily Safra Center for Brain Sciences, Hebrew University.

Neurons in the auditory system of mammals are sensitive to rare events in the sensory sequence. Furthermore, neurons are exquisitely sensitive to the complexity of the sound sequence, even when controlling for the rarity of the individual events. We used an information-theoretic framework to formally define 'surprise' for sound sequences. We assume that the auditory system represents those aspects of the past which are maximally predictive of the future. A family of reduced representations of the past sequence, which minimizes the mutual information with the past ('complexity') under a constraint on the mutual information with the future ('predictive power'), can be calculated explicitly by applying the information bottleneck principle to the joint probability distribution of past (defined as the last N tones in the sequence) and the future (here, the next tone in the sequence). Given such a representation, surprise is defined as -log(p), where p is the probability of the next event given the state of the reduced representation that corresponds to the preceding sequence of sound events. We show that neuronal responses in auditory cortex are fitted well by the surprise calculated from such models. Thus, the auditory system seems to encode a formal notion of surprise derived from first principles.

(joint work with Jonathan Rubin and Naftali Tishby)
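
As a toy illustration of the surprise computation described above, the sketch below replaces the information-bottleneck reduction of the past with the raw last-N-tones context and an add-alpha count model; it reproduces only the final step, surprise = -log p(next event | representation of the past). The sequence statistics and parameters are invented for illustration.

```python
import numpy as np
from collections import defaultdict

def surprise_sequence(tones, n_past=2, alpha=1.0):
    """Surprise -log2 p(next tone | last n_past tones) under a simple
    add-alpha-smoothed count model, updated online.  The talk compresses the
    past with the information bottleneck; here the raw n_past-tone context
    stands in for that reduced representation."""
    alphabet = sorted(set(tones))
    counts = defaultdict(lambda: defaultdict(float))
    surprises = []
    for i in range(n_past, len(tones)):
        context, nxt = tuple(tones[i - n_past:i]), tones[i]
        total = sum(counts[context].values()) + alpha * len(alphabet)
        p = (counts[context][nxt] + alpha) / total
        surprises.append(-np.log2(p))
        counts[context][nxt] += 1.0          # update the model after predicting
    return np.array(surprises)

# Oddball-like sequence: frequent tone 'A' with rare deviants 'B'.
rng = np.random.default_rng(0)
seq = ['A' if rng.random() < 0.9 else 'B' for _ in range(2000)]
s = surprise_sequence(seq)
labels = np.array(seq[2:])
print("mean surprise, standards vs. deviants:",
      s[labels == 'A'].mean(), s[labels == 'B'].mean())
```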


Afternoon Break 3:40 PM - 4:10 PM


4:10 PM - 5:00 PM

Bounds on Reliable Information Transmission and Coding Complexity in Some Stochastic Neuronal Models

Lubomir Kostal, Institute of Physiology, Academy of Sciences of the Czech Republic.

The problem of information processing in single neurons and neuronal networks is one of the most intensively studied topics in computational neuroscience. The mathematical framework for the theoretical approach to this problem is often provided by information theory [1]. The theory quantifies, under certain assumptions, the ultimate limit on reliable information transfer by means of the information channel capacity. However, channel capacity is essentially an asymptotic quantity, attained as the code length, and the associated coding/decoding complexity, tends to infinity. In this contribution we address both the ultimate limits (capacity) and bounds on the non-asymptotic performance achievable with a given code length, taking into account the probability that the stimulus is decoded incorrectly under maximum-likelihood decoding. The metabolic cost of neuronal activity is also taken into account.

References

[1] Gallager, R. G. (1968). Information Theory and Reliable Communication. John Wiley and Sons, Inc., New York, USA.

[2] Kostal, L. (2010). Information capacity in the weak-signal approximation. Phys. Rev. E 82, 026115.

(joint work with Ryota Kobayashi)
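
For readers unfamiliar with the asymptotic quantity referred to above, the sketch below computes the capacity of a toy discrete memoryless channel with the Blahut-Arimoto algorithm, adding a Lagrange penalty on expected input cost as a crude stand-in for a metabolic constraint. The channel matrix and cost values are invented for illustration and are unrelated to the neuronal models of the talk.

```python
import numpy as np

def blahut_arimoto(P, cost=None, s=0.0, n_iter=500):
    """Mutual information (bits/use) at the optimum of I(X;Y) - s*E[cost(X)]
    for a discrete memoryless channel P[x, y] = p(y|x), found by the
    cost-penalized Blahut-Arimoto iteration."""
    n_x = P.shape[0]
    p = np.full(n_x, 1.0 / n_x)
    cost = np.zeros(n_x) if cost is None else np.asarray(cost, float)
    for _ in range(n_iter):
        q = p[:, None] * P                       # joint p(x)p(y|x), then posterior q(x|y)
        q /= q.sum(axis=0, keepdims=True)
        logits = (P * np.log(q + 1e-300)).sum(axis=1) - s * cost
        p = np.exp(logits - logits.max())
        p /= p.sum()
    py = p @ P
    I = (p[:, None] * P * np.log((P + 1e-300) / (py + 1e-300))).sum()
    return I / np.log(2), p

# Toy neuron-like channel: two "firing rate" inputs, three noisy spike-count outputs.
P = np.array([[0.7, 0.2, 0.1],    # low-rate input (cheap)
              [0.1, 0.3, 0.6]])   # high-rate input (metabolically costly)
C, p_opt = blahut_arimoto(P, cost=[0.0, 1.0], s=0.5)
print(f"I(X;Y) at the cost-penalized optimum ~ {C:.3f} bits/use, input distribution {p_opt}")
```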


Thursday, 9:00 AM - 5:00 PM, July 18, 2013


Morning Session I (9:00 AM - 10:20 AM)

Chair: Conor Houghton


9:00 AM - 9:40 AM

Not Noisy, Just Wrong: the Computational and Neural Cause of Behavioral Variability

Alex Pouget, University of Geneva.

Behavior varies from trial to trial even when the stimulus is maintained as constant as possible. In many models, this variability is attributed to noise in the brain. Here, we propose that there is another major source of variability: suboptimal inference. Importantly, we argue that in most tasks of interest, and particularly complex ones, suboptimal inference is likely to be the dominant component of behavioral variability. This perspective explains a variety of intriguing observations, including why variability appears to be larger on the sensory than on the motor side, and why our sensors are sometimes surprisingly unreliable. It also predicts specific patterns of correlations among neurons which are markedly different from the ones that are currently assumed to exist in cortical circuits.


9:40 AM - 10:20 AM

The Simultaneous Silence of Neurons Explains Higher-order Interactions in Ensemble Spiking Activity

Taro Toyoizumi, RIKEN Brain Science Institute.

Collective spiking activity of neurons is the basis of information processing in the brain. Sparse neuronal activity in a population of neurons limits possible spiking patterns and, thereby, influences the information content conveyed by each pattern. However, because of the combinatorial explosion of the number of parameters required to describe higher-order interactions (HOIs), the characterization of neuronal interactions has been mostly limited to lower-order interactions, such as pairwise interactions.

Here, we propose a new model that characterizes population-spiking activity by adding a single parameter to the previously proposed pairwise interaction model. This parameter describes the fraction of time a group of neurons is simultaneously silent, which can be alternatively expressed as a specific combination of HOIs. We apply our model to groups of neighboring neurons that are simultaneously recorded from spontaneously active slice cultures from the hippocampal CA3 area. Most groups of neurons that are not adequately explained by the pairwise interaction model exhibit significantly longer periods of simultaneous silence than the chance level expected from firing rates and pairwise correlations, demonstrating that simultaneous silence is a common property coded by HOIs.

To confirm that simultaneous silence is not only common but also a major property, we systematically obtained a one-dimensional, data-driven HOI term that is asymptotically optimal when added to a pairwise interaction model. This analysis revealed the structured HOIs expected from the simultaneous silence of neurons: positive pairwise interactions are followed by negative triple-wise interactions and then by positive quadruple-wise interactions. These results suggest that seemingly complex HOIs can be explained by the simultaneous silence of multiple neurons. We discuss the implications of simultaneous silence for our understanding of the underlying circuit architecture and information coding.
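
A minimal sketch of the model family described above: a pairwise (Ising-like) model for binary spike patterns augmented with a single extra parameter attached to the all-silent pattern. Probabilities are computed by brute-force enumeration for a small group, and all parameter values are illustrative rather than fitted to data.

```python
import numpy as np
from itertools import product

def silence_augmented_pairwise(h, J, theta):
    """P(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j
    + theta * 1[all x_i = 0]) over binary patterns x; theta is the single
    parameter controlling the probability of simultaneous silence."""
    n = len(h)
    patterns = np.array(list(product([0, 1], repeat=n)), dtype=float)
    energy = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)
    energy += theta * (patterns.sum(axis=1) == 0)
    p = np.exp(energy)
    return patterns, p / p.sum()

n = 5
h = np.full(n, -2.0)                      # sparse firing
J = np.full((n, n), 0.3)
np.fill_diagonal(J, 0.0)

patterns, p0 = silence_augmented_pairwise(h, J, theta=0.0)   # plain pairwise model
_,        p1 = silence_augmented_pairwise(h, J, theta=1.0)   # extra silence parameter

silent = patterns.sum(axis=1) == 0
print("P(all silent): pairwise %.3f vs. silence-augmented %.3f"
      % (p0[silent][0], p1[silent][0]))
```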


Morning Break 10:20 AM - 10:50 AM


Morning Session II (10:50 AM - 12:10 PM)

Chair: Conor Houghton


10:50 AM - 11:30 AM

Characterizing Neural Feature Selectivity and Invariance Using Natural Stimuli

Tatyana O. Sharpee, The Computational Neurobiology Laboratory, Salk Institute.

In this talk I will describe a set of computational tools for characterizing the responses of high-level sensory neurons. The goal is to describe, in as simple a way as possible, how the responses of these neurons signal the appearance of conjunctions of different features in the environment. The focus will be on computational methods that are designed to work with stimuli derived from the natural sensory environment. Some of the new methods that I will discuss characterize neural feature selectivity while assuming that the neural responses exhibit a certain type of invariance, such as position invariance for visual neurons. Other methods do not require one to assume invariance, and instead can determine the type of invariance by analyzing the relationships between the multiple stimulus features that affect the neural responses. I will discuss the relative advantages and limitations of these computational tools and illustrate their performance using model neurons as well as recordings from the visual system.
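
As a simple point of reference for the feature-selectivity methods discussed above (which go well beyond it), the sketch below recovers the single relevant feature of a model linear-nonlinear neuron with the spike-triggered average. It assumes white Gaussian stimuli, precisely the restriction that methods designed for natural stimuli are meant to lift; the model neuron and its parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model LN neuron: one relevant stimulus feature followed by a sigmoidal nonlinearity.
dim, n_trials = 20, 100000
feature = np.sin(np.linspace(0, np.pi, dim))
feature /= np.linalg.norm(feature)

stim = rng.normal(size=(n_trials, dim))          # white Gaussian "stimuli"
drive = stim @ feature
p_spike = 1.0 / (1.0 + np.exp(-(3.0 * drive - 1.0)))
spikes = rng.random(n_trials) < p_spike

# Spike-triggered average: a simple linear estimate of the relevant feature.
sta = stim[spikes].mean(axis=0)
sta /= np.linalg.norm(sta)
print("STA / true feature alignment:", abs(sta @ feature))
```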


11:30 AM - 12:10 PM

The t-Transform and Its Inverse in Modeling Neural Encoding and Decoding

Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

The t-transform is a key tool for characterizing the encoding of stimuli with spiking neural circuits. Formally, the t-transform maps an analog signal into a time sequence. For single-input multi-output neural circuits consisting of a set of receptive fields and a population of integrate-and-fire, threshold-and-fire and/or Hodgkin-Huxley neurons, the analysis of the inverse t-transform has revealed some deep connections between faithful decoding and the bandwidth of the encoded stimuli. In the noiseless case, a Nyquist-type rate condition guarantees invertibility with perfect stimulus recovery. In the noisy case, a standard regularization framework provides optimum stimulus recovery. In addition, the decoding algorithms are tractable for stimuli encoded with massively parallel neural circuits. More recently, we generalized these circuits to encode 3D color visual information for models of stereoscopic vision, to integrate sensory information of different modalities and dimensions, and to handle nonlinear dendritic signal processing. These results set the stage for the first rigorous results in spike processing [1].

[1] A. A. Lazar, E. A. Pnevmatikakis, and Y. Zhou, The Power of Connectivity: Identity Preserving Transformations on Visual Streams in the Spike Domain, Neural Networks, Volume 44, pp. 22-35, 2013.
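
A minimal sketch of the t-transform for an ideal integrate-and-fire neuron, under the standard assumptions of a bias b, integration constant κ and threshold δ: each interspike interval (t_k, t_{k+1}) yields the measurement ∫ u(s) ds = κδ - b(t_{k+1} - t_k). The stimulus and parameter values below are illustrative, and stimulus recovery (the inverse t-transform) is not shown.

```python
import numpy as np

def iaf_encode(u, dt, bias, kappa, delta):
    """Time encoding with an ideal integrate-and-fire neuron.  Between spikes
    the t-transform reads: integral of u over (t_k, t_{k+1}) equals
    kappa*delta - bias*(t_{k+1} - t_k)."""
    v, spikes = 0.0, []
    for i, ui in enumerate(u):
        v += dt * (ui + bias) / kappa
        if v >= delta:
            spikes.append((i + 1) * dt)
            v -= delta                   # reset, keeping the overshoot
    return np.array(spikes)

# Bandlimited test stimulus (sum of a few low-frequency sinusoids).
dt, T = 1e-5, 1.0
t = np.arange(0, T, dt)
u = 0.4 * np.sin(2 * np.pi * 7 * t) + 0.3 * np.cos(2 * np.pi * 13 * t)

bias, kappa, delta = 1.0, 1.0, 0.005
spike_times = iaf_encode(u, dt, bias, kappa, delta)
print(len(spike_times), "spikes; mean rate", len(spike_times) / T, "Hz")

# Check the t-transform on a few interspike intervals (approximate, due to dt).
for t1, t2 in zip(spike_times[:3], spike_times[1:4]):
    lhs = np.trapz(u[(t >= t1) & (t < t2)], dx=dt)
    rhs = kappa * delta - bias * (t2 - t1)
    print(f"integral of u: {lhs:+.5f}   kappa*delta - b*(t2-t1): {rhs:+.5f}")
```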


Lunch 12:10 PM - 2:00 PM


Afternoon Session (2:00 PM - 3:40 PM)

Chair: Tatyana O. Sharpee


2:00 PM - 2:50 PM

Sparse Sampling: Sensing Brain Activity at Infinite Resolution

Pier Luigi Dragotti, Imperial College.

The problem of reconstructing or estimating partially observed or sampled signals is an important one that finds application in many areas of signal processing. Traditional acquisition and reconstruction approaches are heavily influenced by classical Shannon sampling theory, which gives an exact sampling and interpolation formula for bandlimited signals. Recently, the emerging theory of sparse sampling has challenged the way we think about signal acquisition and has demonstrated that, by using more sophisticated signal models, it is possible to break away from the need to sample signals at the Nyquist rate. The insight that sub-Nyquist sampling can, under some circumstances, allow perfect reconstruction is revolutionizing signal processing, communications and inverse problems.

The main aim of this talk is to give an overview of these exciting new findings in sampling theory. The fundamental theoretical results of sparse sampling will be reviewed and constructive algorithms will be presented. We also discuss the effect of noise on the sampling and reconstruction of sparse signals. In this context, a variation of an iterative algorithm due to Cadzow is proposed and shown to perform close to optimally over a wide range of signal-to-noise ratios. To emphasize the relevance of these new theories, a set of applications in neuroscience will be presented. In particular, we will present a new fast algorithm for spike detection from two-photon calcium imaging and will show how to perform spike sorting at sub-Nyquist rates.
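
As an illustration of the sparse-sampling machinery referred to above, the sketch below recovers the locations of a few Diracs from a handful of Fourier coefficients using the classical annihilating-filter (Prony) step, in the noiseless case; the Cadzow-style denoising discussed in the talk is not included, and the signal is invented for illustration.

```python
import numpy as np

def recover_diracs(s, K):
    """Annihilating-filter (Prony) recovery of K Dirac locations on [0, 1)
    from consecutive Fourier coefficients s[m] = sum_k a_k exp(-2j*pi*m*t_k),
    m = 0..len(s)-1; needs len(s) >= 2K+1 and noiseless data."""
    # Linear system: sum_{l=0..K} h[l] * s[m-l] = 0 for m = K..len(s)-1.
    S = np.array([s[m - K:m + 1][::-1] for m in range(K, len(s))])
    _, _, Vh = np.linalg.svd(S)
    h = Vh[-1].conj()                  # annihilating filter = null-space vector
    u = np.roots(h)                    # roots u_k = exp(-2j*pi*t_k)
    return np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))

# Three "spikes" at unknown times, observed through 7 Fourier coefficients (2K+1).
true_t = np.array([0.12, 0.45, 0.83])
amps = np.array([1.0, 0.5, 2.0])
m = np.arange(7)
s = (amps * np.exp(-2j * np.pi * np.outer(m, true_t))).sum(axis=1)

print("recovered:", np.round(recover_diracs(s, K=3), 3), "true:", true_t)
```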


2:50 PM - 3:40 PM

Categorical Perception: from Coding Efficiency to Reaction Times

Jean-Pierre Nadal, Laboratoire de Physique Statistique, CNRS UMR8550, Ecole Normale Supérieure, Paris.

We address issues specific to the perception of categories (e.g., vowels, familiar faces), making a clear distinction between identifying a category (an element of a discrete set) and estimating a continuous parameter (such as a direction). With the neural decision-making process as the main focus, we consider discrete (typically binary) choice tasks that require identifying the stimulus as an exemplar of a category.

First, we exhibit a link between optimal Bayesian decoding (identification) and coding efficiency, the latter being measured by the mutual information between the discrete category set and the neural activity. Focusing on the high signal-to-noise ratio limit with a large population of stimulus-specific encoding cells, we then obtain an analytical expression for the mutual information. We deduce the properties of the most efficient codes. One main outcome is that, in this high signal-to-noise ratio limit, the Fisher information at the population level should be greatest between categories, which is achieved by having many cells with the stimulus-discriminating parts (steepest slopes) of their tuning curves placed in the transition regions between categories in stimulus space. At the behavioral level, this leads to the main features that are characteristic of categorical perception (see, e.g., S. Harnad, Editor: “Categorical Perception: The Groundwork of Cognition”, Cambridge University Press, 1987), thus appearing as a byproduct of optimal coding.

Next, we characterize the properties of the best estimator of the likelihood of the category when this estimator takes its inputs from the neural coding layer. This allows us to study reaction times in a perceptual identification task, for a given (not necessarily optimized) coding layer. Adopting the diffusion-to-bound approach to model the decision process, we relate analytically the bias and variance of the diffusion process underlying decision making to macroscopic quantities that are behaviorally measurable. A major consequence is the existence of a quantitative link between reaction times and discrimination accuracy. The results account for empirical facts, both qualitatively (e.g., more time is needed to identify a category from a stimulus at the boundary than from a stimulus lying within a category) and quantitatively (working on published experimental data on phoneme identification tasks).

(joint work with Laurent Bonnasse-Gahot, Centre d’Analyse et de Mathématique Sociales, CNRS UMR8557, Ecole des Hautes Etudes en Sciences Sociales, Paris)

References

  • Bonnasse-Gahot L, Nadal JP: Neural coding of categories: information efficiency and optimal population codes. Journal of Computational Neuroscience 2008, 25(1): 169-187.
  • Bonnasse-Gahot L, Nadal JP: Perception of categories: from coding efficiency to reaction times. Brain Research 2012, 1434: 47-61.
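
As an editorial illustration of the diffusion-to-bound account of reaction times mentioned above, the sketch below simulates a two-choice drift-diffusion decision: a smaller drift (a stimulus near the category boundary) yields both longer reaction times and lower accuracy, the qualitative link described in the abstract. All parameter values are arbitrary and unrelated to the published analyses.

```python
import numpy as np

def diffusion_to_bound(drift, sigma=1.0, bound=1.0, dt=1e-3, rng=None):
    """Simulate one two-choice decision: evidence diffuses with the given drift
    until it hits +bound or -bound.  Returns (choice_correct, reaction_time)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (x > 0) == (drift > 0), t

rng = np.random.default_rng(0)
for drift in (2.0, 0.5):          # far from vs. close to the category boundary
    trials = [diffusion_to_bound(drift, rng=rng) for _ in range(200)]
    acc = np.mean([c for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"drift {drift}: accuracy {acc:.2f}, mean reaction time {rt * 1000:.0f} ms")
```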