CNS*2011 Workshop on


Methods of Information Theory in Computational Neuroscience

Wednesday and Thursday, July 27-28, 2011

Stockholm, Sweden




Overview

    Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches driven by problems arising in neuroscience.

    A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.

    The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.

    The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

    References

    Shannon, C.E., A Mathematical Theory of Communication, Bell System Technical Journal, Vol. 27, pp. 379-423 and 623-656, 1948.

    Milenkovic, O., Alterovitz, G., Battail, G., Coleman, T.P., et al., Eds., Special Issue on Molecular Biology and Neuroscience, IEEE Transactions on Information Theory, Vol. 56, No. 2, February 2010.

    Dimitrov, A.G., Lazar, A.A. and Victor, J.D., Information Theory in Neuroscience, Journal of Computational Neuroscience, Vol. 30, No. 1, February 2011, pp. 1-5, Special Issue on Methods of Information Theory.


Standing Committee

    Alex G. Dimitrov, Department of Mathematics, Washington State University - Vancouver.

    Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

Program Committee

    Todd P. Coleman, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

    Michael C. Gastpar, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley.

    Simon R. Schultz, Department of Bioengineering, Imperial College.



Program Overview


    Wednesday (9:00 AM - 5:00 PM), July 27, 2011


    Morning Session I (9:00 AM - 10:40 AM)
    Modeling
    Chair: Christian Machens




    9:00 AM - 9:50 AM

    Towards Understanding the Biophysical Limits to Information Transmission

    A. Aldo Faisal, Department of Computing & Department of Bioengineering, Imperial College.

    Do hard physical limits constrain the structure and function of neural circuits - and their capacity to process information? We study this problem from first-principles biophysics, looking at three fundamental constraints (noise, energy, and time) and how the basic properties of a neuron's components set up a trade-off between them. We focus on the action potential as the fundamental signal used by neurons to transmit information rapidly and reliably to other neurons along neural circuits, and examine the cell soma, the axon, and the synapse.
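
    As a toy illustration of how the three constraints interact (not the speaker's biophysical model), one can evaluate the Shannon capacity of a power-constrained Gaussian channel, C = W log2(1 + P/(N0 W)): the achievable bit rate is jointly limited by an energy budget P, a noise floor N0, and a bandwidth W (a time constraint). All numbers below are hypothetical.

        # Toy sketch (not the speaker's model): capacity of a power-
        # constrained Gaussian channel, illustrating the noise/energy/time
        # trade-off.  All parameter values are hypothetical.
        import numpy as np

        def capacity_bits_per_s(P, W, N0):
            """Shannon capacity C = W * log2(1 + P / (N0 * W))."""
            return W * np.log2(1.0 + P / (N0 * W))

        N0 = 1e-3                            # noise power density (made up)
        for P in [0.01, 0.1, 1.0]:           # power (energy/time) budgets
            for W in [10.0, 100.0, 1000.0]:  # signalling bandwidths, Hz
                C = capacity_bits_per_s(P, W, N0)
                print(f"P={P:4.2f}  W={W:6.1f}  C={C:8.1f} bits/s")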




    9:50 AM - 10:40 AM

    Minimal Models of Multidimensional Neural Computations

    Tatyana Sharpee, Computational Neurobiology Laboratory, The Salk Institute of Biological Studies.

    Biological systems across many scales, from molecules to ecosystems, can all be considered information processors, detecting important events in their environment and transforming them into actions. Detecting events of interest in the presence of noise and other overlapping events often necessitates the use of nonlinear transformations of inputs. The nonlinear nature of the relationships between inputs and outputs makes it difficult to characterize them experimentally given the limitations imposed by data collection. I will discuss how minimal models of the nonlinear input/output relationships of information processing systems can be constructed by maximizing a quantity called the noise entropy. The proposed approach can be used to "focus" the available data by determining which input/output correlations are important and creating the least-biased model consistent with those correlations.

    We used this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second-order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first- and second-order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
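
    For concreteness, here is a minimal sketch of the model class described above: a second-order logistic response function, P(spike | s) = 1/(1 + exp(-(a + h's + s'Js))). The data are synthetic, and the fit below is ordinary maximum-likelihood logistic regression on quadratic features rather than the talk's fitting procedure (maximizing noise entropy under first- and second-order constraints yields exactly this logistic form).

        # Second-order logistic spiking model fit to synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        d, n = 4, 50000
        S = rng.standard_normal((n, d))              # synthetic stimuli
        h_true = rng.standard_normal(d)              # linear filter
        J_true = 0.3 * rng.standard_normal((d, d))
        J_true = (J_true + J_true.T) / 2             # symmetric quadratic kernel
        logit = -1.0 + S @ h_true + np.einsum('ni,ij,nj->n', S, J_true, S)
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

        # Quadratic feature expansion; plain logistic regression then fits
        # the second-order model by gradient ascent on the log-likelihood.
        X = np.hstack([np.ones((n, 1)), S,
                       np.einsum('ni,nj->nij', S, S).reshape(n, -1)])
        w = np.zeros(X.shape[1])
        for _ in range(1000):
            p = 1 / (1 + np.exp(-X @ w))
            w += 0.5 * X.T @ (y - p) / n
        print("true linear filter:     ", h_true)
        print("recovered linear filter:", w[1:d + 1])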




    10:40 AM - 11:10 AM

    Morning Break


    Morning Session II (11:10 AM - 12:00 PM)
    Analysis
    Chair: Eftychios A. Pnevmatikakis




    11:10 AM - 12:00 PM

    How to Deal with the Heterogeneity of Responses in Higher Brain Areas: a Demixing Method

    Christian Machens, Ecole Normale Supérieure, Paris.

    Higher brain areas that are implicated in decision-making or attentional processes receive inputs from many other parts of the brain. The activity of neurons in these areas often reflects this mix of influences. As a result, neural responses are extremely complex and heterogeneous, even in animals performing relatively simple tasks such as stimulus–response associations. This heterogeneity makes it hard to understand what exactly these areas are doing. The traditional approach to analyzing data from neural populations essentially ignores the problem and simply focuses on the responses of single neurons, selecting some for presentation and submerging the rest in a swamp of statistical measures. More recent approaches have sought to analyze recordings by resorting to principal component analysis (PCA) and other dimensionality reduction techniques. These techniques provide a succinct and complete description of the population response; however, the construction of this description is independent of the relevant task variables.

    Here, we propose an exploratory data analysis method that seeks to maintain the major benefits of PCA while also extracting the relevant task variables from the data. We suggest that task variables will often find orthogonal representations in higher-order areas. The heterogeneity observed at the level of individual cells is then simply due to the random mixture of these orthogonal representations. We propose a dimensionality reduction method that seeks a coordinate transformation such that firing-rate variance caused by different task variables falls in orthogonal subspaces and is maximized within these subspaces. We discuss the loss function needed to achieve this, and show its relation to PCA and to information-theory-based methods such as ICA. We study the use of the method both on simulated data and on neural population recordings obtained in the PFC of monkeys as well as in the VTA and the piriform cortex of rodents. In all of these areas, we find a universal subspace in the neural population activity that represents information about task timing.

    Joint work with Wieland Brendel, Christos Constantinidis, Ranulfo Romo, Mark Terrelonge, and Naoshige Uchida.
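
    A schematic sketch of the variance-partitioning idea (not the authors' algorithm): trial-averaged population activity is split into marginalizations that vary with only one task variable, and a principal axis is extracted within each part. All data below are synthetic, with planted structure.

        # Schematic demixing sketch: split X[stimulus, time, neuron] into a
        # stimulus-only part and a time-only part, then find the leading
        # axis of each.
        import numpy as np

        rng = np.random.default_rng(1)
        S, T, N = 6, 50, 40                        # stimuli, time bins, neurons
        stim_dir = rng.standard_normal(N)          # planted "stimulus" pattern
        time_dir = rng.standard_normal(N)          # planted "time" pattern
        X = (np.linspace(-1, 1, S)[:, None, None] * stim_dir
             + np.sin(np.linspace(0, 3, T))[None, :, None] * time_dir
             + 0.3 * rng.standard_normal((S, T, N)))

        Xbar = X.mean(axis=(0, 1))                 # grand mean, shape (N,)
        X_s = X.mean(axis=1) - Xbar                # stimulus marginalization, (S, N)
        X_t = X.mean(axis=0) - Xbar                # time marginalization, (T, N)

        def top_axis(M):
            """Leading principal axis of a (samples x neurons) matrix."""
            _, _, Vt = np.linalg.svd(M - M.mean(0), full_matrices=False)
            return Vt[0]

        stim_axis, time_axis = top_axis(X_s), top_axis(X_t)
        print("stimulus/time axis overlap:", abs(stim_axis @ time_axis))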




    12:00 PM - 2:00 PM

    Lunch


    Afternoon Session (2:00 PM - 5:00 PM)
    Formal Methods
    Chair: Olivier Faugeras




    2:00 PM - 2:50 PM

    Adaptive Cluster Expansion for the Inverse Ising Problem

    Simona Cocco, Laboratoire de Physique Statistique de l'ENS, Paris.

    In this talk I will present a procedure to solve the inverse Ising problem, based on a cluster expansion of the Ising entropy at fixed magnetizations and correlations. I will discuss the performance of the procedure and present applications to neurobiological data (multi-electrode recordings of the retina).
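
    The smallest nontrivial ingredient of such an expansion can be written down directly: for a pair of spins, the two-spin entropy consistent with measured magnetizations and a connected correlation has a closed form, and the cluster contribution dS_ij = S_ij - S_i - S_j measures how much the pair adds beyond independent spins. The sketch below computes this pair contribution only; the adaptive, recursive expansion of the talk is more elaborate, and the input values are hypothetical.

        # Pair-cluster entropy contribution for the inverse Ising problem.
        # Spins are +/-1; inputs are magnetizations m_i and the connected
        # correlation c_ij = <s_i s_j> - m_i m_j.
        import numpy as np

        def spin_entropy(m):
            """Entropy of a single +/-1 spin with magnetization m."""
            p = (1 + m) / 2
            return -p * np.log(p) - (1 - p) * np.log(1 - p)

        def pair_entropy(mi, mj, cij):
            """Entropy of two spins with given magnetizations and correlation."""
            sisj = cij + mi * mj                   # <s_i s_j>
            probs = [(1 + si * mi + sj * mj + si * sj * sisj) / 4
                     for si in (+1, -1) for sj in (+1, -1)]
            return -sum(q * np.log(q) for q in probs if q > 0)

        mi, mj, cij = 0.2, -0.1, 0.05              # hypothetical measurements
        dS = pair_entropy(mi, mj, cij) - spin_entropy(mi) - spin_entropy(mj)
        print("pair-cluster contribution:", dS)    # small => cluster negligible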




    2:50 PM - 3:40 PM

    Linear Time Interior Point Optimization Methods for Sparse Receptive Field Estimation

    Eftychios A. Pnevmatikakis, Department of Statistics, Columbia University.

    The increasing availability of electrophysiological and optical imaging data calls for the development of fast processing algorithms that scale reasonably with both the length and the size of the experiment. In this talk we present a second-order optimization algorithm for the estimation of penalized state space models, suitable in the case where only a few linear observations are available at each time step. We assume that the state transition and noise distribution are log-concave. In such a setup, and under certain conditions on the prior distribution, we derive a second-order algorithm that operates with linear complexity in time and storage requirements. This is done by computing the Newton direction using an efficient forward-backward scheme based on a series of low-rank updates. We formalize the conditions on the prior and present a large class of priors that satisfy them. These include both smooth and nonsmooth, sparsity-promoting priors (l_1, total variation, group sparsity norms), for which we employ an interior point modification of our algorithm without affecting its linear complexity. We apply our algorithm to spiking data, estimating high-dimensional quantities such as linear and quadratic models of nonstationary (time-varying) receptive fields.
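
    The key computational fact exploited above, that the Hessian of a state-space log-posterior is block-tridiagonal so the Newton direction costs O(T), can be illustrated with a banded solve. The toy below does MAP smoothing for a scalar Gaussian random-walk model; the talk's interior-point machinery handles far more general, nonsmooth priors.

        # O(T) MAP smoothing for a scalar Gaussian random-walk state-space
        # model: the Hessian is tridiagonal, so the Newton step is a single
        # banded solve with cost linear in T.
        import numpy as np
        from scipy.linalg import solve_banded

        rng = np.random.default_rng(2)
        T = 10000
        x_true = np.cumsum(0.1 * rng.standard_normal(T))  # latent random walk
        y = x_true + 0.5 * rng.standard_normal(T)         # noisy observations
        s2_obs, s2_dyn = 0.25, 0.01

        # Hessian of the negative log posterior: I/s2_obs + L/s2_dyn, where
        # L is the tridiagonal path-graph Laplacian.
        diag = np.full(T, 1 / s2_obs + 2 / s2_dyn)
        diag[[0, -1]] = 1 / s2_obs + 1 / s2_dyn
        off = np.full(T - 1, -1 / s2_dyn)

        ab = np.zeros((3, T))                             # banded storage
        ab[0, 1:] = off                                   # superdiagonal
        ab[1] = diag                                      # main diagonal
        ab[2, :-1] = off                                  # subdiagonal
        x_map = solve_banded((1, 1), ab, y / s2_obs)      # O(T) Newton step
        print("RMSE:", np.sqrt(np.mean((x_map - x_true) ** 2)))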




    3:40 PM - 4:10 PM

    Afternoon Break


    4:10 PM - 5:00 PM

    Understanding the Dynamical Aspects of Information Processing in Interacting Networks of Neural Activity

    Todd P. Coleman, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

    Advances in multi-unit neural recording are allowing neuroscientists to understand how the dynamic interaction between proximal and distal neural signals takes place in functionally specialized manners. Moreover, an over-abundance of data has now led to the problem of data deluge: a central current challenge in quantitative neuroscience is to develop parsimonious, scalable toolsets that can extract the meaningful aspects of the dynamical interaction between neural units from many simultaneous recordings.

    In this talk we demonstrate how characterizing the causal interactions between neural systems, and how they co-vary with function, provides novel insights into brain function. We demonstrate that (a) our graphical representation of interactions is concise and is equivalent to a graph derived from a generative statistical model, (b) our approach can be interpreted as a sequential prediction approach to characterize Granger's causality definition across arbitrary modalities, and (c) we provide statistical approaches to instantiate this on simulated and experimental datasets. We also describe provably good information-theoretic multi-scale modeling methods that provide more succinct characterizations of the most dominant effects. Experimental findings pertaining to ensemble recordings in primary motor cortex of an awake behaving monkey preceding a movement demonstrate (a) a persistent, dominant causal flow of information after a visually evoked stimulus and after movement, whose direction is consistent with that found in local field potentials, and (b) an estimated speed of propagation of the causal flow at the neuronal level of 28.9 cm/s, approximately equal to the 30 cm/s wave speed found with local field potentials. These findings allude to a coupling between the dynamic information processing mechanisms at the level of local field potentials and neural spike trains.
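
    A minimal sketch of the sequential-prediction view of Granger causality mentioned above, on synthetic data (the talk's framework is far more general and includes information-theoretic, multi-scale refinements): X "Granger-causes" Y if the past of X improves one-step prediction of Y beyond Y's own past.

        # Pairwise Granger causality via AR prediction on synthetic data.
        import numpy as np

        rng = np.random.default_rng(3)
        T, lag = 5000, 2
        x = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):                    # y is driven by past x
            y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()

        def residual_var(target, sources, lag):
            """Variance of the one-step AR prediction error of target."""
            n = len(target)
            Z = np.column_stack([s[lag - k: n - k] for s in sources
                                 for k in range(1, lag + 1)])
            b, *_ = np.linalg.lstsq(Z, target[lag:], rcond=None)
            return np.var(target[lag:] - Z @ b)

        gc_xy = np.log(residual_var(y, [y], lag) / residual_var(y, [y, x], lag))
        gc_yx = np.log(residual_var(x, [x], lag) / residual_var(x, [x, y], lag))
        print(f"GC x->y: {gc_xy:.3f}   GC y->x: {gc_yx:.3f}")  # expect x->y >> y->x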




    Thursday (9:00 AM - 5:00 PM), July 28, 2011


    Morning Session I (9:00 AM - 10:20 AM)
    Neuronal Networks
    Chair: Sridevi V. Sarma




    9:00 AM - 9:40 AM

    Redundancy in Neural Populations

    Michael C. Gastpar, UC Berkeley and EPFL.

    Redundant signaling appears to be prevalent in many places in cortex. Through the use of information measures, we investigate the role of redundancy in neural populations, both in the sensory pathways (avian auditory cortex) and in motor cortex (M1 of primates). We find significant differences in redundancy levels for the same neural population under various experimental conditions and show how this establishes a role for redundancy in robust and efficient signal processing.

    Based on joint work with Kelvin So, Jose Carmena, Noopur Amin, and Frederic Theunissen.
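
    One common information-theoretic redundancy index (the abstract does not specify which of several definitions is used) is the sum of single-neuron informations minus the joint information, R = sum_i I(X_i; S) - I(X_1,...,X_n; S), with R > 0 indicating redundancy and R < 0 synergy. A toy discrete computation:

        # Redundancy of two noisy "neurons" driven by a binary stimulus.
        import numpy as np

        def mutual_info(p):
            """I(X;Y) in nats for a joint probability table p[x, y]."""
            px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            return float((p[nz] * np.log(p[nz] / (px * py)[nz])).sum())

        # p[s, x1, x2]: stimulus s uniform; each neuron reports s with
        # probability 0.8, independently given s.
        p = np.zeros((2, 2, 2))
        for s in range(2):
            for x1 in range(2):
                for x2 in range(2):
                    f = lambda x: 0.8 if x == s else 0.2
                    p[s, x1, x2] = 0.5 * f(x1) * f(x2)

        I1 = mutual_info(p.sum(axis=2))          # I(S; X1)
        I2 = mutual_info(p.sum(axis=1))          # I(S; X2)
        Ij = mutual_info(p.reshape(2, 4))        # I(S; X1, X2)
        print("redundancy R =", I1 + I2 - Ij)    # positive: redundant code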




    9:40 AM - 10:20 AM

    Multi-network Modeling of Learning in Neuronal Networks

    Erol Gelenbe, Department of Electrical and Electronic Engineering, Imperial College.

    This talk investigates the effect of correlated spiking activity and molecular flows in neuronal networks. A probabilistic multi-network model is proposed to represent such correlated flows of spikes and molecules, and a means for computing the excitation of neuronal cells is proposed in this framework. The results are used to analyse plausible models of learning in neuronal networks that may carry out learning similar to backpropagation in this multi-network context.
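
    For orientation, the sketch below shows the fixed-point excitation computation in Gelenbe's classical random neural network, the single-network model that (we assume) the multi-network framework above builds on. Steady-state excitation probabilities satisfy q_i = Lambda+_i / (r_i + Lambda-_i), where the total excitatory and inhibitory arrival rates themselves depend on q. All parameter values are hypothetical.

        # Fixed-point excitation probabilities in a random neural network.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 4
        Lam = rng.uniform(0.1, 0.5, n)        # exogenous excitatory rates
        lam = rng.uniform(0.0, 0.2, n)        # exogenous inhibitory rates
        r = np.full(n, 1.0)                   # neuron firing rates
        Wp = rng.uniform(0.0, 0.3, (n, n))    # excitatory weights w+(j, i)
        Wm = rng.uniform(0.0, 0.1, (n, n))    # inhibitory weights w-(j, i)
        np.fill_diagonal(Wp, 0.0)
        np.fill_diagonal(Wm, 0.0)

        q = np.full(n, 0.5)
        for _ in range(200):                  # fixed-point iteration
            lp = Lam + q @ Wp                 # total excitatory arrivals
            lm = lam + q @ Wm                 # total inhibitory arrivals
            q = np.minimum(lp / (r + lm), 1.0)
        print("excitation probabilities:", q)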




    10:20 AM - 10:50 AM

    Morning Break


    Morning Session II (10:50 AM - 12:10 PM)
    Computational Models
    Chair: Aldo Faisal




    10:50 AM - 11:30 AM

    A Computational Model of Neural Sensor Networks

    Kilian Koepsell, Redwood Center for Theoretical Neuroscience, University of California, Berkeley.

    We propose a model for sensory processing in recurrent neural networks. During development or learning, the network connectivity is adapted to the input statistics. After training, the dynamics of the network can be identified with a computation on stimuli. We use an online version of minimum probability flow (MPF) learning (Sohl-Dickstein et al., 2009) as a biologically plausible learning rule to train the neural network. In the special case that the training data are concentrated around a fixed set of patterns, the learning rule stores the patterns as fixed points and the network dynamics perform a clean-up computation. The model can thus be seen as learning to perform perceptual inference on noisy stimuli in an unsupervised way. We find that our model outperforms standard auto-associative memory networks in speed and retrieval robustness in this setting.

    Joint work with Christopher Hillar and Jascha Sohl-Dickstein.
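
    A compact batch sketch of MPF learning for storing binary patterns in a Hopfield-style network (the talk uses an online variant; this simplified version uses single-bit-flip connectivity). For the Ising energy E(x) = -x'Jx/2, flipping bit i changes the energy by 2 x_i (Jx)_i, so the MPF objective is K(J) = sum over patterns and flips of exp(-x_i (Jx)_i).

        # Minimum probability flow learning of a Hopfield-style network.
        import numpy as np

        rng = np.random.default_rng(5)
        n, n_pat = 30, 5
        X = rng.choice([-1.0, 1.0], size=(n_pat, n))   # patterns to store

        J = np.zeros((n, n))
        lr = 0.1
        for _ in range(500):
            H = X @ J                          # (Jx)_i for each pattern
            K = np.exp(-X * H)                 # per-pattern, per-flip flow
            G = -(K * X).T @ X                 # dK/dJ
            G = (G + G.T) / 2                  # keep J symmetric
            np.fill_diagonal(G, 0.0)
            J -= lr * G / n_pat                # gradient descent on K

        # Stored patterns should now be fixed points of the dynamics
        # x_i <- sign((Jx)_i).
        print("all patterns are fixed points:",
              bool(np.all(np.sign(X @ J) == X)))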




    11:30 AM - 12:10 PM

    Mean-Field Theory for Non-Equilibrium Network Reconstruction and Applications to Neural Data

    John A. Hertz, NORDITA, Stockholm, and Niels Bohr Institute, Copenhagen.

    With the advancement of multi-electrode recording, an important and interesting question is arising: can we infer interactions between neurons from the simultaneous observation of spike trains from many neurons? In this talk, we first describe a toy version of this problem: how can we infer the interactions of an Ising model from observing its state samples? We will start with a short review of how this can be done for equilibrium systems and then study how dynamical mean-field (naive mean-field and TAP) theory can be developed for a non-equilibrium Ising model and exploited for inferring the network connectivity. Finally, we will describe how all this can be used for inferring connectivity from neural multi-electrode recordings.

    Joint work with Yasser Roudi, Kavli Institute for Systems Neuroscience, Trondheim.
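
    The simplest equilibrium instance of this program is naive mean-field inversion, where the couplings are read off from the inverse of the connected correlation matrix, J_ij = -(C^-1)_ij for i != j. The sketch below applies it to Glauber-sampled data from a small Ising model; the non-equilibrium and TAP refinements of the talk improve on this.

        # Naive mean-field reconstruction of Ising couplings from samples.
        import numpy as np

        rng = np.random.default_rng(6)
        n, T = 10, 100000
        J_true = 0.05 * rng.standard_normal((n, n))
        J_true = (J_true + J_true.T) / 2
        np.fill_diagonal(J_true, 0.0)

        # Glauber dynamics sampling of the equilibrium Ising model.
        s = rng.choice([-1.0, 1.0], n)
        samples = np.empty((T, n))
        for t in range(T):
            i = rng.integers(n)
            h = J_true[i] @ s
            s[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * h)) else -1.0
            samples[t] = s

        C = np.cov(samples.T)                  # connected correlations
        J_nmf = -np.linalg.inv(C)              # naive mean-field inversion
        np.fill_diagonal(J_nmf, 0.0)
        iu = np.triu_indices(n, 1)
        print("corr(J_true, J_nMF):",
              np.corrcoef(J_true[iu], J_nmf[iu])[0, 1])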




    12:10 PM - 2:00 PM

    Lunch


    Afternoon Session (2:00 PM - 5:00 PM)
    Large-Scale Neuronal Networks
    Chair: Kilian Koepsell




    2:00 PM - 2:50 PM

    Seizure Foci Localization and Onset Detection: A Multivariate Approach

    Sridevi V. Sarma, Department of Biomedical Engineering, Johns Hopkins University.

    Epilepsy is a neurologic disorder that affects 50 million people worldwide. Despite many new antiepileptic medications, over 30% of patients still have drug-resistant epilepsy. This has increased interest in alternative therapies including neurostimulation, both programmed chronic (open loop) and responsive (closed loop). Closed-loop therapies are thought to be most effective if directed at the seizure focus and at the time of or prior to seizure (‘ictal’) onset. However, focus localization and early onset detection from neural recordings remain largely open problems.

    In this talk, we propose a novel computational framework for localization and early onset detection that involves (i) constructing information-theoretic multivariate statistics of neural activity to localize the foci and uncover sub-clinical early ictal states; (ii) modeling neural dynamics in early ictal and ictal states, and their transition probabilities; and (iii) developing an optimal strategy to detect transitions from early ictal to ictal states from sequential neural measurements. “Quickest Detection” (QD) of seizure onsets is implemented using optimal control tools, and minimizes the absolute difference between the detected onset time and the annotated unequivocal onset time. Consequently, false positives decrease while true positives (with minimal delays) increase.

    We demonstrate our approach with intracranial EEG recordings from two human patients with drug-resistant epilepsy. These measurements contained multiple channels of simultaneously recorded data spatially distributed over the cortex. A series of time-dependent connectivity matrices was formed by calculating the pairwise mutual information between all channels before, during, and after seizures (annotated by clinicians). Each element of the connectivity matrix is a measure of pairwise dependency, but at each time the entire multivariate matrix structure over all electrodes was analyzed. Singular value decomposition (SVD) was used to identify the leading-order structure in these connectivity matrices by tracking the first singular value and vector at each time. We find in two patients that the first singular vector has a characteristic direction indicative of the seizure foci, and that the first singular value can be used for QD of seizure onset, which outperforms widely used detection algorithms.
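
    A schematic sketch of the statistic described above: per-window pairwise mutual-information matrices over channels, tracked through their leading singular value. Data, window length, and bin counts below are all stand-ins, and the quickest-detection stage is omitted.

        # Time-varying MI connectivity matrices tracked by leading SVD mode.
        import numpy as np

        def pairwise_mi(window, bins=8):
            """Pairwise MI matrix for a (time x channels) window."""
            Tw, C = window.shape
            dig = np.array([np.digitize(w, np.histogram_bin_edges(w, bins)[1:-1])
                            for w in window.T])
            M = np.zeros((C, C))
            for i in range(C):
                for j in range(i + 1, C):
                    p = np.histogram2d(dig[i], dig[j], bins=bins)[0]
                    p /= p.sum()
                    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
                    nz = p > 0
                    M[i, j] = M[j, i] = (p[nz] * np.log(p[nz] / (px * py)[nz])).sum()
            return M

        rng = np.random.default_rng(7)
        data = rng.standard_normal((5000, 16))         # stand-in recording
        data[2500:, :6] += 2 * np.sin(0.3 * np.arange(2500))[:, None]  # "onset"

        for start in range(0, 5000, 1000):
            sv = np.linalg.svd(pairwise_mi(data[start:start + 1000]),
                               compute_uv=False)
            print(f"window at {start:4d}: first singular value = {sv[0]:.2f}")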




    2:50 PM - 3:40 PM

    Cross-Level Coupling in Multi-Scale Brain Networks: Task-Dependent Coupling between Large-Scale LFP Patterns and Single Neurons

    Ryan T. Canolty, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley.

    Brains exhibit structure across a variety of different scales – from single neurons (micro-scale) to functional areas (meso-scale) to large-scale cortical networks (macro-scale). However, unlike complex systems designed by humans, the different levels of this multi-scale brain network often appear to interact with each other. That is, activity and information at one level can influence other levels, a phenomenon termed cross-level coupling (CLC).

    We recently showed that oscillatory phase coupling between multiple brain areas (macro-scale) coordinates the spiking of single neurons (micro-scale) [Canolty et al., 2010, PNAS]. However, different behavioral tasks require different subnetworks of connected brain areas to be active. Furthermore, a single brain area or neuron may perform different functions when presented with different task demands. For example, it has been shown that the direction tuning of cells can change in different conditions of a brain-machine interface (BMI) task [Carmena and Ganguly, 2009, PLoS Biology; Ganguly et al., 2011, Nature Neuroscience]. We hypothesized that observable changes in CLC may reflect this functional remapping.

    To test this hypothesis, we compared two distinct conditions employed in the BMI center-out reach task. In this paradigm, the same neurons are responsible for controlling two distinct physical plants (the subject’s arm during manual control, or an on-screen cursor in brain control), and thus one neuron may play a different role in each condition. We found that the specific pattern of CLC observed for a neuron was dependent on task demands, supporting the hypothesis that CLC may play a key role in the functional reorganization of dynamic brain networks.
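
    A minimal sketch of one standard CLC measurement at the single-channel level: phase-locking of spike times to a band-limited LFP rhythm, computed from the Hilbert phase at spike times. The signals are synthetic; the published analyses involve phase coupling across multiple areas, not a single channel.

        # Spike-LFP phase-locking value (PLV) on synthetic data.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs, dur = 1000.0, 60.0                      # Hz, seconds
        t = np.arange(0, dur, 1 / fs)
        rng = np.random.default_rng(8)
        lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)

        # Spikes preferentially emitted near the trough of the 8 Hz rhythm.
        rate = 5 * (1 + 0.8 * np.cos(2 * np.pi * 8 * t + np.pi))  # spikes/s
        spikes = rng.random(t.size) < rate / fs

        b, a = butter(3, [6 / (fs / 2), 10 / (fs / 2)], btype="band")
        phase = np.angle(hilbert(filtfilt(b, a, lfp)))

        plv = np.abs(np.exp(1j * phase[spikes]).mean())   # mean resultant length
        print(f"{spikes.sum()} spikes, phase-locking value = {plv:.2f}")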


    3:40 PM - 4:10 PM

    Afternoon Break


    4:10 PM - 5:00 PM

    Propagation of Chaos and Information Processing in Large Assemblies of Neurons

    Olivier Faugeras, INRIA, Sophia Antipolis.

    We derive the mean-field equations of completely connected networks of excitatory/inhibitory Hodgkin-Huxley and FitzHugh-Nagumo neurons and prove that there is propagation of chaos, i.e. that in the limit the neurons become a) independent (this is the propagation of chaos) and b) copies (with the same law) of a new individual, the mean-field limit. This is related to recently published experimental work by Ecker et al., Science 2010. We show the results of numerical experiments that confirm the propagation of chaos and indicate, through the notion of Fisher information, that this is optimal in terms of information processing. We also consider finite-size effects, i.e. the difference between the mean-field situation, where neuronal populations are of infinite size, and the real situation, where the size is finite, and show that the mean-field approximation is very good for populations of reasonable size.

    This is joint work with Diego Fasoli and Jonathan Touboul.
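
    A toy numerical check of the decorrelation statement (a stochastic FitzHugh-Nagumo network, not the talk's analysis): with all-to-all mean-field coupling and independent noise, average pairwise correlations should shrink as the population grows, consistent with propagation of chaos.

        # Pairwise correlations in mean-field coupled FitzHugh-Nagumo networks.
        import numpy as np

        def simulate(N, T=20000, dt=0.01, J=1.0, sigma=0.3, seed=0):
            rng = np.random.default_rng(seed)
            v = 0.1 * rng.standard_normal(N)
            w = np.zeros(N)
            V = np.empty((T, N))
            for t in range(T):
                coupling = J * (v.mean() - v)      # all-to-all mean-field term
                v = v + dt * (v - v**3 / 3 - w + 0.5 + coupling) \
                      + sigma * np.sqrt(dt) * rng.standard_normal(N)
                w = w + dt * 0.08 * (v + 0.7 - 0.8 * w)
                V[t] = v
            return V

        for N in [5, 20, 80]:
            V = simulate(N)
            C = np.corrcoef(V[5000:].T)            # discard the transient
            mean_corr = (C.sum() - N) / (N * (N - 1))
            print(f"N={N:3d}  mean pairwise correlation = {mean_corr:.3f}")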