CNS*2012 Workshop on


Methods of Information Theory in Computational Neuroscience

Wednesday and Thursday, July 25-26, 2012

Atlanta/Decatur, GA




Overview

    Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches driven by problems arising in neuroscience.

    A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.

    The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience, to exchange ideas, and to present their latest work.

    The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

    References

    Shannon, C.E., A Mathematical Theory of Communication, Bell System Technical Journal, vol. 27, pp. 379-423 and 623-656, 1948.

    Milenkovic, O., Alterovitz, G., Battail, G., Coleman, T.P., et al., Eds., Special Issue on Molecular Biology and Neuroscience, IEEE Transactions on Information Theory, vol. 56, no. 2, February 2010.

    Dimitrov, A.G., Lazar, A.A. and Victor, J.D., Information Theory in Neuroscience, Journal of Computational Neuroscience, vol. 30, no. 1, pp. 1-5, February 2011, Special Issue on Methods of Information Theory.


Standing Committee

    Alex G. Dimitrov, Department of Mathematics, Washington State University - Vancouver.

    Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

Program Committee



Program Overview





    Wednesday (9:00 AM - 5:10 PM), July 25, 2012


    Morning Session I (9:00 AM - 10:40 AM)
    Modeling Information Flows
    Chair: Christopher J. Rozell




    9:00 AM - 9:50 AM

    Effects of Dopamine Depletion on Information Flow between the Subthalamic Nucleus and External Globus Pallidus

    Bruno B. Averbeck, Laboratory of Neuropsychology, NIMH/NIH.

    Abnormal oscillatory synchrony is increasingly acknowledged as a pathophysiological hallmark of Parkinson’s disease, but what promotes such activity remains unclear. We used nonlinear time series analyses and information theory to capture the effects of dopamine depletion on directed information flow within and between the subthalamic nucleus (STN) and external globus pallidus (GPe). We compared neuronal activity recorded simultaneously from these nuclei in 6-hydroxydopamine-lesioned Parkinsonian rats with that in dopamine-intact control rats. After lesioning, both nuclei displayed pronounced augmentations of beta-frequency (~20 Hz) oscillations and, critically, information transfer between STN and GPe neurons was increased. Furthermore, temporal profiles of the directed information transfer agreed with the neurochemistry of these nuclei, being ‘excitatory’ from STN to GPe and ‘inhibitory’ from GPe to STN. Results thus indicate that information flow around the STN-GPe circuit is exaggerated in Parkinsonism, and further define the temporal interactions underpinning this.
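    As a concrete illustration of the kind of directed measure involved, the sketch below computes a plug-in estimate of transfer entropy between two binned spike trains. It is a generic stand-in for the directed information-flow analysis of the talk, not the authors' method; the surrogate data, variable names and history length are assumptions.

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y, k=1):
            """Plug-in estimate of transfer entropy x -> y (bits), k past bins."""
            joint = Counter()                          # (y_past, x_past, y_now)
            for t in range(k, len(x)):
                joint[(tuple(y[t-k:t]), tuple(x[t-k:t]), y[t])] += 1
            total = sum(joint.values())
            p_full = {s: c / total for s, c in joint.items()}
            p_yy, p_yx, p_yp = Counter(), Counter(), Counter()
            for (yp, xp, yn), p in p_full.items():
                p_yy[(yp, yn)] += p                    # p(y_past, y_now)
                p_yx[(yp, xp)] += p                    # p(y_past, x_past)
                p_yp[yp] += p                          # p(y_past)
            return sum(p * np.log2(p * p_yp[yp] / (p_yy[(yp, yn)] * p_yx[(yp, xp)]))
                       for (yp, xp, yn), p in p_full.items())

        rng = np.random.default_rng(0)
        stn = rng.integers(0, 2, 10000)                    # surrogate STN bins
        gpe = np.roll(stn, 1) ^ (rng.random(10000) < 0.1)  # GPe lags STN, noisy
        print(transfer_entropy(stn, gpe))                  # substantial flow
        print(transfer_entropy(gpe, stn))                  # near zero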




    9:50 AM - 10:40 AM

    Information Conveyed by a Population of Neurons

    Ehud Kaplan, The Neuroscience Department and the Friedman Brain Institute, Mount Sinai School of Medicine.

    Every brain function involves the dynamical interaction of many neurons, but until recently there was no way of estimating the amount of information that such coordinated activity by a neural population conveys to its targets. We have recently published a method for providing such estimates (Yu et al., Frontiers in Computational Neuroscience, 2010), and I shall illustrate its application with examples from the mammalian visual system. Like other methods, our approach calculates mutual information as the difference between the total and noise entropies of the responses to sensory stimulation. A remaining challenge is the estimation of the amount of information transmitted by a neural population in a non-sensory system, where we have no reliable way of estimating either the total or the noise entropy. I shall illustrate the problem, and discuss other ways of quantifying the complexity of the discharge in such systems.

    Supported by NIH grants EY16224, NIGMS 1P50-GM071558, and R21MH093868-02.
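    For readers unfamiliar with the entropy-difference calculation mentioned above, here is a minimal sketch: the total entropy is taken over all response words, the noise entropy over repeated presentations at a fixed point in the stimulus sequence, and their difference estimates the mutual information. The toy data and 3-bit word encoding are assumptions for illustration, not the method of Yu et al.

        import numpy as np

        def entropy(words):
            """Plug-in entropy (bits) of a sequence of discrete response words."""
            _, counts = np.unique(words, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(1)
        stim = rng.integers(0, 8, 200)          # one repeated stimulus sequence
        # responses[trial, time]: stimulus-locked words plus 1 bit of noise
        responses = (stim + rng.integers(0, 2, size=(50, 200))) % 8

        h_total = entropy(responses.ravel())
        h_noise = np.mean([entropy(responses[:, t]) for t in range(200)])
        print("I =", h_total - h_noise, "bits per word")   # about 2 bits here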




    10:40 AM - 11:10 AM

    Morning Break


    Morning Session II (11:10 AM - 12:00 PM)
    Modeling Population Coding
    Chair: Ehud Kaplan




    11:10 AM - 12:00 PM

    Neural Coding with Local Stimulus and Response Invariances

    Alex G. Dimitrov, Department of Mathematics, Washington State University - Vancouver.

    Biological sensory systems, and even more so individual neurons, do not represent external stimuli exactly. This obvious statement is a consequence of the almost infinite richness of the sensory world compared to the relative paucity of neural resources that are used to represent it. Even if the intrinsic uncertainty present in all biological systems is disregarded, there will always be a many-to-one representation of whole regions of sensory space by indistinguishable neural responses. When noise is included, the representation is many-to-many. One direction of research in sensory neuroscience, espoused by us and others, is to identify and model such regions, with the goal of eventually describing neural sensory function completely as the partitioning of sensory space into distinguishable regions, associated with different response states of a sensory system.

    In most cases, such analysis treats noise in a relatively simplified manner, due to the high-dimensional spaces that are involved. However, taking a simplistic view of noise can lead to a biased representation of neural coding schemes and of the mechanisms by which they are implemented. We have described and characterized two nontrivial noise sources, based on invariance rather than on random processes. In [1], we showed that response invariances to stimulus transformations such as time shifts, translations, rotations and dilations act as nontrivial noise sources, and can improve our understanding of neural coding when taken into account explicitly. In [2], we demonstrated the same principle on the response side: invariances to spike translations and dilations lead to nontrivial noise sources in the response code. The combination brings about a new view of biological signal processing, in which signal discrimination is closely associated with local invariances to both stimulus and response transformations.

    1. Dimitrov AG, Gedeon T: Effects of stimulus transformations on the perceived function of sensory neurons. J. Comput. Neurosci. 2006, 20:265-283.

    2. Dimitrov and Cummins: Dejittering of neural responses by use of their metric properties. BMC Neuroscience 2011 12 (Suppl 1):P50.
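    A minimal sketch of the dejittering idea in [2], under assumed Gaussian-bump responses: trials that differ only by a random time shift look variable, yet the trial-to-trial variance collapses once each trial is optimally realigned, showing that the shift invariance, not randomness, was the apparent noise source.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(200)
        template = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)   # idealized response
        trials = np.array([np.roll(template, rng.integers(-15, 16))
                           for _ in range(40)])

        raw_var = trials.var(axis=0).sum()
        # Dejitter: shift each trial back by its cross-correlation peak lag
        shifts = [np.argmax(np.correlate(tr, template, mode="same")) - 100
                  for tr in trials]
        aligned = np.array([np.roll(tr, -s) for tr, s in zip(trials, shifts)])
        print(raw_var, aligned.var(axis=0).sum())          # variance collapses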




    12:00 PM - 2:00 PM

    Lunch


    Afternoon Session (2:00 PM - 5:10 PM)
    Modeling Neural Correlations and Control
    Chair: Ilya Nemenman




    2:00 PM - 2:50 PM

    How Variability is Controlled in the Brain

    Tatyana O. Sharpee, Computational Neurobiology Laboratory, The Salk Institute for Biological Studies.

    I will discuss how the nervous system can control variability by correlating two different types of noise against each other. This general strategy will be illustrated with two example systems. The first is the retina, where coordination between the scatter of receptive field positions and the irregularities of receptive field shapes increases information transmission, compared with the situation where these two types of variability exist independently of each other. The second is neural coding in the high-level auditory area CLM of the songbird brain. Here, coordination between signal and noise correlations improved the discriminability of task-relevant stimuli, while noise correlations remained detrimental for the encoding of task-irrelevant or novel stimuli.




    2:50 PM - 3:40 PM

    The Orchestral Brain: Coding with Correlated and Heterogeneous Neurons

    Rava Azeredo da Silveira, Laboratoire de Physique Statistique, Ecole Normale Superieure.

    Positive correlations in the activity of neurons are widely observed in the brain. Previous studies have shown these correlations to be detrimental to the fidelity of population codes or at best marginally favorable compared to independent codes. Here, we show that positive correlations can enhance coding performance by astronomical factors. Specifically, the probability of discrimination error can be suppressed by many orders of magnitude. Likewise, the number of stimuli encoded -- the capacity -- can be enhanced by similarly large factors. These effects do not necessitate unrealistic correlation values and can occur for populations with a few tens of neurons. We further show that both effects benefit from heterogeneity commonly seen in population activity. Error suppression and capacity enhancement rest upon a pattern of correlation. In the limit of perfect coding, this pattern leads to a 'lock-in' of response probabilities that eliminates variability in the subspace relevant for stimulus discrimination. We discuss the nature of this pattern and suggest experimental tests to identify it.




    3:40 PM - 4:10 PM

    Afternoon Break


    4:10 PM - 5:10 PM

    Panel Discussion: The Need for Mathematical Neuroscience: Beyond Computation and Simulation

    Gabriel A. Silva (Organizer and Chair), Systems Neural Engineering and Theoretical Neuroscience, University of California, San Diego.

    Participants:

    Marius Buibas, Scientist, Brain Corporation, San Diego, California.

    Gaute T. Einevoll, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences (UMB).

    Robert Sinclair, Mathematical Biology Unit, Okinawa Institute of Science and Technology, Japan.




    Thursday (9:00 AM - 5:00 PM), July 26, 2012


    Morning Session I (9:00 AM - 10:40 AM)
    Formal Methods I
    Chair: Tatyana O. Sharpee




    9:00 AM - 9:50 AM

    A Kernel Density Approach to Estimating Mutual Information

    Conor Houghton, Department of Mathematics, Trinity College Dublin.

    In the usual approach to calculating the mutual information between a stimulus and a spiking response, the spike trains are turned into words, sequences of ones and zeros, using a temporal discretization. The frequency of each word is calculated and used as an estimate of its probability. Since, for useful discretization scales, there are a huge number of possible words, this process requires a large amount of data, something that is not always available in electrophysiology. Here, kernel density estimation is used to estimate probability distributions on the metric space of spike trains in the calculation of mutual information. This is a continuous approach, and the challenge is that the metric space may not have a natural integration measure; here, the distribution of spike trains is itself used to estimate one.

    Joint work with Josh Tobin.
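    The sketch below conveys the flavor of such an estimator: spike trains are compared through a metric (here, the L2 distance between exponentially filtered trains), a Gaussian kernel on that metric supplies the density estimates, and the empirical distribution of the observed trains serves as the integration measure. The filter, bandwidths and uniform stimulus prior are illustrative assumptions, not the construction of the talk.

        import numpy as np

        def smooth(spikes, t, tau=10.0):
            """Causal exponential filtering of spike times onto the grid t."""
            r = np.zeros(len(t))
            for s in spikes:
                r += (t >= s) * np.exp(-np.clip(t - s, 0, None) / tau)
            return r

        def kde_mi(trains_by_stim, t, sigma=2.0):
            """KDE estimate (bits) of MI between stimulus class and response."""
            vecs, labels = [], []
            for s, trains in enumerate(trains_by_stim):
                for spikes in trains:
                    vecs.append(smooth(np.asarray(spikes), t))
                    labels.append(s)
            vecs, labels = np.array(vecs), np.array(labels)
            d2 = ((vecs[:, None, :] - vecs[None, :, :]) ** 2).sum(-1)  # metric
            k = np.exp(-d2 / (2 * sigma ** 2))         # kernel on the metric
            mi = 0.0
            for i in range(len(labels)):
                p_r_s = k[i, labels == labels[i]].mean()   # p(r | s)
                p_r = k[i].mean()       # p(r), assuming equally likely stimuli
                mi += np.log2(p_r_s / p_r) / len(labels)
            return mi

        # Two stimuli evoking spikes at different, slightly jittered latencies
        rng = np.random.default_rng(4)
        t = np.arange(0.0, 100.0)
        trains = [[[20 + rng.normal(0, 2), 60] for _ in range(20)],
                  [[40 + rng.normal(0, 2), 60] for _ in range(20)]]
        print(kde_mi(trains, t))   # approaches 1 bit for well-separated classes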




    9:50 AM - 10:40 AM

    An Efficient Sequential Prediction Approach to Time-Varying Causal Inference on Neural Data with Non-Probabilistic Guarantees

    Sanggyun Kim, Department of Bioengineering, University of California, San Diego.

    The ability to track the dynamics of neural activity is crucial for better understanding how neural systems adapt their representations during learning, behavior and sensory stimulation. Recently, neuroscientific endeavors have evolved towards developing directional (causal) measures instead of relying on correlation. Many current approaches ignore the transient and non-stationary nature of neural systems, and provide a static view of the system’s dynamics. This might over-simplify the picture of the neural system’s spatio-temporal dynamics. In this study, we present an efficient approach to understanding the time-varying nature of the interaction between neural spike trains, using a sequential prediction methodology with non-probabilistic guarantees of goodness. Our approach measures time-varying causality by comparing two dynamic, provably good, sequential predictors - one with the candidate “effector” from the past and the other without - when modeling the influence of one sequence on another. We construct an efficient sequential prediction algorithm that works well for all possible neural sequences, in the sense that its performance is as good as that of the best expert in a given reference class. Moreover, this algorithm can be implemented efficiently, because the predictor is the solution to an optimal transportation problem, which can be solved efficiently. The proposed approach was tested on simulated data, and then applied to neural spike trains recorded in the primary motor cortex of a monkey trained to perform a visuomotor task. The time-varying interactions present in the simulated data were predicted accurately. When applied to the real neural data, our approach tracked the time-varying causal interactions of neurons, which were consistent with our previous findings, and provided more detailed spatio-temporal dynamics.

    Joint work with Todd P. Coleman.
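    To convey the core idea in code, the sketch below scores the influence of x on y as the per-symbol reduction in cumulative log loss when a sequential predictor of y is also given x's past. Add-1/2 (Krichevsky-Trofimov style) smoothed counts stand in here for the provably good predictors of the talk; the binary data format and history length are assumptions. Windowing or discounting the counts yields a time-varying version of the same score.

        import numpy as np
        from collections import defaultdict

        def sequential_log_loss(y, contexts):
            """Cumulative log loss of an add-1/2 sequential predictor of
            binary y given a per-step discrete context."""
            counts = defaultdict(lambda: np.array([0.5, 0.5]))
            loss = 0.0
            for ctx, yt in zip(contexts, y):
                p = counts[ctx] / counts[ctx].sum()
                loss -= np.log2(p[yt])
                counts[ctx][yt] += 1
            return loss

        def causal_score(x, y, k=1):
            """Per-symbol log-loss reduction from adding x's past."""
            idx = range(k, len(y))
            ctx_y = [tuple(y[t-k:t]) for t in idx]
            ctx_xy = [tuple(y[t-k:t]) + tuple(x[t-k:t]) for t in idx]
            yk = [y[t] for t in idx]
            return (sequential_log_loss(yk, ctx_y)
                    - sequential_log_loss(yk, ctx_xy)) / len(yk)

        rng = np.random.default_rng(5)
        x = rng.integers(0, 2, 5000)
        y = np.roll(x, 1) ^ (rng.random(5000) < 0.2)       # y driven by x
        print(causal_score(x, y), causal_score(y, x))      # x -> y dominates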




    10:40 AM - 11:10 AM

    Morning Break


    Morning Session II (11:10 AM - 12:00 PM)
    Formal Methods II
    Chair: Gabriel A. Silva




    11:10 AM - 12:00 PM

    Short Term Network Memory via the Restricted Isometry Property

    Christopher J. Rozell, School of Electrical and Computer Engineering, Georgia Institute of Technology.

    Many researchers have postulated that the transient activity of a richly connected recurrent neural network can serve as a substrate for the short-term memory (STM) of input sequences. Past analyses using Gaussian input statistics have shown that the STM capacity of linear recurrent networks is limited to the number of nodes in the network. However, recent work in compressed sensing has shown that sparsity models (which are highly non-Gaussian) can be used to make strong guarantees on signal recovery from highly undersampled measurement systems. In this work we leverage these results to prove strong guarantees on the STM capacity of linear network architectures under sparsity assumptions on the input statistics. The main contribution of our work is to provide rigorous, non-asymptotic recovery guarantees for input sequences, illustrating that network STM capacities can be significantly higher than the number of nodes in the network. We provide both perfect recovery guarantees for finite inputs, as well as results on the recovery tradeoffs when the network is presented with an infinitely long input sequence. The latter analysis highlights the fact that when the network is faced with an infinitely long streaming input, there is an optimal recovery length for the system that balances errors due to forgetting against recovery errors.

    Joint work with Adam Charles and Han Lun Yap.
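    To make the setting concrete, the sketch below drives a small linear recurrent network with a sparse input sequence longer than the number of nodes, then recovers the entire sequence from the final network state alone by l1 minimization (plain ISTA). The random orthogonal recurrent matrix, Gaussian feed-forward weights and problem sizes are illustrative assumptions, not the constructions analyzed in the talk.

        import numpy as np

        rng = np.random.default_rng(3)
        M, T, S = 40, 120, 5               # nodes, sequence length, sparsity

        W = np.linalg.qr(rng.standard_normal((M, M)))[0]  # orthogonal recurrence
        z = rng.standard_normal(M) / np.sqrt(M)           # feed-forward weights
        # Final state x_T is a linear measurement of the input: columns W^k z
        A = np.column_stack([np.linalg.matrix_power(W, k) @ z for k in range(T)])

        u = np.zeros(T)
        u[rng.choice(T, S, replace=False)] = rng.standard_normal(S)  # sparse input
        x = A @ u                          # network state after T steps

        # ISTA for min_u 0.5*||A u - x||^2 + lam*||u||_1
        lam, L = 0.01, np.linalg.norm(A, 2) ** 2
        u_hat = np.zeros(T)
        for _ in range(2000):
            g = u_hat - (A.T @ (A @ u_hat - x)) / L
            u_hat = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

        print(np.linalg.norm(u_hat - u) / np.linalg.norm(u))  # small rel. error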




    12:00 PM - 2:00 PM

    Lunch


    Afternoon Session (2:00 PM - 5:00 PM)
    Computational Models
    Chair: Conor Houghton




    2:00 PM - 2:50 PM

    Using Energy Optimized Computation to Interpret Neurophysiological Observations: Some Single Neuron Insights

    William B. Levy, Laboratory of Systems Neurodynamics, University of Virginia.

    In general, the interpretation of experimental results relies on theoretical preconceptions. In neuroscience such preconceptions may arise from knowledge of statistical decision theory, control theory, communication theory, and so on. Specific examples of such preconception biases can be found in interpretations of the biophysics of a single neuron, e.g., the role(s) of various voltage-controlled conductances. Our position (Levy and Berger, ISIT 2012) is that conventional perspectives can be misleading and that the proper perspective is Nature's own. A first-principles optimization approach is our method of discovering Nature's perspective. The quantities being optimized are information, time, and energy expenditures. This talk describes our current model neuron and the implications of this model. These recent results include the necessary and sufficient conditions for preserving "relevant" statistical information while minimizing the energetic costs of this preservation. A simple extension of this model bridges these results about statistical (Fisher) information to Shannon information.

    Joint work with Toby Berger.




    2:50 PM - 3:40 PM

    Why Pairwise Interactions in Biological Data?

    Ilya Nemenman, Department of Physics, Emory University.

    Multiple experimental groups have reported that statistical dependences in large-scale biological data, from molecular profiling, to neural activity, to behavior, can be approximated well with maximum entropy models constrained by the observed correlations among pairs of variables (a.k.a. pairwise interaction models). Based on numerical simulations and theoretical analysis, in this talk I will argue that this is a property of a wide class of interaction networks in which, in the limit of large and dense networks, strong local higher-order interactions become hard to distinguish from many weak, non-local pairwise interactions.
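    As a concrete reference point, the sketch below fits a pairwise maximum entropy (Ising) model by gradient ascent on the likelihood, exact when the number of binary variables is small enough to enumerate all states (larger systems need Monte Carlo estimates of the model moments). The toy data, learning rate and iteration count are illustrative assumptions.

        import numpy as np
        from itertools import product

        def fit_pairwise_maxent(data, iters=2000, lr=0.1):
            """data: (samples, N) binary array. Fits p(s) ~ exp(h.s + s.J.s)
            with J upper-triangular, by matching first and second moments."""
            n_samp, N = data.shape
            states = np.array(list(product([0, 1], repeat=N)), dtype=float)
            m_data = data.mean(0)                    # observed means
            c_data = data.T @ data / n_samp          # observed pairwise moments
            h, J = np.zeros(N), np.zeros((N, N))
            for _ in range(iters):
                logp = states @ h + np.einsum('si,ij,sj->s', states, J, states)
                p = np.exp(logp - logp.max())
                p /= p.sum()
                m = p @ states                       # model means
                c = states.T @ (states * p[:, None]) # model pairwise moments
                h += lr * (m_data - m)               # moment matching
                J += lr * np.triu(c_data - c, 1)     # couplings for i < j
            return h, J

        rng = np.random.default_rng(6)
        data = (rng.random((2000, 5)) < 0.3).astype(float)  # toy binary data
        h, J = fit_pairwise_maxent(data)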


    3:40 PM - 4:10 PM

    Afternoon Break


    4:10 PM - 5:00 PM

    Video Compressive Sensing for Dynamic MRI

    Jianing Shi, Department of Electrical and Computer Engineering, Rice University.

    Recent theoretical and practical advances in video compressive sensing (CS) hold considerable promise for application to dynamic MRI, an imaging technique that captures a sequence of images. The most profound utility of dynamic MRI is to examine motion, which carries the biological signature of malfunction in various organs, especially the heart and the lung. Dynamic MRI can also examine brain function, exemplified by perfusion imaging. Compressive sensing enables signal recovery from a number of samples well below the Nyquist rate, thereby accelerating the acquisition process of MRI.

    Existing literature mainly employs two methodologies for dynamic imaging of organs such as the heart and the lung. The first method relies on a set of static imaging acquisitions while the subject holds their breath, and resorts to image registration to retrieve the motion. The second method tries to capture free-breathing motion by using fast imaging techniques, with a trade-off between image resolution and quality. Both methods have their limitations, and we envision that video compressive sensing can resolve this challenge.

    We propose a novel framework, inspired by a state-of-the-art video compressive sensing algorithm, to estimate motion and reconstruct high-fidelity dynamic MRI from partially sampled k-t data. Given highly under-sampled data in k-t space, we first estimate a low-resolution version of the video at a sequence of sub-sampled time points, based on a customized sensing matrix in the Fourier domain. The spatial and temporal resolution of this video is optimized for the best image quality, given the time-varying nature of dynamic imaging and the uncertainty principle. Optical flow is then estimated between consecutive frames of this reconstructed low-resolution video.

    Based on the partially sampled Fourier data and the motion estimate obtained from the optical flow, we use a convex optimization framework to reconstruct the dynamic MRI at full spatial resolution. For the best recovery performance, we minimize an objective function combining the wavelet sparsity of the MRI signal with its total variation (TV) in both the spatial and temporal dimensions; the objective function is minimized subject to two constraints, imposed in the least-squares sense, which (a) enforce consistency with the compressive measurements, and (b) impose the estimated optical flow between consecutive frames.
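    In symbols, the reconstruction described above takes roughly the following form (the notation here is ours, not the authors'):

        \min_{x}\ \|\Psi x\|_1 + \alpha\,\mathrm{TV}_{\mathrm{spatial}}(x) + \beta\,\mathrm{TV}_{\mathrm{temporal}}(x)
        \quad \text{s.t.} \quad \|\Phi x - y\|_2 \le \epsilon, \qquad
        \|x_{t+1} - \mathcal{W}_{v_t}(x_t)\|_2 \le \delta \ \ \forall t,

    where x is the image sequence, \Psi a wavelet transform, \Phi the k-t sampling operator, y the measured k-space data, and \mathcal{W}_{v_t} the operator that warps frame x_t along the estimated optical flow v_t.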

    We develop an efficient algorithm to solve for the resultant convex optimization using the alternating direction method based on augmented Lagrangian, further accelerated using Bregman divergence on a sequence of minimization subproblems.

    Our framework enables simultaneous motion extraction and high-fidelity video reconstruction in a computationally efficient manner, based on highly compressive k-t sampling, thereby making it possible to perform real-time dynamic MRI.

    Joint work with Aswin C. Sankaranarayanan, Christoph Studer, Richard G. Baraniuk.