CNS*2007 Workshop on
Methods of Information Theory in Computational Neuroscience
Wednesday, July 11, 2007
Toronto, Canada
Overview

Methods originally developed in information theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience.
A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.
The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience, exchange ideas, and present their latest work.
The workshop is targeted at computational and systems neuroscientists with an interest in methods of information theory, as well as information/communication theorists with an interest in neuroscience.
Organizers

Alex Dimitrov, Center for Computational Biology, Montana State University.
Aurel A. Lazar, Department of Electrical Engineering, Columbia University.
Program Overview
Wednesday, July 11, 2007 (9:00 AM – 5:10 PM)
Morning Session (9:00 AM – 12:10 PM)
Title: Coding and Neural Computation
Chair: Todd P. Coleman
9:00 AM – 9:40 AM
The Dendrite-to-Soma Input/Output Function of Single Neurons
Erik P. Cook, Department of Physiology, McGill University.
The discovery that an array of voltage- and time-dependent channels is present in both the dendrites and somas of neurons has led to a variety of models for single-neuron computation. Most of these models, however, are based on experimental techniques that use simplified inputs of either single synaptic events or brief current injections. In this study, we used a more complex time-varying input to mimic the continuous barrage of synaptic input that neurons are likely to receive in vivo. Using dual whole-cell recordings of CA1 pyramidal neurons, we injected long-duration white-noise current into the dendrites. The variance of this stimulus was adjusted to produce either low subthreshold or high suprathreshold fluctuations of the somatic membrane potential. Somatic action potentials were produced in the high-variance input condition.
Applying a systems-identification approach, we discovered that the neuronal input/output function was extremely well described by a model containing a linear band-pass filter followed by a nonlinear static gain. The estimated filters contained a prominent band-pass region in the 1 to 10 Hz frequency range that remained constant as a function of stimulus variance. The gain of the neuron, in contrast, varied as a function of stimulus variance. When the contribution of the voltage-dependent current Ih was eliminated, the estimated filters lost their band-pass properties and the gain regulation was substantially altered.
Using computer models, we found that a range of voltage-dependent channel properties can readily account for the experimentally observed filtering in the neuronal input/output function. In addition, the band-pass signal processing of the neuronal input/output function was most affected by the time dependence of the channels. A simple active channel, however, could not account for the experimentally observed change in gain. These results suggest that nonlinear voltage- and time-dependent channels contribute to the linear filtering of the neuronal input/output function and that channel kinetics shape temporal signal processing in dendrites.
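The systems-identification pipeline of this kind (estimate a linear filter by reverse correlation against the white-noise input, then a static gain curve by binning the filtered stimulus) can be sketched on synthetic data. Everything below, the kernel, the nonlinearity, and all parameter values, is invented for illustration; it is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experiment: white-noise current drives a
# known linear filter followed by a static nonlinearity.
n, dt = 20000, 0.001
stim = rng.standard_normal(n)

t = np.arange(0.0, 0.2, dt)
true_filter = np.exp(-t / 0.05) - np.exp(-t / 0.02)    # toy kernel
gen = np.convolve(stim, true_filter, mode="full")[:n]  # generator signal
resp = np.maximum(gen, 0.0) ** 1.5                     # static nonlinearity

# 1) Linear stage: for Gaussian white-noise input, cross-correlating the
#    response with the stimulus recovers the filter up to a scale factor
#    (Bussgang's theorem).
lags = len(true_filter)
est = np.array([np.dot(resp[k:], stim[:n - k]) for k in range(lags)])
est /= np.linalg.norm(est)

# 2) Static gain: bin the estimated generator signal and average the
#    response within each bin.
gen_est = np.convolve(stim, est, mode="full")[:n]
edges = np.quantile(gen_est, np.linspace(0.0, 1.0, 11))
idx = np.clip(np.digitize(gen_est, edges) - 1, 0, 9)
gain_curve = np.array([resp[idx == b].mean() for b in range(10)])

# Cosine similarity between the estimated and true filter (1.0 = perfect).
similarity = float(np.dot(est, true_filter / np.linalg.norm(true_filter)))
```

In this toy setting the recovered filter closely matches the true kernel, and the binned gain curve traces out the monotone nonlinearity; changing the stimulus variance rescales the gain curve while leaving the filter shape alone, the dissociation the abstract reports experimentally.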
9:40 AM – 10:20 AM
Broadband Coding with Dynamical Synapses
Benjamin Lindner, Max-Planck-Institut für Physik komplexer Systeme, Dresden.
Short-term synaptic plasticity (STP) comprises facilitation and depression processes. Although STP can alter the mean value and spectral statistics of the effective input to a neuron from presynaptic spike trains, its functional roles are not clear. In a steady-state condition, synaptic depression is generally considered to provide low-pass filtering of inputs, with facilitation providing high-pass filtering.
Here, we consider the general case of a model neuron receiving inputs from a population of independent Poissonian spike trains, and show, using both analytical results and simulations, that dynamic synapses can add or remove (depending on synaptic parameters) spectral power at low frequencies. The implications of these findings are demonstrated when a band-limited noise rate modulation of the Poissonian spike trains is considered. Information transmission, as measured by the spectral coherence between the rate modulation and the synaptic input, does not depend on frequency. This effect is also observed for the coherence between the rate modulation and the membrane voltage of the postsynaptic neuron.
In contrast to the prevalent view, in terms of information transmission, synaptic dynamics provide neither low- nor high-pass filtering of the input under steady-state conditions. Despite this lack of frequency dependence, there is a balance between facilitation and depression that optimizes total information transmission, and this balance can be modulated by a parameter associated with some forms of long-term plasticity.
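The frequency-resolved measure used here, spectral coherence between two signals, can be estimated by averaging cross- and auto-spectra over data segments. The toy channel below (a signal plus independent equal-power noise, whose true coherence is 1/2 at every frequency) is an invented stand-in for the synapse model, included only to illustrate the estimator and the kind of flat coherence profile the abstract describes.

```python
import numpy as np

def coherence(x, y, nseg=64):
    """Magnitude-squared coherence via segment-averaged periodograms
    (a bare-bones Welch-style estimator: no windowing, no overlap)."""
    L = len(x) // nseg
    X = np.fft.rfft(x[:nseg * L].reshape(nseg, L), axis=1)
    Y = np.fft.rfft(y[:nseg * L].reshape(nseg, L), axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)   # averaged cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)   # averaged auto-spectra
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(1)
n = 1 << 16
signal = rng.standard_normal(n)
observed = signal + rng.standard_normal(n)   # equal-power additive noise

coh = coherence(signal, observed)
mean_coh = float(coh[1:-1].mean())           # should hover around 0.5
```

Averaging over segments is essential: with a single segment the estimator returns 1 at every frequency regardless of the data, so the segment count trades frequency resolution against estimator variance.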
10:20 AM – 10:50 AM
Morning Break
10:50 AM – 11:30 AM
Optimal Computation with Probabilistic Population Codes
Wei Ji Ma, Department of Brain and Cognitive Sciences, University of Rochester.
Cortical activity is in general highly variable, yet behavioral data show that the brain can, in many cases, perform Bayes-optimal computations on sensory inputs. To understand this paradox, one needs to go beyond a mean-field analysis of neural populations and consider the structure of neural variability. Making use of this structure, a population pattern of activity on a single trial encodes not only a single "best guess" but an entire probability distribution over the stimulus. The quality of this encoding is measured by Fisher information.
I will describe how the specific form of variability observed in cortex makes it easy to implement computations in neural populations that preserve Fisher information and therefore appear Bayes-optimal at the behavioral level. Two examples of such computations will be discussed: multisensory cue integration and visual decision-making. This work opens the door to a new understanding of cortical variability.
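As a rough illustration of the cue-integration example (a sketch with invented tuning curves and parameters, not the speaker's model), the code below uses independent Poisson neurons with Gaussian tuning: for this variability structure, simply summing the two population activity patterns yields the same posterior peak as multiplying the two single-cue posteriors, which is the Bayes-optimal combination.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Poisson population: Gaussian tuning curves tiling the
# stimulus axis (all parameter values are invented).
prefs = np.linspace(-10.0, 10.0, 81)
sigma, gain = 2.0, 20.0

def rates(s):
    return gain * np.exp(-0.5 * ((s - prefs) / sigma) ** 2)

s_true = 1.3
r1 = rng.poisson(rates(s_true))          # cue 1 (e.g. visual)
r2 = rng.poisson(rates(s_true))          # cue 2 (e.g. auditory)

# For independent Poisson neurons the log-posterior over s is, up to a
# constant, sum_i r_i * log f_i(s) - sum_i f_i(s).
grid = np.linspace(-10.0, 10.0, 2001)

def log_post(r):
    F = gain * np.exp(-0.5 * ((grid[:, None] - prefs) / sigma) ** 2)
    return (r * np.log(F)).sum(axis=1) - F.sum(axis=1)

# Summing activities vs. multiplying posteriors: with dense tuning
# curves, sum_i f_i(s) is nearly constant in s, so both peak together.
peak_sum = float(grid[np.argmax(log_post(r1 + r2))])
peak_prod = float(grid[np.argmax(log_post(r1) + log_post(r2))])
```

The near-constancy of the summed tuning curves is what makes the shortcut work; with sparse or irregular tuning the two peaks would drift apart, which is one way of seeing why the structure of variability matters.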
11:30 AM – 12:10 PM
Recovery of Stimuli Encoded with an Ensemble of Hodgkin-Huxley Neurons
Aurel A. Lazar, Department of Electrical Engineering, Columbia University.
We formally investigate the encoding of a (weak) band-limited stimulus with a population of Hodgkin-Huxley neurons. Both multiplicative (tangential) coupling and additive coupling of the stimulus into the neural ensemble are considered. In the absence of the band-limited stimulus, the Hodgkin-Huxley neurons are assumed to be tonically spiking.
In the multiplicative case, each Hodgkin-Huxley neuron is I/O equivalent to an integrate-and-fire (IAF) neuron with a variable threshold sequence. Consequently, N Hodgkin-Huxley neurons are I/O equivalent to an ensemble of N IAF neurons. For Hodgkin-Huxley neuron models with deterministic conductances, we demonstrate an algorithm for stimulus recovery based on the spike trains of an arbitrary subset of the I/O-equivalent IAF neurons.
In the additive coupling case, we show that a Hodgkin-Huxley neuron with deterministic gating variables is I/O equivalent to a project-integrate-and-fire (PIF) neuron with a variable threshold sequence. The PIF neuron integrates a projection of the stimulus onto the phase response curve that is, in turn, modulated by a phase shift process. A complete characterization of the PIF neuron is given. The PIF neuron generates a spike whenever a threshold value is achieved; the values of the threshold sequence are explicitly given. Building on the I/O-equivalent PIF neuron, we provide an ensemble recovery algorithm for the stimulus and evaluate its performance. The results obtained are based on frame theory. If the gating variables of the Hodgkin-Huxley neurons are stochastic, a regularization formulation of the stimulus recovery problem is employed.
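A much-simplified sketch of the general recovery idea, using a single ideal IAF neuron rather than the Hodgkin-Huxley reduction treated in the talk: the t-transform turns each interspike interval into one linear measurement of the band-limited stimulus, and recovery solves the resulting linear system (here a plain pseudoinverse stands in for the frame-theoretic machinery). All parameter values below are arbitrary.

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
Om = 2 * np.pi * 10.0                      # stimulus bandwidth (rad/s)

def sinc(x):                               # sin(Om x) / (Om x)
    return np.sinc(Om * x / np.pi)

# A band-limited test stimulus: a random sum of shifted sinc kernels.
rng = np.random.default_rng(3)
centers = np.arange(0.05, 1.0, 0.05)
coef = rng.standard_normal(len(centers))
u = (coef[:, None] * sinc(t[None, :] - centers[:, None])).sum(axis=0)
u *= 0.3 / np.max(np.abs(u))               # keep |u| below the bias

# Ideal IAF encoding: integrate (u + b) / kappa, spike and reset at delta.
b, kappa, delta = 1.0, 1.0, 0.02
integ = np.cumsum((u + b) / kappa) * dt
spikes, level = [], 0.0
for k in range(len(t)):
    if integ[k] - level >= delta:
        spikes.append(t[k])
        level = integ[k]
spikes = np.array(spikes)

# t-transform: integral of u over each interspike interval is known
# from the spike times alone.
q = kappa * delta - b * np.diff(spikes)

# Represent u with sincs at the interval midpoints and solve q = G c.
s = 0.5 * (spikes[1:] + spikes[:-1])
G = np.empty((len(q), len(s)))
for k in range(len(q)):
    seg = np.arange(spikes[k], spikes[k + 1], dt)
    G[k] = sinc(seg[None, :] - s[:, None]).sum(axis=1) * dt
c = np.linalg.pinv(G, rcond=1e-6) @ q      # crude regularization
u_hat = (c[:, None] * sinc(t[None, :] - s[:, None])).sum(axis=0)

mid = (t > 0.1) & (t < 0.9)                # ignore boundary error
rel_err = float(np.linalg.norm(u_hat[mid] - u[mid])
                / np.linalg.norm(u[mid]))
```

Recovery works here because the spike rate (set by b, kappa, delta) exceeds the Nyquist rate of the stimulus; in the talk's setting the same role is played by the ensemble of I/O-equivalent neurons, and stochastic gating variables call for the regularized formulation mentioned above.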
Afternoon Session (2:00 PM – 5:10 PM)
Title: Inference
Chair: Don H. Johnson
2:00 PM – 2:40 PM
Estimation of Information from Neural Data: Why It Is Challenging, and Why Many Approaches Are Useful
Jonathan D. Victor, Department of Neurology and Neuroscience, Cornell University.
Entropy and information are quantities of interest to neuroscientists because of certain invariances that they possess, and because of the limits that they place on the performance of a neural system. However, estimating these quantities from data is often challenging. The fundamental difficulty is that undersampling affects estimation of information-theoretic quantities much more severely than other statistics, such as the mean and variance. The reason for this can be stated precisely in elementary mathematical terms. Moreover, it is tightly linked to the properties of information that make it a desirable quantity to calculate.
To surmount this fundamental difficulty, most approaches rely (perhaps implicitly) on a model for how spike trains are related, and estimate information-theoretic quantities based on that model. Approaches can be dichotomized according to whether the model represents spike trains in discrete or continuous time. Within each branch of this dichotomy, approaches can be further classified by the nature of the model for spike train relatedness. Stronger models generally handle the undersampling problem more effectively. However, they result in a downward bias in information estimates when the model assumptions ignore informative aspects of spike trains. This view indicates that information estimates are useful not only in situations in which several approaches provide mutually consistent results, but also in situations in which they differ.
Support: NEI 1RO1EY09314 to J.V., NIMH 1R01MH68012 to Dan Gardner.
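The undersampling problem can be made concrete with a toy calculation (an invented alphabet, not data from the talk): for a uniform distribution over 256 "spike words" the true entropy is exactly 8 bits, and the naive plug-in (maximum-likelihood) estimator undershoots it badly whenever the sample is smaller than the alphabet.

```python
import numpy as np

rng = np.random.default_rng(4)

K = 256                      # alphabet of possible spike words
true_H = float(np.log2(K))   # 8 bits for the uniform distribution

def plugin_entropy(counts):
    """Naive plug-in entropy estimate, in bits."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mean_estimate(n_samples, reps=200):
    ests = [plugin_entropy(np.bincount(rng.integers(K, size=n_samples),
                                       minlength=K))
            for _ in range(reps)]
    return float(np.mean(ests))

H_small = mean_estimate(64)      # fewer samples than symbols
H_large = mean_estimate(16384)   # well-sampled regime

# First-order (Miller-Madow) bias is about (K - 1) / (2 N ln 2) bits:
# negligible at N = 16384, but on the order of bits at N = 64, where the
# correction itself breaks down. That is the regime the talk addresses.
bias_small = true_H - H_small
bias_large = true_H - H_large
```

The bias is always downward for the plug-in estimator, mirroring the abstract's point that model assumptions (here, none at all) control how gracefully an estimator degrades under undersampling.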
2:40 PM – 3:20 PM
Entropy Estimation: Coincidences, Additivity, and Uninformative Priors
Ilya Nemenman, Los Alamos National Laboratory
To analyze the importance of various features in neural spike trains, one often wants to estimate their information content under different approximations. Insufficient sample size, and the consequent estimation bias, is the usual problem that limits this approach. Correspondingly, the development of methods for estimating entropic quantities from small samples has been a hot topic lately.
As a result, we now understand that, in the worst case, estimating the entropy of a variable is only marginally simpler than estimating the variable's entire probability distribution. However, for limited classes of probability distributions, entropy estimation can be much simpler, sometimes requiring only about the square root of the number of samples that the worst-case result suggests. One particular way to achieve this square-root improvement can be derived by re-examining standard Bayesian "uninformative" priors, relating them to coincidence-counting methods (known since the 1930s as the capture-release-recapture technique for estimating population sizes), and using the additivity of entropy to control the bias.
I will describe this method in detail and briefly illustrate its power on data from the blowfly H1 model system, which I will discuss beforehand in a talk at the main conference.
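The square-root improvement has a classical intuition behind the coincidence-counting connection: for a near-uniform distribution, the probability that two independent draws coincide is about 2^(-H), so counting coincident pairs estimates the entropy from only on the order of sqrt(2^H) samples. The numbers below are an invented illustration of that intuition, not the estimator presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(5)

K = 4096                  # uniform alphabet: true entropy = 12 bits
N = 512                   # roughly 8 * sqrt(K) samples
x = rng.integers(K, size=N)
counts = np.bincount(x, minlength=K)

# Coincidence estimate: the fraction of sample pairs landing on the
# same symbol approximates 2**(-H) for near-uniform distributions.
pairs = N * (N - 1) / 2.0
coincident = float((counts * (counts - 1) / 2.0).sum())
H_coinc = float(-np.log2(coincident / pairs))

# The plug-in estimate from the same sample cannot exceed log2 of the
# number of distinct symbols seen, so it falls far short of 12 bits.
p = counts[counts > 0] / N
H_plugin = float(-(p * np.log2(p)).sum())
```

With 512 samples the coincidence count sits near its expected value of about 32, landing the estimate close to 12 bits, while the plug-in estimate is capped near 9 bits; strictly, the coincidence statistic targets the collision (Rényi-2) entropy, which coincides with the Shannon entropy only for uniform distributions, which is why the general method needs the Bayesian machinery described above.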
3:20 PM – 3:50 PM
Afternoon Break
3:50 PM – 4:30 PM
Querying for Relevant Stimuli
Alexander Dimitrov, Center for Computational Biology, Montana State University.
The use of natural stimuli for studying sensory systems has been instrumental to recent breakthroughs in sensory systems neuroscience. However, more and more researchers are raising questions about hidden assumptions in this technique. A complementary approach that may resolve some of the issues found in natural-stimulus techniques takes the animal-centric point of view. It asks the question: "Can the characteristics of behaviorally relevant stimuli be determined objectively by querying the sensory systems themselves, without making strong a priori assumptions concerning the nature of these stimuli?"
In the work presented here, we transform this general question into a question about decoding sensory stimuli, and test it in the cricket cercal sensory system. The answer to the original question is essentially positive; however, the decoding has to be performed very carefully. We use adaptive sampling tools to guide the stimulus to its "optimal" distribution, and remove multiple invariant subspaces generated by temporal jitter, dilation and scaling, before characterizing the stimulus.
4:30 PM – 5:10 PM
Using Convex Optimization for Nonparametric Statistical Analysis of Point Processes
Todd P. Coleman, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.
Point process models have been shown to be useful in characterizing neural spiking activity as a function of extrinsic and intrinsic factors. Most point process models of neural spiking are parametric, in part because they are efficiently computable. However, if the actual point process does not lie in the assumed parametric class of functions, misleading inferences can arise. Nonparametric methods are attractive because they make fewer assumptions, but most require excessively complex algorithms.
We propose a computationally efficient method for nonparametric maximum likelihood estimation when the conditional intensity function, which characterizes the point process in its entirety, is assumed to satisfy a Lipschitz continuity condition. We show that, by exploiting the structure of the likelihood function of a point process, the problem becomes efficiently solvable via Lagrangian duality. We compare our nonparametric estimation method to the most commonly used parametric approaches on goldfish retinal ganglion cell data. In this example, our nonparametric method achieves a better absolute goodness-of-fit than any of the parametric approaches analyzed.
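As a minimal illustration of the objects involved (a sketch with invented numbers, not the proposed estimator): the log-likelihood of a point process with conditional intensity lambda(t) and spikes t_1, ..., t_n on [0, T] is sum_i log lambda(t_i) minus the integral of lambda over [0, T], and any candidate intensity can be scored directly against observed spikes.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate an inhomogeneous Poisson process on a fine grid (Bernoulli
# approximation: at most one event per bin).  The sinusoidal intensity
# is an invented example.
dt, T = 1e-3, 100.0
t = np.arange(0.0, T, dt)
lam_true = 10.0 + 8.0 * np.sin(2.0 * np.pi * t / 5.0)   # spikes/s
keep = rng.random(len(t)) < lam_true * dt
n_spikes = int(keep.sum())

def loglik(lam):
    """Point-process log-likelihood sum_i log lam(t_i) - int lam dt,
    with lam given on the simulation grid."""
    return float(np.log(lam[keep]).sum() - lam.sum() * dt)

ll_true = loglik(lam_true)                        # generating intensity
ll_flat = loglik(np.full_like(t, n_spikes / T))   # best constant rate

# Here the generating intensity scores well above the constant-rate fit;
# a goodness-of-fit comparison quantifies exactly this kind of gap.
```

The same likelihood is what the abstract's method maximizes over a Lipschitz-constrained class of intensity functions instead of a parametric family; its concavity in lambda is what makes the convex (Lagrangian-dual) formulation possible.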