CNS*2010 Workshop on


Methods of Information Theory in Computational Neuroscience

Thursday and Friday, July 29-30, 2010

San Antonio, TX




Overview

    Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches driven by problems arising in neuroscience.

    A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.

    The goal of the workshop is to bring some of these researchers together to discuss the challenges posed by neuroscience, exchange ideas, and present their latest work.

    The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

    References

    C.E. Shannon, A Mathematical Theory of Communication, Bell System Technical Journal, vol. 27, pp. 379-423 and 623-656, 1948.

    Milenkovic, O., Alterovitz, G., Battail, G., Coleman, T. P., et al., Eds., Special Issue on Molecular Biology and Neuroscience, IEEE Transactions on Information Theory, Volume 56, Number 2, February, 2010.


Standing Committee

    Alex G. Dimitrov, Department of Mathematics, Washington State University - Vancouver.

    Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

Program Committee

    Todd P. Coleman, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

    Simon R. Schultz, Department of Bioengineering, Imperial College.


Program Overview


    Thursday (9:00 AM - 5:10 PM), July 29, 2010


    Morning Session I (9:00 AM - 10:20 AM)
    Analysis of Neural Activity
    Chair: John A. Hertz




    9:00 AM - 9:40 AM

    Assumptions and Results in the Berger-Levy Energy Efficient Neuron Model

    Toby Berger, Department of Electrical and Computer Engineering, University of Virginia.

    In “A Mathematical Theory of Energy Efficient Neural Computation and Communication” (IEEE Trans. Inform. Theory, February 2010, 852-874), we introduced and analyzed a neuroscience-based mathematical model of how a neuron stochastically processes data and communicates information. In this talk we review the modeling assumptions that describe the idealized integrate-and-fire (IIF) neuron model in question and the mathematical results that pertain to any neuron satisfying those assumptions. As is the case with all theories of neuron behavior, no real neuron satisfies all the theory’s assumptions. Nonetheless, we consider the above-cited paper to be significant because it links neuroscience, information theory, mathematical statistics, and constrained optimization theory in quantitatively satisfying ways that agree with results from experimental neuroscience that were not explicitly built into either the model or its analysis. We feel that it also makes a compelling case for viewing neurons, especially neurons in primary sensory cortex, as entities that sophisticatedly maximize bits per joule (bpj) in their computations and communications. In this regard, today’s best computer simulations of neural network activity expend circa 10^9 times as much energy per neuron as do the neurobiological entities being simulated.

    The principal assumptions of the Berger-Levy IIF model are: (1) The spiking threshold is fixed. (2) No postsynaptic potential (PSP) builds up during the refractory period. (3) All synapses have the same weight, w > 0. (4) An afferent spike arriving at a time t_a not in the refractory period contributes w·u(t - t_a) to the PSP, where u(·) is a unit step function. (5) Axonal propagation perfectly preserves the durations of interspike intervals (ISIs). (6) The union of the spike arrival instants at all the synapses is an effectively Poisson process whose intensity may be considered to be homogeneous during any single ISI.

    The principal results of the Berger-Levy model are: (1) The long term probability density (pdf) of ISI durations is a delayed gamma distribution, f_T(t) = [b^κ (t-Δ)^(κ-1) e^(-b(t-Δ)) / Γ(κ)] u(t-Δ), where Δ is the refractory period duration and κ is a positive constant. (2) With b as in (1), the long term pdf of b times the reciprocal of the average excitation intensity during a randomly chosen ISI, call it G, is beta distributed with parameters κ and m-κ, where m is the smallest integer for which m·w exceeds the threshold; that is, f_G(g) = [Γ(m)/(Γ(κ)Γ(m-κ))] g^(κ-1) (1-g)^(m-κ-1), 0 ≤ g ≤ 1. (3) A careful distinction is drawn and quantified between the duration of a randomly chosen ISI and the duration of an ISI containing a randomly chosen instant. Result (1) agrees with experimental studies of ISI duration statistics in primate sensory cortex, with the value of κ lying between 0.5 and 2.0. Result (2) predicts slow decay of the afferent excitation pdf at high excitation intensities, which is in keeping with the fact that the thousands of neurons that comprise the afferent cohort of a neuron operate largely asynchronously.

    Joint work with William B. Levy.
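
    The following minimal sketch (Python; not the authors' code, and with arbitrary, hypothetical parameter values) simulates the IIF neuron under the assumptions above for a single fixed afferent intensity. In that special case each ISI is the refractory delay Δ plus an Erlang (integer-shape gamma) waiting time for m afferent arrivals; the fractional-κ delayed gamma of Result (1) arises only after averaging over the energy-efficient distribution of excitation intensities.

        import numpy as np

        # Minimal sketch of the idealized integrate-and-fire (IIF) neuron described above.
        # Hypothetical parameter values; step-function PSPs, a fixed threshold, equal
        # synaptic weights, and Poisson afferent input with a constant intensity.

        rng = np.random.default_rng(0)

        w = 1.0                          # common synaptic weight
        theta = 9.5                      # spiking threshold
        m = int(np.ceil(theta / w))      # smallest m with m*w > theta: arrivals needed to fire
        delta = 0.002                    # refractory period (s)
        lam = 5000.0                     # afferent excitation intensity (arrivals/s)
        n_isi = 100_000

        # With step-function PSPs and no leak, the neuron fires as soon as m afferent spikes
        # have arrived after the refractory period ends, so each ISI is
        # delta + (sum of m exponential inter-arrival times) = delta + Erlang(m, lam).
        isi = delta + rng.gamma(shape=m, scale=1.0 / lam, size=n_isi)

        print(f"m = {m}")
        print(f"mean ISI : {isi.mean():.5f} s (theory {delta + m / lam:.5f} s)")
        print(f"ISI var  : {isi.var():.3e}   (theory {m / lam ** 2:.3e})")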




    9:40 AM - 10:20 AM

    Finite Size Effects and Information in Neural Networks

    Michael Buice and Carson C. Chow, Laboratory of Biological Modeling, NIDDK, NIH.

    Neural network population equations (Wilson-Cowan, Cohen-Grossberg, etc.) have led to important insights but there is no systematic method for relating detailed neural dynamics to population equations. For example, such equations do not include information about correlations between neuron firing times. We present a systematic approach for analyzing network activity dynamics of synaptically coupled neurons in terms of the network size. The resulting expansion governs the dynamics of the information of the network configuration and can thus be used to calculate correlations of the activity and the configuration. Using this approach we can also derive "effective" activity equations which include effects such as firing time information (e.g. degree of synchrony). We expect these results to lead to new approaches to correlation based learning and inference in network dynamics.




    10:20 AM - 10:50 AM

    Morning Break


    Morning Session II (10:50 AM - 12:10 PM)
    Analysis of Neural Spike Trains
    Chair: Ron Meir




    10:50 AM - 11:30 AM

    A Sequential Prediction Approach to Generalize Granger's Notion of Causality with Application to Inferring Causal Relationships in Ensemble Neural Spike Train Recordings

    Todd P. Coleman, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

    Advances in recording technologies have given neuroscience researchers access to large amounts of data, in particular, simultaneous recordings of neural signals. Recent emphasis has been placed on developing causal measures of influence in network models of brain function that capture dynamic, complex relationships present in the data. Traditionally, many neuroscientists have used variants of Granger causality to infer causal network structure.

    Here, we revisit Granger's original statement about a statistical measure of causality - and emphasize what he noted about prediction. With this, we use a sequential prediction viewpoint to develop causal measures. Under a specific measure of loss in sequential prediction (the log loss), we demonstrate that the mathematization of Granger's statement embodied in a sequential prediction framework leads to the directed information as a measure of causality. Its non-negativity, along with necessary and sufficient conditions for being zero, makes it directly applicable to understanding causality. We subsequently show that the notion of "causal conditioning" introduced in the information theory community is a natural and provably good way to understand network measures of causality beyond two random processes. Moreover, this approach is applicable to arbitrary modalities. We use point process likelihood approaches to develop provably good estimators of the directed information. The procedure is tested on a simulated network of neurons, for which it correctly identifies all of the pairwise relationships (whether there is an influence or not) and yields a quantity which can be interpreted as the strength of each pairwise influence. We subsequently estimate network-level causal interactions by estimating the causally conditioned directed information to infer the directed causal network structure in more than two simultaneous recordings. The estimation procedure correctly identifies the network structure and can differentiate between cascading/proxy influences. We then apply this procedure to analyze ensemble spike train recordings in primary motor cortex of an awake monkey while performing target reaching tasks. The procedure identified strong structure in the estimated causal relationships, the directionality of which is consistent with predictions made from the wave propagation of simultaneously recorded local field potentials.
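
    As a toy illustration of the sequential prediction viewpoint (a plug-in sketch on simulated data, not the estimator developed in the talk), the snippet below compares two predictors of a binarized spike train Y under log loss: one conditioned on Y's own previous bin, and one also conditioned on the previous bin of a putative cause X. The average reduction in log loss estimates a first-order directed information rate; the Markov order, the smoothing, and all numbers are assumptions made for the example.

        import numpy as np

        # Plug-in sketch of directed information X -> Y for binarized spike trains under a
        # first-order Markov assumption (illustrative only, not the talk's estimator).
        rng = np.random.default_rng(1)
        T = 200_000

        # Hypothetical ground truth: X drives Y with a one-bin delay.
        x = rng.binomial(1, 0.2, size=T)
        x_past = np.concatenate(([0], x[:-1]))
        y = rng.binomial(1, np.where(x_past == 1, 0.5, 0.05))

        def avg_log_loss(target, context, alpha=1.0):
            """Average log2 loss of a Laplace-smoothed predictor P(target | context)."""
            n_ctx = context.max() + 1
            ones = np.bincount(context, weights=target, minlength=n_ctx)
            tot = np.bincount(context, minlength=n_ctx)
            p1 = (ones + alpha) / (tot + 2 * alpha)
            p = np.where(target == 1, p1[context], 1.0 - p1[context])
            return -np.mean(np.log2(p))

        y_t, y_prev, x_prev = y[1:], y[:-1], x[:-1]
        loss_y_only = avg_log_loss(y_t, y_prev)                  # predictor uses Y_{t-1}
        loss_y_and_x = avg_log_loss(y_t, y_prev * 2 + x_prev)    # predictor uses (Y_{t-1}, X_{t-1})

        # Reduction in log loss ~ directed information rate (bits per bin).
        print(f"estimated DI rate X -> Y: {loss_y_only - loss_y_and_x:.4f} bits/bin")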




    11:30 AM - 12:10 PM

    Signal Detection in Neural Spike Trains

    Rama Ratnam, Department of Biology, University of Texas at San Antonio.

    The nervous system processes sensory information in real time. In any given sensory modality, neural pathways sort through multiple overlapping signals, detecting and processing only those stimuli that are of behavioral relevance while suppressing others. In natural surroundings, for animals and humans alike, a rapid and reliable response to weak but important stimuli can mean the difference between life and death. Two questions suggest themselves: 1) At the proximate level, how does the nervous system cope with the challenge of detecting a sensory signal in noisy surroundings? 2) At the ultimate level, has selection favored neural mechanisms that facilitate the detection of weak signals?

    The work presented here attempts to answer both questions. At the proximate level, the biophysical characteristics of a neuron equip it to detect signals in real time; i.e., it is a sequential detector that can test hypotheses. We show that confidence levels are governed by the threshold for nerve impulse generation, and that the reliability and speed of signal detection are governed by the time constant for integration. At the ultimate level, we show that the statistical properties of the incoming sensor data profoundly affect the performance of the detector. Signal detection performance is best when sensory information is encoded in non-renewal spike trains that are negatively correlated. A consequence of this mechanism is that most sensory receptors are weak differentiators, responding only to changes in the input conditions. During the course of the presentation we will lay out problems in the statistical analysis of neural spike trains and propose theoretical directions for future research.
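
    To make the proximate-level picture concrete, here is a small illustrative simulation (not the speaker's model; every parameter is invented) of a neuron-like sequential detector: a leaky integrator accumulates noisy input and signals a detection at threshold crossing, so that raising the threshold buys reliability (fewer false alarms) at the cost of latency.

        import numpy as np

        # Illustrative leaky-integrator "sequential detector" (not the speaker's model):
        # evidence accumulates toward a threshold; the threshold is varied here to show
        # the speed/reliability trade-off discussed in the abstract.

        rng = np.random.default_rng(2)
        dt, tau, trial_len = 0.001, 0.020, 1.0     # step (s), integration time constant (s), trial length (s)
        n_steps = int(trial_len / dt)

        def detection_times(signal, threshold, n_trials=500, noise_sd=2.5):
            """First threshold-crossing time for each trial (NaN if never crossed)."""
            v = np.zeros(n_trials)
            t_detect = np.full(n_trials, np.nan)
            for step in range(n_steps):
                drive = signal + noise_sd * rng.normal(size=n_trials)
                v += dt * (-v / tau + drive)
                newly = np.isnan(t_detect) & (v >= threshold)
                t_detect[newly] = step * dt
            return t_detect

        for threshold in (0.02, 0.04):
            hits = detection_times(signal=2.0, threshold=threshold)   # weak signal present
            fas = detection_times(signal=0.0, threshold=threshold)    # noise only
            print(f"threshold {threshold:.2f}: hit rate {np.mean(~np.isnan(hits)):.2f}, "
                  f"false-alarm rate {np.mean(~np.isnan(fas)):.2f}, "
                  f"median hit latency {np.nanmedian(hits) * 1000:.0f} ms")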




    12:10 PM - 2:00 PM

    Lunch


    Afternoon Session I (2:00 PM - 3:20 PM)
    Neural Coding I
    Chair: Byron Yu




    2:00 PM - 2:40 PM

    Encoding Natural Scenes with Neural Circuits with Random Thresholds

    Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

    We present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The natural scenes are modeled as space-time functions that belong to a space of trigonometric polynomials. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We demonstrate that neural spiking is akin to taking noisy measurements on the stimulus both for time and space-time varying stimuli.

    We formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and demonstrate that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases.

    Joint work with Eftychios A. Pnevmatikakis and Yiyin Zhou.
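
    A much-simplified sketch of the idea appears below (one ideal integrate-and-fire encoder, a purely temporal stimulus, invented parameters; not the authors' algorithm or their space-time setting): each interspike interval provides one linear measurement of the stimulus's trigonometric-polynomial coefficients, which are then recovered by regularized least squares.

        import numpy as np

        # Sketch: encode a trigonometric-polynomial signal with an ideal IF neuron,
        # then recover the Fourier coefficients from the spike times (illustrative only).
        rng = np.random.default_rng(3)
        T_len, K = 1.0, 5                          # signal duration (s), number of harmonics
        omega = 2 * np.pi / T_len
        coef = rng.normal(0, 1, size=2 * K + 1)    # [a0, a1..aK, b1..bK]

        def stimulus(t):
            out = coef[0] * np.ones_like(t)
            for k in range(1, K + 1):
                out += coef[k] * np.cos(k * omega * t) + coef[K + k] * np.sin(k * omega * t)
            return out

        # Ideal IF encoder: dv/dt = u(t) + bias; spike and reset by delta when v reaches delta.
        bias, delta, dt = 4.0, 0.02, 1e-5
        t_grid = np.arange(0, T_len, dt)
        u = stimulus(t_grid)
        v, spikes = 0.0, []
        for i, ui in enumerate(u):
            v += dt * (ui + bias)
            if v >= delta:
                spikes.append(t_grid[i])
                v -= delta
        spikes = np.array(spikes)

        # Each ISI (t_k, t_{k+1}) gives the linear constraint
        #   int_{t_k}^{t_{k+1}} u(t) dt = delta - bias * (t_{k+1} - t_k).
        def basis_integrals(t0, t1):
            row = [t1 - t0]
            for k in range(1, K + 1):
                row.append((np.sin(k * omega * t1) - np.sin(k * omega * t0)) / (k * omega))
            for k in range(1, K + 1):
                row.append((np.cos(k * omega * t0) - np.cos(k * omega * t1)) / (k * omega))
            return row

        Phi = np.array([basis_integrals(a, b) for a, b in zip(spikes[:-1], spikes[1:])])
        q = delta - bias * np.diff(spikes)

        ridge = 1e-6                               # regularization weight
        c_hat = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(2 * K + 1), Phi.T @ q)
        err = np.linalg.norm(c_hat - coef) / np.linalg.norm(coef)
        print(f"{len(spikes)} spikes, relative coefficient error {err:.3f}")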




    2:40 PM - 3:20 PM

    Neural Encoding and Decoding - a View from Optimal Filtering Theory

    Ron Meir, Department of Electrical Engineering, Technion.

    Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Moreover, given temporal constraints on decoding time, we demonstrate that dynamic adaptation is required even if the environment is static. Interestingly, this type of adaptation has been observed in several experimental settings. The mathematical framework developed, and the encouraging experimental verification of specific predictions, set the stage for a functional theory of neural encoding and decoding, which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.



    3:20 PM - 3:50 PM

    Afternoon Break


    Afternoon Session II (3:50 PM - 5:10 PM)
    Neural Coding II
    Chair: Mike DeWeese




    3:50 PM - 4:30 PM

    Dimensionality Reduction for Discrimination: Removal of Common Structures with iSTAC

    Alexander G. Dimitrov, Department of Mathematics, Washington State University - Vancouver.

    An important first step toward discovering general principles of sensory processing is to determine the correspondence between neural activity patterns and sensory stimuli. We refer to this correspondence as a "sensory neural code". Most approaches to determining the neural code implemented by a given neural system rely on estimating some type of model of this system from observations of its responses to a variety of stimuli. In general, there is little firm knowledge about the range of stimulus patterns that are perceived and encoded by the system, or about the mechanisms of the encoding. Determining these constraints must also be part of the experiment. There is substantial debate about the best way to choose stimuli in this situation. Common solutions include using simplified patterns that have been found to elicit robust responses from the system, using stimuli that are expected to be relevant to the system based on the observed habitat and behavior of the organism, and using a large set of randomly generated stimuli. Each of these methods offers advantages but also has limitations.

    In this talk we focus on the third approach, which is often called the method of white noise analysis, even when the stimulus is technically not white. It has proven highly effective in a variety of engineering applications. Some of the strengths of this method are that it typically requires less knowledge of the system, and makes fewer assumptions, compared to other approaches. With it, it is possible in principle to discover stimuli of interest, even with no a priori clues to the nature of these stimuli. The analysis of experiments should be less prone to biases introduced by the structure of the stimuli. Indeed, if the inputs are truly random, we do not expect to find that the system responds to a particular pattern P simply because P was the least objectionable of a small set of input patterns that we chose to consider. In practical experiments, however, it can be shown that even random input distributions can substantially bias model estimates. The effects of these biases can be amplified by inappropriate choices of models.

    In this contribution we consider a class of errors that arise when a model, intended to capture the coding behavior of the system, also captures structures in the stimulus distribution. We show how an appropriately chosen dimensionality reduction can avoid these errors. For clarity, we demonstrate these errors for particular types of stimulus, class of models, and protocols of model estimation. We will consider generalizations of the effects we observe, and of our proposed solution. Typically, the system of interest might be a neuron or small network. Here we use "virtual" cells, which are simulations based on simple mathematical models of coding properties. We use these artificial systems because they allow us to know the true underlying model, so that we can reliably evaluate the accuracy of analysis techniques that propose to discover this model. We use two such artificial systems to demonstrate that inaccurate proposed models that effectively capture the structure of the inputs can be judged, by likelihood ratio tests, to outperform the true model.

    Joint work with Graham Cummins.
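
    The snippet below is a toy version of this class of error (illustrative only; it uses a simple covariance correction rather than the iSTAC-based removal of common structure presented in the talk): when the stimuli are correlated, the raw spike-triggered average of a simulated "virtual" cell partly reflects the stimulus covariance rather than the cell's true filter.

        import numpy as np

        # Toy demonstration that a naively estimated receptive field absorbs structure of a
        # correlated stimulus distribution, and that correcting by the stimulus covariance
        # largely removes the bias in this simple case (not the authors' iSTAC procedure).
        rng = np.random.default_rng(4)
        D, N = 20, 200_000

        # Correlated Gaussian stimuli with a smooth (non-white) covariance.
        idx = np.arange(D)
        C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)
        X = rng.normal(size=(N, D)) @ np.linalg.cholesky(C).T

        # "Virtual" cell: linear-nonlinear spiking model with a known filter.
        k_true = np.exp(-0.5 * ((idx - 8.0) / 2.0) ** 2)
        k_true /= np.linalg.norm(k_true)
        spikes = rng.poisson(np.exp(X @ k_true - 2.0))

        sta_raw = (spikes @ X) / spikes.sum()      # raw spike-triggered average
        sta_corr = np.linalg.solve(C, sta_raw)     # corrected by the stimulus covariance

        def angle(a, b):
            cos = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        print(f"angle(raw STA, true filter)       = {angle(sta_raw, k_true):.1f} deg")
        print(f"angle(corrected STA, true filter) = {angle(sta_corr, k_true):.1f} deg")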




    4:30 PM - 5:10 PM

    Connecting Information-Theoretic and Model-Based Approaches to Understanding the Neural Code

    Jonathan Pillow, Department of Psychology and Neurobiology, University of Texas at Austin.

    A central problem in computational neuroscience is to understand the coding relationship between external sensory or motor variables and neural spike trains. Namely, what information do a neuron's spikes convey about an external stimulus? Although information-theoretic and model-based methods for addressing this question are often framed as competing alternatives, there is in fact a strong equivalence between them. Contrary to claims of being "assumption-" or "model-free", information-theoretic methods for neural data require the same types of modeling assumptions as those involved in specifying a probabilistic neural model.

    In this talk, I will review the link between mutual information and the log-likelihood under a neural encoding model, and will attempt to highlight some of the advantages to be gained by formulating an explicit model when computing information-theoretic quantities. I will then apply these insights to two problems of recent theoretical interest: (1) information-theoretic dimensionality-reduction methods for estimating a neuron's receptive field from electrophysiological recordings; and (2) model-based techniques for estimating the mutual information between a stimulus and a neuron's spike response that incorporate the history-dependence of neural spike trains.
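
    As a worked toy example of this link (simulated data, with a Bernoulli encoding model chosen purely for illustration), the mutual information can be written and computed as the expected log-likelihood ratio between the conditional model p(y|x) and the marginal p(y):

        import numpy as np

        # Sketch: under a known (or fitted) encoding model, I(X;Y) = E[log2 p(y|x) - log2 p(y)].
        # The model and numbers are illustrative, not from the talk.
        rng = np.random.default_rng(5)
        N = 500_000

        x = rng.normal(size=N)                              # scalar stimulus per bin
        p_spike = 1.0 / (1.0 + np.exp(-(2.0 * x - 1.0)))    # Bernoulli encoding model p(y=1|x)
        y = rng.binomial(1, p_spike)

        p_marg = y.mean()                                   # marginal spike probability p(y=1)
        ll_cond = np.where(y == 1, np.log2(p_spike), np.log2(1 - p_spike))
        ll_marg = np.where(y == 1, np.log2(p_marg), np.log2(1 - p_marg))

        mi_bits_per_bin = np.mean(ll_cond - ll_marg)        # plug-in estimate of I(X;Y)
        print(f"estimated mutual information ~ {mi_bits_per_bin:.3f} bits per bin")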




    Friday (9:00 AM - 5:10 PM), July 30, 2010


    Morning Session I (9:00 AM - 10:20 AM)
    Decoders of Neural Activity I
    Chair: Jonathan Pillow




    9:00 AM - 9:40 AM

    Neural Encoding, Decoding and Control: What Have We Learned from Brain Machine Interface Studies?

    Karim G. Oweiss, Department of Electrical and Computer Engineering, and Neuroscience, Michigan State University.

    Fundamental to understanding how our world is represented in our brain is the ability to observe the collective activity of ensembles of neurons acting in concert, and to correlate this activity with a sensory stimulus or an observed motor behavior. In the context of brain-machine interfaces, recent studies have demonstrated the ability of cortical neurons to adapt their firing patterns to an ever-changing sensorimotor experience. Likewise, decoding the observed neural signals has been limited by our ability to recognize distinct patterns of activity as subjects perform specific tasks, many of which are lost in neurologically impaired subjects. Despite the large volumes of physiological and behavioral data collected, our understanding of neural encoding and its natural plasticity mechanisms remains elusive. There are significant challenges that encumber our ability to decipher these mechanisms in order to enable more efficient, reliable and robust neural decoding to take place.

    I will demonstrate how some of these challenges can be overcome by re-examining the hypothesized elements of the neural code, namely precise spike timing, firing rate, and neuronal correlation. I will show how these elements can be used to build graphical models that characterize the dynamics of functional neuronal circuits. I will also show how our recently developed techniques for inferring functional and effective connectivity between cortical neurons can be used to characterize and quantify synaptic plasticity, for example, after sensory deprivation or while learning new tasks. I will conclude with a brief discussion on the potential of this framework to identify and dissect diseased neural circuits through selective, optimized intervention techniques to guide and harness neural plasticity that is otherwise brain controlled.




    9:40 AM - 10:20 AM

    The Ising Decoder: Reading Out the Activity of Large Neural Ensembles

    Simon R. Schultz, Department of Bioengineering, Imperial College.

    New technologies such as high-density multi-electrode array recording and multiphoton calcium imaging allow the activity of large numbers of neurons to be monitored. However, analysis tools have lagged behind the experimental technology, with most approaches limited to very small population sizes.

    In this talk, I will outline the Ising Decoder approach to decoding the activity of large ensembles, in the limit of short time windows where neuronal activity can be binarized without loss of information. In this approach, an Ising type model is fit to a set of training data, after which decoding can take place trial by trial (or instant by instant). By taking advantage of recent advances in machine learning approaches for learning model parameters, we have been able to scale our neural population decoder up to relatively large ensembles. A key problem that had to be solved in order to do this was practical computation of the partition function for large ensembles - an issue often ignored in applications of the Ising model to neuroscience, but which is essential for decoding. We have demonstrated the utility of this approach by using it to decode the activity of a population of simulated visual cortical neurones - comparing coding regimes from "mouse" to "monkey". We also demonstrate the approach on data from two-photon calcium imaging of complex spiking activity in the rodent cerebellum.

    This is joint work with M Schaub and D Cook.
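
    A stripped-down sketch of the decoding step follows (hand-picked Ising parameters for a small ensemble, with the partition function obtained by exact enumeration; the fitting of the model to training data and the machine-learning machinery needed at large scale are omitted, so this is illustrative only).

        import numpy as np
        from itertools import product

        # Ising-model decoding for a small ensemble: decode r as argmax_s log P(r|s),
        # with P(r|s) ~ exp(h_s.r + r.J_s.r/2) / Z_s and Z_s computed by enumeration.
        rng = np.random.default_rng(6)
        N, n_classes, n_test = 8, 3, 2000
        states = np.array(list(product([0, 1], repeat=N)), dtype=float)   # all 2^N patterns

        # Hypothetical class-conditional fields h and symmetric couplings J.
        h = rng.normal(-1.0, 0.7, size=(n_classes, N))
        J = rng.normal(0.0, 0.5, size=(n_classes, N, N))
        J = (J + np.transpose(J, (0, 2, 1))) / 2
        for s in range(n_classes):
            np.fill_diagonal(J[s], 0.0)

        log_un = np.stack([states @ h[s] + 0.5 * np.einsum('ri,ij,rj->r', states, J[s], states)
                           for s in range(n_classes)])
        logp = log_un - np.logaddexp.reduce(log_un, axis=1, keepdims=True)   # log P(r|s)

        # Decode exact samples from each class distribution (equal priors).
        correct = 0
        for _ in range(n_test):
            s_true = rng.integers(n_classes)
            r_idx = rng.choice(len(states), p=np.exp(logp[s_true]))
            correct += int(np.argmax(logp[:, r_idx]) == s_true)
        print(f"decoding accuracy on {n_test} samples: {correct / n_test:.2f}")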




    10:20 AM - 10:50 AM

    Morning Break


    Morning Session II (10:50 AM - 12:10 PM)
    Decoders of Neural Activity II
    Chair: Stephen LaConte




    10:50 AM - 11:30 AM

    Factor-Analysis Decoders for Higher Performance Neural Prostheses

    Byron Yu, Department of Electrical and Computer Engineering, Carnegie Mellon University.

    Neural prosthetic systems aim to assist disabled patients by translating neural activity into desired movements. One of the major challenges is to reliably estimate desired movements from the recorded activity. In this talk, I will consider the problem of classifying the neural activity to one of N possible reach endpoints. The activity of different neurons is often assumed to be independent, conditioned on the reach endpoint. However, we found that there can be meaningful correlation structure across the neural population, even when considering only trials corresponding to the same reach endpoint. To contend with and exploit such co-modulation in the neural activity, we designed decoders based on factor analysis, which attempts to capture the co-modulation using a small number of shared latent factors. We then applied these decoders to neural activity recorded using multi-electrode arrays in premotor cortex in macaque monkeys. We found that the factor-analysis decoders yielded higher classification accuracy than commonly-used decoders that assume conditional independence, thereby increasing the clinical viability of classification-based neural prosthetic systems.

    Joint work with Gopal Santhanam, Vikash Gilja, Stephen Ryu, Afsheen Afshar, Maneesh Sahani, and Krishna Shenoy.
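
    A minimal sketch of this decoding scheme on simulated data is given below (invented dimensions and response statistics, not the authors' recordings or code), using scikit-learn's FactorAnalysis to capture the shared trial-to-trial co-modulation with a few latent factors.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Factor-analysis classifier: fit one FA model per reach endpoint, then assign a
        # test trial to the endpoint whose model gives the highest likelihood.
        rng = np.random.default_rng(7)
        n_units, n_endpoints, n_train, n_test = 40, 4, 300, 200

        mu = rng.uniform(5, 20, size=(n_endpoints, n_units))    # endpoint-specific mean responses
        loadings = rng.normal(0, 2.0, size=(2, n_units))         # shared co-modulation directions

        def simulate(endpoint, n_trials):
            latent = rng.normal(size=(n_trials, 2))               # shared latent factors per trial
            noise = rng.normal(0, 1.0, size=(n_trials, n_units))  # private noise
            return mu[endpoint] + latent @ loadings + noise

        train = {e: simulate(e, n_train) for e in range(n_endpoints)}
        models = {e: FactorAnalysis(n_components=2).fit(train[e]) for e in range(n_endpoints)}

        correct = 0
        for e_true in range(n_endpoints):
            X_test = simulate(e_true, n_test)
            # log-likelihood of each test trial under each endpoint's FA model (equal priors)
            ll = np.column_stack([models[e].score_samples(X_test) for e in range(n_endpoints)])
            correct += np.sum(ll.argmax(axis=1) == e_true)

        print(f"FA-decoder accuracy: {correct / (n_endpoints * n_test):.2f}")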




    11:30 AM - 12:10 PM

    Extension of the Berger-Levy Model to Neurons with Unequal Synaptic Weights

    Toby Berger, Department of Electrical and Computer Engineering, University of Virginia.

    We analyze an extension of the Berger-Levy model to neurons with unequal synaptic weights. (See T. Berger and W. B. Levy, “A Mathematical Theory of Energy Efficient Neural Computation and Communication”, IEEE Trans. Inform. Theory, February 2010, 852-874). This extension is significant because every neuron continually employs synaptic weight modification in order to best utilize the joint statistics of the neurons that comprise its afferent cohort. Although we treat unequal synaptic weights, our analysis is confined to time intervals whose durations are sufficiently short (e.g., single visual fixations lasting less than a second) that each synaptic weight may safely be considered to remain constant during the analysis interval.

    Letting t=0 denote the beginning of an interspike interval (ISI), we adhere to the B-L assumption that the post-synaptic potential (PSP) is zeroed in the axon hillock and in the axon’s initial segment immediately after t=0. However, we also analyze cases in which contributions to PSP that originated at a synapse prior to t=0 but have yet to travel the entire path from that synapse through the dendrite tree to the axon hillock are not zeroed; rather, they contribute to PSP not only after the refractory period ends at t=Δ but even during the interval 0 < t < Δ. Finally, we sometimes consider a spiking threshold that decays inversely with time during an ISI rather than the constant threshold assumed by B-L. All other assumptions of the B-L model remain in force.

    Our principal finding is that the B-L result that the long term pdf of an energy-efficient neuron’s ISI durations has a delayed gamma distribution continues to hold without having to assume that the synaptic weights all are equal. Also, though the scaled reciprocal of the neuron’s long term excitation intensity no longer is beta distributed when unequal weights are admitted, its pdf can be determined straightforwardly by numerical solution of a Type-1 Fredholm integral equation. When the threshold decays as the reciprocal of time and the weights of the time-ordered sequence of bombarded synapses are modeled as IID exponentially distributed random variables, we obtain relatively simple closed-form results. Particularly important among these is that the conditional pdf of ISI duration given excitation intensity is that of a classical Barndorff-Nielsen diffusion. Moreover, this form of conditional pdf readily enables Bayesian updating from one ISI to the next that does not result in time-varying functional forms, only time-varying parameters in a fixed functional form. These findings interface well with a general theory of energy efficient time-continuous analog computation that the authors are currently developing in collaboration with W. B. Levy.

    Joint work with Jie Xing.




    12:10 PM - 2:00 PM

    Lunch


    Afternoon Session I (2:00 PM - 3:20 PM)
    Learning I
    Chair: Simon R. Schultz




    2:00 PM - 2:40 PM

    Minimum Probability Flow Learning

    Mike DeWeese, Department of Physics, University of California at Berkeley.

    Fitting probabilistic models to data is often difficult, due to the general intractability of the normalization factor (or partition function) and its derivatives. In this talk I will present a new parameter estimation technique called Minimum Probability Flow Learning that does not require computing the normalization factor or sampling from the equilibrium distribution of the model. This is achieved by establishing dynamics that would transform the observed data distribution into the model distribution, and then setting as the objective the minimization of the KL divergence between the data distribution and the distribution produced by running the dynamics for an infinitesimal time. Like the Metropolis-Hastings algorithm, this approach is inspired by core concepts from statistical mechanics, and it can be shown that Score Matching, Minimum Velocity Learning, and certain forms of Contrastive Divergence are special cases of this learning technique.

    We have demonstrated parameter estimation in several cases, including Ising models, which have recently proven useful for modeling populations of spiking neurons. In the Ising model case, current state of the art techniques are outperformed by approximately two orders of magnitude in learning time, with comparable error in recovered parameters. Our hope is that this technique will broaden the class of probabilistic models that are practical for use with large, complex data sets.
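
    For the Ising case the objective takes a particularly simple form. The sketch below (an illustrative re-implementation with a tiny model and simulated data, not the speaker's code) fits couplings and biases by minimizing the MPF objective for single-bit-flip dynamics, with no partition function ever computed.

        import numpy as np
        from itertools import product
        from scipy.optimize import minimize

        # Minimum Probability Flow for a small Ising model with +/-1 spins: for single-bit-flip
        # dynamics the objective reduces to K = mean over samples x and sites i of
        # exp(-x_i * (W x + b)_i), which is minimized directly.
        rng = np.random.default_rng(8)
        N = 6

        # Ground-truth model and exact samples (feasible here by enumerating all 2^N states).
        W_true = rng.normal(0, 0.6, size=(N, N)); W_true = (W_true + W_true.T) / 2
        np.fill_diagonal(W_true, 0.0)
        b_true = rng.normal(0, 0.3, size=N)
        states = np.array(list(product([-1, 1], repeat=N)), dtype=float)
        logp = 0.5 * np.einsum('si,ij,sj->s', states, W_true, states) + states @ b_true
        p = np.exp(logp - logp.max()); p /= p.sum()
        data = states[rng.choice(len(states), size=20_000, p=p)]

        n_pairs = N * (N - 1) // 2

        def unpack(theta):
            W = np.zeros((N, N))
            W[np.triu_indices(N, 1)] = theta[:n_pairs]
            return W + W.T, theta[n_pairs:]

        def mpf_objective(theta):
            W, b = unpack(theta)
            fields = data @ W + b                    # (W x + b)_i for every sample
            return np.mean(np.exp(-data * fields))

        res = minimize(mpf_objective, np.zeros(n_pairs + N), method='L-BFGS-B')
        W_est, b_est = unpack(res.x)
        err = np.linalg.norm(W_est - W_true) / np.linalg.norm(W_true)
        print(f"relative coupling recovery error: {err:.2f}")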




    2:40 PM - 3:20 PM

    Inferring Network Connectivity using Kinetic Ising Models

    John A. Hertz, NORDITA, Stockholm, and Niels Bohr Institute, Copenhagen.

    In a number of recent studies, network connectivity has been inferred by fitting spike correlation data using Ising models with symmetric connections. This approach allows one to exploit the machinery of equilibrium statistical mechanics, but it works rather poorly because real synaptic matrices are not symmetric. We have therefore developed an approach using Ising-Glauber models in which the connectivity may be (and in general is) asymmetric. We can obtain an exact learning algorithm by performing gradient ascent on the log-likelihood of the observed spike histories. As in Boltzmann learning for the equilibrium (symmetric) case, the correction to the current estimate of a connection strength is proportional to the difference between measured and model correlations, but here at unequal times. We have also derived approximate inversion formulas based on mean field theory and a dynamical generalization of the TAP equations of spin glass theory. These (very fast) approximations appear to be quite effective on data from a realistic spiking cortical network model.
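
    A small sketch of the exact learning rule described above (simulated data from a kinetic Ising model with parallel Glauber updates and asymmetric couplings; the parameters are illustrative and this is not the speakers' code):

        import numpy as np

        # Kinetic Ising model with parallel updates:
        #   P(s_i(t+1) = +1 | s(t)) = (1 + tanh(sum_j J_ij s_j(t))) / 2.
        # The exact log-likelihood gradient is the difference between measured and
        # model-predicted time-lagged correlations.
        rng = np.random.default_rng(9)
        N, T = 10, 20_000

        J_true = rng.normal(0, 1.0 / np.sqrt(N), size=(N, N))   # asymmetric connectivity
        s = np.empty((T, N))
        s[0] = rng.choice([-1.0, 1.0], size=N)
        for t in range(T - 1):
            p_up = 0.5 * (1 + np.tanh(J_true @ s[t]))
            s[t + 1] = np.where(rng.random(N) < p_up, 1.0, -1.0)

        s_prev, s_next = s[:-1], s[1:]
        D = (s_next.T @ s_prev) / (T - 1)          # measured <s_i(t+1) s_j(t)>

        J, eta = np.zeros((N, N)), 0.5
        for it in range(500):
            theta = s_prev @ J.T                   # theta_i(t) = sum_j J_ij s_j(t)
            model = (np.tanh(theta).T @ s_prev) / (T - 1)   # model <tanh(theta_i(t)) s_j(t)>
            J += eta * (D - model)                 # gradient ascent on the log-likelihood

        err = np.linalg.norm(J - J_true) / np.linalg.norm(J_true)
        print(f"relative coupling error after learning: {err:.2f}")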




    3:20 PM - 3:50 PM

    Afternoon Break


    Afternoon Session II (3:50 PM - 5:10 PM)
    Learning II
    Chair: Karim G. Oweiss




    3:50 PM - 4:30 PM

    Using Machine Learning of fMRI-based Brain States to Provide Real-Time Neurofeedback

    Stephen LaConte, Department of Neuroscience, Baylor College of Medicine.

    Within both the machine learning and cognitive neuroscience communities, there has been a growing interest in multi-voxel pattern analysis applied to neuroimaging data. This type of analysis allows the investigator to decode brain states from functional magnetic resonance imaging (fMRI) data. In other words, as each brain image is acquired, the objective is to determine what the volunteer is "doing" - e.g. receiving sensory input, effecting motor output, or otherwise internally focusing on a prescribed task or thought. This interest has been fostered by a growing number of studies focused on methods for predictive modeling and its great potential to enhance our understanding of mental representations. This talk focuses on applying supervised learning methods for a real-time fMRI neurofeedback system based on brain state prediction. This new technological advance has exciting potential to enable an entirely new level of experimental flexibility and to facilitate learning and plasticity in rehabilitation and therapeutic applications.
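
    A schematic of the train-then-predict loop is sketched below (simulated "volumes" and a linear SVM chosen for illustration; this is not the speaker's acquisition, preprocessing, or feedback pipeline):

        import numpy as np
        from sklearn.svm import LinearSVC

        # Train a classifier on labeled volumes from an initial run, then classify each
        # newly acquired volume one at a time, as a real-time system would.
        rng = np.random.default_rng(10)
        n_voxels, n_train_per_class = 500, 60
        states = {0: "rest", 1: "task"}

        # Hypothetical training run: two brain states differing in a sparse set of voxels.
        effect = np.zeros(n_voxels); effect[:30] = 1.0
        X_train = np.vstack([rng.normal(0, 1, (n_train_per_class, n_voxels)),
                             rng.normal(0, 1, (n_train_per_class, n_voxels)) + effect])
        y_train = np.array([0] * n_train_per_class + [1] * n_train_per_class)

        clf = LinearSVC(C=1.0).fit(X_train, y_train)

        # Feedback run: decode each incoming volume as it arrives.
        for t in range(5):
            true_state = rng.integers(2)
            volume = rng.normal(0, 1, n_voxels) + true_state * effect
            pred = clf.predict(volume.reshape(1, -1))[0]
            print(f"TR {t}: predicted '{states[pred]}' (true '{states[true_state]}')")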



    4:30 PM - 5:10 PM

    Panel Discussion

    Chair: Todd P. Coleman.