Columbia Workshop on Brain Circuits, Memory and Computation

BCMC 2015

Monday and Tuesday, March 16-17, 2015

Center for Neural Engineering and Computation

Columbia University, New York, NY 10027


The goal of the workshop is to bring together researchers interested in developing executable models of neural computation and processing in the brains of model organisms. Of interest are models of computation built from elementary units of processing using brain circuits and memory elements. Elementary units of computation/processing include population encoding/decoding circuits with biophysically-grounded neuron models, nonlinear dendritic processors for motion detection/direction selectivity, spike processing and pattern recognition neural circuits, movement control and decision-making circuits, etc. Memory units include models of spatio-temporal memory circuits, circuit models for memory access and storage, etc. A major aim of the workshop is to explore the integration of various computational sensory and control models.

Organizer and Program Chair

Aurel A. Lazar, Department of Electrical Engineering, Columbia University.


Registration is free, but all participants must register. Thank you!

Lodging and Directions to Venue

Please follow this link for lodging details and directions to the hotel and venue.

Program Overview

(PDF version, including a list of nearby restaurants)

Monday, 9:00 AM - 5:45 PM, March 16, 2015

Morning Session: Memory and Computation I (9:00 AM - 10:30 AM)

Chair: Daniel Coca

9:00 AM - 9:45 AM

Mushroom Body Output Neurons Encode Valence and Guide Memory-based Action Selection

Yoshi Aso, Ph.D., Rubin Lab, Janelia Research Campus, Ashburn, Virginia, USA.

The mushroom body is a center for associative memory in insect brains. Sparse and non-stereotyped activity of the mushroom body (MB) intrinsic neurons, the Kenyon cells, represents environmental cues such as odors. In the adult Drosophila brain, parallel axonal fibers of ~2,000 Kenyon cells form the MB lobes. Distinct dopaminergic neurons convey reward or punishment information to the MB lobes, adaptively imposing valence on the sensory representation carried by Kenyon cells. MB output neurons (MBONs) are thought to read out and translate the activity of Kenyon cells to bias the selection of behavioral responses to learned stimuli, but little is known about their functions.

In this study, we describe detailed projection patterns of the full set of neurons comprising the MB lobes. Dendrites of 21 types of MBONs collectively tile the 15 compartments along the axonal fibers of Kenyon cells in the lobes. Each of the 20 dopaminergic neuron (DAN) types innervates one or two of these compartments. Convergence of DAN axons on compartmentalized Kenyon cell-MBON synapses creates a highly ordered unit that can impose meaning on sensory representations. The elucidation of the complement of intrinsic and extrinsic neurons of the MB provides a comprehensive anatomical substrate upon which one can impose a functional logic of associative learning and memory.

Using intersectional drivers that allow specific manipulation of individual cell types, we have begun to determine the nature of the information conveyed by MBONs and to correlate specific MBON cell types with roles in several aversive and appetitive learning paradigms. We also show that optogenetic activation of MBONs can, depending on the cell type, either repel or attract untrained flies. The effects of different MBONs are additive, implying that the activities of MBONs representing opposing valences are balanced in untrained flies. MBONs form a layered feedforward circuit inside the MB lobes and converge in small regions outside the MB. We propose that the ensemble of MBONs collectively encodes the valence of learned stimuli, and that breaking the balance between MBON activities by dopamine modulation biases the behavioral response. Our anatomical and behavioral results lay the groundwork for understanding the circuit principles of memory-based valuation and action selection.

9:45 AM - 10:30 AM

A Computational Model of the Insect Mushroom Body Applied to Ant Navigation

Barbara Webb, School of Informatics, University of Edinburgh.

Many insects, including ants, exhibit excellent memory for visual routes, but the neural mechanisms underlying their abilities in this complex task are unknown. For the simpler task of olfactory conditioning the mushroom body neuropils have been strongly implicated as a site of memory. It has recently been suggested that ants store a high density of images seen along a route, and can then follow the route by comparing all stored images with their current view to decide which direction appears most familiar. We show that this apparently computationally costly method can be implemented in a spiking neural model of insect mushroom body memory circuits. The key insight is that the MB, considered as a sparse associative net, allows many complex and arbitrary input patterns to be stored as direct synaptic mappings to a small set of outputs required to drive behaviour. In the real ant, these patterns might combine olfactory, visual and other sensory inputs into a multimodal ‘gestalt’ of the current location. Although this function could be interpreted as labelling or classification (familiar vs. unfamiliar), and thus could potentially be performed by a variety of alternative network architectures, our model produces successful performance with a very simple and fast learning mechanism; there is no need for neural implementation of a more complex classification algorithm. The model supports successful behaviour when tested in a realistic simulation based on the recorded routes of real ants, Cataglyphis velox, in their natural environment of scrubby desert in southern Spain.
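The core mechanism, a sparse associative net that labels arbitrary input patterns as familiar, can be sketched in a few lines. This is a toy rate-based illustration, not the talk's spiking model; the class name, layer size and sparsity level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class SparseAssociativeNet:
    """Toy familiarity memory: a fixed random projection to a sparse
    'Kenyon cell' layer, with one-shot depression of active synapses
    onto a single output neuron."""

    def __init__(self, n_input, n_kc=2000, sparsity=0.05):
        self.proj = rng.standard_normal((n_kc, n_input))  # fixed random input weights
        self.w = np.ones(n_kc)                 # KC -> output weights, depressed by learning
        self.k = max(1, int(sparsity * n_kc))  # number of active KCs per pattern

    def _kc_code(self, x):
        act = self.proj @ x
        idx = np.argsort(act)[-self.k:]        # winner-take-all sparsening
        code = np.zeros_like(self.w)
        code[idx] = 1.0
        return code

    def store(self, x):
        # One-shot synaptic depression of the active KC -> output synapses
        self.w[self._kc_code(x) > 0] = 0.0

    def novelty(self, x):
        # High for unfamiliar patterns, near zero for stored ones
        return float(self.w @ self._kc_code(x)) / self.k
```

After storing the views seen along a route, the output stays low for stored views and high for novel ones, so an agent can steer toward whichever direction looks most familiar; no explicit classifier is needed.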

Morning Break 10:30 AM - 11:00 AM

Morning Session: Memory and Computation II (11:00 AM - 12:30 PM)

Chair: Anthony Leonardo

11:00 AM - 11:45 AM

Network Plasticity as Bayesian Inference

Wolfgang Maass, Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.

General results from statistical learning theory suggest that not only brain computations, but also brain plasticity, should be understood as probabilistic inference. A model for this has, however, been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling.

Joint work with David Kappel, Stefan Habenschuss, Robert Legenstein.
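One way to make the sampling idea concrete is Langevin dynamics on a synaptic parameter: gradient ascent on the log-posterior plus noise, whose stationary distribution is the posterior itself, so the weight fluctuates around good configurations rather than converging to a point estimate. This is a minimal one-parameter sketch, not the authors' network model; the learning rate and the example posterior are illustrative:

```python
import numpy as np

def langevin_samples(grad_logp, w0=0.0, lr=0.01, n=20000):
    """Stochastic 'plasticity' as sampling: discretized Langevin dynamics
    whose stationary distribution is p(w) proportional to exp(log p(w))."""
    rng = np.random.default_rng(0)
    w = w0
    out = np.empty(n)
    for i in range(n):
        # drift toward high-posterior weights + diffusion (intrinsic noise)
        w += lr * grad_logp(w) + np.sqrt(2 * lr) * rng.standard_normal()
        out[i] = w
    return out
```

For a Gaussian posterior N(1, 0.5^2), grad_logp(w) = -(w - 1)/0.25, and the sampled weight trajectory hovers around mean 1 with spread 0.5: the noise is not a nuisance but the mechanism that realizes the posterior.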

11:45 AM - 12:30 PM

The Memory Coding Problem

Charles Randy Gallistel, Rutgers Center for Cognitive Science, Rutgers University.

For all the information storage media we understand, including DNA and RNA, it is trivial to specify rules whereby numbers might be encoded and stored in that medium in a computationally accessible form. This is important, because, when we know how numbers might be stored in numerically addressable memory locations, we know how anything might be stored there. Contemporary neuroscience theory holds that information is stored in altered patterns of synaptic conductances (learned weights), but I know of no proposals as to how numbers might be so stored nor how they might be accessed for computational purposes. That’s a problem, given the strong behavioral and neurobiological evidence that numbers representing many aspects of the experienced world—set cardinalities, distances, directions, times, durations, probabilities—are stored in brains in computationally accessible form, even in the minuscule brains of insects. Recent findings suggest that durations are stored inside single neurons, in other words, molecularly. It is easy to suggest how numbers might be encoded in various molecular structures. Moreover, the basic operations of computation are known to be implemented at the molecular level on information stored in DNA. Can it be that basic neural computations are done by molecular machinery inside neurons, rather than by circuit-level machinery? It would be vastly more efficient.

Lunch 12:30 PM - 2:00 PM

Afternoon Session: Neural Architectures and Models of Computation I (2:00 PM - 3:30 PM)

Chair: Fabrizio Gabbiani

2:00 PM - 2:45 PM

The Dynamic Brain

Charles D. Gilbert, Laboratory of Neurobiology, Rockefeller University.

Vision is an active and dynamic process. The brain's analysis of scenes involves perceptual grouping, perceptual learning and top-down influences. Information conveyed in a top-down fashion includes attention, expectation, perceptual tasks, working memory and motor commands. As a consequence, neurons function as adaptive processors that are able to assume different functional states according to the task being executed, and these dynamics are mediated by rapid changes in effective connectivity within the cortical network. Moreover, all cortical levels of sensory processing are capable of undergoing long-term, experience-dependent functional changes, with rapid and massive modification of cortical connections, involving exuberant outgrowth of new axon collaterals and a parallel process of pruning of both existing and newly grown collaterals. The circuitry of the adult cortex is therefore subject to continual long-term modification as we assimilate new experiences, and to short-term dynamics as we analyze the constituents of visual scenes.

2:45 PM - 3:30 PM

Computational Properties of Cortical Columns

Stefan Mihalas, Allen Institute for Brain Science, Seattle, WA.

The mammalian neocortex has a generally repetitious, laminar structure and performs functions integral to higher cognitive processes. We constructed a series of anatomically grounded models, starting from models with only two cell types in each layer. We find multiple functions being implemented; for example, layer 5 activity represents the difference between a bottom-up and a specific top-down input. This computation might subserve, for example, an inferential update of prior experience with new sensory information. We generalize this finding by computing a kernel that describes the response of the column circuit to time-varying stimuli. Inclusion of additional cell types and non-homogeneous connection patterns allows an increasing array of computations to be approximated, from gain modulation to cue combination.

To model a larger area of cortex, we use the master equation for populations of generalized leaky integrate-and-fire neurons with shot-noise synapses. We developed a fast semi-analytic numerical method to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck formalism. Using these tools, we have begun analyzing the mechanisms and gating of inter-column communication. Given the relative similarity in the organization of cortical tissue across different areas, models of cortical column computations, together with models of inter-column communication, can serve as building blocks for larger scale models of cortical computation.

Afternoon Break 3:30 PM - 4:00 PM

Afternoon Session: Neural Architectures and Models of Computation II (4:00 PM - 5:45 PM)

Chair: Vivek Jayaraman

4:00 PM - 4:45 PM

The Atoms of Neural Computation

Gary F. Marcus, Department of Psychology, New York University.

The human neocortex participates in a wide range of tasks, yet superficially appears to adhere to a relatively uniform six-layered architecture throughout its extent. For that reason, much research has been devoted to characterizing a single "canonical" cortical computation, repeated massively throughout the cortex, with differences between areas presumed to arise from their inputs and outputs rather than from "intrinsic" properties. There is as yet no consensus, however, about what such a canonical computation might be, little evidence that uniform systems can capture abstract and symbolic computation (e.g., language), and little contact between proposals for a single canonical circuit and complexities such as differential gene expression across cortex, or the diversity of neuron and synapse types. Here, we evaluate and synthesize diverse evidence for a different way of thinking about neocortical architecture, which we believe to be more compatible with evolutionary and developmental biology, as well as with the inherent diversity of cortical functions. In this conception, the cortex is composed of an array of reconfigurable computational blocks, each capable of performing a variety of distinct operations, and possibly evolved through duplication and divergence. The computation performed by each block depends on its internal configuration. Area-specific specialization arises as a function of differing configurations of the local logic blocks, area-specific long-range axonal projection patterns and area-specific properties of the input. This view provides a possible framework for integrating detailed knowledge of cortical microcircuitry with computational characterizations.

Joint work with Adam Marblestone (MIT) and Tom Dean (Google).

4:45 PM - 5:45 PM

Panel Discussion: What Would a Good Theory of the Brain Actually Look Like?

Moderator: Gary F. Marcus, Department of Psychology, New York University.

Tuesday, 9:00 AM - 5:45 PM, March 17, 2015

Morning Session: Brain Circuits and Computation I (9:00 AM - 10:30 AM)

Chair: Barbara Webb

9:00 AM - 9:45 AM

A Donut that Means the World to the Fly

Vivek Jayaraman, Janelia Research Campus, Ashburn, VA.

Drosophila melanogaster, like many other animals, displays a range of sophisticated visual behaviors including short- and long-term memory for patterns, and place learning. Behavioral genetics studies have implicated a brain region called the central complex in such behaviors. Studies across insects have suggested that the region plays a broad and important role in adaptive sensorimotor integration. We are using a combination of physiological and optogenetic techniques in head-fixed behaving flies to identify and understand circuit computations carried out by this intriguing area of the insect brain. In recent experiments, we have focused in particular on determining the function of the ellipsoid body, a toroidal structure within the central complex. Our results from two-photon calcium imaging performed during tethered behavior in virtual reality shed new light on the role of this structure in spatial navigation. We are now pursuing a combination of modeling, physiology and perturbation to understand central complex function and its link to behavior.

9:45 AM - 10:30 AM

Fly Photoreceptors Detect Phase Congruency

Daniel Coca, Department of Automatic Control and Systems Engineering, University of Sheffield.

More than five decades ago it was postulated that sensory neurons detect and selectively enhance behaviourally relevant features of natural signals. Although we now know that sensory neurons are tuned to encode natural stimuli efficiently, until now it was not clear which statistical features of the stimuli they encode, and how. By reverse-engineering the neural code of Drosophila photoreceptors, we show for the first time that photoreceptors exploit nonlinear dynamics to selectively enhance and encode illumination- and contrast-invariant, phase-based measures of stimulus features that are behaviourally relevant, such as edges. We demonstrate that, to mitigate the inherent noise sensitivity of these measures, the nonlinear coding mechanisms of photoreceptors are highly tuned to minimize sensitivity to white noise stimuli whilst maximizing sensitivity to local and global phase correlations. This explains the differences in the coding of naturalistic and white noise signals, and how this can be achieved without adaptation.

Morning Break 10:30 AM - 11:00 AM

Morning Session: Brain Circuits and Computation II (11:00 AM - 12:30 PM)

Chair: Wolfgang Maass

11:00 AM - 11:45 AM

Tuning Neural Nonlinearities for Naturalistic Visual Motion Estimation

Damon A. Clark, Department of Molecular, Cellular, and Developmental Biology, Yale University.

Direction-selective responses to visual motion require two fundamental operations: first, signals from two points in space must be differentially delayed; and second, the two signals must be combined nonlinearly. In the fruit fly, the nonlinear step itself remains largely uncharacterized. The Hassenstein-Reichardt Correlator (HRC) model uses a multiplication operation to generate its nonlinear step, and it successfully predicts a host of neural and behavioral visual motion responses in insects. Recent experimental results, however, have shown that flies respond not just to pairwise correlations, but also to triple correlations, which is not predicted by a pure multiplicative nonlinearity. In order to understand how the form of the nonlinear step influences motion perception, we have investigated the performance of alternative model nonlinearities in estimating motion in natural scenes. Our results show that the addition of higher-order nonlinearities can significantly improve motion estimates compared to the standard HRC, typically by taking advantage of light/dark asymmetries in natural scenes. Remarkably, several models optimized to work well in natural scenes respond to artificial triple correlations in patterns similar to those in wild-type flies. Our work shows how the nonlinear step can be tuned to perform optimally under specific conditions, that alternative nonlinear combination steps improve the performance of HRC-like models, and that measured response properties of the fly’s motion detector are consistent with a nonlinear step that is optimized to estimate motion in natural conditions.
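The standard HRC itself is compact enough to sketch. This is a minimal discrete-time version in which a one-sample shift stands in for the model's low-pass delay filter:

```python
import numpy as np

def hrc_response(left, right, delay=1):
    """Hassenstein-Reichardt correlator: each arm multiplies the delayed
    signal from one point with the undelayed signal from the neighbouring
    point; the two mirror-symmetric arms are subtracted."""
    l_d = np.roll(left, delay)   # crude delay line (low-pass filter in the full model)
    r_d = np.roll(right, delay)
    l_d[:delay] = 0.0            # discard wrapped-around samples
    r_d[:delay] = 0.0
    return float(np.mean(l_d * right - r_d * left))
```

For a pattern drifting so that the right input lags the left, the left-delayed arm correlates strongly and the mean response is positive; reversing the motion flips the sign. The alternative nonlinearities investigated in the talk would replace the multiplication in the last line.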

11:45 AM - 12:30 PM

Neurokernel: Building an in Silico Fruit Fly Brain

Aurel A. Lazar, Department of Electrical Engineering, Columbia University.

Neurokernel is an open-source platform for the emulation of the entire fruit fly brain on multiple Graphics Processing Units (GPUs). By defining a standard communication interface that all neuropil models must use, Neurokernel enables independently developed neuropil and connectivity pattern models with compatible interfaces to be integrated into a single executable model irrespective of their internal design. To reduce interference between model development and addition of new features to the software, Neurokernel's architecture separates its application plane (which provides support for neuropil model and connectivity pattern specification tools) from its control and compute planes (which respectively provide resource management and GPU-based digital machinery required by neuropil models). Development of the Neurokernel Project is based upon the well-known and highly successful collaborative model of the IETF's Requests for Comments.

Neurokernel has been used to connect and execute proof-of-concept models of select neuropils in the fly's olfactory and vision sensory systems that have been independently developed by different teams. Implementations of these models and the core Neurokernel source code are both available online. Neurokernel's support for interfacing independently developed neuropil models and executing them on commodity parallel hardware provides neuroscience researchers with a fundamentally collaborative platform upon which to join forces to achieve the ultimate goal of building the fly brain in silico.

Joint work with Lev E. Givon, Konstantinos Psychas, Nikul H. Ukani, Chung-Heng Yeh and Yiyin Zhou.
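The interface idea, independently developed models composed through declared ports, can be illustrated with a deliberately tiny sketch. The classes and method names here are hypothetical, not the actual Neurokernel API (which is GPU-based and available online); the sketch only shows why a standard interface lets modules interoperate regardless of their internal design:

```python
from abc import ABC, abstractmethod

class NeuropilModule(ABC):
    """Toy standard interface: any model exposing these methods can be
    composed with any other, whatever its internals. (Hypothetical sketch,
    not the Neurokernel API.)"""

    @abstractmethod
    def output_ports(self):
        """Names of the ports this module drives."""

    @abstractmethod
    def step(self, inputs):
        """Advance one time step; inputs and outputs are port->value dicts."""

def run(modules, wiring, n_steps):
    """Exchange port values between independently developed modules.
    wiring maps a module to (source_port, dest_port) pairs it receives."""
    state = {p: 0.0 for m in modules for p in m.output_ports()}
    for _ in range(n_steps):
        outs = {}
        for m in modules:
            inputs = {dst: state[src] for src, dst in wiring.get(m, [])}
            outs.update(m.step(inputs))
        state.update(outs)   # outputs become next step's inputs
    return state
```

The executor only sees ports, so a vision neuropil model from one team and an olfactory model from another can be wired together without either knowing the other's implementation.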

Lunch 12:30 PM - 2:00 PM

Afternoon Session: Computing with Brain Circuits I (2:00 PM - 3:30 PM)

Chair: Stefan Mihalas

2:00 PM - 2:45 PM

Neural Circuits Underlying Internal Models in Predictive Motor Control

Anthony Leonardo, Janelia Research Campus, Ashburn, VA.

Sensorimotor control in vertebrates relies on internal models. When extending an arm to reach for an object, the brain uses predictive models of both limb dynamics and target properties. Whether invertebrates use such models remains a longstanding question. Here we examine to what extent prey interception by dragonflies, a behavior analogous to targeted reaching, requires internal models. By simultaneously tracking the position and orientation of a dragonfly’s head and body during flight, we provide evidence that interception steering is driven by forward and inverse models of dragonfly body dynamics and by models of prey motion. Predictive rotations of the head are used to foveate and continuously track the prey’s angular position. The head-body angles established by foveation appear to guide systematic rotations of the dragonfly’s body to orient it with the prey’s flight path. Model-driven control thus underlies the bulk of interception steering maneuvers, while vision is used for reactions to unexpected prey movements. These findings illuminate the computational sophistication with which insects construct behavior. In my talk I will discuss these principles of model-driven interception steering, as well as new anatomical data that shed light on their neural implementation.

2:45 PM - 3:30 PM

Biophysics and Neural Computations Underlying Visually-Guided Collision Avoidance Behaviors

Fabrizio Gabbiani, Dept. of Neuroscience, Baylor College of Medicine, and Computational and Applied Mathematics, Rice University.

Insects have proven to be ideal model systems for investigating the biophysical properties of neurons involved in generating collision avoidance behaviors, as well as the neural circuits underlying these computations. We will review recent results on how visual signals are processed at various stages of the neural networks involved in tracking objects approaching on a collision course, and in particular on how voltage-gated ion channels contribute to segmenting a threat from the visual scene. This computation is necessary for reacting appropriately to visual stimuli, since escape should be triggered only in response to truly threatening situations.

Afternoon Break 3:30 PM - 4:00 PM

Afternoon Session: Computing with Brain Circuits II (4:00 PM - 5:45 PM)

Chair: Damon A. Clark

4:00 PM - 4:45 PM

A Biological Neuron as an Online Matrix Factorization Device

Dmitri "Mitya" B. Chklovskii, Simons Center for Data Analysis, Simons Foundation.

Despite our extensive knowledge of the biophysical properties of neurons, there is no commonly accepted algorithmic theory of neuronal function. Here we explore the hypothesis that a neuron performs online matrix factorization of the streamed data. Starting with a matrix factorization cost function, we derive an online algorithm that can be implemented by neurons and synapses with local learning rules. We demonstrate that such a network performs feature discovery and soft clustering. The derived algorithm replicates many known aspects of sensory anatomy and biophysical properties of neurons. Thus, we take a step towards an algorithmic theory of neuronal function, which should facilitate large-scale neural circuit simulations and biologically inspired artificial intelligence.
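A classic single-neuron instance of this idea is Oja's rule, which factorizes the streamed data matrix into an output stream times a weight vector using only a local Hebbian update. This is a generic sketch of the online-factorization approach, not the specific cost function derived in the talk:

```python
import numpy as np

def oja_online(X, lr=0.005, n_epochs=10):
    """Online rank-1 factorization X ~ outer(y, w) via Oja's rule: the
    weight vector converges to the top principal direction of the data,
    using only quantities available locally at each synapse."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:                       # one streamed sample at a time
            y = w @ x                     # neuronal output
            w += lr * y * (x - y * w)     # Hebbian term with built-in decay
    return w
```

Each update uses only the presynaptic input x, the postsynaptic output y, and the current weight, yet the neuron ends up computing the leading factor of the data matrix; the multi-neuron networks in the talk extend this to feature discovery and soft clustering.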

4:45 PM - 5:45 PM

Panel Discussion: A Neuroscience Esperanto: Building a Better Bridge between Modeling and Experimental Neurobiology

Moderator: Brian D. McCabe, Department of Pathology and Cell Biology, Columbia University.
