Neural Engineering and Computation Seminars


Center for Neuroengineering and Computation Seminar Series


Computational Neuroscience and Neuroengineering Seminars (Archive)

    04/05/2013  Nima Mesgarani  Reverse Engineering the Brain Computations Involved in Speech Processing
    12/14/2012  Fabrizio Gabbiani  Neural Computations Underlying Collision Avoidance Behaviors
    04/08/2011  Todd P. Coleman  A Multi-Disciplinary Discussion of Brain-Machine Interfaces
    10/21/2010  Erol Gelenbe  The Random Neural Network Model: Discovery or Invention?
    03/16-17/2010  ColumbiaU - Technion  Workshop on Neuroengineering of Biological Networks
    10/06/2009  Alex G. Dimitrov  Is transformation-invariant object recognition based on locally invariant signal processing?
    03/31/2009  Peng Yin  Information Directed Molecular Technology: Programming Nucleic Acid Self-Assembly
    03/24/2009  Murat Acar  Feedback Regulation and Dosage Compensation in Gene Networks
    07/29/2008  Pradeep Shenoy  Human-aided Computing: Utilizing Implicit Human Processing to Classify Images
    07/14/2008  Eric Pohlmeyer  A brain-machine interface for regaining control of a paralyzed arm
    06/09/2008  Gabriel A. Silva  Mapping the functional connectivity of cellular neural networks
    05/01/2008  Daniel A. Butts  The Importance of Time in Visual Computation
    04/15/2008  Klaus-Robert Mueller  Toward Brain Computer Interfacing
    04/14/2008  Timothy Gardner  The formation of neural circuits and behavior in songbirds
    03/24/2008  Sridevi V. Sarma  Improving Deep Brain Stimulation in Parkinson's Disease Using Feedback Control
    03/10/2008  Marom Bikson  Rational Design of Electrotherapy Devices: Translational Neural Engineering at CCNY BME
    01/25/2008  John G. Harris  Biologically Inspired Sensing and Coding of Signals
  

Related Seminars (Archive)

    04/19-20/2010  NSF Workshop  Hybrid Neuro-Computer Vision Systems
    03/31/2008  Hernando Ombao  Spectral Analysis of Brain Signals
    03/25/2008  Daniel Lee  Biologically Inspired Sensorimotor Processing
    03/13/2008  Luke Theogarajan  Bio-ionic Neural Interfaces





Computational Neuroscience and Neuroengineering Seminar Series

Title:    Reverse Engineering the Brain Computations Involved in Speech Processing
Speaker:    Nima Mesgarani
Affiliation:    Department of Neurological Surgery, University of California, San Francisco
Date:    Friday, April 5, 2013
Time:    11:00 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    The brain empowers humans and other animals with remarkable abilities to sense and perceive their acoustic environment in highly degraded conditions. These seemingly trivial tasks for humans have proven extremely difficult to model and implement in machines. One crucial limiting factor has been the need for a deep interaction between two very different disciplines, those of neuroscience and engineering. In this talk, I will present results of an interdisciplinary research effort to address the following fundamental questions: 1) what computation is performed in the brain when we listen to complex sounds? 2) how could this computation be modeled and implemented in computational systems? and 3) how could one build an interface to connect brain signals to machines? I will present results from recent invasive neural recordings in human auditory cortex that show a distributed representation of speech in auditory cortical areas. This representation remains unchanged even when an interfering speaker is added, as if the second voice is filtered out by the brain. In addition, I will show how this knowledge has been successfully incorporated in novel automatic speech processing applications and used by DARPA and other agencies for their superior performance. Finally, I will demonstrate how speech can be read directly from the brain, which may eventually allow people who have lost their ability to speak to communicate. This integrated research approach leads to better scientific understanding of the brain, innovative computational algorithms, and a new generation of Brain-Machine Interfaces.
Speaker Bio:    Nima Mesgarani is a postdoctoral scholar at the neurosurgery department of UC San Francisco. He received his Ph.D. in Electrical Engineering from University of Maryland College Park and was a postdoctoral scholar at Johns Hopkins University prior to joining UCSF. His research interests are in human-like information processing of acoustic signals at the interface of engineering and brain science. His goal is to develop an interdisciplinary research program designed to bridge the gap between these two very different disciplines by reverse engineering the signal processing in the brain, which in turn inspires novel approaches to emulate human abilities in machines. This integrated research approach leads to better scientific understanding of the brain, novel speech processing algorithms for automated systems, and a new generation of Brain-Machine Interface and neural prosthesis.
  
  
      Back to Top
Title:    Neural Computations Underlying Collision Avoidance Behaviors
Speaker:    Fabrizio Gabbiani
Affiliations:    Department of Neuroscience, Baylor College of Medicine,
Department of Computational and Applied Mathematics, Rice University,
and Visiting Faculty, Max-Planck Institute of Neurobiology
Date:    Friday, December 14, 2012
Time:    11:00 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    Understanding how the brain processes sensory information in real-time to generate meaningful behaviors is one of the outstanding contemporary challenges of neuroscience. Visually guided collision avoidance behaviors are nearly universal in animals endowed with spatial vision and offer a favorable opportunity to address this question. This talk will summarize the current understanding of their generation at the level of neural networks, single neurons and their ion channels. The focus will be on a model system that has proven particularly well-suited for this purpose, the locust brain, but will also tie the results learned in this preparation to studies carried out in a wide range of other species. Engineering developments that have enabled and been part of this research will be highlighted as well.
Speaker Bio:    Fabrizio Gabbiani received his Ph.D. from the Swiss Federal Institute of Technology Zürich (ETHZ) in 1992. His Ph.D. work, under the guidance of Dr. Jürg Fröhlich, was in algebraic quantum field theory, focusing on the development of its formalism for two-dimensional conformal field theories. Following his Ph.D., his research interests shifted towards computational and systems neuroscience as he spent one additional year in Zürich developing models of cerebellar granule cells under the guidance of Drs. Thomas Knöpfel and Klaus Hepp. In 1994, he joined the groups of Drs. Christof Koch and Gilles Laurent at Caltech where he first analyzed information coding in neuronal spike trains using statistical signal processing techniques. In collaboration with experimentalists, he then applied these methods to spike trains of neurons at successive stages of the electrosensory system of weakly electric fish. In 2000, he joined the Department of Neuroscience at Baylor College of Medicine in Houston. His work there, initially started at Caltech, has focused on understanding how single neurons carry out the computations necessary to implement collision avoidance behaviors. Dr. Gabbiani was an Alfred P. Sloan Fellow and is a recipient of the Humboldt Research Award. He is co-author of a book entitled “Mathematics for Neuroscientists”, published in 2010 by Academic Press. He is currently a visiting faculty at the Max-Planck Institute of Neurobiology in Martinsried, Germany.
  
  
      Back to Top
Title:    A Multi-Disciplinary Discussion of Brain-Machine Interfaces
Speaker:    Todd P. Coleman
Affiliation:    Department of Electrical and Computer Engineering
and Neuroscience Program
University of Illinois at Urbana-Champaign (UIUC)
Date:    Friday, April 8, 2011
Time:    10:30 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    In this talk, we will discuss the topic of brain-machine interfaces, which comprise a coupling between the brain and an external device. First, we discuss a systems-engineering viewpoint on designing the protocol of interaction between the human and the external device from the lens of team decision theory and feedback information theory. We demonstrate how this application led to interesting new theoretical problems and solutions that can be instantiated in a BMI for a text communication prosthesis as well as traversal of smooth paths in two dimensions. Next, we discuss some recent research in understanding how information is represented and processed in the ensemble neurophysiological recordings in motor areas of a monkey through the causal interaction between neural signals. Lastly, we discuss new neuro-technology developments that use stretchable electronics to sense neural signals non-invasively without the use of conductive gel.
Speaker Bio:    Todd P. Coleman received B.S. degrees in electrical engineering and computer engineering (both summa cum laude) from the University of Michigan, Ann Arbor, in 2000, along with the M.S. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge, in 2002 and 2005, respectively. During the 2005-2006 academic year, he was a postdoctoral scholar at MIT and Massachusetts General Hospital in computational neuroscience. Since the fall of 2006, he has been on the faculty in the ECE Department and Neuroscience Program at UIUC. His research interests include information theory, operations research, and computational neuroscience. Dr. Coleman, a National Science Foundation Graduate Research Fellowship recipient, was awarded the University of Michigan College of Engineering's Hugh Rumler Senior Class Prize in 1999 and the MIT EECS Department's Morris J. Levin Award for Best Masterworks Oral Thesis Presentation in 2002. In Fall 2008, he was a co-recipient of the University of Illinois College of Engineering's Grainger Award in Emerging Technologies for development of a novel, practical timing-based technology. Beginning Fall 2009, Coleman has served as a co-Principal Investigator on a 5-year NSF IGERT interdisciplinary training grant for graduate students, titled "Neuro-engineering: A Unified Educational Program for Systems Engineering and Neuroscience" in conjunction with Tennessee State and UT San Antonio. Coleman also has been serving on the DARPA ISAT study group for a 3-year term, beginning Fall 2009. Beginning June 2010, he will serve as Diversity Coordinator for a new 5-year NSF Science and Technology Center pertaining to "Emerging Frontiers of the Science of Information", in conjunction with Purdue, Princeton, Stanford, MIT, Berkeley, Bryn Mawr, and Howard.
Recently, he has been selected for a Fellow appointment with the University of Illinois Center for Advanced Study (CAS) for the 2010-2011 academic year.
  
      This seminar is organized jointly with the Center for Theoretical Neuroscience
  
      Back to Top



Title:    The Random Neural Network Model: Discovery or Invention?
Speaker:    Erol Gelenbe
Affiliation:    Professor in the Dennis Gabor Chair
Electrical and Electronic Engineering Department
Imperial College London
Date:    Thursday, October 21, 2010
Time:    11:00 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    Work on theoretical developments and applications of the random neural network (RNN) has now spanned two decades, and its applications include the modeling of cortico-thalamic oscillations, image and video compression, network routing algorithms, relaxation based optimisation, and the detection of tumors in the human brain. This research has attracted several thousand citations. The basic model can be viewed as a network of discrete counter-automata that interact with each other and with the outside world, operating in real time. Its remarkable properties include product form solutions for different variants of the model, the proof of its approximation ability for bounded and continuous real-valued functions, and an O(N³) time complexity backpropagation learning algorithm for the recurrent network. This talk will outline some of these theoretical and practical results, and present some links to new research issues that include gene regulatory networks and very low power digital electronics.
Speaker Bio:    Erol Gelenbe is the Dennis Gabor Chaired Professor at Imperial College. He received the MS and PhD degrees from Brooklyn Poly, and is a graduate of METU Ankara. In 2010 he was awarded the Brooklyn Poly Distinguished Alumnus Award and elected Honorary Member of the Hungarian Academy of Sciences. He is a Member of the French National Academy of Engineering (Academie des Technologies), of the Turkish Academy of Sciences, and of Academia Europaea. A Fellow of IEEE, ACM and IEE, he won the ACM SIGMETRICS Award in 2008 for his work on computer and network performance modelling and analysis. He received the Italian honours of Commander of Merit, and of Grand Officer of the Order of the Star. France awarded him the honour of Officer of Merit. In 1996 he was the first computer scientist to receive the Grand Prix France Telecom of the French Academy of Sciences. He has received Honoris Causa doctorates from the University of Liège (Belgium), University of Roma II (Italy), and Bogazici University (Istanbul, Turkey). Erol's research is currently funded by UK EPSRC and EU FP7. He also works with industry, and serves as Editor-in-Chief of The Computer Journal (Oxford University Press and British Computer Society), and on the editorial board of the Proc. Royal Society A.
Selected References:   
  1. E. Gelenbe, "Stability of the random neural network model," Neural Computation, 2(2):239-247, 1990.
  2. E. Gelenbe, "Learning in the recurrent random neural network," Neural Computation, 5(1):154-164, 1993.
  3. E. Gelenbe, C. Cramer, M. Sungur, and P. Gelenbe, "Traffic and video quality in adaptive neural compression," Multimedia Systems, 4:357-369, 1996.
  4. C. Cramer, E. Gelenbe, and H. Bakircioglu, "Low bit rate video compression with neural networks and temporal sub-sampling," Proc. IEEE, 84(10):1529-1543, Oct. 1996.
  5. E. Gelenbe, T. Feng, and K. R. R. Krishnan, "Neural network methods for volumetric magnetic resonance imaging of the human brain," Proc. IEEE, 84(10):1488-1496, Oct. 1996.
  6. E. Gelenbe, A. Ghanwani, and V. Srinivasan, "Improved neural heuristics for multicast routing," IEEE J. Selected Areas in Communications, 15(2):147-155, 1997.
  7. E. Gelenbe, Z. H. Mao, and Y. D. Li, "Function approximation with the random neural network," IEEE Trans. Neural Networks, 10(1), January 1999.
  8. E. Gelenbe and K. Hussain, "Learning in the multiple class random neural network," IEEE Trans. Neural Networks, 13(6):1257-1267, November 2002.
  9. E. Gelenbe, T. Koçak, and R. Wang, "Wafer surface reconstruction from top-down scanning electron microscope images," Microelectronic Engineering, 75(2):216-233, August 2004.
  10. E. Gelenbe, Z. H. Mao, and Y. D. Li, "Function approximation by random neural networks with a bounded number of layers," Journal of Differential Equations and Dynamical Systems, 12(1-2):143-170, 2004.
  11. E. Gelenbe, "Steady-state solution of probabilistic gene regulatory networks," Phys. Rev. E, 76(1):031903, 2007.
  12. E. Gelenbe, "Network of interacting synthetic molecules in equilibrium," Proc. Royal Society A, 464(2096):2219-2228, 2008.
  13. E. Gelenbe and S. Timotheou, "Random neural networks with synchronised interactions," Neural Computation, 20:2308-2324, 2008.
  14. E. Gelenbe, "Steps toward self-aware networks," Comm. ACM, 52(7):66-75, July 2009.
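The stationary solution at the heart of the model (references 1 and 2 above) has a compact computational form: each neuron's excitation probability satisfies q_i = λ⁺_i / (r_i + λ⁻_i), where λ⁺_i and λ⁻_i are its total excitatory and inhibitory arrival rates and r_i its firing rate. The sketch below (illustrative only, not the speaker's code; the function and variable names are our own) solves these nonlinear equations by fixed-point iteration:

```python
import numpy as np

def rnn_steady_state(w_plus, w_minus, Lambda, lam, tol=1e-10, max_iter=1000):
    """Stationary excitation probabilities q_i of a Gelenbe random neural network.

    w_plus[i, j]  : rate of excitatory spikes sent from neuron i to neuron j
    w_minus[i, j] : rate of inhibitory spikes sent from neuron i to neuron j
    Lambda[i]     : external excitatory (Poisson) arrival rate at neuron i
    lam[i]        : external inhibitory (Poisson) arrival rate at neuron i
    Assumes every neuron has a positive total firing rate r_i.
    """
    r = w_plus.sum(axis=1) + w_minus.sum(axis=1)  # total firing rate of each neuron
    q = np.zeros(len(Lambda))
    for _ in range(max_iter):
        lam_plus = Lambda + q @ w_plus    # total excitatory arrival rate at each neuron
        lam_minus = lam + q @ w_minus     # total inhibitory arrival rate at each neuron
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

When the iteration converges with all q_i < 1, the network admits the product-form stationary distribution mentioned in the abstract, with each neuron's potential geometrically distributed with parameter q_i.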
      Back to Top



Title:    Is transformation-invariant object recognition based on locally invariant signal processing?
Speaker:    Alexander G. Dimitrov
Affiliation:    Department of Cell Biology and Neuroscience and
Center for Computational Biology,
Montana State University
Date:    Tuesday, October 6, 2009
Time:    11:00 am
Location:    Electrical Engineering Conference Room, 13th Floor Mudd
Abstract:    Recognition of visual objects is quite robust with respect to changing various parameters of the objects: location, size, orientation and such. Interest in the neural mechanisms underlying such invariant recognition has seen a recent resurgence. We have developed a method to characterize locally invariant object recognition - operational robustness under small changes in object parameters. This is achieved by modeling transformations which characterize other object parameters separately from cues to the object's identity. In this presentation I shall discuss the techniques used in this analysis. Application of the techniques to single unit observations from cat and macaque early cortical processing finds invariances to spatio-temporal translations. Local rotation and dilation invariances emerge when such single units are combined. I will discuss the implications for building broad invariant object recognizers by combining such locally invariant elements.
      Back to Top



Title:    Information Directed Molecular Technology: Programming Nucleic Acid Self-Assembly
Speaker:    Peng Yin
Affiliation:    Center for Biological Circuit Design, Caltech
Date:    Tuesday, March 31, 2009
Time:    11:00 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    This talk will describe my work and plans on engineering information directed self-assembly of nucleic acid (DNA/RNA) structures and devices, and on exploiting such systems to do useful molecular work, e.g. templating molecular entities into functional devices and materials, and probing and programming biological processes for bioimaging and therapeutic applications.

Specifically, I will first present a rudimentary programming language that enables user-friendly design of the dynamic behavior of synthetic nucleic acid systems (Yin et al, Nature, 451:318, 2008). The language is based on the graphical abstraction of a DNA hairpin motif, which physically implements a programmable kinetic trap. A high level molecular program specifies the connection of such kinetic traps on a free energy landscape, and defines the system's reaction pathway and dynamic behavior. A variety of molecular programs were experimentally executed: the catalytic formation of DNA branch junctions, a cross catalytic circuit, the triggered growth of a binary molecular "tree", and the autonomous unidirectional motion of a DNA "walker". In a related work, the abstraction of a 42 base single-stranded DNA motif is used to direct the self-assembly of molecular tubes with monodisperse, programmable circumferences (Yin et al, Science, 321:824, 2008). The self-assembled nucleic acid structures can serve as templates to organize molecular entities (e.g. proteins, gold nanoparticles, and carbon nanotubes) into functional materials. The dynamic self-assembling process can be interfaced with biological molecular processes, and provide powerful molecular instrumentation tools for systems biology and developmental biology research and potentially molecular therapeutic tools with single cell precision.

The above work and plans will bring us closer to the vision of information directed molecular technology: by programming a user-friendly molecular controller, humans freely specify and realize their functional needs in the molecular world.
      Back to Top



Title:    Feedback Regulation and Dosage Compensation in Gene Networks
Speaker:    Murat Acar
Affiliation:    CBCD Fellow
Center for Biological Circuit Design, Caltech
Date:    Tuesday, March 24, 2009
Time:    11:00 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    Feedback-mediated regulation of gene expression is ubiquitous in gene networks. Positive feedback structures can give rise to bistability or hysteresis while negative feedback loops can help cells tune the frequency of cellular switching between different gene expression states. By using the galactose utilization pathway of the yeast Saccharomyces cerevisiae as a model gene network, we experimentally quantified the contribution of different feedback topologies to gene network activity. We reprogrammed the rates of phenotypic switching between the ON and OFF states of the network with time scales ranging from hours to months. Next, in order to understand how the activity of the network is affected by its size (or dosage), we combinatorially constructed diploid 'network mutant' strains in which either one or two copies of the four regulatory genes (GAL2, GAL3, GAL4, and GAL80) of the network were present. Our results demonstrate that the activity of the galactose regulatory network is robust to variations in network size. Cells could use such a design principle to better cope with variations in network size caused, for example, by genome duplication events.
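The bistability that positive feedback can produce is easy to see in a minimal one-gene model. The sketch below (illustrative parameters only, not a model of the galactose network) integrates a gene that activates its own expression through a cooperative Hill-type loop; depending on the starting point, the same circuit relaxes to a low-expression OFF state or a high-expression ON state:

```python
def simulate(x0, basal=0.2, beta=4.0, K=2.0, n=4, gamma=1.0,
             dt=0.01, steps=10000):
    """Euler-integrate dx/dt = basal + beta*x^n / (K^n + x^n) - gamma*x.

    x is the expression level of a self-activating gene: 'basal' is
    leaky production, the Hill term is cooperative positive feedback,
    and gamma is first-order dilution/degradation.
    """
    x = x0
    for _ in range(steps):
        dx = basal + beta * x**n / (K**n + x**n) - gamma * x
        x += dt * dx
    return x

off_state = simulate(x0=0.5)   # starts low  -> settles on the OFF branch
on_state = simulate(x0=3.0)    # starts high -> settles on the ON branch
```

With these parameters the two stable states differ by more than an order of magnitude, and an unstable threshold sits between them; a transient perturbation that pushes x across the threshold flips the switch, which is the hysteresis the abstract refers to.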
      Back to Top



Title:    Human-aided Computing: Utilizing Implicit Human Processing to Classify Images
Speaker:    Pradeep Shenoy
Affiliation:    University of Washington
Date:    Tuesday, July 29, 2008
Time:    2:00 pm
Location:    BME Conference Room, 351 Engineering Terrace
Abstract:    We propose the notion of human-aided computing, where brain responses to image stimuli can be used to categorize images based on their content.
In this talk I will describe experiments that use an electroencephalograph (EEG) device to measure the presence and outcomes of implicit cognitive processing in response to visual stimuli consisting of real-world images. Our EEG classification system can distinguish between images containing faces, animals and inanimate objects, and benefits from multiple image presentations to the same or multiple users. Our system can also leverage computer vision techniques for performing this categorization task, and may potentially help improve the performance of these machine algorithms.
      Back to Top



Title:    A brain-machine interface for regaining control of a paralyzed arm: A primate model of cortically controlled functional electrical stimulation
Speaker:    Eric Pohlmeyer
Affiliation:    Biomedical Engineering
Northwestern University
Date:    Monday, July 14, 2008
Time:    11:00 am
Location:    DBME Conference Room, 351 Engineering Terrace
Abstract:    We have developed a system in which neural signals recorded from microelectrodes implanted within the brain can be used to control electrical stimulation of paralyzed hand and forearm muscles. Functional electrical stimulation (FES) has often been used to restore the capacity to grasp and manipulate objects to spinal cord injury patients by activating paralyzed muscles through the direct application of electric current. However, providing the user with the means to control the multiple degrees of freedom needed for dexterous manipulation of objects remains an important limitation. The goal of our lab has been to use signals recorded directly from the brain to allow more complex and varied control of grasping FES systems. We have previously shown that multielectrode recordings from the monkey primary motor cortex can be used to predict arm and hand muscle activity during complex reaching tasks. We have now developed a novel FES system which uses information about intended muscle activation extracted from the motor cortex to control stimulation in intramuscular electrodes. In our experiments, two monkey subjects used this cortically controlled FES system to regain voluntary wrist flexion following limb paralysis induced by blocks of the median and ulnar nerves. Cortically controlled FES of four forelimb muscles approximately doubled the maximum flexion force that the monkeys could achieve. Furthermore, the monkeys were able to voluntarily grade this force to match several different target levels, at speeds of only one-half to two-thirds normal, and force variations of only about twice normal. These results provide an important proof of concept, demonstrating the feasibility of a cortically controlled FES prosthesis for human spinal injured patients.
Such systems would offer a significant advantage to patients with injuries in the mid-cervical spinal cord, and potentially even greater benefits to high-cervical spinal cord injured patients with paralysis of the entire upper limb.
      Back to Top



Title:    Mapping the functional connectivity of cellular neural networks in order to investigate how networks represent and store information
Speaker:    Gabriel A. Silva
Affiliation:    Silva Research Group
Department of Bioengineering, Jacobs School of Engineering
Department of Ophthalmology, School of Medicine
Date:    Monday, June 9, 2008
Time:    11:00 am
Location:    414 Schapiro CEPSR
Abstract:    Memories define who we are and our place relative to the world we live in by providing causality and continuity between the immediate present and the past. Although the molecular and cellular mechanisms of learning and memory are generally well understood, we have a limited understanding of how these stereotyped processes scale to the level of cellular neural networks and bestow upon them the ability to encode, store, and recall information.

One cannot directly extrapolate how the molecular mechanisms give rise to dynamic properties at the network level. To address this, our lab has been developing novel experimental and computational methods for imaging and mapping functional signaling in cellular neural networks with single cell and sub-cellular resolution. These methods identify functional patterns of activation in a way that can be related back to the fundamental molecular and cellular neurobiology, effectively mapping the functional connectivity topology of networks as they store information in response to specific stimuli. Two approaches being developed by our lab will be discussed: The spatial graph connectivity model is designed to map the spatiotemporal evolution of the connectivity topology of experimentally measured functional networks with single cell resolution in large networks of neurons and glia, while a variation of an optical flow algorithm was adapted to derive and track second messenger signaling with sub-cellular single pixel resolution.

These functional methods complement other work in our lab using quantum dot nanotechnology to image cellular anatomy and structure at high spatial resolutions and high signal to noise ratio using epifluorescence microscopy.

Collectively, we are beginning to use these tools to address specific questions about how cellular neural networks encode information, and how they can be reverse engineered in order to design in silico network structures that are statistically and functionally similar to biological neural networks.
Speaker Bio:    Dr. Gabriel A. Silva is an assistant professor in the Departments of Bioengineering and Ophthalmology and the Neurosciences Program at the University of California, San Diego. He received his undergraduate degrees (1996) in human physiology (Hon. B.Sc.) and biophysics (B.Sc.) and a master's degree (1997) in neuroscience from the University of Toronto, Canada. After completing his PhD in neural engineering and neurophysiology at the University of Illinois, Chicago in 2001, he did a postdoctoral fellowship applying nanotechnology to neuroscience at Northwestern University, Chicago (2003). He joined the faculty at UCSD in 2004. His research focuses on investigating cell structure, signaling, and information processing in biological neural networks in health and disease.
      Back to Top



Title:    The Importance of Time in Visual Computation
Speaker:    Daniel A. Butts
Affiliation:    Institute of Computational Biomedicine
Weill Medical College of Cornell University
Date:    Thursday, May 1, 2008
Time:    1:00 pm
Location:    414 Schapiro CEPSR
Abstract:    Even when we look at a stationary visual scene, the image of the visual world projected on the retina is in constant motion due to ever-present movements of the eye. Dynamically changing visual stimuli, such as those created by eye movements, lead to highly reliable neuronal responses in the visual pathway, where timing can be precise down to the level of milliseconds. Because the visual stimulus driving such precision is significantly slower, investigation of neuronal responses in the context of natural vision has led to several insights, regarding both the role of time in representing sensory information and how fast neuronal signaling arises from more slowly changing input through the function of local circuitry.

This work thus provides a picture of the relevant timing relationships across visual neuron populations, and sets the stage for understanding how the cortex uses time in performing visual computation.
      Back to Top



Title:    Toward Brain Computer Interfacing
Speaker:    Prof. Dr. Klaus-Robert Mueller
Affiliation:    Technical University of Berlin Institute for Software Engineering and Theoretical Computer Science
Intelligent Data Analysis Group Fraunhofer FIRST, Berlin, Germany
Date:    Tuesday, April 15, 2008
Time:    1:00 pm
Location:    DBME Conference Room, 351 Engineering Terrace
Abstract:    Brain Computer Interfacing (BCI) aims at making use of brain signals for purposes such as the control of objects, spelling, and gaming. This talk will first provide a brief overview of Brain Computer Interfaces from a machine learning and signal processing perspective. In particular, it shows the wealth, the complexity, and the difficulties of the data available, a truly enormous challenge: in real time, a multivariate and strongly noise-contaminated data stream must be processed, and neuroelectric activities accurately differentiated.

Finally, I report in more detail on the Berlin Brain-Computer Interface (BBCI), which is based on EEG signals, and take the audience all the way from the measured signal, through preprocessing, filtering, and classification, to the respective application. BCI as a new channel for man-machine communication is discussed in a clinical setting and for gaming.

This is joint work with Benjamin Blankertz, Michael Tangermann, Matthias Krauledat, Claudia Sanelli, Stefan Hauffe (TU, Berlin) and Gabriel Curio (Charite, Berlin).
      Back to Top



Title:    The formation of neural circuits and behavior in songbirds
Speaker:    Timothy Gardner
Affiliation:    Post-doctoral Fellow, Massachusetts Institute of Technology
Date:    Monday, April 14, 2008
Time:    11:00am
Location:    DBME Conference Room, 351 Engineering Terrace
Abstract:    Songbirds form detailed auditory memories of other birds' songs and use these memories to guide vocal imitation. This natural behavior provides excellent material to study the rules that govern the wiring of the nervous system and the principles of circuit dynamics in relation to behavior.

In this talk, I will describe recent efforts to build a quantitative understanding of the vocal learning process. This includes developments in auditory processing theory, a study of the behavioral limits of vocal learning in songbirds, and recent in-vivo imaging of neural changes during song learning.
      Back to Top



Title:    Improving Deep Brain Stimulation in Parkinson's Disease Using Feedback Control
Speaker:    Sridevi V. Sarma
Affiliation:    Post-Doc Research Fellow
Harvard Medical School and MIT
Date:    Monday, March 24, 2008
Time:    11:00 am
Location:    414 CEPSR
Abstract:    An estimated 3 to 4 million people in the United States have Parkinson's Disease (PD), a chronic progressive neural disease that occurs when specific neurons in the midbrain degenerate, causing movement disorders such as tremor, rigidity, and bradykinesia. Currently, there is no cure to stop disease progression. However, surgery and medications are available to relieve some of the symptoms in the short term. A highly promising treatment is deep brain stimulation (DBS). DBS is a surgical procedure in which an electrode is inserted through a small opening in the skull and implanted in a targeted area of the brain. The electrode is connected to a neurostimulator (sitting inferior to the collarbone), which injects current back into the brain to regulate the pathological neural activity. Although DBS is virtually a breakthrough for PD, it is necessary to search for the optimal stimulation signal postoperatively. This calibration often takes several weeks or months because the process is trial-and-error. During a post-operative visit, the neurologist asks the patient to perform various motor tasks and makes subjective observations. Based on these, he/she tweaks the stimulation parameters and asks the patient to return in hours, days, or even weeks. The difficulty is that there are millions of stimulation parameters to choose from, though experience has reduced this to roughly 1000 options. In this talk, I will describe my current research efforts, which are (1) to reduce calibration time to days by developing a systematic testing paradigm using feedback control principles, and (2) to develop a new feedback stimulation paradigm that allows broader classes of DBS signals to be administered. The former will allow neurologists to treat more patients with DBS and significantly cut medical costs, and the latter may further improve patients' responses to DBS while reducing the need for replacement surgeries.
      Back to Top



Title:    Rational Design of Electrotherapy Devices: Translational Neural Engineering at CCNY BME
Speaker:    Marom Bikson
Affiliation:    Department of Biomedical Engineering
The City College of New York of CUNY
Date:    Monday, March 10, 2008
Time:    11:00 am
Location:    Biomedical Engineering Conference Room, 351 Eng. Terrace
Announcement:    PDF, PowerPoint
Abstract:    Clinical application of electrical stimulation is a promising treatment for a range of neurological and psychiatric disorders. Despite the establishment of therapeutic electrical stimulation as a standard treatment for several diseases (including Parkinson's disease and depression), fundamental challenges remain in the design of safe and effective technology. This talk summarizes ongoing basic and translational research by our group, with the overall goal of developing targeted electrotherapies. Experimental tools ranging from single-cell recording and complex morphological reconstruction to system-level finite-element modeling and prototyping are used by our group toward the rational design of therapies. Non-invasive (rTMS, tDCS, TES, ECT) and invasive (DBS, SCS) approaches, diverse cell targeting (neurons, glia, endothelial cells), concurrent drug delivery (blood-brain barrier permeabilization), spatial focality, and safety optimization (Joule heating, electroporation) are considered.
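The finite-element models mentioned above solve for the electric potential that stimulation induces in tissue. As a hedged toy illustration (far simpler than the group's actual models), a 1D finite-difference relaxation of Laplace's equation between two electrode plates captures the basic idea:

```python
import numpy as np

# Toy 1D model: potential across a tissue slab between two electrodes held
# at fixed voltages, found by relaxing Laplace's equation on a uniform grid.
n = 101                          # grid points across the slab
v = np.zeros(n)
v[0], v[-1] = 1.0, 0.0           # boundary conditions: anode 1 V, cathode 0 V

for _ in range(20000):           # Jacobi relaxation toward steady state
    v[1:-1] = 0.5 * (v[:-2] + v[2:])

print(round(v[n // 2], 3))       # midpoint of the linear voltage profile
```

In a homogeneous slab the converged profile is linear, so the midpoint sits at 0.5 V; real head models replace this with 3D finite-element meshes and heterogeneous conductivities.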
Biography:    Professor Bikson's research falls broadly into two related topics: 1) the synchronization of neuronal activity in central networks during physiological oscillations (e.g. cognition) and pathological oscillations (epilepsy), with a specific emphasis on the role of non-synaptic (e.g. gap junction, glial) mechanisms; and 2) computer-neural interfaces, including assessing the risks of exposure to 'environmental' electric fields and developing stimulation protocols for the control of abnormal neuronal function (Deep Brain Stimulation).
      Back to Top



Title:    Biologically Inspired Sensing and Coding of Signals
Speaker:    John G. Harris
Affiliation:    University of Florida
Date:    Friday, January 25, 2008
Time:    2:00 PM
Location:    Electrical Engineering Conference Room, 13th Floor Mudd
Announcement:    PDF, PowerPoint
Abstract:    We discuss the role of biologically inspired spike representations in various engineering applications, including sensor design, time-based signal processing, and power-efficient neural recording circuitry for brain-machine interfaces. These spike-based systems are shown to outperform conventional approaches on various performance metrics such as power consumption, size, SNR, signal bandwidth, and dynamic range. We also consider the implications of this work for our understanding of neurobiological systems.
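As a hedged sketch of the kind of spike representation the abstract alludes to (details are not from the talk), an integrate-and-fire time encoder converts an analog signal into spike times: the integrator accumulates the input, and each threshold crossing emits a spike and resets, so the signal is carried entirely by spike timing rather than amplitude:

```python
import numpy as np

def if_encode(signal, dt, threshold):
    """Integrate-and-fire time encoding: return spike times (seconds)."""
    spikes, acc = [], 0.0
    for i, s in enumerate(signal):
        acc += s * dt              # accumulate the input
        if acc >= threshold:
            spikes.append(i * dt)  # threshold crossing -> spike time
            acc -= threshold       # reset by subtraction
    return spikes

dt = 1e-3
t = np.arange(0, 1, dt)
x = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)    # positive test signal
spike_times = if_encode(x, dt, threshold=0.05)
print(len(spike_times))   # roughly integral(x)/threshold, about 20 spikes
```

The instantaneous spike rate tracks the signal amplitude, which is why such codes can be both information-rich and power-efficient in hardware.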
Biography:    Dr. John G. Harris received his BS and MS degrees in Electrical Engineering from MIT in 1983 and 1986. He earned his PhD from Caltech in the interdisciplinary Computation and Neural Systems program in 1991. After a two-year postdoc at the MIT AI Lab, Dr. Harris joined the Electrical and Computer Engineering Department at the University of Florida (UF). He is currently a full professor and leads the Hybrid Signal Processing Group in researching biologically inspired circuits, architectures, and algorithms for sensing and signal processing. Dr. Harris has published over 100 research papers and patents in this area. He co-directs the Computational NeuroEngineering Lab and has a joint appointment in the Biomedical Engineering Department at UF.
      Back to Top


Related Seminars

Title:    Spectral Analysis of Brain Signals
Speaker:    Hernando Ombao
Affiliation:    Brown University
Department of Community Health
Center for Statistical Sciences
Date:    Monday, March 31, 2008
Time:    12:00 - 1:30pm
Location:    1255 Amsterdam Ave., Room 903
Reception:    Tea and coffee served at 11:30am, Room 1025
Abstract:    In many neuroscience experiments, a key goal is to investigate the oscillatory behavior of brain signals as quantified by spectral analysis. First, we review some basic ideas of Fourier analysis of stationary time series and highlight its connection to analysis of variance. Second, we discuss current models and methods for analyzing non-stationary processes (i.e., processes whose spectral decomposition changes over time). Stochastic representations using localized basis functions will be discussed. The talk will conclude with some current investigations, including spatio-temporal-spectral analysis and classification of biological signals. These methods will be illustrated using electroencephalograms (EEGs) and magnetoencephalograms (MEGs).
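The stationary case the talk begins with can be sketched in a few lines: the periodogram (the squared magnitude of the discrete Fourier transform, normalized by record length) estimates how a signal's variance is distributed over frequency. The synthetic 10 Hz "alpha-band" oscillation below is an illustrative assumption, not data from the talk:

```python
import numpy as np

fs = 200.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s of data
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Periodogram: |DFT|^2 normalized by the record length
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / x.size

peak = freqs[np.argmax(power)]
print(peak)   # the spectral peak lands at (or very near) 10 Hz
```

The connection to analysis of variance comes from Parseval's theorem: summing the periodogram over all frequencies recovers the total variance, so each frequency band accounts for an identifiable share of it. Non-stationary signals, the talk's main subject, require letting this decomposition change over time.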
      Back to Top



Title:    Biologically Inspired Sensorimotor Processing
Speaker:    Daniel Lee
Affiliation:    GRASP (General Robotics, Automation, Sensing, Perception) Lab
Dept. of Electrical and Systems Engineering
University of Pennsylvania
Date:    Tuesday, March 25, 2008
Time:    11:00 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    How do animals process the tremendous amount of information coming from their senses, in time to take appropriate actions with their muscles? This type of robust sensorimotor processing is still difficult to replicate in robots even with the latest computers, sensors, and actuators. However, new advances in machine learning that borrow techniques from statistical physics, information theory, and differential geometry are helping to create new algorithms that replicate behaviors that animals routinely perform. I will describe some of my lab's recent work on artificial sensorimotor processing systems and demonstrate some of their latest feats and tricks.
Biography:    Daniel D. Lee is currently Graduate Chair, Raymond S. Markowitz Faculty Fellow, and Associate Professor of Electrical and Systems Engineering at the University of Pennsylvania. He received his B.A. in Physics from Harvard University in 1990, and his Ph.D. in Condensed Matter Physics from the Massachusetts Institute of Technology in 1995. Before coming to Penn, he was a researcher at Bell Laboratories, Lucent Technologies, from 1995 to 2001 in the Theoretical Physics and Biological Computation departments. He has received the NSF CAREER award and the University of Pennsylvania Lindback award for distinguished teaching. He is a fellow of the Hebrew University Institute of Advanced Studies in Jerusalem and a foreign affiliate of the Korea Advanced Institute of Science and Technology, and has helped organize the US-Japan National Academy of Engineering Frontiers of Engineering symposium. His research focuses on understanding the general principles that biological systems use to process and organize information, and on applying that knowledge to build better artificial sensorimotor systems. He resides in Leonia, New Jersey, with his wife Lisa, six-year-old son Jordan, and four-year-old daughter Jessica.
      Back to Top



Title:    Bio-ionic Neural Interfaces
Speaker:    Luke S. Theogarajan
Affiliation:    Research Laboratory of Electronics
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Date:    Thursday, March 13, 2008
Time:    10:00 am
Location:    Interschool Lab, Room 750 Schapiro CEPSR
Abstract:    Retinal prostheses are being developed around the world in the hope of restoring useful vision to patients suffering from diseases such as age-related macular degeneration and retinitis pigmentosa. This talk will examine two approaches to developing such a prosthesis: the first is an electrically based retinal prosthesis; the second is a novel bio-ionic neural interface.

The central component of an electrical retinal prosthesis is a wirelessly powered and driven stimulator chip. In this talk we will discuss the design of a 15-channel, low-power, fully implantable stimulator chip. The chip is powered wirelessly and receives wireless commands. It features a CMOS-only ASK detector, a single-to-differential converter based on a novel feedback loop, a low-power adaptive-bandwidth DLL, and 15 programmable current sources that can be controlled via four commands.

Though electronics offer a superior computational platform, the electrical interface to neural tissue is not always optimal. The key limitation of the electrical interface is fundamental: electronics and the natural neural environment are incompatible in both form and function. One of the main issues with the electrical interface is the need for large amounts of current, which in turn necessitates large electrodes for safety reasons, leading to stimulation of large populations of neurons rather than a select few.

Clearly there is a need for a fundamentally different approach to neural interfaces. The ultimate challenge is to design a self-sufficient neural interface. The ideal device will lend itself to seamless integration with the existing neural architecture, and communication with the neural tissue should then be performed via chemical rather than electrical messages. However, the catastrophic destruction of neural tissue that release of large quantities of a neuroactive species would cause precludes storing quantities large enough to last the lifetime of the device. The ideal device should therefore actively sequester the chemical species from the body and release it upon receiving appropriate triggers, in a power-efficient manner.

The use of ionic gradients, specifically K+ ions, as an alternative chemical stimulation method will be examined in this talk. The key advantage is that the required ions can readily be sequestered from the background extracellular fluid. Results from in-vitro stimulation of rabbit retina show that modest increases in K+ ion concentration are sufficient to elicit a neural response.
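A hedged aside (standard electrophysiology, not a result from the talk) helps explain why modest K+ increases suffice: the Nernst equation sets the K+ reversal potential, and raising extracellular K+ shifts it toward depolarization. The concentrations below are typical textbook values, not measurements from the experiments described:

```python
import math

# Nernst potential for K+ at body temperature.
R, T, F, z = 8.314, 310.0, 96485.0, 1   # gas constant, temp (K), Faraday, valence
K_in = 140.0                            # intracellular K+ in mM (typical value)

def e_k(K_out):
    """K+ reversal potential in mV for a given extracellular K+ (mM)."""
    return 1000 * (R * T / (z * F)) * math.log(K_out / K_in)

print(round(e_k(4.0)))    # about -95 mV at a normal 4 mM extracellular K+
print(round(e_k(10.0)))   # about -70 mV after a modest rise to 10 mM
```

Because the potential depends on the logarithm of the concentration ratio, even a few millimolar of added K+ moves the membrane tens of millivolts toward threshold.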

The talk will then outline the components needed to build a neural interface based on this ionic stimulation method. One key component is a self-assembling potassium-selective membrane. To achieve low power, the membranes must be ultrathin, allowing efficient operation in the diffusion-limited transport regime. One way to achieve this is lyotropic self-assembly; unfortunately, conventional lipid bilayers cannot be used because they are not robust enough. Furthermore, the membrane cannot be made potassium-selective simply by incorporating ion carriers, since these would eventually leach out of the membrane. A single solution that addresses all of these issues will then be discussed. The talk will conclude with some of the exciting opportunities and challenges that lie at the intersection of biology, chemistry, and electrical engineering.
      Back to Top