Sebastian Seung, Ph.D.
Professor of Computational Neuroscience
Investigator, Howard Hughes Medical Institute

Department of Brain and Cognitive Sciences
Building: 46-5065
Lab: Seung Lab
Email: seung@mit.edu

Neural networks
Sebastian Seung studies neural networks using mathematical models, computer algorithms, and circuits of biological neurons in vitro. His interests include computational neuroanatomy, the idea of synaptic plasticity as an optimization algorithm, and persistent activity in neural integrators.

Computational neuroanatomy
Our laboratory is part of a small but growing community of scientists working to transform neuroanatomy into a high-throughput, data-rich field of science. Our ultimate goal is to create automated systems that will take a sample of brain tissue as input and generate its "circuit diagram," a list of all its neurons and their synaptic connections.

Determining this information, dubbed the "connectome," involves tracing the "wires" of the brain, its axons and dendrites. Because the thinnest axons are about 100 nm in diameter, it is necessary to image the structure of the brain at nanoscale resolution. We are collaborating with Winfried Denk (Max Planck Institute, Heidelberg), who has invented a new technique called serial block-face scanning electron microscopy. This automated technique yields three-dimensional images of the brain at nanoscale resolution.

Our role in this collaboration is to provide algorithms that take these raw images and extract information about neuronal morphology and synaptic connectivity. This is an image processing problem of unprecedented scale: even a sample of modest dimensions yields an enormous amount of data at nanoscale resolution (a single cubic millimeter of tissue amounts to roughly a petabyte). To solve a problem of this scale with high reliability, we are employing a novel approach based on machine learning techniques. Our work on "computational neuroanatomy" represents a new point of convergence for computer science and neuroscience.
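
To give a concrete flavor of the image processing involved, here is a toy sketch (an illustration under stated assumptions, not the actual pipeline): label each pixel of an EM section as boundary or interior, then take connected components of the interior as candidate neuron cross-sections. The boundary detector below is a simple heuristic standing in for a learned classifier.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Stand-in for one 2D section of a nanoscale EM image stack.
image = rng.random((256, 256))

def predict_boundaries(img):
    # Placeholder for a learned boundary detector (e.g., a classifier
    # trained on human-traced examples); here, a smoothed
    # gradient-magnitude heuristic.
    gy, gx = np.gradient(ndimage.gaussian_filter(img, sigma=2.0))
    return np.hypot(gx, gy)

boundary_prob = predict_boundaries(image)

# Pixels with low boundary probability lie inside some neuron; connected
# components of the interior are candidate neuron cross-sections, which a
# full reconstruction would link across sections into 3D morphologies.
interior = boundary_prob < np.percentile(boundary_prob, 80)
labels, n_fragments = ndimage.label(interior)
print(n_fragments, "candidate neuron fragments in this section")

Even this toy version hints at the difficulty: a single mislabeled pixel can merge two neurons or split one in two, which is why reliability is as important as scale.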

Optimizing with synapses
Many types of biological learning can be regarded as optimizations. For example, operant conditioning can be viewed as a process by which animals adapt their behaviors so as to maximize reward. The adage that "practice makes perfect" refers to the iterative improvement of complex motor skills like playing the piano or serving a tennis ball.

It is widely believed that long-lasting modifications of synaptic connections are responsible, at least in part, for the changes in behavior called learning. In our laboratory, we are interested in the hypothesis that one function of synaptic plasticity is to perform the computations required to optimize neural circuits.

We have proposed a number of hypothetical synaptic plasticity rules that are driven by the covariance of a global reward signal with various measures of neural activity that are local to the synapse.

These rules effectively allow synapses to estimate the gradient of the expected reward, and thereby implement a procedure known in computer science as stochastic gradient ascent. Through our modeling, we have shown how these rules can be applied to biophysically realistic, spiking neural network models. Furthermore, we have modeled a number of specific examples of biological learning as arising from our hypothetical synaptic plasticity rules.
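
A minimal sketch of a rule in this family (rate-based rather than spiking, with a scalar reward delivered once per trial; all variable names and the toy task are hypothetical):

import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 50, 10
w = 0.1 * rng.standard_normal((n_post, n_pre))  # synaptic weights
r_mean = 0.0                # running estimate of expected reward
eta, tau_r = 0.002, 50.0    # learning rate, reward-averaging time constant

for trial in range(20000):
    pre = rng.random(n_pre)                    # presynaptic rates
    noise = 0.1 * rng.standard_normal(n_post)  # trial-to-trial response variability
    post = w @ pre + noise                     # noisy postsynaptic response
    reward = -np.sum((post - 1.0) ** 2)        # toy task: drive every response toward 1

    # Covariance rule: each synapse changes in proportion to the covariance of
    # the global reward signal with activity local to the synapse (here, the
    # postsynaptic fluctuation times the presynaptic rate). On average this
    # follows the gradient of expected reward: stochastic gradient ascent.
    w += eta * (reward - r_mean) * np.outer(noise, pre)
    r_mean += (reward - r_mean) / tau_r        # slowly track the mean reward

The appeal of such rules is that the only nonlocal quantity is the scalar reward, which a diffusely projecting neuromodulator could plausibly broadcast to every synapse.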

Motivated by this theoretical work, we are also doing experiments on synaptic physiology. Our aim is to find synaptic plasticity mechanisms that could serve the computational function of optimization. In particular, we focus on the interaction between neuromodulators and local activity signals in inducing synaptic plasticity. One neuromodulator of special interest is dopamine, which is thought to play the role of a reward signal in the brain.

Persistent activity in neural integrators
Humans and other animals possess neural integrators, brain modules specialized for performing the mathematical operation of integrating a time-varying signal. This computation is important for behaviors such as motor control, navigation, and decision making. While neural integrators have been localized to particular brain areas, a mechanistic explanation of how neurons integrate is still lacking.

Transient stimuli to neural integrators produce sustained changes in the rate of action potential discharge that persist for up to tens of seconds. Such persistent neural activity has been observed in many brain areas, not just in neural integrators, so its mechanisms are of very general interest. Integration can be regarded as the simplest form of working memory, the ability to store information and actively manipulate it. Therefore, understanding how neurons integrate could shed light on how working memory is implemented by the brain.

In collaboration with David Tank's laboratory at Princeton, we have been studying the velocity-to-position neural integrator of the oculomotor system. All types of eye movements involve this integrator, which transforms angular velocity signals into changes in the angular position of the eyes. Burst inputs from saccadic command neurons are integrated to produce step changes in eye position, and velocity signals from the semicircular canals are integrated to produce the changes in eye position required by the vestibulo-ocular reflex, which compensates for head movements. Our goal is to understand the cellular and circuit mechanisms responsible for persistent activity in this neural integrator.
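
One standard class of models for this kind of persistent activity is a linear feedback network in which recurrent excitation is tuned to cancel the neurons' intrinsic leak. The following is a minimal sketch of such a line-attractor model (a textbook idealization, not a claim about the actual oculomotor circuit):

import numpy as np

n, dt, tau = 100, 0.001, 0.1  # neurons, time step (s), membrane time constant (s)

# Uniform recurrent weights with total feedback gain 1: along the uniform
# activity mode, feedback exactly cancels the leak, creating a line attractor.
W = np.ones((n, n)) / n

rates = np.zeros(n)
positions = []
for step in range(3000):
    # Brief velocity pulse (a stand-in for a saccadic burst command) at t = 0.5 s.
    velocity = 50.0 if 500 <= step < 520 else 0.0
    # Leaky rate dynamics: dr/dt = (-r + W r + input) / tau. With the pulse
    # off, -r + W r = 0 along the uniform mode, so the activity persists.
    rates += dt / tau * (-rates + W @ rates + velocity)
    positions.append(rates.mean())  # persistent rate read out as eye position

A well-known weakness of this mechanism is its need for fine tuning: if the feedback gain deviates slightly from one, the stored eye position drifts, and measuring such drift is one way integrator models are tested against experiment.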

Selected publications

Loewenstein Y, Seung HS. Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity. Proc Natl Acad Sci U S A 103:15224-9 (2006).

Fiete IR, Seung HS. Gradient learning in spiking neural networks by dynamic perturbation of conductances. Phys Rev Lett 97:048104 (2006).

Seung HS. Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron 40:1063-73 (2003).