Current Research

Algorithm and software development for automated neural circuit tracing
Automating the process of neural circuit reconstruction is one of the foremost challenges of the emerging field of connectomics. At present, reliable reconstruction of neurites can only be done manually or semi-automatically, which can be extremely time-consuming. Because neurites of many cell types can span the entire brain of an animal (e.g. cortical pyramidal cell axons) or the entire animal itself (e.g. C. elegans neurons), circuit-level reconstructions must be performed on large spatial scales. To this end, we are developing algorithms and software for automated tracing of sparsely labeled neurites from 3D light microscopy stacks of images. Our algorithms utilize the latest image processing and machine learning methods, and the software, named the Neural Circuit Tracer, is being developed in Java Swing. The Neural Circuit Tracer combines a number of functionalities of ImageJ with several newly developed functions for automated tracing and manual editing. The software makes it possible to use ImageJ plug-ins and to add functions written in MATLAB.
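The idea of intensity-guided tracing can be illustrated with a minimal sketch: treat the image stack as a graph of voxels, assign each voxel a cost inversely related to its brightness, and extract a neurite centerline as the cheapest path between two seed points (Dijkstra's algorithm). This is a toy illustration of the general approach, not the Neural Circuit Tracer's actual algorithm; the function name and cost function are assumptions for the example.

```python
import heapq
import numpy as np

def trace_neurite(stack, seed, target):
    """Trace a neurite centerline between two voxels as the brightest path
    through a 3D image stack, using Dijkstra's algorithm with a step cost
    inversely related to voxel intensity. Illustrative sketch only."""
    cost = 1.0 / (stack.astype(float) + 1e-6)  # bright voxels are cheap to cross
    dist = np.full(stack.shape, np.inf)
    prev = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == target:
            break
        if d > dist[v]:
            continue  # stale heap entry
        z, y, x = v
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < stack.shape[i] for i in range(3)):
                nd = d + cost[n]
                if nd < dist[n]:
                    dist[n] = nd
                    prev[n] = v
                    heapq.heappush(heap, (nd, n))
    # walk back from target to seed to recover the centerline
    path = [target]
    while path[-1] != seed:
        path.append(prev[path[-1]])
    return path[::-1]
```

On a synthetic stack containing a single bright line, the traced path follows that line; real tracing must additionally handle branching, gaps in labeling, and noise, which is where the machine learning components come in.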

Theoretical analysis of learning and long-term memory formation
Understanding the roles different cortical areas play in learning and long-term memory storage is a fundamental unsolved problem in neuroscience. It is generally established that traces of stored memories are written into cortical circuits by a variety of plasticity mechanisms. In the adult cerebral cortex, plasticity can be mediated by potentiation and depression of synaptic strengths and by structural reorganization of circuits through growth and retraction of dendritic spines. Because the strengths of excitatory synapses on spines are correlated with spine head volumes, and because the number of connectivity patterns that can be built with structural plasticity depends on spine lengths, analysis of spine shapes can provide clues to the roles different cortical areas play in learning and long-term memory storage. By analyzing spine head volume and spine length distributions measured in different cortical systems, we determined the "generalized cost" associated with dendritic spines. This cost depends on spine shape universally, i.e. the dependence is the same in different systems. We show that synaptic strength and structural synaptic plasticity mechanisms are in statistical equilibrium with each other, that the numbers of dendritic spines in different cortical areas are nearly optimally chosen for memory storage, and that the distributions of spine lengths and head volumes are governed by a single parameter: the effective temperature. Our results suggest that the effective temperature may be viewed as a measure of circuit stability or of the longevity of stored memories.
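The statistical-equilibrium picture can be illustrated with a minimal sketch: if spine costs c follow a Boltzmann distribution, p(c) ∝ exp(-c/T), then the effective temperature T can be estimated from a sample of measured costs by maximum likelihood, which for an exponential distribution reduces to the sample mean. This is an illustrative assumption for the example; in the actual analysis the cost is derived from measured spine head volumes and lengths.

```python
import numpy as np

def effective_temperature(costs):
    """Maximum-likelihood estimate of the effective temperature T, assuming
    spine costs c >= 0 follow a Boltzmann distribution p(c) ~ exp(-c/T).
    For an exponential distribution the MLE of T is the sample mean."""
    return np.asarray(costs, dtype=float).mean()

# Usage sketch: recover T from synthetic Boltzmann-distributed costs.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.5, size=50_000)  # true T = 1.5 (arbitrary units)
T_hat = effective_temperature(samples)
```

Comparing T estimated in this way across cortical areas is what allows the effective temperature to be read as a measure of circuit stability: a "colder" area has a narrower cost distribution and, by this interpretation, longer-lived memory traces.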

Capacity of artificial neural networks with elements of biological connectivity
Many basic features of synaptic connections are ubiquitous among cortical systems: (i) Connections received by excitatory neurons in the cerebral cortex are predominantly excitatory, with the fraction of inhibitory inputs typically in the 10-20% range. (ii) In spite of all-to-all local potential connectivity, excitatory connectivity is very sparse, e.g. the probabilities of finding connected excitatory neuron pairs are typically less than 20%. (iii) The distributions of excitatory and inhibitory connection strengths have stereotypic shapes, described by coefficients of variation in the 0.7-1.0 range. In an attempt to relate these observations to learning and memory, we theoretically analyzed the associative memory capacity of McCulloch-Pitts neurons receiving excitatory and inhibitory inputs. We show that at critical capacity the coefficient of variation in connection strength is uniquely related to the probability of connection. This theoretical relation is consistent with a large number of experimental observations made in different cortical systems. The model's capacity for associative memory storage depends on the fraction of inhibitory inputs, and the connection probability remains below 50% for excitatory inputs while being typically higher for inhibitory connections, in agreement with recent experimental findings.
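A toy simulation in the spirit of this analysis: train a McCulloch-Pitts (perceptron) neuron whose weights are constrained to be non-negative (purely excitatory inputs, enforced by clipping at zero after each update), then read off the connection probability as the fraction of non-silent synapses and the coefficient of variation of the surviving strengths. All parameter values below are illustrative, not taken from the paper, and the clipped learning rule is a simple stand-in for the full theoretical treatment.

```python
import numpy as np

def train_excitatory_perceptron(X, y, n_epochs=500, lr=0.1):
    """Perceptron learning with all synaptic weights constrained non-negative
    (purely excitatory inputs); the firing threshold theta is learned too.
    Toy sketch of a sign-constrained McCulloch-Pitts neuron."""
    w = np.zeros(X.shape[1])
    theta = 0.0
    for _ in range(n_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w >= theta else 0
            if pred != yi:
                w += lr * (yi - pred) * xi
                np.clip(w, 0.0, None, out=w)   # enforce the excitatory sign constraint
                theta -= lr * (yi - pred)
                errors += 1
        if errors == 0:
            break  # all associations stored
    return w, theta

# Usage: at a load well below critical capacity, many synapses are driven
# to zero; the rest set the connection probability and the CV of strengths.
rng = np.random.default_rng(1)
N, m = 200, 20                              # inputs and stored associations (illustrative)
X = rng.binomial(1, 0.5, (m, N)).astype(float)
y = rng.binomial(1, 0.5, m)
w, theta = train_excitatory_perceptron(X, y)
nonzero = w[w > 0]
p_conn = nonzero.size / N                   # probability of a realized connection
cv = nonzero.std() / nonzero.mean()         # coefficient of variation of strengths
acc = np.mean((X @ w >= theta).astype(int) == y)
```

Sweeping the number of stored associations m toward the critical capacity in such a simulation traces out the predicted relation between the coefficient of variation and the connection probability.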

Effect of neuron morphology on synaptic connectivity
Synaptic connectivity patterns in the brain are constrained by the morphologies of the underlying neurons. Traditionally, Sholl analysis has been used to quantitatively describe neural shapes. This analysis, however, has no direct bearing on synaptic connectivity, which depends on correlations between pre- and postsynaptic neuron morphologies. A proper structural descriptor of connectivity between a pair of neurons is the number of potential synapses they form. A potential synapse is a micro-scale apposition between branches of neurons that can be transformed into an actual synaptic connection. To understand how potential connectivity is affected by the details of neuron morphology, we generate distributions of potential synapse numbers for pairs of excitatory and inhibitory neurons reconstructed in 3D from various species and brain regions. Our preliminary results show that the distributions of potential synapse numbers are well described by a family of binomial functions of two parameters: the expected number of potential synapses and the success probability. The former parameter can be calculated from the densities of pre- and postsynaptic neurites in the arbors' overlap region, while the latter is cell-type specific. These findings make it possible to generate biologically plausible structural connectivity diagrams in silico, which is an important first step toward realistic modeling of neural circuit dynamics.
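The counting of potential synapses can be sketched with a minimal example: represent each arbor as a set of 3D sample points and count an apposition wherever an axonal point comes within a cutoff distance s of the dendritic arbor. This is a simplification for illustration; in practice arbors are traced as branching segments, and appositions are deduplicated along branches.

```python
import numpy as np

def count_potential_synapses(axon_pts, dend_pts, s):
    """Count potential synapses between an axonal and a dendritic arbor,
    each given as an (n, 3) array of sample points (e.g. in micrometers).
    An axonal point within distance s of the dendritic arbor counts as
    one potential synapse. Simplified illustrative sketch."""
    # pairwise distances between every axonal and dendritic point
    d = np.linalg.norm(axon_pts[:, None, :] - dend_pts[None, :, :], axis=-1)
    # an axonal point forms an apposition if its nearest dendritic point is close enough
    return int(np.count_nonzero(d.min(axis=1) <= s))

# Usage sketch: two arbors sampled at random within a shared 100 um overlap region.
rng = np.random.default_rng(0)
axon = rng.uniform(0.0, 100.0, (500, 3))
dend = rng.uniform(0.0, 100.0, (500, 3))
n_potential = count_potential_synapses(axon, dend, s=2.0)
```

Repeating such a count over many neuron pairs yields the empirical distribution of potential synapse numbers, which the preliminary results describe with a binomial family whose expected count follows from the neurite densities in the overlap region.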

  • Contact Information:

    Prof. Armen Stepanyants
    Department of Physics
    Northeastern University
    110 Forsyth St.
    Boston, MA 02115
    Tel: (617) 373-2944
    Fax: (617) 373-2943
