Sensory Systems/NeuralSimulation


Simulating Action Potentials

Action Potential

The "action potential" is the stereotypical voltage change that is used to propagate signals in the nervous system.

Action potential – Time dependence

With the mechanisms described below, an incoming stimulus (of any sort) can lead to a change in the voltage potential of a nerve cell. Up to a certain threshold, that's all there is to it ("Failed initiations" in Fig. 4). But when the threshold of the voltage-gated ion channels is reached, a feedback reaction sets in that almost immediately opens the Na+ ion channels completely ("Depolarization" below): the permeability for Na+ (which in the resting state is about 1% of the permeability of K+) becomes about 20 times larger than that of K+. As a result, the voltage rises from about -60 mV to about +50 mV. At that point internal reactions start to close (and block) the Na+ channels, and open the K+ channels to restore the equilibrium state. During the following "refractory period" of about 1 ms, no depolarization can elicit an action potential. Only when the resting state is reached can new action potentials be triggered.

To simulate an action potential, we first have to define the different elements of the cell membrane, and how to describe them analytically.

Cell Membrane

The cell membrane is a water-repelling, almost impermeable lipid double-layer. The real power in processing signals does not come from the cell membrane itself, but from ion channels that are embedded into that membrane. Ion channels are proteins which are embedded into the cell membrane, and which can selectively be opened for certain types of ions. (This selectivity is achieved by the geometrical arrangement of the amino acids which make up the ion channels.) In addition to the Na+ and K+ ions mentioned above, ions that are typically found in the nervous system are the cations Ca2+ and Mg2+, and the anion Cl-.

States of ion channels

Ion channels can take on one of three states:

  • Open (For example, an open Na-channel lets Na+ ions pass, but blocks all other types of ions).
  • Closed, with the option to open up.
  • Closed, unconditionally.

Resting state

The typical default situation – when nothing is happening – is characterized by K+ channels that are open, while the other channels are closed. In that case two forces determine the cell voltage:

  • The (chemical) concentration difference between the intra-cellular and extra-cellular concentration of K+, which is created by the continuous activity of the ion pumps described above.
  • The (electrical) voltage difference between the inside and outside of the cell.

The equilibrium is defined by the Nernst equation:

E_X = \frac{RT}{zF} \ln \frac{[X]_o}{[X]_i}

R ... gas constant, T ... temperature, z ... ion valence, F ... Faraday constant, [X]_{o/i} ... ion concentration outside/inside the cell. At 25° C, RT/F is about 25 mV; converting the natural to the decadic logarithm, this leads to a resting voltage of

E_X = \frac{58\ mV}{z} \log \frac{[X]_o}{[X]_i}

With typical K+ concentrations inside and outside of neurons, this yields E_{K+} = -75 mV. If the ion channels for K+, Na+ and Cl- are considered simultaneously, the equilibrium situation is characterized by the Goldman equation

V_m = \frac{RT}{F} \ln \frac{P_K [K^+]_o + P_{Na} [Na^+]_o + P_{Cl} [Cl^-]_i}{P_K [K^+]_i + P_{Na} [Na^+]_i + P_{Cl} [Cl^-]_o}

where P_X denotes the permeability of ion X, and [X] its concentration. With typical ion concentrations, the cell in its resting state has a negative polarity of about -60 mV.
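The two equations above are easy to evaluate numerically. The following short Python sketch computes the Nernst potential for K+ and the Goldman voltage for K+, Na+ and Cl-; the concentrations and relative permeabilities are typical squid-axon textbook values (assumptions, not values given in this text), chosen so that the results come out close to the -75 mV and -60 mV quoted above.

import numpy as np

R = 8.314       # gas constant [J/(mol K)]
F = 96485.0     # Faraday constant [C/mol]
T = 298.0       # temperature [K], i.e. 25 deg C

def nernst(c_out, c_in, z=1):
    '''Nernst potential [V] of an ion with valence z.'''
    return R*T/(z*F) * np.log(c_out/c_in)

def goldman(P, c_out, c_in):
    '''Goldman voltage [V] for the ions (K+, Na+, Cl-).
    P, c_out, c_in are 3-element sequences; the Cl- terms enter "inverted"
    because Cl- carries a negative charge.'''
    P_K, P_Na, P_Cl = P
    K_o, Na_o, Cl_o = c_out
    K_i, Na_i, Cl_i = c_in
    num = P_K*K_o + P_Na*Na_o + P_Cl*Cl_i
    den = P_K*K_i + P_Na*Na_i + P_Cl*Cl_o
    return R*T/F * np.log(num/den)

if __name__ == '__main__':
    # assumed squid-axon concentrations [mM] and relative permeabilities
    print('E_K = %.0f mV' % (1e3 * nernst(20., 400.)))
    print('V_m = %.0f mV' % (1e3 * goldman([1., 0.04, 0.45],
                                           [20., 440., 560.],
                                           [400., 50., 52.])))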

Activation of Ion Channels

The nifty feature of the ion channels is the fact that their permeability can be changed by

  • A mechanical stimulus (mechanically activated ion channels)
  • A chemical stimulus (ligand activated ion channels)
  • Or by an external voltage (voltage gated ion channels)
  • Occasionally ion channels directly connect two cells, in which case they are called gap junction channels.

Important

  • Sensory systems are essentially based on ion channels, which are activated by a mechanical stimulus (pressure, sound, movement), a chemical stimulus (taste, smell), or an electromagnetic stimulus (light), and which produce a "neural signal", i.e. a voltage change in a nerve cell.
  • Action potentials use voltage gated ion channels, to change the "state" of the neuron quickly and reliably.
  • The communication between nerve cells predominantly uses ion channels that are activated by neurotransmitters, i.e. chemicals emitted at a synapse by the preceding neuron. This provides the maximum flexibility in the processing of neural signals.

Modeling a voltage dependent ion channel

Ohm's law relates the resistance of a resistor, R, to the current it passes, I, and the voltage drop across the resistor, V:

V=IR

or

I=gV

where g=1/R is the conductance of the resistor. If you now suppose that the conductance is directly proportional to the probability that the channel is in the open conformation, then this equation becomes

I = g_{\max} \cdot n \cdot V

where g_{max} is the maximum conductance of the channel, and n is the probability that the channel is in the open conformation.

Example: the K-channel

Voltage gated potassium channels (Kv) can be only open or closed. Let α be the rate the channel goes from closed to open, and β the rate the channel goes from open to closed

(K_v)_{closed} \underset{\beta}{\overset{\alpha}{\longleftrightarrow}} (K_v)_{open}

Since n is the probability that the channel is open, the probability that the channel is closed has to be (1-n), since all channels are either open or closed. Changes in the conformation of the channel can therefore be described by the formula

\frac{dn}{dt}=(1-n)\alpha -n\beta =\alpha -(\alpha +\beta )n

Note that α and β are voltage dependent! With a technique called "voltage clamping", Hodgkin and Huxley determined these rates in 1952, and they came up with something like

\begin{align}
  & \alpha (V)=\frac{0.01 \cdot \left( V+10 \right)}{\exp \left( \frac{V+10}{10} \right)-1} \\
  & \beta (V)=0.125 \cdot \exp \left( \frac{V}{80} \right)
\end{align}

If you only want to model a voltage-dependent potassium channel, these would be the equations to start from. (For voltage gated Na channels, the equations are a bit more difficult, since those channels have three possible conformations: open, closed, and inactive.)
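To see what these rate equations imply, the gating variable n can be integrated directly for a fixed ("clamped") voltage. The sketch below uses simple Euler integration; note that the α and β expressions above follow the original Hodgkin-Huxley convention, where V is measured in mV relative to the resting potential and depolarizations appear as negative values.

import numpy as np

def alpha_n(V):
    '''Opening rate [1/ms]; V in mV, original Hodgkin-Huxley convention.'''
    return 0.01 * (V + 10) / (np.exp((V + 10)/10) - 1)

def beta_n(V):
    '''Closing rate [1/ms].'''
    return 0.125 * np.exp(V / 80)

def clamped_n(V_clamp=-30., dt=0.01, t_end=20.):
    '''Euler integration of dn/dt = alpha*(1-n) - beta*n at a clamped voltage
    (V_clamp = -30 corresponds to a 30 mV depolarization in this convention).'''
    n = alpha_n(0) / (alpha_n(0) + beta_n(0))    # resting steady-state value
    for _ in np.arange(0, t_end, dt):
        n += dt * (alpha_n(V_clamp)*(1 - n) - beta_n(V_clamp)*n)
    return n

if __name__ == '__main__':
    print('n at rest: %.2f' % (alpha_n(0) / (alpha_n(0) + beta_n(0))))
    print('n after a 20 ms depolarizing clamp: %.2f' % clamped_n())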

Hodgkin-Huxley equation

The feedback-loop of voltage-gated ion channels mentioned above made it difficult to determine their exact behaviour. In a first approximation, the shape of the action potential can be explained by analyzing the electrical circuit of a single axonal compartment of a neuron, consisting of the following components: 1) membrane capacitance, 2) Na channel, 3) K channel, 4) leakage current:

Circuit diagram of neuronal membrane based on Hodgkin and Huxley model.

The final equations in the original Hodgkin-Huxley model, where the currents of chloride ions and other leakage currents were combined into a single leak term, were as follows:

C_m \frac{dV}{dt} = G_{Na} m^3 h (E_{Na} - V) + G_K n^4 (E_K - V) + G_m (V_{rest} - V) + I_{inj}(t)

Spiking behavior of a Hodgkin-Huxley model.

where m, h, and n are time- and voltage-dependent functions which describe the membrane permeability. For example, for the K channels n obeys the equations described above, which were determined experimentally with voltage clamping. These equations describe the shape and propagation of the action potential with high accuracy! The model can be solved easily with open source tools, e.g. the Python Dynamical Systems Toolbox PyDSTool. A simple solution file is available under [1], and the output is shown below.
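The linked solution file [1] is not reproduced here. As a stand-alone illustration, the following sketch integrates the Hodgkin-Huxley equations with a simple Euler scheme. It uses the standard textbook rate functions and parameters (voltage in mV relative to the resting potential, depolarization positive), which differ in sign convention from the α/β expressions quoted earlier; treat it as an illustrative approximation, not as the solution file referenced above.

import numpy as np
import matplotlib.pyplot as plt

# Standard HH rate functions [1/ms]; V in mV relative to rest, depolarization positive
a_n = lambda V: 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
b_n = lambda V: 0.125 * np.exp(-V / 80)
a_m = lambda V: 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
b_m = lambda V: 4 * np.exp(-V / 18)
a_h = lambda V: 0.07 * np.exp(-V / 20)
b_h = lambda V: 1 / (np.exp((30 - V) / 10) + 1)

def hh(I_inj=10.0, t_end=50.0, dt=0.01):
    '''Euler integration of the Hodgkin-Huxley equations for a constant injected current.'''
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
    E_Na, E_K, E_L = 115.0, -12.0, 10.6              # mV, relative to rest
    t = np.arange(0, t_end, dt)
    V = np.zeros_like(t)
    # start the gating variables at their resting steady-state values
    m = a_m(0) / (a_m(0) + b_m(0))
    h = a_h(0) / (a_h(0) + b_h(0))
    n = a_n(0) / (a_n(0) + b_n(0))
    for i in range(1, len(t)):
        v = V[i - 1]
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        I_ion = g_Na * m**3 * h * (v - E_Na) + g_K * n**4 * (v - E_K) + g_L * (v - E_L)
        V[i] = v + dt * (I_inj - I_ion) / C_m
    return t, V

if __name__ == '__main__':
    t, V = hh()
    plt.plot(t, V)
    plt.xlabel('Time [ms]')
    plt.ylabel('V - V_rest [mV]')
    plt.show()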

Links to full Hodgkin-Huxley model

Modeling the Action Potential Generation: The Fitzhugh-Nagumo model

Phaseplane plot of the Fitzhugh-Nagumo model, with (a=0.7, b=0.8, c=3.0, I=-0.4). Solutions for four different starting conditions are shown. The dashed lines indicate the nullclines, and the "o" the fixed point of the model. I=-0.2 would be a stimulation below threshold, leading to a stationary state; and I=-1.6 would hyperpolarize the neuron, also leading to a (different) stationary state.

The Hodgkin-Huxley model has four dynamical variables: the voltage V, the probability that the K channel is open, n(V), the probability that the Na channel is open given that it was closed previously, m(V), and the probability that the Na channel is open given that it was inactive previously, h(V). A simplified model of action potential generation in neurons is the Fitzhugh-Nagumo (FN) model. Unlike the Hodgkin-Huxley model, the FN model has only two dynamic variables, by combining the variables V and m into a single variable v, and combining the variables n and h into a single variable r

\begin{align}
  & \frac{dv}{dt}=c(v-\frac{1}{3}{{v}^{3}}+r+I) \\
  & \frac{dr}{dt}=-\frac{1}{c}(v-a+br)
\end{align}

I is an external current injected into the neuron. Since the FN model has only two dynamic variables, its full dynamics can be explored using phase plane methods (sample solution in Python here [2]).
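For readers who do not want to open the linked file [2], a minimal Euler integration of these two equations looks as follows; it uses the parameter values from the figure caption above (a=0.7, b=0.8, c=3.0, I=-0.4), and the starting values are arbitrary.

import numpy as np
import matplotlib.pyplot as plt

def fitzhugh_nagumo(a=0.7, b=0.8, c=3.0, I=-0.4, v0=-1.0, r0=1.0,
                    dt=0.01, t_end=100.0):
    '''Euler integration of the Fitzhugh-Nagumo equations given above.'''
    t = np.arange(0, t_end, dt)
    v, r = np.zeros_like(t), np.zeros_like(t)
    v[0], r[0] = v0, r0
    for i in range(1, len(t)):
        dv = c * (v[i-1] - v[i-1]**3 / 3 + r[i-1] + I)
        dr = -(v[i-1] - a + b * r[i-1]) / c
        v[i] = v[i-1] + dt * dv
        r[i] = r[i-1] + dt * dr
    return t, v, r

if __name__ == '__main__':
    t, v, r = fitzhugh_nagumo()
    plt.subplot(121); plt.plot(t, v); plt.xlabel('Time'); plt.ylabel('v')
    plt.subplot(122); plt.plot(v, r); plt.xlabel('v'); plt.ylabel('r')  # phase plane
    plt.show()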

Simulating a Single Neuron with Positive Feedback

The following two examples are taken from [3]. This book provides a fantastic introduction to modeling simple neural systems, and gives a good understanding of the underlying information processing.

Simple neural system with feedback.

Let us first look at the response of a single neuron, with an input x(t), and with feedback onto itself. The weight of the input is v, and the weight of the feedback w. The response y(t) of the neuron is given by

y(t)=wy(t-1)+vx(t-1)

This shows how already very simple simulations can capture signal processing properties of real neurons.

System output for an input pulse: a "leaky integrator"
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
 
def oneUnitWithPosFB():
    '''Simulates a single model neuron with positive feedback '''
    # set input flag (1 for impulse, 2 for step)
    inFlag = 1
 
    cut = -np.inf   # set cut-off
    sat = np.inf    # set saturation
 
    tEnd = 100      # set last time step
    nTs = tEnd+1    # find the number of time steps
 
    v = 1           # set the input weight
    w = 0.95        # set the feedback weight
 
    x = np.zeros(nTs)   # open (define) an input hold vector 
    start = 11          # set a start time for the input     
    if inFlag == 1:     # if the input should be a pulse 
        x[start] = 1    # then set the input at only one time point
    elif inFlag == 2:   # if the input instead should be a step, then
        x[start:nTs] = np.ones(nTs-start) #keep it up until the end 
 
    y = np.zeros(nTs)   # open (define) an output hold vector 
    for t in range(1, nTs): # at every time step (skipping the first) 
        y[t] = w*y[t-1] + v*x[t-1]  # compute the output 
        y[t] = np.max([cut, y[t]])  # impose the cut-off constraint
        y[t] = np.min([sat, y[t]])  # impose the saturation constraint 
 
    # plot results (no frills)
    plt.subplot(211)
    tBase = np.arange(tEnd+1)
    plt.plot(tBase, x)
    plt.axis([0, tEnd, 0, 1.1])
    plt.xlabel('Time Step')
    plt.ylabel('Input')
    plt.subplot(212)
    plt.plot(tBase, y)
    plt.xlabel('Time Step')
    plt.ylabel('Output')
    plt.show()
 
if __name__ == '__main__':
    oneUnitWithPosFB()

Simulating a Simple Neural System

Even very simple neural systems can display a surprisingly versatile set of behaviors. An example is Wilson's model of the locust-flight central pattern generator. Here the system is described by

\vec{y}(t)=\mathbf{W}\cdot \vec{y}(t-1)+\vec{v}\,x(t-1)

W is the connection matrix describing the recurrent connections of the neurons, and \vec{v} describes the weights of the input x to the system.

Input x connects to units y_i (i=1,2,3,4) with weights v_i, and units y_l (l = 1,2,3,4) connect to units y_k (k = 1,2,3,4) with weights w_kl. For clarity, the self-connections of y2 and y3 are not shown, and the individual forward and recurrent weights are not labeled. Based on Tom Anastasio's excellent book "Tutorial on Neural Systems Modeling".
The response of units representing motoneurons in the linear version of Wilson's model of the locust-flight central pattern generator (CPG): a simple input pulse elicits a sustained antagonistic oscillation in neurons 2 and 3.
import numpy as np
import matplotlib.pyplot as plt
 
def printInfo(text, value):
    print(text)
    print(np.round(value, 2))
 
def WilsonCPG():
    '''implements a linear version of Wilson's 
    locust flight central pattern generator (CPG) '''
 
    v1 = v3 = v4 = 0.                   # set input weights
    v2 = 1.
    w11=0.9; w12=0.2; w13 = w14 = 0.    # feedback weights to unit one
    w21=-0.95; w22=0.4; w23=-0.5; w24=0 # ... to unit two
    w31=0; w32=-0.5; w33=0.4; w34=-0.95 # ... to unit three
    w41 = w42 = 0.; w43=0.2; w44=0.9    # ... to unit four
    V=np.array([v1, v2, v3, v4])        # compose input weight matrix (vector)
    W=np.array([[w11, w12, w13, w14],
              [w21, w22, w23, w24],
              [w31, w32, w33, w34],
              [w41, w42, w43, w44]])    # compose feedback weight matrix
 
    tEnd = 100              # set end time
    tVec = np.arange(tEnd)  # set time vector
    nTs = tEnd              # find number of time steps
    x = np.zeros(nTs)       # zero input vector
    fly = 11                # set time to start flying
    x[fly] = 1              # set input to one at fly time
 
    y = np.zeros((4,nTs))   # zero output vector
    for t in range(1,nTs):  # for each time step
        y[:,t] = W.dot(y[:,t-1]) + V*x[t-1]  # compute output
 
    # These calculations are interesting, but not absolutely necessary
    (eVal, eVec) = np.linalg.eig(W) # find eigenvalues and eigenvectors    
    magEVal = np.abs(eVal)          # find magnitude of eigenvalues
    angEVal = np.angle(eVal)*(180/np.pi) # find angles of eigenvalues
 
    printInfo('Eigenvectors: --------------', eVec)
    printInfo('Eigenvalues: ---------------', eVal)
    printInfo('Angle of Eigenvalues: ------', angEVal)    
 
    # plot results (units y2 and y3 only)
    plt.figure()
    plt.rcParams['font.size'] = 14      # set the default fontsize
    plt.rcParams['lines.linewidth']=1
 
    plt.plot(tVec, x, 'k-.', tVec, y[1,:],'k', tVec,y[2,:],'k--', linewidth=2.5)
    plt.axis([0, tEnd, -0.6, 1.1])
    plt.xlabel('Time Step',fontsize=14)
    plt.ylabel('Input and Unit Responses',fontsize=14)
    plt.legend(('Input','Left Motoneuron','Right Motoneuron'))
    plt.show()
 
if __name__ == '__main__':
    plt.close('all')
    WilsonCPG()

The Development and Theory of Neuromorphic Circuits

Introduction

Neuromorphic engineering uses very-large-scale integration (VLSI) systems to build analog and digital circuits, emulating neuro-biological architecture and behavior. Most modern circuitry primarily utilizes digital circuit components because they are fast, precise, and insensitive to noise. Unlike more biologically relevant analog circuits, digital circuits require higher power supplies and are not capable of parallel computing. Biological neuron behaviors, such as membrane leakage and threshold constraints, are functions of material substrate parameters, and require analog systems to model and fine-tune beyond digital 0/1. This section will briefly summarize such neuromorphic circuits, and the theory behind their analog circuit components.

Current Events in Neuromorphic Engineering

Recently, the field of neuromorphic engineering has experienced a period of rapid growth, receiving widespread attention from the press and the scientific community. In 2013, after drawing the attention of the EU commission, the Human Brain Project was initiated, with funding of 1.2 billion euros over ten years. This project proposes computationally simulating the human brain from the level of molecules and neurons up through neuronal circuits. Shortly after this announcement, the U.S. National Institutes of Health announced the funding of the US$100 million BRAIN Initiative, aimed at reconstructing the activity of large populations of neurons. Corporate labs at Hewlett-Packard and IBM are also investigating various neuromorphic projects.

Transistor Structure & Physics

Metal-oxide-silicon field-effect transistors (MOSFETs) are common components of modern integrated circuits. MOSFETs are classified as unipolar devices because each transistor utilizes only one carrier type; negative-type MOSFETs (nFETs) have electrons as carriers and positive-type MOSFETs (pFETs) have holes as carriers.

Cross section of an n-type MOSFET. Transistor showing gate (G), body (B), source (S), and drain (D). Positive current flows from the n+ drain well to the n+ source well. Source: Wikipedia

The general MOSFET has a metal gate (G), and two pn junction diodes known as the source (S) and the drain (D), as shown in the figure. There is an insulating oxide layer that separates the gate from the silicon bulk (B). The channel that carries the charge runs directly below this oxide layer. The current is a function of the gate dimensions.

The source and the drain are symmetric and differ only in the biases applied to them. In an nFET device, the wells that form the source and drain are n-type and sit in a p-type substrate. The substrate is biased through the bulk p-type well contact. The positive current flows below the gate in the channel from the drain to the source. The source is called as such because it is the source of the electrons. Conversely, in a pFET device, the p-type source and drain sit in a bulk n-well that is in a p-type substrate; current flows from the source to the drain.

When the carriers move due to a concentration gradient, this is called diffusion. If the carriers are swept due to an electric field, this is called drift. By convention, the nFET drain is biased at a higher potential than the source, whereas the source is biased higher in a pFET.

In an nFET, when a positive voltage is applied to the gate, positive charge accumulates on the metal contact. This draws electrons from the bulk to the silicon-oxide interface, creating a negatively charged channel between the source and the drain. The larger the gate voltage, the thicker the channel becomes, which reduces the internal resistance and thus increases the current. For small gate voltages, typically below the threshold voltage V_{th} = 0.7V, the channel is not yet fully conducting, and the current from the drain to the source increases linearly on a logarithmic scale, i.e. exponentially with the gate voltage. This regime, with V_{gs} < V_{th}, is called the subthreshold region. Beyond this threshold voltage, V_{gs} > V_{th}, the channel is fully conducting between the source and drain, and the current is in the superthreshold regime.

Transistor current as a function of V_{g} for a fixed value of V_{ds}.

For current to flow from the drain to the source, there must be an electric field to sweep the carriers across the channel. The strength of this electric field is a function of the applied potential difference between the source and the drain (V_{ds}), and thus controls the drain-source current. For small values of V_{ds}, the current increases approximately linearly as a function of V_{ds} for constant V_{gs}. As V_{ds} increases beyond about 100 mV, the current saturates.
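These two regimes can be made concrete with one common form of the subthreshold current equation, I_ds = I_0 · exp(κ·V_g/U_T) · (exp(-V_s/U_T) - exp(-V_d/U_T)), with all voltages referenced to the bulk. The pre-exponential current I_0 and the slope factor κ in the sketch below are arbitrary, assumed values, not parameters of a real process.

import numpy as np
import matplotlib.pyplot as plt

def subthreshold_current(V_g, V_d, V_s=0.0, I0=1e-15, kappa=0.7, U_T=0.025):
    '''Subthreshold nFET drain current [A] (voltages in V, referenced to the bulk).
    I0 and kappa are assumed, process-dependent parameters.'''
    return I0 * np.exp(kappa * V_g / U_T) * (np.exp(-V_s / U_T) - np.exp(-V_d / U_T))

if __name__ == '__main__':
    V_ds = np.linspace(0, 0.3, 300)
    for V_g in [0.3, 0.4, 0.5]:
        plt.plot(V_ds, 1e9 * subthreshold_current(V_g, V_ds), label='V_g = %.1f V' % V_g)
    # the current rises roughly linearly for small V_ds and saturates beyond ~100 mV (4 U_T)
    plt.xlabel('V_ds [V]')
    plt.ylabel('I_ds [nA]')
    plt.legend()
    plt.show()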

pFETs behave similarly to nFETs, except that the carriers are holes and the contact biases are negated.

In digital applications, transistors either operate in their saturation region (on) or are off. This large range in potential differences between the on and off modes is why digital circuits have such a high power demand. In contrast, analog circuits take advantage of the linear region of transistors to produce continuous signals with a lower power demand. However, because small changes in gate or source-drain voltages can create a large change in current, analog systems are prone to noise.

The field of neuromorphic engineering takes advantage of the noisy nature of analog circuits to replicate stochastic neuronal behavior [4], [5]. Unlike clocked digital circuits, analog circuits are capable of creating action potentials with temporal dynamics similar to biological time scales (approx. 10 μs). The potentials are slowed down and firing rates are controlled by lengthening time constants through leaking biases and variable resistive transistors. Analog circuits have been created that are capable of emulating biological action potentials with varying temporal dynamics, thus allowing silicon circuits to mimic neuronal spike-based learning behavior [6]. Whereas digital circuits can only contain binary synaptic weights [0,1], analog circuits are capable of maintaining synaptic weights within a continuous range of values, making analog circuits particularly advantageous for neuromorphic circuits.


Basic static circuits

With an understanding of how transistors work and how they are biased, basic static analog circuits can be understood. Afterward, these basic static circuits will be combined to create neuromorphic circuits. In the following circuit examples, the source, drain, and gate voltages are fixed, and the current is the output. In practice, the bias gate voltage is fixed to a subthreshold value (0 < V_g < 0.7V), the drain is held in saturation (V_d > 100mV), and the source and bulk are tied to ground (V_s, V_b = 0V). All non-idealities are ignored.

Basic static circuits. (A) Diode-connected transistor. (B) Current mirror. (C) Source follower. (D) Inverter. (E) Current conveyor. (F) Differential Pair.

Diode-Connected Transistor

A diode-connected nFET has its gate tied to the drain. Since the floating drain controls the gate voltage, the drain-gate voltages will self-regulate so the device will always sink the input current, I_{ds}. Beyond several microvolts, the transistor will run in saturation. Similarly, a diode-connected pFET has its gate tied to the source. Though this simple device seems to merely function as a short circuit, it is commonly used in analog circuits for copying and regulating current. Particularly in neuromorphic circuits, they are used to slow current sinks, to increase circuit time constants to biologically plausible time regimes.

Current Mirror

A current mirror takes advantage of the diode-connected transistor’s ability to sink current. When an input current is forced through the diode connected transistor, M1, the floating drain and gate are regulated to the appropriate voltage that allows the input current to pass. Since the two transistors share a common gate node, M2 will also sink the same current. This forces the output transistor to duplicate the input current. The output will mirror the input current as long as:

  1.  V_{s1} = V_{s2}
  2. \frac{W_{M1}}{L_{M1}}=\frac{W_{M2}}{L_{M2}} .

The current mirror gain can be controlled by adjusting these two parameters. When using transistors with different dimensions, otherwise known as a tilted mirror, the gain is:


  Gain = \frac{(\frac{W}{L})_{M2}}{(\frac{W}{L})_{M1}}.

A pFET current mirror is simply a flipped nFET mirror, where the diode-connected pFET mirrors the input current, and forces the other pFET to source output current.

Current mirrors are commonly used to copy currents without draining the input current. This is especially essential for feedback loops, such as the one used to accelerate action potentials, and for summing input currents at a synapse.

Source Follower

A source follower consists of an input transistor, M_1, stacked on top of a bias transistor, M_b. The fixed subthreshold (<0.7V) bias voltage controls the gate of M_b, forcing it to sink a constant current, I_b. M_1 is thus also forced to sink the same current (I_1 = I_b), regardless of the input voltage, V_{in}.

A source follower is called so because the output, V_{out}, will follow V_{in} with a slight offset described by:

  V_{out} = \kappa \cdot (V_{in} - V_b),

where \kappa is the subthreshold slope factor, typically less than one.

This simple circuit is often used as a buffer. Since no current can flow through the gate, this circuit will not draw current from the input, an important trait for low-power circuits. Source followers can also isolate circuits, protecting them from power surges or static. A pFET source follower only differs from an nFET source follower in that the bias pFET has its bulk tied to V_{out}.

In neuromorphic circuits, source followers and the like are used as simple current integrators which behave like post-synaptic neurons collecting current from many pre-synaptic neurons.


Inverter

An inverter consists of a pFET, M_1, stacked on top of an nFET, M_2, with their gates tied to the input, V_{in}, and the output tied to the common source node, V_{out}. When a high signal is input, the pFET is off but the nFET is on, effectively draining the output node, V_{out}, and inverting the signal. Conversely, when the input signal is low, the nFET is off but the pFET is on, charging up the V_{out} node.

This simple circuit is effective as a quick switch. The inverter is also commonly used as a buffer, because an output current can be produced without directly sourcing the input current, as no current is allowed through the gate. When two inverters are used in series, they can act as a non-inverting amplifier. This was used in the original Integrate-and-Fire silicon neuron by Mead et al., 1989 to create a fast depolarizing spike similar to that of a biological action potential [7]. However, when the input fluctuates between high and low signals, both transistors are in superthreshold saturation and drain current, making this a very power-hungry circuit.

Current Conveyor

The current conveyor is also commonly known as a buffered current mirror. Consisting of two transistors with their gates tied to a node of the other, the Current Conveyor self regulates so that the output current matches the input current, in a manner similar to the Current Mirror.

The current conveyor is often used in place of current mirrors for large, serially repetitious arrays. This is because the current mirror is controlled through the gate, whose oxide capacitance results in a delayed output. Though this lag is negligible for a single output current mirror, long mirroring arrays will accumulate significant output delays. Such delays would greatly hinder large parallel processes such as those that try to emulate biological neural network computational strategies.

Differential Pair

The differential pair is a comparative circuit composed of two source followers with a common bias that forces the current of the weaker input to be silenced. The bias transistor will force I_b to remain constant, tying the common node, V_s, to a fixed voltage. Both input transistors will want to drain current proportional to their input voltages, I_1 and I_2, respectively. However, since the common node must remain fixed, the drains of the input transistors must rise in proportion to the gate voltages. The transistor with the lower input voltage will act as a choke and allow less current through its drain. The losing transistor will see its source voltage increase and thus fall out of saturation.
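In subthreshold operation, the way the bias current I_b divides between the two branches can be written in closed form; the sketch below uses this standard expression, with assumed example values for κ, U_T and I_b (none of them taken from this text).

import numpy as np

def diff_pair_currents(V1, V2, I_b=1e-9, kappa=0.7, U_T=0.025):
    '''Branch currents of a subthreshold differential pair (standard textbook
    expression); kappa, U_T and I_b are assumed example values.'''
    e1 = np.exp(kappa * V1 / U_T)
    e2 = np.exp(kappa * V2 / U_T)
    I1 = I_b * e1 / (e1 + e2)
    I2 = I_b * e2 / (e1 + e2)
    return I1, I2

if __name__ == '__main__':
    # a difference of a few tens of mV already steers most of I_b to one branch
    for dV in [0.0, 0.01, 0.05, 0.1]:
        I1, I2 = diff_pair_currents(0.5 + dV, 0.5)
        print('dV = %3.0f mV:  I1/I_b = %.3f, I2/I_b = %.3f'
              % (1e3 * dV, I1 / (I1 + I2), I2 / (I1 + I2)))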

The differential pair, in the setting of a neuronal circuit, can function as an activation threshold of an ion channel below which the voltage-gated ion channel will not open, preventing the neuron from spiking [8].

Silicon neurons

Winner-Take-All

The Winner-Take-All (WTA) circuit, originally designed by Lazzaro et al. [9], is a continuous time, analog circuit. It compares the outputs of an array of cells, and only allows the cell with the highest output current to be on, inhibiting all other competing cells.

A two-input CMOS winner-take-all circuit

Each cell comprises a current-controlled conveyor; it receives an input current, and outputs onto a common line that controls a bias transistor. The cell with the largest input current will also output the largest current, increasing the voltage of the common node. This forces the weaker cells to turn off. The WTA circuit can be extended to include a large network of competing cells. A soft WTA also has its output current mirrored back to the input, effectively increasing the cell gain. This is necessary to reduce noise and random switching if the cell array has a small dynamic range.

WTA networks are commonly used as a form of competitive learning in computational neural networks that involve distributed decision making. In particular, WTA networks have been used to perform low level recognition and classification tasks that more closely resemble cortical activity during visual selection tasks [10].
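The input-output behavior that the circuit implements can be captured by a purely algorithmic sketch (a MAXNET-style iteration), shown below; this is a behavioral abstraction, not a model of the transistor circuit itself.

import numpy as np

def winner_take_all(I_in, eps=0.2, n_iter=50):
    '''Behavioral (not transistor-level) winner-take-all, MAXNET-style:
    on every iteration each unit inhibits all others a little, so only the
    unit with the largest initial input survives.'''
    y = np.array(I_in, dtype=float)
    for _ in range(n_iter):
        y = np.maximum(0, y - eps * (np.sum(y) - y))   # subtract inhibition from the others
    return y

if __name__ == '__main__':
    currents = [0.8, 1.0, 0.9, 0.3]                    # competing input "currents"
    print(np.round(winner_take_all(currents), 3))      # only the second unit stays on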


Integrate & Fire Neuron

The most general schematic of an Integrate & Fire Neuron, also known as the Axon-Hillock Neuron, is the most commonly used spiking neuron model [11]. Common elements between most Axon-Hillock circuits include: a node with a memory of the membrane potential V_c, an amplifier, a positive feedback loop C_f, and a mechanism to reset the membrane potential to its resting state, V_p.

The input current, I_i, charges the membrane node V_{c}, whose charge is stored on a capacitor, C. This capacitor is analogous to the lipid cellular membrane which prevents free ionic diffusion, creating the membrane potential from the accumulated charge difference on either side of the lipid membrane. The input is amplified to output a voltage spike. A change in membrane potential is positively fed back through C_f to V_{c}, producing a faster spike. This closely resembles how a biological axon hillock, which is densely packed with voltage-gated sodium channels, amplifies the summed potentials to produce an action potential. When a voltage spike is produced, the reset bias, V_p, begins to drain the V_{c} node. This is similar to sodium-potassium pumps which actively move sodium and potassium ions against the concentration gradient to maintain the resting membrane potential.
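At a behavioral level, the charging/threshold/reset cycle described above can be sketched in a few lines; the capacitance, threshold and input current below are arbitrary example values, and the amplifier with its positive feedback is abstracted into an instantaneous threshold crossing and reset.

import numpy as np

def axon_hillock_behavior(I_in=50e-12, C=1e-12, V_th=1.0, V_reset=0.0,
                          dt=1e-5, t_end=0.1):
    '''Behavioral sketch of the Axon-Hillock cycle: the input current charges the
    membrane node; when it crosses the amplifier threshold a spike is registered
    and the reset drains the node.  Amplifier and feedback are idealized.'''
    V, spikes = V_reset, []
    for t in np.arange(0, t_end, dt):
        V += dt * I_in / C            # dV/dt = I_in / C
        if V >= V_th:                 # amplifier switches -> output spike
            spikes.append(t)
            V = V_reset               # reset bias drains the membrane node
    return spikes

if __name__ == '__main__':
    spikes = axon_hillock_behavior()
    print('%d spikes, firing rate approx. %.0f Hz' % (len(spikes), len(spikes) / 0.1))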

Spiking neuron circuit. The amplifier consists of two inverting amplifiers that create the characteristic fast upward swing of an action potential. The output spike, V_o, is initiated by the input current, I_i, and its width is modulated by V_p. Source: adapted from Mead et al., 1989
Conductance-based neuron circuit with an adaptive membrane. Source: Indiveri et al., 2010


The original Axon Hillock silicon neuron has been adapted to include an activation threshold, with the addition of a differential pair comparing the input to a set threshold bias [12]. This conductance-based silicon neuron utilizes a differential-pair integrator (DPI) with a leaky transistor to compare the input, I_{in}, to the threshold, V_{thr}. The leak bias V_{tau}, the refractory period bias V_{rfr}, the adaptation bias V_{ahp}, and the positive feedback gain all independently control the spiking frequency. Research has focused on implementing spike frequency adaptation to set refractory periods and modulate thresholds [13]. Adaptation allows the neuron to modulate its output firing rate as a function of its input. If there is a constant high-frequency input, the neuron will be desensitized to the input, and the output will steadily diminish over time. The adaptive component of the conductance-based neuron circuit is modeled through the calcium flux and stores the memory of past activity through the adaptive capacitor, C_{ahp}. The advent of spike frequency adaptation allowed changes on the neuron level to control adaptive learning mechanisms on the synapse level. This model of neuronal learning is modeled after biology [14] and will be further discussed in Silicon Synapses.

(A) Current depression mechanism. (B) Adaptive threshold mechanism as a function of V_{mem} (blue). The neuron's spiking threshold (red) increases with every spike, increasing the spiking time constant. Source: Indiveri et al., 2010

Silicon Synapses

The most basic silicon synapse, originally used by Mead et al., 1989 [15], simply consists of a pFET source follower that receives a low signal pulse input and outputs a unidirectional current, I_o [16].

(A) Basic synapse circuit. (B) Synapse circuit with longer time constant. Sources: adapted from Mead et al., 1989, and Lazzaro et al., 1993, respectively.

The amplitude of the spike is controlled by the weight bias, V_w, and the pulse width is directly correlated with the input pulse width, which is set by V_{\tau}. The capacitor in the Lazzaro et al. (1993) synapse circuit was added to increase the spike time constant to a biologically plausible value. This slows the rate at which the pulse hyperpolarizes and depolarizes, and is a function of the capacitance.

Basic synapse circuit. Source: adapted from Lazzaro et al., 1992

For multiple inputs depicting competitive excitatory and inhibitory behavior, the log-domain integrator uses I_1 and I_2 to regulate the output current magnitude, I_o, as a function of the input current, I_i, according to:


I_o = I_i \cdot \sqrt{\frac{I_1}{I_2}}.

I_1 controls the rate at which I_i is able to charge the output transistor gate. I_2 governs the rate at which the output I_o is sunk. This competitive nature is necessary to mimic the biological behavior of neurotransmitters that either promote or depress neuronal firing.

Synaptic models have also been developed with first order linear integrators using log-domain filters capable of modeling the exponential decay of excitatory post-synaptic current (EPSC) [17]. This is necessary to have biologically plausible spike contours and time constants. The gain is also independently controlled from the synapse time constant which is necessary for spike-rate and spike-timing dependent learning mechanisms.

(A) Data fit for a typical EPSC according to the linear integrator model. (B) A basic log-domain integrator. Source: Mitra et al., 2010

The aforementioned synapses simply relay currents from the pre-synaptic sources, varying the shape of the pulse spike along the way. They do not, however, contain any memory of previous spikes, nor are they capable of adapting their behavior according to temporal dynamics. These abilities, however, are necessary if neuromorphic circuits are to learn like biological neural networks.

An artificial neural network. There are p presynaptic neurons (x), and q postsynaptic neurons (b). x_p is a single presynaptic neuron that synapses upon postsynaptic neuron b_q with the synaptic weight w_{pq}, causing the postsynaptic neuron to output y_q. Source: Wikipedia

According to Hebb's postulate, behaviors like learning and memory are hypothesized to occur on the synaptic level [18]. It accredits the learning process to long-term neuronal adaptation in which pre- and post-synaptic contributions are strengthened or weakened by biochemical modifications. This theory is often summarized in the saying, "Neurons that fire together, wire together." Artificial neural networks model learning through these biochemical "wiring" modifications with a single parameter, the synaptic weight, w_{pq}. A synaptic weight is a parameter state variable that quantifies how a presynaptic neuron spike affects a postsynaptic neuron output. Two models of Hebbian synaptic weight plasticity include spike-rate-dependent plasticity (SRDP), and spike-timing-dependent plasticity (STDP). Since the conception of this theory, biological neuron activity has been shown to exhibit behavior closely modeling Hebbian learning. One such example is of synaptic NMDA and AMPA receptor plastic modifications that lead to calcium flux induced adaptation [19].
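As a minimal illustration of the synaptic-weight idea only (not of the specific STDP/SRDP circuits discussed below), a plain rate-based Hebbian update can be written in a few lines; the activity patterns and the learning rate below are arbitrary assumptions.

import numpy as np

def hebbian_update(W, x, y, lr=0.1):
    '''One rate-based Hebbian step: the weight between presynaptic neuron p and
    postsynaptic neuron q grows in proportion to their joint activity x_p * y_q.'''
    return W + lr * np.outer(x, y)

if __name__ == '__main__':
    W = np.zeros((3, 2))                  # 3 presynaptic, 2 postsynaptic neurons
    x = np.array([1.0, 0.0, 1.0])         # presynaptic activity pattern
    y = np.array([0.0, 1.0])              # postsynaptic activity pattern
    for _ in range(10):
        W = hebbian_update(W, x, y)
    print(W)    # only the weights between co-active neurons have grown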

Learning and long-term memory of information in biological neurons is accredited to NMDA channel induced adaptation. These NMDA receptors are voltage dependent and control intracellular calcium ion flux. It has been shown in animal studies that neuronal desensitization is diminished when extracellular calcium is reduced [20].

(A) Simple synapse consisting of AMPA and NMDA channels, and calcium. (B) Circuit models of individual elements of the synapse. (C) Circuit outputs in response to a presynaptic action potential (AP) input (AP_{PRE}). Source: Rachmuth et al., 2011

Since the calcium concentration decays exponentially, this behavior is easily implemented in hardware using subthreshold transistors. A circuit model demonstrating calcium-dependent biological behavior is shown by Rachmuth et al. (2011) [21]. The calcium signal, I_{Ca^{2+}}, regulates AMPA and NMDA channel activity through the V_{mem} node according to calcium-dependent STDP and SRDP learning rules. The output of these learning rules is the synaptic weight, w, which is proportional to the number of active AMPA and NMDA channels. The SRDP model describes the weight in terms of two state variables, \Omega, which controls the update rule, and \eta, which controls the learning rate.



dw = \eta([Ca^{2+}]) \cdot (\Omega([Ca^{2+}]) - \lambda w),

where w is the synaptic weight, \Omega([Ca^{2+}]) is the update rule, \eta([Ca^{2+}]) is the learning rate, and \lambda is a constant that allows the weight to drift out of saturation in the absence of an input.
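The text does not specify the shapes of \Omega([Ca^{2+}]) and \eta([Ca^{2+}]); in the circuit they are realized with the differential-pair comparators described below. Purely as an illustration, the sketch below assumes simple piecewise functions and integrates the weight update in discrete time.

import numpy as np

# Hypothetical, illustrative shapes for the calcium-dependent functions; in the
# circuit they are implemented with differential pairs biased by theta_LTD,
# theta_LTP and theta_eta (see below).
def omega(ca, theta_LTD=0.3, theta_LTP=0.6):
    '''Update rule: depression for intermediate calcium, potentiation for high calcium.'''
    if ca < theta_LTD:
        return 0.0
    return -1.0 if ca < theta_LTP else 1.0

def eta(ca, theta_eta=0.2, gain=0.01):
    '''Learning rate: weight updates are only enabled above the calcium threshold.'''
    return gain * ca if ca > theta_eta else 0.0

def update_weight(w, ca_trace, lam=0.1, dt=1.0):
    '''Discrete-time integration of dw = eta([Ca])*(Omega([Ca]) - lambda*w).'''
    for ca in ca_trace:
        w += dt * eta(ca) * (omega(ca) - lam * w)
        w = float(np.clip(w, 0.0, 1.0))        # keep the weight in a plausible range
    return w

if __name__ == '__main__':
    print('high Ca -> w = %.2f' % update_weight(0.5, 0.8 * np.ones(200)))   # potentiation
    print('mid  Ca -> w = %.2f' % update_weight(0.5, 0.45 * np.ones(200)))  # depression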

The NMDA channel controls the calcium influx, I_{Ca}. The NMDA receptor voltage-dependency is modeled by V_{mem}, and the channel mechanics are controlled with a large capacitor to increase the calcium time constant, \tau_{Ca}. The output I_{Ca} is copied via current mirrors into the \Omega and \eta circuits to perform downstream learning functions.

The \Omega circuit compares I_{Ca} to the threshold biases \theta_{LTP} and \theta_{LTD}, which respectively control long-term potentiation and long-term depression, through a series of differential pair circuits. The output of the differential pairs determines the update rule. This \Omega circuit has been demonstrated to exhibit various Hebbian learning rules as observed in the hippocampus, and anti-Hebbian learning rules as used in the cerebellum.

The \eta circuit controls when synaptic learning can occur by only allowing updates when I_{Ca} is above a differential pair set threshold, \theta_{\eta}. The learning rate (LR) is modeled according to:



\tau_{LR} \sim \frac{\theta_{\eta} \cdot C_{\eta}}{I_{\eta} \cdot [Ca^{2+}]},

where \eta is a function of [Ca^{2+}] and controls the learning rate, C_{\eta} is the capacitance of the \eta circuit, and \theta_{\eta} is the threshold voltage of the comparator. This function demonstrates that \theta_{\eta} must be biased to maintain an elevated [Ca^{2+}] in order to simulate SRDP. A leakage current, I_{LEAK}, was included to drain V_{\eta} to \eta_{REST} during inactivity.


References

  1. T. Haslwanter (2012). "Hodgkin-Huxley Simulations [Python"]. private communications. http://work.thaslwanter.at/CSS/Code/HH_model.py. 
  2. T. Haslwanter (2012). "Fitzhugh-Nagumo Model [Python"]. private communications. http://work.thaslwanter.at/CSS/Code/Fitzhugh_Nagumo.py. 
  3. T. Anastasio (2010). "Tutorial on Neural systems Modeling". http://www.sinauer.com/detail.php?id=3396. 
  4. E Aydiner, AM Vural, B Ozcelik, K Kiymac, U Tan (2003), A simple chaotic neuron model: stochastic behavior of neural networks 
  5. WM Siebert (1965), Some implications of the stochastic behavior of primary auditory neurons 
  6. G Indiveri, F Stefanini, E Chicca (2010), Spike-based learning with a generalized integrate and fire silicon neuron 
  7. CA Mead (1989), Analog VLSI and Neural Systems 
  8. RJ Douglas, MA Mahowald (2003), Silicon Neuron 
  9. J Lazzaro, S Ryckebusch, MA Mahowald, CA Mead (1989), Winner-Take-All: Networks of O(N) Complexity 
  10. M Riesenhuber, T Poggio (1999), Hierarchical models of object recognition in cortex 
  11. CA Mead (1989), Analog VLSI and Neural Systems 
  12. RJ Douglas, MA Mahowald (2003), Silicon Neuron 
  13. douglas2003
  14. indiveri
  15. CA Mead (1989), Analog VLSI and Neural Systems 
  16. lazzaro1993
  17. S Mitra, G Indiveri, RE Cummings (2010), Synthesis of log-domain integrators for silicon synapses with global parametric control 
  18. DO Hebb (1949), The organization of behavior 
  19. PA Koplas, RL Rosenberg, GS Oxford (1997), The role of calcium in the desensitization of capsaicin responses in rat dorsal root ganglion neurons 
  20. PA Koplas, RL Rosenberg, GS Oxford (1997), The role of calcium in the desensitization of capsaicin responses in rat dorsal root ganglion neurons 
  21. G Rachmuth, HZ Shouval, MF Bear, CS Poon (2011), A biophysically-based neuromorphic model of spike rate-timing-dependent plasticity



  1. E Aydiner, AM Vural, B Ozcelik, K Kiymac, U Tan (2003), A simple chaotic neuron model: stochastic behavior of neural networks 
  2. E Chicca, G Indiveri, R Douglas (2004), An event-based VLSI network of Integrate-and-Fire Neurons 
  3. E Chicca, G Indiveri, R Douglas (2003), An adaptive silicon synapse 
  4. RJ Douglas, MA Mahowald (2003), Silicon Neuron 
  5. DO Hebb (1949), The organization of behavior 
  6. G Indiveri, E Chicca, R Douglas (2004), A VLSI reconfigurable network of integrate-and-fire neurons with spike-based learning synapses 
  7. G Indiveri, F Stefanini, E Chicca (2010), Spike-based learning with a generalized integrate and fire silicon neuron 
  8. PA Koplas, RL Rosenberg, GS Oxford (1997), The role of calcium in the desensitization of capsaicin responses in rat dorsal root ganglion neurons 
  9. J Lazzaro, S Ryckebusch, MA Mahowald, CA Mead (1989), Winner-Take-All: Networks of O(N) Complexity 
  10. CA Mead (1989), Analog VLSI and Neural Systems 
  11. S Mitra, G Indiveri, RE Cummings (2010), Synthesis of log-domain integrators for silicon synapses with global parametric control 
  12. G Rachmuth, HZ Shouval, MF Bear, CS Poon (2011), A biophysically-based neuromorphic model of spike rate-timing-dependent plasticity 
  13. M Riesenhuber, T Poggio (1999), Hierarchical models of object recognition in cortex 
  14. SC Liu, J Kramer, T Delbrück, G Indiveri, R Douglas (2002), Analog VLSI: Circuits and Principles 
  15. WM Siebert (1965), Some implications of the stochastic behavior of primary auditory neurons