# Sensory Systems/NeuralSimulation

## Simulating Action Potentials

### Action Potential

The "action potential" is the stereotypical voltage change that is used to propagate signals in the nervous system.

Action potential – Time dependence

With the mechanisms described below, an incoming stimulus (of any sort) can lead to a change in the voltage potential of a nerve cell. Up to a certain threshold, that's all there is to it ("Failed initiations" in Fig. 4). But when the threshold of the voltage-gated ion channels is reached, a feedback reaction almost immediately opens the Na+ ion channels completely ("Depolarization" below): the permeability for Na+ (which in the resting state is about 1% of the permeability for K+) becomes about 20 times larger than that of K+. As a result, the voltage rises from about -60 mV to about +50 mV. At that point internal reactions start to close (and block) the Na+ channels, and open the K+ channels to restore the equilibrium state. During this "refractory period" of about 1 ms, no depolarization can elicit an action potential. Only when the resting state is reached can new action potentials be triggered.

To simulate an action potential, we first have to define the different elements of the cell membrane, and how to describe them analytically.

### Cell Membrane

The cell membrane is a water-repelling, almost impermeable double layer of lipids. The real power in processing signals does not come from the membrane itself, but from the ion channels that are embedded into it. Ion channels are proteins which are embedded into the cell membrane, and which can selectively be opened for certain types of ions. (This selectivity is achieved by the geometrical arrangement of the amino acids which make up the ion channels.) In addition to the Na+ and K+ ions mentioned above, ions that are typically found in the nervous system are the cations Ca2+ and Mg2+, and the anion Cl-.

#### States of ion channels

Ion channels can take on one of three states:

• Open (For example, an open Na-channel lets Na+ ions pass, but blocks all other types of ions).
• Closed, with the option to open up.
• Closed, unconditionally.

#### Resting state

The typical default situation – when nothing is happening – is characterized by open K+ channels, with the other channels closed. In that case two forces determine the cell voltage:

• The (chemical) concentration difference between the intra-cellular and extra-cellular concentration of K+, which is created by the continuous activity of the ion pumps described above.
• The (electrical) voltage difference between the inside and outside of the cell.

The equilibrium is defined by the Nernst-equation:

${{E}_{X}}=\frac{RT}{zF}\ln \frac{{{[X]}_{o}}}{{{[X]}_{i}}}$

R ... gas constant, T ... temperature, z ... ion valence, F ... Faraday constant, [X]o/i … ion concentration outside/inside the cell. At 25 °C, RT/F is about 25 mV, which leads to a resting voltage of

${{E}_{X}}=\frac{58mV}{z}\log \frac{{{[X]}_{o}}}{{{[X]}_{i}}}$

With typical K+ concentrations inside and outside of neurons, this yields $E_{K^+} = -75\ mV$. If the ion channels for K+, Na+ and Cl- are considered simultaneously, the equilibrium situation is characterized by the Goldman equation
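As a quick numerical check, the Nernst potential can be evaluated directly. The sketch below uses the classic squid-axon values [K+]o ≈ 20 mM and [K+]i ≈ 400 mM; these concentrations are assumed for illustration and are not given in the text above, but they reproduce a potassium potential close to the quoted -75 mV:

```python
import numpy as np

def nernst(c_out, c_in, z=1, T=298.15):
    """Nernst potential in mV for an ion of valence z at temperature T [K]."""
    R = 8.314        # gas constant [J/(mol K)]
    F = 96485.0      # Faraday constant [C/mol]
    return 1000.0 * R * T / (z * F) * np.log(c_out / c_in)

# Classic squid-axon K+ concentrations (assumed values): 20 mM out, 400 mM in
E_K = nernst(20.0, 400.0)   # approx. -77 mV
print(E_K)
```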

${{V}_{m}}=\frac{RT}{F}\ln \frac{{{P}_{K}}{{[{{K}^{+}}]}_{o}}+{{P}_{Na}}{{[N{{a}^{+}}]}_{o}}+{{P}_{Cl}}{{[C{{l}^{-}}]}_{i}}}{{{P}_{K}}{{[{{K}^{+}}]}_{i}}+{{P}_{Na}}{{[N{{a}^{+}}]}_{i}}+{{P}_{Cl}}{{[C{{l}^{-}}]}_{o}}}$

where $P_X$ denotes the permeability of ion X, and [X] its concentration. Using typical ion concentrations, the cell has in its resting state a negative polarity of about -60 mV.
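Plugging in numbers makes the Goldman equation concrete. The sketch below assumes the classic squid-axon concentrations and the resting permeability ratios P_K : P_Na : P_Cl = 1 : 0.04 : 0.45 (typical textbook values, not given in the text above); the result comes out close to the quoted -60 mV:

```python
import numpy as np

def goldman(P, c_out, c_in, T=298.15):
    """Goldman potential in mV; P, c_out, c_in are (K, Na, Cl) triples.
    Note that for the anion Cl- the concentrations appear swapped."""
    R, F = 8.314, 96485.0
    PK, PNa, PCl = P
    Ko, Nao, Clo = c_out
    Ki, Nai, Cli = c_in
    num = PK * Ko + PNa * Nao + PCl * Cli
    den = PK * Ki + PNa * Nai + PCl * Clo
    return 1000.0 * R * T / F * np.log(num / den)

# Assumed squid-axon concentrations [mM] and resting permeability ratios
Vm = goldman(P=(1.0, 0.04, 0.45),
             c_out=(20.0, 440.0, 560.0),
             c_in=(400.0, 50.0, 52.0))    # approx. -61 mV
print(Vm)
```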

#### Activation of Ion Channels

The nifty feature of the ion channels is the fact that their permeability can be changed by

• A mechanical stimulus (mechanically activated ion channels)
• A chemical stimulus (ligand activated ion channels)
• Or by an external voltage (voltage gated ion channels)
• Occasionally ion channels directly connect two cells, in which case they are called gap junction channels.

Important

• Sensory systems are essentially based on ion channels, which are activated by a mechanical stimulus (pressure, sound, movement), a chemical stimulus (taste, smell), or an electromagnetic stimulus (light), and produce a "neural signal", i.e. a voltage change in a nerve cell.
• Action potentials use voltage gated ion channels, to change the "state" of the neuron quickly and reliably.
• The communication between nerve cells predominantly uses ion channels that are activated by neurotransmitters, i.e. chemicals emitted at a synapse by the preceding neuron. This provides the maximum flexibility in the processing of neural signals.

#### Modeling a voltage dependent ion channel

Ohm's law relates the resistance of a resistor, R, to the current it passes, I, and the voltage drop across the resistor, V:

$V=IR$

or

$I=gV$

where $g=1/R$ is the conductance of the resistor. If you now suppose that the conductance is directly proportional to the probability that the channel is in the open conformation, then this equation becomes

$I={{g}_{\max }}*n*V$

where gmax is the maximum conductance of the channel, and n is the probability that the channel is in the open conformation.

Example: the K-channel

Voltage gated potassium channels (Kv) can be only open or closed. Let α be the rate at which the channel goes from closed to open, and β the rate at which it goes from open to closed:

${{({{K}_{v}})}_{closed}}\underset{\beta }{\overset{\alpha }{\longleftrightarrow}}{{({{K}_{v}})}_{open}}$

Since every channel is either open or closed, and n is the probability that a channel is open, the probability that it is closed has to be (1-n). Changes in the conformation of the channel can therefore be described by the formula

$\frac{dn}{dt}=(1-n)\alpha -n\beta =\alpha -(\alpha +\beta )n$

Note that α and β are voltage dependent! With a technique called "voltage clamping", Hodgkin and Huxley determined these rates in 1952, and they came up with something like

\begin{align} & \alpha (V)=\frac{0.01\left( V+10 \right)}{\exp \left( \frac{V+10}{10} \right)-1} \\ & \beta (V)=0.125\exp \left( \frac{V}{80} \right) \end{align}

If you only want to model a voltage-dependent potassium channel, these would be the equations to start from. (For voltage gated Na channels, the equations are a bit more difficult, since those channels have three possible conformations: open, closed, and inactive.)
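With these rates, the gating equation for n can be integrated directly, for instance with a simple Euler scheme. The sketch below clamps the voltage at a fixed value (in Hodgkin and Huxley's original convention, V is the deviation from the resting potential) and verifies that n relaxes to its steady state n∞ = α/(α+β):

```python
import numpy as np

def alpha_n(V):
    # rate closed -> open, from Hodgkin & Huxley (1952)
    return 0.01 * (V + 10) / (np.exp((V + 10) / 10) - 1)

def beta_n(V):
    # rate open -> closed
    return 0.125 * np.exp(V / 80)

V, dt = -30.0, 0.01                           # clamped voltage [mV], step [ms]
n = alpha_n(0) / (alpha_n(0) + beta_n(0))     # start from the resting value
for _ in range(int(20 / dt)):                 # integrate dn/dt for 20 ms
    n += dt * ((1 - n) * alpha_n(V) - n * beta_n(V))

n_inf = alpha_n(V) / (alpha_n(V) + beta_n(V))  # analytic steady state
print(n, n_inf)
```

After about six time constants the numerical solution is indistinguishable from n∞.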

### Hodgkin Huxley equation

The feedback-loop of voltage-gated ion channels mentioned above made it difficult to determine their exact behaviour. In a first approximation, the shape of the action potential can be explained by analyzing the electrical circuit of a single axonal compartment of a neuron, consisting of the following components: 1) membrane capacitance, 2) Na channel, 3) K channel, 4) leakage current:

The final equations in the original Hodgkin-Huxley model, where the currents of chloride ions and other leakage currents were combined, were as follows:

${{C}_{m}}\frac{dV}{dt}={{G}_{Na}}{{m}^{3}}h({{E}_{Na}}-V)+{{G}_{K}}{{n}^{4}}({{E}_{K}}-V)+{{G}_{m}}({{V}_{rest}}-V)+{{I}_{inj}}(t)$

Spiking behavior of a Hodgkin-Huxley model.

where m, h, and n are time- and voltage-dependent functions which describe the membrane permeability. For example, for the K channels n obeys the equations described above, which were determined experimentally with voltage-clamping. These equations describe the shape and propagation of the action potential with high accuracy! The model can be solved easily with open source tools, e.g. the Python Dynamical Systems Toolbox PyDSTool. A simple solution file is available under [1], and the output is shown below.
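Independently of any toolbox, the Hodgkin-Huxley equations can also be integrated with a plain Euler loop. The sketch below uses the standard 1952 parameter set in the modern sign convention (resting potential near -65 mV); these parameter values are the usual textbook ones and are assumptions insofar as they are not spelled out in the text above:

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (modern sign convention, assumed values)
C_m = 1.0                                 # membrane capacitance [uF/cm^2]
g_Na, g_K, g_L = 120.0, 36.0, 0.3         # maximum conductances [mS/cm^2]
E_Na, E_K, E_L = 50.0, -77.0, -54.4       # reversal potentials [mV]

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, t_end = 0.01, 50.0                    # time step and duration [ms]
steps = int(t_end / dt)
V = -65.0                                 # start at rest
m = a_m(V) / (a_m(V) + b_m(V))            # gating variables at steady state
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))
I_inj = 10.0                              # injected current [uA/cm^2]

V_trace = np.empty(steps)
for i in range(steps):
    I_Na = g_Na * m**3 * h * (E_Na - V)   # sodium current
    I_K = g_K * n**4 * (E_K - V)          # potassium current
    I_L = g_L * (E_L - V)                 # leakage current
    V += dt * (I_Na + I_K + I_L + I_inj) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    V_trace[i] = V

print(V_trace.max())
```

With this level of injected current the model fires repetitively, with spike peaks well above 0 mV.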

### Modeling the Action Potential Generation: The Fitzhugh-Nagumo model

Phaseplane plot of the Fitzhugh-Nagumo model, with (a=0.7, b=0.8, c=3.0, I=-0.4). Solutions for four different starting conditions are shown. The dashed lines indicate the nullclines, and the "o" the fixed point of the model. I=-0.2 would be a stimulation below threshold, leading to a stationary state. And I=-1.6 would hyperpolarize the neuron, also leading to a (different) stationary state.

The Hodgkin-Huxley model has four dynamical variables: the voltage V, the probability that the K channel is open, n(V), the probability that the Na channel is open given that it was closed previously, m(V), and the probability that the Na channel is open given that it was inactive previously, h(V). A simplified model of action potential generation in neurons is the Fitzhugh-Nagumo (FN) model. Unlike the Hodgkin-Huxley model, the FN model has only two dynamic variables, by combining the variables V and m into a single variable v, and combining the variables n and h into a single variable r

\begin{align} & \frac{dv}{dt}=c(v-\frac{1}{3}{{v}^{3}}+r+I) \\ & \frac{dr}{dt}=-\frac{1}{c}(v-a+br) \end{align}

I is an external current injected into the neuron. Since the FN model has only two dynamic variables, its full dynamics can be explored using phase plane methods (Sample solution in Python here [2])
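A minimal Euler integration of the FN equations, using the parameter values from the figure caption (a=0.7, b=0.8, c=3.0, I=-0.4), shows the sustained oscillation directly:

```python
import numpy as np

a, b, c, I = 0.7, 0.8, 3.0, -0.4    # parameters from the figure caption
dt, t_end = 0.01, 100.0
steps = int(t_end / dt)

v, r = 0.0, 0.0                     # one possible starting condition
v_trace = np.empty(steps)
for i in range(steps):
    dv = c * (v - v**3 / 3 + r + I)   # fast ("voltage-like") variable
    dr = -(v - a + b * r) / c         # slow ("recovery") variable
    v, r = v + dt * dv, r + dt * dr
    v_trace[i] = v

# For these parameters the fixed point is unstable, and the trajectory
# settles onto a limit cycle: an action-potential-like oscillation in v
print(v_trace[-5000:].min(), v_trace[-5000:].max())
```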

## Simulating a Single Neuron with Positive Feedback

The following two examples are taken from [3]. This book provides a fantastic introduction to modeling simple neural systems, and gives a good understanding of the underlying information processing.

Let us first look at the response of a single neuron, with an input x(t), and with feedback onto itself. The weight of the input is v, and the weight of the feedback w. The response y(t) of the neuron is given by

$y(t)=wy(t-1)+vx(t-1)$

This shows how even very simple simulations can capture signal processing properties of real neurons.

System output for an input pulse: a “leaky integrator”
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pylab as plt

def oneUnitWithPosFB():
    '''Simulates a single model neuron with positive feedback '''
    # set input flag (1 for impulse, 2 for step)
    inFlag = 1

    cut = -np.inf   # set cut-off
    sat = np.inf    # set saturation

    tEnd = 100      # set last time step
    nTs = tEnd+1    # find the number of time steps

    v = 1           # set the input weight
    w = 0.95        # set the feedback weight

    x = np.zeros(nTs)   # open (define) an input hold vector
    start = 11          # set a start time for the input
    if inFlag == 1:     # if the input should be a pulse
        x[start] = 1    # then set the input at only one time point
    elif inFlag == 2:   # if the input instead should be a step, then
        x[start:nTs] = np.ones(nTs-start)   # keep it up until the end

    y = np.zeros(nTs)   # open (define) an output hold vector
    for t in range(1, nTs):         # at every time step (skipping the first)
        y[t] = w*y[t-1] + v*x[t-1]  # compute the output
        y[t] = np.max([cut, y[t]])  # impose the cut-off constraint
        y[t] = np.min([sat, y[t]])  # impose the saturation constraint

    # plot results (no frills)
    plt.subplot(211)
    tBase = np.arange(tEnd+1)
    plt.plot(tBase, x)
    plt.axis([0, tEnd, 0, 1.1])
    plt.xlabel('Time Step')
    plt.ylabel('Input')
    plt.subplot(212)
    plt.plot(tBase, y)
    plt.xlabel('Time Step')
    plt.ylabel('Output')
    plt.show()

if __name__ == '__main__':
    oneUnitWithPosFB()


## Simulating a Simple Neural System

Even very simple neural systems can display a surprisingly versatile set of behaviors. An example is Wilson's model of the locust-flight central pattern generator. Here the system is described by

$\vec{y}(t)=\mathbf{W}\cdot \vec{y}(t-1)+\vec{v}\,x(t-1)$

W is the connection matrix describing the recurrent connections of the neurons, and $\vec{v}$ describes the weights of the input to the system.

Input x connects to units yi (i=1,2,3,4) with weights vi , and units y_l (l = 1,2,3,4) connect to units y_k (k = 1,2,3,4) with weights w_kl . For clarity, the self-connections of y2 and y3 are not shown, and the individual forward and recurrent weights are not labeled. Based on Tom Anastasio's excellent book "Tutorial on Neural Systems Modeling".
The response of units representing motoneurons in the linear version of Wilson’s model of the locust-flight central pattern generator (CPG): A simple input pulse elicits a sustained antagonistic oscillation in neurons 2 and 3.
import numpy as np
import matplotlib.pylab as plt

def printInfo(text, value):
    print(text)
    print(np.round(value, 2))

def WilsonCPG():
    '''implements a linear version of Wilson's
    locust flight central pattern generator (CPG) '''

    v1 = v3 = v4 = 0.                     # set input weights
    v2 = 1.
    w11=0.9;  w12=0.2;  w13 = w14 = 0.    # feedback weights to unit one
    w21=-0.95; w22=0.4; w23=-0.5; w24=0.  # ... to unit two
    w31=0.; w32=-0.5; w33=0.4; w34=-0.95  # ... to unit three
    w41 = w42 = 0.; w43=0.2; w44=0.9      # ... to unit four
    V = np.array([v1, v2, v3, v4])        # compose input weight vector
    W = np.array([[w11, w12, w13, w14],
                  [w21, w22, w23, w24],
                  [w31, w32, w33, w34],
                  [w41, w42, w43, w44]])  # compose feedback weight matrix

    tEnd = 100              # set end time
    tVec = np.arange(tEnd)  # set time vector
    nTs = tEnd              # find number of time steps
    x = np.zeros(nTs)       # zero input vector
    fly = 11                # set time to start flying
    x[fly] = 1              # set input to one at fly time

    y = np.zeros((4,nTs))   # zero output vector
    for t in range(1,nTs):  # for each time step
        y[:,t] = W.dot(y[:,t-1]) + V*x[t-1]  # compute output

    # These calculations are interesting, but not absolutely necessary
    (eVal, eVec) = np.linalg.eig(W)       # find eigenvalues and eigenvectors
    magEVal = np.abs(eVal)                # find magnitude of eigenvalues
    angEVal = np.angle(eVal)*(180/np.pi)  # find angles of eigenvalues

    printInfo('Eigenvectors: --------------', eVec)
    printInfo('Eigenvalues: ---------------', eVal)
    printInfo('Angle of Eigenvalues: ------', angEVal)

    # plot results (units y2 and y3 only)
    plt.figure()
    plt.rcParams['font.size'] = 14      # set the default fontsize
    plt.rcParams['lines.linewidth'] = 1

    plt.plot(tVec, x, 'k-.', tVec, y[1,:], 'k', tVec, y[2,:], 'k--',
             linewidth=2.5)
    plt.axis([0, tEnd, -0.6, 1.1])
    plt.xlabel('Time Step', fontsize=14)
    plt.ylabel('Input and Unit Responses', fontsize=14)
    plt.legend(('Input', 'Left Motoneuron', 'Right Motoneuron'))
    plt.show()

if __name__ == '__main__':
    plt.close('all')
    WilsonCPG()


## References

1. T. Haslwanter (2012). "Hodgkin-Huxley Simulations [Python]". Private communications.
2. T. Haslwanter (2012). "Fitzhugh-Nagumo Model [Python]". Private communications.
3. T. Anastasio (2010). "Tutorial on Neural Systems Modeling".