# Sensory Systems/NeuralSimulation

## Simulating Action Potentials

### Action Potential

The "action potential" is the stereotypical voltage change that is used to propagate signals in the nervous system.

With the mechanisms described below, an incoming stimulus (of any sort) can lead to a change in the voltage potential of a nerve cell. Below a certain threshold, that's all there is to it ("failed initiations" in Fig. 4). But when the threshold of the voltage-gated ion channels is reached, a positive-feedback reaction sets in that almost immediately opens the Na+ ion channels completely ("depolarization" below): the permeability for Na+ (which in the resting state is about 1% of the permeability for K+) becomes about 20 times larger than that for K+. As a result, the membrane voltage rises from about -60 mV to about +50 mV. At that point internal reactions start to close (and block) the Na+ channels, and to open the K+ channels, in order to restore the equilibrium state. During this "refractory period" of about 1 ms, no depolarization can elicit an action potential. Only when the resting state is reached can new action potentials be triggered.

To simulate an action potential, we first have to define the different elements of the cell membrane, and how to describe them analytically.

### Cell Membrane

The cell membrane is a water-repelling, almost impermeable double layer of lipids. The real power in processing signals does not come from the membrane itself, but from the ion channels embedded in it. Ion channels are proteins embedded in the cell membrane which can selectively be opened for certain types of ions. (This selectivity is achieved by the geometrical arrangement of the amino acids that make up the channel.) In addition to the Na+ and K+ ions mentioned above, ions typically found in the nervous system are the cations Ca2+ and Mg2+, and the anion Cl-.

#### States of ion channels

Ion channels can take on one of three states:

- Open (For example, an open Na-channel lets Na+ ions pass, but blocks all other types of ions).
- Closed, with the option to open up.
- Closed, unconditionally.

#### Resting state

The typical default situation – when nothing is happening – is characterized by K+ channels that are open, while all the other channels are closed. In that case, two forces determine the cell voltage:

- The (chemical) concentration difference between the intra-cellular and extra-cellular concentration of K+, which is maintained by the continuous activity of ion pumps in the cell membrane.
- The (electrical) voltage difference between the inside and outside of the cell.

The equilibrium is defined by the Nernst equation:

$$ E_X = \frac{RT}{zF} \ln \frac{[X]_o}{[X]_i} $$

R ... gas constant, T ... absolute temperature, z ... ion valence, F ... Faraday constant, [X]_{o/i} ... ion concentration outside/inside the cell. At 25 °C, RT/F is about 25 mV. With typical K+ concentrations inside and outside of neurons (roughly 140 mM and 4 mM, respectively), this yields a K+ equilibrium voltage of about -90 mV. If the ion channels for K+, Na+ and Cl- are considered simultaneously, the equilibrium situation is characterized by the Goldman equation

$$ V_m = \frac{RT}{F} \ln \frac{P_K [K^+]_o + P_{Na} [Na^+]_o + P_{Cl} [Cl^-]_i}{P_K [K^+]_i + P_{Na} [Na^+]_i + P_{Cl} [Cl^-]_o} $$

where P_i denotes the permeability of ion "i", and [i] its concentration. (Note that for Cl-, with its negative valence, the inside and outside concentrations swap places.) With typical ion concentrations, the cell has in its resting state a negative polarity of about -60 mV.
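As a quick numerical check, both equations can be evaluated directly. The concentration and permeability values below are typical textbook numbers, chosen here only for illustration:

```python
import numpy as np

R = 8.314       # gas constant [J/(mol K)]
F = 96485.0     # Faraday constant [C/mol]
T = 298.15      # temperature [K] (25 deg C)

def nernst(c_out, c_in, z=1):
    """Nernst equilibrium potential [V] for an ion of valence z."""
    return (R * T) / (z * F) * np.log(c_out / c_in)

def goldman(p_k, p_na, p_cl, k_o, k_i, na_o, na_i, cl_o, cl_i):
    """Goldman membrane voltage [V]; for Cl- (valence -1),
    the inside and outside concentrations swap places."""
    num = p_k * k_o + p_na * na_o + p_cl * cl_i
    den = p_k * k_i + p_na * na_i + p_cl * cl_o
    return (R * T) / F * np.log(num / den)

# Illustrative concentrations [mM] and relative permeabilities
E_K = nernst(4.0, 140.0)                    # K+ equilibrium, about -90 mV
V_rest = goldman(1.0, 0.03, 0.1,            # P_K : P_Na : P_Cl
                 4.0, 140.0,                # K+  out / in
                 145.0, 12.0,               # Na+ out / in
                 110.0, 4.0)                # Cl- out / in
print(f'E_K = {E_K*1000:.1f} mV, V_rest = {V_rest*1000:.1f} mV')
```

With these numbers the resting voltage comes out between the K+ equilibrium value and zero, in the ballpark of the -60 mV quoted above.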

#### Activation of Ion Channels

The nifty feature of ion channels is the fact that their permeability can be changed by

- a mechanical stimulus (mechanically activated ion channels)
- a chemical stimulus (ligand-activated ion channels)
- an external voltage (voltage-gated ion channels)

Occasionally ion channels directly connect two cells, in which case they are called gap junction channels.

**Important**

- Sensory systems are essentially based on ion channels, which are activated by a mechanical stimulus (pressure, sound, movement), a chemical stimulus (taste, smell), or an electromagnetic stimulus (light), and which produce a "neural signal", i.e. a voltage change in a nerve cell.
- Action potentials use voltage-gated ion channels to change the "state" of the neuron quickly and reliably.
- The communication between nerve cells predominantly uses ion channels that are activated by neurotransmitters, i.e. chemicals emitted at a synapse by the preceding neuron. This provides maximum flexibility in the processing of neural signals.

#### Modeling a voltage dependent ion channel

Ohm's law relates the resistance of a resistor, R, to the current I it passes, and the voltage drop V across the resistor:

$$ V = R \, I $$

or

$$ I = g \, V $$

where g = 1/R is the conductance of the resistor. If you now suppose that the conductance is directly proportional to the probability that the channel is in the open conformation, this equation becomes

$$ I = g_{max} \, n \, V $$

where g_max is the maximum conductance of the channel, and n is the probability that the channel is in the open conformation.
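In code this conductance model is a one-liner. The numbers in the example call are purely illustrative:

```python
def channel_current(g_max, n, V):
    """Current through a population of channels: I = g_max * n * V,
    where n is the open probability (between 0 and 1)."""
    return g_max * n * V

# Hypothetical numbers: 36 mS/cm^2 maximum K conductance,
# 30% of channels open, 25 mV driving voltage
print(channel_current(36.0, 0.3, 25.0))  # -> 270.0 (uA/cm^2)
```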

**Example: the K-channel**

Voltage-gated potassium channels (Kv) can be either open or closed. Let α be the rate at which the channel goes from closed to open, and β the rate at which it goes from open to closed:

$$ C \underset{\beta}{\overset{\alpha}{\rightleftharpoons}} O $$

Since n is the probability that the channel is open, the probability that the channel is closed has to be (1 − n), since every channel is either open or closed. Changes in the conformation of the channel can therefore be described by the formula

$$ \frac{dn}{dt} = \alpha \, (1-n) - \beta \, n $$

Note that α and β are voltage dependent! With a technique called "voltage clamping", Hodgkin and Huxley determined these rates in 1952, and they came up with something like

$$ \alpha_n(V) = \frac{0.01\,(10 - V)}{\exp\left(\frac{10-V}{10}\right) - 1}, \qquad \beta_n(V) = 0.125 \, \exp\left(-\frac{V}{80}\right) $$

where V is the voltage displacement from rest, in mV. If you only want to model a voltage-dependent potassium channel, these would be the equations to start from. (For voltage-gated Na channels, the equations are a bit more involved, since those channels have three possible conformations: open, closed, and inactive.)
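A minimal sketch of such a potassium-channel model, using the rate functions above and simple Euler integration (the clamp voltage of 25 mV is an arbitrary choice for illustration):

```python
import numpy as np

def alpha_n(V):
    """Opening rate [1/ms]; V is the displacement from rest [mV]."""
    return 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)

def beta_n(V):
    """Closing rate [1/ms]."""
    return 0.125 * np.exp(-V / 80)

def simulate_n(V, n0=0.0, dt=0.01, t_end=50.0):
    """Euler-integrate dn/dt = alpha*(1-n) - beta*n at a clamped voltage V."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    return n

# At a fixed (clamped) voltage, n relaxes to the steady state alpha/(alpha+beta)
V_clamp = 25.0
n_steady = alpha_n(V_clamp) / (alpha_n(V_clamp) + beta_n(V_clamp))
print(simulate_n(V_clamp), n_steady)
```

After a few multiples of the time constant 1/(α+β), the simulated open probability matches the analytical steady state.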

### Hodgkin-Huxley equation

The feedback-loop of voltage-gated ion channels mentioned above made it difficult to determine their exact behaviour. In a first approximation, the shape of the action potential can be explained by analyzing the electrical circuit of a single axonal compartment of a neuron, consisting of the following components: 1) membrane capacitance, 2) Na channel, 3) K channel, 4) leakage current:

The final equation in the original Hodgkin-Huxley model, where the currents of chloride ions and other leakage currents were combined into a single leak term, was as follows:

$$ C_m \frac{dV}{dt} = I_{ext} - \bar{g}_{Na} \, m^3 h \, (V - E_{Na}) - \bar{g}_K \, n^4 \, (V - E_K) - \bar{g}_L \, (V - E_L) $$

where m, h, and n are time- and voltage-dependent functions which describe the membrane permeability. For example, for the K channels n obeys the equations described above, which were determined experimentally with voltage clamping. These equations describe the shape and propagation of the action potential with high accuracy! The model can be solved easily with open-source tools, e.g. the Python dynamical-systems toolbox *PyDSTool*. A simple solution file is available under ^{[1]}, and the output is shown below.
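The model can also be integrated with a plain Euler scheme and nothing but NumPy. The sketch below assumes the standard squid-axon parameter set, with the rate functions written in the modern convention (resting potential near −65 mV):

```python
import numpy as np

def hh_rates(V):
    """Standard Hodgkin-Huxley rate functions [1/ms], V in mV."""
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate_hh(I_ext=10.0, t_end=50.0, dt=0.01):
    """Euler integration of the Hodgkin-Huxley equations.
    I_ext in uA/cm^2; returns the voltage trace [mV]."""
    C = 1.0                                  # membrane capacitance [uF/cm^2]
    g_Na, g_K, g_L = 120.0, 36.0, 0.3        # max conductances [mS/cm^2]
    E_Na, E_K, E_L = 50.0, -77.0, -54.4      # reversal potentials [mV]
    V, m, h, n = -65.0, 0.05, 0.6, 0.32      # initial conditions near rest
    trace = []
    for _ in range(int(t_end / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = hh_rates(V)
        I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
        I_K = g_K * n**4 * (V - E_K)         # potassium current
        I_L = g_L * (V - E_L)                # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        trace.append(V)
    return np.array(trace)

V = simulate_hh()
print(f'Peak voltage: {V.max():.1f} mV')
```

With the injected current of 10 µA/cm² the model fires action potentials, and the voltage trace overshoots well above 0 mV before repolarizing.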

##### Links to full Hodgkin-Huxley model

- http://itb.biologie.hu-berlin.de/~stemmler/sec1.html#SECTION00020000000000000000
- http://www.afodor.net/HHModel.htm

### Modeling the Action Potential Generation: The Fitzhugh-Nagumo model

The Hodgkin-Huxley model has four dynamical variables: the voltage V, the probability that the K channel is open, n(V), the probability that the Na channel is open given that it was closed previously, m(V), and the probability that the Na channel is open given that it was inactive previously, h(V). A simplified model of action potential generation in neurons is the Fitzhugh-Nagumo (FN) model. Unlike the Hodgkin-Huxley model, the FN model has only two dynamic variables, obtained by combining the variables V and m into a single variable v, and combining the variables n and h into a single variable r:

$$ \frac{dv}{dt} = v - \frac{v^3}{3} - r + I $$

$$ \frac{dr}{dt} = \frac{1}{\tau} \left( v + a - b \, r \right) $$

Here I is an external current injected into the neuron, and a, b, and τ are model parameters. Since the FN model has only two dynamic variables, its full dynamics can be explored using phase plane methods (sample solution in Python here ^{[2]}).
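A minimal Euler integration of the FN equations; the parameter values a = 0.7, b = 0.8, τ = 12.5 are common textbook choices, assumed here for illustration:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, t_end=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """Euler integration of the FitzHugh-Nagumo model:
        dv/dt = v - v^3/3 - r + I
        dr/dt = (v + a - b*r) / tau
    Returns the trace of the voltage-like variable v."""
    v, r = -1.0, -0.5
    vs = []
    for _ in range(int(t_end / dt)):
        dv = v - v**3 / 3 - r + I
        dr = (v + a - b * r) / tau
        v += dt * dv
        r += dt * dr
        vs.append(v)
    return np.array(vs)

v = fitzhugh_nagumo()
# For I = 0.5 the resting state is unstable and v traces out a limit cycle
print(f'v range: [{v.min():.2f}, {v.max():.2f}]')
```

For small I the system sits at a stable fixed point; above a critical current the fixed point becomes unstable and the model spikes periodically, which is the two-variable analogue of repetitive firing in the full HH model.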

## Simulating a Single Neuron with Positive Feedback

The following two examples are taken from ^{[3]}. This book provides a fantastic introduction to modeling simple neural systems, and gives a good understanding of the underlying information processing.

Let us first look at the response of a single neuron with an input x(t), and with feedback onto itself. The weight of the input is v, and the weight of the feedback is w. The response y(t) of the neuron is given by

$$ y(t) = w \, y(t-1) + v \, x(t-1) $$

This shows how even very simple simulations can capture signal-processing properties of real neurons.

```python
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pylab as plt

def oneUnitWithPosFB():
    '''Simulates a single model neuron with positive feedback '''

    inFlag = 1          # set input flag (1 for impulse, 2 for step)
    cut = -np.inf       # set cut-off
    sat = np.inf        # set saturation
    tEnd = 100          # set last time step
    nTs = tEnd+1        # find the number of time steps
    v = 1               # set the input weight
    w = 0.95            # set the feedback weight

    x = np.zeros(nTs)   # open (define) an input hold vector
    start = 11          # set a start time for the input
    if inFlag == 1:     # if the input should be a pulse
        x[start] = 1    # then set the input at only one time point
    elif inFlag == 2:   # if the input instead should be a step, then
        x[start:nTs] = np.ones(nTs-start)   # keep it up until the end

    y = np.zeros(nTs)   # open (define) an output hold vector
    for t in range(2, nTs):             # at every time step (skipping the first two)
        y[t] = w*y[t-1] + v*x[t-1]      # compute the output
        y[t] = np.max([cut, y[t]])      # impose the cut-off constraint
        y[t] = np.min([sat, y[t]])      # impose the saturation constraint

    # plot results (no frills)
    plt.subplot(211)
    tBase = np.arange(tEnd+1)
    plt.plot(tBase, x)
    plt.axis([0, tEnd, 0, 1.1])
    plt.xlabel('Time Step')
    plt.ylabel('Input')

    plt.subplot(212)
    plt.plot(tBase, y)
    plt.xlabel('Time Step')
    plt.ylabel('Output')
    plt.show()

if __name__ == '__main__':
    oneUnitWithPosFB()
```

## Simulating a Simple Neural System

Even very simple neural systems can display a surprisingly versatile set of behaviors. An example is Wilson's model of the locust-flight central pattern generator. Here the system is described by

$$ \vec{y}(t) = \mathbf{W} \cdot \vec{y}(t-1) + \vec{v} \, x(t-1) $$

where W is the connection matrix describing the recurrent connections of the neurons, and v describes the input weights to the system.

```python
import numpy as np
import matplotlib.pylab as plt

def printInfo(text, value):
    print(text)
    print(np.round(value, 2))

def WilsonCPG():
    '''implements a linear version of Wilson's locust flight
    central pattern generator (CPG) '''

    v1 = v3 = v4 = 0.                       # set input weights
    v2 = 1.
    w11=0.9;   w12=0.2;  w13 = w14 = 0.     # feedback weights to unit one
    w21=-0.95; w22=0.4;  w23=-0.5; w24=0    # ... to unit two
    w31=0;     w32=-0.5; w33=0.4; w34=-0.95 # ... to unit three
    w41 = w42 = 0.;      w43=0.2; w44=0.9   # ... to unit four

    V = np.array([v1, v2, v3, v4])          # compose input weight matrix (vector)
    W = np.array([[w11, w12, w13, w14],
                  [w21, w22, w23, w24],
                  [w31, w32, w33, w34],
                  [w41, w42, w43, w44]])    # compose feedback weight matrix

    tEnd = 100                  # set end time
    tVec = np.arange(tEnd)      # set time vector
    nTs = tEnd                  # find number of time steps
    x = np.zeros(nTs)           # zero input vector
    fly = 11                    # set time to start flying
    x[fly] = 1                  # set input to one at fly time
    y = np.zeros((4, nTs))      # zero output vector
    for t in range(1, nTs):     # for each time step
        y[:,t] = W.dot(y[:,t-1]) + V*x[t-1]  # compute output

    # These calculations are interesting, but not absolutely necessary
    (eVal, eVec) = np.linalg.eig(W)          # find eigenvalues and eigenvectors
    magEVal = np.abs(eVal)                   # find magnitude of eigenvalues
    angEVal = np.angle(eVal)*(180/np.pi)     # find angles of eigenvalues
    printInfo('Eigenvectors: --------------', eVec)
    printInfo('Eigenvalues: ---------------', eVal)
    printInfo('Angle of Eigenvalues: ------', angEVal)

    # plot results (units y2 and y3 only)
    plt.figure()
    plt.rcParams['font.size'] = 14           # set the default fontsize
    plt.rcParams['lines.linewidth'] = 1
    plt.plot(tVec, x, 'k-.',
             tVec, y[1,:], 'k',
             tVec, y[2,:], 'k--', linewidth=2.5)
    plt.axis([0, tEnd, -0.6, 1.1])
    plt.xlabel('Time Step', fontsize=14)
    plt.ylabel('Input and Unit Responses', fontsize=14)
    plt.legend(('Input', 'Left Motoneuron', 'Right Motoneuron'))
    plt.show()

if __name__ == '__main__':
    plt.close('all')
    WilsonCPG()
```

## References

- ↑ T. Haslwanter (2012). "Hodgkin-Huxley Simulations [Python]". *private communications*. http://work.thaslwanter.at/CSS/Code/HH_model.py
- ↑ T. Haslwanter (2012). "Fitzhugh-Nagumo Model [Python]". *private communications*. http://work.thaslwanter.at/CSS/Code/Fitzhugh_Nagumo.py
- ↑ T. Anastasio (2010). "Tutorial on Neural Systems Modeling". http://www.sinauer.com/detail.php?id=3396