Sensory Systems/Computer Models/Simulations Olfactory System

Computational Models of Olfaction

The olfactory system is a vast and complex system that allows for odorant processing across species. Much like other sensory systems, it has been modelled numerous times in the hope of concretely describing a working and physiologically plausible mechanism of its transduction. Computational models of olfaction typically focus on the vertebrate olfactory system, and seek to reproduce known behavioural or perceptual phenomena related to odorant exposure while using components that behave, and are organized, in a fashion analogous to the biological elements known to be involved in vertebrate olfaction. Below, two such early models are described: the first was developed by Ambros-Ingerson, Granger, and Lynch (1990) and presented in their paper Simulation of paleocortex performs hierarchical-clustering [1], and the second is Hopfield’s (1995), from Pattern-recognition computation using action-potential timing for stimulus representation [2].

Hierarchical Clustering of Odors

In the paper Simulation of paleocortex performs hierarchical-clustering (1990), Ambros-Ingerson, Granger and Lynch presented a model of the olfactory paleocortex – a phylogenetically old portion of the cortex whose main parts comprise the olfactory bulb and the piriform cortex, areas functionally associated with olfaction – that performs hierarchical clustering. Their research question, answered in the title, was whether the organization of the olfactory paleocortex led to two types of representations (categories and individuals) – as they had found in simpler earlier models – or to more complex renderings of the world with intricate hierarchical structures. A simple example can serve to demonstrate the difference between these two modes of classification: when considering the scent of lavender, one may for instance 1) classify it as a plant and then identify its individual scent as lavender, or 2) decide first that the perceived smell is that of a biological organism, then that of a plant, then that of a flower, then that of lavender in particular, thus sorting inputs as parts of a complex hierarchy.

Ambros-Ingerson et al.'s model's architecture – the model uses two networks, bulb and cortex. Within the bulb, there are 400 simulated excitatory mitral cells, each projecting sparsely and non-topographically to the cortex via the (biologically inspired) lateral olfactory tract. These 400 mitral cells are divided into patches of 40 cells, each patch receiving input from a peripheral receptor axon. The number of active mitral cells in a patch represents the intensity of a given cue. Simulated inhibitory granule cells serve to normalize the output of the bulb (in biology, this is believed to occur via dendro-dendritic contacts with mitral cells) such that the bulbar output to the cortex is relatively constant across cues and intensities. Inhibitory granule cells also receive randomly organized excitatory feedback from the cortex, which is composed of 1000 excitatory layer II cells. These are grouped into patches of 20 cells, and connected to each other in a feedforward fashion, either directly or via local inhibitory neurons, creating a competitive soft winner-take-all response to bulbar input in each patch. Connections are learnt via correlational, Hebbian-type rules. Both the learning rules and the anatomy of the model are based on reported anatomical and physiological data.

With respect to the elements of the model, it is composed of two thoroughly interconnected networks, one for the olfactory bulb and one for the cortex. The olfactory bulb receives its inputs from “peripheral receptor axons,” each of which projects to a group, or patch, of simulated mitral cells located within the bulb. Each mitral cell is in turn connected to local inhibitory granule cells via dendro-dendritic-like connections that allow for bulbar output normalization. In the biological olfactory system, mitral cells are excitatory neurons thought to serve as the main input and output point of the olfactory bulb (the first via their dendrites, located in the olfactory bulb's glomeruli, and the second via their axons, which project to different parts of the cortex through the lateral olfactory tract; although tufted cells of the olfactory bulb have a similar role, they are not included in this model). Accordingly, the simulated olfactory bulb’s output is carried by mitral cells to layer II of the simulated piriform cortex, which is made of 1000 excitatory cells divided into patches of 20 cells each. Feedforward connections between cortical neurons of the same patch, either directly to each other or indirectly via local inhibitory neurons, result in a competitive soft winner-take-all response to bulbar input within cortical patches. Feedback connections from the cortex to the simulated olfactory bulb terminate on inhibitory granule cells.
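
To make this architecture concrete, the following minimal sketch (in Python/NumPy) lays out the corresponding connectivity. The patch counts follow from the figures given in the text (400/40 = 10 bulbar patches, 1000/20 = 50 cortical patches); the 10% connection probability and the lumping of granule cells into one unit per bulbar patch are illustrative assumptions, not values from the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MITRAL, MITRAL_PATCH = 400, 40       # 10 bulbar patches of 40 mitral cells
N_CORTEX, CORTEX_PATCH = 1000, 20      # 50 cortical (layer II) patches of 20 cells
P_CONNECT = 0.10                       # assumed sparse, non-topographic wiring
N_BULB_PATCHES = N_MITRAL // MITRAL_PATCH

# Sparse, random mitral -> layer II connections (the simulated lateral olfactory tract)
W_bulb_to_cortex = (rng.random((N_CORTEX, N_MITRAL)) < P_CONNECT).astype(float)

# Randomly organized excitatory feedback from layer II cells onto granule cells,
# here lumped into one granule unit per bulbar patch (a simplifying assumption)
W_cortex_to_granule = (rng.random((N_BULB_PATCHES, N_CORTEX)) < P_CONNECT).astype(float)

def bulbar_activity(patch_intensities):
    """Cue intensity per patch -> number of active mitral cells in that patch."""
    act = np.zeros(N_MITRAL)
    for p, frac in enumerate(patch_intensities):
        n_active = int(frac * MITRAL_PATCH)
        act[p * MITRAL_PATCH : p * MITRAL_PATCH + n_active] = 1.0
    return act

# Example: a two-component cue, strong in patch 0 and weaker in patch 3
bulb = bulbar_activity([0.8, 0.0, 0.0, 0.3] + [0.0] * 6)
```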

The model implements repetitive sampling, representing “the cyclic sniffing behaviour of mammals,” operating at a theta rhythm (4–7 Hz) “characteristic of small mammal” sniffing. Therefore, for a given cue, the simulated mitral cells receive the same peripheral receptor axon input repetitively, for brief periods of time. Although the overall bulbar output to the cortex is normalized via local mitral-granule cell connections, the number of mitral cells active in a particular bulbar patch varies and indicates the intensity of a given cue, or of a given cue’s component (in the case of multicomponent cues). These mitral cells are sparsely and randomly connected to cortical cells, which respond to the cues responsible for the firing of those mitral cells most strongly connected to them. The strength of mitral-to-cortical-cell connections is modified and learnt via an unsupervised Hebbian rule, which acts within a single operation cycle of the model. This learning rule implements long-term potentiation (LTP) between firing mitral cells and their cortical cell targets, when the latter are sufficiently depolarized. Feedback connections from the simulated cortex to the bulb’s granule cells are learnt during an earlier “developmental” period (training), through a similar Hebbian rule that correlates the activity of bulb and cortex. Thus, both the response of the bulb and the response of the cortex are time-locked to the sniff-like theta rhythm at which the model operates, and continue cycling until the input cue is removed or receptor axon input is virtually silent.

The connectivity and learning rules of Ambros-Ingerson et al.’s model result in a multilevel hierarchical memory that uncovers underlying statistical relationships between different learned cues. Its mechanism is as follows. When a multicomponent cue is fed into the model, receptor axons activate the mitral cell patches that represent the cue’s different components. The simulated bulb’s normalization, ensured by its inhibitory granule cells, leads to a constant olfactory bulb output even though different mitral cell patches may be activated to different extents during the model’s exposure to different cues. This output is fed into the simulated layer II of the piriform cortex, where excitatory-to-inhibitory cell connections result in a competitive winner-take-all response. This generates a Hopfield-like network, in which the stable state is the one matching the strongest input pattern. Since connections are learnt via Hebbian-type rules, input lines shared across many similar input cues (which have therefore participated in several previous learning episodes) create stronger connections to their representative piriform cortex cells than input lines shared across fewer input cues (which have participated in fewer previous learning episodes).
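
A minimal sketch of one such processing step is given below: granule-cell-like normalization of the bulbar output, a per-patch winner-take-all in the simulated layer II, and a Hebbian (LTP-like) weight update between firing mitral cells and their sufficiently depolarized cortical targets. The learning rate, depolarization threshold and initial weights are illustrative assumptions rather than the parameters of the original simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_MITRAL, N_CORTEX, CORTEX_PATCH = 400, 1000, 20
W = rng.random((N_CORTEX, N_MITRAL)) * 0.01     # initial mitral -> layer II weights
ETA, DEPOL_THRESHOLD = 0.05, 0.0                # assumed learning parameters

def normalize(bulb_act, total=40.0):
    """Granule-cell-like normalization: keep total bulbar output roughly constant."""
    s = bulb_act.sum()
    return bulb_act * (total / s) if s > 0 else bulb_act

def cortical_response(bulb_act, W):
    """Winner-take-all within each cortical patch: only the most driven cell fires."""
    drive = W @ bulb_act
    out = np.zeros_like(drive)
    for start in range(0, N_CORTEX, CORTEX_PATCH):
        winner = start + int(np.argmax(drive[start:start + CORTEX_PATCH]))
        if drive[winner] > DEPOL_THRESHOLD:
            out[winner] = 1.0
    return out

def hebbian_update(W, bulb_act, cortex_act):
    """LTP between firing mitral cells and sufficiently depolarized cortical targets."""
    return W + ETA * np.outer(cortex_act, bulb_act)

bulb = normalize((rng.random(N_MITRAL) < 0.2).astype(float))  # a random sparse cue
cortex = cortical_response(bulb, W)
W = hebbian_update(W, bulb, cortex)
```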

The cortical cells then feed back onto the bulb’s inhibitory granule cells, which suppress those bulbar patches most responsible for cortical firing, thereby allowing (via renormalization) weaker patches to activate the cortex more strongly on subsequent cycles. With this ‘competitive queuing’ dynamic, initial (first-cycle) responses are very similar across members of similar input clusters and then, as the cycles continue, become more and more specific to the particular input cue – implying that the model uncovers multi-level statistical relationships between its inputs and classifies them accordingly.
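
The ‘competitive queuing’ dynamic can be sketched as follows: on each simulated sniff cycle, feedback suppresses the bulbar patches that most drove the previous cortical response, and the remaining output is renormalized, so successively weaker cue components dominate later cycles. The patch intensities below and the shortcut of simply silencing the strongest patch stand in for the cortex-to-granule-cell feedback pathway described above.

```python
import numpy as np

def normalize(patches, total=1.0):
    s = patches.sum()
    return patches * (total / s) if s > 0 else patches

# A multicomponent cue: per-patch intensities, with strong 'category-like'
# components shared by many cues first and weaker item-specific ones last.
cue = np.array([0.9, 0.8, 0.3, 0.1])
active = cue.copy()

for cycle in range(3):
    bulb_out = normalize(active)
    print(f"cycle {cycle}: bulbar output per patch = {np.round(bulb_out, 2)}")
    # Feedback via granule cells: suppress the patch most responsible for the
    # cortical response -- here, simply the currently strongest patch.
    active[np.argmax(bulb_out)] = 0.0
```

Early cycles are dominated by the strong, shared components (category-like responses), while later cycles are driven by the weaker, item-specific components, mirroring the coarse-to-fine classification described above.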

Ambros-Ingerson et al.’s computational model of olfaction shows that the olfactory system’s organization, as reported by anatomical and physiological research, forms a network that implements an algorithm through which the “computationally difficult problem of hierarchical clustering” can be solved. Their findings are consistent with those of previous perceptual studies in which human subjects were found to recognize objects in a hierarchical fashion (Biederman, 1972[3]; Gluck and Bower, 1988[4]). Moreover, several of the model’s predictions have been further explored, including the possibility that this type of “progressive tuning over successive sampling cycles” and hierarchical clustering could be a general feature of thalamo-cortical circuits (Rodriguez et al., 2004[5]; Wilent and Contreras, 2005[6]). Although no clear biological confirmation appears to have been issued (probably because the complex responses of the system prevent both clear confirmation and clear rejection of the model), other models of thalamo-cortical interactions (e.g. Wilent and Contreras, 2005) suggest that the dynamics found by Ambros-Ingerson et al. could form a fundamental part of the working neocortex.

The Problem of Scale-Invariant Recognition

Scale-invariant recognition refers to the act of appropriately recognizing an object in a fashion that is independent of its scale – where ‘scale’ includes, amongst others, both size and intensity. Again, a simple example may prove useful for understanding the scope of this problem: when one is exposed to a defined scent, say lavender, one’s qualitative classification of the scent does not change with variations in the scent’s intensity (i.e. as one approaches the origin of the scent, the smell’s intensity may increase, but this change in intensity will not lead the lavender scent to be gradually or suddenly perceived as that of orange juice). In other words, an input pattern X and its scaled version λX, where λ is the scale, should be classified identically. Although natural to us humans, this type of classification presented a real challenge for perceptrons and other rate-based models of pattern recognition, which were widely used at the time. These were consistent with the rate-based processing observed in the peripheral nervous system, where the firing rates of motor neurons modulate the strength of muscle contractions. However, for more complex systems such as vision, audition or olfaction, it is often not only the scale but also the ratio of the input variables that determines a stimulus’s quality and intensity. It is the ‘ratio’ that presents a problem here, especially if lower-level sensory neurons are used for the identification of several different inputs: if the olfactory bulb’s ‘lavender’ neuron were activated by a 5:5:1 activation ratio from three different olfactory receptor neurons within the nasal cavity, and the ‘orange juice’ neuron by a 5:5:2 activation ratio of the same neurons, then, given a strong enough lavender scent and a constant threshold in the receiving neurons, both would be active in a rate-based model despite there being no orange juice.
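
The sketch below illustrates this failure mode with the lavender/orange-juice example: two thresholded rate-based detectors, tuned to the 5:5:1 and 5:5:2 ratios respectively, both end up firing once a pure ‘lavender’ input is strong enough. The weights and threshold are arbitrary illustrative values, not quantities from either paper.

```python
import numpy as np

lavender_weights = np.array([5.0, 5.0, 1.0])   # tuned to the 5:5:1 ratio
orange_weights   = np.array([5.0, 5.0, 2.0])   # tuned to the 5:5:2 ratio
threshold = 60.0                               # fixed firing threshold (assumed)

def rate_detector(weights, receptor_rates):
    """Fire if the weighted sum of input firing rates exceeds a fixed threshold."""
    return weights @ receptor_rates > threshold

for scale in (1.0, 2.0, 10.0):
    receptor_rates = scale * np.array([5.0, 5.0, 1.0])   # pure 'lavender' input
    print(f"scale {scale}:",
          "lavender unit fires:", rate_detector(lavender_weights, receptor_rates),
          "| orange unit fires:", rate_detector(orange_weights, receptor_rates))
```

At low intensity neither unit fires; at high intensity both fire, even though the input ratio never changes: the rate code conflates intensity with identity.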

Pattern Recognition by Spike Timing

Measuring input scale by action potential timing – the encoding neuron’s resting subthreshold oscillation is shown as a solid grey line. An example of an input current that does not bring the neuron to its action potential threshold (Vt) is shown by the light green dotted line; an example of an input current that drives the encoding neuron to threshold is depicted by the dark green dotted line. The neuron spikes upon reaching Vt and undergoes a brief refractory period (included because it is present biologically). ‘Time advance’ refers to the timing of the neuron’s action potential with respect to the maximum of its subthreshold oscillation (when it is most likely to fire). Higher input currents lead the neuron’s potential to cross threshold earlier, and thus to a larger time advance.

Before Hopfield, several attempts had been made to resolve the problem of scale-invariant recognition. One such attempt normalized and re-scaled lower-level responses before passing the low-level output on to higher-level parts, which would then identify the stimulus. Besides seeming biologically implausible, this sort of scheme – given the constant output magnitude of the lower-level neurons in the presence of an input cue – lost information about the input cue’s intensity and decreased the system’s sensitivity to smaller pattern components. Using timing and delay lines, instead of activation rates and connection weights, to represent strength and to carry out processing respectively, John Hopfield (1995) presented a solution to the problem of scale-invariant recognition in “Pattern-recognition computation using action-potential timing for stimulus representation.” There, he also showed that his solution is particularly applicable to the processing performed by, amongst others, the olfactory system.

First, the solution presented in Hopfield’s paper consists of using subthreshold oscillations in higher-level ‘encoding’ neurons to measure input scale by action potential timing. In this scheme, each ‘encoding’ neuron possesses an intrinsic subthreshold oscillation, such that the timing of its action potential (response) with respect to the peak of its oscillation varies with the strength of the lower-level neurons’ input. It is important to specify here that each neuron in Hopfield’s model receives analogue input currents and acts as a leaky integrate-and-fire neuron, whose time constant (determining the dynamics of an input’s decay) is smaller than the oscillation’s period. Encoding neurons are also given an action potential refractory period, during which producing another action potential is most difficult (this refractory period is introduced mainly so that the model more closely resembles the nervous system’s known physiology). Thus, encoding neurons reach their threshold potential only if their input currents from upstream neurons are sufficiently large, and they do so earlier and earlier with respect to the oscillation’s peak as input strength increases. A time code, rather than a rate code, is used to denote input scale.
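
The following sketch shows how a larger input current advances the threshold crossing of an encoding neuron riding a cosine subthreshold oscillation. The quasi-steady term R·I stands in for the leaky integration (the time constant is assumed to be much shorter than the oscillation period), and all numerical values are illustrative assumptions rather than parameters from the paper.

```python
import numpy as np

T_OSC = 25.0          # oscillation period in ms (roughly 40 Hz)
A = 1.0               # subthreshold oscillation amplitude
V_T = 2.0             # action-potential threshold
R = 1.0               # effective input resistance

def time_advance(I, dt=0.01):
    """Return how many ms before the oscillation peak the neuron crosses V_T."""
    t = np.arange(-T_OSC / 2, T_OSC / 2, dt)          # cosine peak at t = 0
    v = A * np.cos(2 * np.pi * t / T_OSC) + R * I     # oscillation + input drive
    crossed = np.nonzero(v >= V_T)[0]
    if len(crossed) == 0:
        return None                                    # input too weak: no spike
    return -t[crossed[0]]                              # spike time before the peak

for I in (0.9, 1.2, 1.6, 2.0):
    adv = time_advance(I)
    print(f"I = {I}:", "no spike" if adv is None else f"time advance = {adv:.2f} ms")
```

Sub-threshold inputs never produce a spike, while progressively stronger inputs cross threshold further and further ahead of the oscillation peak, which is exactly the time code described above.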

The described time advances on the encoding neuron’s subthreshold oscillation contribute directly to the model’s ability to perform scale-invariant recognition. Specifically, since Hopfield opted for cosine-shaped subthreshold oscillations – a vastly simplified model, given the complex oscillations found in the nervous system – the upper part of the cosine’s peak can be used to encode stimuli logarithmically. That is, if the input current’s scale is proportional to the signal’s time advance, and provided said input current lies within a given range of magnitudes (so that the time advance falls within the top of the cosine curve), the network encodes intensities logarithmically (T_i ∝ log(x_i), where T_i is the time advance and x_i is the input current for a particular cue component i). This allows the network to perform scale-invariant recognition of the same input, since scaling all inputs does not change relative time advances: log(λx_i) = log(λ) + log(x_i), so every component is advanced by the same additional amount. Later recognition then depends not on magnitude but on coincidence detection, since inputs that are part of the same cue all time-advance by the same amount when the cue’s intensity is increased.
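
A short numeric check of this scale invariance: under the logarithmic code, multiplying every component of a cue by the same factor shifts all time advances by the same amount, leaving the relative spike times – which are all a downstream coincidence detector sees – unchanged. The proportionality constant k below is an arbitrary assumption.

```python
import numpy as np

k = 1.0                                 # assumed proportionality constant
cue = np.array([5.0, 5.0, 1.0])         # component intensities of one odour

for scale in (1.0, 2.0, 10.0):
    T = k * np.log(scale * cue)         # time advances, T_i proportional to log(x_i)
    print(f"scale {scale}: advances = {np.round(T, 2)}, "
          f"relative to first component = {np.round(T - T[0], 2)}")
```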

Spike timing and delay lines – a) shows 5 different encoding neurons with different amounts of input current (orange > red > purple > blue > green). The spiking times of these neurons are colour-coded and shown with respect to the peak of the global subthreshold oscillation. One could see each of these neurons as a feature-detecting neuron, connected to a smell-recognition neuron (shown in black, in b). For the output of the feature neurons (as shown in the first few cycles) to arrive simultaneously at the black recognition neuron, the one with the largest time advance (and thus the highest input current – orange) must have the longest delay line, and the one with the smallest time advance (and thus the smallest input current – green) must have the shortest delay line. In this example, the pattern identified by the recognition neuron is not presented during the last two cycles; during this new cue, input to the green neuron is insufficient to make it fire as it would for the previously ‘recognized’ cue. The black neuron, without input from the green neuron, would thus not receive sufficient synchronous stimulation and as a result would not fire, signalling the absence of the cue it is wired to recognize.

Coincidence detection must occur at higher-level recognition neurons, and is ensured by their small time constant (a very ‘leaky’ neuronal membrane), as well as by ‘delay lines’ that make the preferred cue’s components, in their correct ratio, arrive at the recognition neuron simultaneously. Here, the lowest perceptible correct combination of a cue’s components should lead to a spike in the higher-level neuron, but none of the inputs (or cue components) alone should be able to trigger the recognition neuron’s firing.
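
The sketch below combines the logarithmic time advances with delay lines and a coincidence window: delays are chosen so that the stored pattern’s spikes arrive simultaneously at the recognition neuron, which then also fires for any rescaled version of that pattern, but not for a cue with a different ratio. The coincidence window and the ‘all arrivals within the window’ firing rule are illustrative simplifications of the leaky recognition neuron, not Hopfield’s exact formulation.

```python
import numpy as np

k = 1.0                                      # assumed log-encoding constant
COINCIDENCE_WINDOW = 0.1                     # ms; stands in for the very leaky membrane

def spike_times(cue):
    """Spike time relative to the oscillation peak (bigger input -> earlier spike)."""
    return -k * np.log(cue)

stored_pattern = np.array([5.0, 5.0, 1.0])   # 'lavender' component ratios
delays = -spike_times(stored_pattern)        # delay lines equalize arrival times
delays -= delays.min()                       # keep all delays non-negative

def recognized(cue):
    """Fire only if all delayed spikes arrive within the coincidence window."""
    arrivals = spike_times(cue) + delays
    return arrivals.max() - arrivals.min() < COINCIDENCE_WINDOW

print(recognized(np.array([5.0, 5.0, 1.0])))        # stored pattern       -> True
print(recognized(10 * np.array([5.0, 5.0, 1.0])))   # same pattern, scaled -> True
print(recognized(np.array([5.0, 5.0, 2.0])))        # different ratio      -> False
```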

In terms of this model’s relation to olfaction, the olfactory bulb is known to undergo ≈40 Hz global oscillations, as well as breathing-related oscillations. Although more complex than a cosine function, these would allow a version of the aforementioned time advances to be implemented. Mitral cells would then act as the described encoding neurons, with very small time constants allowing coincidence detection of a cue’s various components. The model is also particularly suited to the olfactory bulb, since global oscillations as well as cue intensity are known to modulate mitral cell activity. Indeed, it would explain why certain mammals sniff faster (accelerate their breathing cycle) to enhance their ability to discern one odour from the next. However, the model’s assumption of a single spike per cycle does not generally hold, since in biological systems a burst of action potentials often occurs within a single oscillation. Still, a time code might hold: bursts could allow for the encoding of more intensities or of further features of the presented cue, and they could also be suppressed later via synaptic learning if misrepresentation occurred. Finally, some of the axons connecting mitral cells to the piriform cortex are unmyelinated and, together with the recurrent connections found within the piriform cortex, could thus potentially combine to form delay lines analogous to those of Hopfield’s model, allowing for delays of up to 20 ms.

Hopfield’s 1995 model therefore provided an elegant mechanism for scale-invariant recognition, with applications to the mammalian olfactory system. Although it is difficult to ascertain whether it indeed describes the functioning of the olfactory bulb and cortex, because of the messier nature of electrophysiological signals (and measurements), the model provides a useful alternative to traditional rate-based models, which are unable to solve this type of problem. Two main criticisms have been raised with respect to its biological applicability. The first is that the model’s logarithmic encoding could heighten noise, as weakly stimulated neurons, if they reached threshold, would appear to be more synchronous than strongly stimulated neurons. The second relates to the long delays needed to make the system work: although some mitral cell axons are unmyelinated, thus allowing for large delays, their length is fixed and so the delays cannot be learnt. This latter criticism is mitigated by more recent evidence showing that, in other parts of the central nervous system, most axons are only partially myelinated and that myelination is crucial to the process of learning (McKenzie et al., 2014[7]).


In conclusion, Ambros-Ingerson et al.’s “Simulation of paleocortex performs hierarchical-clustering” (1990) shows how the known connectivity of the olfactory system can lead to hierarchical representations of smell in the piriform cortex, and Hopfield’s “Pattern-recognition computation using action-potential timing for stimulus representation” (1995) demonstrates how the oscillations and spikes seen in the olfactory bulb could be used to perform scale-invariant odour recognition. These papers present not only two excellent examples of early computational models of olfaction, but also two main approaches to the modelling of biological processes. Whilst the first, in replicating a known section of a biological system, extracts specific and widely applicable computational properties of a circuit, the second, in solving a purely computational problem, finds its solution aligning with the components of certain biological systems that naturally resolve that problem. These two forms of modelling coexist in the scientific literature and can work in synergy to create an increasingly complete picture of the complex systems that underlie behaviour and, in the case of neuroscience, our intricate experience of life.

  1. Ambros-Ingerson J., Granger R., Lynch G. (1990) Simulation of paleocortex performs hierarchical-clustering. Science, 247: 1344-1348
  2. Hopfield J. J. (1995) Pattern-recognition computation using action-potential timing for stimulus representation. Nature, 376: 33-36
  3. Biederman I. (1972) Perceiving Real World Scenes. Science, 177: 77-80
  4. Gluck M. A., Bower G. G. (1988) From Conditioning to Category Learning: An Adaptive Network Model. J. of Exp. Psych., 117: 227-247
  5. Rodriguez A., Whitson J., Granger R. (2004) Derivation and analysis of basic computational operations of thalamocortical circuits. J. Cogn. Neurosci., 16: 856-877
  6. Wilent W. B., Contreras D. (2005) Dynamics of excitation and inhibition underlying stimulus selectivity in rat somatosensory cortex. Nat. Neurosci., 8: 1364-1370
  7. McKenzie I. A., Ohayon D., Li H., Paes de Faria J., Emery B., Tohyama K., Richardson W. D. (2014) Motor skill learning requires active central myelination. Science, 346: 318-322