Visual system of the Mantis shrimp
Mantis shrimps, or Stomatopods, are an order of crustaceans that are usually between 10 and 20 cm in length. They can be brightly colored and live in shallow waters of tropical and subtropical oceans. A large part of the Stomatopod's thorax is covered with a resilient carapace, with the head at the front and the two eyes protruding on a pair of stalks. They carry multiple pairs of limbs, of which the second pair is noticeably larger and known for its excellent punching abilities. What they are probably most famous for, though, is their complex visual system.
A unique visual system
Mantis shrimps have one of the most complex visual systems discovered in animals. Instead of using 2-4 photoreceptor types for color vision like most other species, they use 12! In addition, they have 4-7 receptor types (depending on the species) that are sensitive to linearly and circularly polarized light. This has made people wonder how Mantis shrimps see the world and whether they have a 12-dimensional color space compared to our 3-dimensional one.
Scientists had assumed that having so many types of photoreceptors would allow the Mantis shrimp to distinguish colors only a few nanometers apart, if it made analog comparisons between spectral sensitivities. On the contrary, a recent study showed that they have trouble distinguishing colors less than 25 nm apart, which is approximately the distance between the sensitivity peaks of different photoreceptors. This suggests that the animals do not process visual information by comparing input from different photoreceptors, like humans do, but rather detect which receptor gives the strongest signal. This would mean that Mantis shrimps do not have a 12-dimensional continuous color space but rather a discrete color space with 12 color bins. The advantage of this system is that it allows the animals to determine colors quickly and reliably, without the delay that occurs in a multidimensional color space. The neural processing behind the system, though, is still to be determined.
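This "winner-take-all" interpretation can be sketched in a few lines of code. The sketch below is a toy model, not a description of the actual neural circuit: the peak positions and tuning width are invented for illustration, with 12 evenly spaced narrow-band receptors and a stimulus labelled by whichever receptor responds most strongly.

```python
import math

# Hypothetical sketch of the winner-take-all color model suggested by the
# behavioural data: 12 receptor classes with evenly spaced sensitivity peaks.
# Peak positions and bandwidth are illustrative, not measured values.
PEAKS_NM = [315 + 25 * i for i in range(12)]   # 315 ... 590 nm
BANDWIDTH_NM = 20.0                            # assumed tuning width

def receptor_response(peak, wavelength):
    """Gaussian approximation of a narrow-band receptor's sensitivity."""
    return math.exp(-((wavelength - peak) / BANDWIDTH_NM) ** 2)

def color_bin(wavelength):
    """Discrete color percept: index of the maximally excited receptor."""
    responses = [receptor_response(p, wavelength) for p in PEAKS_NM]
    return responses.index(max(responses))

# Two stimuli ~25 nm apart (about one bin width) fall into different bins...
assert color_bin(450) != color_bin(475)
# ...while two stimuli ~9 nm apart can land in the same bin and be confused.
assert color_bin(441) == color_bin(450)
```

Note how discrimination in this model is limited by the spacing of the peaks rather than by noise in an opponent comparison, mirroring the ~25 nm behavioural threshold described above.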
Anatomy of the eye
The eye of Mantis shrimps is a compound eye made up of optical units called ommatidia. An ommatidium has a lens that is covered with a cornea, and behind the lens there is a light guide called the rhabdom. Surrounding the rhabdom are photoreceptors that can be sensitive to ultraviolet light or to light in the human visible range. The eyes are usually elliptical and are divided into morphologically different zones. Each eye is divided horizontally into three regions, the dorsal hemisphere, the mid-band and the ventral hemisphere, all of which sample the visual space. It is believed that the two hemispheres give the animal stereoscopic vision in each eye. The ommatidia are arranged in rows, where each row has the same morphology.
The ommatidia in the hemispheres of the eye are similar to the ommatidia found in other crustaceans. The mid-band contains larger, specialized ommatidia with photoreceptors that are responsible for most of the spectral diversity. Having the mid-band horizontally between the hemispheres enables the animal to keep objects anywhere on the horizon within the focal area without much horizontal saccadic eye movement. In fact, most saccadic movements are vertical. Rows 1 through 4 of the mid-band are involved in color vision, while rows 5 and 6 detect linearly and circularly polarized light. In rows 1 to 4 there are 12 different types of cells, each sensitive to a different wavelength of light. Additionally, there are 4 cell types sensitive to ultraviolet light, located distally in the first four rows.
Each rhabdom has an individual optical system, which results in a high number of optical units. This way, all photoreceptors in an ommatidium can view the same field for simultaneous analysis of different properties. Two separate regions in the same eye can even view the same field, which makes the system flexible and increases the possibilities for parallel processing; in particular, ommatidia in the two hemispheres can view the same field, making each eye stereoscopic on its own. The disadvantage is an eye with low spatial resolution compared to its size.
Each eye is seated on a stalk and can move relatively freely about all axes thanks to six groups of muscles. In addition, each eye can move independently of the other. This independent movement makes it hard to exploit binocular stereopsis. Instead, Mantis shrimps probably use the overlap between the views of the two hemispheres of each eye to estimate distance. When the eye stalks move, distant objects move more slowly than close objects, which adds to their depth perception.
The top part of an ommatidium in the hemispheres is composed of a cornea above a crystalline cone. This part is dedicated to the optics and focuses the incoming light onto the photosensitive rhabdom below. The rhabdom is composed of eight receptor cells: the R8 cell at the top and cells R1-7 below it. These cells form a light guide. The R8 cell is only sensitive to ultraviolet light, while cells R1-7 are sensitive to wavelengths around 500 nm. The R8 cells are not sensitive to polarization, but cells R1-7 comprise two types of receptors that are sensitive to polarization planes orthogonal to each other.
The ommatidia in the mid-band of the eye differ from those in the hemispheres, and the mid-band contains three types of ommatidia. The first type is found in the two most ventral rows and senses polarized light. The R8 cells of the two rows sense polarization planes orthogonal to each other. The R1-7 cells, using two types of receptors, likewise sense orthogonal polarization planes at wavelengths around 500 nm. The R8 receptors also convert circularly polarized light to linearly polarized light, which is then sensed by the R1-7 receptors.
The second type of ommatidia occupies two of the four most dorsal rows. Here the R1-7 cells are split into two layers, so incoming light first passes through the ultraviolet-sensitive R8 part, then the distal part of R1-7 and finally the proximal part. Each layer absorbs certain wavelengths before the light reaches the layers below, together creating narrow-band photoreceptors.
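The narrowing effect of this serial filtering can be illustrated numerically. The sketch below uses entirely made-up Gaussian pigment spectra (the peaks, widths and optical densities are assumptions, not measured values) to show that a deep receptor tier, seeing only light the tier above it failed to absorb, ends up with a narrower passband than the tier above.

```python
import math

# Illustrative two-tier model: each tier absorbs part of the spectrum, so the
# proximal tier only ever sees pre-filtered light. All numbers are invented.
WAVELENGTHS = range(400, 701)   # nm grid

def absorbance(peak, width, wl):
    """Gaussian absorbance spectrum of one pigment tier."""
    return math.exp(-((wl - peak) / width) ** 2)

def layer_sensitivities(distal_peak=480.0, proximal_peak=520.0, width=40.0):
    distal, proximal = [], []
    for wl in WAVELENGTHS:
        a_d = absorbance(distal_peak, width, wl)
        # Fraction absorbed by the distal tier (its effective sensitivity):
        distal.append(1.0 - math.exp(-2.0 * a_d))
        # The proximal tier only receives what the distal tier transmitted:
        transmitted = math.exp(-2.0 * a_d)
        a_p = absorbance(proximal_peak, width, wl)
        proximal.append(transmitted * (1.0 - math.exp(-2.0 * a_p)))
    return distal, proximal

def fwhm(curve):
    """Full width at half maximum, in nm, of a sampled sensitivity curve."""
    half = max(curve) / 2.0
    above = [i for i, v in enumerate(curve) if v >= half]
    return above[-1] - above[0]

distal, proximal = layer_sensitivities()
# Screening by the distal tier narrows the proximal tier's passband:
assert fwhm(proximal) < fwhm(distal)
```

The same screening principle, extended over the R8 layer and the colored filters of the third ommatidium type, is what carves twelve narrow channels out of a handful of broad visual pigments.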
The third type of ommatidia occupies the two remaining rows. These contain colored, photo-stable filters between the receptor layers, so incoming light is shaped both by these filters and by the absorption of the receptor layers themselves.
The second and third types of ommatidia are insensitive to polarization. Their four rows have two receptor layers each. These eight receptor types carry different visual pigments, so together they span the spectrum of roughly 400-700 nm. Adding the ultraviolet receptors and the polarization sensitivity gives around 16-21 receptor classes, depending on the species.
Unlike humans, Mantis shrimps are able to detect ultraviolet light. The ultraviolet photoreceptors are evenly spaced in the eye, which suggests that ultraviolet vision in Mantis shrimps is part of their color vision system, giving sensitivity in the range of 300-700 nm, compared to 400-700 nm in humans. The ultraviolet sensitivities are too narrow to result from visual-pigment absorption alone, which led scientists to believe that they are tuned by ultraviolet filters in the photoreceptors.
A recent experiment found four types of ultraviolet-absorbing MAAs (mycosporine-like amino acids) in the mid-band. These pigments work as either short- or long-pass ultraviolet filters acting on the same visual pigments in the retina, multiplying the number of distinct sensitivities in the ultraviolet spectrum. In this way, the animals can generate six types of ultraviolet receptors.
In deep waters, fluorescence can contribute more to color than it does on land because of its contrast against the surrounding blue. Many sea organisms have fluorescent coloration; one of them is the Mantis shrimp. The Mantis shrimp species Lysiosquillina glabriuscula has fluorescent markings on its antennal scales and carapace. The fluorescence is estimated to give rise to 7-10% of the total photons from the markings in the depth range inhabited by the animal. Seen from L. glabriuscula's own perspective, the fluorescence is of even greater importance because of its special visual system, accounting for up to 30% of the total number of photons. Fluorescence thus enables the animal to enhance its color signal at depths that the corresponding wavelengths of daylight do not reach.
Polarisation can be described with the Stokes parameters,
S0 = I_H + I_V,    S1 = I_H - I_V,    S2 = I_D - I_A,    S3 = I_R - I_L,
with I the intensity and the subscripts H, V, D, A, R and L standing for horizontal, vertical, diagonal, anti-diagonal, right-hand circular and left-hand circular, respectively. S0 is the total intensity, which does not itself describe the polarization. Polarized light is common in nature, especially as reflected light, and arthropods, crustaceans among them, are sensitive to linearly polarized light. It can, for example, give information about the texture and orientation of an object. A single linear polarization component provides added contrast, especially in turbid water, while additional linear components probably support orientation, navigation, prey detection, predator avoidance, and intra-species signaling.
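The arithmetic behind the Stokes parameters is simple enough to state directly. The sketch below computes them from the six intensity measurements named above, together with the derived degree of polarization; the input values are made up for illustration.

```python
import math

def stokes(I_H, I_V, I_D, I_A, I_R, I_L):
    """Stokes parameters from six polarization-filtered intensity readings."""
    S0 = I_H + I_V          # total intensity
    S1 = I_H - I_V          # horizontal vs. vertical linear polarization
    S2 = I_D - I_A          # diagonal vs. anti-diagonal linear polarization
    S3 = I_R - I_L          # right- vs. left-hand circular polarization
    return S0, S1, S2, S3

def degree_of_polarization(S0, S1, S2, S3):
    """1.0 for fully polarized light, 0.0 for unpolarized light."""
    return math.sqrt(S1**2 + S2**2 + S3**2) / S0

# Fully horizontally polarized light: I_H carries all of the intensity.
S0, S1, S2, S3 = stokes(I_H=1.0, I_V=0.0, I_D=0.5, I_A=0.5, I_R=0.5, I_L=0.5)
assert (S0, S1, S2, S3) == (1.0, 1.0, 0.0, 0.0)
assert degree_of_polarization(S0, S1, S2, S3) == 1.0
```

An animal with all six sensitivities, like the Gonodactylidae described below, could in principle evaluate exactly these quantities from its receptor outputs.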
Optimal polarization vision means simultaneous sensitivity to all six linear and circular polarization components, and the Gonodactylidae family of Mantis shrimps is the first group of organisms discovered to possess this ability. The dorsal and ventral hemispheres of their eyes sense linear polarization, rotated 45° from each other. What makes the Gonodactylidae special, though, is their sensitivity to circular polarization in two rows of the mid-band, which makes it possible for them to measure all six components and thus all of the Stokes parameters. In addition to these anatomical features, they possess the neuronal machinery for measuring the Stokes parameters.
Different Mantis shrimp species live at various depths. Animals that live in shallow waters are exposed to illumination over a much broader spectrum than animals in deeper waters. Experiments on the Mantis shrimp species Haptosquilla trispinosa revealed that they use colored filters in front of their photoreceptors to tune their spectral sensitivity. In animals that live in shallow waters, the filters cover most of the visible spectrum. Those that live in deeper waters, meanwhile, have their filters shifted to transmit shorter wavelengths (green-blue light), as longer wavelengths are attenuated by the water. This enables them to distinguish smaller differences within the short-wavelength light available in the ocean.
The visual processing in Mantis shrimps is different from that in humans and may be compared to artificial systems, as it uses both serial and parallel processing. Unlike most other animals, Mantis shrimps must move their eyes to collect some types of visual information from the environment. This is because the most important region for visual analysis is the narrow mid-band, which can only scan a slice of the visual space. Mantis shrimps solve this problem by moving the eye slowly up and down, thus gathering information about color, polarization and ultraviolet intensity for the whole visual field.
A lot of the visual processing in Mantis shrimps takes place within the eye, and even within single photoreceptors. This decreases the amount of data needed to deliver information to higher areas. From the retina, information appears to be sent via multiple parallel streams into the central nervous system, which minimizes the processing required at higher levels. Another advantage of the Mantis shrimps' splitting of the visual spectrum into discrete channels is improved color constancy. Visual systems that have few receptors with broad wavelength sensitivity can adapt strongly to wavelengths far from their peak sensitivity, which makes it difficult to recognize colors in different environments, such as underwater.
Benefits from a developed visual system
The lives of Mantis shrimps involve incredibly fast movements while attacking prey, which makes fast processing of visual information important. Mantis shrimps are known to attack not only to hunt prey but also to fight members of their own species. This is believed to have driven the evolution of their signaling behaviour, which involves polarized light and color. They use color in signaling to a greater extent than other crustaceans, and it is believed that their special visual system, with its high color constancy, makes that possible.
Many features of the visual system of Mantis shrimps could influence the development of artificial optical systems. When designing optical systems where color constancy is important, the Mantis shrimps' visual system can serve as a model in which narrow spectral channels increase accuracy. Contrary to current optical sensors, motion is essential to the Mantis shrimp's vision; this opens up the idea of integrating motion into optical sensors. The Mantis shrimps' eye design is also a good model for visual electronics, as it performs analysis within individual units. Its visual processing within the eye, before information is passed on to higher centers, is likewise an inspiration for efficient, low-power artificial optical systems: processing data at the sensor level can reduce the bandwidth and power needed. Their polarization sensitivity has also inspired the development of polarization sensors. In fact, the Mantis shrimps' alignment of polarization-sensitive ommatidia has been replicated with aluminum nanowires functioning as linear polarization filters on top of photodiodes to create a CMOS imager. This real-time polarization imaging has enabled early diagnosis of cancerous tissue that was not possible before, and has many potential future applications.
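The per-pixel arithmetic in such a division-of-focal-plane imager is worth spelling out. The sketch below assumes (as in many bio-inspired designs, though not taken from any specific sensor's datasheet) a 2x2 super-pixel with nanowire polarizers at 0°, 45°, 90° and 135°, from which the linear Stokes components and the degree and angle of linear polarization follow directly; the intensity values are illustrative.

```python
import math

def linear_polarization(I0, I45, I90, I135):
    """DoLP and AoLP from a hypothetical 0/45/90/135-degree pixel quartet."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90                        # horizontal vs. vertical
    S2 = I45 - I135                      # diagonal vs. anti-diagonal
    dolp = math.hypot(S1, S2) / S0       # degree of linear polarization
    aolp = 0.5 * math.atan2(S2, S1)      # angle of linear polarization (rad)
    return dolp, aolp

# Fully polarized light at 45 degrees: the 45-degree pixel sees everything,
# the crossed 135-degree pixel is extinguished.
dolp, aolp = linear_polarization(I0=0.5, I45=1.0, I90=0.5, I135=0.0)
assert abs(dolp - 1.0) < 1e-9
assert abs(math.degrees(aolp) - 45.0) < 1e-9
```

Computing DoLP and AoLP directly at each super-pixel is exactly the kind of sensor-level processing the preceding paragraph argues for: the downstream system receives two compact polarization maps instead of four raw intensity frames.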
- Ross Piper, Extraordinary Animals: An Encyclopedia of Curious and Unusual Animals, Greenwood Press, 2007.
- Hanne H. Thoen et al., A Different Form of Color Vision in Mantis Shrimp, Science 343: 411-413, 2014.
- Kleinlogel S, White AG, The Secret World of Shrimps: Polarisation Vision at Its Best, PLoS ONE 3(5): e2190, 2008.
- David Cowles, Jaclyn R. Van Dolson, Lisa R. Hainey, Dallas M. Dick, The use of different eye regions in the mantis shrimp Hemisquilla californiensis Stephenson, 1967 (Crustacea: Stomatopoda) for detecting objects, Journal of Experimental Marine Biology and Ecology 330 (2): 528-534, 2006.
- Thomas W. Cronin, Justin Marshall, Parallel processing and image analysis in the eyes of mantis shrimps, The Biological Bulletin 200 (2): 177-183, 2001.
- Michael Bok, Megan Porter, Allen Place, Thomas Cronin, Biological Sunscreens Tune Polychromatic Ultraviolet Vision in Mantis Shrimp, Current Biology 24 (14): 1636-1642, 2014.
- Justin Marshall, Johannes Oberwinkler, Ultraviolet vision: the colourful world of the mantis shrimp, Nature 401 (6756): 873-874, 1999.
- Ellis R. Loew, Vision: Two Plus Four Equals Six, Current Biology 24 (16): 753-755, 2014.
- C. H. Mazel, T. W. Cronin, R. L. Caldwell, N. J. Marshall, Fluorescent enhancement of signaling in a mantis shrimp, Science 303 (5654): 51, 2004.
- Tsyr-Huei Chiou et al., Circular polarization vision in a stomatopod crustacean, Current Biology 18 (6): 429-434, 2008.
- Thomas W. Cronin, Roy L. Caldwell, Justin Marshall, Tunable colour vision in a mantis shrimp, Nature 411, 547, 2001.
- T. York et al., Bioinspired polarization imaging sensors: from circuits and optics to signal processing algorithms and biomedical applications, Proceedings of the IEEE 102 (10): 1450-1469, 2014.
Spider's Visual System
While the highly developed visual systems of some spider species have been subject to extensive studies for many decades, terms like animal intelligence or cognition were not usually used in the context of spider studies. Instead, spiders were traditionally portrayed as rather simple, instinct-driven animals (Bristowe 1958, Savory 1928), processing visual input in pre-programmed patterns rather than actively interpreting the information received from their visual apparatus to choose appropriate reactions. Although this still seems to be the case for the majority of spiders, which primarily interact with the world through tactile sensation rather than visual cues, some spider species have shown surprisingly intelligent use of their eyes. Considering its limited dimensions within the body, a spider's optical apparatus and visual processing perform extremely well. Recent research points towards a very sophisticated use of visual cues in a spider's world, as seen in the complex hunting schemes of the vision-guided jumping spiders (Salticidae), which take leaps of up to 30 times their own body length onto prey, or in a wolf spider's (Lycosidae) ability to visually recognize asymmetries in potential mates. Even in the night-active Cupiennius salei (Ctenidae), which relies primarily on other sensory organs, or the ogre-faced Dinopis, which hunts at night by spinning small webs and throwing them at approaching prey, the visual system is still highly developed. Findings like these are not only fascinating but are also inspiring other scientific and engineering fields such as robotics and computer-guided image analysis.
General structure of a spider's anatomy
A spider's anatomy primarily consists of two major body segments, the prosoma and the opisthosoma, also known as the cephalothorax and abdomen, respectively. All extremities as well as the sensory organs, including the eyes, are located in the prosoma. Unlike the compound eyes typical of other arthropods' visual systems, modern arachnid eyes are ocelli (simple eyes consisting of a lens covering a vitreous fluid-filled pit with a retina at the bottom), of which spiders have six or eight, characteristically arranged in three or four rows across the prosoma's carapace. Overall, 99% of all spiders have eight eyes, and of the remaining 1% almost all have six. Spiders with only six eyes lack the "principal eyes", which are described in detail below.
The pairs of eyes are called anterior median eyes (AME), anterior lateral eyes (ALE), posterior median eyes (PME), and posterior lateral eyes (PLE). The large principal eyes facing forward are the anterior median eyes, which provide the highest spatial resolution to a spider, at the cost of a very narrow field of view. The smaller forward-facing eyes are the anterior lateral eyes, with a moderate field of view and medium spatial resolution. The two posterior eye pairs are rather peripheral, secondary eyes with a wide field of view. They are extremely sensitive and suitable for low-light conditions. Spiders use their secondary eyes for sensing motion, while their principal eyes allow shape and object recognition. In contrast to insect vision, the brain of a visually guided spider is almost completely devoted to vision, as it receives only the optic nerves and consists of only the optic ganglia and some association centers. The brain is apparently able not only to recognize object motion but also to classify the counterpart as a potential mate, rival or prey by seeing legs (lines) at a particular angle to the body. Such a stimulus results in the spider displaying either courtship or threatening signals, respectively.
A Spider's eyes
Although spider eyes may be described as "camera eyes", they are very different in their details from the camera eyes of mammals or any other animals. To fit a high-resolution eye into such a small body, neither an insect's compound eyes nor spherical eyes like ours would solve the problem. The ocelli found in spiders are the optically better solution, as their resolution is not limited by diffraction at the lens, as would be the case with the tiny lenses of compound eyes; a compound eye of the same resolving power would simply not fit into the spider's prosoma. By using ocelli, the spatial acuity of some spiders is more similar to that of a mammal than to that of an insect, despite a huge size difference and only a few thousand photocells, e.g. in a jumping spider's eye, compared to more than 150 million photocells in the human retina.
The anterior median eyes (AME), which are present in most spider species, are also called the principal eyes. Details about the principal eye's structure and its components are illustrated in the figure below and are explained in the following by going through the AME of the jumping spider Portia (family Salticidae), which is famous for its high-spatial-acuity eyes and vision-guided behavior despite its very small body size of 4.5-9.5 mm.
When a light beam enters the principal eye, it first passes through a large corneal lens. This lens has a long focal length, enabling it to magnify even distant objects. The combined field of view of the two principal eyes' corneal lenses would cover about 90° in front of the salticid spider; however, a retina with the desired acuity over that field would be too large to fit inside a spider's eye. The surprising solution is a small, elongated retina, which lies behind a long, narrow tube with a second lens (a concave pit) at its end. This combination of a corneal lens (with a long focal length) and a long eye tube (magnifying the image from the corneal lens) resembles a telephoto system, making the pair of principal eyes similar to a pair of binoculars.
The salticid spider captures light beams successively on four retina layers of receptors, which lie behind each other (in contrast, the human retina is arranged in only one plane). This structure allows not only a larger number of photoreceptors in a confined area but also enables color vision, as the light is split into different colours (chromatic aberration) by the lens system. Different wavelengths of light thus come into focus at different distances, which correspond to the positions of the retina´s layers. While salticids discern green (layer 1 – ~580 nm, layer 2 – ~520-540 nm), blue (layer 3 – ~480-500 nm) and ultraviolet (layer 4 – ~360 nm) using their principal eyes, it is only the two rearmost layers (layers 1 and 2) which allow shape and form detection due to their close receptor spacing.
As in human eyes, there is a central region in layer 1 called the "fovea", where the inter-receptor spacing was measured at about 1 μm. This was found to be optimal: the telephoto optical system provides images precise enough to be sampled at this resolution, but any closer spacing would reduce the retina's sampling quality due to quantum-level interference between adjacent receptors. Equipped with such eyes, Portia by far exceeds any insect when it comes to visual acuity: while the dragonfly Sympetrum striolatus has the highest acuity known for insects (0.4°), the acuity of Portia is ten times higher (0.04°) with much smaller eyes. The human eye, with 0.007° acuity, is only about five times better than Portia's. With such visual precision, Portia is technically able to discriminate two objects 0.12 mm apart from a distance of 200 mm. The spatial acuity of other salticid eyes is usually not far behind that of Portia.
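The relationship between angular acuity and resolvable separation is simple trigonometry, and can serve as a quick check on the figures above. Note that the rounded 0.04° figure gives about 0.14 mm at 200 mm; the ~0.12 mm quoted corresponds to a slightly finer underlying angle (about 0.034°), so the two numbers agree to within rounding.

```python
import math

def min_separation(distance_mm, acuity_deg):
    """Smallest separation resolvable at a given distance and angular acuity."""
    return distance_mm * math.tan(math.radians(acuity_deg))

# With the rounded 0.04-degree figure for Portia, at 200 mm:
sep = min_separation(200.0, 0.04)
assert 0.13 < sep < 0.15   # ~0.14 mm, matching the ~0.12 mm above to rounding
```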
Principal eye retina movements
Such spectacular visual abilities come at a price in small animals such as jumping spiders: the retina in each of Portia's principal eyes has only a 2-5° field of view, while its fovea captures only 0.6°. This results from the principal retinae having elongated, boomerang-like shapes which span about 20° vertically but only 1° horizontally, corresponding to about six receptor rows. This severe limitation is compensated by sweeping the eye tube over the whole image of the scene using eye muscles, of which jumping spiders have six. These are attached to the outside of the principal eye tube and allow the same three degrees of freedom (horizontal, vertical, rotational) as in human eyes. Principal retinae can move by as much as 50° horizontally and vertically, and rotate about the optical axis (torsion) by a similar amount.
Spiders making sophisticated use of visual cues move their principal eyes' retinae either spontaneously, in "saccades" fixating the fovea on a moving visual target ("tracking"), or by "scanning", which presumably serves pattern recognition. It now seems that spiders scan a scene sequentially by moving the eye tube in complex patterns, allowing them to process large amounts of visual information despite their very limited brain capacity.
The spontaneous retinal movements, so-called "microsaccades", are thought to prevent the photoreceptor cells of the anterior median eyes from adapting to a motionless visual stimulus. Cupiennius spiders, which have 4 eye muscles (two dorsal and two ventral), continuously perform such microsaccades of 2° to 4° in the dorso-median direction, lasting about 80 ms (when fixed to a holder). These 2-4° microsaccadic movements closely match Cupiennius' inter-receptor angle of about 3°, supporting the idea that their function is to prevent adaptation. In contrast, retinal movements elicited by mechanical stimulation (directing an air puff onto the tarsus of the second walking leg) can be considerably larger than the spontaneous movements, with deflections of up to 15°. Such a stimulus increases eye muscle activity from the spontaneous resting level of 12 ± 1 Hz to 80 Hz with the air puff applied. Active retinal movement of the two principal eyes is, however, never activated simultaneously during such experiments, and there is no correlation between the two eyes regarding direction either. These two mechanisms, spontaneous microsaccades as well as active "peering" by stimulus-elicited retinal movement, seemingly allow spiders to follow and analyze stationary visual targets efficiently using only their principal eyes, without reinforcing the saccadic movements by body movements.
There is, however, another factor influencing the visual capacities of a spider's eye: the problem of keeping objects at different distances in focus. In human eyes this is solved by accommodation, i.e. changing the shape of the lens, but salticids take a different approach: the receptors in layer 1 of their retina are arranged on a "staircase" at different distances from the lens. Thus, the image of any object, whether a few centimeters or some meters in front of the eye, will be in focus on some part of the layer-1 staircase. Additionally, the salticid can swing the eye tubes from side to side without moving the corneal lenses, and will thus sweep the staircase of each retina across the image formed by the corneal lens, sequentially obtaining a sharp image of the object.
The resulting visual performance is impressive: jumping spiders such as Portia focus accurately on objects at distances from 2 centimeters to infinity, and can in practice see up to about 75 centimeters. The time needed to recognize objects is, however, relatively long (seemingly in the range of 10-20 s) because of the complex scanning process needed to capture high-quality images with such tiny eyes. Due to this limitation, it is very difficult for spiders such as Portia to identify much larger predators fast enough, making the small spider easy prey for birds, frogs and other predators.
Blurry vision for distance estimation
An unexpected recent finding surprised researchers, when it was shown that jumping spiders use blurry vision to estimate their distance to previously recognized prey before taking a jump. Where humans achieve depth perception using binocular vision and other animals do so by moving their heads around or measuring ultrasound responses, jumping spiders perform this task within their principal eyes. As in other jumping spider species, the principal eyes of Hasarius adansoni feature four retinal layers, with the two bottom ones containing photocells responding to green light. However, green light only ever focuses sharply on the bottom one, layer 1, due to its distance from the inner lens. Layer 2 would receive focused blue light, but its photoreceptor cells are not sensitive to blue and receive a fuzzy green image instead. Interestingly, the amount of blur depends on the distance of an object from the spider's eye: the closer it is, the more out of focus it appears on the second retina layer. At the same time, layer 1 always receives a sharp image due to its staircase structure. Jumping spiders are thus able to estimate depth with a single unmoving eye by comparing the images on the two bottom retina layers. This was confirmed by letting spiders jump at prey in an arena flooded with green light versus red light of equal brightness. Without the ability to use the green retina layers, the jumping spiders repeatedly failed to judge distance accurately and missed their jumps.
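The principle above, known in computer vision as depth from defocus, can be sketched as a lookup: layer 1 supplies the sharp reference image, the blur measured on layer 2 grows monotonically as the target approaches, and a calibration between blur and distance inverts that relationship. The calibration points below are entirely made up for illustration; only the monotonic shape matters.

```python
# Hypothetical calibration between layer-2 blur (arbitrary units) and object
# distance (mm). Values are invented; the real mapping would come from the
# eye's fixed optics. Pairs are sorted by increasing blur.
CALIBRATION = [
    (0.2, 400.0),
    (0.5, 150.0),
    (1.0, 60.0),
    (2.0, 25.0),
]

def estimate_distance(blur_layer2):
    """Linearly interpolate object distance from the measured layer-2 blur."""
    pts = CALIBRATION
    if blur_layer2 <= pts[0][0]:
        return pts[0][1]
    for (b0, d0), (b1, d1) in zip(pts, pts[1:]):
        if blur_layer2 <= b1:
            t = (blur_layer2 - b0) / (b1 - b0)
            return d0 + t * (d1 - d0)
    return pts[-1][1]

# More blur on layer 2 means a closer target:
assert estimate_distance(1.5) < estimate_distance(0.4)
```

The green-versus-red arena experiment fits this picture: under red light the green-sensitive layers get no usable signal, the blur cue vanishes, and the lookup has nothing to work with.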
In contrast to the principal eyes, which are responsible for object analysis and discrimination, a spider's secondary eyes act as motion detectors and therefore have no eye muscles for analyzing a scene more extensively. Depending on their arrangement on the spider's carapace, the secondary eyes enable the animal to have panoramic vision, detecting moving objects almost 360° around its body. The anterior and posterior lateral eyes (i.e. the secondary eyes) feature only a single type of visual cell, with a maximum spectral sensitivity for green light of ~535-540 nm wavelength. The number and arrangement of secondary eyes differ significantly between or even within spider families, as does their structure: large secondary eyes can contain several thousand rhabdomeres (the light-sensitive parts of the retina) and support hunters or nocturnal spiders with their high sensitivity to light, while small secondary eyes contain at most a few hundred rhabdomeres and provide only basic movement detection. Unlike the principal eyes, which are everted (the rhabdomeres point towards the light), the secondary eyes of a spider are inverted, i.e. their rhabdomeres point away from the light, as in vertebrate eyes such as the human eye. The spatial resolution of the secondary eyes, e.g. in the extensively studied Cupiennius salei, is greatest in the horizontal direction, enabling the spider to analyse horizontal movements well even with the secondary eyes, while vertical movement may not be especially important when living in a "flat world".
The reaction time of jumping spiders' lateral eyes is comparatively slow at 80-120 ms, measured with a 3°-sized (inter-receptor angle) square stimulus travelling past the animal's eyes. The minimum stimulus travel distances before the spider reacts are 0.1° at a stimulus velocity of 1°/s, 1° at 9°/s and 2.5° at 27°/s. This means that a jumping spider's visual system detects motion even if an object travels only a tenth of the secondary eyes' inter-receptor angle at slow speed. If the stimulus shrinks further, to a size of only 0.5°, responses occur only after long delays, indicating that such stimuli lie at the spiders' limit of perceivable motion.
Secondary eyes of (night-active) spiders usually feature a tapetum behind the rhabdomeres, a layer of crystals that reflects light back to the receptors to increase visual sensitivity. This allows night-hunting spiders to have eyes with an aperture as large as f/0.58, enabling them to capture visual information even in ultra-low-light conditions. Secondary eyes containing a tapetum thus easily reveal a spider's location at night when illuminated, e.g. by a flashlight.
Central nervous system and visual processing in the brain
As elsewhere in neuroscience, we still know very little about a spider's central nervous system (CNS), especially regarding its functioning in visually controlled behavior. Of all the spiders, the CNS of Cupiennius has been studied most extensively, with the focus mainly on CNS structure. As of today, little is known about the electrophysiological properties of central neurons in Cupiennius, and even less about those of other spiders.
The structure of a spider's nervous system closely mirrors the body's subdivisions, but instead of being spread over the whole body, the nervous tissue is enormously concentrated and centralized. The CNS is made up of two paired, rather simple nerve cell clusters (ganglia), which are connected to the spider's muscles and sensory systems by nerves. The brain is formed by fusion of these ganglia in the head segments ahead of and behind the mouth and largely fills the prosoma with nervous tissue, while no ganglia exist in the abdomen. Unlike in insects and crustaceans, the spider's brain receives direct input from only one sensory system: the eyes. The eight optic nerves enter the brain from the front, and their signals are processed in two optic lobes in the anterior region of the brain. When a spider's behavior depends especially strongly on vision, as in the jumping spider, the optic ganglia contribute up to 31% of the brain's volume, indicating a brain largely devoted to vision. This proportion is still 20% for Cupiennius, whereas other spiders like Nephila and Ephebopus come in at only 2%.
The distinction between principal and secondary eyes persists in the brain. Both types of eyes have their own visual pathway with two separate neuropil regions fulfilling distinct tasks. Thus spiders evidently process the visual information provided by their two eye types in parallel, with the secondary eyes being specialized for detecting horizontal movement of objects and the principal eyes being used for the detection of shape and texture.
Two visual systems in one brain
While principal and secondary eyesight seem to be distinct in spiders' brains, surprising interrelations between the two visual systems are known as well. In visual experiments, principal eye muscle activity of Cupiennius was measured while covering either its principal or secondary eyes. When the animals were stimulated in a white arena with short sequences of moving black bars, the principal eyes moved involuntarily whenever a secondary eye detected motion within its visual field. This increase in principal eye muscle activity, relative to the unstimulated baseline, did not change when the principal eyes were covered with black paint, but stopped when the secondary eyes were masked. It is thus clear that only input received from the secondary eyes controls principal eye muscle activity. A spider's principal eyes also do not seem to be involved in motion detection, which is the responsibility of the secondary eyes alone.
Other experiments using dual-channel telemetric registration of the eye muscle activities of Cupiennius have shown that the spider actively peers into its walking direction: the ipsilateral retina of the principal eyes was measured to shift with respect to the walking direction before, during and after a turn, while the contralateral retina remained in its resting position. This happened independently of the actual light conditions, suggesting a "voluntary" peering initiated by the spider's brain.
Pattern recognition using principal eyes
Recognition of shape and form by jumping spiders is believed to be accomplished through a scanning process of the visual field, consisting of a complex set of rotations (torsional movements) and translations of the anterior-median eyes' retinae. As described in the section "Principal eye retina movements", a spider's retinae are narrow and shaped like boomerangs, and can be matched with straight features by sweeping over the visual scene. When investigating a novel target, the eyes scan it in a stereotyped way: by moving slowly from side to side at speeds of 3-10° per second and rotating through ±25°, the horizontal and torsional retina movements allow the detection of differently positioned and rotated lines. This method can be understood as template matching, where the template has an elongated shape and produces a strong neural response whenever the retina matches a straight feature in the scene. A straight line is thereby identified with little or no further processing necessary.
A computer vision algorithm formulating straight line detection as an optimization problem (da Costa, da F. Costa) was inspired by the jumping spider's visual system and uses the same approach of scanning a scene sequentially using template matching. While the well-known Hough transform allows robust detection of straight visual features in an image, its efficiency is limited by the need to compute a large part of, or even the whole, parameter space while searching for lines. In contrast, the alternative approach used in salticid visual systems suggests searching the visual space with a linear window, which permits adaptive search schemes during the line search without systematically computing the parameter space. Casting straight line detection in this way also allows it to be understood as an optimization problem, making efficient processing by computers possible. While the parameters controlling the annealing-based scanning must be found experimentally, the approach modeled on the jumping spider's strategy of straight line detection proved very effective, especially with properly set parameters.
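A minimal sketch of the template-matching idea follows. It uses an exhaustive scan rather than the annealing-based adaptive search of the published algorithm, so it is the Hough-like baseline the text contrasts with; the image, template length and angle grid are invented for illustration:

```python
import math

def line_response(img, cx, cy, angle, length=5):
    """Sum of pixels sampled along a linear 'retina' template of the
    given length, centered at (cx, cy) and rotated by `angle`."""
    total = 0
    for t in range(-(length // 2), length // 2 + 1):
        x = int(round(cx + t * math.cos(angle)))
        y = int(round(cy + t * math.sin(angle)))
        if 0 <= y < len(img) and 0 <= x < len(img[0]):
            total += img[y][x]
    return total

def scan(img, n_angles=8, length=5):
    """Exhaustive scan over positions and orientations; returns the
    best placement as (response, x, y, angle)."""
    best = (0, None, None, None)
    for y in range(len(img)):
        for x in range(len(img[0])):
            for k in range(n_angles):
                a = k * math.pi / n_angles
                r = line_response(img, x, y, a, length)
                if r > best[0]:
                    best = (r, x, y, a)
    return best

# A 7x7 binary image with a horizontal line in row 3:
img = [[1 if y == 3 else 0 for x in range(7)] for y in range(7)]
response, x, y, angle = scan(img)
print(response, x, y, angle)  # best template placement lies on row 3 at angle 0
```

The adaptive variant would replace the three nested loops with a stochastic search (e.g. simulated annealing) over (x, y, angle), accepting moves that increase the response.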
Discernment of visual targets
The ability to discern between slightly different visual targets has been shown for Cupiennius salei, although this species relies mainly on its mechanosensory systems during prey catching or mating behavior. When two targets are presented to the spider at a distance of 2 m, its walking path depends on their visual appearance: having to choose between two identical targets such as vertical bars, Cupiennius shows no preference. However, the animal strongly prefers a vertical bar to a sloping bar or a V-shaped target.
Discrimination between different targets has been shown to be possible only with the principal eyes uncovered, while the spider is able to detect the targets using any of its eyes. This suggests that many spiders' anterior-lateral (secondary) eyes are capable of much more than simple object movement detection. With all eyes covered, the spider exhibits totally undirected walking paths.
Placing Cupiennius in total darkness, however, not only results in undirected walks but also elicits a change of gait: instead of using all eight legs, the spider walks with only six and employs the first pair as antennae, comparable to a blind person's cane. To feel its surroundings, the extended forelegs are moved up and down as well as sideways. This behavior is specific to the first leg pair and is controlled solely by visual input, appearing as soon as normal room light is switched to infrared light, which is invisible to the spider.
Vision-based decision making in jumping spiders
The behavior of jumping spiders after detecting movement with the eyes depends on three factors: the target's size, speed and distance. If the object is more than twice the spider's size, it is not approached, and the spider tries to escape if it comes towards her. If the target has an adequate size, its speed is visually analyzed using the secondary eyes. Fast-moving targets with a speed of more than 4°/s are chased by the jumping spider, guided by her anterior-lateral eyes. Slower objects are carefully approached and analyzed with the anterior-median (i.e. principal) eyes to determine whether they are prey or another spider of the same species. This is seemingly achieved by applying the straight line detection described above to find out whether a visual target has legs or not. While jumping spiders have been shown to approach potential prey of appropriate characteristics as long as it moves, males are pickier in deciding whether their current counterpart might be a potential mate.
Potential mate detection
Experiments have shown that drawings of a central dot with leg-like appendages on the sides elicit courtship displays, suggesting that jumping spiders use visual feature extraction to detect the presence and orientation of linear structures in the target. Additionally, a spider's behavior towards a presumed conspecific depends on factors such as the sex and maturity of both spiders involved and whether it is mating time. Female wolf spiders, Schizocosa ocreata, even discern asymmetries in male secondary sexual characters when choosing their mate, possibly to avoid developmental instability in their offspring. Conspicuous tufts of bristles on a male's forelegs, which are used for visual courtship signaling, appear to influence female mate choice, and asymmetry of these body parts as a consequence of leg loss and regeneration apparently reduces female receptivity to such males.
Secondary eye-guided hunting
A jumping spider's stalking behavior when hunting insect prey is comparable to a cat stalking birds. If something moves within the visual field of the secondary eyes, they initiate a turn to bring the larger, forward-facing pair of principal eyes into position for classifying the object's shape as mate, rival or prey. Even very small, low-contrast dot stimuli moving at slow or fast speeds elicit this orientation behavior. Like Cupiennius, jumping spiders are also able to use their secondary eyes for more sophisticated tasks than mere motion detection: presenting visual prey cues to salticids with only the secondary eyes available and both principal eyes covered results in the animal exhibiting complete hunting sequences. This suggests that the anterior lateral eyes of jumping spiders may be the most versatile components of their visual system. Besides detecting motion, the secondary eyes evidently also offer a spatial acuity good enough to direct complete visually guided hunting sequences.
Prey “face recognition”
Visual cues also play an important role for jumping spiders (salticids) when discriminating between salticid and non-salticid prey using principal eyesight. A salticid prey's large principal eyes provide critical cues here, to which the jumping spider Portia fimbriata reacts with cryptic stalking tactics before attacking (walking very slowly with palps retracted and freezing when faced). This behavior is used only when a prey is identified as a salticid. It was exploited in experiments presenting computer-rendered, realistic three-dimensional lures with modified principal eyes to Portia fimbriata. While intact virtual lures elicited cryptic stalking, lures without principal eyes or with smaller principal eyes than usual (as sketched in the figure on the right) elicited different behavior. Presenting virtual salticid prey with only one anterior-median eye, or a regular lure with two enlarged secondary eyes, elicited cryptic stalking behavior, suggesting successful recognition of a salticid, while P. fimbriata froze less often when faced with a Cyclops-like lure (a single principal eye centered between the two secondary eyes). Lures with square-edged principal eyes were usually not classified as salticids, indicating that the shape of the principal eyes' edges is an important cue for identifying fellow salticids.
Jumping decisions from visual features
Spiders of the genus Phidippus have been tested in a study for their willingness to cross inhospitable open space towards visual targets placed on the other side of a gap. Whether the spider takes the risk of crossing open ground was found to depend mainly on factors such as the distance to the target, the target's size relative to that distance, and the target's color and shape. In independent test runs, the spiders moved to tall, distant targets as often as to short, close targets when both objects appeared equally large on the retina. When given the choice of moving to either white or green grass-like targets, the spiders consistently chose the green target irrespective of its contrast with the background, demonstrating their ability to use color discernment in hunting situations.
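The equal-retinal-size comparison in that study can be made concrete: the angle a target subtends on the retina depends only on the ratio of its height to its distance, so a tall, distant target and a short, close one can look identical in size. The numbers below are hypothetical, chosen only to illustrate the geometry:

```python
import math

def angular_size_deg(height_m: float, distance_m: float) -> float:
    """Visual angle (deg) subtended by a target of given height at a
    given distance, using the standard 2*atan(h / 2d) formula."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

# Hypothetical targets: halving both height and distance preserves
# the retinal image size exactly.
far = angular_size_deg(0.4, 2.0)    # tall target, far away
near = angular_size_deg(0.2, 1.0)   # half the height, half the distance
assert abs(far - near) < 1e-9       # identical retinal size
```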
Identifying microhabitat traits by visual cues
When presented with manipulated real plants and photos of plants, Psecas chapoda (a bromeliad-dwelling salticid) is able to detect a favorable microhabitat by visually analyzing architectural features of the host plant's leaves and rosette. By using black-and-white photos, a study could exclude any influence of other cues, such as color and smell, on host plant selection, leaving only shape and form as discerning characteristics. Even when deciding solely from photographs, Psecas chapoda consistently preferred rosette-shaped plants (Agavaceae) with narrow, long leaves over differently shaped plants, showing that some spider species can evaluate and distinguish the physical structure of microhabitats purely from visual cues of plant shape.
Johnston's Organs (Antennae in Bees and Butterflies)
Butterflies and moths keep their balance with the Johnston's organ: an organ at the base of the antennae that is responsible for maintaining the insect's sense of balance and orientation, especially during flight.
The perception of sound is important for the mating behavior of some insects, e.g. Drosophila. The ability to hear in Insecta and Crustacea is provided by chordotonal organs: mechanoreceptors which respond to mechanical deformation. These chordotonal organs are widely distributed throughout the insect's body and differ in their function: proprioceptors are sensitive to forces generated by the insect itself, exteroreceptors to external forces. These receptors allow detection of sound via the vibration of particles when sound is transmitted through a medium such as air or water. Far-field sound refers to the phenomenon in which air particles transmit the vibration as a pressure change over a long distance from the source. Near-field sound refers to sound close to the source, where the velocity of the particles can move lightweight structures. Some insects have visible hearing organs, such as the ears of noctuoid moths, whereas other insects lack a visible auditory organ but are still able to register sound. In these insects the Johnston's organ plays an important role in hearing.
The Johnston's organ (JO) is a chordotonal organ present in most insects. Christopher Johnston was the first to describe this organ, in mosquitoes, hence the name (Quarterly Journal of Microscopical Science, 1855, Vols. s1-3, 10, pp. 97-102). The organ is located at the stem of the insect's antenna. It has reached its highest degree of complexity in the Diptera (two-winged flies), for which hearing is of particular importance. The JO consists of organized basic sensory units called scolopidia (SP), whose number varies between animals. The JO serves various mechanosensory functions, such as the detection of touch, gravity, wind and sound; in honeybees, for example, the JO (≈ 300 SPs) is responsible for detecting the sound coming from another "dancing" honeybee. In male mosquitoes (≈ 7000 SPs) the JO is used to detect and locate female flight sound for mating behavior. The antenna of these insects is specialized to capture near-field sound and acts as a physical mechanotransducer.
Anatomy of the Johnston’s Organ
A typical insect antenna has three basic segments: the scape (base), the pedicel (stem) and the flagellum. Some insects have a bristle on the third segment called an arista. Figure 1 shows the Drosophila antenna. In Drosophila, the antennal segment a3 fits loosely into a socket on segment a2 and can rotate when sound energy is absorbed. This leads to stretching or compression of the JO neurons of the scolopidia. In Diptera the JO scolopidia are located in the second antennal segment a2, the pedicel (Yack, 2004). The JO is not only associated with sound perception (as an exteroreceptor); it can also function as a proprioceptor, giving information on the orientation and position of the flagellum relative to the pedicel.
JO studied in the fruit fly (Drosophila melanogaster)
The JO in Drosophila consists of an array of approximately 277 scolopidia located between the a2/a3 joint and the a2 cuticle (a type of outer tissue layer). The scolopidia in Drosophila are mononematic. Most are heterodynal and contain two or three neurons, so the JO comprises around 480 neurons, making it the largest mechanosensory organ of the fruit fly. Perception by the JO of male Drosophila courtship songs (produced by wing vibration) makes females reduce locomotion and males chase each other, forming courtship chains. The JO is important not only for perceiving sound but also for sensing gravity and wind. Using GAL4 enhancer trap lines, it was shown that JO neurons can be categorized anatomically into five subgroups, A-E. Each has a different target area in the antennal mechanosensory and motor centre (AMMC) of the brain (see Figure 2). Kamikouchi et al. showed that the different subgroups are specialized for distinct types of antennal movement; different groups are used for sound and gravity responses.
Neural activities in the JO
To study the activity of JO neurons it is possible to observe intracellular calcium signals caused by antennal movement. For this, the flies should be immobilized (e.g. by mounting them on a coverslip and fixing the second antennal segment to prevent muscle-caused movements). The antenna can then be actuated mechanically using an electrostatic force. The antennal receiver vibrates when sound energy is absorbed and deflects backwards and forwards when the fly walks. Deflecting and vibrating the antenna yields different activity patterns in the JO neurons: deflecting the receiver backwards with a constant force gives negative signals in the anterior region and positive ones in the posterior region of the JO. Forward deflection produces the opposite pattern. Courtship songs (pulse songs with a dominant frequency of ≈ 200 Hz) evoke broadly distributed signals. The opposite patterns for forward and backward deflection reflect the opposing arrangement of the JO neurons: their dendrites connect to anatomically distinct sides of the pedicel, the anterior and posterior sides of the receiver. Deflecting the receiver forwards stretches the JO neurons in the anterior region and compresses the neurons in the posterior region. From this it can be concluded that JO neurons are activated (i.e. depolarized) by stretch and deactivated (i.e. hyperpolarized) by compression.
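The opposing arrangement can be captured in a toy sign model: a deflection stretches one side of the pedicel and compresses the other, stretch depolarizes (positive signal) and compression hyperpolarizes (negative signal). Only the signs follow the text; the magnitudes and the function itself are invented for illustration:

```python
def jo_signals(deflection: float) -> dict:
    """Toy model of JO responses; deflection > 0 = forward,
    deflection < 0 = backward (arbitrary units)."""
    return {
        "anterior": +deflection,    # stretched (depolarized) by forward deflection
        "posterior": -deflection,   # compressed (hyperpolarized) by forward deflection
    }

forward = jo_signals(+1.0)
backward = jo_signals(-1.0)
assert forward["anterior"] > 0 and forward["posterior"] < 0
assert backward["anterior"] < 0 and backward["posterior"] > 0  # mirrored pattern
```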
Different JO neurons
A JO neuron usually targets only one zone of the AMMC, and neurons targeting the same zone are located in characteristic spatial regions within the JO. Similarly projecting neurons are organized into concentric rings or paired clusters (see Figure 2A).
Vibration sensitive neurons for sound perception
Neurons of subgroups A and B (AB) were activated maximally by receiver vibrations between 19 Hz and 952 Hz, with a frequency-dependent response: subgroup B showed larger responses to low-frequency vibrations, whereas subgroup A is responsible for the high-frequency responses.
Deflection sensitive neurons for gravity and wind perception
Neurons of subgroups C and E showed maximal activity for static receiver deflection; these neurons thus provide information about the direction of a force. They have a larger arista displacement threshold than the AB neurons. Nevertheless, CE neurons can respond to small displacements of the arista, e.g. by gravitational force: gravity displaces the arista tip by about 1 µm (see S1 of the cited study). They also respond to larger displacements caused by air flow (e.g. wind). Zone C and E neurons showed distinct sensitivities to air flow direction, which deflects the arista in different directions. Air flow applied to the front of the head resulted in strong activation in zone E and little activation in zone C; air flow applied from the rear showed the opposite result. Air flow applied to the side of the head yielded ipsilateral activation in zone C and contralateral activation in zone E. These distinct activation patterns allow Drosophila to sense the direction of the wind. It is not known whether the same CE neurons mediate both wind and gravity detection, or whether more sensitive CE neurons serve gravity detection and less sensitive ones wind detection. Evidence that wild-type Drosophila melanogaster can perceive gravity is that the flies tend to climb upwards against the gravitational force vector (negative gravitaxis) after being shaken in a test tube. When the antennal aristae were ablated, this negative gravitaxis behavior vanished, but phototaxis (flying towards a light source) did not. When the second antennal segment, i.e. where the JO is located, was also removed, the negative gravitaxis behavior reappeared. This shows that when the JO is lost, Drosophila can still perceive gravitational force through other organs, for example mechanoreceptors on the neck or legs; such receptors have been shown to be responsible for gravity sensing in other insect species.
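How the complementary zone C/E activation patterns could be read out to infer wind direction can be sketched as a simple classifier. The response values and the threshold are invented placeholders; only their relative pattern (front: E strong; rear: C strong; side: C ipsilateral, E contralateral) follows the text:

```python
def classify_wind(c_left, c_right, e_left, e_right, margin=0.5):
    """Toy decoder of wind direction from zone C/E activation levels."""
    c_total, e_total = c_left + c_right, e_left + e_right
    if e_total > c_total + margin:
        return "front"     # frontal air flow: zone E strongly active
    if c_total > e_total + margin:
        return "rear"      # rear air flow: zone C strongly active
    # side wind: zone C activates ipsilaterally, zone E contralaterally
    return "left" if c_left > c_right else "right"

assert classify_wind(0.1, 0.1, 0.9, 0.9) == "front"
assert classify_wind(0.9, 0.9, 0.1, 0.1) == "rear"
assert classify_wind(0.9, 0.1, 0.1, 0.9) == "left"
```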
Silencing specific neurons
Subgroups of JO neurons can be silenced selectively using tetanus toxin combined with subgroup-specific GAL4 drivers and tubulin-GAL80, the latter being a temperature-sensitive GAL4 blocker. With this approach it could be confirmed that neurons of subgroup CE are responsible for gravitaxis behavior; their elimination did not impair hearing. Silencing subgroup B impaired the male's response to courtship songs, whereas silencing groups CE or ACE did not. Since subgroup A was found to be involved in hearing (see above), this result was unexpected. From a different experiment, in which sound-evoked compound action potentials (the sum of action potentials) were investigated, it was concluded that subgroup A is required for detecting nanometer-range receiver vibrations, such as those imposed by the faint songs of courting males.
Differences in gravitation and sound perception in the brain
Neurons of subgroups A and B target both zones of the primary auditory centre in the AMMC and the inferior part of the ventrolateral protocerebrum (VLP) (see Figure 2B). These zones show many commissural connections among themselves and with the VLP. For neurons of subgroups C and E, almost no commissural connections between the target zones were found, nor connections to the VLP; instead, neurons associated with the CE target zones descend to or ascend from the thoracic ganglia. This difference between the AB and CE projections is strongly reminiscent of the separate auditory and vestibular pathways in mammals.
Johnston’s Organ in honeybees
The JO in bees is likewise located in the pedicel of the antenna and is used to detect near-field sound. In a hive, some bees perform a waggle dance, which is believed to inform conspecifics about the distance, direction and profitability of a food source. Followers have to decode the message of the dance in the darkness of the hive, i.e. without visual perception. Perception of sound is one possible way to obtain the information of the dance. The sound of a dancing bee has a carrier frequency of about 260 Hz and is produced by wing vibrations. Bees have various mechanosensors, such as hairs on the cuticle or bristles on the eyes, and Dreller et al. found that the mechanosensors of the JO are responsible for sound perception in bees. Nevertheless, hair sensors could still be involved in the detection of other sound sources whose amplitude is too low to vibrate the flagellum. Dreller et al. trained bees to associate sound signals with a sucrose reward; after training, some of the mechanosensors were ablated in different bees, and the bees' ability to associate the sound with the reward was tested again. Manipulating the JO resulted in loss of the learned skill. Training succeeded at a frequency of 265 Hz, but also at 10 Hz, which shows that the JO is also involved in low-frequency hearing. Bees with only one antenna made more mistakes, but were still better than bees with both antennae ablated; the two JOs, one in each antenna, could help followers to determine the direction of the dancing bee. Hearing could also serve bees in other contexts, e.g. to keep a swarming colony together. The decoding of the waggle dance, however, relies not only on auditory perception but also, perhaps even more, on electric field perception: the JO in bees allows detection of electric fields. When body parts are moved against each other, bees accumulate electric charge in their cuticle, and insects respond to electric fields, e.g. by modified locomotion (Jackson, 2011).
Surface charge is thought to play a role in pollination, because flowers are usually negatively charged while arriving insects carry a positive surface charge, which could help bees take up pollen. By training bees to static and modulated electric fields, Greggers et al. showed that bees can perceive electric fields. Dancing bees produce electric fields that induce movements of the flagellum ten times stronger than the mechanical stimulus of wing vibrations alone. These flagellar vibrations are monitored by the JO, which responds to the displacement amplitudes induced by the oscillation of a charged wing; this was demonstrated by recording compound action potential responses from JO axons during electric field stimulation. Electric field reception with the JO does not work without the antennae, though the involvement of other, non-antennal mechanoreceptors has not been excluded. The results of Greggers et al. suggest that electric fields (and with them the JO) are relevant for social communication in bees.
Importance of JO (and chordotonal organs in general) for research
Chordotonal organs like the JO are found only in Insecta and Crustacea. Chordotonal neurons are ciliated cells, and genes encoding proteins needed for functional cilia are expressed in them. Mutations in the human homologues of these genes cause genetic diseases, so knowledge of the mechanisms of ciliogenesis can help to understand and treat human diseases caused by defects in the formation or function of human cilia. This is possible because neuronal specification in insects and vertebrates is controlled by highly conserved transcription factors, as the following example shows: Atonal (Ato), a proneural transcription factor, specifies chordotonal organ formation, and its mouse orthologue Atoh1 is necessary for hair cell development in the cochlea. Deaf mice expressing a mutant Atoh1 can be rescued by the atonal gene of Drosophila. Studying chordotonal organs in insects can therefore yield further insights into mechanosensation and cilium construction. Drosophila is a versatile model for studying chordotonal organs: the fruit fly is easy and inexpensive to culture, produces large numbers of embryos, can be genetically modified in numerous ways, and has a short life cycle, which allows several generations to be investigated within a relatively short time. In addition, most of the fundamental biological mechanisms and pathways that control development and survival are conserved between Drosophila and other species, such as humans. While the human sensory system offers us stunning ways of perceiving our movement and environment, the sensory systems of insects and spiders are no less fascinating. To give just a few examples: spiders have up to eight eyes, and some see almost as sharply as humans; bees "feel the rhythm" when other bees dance in the hive and learn from this the location of food sources; and mosquitoes hunt their victims by smell.
In addition, studies in insects carry far fewer ethical and methodological limitations than studies in mammals. Especially in flies, molecular genetic tools allow any gene to be targeted (e.g. knocked out or overexpressed), and the system is much more manageable than in humans.
The insect olfactory system
This sensory systems book is mostly about human sensory systems, and there is already a chapter about the human olfactory system, so why do we need a chapter on the insect olfactory system? The fruit fly (Drosophila melanogaster), on which we will focus here, is a very important model animal in biology, and much research on sensory systems is done in the fruit fly. The visual as well as the olfactory system are studied intensively, and there are fewer ethical or methodological limitations. With molecular genetic tools, any gene in a fly can be targeted (e.g. knocked out or overexpressed), and the system is much more manageable than in humans. While the insect olfactory system functions quite differently from the human one, common principles can be found. Furthermore, the insect olfactory system inspires engineering in robotics, medicine and many other areas.
The nature of smell
To understand the specifics of odor sensing, one has to be aware that smell is quite different from other stimuli. It differs from light and sound in that it is carried not by waves but by diffusion, air flows and turbulence. Furthermore, while light and sound have only two perceptually relevant characteristics, frequency composition and amplitude, smell comprises a variety of discrete odorants and even more possible mixtures at different concentrations.
In insects (as in most vertebrates) the olfactory system is important for orientation and food foraging, but it also has social (nest mate recognition, e.g. in ants) and sexual (mate search and selection by pheromones) significance. The main path of the odor information begins at the olfactory sensilla (the insect's sensory organs that contain the sensory neurons), which in most insects are found on the antennae and in the fly look like small hairs (see Figure). There is a huge variety of antenna types (which are not used for olfaction alone) and many different sensillum types.
To understand the general principle, the example of the basiconic sensilla of Drosophila melanogaster should suffice. Odorant molecules pass through slits or pores in the cuticle into the aqueous sensillum lymph, where some types of odorant molecules are bound by odorant binding proteins and carried towards the dendrites of the olfactory receptor neurons (ORNs), while others diffuse through the lymph towards the dendrites. On the membrane of the dendrites sit the odorant receptors (ORs), which bind the odorant molecules and are responsible for converting the signal into a membrane current. This current propagates through the dendrite to the cell body, where an action potential is generated at the axon hillock. The action potential travels along the ORN axon to the antennal lobe (the analog of the olfactory bulb in vertebrates), where ORNs make synapses onto local interneurons and projection neurons. The antennal lobe is organized in so-called glomeruli. How these are involved in pattern recognition is not fully understood, but the glomerular activation pattern can provide information about the odors presented to the fly.
Projection neurons project to the lateral horn (where innate odor responses are probably processed) and to the Kenyon cells in the mushroom bodies. The mushroom bodies are a neuropil in the insect brain, named for their similarity to mushrooms. There, odors are associated with other sensory modalities and with behavior, which is why the mushroom bodies are an important model system for studying learning and memory.
Odorant reception in ORN
An insect's olfactory sensillum contains one or more olfactory receptor neurons, which transform the odor information into an electrical signal (action potentials). Most ORNs contain only one receptor type, but each receptor type reacts to many odors (see Figure). However, some receptors are more specific, because the odors they detect either play an important role in the insect's behavior or are chemically unique. An example of a more specific receptor is the receptor for CO2: other odorants activate it only weakly, and it directly triggers an avoidance response in the fly. The specificity of ORs is due to the different affinities of odorants for the receptor: the higher the affinity, the more receptors are occupied when an odor is applied, and the stronger the current response. However, ORNs do not respond linearly to stimulation. It can be assumed that most of them respond logarithmically within their working range, which increases their dynamic range; this logarithmic relation does not apply to stimuli below the respective detection threshold or in saturation. Most ORNs show a phasic-tonic on-response, i.e. they react with a strong increase in firing rate when a stimulus is presented and then show rate adaptation if the stimulus persists.
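The threshold/log/saturation shape of the ORN response can be sketched as a simple rate function. All constants (threshold, saturation concentration, maximum rate) are illustrative placeholders, not measured values:

```python
import math

def orn_rate(concentration, threshold=1e-3, saturation=1.0, max_rate=200.0):
    """Toy ORN firing rate (spikes/s) vs. odor concentration: zero below
    the detection threshold, logarithmic within the working range,
    flat in saturation."""
    if concentration <= threshold:
        return 0.0
    c = min(concentration, saturation)
    # log-compression maps [threshold, saturation] onto [0, max_rate]
    return max_rate * math.log(c / threshold) / math.log(saturation / threshold)

assert orn_rate(1e-4) == 0.0                          # below threshold: silent
assert abs(orn_rate(10.0) - 200.0) < 1e-9             # above saturation: flat
assert 0 < orn_rate(1e-2) < orn_rate(1e-1) < 200.0    # log working range
```

A full phasic-tonic model would additionally let the rate decay over time after stimulus onset (rate adaptation), which this static sketch omits.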
Odorant receptors are membrane proteins that elicit an ionic current through the membrane when an odorant molecule binds to them. This is accomplished in two ways in olfactory systems: by metabotropic and by ionotropic receptors. In mammals, olfactory receptors are metabotropic G-protein-coupled receptors that release a G protein into the cell, which activates an ion channel via a short intracellular signaling cascade. In contrast, most olfactory receptors in insects are ionotropic receptors, i.e. ion channels that open when an odorant molecule binds. Because they save the time of the signaling cascade, ionotropic receptors are much faster than metabotropic receptors.
Odor information processing
The odor that reaches the antenna carries several kinds of information: on the one hand the odor identity, i.e. which odor or mixture it is, and on the other hand the quantity of its components. Furthermore, the timing of the stimulus contains information. If two odors start at the same time, for example, it is likely that they belong to the same object. It has been shown that insects indeed use odorant timing information not only to detect the direction of an odor source but also to distinguish and track odor objects. The processing mechanism that enables this behavior is not clear, but it is remarkable that stimulus-onset delays of only a few milliseconds can already be exploited. Recent results suggest that the speed and temporal resolution of the insect olfactory system are remarkable and much higher than expected.
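The onset-synchrony idea fits in a one-line rule; the 6 ms tolerance below is an assumed illustrative value, not a measured constant:

```python
# Toy odor-object segregation rule: two odorants are attributed to the same
# source if their onsets are near-synchronous (tolerance is illustrative).
def same_source(onset_a_ms, onset_b_ms, max_lag_ms=6.0):
    return abs(onset_a_ms - onset_b_ms) <= max_lag_ms

print(same_source(100.0, 103.0))   # True: likely one odor object
print(same_source(100.0, 150.0))   # False: probably separate plumes
```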
By means of calcium imaging, a method that visualizes cytoplasmic calcium with a fluorescent marker, and thus activity in neurons, it is possible to create a functional atlas of the antennal lobe. Essentially all ORNs of one receptor neuron type, expressing one OR, project into the same glomerulus. The glomeruli (about 54 in the fruit fly) thus each pool the response of one receptor type and together form a spatial pattern. However, it has been shown that the temporal dynamics of the glomerular response are also used to encode information. Odor information is therefore encoded in the olfactory system by spatiotemporal firing patterns.
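How a downstream reader could exploit such glomerular patterns can be illustrated with a nearest-pattern lookup; the glomerular vectors and odor labels below are invented for illustration:

```python
import numpy as np

# Invented reference patterns: each odor drives a characteristic subset of
# glomeruli (a combinatorial code across 5 toy glomeruli).
reference = {
    "banana-like":  np.array([0.9, 0.1, 0.0, 0.7, 0.2]),
    "vinegar-like": np.array([0.1, 0.8, 0.6, 0.0, 0.3]),
    "yeasty":       np.array([0.0, 0.2, 0.9, 0.1, 0.8]),
}

def classify(pattern):
    """Return the reference odor whose glomerular pattern is closest (Euclidean)."""
    return min(reference, key=lambda name: np.linalg.norm(reference[name] - pattern))

noisy = np.array([0.8, 0.2, 0.1, 0.6, 0.3])   # a noisy banana-like pattern
print(classify(noisy))                         # → banana-like
```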
So far it has not been possible to disentangle and understand the odor code in a way comparable to our knowledge of the visual and auditory systems. This may be due to the peculiarities of smell discussed above: there is no easily mappable topographic organization of neurons, as the odor space is multidimensional and not continuous.
Glomerular activity patterns are linked to behavior in the mushroom bodies. Most insects show a high degree of plasticity there; bees, for example, are able to associate odors with a food reward after only a few presentations. Odor information in the mushroom bodies is thought to be represented by a sparse code, meaning that only a few Kenyon cells respond, with only a few spikes each. In contrast, the code described above in the antennal lobe and in the ORNs is a combinatorial code.
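The transition from the dense combinatorial antennal-lobe code to a sparse Kenyon-cell code is often modelled as a random projection followed by a high firing threshold; the sketch below uses that standard modelling assumption with invented numbers:

```python
import numpy as np

# Sketch of the dense-to-sparse transformation between antennal lobe and
# mushroom body (model assumptions: random connectivity, global threshold).
rng = np.random.default_rng(0)
n_glomeruli, n_kenyon = 54, 2000
# each Kenyon cell samples a random ~10% subset of the glomeruli
weights = (rng.random((n_kenyon, n_glomeruli)) < 0.1).astype(float)

def kenyon_response(glomerular_pattern, quantile=0.95):
    drive = weights @ glomerular_pattern      # summed glomerular input per KC
    threshold = np.quantile(drive, quantile)  # high threshold -> few cells fire
    return drive > threshold

dense = rng.random(n_glomeruli)               # a dense combinatorial AL pattern
sparse = kenyon_response(dense)
print(round(sparse.mean(), 2))                # ≈ 0.05: only ~5% of KCs respond
```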
Odor perception and behavioral significance
For insects the olfactory system is of great behavioral significance. For example, as most of us have experienced first-hand, mosquitos track their victims by smell. Ants follow pheromone trails to food sources, but are also able to identify their nestmates by a colony-specific hydrocarbon profile (and can therefore eliminate foes and thieves that enter their territory). And many moths use sex pheromones to find mating partners.
Usually, odors in nature are not pure chemical substances but mixtures. These mixtures, however, are perceived as a unit and are very often directly linked to a behavior. The neuronal response to a mixture in the antennal lobe cannot always be predicted from the responses to its components. It should therefore not be taken for granted that the olfactory system works like an e-nose designed to analyze the components of a presented odor. Furthermore, compared to vision, where information has to be processed extensively before the relevance of the content becomes apparent, the olfactory system is more strongly and directly linked to behavior (and, at least in higher animals, to emotions). These connections are sometimes innate, but often also learned and idiosyncratic.
- Suh GS, Ben-Tabou de Leon S, Tanimoto H, Fiala A, Benzer S, Anderson DJ: Light activation of an innate olfactory avoidance response in Drosophila. Curr Biol 2007, 17:905-908.
- Silbering AF, Benton R (2010) Ionotropic and metabotropic mechanisms in chemoreception: 'chance or design'? EMBO reports 11:173-179.
- Baker TC, Fadamiro HY, Cosse AA (1998) Moth uses fine tuning for odour resolution. Nature 393:530-530.
- Justus KA, Schofield SW, Murlis J, Carde RT (2002) Flight behaviour of Cadra cautella males in rapidly pulsed pheromone plumes. Physiological Entomology 27:58-66.
- Szyszka P, Stierle JS, Biergans S, Galizia CG (2012) The Speed of Smell: Odor-Object Segregation within Milliseconds. PloS one 7:e36096.
- DasGupta S, Waddell S (2008) Learned Odor Discrimination in Drosophila without Combinatorial Odor Maps in the Antennal Lobe. Current biology : CB 18:1668-1674.
- Brown, S. L., Joseph, J., & Stopfer, M. (2005). Encoding a temporally structured stimulus with a temporally structured neural representation. Nature neuroscience, 8(11), 1568-1576.
- Silbering, A. F., & Galizia, C. G. (2007). Processing of odor mixtures in the Drosophila antennal lobe reveals both global inhibition and glomerulus-specific interactions. The Journal of Neuroscience, 27(44), 11966-11977.
Halteres (Gyroscopes for Flies)
Halteres are sensory organs present in many flying insects. Widely thought to be an evolutionary modification of the hind pair of wings, halteres provide gyroscopic sensory information that is vitally important for flight. Although the fly has other systems that aid flight, its visual system is too slow to support rapid maneuvers. A sensory system of this kind is also necessary for adept flight in low-light conditions, a requirement for avoiding predation. Indeed, without halteres, flies are incapable of sustained, controlled flight. Scientists have been aware of the role halteres play in flight since the 18th century, but the mechanisms by which they operate have only recently been explored in detail.
The haltere evolved from the rearmost of two pairs of wings. While the anterior pair has retained its flight function, the posterior pair has lost it and adopted a different shape. The haltere is composed of three visible structural components: a knob-shaped end, a thin shaft, and a slightly wider base. The knob contains approximately 13 innervated hairs, while the base contains two chordotonal organs, each innervated by about 20-30 nerves. Chordotonal organs are sense organs thought to respond solely to extension, though they remain relatively poorly understood. The base is also covered by around 340 campaniform sensilla, small fibers that respond preferentially to compression along the direction in which they are elongated; each of these fibers is also innervated. Relative to the stalk of the haltere, both the chordotonal organs and the campaniform sensilla are oriented at approximately 45 degrees, which is optimal for measuring bending forces on the haltere. The halteres move contrary (in anti-phase) to the wings during flight. The sensory components can be categorized into three groups: those sensitive to vertical oscillations of the haltere, including the dorsal and ventral scapal plates, the dorsal and ventral Hicks papillae (both plates and papillae are subcategories of the aforementioned campaniform sensilla), and the small chordotonal organ; the basal plate (another manifestation of the sensilla) and the large chordotonal organ, which are sensitive to gyroscopic torque acting on the haltere; and a population of undifferentiated papillae that respond to all strains acting on the base of the haltere. This arrangement gives flies an additional means of distinguishing the direction of the force applied to the haltere.
As Homeobox genes were being discovered and explored for the first time, it was found that the deletion or inactivation of the Hox gene Ultrabithorax (Ubx) causes the halteres to develop into a normal pair of wings. This was a very compelling early result as to the nature of Hox genes. Manipulations to the Antennapedia gene can similarly cause legs to become severely deformed, or can cause a set of legs to develop instead of antennae on the head.
The halteres function by detecting Coriolis forces: the inertial forces that act on the oscillating haltere when the fly's body rotates. Studies have indicated that the angular velocity of the body is encoded in the Coriolis forces measured by the halteres. Active halteres can recruit neighboring units, influencing nearby muscles and causing dramatic changes in flight dynamics. Halteres have been shown to have extremely fast response times, allowing these flight corrections to be performed much more quickly than if the fly relied on its visual system. To distinguish between different rotational components, such as pitch and roll, the fly must combine signals from the two halteres, which must not be coincident (coincident signals would diminish the fly's ability to differentiate the rotational axes). The halteres contribute to image stabilization as well as in-flight attitude control, as established by numerous authors who noted reactions of the head and wings to inputs from components of the rotation-rate vector. Contributions from halteres to head and neck movements have also been noted, explaining their role in gaze stabilization. The fly therefore uses input from the halteres to establish where to fixate its gaze, an interesting integration of the two senses.
Recordings have indicated that halteres can respond to stimuli at the same (double-wingbeat) frequency as the Coriolis forces, a proof of concept that permits further mathematical analysis of how these measurements can occur. The vector cross-product of the halteres' angular velocity and the rotation of the body gives the fly the Coriolis force vector. This force is at the wingbeat frequency in the pitch and roll planes, and at twice that frequency in the yaw plane. Halteres can provide a rate-damping signal to counteract rotations, because the Coriolis force is proportional to the fly's own rotation rate. By measuring the Coriolis force, the halteres can send an appropriate signal to their affiliated muscles, allowing the fly to control its flight properly. The large amplitude of haltere motion allows the vertical and horizontal rates of rotation to be computed. Because of the large disparity in haltere movement between vertical and horizontal motion, Ω1, the vertical component of the rotation rate, generates a force at double the frequency of the horizontal component. It is widely thought that this twofold frequency difference is what allows the fly to distinguish the vertical from the horizontal component. If we assume that the haltere moves sinusoidally, a reasonably accurate approximation of its real-world behavior, the angular position γ can be modeled as:
γ(t) = 180° · sin(ωt)

where ω is the haltere beat frequency and the amplitude of 180° is a close approximation to the real-life range of motion. The body rotational velocities can be computed from the known rates in the two halteres' reference frames relative to the fly's body (Ωb denoting the left and Ωc the right haltere; the roll, pitch, and yaw components are labeled 1, 2, and 3, respectively) with the following calculations:
α represents the haltere angle of rotation from the body plane, and the Ω terms are, as mentioned, the angular velocity of the haltere with respect to the body. Knowing this, one could roughly simulate input to the halteres using the equation for forces on the end knob of a haltere:
F = m(g − aF − ai − 2Ω × vi − Ω × (Ω × ri) − Ώ × ri)

Here m is the mass of the knob of the haltere, g is the acceleration due to gravity, ri, vi, and ai are the position, velocity, and acceleration of the knob relative to the body of the fly in the i direction, aF is the fly's linear acceleration, and Ωi and Ώi are the angular velocity and acceleration components, for the direction i, of the fly in space. The Coriolis force is represented by the 2mΩ × vi term. Because the sensory signal generated is proportional to the forces exerted on the haltere, this allows the haltere signal to be simulated. When reconciling the force equation with the rotational-component equations, it is worth remembering that the force equation must be calculated separately for each haltere.
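As a toy check of the frequency-doubling claim, the out-of-plane Coriolis force on a simplified point-mass haltere can be simulated; all parameters below are illustrative assumptions, not measured values:

```python
import numpy as np

# Toy model: the haltere tip beats sinusoidally in the body's x-y plane,
# gamma(t) = g0*sin(w*t). The out-of-plane component of the Coriolis force,
# -2m*(Omega x v), is what the strain sensors at the base would feel.
m, L = 1e-6, 1.0                          # knob mass (kg) and haltere length (assumed)
f_beat = 130.0                            # beat frequency in Hz (assumed)
w, g0 = 2*np.pi*f_beat, np.deg2rad(90)    # angular beat frequency, stroke amplitude
t = np.linspace(0, 1/f_beat, 4096, endpoint=False)   # exactly one beat cycle
gamma = g0*np.sin(w*t)
dgamma = g0*w*np.cos(w*t)
vx, vy = -L*dgamma*np.sin(gamma), L*dgamma*np.cos(gamma)  # in-plane tip velocity

def perp_coriolis(Omega):
    """z-component of -2m*(Omega x v) for an in-plane velocity v."""
    Ox, Oy, _ = Omega
    return -2*m*(Ox*vy - Oy*vx)

def dominant_harmonic(signal):
    """Strongest spectral component, in multiples of the beat frequency."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    return int(np.argmax(spec))

# Body rotation about one in-plane axis appears at the beat frequency,
# rotation about the perpendicular in-plane axis at twice the beat frequency:
print(dominant_harmonic(perp_coriolis((1.0, 0.0, 0.0))))   # 1
print(dominant_harmonic(perp_coriolis((0.0, 1.0, 0.0))))   # 2
```

The double-frequency component for one rotation axis and the single-frequency component for the other is exactly the disparity the text describes the fly exploiting.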
- J. L. Fox and T. L. Daniel (2008), "A neural basis for gyroscopic force measurement in the halteres of Holorusia.", J Comp Physiol 194: 887-897
- Rhoe A. Thompson (2009), "Haltere Mediated Flight Stabilization in Diptera: Rate Decoupling, Sensory Encoding, and Control Realization.", PhD thesis (University of Florida)
- J. W. S. Pringle (1948), "The gyroscopic mechanism of the halteres of diptera.", Phil Trans R Soc Lond B 233 (602): 347-384
Some insects show a remarkable memory for location. Crickets learn where a cool spot is located in an otherwise hot area. Parasitoid wasps (Argochrysis armilla) can remember where another hymenopteran (Ammophila pubescens) dug its inconspicuous nest, in order to feed on its larvae. To help other bees find their way back to the nest, Partamona batesi has developed an elaborate behaviour of sticking white river sand together to form a highly visible portico. What these behaviours have in common is that the insects have a sense of their environment and can navigate within it. Spatial memory is observed in the behaviour of various insects with central-place foraging. Remembering locations is essential for finding and returning to rich food sites, for returning to nests, and for holding position in flowing water or in the air.
Until recently, behavioural experiments were mainly performed with social bees and ants, and in simplified environments (with reduced numbers of landmarks and altered panoramas, depending on the question asked), to unravel three different types of memory-based guidance mechanisms. While two are based on memories of views of the surroundings, the third is based on an inner accumulator creating a vector from the nest to a desired site:
- Alignment image-matching is used to head along familiar routes, comparing memorised snapshot views with the present sight.
- Position image-matching is a more general way of orienting and is used when the desired goal is known but the starting position or the route is new.
- Path integration is used when the perceived environment is unknown or when travelling through featureless landscapes; the insect measures the distance and the direction travelled, creating a vector.
Using all three procedures, the insect compares its sensory input with a memory of the desired sensory input. The discrepancy between these inputs is transformed into an “output vector” giving a direction towards the desired goal. Depending on the situation, not all mechanisms are needed simultaneously; orientation over a range of conditions is achieved by the three processes converging and complementing one another.
Alignment image-matching is the most basic way of comparing visual input to memory. It is used when travelling along a known route: the current retinal image is compared with visual memories and brought into congruence.
Fractional Position of Mass, Orientated Edges and Segmentation
Lent et al. looked at the behaviour of wood ants (Formica rufa) in an artificial scene, comparing the routes taken in the original and in an altered scene. The group found several behaviours that help the ant choose a direction. One of these processes resembles an already known ability of ants: computing the centre of mass of shapes. It was found that ants orientate themselves using the “fractional position of mass” (FPM), which gives robust orientation over several meters of distance from the shape. The FPM is the direction the ant heads toward and is described by the ratio of the left to the right area of the shape (Figure 1). It was further found that the insects orientate themselves by extracting local visual features, such as oriented edges (for example a diagonal margin of a shape), and superimposing them on a visual memory. Additionally, the ants seem to segment complex scenes, calculating the FPM of each piece individually; in simple scenes one FPM is calculated over the whole image.
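The FPM idea can be sketched on a one-dimensional silhouette; the formulation below (find the image column that splits the shape's area at a remembered fraction) is a simplified reading of the mechanism, not the authors' algorithm:

```python
import numpy as np

# Toy "fractional position of mass": the heading is the image column that
# splits the shape's area into a remembered left:right ratio.
def fpm_column(silhouette, fraction=0.5):
    """silhouette: per-column areas of the shape; returns the first column
    where the cumulative area reaches `fraction` of the total."""
    cum = np.cumsum(silhouette, dtype=float)
    return int(np.searchsorted(cum, fraction * cum[-1]))

# A right-heavy triangular silhouette: column areas 1..10, total area 55.
tri = np.arange(1, 11)
print(fpm_column(tri, 0.5))   # column where cumulative area passes 27.5 → 6
```

Because the split column moves with the shape, heading at it keeps the left and right areas in the memorised ratio, which is what makes the cue robust over distance.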
For optimal guidance, it was found that the ant orientates itself first by local features. A recent study suggests that ants probably segment their view and calculate the corresponding FPM. If the insects diverge from their original direction, the body orientation is corrected by saccade-like turns. It has been experimentally assessed that wood ants perform this alignment every three seconds, correcting up to 70 degrees of direction if necessary.
Landmarks and Skylines
A reliable landmark is characterised by its availability over several journeys and its visibility under various light conditions (Figure 2). Graham and Cheng have suggested that the skyline is such a trustworthy landmark; a skyline profile is the outline of objects on the ground contrasting against the sky. Their experiments were conducted using so-called zero-vector ants: ants caught right before entering their nest, which therefore lack a path-integration vector (it is set to zero) and have to navigate by visual cues alone when displaced. Graham and Cheng investigated how the ants successfully home on their nest when displaced. This behaviour was also tested in an altered environment with an artificial panorama, and with several sideways-shifted versions of it. They found that the ants shifted their direction according to the false skyline, independent of any inner compass mechanism.
Other researchers differentiate the omnipresent skyline from other landmarks. Wystrach et al. observed that the guidance of ants (Melophorus bagoti) cannot be completely explained by orientation with landmarks alone. Since an ant's eye has poor resolution and thus simplifies complex natural scenery, it has been suggested that panoramas or skyline views are used for basic orientation, giving a first crude directional cue.
Positional image-matching describes a more universal use of visual memories. Compared to alignment image-matching, this process allows guidance from new, unknown locations via novel directions to a desired goal, as long as enough familiar elements are present. In cluttered environments ants rely heavily on their path-integration vector and on surrounding landmarks; in familiar terrain with sufficient landmarks, the path-integration vector is neglected and the insects rely on the snapshots. That this process might be used by ants was shown in simulations: a robot performing image-matching behaved similarly to ants.
Skyline Heights, Visual Compass and Mismatch Gradient Descent
Wystrach et al. have proposed three further view-matching strategies. First, they suggest that ants compare the skyline heights at the present location with the memorised view at the goal. If the skyline is too high at the current location, the ant has to walk in the direction it is looking; if the skyline is too low, the ant has to veer away from its current direction of orientation (Figure 3). Notwithstanding the general robustness of this model, it has two flaws. First, the height difference between the two views must be big enough to be detectable. Second, the ant must have the same absolute orientation as in the memorised view in order to compare the two skylines successfully; this, however, may be achieved via a geomagnetic or a celestial compass. Skyline-height comparison is therefore a good tool for obtaining a rough heading when in a new location, but it is barely practical near the goal.
Second, they proposed that ants have a visual compass, which requires memories of several different views taken around the nest while facing it. If the insect finds itself in an unknown position, it can compare the current view with the memorised views, pick the best-matching one, and derive the heading direction from it. This model is supported by the observation that Ocymyrmex ants perform a well-choreographed learning walk which involves facing the nest. Compared to skyline-height comparison, ants have no problem finding the appropriate orientation, because their memorised views are already directed towards the nest. However, if the retained memory view is not actually the best match, the ant may be led in a completely wrong direction. Using the visual compass on a familiar route is therefore very robust, but not when it comes to finding the way to the nest from an unfamiliar location.
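A minimal sketch of the visual-compass idea, using toy one-dimensional "views" and invented headings:

```python
import numpy as np

# Toy visual compass: nest-facing snapshots are stored together with the
# heading at which they were taken; the best-matching snapshot supplies
# the heading. Views and headings here are invented for illustration.
stored_views = {           # heading (deg) -> 1-D view taken while facing the nest
    0:   np.array([3, 1, 0, 0, 2]),
    90:  np.array([0, 2, 3, 1, 0]),
    180: np.array([1, 0, 0, 3, 2]),
}

def compass_heading(current_view):
    """Heading of the stored view with the smallest pixel-wise difference."""
    return min(stored_views,
               key=lambda h: int(np.abs(stored_views[h] - current_view).sum()))

print(compass_heading(np.array([0, 2, 2, 1, 0])))   # closest to the 90° view
```

The failure mode discussed in the text falls out of the code: if noise makes the wrong snapshot the best match, the returned heading is simply wrong.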
The third proposed process, mismatch gradient descent, also provides orientation: the perceived and the memorised images are constantly compared to each other. For this process to work, however, ants would need three-dimensional information, which would have to be obtained by moving diagonally through the landscape. Since this has not been observed, mismatch gradient descent alone would not bring the ant to the desired destination.
These findings suggest that skyline-height comparison is used to find a heading direction, that mismatch gradient descent helps the ant keep its direction to the nest while walking, and that, once near the nest, the visual compass leads the ant to its homestead. All these models can be mixed, with their relative contributions probably varying. If none of the mechanisms proposed above indicates a heading direction, i.e. if the insect is in a completely new environment, ants begin a systematic search consisting of loops which increase in size as the search goes on.
Confusion After Displacement
It was shown that zero-vector ants trained along a route exhibit a short phase of confusion, expressed as disoriented walking, after being picked up in front of their home and replaced at the feeder. It seems as if the route memory towards the nest is ignored during the period of disorientation. Evidently the repetition of the route caused the confusion, raising the question of whether the ants have an episode-like memory telling them that they have already taken the route. However, the confusion is not always observed, especially not in cluttered environments. This could be explained by the assumption that ants segment their view of cluttered environments into stages: when displaced, the ants may behave as if still on their way to the nest, because the last stage of the route has not yet been reached.
Path integration is a kind of dead reckoning: ongoing localisation via measurement of direction, velocity, and elapsed time. This requires an accumulator as the core component of the system. As a result, the ant obtains a vector with the distance and direction it has travelled relative to the nest's position. To obtain the heading for a desired site, an output vector is generated by subtracting the current path-integration state from the memorised path-integration state at the goal.
Build-Up of a Path Integrator Vector
In order to build a vector, the insect needs a starting point. The convention of this model is that the nest is taken as the point of origin and defined as zero. The direction of the vector has to be determined relative to external or internal cues. External cues include the sun compass, large landmarks, and polarised light. Internal cues, such as a sense of the earth's magnetic field, are used if there is nothing in the environment to navigate by. Bees determine the length of the vector by measuring the distance travelled using optic flow; ants do so via the proprioceptive input derived from their steps.
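The accumulator at the heart of path integration fits in a few lines; PathIntegrator is a hypothetical helper, with the nest as the origin as in the model above:

```python
import math

# Minimal path-integration sketch: the accumulator keeps a running state by
# summing each step's displacement; the home vector is its inverse.
class PathIntegrator:
    def __init__(self):
        self.x = self.y = 0.0   # nest is the origin; the state starts at zero

    def step(self, heading_deg, distance):
        """Accumulate one leg of the journey (compass heading, distance)."""
        self.x += distance * math.cos(math.radians(heading_deg))
        self.y += distance * math.sin(math.radians(heading_deg))

    def home_vector(self):
        """Bearing (deg) and distance straight back to the nest."""
        d = math.hypot(self.x, self.y)
        return math.degrees(math.atan2(-self.y, -self.x)) % 360, d

pi = PathIntegrator()
pi.step(0, 10); pi.step(90, 10)    # 10 m east, then 10 m north
bearing, dist = pi.home_vector()   # → 225 degrees, ~14.14 m
```

Subtracting a stored goal state from the current state, as described above, is the same arithmetic with the goal coordinates in place of the origin.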
Ants Completely Rely on Path Integration in Featureless Environments
Because of this arithmetic way of orienting, insects using path integration can move through completely unfamiliar terrain. This allows the animals to travel through featureless environments and to return to their nest after an exploratory journey.
A simple experiment demonstrating that ants rely completely on path integration was performed by Collett and Collett. Returning ants were displaced into either familiar or unknown areas before reaching their goal. In both cases the ants continued their route until they reached the point where they should have found the nest; only then did they start to show search behaviour.
Path Integration Is a Mathematical and Cognitive Challenge
Besides an accumulator that keeps track of the current position, insects using path integration need some mechanism to store the path-integrator states of desired places (for example, feeding grounds) for later recall. In total, four quantities need to be held during navigation: the present state of the path integrator, the path-integration state of the goal, the output of a comparator, and the output vector. While this kind of guidance system is clearly based on vector computation, no neural circuits are known that could perform it.
Two Main Models for Path Integration
There are two main models of path integration. The first postulates that the accumulator is updated continuously, while the second suggests an intermittent update. In the continuous-update model, the accumulator state upon returning has to be the same as when leaving the nest; the coordinate system is therefore in-bound. In the intermittent-update model, the accumulator state is reset at the nest and at the point where the ant turns around to return. To return, the polarity of the compass is inverted and the ant heads in the direction of the opposite vector; the system is therefore out-bound. Studies suggest that the latter model is more probable, since the outward and inward vectors do not have to cancel exactly, leaving room for small errors.
Adjustment of Path Integration and Image-Matching
As discussed above, insects do not rely on path integration alone during navigation. Image-matching based on landmarks is the major means of orientation on familiar routes. Path integration and image-matching work independently but can also interact, adjusting each other: during travel directed by path integration, new landmarks can be learned, and during travel guided by image-matching, the navigation vector can be recalibrated.
Path Integration and Research
Insect navigation is also of great interest to robot developers. Several robots have already been constructed that use path integration as one of their main navigational systems, and the principle is also applied in driverless cars. Lambrinos et al. constructed a robot called "Sahabot" that orients itself using path integration and view-based matching. Navigation in wide open areas seems to be less complicated than in cluttered environments, which remain a subject of intense research. This field does not only push forward the development of machines but also helps to better understand human orientation: path integration principles are applied to the navigation of blind people, to patients with vestibular defects, and in theoretical work.
For proper and smooth orientation, remembering goes hand in hand with learning. Insects have to learn the position of a goal and the route to it; routes can also be learned from other individuals. Probably the best-known example of this behaviour are social bees (Apis mellifera).
Despite these three different orientation mechanisms, insects seem to lack a cognitive map. In fact, it is arguable whether path integration even provides evidence against such a map. However, one should keep in mind that a cognitive map in insects could exist based on as yet unknown mechanisms. Learning and the discussion of cognitive maps have not been covered here but may be of interest for further reading.
 Akesson, S., & Wehner, R. (2002). Visual navigation in desert ants Cataglyphis fortis: are snapshots coupled to a celestial system of reference? J Exp Biol, 205(Pt 14), 1971-1978.
 Cheng, K., & Freas, C. A. (2015). Path integration, views, search, and matched filters: the contributions of Rudiger Wehner to the study of orientation and navigation. J Comp Physiol A Neuroethol Sens Neural Behav Physiol, 201(6), 517-532. doi:10.1007/s00359-015-0984-9
 Cheng, K., & Newcombe, N. S. (2005). Is there a geometric module for spatial orientation? Squaring theory and evidence. Psychon Bull Rev, 12(1), 1-23.
 Collett, M. (2014). A desert ant's memory of recent visual experience and the control of route guidance. Proc Biol Sci, 281(1787). doi:10.1098/rspb.2014.0634
 Collett, M., Chittka, L., & Collett, T. S. (2013). Spatial memory in insect navigation. Curr Biol, 23(17), R789-800. doi:10.1016/j.cub.2013.07.020
 Collett, M., & Collett, T. S. (2000). How do insects use path integration for their navigation? Biol Cybern, 83(3), 245-259. doi:10.1007/s004220000168
 Graham, P., & Cheng, K. (2009). Ants use the panoramic skyline as a visual cue during navigation. Curr Biol, 19(20), R935-937. doi:10.1016/j.cub.2009.08.015
 Harris, R. A., Graham, P., & Collett, T. S. (2007). Visual cues for the retrieval of landmark memories by navigating wood ants. Curr Biol, 17(2), 93-102. doi:10.1016/j.cub.2006.10.068
 Lambrinos, D., Moller, R., Labhart, T., Pfeifer, R., & Wehner, R. (2000). A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems, 30(1-2), 39-64. doi:10.1016/S0921-8890(99)00064-0
 Lent, D. D., Graham, P., & Collett, T. S. (2013). Visual scene perception in navigating wood ants. Curr Biol, 23(8), 684-690. doi:10.1016/j.cub.2013.03.016
 Moller, R. (2000). Insect visual homing strategies in a robot with analog processing. Biological Cybernetics, 83(3), 231-243. doi:10.1007/PL00007973
 Wystrach, A., Beugnon, G., & Cheng, K. (2011). Landmarks or panoramas: what do navigating ants attend to for guidance? Front Zool, 8, 21. doi:10.1186/1742-9994-8-21
 Wystrach, A., Beugnon, G., & Cheng, K. (2012). Ants might use different view-matching strategies on and off the route. J Exp Biol, 215(Pt 1), 44-55. doi:10.1242/jeb.059584
The Visual System of Drosophila
The visual system gives animals the ability to perceive their environment; fast sensing of food sources or danger is important across species. The fruit fly Drosophila melanogaster (see Fig. 1) belongs to the invertebrates and is an important model organism for this group. Drosophila shares the ability to see with vertebrates such as humans. Comparing the visual system of invertebrates (Fig. 2) with that of vertebrates reveals many similarities in general structure and architecture, but also differences. The exact molecular mechanisms are not yet completely understood, but there appear to be mechanisms conserved between the species.
Two papers (Hakeda-Suzuki and Suzuki (2014) and Sanes and Zipursky (2010)) that focus on the Drosophila visual system will be discussed to cover the general structure as well as the differences and similarities to the vertebrate retina. The specific targeting steps of the photoreceptor cells will be explained using the example of the photoreceptor cells R7 and R8.
Drosophila belongs to the invertebrates and possesses a compound eye that consists of about 750 ommatidia (Fig. 3). One ommatidium comprises eight photoreceptor cells (R1 – R8), which differ in the rhodopsin they express and thereby in their function. The photoreceptor cells R1 – R6 perceive information about motion; they express rhodopsin Rh1, which responds to a broad spectrum of visible light. Colour vision is performed by R7 and R8: R7 expresses the UV-sensitive rhodopsins Rh3/Rh4, while R8 expresses Rh5/Rh6. Each of the 750 ommatidia contains all eight photoreceptor cells, arranged in a highly specific way, with R1 – R6 surrounding the centred R7 and R8. Incoming information that activates the photoreceptor cells is forwarded to the optic lobe, which consists of four distinct parts: lamina, medulla, lobula and lobula plate.
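The receptor-to-rhodopsin assignment described above amounts to a small lookup table. A sketch (the identifier names and role labels are ours; both rhodopsin options are listed for R7 and R8, since which one is expressed depends on the ommatidium subtype):

```python
# Rhodopsin expression per photoreceptor cell of one ommatidium, as
# described in the text. R7 expresses Rh3 or Rh4, and R8 expresses Rh5 or
# Rh6, depending on the ommatidium subtype, so both options are listed.
RHODOPSINS = {
    **{f"R{i}": ("Rh1",) for i in range(1, 7)},  # R1-R6: broad-spectrum Rh1
    "R7": ("Rh3", "Rh4"),                        # UV-sensitive rhodopsins
    "R8": ("Rh5", "Rh6"),
}

MOTION_CELLS = {f"R{i}" for i in range(1, 7)}    # outer cells of the ommatidium

def role(cell):
    """Illustrative functional label: R1-R6 motion vision, R7/R8 colour vision."""
    return "motion" if cell in MOTION_CELLS else "colour"
```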
The lamina is organized into radially distinct areas called cartridges. The curvature of the compound eye causes the outer photoreceptor cells R1 – R6 of a single ommatidium to perceive information from different spatial locations. To account for this and to increase sensitivity, the R1 – R6 cells from different, neighbouring ommatidia that view the same point are guided to one and the same cartridge within the lamina, which maintains retinotopy. This process is called neural superposition. R7 and R8 do not face this problem, as they are located in the centre of the ommatidium. The photoreceptor cells R1 – R6 terminate in the lamina and connect to the lamina neurons by forming synapses; those lamina neurons forward the information into the medulla. R7 and R8 do not form synapses in the lamina and project further into the medulla.
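Neural superposition can be sketched as a wiring rule in a toy one-dimensional eye: each of R1 – R6 views a point offset from its own ommatidium's axis, and every receptor viewing the same point is routed to the same cartridge. The offsets and eye size below are purely illustrative, not the fly's actual geometry.

```python
# Toy one-dimensional sketch of neural superposition. Receptor Ri of
# ommatidium o views a point offset from the ommatidium's own axis, and
# every receptor that views the same point is wired to the same lamina
# cartridge. Offsets and the number of ommatidia are illustrative only.
N_OMMATIDIA = 10
OFFSET = {f"R{i}": i - 3 for i in range(1, 7)}   # R1..R6 view offsets -2..+3

def cartridge(ommatidium, receptor):
    """Lamina cartridge targeted by (ommatidium, receptor): the cartridge
    belonging to the point in space that this receptor actually views."""
    return ommatidium + OFFSET[receptor]

# The six receptors converging on cartridge 5 come from six different,
# neighbouring ommatidia: together they sample the same point in space.
sources = [(o, r) for o in range(N_OMMATIDIA) for r in OFFSET
           if cartridge(o, r) == 5]
```

The point of the sketch is that one cartridge pools six receptors, one from each of six neighbouring ommatidia, which is how sensitivity increases while retinotopy is preserved.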
The medulla consists of ten distinct layers, M1 – M10, and is radially differentiated into columns. R7 projects specifically into layer M6, whereas R8 projects into M3. R7 and R8 from the same ommatidium target the same medulla column as the lamina neurons from the corresponding cartridge. The lamina neurons also show stereotypic connection patterns. The lamina neurons, as well as the photoreceptor cells R7 and R8, form synapses onto neurons that carry the visual information out of the medulla into the lobula and lobula plate. As the information passes through the lamina, medulla, lobula and lobula plate, it is computed, allowing visual perception of the environment.
Comparison invertebrate and vertebrate retina
In general, there are major similarities between the visual systems of vertebrates and invertebrates, the most important being the general structuring into layers and radial columns. The photoreceptor cells of the vertebrate visual system are called rods and cones. Rods are responsible for light sensitivity and motion detection; their function corresponds to that of the photoreceptor cells R1 – R6 in the invertebrate visual system. Cones are necessary for colour sensation, a role corresponding to the R7 and R8 photoreceptor cells.
One major difference is the presence of synapses in the retina of vertebrates. As already stated, Drosophila has no synapses in the retina; all synaptic connections of the photoreceptor cells are located in the lamina (R1 – R6) or the medulla (R7 and R8). The vertebrate retina also shows a greater variety of cell types: it contains not only photoreceptor cells but also horizontal cells, bipolar cells, amacrine cells and ganglion cells. This leads to extensive connectivity between the cell types and results in five distinct layers that can be distinguished by whether they contain cell bodies or synapses. No such horizontal layering develops in the retina of the fly visual system. In vertebrates, most visual computation takes place in the retina, whereas in invertebrates the computation of visual input is divided between retina, lamina and medulla.
Development: stepwise targeting of R7 and R8
The development of the insect visual system is illustrated here with the example of the stepwise targeting of the photoreceptor cells R7 and R8. All photoreceptor cells target in a highly specific way, in the lamina in the case of R1 – R6, or the medulla in the case of R7 and R8. R7 targets the medulla layer M6, whereas R8 finally targets the medulla layer M3. In the larval instar, the starting point of the developing compound eye is called the eye disc. The photoreceptor cells in the retina start to differentiate, but at the beginning of the larval instar the lamina and medulla are not yet innervated by axons. Throughout the larval instar the photoreceptors differentiate from the posterior to the anterior tip. The first photoreceptor to differentiate is R8, which induces the differentiation of the remaining photoreceptor cells in a specific order. The axon of R8 constitutes a pioneer axon along which the axons of R1 – R7 from the same ommatidium can orient themselves to reach the lamina. R1 – R6 undergo synaptogenesis to connect to lamina neurons; their specific targeting follows the principle of neural superposition, as described before. By growing towards the lamina, the axons build a connection between the eye and the brain. The positioning of the photoreceptor cells with respect to each other is maintained throughout growth to ensure proper perception of the environment; topography is thereby preserved. R7 and R8 continue growing until they reach the medulla. R8 photoreceptor cells pause at the apical surface of the developing medulla, called the R8 temporary layer. R7 photoreceptor cells, which follow the R8 pioneer axon, continue growing until they reach the R7 temporary layer, located basally with respect to the R8 temporary layer. Lamina neurons follow by extending axons towards the medulla; when they reach it, they start to branch radially and horizontally.
They establish distinct layers between the R8 temporary layer and the R7 temporary layer and thereby increase the distance between them. Only after all lamina neurons have innervated the medulla do R7 and R8 start growing again. They extend until they reach their final target layer by sending out thin processes called filopodia, and in their final target layer they undergo synaptogenesis. Many different molecular cues are involved in these targeting steps and in the specific targeting of one distinct medulla layer, but they are not yet fully understood.
S. Hakeda-Suzuki, T. Suzuki (2014) Cell surface control of the layer specific targeting in the Drosophila visual system. Genes Genet. Syst. 89: 9-15
J. Sanes, S. Zipursky (2010) Design Principles of Insect and Vertebrate Visual Systems. Neuron 66: 15-36