Robotics/Sensors/Digital image Acquisition

Image Sensors

Two types of image sensors are commonly used to digitally acquire an image: CCD and CMOS. While both can produce similar image quality, they differ greatly in how they work and in their other characteristics.[3]

CCD

A CCD, or Charge-Coupled Device, is an older, analog technology. A photoelectric surface coating the CCD generates an electric charge when light hits it, and that charge is transferred to and stored in a capacitive bin below the surface of each pixel.[2] The CCD then functions like a shift register: by applying a voltage in a cascading motion across the surface, it moves the charge in each pixel's bin one spot over. The charge that reaches the edge of the CCD is passed to an analog-to-digital converter, which turns it into a digital value for each pixel. This process is relatively slow because every pixel's charge has to be shifted to the edge of the CCD before it can be converted into digital information.[4]
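To make the cascading transfer concrete, here is a minimal sketch in Python with NumPy that simulates CCD-style serial readout: each row's charge packets are shifted one spot at a time toward a single output node and digitized as they arrive. The array size, full-well value, and bit depth are illustrative assumptions, not parameters of any real sensor.

import numpy as np

def ccd_readout(charge, full_well=1.0, bit_depth=8):
    """Simulate serial CCD readout: shift each row's charge packets
    one column at a time toward a single output node and digitize them."""
    rows, cols = charge.shape
    levels = 2 ** bit_depth - 1
    image = np.zeros((rows, cols), dtype=np.uint16)
    for r in range(rows):
        row = charge[r].copy()
        for step in range(cols):
            # The packet at the edge reaches the output amplifier and ADC.
            out = min(row[-1], full_well)
            image[r, cols - 1 - step] = int(out / full_well * levels)
            # Cascading voltages move every remaining packet one spot over.
            row = np.roll(row, 1)
            row[0] = 0.0  # nothing flows in behind the last packet
    return image

# Example: digitize a small frame of simulated photo-generated charge.
charge = np.random.rand(4, 6)
print(ccd_readout(charge))

Note that this toy model takes one step per pixel to reach the converter, which is the serial bottleneck described above.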

CMOS

A CMOS image sensor is a type of Active-Pixel Sensor (APS) built from complementary metal–oxide–semiconductor (CMOS) circuitry. CMOS sensors can generate a digital image much faster than a CCD and consume less power. They can also be made larger than a CCD, allowing higher-resolution images, and can be manufactured using cheaper processes. Each pixel in a CMOS sensor contains a photodetector and an amplifier.[4]

The simplest type of CMOS image sensor is the 3T model, in which each pixel consists of three transistors and a photodiode. The Mrst transistor clears the pixel's value and resets it to acquire a new image. The Msf transistor buffers and amplifies the value from the photodiode until the pixel is read and reset. The Msel transistor is the pixel-select transistor: it puts the pixel's value onto the bus only when the device is reading the row that pixel is in. In this model the data is gathered in parallel over shared buses: all pixels in a column share the same bus, and the data is read out one row at a time, which moves the charge values to the digital converter faster than a CCD's serial shifting (as the sketch below illustrates).

Other variations of the CMOS sensor help reduce image lag and noise. Image lag occurs when some of the previous image remains in the current one, usually because a pixel was not fully reset and some of its previous charge persists. Image noise describes how accurately the amount of light hitting each pixel is measured; keeping it low is particularly important in robotics.
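To illustrate why row-parallel readout is faster, here is a minimal sketch in Python with NumPy. The class and method names, array size, and full-well value are invented for the example: selecting one row places every pixel in that row onto its column bus at once, so a whole row is digitized per step rather than a single pixel.

import numpy as np

class Cmos3TArray:
    """Toy model of a 3T active-pixel array: reset, integrate, then
    read out one full row per step over shared column buses."""

    def __init__(self, rows, cols, full_well=1.0):
        self.rows, self.cols = rows, cols
        self.full_well = full_well
        self.charge = np.zeros((rows, cols))

    def reset(self):
        # Mrst clears every pixel; an incomplete reset here is what
        # produces image lag (leftover charge from the previous frame).
        self.charge[:] = 0.0

    def integrate(self, light, exposure=1.0):
        # Each photodiode accumulates charge in proportion to the light.
        self.charge += light * exposure

    def read_frame(self, bit_depth=8):
        levels = 2 ** bit_depth - 1
        codes = np.zeros((self.rows, self.cols), dtype=np.uint16)
        for r in range(self.rows):
            # Msel connects row r to the column buses while Msf buffers
            # each pixel's value, so all columns are digitized at once:
            # one step per row, not one step per pixel as in a CCD.
            voltages = np.clip(self.charge[r], 0.0, self.full_well)
            codes[r] = (voltages / self.full_well * levels).astype(np.uint16)
        return codes

# Example: reset, expose, and read a small frame.
sensor = Cmos3TArray(4, 6)
sensor.reset()
sensor.integrate(np.random.rand(4, 6))
print(sensor.read_frame())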

Color Images

Neither type of image sensor natively measures color; each simply converts the amount of light, regardless of color, into a digital value.[1] There are several ways to gather color data. Common approaches are a color filter array, the specialized Foveon X3 sensor, and a trichroic prism paired with three image sensors.

Color Filter Array

The most common method is to use a color filter array. The most widely used filter is the Bayer filter, developed by Kodak researcher Bryce Bayer and patented in 1976. A color filter array filters the light entering each pixel so that the pixel detects only one of the primary colors; the full-color image is later reconstructed by combining the colors measured at neighboring pixels.[1] The Bayer filter uses a pattern of 50% green, 25% red, and 25% blue to match the human eye's sensitivity to the three primary colors, and the pattern repeats over blocks of 4 pixels.[1] Because only one color is known at each pixel, some image fidelity is lost during the reconstruction process, called demosaicing (sketched below): edges of objects can appear jagged and show non-uniform color. Besides the Bayer filter, many other filter patterns can achieve the same result.[1]

The main drawback of any filter is that it reduces the amount of light reaching each photodetector, and therefore each pixel's sensitivity. This causes problems in low light, where the photodetectors do not receive enough light to produce a reliable charge, resulting in large amounts of noise. To reduce this effect, another type of filter leaves some pixels unfiltered. These panchromatic filters mimic the human eye, which has separate detectors for color and for light and dark. They perform much better in low light, but their pattern needs a much larger area to repeat than a traditional Bayer filter's, which costs some fidelity.
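As a rough illustration of the Bayer pipeline, here is a sketch in Python with NumPy that applies a Bayer mosaic to an RGB image and then reconstructs it with simple bilinear demosaicing, the step where edge artifacts arise. The RGGB tile ordering is an assumption for the example; real cameras vary in pattern ordering.

import numpy as np

def bayer_mosaic(rgb):
    """Sample one color per pixel using an RGGB tile:
    R at (even,even), G at (even,odd) and (odd,even), B at (odd,odd)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red, 25% of pixels
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green, 50% in total
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue, 25% of pixels
    return mosaic

def demosaic_bilinear(mosaic):
    """Reconstruct the two missing colors at each pixel by averaging
    the nearest samples of that color -- where fidelity is lost."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True
    masks[0::2, 1::2, 1] = True
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True
    padded = np.pad(mosaic, 1)
    pmasks = np.pad(masks, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, w, 3))
    for c in range(3):
        for y in range(h):
            for x in range(w):
                if masks[y, x, c]:
                    out[y, x, c] = mosaic[y, x]
                else:
                    # Average whichever 3x3 neighbors carry channel c.
                    vals = padded[y:y + 3, x:x + 3][pmasks[y:y + 3, x:x + 3, c]]
                    out[y, x, c] = vals.mean() if vals.size else 0.0
    return out

# Example: mosaic and reconstruct a small random image.
rgb = np.random.rand(8, 8, 3)
reconstructed = demosaic_bilinear(bayer_mosaic(rgb))

Comparing the reconstructed array against the original along sharp edges makes the fidelity loss described above easy to see.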

Image Transfer

The two most common ways of connecting cameras to a robot are USB and FireWire (IEEE 1394). Apple designed FireWire with audio and video transfer in mind, which gave it a greater effective speed and higher sustained data-transfer rates than USB, both of which matter for streaming. FireWire can also supply more power to devices than USB, and it can operate without a host computer: devices can communicate with each other directly over FireWire, with no computer to mediate.[5]
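In practice, a robot usually receives frames through a driver API rather than by reading the sensor directly. The following is a minimal sketch in Python, assuming OpenCV is installed and a USB (UVC) camera enumerates as device 0; none of this is specific to any particular camera.

import cv2

# Device index 0 is an assumption; FireWire (IEEE 1394) cameras usually
# need a vendor SDK or a library such as libdc1394 rather than this path.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("could not open camera 0")

for _ in range(10):                      # grab a short burst of frames
    ok, frame = cap.read()               # frame arrives as a BGR NumPy array
    if not ok:
        break
    print("frame shape:", frame.shape)   # e.g. (480, 640, 3)

cap.release()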

References

1. Color Filter Array

2. Charge-Coupled Devices (CCDs)

3. How Digital Cameras Work

4. What is the difference between CCD and CMOS image sensors in a digital camera?

5. How FireWire Works