Basic Physics of Digital Radiography/The Computer

Computers are widely used in many areas of radiology. The main application for our purposes is the display and storage of digital radiography images. This chapter presents a basic description of a general-purpose computer, outlines the design of a generalised digital image processor and gives a brief introduction to digital imaging before describing some of the more common image processing applications.

Before considering these topics, some general comments are required about the form in which information is handled by computers as well as the technology which underpins the development of computers so that a context can be placed on the discussion.

Binary Representation


Virtually all computers in use today are based on the manipulation of information which is coded in the form of binary numbers. Each digit of a binary number can have only one of two values, i.e. 0 or 1, and these digits are referred to as binary digits - or bits, to use computer jargon. When a piece of information is represented as a sequence of bits, the sequence is referred to as a word, and when the sequence contains eight bits, the word is referred to as a byte - the byte being widely used today as the basic unit for expressing amounts of binary-coded information. In addition, large volumes of coded information are generally expressed in terms of kilobytes, megabytes, gigabytes etc. It is important to note that the meanings of these prefixes differ slightly from their conventional meanings because of the binary nature of the information coding. As a result, kilo in computer jargon represents 1024 units - 1024 (or 2¹⁰) being the nearest power of 2 to one thousand. Thus, 1 kbyte refers to 1024 bytes of information and 1 Mbyte represents 1024 times 1024 bytes, and so on.

Binary coding of image information is needed in order to store images in a computer. Most imaging devices used in medicine, however, generate information which can assume a continuous range of values between preset limits, i.e. the information is in analogue form. It is therefore necessary to convert this analogue information into the discrete form required for binary coding when images are input to a computer. This is commonly achieved using an electronic device called an Analogue-to-Digital Converter (ADC).

The development of modern computers has been almost totally dependent on major developments in materials science and digital electronics which have occurred in the latter half of the 20th century. These developments have allowed highly complex electronic circuitry to be compressed into small packages called integrated circuits. These packages contain tiny pieces of silicon (or other semiconductor material) which have been specially manufactured to perform complex electronic processes. These pieces of silicon are generally referred to as silicon chips or microchips. Within the circuitry of a chip, a high voltage can be used to represent the binary digit 1 and a low voltage the binary digit 0. Thus the circuitry can be used to manipulate information which is coded in the form of binary numbers.

An important feature of these electronic components is the very high speed at which the two voltage levels can be switched in different parts of the circuitry. This allows the computer to manipulate the binary information very rapidly. Furthermore, the tiny size of modern integrated circuits has allowed the manufacture of computers which are physically very small and which do not generate excessive amounts of heat - previous generations of computers occupied whole rooms and required cooling because they were built using larger electronic components such as valves and transistors. Modern computers can therefore sit on a desk, for instance, in an environment which does not require dedicated air-conditioning. In addition, the ability to manufacture integrated circuits using mass-production methods has given rise to enormous decreases in cost - which has contributed to the phenomenal expansion of this technology into the mobile phone/computer market in the early 21st century.

It is worth noting that information in this chapter is likely to have changed by the time the chapter is read, given the ongoing, rapid developments in this field. The treatment here is therefore focused on general concepts - and the reader should note that current technologies and techniques may well differ from those described here. In addition, note that mention of any hardware or software product in this chapter in no way implies endorsement of that product; its use in this discussion is purely for illustrative purposes.

General-Purpose Computer


The following figure shows a block diagram of the major hardware components of a general-purpose computer. The diagram illustrates that a computer consists of a central communication pathway, called a bus, to which dedicated electronic components are connected. Each of these components is briefly described below.

Block diagram of components of a general-purpose computer.
  • Central Processing Unit (CPU): This is generally based on an integrated circuit called a microprocessor. Its function is to act as the brains of the computer where instructions are interpreted and executed, and where data is manipulated. The CPU typically contains two sub-units - the Control Unit (CU) and the Arithmetic/Logic Unit (ALU). The Control Unit is used for the interpretation of instructions which are contained in computer programs as well as the execution of such instructions. These instructions might be used, for example, to send information to other components of the computer and to control the operation of those devices. The ALU is primarily used for data manipulation using mathematical techniques - for example, the addition or multiplication of numbers - and for logical decision-making.
  • Main Memory: This typically consists of a large number of integrated circuits which are used for the storage of information which is currently required by the computer user. The circuitry is generally of two types - Random Access Memory (RAM) and Read Only Memory (ROM). RAM is used for the short-term storage of information. It is a volatile form of memory since its information contents are lost when the electric power to the computer is switched off. Its contents can also be rapidly erased - and rapidly filled again with new information. ROM, on the other hand, is non-volatile and is used for the permanent storage of information which is required for basic aspects of the computer's operation.
  • Secondary Memory: This is used for the storage of information in permanent or erasable form for longer-term purposes, i.e. for information which is not currently required by the user but which may be of use at some later stage. There are various types of devices used for secondary memory which include hard disks, CD-ROMs and flash drives.
  • Input/Output Devices: These are used for user-control of the computer and generally consist of a keyboard, visual display and printer. They also include devices, such as the mouse, joystick and trackpad, which are used to enhance user-interaction with the computer.
  • Computer Bus: This consists of a communication pathway for the components of the computer - its function being somewhat analogous to that of the central nervous system. The types of information communicated along the bus include that which specifies data, control instructions as well as the memory addresses where information is to be stored and retrieved. As might be anticipated, the speed at which a computer operates is dependent on the speed at which this communication link works. This speed must be compatible with that of other components, such as the CPU and main memory.
  • Software: There is a lot more to computer technology than just the electronic hardware. In order for the assembly of electronic components to operate, information in the form of data and computer instructions is required. This information is generally referred to as software. Computer instructions are generally contained within computer programs. Categories of computer program include:
  • a text editor for writing the text of the program into the computer (which is similar to programs used for word-processing);
  • a library of subroutines - which are small programs for performing specific common functions;
  • a linker which is used to link the user-written program to the subroutine library;
  • a compiler for translating user-written programs into a form which can be directly interpreted by the computer, i.e. it is used to code the instructions in digital format.

Digital Image Processor


Computers used for digital image processing generally consist of a number of specialised components in addition to those used in a general-purpose computer. These specialised components are required because of the very large amount of information contained in images and the consequent need for high capacity storage media as well as very high speed communication and data manipulation capabilities.

Block diagram of the components of a digital image processor. The yellow components are those of a general-purpose computer.

Digital image processing involves both the manipulation of image data and the analysis of such information. An example of image manipulation is the computer enhancement of images so that subtle features are displayed with greater clarity. An example of image analysis is the extraction of indices which express some functional aspect of an anatomical region under investigation. Most digital radiography systems provide extensive image manipulation capabilities with a limited range of image analysis features.

A generalised digital image processor is shown in the following figure. The yellow components at the bottom of the diagram are those of a general-purpose computer which have been described above. The digital image processing components are those which are connected to the image data bus. Each of these additional components is briefly described below.

  • Imaging System: This is the device which produces the primary image information, i.e. the image receptor and associated electronics. The image system is often physically separate from the other components of the image processor. Image information produced by the imaging system is fed to the image acquisition circuitry of the digital image processor. Connections from the digital image processor to the imaging system are generally also present, for controlling specific aspects of the operation of the imaging system, e.g. movement of a C-arm and patient table in a fluoroscopy system.
  • Image Acquisition: This circuitry is used to convert the analogue information produced by the imaging system so that it is coded in the form of binary numbers. The type of device used for this purpose is called an Analogue-to-Digital Converter (ADC). The image acquisition device may also include circuitry for manipulating the digitised data so as to correct for any aberrations in the image data. The type of circuitry which can be used for this purpose is called an Input Look-Up Table. An example of this type of data manipulation is logarithmic image transformation in digital radiography.
  • Image Display: This device is used to convert digital images into a format which is suitable for display. The image display unit may also include circuitry for manipulating the displayed images so as to enhance their appearance. The type of circuitry which can be used for this purpose is called an Output Look-Up Table and examples of this type of data manipulation include windowing. Other forms of image processing provided by the image display component can include image magnification and the capability of displaying a number of images on the one screen. The device can also allow for the annotation of displayed images with the patient name and details relevant to the patient's examination.
  • Image Memory: This typically consists of a volume of RAM which is sufficient for the storage of a number of images which are of current interest to the user.
  • Image Storage: This generally consists of magnetic disks of sufficient capacity to store large numbers of images which are not of current interest to the user and which may be transferred to image memory when required.
  • Image ALU: This consists of an ALU designed specifically for handling image data. It is generally used for relatively straight-forward calculations, such as image subtraction in digital subtraction angiography (DSA) and the reduction of noise through averaging a sequence of images.
  • Array Processor: This consists of circuitry designed for more complex manipulation of image data and at higher speeds than the Image ALU. It typically consists of an additional CPU as well as specialised high speed data communication and storage circuitry. It may be viewed as a separate special-purpose computer whose design has traded a loss of operational flexibility for enhanced computational speed. This enhanced speed is provided by the capability of manipulating data in a parallel fashion as opposed to sequential processing - which is the approach used in general-purpose computing - although this distinction is becoming redundant with developments such as multicore CPUs. This component is used, for example, for calculating Fast Fourier Transforms and for reconstruction calculations in cone-beam CT.
  • Image Data Bus: This component consists of a very high speed communication link designed specifically for image data.

Note that many of the functions described above have today been encapsulated in a single device called a graphics processing unit (GPU), which has been incorporated into many computer designs, e.g. the iMac computer.

Digital Images


The digitisation of images consists of two processes - sampling and quantisation. These generally happen concurrently and are described briefly below.

Illustration of a digital image, obtained when an original, consisting of a central dark region with the brightness increasing towards the periphery, is digitised with N=8 and G=4 (i.e. m=2).

Image sampling is the process used to digitise the spatial information in an image. It is typically achieved by dividing an image into a square or rectangular array of sampling points - see the following figure. Each of the sampling points is referred to as a picture element - or pixel, to use computer jargon. In the context of DR image receptors, the term detector element, or del, is also used. Naturally, the larger the number of pixels or dels, the closer the spatial resolution of the digitised image approximates that of the radiation pattern transmitted through the patient – see the following figure, panels (a) and (b).

The process may be summarised as the digitisation of an image into an N x N array of pixel data. Examples of values for N are 1,024 for an angiography image and 3,000 for a digital radiograph.

Note that each pixel represents not a point in the image but rather an element of a discrete matrix. Distance along the horizontal and vertical axes is no longer continuous, but instead proceeds in discrete steps, each given by the pixel size. With larger pixels, not only is the spatial resolution poor, since there is no detail displayed within a pixel, but grey-level discontinuities also appear at the pixel boundaries (pixelation) - see panel (b) in the figure. The spatial resolution improves with smaller pixels and a perceived lack of pixelation gives the impression of a spatially continuous image to the viewer.

The pixel size for adequate image digitisation is of obvious importance. In order to capture fine detail, the image needs to be sampled at an adequate sampling frequency, fs. This frequency is given theoretically by the Nyquist–Shannon sampling theorem which indicates that it should be at least twice the highest spatial frequency, fmax, contained in the image, i.e.

fs ≥ 2 fmax

Thus, for example, when a digital imager has a pixel size of x mm, the highest spatial frequency that can be adequately represented is given by:

fN = (2x)⁻¹ line pairs/mm,

where fN is called the Nyquist frequency.

A line pair consists of a strip of radio-opaque material alongside a strip of radio-lucent material of equal width, as we will see in the next chapter. Put simply, therefore, at least two pixels are required to adequately represent a line pair. Thus, when a 43 cm x 43 cm image receptor is digitised to 3,000x3,000 pixels, the highest frequency that can be adequately represented is ~3.5 LP/mm. When an image contains frequencies above this, they are under-sampled upon digitisation and appear as lower frequency information in the digital image. This is known as aliasing and gives rise to false frequencies as far below the Nyquist frequency as they are in actuality above it. In other words, high frequencies are folded back around the Nyquist frequency to appear in the digital image at frequencies less than fN.
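As a quick check of the figure quoted above, the short calculation below estimates the Nyquist frequency for a 43 cm receptor digitised to a 3,000-pixel matrix. It is a minimal sketch in Python; the variable names are illustrative.

```python
# Nyquist frequency of a digital image receptor (illustrative values only)
receptor_width_mm = 430.0                 # 43 cm field of view
matrix_size = 3000                        # pixels across that width

pixel_size_mm = receptor_width_mm / matrix_size      # ~0.14 mm
nyquist_lp_per_mm = 1.0 / (2.0 * pixel_size_mm)      # fN = (2x)^-1

print(f"Pixel size: {pixel_size_mm:.3f} mm")
print(f"Nyquist frequency: {nyquist_lp_per_mm:.1f} lp/mm")   # ~3.5 lp/mm
```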

In practice, almost all digital images are under-sampled to some degree. A slightly larger pixel size than is optimal theoretically could be used, for instance, which would allow most of the frequency content to be digitised adequately. Note that significant aliasing artifacts occur when fine repetitive patterns, e.g. a resolution test object, are imaged and can result in Moiré patterns. Filtering the analogue image, so as to remove higher spatial frequency content prior to digitisation, can be used to reduce this effect. Note that aliasing effects can also occur when a stationary grid is used with a digital image receptor because of interference between the grid lines and the image matrix. A grid pitch of at least 70 lines/cm is therefore used.

A digitised chest radiograph displayed with image resolutions of (a) 256x256x8 bits, (b) 32x32x8 bits, (c) 256x256x2 bits.

Image quantisation is the process used to digitise the brightness information in an image. It is typically achieved by representing the brightness of a pixel by an integer whose value is proportional to the brightness. This integer is referred to as a 'pixel value' and the range of possible pixel values which a system can handle is referred to as the grey scale. Naturally, the greater the grey scale, the closer the brightness information in the digitised image approximates that of the original image – see the following figure, panels (a) and (c). The process can be considered as the digitisation of image brightness into G shades of grey. The value of G is dependent on the binary nature of the information coding. Thus G is generally an integer power of 2, i.e. G = 2ᵐ, where m is an integer which specifies the number of bits required for storage. Examples of values of G are 1,024 (m=10) in fluoroscopy, 2,048 (m=11) in angiography and 4,096 (m=12) in digital radiography. Note that the slight difference between the brightness in an analogue image and its pixel value in the digital image is referred to as the quantisation error, and is lower at larger values of G.

The number of bits, b, required to represent an image in digital format is given by:

b = N x N x m.

A 512x512x8-bit image therefore represents 0.25 Mbytes of memory for storage and a 3,000x3,000x12-bit image represents ~13 Mbytes. Relatively large amounts of computer memory are therefore needed to store digital radiography images and processing times can be relatively long when manipulating large volumes of such data. This feature of digital images gives rise to the need for dedicated hardware for image data which is separate from the components of a general-purpose computer - although this distinction may vanish with future technological developments.
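The two storage figures quoted above follow directly from the formula b = N x N x m. The short sketch below reproduces the arithmetic in Python; the function name and the convention of 1 Mbyte = 1024 x 1024 bytes follow the definition given earlier in this chapter.

```python
def image_size_mbytes(n, m):
    """Storage needed for an N x N image with m bits per pixel,
    expressed in Mbytes (1 Mbyte = 1024 x 1024 bytes)."""
    bits = n * n * m
    return bits / 8 / (1024 * 1024)

print(image_size_mbytes(512, 8))     # 0.25 Mbytes
print(image_size_mbytes(3000, 12))   # ~12.9 Mbytes, i.e. ~13 Mbytes
```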

Digital Image Processing


The dimensions in digital radiographs which can be manipulated using a digital image processor are those which involve alterations of image brightness and of spatial representation. Image brightness adjustments include Contrast Enhancement and Histogram Processing, while the spatial adjustments include Spatial Frequency and Spatial Domain Processing. We will consider each of these below.

Contrast Enhancement

Fig. 5.5: Gray-level transformation for contrast enhancement of images with 256 shades of gray (i.e. m=8 bits). In this example, the unprocessed data is transformed so that all pixels with a pixel value less than 50 are displayed as black, all pixels with a pixel value greater than 150 are displayed as white and all pixels with pixel values between 50 and 150 are displayed using an intermediate shade of gray.
Contrast Enhancement is widely applied throughout medical imaging and will serve here as an example of basic methods. This form of processing (commonly referred to as Windowing) is described below and is a form of grey-level transformation where the real pixel value is replaced by a new pixel value for display purposes. The process is generally performed using the Output Look-Up Table section of the image display component - see Figure 5.2. As a result, the original data in image memory is not affected by the process, so that, from an operational viewpoint, the original image data can be readily retrieved in cases where an unsatisfactory output image is obtained. In addition, the process can be implemented at very high speed using modern electronic techniques so that, once again from an operational viewpoint, user-interactivity is possible.
An example of a Look-Up Table (LUT) which can be used for contrast enhancement is illustrated in Figure 5.5. The process is controlled typically by two controls on the console of the digital image processor - the Level and Window. It should be noted that variations in the names for these controls, and in their exact operation, can exist with different systems but the general approach described here is sufficient for our purposes. It is seen in the figure that the level controls the threshold value below which all pixels are displayed as black and the window controls a similar threshold value for a white output. The simultaneous use of the two controls allows the application of a gray-level window, of variable width, which can be placed anywhere along the gray scale. Subtle gray-level changes within images can therefore be enhanced so that they are displayed with greater clarity - see Figure 5.6.
Fig. 5.6: Examples of contrast enhancement applied to a radiograph of the wrist: (a) unprocessed image, (b) window and level adjusted and (c) an inverted grey scale.
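The sketch below illustrates the type of linear window/level look-up table shown in Fig. 5.5, assuming 8-bit data; the function name and the default level and window values (which reproduce the 50 and 150 thresholds of that figure) are purely illustrative, and real consoles may define these controls differently.

```python
import numpy as np

def window_level(pixels, level=100, window=100, g_max=255):
    """Linear window/level look-up table: values below the window are shown as
    black, values above as white, and values inside the window are spread over
    the full grey scale (a sketch only; systems differ in their exact definitions)."""
    lower = level - window / 2.0          # e.g. 50 in Fig. 5.5
    upper = level + window / 2.0          # e.g. 150 in Fig. 5.5
    out = (pixels.astype(float) - lower) / (upper - lower) * g_max
    return np.clip(out, 0, g_max).astype(np.uint8)

# Example: apply the window of Fig. 5.5 to a synthetic 8-bit image
image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
enhanced = window_level(image, level=100, window=100)
```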
Fig. 5.7: Examples of subtle contrast enhancement curves.
More sophisticated look-up tables can be used for more subtle effects. Figure 5.7 shows example curves for high- and low-latitude displays, where dark pixels can be preferentially boosted and bright pixels suppressed; an inverted grey scale is also illustrated.
Look-up tables can also be used to display grey scale images in pseudo-colours. This is particularly useful in 3D visualization where a colour scale ranging gradually from yellow to red can be used to display bones and tissues, for example. Such colour scales can be generated using a Colour Look-Up Table (CLUT).

Histogram Processing

Contrast enhancement can also be effected through manipulation of the image's statistical histogram and is illustrated in Figure 5.8. A histogram is a plot of the frequency of occurrence of each pixel value in an image - see panel (a) for an example, where this frequency is plotted as a function of pixel value. It can be seen that pixels from the black surroundings of the hand are indicated by the peak at low pixel values. It can also be seen that the pixel values representing bone and tissue attenuation form a broad range of lower frequencies extending to just over half the grey scale. Indeed the pixels of the 'L' marker can be seen to form an isolated blip at a pixel value of 75. Note that the term frequency as used here should not be confused with the term spatial frequency we have been using in our discussion of Fourier techniques.
Manipulating such histogram data can be used for contrast enhancement by redistributing the pixel values to generate, for instance, a better utilisation of the grey scale. The process of Histogram Equalisation is illustrated in Figure 5.8, panel (b), along with its histogram - panel (c). It can be seen that the process broadens the frequency distribution so that it now spans the full range of the grey scale. Notice also that the process can generate absent pixel values.
Fig. 5.8(a): Histogram of pixel values for the image in panel (c) of the figure above.
Fig. 5.8(b): The wrist radiograph following contrast equalisation.
Fig. 5.8(c): Histogram for the contrast equalised radiograph.
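A minimal sketch of histogram equalisation is given below, using the normalised cumulative histogram as the look-up table; the function name and the assumption of an 8-bit (256 grey level) image are illustrative, and clinical systems use considerably more refined versions of this idea.

```python
import numpy as np

def equalise(image, g=256):
    """Histogram equalisation: map each pixel value through the normalised
    cumulative histogram so that the grey scale is used more evenly (sketch only)."""
    hist, _ = np.histogram(image.flatten(), bins=g, range=(0, g))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to the range 0..1
    lut = (cdf * (g - 1)).astype(np.uint8)              # new value for each old value
    return lut[image]

image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
equalised = equalise(image)
```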

Spatial Frequency Processing

Fourier methods are used in digital image processing to enhance the display of image detail. Here the 2D-FFT is manipulated so that certain spatial frequencies are boosted in the displayed image to enhance, for instance, the display of the borders of features, i.e. Edge Enhancement. Spatial frequencies can also be suppressed to smooth image detail, reducing the prominence of features and the mottled appearance of image noise. The general process is illustrated in Figure 5.9. It involves transforming the image data into the spatial frequency domain using the Fast Fourier Transform (FFT), manipulating these frequencies by applying a spatial-frequency filter and then re-transforming the data back into the spatial domain using the Inverse FFT (IFT). The parameters which define the filter generate different effects.
Fig. 5.9: Illustration of the process of Fourier filtering.
Fourier filtering is illustrated in more detail in Figure 5.10. The wrist/hand radiograph of Figure 5.6 is again used for this illustration. You might remember that its 2D FFT was discussed in an earlier chapter. The filter is shown in panel (c) of the figure in the form of a two-dimensional image. It can be seen that the image data is symmetrical around the centre (i.e. isotropic) where its pixel values are relatively low. Outside this central dark region, a halo of bright pixel values dominates the image as the pixel value trails off slowly towards the periphery. An amplitude profile through this image is shown in panel (d) to further illustrate the effect. The filter can be used to modify the 2D-FFT by multiplying it by the filter values, for instance, to form a filtered 2D-FFT - as shown in panel (e). The Inverse FFT (IFT) of this data then reveals the filtered image - see panel (f). Given that certain spatial frequencies have been amplified while others have been suppressed with this type of filter, it is called a Bandpass filter.
Fig. 5.10: Fourier filtering: (a) a radiograph of the wrist, (b) the 2D-FFT of this image, (c) the filter displayed in two-dimensions, (d) a one-dimensional profile through this filter, (e) the result of multiplying the 2D-FFT by the filter values, and (f) the IFT of the filtered 2D-FFT.
Note that spatial frequency filters can also be classed as Low-Pass (giving image smoothing) and High-Pass (giving edge enhancement). Here the filter function as illustrated in panel (d) of the figure would have a form which allows low or high frequencies to be accentuated while high or low frequencies, respectively, are suppressed. Parameters which can control the transition between accentuated and suppressed frequencies are:
  • the value at which the transition occurs, called the Cut-Off Frequency, for example, and
  • the rate at which the transition changes - either abruptly or gradually - called the Order or Power of the filter function.
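The overall FFT → filter → inverse FFT sequence can be sketched as below, using an ideal (sharp cut-off) low- or high-pass filter for simplicity; the function name, the normalised cut-off frequency and the abrupt transition are assumptions made for illustration, whereas practical filters use smoother transitions of a chosen order.

```python
import numpy as np

def fourier_filter(image, cutoff=0.1, high_pass=False):
    """Filter an image in the spatial frequency domain: FFT, multiply by a simple
    ideal filter, then inverse FFT (a sketch of the process in Fig. 5.9)."""
    fft = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[-rows // 2:rows - rows // 2, -cols // 2:cols - cols // 2]
    radius = np.sqrt((y / rows) ** 2 + (x / cols) ** 2)    # normalised spatial frequency
    mask = radius > cutoff if high_pass else radius <= cutoff
    filtered = np.fft.ifft2(np.fft.ifftshift(fft * mask))
    return np.real(filtered)

image = np.random.rand(256, 256)                            # stand-in for a radiograph
smoothed = fourier_filter(image, cutoff=0.1)                # low-pass: image smoothing
edges = fourier_filter(image, cutoff=0.1, high_pass=True)   # high-pass: edge enhancement
```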

Spatial Domain Processing

Similar enhancement effects can also be generated using spatial domain processing, i.e. without using Fourier methods. Here the filter is generally defined as a mask consisting of a small two-dimensional array of values which is used to modify the value of each pixel in the image. A 3x3 pixel mask is shown below, where W1 through W9 represent weighting factors that are used in the pixel modification process:
W1  W2  W3
W4  W5  W6
W7  W8  W9
On this basis, the value of each pixel is combined with the values of its nearest neighbouring pixels to form the filtered image as illustrated in Figure 5.11. The process is sometimes referred to as image convolution and the mask is referred to as a convolution filter or kernel. Note that larger masks can be defined depending on the application, e.g. 5x5, 9x9 arrays of weighting factors.
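A short sketch of this convolution process is given below, using kernels of the general kind shown in Figure 5.11; the specific weighting values, and the use of SciPy's ndimage.convolve function, are illustrative assumptions rather than the kernels of any particular system.

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative 3x3 kernels (the weights W1..W9 of the mask above)
averaging = np.full((3, 3), 1.0 / 9.0)                   # smoothing
edge_enhance = np.array([[ 0, -1,  0],
                         [-1,  5, -1],
                         [ 0, -1,  0]], dtype=float)     # sharpening
edge_detect = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)      # Laplacian-style edge detection

image = np.random.rand(256, 256)           # stand-in for a radiograph
smoothed = convolve(image, averaging)      # each output pixel is a weighted sum of
sharpened = convolve(image, edge_enhance)  # itself and its nearest neighbours
```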
Fig. 5.11: Illustration of spatial filtering: (a)-(c) examples of 3x3 pixel masks that can be used for averaging, edge enhancement and edge detection, respectively; (d)-(f) radiographs of the wrist processed using similar kernels.
Fig. 5.12: Illustration of the unsharp masking process.
Other spatially-defined filters that can be used are the:
  • Gaussian Filter: which provides image smoothing effects using large kernels which form the shape of a Gaussian function, the
  • Median Filter: which replaces each pixel with the median value of itself and its neighbours, the
  • Maximum Filter: which replaces each pixel with the maximum value of itself and its neighbours, and the
  • Minimum Filter: which replaces each pixel with the minimum value of itself and its neighbours.
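Common library implementations of these filters can be sketched as follows; the neighbourhood sizes and the Gaussian width are illustrative choices only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, maximum_filter, minimum_filter

image = np.random.rand(256, 256)              # stand-in for a radiograph
smoothed = gaussian_filter(image, sigma=2)    # Gaussian smoothing
despeckled = median_filter(image, size=3)     # median of each 3x3 neighbourhood
dilated = maximum_filter(image, size=3)       # maximum of each 3x3 neighbourhood
eroded = minimum_filter(image, size=3)        # minimum of each 3x3 neighbourhood
```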

Unsharp Masking

Unsharp Masking is a technique which can also be used for image enhancement. Here a blurred version of the image is subtracted from the original using a weighting factor, W, which can control the degree of edge enhancement - see Figure 5.12. The edge enhancement effect is similar to that achieved using spatial or Fourier filtration, as can be seen in Figure 5.13.
Fig. 5.13(a): Edge Enhancement.
Fig. 5.13(b): Unsharp Masking.
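A minimal sketch of unsharp masking is shown below, assuming a Gaussian blur for the mask; the function name, the weighting factor W and the blur width are illustrative, and the exact weighting scheme differs between systems.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, w=1.0, sigma=3):
    """Unsharp masking: subtract a weighted, blurred copy of the image from the
    original so that edges are boosted (sketch only; W controls the effect)."""
    img = image.astype(float)
    blurred = gaussian_filter(img, sigma=sigma)
    return img + w * (img - blurred)

image = np.random.rand(256, 256)        # stand-in for a radiograph
sharpened = unsharp_mask(image, w=1.5)
```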
More sophisticated processes are in clinical use and the discussion above serves only to convey the underlying principles. Modern CR and DR systems incorporate complex, multi-stage processes which analyse the spatial frequency content (using the FFT, for example) and modulate the grey scale within different spatial frequency bands in a manner similar to audio equalisers, e.g. multi-scale image contrast amplification (MUSICA)[1] and enhanced visualization processing (EVP)[2]. Multifrequency processing allows the enhancement and suppression of image features depending on their contrast, size and background brightness. The result is a harmonization of image data, with portrayal of a more transparent mediastinum in chest radiography, for example, and improved visualization of low-contrast details. Refinements to these algorithms have been found useful in their clinical application[3]. In addition, processes such as Adaptive Histogram Equalization can be applied to equalise contrasts in different regions of images. Additional functions, such as the automatic addition of a black surrounding mask for image presentation purposes, are also generally provided.
Note that still other image processes need to be applied at the level of the image receptor, which include:
  • Logarithmic Transformation, to correct for exponential X-ray attenuation, with gain and offset corrections,
  • Flat Field Correction, to overcome inter-pixel sensitivity variations, and
  • Median Filtering, to suppress the effects of bad pixels.
Bad pixels are defective pixels which arise during the manufacturing process and cannot be avoided. Their effect on images can be striking even though they are generally small in number, and median filtration can be used to compensate for them. All other pixels respond slightly differently to each other; this effect can be corrected by flat-fielding the image using a reference image acquired with a uniform exposure. The log transform corrects for attenuation and is generally scaled to the desired image brightness.
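These receptor-level corrections can be sketched as below; the order of operations, the variable names (raw, flat, dark, bad_pixel_mask) and the scaling are assumptions made for illustration and do not represent any particular manufacturer's algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_raw_image(raw, flat, dark, bad_pixel_mask):
    """Sketch of receptor-level corrections: flat-field (gain) correction using a
    uniform-exposure reference image, bad-pixel replacement by a local median,
    and a scaled logarithmic transform."""
    gain = np.clip(flat - dark, 1e-6, None)                 # per-pixel sensitivity
    corrected = (raw - dark) / gain * gain.mean()           # flat-field correction
    corrected = np.where(bad_pixel_mask,
                         median_filter(corrected, size=3),  # replace known bad pixels
                         corrected)
    log_image = np.log(np.clip(corrected, 1e-6, None))      # correct for exponential attenuation
    span = log_image.max() - log_image.min()
    return (log_image - log_image.min()) / (span + 1e-12)   # scale to the display range 0..1
```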
The reduction of noise in images can be achieved by spatial processing, such as median filtering, and also by temporal domain processing. Here a time sequence of images can be averaged, for example, to produce a single image with significantly reduced mottle. Further details of these techniques can be found in a later chapter.

Image Segmentation

Many forms of image analysis require the identification of structures and objects within an image. Image segmentation is the process of partitioning an image into distinct regions by grouping together pixels which belong to the same object. Two general approaches have been developed:
Threshold Definition
Fig. 5.13.5: A threshold look-up table on the left and thresholding a bimodal distribution on the right.
Here some property of an image is compared to a fixed or variable threshold on a pixel-by-pixel basis. A simple example is a grey-level threshold where a look-up table (LUT) of the form illustrated in the left hand panel of Figure 5.13.5 is applied, and where the value of the threshold, T, can be adjusted interactively.
This is a useful technique when the image contains a single, well-defined object or group of objects of similar pixel value superimposed on a background with a substantially different pixel value. However, difficulties can arise with grey-level thresholding when objects are close together, e.g. cardiac chambers. Histogram analysis can be used as an alternative where pixel values are thresholded on the basis of their frequency of occurrence, as illustrated in the right hand panel of the figure. Other alternatives include thresholding colours when a CLUT is applied, monitoring the time of arrival of a tracer or contrast medium in a region of an image and analysis of the variation in pixel values in the neighbourhood of a pixel within an object of interest.
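Grey-level thresholding of the kind shown in Figure 5.13.5 reduces, in its simplest form, to the sketch below; the threshold value and the use of a fixed, global T are illustrative assumptions.

```python
import numpy as np

def threshold_segment(image, t):
    """Grey-level thresholding: pixels at or above the threshold T are assigned
    to the object, all others to the background (the simplest segmentation rule)."""
    return image >= t

image = np.random.randint(0, 256, (256, 256))
object_mask = threshold_segment(image, t=128)   # boolean mask of the segmented object
```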
Region Growth
Fig. 5.13.6: Region Growth: The object of interest in a CT scan is identified in the top left panel. A pixel value range for this object is specified to highlight the resultant region in the top right panel, to identify the borders of the region in the bottom left panel or to extract the object into another image in the bottom right panel.
This technique exploits two characteristics of imaged objects:
  • pixels for an object tend to have similar pixel values, and
  • pixels for the same object are contiguous.
A common technique is based on firstly defining a starting pixel in the object and then testing neighbouring pixels on the basis of a specific criterion for addition to a growing region. This criterion could be based on pixel value considerations, as in Figure 5.13.6 for instance, or on the anticipated size or shape of the object.
Note that this approach can readily be extended to grow regions in three-dimensions when the image data consists of a set of contiguous tomographic slices.
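A simple two-dimensional version of this region-growing idea is sketched below, using a pixel value range as the growth criterion and 4-connected neighbours; the function name, seed location and value range are illustrative, and the same scheme extends to three dimensions by also testing neighbours in adjacent slices.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, low, high):
    """Grow a region from a seed pixel, adding 4-connected neighbours whose
    values lie within [low, high] (a sketch of the approach described above)."""
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if grown[y, x] or not (low <= image[y, x] <= high):
            continue
        grown[y, x] = True
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1] and not grown[ny, nx]:
                queue.append((ny, nx))
    return grown

image = np.random.randint(0, 256, (64, 64))
seed_value = image[32, 32]
region = region_grow(image, seed=(32, 32), low=seed_value - 20, high=seed_value + 20)
```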

3D Visualization


3D image data, from Computed Tomography (CT) for example, can be considered to be a cube of equally sized voxels, each with different voxel values. This perspective will be used below to illustrate various options for presenting this 3D data on a 2D computer screen. Note that the term 'reconstruction' as applied in 3D visualization has a different meaning to its use in CT reconstruction from projections.

Axial Projection

The first technique we'll consider is a relatively simple one called Axial Projection. It involves integrating a number of axial images to display a composite which presents a three-dimensional impression of that volume of image data. The technique is sometimes referred to as Thick Slab or Z-Projection.
Figure 5.14 illustrates the outcome of a range of z-projection methods, with a single slice shown in the bottom right hand corner for reference purposes. The first image in the top left shows the result of summing 16 slices, and the other two images on that row show the results of computing the mean and median of these slices.
Fig. 5.14: A range of z-projections of 16 axial slices from the CT scan, with a reference, single slice shown in the bottom right corner.
The first two images in the second row show the result of what are called a Maximum Intensity Projection (MIP) and a Minimum Intensity Projection (MinIP), respectively. A MIP evaluates each voxel along each line of voxels through the volume to determine the maximum voxel value and forms an image using the values so determined for each line. A MinIP uses the minimum voxel values, as illustrated in Figure 5.15.
Fig. 5.15: A single line of voxels through eight axial slices illustrating the determination of the maximum voxel value for MIPs on the left, and the minimum value for MinIPs on the right.
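For a stack of axial slices held as a 3D array, these projections reduce to taking the maximum, minimum or mean along the z-axis, as sketched below; the array shape is illustrative.

```python
import numpy as np

volume = np.random.rand(16, 512, 512)     # stand-in for 16 contiguous axial slices

mip = volume.max(axis=0)                  # Maximum Intensity Projection (MIP)
minip = volume.min(axis=0)                # Minimum Intensity Projection (MinIP)
mean_projection = volume.mean(axis=0)     # one of the other z-projections of Fig. 5.14
```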
Volume rendered projections are shown in the first two images along the bottom row of Figure 5.14. This image compositing method involves applying an opacity function to the voxel data as well as the recursive addition of the resulting data. An equation of the form:
Cn = An + Bn

where

  • An = α × (voxel value of voxel n),
  • Bn = (1 - α) × (voxel value of voxel n-1), and
  • α = opacity, in the range 0 (i.e. fully transparent) to 1 (i.e. fully opaque),

is applied to each line of voxels as illustrated in Figure 5.16.
The figure shows the line of voxels we've used previously with an opacity table in the top right corner. The opacity function shown is one where a zero opacity is applied to voxel values below a threshold level, a linear increase in opacity is applied to an intermediate range of voxel values and maximum opacity is applied to high voxel values. The opacity table is somewhat like the look-up table used for greyscale windowing which we've described earlier, with the function applied to the opacity of voxel values instead of to their grey levels. Note that more complex opacity tables than the one used in our figure above can also be applied, e.g. logarithmic and exponential functions.
The bottom half of Figure 5.16 shows the steps involved in calculating the volume rendered value of the composited voxel. Voxel values are shown on the top row with opacity values, derived from a crude opacity table, for each voxel shown on the second row. The third, fourth and fifth rows detail the values of A, B and C, calculated using our volume rendering equation above. The final voxel value is obtained by summing the bottom row, and normalizing the result to, say, a 256 level grey scale.
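A sketch of this compositing step is given below, written as the usual back-to-front recursion in which each voxel is blended with the value accumulated from the voxels behind it; the linear-ramp opacity table and the sample voxel values are illustrative, and the per-voxel A and B terms above are a simplified statement of the same idea.

```python
import numpy as np

def composite_line(voxels, opacity_lut):
    """Back-to-front compositing of one line of voxels: at each step the current
    voxel is blended with the value accumulated so far, weighted by its opacity."""
    accumulated = 0.0
    for v in voxels:                    # ordered from the most distal voxel forward
        alpha = opacity_lut(v)          # opacity table: 0 (transparent) .. 1 (opaque)
        accumulated = alpha * v + (1.0 - alpha) * accumulated
    return accumulated

# A crude linear-ramp opacity table between two thresholds, as in Fig. 5.16
opacity = lambda v, lo=50, hi=200: float(np.clip((v - lo) / (hi - lo), 0.0, 1.0))
rendered_value = composite_line([10, 60, 120, 250, 180, 90, 40, 20], opacity)
```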
The outcome of this form of processing is the generation of an image which includes visual depth cues, on the basis that similar voxel values will be displayed with a similar transparency and that voxels closest to the reference slice have a stronger contribution than those from more distal slices. Further, note that all voxel values in each line contribute to the rendered image, in contrast to the limited number of voxels that contribute to a MIP or a MinIP image. A 3D effect results from volume rendering, as illustrated in Figure 5.14.
Fig. 5.16: Illustration of the volume rendering process.
Notice that volume rendering can be applied from distal to proximal slices, as illustrated in our figure, as well as in the opposite direction, i.e. from proximal to distal slices. Hence the terms Volume Rendering Up and Volume Rendering Down used in Figure 5.14.
The type of axial projection method appropriate to an individual patient study is dependent on the anatomical and/or functional information of relevance to the diagnostic process. Let's take the case of imaging contrast-filled blood vessels, for example, in our nine example images above. Note that a MIP can be used to give a visually-stunning impression of the vessel bed in the patient's lungs. There's little depth information in this projection, however, so that overlapping and underlying vessels can obscure lesions that might be present in blood vessels of interest. The application of this form of axial projection to angiography is therefore limited to studies where vessel overlap isn't an issue. The inclusion of voxel transparency and depth weighting in volume rendered images addresses this limitation of MIP processing.
A final point to note is that this form of image projection can also be applied to planes of the data cube other than axial, which we'll consider next.

Multi-Planar Reconstruction

In the simplest case, multi-planar reconstruction (MPR) involves generating perspectives at right angles to a stack of axial slices so that coronal and sagittal images can be generated. We'll start this section by describing these orthogonal projections before considering their synchronized composite display. We'll also describe three variants on this theme: oblique reconstruction, curved reconstruction and integrated three-dimensional presentation.
Coronal Reconstruction
Here the image stack is rotated so that the z-axis becomes the vertical and a stack of images is reconstructed using parallel planes of voxels arranged from the patient's anterior to posterior surface.
Coronal reconstructions from our image data are shown in the form of an animation in Figure 5.17.
Here the reconstructed slices are typically displayed from the patient's anterior to their posterior surface with the patient's head towards the top of the slices and their left hand side on the left of the slices.
Fig. 5.17: Animated sequence of reconstructed coronal slices from a CT pulmonary angiography (CTPA) study.
Fig. 5.18: Animated sequence of reconstructed (left-to-right) sagittal slices from a CT pulmonary angiography (CTPA) study.
Sagittal Reconstruction
Sagittal reconstructions are possible through additional rotations of the image stack so that a patient's left-to-right slice sequence can be generated as illustrated in Figure 5.18.
Here the reconstructed slices are typically displayed from the patient's left to right side, with their head towards the top and their anterior surface towards the left of the slices. Note that a right-to-left sagittal stack can also be generated using additional geometric transformation of the data.
Composite MPR Display
Coronal and sagittal reconstructions are referred to as Orthogonal MPRs because the perspectives generated are from planes of image data which are at right angles to each other. Composite MPR displays can be generated so that linked cursors or crosshairs can be used to locate a point of interest from all three perspectives, as illustrated in Figure 5.19.
Fig. 5.19: Axial and sagittal reconstructions from the CT study with a coronal MIP defined by the blue lines.
Oblique Reconstruction
Oblique MPRs are possible by defining angled planes through the voxel data, as illustrated in Figure 5.20.
Here the plane can be defined in, say, the axial images (red line, top left) and a maximum intensity projection (the limits used are highlighted by the blue lines), for example, can be displayed for the reconstructed plane (right). This technique is useful when attempting to generate perspectives in cases where the visualization of three-dimensional structures is complicated by overlapping anatomical detail.
Fig. 5.20: CT MPR incorporating an oblique MIP.
Curved Reconstruction
Curved MPRs can be used for the reconstruction of more complex perspectives, as illustrated in Figure 5.21.
Here a curve (highlighted in green) can be positioned in the axial images (left panel) to define a curved surface which extends through the voxel data in the z-direction, and voxels from this data can be reconstructed into a two-dimensional image (right panel). Note that more complex curves than the one illustrated can be generated so that, for instance, the three-dimensional course of a major blood vessel can be isolated, or CT head scans can be planarized for orthodontic applications.
Fig. 5.21: An axial slice from the CT scan on the left with a curve (highlighted in green) which is used to define the reconstruction on the right.
Fig. 5.22: 3D Orthogonal MPR rotating sequence.
3D Multi-Planar Reconstruction
A final variant on the MPR theme is the generation of a three-dimensional display showing all three orthogonal projections combined so that a defined point of interest locates the intersection of the planes, as illustrated in Figure 5.22.
The point of intersection is located for illustrative purposes at the centre of the voxel data in the figure. It can typically be placed at any point in the 3D data using interactive controls. In addition, the perspective used for the rotating sequence can be manipulated interactively to improve the visualization of a region of interest. Note that the image sequence illustrated is one from a myriad of perspectives that can thus be generated. Note also that slice projections (e.g. MIPs) can be combined with this form of display to provide additional perspectives on a feature of interest.

Maximum Intensity Projection

We've described the maximum intensity projection (MIP) earlier in the context of axial projection, where the maximum voxel value is determined for lines running in parallel through the projected slice thickness. A sequence of such images can be generated when this computation is applied at successive angles around the voxel data. One simple sequence is a rotating one for 360 degrees around the horizontal plane, as illustrated in the left panel of Figure 5.23, where the maximum intensity is projected for every 9 degrees around the patient and the resultant 40 images compiled into a repeating, temporal (e.g. movie) sequence.
Notice that the 3D MIP derives its information from the most attenuating regions of the CT scan (given that the CT-number is directly dependent on the linear attenuation coefficient) and hence portrays bone, contrast media and metal with little information from surrounding, lower attenuating tissues. It has therefore found application in 3D angiography and interventional radiology. Notice also that continued viewing of the rotating MIP sequence can generate a disturbing effect where the direction of rotation appears to periodically reverse - which may be an aspect of perceptual oscillation. The perspective MIP, illustrated in the panel on the right in the above figure, can reduce this limitation by providing spatial cues which can be used to guide continued visual inspection.
Fig. 5.23: 3D MIPs of a CT scan: Horizontal rotating sequences using parallel projections (left) and perspective projections (right).
Perspective projections are generated by replacing the parallel lines used for parallel projections with lines of voxels which diverge from an apparent point behind the volume, at a distance such that closer features of the image data appear relatively larger than deeper features - see Figure 5.24. Notice that this is somewhat similar in principle to projection imaging using a small source, as in general radiography.
Fig. 5.24: Illustration of parallel (left) and perspective (right) projections using conceptualized lateral views of the voxel data and the eye of the viewer of the projected image.

Volume Rendering

Volume rendering can be applied to the voxel data in the successive rotation manner described for MIPs above, as illustrated in Figure 5.25. A superior display to that of MIPs is clearly evident.
Fig. 5.25: 3D volume rendered parallel projection (left) and perspective projection (right).
Note that the volume rendering can be contrast enhanced so as to threshold, for instance, through the voxel values to eliminate lower attenuation surfaces, as illustrated in Figure 5.26.
Fig. 5.26: 3D VR contrast enhancement progressively applied, from top left to bottom right panels, through the voxel value range.
Note also that the colour look-up table (CLUT) can be varied to highlight features of particular interest, as shown in the set of images in Figure 5.27. A vast range of CLUTs is available with the four shown being used here solely for illustrative purposes.
Fig. 5.27: 3D volume renderings using four different CLUTs.
Fig. 5.28: 3D volume renderings using four different opacity tables.
The influence of the opacity table is illustrated in Figure 5.28, which shows (from top left to bottom right) the effect of linear, exponential, logarithmic and non-linear functions.
A final feature to note about volume rendering is that 3D editing techniques can be applied so as to exclude unwanted features from the computations and to expose internal structure. This is illustrated in Figure 5.29, where planes of an orthogonal frame can be moved to crop voxel data in just one direction to reveal the aorta (top right), for example, or in multiple directions to inspect the transplanted kidney (bottom right), as a second example.
Fig. 5.29: 3D volume rendering with cropping frame.

Surface Rendering

Fig. 5.30: Illustration of surface rendering.
Surface rendering is also referred to as Shaded Surface Display (SSD) and involves generating surfaces from regions with similar voxel values in the 3D data.
The process involves the display of surfaces which might potentially exist within the 3D voxel data, on the basis that the edges of objects can be expected to have similar voxel values. One approach is to use a grey-level thresholding technique where voxels are extracted once a threshold value is encountered along the line of the projection - see Figure 5.30. Triangles are then used to tessellate the extracted voxels, as shown in the right panel of the figure above - and the triangles are filled using a constant value, with shading applied so as to simulate the effects of a fixed virtual light source - as shown in the left panel above.
An opacity table can be applied to the results so that surfaces from internal features can also be visualized, as illustrated in Figure 5.31. Here, axial CT data from the patient's airways have been segmented using a region growth technique and the result processed using surface rendering, with full opacity as shown in the left panel and with a reduced opacity (30%) as shown in the right panel:
Fig. 5.31: 3D SSD: opaque and transparent display.
Notice that internal features of each lung can be discerned when the opacity is reduced. Notice also that continued viewing of this type of transparency display can generate apparent reversal of the image rotation, similar to that noted for the 3D MIPs above. One method of overcoming this type of problem is to segment each lung, for instance, and to blend the results, as illustrated in Figure 5.32.
Fig. 5.32: 3D SSD: blending of each lung following segmentation.

Virtual Endoscopy

Here a virtual camera can be located within the 3D image data to generate images of internal surfaces from an internal perspective. This technique can be used for Virtual Colonoscopy, for example, to generate images somewhat similar to those from optical colonoscopy procedures. Three MPR views can be used to define the location and orientation of the virtual camera, and sequences of such images can be blended to provide a virtual Fly-Through of the region of interest. Such movies can also be generated from both internal and external camera locations to produce a virtual tour of the 3D image data, as illustrated in the movie below.
A movie generated by blending VR images from a large number of perspectives.

Image Storage & Distribution

Basic elements of a generic PACS. Images from the modalities are sent to short term RAID storage and archived simultaneously. Access to images is enabled through the high speed local area network which distributes the images to clinical and diagnostic workstations and to a Web server for dispatch to remote sites. (Glossary: NM: Nuclear Medicine; US: Ultrasound; HIS: Hospital Information System; RIS: Radiology Information System; RAID: Redundant Array of Independent Disks).

Picture Archival and Communication Systems (PACS) are generally based on a dedicated computer which can access data stored in the digital image processors of different imaging modalities and transfer this data at high speeds to reporting workstations, to remote viewing consoles, to archival storage media and to other computer systems either within the facility or at remote locations - see the following figure.

PACS environments generally include the following features:

  • Standardization of the manner in which the image data is interchanged between different medical imaging devices, workstations and computers. The Digital Imaging & Communications in Medicine (DICOM) standard is in widespread use.
  • Interfacing to a Radiology Information System (RIS) which in turn is interfaced to a Hospital Information System (HIS). The Integrating the Healthcare Enterprise (IHE) initiatives are used to facilitate such data transfers.
  • High quality image display devices - see the earlier discussion.
  • User-friendly workstations where the display and manipulation of images is clinically intuitive.
  • The efficient distribution of images throughout the facility including wards, theatres, clinics etc. and to remote locations.
  • Short image transfer times, within a timeframe that supports clinical decision-making. High speed networks with Gbit/s transfer speeds are therefore widely used.
  • Archive storage of up to many Tbytes, providing retrieval of non-current image files.
A small section of a DICOM header file.

DICOM-standard images contain what is called a header file which contains information regarding the patient, the examination and the image data - a section of one is shown in the following figure as an example. Note that in this case the image data refers to a hand/wrist image which is stored at a resolution of 2,920x2,920 pixels each of size 0.1 mm. In addition, default window display settings are shown. Furthermore, the form of image compression used can be included, i.e. whether lossless, which preserves the fidelity of image data, or lossy which degrades fidelity in the interest of image transfer speeds, for instance. Numerous other parameters can also be included in a header file.
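Header information of this kind can be inspected with standard DICOM toolkits. The sketch below uses the pydicom library and a hypothetical file name; the printed attributes are standard DICOM keywords and are only available when the corresponding tags are present in the file.

```python
import pydicom

ds = pydicom.dcmread("hand_wrist.dcm")      # hypothetical file name
print(ds.PatientName, ds.Modality)
print(ds.Rows, ds.Columns)                  # image matrix, e.g. 2920 x 2920 pixels
print(ds.PixelSpacing)                      # pixel size in mm, e.g. [0.1, 0.1]
print(ds.WindowCenter, ds.WindowWidth)      # default display window settings
```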

The main advantage of PACS is related to the ease, convenience and accuracy factors which are common to all computerised environments.

  1. Prokop M, Neitzel U & Schaefer-Prokop C, 2003. Principles of image processing in digital chest radiography. J Thorac Imaging, 18:148-64.
  2. Krupinski EA, Radvany M, Levy A, Ballenger D, Tucker J, Chacko A & VanMetter R, 2001. Enhanced visualization processing: Effect on workflow. Acad Radiol, 8:1127-33.
  3. Moore CS, Liney GP, Beavis AW & Saunderson JR, 2007. A method to optimize the processing algorithm of a computed radiography system for chest radiography. Br J Radiol, 80:724-30.