Basic Physics of Digital Radiography/The Computer

From Wikibooks, open books for an open world

Computers are widely used in many areas of radiology. The main application for our purposes is the display and storage of digital radiography images. This chapter presents a basic description of a general-purpose computer, outlines the design of a generalised digital image processor and gives a brief introduction to digital imaging before describing some of the more common image processing applications.

Before considering these topics, some general comments are required about the form in which information is handled by computers as well as the technology which underpins the development of computers so that a context can be placed on the discussion.

Binary Representation[edit]

Virtually all computers in use today are based on the manipulation of information which is coded in the form of binary numbers. A binary number can have only one of two values, i.e. 0 or 1, and these numbers are referred to as binary digits - or bits, to use computer jargon. When a piece of information is represented as a sequence of bits, the sequence is referred to as a word, and when the sequence contains eight bits, the word is referred to as a byte - the byte being widely used today as the basic unit for expressing amounts of binary-coded information. In addition, large volumes of coded information are generally expressed in terms of kilobytes, megabytes, gigabytes etc. It is important to note that the meanings of these prefixes differ slightly from their conventional meanings because of the binary nature of the information coding. As a result, kilo in computer jargon represents 1024 units - 1024 (or 2^10) being the nearest power of 2 to one thousand. Thus, 1 kbyte refers to 1024 bytes of information and 1 Mbyte represents 1024 times 1024 bytes, and so on.
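The binary-prefix arithmetic above can be checked with a short Python sketch; the values follow directly from the 2^10 convention just described:

```python
# Binary prefixes: "kilo" in computer jargon is 2**10 = 1024, not 1000.
KBYTE = 2 ** 10          # 1024 bytes
MBYTE = 2 ** 20          # 1024 * 1024 bytes
GBYTE = 2 ** 30          # 1024 * 1024 * 1024 bytes

# A byte is a word of eight bits, so it can code 2**8 distinct values.
values_per_byte = 2 ** 8

print(KBYTE)            # 1024
print(MBYTE)            # 1048576
print(values_per_byte)  # 256
```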

Binary coding of image information is needed in order to store images in a computer. Most imaging devices used in medicine however generate information which can assume a continuous range of values between preset limits, i.e. the information is in analogue form. It is therefore necessary to convert this analogue information into the discrete form required for binary coding when images are input to a computer. This is commonly achieved using an electronic device called an Analogue-to-Digital Converter (ADC).

The development of modern computers has been almost totally dependent on major developments in material science and digital electronics which have occurred in the latter half of the 20th century. These developments have allowed highly complex electronic circuitry to be compressed into small packages called integrated circuits. These packages contain tiny pieces of silicon (or other semiconductor material) which have been specially manufactured to perform complex electronic processes. These pieces of silicon are generally referred to as silicon chips or microchips. Within the circuitry of a chip, a high electronic voltage can be used to represent the digit 1 and a low voltage can be used to represent the binary digit 0. Thus the circuitry can be used to manipulate information which is coded in the form of binary numbers.

An important feature of these electronic components is the very high speed at which the two voltage levels can be changed in different parts of the circuitry. This gives the computer its ability to manipulate binary information rapidly. Furthermore, the tiny size of modern integrated circuits has allowed the manufacture of computers which are physically very small and which do not generate excessive amounts of heat - previous generations of computers occupied whole rooms and required cooling because they were built using larger electronic components such as valves and transistors. Thus modern computers can sit on a desk, for instance, in an environment which does not require special air-conditioning. In addition, the ability to manufacture integrated circuits using mass production methods has given rise to enormous decreases in cost - which has contributed to the phenomenal explosion of this technology into the mobile phone/computer market in the early 21st century.

It is worth noting that information in this chapter is likely to change by the time the chapter is read, given the ongoing, rapid developments in this field. The treatment here is therefore focused on general concepts - and the reader should note that current technologies and techniques may well differ from those described here. In addition, note that mention of any hardware or software product in this chapter in no way implies endorsement of such a product; its use in this discussion is purely for illustrative purposes.

General-Purpose Computer[edit]

The following figure shows a block diagram of the major hardware components of a general-purpose computer. The diagram illustrates that a computer consists of a central communication pathway, called a bus, to which dedicated electronic components are connected. Each of these components is briefly described below.

Block diagram of components of a general-purpose computer.
  • Central Processing Unit (CPU): This is generally based on an integrated circuit called a microprocessor. Its function is to act as the brains of the computer where instructions are interpreted and executed, and where data is manipulated. The CPU typically contains two sub-units - the Control Unit (CU) and the Arithmetic/Logic Unit (ALU). The Control Unit is used for the interpretation of instructions which are contained in computer programs as well as the execution of such instructions. These instructions might be used, for example, to send information to other components of the computer and to control the operation of those devices. The ALU is primarily used for data manipulation using mathematical techniques - for example, the addition or multiplication of numbers - and for logical decision-making.
  • Main Memory: This typically consists of a large number of integrated circuits which are used for the storage of information which is currently required by the computer user. The circuitry is generally of two types - Random Access Memory (RAM) and Read Only Memory (ROM). RAM is used for the short-term storage of information. It is a volatile form of memory since its information contents are lost when the electric power to the computer is switched off. Its contents can also be rapidly erased - and rapidly filled again with new information. ROM, on the other hand, is non-volatile and is used for the permanent storage of information which is required for basic aspects of the computer's operation.
  • Secondary Memory: This is used for the storage of information in permanent or erasable form for longer-term purposes, i.e. for information which is not currently required by the user but which may be of use at some later stage. There are various types of devices used for secondary memory which include hard disks, CD-ROMs and flash drives.
  • Input/Output Devices: These are used for user-control of the computer and generally consist of a keyboard, visual display and printer. They also include devices, such as the mouse, joystick and trackpad, which are used to enhance user-interaction with the computer.
  • Computer Bus: This consists of a communication pathway for the components of the computer - its function being somewhat analogous to that of the central nervous system. The types of information communicated along the bus include that which specifies data, control instructions as well as the memory addresses where information is to be stored and retrieved. As might be anticipated, the speed at which a computer operates is dependent on the speed at which this communication link works. This speed must be compatible with that of other components, such as the CPU and main memory.
  • Software: There is a lot more to computer technology than just the electronic hardware. In order for the assembly of electronic components to operate, information in the form of data and computer instructions is required. This information is generally referred to as software. Computer instructions are generally contained within computer programs. Categories of computer program include:
  • a text editor for writing the text of the program into the computer (which is similar to programs used for word-processing);
  • a library of subroutines - which are small programs for operating specific common functions;
  • a linker which is used to link the user-written program to the subroutine library;
  • a compiler for translating user-written programs into a form which can be directly interpreted by the computer, i.e. it is used to code the instructions in digital format.

Digital Image Processor[edit]

Computers used for digital image processing generally consist of a number of specialised components in addition to those used in a general-purpose computer. These specialised components are required because of the very large amount of information contained in images and the consequent need for high capacity storage media as well as very high speed communication and data manipulation capabilities.

Block diagram of the components of a digital image processor. The yellow components are those of a general-purpose computer.

Digital image processing involves both the manipulation of image data and the analysis of such information. An example of image manipulation is the computer enhancement of images so that subtle features are displayed with greater clarity. An example of image analysis is the extraction of indices which express some functional aspect of an anatomical region under investigation. Most digital radiography systems provide extensive image manipulation capabilities with a limited range of image analysis features.

A generalised digital image processor is shown in the following figure. The yellow components at the bottom of the diagram are those of a general-purpose computer which have been described above. The digital image processing components are those which are connected to the image data bus. Each of these additional components is briefly described below.

  • Imaging System: This is the device which produces the primary image information, i.e. the image receptor and associated electronics. The image system is often physically separate from the other components of the image processor. Image information produced by the imaging system is fed to the image acquisition circuitry of the digital image processor. Connections from the digital image processor to the imaging system are generally also present, for controlling specific aspects of the operation of the imaging system, e.g. movement of a C-arm and patient table in a fluoroscopy system.
  • Image Acquisition: This circuitry is used to convert the analogue information produced by the imaging system so that it is coded in the form of binary numbers. The type of device used for this purpose is called an Analogue-to-Digital Converter (ADC). The image acquisition device may also include circuitry for manipulating the digitised data so as to correct for any aberrations in the image data. The type of circuitry which can be used for this purpose is called an Input Look-Up Table. An example of this type of data manipulation is logarithmic image transformation in digital radiography.
  • Image Display: This device is used to convert digital images into a format which is suitable for display. The image display unit may also include circuitry for manipulating the displayed images so as to enhance their appearance. The type of circuitry which can be used for this purpose is called an Output Look-Up Table and examples of this type of data manipulation include windowing. Other forms of image processing provided by the image display component can include image magnification and the capability of displaying a number of images on the one screen. The device can also allow for the annotation of displayed images with the patient name and details relevant to the patient's examination.
  • Image Memory: This typically consists of a volume of RAM which is sufficient for the storage of a number of images which are of current interest to the user.
  • Image Storage: This generally consists of magnetic disks of sufficient capacity to store large numbers of images which are not of current interest to the user and which may be transferred to image memory when required.
  • Image ALU: This consists of an ALU designed specifically for handling image data. It is generally used for relatively straightforward calculations, such as image subtraction in digital subtraction angiography (DSA) and the reduction of noise through averaging a sequence of images.
  • Array Processor: This consists of circuitry designed for more complex manipulation of image data and at higher speeds than the Image ALU. It typically consists of an additional CPU as well as specialised high speed data communication and storage circuitry. It may be viewed as a separate special-purpose computer whose design has traded a loss of operational flexibility for enhanced computational speed. This enhanced speed is provided by the capability of manipulating data in a parallel fashion as opposed to sequential processing - which is the approach used in general-purpose computing - although this distinction is becoming redundant with developments such as multicore CPUs. This component is used, for example, for calculating Fast Fourier Transforms and for reconstruction calculations in cone-beam CT.
  • Image Data Bus: This component consists of a very high speed communication link designed specifically for image data.

Note that many of the functions described above have today been encapsulated in a single device, the graphics processing unit (GPU), which has been incorporated into many computer designs, e.g. the iMac computer.

Digital Images[edit]

The digitisation of images generally consists of two processes - sampling and quantisation. These two processes generally happen concurrently and are described briefly below.

Illustration of a digital image, obtained when an original, consisting of a central dark region with the brightness increasing towards the periphery, is digitised with N=8 and G=4 (i.e. m=2).

Image sampling is the process used to digitise the spatial information in an image. It is typically achieved by dividing an image into a square or rectangular array of sampling points - see the following figure. Each of the sampling points is referred to as a picture element - or pixel, to use computer jargon. In the context of DR image receptors, the term detector element, or del, is also used. Naturally, the larger the number of pixels or dels, the closer the spatial resolution of the digitised image approximates that of the radiation pattern transmitted through the patient – see the following figure, panels (a) and (b).

The process may be summarised as the digitisation of an image into an N x N array of pixel data. Examples of values for N are 1024 for an angiography image, and 3,000 for a digital radiograph.

Note that each pixel represents not a point in the image but rather an element of a discrete matrix. Distance along the horizontal and vertical axes is no longer continuous, but instead proceeds in discrete steps, each given by the pixel size. With larger pixels, not only is the spatial resolution poor, since there is no detail displayed within a pixel, but grey-level discontinuities also appear at the pixel boundaries (pixelation) - see panel (b) in the figure. The spatial resolution improves with smaller pixels and a perceived lack of pixelation gives the impression of a spatially continuous image to the viewer.
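The effect of pixel size on the sampled image can be sketched in Python. The ramp image and the 2x2 block averaging below are purely illustrative - a real receptor samples the analogue image directly rather than resampling a digital one:

```python
# A minimal sketch of image sampling: resampling an 8x8 grey-scale
# "image" onto a coarser 4x4 pixel matrix by averaging 2x2 blocks.

def downsample(image, factor):
    """Average non-overlapping factor x factor blocks of pixels."""
    n = len(image)
    m = n // factor
    out = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            block = [image[i * factor + di][j * factor + dj]
                     for di in range(factor) for dj in range(factor)]
            out[i][j] = sum(block) // len(block)
    return out

# A simple ramp image: brightness increases towards the right.
fine = [[col * 32 for col in range(8)] for _ in range(8)]
coarse = downsample(fine, 2)
print(coarse[0])  # each coarse pixel is the mean of a 2x2 block
```

With larger sampling blocks, detail within each block is lost entirely, which is the origin of the pixelation described above.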

The pixel size for adequate image digitisation is of obvious importance. In order to capture fine detail, the image needs to be sampled at an adequate sampling frequency, fs. This frequency is given theoretically by the Nyquist–Shannon sampling theorem which indicates that it should be at least twice the highest spatial frequency, fmax, contained in the image, i.e.

fs ≥ 2 fmax

Thus, for example, when a digital imager has a pixel size of x mm, the highest spatial frequency that can be adequately represented is given by:

fN = 1/(2x) line pairs/mm.

where fN is called the Nyquist frequency.

A line pair consists of a strip of radio-opaque material with a strip of radio-lucent material of equal width, as we will see in the next chapter. Put simply therefore, at least two pixels are required to adequately represent a line pair. Thus when a 43 cm x 43 cm image receptor is digitised to 3,000x3,000 pixels, the highest frequency that can be adequately represented is ~3.5 LP/mm. When an image contains frequencies above this, they are under-sampled upon digitisation and appear as lower frequency information in the digital image. This is known as aliasing and gives rise to false frequencies as far below the Nyquist frequency as they are in actuality above it. In other words, high frequencies are folded back around the Nyquist frequency to appear in the digital image at a frequency less than fN.
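The worked example above (a 43 cm receptor digitised to 3,000 pixels) can be sketched in Python, together with the fold-back rule for an aliased frequency. The input frequency of 4.5 LP/mm used to illustrate aliasing is an assumed value:

```python
# Nyquist frequency for a digital image receptor, using the
# worked example from the text (43 cm receptor, 3000 x 3000 pixels).

receptor_width_mm = 430.0
n_pixels = 3000

pixel_size_mm = receptor_width_mm / n_pixels      # ~0.143 mm
nyquist_lp_per_mm = 1.0 / (2.0 * pixel_size_mm)   # fN = 1/(2x)

print(round(pixel_size_mm, 3))       # ~0.143
print(round(nyquist_lp_per_mm, 1))   # ~3.5 line pairs/mm

# A frequency f above fN aliases to the false frequency 2*fN - f,
# i.e. it appears as far below fN as it actually lies above it.
f_true = 4.5                         # LP/mm, assumed, above fN
f_alias = 2 * nyquist_lp_per_mm - f_true
print(round(f_alias, 1))             # ~2.5 line pairs/mm
```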

In practice, almost all digital images are under-sampled to some degree. A slightly larger pixel size than is optimal theoretically could be used, for instance, which would allow most of the frequency content to be digitised adequately. Note that significant aliasing artifacts occur when fine repetitive patterns, e.g. a resolution test object, are imaged and can result in Moiré patterns. Filtering the analogue image, so as to remove higher spatial frequency content prior to digitisation, can be used to reduce this effect. Note that aliasing effects can also occur when a stationary grid is used with a digital image receptor because of interference between the grid lines and the image matrix. A grid pitch of at least 70 lines/cm is therefore used.

A digitised chest radiograph displayed with image resolutions of (a) 256x256x8 bits, (b) 32x32x8 bits, (c) 256x256x2 bits.

Image quantisation is the process used to digitise the brightness information in an image. It is typically achieved by representing the brightness of a pixel by an integer whose value is proportional to the brightness. This integer is referred to as a 'pixel value' and the range of possible pixel values which a system can handle is referred to as the grey scale. Naturally, the greater the grey scale, the closer the brightness information in the digitised image approximates that of the original image – see the following figure, panels (a) and (c). The process can be considered as the digitisation of image brightness into G shades of grey. The value of G is dependent on the binary nature of the information coding. Thus G is generally an integer power of 2, i.e. G = 2^m, where m is an integer which specifies the number of bits required for storage. Examples of values of G are 1,024 (m=10) in fluoroscopy, 2,048 (m=11) in angiography and 4,096 (m=12) in digital radiography. Note that the slight difference between the brightness in an analogue image and its pixel value in the digital image is referred to as the quantisation error, and is lower at larger values of G.
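Quantisation and the quantisation error can be illustrated with a short Python sketch. The brightness value and the truncation scheme below are illustrative assumptions - real systems may round rather than truncate:

```python
# Image quantisation sketch: mapping a continuous brightness (0..1)
# onto G = 2**m grey levels, and the resulting quantisation error.

def quantise(brightness, m):
    """Return the integer pixel value for a brightness in [0, 1)."""
    G = 2 ** m
    return int(brightness * G)  # truncate to the nearest lower level

def reconstruct(pixel_value, m):
    """Brightness implied by a pixel value (centre of its level)."""
    G = 2 ** m
    return (pixel_value + 0.5) / G

brightness = 0.6037                      # an arbitrary analogue input
for m in (2, 8, 12):                     # G = 4, 256, 4096
    pv = quantise(brightness, m)
    error = abs(brightness - reconstruct(pv, m))
    print(m, pv, round(error, 5))        # error shrinks as G grows
```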

The number of bits, b, required to represent an image in digital format is given by:

b = N x N x m.

A 512x512x8-bit image therefore represents 0.25 Mbytes of memory for storage and a 3,000x3,000x12-bit image represents ~13 Mbytes. Relatively large amounts of computer memory are therefore needed to store digital radiography images and processing times can be relatively long when manipulating large volumes of such data. This feature of digital images gives rise to the need for dedicated hardware for image data which is separate from the components of a general-purpose computer - although this distinction may vanish with future technological developments.
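The storage calculation b = N x N x m can be checked with a few lines of Python, using the two examples from the text:

```python
# Storage requirement: b = N x N x m bits for an N x N, m-bit image.

def image_bits(N, m):
    return N * N * m

def image_mbytes(N, m):
    return image_bits(N, m) / 8 / (2 ** 20)   # bits -> bytes -> Mbytes

print(image_mbytes(512, 8))            # 0.25 Mbytes
print(round(image_mbytes(3000, 12)))   # ~13 Mbytes
```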

Digital Image Processing[edit]

Contrast enhancement is a widely-used form of image processing used throughout medical imaging and will be used as an example of basic methods. This form of processing (commonly referred to as windowing) is described below and is a form of gray-level transformation where the real pixel value is replaced by a new pixel value for display purposes. The process is generally performed using the Output Look-Up Table section of the image display component - see the earlier figure. As a result, the original data in image memory is not affected by the process, so that from an operational viewpoint, the original image data can be readily retrieved in cases where an unsatisfactory output image is obtained. In addition, the process can be implemented at very high speed using modern electronic techniques so that, once again from an operational viewpoint, user-interactivity is possible.

Gray-level transformation for contrast enhancement of images with 256 shades of gray (i.e. m=8 bits). In this example, the unprocessed data is transformed so that all pixels with a pixel value less than 50 are displayed as black, all pixels with a pixel value greater than 150 are displayed as white and all pixels with pixel values between 50 and 150 are displayed using an intermediate shade of gray.

An example of a Look-Up Table (LUT) which can be used for contrast enhancement is illustrated in the following figure. The process is controlled typically by two controls on the console of the digital image processor - the level and window. It should be noted that variations in the names for these controls, and in their exact operation, can exist with different systems but the general approach described here is sufficient for our purposes. It is seen in the figure that the level controls the threshold value below which all pixels are displayed as black and the window controls a similar threshold value for a white output. The simultaneous use of the two controls allows the application of a gray-level window, of variable width, which can be placed anywhere along the gray scale. Subtle gray-level changes within images can therefore be enhanced so that they are displayed with greater clarity - see the following images.
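A minimal sketch of such a look-up table, using the thresholds of 50 and 150 from the figure caption and assuming a linear ramp between them (systems differ in the exact mapping), might look like this in Python:

```python
# A grey-level window as an output look-up table: pixel values below
# the lower threshold map to black (0), values above the upper
# threshold to white (G-1), and values between are stretched linearly
# across the full output range.

def make_window_lut(lower, upper, m=8):
    """Build an output look-up table for an m-bit grey scale."""
    G = 2 ** m
    lut = []
    for pv in range(G):
        if pv < lower:
            lut.append(0)                # displayed as black
        elif pv > upper:
            lut.append(G - 1)            # displayed as white
        else:                            # linear ramp in between
            lut.append((pv - lower) * (G - 1) // (upper - lower))
    return lut

lut = make_window_lut(50, 150)           # thresholds from the caption
print(lut[40], lut[100], lut[200])       # black, mid-grey, white
```

Displaying an image then reduces to replacing each stored pixel value pv with lut[pv], which is why the operation is fast and leaves the data in image memory untouched.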

Many other processes are available to enhance image contrast and the conspicuity of details. Examples are greyscale inversion and edge sharpening. Note that the inverted greyscale is commonly used in diagnostic radiology. Still other image processes need to be applied at the level of the image receptor; these include:

  • logarithmic transformation, to correct for exponential X-ray attenuation, with gain and offset corrections,
  • flat field corrections, to overcome spatial sensitivity variations, and
  • median filtering, to suppress the effects of bad pixels.
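The median filtering mentioned above can be sketched in a few lines of Python; the stuck-pixel value and the toy image below are illustrative assumptions:

```python
# A minimal sketch of median filtering to suppress a bad (stuck)
# pixel: the pixel is replaced by the median of its 3x3 neighbourhood.
from statistics import median

def median_filter_pixel(image, row, col):
    """Median of the 3x3 neighbourhood centred on (row, col)."""
    neighbourhood = [image[r][c]
                     for r in range(row - 1, row + 2)
                     for c in range(col - 1, col + 2)]
    return median(neighbourhood)

# Uniform background of 100 with one stuck pixel reading 4095.
image = [[100] * 5 for _ in range(5)]
image[2][2] = 4095
print(median_filter_pixel(image, 2, 2))   # the outlier is suppressed
```

Unlike averaging, the median is insensitive to a single extreme value, which is why it suits bad-pixel suppression.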

Modern CR and DR systems in clinical use incorporate sophisticated, multi-stage processes which analyse the spatial frequency content (using the Fourier Transform, for instance) and modulate the grey scale within different frequency bands in a manner similar to audio equalisers, e.g. multi-scale image contrast amplification (MUSICA) and enhanced visualization processing (EVP). In addition, processes such as adaptive histogram equalization can be applied. Additional functions such as the automatic addition of a black surrounding mask for image presentation purposes are also typically provided.

Examples of contrast enhancement applied to a radiograph of the wrist: (a) unprocessed image, (b) window and level adjusted and (c) an inverted grey scale.

Noise reduction algorithms are widely used in digital fluoroscopy systems. Here, a sequence of images can be integrated or averaged to suppress noise effects. Recursive filtration can also be applied where a weighted average of images in a sequence is computed with more recent images having the higher weightings. A moving-average image sequence results which can be used to generate vascular traces, for example.
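Recursive filtration can be sketched in Python. The weighting k and the single-pixel "frames" below are illustrative assumptions:

```python
# Recursive filtration in digital fluoroscopy: each displayed frame
# is a weighted average of the new frame and the previous result,
# with weight k applied to the more recent image.

def recursive_filter(frames, k=0.3):
    """Return the moving-average sequence: out = k*new + (1-k)*old."""
    out = []
    average = frames[0]
    for frame in frames:
        average = [k * new + (1 - k) * old
                   for new, old in zip(frame, average)]
        out.append(average)
    return out

# A noisy, nominally constant signal (single-pixel frames).
frames = [[105], [95], [102], [98], [101]]
filtered = recursive_filter(frames)
print([round(f[0], 1) for f in filtered])  # fluctuations are damped
```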

Clinical Applications[edit]

  • Digital Subtraction Angiography (DSA)
Digital Subtraction Angiography, as the name implies, involves an image subtraction technique - see the following figure. As will be seen below, the technique involves more than simply applying a subtraction process in the digital image processor. In addition, it will be seen that the type of technology utilised, while based on the design of a fluoroscopy system, needs to incorporate a number of modifications unique to DSA. Before addressing the technology however, some basic physics needs to be introduced which will aid in putting the subsequent technology discussion into context.

  • Cone-beam CT
  • Dual-energy
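The subtraction process underlying DSA can be sketched in Python. The toy pixel values below are purely illustrative, and logarithmic transformation of the raw data is assumed to have been applied already:

```python
# A sketch of the image subtraction at the heart of DSA: a mask image
# (acquired before contrast medium is injected) is subtracted, pixel
# by pixel, from a contrast image, leaving only the opacified vessels.

def subtract(contrast, mask):
    """Pixel-by-pixel subtraction of the mask from the contrast image."""
    return [[c - m for c, m in zip(crow, mrow)]
            for crow, mrow in zip(contrast, mask)]

# 3x3 toy images: anatomy (bone, soft tissue) is common to both,
# while the vessel signal appears only in the contrast image.
mask     = [[120, 80, 120], [120, 80, 120], [120, 80, 120]]
contrast = [[120, 80, 120], [120, 130, 120], [120, 80, 120]]

print(subtract(contrast, mask))  # only the vessel remains non-zero
```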

Image Storage & Distribution[edit]

Basic elements of a generic PACS. Images from the modalities are sent to short term RAID storage and archived simultaneously. Access to images is enabled through the high speed local area network which distributes the images to clinical and diagnostic workstations and to a Web server for dispatch to remote sites. (Glossary: NM: Nuclear Medicine; US: Ultrasound; HIS: Hospital Information System; RIS: Radiology Information System; RAID: Redundant Array of Independent Disks).

Picture Archival and Communication Systems (PACS) are generally based on a dedicated computer which can access data stored in the digital image processors of different imaging modalities and transfer this data at high speeds to reporting workstations, to remote viewing consoles, to archival storage media and to other computer systems either within the facility or at remote locations - see the following figure.

PACS environments generally include the following features:

  • Standardization of the manner in which the image data is interchanged between different medical imaging devices, workstations and computers. The Digital Imaging & Communications in Medicine (DICOM) standard is in widespread use.
  • Interfacing to a Radiology Information System (RIS) which in turn is interfaced to a Hospital Information System (HIS). The Integrating the Healthcare Enterprise (IHE) initiatives are used to facilitate such data transfers.
  • High quality image display devices - see the earlier discussion.
  • User-friendly workstations where the display and manipulation of images is clinically intuitive.
  • The efficient distribution of images throughout the facility including wards, theatres, clinics etc. and to remote locations.
  • Short image transfer times, within a timeframe that supports clinical decision-making. High speed networks with Gbit/s transfer speeds are therefore widely used.
  • Archive storage of up to many Tbytes, providing retrieval of non-current image files.
A small section of a DICOM header file.

DICOM-standard images contain what is called a header file which contains information regarding the patient, the examination and the image data - a section of one is shown in the following figure as an example. Note that in this case the image data refers to a hand/wrist image which is stored at a resolution of 2,920x2,920 pixels each of size 0.1 mm. In addition, default window display settings are shown. Furthermore, the form of image compression used can be included, i.e. whether lossless, which preserves the fidelity of image data, or lossy which degrades fidelity in the interest of image transfer speeds, for instance. Numerous other parameters can also be included in a header file.

The main advantages of PACS relate to the ease, convenience and accuracy of image access and handling which are common to all computerised environments.