Basic Physics of Nuclear Medicine/PACS and Advanced Image Processing

Picture Archiving & Communication Systems (PACS)

With the phenomenal development of computer technology in recent times has come the possibility of storing and communicating medical images in digital format. PACS systems are generally based on a dedicated computer which can access data stored in the digital image processors of different imaging modalities and transfer this data at high speeds to remote viewing consoles, to archival storage media and to other computer systems either within the hospital or at external locations – see the figure below:

Basic elements of a generic PACS solution for a teaching hospital. Images from the modalities are sent to short term RAID storage and archived simultaneously. Archived images may also be retrieved as required. Access to images is enabled through the high speed local area network which distributes the images to clinical and diagnostic workstations, to a Web server and to a teleradiology server for dispatch to remote sites via a wide area network (WAN).
Glossary – HIS: Hospital Information System; RIS: Radiology Information System; LAN: Local Area Network; RAID: Redundant Array of Independent Disks.

Successful implementation of PACS is critically dependent on several factors which include image format standardization, HIS and RIS integration, image display devices, image transfer rates and storage capacity. These features are discussed below.

The first of these factors is standardization of the manner in which image data is interchanged between different medical imaging devices. The Digital Imaging & Communications in Medicine (DICOM) standard has been embraced by most equipment manufacturers to facilitate this. Another file format, developed specifically for nuclear medicine images, is called Interfile. Besides specifying the format of digital image data, these information interchange formats also cover patient and examination details which are embedded within the image file. This latter feature is particularly important in medical imaging so that patient studies do not get mixed up, for example, and can be regarded as generating a birth certificate for each acquired image, or set of images. An example of such DICOM header information is shown in the following four figures (the header, which is typically one continuous document, has been broken into four parts here to assist with our discussion):

The first part of a DICOM file header.

Notice that the data provide patient details as well as the image type, the date and time of the study, the modality, the scanner manufacturer and image processing workstation used. The second part of this header is shown below:

The second part of a DICOM file header.

Notice that this data covers the slice thickness and spacing used in this SPECT study, image sampling and quantization information, the number of images and the photon energy window used by the scanner. The third part of this header is shown next:

The third part of a DICOM file header.

Notice that this data covers details of the scanner movement used to acquire the study. The fourth and final part of this header is shown below:

The fourth and final part of a DICOM file header.

Notice that this final part details the patient and scanner orientation as well as the actual image data.
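
By way of illustration, header fields like those above can be read programmatically. The following is a minimal sketch using the third-party pydicom library in Python; the file name "spect_slice.dcm" is a hypothetical stand-in, and the fields shown correspond to those discussed above.

```python
# Minimal sketch: reading DICOM header fields with the third-party
# pydicom library. "spect_slice.dcm" is a hypothetical file name.
from pydicom import dcmread

ds = dcmread("spect_slice.dcm")

# Patient and study details: the "birth certificate" of the image.
print("Patient name   :", ds.PatientName)
print("Study date     :", ds.StudyDate)
print("Modality       :", ds.Modality)
print("Manufacturer   :", ds.Manufacturer)

# Acquisition details, e.g. for a SPECT study; get() is used because
# not every modality includes these elements.
print("Slice thickness:", ds.get("SliceThickness", "n/a"))
print("Matrix size    :", ds.Rows, "x", ds.Columns)
print("Bits stored    :", ds.BitsStored)

# The image data itself is carried in the PixelData element;
# pixel_array decodes it into a NumPy array.
print("Image shape    :", ds.pixel_array.shape)
```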

Other image formats are in common use in medicine, for purposes other than primary diagnosis. These formats can be useful for teaching, multimedia and publication purposes. Examples of these formats are included in the following table.

Format | Compression            | Comment
JPEG   | Lossy                  | Small file sizes
PNG    | Lossless               | Header information
TIFF   | Lossless/Lossy         | Large file sizes
GIF    | Lossless (256 colours) | Graphical data

The Joint Photographic Experts Group (JPEG) format is widely used for image transfer over the World Wide Web because the image data can be reduced in size using image compression techniques and hence can be transferred relatively quickly. The compression technique used by this format typically results in the loss of image data which cannot be exactly recovered – hence the reference to lossy compression in the table. The format, as you might appreciate, is not used for primary diagnosis but is nevertheless useful for teaching and related applications.

The Portable Network Graphics (PNG) format is the most recent of these formats and has advantages in terms of lossless compression, platform-independent image display, and the ability to embed patient and study identification information.

The Tagged Image File Format (TIFF) is widely used in the publication industry and provides the capability for both lossless and lossy compression. Its lossless compression however results in large image file sizes.

Finally, the Graphics Interchange Format (GIF) is widely used for transferring graphical images (e.g. graphs, diagrams, flowcharts) via the World Wide Web, and can also be used for animated graphics.
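
To make these trade-offs concrete, the sketch below saves one image in each of the formats in the table, using the third-party Pillow library in Python; the file names and the JPEG quality setting are illustrative assumptions.

```python
# Minimal sketch: converting one image between the formats discussed
# above, using the third-party Pillow library. File names are hypothetical.
from PIL import Image

im = Image.open("bone_scan.png")

# JPEG: lossy; the quality setting trades file size against fidelity.
im.convert("RGB").save("bone_scan.jpg", quality=85)

# PNG: lossless, with support for embedded text metadata.
im.save("bone_scan_copy.png")

# TIFF: saved here with lossless LZW compression; files tend to be large.
im.save("bone_scan.tif", compression="tiff_lzw")

# GIF: palette limited to 256 colours, best suited to graphical data.
im.convert("P").save("bone_scan.gif")
```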

High quality display devices are generally needed for medical images. Cathode ray tube (CRT) and liquid crystal display (LCD) monitors are widely used, with their visual characteristics matched to the display task. Monitors for digital mammograms and chest radiographs, for instance, require relatively high luminance and spatial resolution, while those used for nuclear medicine and ultrasound images have less stringent specifications in these respects but do require colour and dynamic imaging functionality. Monitors used for CT and MRI have similar requirements, with less emphasis on colour processing but a need for high contrast resolution. In addition, monitors used for clinical review generally have lower spatial resolution requirements than those used for primary diagnosis.

The CRT technology has disadvantages which include nonuniform luminance and high veiling glare. The luminance of a CRT monitor is generally highest in the centre and falls off, as does its spatial resolution, towards the periphery of the screen. Veiling glare results from light reflections inside the tube and can have a substantial negative influence on both spatial resolution and contrast. LCD monitors, on the other hand, are characterized by increased luminance, luminance uniformity, spatial resolution and contrast, as well as lower electrical power consumption and a smaller desktop footprint. Visualization of displayed images is affected, however, when the viewer is not directly in front of the screen, because of optical polarization effects, but this is virtually the only disadvantage relative to CRTs.

Luminance is an important feature because inadequate luminance can degrade diagnostic accuracy, and a number of medical display standards therefore specify minimum values, e.g. the American College of Radiology specifies a minimum of 160 cd/m². Viewboxes, which have been used traditionally in medical imaging, have a considerably higher luminance than computer display devices, whether CRTs or LCDs. Luminance values for various display devices are compared in the following table:

Display Device           | Size (cm) | Resolution (pixels) | Luminance (cd/m²) | Contrast Ratio
Mammography Viewbox      | 57        | –                   | 3,500 – 5,000     | –
Conventional Viewbox     | 57        | –                   | 1,000 – 3,000     | –
Greyscale – 3 Megapixels | 53        | 2048 x 1536         | 600               | 600:1
Greyscale – 2 Megapixels | 48        | 1200 x 1600         | 800               | 700:1
Colour                   | 76        | 1280 x 768          | 450               | 350:1
Colour                   | 51        | 1200 x 1600         | 350               | 350:1
Colour                   | 46        | 1280 x 1024         | 240               | 350:1

Notice in the table that viewboxes have a luminance five times or more greater than that of LCD monitors. As a result, windowing techniques are widely used to compensate. Notice also that greyscale monitors tend to have a higher luminance than their colour counterparts.

In addition, the image display workstation interface must be user-friendly. That is, interfaces which control the display, manipulation, analysis, storage and distribution of images need to be intuitive, efficient and specific within a medical context.

Image transfer times should be short in any PACS system for obvious reasons. Ideally an image should appear on the monitor within 2 seconds of the request for the image. The increasing availability of high speed networks is allowing this requirement to be met more readily. A comparison of transfer speeds for a number of common network connections is shown in the following table.

Connection      | Speed       | Time to Transfer a 5 Mbyte Image File
Telephone Modem | 56 kbit/s   | about 12 minutes
ISDN            | 128 kbit/s  | about 5 minutes
DSL             | 384 kbit/s  | 1.8 minutes
Ethernet        | 10 Mbit/s   | 4 seconds
Fast Ethernet   | 100 Mbit/s  | 0.4 seconds

Further, the efficient distribution of images throughout large hospital campuses and associated clinics has been enhanced by the availability of free web browsing software, which means that images can now be distributed at a fraction of the previous cost.
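
The times in the table above follow directly from dividing the file size by the connection speed. A minimal Python sketch of the arithmetic (assuming 1 Mbyte = 10⁶ bytes and ignoring protocol overheads, which lengthen real transfers):

```python
# Minimal sketch: transfer time = file size (bits) / link speed (bit/s).
# Protocol overheads are ignored, so real transfers take somewhat longer.
def transfer_time_seconds(file_mbytes: float, speed_bit_per_s: float) -> float:
    return (file_mbytes * 8e6) / speed_bit_per_s

connections = {
    "Telephone Modem": 56e3,
    "ISDN": 128e3,
    "DSL": 384e3,
    "Ethernet": 10e6,
    "Fast Ethernet": 100e6,
}

for name, speed in connections.items():
    print(f"{name:15s} {transfer_time_seconds(5, speed):7.1f} s")
```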

Finally, PACS environments should have access to relatively cheap archival storage of up to a few Tbytes (i.e. a few million Mbytes) of image data and must provide retrieval of non-current image files in a reasonable time – say, less than a minute or two. Current solutions include robotic digital tape archives and optical disk jukeboxes.

The Internet & The World Wide Web

The Internet is a global assemblage of computer networks, and an explosion in its use has occurred in recent years. Its origins can be traced to the connection of US university, military and research networks by the Advanced Research Projects Agency and the Inter-Networking Working Group about 30 years ago, through the US National Science Foundation network in 1986 and the release of public-domain software by groups at the European Organization for Nuclear Research (CERN) in 1991 and at the University of Illinois National Center for Supercomputing Applications (NCSA) in 1993, to the recent generation of substantial global interest.

The system facilitates the transfer of data, computer programs and electronic mail and allows for discussion of specialised topics among newsgroups as well as other features such as telnet, internet relay chat and file sharing. Irrespective of the application however, the system essentially allows for the convenient exchange of information between computers on a global basis. This section gives a very brief overview of the Internet from a general perspective of electronic communication protocols and the World Wide Web.

All forms of communication, be they based on electronic or other means, are reliant on some form of protocol. A common protocol when someone answers a telephone, for instance, is to say hello, to give a greeting or to announce the location/telephone number of the receiver. Communication between computers connected to the Internet uses a protocol called the Transmission Control Protocol/Internet Protocol (TCP/IP). This approach is an amalgam of two protocols, the details of which are of no great relevance to our discussion here, other than to note that they together provide an electronic communication protocol which allows two computers to connect via the Internet. One feature of TCP/IP to note however is that it can be used to communicate between different types of computers, i.e. it is platform independent. An IBM-compatible personal computer can therefore communicate with, for example, an Apple computer or a UNIX workstation. Related protocols which are used when computers communicate over a telephone line are the Serial Line Internet Protocol (SLIP) and the Point-to-Point Protocol (PPP). Once communication has been established between two computers, an additional protocol is needed to exchange computer files. A common protocol used for this purpose is called the File Transfer Protocol (FTP). The types of files which can be transferred are typically computer programs as well as data such as word processed documents, spreadsheets, database files and images.

A refinement to FTP is the Hypertext Transfer Protocol (HTTP) which allows the transfer of documents which contain data in the form of different media-types, and is widely used for webpage display. Examples of media-types are text, images and sound. Finally, two protocols relevant to electronic mail are the Post Office Protocol (POP) and the Simple Mail Transfer Protocol (SMTP), and a protocol in use for newsgroups is the Network News Transfer Protocol (NNTP).

The World Wide Web (WWW) is a conceptual interpretation of the Internet when it is used to transfer documents using HTTP. These documents are generally called web pages and are written using an editing language called Hypertext Markup Language (HTML). This format provides control over, for instance, the size and colour of text, the use of tables and, possibly most importantly, the facility to link the document to documents which exist elsewhere on the WWW. HTML also allows the insertion of various media-types into documents. Images can be inserted, for instance, in formats such as the Graphics Interchange Format (GIF), the Joint Photographic Experts Group (JPEG) format or the Portable Network Graphics (PNG) format, as discussed earlier, and image sequences can be displayed using one of the Moving Picture Experts Group (MPEG) formats. This latter functionality is useful, for instance, for the display of dynamic nuclear medicine studies.

The transfer of HTML documents is illustrated in the following figure. The user's computer (referred to as the client) is equipped with software (called a web browser) which allows it to interpret HTML documents and to communicate via the Internet using TCP/IP. The computer is also equipped with hardware which allows it to physically connect to the Internet.

Illustration of a client-server connection on the WWW.

At the other end of the connection is a computer containing a document or set of documents of interest to the user. This second computer is called a server and contains the documents in HTML format. An example of a software package which is used within the server computer is Apache. The sequence of events is typically as follows:

  • The user establishes contact between the client and server computers by directing the browser at the Uniform Resource Locator (URL) of the server and requests a given HTML document. The URL is typically of the form:

http://www.server.type.code/doc.html

      where:

http://  the transfer protocol to be used
server   the name of the server computer
type     shorthand for the type of organization where that computer resides, e.g. com: company and edu: educational institution
code     shorthand for the country where the server is located, e.g. au: Australia and ie: Ireland
doc      the name of the document
.html    identifies the format of the document
  • The server receives the request, gets the requested document from its storage device and sends the document using HTTP to the client.
  • The client receives the document and the browser interprets the HTML so that text, links and media-types are presented appropriately on the display device.
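
This request-response exchange can be reproduced in a few lines of code. A minimal sketch using Python's standard http.client module, with the placeholder server and document names from the example URL above:

```python
# Minimal sketch of the client side of an HTTP exchange, using the
# standard-library http.client module. The host and path are the
# placeholders from the example URL above.
import http.client

conn = http.client.HTTPConnection("www.server.type.code")
conn.request("GET", "/doc.html")         # client asks for the document
response = conn.getresponse()            # server replies over HTTP
print(response.status, response.reason)  # e.g. "200 OK"
html = response.read().decode("utf-8")   # the HTML document itself
conn.close()
```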

Many WWW browsers also provide the ability for the user to download files using the file transfer protocol (FTP), to send and receive e-mail messages and to contribute to newsgroups. For example, the process for downloading files using FTP is similar to that illustrated in the last figure except that the user directs the browser at a URL of the form:

ftp://ftp.server.type.code/doc.xxx

Sophisticated WWW browsers, such as Netscape Navigator and Internet Explorer, also provide the ability to generate more than basic web pages at the client's computer. One implementation is the ability to interpret client-side scripts. These are small programs which are downloaded as part of the HTML document and are executed using the client computer's resources. By this means, for instance, a script can read the date and time from the client computer or use its arithmetic functions to make calculations, and embed this information in downloaded webpages. Client-side scripts can be written using languages such as JavaScript.

Another implementation is the ability to execute small applications (called applets) which are downloaded with the HTML document and run on the client computer. Such applets can be generated using languages such as Java (not to be confused with JavaScript!). Applets are well developed for graphics applications, such as animations and scrolling banners. One exciting development in this field is the ability to download image processing software along with an image, so that the user can manipulate the image without the need for a special image processing program.

Finally, a refinement to HTTP server software allows interaction from the client so that information can be returned to the server for executing specific tasks, such as searching a database, entering information into a database or automatically correcting and giving feedback on multiple choice exam questions. Additional software is required for this server-side processing – a common form of which uses the Common Gateway Interface (CGI) protocol. Small CGI programs are generally referred to as scripts and are written in a language such as Perl.
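
The CGI mechanism itself is simple: the server passes the request to a program through environment variables and returns whatever the program writes to standard output. A minimal sketch follows, written in Python rather than Perl for consistency with the other examples here; the "name" parameter is an illustrative assumption.

```python
#!/usr/bin/env python3
# Minimal sketch of a CGI script: the server places the request in
# environment variables; the script writes an HTTP response to stdout.
import os
from urllib.parse import parse_qs

query = parse_qs(os.environ.get("QUERY_STRING", ""))
name = query.get("name", ["world"])[0]   # e.g. ?name=Mary

print("Content-Type: text/html")         # response header
print()                                  # blank line ends the headers
print(f"<html><body><p>Hello, {name}!</p></body></html>")
```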

Online databases can also be accommodated within the client/server model. For example, MySQL is a package which has been widely adopted for this purpose. Scripts for administering this server software can be written in languages such as PHP.

As might be anticipated, the field of electronic communication introduces a vast range of concepts additional to those discussed above – and, like those underlying PACS, they are just as unrelated to medicine! However, further treatment of the subject is beyond our scope here, since our interest is confined in the main to the distribution of medical images.

Spatial Registration of Images

Correlative imaging is widely used in medical diagnosis so that information gleaned from a number of imaging modalities can be merged to form a bigger picture about a patient's condition. However, the actual merging of image data on a routine basis in hospitals and clinics has had to await the development of relatively cheap and powerful computers and such image fusion is now commonplace. It is generally necessary to spatially align image data prior to the fusion process so as to address differences in orientation, magnification and other acquisition factors. This alignment process is generally referred to as image registration.

Suppose we have two images to be registered – a planar nuclear medicine scan and a radiograph, for instance:

A nuclear medicine bone scan of a patient's hands on the left and a radiograph of their right hand on the right. The arrowed curves indicate examples of correspondence between these images on the basis of our knowledge of anatomy.

The registration process generally assumes that a correspondence exists between spatial locations in the two images so that a co-ordinate transfer function (CTF) can be established which can be used to map locations in one image to those of the other. In the above example, as in many clinical situations, a number of compatibility issues need to be addressed first. The obvious one arises from the different protocols used for image acquisition, i.e. a palmar view in the bone scan and a posterior-anterior projection radiograph. We can handle this issue in our example by extracting the right hand data from the bone scan and then mirroring it horizontally. A related issue arises when different digital resolutions are used – in this case, the nuclear medicine image was acquired using a 256 x 256 x 8-bit resolution, while the radiograph was acquired using a 2920 x 2920 pixel matrix with a 12-bit contrast resolution. Since we may be interested in maintaining the fine spatial resolution of the radiograph, we can magnify the bone scan to the radiographic resolution using an interpolated zoom process. The outcome of these steps is illustrated below:

A nuclear medicine bone scan of a patient's hand on the left extracted from the previous image and mirrored horizontally, and the radiograph on the right. The arrowed lines indicate examples of correspondence between these images.

When we assume minimal spatial distortion and identical positioning in the projection radiograph, we can infer a spatially uniform CTF, i.e. the transform applied to one pixel can also be applied to each and every other pixel. Let's call the two images to be registered A and B, with image A being the one to be processed geometrically to correspond as exactly as possible with image B. The CTF can then be represented by the following equations:

u = f(x,y)

and

v = g(x,y)

where:

  • f and g define the transform in the horizontal and vertical image dimensions;
  • (u,v) are the spatial co-ordinates in image A; and
  • (x,y) those in image B.

The first computing step is to generate an initially empty image C in the co-ordinate frame (x,y) and fill it with pixel values derived from applying the CTF to image A. The resultant image, we can say, is a version of image A registered to image B.

The question, of course, is how to determine the CTF. For situations where simple geometric translations and rotations in the x- and y-dimensions are required, the functions f and g can involve relatively straightforward bilinear interpolations. Such transformations can also compensate for image magnification effects, and the resultant processes are referred to as rigid transforms. When spatial non-uniformities are encountered, non-rigid transforms can be used to apply different magnification factors in both x- and y-dimensions, as well as other geometric translations – in which case higher order interpolants can be applied.
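
As a concrete illustration of applying such a CTF, the sketch below maps image A onto image B's pixel grid by inverse mapping with bilinear interpolation, using NumPy and SciPy; the rotation, shift and magnification values are illustrative assumptions, not parameters derived from the images above.

```python
# Minimal sketch: applying a rigid CTF (rotation, shift, uniform
# magnification) to produce image C on image B's pixel grid.
import numpy as np
from scipy.ndimage import affine_transform

def apply_rigid_ctf(image_a, angle_deg, shift, scale):
    """Fill image C so that C(x, y) = A(f(x, y), g(x, y))."""
    theta = np.deg2rad(angle_deg)
    # affine_transform maps each output (x, y) co-ordinate back to an
    # input (u, v) co-ordinate, i.e. it implements u = f(x,y), v = g(x,y);
    # order=1 selects bilinear interpolation of pixel values.
    matrix = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]]) / scale
    return affine_transform(image_a, matrix, offset=shift, order=1)

image_a = np.random.rand(256, 256)   # stand-in for the image to be moved
image_c = apply_rigid_ctf(image_a, angle_deg=15.0, shift=(8.0, 0.0), scale=1.2)
```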

Determination of the parameters of the CTF is obviously needed and there are numerous methods we can use, for example:

  • Landmarks – where corresponding locations of prominent anatomical features in both images can be identified and two sets of co-ordinates can be derived on this basis to define the CTF. Note that artificial landmarks can be created using external markers during image acquisition, where, for instance, a set of markers which are both radioactive and NMR sensitive can be fixed to the surface of the patient during image acquisition for subsequent registration of SPECT and MRI scans.
  • Function Minimization/Maximization – where an indicator of the quality of registration is monitored as various geometric transformations are applied to the image in an iterative fashion to search for a set of parameters which minimize (or maximise) this indicator. When both images are SPECT scans, acquired some months apart for instance, a quality measure such as the sum of the absolute pixel value differences can be applied. A more complex quality measure is typically required when the two images are from different modalities. We will concentrate our discussion below on this latter scenario.

The concept of the joint histogram is important in this regard. Individual statistical histograms for our two images are shown below:

Histograms of pixel values displayed in black (with the logarithms of the frequencies displayed in grey so that lower frequencies can be discerned) for the bone scan on the left and the radiograph on the right.

The bone scan's histogram indicates that a substantial proportion of the image is composed of dark pixels, with a much smaller number of brighter pixels arising from the hot spot. The histogram for the radiograph indicates that the image is composed of a substantial number of bright pixels with a broad spread of grey shades. Note that the term frequency used in this context refers to the frequency of occurrence of pixel values, i.e. the number of times individual pixel values occur in an image, and not to the terms temporal frequency and spatial frequency which we've encountered elsewhere in this wikibook.

The joint histogram is a concept related to the individual image histograms, in which the pixel values for pairs of pixels in the two images are plotted against each other on the one graph. In other words, the value of a pixel in one image is plotted against the value at the same pixel location in the second image. A good way to introduce this concept is by first comparing an image with a duplicate of itself, and then with shifted versions of this duplicated image, as illustrated in the following figure. We can use colour processing to assist in our visual comparisons, where the bone scan (the reference image) can be displayed using a red CLUT, for instance, and its shifted version using a green CLUT. As a result, where the two images overlay each other, the layered image data is displayed in shades of yellow, i.e. red plus green.

The top row in our figure shows the situation when the two images are in perfect alignment. Note the resultant yellow-only colour scale. The joint histogram for this case consists of a diagonal straight line because the values of all pixel pairs in the two images are identical. The next row of our figure illustrates the effect of a horizontal shift of eight pixels between the two images. Notice that regions of mismatch are seen in shades of red and shades of green, as the case may be, and overlapping regions in shades of yellow. The joint histogram now has the appearance of a scatter plot because pixel values in the two images no longer correspond spatially. A bright pixel value in one image might now overlay with a dark region of the other, for instance, and vice-versa.

Joint histograms of an image against itself on the top row, and against spatially shifted versions of itself on the other three rows. See text for details.

The third row in our figure illustrates the effect of a rotation of 15 degrees and the bottom row shows the combined effect of shifting and rotation. The basic lesson to learn from this is that perfect alignment between two identical images is indicated by a straight diagonal line in the joint histogram and lack of alignment results in a form of scatter plot. The major lesson to learn is that when two images are out of alignment, statistical techniques can be applied which endeavour to minimize the scatter in the joint histogram – and hence effect a spatial registration of the two images.
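
In code, a joint histogram is simply a two-dimensional histogram over co-located pixel-value pairs. A minimal NumPy sketch follows, with a random array standing in for the actual scan data:

```python
# Minimal sketch: the joint histogram of two equally-sized images is a
# 2D histogram of co-located pixel-value pairs.
import numpy as np

def joint_histogram(image_a, image_b, bins=64):
    hist, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    return hist

# Comparing an image with itself: all counts fall on the diagonal,
# the signature of perfect alignment described above.
image = np.random.rand(256, 256)
h = joint_histogram(image, image)
print("off-diagonal counts:", h.sum() - np.trace(h))   # prints 0.0
```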

The situation is a bit more complicated when the two images to be registered are acquired using different imaging modalities, e.g. a nuclear medicine scan and a radiograph, because their individual histograms are likely to be substantially different from each other – as we've seen in our earlier figure. Nevertheless, methods to minimize the scatter plot in the resultant joint histogram can be used to register the two images, as illustrated in the following figure:

Joint histograms, on the top row for a non-registered bone scan and a radiograph and, on the bottom row, following function minimization.

Compatibility issues between the two images for this registration process were addressed by first converting the radiograph from 2920 x 2920 x 12-bit resolution to 256 x 256 x 8-bits. The top row of the figure illustrates the situation and the joint histogram shows significant scattering in this data, as might be expected. The bottom row illustrates the outcome of a Mutual Information (MI) maximization process where a solution was found that involved shifting, rotating and magnifying the bone scan. While the joint histogram still depicts substantial scattering, the MI index can be seen to increase from 0.17 in the non-registered situation to 0.63 following registration, and the overlay image depicts the lesion in the bone scan registered to (or colocalized with, as is sometimes said) the radiograph.
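
The Mutual Information index itself can be computed directly from the joint histogram. A minimal NumPy sketch follows; note that the 0.17 and 0.63 values quoted above come from the study data, not from this code, and that a registration routine would search for the CTF parameters that maximize this quantity.

```python
# Minimal sketch: Mutual Information (in bits) from a joint histogram.
# A registration routine maximizes this over the CTF parameters.
import numpy as np

def mutual_information(joint_hist):
    p_xy = joint_hist / joint_hist.sum()    # joint probabilities
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal for one image
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal for the other
    nz = p_xy > 0                           # avoid log(0)
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))
```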

This type of image registration can ideally be generated automatically using a computer. An iterative process is generally followed, where the MI indicator is maximized initially for low resolution versions of the two images and then progressively for increasingly higher resolutions. Note however that reducing the resolution of the radiograph can substantially affect its spatial quality and that, while registration may be effected at this lower resolution, the resultant CTF can be used with appropriate magnification to register the bone scan with the full resolution radiograph – as illustrated below:

The bone scan registered with the radiograph, where a yellow CLUT has been used for the radiograph and a red/white CLUT for the bone scan data.

Other image similarity measures besides Mutual Information can also be applied depending on the nature of the data in the two images. These include:

  • Count Difference Minimization: where the sum of the absolute count differences between all pixels is minimized.
  • Shape Difference Minimization: where segmentation techniques are used to define the borders of the object to be registered in the two images and a similarity measure based on the distance between these borders is minimized.
  • Sign Change Maximization: which maximizes the number of positive/negative sign changes following subtraction of shifted versions of one image relative to the reference image.
  • Image Variance Minimization: which minimizes the statistical variance between two images.
  • Square Root Minimization: which minimizes the square root of the absolute count differences between all pixels in the two images.
  • Gradient Matching: which is based on comparing edges in the two images.

The technique chosen for an individual pair of images is primarily dependent on the nature of the image data. Some techniques may be readily applied to intra-patient, intra-modality studies where, for instance, bone scans of the same region of the same patient acquired some time apart are compared – in a follow-up study for example. Other techniques may be required for inter-modality comparisons, e.g. a nuclear medicine and an MRI scan, while others still for inter-patient, intra-modality comparisons – where a patient's images are compared with those in an atlas of normal and diseased conditions, for instance.

Finally, it is of relevance to note that while spatial registration techniques were introduced here using 2D images, the approach can also be readily extended to the 3D case through comparisons of statistical features in voxel data, e.g. registering SPECT and CT scans, or PET and MRI scans. Here registration of individual slices can be applied sequentially through the volume of interest.

Image Segmentation

Many forms of image analysis require the identification of structures and objects within an image. Image segmentation is the process of partitioning an image into distinct regions by grouping together pixels which belong to the same object. Two general approaches have been developed:

  • Threshold Definition: where some property of an image is compared to a fixed or variable threshold on a pixel-by-pixel basis. A simple example is a grey-level threshold where a look-up table (LUT) of the form illustrated in the left hand panel of the figure below is applied, and where the value of the threshold, T, can be adjusted interactively.
A threshold look-up table on the left and thresholding a bimodal distribution on the right.
This is a useful technique when the image contains a single, well-defined object or group of objects of similar pixel value superimposed on a background with a substantially different pixel value. However, difficulties can arise with grey-level thresholding when objects are close together, e.g. cardiac chambers. Histogram analysis can be used as an alternative where pixel values are thresholded on the basis of their frequency of occurrence, as illustrated in the right hand panel of the figure above. Other alternatives include thresholding colours when a CLUT is applied, monitoring the time of arrival of a tracer or contrast medium in a region of an image and analysis of the variation in pixel values in the neighbourhood of a pixel within an object of interest.
  • Region Growth: which exploits two characteristics of imaged objects:
  1. pixels for an object tend to have similar pixel values, and
  2. pixels for the same object are contiguous.
A common technique is based on firstly defining a starting pixel in the object and then testing neighbouring pixels on the basis of a specific criterion for addition to a growing region. This criterion could be based on pixel value considerations, as in the following figure for instance, or on the anticipated size or shape of the object.
Region Growth: The object of interest in a CT scan is identified in the top left panel. A pixel value range for this object is specified to highlight the resultant region in the top right panel, to identify the borders of the region in the bottom left panel or to extract the object into another image in the bottom right panel.
Note that this approach can be extended to grow regions in three dimensions when the image data consists of a set of contiguous tomographic slices. Minimal code sketches of both segmentation approaches are given after this list.
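
The promised sketches follow, using NumPy and SciPy: a grey-level threshold, and a simple region growth implemented by labelling contiguous in-range pixels and keeping the group that contains a seed pixel. The threshold, pixel-value range and seed location are illustrative assumptions.

```python
# Minimal sketch: grey-level thresholding and seed-based region growth.
import numpy as np
from scipy.ndimage import label

def threshold_segment(image, T):
    """Pixels at or above the threshold T belong to the object."""
    return image >= T

def region_grow(image, seed, low, high):
    """Keep the contiguous group of in-range pixels containing seed.
    (Assumes the seed itself lies within the pixel-value range.)"""
    in_range = (image >= low) & (image <= high)
    labelled, _ = label(in_range)       # number each contiguous region
    return labelled == labelled[seed]   # the region holding the seed

image = np.random.rand(128, 128)        # stand-in for a CT slice
object_mask = threshold_segment(image, T=0.8)
region_mask = region_grow(image, seed=(64, 64), low=0.2, high=0.9)
```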

Image Fusion

A method of combining image data to form a fusion display is required once images have been registered. A simple approach is to add the two images. Multiplying them is also an option. However, this form of image fusion tends to obscure the underlying anatomy when hot spots exist in the nuclear medicine data, as illustrated in the following figure:

A bone scan is combined with a radiograph through addition on the left, and multiplication on the right.

A second approach is to interlace the two images so that a fusion display is built up using alternate pixels, alternate pixel groups or alternate lines of data from each image, as illustrated in the following figure:

A bone scan is combined with a radiograph using interlaced vertical lines on the left, interlaced alternate pixels in the middle and tiling on the right.

However, such interlacing can highlight features associated with the interlacing process itself, as illustrated by the vertical lines in the left panel of the figure above. The so-called checkerboard display, as illustrated in the right panel above, is a related technique where alternate groups of pixels are displayed in a tiled fashion.
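
In code, a checkerboard display amounts to selecting pixels from one image or the other according to the parity of their tile. A minimal NumPy sketch, where the tile size is an illustrative choice:

```python
# Minimal sketch: checkerboard fusion of two registered images.
import numpy as np

def checkerboard_fuse(image1, image2, tile=16):
    rows, cols = np.indices(image1.shape)
    # Alternate tiles: even tiles show image1, odd tiles show image2.
    use_first = ((rows // tile) + (cols // tile)) % 2 == 0
    return np.where(use_first, image1, image2)
```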

A third approach is to use an image compositing technique called Alpha Blending, which uses a transparency value, α, to determine a proportional mixing of the two images, as illustrated in the following figure:

A bone scan is combined with a radiograph using alpha blending, with a transparency value of 0.5 on the left, and 0.2 on the right.

This type of approach is highly developed in the publishing industry and a wide range of fusion options are available. A common one, which was used for the images above, is to apply an equation of the form:

Fused Image = α × Image1 + (1 − α) × Image2

A transparency value of 0.5 was used to generate the image in the left panel above, for instance, with the result that the underlying anatomy can be discerned through the hot spot. A powerful feature of this approach is that the fusion transparency can be varied interactively so as to optimize the data presentation, for instance, or to confirm the quality of the registration process (as illustrated in the right hand panel above).
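
In code, the blending equation above is a single per-pixel operation. A minimal NumPy sketch, with random arrays standing in for the two registered images:

```python
# Minimal sketch: alpha blending of two registered images.
import numpy as np

def alpha_blend(image1, image2, alpha):
    """Fused = alpha * image1 + (1 - alpha) * image2."""
    return alpha * image1 + (1.0 - alpha) * image2

bone_scan = np.random.rand(256, 256)     # stand-in for the bone scan
radiograph = np.random.rand(256, 256)    # stand-in for the radiograph
fused = alpha_blend(bone_scan, radiograph, alpha=0.5)
```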

This blending approach can be extended to incorporate a variable opacity function, where different transparency values are applied to different parts of the grey scale of one image. Note that transparency and opacity are complementary in this context: opacity = 1 − transparency. Example blends are shown in the following figure:

Image blending using four different opacity functions – Linear: a linear opacity function; High-Low-High: a high opacity is used for both small and large pixel values and a low opacity for intermediate pixel values; Low-High-Low: low opacity used for both small and large pixel values and high opacity for intermediate pixel values; Flat: a constant opacity is applied.

The High-Low-High opacity function, for instance, applies a high level of opacity to pixel values at the top and bottom ends of the contrast scale of one of the images and a low opacity to intermediate pixel values. The result is improved visualization of fused data outside of hot spot regions – as illustrated in the top right panel above. The Low-High-Low function has the opposite effect and generates the capability to visualize the relevant anatomical detail with a highlighted region around it – as shown in the bottom left panel above. Logarithmic, exponential and other opacity functions can also be applied, depending on the nature of the two images to be fused.
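
A variable opacity blend replaces the single α with a per-pixel value derived from one image's grey scale. The sketch below uses a parabolic function as an illustrative High-Low-High choice; it is an assumption, not the exact function used for the figure above.

```python
# Minimal sketch: blending with a variable opacity function, where the
# opacity applied to image1 depends on its own (normalized) pixel value.
import numpy as np

def high_low_high(values):
    """High opacity for small and large values, low in between.
    Assumes values are normalized to the range [0, 1]."""
    return 4.0 * (values - 0.5) ** 2     # 1 at the ends, 0 in the middle

def variable_opacity_blend(image1, image2, opacity_fn):
    alpha = opacity_fn(image1)           # per-pixel opacity of image1
    return alpha * image1 + (1.0 - alpha) * image2
```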

The choice of opacity function and CLUT for use in image fusion applications would appear to be an artistic endeavour, in contrast to a scientific or technical one, in that the final result is often achieved through aesthetic judgements about how best to convey the medical information of relevance. Each study can require a reasonably unique combination of image processing steps, depending on whether hot or cold spots exist in the nuclear medicine study and on the nature of the image data in the anatomical study, be it from radiography, X-ray CT, sonography or one of the various forms of magnetic resonance imaging. It is for this reason that computers used for this type of application tend to feature highly intuitive user interfaces with powerful visualization capabilities. OsiriX, for example, runs only on the Macintosh platform for this reason. A second example is provided by a major medical equipment manufacturer which gave the name Leonardo to a product line!

A final method of image fusion that we'll mention briefly is referred to as Selective Integration, where segmentation techniques, for instance, can be used to extract structures from one image so that they can be pasted to relevant regions of a second, spatially-registered image.

We'll conclude this chapter with an example illustrating the 3D alignment and misalignment of the fused study shown in the following image:

Fused SPECT and CT multi-planar reconstructions with an interface which can be used to control their relative 3D alignment.