3.2 The Origins of Mammography Imaging – Equipment and Rudiments of Technique

Concerns about breast disease were reported long before the discovery of X-rays (Wentz & Parsons, 1997). The first reported attempt to use X-rays for breast tissue imaging was by Salomon, a German surgeon, in 1913. He imaged 3000 mastectomy specimens, observing the close correlation between the radiographic and pathologic abnormalities of the breast tissue under investigation. He also described the radiographic appearance of malignant breast lesions. However, mammography only began to be performed on patients from the mid to late 1920s, in Europe, the United States, and South America, where it helped to explain many breast abnormalities (Vyborny & Schmidt, 1989). The radiographic appearance of benign breast lesions, and the features distinguishing them from breast carcinomas, was reported by Walter Vogel (1931, as cited in Gold, Bassett, & Widoff, 1990). In the same year, Seabold (1931) documented his findings about breast disease detected by radiography. Work by Gershon-Cohen and Strickler (1938) described the normal radiographic appearances of breast tissue at different ages and across a range of menstrual conditions. The diagnostic information available from early mammograms was restricted by the technical limitations of the mammography equipment of this period. This compelled many researchers to use contrast agents such as gases for pneumocystography or iodinated compounds for ductography, and many studies reported the adverse effects of such agents (Romano & McFetridge, 1938). Addressing these limitations therefore required developments in mammography technology (Vyborny & Schmidt, 1989).

Early mammograms were performed using conventional radiography machines with tungsten anode tubes. Most of these tubes had a minimum tube voltage of 50 kVp, with some going down to 40 kVp (Law, 2006). Warren (1930) developed a stereoscopic system for in vivo breast tumour identification. He utilised double-sided emulsion film and dual high-speed intensifying screens, with exposure factors of 60 kVp, 70 mA, and a 2.5 s exposure time. Using his system, Warren (1930) investigated breast cancer in 119 preoperative patients and found that mammographic false positives and false negatives occurred in 6.7% of the examined patients. Following on from this work, many investigators started to develop mammographic techniques to improve the quality of the images.

The correlation between radiographic calcification and breast cancer was first documented in a Spanish-language article by Leborgne in 1949 (as cited in Gold, Bassett, & Widoff, 1990). The same report also highlighted the importance of breast compression for the visibility of calcifications in mammograms. Another study published by Leborgne (1951) investigated the radiographic appearance of palpable breast cancers. He confirmed that the use of slight compression by a cotton pad placed between the breast and the cone, a conical copper device extending from the X-ray tube window down to the breast to collimate the X-ray beam, improved mammographic image quality. The differing radiographic appearances of benign and malignant breast calcifications were also described by Leborgne (1951). In this study, Leborgne (1951) utilised 30 kV X-rays, non-screen films, a 60 cm focal-film distance, and 5 mAs for each 1 cm of breast thickness. Breast compression, together with good collimation and the use of non-screen films, was further advocated by Ingleby and Gershon-Cohen (1960) as a means of generating high contrast images. Gershon-Cohen also suggested the simultaneous exposure of two non-screen films with a 0.5 mm aluminium layer between them to overcome the non-uniformity of breast thickness. The aluminium attenuates the X-ray photons reaching the lower film; consequently, the upper film, which receives the higher exposure, demonstrates the thicker juxtathoracic portion of the breast, while the thinner peripheral portion is demonstrated by the lower, less exposed film (Gold, Bassett, & Widoff, 1990).
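Leborgne's exposure rule and Gershon-Cohen's aluminium-separated dual-film arrangement can both be illustrated with a short numerical sketch. The Python snippet below applies the 5 mAs-per-centimetre rule quoted above and estimates how much of the beam a 0.5 mm aluminium layer transmits; the attenuation coefficient and the function names are illustrative assumptions, not figures taken from the cited studies.

```python
import math

def leborgne_mas(breast_thickness_cm: float, mas_per_cm: float = 5.0) -> float:
    """Tube loading (mAs) from Leborgne's rule: 5 mAs for each 1 cm of breast thickness."""
    return mas_per_cm * breast_thickness_cm

def aluminium_transmission(thickness_mm: float = 0.5, mu_per_cm: float = 9.3) -> float:
    """Fraction of a narrow X-ray beam transmitted through an aluminium sheet.

    mu_per_cm is an assumed linear attenuation coefficient for aluminium near
    20 keV (roughly 3.4 cm^2/g x 2.7 g/cm^3); the value is illustrative only.
    """
    return math.exp(-mu_per_cm * thickness_mm / 10.0)

if __name__ == "__main__":
    print(f"6 cm compressed breast -> {leborgne_mas(6):.0f} mAs")
    print(f"0.5 mm Al transmits ~{aluminium_transmission():.0%} of the beam, "
          "so the lower film receives the smaller exposure")
```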

3.2.1 Early Developments of the Image Receptor

High quality breast images were produced by Egan (1960), who used high resolution industrial films (without an intensifying screen) with a high tube current (300 mA), a 6 second exposure time, and 26-28 kV X-rays. Egan (1960) also placed a lead shield under the film holder to protect the gonads from radiation. The industrial films were supplied in envelopes rather than cassettes. The main advantage of these industrial non-screen films was that they produced fine detail with relatively lower patient exposure at the breast surface compared to conventional film-screen systems. The processing of industrial films was achieved by conventional wet (manual) techniques. In order to obtain more detailed mammograms, some centres used two films of different speeds in the same envelope. The low speed film demonstrated information within radiolucent areas, while the high speed film demonstrated information within denser areas (Law, 2006).

Higher quality breast images, benefiting from edge enhancement, were produced in the 1960s using xeromammography. Gould, Ruzicka, Sanchez-Ubeda, and Perez (1960) reported the superiority of xeromammography over industrial non-screen film with regard to mammographic image quality (Gold et al., 1990; Odle, 2004). In xeromammography, a thin sheet of photoconductive amorphous selenium contained within a lightproof cassette was used for image recording. After exposure, the breast image information was recorded on the charged selenium plate as a latent image, which became visible once it was dusted with thermoplastic powder. Plastic-coated paper was then used for permanent recording of the image (Assiamah, 2004). The main advantage of xeromammography over direct film mammography was the possibility of obtaining more acceptable images using conventional radiography tubes with tungsten/aluminium target/filter combinations at conventional kilovoltages (~50 kVp) (Huda, Nickoloff, & Boone, 2008; Vyborny & Schmidt, 1989). In addition to its use with conventional radiography machines, xeromammography could also be used with dedicated mammographic machines employing molybdenum/aluminium target/filter combinations. However, xeromammography was replaced by the more efficient film-screen mammography in 1990 (Odle, 2004).

Double-sided emulsion films with dual intensifying screens had been used in conventional radiography to record the image. In such systems the X-ray photons are absorbed more efficiently by the intensifying screens than by the film, and are then converted into light photons which produce the image on the film (Vyborny & Schmidt, 1989). This process has the advantage of reducing patient dose by about 8 times, but it has the disadvantage of increasing image unsharpness (Huda et al., 2008). In order to reduce image unsharpness in mammography, single-sided emulsion film with a single thin intensifying screen is used, because a thinner phosphor layer produces less image unsharpness. In the mammographic energy range the photoelectric cross-sections of the intensifying screen phosphor are very high, resulting in efficient X-ray photon recording even with a thin phosphor layer (Vyborny & Schmidt, 1989). Since each screen emits light in a narrow band of wavelengths, the film had to be sensitive to those wavelengths. Both the screen and the wavelength-matched film were manually placed inside black plastic bags or envelopes in a darkroom. To achieve firm contact between them, the bag or envelope had to be vacuum sealed. This evacuation was performed either manually or automatically. Manual evacuation was achieved through an opening fitted with a nozzle in one corner of the bag and a special hand pump; in this case the bags were reusable. Automatic evacuation was performed in a box which heat sealed the envelope after the air was evacuated; such bags could be reused 3-5 times. This process was used until the late 1980s, when mammography cassettes were introduced (Law, 2006).

3.2.2 Dedicated Mammography Machine Development

Throughout the development of mammography, the key to producing the required high quality mammograms at an acceptable radiation dose was the introduction of dedicated mammography machines (Odle, 2004). The earliest dedicated mammography machine was the Senograph, developed and tested in 1965 by Charles Gros in collaboration with Compagnie Generale de Radiographie (CGR) (Steen & Tiggelen, 2007). The Senograph was subsequently marketed by CGR from 1967 (Nass, Henderson, & Lashof, 2001). It was the first commercial mammography system with a molybdenum/molybdenum target/filter combination (Vyborny & Schmidt, 1989). For mammographic imaging, the useful part of the molybdenum X-ray beam is the characteristic radiation of molybdenum, because its K-shell characteristic radiations have energies close to 20 keV (approximately 17.5 and 19.6 keV). For a tungsten anode, the continuous X-ray beam between 10 and 20 keV is the useful portion for breast imaging, because the characteristic radiations of tungsten fall outside this range, with the K-shell lines near the 69 keV K-edge and the L-shell lines at around 9 keV (Law, 2006). The use of molybdenum target tubes produced lower energy radiation with a narrower energy range than that of conventional radiography tubes, resulting in optimum image contrast (Vyborny & Schmidt, 1989).
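The contrast between molybdenum and tungsten anodes described above can be made concrete with a small sketch. The line energies below are rounded textbook values, and the "useful" 17-25 keV window is an assumption chosen for illustration; neither is taken from the cited sources.

```python
# Characteristic X-ray line energies (keV) for molybdenum and tungsten anodes,
# rounded textbook values, compared against an assumed "useful" mammographic
# window of roughly 17-25 keV (illustrative bounds, not from the cited sources).
CHARACTERISTIC_LINES_KEV = {
    "Mo": {"K-alpha": 17.5, "K-beta": 19.6},
    "W": {"L-alpha": 8.4, "K-alpha": 59.3, "K-beta": 67.2},
}
USEFUL_WINDOW_KEV = (17.0, 25.0)

def lines_in_window(anode: str) -> list:
    """Names of the characteristic lines that fall inside the useful window."""
    lo, hi = USEFUL_WINDOW_KEV
    return [name for name, energy in CHARACTERISTIC_LINES_KEV[anode].items()
            if lo <= energy <= hi]

for anode in ("Mo", "W"):
    print(anode, "lines inside the window:", lines_in_window(anode) or "none")
# Mo's K lines fall inside the window, whereas W contributes only continuous
# (bremsstrahlung) radiation there - the point made in the paragraph above.
```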

The nominal focal spot size of the Senograph tube was 0.7 mm. This focal spot size reduced geometric unsharpness, which in turn improved mammographic image quality by increasing the contrast between glandular tissue, fat, and calcifications (Gold et al., 1990). The X-ray tube stand and film holder of the Senograph were designed to facilitate optimum patient positioning during the examination (Vyborny & Schmidt, 1989). Moreover, a copper cone extending from the X-ray tube down to the breast was provided with the Senograph. This cone offered advantages in scatter radiation reduction and also helped in breast localisation. Cones of different shapes and sizes, semicircular and elliptical, were available to accommodate different breast sizes and shapes. In later generations of dedicated mammography machines both the collimation and compression devices were built in. In early dedicated mammography machines the compression device was an inflatable balloon within the cone, which was pumped up after patient positioning (Law, 2006).
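The benefit of a small focal spot can be quantified with the standard geometric unsharpness relation, U = f × OID / SOD (focal spot size multiplied by the object-to-image distance and divided by the source-to-object distance). In the sketch below only the 0.7 mm and 0.2 mm focal spot sizes come from the text; the distances are assumed values chosen for illustration.

```python
def geometric_unsharpness(focal_spot_mm: float,
                          source_to_object_cm: float,
                          object_to_image_cm: float) -> float:
    """Penumbra width (mm) from similar triangles: U = f * OID / SOD."""
    return focal_spot_mm * object_to_image_cm / source_to_object_cm

# The Senograph's 0.7 mm focal spot versus the later 0.2 mm small focus, with
# an assumed 60 cm source-to-object distance and a 5 cm object-to-film gap.
for focal_spot in (0.7, 0.2):
    u = geometric_unsharpness(focal_spot, source_to_object_cm=60, object_to_image_cm=5)
    print(f"focal spot {focal_spot} mm -> geometric unsharpness ~{u:.3f} mm")
```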

3.2.3 Mammography Developments during the 1980s and 1990s

During the 1980s and 1990s mammographic equipment improved greatly. For instance, the nominal focal spot sizes decreased to 0.2 mm and 0.5 mm for the small and large foci respectively (Law, 2006). This reduction in focal spot size increased breast image sharpness by minimising geometric unsharpness (Säbel & Aichinger, 1996). However, the use of such small focal spots increases the thermal loading of the tube, with the potential to damage the anode. Prolonging the exposure time can reduce the thermal loading, but it brings with it an added risk of patient movement; in order to reduce this risk, rotating anode tubes were developed (Bushong, 2013). For additional image quality enhancement, mammography machines employed grids to reduce the scattered radiation reaching the film. However, these grids increased the patient radiation dose by 3 times or more. For mammographic purposes, the grids had 32 lines per inch with a 5:1 grid ratio (Law, 2006). The air gap technique was developed as an alternative to the grid, but it carried a similar, though smaller, dose penalty, increasing the patient dose by 25-30% (Jacobson, 2001). The later development of the magnification technique helped to clarify suspicious areas and microcalcifications without the need for the air gap technique (Säbel & Aichinger, 1996). Since the molybdenum/molybdenum target/filter combination was suitable for average breasts only, other target/filter combinations were introduced, such as molybdenum/rhodium (Mo/Rh), rhodium/rhodium (Rh/Rh), and tungsten/rhodium (W/Rh). The target/filter combination is determined by the compressed breast thickness and the breast composition (density) (Law, 2006). Tubes with dual track anodes have also been produced; one track is molybdenum and the other is either rhodium or tungsten, with the filter selected automatically according to the track material. Automatic exposure control (AEC) was introduced to obtain a constant mean optical density regardless of breast thickness, breast composition, and exposure factors. However, this changed with the introduction of film-screen systems, because mammographic image quality also depends on the X-ray beam (Säbel & Aichinger, 1996). Ionisation chambers or other electronic X-ray detectors, placed beneath the film cassette, were connected into the exposure timing circuit; when the required amount of radiation is reached, the exposure is terminated automatically. Later, post-exposure mAs meters were also added to the AEC (Law, 2006).
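The AEC principle described above, namely integrating the signal from a detector beneath the cassette and cutting the exposure once a preset level corresponding to the desired film density is reached, can be sketched in a few lines. The function, signal values, and target below are all hypothetical; this is a minimal illustration of the control logic, not the circuitry of any particular machine.

```python
def automatic_exposure_control(detector_samples_per_ms, target_signal):
    """Minimal AEC loop: accumulate the detector signal and terminate the
    exposure once a preset target (hypothetical units) is reached.

    detector_samples_per_ms: per-millisecond readings from an ionisation
    chamber or other detector placed beneath the film cassette.
    Returns the exposure time in milliseconds at termination.
    """
    accumulated = 0.0
    for elapsed_ms, sample in enumerate(detector_samples_per_ms, start=1):
        accumulated += sample
        if accumulated >= target_signal:
            return elapsed_ms  # exposure terminated automatically
    return len(detector_samples_per_ms)  # backup timer limit reached

# A thicker or denser breast transmits less radiation per millisecond, so the
# same target is reached later and the exposure time is automatically longer.
thin_breast = [2.0] * 2000   # arbitrary signal units per ms (hypothetical)
dense_breast = [0.5] * 2000
print(automatic_exposure_control(thin_breast, target_signal=500.0))   # 250 ms
print(automatic_exposure_control(dense_breast, target_signal=500.0))  # 1000 ms
```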

Different film-screen assemblies, with different film speeds, were then introduced (Law, 2006). The sensitivity of a film-screen combination is defined by a quantity called the system dose, which represents the air kerma required to produce the specified receptor exposure (Säbel & Aichinger, 1996). Since the tube voltage required for mammography is low (25-35 kVp) compared to general radiographic procedures, dedicated X-ray generators for mammography were introduced with tube voltages down to around 25 kVp, adjustable in steps of 1 kV. Finally, in order to achieve greater patient comfort and better image quality with a lower radiation dose, collimation and compression devices were further developed. Motorised compression devices were introduced, and light-beam diaphragms, consisting of two pairs of adjustable lead shutters mounted behind the X-ray tube window with a light source to illuminate the X-ray field, were used instead of copper cones (Law, 2006; Statkiewicz-Sherer, Visconti, & Ritenour, 2010).
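As a worked illustration of the system dose concept, the sketch below uses the common convention that the speed class of a film-screen combination is roughly 1000 divided by the system dose expressed in microgray; both the convention and the sample dose values are assumptions introduced here for illustration and are not taken from the cited sources.

```python
def speed_class(system_dose_uGy: float) -> float:
    """Approximate screen-film speed class from the system dose.

    Assumes the common convention speed ~ 1000 / K_S, where K_S is the air
    kerma (microgray) needed at the receptor to reach the reference optical
    density; the convention and sample values are illustrative assumptions.
    """
    return 1000.0 / system_dose_uGy

for k_s in (12.5, 25.0, 50.0):  # hypothetical system doses in microgray
    print(f"system dose {k_s:5.1f} uGy -> speed class ~{speed_class(k_s):.0f}")
# A lower system dose (a 'faster' combination) needs less air kerma, and hence
# less patient exposure, to reach the same film density.
```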