Radiation in medicine: Origins, risks and aspirations

The use of radiation in medicine is now pervasive and routine. From their crude beginnings 100 years ago, diagnostic radiology, nuclear medicine and radiation therapy have all evolved into advanced techniques, and are regarded as essential tools across all branches and specialties of medicine. The inherent properties of ionizing radiation provide many benefits, but can also cause harm. Its use within medical practice thus involves an informed judgment regarding the risk/benefit ratio. This judgment requires not only medical knowledge, but also an understanding of radiation itself. This work provides a global perspective on radiation risks, exposure and mitigation strategies.


INTRODUCTION
Radiation is a form of energy which travels from a source as waves or as energized particles. At the lower end of the electromagnetic spectrum we find radio waves and microwaves, which are generally considered harmless (Figure 1). Sunlight spans wavelengths from long-wavelength infrared to short-wavelength ultraviolet. Beyond the ultraviolet range, radiation carries so much energy that it can knock electrons out of atoms, in a process known as ionization.
We all experience low doses of ionizing radiation from space, from the air and from rocks and earth around us. When appropriately harnessed, ionizing radiation also has a number of useful applications in medicine, which can increase our exposure. However, in affecting the atoms of living things, this form of radiation poses a health risk, through potential damage to tissue, genes and DNA. Controlled exposure and the risk/benefit equation must therefore always be at the forefront of clinical decision-making. 1

HISTORICAL PERSPECTIVE
The discovery of the x-ray by Wilhelm Roentgen in 1895 was a transformative moment in the history of medicine, for the first time making the inner workings of the body visible without any need to cut into the flesh. 2 Roentgen, a Professor of Physics in Würzburg in Germany, was at the time experimenting with electrical currents through cathode ray tubes (Figure 2). Although the glass tube he was using was covered in thick black cardboard, and the room was completely dark, Roentgen noticed that a nearby screen, covered in barium platinocyanide (a fluorescent material), became illuminated. He quickly realized that this was due to radiation being emitted from his experimental apparatus. Furthermore, a number of different objects could be penetrated by this radiation, and a projected image of his hand on the screen showed a contrast between opaque bones and translucent flesh. One week after his initial discovery, Roentgen replaced the screen with a photographic plate, and x-ray imaging was born. 3 Roentgen began lecturing on his discovery in January 1896, and a few weeks later an x-ray was used in Canada to find a bullet in a patient's leg. Within a year, the world's first Radiology Department was set up at Glasgow Royal Infirmary, and quickly produced images of kidney stones and of a penny lodged in a child's throat. Shortly after, an American physiologist used a similar system to trace food moving through the digestive system.
During the 20 years following Roentgen's discovery, x-rays gained increasing popularity, both as a fairground curiosity and as a powerful diagnostic tool in the medical setting. Their use in the treatment of wounded soldiers in the Boer War (1899-1902) and World War 1 (1914-18) cemented the place of x-rays at the heart of medical diagnostic practice. Roentgen was awarded the very first Nobel Prize in Physics for his discovery in 1901. 3
Around the same time as Roentgen's work, scientists such as Henri Becquerel and Marie and Pierre Curie were among the first to discover natural radioactivity, whilst investigating the properties of fluorescent minerals. When storing some such minerals (a uranium compound) in a drawer with photographic plates, Becquerel noticed that the latter became exposed, and concluded that this must be due to a highly penetrative type of radiation given off by the mineral itself. 4 As scientists looked at this phenomenon more closely, they discovered that radioactive atoms are naturally unstable, and that in order to become stable they emit particles and/or energy, in a process known as radioactive decay. Polonium and radium were discovered by the Curies over this period. Radium would become particularly important as a source of gamma rays, first extensively used in industrial radiography during the US Navy's ship-building program in World War 2. By 1946, cobalt and iridium had been developed as man-made sources of gamma radiation for industry. Since these were cheaper to produce and more powerful than radium, they quickly replaced it in all industrial applications. 5
The widespread and unrestrained use of x-rays and other radiation technologies in their early years inevitably led to serious injuries. It took time to establish a direct link between radiation exposure and such injuries, however, due to the slow onset of many conditions and to a lack of understanding.
Thomas Edison, Nikola Tesla and William J Morton all reported eye irritation as a common symptom from their experimentation with x-rays and fluorescent materials, but it would be many years before the science of radiation protection, or 'Health Physics' as it is known today, properly took hold. 6

TYPES OF IONIZING RADIATION
The major types of ionizing radiation emitted during radioactive decay are alpha particles, beta particles and gamma rays (Figures 3-4). Other types, such as x-rays, can be either naturally occurring or machine-produced. 7

Figure 2. Wilhelm Roentgen (the first person to discover the potential for using electromagnetic radiation to create X-ray images) (right). The X-ray of his wife's hand with a wedding ring, the first ever X-ray captured on a photographic plate (1895) (left). 2
Figure 3. Comparison of the penetrating power of the three types of radiation (alpha, beta and gamma). 7
Figure 4. The major types of ionizing radiation emitted during radioactive decay. 7
Figure 5. Alpha particle decay. 8

† Alpha particles
Alpha particles gained particular notoriety during the early days of particle physics, when they were used to bombard a variety of targets. The most celebrated experiments of this kind were conducted by Ernest Rutherford, whose alpha-particle scattering work revealed the nuclear structure of the atom and, in 1917, achieved the first artificial transmutation of one element into another. Consisting of two protons and two neutrons, alpha particles are relatively large in atomic terms (Figure 5). They are generally emitted from the decay of only the heaviest radioactive nuclei, such as uranium, actinium and radium. Although very energetic and highly ionizing, the weight and size of alpha particles mean they lose their energy over relatively short distances, and can easily be stopped by a layer of paper or human skin. As such, 'external' bodily exposure to alpha radiation carries little risk to health. However, if inhaled or ingested, alpha particles can cause highly focused ionization, releasing all their energy across just a few cells and causing severe damage at both the cellular and genetic level. This makes alpha particles possibly the most dangerous form of radiation. 8,9

† Beta particles
Beta particles are small, fast-moving, negatively charged electron-like particles, emitted from an atom's nucleus during radioactive decay (Figure 6).
Beta particle emission occurs when the ratio of neutrons to protons in an atom's nucleus is too high. In such cases, an excess neutron transforms into a proton and an electron; the proton remains in the nucleus whilst the electron is ejected with high energy. Common emitters of beta particles include carbon-14 and strontium-90. Beta particles are more penetrating than alpha particles, but cause less damage because their ionization is spread over a larger area. They can travel further in air, but are easily stopped by a layer of clothing or a thin sheet of aluminium. Some beta particles are capable of penetrating the skin and causing a degree of skin burn, but on the whole, as with alpha particles, ingestion or inhalation remains the principal cause for concern. 10

† Gamma rays
Gamma rays, emitted both in radioactive decay and in nuclear explosions, have the shortest wavelength and the greatest energy of any waves in the electromagnetic spectrum (Figure 7). Unlike alpha and beta particles, which have both energy and mass, gamma rays are pure energy. 11 The penetrative power of gamma rays is such that several inches of a dense material such as lead, or several feet of concrete, are required to stop them. Gamma rays can pass through the whole human body easily, potentially causing severe damage to tissue and DNA. However, their power to kill cells has been successfully harnessed and focused by medical science, in the form of radiation therapy for cancer. 12

† X-rays
Due to their widespread use in the clinical setting, x-rays are familiar to almost everyone. Like gamma rays, x-rays are photons of pure energy, but they are generally less penetrating, due to their lower energy. The two share many basic properties, but are emitted from different parts of the atom: gamma rays from within the nucleus, and x-rays from outside it.
X-rays occur naturally, but can also be produced by machines using electricity, as discovered by Roentgen. Many millions of x-ray machines are in daily use around the world, ranging from medical applications (x-ray and CT scans), used to make detailed images of bones and soft tissue in the body, to airport security screening and industrial inspection and process controls. Medical diagnostic radiology, based on x-rays, is the single largest source of man-made radiation exposure, accounting for over 40% of all radiation exposure for the average American during their lifetime. 13

UNDERSTANDING RADIATION RISKS
Radiation can damage living tissue by changing cellular structure and damaging an organism's DNA.
The amount of damage depends on a number of variables, including the type and quantity of radiation absorbed and its energy. 14 Because radiation damage is done at cellular level, the effect of minor or even moderate exposure may be difficult to detect, and often can be successfully repaired by the body. However, certain types of cells are more sensitive to radiation damage than others, and with greater exposures, cellular recovery might be less successful and cells may turn cancerous. Radiation can kill cells outright, as well as damaging their DNA. This obviously creates a hazard, but also opportunities for medical intervention, if cellular death can be precisely targeted (e.g. in radiation therapy for cancer). 15
Much of our knowledge of the risks of radiation is based on studies of survivors of the atomic bombs at Hiroshima and Nagasaki in Japan at the end of the Second World War. Other studies of radiation industry workers and of people receiving high doses of medical radiation have added to our understanding. Today, radiation ranks among the most thoroughly investigated causes of disease, and more is known about the mechanisms of radiation at the molecular, cellular and organ system levels than for almost any other health stressor. This has allowed health physicists to determine 'safe' levels of radiation for medical, scientific and industrial purposes, ensuring that relative risk does not exceed that associated with other commonly used technologies. 16

How do we quantify radiation?
There are four separate but inter-related quantities for measuring radiation:
- Radioactivity, which refers to the amount of ionizing radiation released by a material
- Exposure, which measures the amount of radioactivity travelling through the air
- Absorbed dose, which describes the amount of radiation absorbed by an object or person
- Effective dose, which combines the absorbed dose with the biological effect of that type of radiation
The absorbed dose is calculated as the total radiation energy absorbed (Joules) per unit of mass (kg) in the affected tissue or organ. The most common unit of measure for this is the Gray (Gy), where one Gray is equivalent to one Joule per kilogram. For beta and gamma radiation, the effective dose (expressed in Sievert, or Sv) is numerically equivalent to the absorbed dose. For alpha radiation, however, which is more damaging to the body, the effective dose is greater. 17

How are the effects of radiation classified?
The biological effects observed in irradiated persons fall into one of two categories: deterministic, due largely to a "kill" effect on cells, and stochastic, related to mutations which may result in effects over time, such as cancer or hereditary mutations.
A) Deterministic effects, such as skin necrosis and cataract, have a practical threshold dose below which effects are negligible or not evident, but as a general rule, severity of the effects increases with the radiation dose. The threshold dose is not an absolute number, but can vary between individuals.
B) Stochastic effects, such as cancers and hereditary mutations, show a much weaker relationship between dose and severity of effect. Stochastic injuries occur when there is injury to the DNA backbone that fails to heal adequately. 18 A single x-ray photon may cause this effect; however, the risk of acquiring such an injury increases with dose/exposure (the linear no-threshold hypothesis). Stochastic risk is particularly challenging to address given its delayed and cumulative effect, the lack of a "safe" threshold dose, and the absence of a reliable biomarker. 19
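The unit relationships described above can be made concrete with a short sketch. This is an illustrative simplification, not drawn from the article: the weighting factors used (1 for photons and electrons, 20 for alpha particles) follow ICRP Publication 103 conventions, and strictly speaking applying them yields the equivalent dose, with effective dose adding a further tissue-weighting step that the summary above omits.

```python
# Sketch: converting absorbed dose (Gy, i.e. J/kg) to a weighted dose (Sv)
# using radiation weighting factors. Illustrative only, not a dosimetry tool.

# ICRP-style radiation weighting factors (assumed values, not from the article):
# photons (x-ray/gamma) and electrons (beta) -> 1; alpha particles -> 20.
W_R = {"x-ray": 1, "gamma": 1, "beta": 1, "alpha": 20}

def effective_dose_sv(absorbed_dose_gy: float, radiation_type: str) -> float:
    """Weight an absorbed dose (Gy) by radiation type to give a dose in Sv."""
    return absorbed_dose_gy * W_R[radiation_type]

# For beta and gamma radiation the two quantities coincide, as stated in the text:
print(effective_dose_sv(0.001, "gamma"))  # 1 mGy absorbed -> 1 mSv
# For alpha radiation the weighted dose is 20 times higher:
print(effective_dose_sv(0.001, "alpha"))  # 1 mGy absorbed -> 20 mSv
```

This single multiplication is why an identical absorbed dose from inhaled alpha emitters (such as radon progeny) is treated as far more harmful than the same dose from x-rays.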

SOURCES OF RADIATION
Lifetime exposure to radiation comes from a variety of sources, both natural and man-made.
Naturally occurring (background) radiation
Almost half of the radiation we are exposed to comes from the environment around us. Many elements found in the earth's crust are radioactive, including uranium, radium, polonium, thorium and potassium. Levels of exposure depend on the make-up of the local soil and rocks. Another natural source is cosmic radiation: Earth is constantly exposed to radiation created by processes occurring in the sun, other stars and throughout the Universe.
Perhaps the most damaging source of natural radiation is radon, a tasteless, colorless, odorless gas produced by the decay of radium, an element present in nearly all rocks and soils. Radon gas seeps into buildings through cracks and other openings in floors and walls. Since radon gas emits alpha particles, accumulated radon within buildings can pose a serious health hazard via inhalation. Radon causes an estimated 20,000 lung cancer deaths per year in the United States, and is second only to smoking as a cause of lung cancer death. Smokers living in a home with high radon levels are particularly at risk. 20

Radiation in medicine
In countries with a developed clinical sector, up to a further 50% of our radiation exposure can be attributed to medical sources (Figure 8). Most of this comes from the use of standard x-ray and CT scan technology to diagnose injuries and disease. Other procedures, such as radiation therapy, also use radiation to treat patients. 21

GENERAL PRINCIPLES FOR MINIMIZING RADIATION RISK IN MEDICAL USE
The most effective way to reduce patient risk in radiological examinations is through appropriate test performance and through the optimization of radiological protection for the patient. These are primarily the responsibility of the radiologist, the nuclear medicine clinician and the health physicist.
The basic principle of patient protection requires that procedures should seek to achieve diagnostic information of satisfactory clinical quality using the lowest reasonably achievable dose. Evidence obtained from a number of countries indicates significant variability in the entrance doses routinely administered to patients (i.e. doses measured at the body surface, where the x-ray beam enters), varying by a factor of 100 in some cases. As most doses in these studies tend to cluster at the lower end of the distribution, entrance doses at the higher end (say, above the 70th or 80th centile) are difficult to justify as adhering to an optimal risk/benefit ratio. 22
A beneficial first step towards radiation risk-reduction for patients is therefore the development of an agreed protocol of diagnostic reference tables of appropriate radiation doses for different procedures and patient types (e.g. children vs. adults), at an institutional, regional or national level, based on observed international best practice. An initiative of this kind provides not only a valuable learning and guidance tool, but can also assist with quality control, helping to quickly identify institutions or equipment requiring corrective action to reduce patient risk. Measures that strengthen communication, transparency and implementation between radiologists, health physicists and audit teams can also contribute significantly to radiation dose reduction for patients, whilst at the same time improving diagnostic effectiveness. 23,24
As a matter of policy, certain procedures should be phased out as better alternatives become available.
For example, the use of fluoroscopy or photofluorography in the screening of tuberculosis in children is no longer indicated (normal radiography is a less harmful alternative for this age group), and more generally, fluoroscopy without electronic image intensification exposes patients to unacceptably high doses of radiation compared to alternatives. Such procedures are currently banned in most developed countries. 25
In parallel, the use of fluoroscopically-guided interventional procedures has increased dramatically over the past two decades, and the number and spectrum of such procedures continue to expand across different specialties. Patients (and staff) are generally subjected to significantly higher radiation doses than in diagnostic studies: averaging 15 mSv for a simple coronary intervention and 50 mSv for a complex electrophysiological procedure, equivalent to 750 and 2500 posteroanterior chest x-rays respectively. The direct benefits of these procedures usually outweigh the potential hazards associated with such high doses of radiation. However, even with this favorable risk/benefit ratio, efforts to minimize risk must apply. Quality assurance and improvement programs focusing on minimizing exposure to patients and staff, continuous education, dose monitoring, proper use of equipment and protective garments/shields, and adherence to radiation safety guidelines issued by various professional societies, cannot be overemphasized. 26
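The chest x-ray equivalences quoted above are simple ratios against the dose of a single posteroanterior chest film. A minimal sketch, assuming the ~0.02 mSv per-film dose implied by the article's own figures (15 mSv -> 750 films); actual per-film doses vary with equipment and technique:

```python
# Express a procedure dose as an "equivalent number of PA chest x-rays".
# The 0.02 mSv per-film figure is back-calculated from the article's own
# numbers (15 mSv -> 750 films), not a measured reference value.
CHEST_XRAY_MSV = 0.02

def chest_xray_equivalents(procedure_dose_msv: float) -> int:
    """Number of PA chest films delivering the same effective dose."""
    return round(procedure_dose_msv / CHEST_XRAY_MSV)

print(chest_xray_equivalents(15))  # simple coronary intervention -> 750
print(chest_xray_equivalents(50))  # complex electrophysiological procedure -> 2500
```

Framing interventional doses in familiar chest-film units is a common way to communicate risk to patients and referrers.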

Radiation Risks and Children
Radiation control is a concern for both adults and children. However, with regard to children and fetuses, three unique considerations apply, which must inform our actions:
1. Children are considerably more sensitive to radiation, as demonstrated in numerous epidemiological studies of exposed populations.
2. Children have a longer life expectancy than adults, resulting in a longer window of opportunity for radiation damage to be expressed.
3. Children may receive a higher dose of radiation than necessary, if equipment settings and dosages are not adjusted for their smaller body size.
Radiation-induced malformations or intellectual impairment, either in the developing fetus or in children, are extremely unlikely at the doses used in normal diagnostic radiology or nuclear medicine procedures. However, a small but significant risk of cancer induction does exist, and must be borne in mind even at typical diagnostic levels of radiation (<50 mGy). The risk of developing radiation-related cancers can be several times higher for a young child than for an adult undergoing similar diagnostic or interventional procedures. Radiation dose reduction must therefore be a priority goal, particularly for procedures carried out on children or in pregnancy. In pediatric use, dose reduction is achieved in practice principally through technical factors specific to children. In nuclear medicine, the smaller size of children means that acceptable images can be achieved using smaller administered doses than for adults, whilst in diagnostic radiology, particular care must be exercised in ensuring that radiation is focused as narrowly as possible on the specific area of interest. 27

Reducing fetal radiation in pregnancy
Before a diagnostic procedure is performed on a female patient of child-bearing age, it is important to determine whether she may be pregnant, and if so, whether the fetus would be in the primary radiation area, and whether the procedure might involve a relatively high dose (e.g. barium enema or pelvic CT scan). Medically-indicated diagnostic studies remote from the fetus (e.g. x-rays of the chest or extremities, lung scans) can be safely carried out at any time during pregnancy, provided the equipment is in good working order. Commonly, the benefit of making an informed diagnosis outweighs the radiation risk in such cases.
If an examination is at the higher end of diagnostic dosing, and the fetus is either in or near the radiation beam, the risk/benefit equation requires doses and procedures to be minimized as much as possible whilst still retaining sufficiency for effective diagnosis. This can be done by tailoring the examination to minimize the number of radiographs required, or, in the case of nuclear medicine, by encouraging maternal hydration and rapid voiding of radiopharmaceuticals through the urinary tract to reduce fetal exposure. 28

Radiation risk and CT (computed tomography) use in pediatrics
CT can be a life-saving tool for diagnosing illness and injury in children. Between 5 and 9 million CT examinations are performed on children annually in the United States alone, and use of this procedure is increasing steadily, both due to its utility in common diseases and because of technical innovation.
Yet despite its many clear advantages, CT also poses a major disadvantage in terms of significant radiation exposure. Despite accounting for only 12% of diagnostic radiological procedures in the USA, CT scans deliver around 49% of the US population's collective radiation absorption from medical procedures as a whole. 29
The first study to directly assess the risk of childhood cancer following CT scans found a clear dose-response relationship for both leukemia and brain tumors, with risk growing alongside increased cumulative radiation absorption. A cumulative dose of around 50-60 mGy to the head was found to triple the likelihood of brain tumors in children. Likewise, exposing bone marrow to a similar dose of radiation was found to increase the risk of leukemia by the same amount. In both cases, comparison was made with a control group having a cumulative radiation absorption of less than 5 mGy to the relevant regions of the body. These findings mirrored estimates from studies following the atomic bomb explosions in Japan. 30
The number of CT scans required to reach a cumulative threshold of 50-60 mGy depends on the equipment used, the age and size of the patient, and the scanner settings themselves. On typical current settings for pediatric CT, two to three head scans are sufficient to expose the brain to this level of cumulative radiation. In the case of bone marrow, this threshold is reached at between 5 and 10 procedures. The above is based on accepted US scanner settings for the <15 age group.
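The scan counts above follow from dividing the cumulative threshold by a per-scan organ dose. A minimal sketch, using per-scan doses back-calculated from the article's own statements (roughly 20 mGy to the brain per head CT; the 7 mGy marrow figure is an assumed mid-range value), not measured data:

```python
import math

# How many scans reach a cumulative organ-dose threshold?
# Per-scan doses below are illustrative assumptions inferred from the text
# (2-3 head CTs reaching ~50-60 mGy to the brain), not measured values.
def scans_to_threshold(threshold_mgy: float, dose_per_scan_mgy: float) -> int:
    """Smallest whole number of scans whose cumulative dose meets the threshold."""
    return math.ceil(threshold_mgy / dose_per_scan_mgy)

print(scans_to_threshold(50, 20))  # ~3 head CTs to reach 50 mGy to the brain
print(scans_to_threshold(50, 7))   # ~8 CTs to reach 50 mGy to bone marrow
```

The point of the arithmetic is that these thresholds sit within the range of repeat imaging a chronically ill child can plausibly accumulate.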
Despite these findings, it is important to stress that the absolute cancer risks associated with CT scans are small. The absolute lifetime risk, as estimated in the literature, is about 1 case of cancer per 1000 CT scans performed, with a maximum incidence of 1 in 500 patients scanned. Strong justification therefore exists for the continued use of CT scanning in pediatrics. However, once again, a careful assessment of the risk/benefit equation remains paramount, as does a commitment to reducing patient exposure to medical radiation to the minimum necessary to obtain results. 31
Where CT is used in pediatric settings, several immediate steps and long-term strategies can be put in place to help safeguard patient safety. From a process perspective, specialists should:
1. Minimize the use of ionizing-radiation-based procedures like CT on children, opting for non-ionizing options such as ultrasound or magnetic resonance imaging (MRI) whenever possible
2. Adjust exposure parameters for pediatric CT through the development of size/weight based protocols, and by limiting radiation to the smallest necessary area
3. Adjust settings for pediatric CT to reflect the area being scanned - lower mA and/or kVp settings should be considered for skeletal, lung and some angiographic and follow-up scans
4. Limit scan resolution to 'adequate for diagnosis' - the highest definition images are not always necessary, but expose patients to more radiation
5. Limit the use of multiple scans - usually taken at different phases of contrast enhancement, these are rarely necessary for diagnosis, but considerably increase the radiation dose and risk.
Longer term, we should encourage and strengthen the development and use of specific pediatric CT protocols, and seek to educate practitioners, increase awareness and foster information exchange through publications, conferences and professional associations. In addition, there is a need for continued research to further clarify the relationship between CT radiation and cancer risk, to determine the relationship between CT image quality and dose, and to improve the customization of CT scanning for individual children. 32

Magnetic resonance imaging
An alternative form of imaging, developed over the last 40 years, is magnetic resonance imaging (MRI), or MR as it is often known. This uses radiofrequency radiation from the far left-hand, low-energy end of the electromagnetic spectrum displayed earlier. This radiation is low energy and cannot directly damage tissue or DNA. It should be noted, however, that if enough of this radiation is introduced into the body it can cause tissue heating, which could in turn cause damage; MRI scanners therefore have strict limits on the quantity of radiofrequency radiation delivered. For the vast majority of MRI, the radiofrequency magnetic field is used to excite hydrogen nuclei, which then emit a signal that decays away on a timescale of tens of milliseconds. The signal on MR images depends firstly on the density of hydrogen nuclei (protons) in water- or fat-based tissues, and then on many other factors, including the so-called relaxation times, flow and diffusion. The weighting of these factors can be altered by modifying the MRI sequence, giving MRI great potential for characterising different soft tissues or, for example, measuring blood flow.
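The signal decay described above is conventionally modelled as a T2 exponential, S(t) = S0 * exp(-t/T2). This standard MR relationship, and the ~50 ms soft-tissue T2 used below, are textbook assumptions rather than figures from this article:

```python
import math

# Sketch: transverse MR signal decay after radiofrequency excitation,
# S(t) = S0 * exp(-t / T2). The T2 of ~50 ms is an illustrative
# soft-tissue value (an assumption, not from the article).
def mr_signal(t_ms: float, s0: float = 1.0, t2_ms: float = 50.0) -> float:
    """Relative transverse signal remaining t_ms after excitation."""
    return s0 * math.exp(-t_ms / t2_ms)

# After one T2 period the signal has fallen to ~37% of its initial value,
# consistent with decay over "tens of milliseconds".
print(round(mr_signal(50), 3))
```

Because different tissues have different T1 and T2 values, choosing when to sample this decaying signal is one of the levers a sequence designer uses to weight the image contrast.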
In cardiovascular MR, sequences have been developed for a wide range of applications, including cine imaging for the measurement of cardiac function, various methods of characterising the myocardium and identifying damaged tissue within it, measurement of myocardial perfusion, measurement of bulk blood flow and flow patterns in the heart and blood vessels, and angiographic imaging of the vasculature.
Potentially, one of the most important applications is the ability of MR to image and characterise disease in the vessel wall, as this could enable detection of cardiovascular disease at a much earlier stage than at present. Imaging the vessel wall is particularly challenging in the coronary arteries, which move not only during the cardiac cycle but also during the respiratory cycle. This is now possible, however, using motion tracking techniques such as those described by Scott et al. 33 As an example, Figure 9 illustrates a single slice of a 3D stack of a right coronary artery, through plane (left) and in plane (right).
Another exciting development is diffusion tensor imaging (DTI), which has the potential to investigate the micro-structural architecture of myocardial tissue and to measure changes brought about by disease. Measurement of the diffusion tensor is particularly challenging in the heart, due to its movement during the cardiac and respiratory cycles. Recently, however, methods have been developed that provide reproducible measures of DTI parameters. 34 Figure 10 illustrates, using a colour scale, myocardial fibre helix angle maps from a mid-ventricular short-axis slice. The characteristic transition from left-handed to right-handed helix is seen across the myocardial wall from epi- to endocardium.

CONCLUSIONS
The discovery of X-rays at the end of the 19th century was undoubtedly a significant milestone in the development of clinical practice. Advances in radiation based diagnostic and interventional procedures since then have brought about huge benefit for patients.
However, exposure to increased levels of radiation does carry some significant risks. Considerable effort has, over the past 100 years, been devoted by the research community to better understand and quantify these risks. Yet a number of studies have shown that this knowledge has not been consistently applied in practice within the medical community, resulting in huge variations in the levels of radiation absorption to which patients are exposed for comparable procedures.

Figure 9. A single slice of a 3D MRI stack of a right coronary artery, through plane (left) and in plane (right). 33
A guiding principle for clinicians must be the minimization of risk for patients during radiological procedures, balanced against the need for good quality results. Particular care must also be taken in relation to children.
At the policy level there is a need to develop standardized reference tables for acceptable radiation levels administered in the major clinical applications, both for adult and pediatric groups. These should be based on gold-standard practices as observed from international studies and experience.
In terms of implementation, a more rigorous application of quality control based on these standards, linked both to capacity building and to corrective measures, could significantly improve patient safety in this context. Principles of radiation physics and biology, radiation safety, and measures to minimize exposure should continue to be mandatory components in the training and certification of all health-care professionals dealing with radiation.
It is generally accepted that from a risk perspective, there is no such thing as an absolutely 'safe' dose of radiation. However, bearing in mind the huge potential benefit to patients in both diagnostic and interventional settings, our focus must remain on the risk/benefit equation, and on ensuring that we continue to reduce the former whilst growing the latter, through continued improvement efforts in technology, in policy and practice.