Inconsistencies and open questions regarding low-dose health effects of ionizing radiation.

The effects on human health of exposures to ionizing radiation at low doses have long been the subject of dispute. In this paper we focus on open questions regarding the health effects of low-dose exposures that require further investigations. Seemingly contradictory findings of radiation health effects have been reported for the same exposed populations, or inconsistent estimates of radiation risks were found when different populations and exposure conditions were compared. Such discrepancies may be indicative of differences in sensitivities among the applied methods of epidemiological analysis or indicative of significant discrepancies in health consequences after comparable total exposures of different populations under varying conditions. We focus first on inconsistencies and contradictions in presentations of the state of knowledge by different authoritative experts. We then review studies that found positive associations between exposure and risks in dose ranges where traditional notions (generalized primarily from high-dose studies of A-bomb survivors or exposed animals) would have predicted negligible effects. One persistent notion in many reviews of low-dose effects is the hypothesis of reduced biological effectiveness of fractionated low-dose exposures, compared to that of the same acute dose. This assumption is not supported by data on human populations. From studies of populations that live in contaminated areas, more and more evidence is accumulating on unusual rates of various diseases other than radiation-induced malignancies, health effects that are suspected to be associated with relatively low levels of internal exposures originating from radioactive fallout. Such effects include congenital defects, neonatal mortality, stillbirths, and possibly genetically transmitted disease. A range of open questions challenges scientists to test imaginative hypotheses about induction of disease by radiation with novel research strategies.

The state of knowledge of health effects from low-dose exposures to ionizing radiation has recently been reviewed in extensive reports by three prestigious national and international commissions of scientific and medical experts with partially overlapping membership, known by their acronyms UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation) (1), BEIR V (Biological Effects of Ionizing Radiation) (2), and ICRP (International Commission on Radiological Protection) (3). Publication of these reports was followed by a number of summaries in scientific journals, authored by recognized radiation experts, that purport to present a scientific consensus of low-dose effects in a more accessible format for health professionals. A critical comparison between various presentations of accepted views, however, reveals inconsistencies regarding "established" facts and unsettled questions (4).
In 1990 the BEIR V Committee (composed of 17 experts on radiation epidemiology, bioeffects, and risk estimation) issued a more than 400-page report (2) which serves as a widely quoted and prestigious review of low-dose radiation health effects. In the body of this report, the committee acknowledges some critical areas of uncertainty and controversy, particularly with regard to estimates of radiogenic risk pertaining to anthropogenic increases in low-dose exposures above unavoidable natural background levels, both occupational and environmental. Obviously, such estimates are of the greatest importance to guidelines for the protection of public health. Yet, within the BEIR V report, we find inconsistencies between the committee's conclusions, as stated on different pages. Moreover, few of these obviously unresolved questions found their way into the most widely quoted Executive Summary. Subsequent authoritative overviews in scientific journals have not only glossed over some of these inconsistencies in the BEIR V report, but they also present different views of what constitute "well established" and "unproven" aspects of low-dose health effects. We highlight some of these inconsistencies by quoting or paraphrasing statements from the BEIR V report and comparing them with assertions on the same topics from three subsequent journal reviews, all citing BEIR V as a major source. Editorial comments, reflecting on the citations, have been placed in square brackets. In our discussions, "low doses" means the dose range well below 50 cGy.
We select five controversial issues in the debate about protracted low-dose exposures to illustrate our point.
BEIR V
Shape of a dose-effect curve for cancer induction. In several places in its report (2), the BEIR V Committee concurs with the large team of scientists at the Radiation Effects Research Foundation in Hiroshima, Japan, which has collected and analyzed the Life Span Study (LSS) data on A-bomb survivors for decades: after a one-time (acute) exposure, a linear, nonthreshold relation between excess mortality from cancers, except leukemia, and dose gives an excellent fit to the 1950-1985 LSS data, if restricted to doses below 200 cGy. However, BEIR V "recognizes that its risk estimates become more uncertain when applied to very low doses," and the committee concedes rather obliquely that "departures from a linear model at low doses, however, could either increase or decrease the risk per unit dose" (2: 6).
Dose-rate effectiveness factor at low doses. In its report, the BEIR V Committee states: "For low-LET radiation [low linear energy transfer, such as from β and γ radiation], accumulation of the same [total] dose over weeks or months, however, is expected to reduce the lifetime risk appreciably, possibly by a factor of 2 or more" (2: 6). Such a downward correction for linearly extrapolated risk values is called the DREF (dose rate effectiveness factor).
On the next page, however, we read: "While experiments with laboratory animals indicate that the carcinogenic effectiveness per Gy of low-LET radiation is generally reduced at low doses and low dose rates, epidemiological data on the carcinogenic effects of low-LET radiation are restricted largely to the effects of exposures at high dose rates. Continued research is needed, therefore, to quantify the extent to which carcinogenic effectiveness of low-LET radiation may be reduced by fractionation or protraction of exposure." (2: 7)
For decades, findings from animal experiments at high doses have given support to the speculation that the human dose-effect relation for cancer induction is strongly concave if low-dose exposures are accumulated over extended time periods (dose fractionation). Such a relation implies a practically zero-effect threshold at doses of the order of natural background irradiation and a significantly smaller risk per unit dose at lower than at higher doses.
Fifteen pages later, the committee states: "There are scant human data that allow an estimate of the dose rate effectiveness factor (DREF)" (2: 22). Then, in a subsequent section, the report picks up the same topic: "Since the risk models were derived primarily from data on acute exposures . . . the application of these models to continuous low dose-rate exposures requires consideration of the dose rate effectiveness factor (DREF). . . . For the leukemia data, a linear extrapolation indicates that the lifetime risks per unit bone marrow dose may be half as large for continuous low dose rate as for instantaneous high dose rate. For most other cancers in the LSS, the quadratic contribution is nearly zero, and the estimated DREFs are near unity. Nevertheless, the committee judged that some account should be taken of dose rate effects and in Chapter 1 suggests a range of DREFs that may be applicable." (2: 171)
Biological effectiveness of X-rays versus γ-rays. Referring to work by a previous authoritative radiation commission, the International Commission on Radiation Units and Measurements (ICRU) (5), BEIR V states: "Most human exposures to low-LET ionizing radiation are to X-rays, while the A-bomb survivors received low-LET radiation in the form of high energy gamma rays. These are reported to be only half as effective as orthovoltage X-rays. While that is not the conclusion of this Committee, which did not consider this question in detail, it could be argued that since the risk estimates that are presented in this report are derived chiefly (or exclusively) from the Japanese experience they should be doubled as they may be applied to medical, industrial, or other X-ray exposures." (2: 218)
The physical basis for such a possible effect is the roughly fourfold higher ionization density in tissue by medical X-rays than that by high-energy γ-rays (6).
Role offree radicals in tumorigenesis by ionizing radiation. Regarding the role of free radicals in tumorigenesis, the report states: "To the extent that the effects of radiation are mediated by free radicals, which can also mediate the effects of promoting agents, sequential exposures to radiation may serve to promote tumorigenesis through mechanisms similar to those of chemical promoting agents" (2: 139).
The report gives no further consideration to the question of whether radiogenic free radical production, particularly at low doses and low dose rates, could link protracted low-level exposures to various diseases or to immune depression, known to be promoted by these highly reactive chemical species (7).
Radiation hormesis. On page 383 the report states: "Although 'beneficial' effects of radiation have been alleged on the basis of reduced mortality in high background areas in the United States, analyses that include an adjustment for altitude indicate no 'beneficial' effects. . . . This apparently 'beneficial' effect of radiation may, in fact, be an example of confounding . . ." (2: 383)
The first of the three summaries in Table 1 was published in a journal for public health professionals by members of the BEIR V Committee (8). Hence its statements conform largely with the BEIR V report, except for some significant omissions. The other two summaries (9,10) in Table 1 show deviations, as well as omissions, compared to the BEIR V report. They have been directed to physicians and radiologists in general. The usefulness of reviewing unanswered questions after BEIR V for the purpose of identifying new directions for investigations was recently recognized by other researchers in the field (11).
This paper is predicated on the premise that a special focus on unrefuted positive associations of very low-dose exposures with health effects that are inconsistent with long-held notions will suggest unorthodox hypotheses. Testing these hypotheses will require investigations in yet insufficiently explored areas that are likely to reveal a greater-than-expected complexity of interactions between low-dose radiation exposures, other environmental toxics, and disease.
Because of their dominance in shaping prevalent notions about the effects of radiation, we briefly review the findings from the A-bomb survivor study, with particular emphasis on low-dose effects. In subsequent sections we summarize some studies that are pertinent to our above-stated premise.
Follow-up Study of A-bomb Survivors
Evolution of Official Low-Dose Radiation Risk Estimates. Officially adopted radiation risk estimates about health effects of radiation at low doses have been based primarily on extrapolations from the continuing follow-up study of about 90,000 inhabitants of Hiroshima and Nagasaki who had survived the first 5 years after the physical and social devastation caused by the atomic bombs, followed by subsequent climatic hardships. This cohort of 5-year survivors [the Life Span Study (LSS) cohort] was originally divided into eight subcohorts with doses ranging from approximately 0 to over 400 cGy. Until the mid-1970s, cancer mortalities among survivors with exposures below 100 cGy had not shown statistically significant excesses above Japanese national averages, in contrast to findings at higher exposures. To respond to growing demands for occupational and general radiation protection standards, national and international radiation regulatory commissions had to resort to models for downward extrapolation from the well-established high-dose observations to reasonable levels of occupational exposure. Those models were also influenced by data from limited follow-up studies of patients who had received high doses of radiation for therapeutic purposes. By implicitly postulating the existence of a universally valid dose-effect relation and by generalizing from high-dose experiments on much shorter-lived rodents to human response at much lower doses, the ICRP (12), UNSCEAR (13), and BEIR III (14) reports in the late 1970s all concluded either explicitly or implicitly that linear, no-threshold extrapolation from high-dose A-bomb survivor mortalities would, in fact, overestimate low-dose radiogenic risks.
For fractionated low-dose exposures (thus for most occupational and environmental exposures), the radiation committees recommended that linearly extrapolated risk values should subsequently be corrected (divided) by dose-rate effectiveness factors (DREFs) of at least 2, with the greatest risk reduction to be applied to the lowest doses and/or dose rates.
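The recommended correction amounts to simple arithmetic; a minimal sketch follows (all numbers here are illustrative stand-ins, not values endorsed by this paper): a lifetime excess risk obtained by linear extrapolation from acute high-dose data is divided by the DREF before being applied to a fractionated dose.

```python
def protracted_risk(linear_risk_per_cGy: float, dose_cGy: float, dref: float = 2.0) -> float:
    """Linearly extrapolated excess lifetime risk, reduced by a dose-rate
    effectiveness factor (DREF) for fractionated/protracted exposure."""
    return linear_risk_per_cGy * dose_cGy / dref

# Illustrative: 8e-4 excess lifetime risk per cGy (acute extrapolation),
# 10 cGy accumulated occupational dose, DREF of 2.
acute_estimate = 8e-4 * 10                          # risk if the model is applied directly
corrected = protracted_risk(8e-4, 10, dref=2.0)     # risk after division by the DREF
```

The paper's point is precisely that this division step lacks support in the human data; the sketch only makes explicit what the committees' recommendation does to a linearly extrapolated risk value.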
However, microdosimetric analyses have shown that at decreasing doses, the concept of dose rate loses its meaning entirely because of the discrete nature of the radiation-cell interaction: the smallest possible effect must be caused by a single cell traversal (15,16).
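The microdosimetric argument can be illustrated with Poisson statistics (the dose chosen below is hypothetical): when the mean number of track traversals per cell nucleus is well below one, lowering the dose rate only spreads single traversals further apart in time; it cannot subdivide the elementary single-track event a cell experiences.

```python
from math import exp, factorial

def poisson_pmf(k: int, mean_tracks: float) -> float:
    """Probability that a cell nucleus is traversed by exactly k tracks,
    assuming independent traversals (Poisson statistics)."""
    return mean_tracks**k * exp(-mean_tracks) / factorial(k)

# Hypothetical low-dose regime: mean of 0.1 traversals per nucleus.
mean_tracks = 0.1
p_zero = poisson_pmf(0, mean_tracks)
p_one = poisson_pmf(1, mean_tracks)
p_multi = 1 - p_zero - p_one
# Nearly every hit nucleus receives exactly one traversal (p_multi is tiny),
# so "dose rate" cannot alter the microscopic event delivered to a cell.
```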
More recently, evaluations of cancer risk from ionizing radiation have undergone significant upward revisions compared to those published about a decade earlier (1-3). Those revisions were necessitated primarily by 1) considerable differential increases in cancer deaths among the low-dose subcohorts of the A-bomb survivors during the follow-up period extended to 1985 (showing longer than expected latencies) and 2) far-reaching revisions of the individual dose estimates for the LSS survivors (DS86 dosimetry).
For the nonleukemia A-bomb data, Radiation Effects Research Foundation (RERF) analysts found that a DREF value much above one for acute low-dose exposures is not consistent with the updated data (17-19). Yet, disregarding the evidence from the low-dose range of the A-bomb survivor study, the above microdosimetric analyses, and epidemiological findings from protracted, low-dose occupational exposures, the summary conclusions by UNSCEAR (1), BEIR V (2), and ICRP (3) retained their previous recommendations to reduce estimates of radiogenic risks, based on a linear dose-effect model, for protracted low-dose exposures by a DREF of 2 or more.

Table 1. Statements on five controversial low-dose issues in three journal summaries of BEIR V (8-10).a

Shape of dose-effect curve for cancer induction
- Summary (8): The nonthreshold dose-incidence hypothesis, first supported by the association between childhood leukemia and prenatal diagnostic X-irradiation at doses comparable to natural background, has been extended to other malignancies, as well as to genetically significant mutations. Data on teratogenic effects (e.g., small brain size or severe mental retardation) are also compatible with a nonthreshold linear dose-effect curve.
- Summary (9): The linear model furnishes the most conservative (i.e., highest) risk estimates for exposures to low doses of radiation, even though evidence establishing the linear model as the correct relationship is still relatively inconclusive.
- Summary (10): Induction of mutations in human cells is a nonthreshold linear function of dose, independent of dose rate. The dose-response for induction of breast cancer is linear without threshold. While there are several epidemiological studies that have purported to show carcinogenic or leukemogenic effects of irradiation in the dose range below 10 cGy, there are no theoretical reasons, supporting animal data, or low-dose A-bomb survivor data in the range 1-9 cGy suggesting the convex upward dose relation that would be required to observe a rapidly rising cancer incidence at very low doses, close to natural background.

Dose-rate effectiveness factor (DREF) at low doses
- Summary (8): In the absence of adequate human data on the carcinogenicity of protracted low-LET irradiation, the BEIR V Committee was unable to specify the extent to which their projections may overestimate the risks of a dose of radiation that is accumulated over long periods of time.
- Summary (9): Suggests that a DREF of 2.25 (from a 1980 BEIR report) should be applied to the BEIR V risks. [No specific justification is given, other than that it would reduce risks closer to earlier estimates.]
- Summary (10): The dose-rate effect for induction of specific gene mutations in human cells may be significantly less than that observed in rodent cells. Nevertheless, when the experimental data are considered along with limited epidemiologic data, a DREF of 2 has been recommended for chronic exposures. However, little or no decrease in risk was observed for induction of breast cancer when the dose was received in a protracted manner, as opposed to a single brief exposure.

Biological effectiveness of X-rays versus γ-rays
- Summary (8): [Not mentioned.]
- Summary (9): [X-ray exposures of most medical workers far below protection guidelines are discussed, but there is no mention of a possibly higher biological effectiveness of X-rays compared to γ-rays on which the guidelines are based.]
- Summary (10): [Not mentioned.]

Role of free radicals in tumorigenesis by ionizing radiation
- Summary (8): [Not mentioned.]
- Summary (9): [Not mentioned.]
- Summary (10): Ionization results in the production of free radicals that are extremely reactive and may lead to permanent damage of affected molecules.

Radiation hormesis
- Summary (8): Although several studies have found that the rates of cancer and other diseases vary inversely with natural background radiation levels, which some investigators have interpreted as evidence of beneficial ("hormetic") effects of low-level irradiation, the relationship does not persist after the effects of altitude and other confounding variables have been adequately controlled.
- Summary (9): [Not mentioned.]
- Summary (10): A lack of correlation between cancer incidence and background radiation was observed in different studies. Low-dose epidemiologic studies in populations of limited size must be carefully controlled and are often prone to bias by confounding factors.

aEditorial comments are in brackets.

A-bomb survivor study as universal standard. The interpretations of A-bomb survivors' cancer mortality or incidence statistics by scientists at RERF in Hiroshima and other official commissions have become the authoritative standard to which all findings from epidemiological studies on other exposed populations, such as nuclear workers, have been compared. In particular, studies that found substantially higher radiogenic risks at low doses and low dose rates than those officially adopted (20) have been labeled "renegade" by some recognized radiation experts and have been imputed to be in error by others (21-23). Rather than questioning the comparability of incongruent studies, some epidemiologists invoke bias of unknown origin in the occupational data in order to set aside their own findings if they differ from those derived from LSS statistics (24). Scant attention has been given to evidence in the RERF data that these discrepancies might reflect unrecognized intrinsic incommensurabilities in health profiles (such as lasting selection effects after the initial disaster in Hiroshima and Nagasaki, combined for some survivors with permanent immune depression) and age distributions between the LSS cohort and a worker population, quite apart from the vastly different characteristics of irradiation (25,26). Adopting the LSS findings as a universal standard also implies the untested hypothesis that a single dose-effect relationship can describe all conditions of exposure (23).
Evaluation of incremental excess cancer risk from mortalities among the lowest dose subcohorts. Linear extrapolation models used by BEIR V and RERF to predict low-dose risk values can be checked by a straightforward analysis of mortality data, limited to the lowest dose subcohorts. The methods used in all official analyses of A-bomb mortality data have weighted the resulting risk values toward those observed in the medium- to high-dose range (22). Recently, two groups of researchers published independent analyses that were restricted to cancer mortalities among the A-bomb survivors who had been exposed to less than 50 or 100 cGy (16,28,29). These low-dose subcohorts include about 80% of the entire LSS cohort. Using the 1950-1985 follow-up data (30) and combining new DS86 subcohorts from both cities, these authors have shown statistically significant (p < 0.01) excess mortalities (for cancers except leukemia) for the combined 6-19 cGy subcohort (mean colon dose 10.9 cGy) compared to the combined 0-5 cGy subcohort (mean colon dose 0.7 cGy) (Fig. 1). The 0-5 cGy dose group was chosen for comparison, rather than RERF's zero-dose group, since the combined subcohort includes survivors nominally unexposed to the radiation flash from the explosions, as well as an unknown fraction who at that distance from the epicenter were affected by fallout exposures (31). This additional dose is not reflected in DS86 estimates of individual doses. Other uncertainties have arisen recently in regard to the contributions of neutrons to individual doses of survivors, especially affecting the low-dose subcohorts who were located at large distances from the explosions (32,33). For the lowest dose DS86 subcohorts, we can thus expect that upward corrections in mean doses will have to be made, with the greatest correction to the lowest mean doses, decreasing rapidly with increasing DS86 mean dose.
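A subcohort comparison of this kind reduces to a test of two mortality proportions; the following sketch uses a standard normal-approximation z test with purely hypothetical counts, not the LSS figures.

```python
from math import sqrt, erf

def two_proportion_z(deaths_a: int, n_a: int, deaths_b: int, n_b: int) -> float:
    """z statistic for comparing two mortality proportions,
    using the pooled standard error (normal approximation)."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    pooled = (deaths_a + deaths_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def one_sided_p(z: float) -> float:
    """Upper-tail p-value of the standard normal distribution."""
    return 0.5 * (1 - erf(z / sqrt(2)))

# Hypothetical counts: a smaller exposed subcohort vs. a large comparison group.
z = two_proportion_z(deaths_a=560, n_a=6_000, deaths_b=3_300, n_b=42_000)
p = one_sided_p(z)
```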
A graphical display of cancer mortality versus mean dose elucidates more directly the relevant dose-response association than the usual display of relative risk versus dose (Fig. 1).
The distribution of survivors according to sex and age at exposure does not vary by more than a few percent across the relevant low-dose subcohorts (29). Consequently, for the limited purpose of inspecting the gross features of the dose-response relation, aggregate mortalities were analyzed, introducing only negligible systematic errors. Weighted linear regression analysis over the dose ranges listed in Table 2 and displayed in Figure 1 yields a higher slope for mortality versus dose (or incremental risk per unit dose) for the dose range 0-19 cGy than for the dose range 6-99 cGy.
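The slope comparison just described can be sketched as a weighted least-squares fit over two overlapping dose ranges; the subcohort doses, excess mortalities, and weights below are hypothetical stand-ins, not the Table 2 values.

```python
import numpy as np

def weighted_slope(dose, excess_mortality, weights):
    """Weighted least-squares slope of excess mortality vs. mean dose
    (with intercept), i.e. incremental risk per unit dose."""
    slope, _intercept = np.polyfit(dose, excess_mortality, 1, w=weights)
    return slope

# Hypothetical subcohort mean doses (cGy) and excess deaths per 1000 persons,
# with larger (lower-dose) subcohorts given more weight.
dose = np.array([0.7, 4.0, 10.9, 18.0, 40.0, 75.0])
excess = np.array([0.0, 4.0, 10.0, 15.0, 22.0, 33.0])
w = np.array([9.0, 6.0, 4.0, 3.0, 2.0, 1.0])

slope_low = weighted_slope(dose[:4], excess[:4], w[:4])   # 0-19 cGy range
slope_mid = weighted_slope(dose[1:], excess[1:], w[1:])   # 6-99 cGy range
# A convex (supralinear) dose-effect relation shows up as slope_low > slope_mid.
```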
While only weakly statistically significant, the 1950-1985 survivor mortality data for the low-dose range suggest that the incremental excess cancer risk per cGy for single exposures may be greater below 20 cGy than in the medium dose range (20-100 cGy), for which our estimate of excess lifetime risk, (9 ± 1) × 10⁻⁴ per person-cGy (Fig. 1; Table 2), is consistent with the value of about 12 × 10⁻⁴ per person-cGy published by RERF analysts (30) or the value of about 7 × 10⁻⁴ per person-cGy from BEIR V (2). To check our conjecture and possible bias from using aggregate mortalities, one of RERF's chief statisticians applied a more extensive model for fitting excess relative risk that includes stratification for city, sex, age at exposure, and follow-up period. For the mortality data below 100 cGy, he found improvement in the fit for excess relative risk, R, proportional to the square root of dose (convex curve) compared to a linear dose dependence (Pierce DA, personal communication, 1991). Unfortunately, updated mortality data for the years after 1985 have yet to be published by RERF. Nonuniform upward corrections to subcohort mean doses due to unaccounted-for fallout or neutron doses might well augment the convex shape of the dose-effect relation. In this context, it is noteworthy that RERF analysts, studying the issue of a hypothesized threshold and the shape of the dose-response curve for leukemia (acute lymphocytic leukemia or ALL and chronic myeloid leukemia or CML) among the LSS cohort at low doses, found a better fit of the data to a nonthreshold convex dose-effect relation (logarithmic with dose) than to a linear one with a hypothesized 5 cGy threshold (34).
Summary of low-dose effects.
Findings from the A-bomb survivor follow-up studies (DS86 dosimetry, 1950-1985 follow-up) that contradict the validity of applying a DREF to low-dose exposures are as follows: 1) Both the A-bomb survivor cancer mortality and incidence data fail to suggest the existence of a threshold for cancer induction down to very low doses (19,35,36).
cA detailed discussion of this estimation is given in Nussbaum and Köhnlein (28,29). The errors shown are standard errors.
2) Carter (34) found a better fit of the data to a nonthreshold upward convex dose-effect relation (logarithmic with dose) than to a linear one with a hypothesized 5 cGy threshold (p = 0.056).
3) Doses in the range from less than one to a few cGy have been associated with brain damage in prenatally exposed children of A-bomb survivors (38). 4) Mortality for solid cancers in the 6-19 cGy dose group (mean colon dose 10.9 cGy) is significantly higher (p < 0.01) than it is in the 0-5 cGy dose group (mean colon dose 0.7 cGy), and there is evidence for a convex dose relation (Fig. 1, Table 2).

Effects from Occupational Exposures
Critical evaluation of government-sponsored nuclear worker studies. So far, practically all epidemiological studies of nuclear worker populations in the industrialized world have been funded by government agencies. A panel of 12 independent physicians and epidemiologists, in a critical review of 124 governmental studies (39), concluded in 1992 that 1) "The DOE's (and its predecessor agencies') epidemiology program is seriously flawed . . ." (39: 61); 2) "There appear to be major inaccuracies and serious questions as to consistency and reliability in the measurements of the radiation exposures" (39: 61); 3) "The nearly exclusive focus on mortality studies . . . eliminates from consideration virtually all cancers which may be related to radiation exposure but which will not or have not yet caused death, and thus severely limits our knowledge of the health consequences of low-level ionizing radiation exposure . . ." (39: 62); and 4) ". . . the problems and flaws evident in many investigations are precisely those which tend to produce false negative results" (39: 62).
A large number of the mortality studies under review found no statistically significant association between cancer induction and low-dose radiation exposures. Most of them extended over limited follow-up periods, too short to observe long latencies. Also, when workers' mortalities are being compared to national rates, the findings are biased toward lower risk for all causes of death among radiation workers (healthy worker effect).
Nevertheless, in a few of the reviewed studies and in some that have been published more recently, significant increases in specific types of cancer were found; for example, prostatic cancer (40,41), multiple myeloma (24,42,44,45), lymphatic and hematopoietic neoplasms and bladder cancer (42), leukemia (43), and lung cancer (46,47). These positive findings have either been dismissed as due to unknown causes or chance by the authors, or they have been ignored in revisions of radiation protection standards (48).
However, there is no reasonable justification for ignoring findings of positive associations of radiogenic risk with exposure on the basis of their smaller number, or because they disagree with inconclusive or negative findings, unless specific substantial errors in the analysis can be shown. Mutually inconsistent epidemiological findings are likely indicators of essential differences in sensitivity for detecting small dose-related excess mortalities at low exposures, a sensitivity that depends critically on the choice of case and control populations, on the dependability of dose records over long periods of time, and on adequate statistical controls for a variety of selection effects associated with mortality rates (49). Some nuclear worker studies over extensive follow-up periods that included statistical controls for external and internal healthy worker selection effects, as well as for several other confounding factors, are reviewed below.
Cancer mortality among Hanford workers. The U.S. Atomic Energy Commission (AEC) in 1964 contracted T.F. Mancuso to study the lifetime health and mortality experiences of tens of thousands of workers at the nuclear weapons production installations at Hanford, Washington; Oak Ridge, Tennessee; and Los Alamos, New Mexico. After a sufficiently long follow-up period, the analysis of deaths by Mancuso et al. (50) identified a small excess of certain types of cancer among about 28,000 Hanford workers with cumulative radiation exposures well below the maximum permissible dose. For another group of cancers, they found a slightly negative association with dose, an effect that remained unexplained at the time. The authors concluded that 1) low-dose, low dose-rate radiogenic cancer risks appear to be 10-20 times greater than those extrapolated from the A-bomb survivor study; 2) workers within the nuclear industry are on the whole considerably healthier than the general population, and within the workforce, those who perform the riskier (and higher paid) jobs are healthier than the average worker (external and internal healthy worker effects); 3) a supralinear (convex) dose-response relation improved the fit to their data in the lowest dose range; and 4) the most probable cancer latency period was about 25 years, sometimes extending to 40 or more years (50-53). Those conclusions have been rejected by a majority of radiation scientists, who prefer a Department of Energy (DOE)-sponsored nuclear worker study, using a more conventional method of analysis, different methods of data stratification, and different statistical controls for healthy worker effects, which found no statistically significant association between recorded worker doses and cancer mortality, except for multiple myeloma (44).
After years of an unresolved dispute between rival analyses (54), a court settlement granted Kneale and Stewart access to the updated mortality data held by the DOE. The Birmingham team reanalyzed Hanford worker mortalities from 1944 to 1986, revising their original methodology to a modified case-control study, an adaptation from the model used by the BEIR V committee (2) for analysis of the A-bomb survivor statistics. This model includes 10 "essential controlling factors" (confounding factors that can obscure the association between exposure and cancer) and three "modulating factors" (factors that can alter the dose-effect relation). Age at exposure was included among both categories to allow for change in sensitivity to cancer induction with exposure age. The other two modulating factors were exposure year (to allow for variability in standards for dose recording with changing technology and management techniques) (55,56) and interval between exposure and death or follow-up. Kneale and Stewart (57) reconfirmed their earlier finding of a statistically significant radiogenic risk for the workforce as a whole for average occupational doses considerably below the regulatory limits. About 3% of all cancers were considered to be associated with recorded occupational doses. In contrast to their previous results and to those of other occupational studies, the authors found a strongly increasing sensitivity for cancer induction for exposures after age 55. Also, the convex deviation from linearity of the dose-effect relation in the occupational dose range found previously (52) was no longer statistically significant in the 1993 reanalysis (57). Uncertainties in dosimetry might account in part for this change.
In a parallel analysis of essentially the same mortality data, using a more conventional methodology and a smaller number of controlling and modifying factors, Gilbert et al. (45) found a weak negative association of radiogenic risk with increasing occupational dose, except for multiple myeloma. This result is qualitatively similar to the earlier finding by Kneale et al. (52) of a negative association with dose for a group of cancers in tissues deemed at the time to be less radiosensitive. However, with the 1993 refinement of controls for selection effects within the workforce and a year-by-year recording of accumulated occupational doses to allow for age-related radiosensitivity, the association turned significantly positive for all cancers (57).
Subsequently, a team of DOE scientists published a combined mortality study among workers at three nuclear sites, including Hanford. Gilbert et al. (24) found positive associations with dose for 12 types of cancer; those for cancer of the esophagus and the larynx, as well as for Hodgkin's disease, were statistically significant. This study also corroborates the conclusion by Kneale and Stewart (57) of a strong increase in sensitivity for radiogenic cancers with age. The bibliography of (24) suggests, however, that the DOE team may have been unaware of that publication. The significant radiogenic cancer risk at low occupational dose levels, as well as the strong effect of exposure age on risk, are inconsistent with the interpretations of the follow-up studies of acutely exposed A-bomb survivors (2). Consequently, Gilbert et al. (24) discount their significant positive associations, as well as the age effect, ascribing them to unknown sources of bias or chance fluctuations.
Cancer mortality among Oak Ridge workers. Frome and co-workers (58) reported the first phase of a comprehensive study of the mortality of all white, male workers employed at federal nuclear plants in Oak Ridge, Tennessee, during the World War II era (1943-1947). After 30 years of follow-up, the standardized mortality ratio (SMR), a ratio of observed mortality over the mortality among the U.S. population, for all causes was 1.11, and there was a significant upward trend of 0.74% per year. The excess mortality was primarily due to lung cancer and diseases of the respiratory system. Wing and co-workers (20,23) studied more than 8,000 Oak Ridge workers with accumulated occupational doses under 50 cSv for all but 0.2% of the workers (mean dose 1.73 cSv, median 0.14 cSv). For about 25% of the cohort, no measurable exposure was recorded, while about 68% had records of accumulated doses below 5 cSv over their entire period of employment. As at Hanford, the Oak Ridge research team found for the entire cohort a strong external healthy worker effect (SMR 0.74 for all causes and SMR 0.79 for all cancers). A subcohort of workers who had at some time been monitored for possible internal contamination showed a higher SMR value for leukemia than the workforce as a whole.
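The SMR defined above is computed by applying reference-population death rates to the cohort's person-years in each age stratum. The sketch below illustrates the arithmetic only; the strata, rates, and death count are invented for demonstration and are not the Oak Ridge study's data.

```python
# Illustrative sketch of a standardized mortality ratio (SMR) calculation.
# All numbers below are hypothetical, not taken from the studies cited.

def smr(observed_deaths, person_years_by_age, reference_rates_by_age):
    """SMR = observed deaths / expected deaths, where expected deaths
    are obtained by applying reference-population death rates to the
    cohort's person-years in each age stratum."""
    expected = sum(py * rate
                   for py, rate in zip(person_years_by_age, reference_rates_by_age))
    return observed_deaths / expected

# Hypothetical cohort: person-years in three age strata, and reference
# death rates (deaths per person-year) for the same strata.
person_years = [50_000, 30_000, 10_000]
reference_rates = [0.002, 0.010, 0.040]   # death rates rise steeply with age
observed = 550                            # deaths actually seen in the cohort

print(round(smr(observed, person_years, reference_rates), 2))
```

With these invented numbers the expected deaths come to 800, giving an SMR below 1, which is how a healthy worker effect (such as the SMR 0.74 quoted above) shows up in such an analysis.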
The main conclusions from this study are 1) there is an excess of leukemias among the workforce, compared to the general population (SMR 1.63), and 2) the incremental relative risk for all cancers is about 5% per cSv. This value is about 25 times greater than the risk estimate in BEIR V (2) for low-dose exposures if their recommended DREF of 2 for low dose rates is applied.
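The "about 25 times" comparison is simple arithmetic; the sketch below backs out the implied BEIR V coefficient from the ratio quoted above (the coefficients are inferred for illustration, not taken from the BEIR V report itself).

```python
# Back-of-envelope check of the "about 25 times greater" comparison.
# All coefficients here are inferred from the text for illustration.
oak_ridge_err_per_csv = 0.05      # ~5% excess relative risk per cSv (20)
quoted_ratio = 25

# Implied BEIR V low-dose, low-dose-rate coefficient:
beir_v_low_dose_err = oak_ridge_err_per_csv / quoted_ratio

# BEIR V obtains its low-dose-rate estimate by dividing the acute
# (A-bomb-derived) coefficient by a DREF of 2, so the implied acute value is:
beir_v_acute_err = beir_v_low_dose_err * 2

print(beir_v_low_dose_err)   # implied ~0.2% per cSv
print(beir_v_acute_err)      # implied ~0.4% per cSv
```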
Two earlier studies in which the same group of workers was followed through 1977 found no association between radiation dose and cancer mortality (59,60). Apart from having incorporated 7 more years of follow-up, the Wing et al. (20) study considered the effects of various controlling factors in greater detail.
Some negative reactions to the Oak Ridge worker study were reminiscent of those to the first Hanford worker study (50); however, the findings by Wing et al. were subsequently supported by a British study of nuclear workers (61), followed for an average of 18.6 years. That study found a significant healthy worker effect (SMR 0.77 for all causes, 0.82 for all cancers), comparable to the U.S. findings. With a lag time (latency) of at least 10 years, and for the most part among the subcohort monitored also for internally deposited radionuclides, a statistically significant incremental risk (p < 0.001) of 7.6% per cGy was found, comparable to the Oak Ridge results (20).
There are other mortality studies of workers in British nuclear establishments (41,42) that involved a larger number of employees. Kendall et al. (62) pooled workers from a number of different installations. Compared to the studies discussed previously and the Beral et al. (61) study, the pooled data showed a smaller or a statistically insignificant association of cancers with radiation. However, unless the establishments are closely similar in hiring criteria, standards for dose recording, and exposure circumstances, pooling of worker data from different nuclear establishments can be expected to reduce the sensitivity of a low-dose epidemiological study by diluting or masking small but real correlations (63).
Leukemia mortality among nuclear workers. By pooling leukemia mortality data from seven studies of nuclear workers in the United States and Great Britain, Wilkinson et al. (43) provided evidence of a modest excess of leukemia from exposure to low doses of ionizing radiation at low dose rates. An adjusted relative risk of 1.8 was observed when individuals with 1-5 cSv exposure were compared with those who had cumulative occupational doses of less than 1 cGy. The combined data indicate a small elevated risk of leukemia for protracted doses of ionizing radiation under 5 cSv. As mentioned above, the pooling of data with nonuniform dosimetry and record-keeping procedures could well underestimate the strength of the association found.

Mutational effects among radiotherapy technicians. Messing and co-workers (64) investigated whether the mutant frequency in peripheral T-lymphocytes of radiotherapy technicians, exposed on average to 0.3 cGy per month of cobalt-60 γ-radiation, can be associated with recently absorbed dose. The study cohort consisted of 13 exposed technicians wearing dosimeters. The matched controls were 12 physiotherapy technicians working in the same hospital with no radiation exposure. The analysis revealed that the mutation frequency is linearly correlated with dose in the range from 0 to 0.7 cGy. In radiotherapy patients (treated for breast cancer) exposed to much higher doses (4 Gy), Messing et al. observed mutation frequencies of only 1% of those found at the very low doses and dose rates. This suggests that at higher doses multiple damage and cell killing become prominent, reducing the mutation yield.
Cancers among commercial airline pilots. Airline pilots are subject to cosmic radiation, accumulating yearly doses up to about 1 cGy, or the equivalent of three to four times natural background for an average U.S. citizen. A cancer mortality and incidence study among about 900 Canadian male pilots showed significant excess rates for several cancers, including Hodgkin's disease and nonmelanoma skin cancer (66). Because the SMR analysis used the population of British Columbia as the control group and did not take the reported healthy worker effect (SMR 0.80 for all causes of death) into account, the positive findings are underestimates of the excess incidence and mortality rates. Individual dose estimates were not included in the analysis; thus the findings suggest, but do not firmly establish, an association with exposure to low doses of ionizing radiation, possibly in synergism with exposures to a range of electromagnetic frequencies and, for rectal cancer, with a high-fat diet. High-altitude exposure and/or aviator status also correlated significantly with cancerous conditions of the skin, testicles, bladder, and thyroid in a study of U.S. pilots (67). A study of chromosome aberrations induced in lymphocytes of pilots and stewardesses also confirms effects of very low-dose exposures in this occupation (68).
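As a sanity check on the dose comparison above, the "three to four times natural background" figure implies an average U.S. background dose that can be backed out directly (illustrative arithmetic only):

```python
# Back out the average annual background dose implied by the statement that
# ~1 cGy/year of cosmic-ray dose is three to four times natural background.
pilot_annual_dose_cgy = 1.0
implied_background = [pilot_annual_dose_cgy / factor for factor in (4, 3)]
print([round(dose, 2) for dose in implied_background])  # roughly 0.25 to 0.33 cGy/year
```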
Given the large number of pilots and the readily available data on flight times, elevations and monitored intensities of cosmic radiation, this group presents a unique opportunity to extend epidemiological studies at low doses to these populations worldwide.
Possible genetic effects of low-dose exposures. Gardner and co-workers (69,70) conducted a case-control study of leukemia and lymphoma among young people near the Sellafield nuclear plant in West Cumbria, Great Britain. A total of 52 cases of leukemia, 22 cases of non-Hodgkin's lymphoma, and 23 cases of Hodgkin's disease, occurring in people under age 25, born in the area and diagnosed there from 1950 to 1985, were analyzed. A total of 1,001 matched controls were included in the epidemiological study. Hodgkin's disease cases showed no significant correlation with the various factors under investigation. Gardner and his associates examined several possible environmental pathways for exposures to internal radioisotopes, such as playing on contaminated beaches, eating contaminated seafood or vegetables, or living in the proximity of the Sellafield plant. Prenatal X-rays and viral infections during pregnancy were also considered as factors in explaining the excess leukemia cases. A high correlation of leukemia cases was found, however, with doses received by fathers working at Sellafield before conception of the child. A relative risk of about 7 was found if the absorbed dose in the 6 months before conception was more than 1 cSv. A similar relative risk (6.24) was observed if the total accumulated exposure of the father was 10 cSv or more before conception. The possibility that ionizing radiation may be leukemogenic to the offspring has important potential implications for the protection of radiation workers and their children, as well as for radiation biology.
It is for this reason that the work by Gardner and co-workers has drawn intense scrutiny. Leukemia clusters have also been found near other British nuclear plants, with several studies supporting the findings of increased risk for leukemia in young people, some corroborating the genetic effects suggested by Gardner et al., and others inconclusive on this score (73,74). A study of leukemia and non-Hodgkin's lymphoma cases near Seascale suggests the contribution of other causes to the excess, because only part of these cases could be accounted for by preconceptual paternal exposure (75). Other British studies found negative or ambiguous results (76-79). The indications of genetic effects from preconceptual parental exposures are in part consistent with earlier findings of elevated risks for several diseases among children whose parents had been exposed to diagnostic X-rays.
While some types of birth defects showed significant association with parental employment at Hanford but not with monitored parental preconception doses, other defects showed a significant association with parental accumulated occupational exposures. Taken together, the data were in part contradictory, and the authors interpreted them as false positives (80).
An important extension and support for the findings by Gardner et al. (70) was provided by the large database of the Oxford Survey of Childhood Cancers. It showed that elevated risk for genetically transmitted childhood cancers was associated with potential paternal exposure to internal sources of radiation but not significantly with recorded external exposures (81). In the Gardner study, only external paternal doses had been reliably monitored, which could, however, well have been correlated with internal doses received by the workers at the Sellafield plant. The possible association of internal exposures with somatic effects among nuclear workers has also been noted in some of the studies discussed previously, though estimates of internal doses have not been available in any of the studies cited.

Epidemiological Studies of General Populations
Well-Defined Radiation Sources

Prenatal X-ray examinations. The Oxford Survey of Childhood Cancers (OSCC) includes data on all British children who died from cancer before the age of 16 years, starting in 1953. Its most recent findings were based on more than 15,000 geographically and birth-date matched case-control pairs. In the period 1950-1979, about 7% of all childhood cancer deaths and 8% of those with onset of malignancy between the ages of 4 and 7 years were associated with prenatal obstetric X-ray examinations, with an estimated average dose of about 0.5 cGy in the 1950s. The dose per examination declined significantly over subsequent years. The resulting excess risk from prenatal X-ray examinations of the fetus was estimated to be about 20 deaths per 10,000 person-cGy, with a three times higher risk for exposure during the first trimester of pregnancy than during the last trimester (82-86). This fetal risk factor during the first trimester is about nine times that given by BEIR V for a general population (2). Also, there remains an unexplained discrepancy between in utero radiogenic cancer risks for a normal population of children in Great Britain and for prenatally exposed children of A-bomb survivors who were still alive in 1950 (87). On that basis, the OSCC team's claim of a causal relation between fetal exposures and induction of excess childhood cancers was rejected for decades. Meanwhile, Stewart and Kneale (25,26) presented evidence from the Life Span Study (LSS) data that could explain the lack of concordance on account of significant differences between the populations and their post-exposure experiences. A U.S. tri-state leukemia survey found parental preconception and prenatal exposures of children to diagnostic X-rays to be associated with increased risks for leukemia and a number of other diseases in children (88).
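A risk coefficient of about 20 excess deaths per 10,000 person-cGy translates directly into expected excess deaths for a given exposed group. The sketch below shows that conversion; the cohort size is hypothetical, chosen only to illustrate the arithmetic.

```python
# Unit-conversion sketch for the OSCC fetal risk estimate:
# about 20 excess childhood-cancer deaths per 10,000 person-cGy.
deaths_per_person_cgy = 20 / 10_000   # 0.2% per person per cGy

# Hypothetical illustration: 100,000 fetuses each receiving 0.5 cGy,
# the estimated average obstetric-examination dose in the 1950s.
n_exposed = 100_000
dose_cgy = 0.5
excess_deaths = n_exposed * dose_cgy * deaths_per_person_cgy
print(round(excess_deaths))  # ~100 excess childhood cancer deaths
```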
Childhood mortalities and external background radiation. In the past, a number of studies claimed lower cancer mortality rates in geographic locations with higher background exposures. Such findings have been cited to claim beneficial effects of low-dose radiation (hormesis) (89). However, when a number of such studies, based on U.S. vital statistics data, were critically analyzed, the authors concluded: "When we adjust linearly for altitude, the negative correlations between mortality and background radiation all disappear or become positive. . . . We see no support here for the claim that ionizing radiation is beneficial at low doses" (90: 388). Nevertheless, a second international conference on hormesis, held in Kyoto, Japan, in July 1992, attracted about 250 scientists from all over the world.
With fetal tissue being particularly sensitive to radiation during its earliest period of development (85), local variations in neonatal mortality may be expected to be correlated positively with local variations of external exposures (or with internal deposition of radioactive contaminants).
A Birmingham team of scientists was able to correlate the very large OSCC database on the geographical distribution of childhood cancers in Great Britain with accurate measurements of terrestrial γ-ray dose rates over a 100 km × 100 km grid covering England, Scotland, and Wales (91). The terrestrial doses for that area vary by up to a factor of five, between about 15 nGy/hr and 80 nGy/hr (0.013-0.070 cGy annually). This study suggests that "background radiation might be an element of the causal chain of the majority of childhood cancers" (91: 16). It is noteworthy that a simple regression analysis of childhood cancers found a negative correlation with dose, in qualitative agreement with the above-mentioned studies with inadequate controls for confounding factors that continue to be cited in support of radiation hormesis. When confounding socioeconomic factors, identified as being strongly correlated with childhood cancer mortality, were included in the OSCC analysis, the association with background dose turned significantly positive. Consistent with the British OSCC results, a recent U.S. study also found a significant association between childhood cancer incidence and a variation in the annual external background γ-ray dose rate by nearly a factor of two (0.05-0.092 cGy per year) over an area within a radius of approximately 10 miles from the Three Mile Island nuclear plant. On the basis of risk factors derived from the A-bomb survivor study, no detectable trend in cancer among children should have been found from variations in background exposures of such small magnitude. This study, however, found a 50% increase in risk of cancer for children under 15 years with every 0.01 cGy increase in estimated annual background γ-ray dose (92). As in the British background study above, the high sensitivity to radiation is most likely related to exposures during the earliest fetal stages of development.
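The annual-dose figures quoted for the grid survey follow from a straightforward unit conversion; the sketch below only reproduces that arithmetic.

```python
# Convert a terrestrial gamma dose rate in nGy/hr to an annual dose in cGy,
# as quoted for the British grid survey (91).
HOURS_PER_YEAR = 24 * 365.25   # ~8766 hours

def ngy_per_hr_to_cgy_per_yr(rate_ngy_per_hr):
    # 1 cGy = 1e7 nGy
    return rate_ngy_per_hr * HOURS_PER_YEAR / 1e7

print(round(ngy_per_hr_to_cgy_per_yr(15), 3))  # low end, ~0.013 cGy/year
print(round(ngy_per_hr_to_cgy_per_yr(80), 3))  # high end, ~0.070 cGy/year
```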
Increased cancer risk after scalp irradiation by X-rays. Modan et al. (93) found an increased risk of cancer for the most recent 5-year period of a long-term follow-up study of Israeli children who had received scalp irradiation for tinea capitis between 1949 and 1959. The original cohort included 10,834 irradiated children. Estimated mean doses were 9 cGy to the thyroid, 4.8-6.6 cGy to the pituitary, and 1.6 cGy to the breast. Until 1982 there were no indications of an increased cancer risk compared to matched controls. Since then, however, the incidence of breast cancer has increased significantly, indicating a long latency period for induction in childhood. A high relative risk of about 12 (with a large uncertainty) was found in women who had been exposed to a mean breast dose of 1.6 cGy at age 5-9 years. Thyroid cancers also eventually became more frequent in the exposed population.

Environmental Contamination from Radioactive Fallout
In contrast to the extensive epidemiological literature pertaining to solid cancer and leukemia induction from well-defined and monitored external low-dose exposures, far fewer studies have been published on reported health effects among populations most likely affected by long-term low-dose irradiation from a variety of inhaled or ingested radioisotopes originating from radioactive fallout. In part, this may be a reflection of the difficulties in evaluating low and persistent internal doses. Various biomarkers are currently under investigation for possible future use as retroactive biological dosimeters for internally exposed persons. More importantly, however, standard epidemiological methods are ill suited to establish plausible associations between multiple radioisotope exposures and a broad spectrum of poorly defined disease patterns that have traditionally not been accepted by radiation experts as possible consequences of low-dose exposures.
Leukemia mortality and thyroid disease downwind from the Nevada test site. Near the nuclear test sites in Nevada, a case-control study was conducted involving 1,177 individuals (cases) who died of leukemia between 1952 and 1981 and 5,330 persons who died of other causes (controls). The authors estimated active bone marrow dose from external exposure to radioactive deposition on the soil from fallout to range from 0 to 3.0 cGy. The median bone marrow dose for all cases and controls was 0.32 cGy, compared to a dose of 0.49 cGy from terrestrial and cosmic background radiation accumulated over the period of fallout (7 years). A significant association with external marrow dose was found for acute leukemias discovered from 1952 to 1963 among those individuals who were younger than 20 years at exposure. The observed risks at these low exposures are about double those predicted by the BEIR V (2) model, but the difference was not statistically significant. The results are, however, consistent with several previous studies of populations exposed to fallout from the Nevada atmospheric bomb tests (94). While a positive association was observed between leukemia and external exposure, internal exposures from ingested fission products might well play a dominant role if tissue concentrations of radioisotopes can be assumed to be correlated with levels of external contamination.
Also, a statistically significant excess of thyroid neoplasms was found among schoolchildren from communities in southwest Utah, southeast Nevada, and southeast Arizona, potentially exposed to fallout between 1951 and 1958. Estimated doses from radioactive iodine isotopes to the thyroid ranged from 0 to 460 cGy (17 cGy average for Utah) (95).
Leukemia clusters near civilian nuclear power plants. An increased incidence of leukemia was found by Clapp et al. (96) during the years 1982-1985 in a five-town area of Massachusetts near Plymouth, where the Pilgrim I nuclear plant is known to have released various radioisotopes during the years 1974-1975. Age-adjusted morbidity odds ratios of about 1.4 were calculated, comparing incidence for the 4-year period 1982-1985 in five coastal towns near the plant with that in the surrounding communities in southeast Massachusetts. The excess was found primarily in adults and the elderly. The authors claim the most likely association to be with airborne radioactive effluents trapped in a coastal flow pattern.
In contrast, a study by Jablon et al. (97), sponsored by the U.S. National Cancer Institute, found no statistically significant excess risk of death from any of the cancers the authors surveyed for people living near nuclear facilities. The study involved 107 U.S. counties with or near nuclear installations. The authors state that any excess cancer risk present in those counties was too small to be detected with the statistical methods employed in the study. In a critical review, Clapp (98) suggests that the seemingly discrepant results of the two studies might be due to their different sensitivities for detecting small excess risks. Leukemia clusters have also been reported recently around several German nuclear power plants (99,100).
Leukemia in the United States and fallout from nuclear testing. Depositions of low levels of fission products such as strontium-90 from atmospheric nuclear explosions have been found to be associated with increased leukemia rates among children. The strongest association of a composite exposure index, based on strontium-90 concentrations in food, cow's milk, and human bone, was with acute and myeloid leukemia rates among 5-9 year olds about 5.5 years after the peaks in fallout. Regional differences in leukemia rates corresponded to different levels of the exposure index. The leukemia rates fell again sharply after the cessation of atmospheric tests (101).
Radioactive emissions and breast cancer. Breast cancer mortality rates for the years 1984-1988 were found to be associated with documented cumulative airborne releases of fission products (including iodine-131, strontium-90, strontium-89, and cesium-137) from nuclear power plants in nine census regions of the United States during the period 1970-1987. Assuming the average inhaled or ingested radioactivity (i.e., internal dose) to be directly correlated with these releases, the authors found approximately a logarithmic relationship between mortality and exposure (102). Such a supralinear dose-response relationship for internal exposure, in a dose range estimated to be of the order of the yearly external background (if confirmed), would be qualitatively consistent with a proposed disease mechanism that involves biochemical chain reactions in human tissue, progressing from oxygen free radicals produced at low dose rates of ionizing radiation and subject to saturation concentrations (14,103).
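A logarithmic dose-response of the kind reported in (102) is supralinear: the effect per unit dose falls as the dose rises. The minimal sketch below contrasts that shape with a linear response; the coefficients are arbitrary illustration, not fitted values from the study.

```python
import math

# Compare a linear dose-response with a logarithmic (saturating) one.
# Coefficients are arbitrary, chosen only to show the shapes.
def linear_excess(dose):
    return 1.0 * dose

def log_excess(dose):
    return 1.0 * math.log(1.0 + dose)

# The per-unit-dose effect of the logarithmic model declines with dose,
# i.e., low doses are disproportionately effective (supralinearity).
for dose in (0.1, 1.0, 10.0):
    print(dose, round(log_excess(dose) / linear_excess(dose), 3))
```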
In light of the special sensitivity of the developing fetus and breast tissue to radiation, a direct link between ingestion of radioisotopes by the pregnant mother and subsequent birth outcomes or induction of breast cancer provides a plausible explanation for the above observations, provided that other time-correlated confounding factors can be reasonably excluded.
Radioactive fallout and congenital defects. 1) A discontinuity in historical trends for early infant (neonatal) mortality in West Germany, between the period before April 1986 and the period after fallout from the Chernobyl reactor accident had reached western Europe, showed a significant association with the higher levels of fallout in southern as compared to northern Germany (104-106). The authors' proposed causal association of infant mortality with environmental radioactive contamination involving the food chain has been significantly strengthened by another, completely independent study.
2) First-day infant mortality, first- through sixth-day infant mortality, and stillbirth statistics have been followed for England and Wales and for the United States from 1935 to 1987. Approximately exponentially declining historical trends were interrupted in the early 1950s for about 10 years but were then followed by a steeper decline until the rates resumed their pre-1950s exponentially declining trend in the late 1970s. An earlier hypothesis had linked the interruption in declining mortality rates to changes in neonatal care for sick newborn infants, including restrictions in oxygen intake. Whyte's (107) analysis of the relevant statistical data and their correlation with the time periods during which such changes in neonatal care can be documented in England, Wales, and the United States makes the oxygen restriction hypothesis untenable or incomplete. Whyte concludes that his observations indicate a common maternal-fetal cause, such as an economic or environmental factor (107). As the period from 1950 to 1980 extends from postwar economic depression into recovery, wealth, and recession, there are no clear economic or universal nutritional correlates, although an analysis in greater detail would be valuable. Among environmental factors, the documented rise in strontium-90 depositions from fallout following atmospheric nuclear testing between 1950 and 1964 stands out, suggesting that it is a likely correlate with the observed changes in trends.
3) A recent study by a team of scientists from the official childhood cancer registry in Mainz, Germany, reported a statistically significant increase in a very rare tumor of the nerve cells in young children (neuroblastoma) for babies born in 1988, 2 years after the explosion of the Chernobyl reactor (108). For the 1988 birth cohort in areas with more than 10,000 Bq/m2 cesium-137 soil contamination, the number of cases recorded until mid-1992 was 1.96 times the number expected for Germany during the years 1980-1987 (22.5 cases per million live births); for areas with 6,000-10,000 Bq/m2 contamination, the number of cases was 1.65 times the expected number; and for areas with less than 6,000 Bq/m2 radioactive cesium deposition the ratio was 0.98. Similar increases in neuroblastoma rates were found for babies born in the years after 1988. Given the clear association of relative risk for a rare congenital defect with levels of radioactive cesium contamination, a causal relationship is likely.

4) Based on changes in historical trends, Gould and Sternglass (109) suggest an association between a rise in the percentage of low-birthweight live births in the United States from 1945 to 1965 and the observed buildup of strontium-90 in human bone following the period of atmospheric bomb tests.

Health effects around Chernobyl. Much publicity was given to the negative findings of an epidemiological study by a large international team of prestigious radiation scientists sponsored by the International Atomic Energy Agency (IAEA) in Vienna and the World Health Organization (WHO). The purpose of the study was to verify the reported appearance of a broad spectrum of medical problems among the surrounding populations affected by radioactivity released in the Chernobyl explosion. The study found no significant association between radioactive contamination of the environment and the reported diseases.
While confirming an increased rate for a variety of health problems, the team of radiation experts suggested fear of radiation (radiophobia) as the most likely cause of the medical symptoms (110). The international team had relied on questionable health records supplied by the government of the former Soviet Union rather than on hospital records; the study population did not include the thousands of "reactor liquidators"; and the controls were chosen from areas of the same general district that had been only slightly less contaminated.
The IAEA experts' conclusion stands in marked contrast to clinical reports from Minsk (Belarus) of an alarmingly large and persisting increase of particularly invasive thyroid cancers in children shortly after the Chernobyl accident (111). A companion publication by five western scientists, including a member of WHO, validated the Belarus data (112) and highlighted the disparity between these and the earlier IAEA findings. The reports from Minsk, together with other clinical reports on the catastrophic health consequences of the Chernobyl explosion in other parts of the former Soviet Union and in Poland (113-115), as well as the questionable database and choice of controls, discredit the IAEA scientists' conclusions. Yet these authoritative conclusions have never been revised. Given a unique opportunity to research the multiple effects of widespread radioactive contamination on public health, reliable epidemiological studies with participation of independent scientists, not under contract to government agencies, should receive the highest priority for funding.
Chromosome aberrations induced by Chernobyl fallout. Cytogenetic analyses, performed on peripheral blood lymphocytes from members of the populations affected by fallout from the Chernobyl explosion, yielded a spectrum of chromosome aberrations. Scheid et al. (116) and Verschaeve et al. (117) found significant increases in chromosome aberrations about 5 years after the accident, which they related to continued exposures from incorporated radioisotopes. Reliable dose estimates for these individuals were not available. On the other hand, an IAEA-sponsored study (110) (the same study that related a broad spectrum of diseases among that population to radiophobia) found no significantly elevated levels of such aberrations in the blood of similarly exposed populations. This finding was supported by Neel et al. (118). However, the latter team used for their controls a population with a 10 times higher background level of aberrations than that usually quoted in the relevant literature. These authors postulate that the appearance of "rogue" lymphocytes (cells with multiple aberrations) has a viral origin, whereas the findings by Scheid et al. (116) and Verschaeve et al. (117) point instead to continuing internal exposures.

Conclusions and Discussion
A number of findings reviewed in the previous sections are at variance with the summaries of the "state of knowledge," which have been primarily based on official interpretations of the A-bomb survivor follow-up study. Neither the fetal hypersensitivity to radiation nor an increase in susceptibility to cancer induction in an aging population is part of the accepted notions on radiation effects at low doses. Nor does this body of assumptions link low-dose exposures resulting from radioactive fallout (either from nuclear testing or from reactor accidents) to any of the congenital effects reported. When levels of fallout contamination over large areas of the globe became known, local authorities everywhere, referring to the pronouncements of official national and international radiation regulatory commissions, reassured the populations under their jurisdiction that their levels of exposure were much too low to cause any adverse health effects. In the light of the foregoing evidence, sadly, these statements have lost their credibility. On the basis of the foregoing summaries of studies, we draw the following conclusions.
Dose-effect relation at very low doses. While the A-bomb survivor mortality data from 1950 to 1985 yield a nonthreshold linear dose-effect relation for cancers (other than leukemia) down to about 20 cGy, with a suggestion of an increased excess relative risk in the lowest dose range, the most recently published cancer incidence statistics (36) show a statistically strong nonthreshold linear acute dose-effect relation for all solid tumors down to the 1-10 cSv organ dose range, with an excess relative risk about 40% larger than that derived from the mortality data. Some of the epidemiological studies of protracted occupational exposures with lifetime accumulated doses under 50 cSv and mean doses of the order of natural background find excess risks per unit dose for cancers substantially in excess of those predicted by linear extrapolation from the LSS mortality or incidence data. This apparent discrepancy in the initial slope of the dose-effect curve could be due to bias from selection effects (25,26), uncertainties in dose assignments in the LSS cohort, or uncertainties in the accumulated occupational doses (55,56). However, we would like to emphasize that the hypothesis of a universal dose-effect relation, which would require consistency of risk over such widely different population characteristics and conditions of radiation exposure, remains unproven.
Presumed reduced biological effectiveness of ionizing radiation. The occupational exposure studies reviewed here, the prenatal X-ray and external background exposure studies, and the studies of airborne radioactive emissions are all inconsistent with the hypothesis that the biological effectiveness of ionizing radiation is reduced when the irradiation is protracted.
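For concreteness, the conventional practice that this evidence calls into question can be sketched as follows: a risk coefficient derived from acute exposure is divided by a dose and dose-rate effectiveness factor (DDREF) before being applied to protracted exposure. The numbers below are assumptions for illustration; the studies cited above suggest that such a reduction is not supported by the human data.

```python
# Sketch of how a dose and dose-rate effectiveness factor (DDREF) is
# conventionally applied to scale down an acute-exposure risk estimate for
# protracted (fractionated or low-dose-rate) exposure. Values are illustrative.

def protracted_risk(acute_risk: float, ddref: float = 2.0) -> float:
    """Risk estimate after dividing an acute-exposure risk by a DDREF."""
    return acute_risk / ddref

acute = 0.10                        # hypothetical lifetime risk from an acute dose
print(protracted_risk(acute))       # with DDREF = 2 -> 0.05
```

The studies summarized in this section argue, in effect, that for the populations reviewed the appropriate DDREF is close to 1, i.e., no reduction.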
Enhanced biological effectiveness of medical X-rays relative to high-energy γ-rays.
This extremely important question, in terms of its implications for public health, has only been touched upon in the BEIR V report, by reference to a 1986 review by the International Commission on Radiation Units and Measurements (5), but without in-depth discussion. BEIR V (2) suggests, however, that the radiation risk estimates derived from the acute γ-ray exposures of the Japanese survivors, which form the basis for all radiation protection guidelines, may underestimate these risks by a factor of two for medical, industrial, or other low-energy X-ray exposures. In the three reviews of the current state of knowledge of radiation effects directed especially toward physicians, this topic is not even listed among the open questions, implying that the generally accepted risk values (derived from the A-bomb studies) are applicable to all medical exposures as well. Yet there are well-documented findings (123,124) of twice as large a mutational effect in Tradescantia for 250-kVp X-rays as for cesium-137 γ-rays, and there is a physical basis for expecting such a difference in biological effectiveness. The significance of these radiobiological findings for human exposures is an unsettled question with broad ramifications for radiation protection.
Free radicals, low-dose exposures, and health.
The BEIR V report and one of the reviews cited mention the possible creation of free radicals by ionizing radiation, but otherwise the discussions of low-dose radiation effects have not taken up the possibility that such interactions, and the resulting disturbance of intracellular communication, could provide a strongly nonlinear biological mechanism for the induction of disease, particularly at very low doses, as an alternative to the well-known direct mutational interactions of radiation with human cell nuclei. This omission persists in spite of a burgeoning literature linking free radicals to a wide spectrum of diseases and suggesting possible treatments (7,103,121).
Radiation hormesis hypothesis. All of the low-dose studies of radiation effects in human populations reviewed above are inconsistent with the hormesis hypothesis, i.e., that exposures in excess of unavoidable natural background have long-term cancer-reducing effects. One can only speculate about the continued popularity of this conjecture among some groups of radiation experts.

Suggestions for New Research
By comparing statements about the above-listed five aspects in different authoritative presentations of known health effects of low-dose exposures, and by focusing on inconsistencies or selective omissions, we have identified unsettled questions in the mainstream state of knowledge. The list of unsettled questions can be extended by reviewing findings from a number of unrefuted studies of populations other than the LSS cohort of A-bomb survivors, findings that are inconsistent with traditional notions and have therefore been rejected, ignored, or glossed over in purportedly comprehensive reviews of the field. These inconsistencies raise a range of additional questions about the limitations of currently accepted concepts.
Finally, in the aftermath of the widespread fallout from the explosion of the Chernobyl reactor in the former Soviet Union, there are suspected associations of disease with radiation exposures that have barely been reported in the scientific literature. An additional relevant summary of health effects observed as a consequence of the Chernobyl accident, presented at the International Workshop on the Impact of the Environment on Reproductive Health (30 September-4 October 1991, Copenhagen, Denmark), was brought to our attention at the time this paper was submitted (125). In the United States, only a handful of government-funded health studies have been initiated among populations ("downwinders") that have been at risk for internal exposures by various pathways as a result of radioactive releases into the environment from weapons production and testing facilities, in some instances possibly in synergism with chemical exposures. A few examples have been presented. These populations at risk include large groups of civilians and tens of thousands of military personnel who were stationed at nuclear sites or who were involved with nuclear bomb tests. Some official epidemiological studies of these populations were admittedly "defensive" in nature (126), responding to pressures by affected populations for material compensation. On the other hand, an increasing number of well-researched investigative reports and small-scale health surveys, organized by members of the affected populations themselves (127-132), document the existence of clusters of cancers and similar patterns of other serious health problems among downwinders near various nuclear sites.
An increasing body of verifiable observations, not matched by reasonable alternative explanations, presents a challenge to public health agencies to commence large-scale, unified health surveys, and to radiation experts to extend their research strategies into insufficiently investigated interactions of radiation with human health. There is an urgent need to formulate novel guiding questions that can be translated into testable hypotheses.