Are A-bomb survivor studies an appropriate basis for nuclear worker compensation?

Wakeford (2003) and Little (2003) wrote in response to our comments about the use of A-bomb survivor studies as a basis for U.S. nuclear worker compensation decisions (Wing and Richardson 2002). Wakeford (2003) disagrees with our statement that there is a discrepancy between childhood cancer risk estimates from in utero radiation in A-bomb and diagnostic X-ray studies. Doll and Wakeford (1997) stated that "children exposed in utero to radiation from atomic bomb explosions have not experienced any corresponding risk of cancer," this discrepancy "being the most serious reason for doubt" about findings from the Oxford Survey of Childhood Cancers (OSCC) (Stewart et al. 1956; Bithell 1993), the first and largest study to demonstrate the link between childhood cancer and in utero irradiation. Doll and Wakeford (1997) concluded that only one reason would appear to be serious: namely, the lack of any comparable excess in cohorts of children irradiated in utero, most notably in those exposed to radiation from the explosion of the atomic bombs in Japan.
Although we raised questions about potential sources of bias in the A-bomb data, Wakeford (2003) argues that risk estimates for the in utero cohort of A-bomb survivors are "compatible with" findings from the OSCC, given the imprecision of these estimates. There are indeed few childhood cancers among those exposed in utero. However, the OSCC suggests approximately a 40% excess of childhood cancers following exposure to a prenatal X ray, the majority of cases being childhood leukemias (Bithell 1993). The work of MacMahon (1962) supports this finding but suggests an excess exclusively of childhood leukemia (Bithell 1993). In contrast, among Japanese survivors exposed in utero, there were no childhood leukemias and no childhood cancers before the age of 10 years (the age range of most OSCC cases). One case of childhood cancer has been reported in a 10-year-old (nephroblastoma) and a second in a 14-year-old (hepatoma) (Kato et al. 1989; Yoshimoto et al. 1988, 1991). Wakeford (2003) argues that, based on these two cases, a dose-response estimate for childhood cancer occurring at ages < 15 years following in utero exposure is compatible with, although smaller than, the effect estimate derived using OSCC data. More than imprecision, the absence of childhood leukemia and of childhood cancers before 10 years of age may indicate selection bias in the A-bomb cohort (Stewart 2000).

Little (2003) wrote that uncertainties in external radiation dose estimates for A-bomb survivors are of comparable magnitude to those for badge-monitored nuclear industry workers. Because no instrumentation was in place to measure the survivors' doses at the time of exposure, survivors' dose estimates rely on information derived from a questionnaire. Issues related to the administration of that survey are therefore fundamental to the accuracy of these data. The survey was conducted under the direction of an occupying military force (Lindee 1994).
Respondents had experienced a nuclear holocaust and potentially were suffering posttraumatic stress disorder. It is suspected that not all survivors truthfully reported their proximity to the epicenters of the bombings (Watts 2000). Estimates of uncertainty in radiation doses derived under simple statistical assumptions about the distributions of errors do not account for such problems. Little (2003) also questioned concerns about selective survival in the A-bomb study. Shimizu et al. (1999) described evidence of selective survival in the Life Span Study based on analyses of noncancer mortality, and Stewart and Kneale (2000) noted differences in radiation-mortality dose-response relationships among survivors with and without reports of acute injuries, suggesting evidence of selective survival. Little's assertion that simple assumptions about errors in dose estimates reduce the statistical significance of differences in dose-response relationships between these groups (Little 2003) is premised on a misplaced reliance on statistical significance testing. Furthermore, Little fails to address Stewart and Kneale's central premise (Stewart and Kneale 2000): selection effects varied with age, being greatest for the very old and the young at the time of bombing. Stewart (2000) argued that a more pronounced survival of the fittest at the extremes of age has influenced evidence of radiation effects.
Critical evaluation of potential sources of bias in the A-bomb studies is timely because of proposals that this study provide the basis for judgments in worker compensation.

Developmental Effects of Herbicides in Mice
We would like to respond to criticisms of our paper (Cavieres et al. 2002) raised by Lamb et al. (2003) and Ashby et al. (2003) in the July 2003 issue of EHP. The order of our responses generally follows the sequence of the letter by Ashby et al.
There were inadvertent numerical errors in our data presentation but not in our analysis. The corrections can be found in the errata in this issue of EHP (111:A751). In all cases the numerical errors are small in magnitude, and most of them involve minor differences in sample size. Our statistical analysis and conclusions were based on the correct data set and are not affected by these presentation errors.
We regret that these small numerical presentation errors shifted the focus of discussion from the broader implications of our research. In Table 2 of our paper (Cavieres et al. 2002), we showed that in all seasons every treatment group had fewer young than the control group, although only occasional points were statistically significant. Only when we combined the data, gaining a larger sample size and a smaller standard error, did the treatment effects become significant. We concluded that most researchers will find only trends, not significant effects, because treatment group sample sizes in individual experiments are too small.

Lamb et al. (2003) referred to discrepancies between our paper (Cavieres et al. 2002) and the PhD thesis by Cavieres (2001). The analysis in our paper was performed on the original computer data files, which we consider to be the ultimate source. The thesis was based on an earlier analysis of the data and contains many other measurements not included in our paper. The data presentation in the thesis is organized by exposure period (preimplantation and organogenesis exposure vs. organogenesis exposure only), whereas the data presentation in our paper is organized by season. This makes it very difficult to compare data in the thesis correctly with data in our paper, especially for readers who are not familiar with the whole work.
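The power argument here, that a modest treatment effect may show only a trend in small individual experiments yet reach significance when experiments are pooled, can be illustrated with a minimal sketch on simulated numbers (not our actual data; all values below are hypothetical):

```python
# Sketch: four small "experiments" with a modest simulated treatment effect.
# Individually each comparison has low power; pooling shrinks the standard
# error and makes the same effect easier to detect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pooled_ctrl, pooled_trt = [], []
for season in range(4):
    ctrl = rng.normal(10.0, 2.0, size=8)   # simulated control litter sizes
    trt = rng.normal(8.8, 2.0, size=8)     # simulated treated: slightly fewer young
    t, p = stats.ttest_ind(ctrl, trt)
    print(f"experiment {season}: p = {p:.3f}")
    pooled_ctrl.extend(ctrl)
    pooled_trt.extend(trt)

# The same comparison on the pooled data (n = 32 per group)
t, p_pooled = stats.ttest_ind(pooled_ctrl, pooled_trt)
print(f"pooled: p = {p_pooled:.3f}")
```

With roughly a 0.6-standard-deviation effect and only 8 animals per group, individual experiments will usually show only a trend, while the pooled comparison has far greater power.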
We used analysis of covariance with litter size as a covariate to test the juvenile weight and length data. This eliminated the known decrease in juvenile weight and length in larger litters and tested for the possible hidden effects of the pesticide doses.
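As an illustration of this approach (a minimal sketch on simulated data, not our actual analysis), an analysis of covariance of juvenile weight with litter size as the covariate can be fit as a linear model with a treatment factor plus the continuous covariate:

```python
# Sketch: ANCOVA on simulated data. Adjusting for litter size removes the
# known tendency of pups in larger litters to be lighter, so any remaining
# treatment-group differences are tested net of that effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "treatment": rng.choice(["control", "low", "high"], size=n),
    "litter_size": rng.integers(4, 13, size=n),
})
# Simulated weights: larger litters -> lighter pups (the covariate effect)
df["weight"] = 10.0 - 0.3 * df["litter_size"] + rng.normal(0.0, 0.5, size=n)

# ANCOVA: treatment as a categorical factor, litter size as covariate
model = smf.ols("weight ~ C(treatment) + litter_size", data=df).fit()
print(model.summary())
```

The coefficient on `litter_size` captures the weight decrease per additional pup, and the treatment coefficients then test for pesticide effects adjusted for litter size.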
When we tested the control litter-size distribution, it was not significantly different from normal (Kolmogorov-Smirnov test, p = 0.54). Larger litters may have been truncated at 12 either by loss of young during pregnancy or by the females eating newborns before we could find and count them.
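A test of this kind can be sketched as follows (simulated litter sizes, not our data; note that estimating the reference mean and SD from the sample makes the standard Kolmogorov-Smirnov p value conservative only in the Lilliefors-corrected sense):

```python
# Sketch: one-sample Kolmogorov-Smirnov test of litter sizes against a
# normal distribution, with a ceiling at 12 mimicking truncation of
# large litters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
litter_sizes = rng.normal(loc=10.0, scale=2.0, size=30)
litter_sizes = np.minimum(np.round(litter_sizes), 12)  # truncate at 12

# Compare against a normal with the sample's own mean and SD
stat, p = stats.kstest(
    litter_sizes, "norm",
    args=(litter_sizes.mean(), litter_sizes.std(ddof=1)),
)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```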
The important point shown in Figure 1 is that the number of young per litter in every treatment group is numerically smaller than the smallest control litter. The version of Figure 1 in our original article (Cavieres et al. 2002) did not emphasize these differences as much as we had hoped. Figure 2 in our paper (Cavieres et al. 2002) was an attempt to show how this difference in number occurred.
A major point of Lamb et al.'s (2003) criticism of our paper involved our discussion of the possible causes of decreased implantation sites in treated animals shown in our original Figure 2 (Cavieres et al. 2002). When we were considering possible causes of embryo loss, we included loss during the first days after implantation. Implantation in mice does not occur only on gestation day (GD) 5, but is a gradual process occurring from GD4.5 to GD6 (Kaufman and Bard 1999).
We detected implantation sites by staining the uterus for iron and viewing it under a dissecting microscope. Embryos are not large enough to leave a visible implantation-site stain until about GD8. All our mice were dosed from GD6 to GD15, with spring mice additionally dosed on GD0-GD5. This means that all females were exposed during the first days after implantation (GD6-GD8), when lost embryos would not leave a visible implantation-site stain. The significant losses in summer-experiment mice strongly indicate that embryo losses during GD6-GD8 caused the effects.
In Table 4 of our paper (Cavieres et al. 2002), every treatment group except for the very-low-dose winter group also had more resorptions than the control group. Although these losses were not statistically significant, they suggest that losses of embryos continue to occur throughout pregnancy. Both implantation and resorption data are plotted in the new Figure 1 as difference from control: the decrease from control in the case of implantations, and the increase from control in the case of resorptions. Again, implantation deficit indicates a loss of embryos during early pregnancy, and excess resorption indicates a loss of embryos during later pregnancy. Figure 1 clearly shows a loss of embryos in all groups during pregnancy.
Both Lamb et al. (2003) and Ashby et al. (2003) criticized our presentation of data on a seasonal basis. We used this presentation not to promote a discussion of seasonal effects but rather to show the entire data set without showing each individual experiment. We chose to let readers judge our data for themselves. Ashby et al. (2003) criticized our merging of birth data to generate our Figure 2 (Cavieres et al. 2002). This figure was drawn primarily to show resorptions, the difference between implantations and births. To do this accurately, we had to use the births from the litters for which we had implantation data. The consistency of our data over the various seasons (Table 2; Cavieres et al. 2002) justifies combining the data for the final analysis. Notice that three of the four seasons plotted by Ashby et al. in their Figure 1 (Ashby et al. 2003) show a U-shaped response in births. These resorption data are shown much more effectively in our new Figure 1.
We take special exception to the inference by Ashby et al. (2003) that mice can use only visual cues to detect season; this overlooks subtle but significant aspects of animal biology and natural history. Mice have many senses other than vision, and their senses are far keener than ours, especially olfaction, which can detect pollen, the smell of rain, and other seasonal fragrances. Many cues are available for animals to detect season, even in our light-tight animal-care facilities. For example, immune responses determined under laboratory conditions are affected by seasonal influences, with a lower response usually occurring during winter months, although this sensitivity to winter depends on the strain (Dozier et al. 1997; Ratajczak et al. 1993).

Lamb et al. (2003) criticized our discussion of the possibility that our data follow an inverted dose-response curve. In doing so, they tried to minimize the enhanced effect on decreased litter size at the low end of the dose range, a response that does not follow the classical linear dose-response relationship. Inverted dose responses have long been observed for both hormones and radiation, and they have also been documented for endocrine, immune, and neurologic responses to pesticides and other environmental contaminants (Olson et al. 1987; Levin et al. 2002; Welshons et al. 2003).

[Figure 1: implantation deficit and resorption excess plotted against dose, as deviation from control (n).]
Figure 1. Implantation deficit and resorption excess plotted as difference from control. Implantation deficit (a decrease from control) indicates a loss of embryos during early pregnancy; resorption excess (an increase from control) indicates a loss of embryos during later pregnancy.

Ethics of Pesticide Testing in Humans
With some dismay, we read the article by Meaklim et al. titled "Fenitrothion: Toxicokinetics and Toxicologic Evaluation in Human Volunteers," which appeared in the March 2003 issue of EHP. A fundamental problem is that no federally mandated ethical standards exist for safeguarding volunteers in pesticide studies. The U.S. Environmental Protection Agency has never established such standards. This situation is very different from that of clinical trials of drugs conducted under the auspices of the Food and Drug Administration.
A serious ethical impediment to pesticide testing in humans is that there is no conceivable way in which the administration of a pesticide to a person can benefit the health of that person. The logic that permits controlled clinical trials of pharmacologic agents that may directly benefit human health does not pertain here.
We strongly recommend that EHP adopt an explicit policy for the consideration of future manuscripts that might involve testing pesticides in human volunteers. We specifically suggest that no further papers involving testing of pesticides in humans be accepted until federal policy on pesticide testing in humans has been clarified.