The challenge of mechanism-based modeling in risk assessment for neurobehavioral end points.

The mathematical form for a dose-time-response model is ideally not just a convenience for summarizing or fitting a particular data set; it represents a hypothesis. The more this hypothesis reflects a mechanistically sophisticated view of the likely reality, the more it can lead to potentially informative validating or invalidating types of predictions about the results of real experiments and (in the long run) reasonably credible predictions outside the range of direct observations. This paper first reviews some distinctive features of the nervous system and neurotoxic responses and theoretically explores some basic quantitative implications of these features. Relationships are derived for how dose-response relationships for the inhibition of function should depend on the numbers of neurons in series or redundant parallel arrangements that are required or capable of performing the function. Previous work is reviewed in which some less nervous-system-specific features were the foci of quantitative risk-assessment modeling for specific neurotoxic end points. These include a) rates of repair of putatively reversible damage in the case of acrylamide; b) human interindividual variability in susceptibility to fetal/developmental effects in the case of methylmercury; and c) opportunities to use intermediate biomarkers to assist in integrated animal toxicological and epidemiologic investigations of the chronic cumulative risks posed by agents that contribute to neuronal loss with increasing age and pathology.

Approaches to the problem of quantitative modeling of neurotoxic risks depart very seriously from the well-explored paradigm that has been used for quantitative risk assessment for genetically acting carcinogens. In particular, parameters measuring neurologic function are often continuous rather than quantal (3), and there is a diverse set of modes of response of the nervous system to toxic insults (e.g., acute-reversible, acute-irreversible, chronic-reversible, chronic-irreversible, latent-irreversible). Moreover, the observable modes of response often differ even for the same chemical as a function of dose rate and aggregated dose.
Quantitative risk assessment inevitably involves the projection of available information beyond the experimental or observational circumstances where it was collected. In general, an interpretive framework is imported from our background understanding of the system, and this is used to both select relevant data for analysis and interpret the significance of the data for the different circumstances in which we want to make risk predictions.
We can say that such a framework has a causal mechanistic basis to the extent that it arises from particular causal processes with defined quantitative implications. For example, some cases in which mechanistic understanding gives rise to particularly well-founded quantitative predictions include the following:

* The high-concentration approach to saturation of Michaelis-Menten enzyme kinetics arises naturally from fundamental considerations of the finite binding capacity of protein molecules for substrates, and the finite processing times required for enzyme-catalyzed reactions (4).
* Expectations of low-concentration linearity in chemical transport and reaction arise from fundamental consideration of mass action: all those processes ultimately result from the frequency of molecular collisions, which depends linearly on the concentration of each reactant (when temperature is held constant).
* The development of physiologically based pharmacokinetic modeling over the last 20 years has been based on the recognition that the dynamics of transport and accumulation of simple organics in different body compartments can be at least approximately understood based on equilibrium partition coefficients, physical compartment sizes, and blood flow rates (5,6).
* The expectation of normal concentration distributions used in gaussian plume modeling (of the dispersion of air pollutants) arises ultimately from the central limit theorem: addition or subtraction of many random numbers yields a normal distribution. This is applied to air dispersion via the recognition that with air turbulence, individual molecules and small parcels of air are being subjected to many random translations in different directions, equivalent to addition and subtraction of random numbers.
Similarly, where differences among individuals in internal body burdens of a toxicant arise from the multiplication of many random variables (e.g., concentration in an exposure medium such as fish; amount of the medium taken in; and residence time of the toxicant in the body) lognormal distributions are expected and frequently found (because multiplication or division of many random numbers is equivalent to the addition or subtraction of their logarithms) (7). Similar reasoning has been used to support the traditional assumption of a lognormal distribution of thresholds used in log-probit dose-response analysis (8) for quantal noncancer effects.
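This multiplicative reasoning is easy to demonstrate numerically. The sketch below (all factor ranges hypothetical) shows that a product of independent positive factors is close to lognormal even when the individual factors are not themselves lognormal: the raw products are strongly right-skewed, while their logarithms are roughly symmetric.

```python
import math
import random

random.seed(12345)

def skewness(xs):
    m = sum(xs) / len(xs)
    s2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m3 = sum((x - m) ** 3 for x in xs) / len(xs)
    return m3 / s2 ** 1.5

# Hypothetical body burden as a product of 10 independent positive factors
# (e.g., medium concentration, intake rate, residence time, ...).
burdens = []
for _ in range(20000):
    b = 1.0
    for _ in range(10):
        b *= random.uniform(0.5, 2.0)
    burdens.append(b)

# The raw burdens are strongly right-skewed, but their logarithms are
# roughly symmetric (approximately normal), i.e., the burdens are
# approximately lognormal by the central limit theorem.
print(round(skewness(burdens), 2))                         # large and positive
print(round(skewness([math.log(b) for b in burdens]), 2))  # near zero
```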
The key point to understand is that the mathematical form for a dose-time-response model is ideally not just a convenience for summarizing or fitting a particular data set; it represents a hypothesis about the state of the world. The more this hypothesis reflects a mechanistically sophisticated view of the likely reality, the more it can lead to potentially informative validating or invalidating types of predictions about the results of real experiments and (in the long run) reasonably credible predictions outside the range of potential direct observation.
To be productive in laying the groundwork for quantitative neurotoxicity risk assessment, mechanistic theories must help identify a) which of the many types of potentially obtainable data about a set of neurotoxic responses are salient for understanding likely dose-time-response relationships for realistic human occupational and environmental exposures and b) what adjustments and adaptations need to be made in using those data to project risks for a large, diverse exposed human population, recognizing that the data will be collected for either small groups of relatively uniform animals with well-characterized but usually simple exposures, or sometimes larger groups of haphazardly selected human subjects with incompletely characterized and often complicated exposures.
Beyond the challenges for risk assessment projections listed above, a further set of difficulties arises because the functional tests that are used for measuring neurobehavioral toxicity are quite simplified representations of the diverse tasks that are critical for human functioning in society. Exactly how does impaired performance in maintaining balance on a rotating rod, recognizing a particular pattern of flashing lights, or learning to solve a maze translate into impaired performance in driving a car or filling out a tax form, let alone learning to read? Clearly, in the long run it will be necessary to have a theory that allows us to evaluate the significance of neurotoxicity data not only across differences in exposure levels, durations, and species, but also across substantial differences in task complexity.
Models are simplifications of reality. They are intended to represent the likely behavior of a complex system of interest by focusing on the behavior of a few salient features or components of the system. If the features included in the model are the prime determinants of the behavior of the system (that is, if other variables that could also affect the system are relatively constant or relatively unimportant), then there is hope that the model will reasonably accurately represent the behavior of the system over a particular domain of conditions. This paper first reviews some distinctive features of the nervous system and neurotoxic responses that appear to be good candidates to be the focus of quantitative modeling for neurobehavioral responses. Then some basic potential quantitative implications of these features are theoretically explored. Finally, some previous work is reviewed in which some less nervous-system-specific features have been explored in quantitative risk assessment modeling for specific neurotoxic end points. These are a) rates of repair of putatively reversible damage in the case of acrylamide, b) human interindividual variability in susceptibility to fetal/developmental effects in the case of methylmercury, and c) opportunities to use intermediate biomarkers to assist in integrated animal toxicologic/epidemiologic investigations of the chronic cumulative risks posed by agents that contribute to neuronal loss with increasing age and pathology.

Special Features of the Nervous System with Promise for Neurotoxicity Modeling

On its face, no system of the body is more intricate than the nervous system. Surely it is an understatement to report that the mechanistic connections between the internal processes of the nervous system and accessible neurobehavioral end points are incompletely understood.
Nevertheless, several decades of scientific study have yielded a substantial body of information: about the structure of the system; about the functional effects of damage to various portions of the system; and about the capabilities of several chemical and physical agents (e.g., noise) to induce damage and impair the performance of the system in various ways.
At least three special features of the nervous system seem likely to be salient for the modeling of some neurotoxic risks:

Irreplaceability. Some of the basic elements of the nervous system (neurons) do not reproduce in adults and cannot be replaced if lost due to the action of a toxic, physical, or infectious agent. Neuronal numbers decrease with normal aging in primates (by different amounts in different areas of the brain), and important progressive neurologic conditions are associated with an enhanced loss of specific sets of neurons (9,10). Moreover, model systems in which the behavioral effects of neurotoxic agents are found to be parallel to behavioral effects induced by specific physical lesions in specific neural areas show promise of providing a quantitative calibration of neurotoxic damage in terms of the location, extent, and severity of physically observable neuronal loss (11).
Redundancy. For many tasks that the nervous system performs, excess capacity is available, at least in normal individuals. This means that performance on those tasks may not be proportionately degraded [or, depending on the measurement system, detectably degraded at all (12)] in parallel with the loss or impairment of underlying functional elements. In combination with the normal age-related decline in neurons, this also is a likely cause of delays of many years in the manifestation of neurotoxic effects following an acute toxicant-induced injury.
Plasticity. To a considerable extent, when damage occurs to some elements, function can be recovered as surviving elements take on tasks formerly performed elsewhere (13). In normal aging brains, neuronal loss appears to be substantially offset in terms of function by dendritic proliferation (14). Other adaptive responses have also been noted at the subcellular level. Chronic exposure to anticholinesterase agents, leading to excessive transmission across cholinergic synapses, leads to compensatory down regulation of the number of receptors for acetylcholine on the postsynaptic membrane (15,16).
Approaches for Mechanism-based Modeling of Toxic Processes in Response to Some Special Features of the Nervous System

From the foregoing discussion it appears likely that at least some neurotoxic effects are caused by the inactivation (temporary or permanent) of neurons. Such inactivations can be treated as quantal events in at least some cases, most notably for cases involving frank cell death such as the production of chronic neurodegenerative conditions. (In other cases, there will surely be graded responses even on a cellular level, as for an agent that simply slows conduction or changes the likelihood of synaptic transmission via manipulation of the number or affinity of available receptors for a neurotransmitter, or the rate of inactivation of a neurotransmitter after release.)

Basic Implications of Parallel versus Series Arrangement of Neurons in Performing Functions

Treating the action of some neurotoxicants as operating via quantal inactivation of individual cells allows a relatively simple representation of the ideas of redundancy and varying task complexity. Redundancy is straightforward to represent as an array of parallel cells, or systems of cells, each of which (if not inactivated) is capable of performing the function. Similarly, we can imagine that one practical definition of task complexity for the nervous system is the length of the chain of neurons, all of which must function properly to accomplish the task. Pushing the electrical analogy implicit in the previous use of the word parallel to represent redundancy, we can say that a functional unit in the nervous system will often consist of some number of cells acting in series to accomplish a particular task.
Let P_cell survival be the probability that an individual neuron survives an exposure to a toxicant in a particular time period. P_cell survival is, of course, a function of toxicant concentration; later we will use log-probit and single-hit functions to represent this. Because all the cells acting in series must survive for a functional unit to remain active, we can expect that

P_function preserved = (P_cell survival)^N    [1]

where N is the number of cells in series in the functional unit.
Similar reasoning yields a simple formula to represent redundancy. Because all of the parallel units must fail in order for function not to be preserved, where there is only one cell in each functional unit we have

P_function preserved = 1 - P_all n parallel cells fail = 1 - (P_one cell fails)^n = 1 - (1 - P_cell survival)^n    [2]

where n is the number of cells/functional units capable and available to perform the task in parallel. Combining these two expressions, we can infer the probability of function preservation for a system composed of n parallel functional units, each of which consists of N cells in series:

P_function preserved = 1 - P_all n parallel functional units fail = 1 - [1 - (P_cell survival)^N]^n    [3]

The potential usefulness of this very simple framework is that it can lead to empirically testable predictions about the relationships between a) measurable dose-response relationships for inactivation at the cellular level (P_cell survival as a function of the internal concentration and time of exposure of the relevant neurons); b) structural features of the multicellular system responsible for the function (n redundant parallel units consisting of N cells in series); and c) measurable dose-response relationships for inhibition of the function. This framework would be particularly powerful for generating experimental predictions if the structural features could be systematically varied in related series of experiments. For example, the function-inhibition dose-response behavior could be measured for a closely related series of tasks that are progressively more complicated for the organism in terms of requiring a longer series of similar cells to work for successful performance.
Redundancy could also be varied in quantitatively measurable ways; for example, by comparing the dose-response relationships for the inhibition of function in younger versus older animals or in animals with and without some physically or chemically generated lesion that reduced the available size of the pool of relevant neurons capable of performing the measured function.
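The series/parallel bookkeeping of Equations [1] to [3] can be sketched directly in code. The function name and the example survival probability and multiplicities below are purely illustrative:

```python
def p_function_preserved(p_cell_survival, n_series, n_parallel):
    """Equations [1] to [3]: a function survives if at least one of
    n_parallel redundant units still works, and a unit works only if
    all n_series cells in its chain survive."""
    p_unit_works = p_cell_survival ** n_series            # Equation [1]
    p_all_units_fail = (1.0 - p_unit_works) ** n_parallel
    return 1.0 - p_all_units_fail                         # Equations [2] and [3]

# A longer chain (a more complex task) makes the function more fragile:
print(round(p_function_preserved(0.9, n_series=6, n_parallel=1), 3))  # 0.531
# Redundant parallel units protect it:
print(round(p_function_preserved(0.9, n_series=6, n_parallel=6), 3))  # 0.989
```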
The following analysis focuses on the implications of the framework for the relationships between the structural variables and three measurable response parameters: a) the IF10, the delivered dose required to reduce function by 10% (this will be close to the smallest reliably measurable response in many cases); b) the IF50, the delivered dose required to reduce function by 50%; and c) the IC50, the delivered dose required to inactivate 50% of the relevant cells (either reversibly or irreversibly, depending on the action of the toxicant under study).
The idea of delivered dose is included in these definitions as a reminder that the effects of any high-dose pharmacokinetic nonlinearities in the relationship between external dose and dose at the active site may need to be the subject of a separate modeling process.

Dose-Response Relationships for the Inactivation of Individual Cells
To derive predictions for the relationships among these response parameters as a function of the structural variables, the only added element we need is a dose-response relationship for cellular inactivation. A simple function that can be used for reference comparisons is classic one-hit inhibition, derived from the Poisson probability of each cell receiving various numbers of inactivating hits:

P_cell survival = P_0 hits = e^(-kd)    [4]

where d is the dose of the toxicant, and k is the number of inhibitory hits per unit dose. This function would be expected to result from a mechanism in which the probability of inactivating damage to a vitally important organelle was a simple linear function of toxicant concentration x time at low doses. Setting k arbitrarily at 1, the IC50 for this function occurs at a dose of ln(2) (0.693) dose units.
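A quick numerical check of Equation [4] and the resulting IC50, with k set arbitrarily to 1 as in the text (the function name is illustrative):

```python
import math

def p_cell_survival_one_hit(d, k=1.0):
    # Poisson probability of zero inactivating hits at delivered dose d (Eq. [4])
    return math.exp(-k * d)

# The IC50 is the dose at which half the cells are inactivated:
# e^(-k*d) = 0.5  =>  d = ln(2)/k, i.e. 0.693 dose units for k = 1.
ic50 = math.log(2.0)
print(round(ic50, 3))                              # 0.693
print(round(p_cell_survival_one_hit(ic50), 6))     # 0.5
```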
A more plausible general dose-response function arises from the idea that individual cells could be presumed to be lost on a threshold basis, but with a diversity of individual cell thresholds arising from different internal exposures to the toxicant (e.g., resulting from locations relatively close to or relatively far from a cell of another type responsible for metabolically converting the parent toxicant molecule to a directly active form) and different cellular capacities for rapid detoxification or repair. If the distribution of thresholds for inhibition of neuron units is lognormal, then we obtain the familiar log probit dose-response relationship of Finney (8):

P_cell survival = P_fraction of neurons with thresholds over d = 1 - Φ(z), where z = a + b*log10(d)    [5]

where d is dose as before, Φ is the cumulative distribution function for the normal distribution (the area under a normal curve below z), and z is the number of standard deviations of the delivered dose d above or below the median of the lognormal distribution of individual cellular thresholds. (In the original formulation of Finney, 5 was added to z in defining probits to avoid negative numbers, but this will be dispensed with here.) At the IC50, the delivered dose exceeds the thresholds of half of the neuron population, z = 0, and d = 10^(-a/b). In this formulation b is known as the probit slope, a measure of the spread of the distribution of individual thresholds; 1/b is the standard deviation of the distribution of the base-10 logarithms of the threshold doses. Some preliminary indication of the range of probit slopes that may be observed in practice can be gleaned from data analyzed recently by Slikker and Gaylor (17) on the relationship between administered MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) dose and dopamine concentrations in relevant regions of the brain in different species.
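The log-probit survival function of Equation [5] can be sketched with only the standard library. Writing z in the equivalent centered form z = b*(log10 d - log10 IC50) makes the role of the IC50 explicit; the function names are illustrative:

```python
import math

def normal_cdf(z):
    # Phi(z) via the error function (no external dependencies needed)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_cell_survival_probit(d, ic50, b):
    """Log-probit survival (Equation [5]): cells whose lognormally
    distributed thresholds exceed delivered dose d survive; b is the
    probit slope and 1/b is the SD of the log10 thresholds."""
    z = b * (math.log10(d) - math.log10(ic50))  # equivalent to z = a + b*log10(d)
    return 1.0 - normal_cdf(z)

# At the IC50, z = 0 and exactly half the cells survive:
print(p_cell_survival_probit(1.0, ic50=1.0, b=2))  # 0.5
# A steeper slope gives a sharper transition: at twice the IC50, far
# fewer cells survive under b = 6 than under b = 2.
print(round(p_cell_survival_probit(2.0, 1.0, 2), 3))
print(round(p_cell_survival_probit(2.0, 1.0, 6), 3))
```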
Figure 1 shows the correspondence between the data and the fits done by Slikker and Gaylor (17) to a mathematical model of the form

Fraction of control dopamine concentration = 1/(1 + ad)    [6]

where d is dose and a is a fitted constant. Mechanistically, this model is best suited to the representation of concentration-response relationships for enzyme inhibition, where the dose or concentration of the toxicant producing 50% inhibition is 1/a. If, however, the action of MPTP is taken as primarily cell killing, and if we assume that the fractions of control dopamine concentration observed are a rough indication of the fraction of the relevant cells that have survived treatment, then a log probit fit to the data may yield a preliminary indication of the spread of the lognormal distribution of the thresholds of the relevant cells. A series of such fits to the same data used by Slikker and Gaylor (17) is shown in Figure 2.
It should be immediately noted that it is not fair to directly compare the accuracy of the fits to the data in Figure 2 to those in Figure 1. The probit model form used for Figure 2 has two adjustable parameters whereas the Slikker and Gaylor model form has only one. Figure 2 does not show a fit to the monkey data included in Figure 1 because, with only two data points, a two-parameter model must necessarily fit perfectly. The probit slope for such a fit to the monkey data is about 2, similar to that seen in Figure 2A.
The inference to be drawn from Figure 2 is that a plausible range of probit slopes for individual cellular responses in the initial analysis here should include values from about 2 to 6. For later reference, a probit slope of 2 represents a considerable spread of individual cellular thresholds for inhibition: 95% of the thresholds for such a cell population would be expected to be spread out over a 100-fold range of values, from 10-fold less than the median to 10-fold greater than the median. A probit slope of 6 suggests a considerably narrower 95% range of approximately 4.6-fold, from a little more than half to a little more than double the median value. Figure 3 is a comparison of the three cellular dose-inhibition relationships that will be used for exemplary analysis here. It can be seen that in the experimentally measurable range (from about 10 to about 90% inhibition) the probit slope 2 function is relatively close to the one-hit function, whereas the probit slope 6 function is markedly steeper.

We can now use Equation [3] to derive some expected relationships between the measurable parameters defined earlier. Figure 4 and Tables 1 and 2 show the consequences of parallel and series multiplicities based on the one-hit model of individual cellular sensitivities. It can be seen in Figure 4 that, as the number of redundant parallel units is increased (for a function dependent on only one cell in series), the dose-response curve for functional loss becomes increasingly sigmoidal in shape, and of course the IF50 (the dose required to produce a 50% loss of function) is progressively increased. Tables 1, 3, and 5 show the IF50/IC50 ratio: the ratio of the dose needed to produce 50% inhibition of the function to the dose needed to inactivate 50% of the individual cells. As might be expected, increasing redundancy increases this ratio, which indicates that a larger dose is needed to inhibit 50% of the function than is needed to inactivate 50% of the cells.
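The 95% ranges quoted above follow directly from the probit slope: 1/b is the standard deviation of the log10 thresholds, and about 95% of a normal distribution lies within 2 standard deviations of the median, so the 95% range spans roughly 10^(2/b)-fold on either side of the median threshold. A small sketch (illustrative function name):

```python
def fold_range_each_side(b):
    # ~95% of lognormal thresholds lie within 10**(2/b)-fold of the median,
    # since 1/b is the SD of the log10 thresholds and 2 SDs cover ~95%.
    return 10.0 ** (2.0 / b)

print(fold_range_each_side(2))                 # 10.0 either side (100-fold overall)
print(round(fold_range_each_side(6) ** 2, 1))  # ~4.6-fold overall span for b = 6
```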
For functions with a high degree of redundancy but relatively few cells in series, substantial effects may therefore be detectable at lower doses by neuropathology/neurochemistry than by changes in function. By contrast, increasing the length of the chain of cells in series in each functional unit decreases the IF50/IC50 ratio, indicating that the function is likely to be 50% inhibited at a smaller dose than is required to inactivate 50% of the cells. Therefore, for relatively demanding tasks (defined as those requiring the correct functioning of many cells in series) with relatively little redundancy, measures of neurobehavioral function are likely to be substantially inhibited at doses at which major changes are not apparent by neurochemistry/neuropathology measurements. For the steep probit slope 6 model (Table 5), the increase in the IF50 over the same range of redundancy is limited to 1.6-fold, and the effect of increasing the number of cells in series from 1 to 6 is limited to 2-fold. A markedly different result can be seen in Tables 2, 4, and 6.

All of these results are, of course, initial expectations based on a very simple representation of the neural systems. In some cases some of the redundant pathways may be redundant for only a portion of the entire chain length (that is, either of two parallel series of neurons may activate a single neuron, which then triggers the final response). This and other possibilities seem likely to give rise to behavior between the simple extremes examined above. Overall, this framework has some promise for integrating dose-response information at the cellular and functional levels of organization with basic structural information about the neural systems supporting specific functions.
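For the one-hit cellular model, the IF50/IC50 ratio discussed above has a closed form that can be checked directly. This sketch assumes only Equations [3] and [4], with no other complications:

```python
import math

def if50_to_ic50_ratio_one_hit(n_parallel, n_series):
    """IF50/IC50 under the one-hit model: solving
    1 - (1 - e**(-k*N*d))**n = 0.5 for d gives
    k*IF50 = -ln(1 - 0.5**(1/n)) / N, while k*IC50 = ln(2), so k cancels."""
    k_if50 = -math.log(1.0 - 0.5 ** (1.0 / n_parallel)) / n_series
    return k_if50 / math.log(2.0)

print(round(if50_to_ic50_ratio_one_hit(1, 1), 3))  # 1.0: no redundancy, chain of 1
print(round(if50_to_ic50_ratio_one_hit(6, 1), 2))  # redundancy raises the ratio
print(round(if50_to_ic50_ratio_one_hit(1, 6), 3))  # longer series chains lower it
```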
The predictions of this system seem most likely to be experimentally testable in cases in which distributions of cellular thresholds for inactivation are relatively broad and where a graded series of tasks can be shown to involve increasing series chain lengths or where different levels of redundancy can be produced by physical lesioning, simultaneous administration of other agents, etc. After such testing, the framework has some promise for developing expectations for how functional dose-response relationships should change for tasks involving different chain lengths associated with different tasks and different degrees of redundancy that may be available in humans with advancing age and progressive neuropathology.

Generic Issues in Noncancer Risk Assessment

Repair Rates of Incipient Damage
One of the advantages of mechanistic modeling as an aid to basic scientific investigation is that when the predictions of a mechanistic model are refuted by subsequent observations, the direction of failure can provide mechanistically interpretable indications of potentially productive directions for further investigation. Work done some years ago explored the implications of a very simple model of the effects of repair of putatively reversible damage on dose-time-response data for acrylamide axonopathy (20). The data sets analyzed were those that provided information on some specific manifestation of toxicity produced by different combinations of acrylamide dose rate and duration of exposure (21,22). The initial model for analyzing these data was built around three assumptions: a) a particular adverse effect occurs whenever a specific amount of damage is accumulated in the relevant portions of the nervous system (i.e., there is no appreciable delay between the production of damage and the manifestation of the resulting effects); b) damage is produced at a rate that is approximately linear with the mg/kg dose administered to the animals; and c) repair of the accumulated damage occurs at a rate that depends directly on the amount of accumulated damage that there is to be repaired.
The first assumption provided the primary tool for quantitatively analyzing the data. For each data set, the repair rate was found that made the amount of accumulated damage approximately equal for each of the dose and time combinations that were observed to produce a particular response. Figure 5 illustrates the buildup of internal damage to levels producing an effect under this model, with a 2.9% per day rate of repair.
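A minimal sketch of this accumulation-and-repair scheme, using the 2.9%/day repair rate cited above (damage production is scaled to one damage unit per unit dose purely for illustration):

```python
def accumulated_damage(dose_rate, days, repair_rate=0.029, dt=0.01):
    """Euler integration of dD/dt = dose_rate - repair_rate * D, a minimal
    sketch of the linear production / first-order repair model in the text."""
    d = 0.0
    for _ in range(int(days / dt)):
        d += (dose_rate - repair_rate * d) * dt
    return d

# Damage builds toward a plateau of dose_rate/repair_rate; under the model,
# an adverse effect appears when accumulated damage crosses a fixed level.
print(round(accumulated_damage(1.0, 30), 1))    # after 30 days of dosing
print(round(accumulated_damage(1.0, 1000), 1))  # near plateau = 1/0.029 ≈ 34.5
```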
Recently, new data have been analyzed by Hattis and Crofton (23) on the dose-rate and time responses of some motor activity end points for acrylamide neurotoxicity in rats, both during and after the end of acrylamide dosing. Without knowledge of the posttreatment recovery results, the data for rate of loss of function during dosing were fit to the initial model, and predictions were made for the times when various amounts of functional recovery should appear (23). In all cases the predicted recovery times were distinctly less than the times observed for the actual recovery of various amounts of function. Contrary to our model expectations, three out of the four sets of recovery measurements showed some appreciable lag time before recovery of function began to be observable (Figure 6). The effect was most noticeable for the two highest dose rates and the most profound levels of functional impairment. Substitution of a saturable rather than a simple linear repair process improved the fit but did not make the theory compatible with the data.
The lack of adequate fit for these simple hypotheses leads us to some biologically interesting hypotheses for possible exploration in further work:

* There might be functional compensation or adaptation by the organism to overcome partial damage to a component of the nervous system.
* There might be a two-stage damage process, with the second stage of damage representing less easily reversible (or even irreversible) damage. For such a two-stage process, the need for acrylamide to act on the same neuron twice might be expected to produce a greater proportion of stage 2 damage for the higher dose rates.
* There might be a more complex relationship than was assumed between the underlying pathological events and the resulting functional changes. Recent data suggest that recovery of function is not coincident with observable repair of physical damage. At a time point where animals display almost complete recovery of function from a 90-day exposure to acrylamide, histological studies yield little evidence of tissue repair in the peripheral nervous system. It is therefore possible that the functional recovery is attributable to an anatomical compensatory mechanism (i.e., plasticity) such as sprouting at the neuromuscular junction by fibers that still work, which can result in fewer fibers innervating more muscular tissue (24,25).

These possibilities may be explored in further experimental and dynamic modeling work.

Interindividual Variability in Susceptibility

One strategy in using quantal data for a quantitative risk assessment is to assume that an observable response is produced when some underlying continuous parameter (e.g., internal dose or damage) exceeds some critical threshold, as was done in the previous section for acrylamide. Changes in the frequency of quantal responses as a function of dose are therefore interpreted as changes in the fraction of the population whose individual thresholds have been exceeded. An unusual analysis of human data of this type was included in a report on seafood safety by the Institute of Medicine (26) (Table 7). Analogous, but unfortunately not completely comparable, analyses of data for a variety of adult effects, based on measurements of methylmercury in blood, suggest probit slopes in the range of 2 to 8 (Table 8). A probit slope of 1 would imply that 95% of the population had thresholds for effect spread over four orders of magnitude, from 100-fold less than the median to 100-fold more than the median. Such a large amount of interindividual variation would imply appreciable risks (of the order of 10^-5 to 10^-2), even at the much lower dosages that are present in the diets of people who consume relatively large amounts of fish with relatively large methylmercury concentrations. A probit slope of 2 would suggest less but still appreciable variability, with the thresholds of 95% of the population spread over a 100-fold range in dosage, from 10-fold lower to 10-fold higher than the threshold for the median person.
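The link between a shallow probit slope and appreciable low-dose risk can be checked numerically. The function below is an illustrative sketch of the threshold-distribution interpretation (names are hypothetical):

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fraction_below(fold_below_ed50, probit_slope):
    """Fraction of the population whose thresholds lie below a dose that is
    fold_below_ed50 times lower than the median threshold (the ED50)."""
    z = -probit_slope * math.log10(fold_below_ed50)
    return normal_cdf(z)

# At a dose 100-fold below the median threshold:
print(f"{fraction_below(100, 1):.1e}")  # slope 1: risk on the order of 10^-2
print(f"{fraction_below(100, 2):.1e}")  # slope 2: risk on the order of 10^-5
```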
Unfortunately, in addition to statistical uncertainties in the determination of these slopes from limited data, there are important questions of biological interpretation. A conclusion that the relationships represented in the upper part of Table 7 represent true interindividual variability depends on an assumption that the biomarker of exposure used in this case-the maximum hair mercury found at any time during gestation-is the most appropriate direct causal predictor of response that can be developed. (Other possibilities might well include the concentration of mercury in maternal blood at a specific sensitive time during gestation or a weighted sum of concentrations x duration over a specific set of sensitive periods.) Any inaccuracy in the assessment of the relevant dose used for the analyses of Table 7 would tend to cause a bias in the estimation of the probit slopes toward lower values (and hence higher estimates of interindividual variability and low-dose risks).
In recent work, the relationship between uncertainty in the dosimeter and estimates of the probit slope has been quantitatively estimated (4). It was found that measurement and estimation inaccuracies as large as a geometric standard deviation of 3 would be needed to make a true probit slope 6 relationship appear to have a probit slope as low as 1.
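The flattening effect of dose-measurement error on an observed probit slope can be illustrated with a first-order variance-addition sketch. This is not the estimation procedure of the cited work, so the exact numbers need not match its result; it shows only the direction and rough size of the bias:

```python
import math

def apparent_probit_slope(b_true, dose_gsd):
    """First-order sketch: if true log10 thresholds have SD 1/b_true and
    dose estimates carry independent lognormal error with geometric
    standard deviation dose_gsd, the variances add on the log scale,
    flattening the observed slope."""
    sd_true = 1.0 / b_true
    sd_error = math.log10(dose_gsd)
    return 1.0 / math.sqrt(sd_true ** 2 + sd_error ** 2)

print(round(apparent_probit_slope(6.0, 1.0), 1))  # 6.0: no error, slope recovered
print(round(apparent_probit_slope(6.0, 3.0), 1))  # GSD-3 error flattens it severely
```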
A further implication of this analysis is that if one were able to measure or estimate uncertainties in the individual doses estimated in epidemiological studies, it would be possible to at least make an approximate reconstruction of the slopes of the underlying dose-response relationships.

Neurotoxic Processes
It has been stressed that neurons do not regenerate in adult life and that some important neurological conditions are caused by the chronic cumulative loss of specific types of neurons in older people. Because in many cases the brain can lose considerable numbers of the relevant cells before gross clinical signs of impairment are obvious, exposure to agents causing this category of conditions can proceed apparently innocuously for decades without being noticed. Even after symptoms [...]

[Footnotes to Tables 7 and 8 (tables not shown): The equations fit are of the form probit of excess risk over background = intercept + (slope) * log10(blood Hg in ppb); data from Institute of Medicine (26). ED50 values were estimated from data in the lowest one to three dose groups; for the fetal analyses, blood mercury was estimated from the peak hair mercury data using the relationship ppb blood = 4 * (ppm hair). Degrees of freedom are the number of dose groups available for analysis, less 2 for the number of parameters estimated from the data (the intercept and the probit slope). P values are the probability that a deviation as large as that observed between the log probit model and the data would have been expected by chance, even if the log probit model were a perfect description of the underlying dose-response function. The adult values are based on blood measurements made approximately 65 days after exposure; for comparison with the fetal blood ED50 values, these data should be adjusted upward by approximately 2-fold.]

Measures of the Current Rate of Loss of the Relevant Cells.
When the relevant cells die, do they release measurable amounts of a distinctive form of an enzyme or some other component that could be used as a biomarker? Do they, perhaps in their death throes, emit a distinctive type of electrical signal that could be picked up, identified, and measured?
Many years ago Weiss and Spyker (29) suggested that methylmercury might accelerate the loss of neurons in adult life and contribute to a chronic cumulative process that could fall within this category. More recently, some support for this hypothesis has been found in a case-control epidemiologic study among people in Singapore, which has found an association between blood levels of mercury and risk of Parkinson's disease (30). This finding, using a biomarker of exposure, needs to be followed up in confirmatory studies and (hopefully) with the aid of new biomarkers of the types suggested above.