Imagine that you are the pilot of a commercial aircraft, full of passengers. Your warning system can provide a visual (red light) and an auditory (loud beep) signal to alert you to low altitude. Upon detection of either warning signal, you must respond immediately by pulling up and regaining altitude. Clearly, a quick response is vital. Would you respond to the threat faster in the presence of both warning signals, as compared with just one? Observe that in providing two signals rather than one, the workload on the perceiving system has increased.Footnote 1 How does this affect the speed of responding? Because either signal can produce a correct response, we call this an OR paradigm.

On the other hand, many situations necessitate perceiving two or more signals, whether distinct modalities or simply features of a pattern. For example, before the final approach for landing, the same pilot must receive auditory confirmation from air-traffic control and visually confirm that there is no other aircraft on the runway. Thus, the pilot must complete processing of two sources of information, which by itself adds time to the processing duration. Again, the increase in workload, from one source to two, can also affect response times (RTs). Because both signals must be completed, we call this an AND paradigm. As we will detail subsequently, in order to assess the internal efficiency of processing, it is critical to take into account the decisional stopping rule (e.g., OR vs. AND) that is being used by the participant. In fact, the decisional rule itself tends to increase or decrease RTs as the workload is varied, even with no other changes in architecture or individual-channel efficiency.

For well over a hundred years, mean RT has been the preferred statistic with which to measure efficiency (although accuracy runs a close second). In this article, we review recent progress and propose a method of unifying various statistics of efficiency that afford depiction in a single space using a common scale. Two small experiments were run to yield data that we use to exemplify the new scaling structures.

Over the last few decades, mathematical characterization of mental processes has helped to refine our notions regarding how RTs relate to the fundamental ability of a system to deal with heavier task duties (Kahneman, 1973; Townsend & Ashby, 1978, 1983). We informally refer to this ability as efficiency or, more technically, as workload capacity.Footnote 2 A number of statistics besides mean RT are now being used to provide more precise assays of capacity. Townsend and Ashby (1978) dealt with many aspects of capacity and suggested that hazard functions (defined below) offer a highly articulated way of measuring it. Around the same time, J. Miller (1978, 1982) suggested an upper bound on RTs for channels involved in a race within a redundant-target paradigm. Such a design constitutes an inclusive-disjunctive, or simply OR, paradigm (since the participant is instructed to detect the presence of a particular target item, or another target item, or both): any target item leads to a correct response and, hence, prompts a race in a parallel system. Likewise, Grice and colleagues proposed a lower bound on performance in such systems (e.g., Grice, Canham, & Gwynne, 1984). We subsequently worked out a general theory of workload capacity based on a statistic called C OR(t) and linked it to both Grice’s and Miller’s race model bounds (Townsend & Nozawa, 1995).

More recent efforts along these lines expanded the theory to include tasks that demand processing of all presented items (Townsend & Wenger, 2004b). We refer to the latter as AND (also termed conjunctive) designs because processing must generally be applied to all items in order for a correct response to occur (processing is often said to be exhaustive in such cases). Because assays of capacity depend on the stopping rule (OR requires only first-terminating, or minimum-time, processing, whereas AND requires the maximum, or last-terminating, finishing time), we developed an appropriate capacity measure for this case, called C AND(t) (when we present the formula for this measure, we shall explain why it is necessary to employ different capacity measures for the OR and AND cases). Colonius and Vorberg (1994) had put forth upper and lower bounds appropriate for AND designs, analogous to those of Miller and Grice in the OR paradigm, and our theory encompasses those bounds and connects them to C AND(t).

Other properties of the human information-processing system are also important in determining the efficiency and speed with which we process multiple signals. Thus, the architecture of the attendant processes, that is, the arrangement of the subprocesses (such as serial vs. parallel), also affects the efficiency of a system as the number of subtasks increases. The decisional stopping rule (stopping rule for short), which determines how much information a system processes before ceasing, can, as noted above, also play a major role in efficiency assessment. Finally, potential correlations across subtasks, channels, or items (the so-called dependence issue) can affect capacity measurements. The foundations of these issues are developed in a series of papers and books (e.g., Townsend, 1972, 1974, 1990; Townsend & Ashby, 1983; Townsend & Wenger, 2004a) and are outlined here only as needed.

Formerly, measures proposed by various investigators have had to be computed and plotted in different scales, or spaces, rendering comparisons challenging if not impossible. In one case, the measurements are plotted as probabilities, whereas in the other, they are plotted as ratios of transformations of probabilities (more technical descriptions follow below, once the requisite machinery has been provided). Our general goal here is to propose unified workload capacity spaces, which we designate C OR space and C AND space, for the OR and the AND designs, respectively, and a set of strategies for portraying a number of important statistics within each space. Furthermore, we provide detailed formulae for transforming data at the different levels and bounds into statistics that can be immediately plotted in the new spaces. Anyone interested in measuring a system’s capacity and the efficiency of processing can use the convenient summary in Table 1 to obtain formulae for estimating and plotting, on a single figure, the various statistics from empirical data (collected in the appropriate multiple-target task, often termed the redundant-target task, which we outline below).

Table 1 Summary of the pertinent tests in their conventional and novel forms. The latter is a transformation onto the capacity coefficient space (C space, in short). The three top rows correspond to an OR experiment, whereas the bottom three rows correspond to an AND experiment. C(t) is expressed in terms of F(t) and S(t), rather than H(t) and K(t), to allow convenient estimation from experimental data

The next section provides an outline of some aspects of capacity measures’ relationships under different experimental circumstances. It is mildly more technical than the other sections and can likely be skipped on a first reading.

Measures of capacity and their interrelationships

In what follows, we assume that our probability distributions always possess densities.

Static and single-signal versus multiple-target capacity measures

Efficiency of processing can be measured either under experimentally static conditions or with variations in the effort required of the participants. In the former, the experimenter may well be interested in the way in which efficiency changes across time, rather than across conditions such as workload. Machinery is required for both situations. Originally, we employed the term capacity to refer to a number of quantitative measures of efficiency, especially when collecting RTs. In fact, our theoretical and experimental work has used the term capacity from the beginning (e.g., Townsend, 1972 and, especially, 1974).

As the methodology has evolved, it has become clear that there is a need to distinguish situations that are static with regard to task effort from those in which workload is varied. Of course, we often see the same tools in both, but used and analyzed in rather different manners. Thus, the Townsend and Ashby (1978) theory proposed the hazard function (see below for a rigorous treatment) as a potentially effective measuring instrument of efficiency, which they generically called capacity. The hazard function has since been employed frequently in labor-static designs (not to exclude the possibility that a participant might tire or warm up during a session, but the experimenter does not manipulate the task demands). In addition, certain powerful stochastic methods have recently been enlisted that considerably enhance the utility and statistical sensitivity of hazard function analyses with psychological data (Chechile, 2003; Wenger & Gibson, 2004).

Here, we propose to refer to a measurement of efficiency made in a situation where the effort is manipulated as workload capacity. Correspondingly, we suggest using capacity either in a static domain or, after so advising the audience, as a generic concept. From here on out, we focus on workload capacity.

The hazard and integrated hazard functions

The hazard function gives the instantaneous rate of completion at any point in time, given that the process under observation has not yet completed. In a stochastic renewal process, it can be taken as the rate of performing work in terms of number of completions per unit time (e.g., Parzen, 1962, pp. 168–169).

The integral of the hazard function up to an arbitrary time t can analogously be interpreted as the amount of work done, or energy expended, in units of the to-be-completed items or channels. Both the hazard and the integrated hazard function are valid indicators of efficiency, but the hazard function is naturally a more fine-grained measurement than its integral, just as power is more fine grained than its integral, the energy expended, in physics.

In principle, the hazard function could be used in workload capacity measures (in the same type of ratios as derived for integrated hazard functions), but hazard functions are challenging to estimate, despite advances in this technology (e.g., Wenger & Gibson, 2004). Our experience has been that the integrated hazard function is a more stable statistic, as integrated functions often are, and a rough and ready estimate is immediately supplied by taking the negative of the logarithm of the survivor function.
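As a concrete illustration, the following minimal sketch (in Python; the function names, time grid, and simulated gamma RTs are ours, purely for illustration) computes the empirical survivor function from a sample of correct-response RTs and the rough and ready integrated hazard estimate, the negative logarithm of that survivor function:

```python
import numpy as np

def empirical_survivor(rts, t_grid):
    """Estimate S(t) = P(RT > t) from a sample of correct-response RTs."""
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def integrated_hazard(rts, t_grid):
    """Rough and ready estimate of H(t) = -log S(t); it becomes
    undefined (infinite) once the empirical survivor function hits zero."""
    with np.errstate(divide="ignore"):
        return -np.log(empirical_survivor(rts, t_grid))

# Illustrative data: 1,000 simulated RTs from a single gamma-distributed channel.
rng = np.random.default_rng(1)
rts_A = rng.gamma(shape=5, scale=5.0, size=1000)   # arbitrary time units
t_grid = np.linspace(5, 60, 50)
H_A = integrated_hazard(rts_A, t_grid)
```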

The yardstick: standard parallel process models

Statistics such as the mean RT, the hazard function, or others can be employed to assess speed of processing. However, to purchase theoretical authority in the sense of model discrimination, we need to formulate some benchmark or yardstick of comparison. Such a yardstick affords the ability to assess workload capacity across differing systems by employing such statistics as the hazard function in a nonparametric way.

The choice of the yardstick measuring device and, in particular, the architecture classification are, to some extent, simply a matter of preference and convention. As our measuring instrument, we have settled on systems with independent, parallel channels whose individual-channel processing characteristics do not change as the workload is varied. We refer to this model type as the standard parallel model (e.g., Townsend & Wenger, 2004a). Aside from personal preference, the following seems to be a reasonable claim: In the fields of study where quantitative modeling of RTs has been most advanced, much interest has centered on whether performance is sufficiently fast to merit the term parallel processing or, alternatively, whether performance may be even faster than ordinary parallel processing. Visual search, attention, and short-term memory search are examples of the first theme, while redundant-signals perception forms a prime exemplar of the second theme. There may turn out to be some special benefit to applying a different architecture as the yardstick—say, serial processing—but that remains to be seen. This issue is discussed further in the Agreeing on a Taxonomy of Psychological Systems subsection in the Discussion section.

Finally, it is important to note that the two distinct capacity measures we shall soon employ, for OR and AND tasks, are expressed relative to the performance of the same standard parallel model described above.

Workload capacity for differing decision rules

Consider a proposition put forward in terms of the predicate calculus. The proposition can be presumed to be a logical concatenation of constituent atomic statements, each of which is true or false (e.g., present vs. absent). The experimenter requires the participant to make a binary response on the basis of the proposition, for instance, its overall truth or falsity. Thus, the famous Sternberg (1966) task is of this nature: the participant adjudges whether the statement “a target item is contained in the memory set” is true or false.

Any stimulus, so constituted, can be answered by establishing the truth or falsity of a finite number of queries—for example, items or channels completed. The two extremes are found in OR and AND designs, where, respectively, the first completed item provides an answer or all items must be finished. In order to lessen confusion, we will treat the OR and AND experimental tasks and pertinent measures in separate sections.

Redundant-target (OR) task

The capacity functions can be brought into play whenever a clearly expressible workload is varied and RTs are measured. Up to now, they have primarily been applied in redundant-target search tasks. In such tasks, the participant attempts to detect whether one or more operationally defined targets are present (visual, acoustic, or tactile stimuli, etc.). Although the details of the design differ, all redundant-target paradigms include a subset of trials on which multiple targets appear. Redundant-target trials prima facie permit OR responding, since the perception of any of the targets can legitimately cause a cessation of processing. We will deal with AND trials subsequently. Hereon, we write as if all signals are visual, but of course, this need not be the case.

In a visual redundant-target detection task, targets may be presented in one location (say, A), in another (B), in both (AB; the double-target display), or in neither. On the latter type of trials, which are often called target-absent trials, targets do not appear at all. Participants are asked to respond affirmatively if they detect the presence of at least one target (i.e., target A alone, B alone, or AB), and respond negatively or withhold response otherwise (i.e., if no target was present). When a single target is present, A alone, the cumulative distribution function (also known as the cumulative frequency function, especially when data are considered) is expressed as P(RT A ≤ t) = F A(t), and similarly for B. When both locations are occupied by targets, we write P(RT AB ≤ t) = F AB(t). In many experiments, trials with redundant targets randomly alternate with those containing single targets (and those with no target at all) on a within-block basis.

The measures we discuss below are all based on RTs collected in redundant-target tasks, and use correct responses only. In many redundant-target studies that focus on RTs, including the experiments reported here, error rates are sufficiently low to be safely ignored. When error rates are high enough, other measures of performance can be brought into play, which focus on the pattern of errors rather than on RTs.Footnote 4

Miller’s race model inequality (OR tasks)

It will be necessary to begin employing theoretical terms. Some, like the race model inequality, are immediately susceptible to formal quantitative meaning. Others will be offered first informally and later technically.

J. Miller (1978, 1982) proposed an upper bound for performance on double-target trials—if the two targets are racing and the winner determines the processing time—the race model inequality:

$$ F_{AB}(t) \leqslant F_A(t) + F_B(t) $$
(1)

The inequality states that the cumulative distribution function for the double-target display, F AB(t), cannot exceed the sum of the single-target cumulative distribution functions, if processing is, as J. Miller (1982) expressed it, an ordinary race. The term race is generally interpreted as occurring between parallel channels. Although we maintain this convention, it should not be forgotten that other architectures, such as serial, might be theoretical candidates in many cognitive situations. Serial models generally obey the above race inequality, but there exist plausible situations where they do not (Townsend & Nozawa, 1997).
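To make the test concrete, here is a minimal sketch (Python; the variable names and the synthetic gamma RTs are ours, for illustration only) of how Inequality 1 could be checked from cumulative distribution functions estimated on a common time grid:

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    """Estimate F(t) = P(RT <= t) from correct-response RTs."""
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in t_grid])

def race_bound_violations(rts_A, rts_B, rts_AB, t_grid):
    """True wherever F_AB(t) > F_A(t) + F_B(t), i.e., Inequality 1 is violated."""
    F_A = empirical_cdf(rts_A, t_grid)
    F_B = empirical_cdf(rts_B, t_grid)
    F_AB = empirical_cdf(rts_AB, t_grid)
    return F_AB > F_A + F_B

# Illustrative synthetic data for the three target-present conditions;
# the double-target RTs here are simulated as an ordinary (independent) race,
# so any True values below reflect sampling noise only.
rng = np.random.default_rng(0)
rts_A = rng.gamma(5, 5.0, 2000)
rts_B = rng.gamma(5, 4.0, 2000)
rts_AB = np.minimum(rng.gamma(5, 5.0, 2000), rng.gamma(5, 4.0, 2000))
t_grid = np.linspace(5, 60, 50)
violated = race_bound_violations(rts_A, rts_B, rts_AB, t_grid)
```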

Now consider the strategic model class that is formed from parallel models that possess stochastically independent channels and whose individual channels run just as fast, but no faster, when other channels in the system are operating, the condition of unlimited capacity (which we define formally in the next section, Capacity Coefficient). We referred to this model type earlier as the standard parallel model.

Any standard parallel model with a first-terminating stopping rule (i.e., “stop as soon as any target is found,” implying a minimum-time statistic) makes the prediction F AB(t) ≤ F A(t|A,B) + F B(t|A,B), where on the right-hand side, the conditional notation (A,B) indicates the presence of both targets, as opposed to the single-target trials [e.g., F B(t|B)], and the subscript indicates which channel is being considered. If it is the case that F A(t|A,B) = F A(t|A) and F B(t|A,B) = F B(t|B), an assumption known as context invariance, which is tantamount to unlimited capacity, then Inequality 1 will follow.Footnote 5 If the system is not unlimited capacity, it may be limited capacity or super capacity (e.g., Townsend & Nozawa, 1995). If processing is limited capacity, it will satisfy the race inequality. Since the race inequality forms an upper bound, a super capacity model may, but need not, produce RTs that violate the bound (e.g., Townsend & Wenger, 2004b).

At this stage in the development, we need to introduce the notion of coactivation. A system is coactive if the channels are parallel but, rather than each channel making a decision, the information from each channel is coalesced, usually summed, into a final conduit which then makes a decision about the presence of a target (Colonius & Townsend, 1997; J. Miller, 1982; see also Townsend & Wenger, 2004b, for a general formal treatment and references to a variety of studies on this topic). Hence, although parallel, there is no race, since the channels do not make individual decisions. By convention, the speeds of the individual channels do not change when more signals are added. The result is strong super capacity that always violates the race bound (Townsend & Wenger, 2004b).

In the top left panel of Fig. 1, we illustrate the race model bound against the prediction of a standard parallel model (the model specifications are discussed in greater detail in the Model Predictions for OR Paradigm–Standard Parallel Model section). The solid lines with circle or triangle markers correspond to the cumulative distribution functions for the two single-target conditions. The dotted line represents the sum of the two single-target distributions at each time t—the race model bound. The inequality is satisfied as long as the observed distribution of the double-target condition (thick solid line without any markers) stays below the dotted line.

Fig. 1

Predictions of a standard parallel model with two independent and parallel channels and unlimited capacity (top row) and a coactive model (bottom row), in the OR task. The conventional form of the race bound is presented in the left panels as the sum of the single-target cumulative distribution functions. The Grice bound is given by the cumulative distribution function of the faster single-target condition (B in this illustration). In the right panels, we present the same bounds transformed onto the capacity coefficient space

J. Miller (1982) argued that violations of the inequality allow rejection of all race models and may, consequently, be taken as evidence in favor of coactivation.Footnote 6 If one takes as a defining characteristic of a race that it be parallel and that the first channel to finish determines the processing time, then we would have to differ. The reason is that we have shown that race models in this sense, but which possess mutually facilitatory interchannel interactions, can readily predict violations of the race inequality (Townsend & Wenger, 2004b). The positive interchannel interactions defeat context invariance and cause super capacity (Eidels, Houpt, Altieri, Pei, & Townsend, 2011). These models thereby form a class of systems that are super capacity but not coactive.

Now we are in a position to offer rigorous quantitative definitions of our workload capacity measure and the terms limited-, unlimited-, and super-capacity.

Capacity coefficient (OR tasks)

Townsend and Nozawa (1995) proposed an assay of performance on double-target trials—the capacity coefficient—which is the ratio of the integrated hazard function of the double-target condition to the sum of the integrated hazard functions of the single-target conditions:

$$ C_{OR}(t) = \frac{H_{AB}(t)}{H_A(t) + H_B(t)} $$
(2)

Here, we have defined the survivor function as the complement of the cumulative distribution function, S(t) = 1 − F(t); the hazard function as the probability density function divided by the survivor function, h(t) = f(t)/S(t); and the integrated hazard function, H(t), as the integral of the hazard function from zero to t. Equivalently, H(t) = −log[S(t)], which permits ready estimation from the data.Footnote 7

This measure, C OR(t), employs the standard parallel model (recall: parallel, unlimited capacity, and with stochastically independent channels) as a kind of yardstick. If C OR(t) = 1 for all values of t, performance is identical to that of the standard parallel model. That is, C OR(t) values of 1 imply that the system has unlimited capacity, such that processing in a given channel is not affected by the increase in workload due to the increase in the number of targets; that is, a given channel has the same processing rate whether or not a target is presented to another channel. Furthermore, C OR(t) values below 1 imply that capacity is limited, such that increasing the processing load (by increasing the number of targets on the display) takes a toll on performance. Those limitations, when viewed through the “eye” of a parallel system, reveal performance that would ensue from parallel channels that slow down as workload increases. Finally, if C OR(t) > 1, the system is said to have super capacity, and a parallel interpretation indicates that the processing efficiency of individual channels actually increases with increased workload. Of course, there are a number of different ways in which a system might produce limited or super capacity. Also, other models may exist that mimic the unlimited-capacity signature of the standard parallel model. In theory, even a serial model might elicit exactly C(t) = 1, but that seems highly unlikely in natural systems.

A growing number of studies have used both the race model inequality and the capacity coefficient to describe the properties (capacity, independence vs. coactivation) of the system under investigation (e.g., Townsend & Nozawa, 1995; Wenger & Townsend, 2004b; and more recently, Berryhill, Kverga, Webb, & Hughes, 2007; Eidels, Townsend, & Algom, 2010b; and Eidels, Townsend, & Pomerantz, 2008). Although related (cf. Townsend & Wenger, 2004b), these measures are not identical and are generally presented separately.Footnote 8 Here, we show how the race model inequality can be mapped onto the capacity coefficient space. Thus, the two measures can be conveniently displayed on the same plot and compared with each other.

Townsend and Nozawa (1995) showed that the capacity coefficient can also be written using survivor functions. Substituting H(t)  =   − log[S(t)] into Eq. 2,

$$ C_{OR}(t) = \frac{-\log [S_{AB}(t)]}{-\log [S_A(t)] - \log [S_B(t)]} = \frac{\log [S_{AB}(t)]}{\log [S_A(t) \cdot S_B(t)]} $$
(3)

This is important because we can then write the race model inequality in terms of survivor functions and, hence, in terms of the capacity coefficient, C OR(t) (see Appendix 1 for the full derivation).Footnote 9 Thus, an alternate but mathematically equivalent expression for the race inequality is

$$ C_{OR}(t) \leqslant \frac{\log [S_A(t) + S_B(t) - 1]}{\log [S_A(t) \cdot S_B(t)]} $$
(4)

This bound is illustrated in the top right panel of Fig. 1 as a dotted line. Observed C OR(t) values that exceed the line violate the race model inequality. The solid line, at C OR(t)  =  1, corresponds to unlimited capacity (and thus marks the prediction of our yardstick standard parallel model). Note that when the race model bound is viewed in terms of the capacity function, it is, when defined, always higher than 1 (it approaches 1 when t approaches 0 but increases sharply further away from 1 as t increases—such that it becomes increasingly difficult to violate), implying that the race model bound is a conservative estimate of super capacity (cf. J. Miller, 1991; Townsend & Wenger, 2004b). In other words, a processing system may have super capacity (i.e., C OR(t)  >  1), and yet these C OR(t) values may still lie below the race model bound, not violating it.
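A minimal sketch of how C OR(t) and the transformed race model bound might be estimated from empirical RTs is given below (Python; the function names and the handling of time points where the bound is undefined, namely where the sum of the single-target survivor functions does not exceed 1, are our own choices):

```python
import numpy as np

def empirical_survivor(rts, t_grid):
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def capacity_or(rts_A, rts_B, rts_AB, t_grid):
    """Estimate C_OR(t) = log S_AB(t) / log[S_A(t) * S_B(t)]  (Eq. 3)."""
    S_A = empirical_survivor(rts_A, t_grid)
    S_B = empirical_survivor(rts_B, t_grid)
    S_AB = empirical_survivor(rts_AB, t_grid)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log(S_AB) / np.log(S_A * S_B)

def race_bound_in_C_space(rts_A, rts_B, t_grid):
    """Transformed race model bound (Eq. 4); returned as NaN wherever
    S_A(t) + S_B(t) - 1 <= 0, where the bound is undefined."""
    S_A = empirical_survivor(rts_A, t_grid)
    S_B = empirical_survivor(rts_B, t_grid)
    arg = S_A + S_B - 1
    log_arg = np.full_like(arg, np.nan)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_arg[arg > 0] = np.log(arg[arg > 0])
        return log_arg / np.log(S_A * S_B)
```

Observed C OR(t) values exceeding the returned bound at some t would constitute a violation of the race model inequality at that t.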

The Grice inequality (OR tasks)

There is a bound on limited capacity (as opposed to the race model bound on super capacity) proposed by Grice and colleagues (Grice et al., 1984):

$$ F_{AB}(t) \geqslant MAX[F_A(t), F_B(t)] $$
(5)

If this inequality is violated, the system is limited capacity in a quite strong sense. In this case, performance on double-target trials [F AB(t)] is slower than on those single-target trials that contain the faster of the two targets, MAX[F A(t), F B(t)]. In the top left panel of Fig. 1, the Grice bound is represented by the cumulative distribution function of the faster of the two single-target conditions (in this example, it is B). The inequality is satisfied as long as the observed distribution of the double-target condition (thick solid line) stays above this line. By converting Grice’s bound to its survivor function form, we can map it onto the C OR(t) space. Doing so allows all pertinent tests (capacity coefficient, race model bound, and Grice’s bound) to be parsimoniously presented on a single plot.

There is more than just convenience in this way of presentation. The relations between the capacity measure and the two bounds then reveal which of the assays are more conservative (or more liberal) and to what extent. The interpretation of the observed capacity coefficient values may be aided by the simultaneous presentation of the race model and Grice’s bounds, and vice versa.

As for the race model inequality, the Grice bound can be rewritten as a function of C OR(t):

$$ C_{OR}(t) \geqslant \frac{\log \{ MIN[S_A(t), S_B(t)] \}}{\log [S_A(t) \cdot S_B(t)]} \quad \text{(see Appendix 1)} $$
(6)

In the top right panel of Fig. 1, the Grice bound is represented by the dash–dot line. The bound is satisfied as long as the observed C OR(t) function (thick solid line) stays above this line. Notice that this line marks poor performance in response to increased load, since it lies below C OR(t) = 1 for all t.
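The transformed Grice bound can be estimated in the same fashion as the transformed race bound above; a brief sketch (Python, with the same caveats about our choice of names and grid):

```python
import numpy as np

def empirical_survivor(rts, t_grid):
    rts = np.asarray(rts)
    return np.array([(rts > t).mean() for t in t_grid])

def grice_bound_in_C_space(rts_A, rts_B, t_grid):
    """Transformed Grice bound (Eq. 6): log{MIN[S_A, S_B]} / log[S_A * S_B].
    Observed C_OR(t) values falling below this line violate the bound."""
    S_A = empirical_survivor(rts_A, t_grid)
    S_B = empirical_survivor(rts_B, t_grid)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log(np.minimum(S_A, S_B)) / np.log(S_A * S_B)
```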

AND task

As was pointed out earlier, AND tasks demand the processing of all displayed items. Take, for example, the redundant-target search task described earlier, with four possible displays: target A alone, target B alone, targets AB, or no targets at all. Participants are asked to respond affirmatively if and only if A and B both appear (the double-target display). Thus, in order to respond correctly, they have to exhaustively process all items. After the thorough presentation of the OR quantities, the AND paradigm can be treated rather expeditiously.

The Colonius–Vorberg inequalities and the AND capacity coefficient

Colonius and Vorberg (1994) proposed upper and lower bounds appropriate for AND tasks, analogous to the Miller and Grice bounds in the OR paradigm:

$$ F_A(t) + F_B(t) - 1 \leqslant F_{AB}(t) \leqslant MIN[F_A(t), F_B(t)] $$
(7)

Here, F AB(t) denotes \( F_{MAX(T_A, T_B)}(t) \) and refers to the cumulative distribution function of the double-target condition in the AND task [note that F AB(t) refers to the cumulative distribution function of max(T A, T B) in the AND case, but to the distribution function of min(T A, T B) in the OR case]. Because processing has to be exhaustive, completion time on each trial is given by the completion time of the slower of two (or more) processes. Townsend and Wenger (2004b) developed a capacity measure appropriate for AND decisions, written as

$$ C_{AND}(t) = \frac{K_A(t) + K_B(t)}{K_{AB}(t)} = \frac{\log [F_A(t) \cdot F_B(t)]}{\log [F_{AB}(t)]} $$
(8)

The function K(t) is analogous to the integrated hazard function, H(t). If we let k(t) be equal to the density divided by the distribution function, k(t) = f(t)/F(t), then it can be thought of as the conditional probability density that processing completed in just the last instant, given that it completes at or before t. In that sense, k(t)—also termed the reverse hazard function (Chechile, 2011)—is analogous to the hazard function, h(t), which we defined earlier as h(t) = f(t)/S(t), or the probability that a process just completed, given that it had not completed before time t. K(t) is then defined as the integral of k(t) from t to infinity, in an analogous way to H(t) being defined as the integral of h(t) from 0 to t. Furthermore, in analogy to H(t) = −log[S(t)], K(t) = −log[F(t)] (equivalently, up to sign, log[F(t)]; the sign convention cancels in the ratio of Eq. 8).Footnote 10
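A brief sketch (Python; names are ours) of estimating K(t) from data, mirroring the estimate of H(t) given earlier:

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in t_grid])

def cumulative_reverse_hazard(rts, t_grid):
    """K(t), the integral of the reverse hazard k(t) = f(t)/F(t) from t to
    infinity, estimated here as -log F(t); any fixed sign convention for K
    cancels in the C_AND(t) ratio of Eq. 8."""
    with np.errstate(divide="ignore"):
        return -np.log(empirical_cdf(rts, t_grid))
```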

Note that in our C AND(t) index (Eq. 8), the single-target quantities are in the numerator, unlike the C OR(t) index (Eq. 3). This inversion results in a common way to interpret C AND(t) and C OR(t): Values greater than 1 imply performance superior to that of a standard parallel model; less than 1 is inferior to a standard parallel model; and values equal to 1 imply that performance is identical to that of a standard parallel model.

One of our reviewers asked whether it is necessary to define different capacity measures for the OR and AND tasks. A distinctive definition of capacity for the AND case is absolutely necessary, since the stopping rule plays a critical role in the measurement of workload capacity. Basically, the exhaustive AND rule adds more time to the process over and above that required by the OR rule, even though there may be no change in the channels’ processing rates in the two cases. This fact compels differences in the capacity formulas. Thus, it would be “unfair” to call a model for the AND task more limited in capacity than that for the OR task simply because the former requires more channels to be completed. Note, though, that apart from the distinct decision rule for the OR and AND cases, the yardstick model for both capacity measures is the same standard parallel model. For the mathematical detail, we point the reader to Appendix 2 and other sources (e.g., Neufeld, Townsend, & Jette, 2007; Townsend & Wenger, 2004b).

We now show how to present different performance measures on a unified AND capacity space. Combining Eqs. 7 and 8, we can express F AB(t) in terms of C AND(t) and, consequently, express the Colonius–Vorberg bounds on the C AND space:

$$ \frac{\log [F_A(t) \cdot F_B(t)]}{\log [F_A(t) + F_B(t) - 1]} \leqslant C_{AND}(t) \leqslant \frac{\log [F_A(t) \cdot F_B(t)]}{\log \{ MIN[F_A(t), F_B(t)] \}} $$
(9)
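A minimal sketch (Python; function names, the time grid, and the treatment of undefined points are our own choices) of estimating C AND(t) from Eq. 8 together with the transformed Colonius–Vorberg bounds of Eq. 9; the lower bound is defined only where the sum of the single-target distribution functions exceeds 1:

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in t_grid])

def capacity_and(rts_A, rts_B, rts_AB, t_grid):
    """Estimate C_AND(t) = log[F_A(t) * F_B(t)] / log F_AB(t)  (Eq. 8)."""
    F_A = empirical_cdf(rts_A, t_grid)
    F_B = empirical_cdf(rts_B, t_grid)
    F_AB = empirical_cdf(rts_AB, t_grid)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log(F_A * F_B) / np.log(F_AB)

def cv_bounds_in_C_space(rts_A, rts_B, t_grid):
    """Colonius-Vorberg bounds mapped onto the C_AND space (Eq. 9).
    Returns (lower, upper); the lower bound is NaN wherever
    F_A(t) + F_B(t) - 1 <= 0, where it is undefined."""
    F_A = empirical_cdf(rts_A, t_grid)
    F_B = empirical_cdf(rts_B, t_grid)
    arg = F_A + F_B - 1
    log_arg = np.full_like(arg, np.nan)
    with np.errstate(divide="ignore", invalid="ignore"):
        num = np.log(F_A * F_B)                    # shared numerator
        log_arg[arg > 0] = np.log(arg[arg > 0])
        lower = num / log_arg
        upper = num / np.log(np.minimum(F_A, F_B))
    return lower, upper
```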

In Table 1 we summarize the pertinent inequalities for both the OR and the AND tasks, in their conventional and transformed forms. The various tests are expressed in terms of F(t) and S(t), so they can be conveniently estimated from experimental data.

Model predictions

In this section, we present, on the unified C OR and C AND spaces, simulation results from two models: (1) the standard parallel model—that is, a two-channel independent parallel model with unlimited capacity—and (2) the coactive model realized by parallel Poisson counters feeding into a final common channel.

Model predictions for OR paradigm: standard parallel model

The present instantiation of the standard parallel model assumes two independent channels, with unlimited capacity, each acting as a Poisson counter (e.g., Smith & Van Zandt, 2000; see also Townsend & Ashby, 1983, and Vickers, 1979, for general treatments of counting models). Evidence (measured in counts) toward a detection response is accumulated separately on each channel, until a prescribed criterion is met. In an OR task, processing time on a double-target condition is simply the time for the faster channel to complete its processing—that is, the time it takes for the number of counts in the faster channel to reach the criterion value. Mathematically, this model can be written as \( {S_{{AB}}}(t) = {S_A}(t) \cdot {S_B}(t) \), which is just a formal way of saying that the probability that a response has not been made by time t given two targets is the product of the probabilities that a response has not been made by time t given either single target alone (Mordkoff & Yantis, 1991, p. 535; Townsend & Wenger, 2004b, p. 1013). With this definition at hand, we can calculate and plot the predictions of the standard parallel model for the OR case.

Processing latencies on each channel of this model are Gamma distributed, with specified rate and criterion values. We explored a wide range of parameter values. In the following example, however, we assumed an equal criterion value (of 5 counts) for each of the channels. Rate values for channels A and B were set to .2 and .25, respectively.Footnote 11 Figure 1 shows the prediction of this model in the conventional form of cumulative distribution functions (top left panel) and in the novel form, mapped onto the C OR space (top right panel). C OR(t) values on the top right panel are exactly 1. When we report the noisier empirical results, we address the issue of sample noise by plotting error bounds.
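For readers who wish to reproduce this kind of prediction, the following minimal simulation sketch (Python; the sample size, time grid, and random seed are arbitrary choices of ours) implements the standard parallel OR model with the parameters just described (criterion of 5 counts, rates of .2 and .25):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000                     # simulated trials per condition
criterion = 5                  # counts required on each Poisson counter
rate_A, rate_B = 0.20, 0.25    # counts per unit time, as in the text

# Finishing times of a Poisson counter are Gamma distributed
# (shape = criterion, scale = 1/rate).
T_A = rng.gamma(criterion, 1 / rate_A, n)   # single-target A trials
T_B = rng.gamma(criterion, 1 / rate_B, n)   # single-target B trials

# Standard parallel model, OR rule: unlimited capacity and independence,
# so the double-target RT is the minimum of two unchanged channel times.
T_AB = np.minimum(rng.gamma(criterion, 1 / rate_A, n),
                  rng.gamma(criterion, 1 / rate_B, n))

def survivor(x, t_grid):
    return np.array([(x > t).mean() for t in t_grid])

t_grid = np.linspace(5, 60, 50)
S_A, S_B, S_AB = (survivor(x, t_grid) for x in (T_A, T_B, T_AB))
with np.errstate(divide="ignore", invalid="ignore"):
    C_or = np.log(S_AB) / np.log(S_A * S_B)
# C_or is approximately 1 across the grid, up to sampling noise
# (noisier at the extremes of the grid).
```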

Model predictions for OR paradigm: Coactive model

This model too assumes two parallel channels. Unlike a race model, however, information from the two channels converges downstream to satisfy a single criterion. Hence, the evidence accumulation rate in a coactive Poisson model is the sum of rates of the individual channels. In this example, we assumed the same criterion value (5) and processing rates for the single channels (.2, .25) as before. The processing rate for the double-target distribution is then given by summing the rates (0.45). In the bottom row of Fig. 1, we present the prediction of a coactive model with respect to the pertinent bounds in the conventional form (bottom left panel) and in the novel form, mapped onto the C OR space (bottom right panel). C OR(t) values on the bottom right panel are above 1, and the race model inequality is violated at almost all t values, suggesting that capacity is super to a rather strong degree. This result is compatible with the simulations of Townsend and Wenger (2004b) and the analytic results of Townsend and Nozawa (1995).
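The coactive counterpart requires only one change: on double-target trials, the counts from the two channels pool into a single counter running at the summed rate. A minimal sketch, under the same arbitrary simulation choices as above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, criterion, rate_A, rate_B = 50_000, 5, 0.20, 0.25

# Single-target conditions: one Poisson counter per presented target.
T_A = rng.gamma(criterion, 1 / rate_A, n)
T_B = rng.gamma(criterion, 1 / rate_B, n)

# Coactive model: counts pool into one conduit, so the double-target
# condition is a single counter running at the summed rate (.45).
T_AB = rng.gamma(criterion, 1 / (rate_A + rate_B), n)

def survivor(x, t_grid):
    return np.array([(x > t).mean() for t in t_grid])

t_grid = np.linspace(5, 60, 50)
S_A, S_B, S_AB = (survivor(x, t_grid) for x in (T_A, T_B, T_AB))
with np.errstate(divide="ignore", invalid="ignore"):
    C_or = np.log(S_AB) / np.log(S_A * S_B)
# C_or lies well above 1 over most of the grid (super capacity),
# consistent with the violations of the race bound described in the text.
```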

Model predictions for AND paradigm: standard parallel model and coactive model

In the upper row of Fig. 2, we present predictions of a two-channel independent-parallel model with unlimited capacity (standard parallel model), which indicate the expected unlimited capacity. This model is identical to the standard model specified earlier for the OR case, except for the decision rule. To obtain this plot, we used Gamma distributions with the same parameters as in the OR example (i.e., criterion = 5, rate A = .2, rate B = .25). The distribution function for the double-target condition in the AND case is given by \( F_{AB}(t) = F_A(t) \cdot F_B(t) \) (e.g., Townsend & Wenger, 2004b, p. 1015), which means that the probability that a response has been made by time t given two targets is the product of the probabilities that a response has been made by time t for both single-target A and single-target B. In the second row, we present predictions of a coactive Poisson counter model where, as in the OR case, the rate for the double-target distribution is the sum of the single-target rates (.45). Dramatic super capacity is the result.

Fig. 2

Predictions of a standard parallel model (top row) and a coactive model (bottom row) in the AND task. The data with respect to the Colonius–Vorberg (C–V) bounds are presented in the left panels in the conventional distribution form, where the dotted line marks the lower bound and the upper bound is given by the distribution function of the slower single-target condition (A in this illustration). In the right panels, we present the same bounds and data, transformed onto the capacity coefficient space

Following the simulations, we wished to perform a prototypical experiment to probe and illustrate the new procedures. We chose a modification of Townsend and Nozawa’s (1995) design, which used dots as stimuli. Experiment 1 employed the OR task, as did the original study. Experiment 2 was an AND task, which has not previously been run with these stimuli.

Experiment 1 (OR task)

Method

The following experiment was performed in order to produce some exemplary data for illustrative purposes. The present data come from two highly trained participants, but the qualitative results, mainly parallelism, use of the optimal stopping rule (an OR decision), and modestly limited capacity (e.g., C(t)  <  1 and failure to violate either the Miller race bound above, or the Grice bound below), have so far described all participants; no individual differences were observed in the basic aspects of information processing in this task.Footnote 12 These individuals participated, on consecutive days, in four experimental sessions (about an hour each) of a visual target detection task. The task was administered in a completely dark room, after 10 min of darkness adaptation.

In the task, a bright dot (luminance of 67 cd/m²; subtending 0.2° of visual angle at a viewing distance of 50 cm) could be presented (on a black screen) above a fixation point, below it, in both positions, or not at all. We refer to these targets as the top (A) and bottom (B) signals, respectively. On a double-target display, two dots were displayed on the vertical meridian, equally spaced above and below the fixation point at an elevation of ±1°. On a single-target display, only one dot was presented, either above or below fixation by 1°. The target-absent display consisted of a blank black screen. Stimuli were mixed within blocks.

The stimuli were generated via Microsoft Painter by an IBM-compatible (Pentium 4) microcomputer and displayed binocularly on a super-VGA 15-in. color monitor with a 1,024 × 768 pixel resolution, using DMDX software (Forster & Forster, 2003). On a trial, a single-pixel fixation point (subtending 0.05° of visual angle at a viewing distance of 50 cm; luminance of 0.067 cd/m²) was presented at the center of the screen for 500 ms, followed by a blank black screen (500 ms), and then by the stimulus display. The stimulus appeared on the screen for 100 ms and was then replaced by a blank screen. The participants were instructed to respond as quickly as possible. Response sampling began with the onset of the stimulus display and continued for 4,000 ms. The intertrial interval was 1,000 ms. The participants were asked to respond affirmatively by pressing the right mouse key with the right index finger upon detection of at least one target (i.e., two dots, a single dot on top, or a single dot at the bottom) and to respond no by pressing the left mouse key with the left index finger if no dot was present.

The probabilities of presenting both targets, presenting the top target alone, the bottom target alone, or no target at all were equal to .25. This means that the overall probability of a yes response in the OR task is .75, and a response bias toward a positive response may ensue. However, the present methodology is insensitive to such biases, except insofar as, in principle, a contingency might be inadvertently set up so that information about the presence or absence in one location might inform about the same in the other location (Mordkoff & Yantis, 1991). The .25 prior frequencies of presentations for all four stimuli obviate the possibility of that kind of “correlation” (Mordkoff & Yantis, 1991; Townsend & Nozawa, 1995). In addition, the different statistics discussed here [C OR(t), race model inequality, and Grice inequality] are all based on trials from the same response (yes), so response bias, even if present, makes no difference to these statistics.

Each session started with a practice block of 100 trials, followed by five experimental blocks of 160 trials each, with a 2-min break between blocks. The order of trials was randomized within a block. Overall, a large number of trials, 3,200, were collected for each participant in each experiment (OR, AND), allowing for tests at the distributional level.

Results

We recorded and analyzed RTs from correct responses. Error rates were low (3.4% and 2.2% for participants 1 and 2, respectively), and no accuracy–RT trade-off was observed. Using the RTs, we estimated the cumulative distribution functions and other requisite statistics for each of the experimental conditions.

In Fig. 3, we present the relevant RT distributions and the pertinent tests. In the left panels, the cumulative distributions of single- and double-target conditions are presented against the conventional form of the race model bound. In the right panels, the same data are presented in the capacity space. Focusing on the right panels, both participants exhibited limited capacity, evident by capacity coefficient values that are consistently below 1. Yet these capacity limitations were not severe enough to lead to violations of the lower, Grice bound. To better inform the reader about the stability of the estimated capacity function, we plot in thin solid lines the standard error of estimation (estimated by bootstrapping, with a correction to Van Zandt, 2002).Footnote 13 Strong inferences about workload capacity can be made as long as the confidence interval is tight enough. For the time range where it was tight, the observed workload capacity is always below 1 for Participant 1 (anywhere between 230 and 500 ms) and is either below or roughly bounded by 1 for participant 2 (anywhere between 330 and 800 ms).
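One simple way such bootstrap standard errors can be obtained is sketched below (Python; the trial-level resampling scheme and all names are ours and are not intended to reproduce the exact correction cited above):

```python
import numpy as np

def survivor(x, t_grid):
    x = np.asarray(x)
    return np.array([(x > t).mean() for t in t_grid])

def capacity_or(rts_A, rts_B, rts_AB, t_grid):
    with np.errstate(divide="ignore", invalid="ignore"):
        return (np.log(survivor(rts_AB, t_grid))
                / np.log(survivor(rts_A, t_grid) * survivor(rts_B, t_grid)))

def bootstrap_se(rts_A, rts_B, rts_AB, t_grid, n_boot=1000, seed=0):
    """Standard error of the C_OR(t) estimate, obtained by resampling
    trials with replacement within each condition."""
    rng = np.random.default_rng(seed)
    draws = np.empty((n_boot, len(t_grid)))
    for i in range(n_boot):
        draws[i] = capacity_or(rng.choice(rts_A, len(rts_A)),
                               rng.choice(rts_B, len(rts_B)),
                               rng.choice(rts_AB, len(rts_AB)),
                               t_grid)
    return np.nanstd(draws, axis=0)
```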

Fig. 3

Experimental results from a target detection OR task. Data from participant 1 (top row) and participant 2 (bottom row) are presented against the conventional form of the Grice and the race model bounds (left panels) and on the capacity coefficient space (right panels). The thin solid lines above and below the capacity coefficient function, in the right panels, represent ±1 standard error of the estimate (estimated by bootstrapping; see Footnote 13)

Therefore, models that predict super capacity, such as coactive models, can be rejected immediately, and for participant 1, even the standard parallel model is disconfirmed. In principle, coactive models with severe capacity limitations, or with negative cross talk between processing channels, can produce C(t) < 1. However, Townsend and Wenger (2004b) showed that the benefits of coactivation are robust enough to render this scenario unlikely. They simulated coactive systems with positive and negative interchannel dependencies and concluded that “… the presence of negative cross talk prior to pooling does not come close to offsetting the advantage obtained by pooling the channels” (p. 1024). A parallel model with fixed, evenly shared capacity (i.e., a fixed amount of processing resources that are equitably shared between channels) and a standard serial model both predict C OR(t) ≈ .5 (see Townsend & Ashby, 1983, pp. 76–91) and can therefore also be rejected with confidence. The simplest processing model that accommodates the observed data is a parallel model with rather significant limitations in resources, yet not to the level of the Grice bound. Again, the results are in close alignment with the capacity findings of Townsend and Nozawa (1995, Experiment 1).

Experiment 2 (AND task)

Method

In this task, the same participants who had earlier performed the OR task took part in four further sessions, with instructions to respond affirmatively if and only if targets A and B were simultaneously presented (i.e., if dots appeared in both the upper and lower positions). Otherwise, if no target was present or if either target A or target B was presented alone, the participants had to respond negatively by pressing another key. Because the probabilities of presenting both targets, the top target alone, the bottom target alone, or no target at all were all equal to .25 (as in the OR task), the overall probability of a yes response in the AND task was .25. All other experimental details were the same as those in the OR experiment.

There is a practical distinction, although not a theoretical one, between the estimated C AND(t) and C OR(t) functions. Recall that in the OR paradigm, the single-target data can be extracted from the yes single-target trial data. However, in the AND paradigm, the single-target trials require a no response. Residual processes not engaged in the target detection may alter the RTs associated with the single-target no responses. For example, Clark and Chase (1972) argued that negation responses may, in general, consume more time than affirmation responses. Given this logic, we have proposed (Eidels & Townsend, 2009) that a more realistic estimate of C AND(t) can be garnered by employing single-target yes data from an OR paradigm. This is the strategy we take here.

Results

C AND(t) and the Colonius–Vorberg bounds are plotted in Fig. 4. Capacity, as gauged by the capacity coefficient and shown in the right panels of Fig. 4, was quite limited in the early ranges, although never quite violating the lower Colonius–Vorberg bound. For medium and longer RTs, capacity became super, eventually exceeding the upper bound and indicating the emergence of extreme super capacity.

Fig. 4

Experimental results from a target detection AND task. Data from participant 1 (top row) and participant 2 (bottom row) are presented against the conventional form of the Colonius–Vorberg (C–V) bounds (left panels) and on the capacity coefficient space (right panels). The thin solid lines above and below the capacity coefficient function, in the right panels, represent ± 1 standard error of the estimate (estimated by bootstrapping; see Footnote 13)

General discussion

Comparing results from the OR and AND data and methodological considerations

Participants in Experiments 1 and 2 engaged in OR and AND tasks, with exactly the same stimuli in each paradigm, allowing for a direct comparison of capacity across the two tasks. Capacity on the OR task was limited for all t values (Fig. 3, right panels), whereas capacity on the AND task was super over an extensive range of RTs (Fig. 4, right panels).

As one possible account of the difference between AND and OR capacities, the researcher might consider the way in which task demands could affect the mode of processing. Instructions to detect both A and B targets may push human participants toward unitization of several items into a single, holistic representation, resulting in super capacity. In fact, there are now several sets of data suggesting that forcing conjunctions of display elements, rather than disjunctions, can provide an impetus toward super capacity.

In point of fact, Blaha and Townsend (2006) have shown that in a conjunctive categorization task, in which participants had to exhaustively search all items of a set in order to respond correctly, participants exhibited capacity values much higher than 1 after a few days of training. In contrast, when searching for one particular feature, capacity was limited even after several days of training. In addition, employing OR and AND designs with facial emotional features, Innes-Ker and Townsend (2003) also discovered super capacity with exhaustive AND processing but limited capacity in the OR counterpart. In a broad investigation of word and face perception, Wenger and Townsend (2006) also documented considerably more super capacity in their AND conditions. Nonetheless, in contemplating this explanation, it should be kept in mind that the present dot stimuli are extraordinarily simple. The investigator might also be interested in following up on the very early period of limited capacity.

Recall that the negation responses in the AND paradigm induced us to select affirmation responses from the OR paradigm. (This should always be carried out on a within-participants basis.) We recomputed C AND(t), using instead the negation single-target responses from that condition and found uniform super capacity. Thus, for the major range of times, either technique delivers the inference of super capacity. Nevertheless, since this strategy may inflate C AND(t) due to slower no single-target responses, the experimenter may well prefer the yes single-target data (i.e., single-target data from an OR experiment) in computing this capacity index.

RT influences from various aspects of the processing system and the yardstick

Multiple sources of information can, in principle, be processed serially, in parallel, or in some hybrid fashion (see, e.g., Schweickert, 1978; Schweickert & Townsend, 1989). Additionally, they might be processed independently from one another or interact in different ways. The processing efficiency of the system across various levels of workload can change as the effort required of the participant is varied. Naturally, the change can come about due to the architecture (e.g., serial vs. parallel), the stopping rule (e.g., OR vs. AND tasks), channel or item dependencies, and so on.

Some commentators have been puzzled by the term workload capacity when it is used in the context of redundant targets in an OR task. The confusion apparently stems from the expectation that redundant OR responding will always be faster than, say, responding to either target separately. First, our capacity methodology assays the overall performance, which naturally compiles the contributions from both channels (stages, etc.). And our yardsticks allow, or take into account, the stopping rule. From that point of view, a system that is limited capacity in going from a single to double signal situation in an AND task will, with no further changes to the efficiency of the individual subprocesses, still be limited capacity in an OR situation. This fact is quite critical, since it will hold even if the limitations are sufficiently moderate that some benefits to the redundant target stimuli accrue (i.e., there is a redundancy gain, to use the parlance in the field of redundant signals). It is the underlying efficiency of the entire system that is being tested, not just the aspect of whether or not RT is shorter with two targets, rather than one.

Furthermore, it can be observed that a minimum-time serial system will not predict that a redundant-target trial is faster than a single-target trial (in fact, their averages should be the same). Finally, with regard to this particular issue, we have discovered stimuli and tasks where participants reveal capacity that is as poor as, or even poorer than, that of a fixed-capacity parallel system (equivalent to minimum-time serial processing). When workload capacity descends below the fixed-capacity level, redundancy gains no longer occur.Footnote 14

The workload capacity measures we constructed are founded on the class of parallel systems in which the channels neither slow down nor speed up as the load is increased and, in addition, assume stochastic independence among the channels. The installation of this variety of parallel systems as the “yardstick” is somewhat subjective, and other investigators may think of good reasons to pick others—for instance, serial systems or coactive systems.

Our choice seems natural, and useful, to us for the following reasons. The parallel versus serial processing issue has surfaced in many diverse arenas and in a number of cases, the primary “antagonists” are what we refer to as standard serial models versus standard parallel models. The former class is based on one-at-a-time processing with identical, independent distributions for each completion time. The latter class is just that specified immediately above. The visual search literature forms one apposite example of this approach. Another imposing body of research largely pits race models against “something better,” the something better usually being coactivation, which brings us to the next topic.

Agreeing on a taxonomy of psychological systems

The inductive side of our developments has primarily been drawn from the research spheres of visual and memory search and the redundant-signals literature. As is usually the case in psychological theory and methodology, the terminology has been rather vague in the past in either area. It presently seems important to establish conventions upon which investigators can agree or, at least, take as a point of departure.

In particular, terms such as race model and coactivation have been somewhat blurrily defined and have further changed over the years. Let’s first take up the definition of race model, which has played such a vital role in the redundant-signals purview. Starting in the 1970s, and generated primarily by J. Miller and his colleagues (e.g., J. Miller, 1978), race models were operationally defined as models (presumably, parallel models) that did not violate Miller’s race model inequality. What we call the standard parallel model is what past researchers used as a standard (often referred to, somewhat awkwardly, as probability summation). Miller’s bound is trivially satisfied by these models, as he well realized, thus prompting the more general, but somewhat vague, definition of race.

Subsequently, researchers began to turn to more theoretical definitions based on parallel channels that fed into a final common pooled channel, whose activation determined the final decision. All the rigorous examples with which we are familiar assumed stochastically independent processing in the distinct channels up until the pooling operation. We shall derive our proposed set of conventions from the analysis of Colonius and Townsend (1997), who took into consideration the evolution of terminology.

Our taxonomy first defines the class of separate decisions parallel models as any parallel system where detection decisions are made in the distinct channels, perhaps prior (as in redundant-signals designs) to imposing a final logical decisional rule. The class of race models is then defined as separate-decisions parallel processing with a minimum-time stopping rule. Such models can violate any of the bounds when channel speeds are somehow changed across workload level—for instance, through channel interactions. However, if the marginal channel speeds are preserved when the load is changed, the resulting race models will obey the Miller and Grice inequalities (ditto for the AND bounds; see Colonius, 1990; Colonius & Vorberg, 1994).

Coactive models are next defined in terms of systems containing early independent parallel channels followed by a pooling into a single channel where the ultimate decision is made. Of course, hybrid process models are also possible (Colonius & Townsend, 1997). For instance, one can design a system with positively or negatively interacting parallel channels that nonetheless posits a final pooling stage, as in our coactive models. With extremely negatively interacting channels, such a hybrid coactive system can exhibit overall limited capacity. Again, these are conventions. Some might prefer to classify any system with a final pooling conduit as “coactive,” even if the separate channels interact prior to the pooling.

Our chosen taxonomy bears the following consequences, among others. (1) Separate-decisions parallel models predict super capacity when channel facilitation is allowed (Eidels et al., 2011; Townsend & Wenger, 2004b). (2) In fact, separate-decisions parallel models with mutual channel facilitation can devolve to true coactivation as a special case (Colonius & Townsend, 1997). (3) Separate-decisions parallel models can be limited capacity if their channels inhibit one another (Eidels et al., 2011; Townsend & Wenger, 2004b). (4) Separate-decisions models might therefore, as noted above, either violate or satisfy the Miller race inequality. (5) Coactivation models are invariably super capacity and readily violate the race inequality, if the individual channels are not seriously affected when the workload increases (Townsend & Wenger, 2004b). (6) This system of classification implies that the former operational definition through violation of the race inequality is no longer viable. (7) Within this proposed convention, models whose channels interact early on and later converge via pooling should, by convention, not be called “coactive,” although they may be viable hybrid models in certain conditions.

Finally, some have objected to segregating coactive and separate-decisions models in our classification, on the grounds that both are multichannel prior to the pooling or to the within-channel detections, respectively. Nonetheless, our taxonomy possesses the merit of preserving and extending the vital distinctions that have evolved in the field of redundant-target perception.

Influence of base times on capacity measures

A component of the observed RTs, which we call base time, is known to have a potential effect on the estimates of the capacity coefficient and the race model bound. Base time is a traditional term for the duration consumed by subsidiary mechanisms before or after the internal processing interval of interest, and it is often symbolized by the random variable T_0. The random variable for RT is then simply RT = T_p + T_0, where T_p is the random processing time of interest. Typically, T_0 includes motor time and early sensory coding. It is virtually always assumed that T_0 is stochastically independent of the processes under study and also impervious to experimental manipulations of the latter. Progress has been made in assessing the influence of T_0 on our capacity measures and bounds, which will aid the researcher assaying capacity with the present methodology.

Townsend and Nozawa (1995) pointed out qualitatively (without rigorous proof) that the numerator of C_OR(t) contains only one T_0 component, whereas the denominator contains two, which, in general, will yield a lower estimate of C_OR(t) than would be obtained if only the actual processing times of concern could be inserted into the formula. As it turns out, analogous distortions act on the various bounds, although Ulrich and Giray (1986; see also Colonius, 1990; Colonius & Vorberg, 1994) showed that inclusion of the base time would not permit satisfaction of the Miller or Grice inequalities if the processing time distributions violated them in the absence of the base time.

Nonetheless, Townsend and Honey (2007) recently proved that, when the base time component of RTs is taken into account, the capacity coefficient and the race model inequality both generate an underestimate of the processing capacity of a system. In other words, one may observe C_OR(t) values below 1 when, in fact, the real capacity of the system is unlimited or even super. Likewise, they demonstrated that both the maximum violation of the race bound and the area of the violation will be statistically attenuated. The reason is that T_0 appears in each of the two single-target RT distributions but only once in the double-target RT distribution. They also simulated standard parallel models possessing base times of various magnitudes and found that, although the standard parallel model predicts capacity coefficient values of 1, the inclusion of a base time component pushed C_OR(t) < 1.
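
The following minimal sketch (in Python, with hypothetical exponential channels and a deliberately variable base time chosen to make the effect visible; the parameter values are ours, not those of the simulations reported here) reproduces the logic of that demonstration: C_OR(t) is estimated as the ratio of the integrated hazard of the redundant-target RTs to the sum of the single-target integrated hazards, and adding T_0 pulls the estimates below 1.

import numpy as np

rng = np.random.default_rng(3)
n, rate = 200_000, 1 / 300.0                 # trials and channel rate (assumed, per ms)
base = lambda: rng.gamma(2.0, 150.0, n)      # variable base time T_0, mean 300 ms (assumed)

rt_a  = rng.exponential(1 / rate, n) + base()              # single target A
rt_b  = rng.exponential(1 / rate, n) + base()              # single target B
rt_ab = np.minimum(rng.exponential(1 / rate, n),
                   rng.exponential(1 / rate, n)) + base()  # redundant targets (OR: minimum)

def H(rts, t):
    # integrated hazard H(t) = -log S(t), from the empirical survivor function
    return -np.log(np.mean(rts[:, None] > t[None, :], axis=0))

t_grid = np.linspace(300, 1200, 7)
c_or = H(rt_ab, t_grid) / (H(rt_a, t_grid) + H(rt_b, t_grid))
print(np.round(c_or, 2))   # values fall below 1, although the underlying channels are unlimited capacity

With a constant base time the ratio stays at 1; it is the variability of T_0 that produces the distortion, in line with the variance argument below.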

In this article (see Appendix 3), we prove for the first time that in AND tasks, the presence of T_0 in each of the single-target RTs, but only once in the double-target RTs, has an effect opposite to that in OR tasks. Namely, if the standard parallel model is in force, our estimate of AND capacity with base time is C_AND(t) ≥ 1, as opposed to the contrary result in the case of C_OR(t)! We return to this fact later. In addition, Appendix 3 reports simulation results and considers the implications of base times in the OR and AND tasks.
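
A complementary sketch for the AND case (same hypothetical channels and base time as above; again, the parameter values are our own) estimates C_AND(t) as the ratio of the summed single-target cumulative reverse hazards to the double-target cumulative reverse hazard and shows the opposite distortion.

import numpy as np

rng = np.random.default_rng(4)
n, rate = 200_000, 1 / 300.0
base = lambda: rng.gamma(2.0, 150.0, n)      # same variable base time as in the OR sketch (assumed)

rt_a  = rng.exponential(1 / rate, n) + base()              # single target A
rt_b  = rng.exponential(1 / rate, n) + base()              # single target B
rt_ab = np.maximum(rng.exponential(1 / rate, n),
                   rng.exponential(1 / rate, n)) + base()  # double targets (AND: maximum)

def K(rts, t):
    # cumulative reverse hazard K(t) = log F(t), from the empirical distribution function
    return np.log(np.mean(rts[:, None] <= t[None, :], axis=0))

t_grid = np.linspace(600, 1500, 7)
c_and = (K(rt_a, t_grid) + K(rt_b, t_grid)) / K(rt_ab, t_grid)
print(np.round(c_and, 2))   # values sit at or above 1 (up to sampling noise), mirroring the Appendix 3 result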

Townsend and Honey (2007) discussed their theoretical findings in OR paradigms with regard to the experimental literature and provisionally concluded that, for base times in the range of what has been estimated in the laboratory, inferences about performance through C_OR(t) and the bounds would not be seriously compromised. In both the OR and AND paradigms, the magnitude of the variance of T_0 relative to that of the processing times of interest is the determining factor, as Townsend and Honey showed for the OR case. The typical T_0 variance appears negligible relative to the variability in the processes under study. The upshot for present purposes is that a C_OR(t) > 1 result and a violation of Miller's race bound are then strong evidence for true super capacity, whereas very mild limited capacity might be due to the base time. Large departures of C_OR(t) below 1 indicate true limitations in capacity, whether due to limited resources or lateral inhibition. Our present simulations suggest that a substantial C_AND(t) in conjunction with a violation of the upper Colonius–Vorberg bound provides sturdy evidence for strong super capacity, but a small increase of C_AND(t) above 1 may be due to T_0. Conversely, values of C_AND(t) < 1 cannot be produced by T_0.

The theoretical findings above come into play when one considers that, for each of the observers, limited capacity was found in the present OR task but, for the most part, super capacity in the AND task. This asymmetry plainly raises the possibility that the base time is at least partly responsible. Thus, the researcher should ask whether the base time could have produced the difference between the OR and AND experimental results by leading to underestimation of the capacity index in the former and overestimation in the latter. The conclusion reached from extensive simulations (Appendix 3) is that base time is a sufficient explanation for the moderately limited capacity found in the OR data but that it is insufficient to account for the substantial super capacity discovered in the AND data.

The discovery that the base time exerts opposite directions of influence in the OR and AND designs is quite felicitous, as its aid in resolving the OR–AND capacity asymmetry above suggests. It implies that carrying out both OR and AND designs can, as in the present circumstance, help the experimenter evaluate whether apparent limited capacity in the OR task is due to base time, and analogously for super capacity in the AND task. Although beyond our present scope, this property may also help in adjudicating model comparisons when parameterized models are fit to data from the two paradigms.

The discussion above raises the possibility that factors other than the channels' processing rates under varying loads can drive our capacity measures up or down. Fitting parametric models to data can provide estimates of, rather than assumptions about, such parameters and whether they vary with workload. In a recent study, Eidels, Donkin, Brown, and Heathcote (2010a) analyzed data from a redundant-target OR task by using the nonparametric C_OR(t) measure, as well as by fitting the linear ballistic accumulator model (Brown & Heathcote, 2008). They found close agreement between the techniques and, in particular, discovered that the efficiency of processing across load conditions (single vs. double target) was driven by the channels' accumulation rates, and not by other parameters such as base time. For example, in a group of participants who exhibited super capacity, the within-channel processing rate for double targets was estimated as substantially higher than the single-target rate, meaning that each channel actually performed better when the other channel was activated. Base times, however, were almost identical across conditions and could not account for the observed super capacity, in complete agreement with Townsend and Honey's (2007) conclusion.

Other factors, such as the decision threshold, may also play a role in determining the speed of response. Indeed, Eidels et al. (2010a) estimated, for the group of participants above, a lower threshold value for yes trials than for no trials, presumably because, in the standard OR design, 75% of the trials require a positive response. However, in the OR task, this difference cannot contaminate C_OR(t) estimates, since the index is calculated only from yes trials. Future research may probe the effects of the decision threshold on C_AND(t).

Distinct roles for the C(t) functions and the bounds in a global theory

Consider a prototypical target detection task (e.g., J. Miller, 1978). As we have seen, the workload capacity coefficient (Townsend & Nozawa, 1995; Townsend & Wenger, 2004b) measures the effects of workload (the number of features, items, channels, and so forth to be processed) on performance. These C(t) functions can operate within distinct stopping rules, such as the OR and AND paradigms. Miller's race model inequality and Grice's inequality place upper and lower bounds on the performance of parallel processing with a minimum-time (OR) stopping rule (J. Miller, 1978, 1982; Grice et al., 1984). Violation of the Colonius and Vorberg (1994) upper bound indicates a processing speed superior to that expected from a standard parallel model employing an exhaustive (AND) stopping rule, whereas violation of their lower bound indicates the opposite. Both types of assessment can readily be extended to arbitrary workloads (for bounds, see Colonius, 1990, and Colonius & Vorberg, 1994; for capacity indices, see Blaha & Townsend, 2006); a computational summary of the four bounds follows.
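
For the reader who wishes to apply these checks directly to data, the following minimal sketch (in Python; the directional form of each inequality is stated as we read it from the text, with F_A and F_B denoting the single-target distribution functions) evaluates the four bounds at a grid of time points.

import numpy as np

def ecdf(x, t):
    # empirical cumulative distribution function of x evaluated on the grid t
    return np.mean(np.asarray(x)[:, None] <= t[None, :], axis=0)

def evaluate_bounds(rt_a, rt_b, rt_or, rt_and, t):
    # rt_a, rt_b: single-target RTs; rt_or: redundant-target OR RTs; rt_and: double-target AND RTs
    F_a, F_b = ecdf(rt_a, t), ecdf(rt_b, t)
    F_or, F_and = ecdf(rt_or, t), ecdf(rt_and, t)
    return {
        "miller_ok":   F_or  <= F_a + F_b,              # Miller race bound (OR, upper)
        "grice_ok":    F_or  >= np.maximum(F_a, F_b),   # Grice bound (OR, lower)
        "cv_upper_ok": F_and <= np.minimum(F_a, F_b),   # Colonius-Vorberg upper bound (AND)
        "cv_lower_ok": F_and >= F_a + F_b - 1,          # Colonius-Vorberg lower bound (AND)
    }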

The reader might wonder whether both the C(t) index functions and the various bounds are really needed. A related issue is whether one approach might be in some sense "better" than the other.Footnote 15

These statistics serve distinct purposes and carry distinct advantages and disadvantages, although we view them as mutually supportive. Violation of Miller's race model inequality permits rejection of a large class of race models, but not all, since reasonable parallel race models with cross-channel facilitation can readily violate that bound (Townsend & Wenger, 2004b). Within our theory, such a violation indicates super capacity beyond the level marked by the bound itself. Similarly, violation of the Grice inequality indicates extremely limited capacity not ordinarily associated with race models (but possible through, for example, mutually inhibitory channels; Townsend & Wenger, 2004b).

A limitation of the race model inequality is that it is useful only when F_A(t) + F_B(t) ≤ 1. Furthermore, although the race bound is typically very close to C_OR(t) = 1 for small values of t, the bound diverges rapidly as time becomes larger, as shown for the first time in the above exposition. The Grice bound is always substantially below C_OR(t) = 1.

The Colonius–Vorberg inequalities play an analogous role in AND designs. The lower Colonius–Vorberg bound is virtually equivalent to C_AND(t) = 1 for very slow responses. However, the divergence between the bound and the capacity coefficient at shorter times provides concrete evidence of the difference between the statistics. The upper Colonius–Vorberg bound is typically far removed from C_AND(t) = 1. Analogously to the fact that the (upper) race bound is useful only when F_A(t) + F_B(t) ≤ 1, the lower Colonius–Vorberg bound is appropriate only when F_A(t) + F_B(t) ≥ 1.

We hope we have persuaded the reader that both methodologies contribute distinct but essential information and that they offer valuable complementary evidence. Indeed, both the capacity functions and the various bounds are contained in an overall general theory of capacity (Townsend & Wenger, 2004b), which now permits contrast and comparison on the same scale. In the present article, we extend this general theory by developing a new common-scale framework. This framework affords the simultaneous depiction of all pertinent measures of performance on a common C(t) scale and in the same data space. Using the transformation formulae in Table 1 (presented in their F(t) and S(t) forms, so that they can be computed easily from experimental data), one can plot, for instance, the lower Colonius–Vorberg bound alongside the prediction of the standard parallel model, C_AND(t) = 1, and experimental and/or simulated data on the same scale.
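
As an illustration of how such common-scale transformations can be obtained (this is our own derivation from the definition of the OR capacity coefficient, offered only as a sketch; the article's Table 1 gives the official formulae), note that with H_X(t) = -log S_X(t), Miller's bound F_AB ≤ F_A + F_B becomes C_OR(t) ≤ -log(1 - F_A - F_B) / (H_A + H_B), and Grice's bound becomes C_OR(t) ≥ max(H_A, H_B) / (H_A + H_B). The sketch below returns the empirical C_OR(t) curve together with both bounds on that common scale.

import numpy as np

def or_capacity_space(rt_a, rt_b, rt_or, t_grid):
    # Empirical distribution and survivor functions for the two single-target
    # conditions and the redundant-target (OR) condition; choose t_grid where S_AB > 0.
    F_a = np.mean(np.asarray(rt_a)[:, None] <= t_grid, axis=0)
    F_b = np.mean(np.asarray(rt_b)[:, None] <= t_grid, axis=0)
    S_ab = np.mean(np.asarray(rt_or)[:, None] > t_grid, axis=0)

    H_a, H_b, H_ab = -np.log(1 - F_a), -np.log(1 - F_b), -np.log(S_ab)

    c_or = H_ab / (H_a + H_b)                                   # capacity coefficient
    with np.errstate(divide="ignore", invalid="ignore"):
        miller = np.where(F_a + F_b < 1,
                          -np.log(1 - F_a - F_b) / (H_a + H_b), # Miller bound on the capacity scale
                          np.nan)                               # undefined once F_a + F_b >= 1
    grice = np.maximum(H_a, H_b) / (H_a + H_b)                  # Grice bound on the capacity scale
    return c_or, miller, grice                                  # all three curves share the C_OR(t) axis

Analogous transformations place the Colonius–Vorberg bounds on the C_AND(t) scale.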

In closing, it is worth noting that although we have emphasized OR and AND experimental designs, our methods can be extended to any situation involving an arbitrary number of objects that must be processed in some manner. Any task based on the sentential logical calculus, for instance, requires a finite number of elements to be processed. Thus, an XOR (exclusive-OR) task—"respond positively if one signal is present, or the other, but not both"—demands the determination of exactly one target and one nontarget on a yes trial, implying exhaustive processing.
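
As a small illustration of this last point (with hypothetical channel reports; this is not a model of any particular task), the decision rule below cannot return a yes until both channels have reported, because a single "target present" report is consistent with both a yes stimulus (exactly one target) and a no stimulus (two targets).

from itertools import product

def xor_response(channel_a_reports_target: bool, channel_b_reports_target: bool) -> str:
    # yes only if exactly one channel reports a target
    return "yes" if channel_a_reports_target != channel_b_reports_target else "no"

for a, b in product([True, False], repeat=2):
    print(f"A target: {a!s:>5}  B target: {b!s:>5}  ->  {xor_response(a, b)}")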

Interestingly, stimuli containing two targets or no targets also call for exhaustive processing in such a task. The present methodology for the assessment of capacity, as well as of architecture, stopping rule, and dependence, can be generalized to such tasks.

Conclusion

This study proposes a conception of capacity spaces wherein not only the original C(t) indices but also such other vital statistics as the Miller, Grice, and Colonius–Vorberg bounds can be graphed and contrasted. Naturally, experimental data and, perhaps, simulations of complex models whose capacity predictions may not be immediately evident can be plotted in such a space and compared with the pertinent theoretical bounds. Although we have emphasized several varieties of parallel processing and the major inequalities, it is feasible to portray the predictions of other architectures—for instance, serial models or more complex architectures (e.g., Schweickert & Townsend, 1989). In addition, it is perfectly feasible to chart, say, AND predictions, or data for that matter, within an OR capacity space. Thus, the AND predictions from a standard parallel model in such a portrayal reveal how much a system is "paying" by failing to implement a disjunctive, first-terminating, minimum-time decision strategy and instead using an exhaustive, conjunctive rule.