
Generalizing from the Past, Choosing the Future

  • SQAB Tutorials

Perspectives on Behavior Science

Abstract

Behavior in the present depends critically on experience in similar environments in the past. Such past experience may be important in controlling behavior not because it determines the strength of a behavior, but because it allows the structure of the current environment to be detected and used. We explore a prospective-control approach to understanding simple behavior. Under this approach, order in the environment allows even simple organisms to use their personal past to respond according to the likely future. The predicted future controls behavior, and past experience forms the building blocks of the predicted future. We explore how generalization affects the use of past experience to predict and respond to the future. First, we consider how generalization across various dimensions of an event determines the degree to which the structure of the environment exerts control over behavior. Next, we explore generalization from the past to the present as the method of deciding when, where, and what to do. This prospective-control approach is measurable and testable; it builds predictions from events that have already occurred, and assumes no agency. Under this prospective-control approach, generalization is fundamental to understanding both adaptive and maladaptive behavior.


Fig. 1



Author information

Correspondence to Sarah Cowie.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

The present article is intended to illustrate a conceptual approach to understanding why the environment exerts imperfect control over behavior. For these purposes, we adopt the equations used by Cowie and Davison (2020) to model the generalization of reinforcers across time and location, as shown in Fig. 1. To model temporal generalization, reinforcers in each time bin were redistributed across surrounding time bins according to a Gaussian function with standard deviation s_t at time t since a marker event (Panels C and D in Fig. 1):

$$ s_t = s_0 + \frac{a_s}{1 + e^{-\left(\frac{t - X_0}{\beta_s}\right)}}. $$
(1)

In Equation 1, the parameter a_s is the extent of the increase in generalization between the times at which generalization is least likely (s_0) and most likely (i.e., the asymptote, s_0 + a_s). X_0 is the time (x-value) at which s_t is halfway between its asymptotically low and high values, and β_s is the slope of the function around this point (i.e., the speed with which generalization increases).
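As an illustration, the logistic growth of the generalization standard deviation in Equation 1 can be sketched in a few lines of Python. The parameter values below are illustrative only, not the fitted values from Cowie and Davison (2020):

```python
import math

def logistic_sd(t, s0, a_s, x0, beta_s):
    """Equation 1: SD of temporal generalization at time t since the marker event.

    s0     -- lower asymptote (time of least generalization)
    a_s    -- size of the rise to the upper asymptote (s0 + a_s)
    x0     -- time at which the SD is halfway between its asymptotes
    beta_s -- slope of the function around x0 (speed of the rise)
    """
    return s0 + a_s / (1 + math.exp(-(t - x0) / beta_s))

# With these illustrative values, the SD rises from near s0 = 0.5
# toward s0 + a_s = 2.5, passing through the halfway point at t = x0 = 10.
for t in (0, 10, 20):
    print(round(logistic_sd(t, s0=0.5, a_s=2.0, x0=10.0, beta_s=2.0), 3))
```

Because the same logistic form is used for spatial generalization (Equation 2), the identical function computes m_t when given m_0, a_m, and β_m instead.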

Because of the discrete nature of the two response locations in the procedure, we modeled generalization across location by shifting a proportion m of reinforcers at each time to the other alternative. The proportion of reinforcers generalized to the other location at time t, m_t (Panels E and F in Fig. 1), was calculated as:

$$ m_t = m_0 + \frac{a_m}{1 + e^{-\left(\frac{t - X_0}{\beta_m}\right)}}. $$
(2)

The parameters in Equation 2 are the same as in Equation 1, but apply to generalization across location (m) rather than time (s). As Cowie and Davison (2020) did, we used the same X0 parameter for both temporal (s) and spatial (m) generalization.

The discriminated reinforcers (R’) in Panels E and F of Fig. 1 are thus derived from the obtained reinforcers using the equation:

$$ \log\frac{R{\prime}_{1,t}}{R{\prime}_{2,t}} = \log\left(\frac{\sum_{n=1}^{t_{max}} f\left(t,n,\gamma n\right)\left[\left(1-m\right){R}_{1,n} + m{R}_{2,n}\right]}{\sum_{n=1}^{t_{max}} f\left(t,n,\gamma n\right)\left[\left(1-m\right){R}_{2,n} + m{R}_{1,n}\right]}\right) + \log c. $$
(3)

In this instance, the parameters are the same as in Equations 1 and 2, and tmax is the maximum time since a marker event, dictated by the procedure itself. In the example in the present article, we displayed the effects of the two generalization processes sequentially to illustrate their separate effects on the discriminated structure of the environment. As Equation 3 shows, both processes are in fact applied simultaneously when fitting the quantitative model to the data.
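A minimal sketch of Equation 3 follows. It assumes f(t, n, γn) is a Gaussian density over time t with mean n and standard deviation γn (the precise form of f follows Cowie & Davison, 2020), and for simplicity uses a single spatial-generalization proportion m rather than the time-varying m_t of Equation 2; reinforcer counts and parameter values are hypothetical:

```python
import math

def gaussian(t, mean, sd):
    """Gaussian weight used to spread reinforcers across time bins."""
    return math.exp(-0.5 * ((t - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def discriminated_log_ratio(t, r1, r2, m, gamma, log_c=0.0):
    """Equation 3: log ratio of discriminated reinforcers R'1/R'2 at time t.

    r1, r2 -- obtained reinforcer counts per time bin (bins n = 1..tmax)
    m      -- proportion of reinforcers generalized to the other location
    gamma  -- scales the SD of temporal generalization with bin time n
    log_c  -- constant bias term (log c)
    """
    tmax = len(r1)
    num = den = 0.0
    for n in range(1, tmax + 1):
        w = gaussian(t, n, gamma * n)  # temporal generalization weight f(t, n, gamma*n)
        # Spatial generalization: a proportion m of each location's
        # reinforcers is credited to the other location.
        num += w * ((1 - m) * r1[n - 1] + m * r2[n - 1])
        den += w * ((1 - m) * r2[n - 1] + m * r1[n - 1])
    return math.log(num / den) + log_c

# Equal reinforcer distributions at the two locations give a log ratio of 0,
# regardless of the generalization parameters.
print(discriminated_log_ratio(3, [1, 2, 3], [1, 2, 3], m=0.1, gamma=0.3))
```

Note that with m = 0 and γ near 0, the discriminated reinforcer ratio approaches the obtained ratio in bin t; increasing either parameter blurs the structure of the environment, which is the imperfect control the appendix is modeling.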


Cite this article

Cowie, S., Davison, M. Generalizing from the Past, Choosing the Future. Perspect Behav Sci 43, 245–258 (2020). https://doi.org/10.1007/s40614-020-00257-9
