Abstract
Pigeons pecked keys on concurrent-chains schedules that provided a variable-interval 30-sec schedule in the initial links. One terminal link provided reinforcers in a fixed manner; the other provided reinforcers in a variable manner with the same arithmetic mean as the fixed alternative. In Experiment 1, the terminal links provided fixed- and variable-interval schedules. In Experiment 2, the terminal links delivered reinforcers after a fixed or a variable delay following the response that produced them. In Experiment 3, the terminal links provided reinforcers that were fixed or variable in size. Rate of reinforcement was varied by changing the scheduled interreinforcer interval in the terminal links from 5 to 225 sec. The subjects usually preferred the variable alternative in Experiments 1 and 2 but differed in their preferences in Experiment 3. The preference for variability was usually stronger at lower rates of reinforcement (longer terminal links) than at higher rates (shorter terminal links). Preference did not change systematically with time in the session. Some aspects of these results are inconsistent with explanations of the preference for variability in terms of scaling factors, scalar expectancy theory, risk-sensitive models of optimal foraging theory, and habituation to the reinforcer. Initial-link response rates also changed within sessions when the schedules provided high, but not low, rates of reinforcement. These within-session changes in responding were similar for the two initial links. Such similarities imply that habituation to the reinforcer is represented differently in theories of choice than are other variables related to reinforcement.
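One widely discussed account of the preference for variability reported here is hyperbolic temporal discounting: occasional short delays in a variable schedule are discounted far less steeply than long ones, so a variable alternative can be worth more than a fixed alternative with the same arithmetic mean delay. The sketch below illustrates that asymmetry; the delay values and the discounting parameter are hypothetical and are not the schedules used in these experiments.

```python
import statistics

# Hypothetical delays (sec). The fixed terminal link always imposes a
# 30-sec delay; the variable link mixes delays with the same arithmetic
# mean (30 sec) but a wide spread.
fixed = [30.0] * 8
variable = [5.0, 10.0, 15.0, 25.0, 35.0, 45.0, 50.0, 55.0]

assert statistics.mean(fixed) == statistics.mean(variable) == 30.0

def mean_hyperbolic_value(delays, k=1.0):
    """Average reinforcer value across equiprobable delays,
    using the hyperbolic form v = 1 / (1 + k * d)."""
    return statistics.mean(1.0 / (1.0 + k * d) for d in delays)

# Short delays lose little value to discounting, so the variable
# alternative has the higher average value despite the equal means.
print(mean_hyperbolic_value(fixed))     # ~0.032
print(mean_hyperbolic_value(variable))  # ~0.056
```

Under this kind of account, the advantage of the variable alternative grows as the delays themselves grow, which is consistent with the stronger preference for variability observed at the longer terminal-link intervals.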
Additional information
Preparation of this manuscript was partially supported by Grant R01 MH61720 from the National Institute of Mental Health to F.K.M. Some of these data were presented at the 2001 meeting of the Northwest Association for Behavior Analysis in Victoria, British Columbia, and at the 2002 meeting of the Association for Behavior Analysis in Toronto, Ontario. Experiment 3 was conducted as part of a master’s thesis submitted to the Department of Psychology, Washington State University, by B.P.K.
Cite this article
McSweeney, F.K., Kowal, B.P. & Murphy, E.S. The effect of rate of reinforcement and time in session on preference for variability. Animal Learning & Behavior 31, 225–241 (2003). https://doi.org/10.3758/BF03195985