
Learning to Control at Multiple Time Scales

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2714)

Abstract

In reinforcement learning, the interaction between the agent and the environment generally takes place on a fixed time scale, i.e. the control interval is set to a fixed time step. Choosing a suitable fixed time scale requires trading off accuracy in control against learning complexity. In this paper, we present an alternative approach that enables the agent to learn a control policy using multiple time scales simultaneously. Instead of preselecting a fixed time scale, several time scales are available during learning, and the agent can select the appropriate time scale depending on the system state. The different time scales are multiples of a finest time scale, denoted the primitive time scale. Actions on a coarser time scale consist of several identical actions on the primitive time scale and are called multistep actions (MSAs). The special structure of these actions is efficiently exploited by our recently proposed MSA-Q-learning algorithm. In this paper, we use MSAs to learn a control policy for a thermostat control problem. Our algorithm yields a fast and highly accurate control policy; in contrast, the standard Q-learning algorithm without MSAs fails to learn any useful control policy for this problem.
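To make the MSA idea concrete, here is a minimal Python sketch of Q-learning with multistep actions. Everything in it is an assumption for illustration, not the authors' implementation: the environment interface (a step method returning next state, reward, and a done flag), the table sizes, and the hyperparameters. The sketch shows the structural property the abstract highlights: because an n-step MSA is n identical primitive actions, one rollout of a long MSA also yields valid experience for every shorter MSA with the same starting state and action.

```python
import numpy as np

# Illustrative sketch of Q-learning with multistep actions (MSAs).
# The environment interface (step(a) -> (next_state, reward, done)),
# the problem sizes, and the hyperparameters are assumptions made for
# this sketch, not taken from the paper.

SCALES = [1, 2, 4, 8]         # time scales as multiples of the primitive step
N_STATES, N_ACTIONS = 100, 3  # hypothetical discrete problem size
GAMMA, ALPHA = 0.99, 0.1      # discount factor and learning rate

# One Q-value per (state, primitive action, time scale): Q[s, a, i] is the
# value of repeating action a for SCALES[i] primitive steps in state s.
Q = np.zeros((N_STATES, N_ACTIONS, len(SCALES)))

def execute_msa(env, state, action, n):
    """Repeat `action` for n primitive steps, recording every intermediate
    state and reward so shorter MSAs can be updated from the same rollout."""
    states, rewards = [state], []
    for _ in range(n):
        state, reward, done = env.step(action)
        states.append(state)
        rewards.append(reward)
        if done:
            break
    return states, rewards

def update_all_scales(states, rewards, action):
    """Exploit the MSA structure: the first k primitive steps of an n-step
    MSA form a valid k-step MSA, so one rollout updates every completed scale."""
    for i, k in enumerate(SCALES):
        if k > len(rewards):
            break  # the rollout ended before this scale completed
        # k-step discounted return measured on the primitive time scale
        ret = sum(GAMMA ** t * rewards[t] for t in range(k))
        s, s_next = states[0], states[k]
        # greedy bootstrap over all actions *and* all time scales
        target = ret + GAMMA ** k * Q[s_next].max()
        Q[s, action, i] += ALPHA * (target - Q[s, action, i])

# Usage, given such an environment:
#   states, rewards = execute_msa(env, s0, a, SCALES[-1])
#   update_all_scales(states, rewards, a)
```

The point of the sketch is the loop in update_all_scales: executing only the longest MSA still trains the Q-values of all shorter scales that share its prefix, which is how the MSA structure can reduce learning complexity without giving up the fine control available at the primitive time scale.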






Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schoknecht, R., Riedmiller, M. (2003). Learning to Control at Multiple Time Scales. In: Kaynak, O., Alpaydin, E., Oja, E., Xu, L. (eds) Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003. Lecture Notes in Computer Science, vol 2714. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44989-2_57

  • DOI: https://doi.org/10.1007/3-540-44989-2_57

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40408-8

  • Online ISBN: 978-3-540-44989-8

