Two timescale convergent Q-learning for sleep-scheduling in wireless sensor networks

Abstract

In this paper, we consider an intrusion detection application for wireless sensor networks. We study the problem of scheduling the sleep times of the individual sensors, where the objective is to maximize the network lifetime while keeping the tracking error to a minimum. We formulate this problem as a partially observable Markov decision process (POMDP) with continuous state-action spaces, in a manner similar to Fuemmeler and Veeravalli (IEEE Trans Signal Process 56(5), 2091–2101, 2008). However, unlike their formulation, we consider infinite-horizon discounted and average cost objectives as performance criteria. For each criterion, we propose a convergent on-policy Q-learning algorithm that operates on two timescales, while employing function approximation. Feature-based representations and function approximation are necessary to handle the curse of dimensionality associated with the underlying POMDP. Our proposed algorithm incorporates a policy gradient update using a one-simulation simultaneous perturbation stochastic approximation (SPSA) estimate on the faster timescale, while the Q-value parameter (arising from a linear function approximation architecture for the Q-values) is updated in an on-policy temporal difference fashion on the slower timescale. The feature selection scheme employed in each of our algorithms manages the energy and tracking components in a manner that assists the search for the optimal sleep-scheduling policy. For the sake of comparison, in both discounted and average cost settings, we also develop a function approximation analogue of the Q-learning algorithm. This algorithm, unlike the two-timescale variant, does not possess theoretical convergence guarantees. Finally, we adapt our algorithms to include a stochastic iterative scheme for estimating the intruder's mobility model, which is useful in settings where the latter is not known. Our simulation results on a synthetic 2-dimensional network setting suggest that our algorithms achieve better tracking accuracy at the cost of only a few additional sensors, in comparison to a recent prior work.
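
The Python sketch below illustrates only the generic two-timescale structure described above; it is not the authors' TQSA algorithm. The feature map, cost and transition model, the use of a single Boltzmann temperature as a stand-in for the policy parameter, and the step sizes are all illustrative assumptions. A one-simulation SPSA gradient estimate updates the policy parameter on the faster timescale, while the linear Q-value weights are updated with an on-policy temporal-difference rule on the slower timescale (discounted-cost variant).

```python
import numpy as np

# Minimal two-timescale sketch (illustrative assumptions throughout):
# - faster timescale: one-simulation SPSA step on a scalar policy parameter
#   (here, the Boltzmann temperature tau);
# - slower timescale: on-policy TD(0)-like update of linear Q-value weights w.

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS = 4, 3

def features(state, action):
    """Hypothetical state-action feature vector phi(s, a)."""
    phi = np.zeros(N_FEATURES)
    phi[0] = 1.0
    phi[1] = float(state)
    phi[2] = action / (N_ACTIONS - 1)
    phi[3] = state * action / (N_ACTIONS - 1)
    return phi

def boltzmann_probs(state, w, tau):
    """Boltzmann (softmax) policy driven by the approximate Q-values (costs)."""
    q = np.array([features(state, a) @ w for a in range(N_ACTIONS)])
    z = np.exp(-(q - q.min()) / tau)   # smaller cost => larger probability
    return z / z.sum()

def step_env(state, action):
    """Toy cost/transition model standing in for the sleep-scheduling POMDP."""
    cost = action + abs(state - action)            # energy + tracking surrogate
    next_state = int((state + rng.integers(-1, 2)) % N_ACTIONS)
    return cost, next_state

w = np.zeros(N_FEATURES)     # Q-value weights (slower timescale)
tau = 1.0                    # policy parameter perturbed via SPSA (faster timescale)
delta, gamma = 0.1, 0.9
state = 0

for n in range(1, 5001):
    a_n = 1.0 / n                        # faster step size (decays more slowly)
    b_n = 1.0 / (n * np.log(n + 1.0))    # slower step size

    # One-simulation SPSA: simulate with the perturbed parameter tau + delta * Delta.
    Delta = rng.choice([-1.0, 1.0])
    tau_pert = max(tau + delta * Delta, 1e-2)

    probs = boltzmann_probs(state, w, tau_pert)
    action = int(rng.choice(N_ACTIONS, p=probs))
    cost, next_state = step_env(state, action)

    # On-policy TD-like update of the linear Q-value weights (slower timescale).
    next_probs = boltzmann_probs(next_state, w, tau_pert)
    q_next = sum(next_probs[b] * (features(next_state, b) @ w)
                 for b in range(N_ACTIONS))
    td_err = cost + gamma * q_next - features(state, action) @ w
    w += b_n * td_err * features(state, action)

    # SPSA gradient estimate for the policy parameter (faster timescale),
    # using the single perturbed simulation's cost as the performance sample.
    tau = max(tau - a_n * cost / (delta * Delta), 1e-2)

    state = next_state

print("Q-value weights:", w, " Boltzmann temperature:", tau)
```

In the paper's actual algorithms the policy parameter is a vector governing the sleep times, the features encode energy and tracking components, and the average cost setting replaces the discount factor with a differential-cost formulation; the sketch only conveys how the two step-size schedules separate the policy and Q-value updates.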

Notes

  1. A short version of this paper, containing only the average cost setting and algorithms and no proofs, is available in [24]. The current paper includes, in addition: (i) algorithms for the discounted cost setting; (ii) a detailed proof of convergence of the average cost algorithm using the theory of stochastic recursive inclusions; and (iii) detailed numerical experiments.

  2. Since we study the long-run average of the sum in (2) (see (4) below), we can consider tracking an intruder over an infinite horizon, whereas a termination state was necessary in [14] because a total cost objective was considered there.

  3. A simple rule is to choose a state \(s\) such that the underlying MDP visits \(s\) with positive probability. This criterion ensures that the term (II) of (11) converges to the optimal average cost \(J^*\).

  4. One may use an \(\epsilon\)-greedy policy for TQSA-A as well; however, that would result in additional exploration. Since TQSA-A updates the parameters of an underlying parameterized Boltzmann policy, which is randomized by construction, we do not need an extra exploration step in our algorithm (see the sketch following these notes).

  5. This is because the intruder stays in the starting location for at least one time step, and the initial exploration of actions results in a positive probability of a random action being chosen.
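
To make the distinction in note 4 concrete, the following Python sketch (with assumed Q-values, temperature, and \(\epsilon\), none taken from the paper) contrasts the two action-selection schemes: the Boltzmann policy is randomized by construction, while \(\epsilon\)-greedy adds an explicit exploration step on top of a greedy choice.

```python
import numpy as np

rng = np.random.default_rng(1)
q_values = np.array([1.8, 0.4, 0.9])   # hypothetical per-action Q-values (costs)

def boltzmann_action(q, tau=0.5):
    # Randomized by construction: lower cost => higher selection probability.
    z = np.exp(-(q - q.min()) / tau)
    return int(rng.choice(len(q), p=z / z.sum()))

def epsilon_greedy_action(q, eps=0.1):
    # Greedy on cost, plus an explicit epsilon chance of a uniform random action.
    if rng.random() < eps:
        return int(rng.integers(len(q)))
    return int(np.argmin(q))

print([boltzmann_action(q_values) for _ in range(10)])
print([epsilon_greedy_action(q_values) for _ in range(10)])
```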

References

  1. Abounadi, J., Bertsekas, D., & Borkar, V. (2002). Learning algorithms for Markov decision processes with average cost. SIAM Journal on Control and Optimization, 40(3), 681–698.

  2. Baird, L. (1995). Residual algorithms: Reinforcement learning with function approximation. In: ICML, pp. 30–37.

  3. Beccuti, M., Codetta-Raiteri, D., & Franceschinis, G. (2009). Multiple abstraction levels in performance analysis of WSN monitoring systems. In: International ICST conference on performance evaluation methodologies and tools, p. 73.

  4. Bertsekas, D. P. (2007). Dynamic programming and optimal control (3rd ed., Vol. II). Belmont: Athena Scientific.

  5. Bertsekas, D. P., & Tsitsiklis, J. N. (1996). Neuro-dynamic programming. Belmont: Athena Scientific.

  6. Bhatnagar, S., & Lakshmanan, K. (2012). A new Q-learning algorithm with linear function approximation. Technical report SSL, IISc, URL http://stochastic.csa.iisc.ernet.in/www/research/files/IISc-CSA-SSL-TR-2012-3.pdf.

  7. Bhatnagar, S., Fu, M., Marcus, S., & Wang, I. (2003). Two-timescale simultaneous perturbation stochastic approximation using deterministic perturbation sequences. ACM Transactions on Modeling and Computer Simulation (TOMACS), 13(2), 180–209.

  8. Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., & Lee, M. (2009). Natural actor-critic algorithms. Automatica, 45(11), 2471–2482.

  9. Bhatnagar, S., Prasad, H., & Prashanth, L. (2013). Stochastic recursive algorithms for optimization (Vol. 434). New York: Springer.

  10. Borkar, V. (2008). Stochastic approximation: A dynamical systems viewpoint. Cambridge: Cambridge University Press.

  11. Cui, Y., Lau, V. K., Wang, R., Huang, H., & Zhang, S. (2012a). A survey on delay-aware resource control for wireless systems: Large deviation theory, stochastic Lyapunov drift, and distributed stochastic learning. IEEE Transactions on Information Theory, 58(3), 1677–1701.

  12. Cui, Y., Lau, V. K., & Wu, Y. (2012b). Delay-aware BS discontinuous transmission control and user scheduling for energy harvesting downlink coordinated MIMO systems. IEEE Transactions on Signal Processing, 60(7), 3786–3795.

  13. Fu, F., & van der Schaar, M. (2009). Learning to compete for resources in wireless stochastic games. IEEE Transactions on Vehicular Technology, 58(4), 1904–1919.

  14. Fuemmeler, J., & Veeravalli, V. (2008). Smart sleeping policies for energy efficient tracking in sensor networks. IEEE Transactions on Signal Processing, 56(5), 2091–2101.

  15. Fuemmeler, J., Atia, G., & Veeravalli, V. (2011). Sleep control for tracking in sensor networks. IEEE Transactions on Signal Processing, 59(9), 4354–4366.

  16. Gui, C., & Mohapatra, P. (2004). Power conservation and quality of surveillance in target tracking sensor networks. In: Proceedings of the international conference on mobile computing and networking, pp. 129–143.

  17. Jiang, B., Han, K., Ravindran, B., & Cho, H. (2008). Energy efficient sleep scheduling based on moving directions in target tracking sensor network. In: IEEE international symposium on parallel and distributed processing, pp. 1–10.

  18. Jianlin, M., Fenghong, X., & Hua, L. (2009). RL-based superframe order adaptation algorithm for IEEE 802.15.4 networks. In: Chinese control and decision conference, IEEE, pp. 4708–4711.

  19. Jin, G. Y., Lu, X. Y., & Park, M. S. (2006). Dynamic clustering for object tracking in wireless sensor networks. In: Ubiquitous Computing Systems (LNCS 4239), pp. 200–209.

  20. Khan, M. I., & Rinner, B. (2012). Resource coordination in wireless sensor networks by cooperative reinforcement learning. In: IEEE international conference on pervasive computing and communications workshop, pp. 895–900.

  21. Konda, V. R., & Tsitsiklis, J. N. (2004). Convergence rate of linear two-time-scale stochastic approximation. Annals of Applied Probability, 14(2), 796–819.

  22. Liu, Z., & Elhanany, I. (2006). RL-MAC: A QoS-aware reinforcement learning based MAC protocol for wireless sensor networks. In: IEEE International Conference on Networking, Sensing and Control, pp. 768–773.

  23. Niu, J. (2010). Self-learning scheduling approach for wireless sensor network. In: International conference on future computer and communication (ICFCC), IEEE, Vol. 3, pp. 253–257.

  24. Prashanth, L., Chatterjee, A., & Bhatnagar, S. (2014). Adaptive sleep-wake control using reinforcement learning in sensor networks. In: 6th international conference on communication systems and networks (COMSNETS), IEEE.

  25. Prashanth, L. A., & Bhatnagar, S. (2011a). Reinforcement learning with average cost for adaptive control of traffic lights at intersections. In: 14th International IEEE conference on intelligent transportation systems (ITSC), pp. 1640–1645.

  26. Prashanth, L. A., & Bhatnagar, S. (2011b). Reinforcement learning with function approximation for traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 12(2), 412–421.

  27. Premkumar, K., & Kumar, A. (2008). Optimal sleep-wake scheduling for quickest intrusion detection using sensor networks. In: IEEE INFOCOM, Arizona, USA.

  28. Puterman, M. (1994). Markov decision processes: Discrete stochastic dynamic programming. New York: Wiley.

  29. Rucco, L., Bonarini, A., Brandolese, C., & Fornaciari, W. (2013). A bird’s eye view on reinforcement learning approaches for power management in WSNs. In: Wireless and mobile networking conference (WMNC), IEEE, pp. 1–8.

  30. Spall, J. C. (1992). Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3), 332–341.

  31. Sutton, R., & Barto, A. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.

  32. Tsitsiklis, J. N., & Van Roy, B. (1997). An analysis of temporal difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5), 674–690.

  33. Watkins, C., & Dayan, P. (1992). Q-learning. Machine Learning, 8(3), 279–292.

Author information

Correspondence to L. A. Prashanth.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 65 KB)

Cite this article

Prashanth, L.A., Chatterjee, A. & Bhatnagar, S. Two timescale convergent Q-learning for sleep-scheduling in wireless sensor networks. Wireless Netw 20, 2589–2604 (2014). https://doi.org/10.1007/s11276-014-0762-6
