
Abstract

Automated systems controlled by programmed reactive rules and set-point values for feedback regulation require supervision and adjustment by experienced technicians who are familiar with the setting in which the controlled processes take place. In automated greenhouses, achieving optimal environmental values therefore requires the expertise of a specialist technician, which introduces both the need for an expert at the installation and a dependence on that expert.

To reduce these drawbacks, the integration of three paradigms is proposed: user-centered design, the deployment of data-capture technology based on IoT protocols, and a reinforcement learning model. The reinforcement learning model is responsible for deciding the set-points programmed into the greenhouse climate control, reducing the need for manual, repetitive supervision by the specialized technician while optimizing the control.

The design, led by an expert technician in greenhouse installations, provides the knowledge to be transferred to the reinforcement learning model, while the deployment of the required set of sensors and access to external data sources makes the learning model easier to bring into existing installations. The proposed system was tested in automated greenhouse facilities under the supervision of a specialized technician, validating its usefulness.
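As a rough illustration of the kind of set-point decision loop described in the abstract, the sketch below shows a tabular Q-learning agent choosing temperature set-points against a toy greenhouse model. This is not the authors' implementation; the dynamics, reward terms, and all names (SETPOINTS, TARGET, simulate_step) are hypothetical placeholders.

```python
# Minimal, illustrative sketch (not the paper's implementation): a tabular
# Q-learning agent that selects a temperature set-point for a toy greenhouse.
# All constants, dynamics, and reward terms below are hypothetical.
import random

SETPOINTS = [18, 20, 22, 24, 26]       # candidate set-points (deg C), hypothetical
TARGET = 22                            # assumed optimal crop temperature
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def simulate_step(indoor, setpoint, outdoor):
    """Toy climate dynamics: indoor temperature drifts toward the set-point,
    disturbed by the outdoor temperature."""
    return indoor + 0.5 * (setpoint - indoor) + 0.1 * (outdoor - indoor)

def reward(indoor, setpoint):
    """Penalize deviation from the target and, as a crude proxy, actuation effort."""
    return -abs(indoor - TARGET) - 0.05 * abs(setpoint - indoor)

def discretize(temp):
    return int(round(temp))

q = {}  # Q-table: (state, action) -> value

def choose(state):
    """Epsilon-greedy action selection over the candidate set-points."""
    if random.random() < EPSILON:
        return random.choice(SETPOINTS)
    return max(SETPOINTS, key=lambda a: q.get((state, a), 0.0))

indoor = 15.0
for _ in range(5000):
    outdoor = 10 + 10 * random.random()   # stand-in for an external data source
    state = discretize(indoor)
    action = choose(state)
    indoor = simulate_step(indoor, action, outdoor)
    r = reward(indoor, action)
    next_state = discretize(indoor)
    best_next = max(q.get((next_state, a), 0.0) for a in SETPOINTS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)

# Greedy set-point per discretized indoor temperature after training
policy = {s: max(SETPOINTS, key=lambda a: q.get((s, a), 0.0))
          for s in sorted({k[0] for k in q})}
print(policy)
```

In a deployment like the one the abstract describes, the state would come from the installed IoT sensors and external data sources rather than a simulator, and the chosen action would be written as a set-point into the existing climate-control system.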



Acknowledgements

We are very grateful to Palmeera Farms (Palmeera), a biotechnology-based company, member of the Spanish Association of Biotechnology Companies (Asebio) and attached to the Alicante Science Park (PCA), for their collaboration in this work.

Funding

This study is part of the AGROALNEXT program (AGROALNEXT/2022/048) and has been supported by MCIN with funding from the European Union NextGenerationEU (PRTR-C17.I1) and the Generalitat Valenciana. This study was partially supported by the Research Center for Communication and Information Technologies (CITIC) and the School of Computer Science and Informatics (ECCI) at the University of Costa Rica, Research Project No. 834-B9-189.

Author information

Corresponding author

Correspondence to F. Javier Ferrández-Pastor.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ferrández-Pastor, F.J., Cámara-Zapata, J.M., Alcañiz-Lucas, S., Pardo, S., Brenes, J.A. (2023). Reinforcement Learning Model in Automated Greenhouse Control. In: Bravo, J., Urzáiz, G. (eds) Proceedings of the 15th International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2023). UCAmI 2023. Lecture Notes in Networks and Systems, vol 842. Springer, Cham. https://doi.org/10.1007/978-3-031-48642-5_1
