An Offline Deep Reinforcement Learning for Maintenance Decision-Making

Published Nov 24, 2021
Hamed Khorasgani, Haiyan Wang, Chetan Gupta, Ahmed Farahat

Abstract

Several machine learning and deep learning frameworks have been proposed in recent years to solve remaining useful life (RUL) estimation and failure prediction problems. Access to an RUL estimate or the likelihood of failure in the near future helps operators assess operating conditions and, therefore, make sounder repair and maintenance decisions. However, many operators consider RUL estimation and failure prediction incomplete answers to the maintenance challenge. They argue that knowing the likelihood of failure in the future is not enough to make maintenance decisions that minimize costs and keep operators safe. In this paper, we present a maintenance framework based on offline supervised deep reinforcement learning that, instead of providing information such as the likelihood of failure, suggests actions such as “continuation of the operation” or “the visitation of the repair shop” to the operators in order to maximize the overall profit. Using offline reinforcement learning makes it possible to learn the optimum maintenance policy from historical data without relying on expensive simulators. We demonstrate our solution in a case study using the NASA C-MAPSS dataset.
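The abstract describes learning a maintenance policy (continue operating vs. visit the repair shop) directly from logged historical transitions, with no simulator at training time. As a rough illustration of that idea only, here is a minimal, self-contained sketch of tabular fitted Q-iteration on a synthetic degradation log. The toy degradation dynamics, the cost constants (PROFIT, REPAIR_COST, FAILURE_COST), the state discretization, and all hyperparameters are illustrative assumptions; this is not the paper's actual supervised offline deep RL method or its C-MAPSS preprocessing.

```python
# Minimal sketch of offline policy learning for maintenance decisions
# via tabular fitted Q-iteration. Everything below (dynamics, costs,
# discretization) is an illustrative assumption, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic "historical" log: (state, action, reward, next_state, done) ---
# state: 1-D degradation indicator in [0, 1); action: 0 = continue, 1 = repair.
N = 5000
s = rng.uniform(0.0, 1.0, size=N)
a = rng.integers(0, 2, size=N)

PROFIT, REPAIR_COST, FAILURE_COST = 1.0, 5.0, 50.0

def step(state, action):
    """Assumed toy environment, used only to generate the offline log."""
    if action == 1:                       # visit the repair shop: reset wear
        return 0.0, -REPAIR_COST, False
    nxt = state + rng.uniform(0.0, 0.1)   # continue operating: degrade
    if nxt >= 1.0:                        # failure event
        return 1.0, PROFIT - FAILURE_COST, True
    return nxt, PROFIT, False

s2 = np.empty(N)
r = np.empty(N)
done = np.empty(N, dtype=bool)
for i in range(N):
    s2[i], r[i], done[i] = step(s[i], a[i])

# --- Fitted Q-iteration over a discretized state space ----------------------
BINS, GAMMA, SWEEPS = 20, 0.98, 200
idx = np.minimum((s * BINS).astype(int), BINS - 1)
idx2 = np.minimum((s2 * BINS).astype(int), BINS - 1)
Q = np.zeros((BINS, 2))
for _ in range(SWEEPS):
    # Bellman targets from the logged transitions; zero bootstrap at failure.
    target = r + GAMMA * (~done) * Q[idx2].max(axis=1)
    for b in range(BINS):
        for act in (0, 1):
            m = (idx == b) & (a == act)
            if m.any():
                Q[b, act] = target[m].mean()

# Recommended action per degradation bin: 0 = continue, 1 = repair.
policy = Q.argmax(axis=1)
print("policy over degradation bins:", policy)
```

Under these assumed costs, the learned policy typically recommends continuing at low degradation levels and repairing as the indicator approaches failure, mirroring the “continue operation” vs. “visit the repair shop” actions described above.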

How to Cite

Khorasgani, H., Wang, H., Gupta, C., & Farahat, A. (2021). An Offline Deep Reinforcement Learning for Maintenance Decision-Making. Annual Conference of the PHM Society, 13(1). https://doi.org/10.36001/phmconf.2021.v13i1.3009

Keywords

Reinforcement learning, Maintenance optimization

Section
Technical Research Papers