Large Language Model Empowered Deep Reinforcement Learning to Support Prescriptive Maintenance
| dc.contributor.author | Luo, Jinxia | |
| dc.contributor.department | Chalmers tekniska högskola / Institutionen för data och informationsteknik | sv |
| dc.contributor.department | Chalmers University of Technology / Department of Computer Science and Engineering | en |
| dc.contributor.examiner | Turanoglu Bekar, Ebru | |
| dc.contributor.supervisor | Chen, Siyuan | |
| dc.date.accessioned | 2025-11-05T13:45:50Z | |
| dc.date.issued | 2025 | |
| dc.date.submitted | ||
| dc.description.abstract | To enhance the reliability and efficiency of complex industrial systems, it is critical to adopt approaches that go beyond traditional reactive and preventive maintenance. Prescriptive maintenance, which leverages advanced artificial intelligence (AI) techniques to recommend and optimize specific maintenance actions before failures occur, has emerged as a vital strategy for modern industries to reduce downtime and operational costs. In this context, Deep Reinforcement Learning (DRL) has shown significant potential in prescriptive maintenance tasks, enabling autonomous optimization of maintenance strategies through self-learning and adaptive decision-making. However, in real-world factory maintenance scenarios, DRL models frequently face challenges such as cold-start problems, imprecise judgments, and a lack of flexibility in complex dynamic contexts, which hinder their adoption in complex, high-reliability systems. To address these limitations, this thesis proposes a smart maintenance framework that combines Large Language Models (LLMs) with DRL approaches. The framework leverages the capabilities of LLMs in domain knowledge comprehension, semantic reasoning, and knowledge transfer to convert complex, real-time, structured maintenance data into high-dimensional state representations that DRL models can readily use. The LLMs provide the DRL agents with rich prior knowledge and decision support by automatically extracting salient attributes and identifying potential failure patterns. We conducted a case study at a lab-scale drone manufacturing plant, and the results showed that the proposed framework significantly improved the accuracy of smart maintenance decisions.
Compared with previous studies [1], the average reward of traditional deep reinforcement learning algorithms was 0.65, whereas after introducing large language models the average reward steadily increased to 0.82. In addition, fluctuations in the early stages of learning were markedly reduced, yielding a smoother overall training process. Furthermore, we designed multiple sets of cross-experiments with different DRL algorithms and LLM models, all of which outperformed the results of previous studies [1], further validating the effectiveness and generalizability of the proposed framework. The LLM-DRL integrated smart maintenance framework presented in this study not only refines and improves smart maintenance decision-making but also aligns with Industry 4.0’s broader objectives of operational efficiency, flexibility, and sustainability. Our research provides new theoretical and technological perspectives for advancing maintenance intelligence, enabling greater autonomy, improved generalization, and broader industrial applications. | |
| dc.identifier.coursecode | DATX05 | |
| dc.identifier.uri | http://hdl.handle.net/20.500.12380/310727 | |
| dc.language.iso | eng | |
| dc.relation.ispartofseries | CSE 25-61 | |
| dc.setspec.uppsok | Technology | |
| dc.subject | large language models, deep reinforcement learning, digital twin, prescriptive maintenance, decision-making, state representation | |
| dc.title | Large Language Model Empowered Deep Reinforcement Learning to Support Prescriptive Maintenance | |
| dc.type.degree | Examensarbete för masterexamen | sv |
| dc.type.degree | Master's Thesis | en |
| dc.type.uppsok | H | |
| local.programme | Software engineering and technology (MPSOF), MSc |
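
The abstract above describes a loop in which an LLM encodes raw maintenance data into state representations that a DRL agent maps to maintenance actions. A minimal sketch of that loop might look like the following; this is not the thesis's implementation, and everything here is an assumption for illustration: the sensor-log schema, the rule-based `encode_state` standing in for the LLM encoder, and a tabular Q-learning agent standing in for deep RL.

```python
import random

# Hypothetical stand-in for the LLM: the framework uses an LLM to turn raw
# maintenance records into state representations; here a fixed rule maps an
# assumed sensor-log schema to a discrete wear level instead.
def encode_state(log):
    """Map a raw log to a wear level: 0 healthy, 1 worn, 2 critical."""
    if log["vibration"] > 0.7 or log["temperature"] > 90:
        return 2
    if log["vibration"] > 0.4:
        return 1
    return 0

ACTIONS = ["continue", "repair"]  # hypothetical maintenance action set

def step(state, action):
    """Toy degradation dynamics: running earns reward, breakdowns are costly."""
    if action == "repair":
        return 0, 0.5                 # restored to healthy, minus repair cost
    if state == 2 and random.random() < 0.5:
        return 0, -5.0                # breakdown and forced replacement
    worn = state + (random.random() < 0.3)  # degrade with probability 0.3
    return min(worn, 2), 1.0

def train(steps=5000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning, a stand-in for the deep RL agent in the framework."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(3) for a in range(len(ACTIONS))}
    state = 0
    for _ in range(steps):
        if random.random() < eps:   # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda x: q[(state, x)])
        nxt, r = step(state, ACTIONS[a])
        best_next = max(q[(nxt, x)] for x in range(len(ACTIONS)))
        q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
        state = nxt
    return q

q = train()
log = {"vibration": 0.85, "temperature": 88}   # assumed record format
s = encode_state(log)                          # "LLM" encoding step
best = ACTIONS[max(range(len(ACTIONS)), key=lambda a: q[(s, a)])]
```

Under these toy dynamics the learned greedy policy repairs at the critical wear level and keeps the machine running otherwise; in the thesis the encoder is an actual LLM and the agent a deep RL algorithm, but the division of labor — semantic encoding feeding a value-learning policy — is the same.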
