Title: Aircraft Maintenance Check Scheduling Using Reinforcement Learning
Authors: Andrade, Pedro 
Silva, Catarina 
Ribeiro, Bernardete Martins 
Santos, Bruno F. 
Keywords: aircraft maintenance; maintenance check scheduling; reinforcement learning; Q-learning
Issue Date: 2021
Project: European Union’s Horizon 2020 REMAP project, grant number 769288 
Serial title, monograph or event: Aerospace
Volume: 8
Issue: 4
Abstract: This paper presents a Reinforcement Learning (RL) approach to optimize the long-term scheduling of maintenance for an aircraft fleet. The problem considers fleet status, maintenance capacity, and other maintenance constraints to schedule hangar checks for a specified time horizon. The checks are scheduled within an interval, and the goal is to schedule them as close as possible to their due date. In doing so, the number of checks is reduced, and the fleet availability increases. A Deep Q-learning algorithm is used to optimize the scheduling policy. The model is validated in a real scenario using maintenance data from 45 aircraft. The maintenance plan generated with our approach is compared with a previous study, which presented a Dynamic Programming (DP)-based approach, and with airline estimations for the same period. The results show a reduction in the number of checks scheduled, which indicates the potential of RL in solving this problem. The adaptability of RL is also tested by introducing small disturbances in the initial conditions. After training the model with these simulated scenarios, the results show the robustness of the RL approach and its ability to generate efficient maintenance plans in only a few seconds.
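The core idea described in the abstract — learning a policy that delays each check as close to its due date as possible — can be illustrated with a minimal tabular Q-learning sketch. This is a toy, assumed formulation for illustration only: the paper itself uses Deep Q-learning over a richer fleet-level state, and the state space, actions, reward values, and hyperparameters below are hypothetical, not the authors' model.

```python
import random

random.seed(0)

# Toy environment (assumed, not from the paper):
# State: days remaining until a single check's due date (0..DUE).
# Actions: 0 = wait one day, 1 = perform the check now.
# Reward: scheduling d days early costs -d (wasted interval);
# missing the due date costs a large penalty.
DUE = 10
EPISODES = 2000
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = {(d, a): 0.0 for d in range(DUE + 1) for a in (0, 1)}

def step(days_left, action):
    """Return (next_state, reward, done) for the toy environment."""
    if action == 1:                       # schedule the check now
        return None, -float(days_left), True
    if days_left == 0:                    # due date missed while waiting
        return None, -100.0, True
    return days_left - 1, 0.0, False      # wait one more day

for _ in range(EPISODES):
    s, done = DUE, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Greedy policy: wait while the due date is far, schedule when it arrives.
policy = {d: max((0, 1), key=lambda a: Q[(d, a)]) for d in range(DUE + 1)}
```

After training, the greedy policy waits at every state except the due date itself, where it schedules the check — the single-check analogue of the abstract's goal of scheduling checks as close as possible to their due dates. Scaling this to a 45-aircraft fleet with capacity constraints is what motivates replacing the table with a deep network, as in the paper.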
ISSN: 2226-4310
DOI: 10.3390/aerospace8040113
Rights: openAccess
Appears in Collections:FCTUC Eng.Informática - Artigos em Revistas Internacionais

Files in This Item:
File: Aircraft-maintenance-check-scheduling-using-reinforcement-learningAerospace.pdf
Size: 527.93 kB
Format: Adobe PDF

This item is licensed under a Creative Commons License.