Book Chapter

Explainable Deep Reinforcement Learning for Autonomous Decision-Making in Dynamic Environments

Mrs. D. Nisha
Assistant Professor (Sr.G), Department of Information Technology, SRM Valliammai Engineering College, Kattankulathur, Chengalpet District, Tamil Nadu, India.
davidnisha21@gmail.com
Pages: 15-28
Keywords: Explainable Reinforcement Learning; Deep Q-Network; SHAP Explanations; Autonomous Decision-Making; Policy Interpretability

Abstract

Deep Reinforcement Learning (DRL) has emerged as a powerful paradigm for enabling autonomous decision-making in complex and dynamic environments. However, the ‘black-box’ nature of deep neural networks often hinders the transparency and interpretability of DRL agents, posing significant challenges to their adoption in safety-critical applications. This chapter introduces Explainable Deep Reinforcement Learning (XRL), a critical area of research focused on developing methods to understand, interpret, and trust the decisions made by DRL agents. We provide a comprehensive overview of XRL, covering fundamental concepts, a review of the current literature, and a detailed examination of the proposed methodology. We demonstrate the application of XRL on the classic LunarLander-v3 control problem, showing how techniques such as SHAP (SHapley Additive exPlanations) can provide valuable insights into the agent’s decision-making process. The chapter presents a thorough analysis of simulation results, including training performance, feature importance, and comparative evaluations, to highlight the benefits of integrating explainability into DRL systems. We conclude with a discussion of the broader implications of XRL and future research directions for developing more transparent, robust, and trustworthy autonomous systems.
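To make the methodology concrete, the sketch below (illustrative only, not the chapter’s verbatim implementation) shows how a model-agnostic SHAP explainer can be attached to a DQN-style Q-network on LunarLander-v3. The network architecture, the use of random states as the SHAP background set, and all hyperparameters are assumptions for exposition; a trained agent would load its learned weights and draw background states from its replay buffer.

    # Illustrative sketch: SHAP feature attributions for a DQN-style agent
    # on LunarLander-v3. Architecture and background set are assumptions.
    import gymnasium as gym
    import numpy as np
    import shap
    import torch
    import torch.nn as nn

    env = gym.make("LunarLander-v3")
    obs_dim = env.observation_space.shape[0]   # 8 state features
    n_actions = env.action_space.n             # 4 discrete actions

    # Placeholder Q-network; a trained agent would load learned weights here.
    q_net = nn.Sequential(
        nn.Linear(obs_dim, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, n_actions),
    )

    def q_values(states):
        # Wrap the network as a NumPy-in / NumPy-out function for SHAP.
        with torch.no_grad():
            return q_net(torch.as_tensor(states, dtype=torch.float32)).numpy()

    # Background states anchor the Shapley baseline; random samples are used
    # here, but replay-buffer states are the more faithful choice in practice.
    background = np.array([env.observation_space.sample() for _ in range(100)])
    explainer = shap.KernelExplainer(q_values, background)

    state, _ = env.reset(seed=0)
    shap_values = explainer.shap_values(state.reshape(1, -1))
    # shap_values attributes each action's Q-value to the eight state
    # features (position, velocity, angle, angular velocity, leg contacts).

Per-action attributions of this kind are what underpin the feature-importance analysis summarised in the simulation results.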
