Mitigating cascading failure in power grids with deep reinforcement learning-based remedial actions

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

3 Scopus Citations

Author(s)

  • Xi Zhang
  • Qin Wang
  • Donghong Li
  • Dong Liu
  • Yuanjin Yu

Detail(s)

Original language: English
Article number: 110242
Journal / Publication: Reliability Engineering and System Safety
Volume: 250
Online published: 6 Jun 2024
Publication status: Published - Oct 2024

Abstract

Power grids are susceptible to cascading failure, which can have detrimental consequences for modern society. Remedial actions, such as proactive islanding, generator tripping, and load shedding, offer viable solutions to mitigate cascading failure in power grids. The success of these solutions depends on the timeliness and the appropriate choice of actions during the rapid propagation of cascading failure. In this paper, we introduce an intelligent method that leverages deep reinforcement learning to generate suitable remedial actions in real time. A simulation model of cascading failure is first presented, which combines power flow distribution with the probabilistic failure mechanisms of components to accurately describe the dynamic cascading failure process. Based on this model, a Markov decision process is formulated for deciding on remedial actions as the failure propagates. The Proximal Policy Optimization algorithm is then adapted to train the underlying policies. Experiments are conducted on representative power test cases. Results demonstrate that the trained policy outperforms benchmarks in both power preservation and decision time, verifying its advantages in mitigating cascading failure in power grids. © 2024 Elsevier Ltd
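To make the Markov decision process formulation concrete, the following is a minimal, self-contained sketch rather than the authors' simulator: a toy environment whose state is per-line loading ratios and in-service flags, whose action is a coarse load-shedding choice, and whose reward is the power preserved at each decision step. The class, variable names, and the overload-to-failure probability used here (ToyCascadingEnv, the 20% shedding step, etc.) are illustrative assumptions; a PPO-trained policy would replace the random action selection shown at the end.

```python
import numpy as np

# Toy cascading-failure MDP: each line carries a flow against a unit capacity;
# overloaded lines trip probabilistically, and tripped flow is redistributed
# to surviving lines. A remedial action sheds load to relieve stress.
class ToyCascadingEnv:
    def __init__(self, n_lines=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_lines = n_lines
        self.capacity = np.ones(n_lines)
        self.reset()

    def reset(self):
        # State: per-line loading plus in-service flags.
        self.flow = self.rng.uniform(0.5, 0.9, self.n_lines)
        self.alive = np.ones(self.n_lines, dtype=bool)
        # Trigger the cascade with one random initial outage.
        self._trip(int(self.rng.integers(self.n_lines)))
        return self._obs()

    def _obs(self):
        return np.concatenate([self.flow, self.alive.astype(float)])

    def _trip(self, i):
        if self.alive[i]:
            self.alive[i] = False
            # Crude redistribution: surviving lines absorb the lost flow.
            if self.alive.any():
                self.flow[self.alive] += self.flow[i] / self.alive.sum()
            self.flow[i] = 0.0

    def step(self, action):
        # Action 1: shed 20% of load system-wide; action 0: do nothing.
        if action > 0:
            self.flow[self.alive] *= 0.8
        # Probabilistic component failure: the trip probability grows with
        # the overload (the exact form here is only illustrative).
        for i in np.flatnonzero(self.alive):
            overload = max(0.0, self.flow[i] / self.capacity[i] - 1.0)
            if self.rng.random() < min(1.0, overload):
                self._trip(i)
        served = self.flow[self.alive].sum()
        done = (not self.alive.any()) or bool(
            (self.flow[self.alive] <= self.capacity[self.alive]).all()
        )
        # Reward: power preserved after this decision step.
        return self._obs(), served, done, {}

# A trained PPO policy would map observations to remedial actions;
# a random policy stands in for it in this sketch.
env = ToyCascadingEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = np.random.randint(0, 2)
    obs, reward, done, _ = env.step(action)
    total += reward
print(f"cumulative preserved power (arbitrary units): {total:.2f}")
```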

Research Area(s)

  • Cascading failure
  • Deep reinforcement learning
  • Mitigation
  • Power grid
  • Proximal policy optimization
  • Remedial action