A Deep Reinforcement Learning-based Multi-Agent Framework to Enhance Power System Resilience using Shunt Resources

2021 
Existing power system resilience enhancement methods, such as proactive generation rescheduling, movable source dispatch, and network topology reconfiguration, do not exploit the capability and flexibility of shunts to maintain voltage stability during and after disruptive events. Moreover, existing methods rely on accurate system models that do not scale easily to large interconnected power grids. In this paper, a data-driven multi-agent framework based on a deep reinforcement learning algorithm is proposed to overcome the computation and scalability concerns associated with precise system models and to plan the deployment of shunts for power system resilience enhancement. Specifically, voltage violations caused by outages of multiple lines during wind storms are taken as an example of a power system resilience improvement problem. A multi-agent hybrid soft actor-critic (HSAC) algorithm is then developed for offline siting and sizing, as well as online control, of shunt reactive power compensators to enhance voltage resilience. The HSAC algorithm is derived from the fundamental SAC algorithm and handles action spaces that contain both continuous and discrete components. The proposed multi-agent framework learns from previous experiences and is eventually trained to determine proper locations and sizes for shunts so that voltage violations during multiple line failures are avoided. The proposed approach is demonstrated on the IEEE 57-bus and IEEE 300-bus systems.
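To give a concrete sense of the hybrid action space described above, the following is a minimal PyTorch sketch (not the authors' implementation) of a SAC-style policy with a shared encoder, a categorical head for a discrete choice (e.g., candidate shunt bus) and a squashed Gaussian head for a continuous choice (e.g., shunt size). All class, layer, and dimension names here are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class HybridSACPolicy(nn.Module):
    """Sketch of a hybrid (discrete + continuous) SAC policy head."""
    def __init__(self, obs_dim, n_discrete, cont_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.logits = nn.Linear(hidden, n_discrete)   # discrete: which bus gets a shunt
        self.mu = nn.Linear(hidden, cont_dim)         # continuous: shunt size (normalized)
        self.log_std = nn.Linear(hidden, cont_dim)

    def forward(self, obs):
        h = self.encoder(obs)
        disc = Categorical(logits=self.logits(h))
        std = self.log_std(h).clamp(-5, 2).exp()
        cont = Normal(self.mu(h), std)
        return disc, cont

    def sample(self, obs):
        disc, cont = self(obs)
        a_d = disc.sample()            # discrete action: bus index
        u = cont.rsample()             # reparameterized continuous sample
        a_c = torch.tanh(u)            # squash to [-1, 1], rescaled to MVAr outside
        # joint log-probability: categorical term plus tanh-corrected Gaussian term
        logp = disc.log_prob(a_d) + (
            cont.log_prob(u) - torch.log(1 - a_c.pow(2) + 1e-6)
        ).sum(-1)
        return a_d, a_c, logp

# Illustrative usage on a 57-bus observation vector:
# policy = HybridSACPolicy(obs_dim=114, n_discrete=57, cont_dim=1)
# a_d, a_c, logp = policy.sample(torch.randn(1, 114))
```

In an actual SAC training loop, the joint log-probability from both heads would enter the entropy-regularized objective; how the paper couples the offline siting/sizing agents with the online control agents is beyond this sketch.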