DeepGrid: Robust Deep Reinforcement Learning-based Contingency Management

2020 
Increasing uncertainty introduced by the integration of renewable energy resources requires an enormous number of simulations for the security assessment of the power grid. However, exhaustively assessing the steady-state and dynamic security indices for different system contingency events in real time is challenging due to computational and communication constraints. One promising solution is to use data-driven techniques along with system models to train an intelligent contingency management framework that better handles contingencies in real time. Nevertheless, implementing a data-driven technique to obtain the best remedial actions necessitates accounting for the effect of measurement noise on the performance of contingency management. To tackle these challenges, we leverage a robust deep reinforcement learning (DRL) algorithm, the Double Deep Q-Network (DDQN), to design a recommender system capable of prescribing optimal control actions with the help of a real-time digital simulator (RTDS). Using the RTDS in combination with the advanced DRL algorithm allows a wide variety of system contingencies to be explored in order to derive better remedial actions. The performance of the proposed algorithm is evaluated on the IEEE 9-bus system under different loading conditions and network configurations in the presence of noisy measurements.
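The abstract does not give implementation details, but the defining feature of DDQN is well known: the online network selects the next action while a separate target network evaluates it, which reduces the Q-value overestimation of standard DQN. A minimal sketch of that target computation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN bootstrap target for a batch of transitions.

    q_online_next / q_target_next: (batch, n_actions) Q-values for the
    next states from the online and target networks, respectively.
    The online net picks the action; the target net evaluates it.
    """
    best_actions = np.argmax(q_online_next, axis=1)          # selection
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]  # evaluation
    return rewards + gamma * (1.0 - dones) * evaluated        # zero bootstrap at terminal states

# Example batch of two transitions:
q_on = np.array([[1.0, 2.0], [3.0, 0.0]])
q_tg = np.array([[0.5, 1.5], [2.0, 4.0]])
targets = ddqn_targets(q_on, q_tg, rewards=np.array([1.0, 0.0]),
                       dones=np.array([0.0, 1.0]), gamma=0.9)
# First transition: 1.0 + 0.9 * 1.5 = 2.35; second is terminal: 0.0
```

In a contingency-management setting, each row of the batch would correspond to a post-contingency grid state, and the Q-values would score candidate remedial actions.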