Deep Reinforcement Learning for Decentralized Control of Distribution Power System
Allocation round: SNIC Small Compute
Applicant: Mengfan Zhang <email@example.com>
Affiliation: Kungliga Tekniska högskolan
Project period: 2022-12-21 – 2024-01-01
With the large-scale integration of renewables, the traditional power system becomes more sustainable and efficient. However, the fluctuation and uncertainty of renewables pose major challenges to voltage stability in distribution power systems. To address this stability issue, several lines of research have been pursued. First, extra devices such as static VAR compensators and static VAR generators are adopted to adjust the reactive power of the system and maintain voltage stability. However, these extra devices introduce additional cost, which is not economical. Second, renewable generation units equipped with smart converters are used to adjust the power of the system without extra devices, either through centralized optimal power flow (OPF) or through decentralized droop control. Yet, due to the fluctuation of renewables, the OPF must be solved continuously to track the changing optimal setpoint of each converter, which places a heavy communication burden on the grid. Meanwhile, the objective of voltage control is to find the optimal setpoint of each bus that minimizes power loss, which cannot be guaranteed by the decentralized droop method.
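To make the droop baseline concrete, the sketch below shows a conventional volt-VAR droop rule of the kind the proposal contrasts against: the inverter absorbs reactive power when its local bus voltage is high and injects it when the voltage is low. This is a minimal illustration, not the proposal's controller; the function name, slope, and deadband values are assumptions for the example.

```python
import numpy as np

def droop_reactive_power(v_pu, q_max, v_ref=1.0, deadband=0.01, slope=20.0):
    """Piecewise-linear volt-VAR droop (illustrative parameters).

    Outside a small deadband around the reference voltage, the reactive-power
    setpoint is proportional to the voltage deviation, saturated at +/- q_max.
    Negative Q (absorption) counters overvoltage; positive Q (injection)
    counters undervoltage.
    """
    dv = v_pu - v_ref
    if abs(dv) <= deadband:
        return 0.0
    q = -slope * (dv - np.sign(dv) * deadband) * q_max
    return float(np.clip(q, -q_max, q_max))
```

Because each inverter reacts only to its own bus voltage, this rule needs no communication, but nothing in it coordinates the setpoints across buses toward minimum system loss, which is the gap the DRL approach targets.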
This project aims to address these challenges by proposing a deep reinforcement learning (DRL) method for distribution power systems. First, the coordinated control of the distribution system will be modeled as a Markov decision process (MDP), with each grid-connected interface inverter of a renewable unit modeled as a deep neural network (DNN) based agent. Second, a novel centralized-training, decentralized-execution DRL framework will be applied to reduce the communication requirements and to maintain control performance even under communication delay. Guided by the designed reward function, the agents will interact with the environment and learn the optimal coordinated control strategy. The offline-trained agents will then execute online in a decentralized way, guaranteeing the voltage stability of the distribution system without any extra communication. This project will develop a communication-free and accurate voltage control strategy for the distribution system under the uncertainties of renewables.
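The centralized-training, decentralized-execution idea can be sketched as follows: during training, a critic with access to the global state updates all agents, but at execution time each actor maps only its inverter's local measurements to a reactive-power setpoint. This is an illustrative NumPy skeleton, not the project's actual architecture; the observation contents, layer sizes, reward weights, and measurement values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class Actor:
    """Decentralized policy for one inverter agent.

    Maps local measurements (here assumed to be bus voltage and active
    power, in p.u.) to a reactive-power setpoint in [-1, 1] p.u.
    The two-layer tanh network is purely illustrative.
    """
    def __init__(self, obs_dim=2, hidden=16):
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))

    def act(self, obs):
        h = np.tanh(obs @ self.w1)
        return (np.tanh(h @ self.w2)).item()

def reward(voltages, loss_mw, w_v=1.0, w_loss=0.1, v_ref=1.0):
    """Hypothetical shared reward: penalize squared voltage deviation
    across all buses plus the network power loss."""
    return -w_v * float(np.sum((voltages - v_ref) ** 2)) - w_loss * loss_mw

# Centralized training would use a critic that sees the global state to
# update every actor; decentralized execution, shown here, uses only each
# agent's local observation (example measurements are made up).
actors = [Actor() for _ in range(3)]
local_obs = [np.array([1.03, 0.5]), np.array([0.97, 0.8]), np.array([1.00, 0.2])]
setpoints = [a.act(o) for a, o in zip(actors, local_obs)]
```

The key property this skeleton illustrates is that no inter-agent communication appears in the execution path: coordination is baked into the trained weights, which is how the proposal removes the online communication burden of centralized OPF.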
A distribution system model will be built on the platform provided by SNIC, and the proposed DRL agents will be trained on the same platform. A case study will also be demonstrated on the platform.