Sharing of Energy Among Cooperative Households Using Distributed Multi-Agent Reinforcement Learning.

2019 
Due to the increasing complexity and uncertainty of the future sustainable energy system, new control algorithms for decentrally acting energy entities are needed. We present an approach to distributed reinforcement learning in a multi-agent setup to find a control strategy for two cooperative agents within an energy cell. To practice energy sharing and decrease the energy cell's overall dependence on the electrical grid, we train two independently learning agents, an energy storage system and an electric power generator, using Q-learning. We compare the strategies learned by the agents under partial and full observability of the environment and evaluate the energy cell's dependence on the electrical grid. Our results show that distributed Q-learning with independently learning agents works in the energy-cell setup without requiring information exchange between the agents. Under partial observability, the algorithm reaches performance comparable to that under full observability, with less need for communication, but at the cost of a five times longer training time.
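The core idea described above, independent Q-learning with two agents sharing one environment and no inter-agent communication, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual simulation: the "energy cell" dynamics, state encoding (net grid exchange), actions, and all hyperparameters below are assumptions made for the example. Each agent keeps its own Q-table and updates it from the shared reward, never observing the other agent's action.

```python
import random
from collections import defaultdict

ACTIONS = [0, 1]            # hypothetical: 0 = idle, 1 = act (discharge / generate)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(state, a_storage, a_generator):
    """Toy environment: state is the net exchange with the grid in {-2..2}.
    Both agents' actions nudge it; reward penalizes any grid exchange,
    so zero exchange (self-sufficiency) is optimal."""
    nxt = max(-2, min(2, state + a_generator - a_storage))
    reward = -abs(nxt)
    return nxt, reward

def choose(q, state):
    """Epsilon-greedy action selection from one agent's own Q-table."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def train(episodes=500, horizon=20, seed=0):
    random.seed(seed)
    q_storage = defaultdict(float)    # each agent has a private Q-table;
    q_generator = defaultdict(float)  # no information is exchanged between them
    for _ in range(episodes):
        state = random.randint(-2, 2)
        for _ in range(horizon):
            a_s = choose(q_storage, state)
            a_g = choose(q_generator, state)
            nxt, r = step(state, a_s, a_g)
            # Standard Q-learning update, applied independently per agent
            # using the shared reward signal.
            for q, a in ((q_storage, a_s), (q_generator, a_g)):
                best_next = max(q[(nxt, b)] for b in ACTIONS)
                q[(state, a)] += ALPHA * (r + GAMMA * best_next - q[(state, a)])
            state = nxt
    return q_storage, q_generator

q_s, q_g = train()
```

From each agent's point of view the other agent is simply part of a non-stationary environment, which is why independent Q-learning needs no communication channel; the paper's partial- versus full-observability comparison then varies how much of that environment each agent can see.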