Reinforcement Learning for Joint Design and Control of Battery-PV Systems
Authors
Cauz, Marine
Bolland, Adrien
Miftari, Bardhyl
Perret, Lionel
Ballif, Christophe
Wyrsch, Nicolas
Abstract
The decentralisation and unpredictability of new renewable energy sources require rethinking our energy system. Data-driven approaches, such as reinforcement learning (RL), have emerged as new control strategies for operating these systems, but they have not yet been applied to system design. This paper aims to bridge this gap by studying the use of an RL-based method for the joint design and control of a real-world PV and battery system. The design problem is first formulated as a mixed-integer linear programming (MILP) problem. The optimal MILP solution is then used to evaluate the performance of an RL agent trained in a surrogate environment designed for applying an existing data-driven algorithm. The main difference between the two models lies in their optimization approaches: MILP finds a solution that minimizes the total cost of a one-year operation given deterministic historical data, whereas RL is a stochastic method that searches for a strategy over one week of data that is optimal in expectation over all weeks in the historical dataset. Both methods were applied to a toy example using one week of data and to a case study using one year of data. In both cases, the models converged to similar control solutions, but their investment decisions differed. Overall, these outcomes are an initial step that illustrates the benefits and challenges of using RL for the joint design and control of energy systems.
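The contrast between the two objectives can be sketched in a toy model (a hypothetical illustration only: the battery model, prices, investment cost, and synthetic data below are invented for exposition and are not taken from the paper):

```python
import random
from statistics import mean

HOURLY_PRICE = 0.2    # assumed grid price, currency per kWh
BATTERY_COST = 300.0  # assumed investment cost per kWh of capacity

def weekly_operating_cost(capacity_kwh, net_load_kwh):
    """Grid purchase cost over one week for a simple lossless battery."""
    soc = 0.0   # state of charge, kWh
    cost = 0.0
    for net in net_load_kwh:      # net = load - PV production, per hour
        if net < 0:               # PV surplus: charge the battery
            soc = min(capacity_kwh, soc - net)
        else:                     # deficit: discharge first, then buy
            discharge = min(soc, net)
            soc -= discharge
            cost += (net - discharge) * HOURLY_PRICE
    return cost

# 52 synthetic weeks of hourly net load (hypothetical data).
random.seed(0)
weeks = [[random.uniform(-2, 3) for _ in range(168)] for _ in range(52)]

def yearly_cost(capacity):
    """MILP-style objective: investment plus total cost over the
    deterministic historical year, for a candidate design."""
    return BATTERY_COST * capacity + sum(
        weekly_operating_cost(capacity, w) for w in weeks)

def expected_weekly_cost(capacity):
    """RL-style objective: amortised weekly investment share plus the
    expected one-week operating cost over weeks drawn from the dataset."""
    return BATTERY_COST * capacity / 52 + mean(
        weekly_operating_cost(capacity, w) for w in weeks)
```

For a fixed design and a fixed rule-based controller, as here, the two criteria coincide up to a factor of 52; the abstract's point is that RL optimizes the weekly expectation with a stochastic policy trained episode by episode, whereas MILP jointly solves the entire deterministic year, which is why the investment decisions can diverge.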
Publication Reference
36th International Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems (ECOS 2023), pp. 3131-3142
Year
2023