TY - JOUR
T1 - Deep-learning- and reinforcement-learning-based profitable strategy of a grid-level energy storage system for the smart grid
AU - Han, Gwangwoo
AU - Lee, Sanghun
AU - Lee, Jaemyung
AU - Lee, Kangyong
AU - Bae, Joongmyeon
N1 - Funding Information:
This work was supported by the Nano·Material Technology Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (no. NRF-2017M3A7B4049547). This work was also supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (nos. 20172010106280 and 20193010032460). In addition, this work was conducted under the framework of the research and development program of the Korea Institute of Energy Research (C1-2409).
Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2021/9
Y1 - 2021/9
N2 - A profitable operation strategy for an energy storage system (ESS) could play a pivotal role in the smart grid by balancing electricity supply with demand. Here, we propose a novel AI-based arbitrage strategy to maximize operating profit in an electricity market composed of a grid operator (GO), an ESS, and customers (CUs). This strategy, buying and selling electricity to profit from price imbalances, can also shift peak load from on-peak to off-peak periods, a win-win approach for both the ESS operator (EO) and the GO. In particular, to maximize the EO's profit and further reduce the GO's on-peak power, we introduce a stimulus-integrated arbitrage algorithm in which the GO provides an additional reward to the EO, weighted differently for each peak period. The algorithm consists of two parts: the first is recurrent neural network-based deep learning to overcome the future uncertainties of electricity prices and load demands; the second is reinforcement learning to derive the optimal charging or discharging policy considering the grid peak states, the EO's profit, and the CUs' load demand. Significantly, the suggested approach increases operating profit 2.4-fold and decreases the GO's on-peak power by 30%.
AB - A profitable operation strategy for an energy storage system (ESS) could play a pivotal role in the smart grid by balancing electricity supply with demand. Here, we propose a novel AI-based arbitrage strategy to maximize operating profit in an electricity market composed of a grid operator (GO), an ESS, and customers (CUs). This strategy, buying and selling electricity to profit from price imbalances, can also shift peak load from on-peak to off-peak periods, a win-win approach for both the ESS operator (EO) and the GO. In particular, to maximize the EO's profit and further reduce the GO's on-peak power, we introduce a stimulus-integrated arbitrage algorithm in which the GO provides an additional reward to the EO, weighted differently for each peak period. The algorithm consists of two parts: the first is recurrent neural network-based deep learning to overcome the future uncertainties of electricity prices and load demands; the second is reinforcement learning to derive the optimal charging or discharging policy considering the grid peak states, the EO's profit, and the CUs' load demand. Significantly, the suggested approach increases operating profit 2.4-fold and decreases the GO's on-peak power by 30%.
KW - AI
KW - Deep learning
KW - Energy storage system
KW - Recurrent neural network
KW - Reinforcement learning
KW - Smart grid
UR - http://www.scopus.com/inward/record.url?scp=85109466142&partnerID=8YFLogxK
U2 - 10.1016/j.est.2021.102868
DO - 10.1016/j.est.2021.102868
M3 - Article
AN - SCOPUS:85109466142
SN - 2352-152X
VL - 41
JO - Journal of Energy Storage
JF - Journal of Energy Storage
M1 - 102868
ER -