TY - JOUR
T1 - Intelligent charging and discharging of electric vehicles in a vehicle-to-grid system using a reinforcement learning-based approach
AU - Maeng, Julie
AU - Min, Daiki
AU - Kang, Yuncheol
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/12
Y1 - 2023/12
N2 - Recent advances in electric vehicle (EV) technology have increased the importance of vehicle-to-grid (V2G) systems in the smart grid domain. These systems allow bidirectional energy and information flow between consumers and suppliers, enabling an EV to act as an energy storage system that can supply surplus energy to the grid. V2G is particularly useful for reducing peak demand and shifting load for utilities, and it can serve as a backup for renewable energy sources. To realize these benefits, charging and discharging must be managed intelligently with respect to electricity prices and user requirements. However, uncertainties such as commuting behavior, charging preferences, and energy requirements pose challenges in determining the optimal charging/discharging strategy. In this study, individual EV charging/discharging is formulated as a sequential decision-making problem, and a model-free reinforcement learning (RL) approach is used to learn optimal sequential charging/discharging decisions until the EV battery reaches its end of life. The goal is to minimize the charging cost for the individual user and maximize the use of the EV battery over successive charging and discharging cycles, while also accounting for the distance traveled by the vehicle. The proposed algorithm is evaluated on real-world data, and the learned charging and discharging strategies are examined to assess the effectiveness of the method. The experimental scenarios demonstrate that the RL approach is advantageous compared with alternative approaches for reducing the overall cost and maximizing the use of EV batteries.
AB - Recent advances in electric vehicle (EV) technology have increased the importance of vehicle-to-grid (V2G) systems in the smart grid domain. These systems allow bidirectional energy and information flow between consumers and suppliers, enabling an EV to act as an energy storage system that can supply surplus energy to the grid. V2G is particularly useful for reducing peak demand and shifting load for utilities, and it can serve as a backup for renewable energy sources. To realize these benefits, charging and discharging must be managed intelligently with respect to electricity prices and user requirements. However, uncertainties such as commuting behavior, charging preferences, and energy requirements pose challenges in determining the optimal charging/discharging strategy. In this study, individual EV charging/discharging is formulated as a sequential decision-making problem, and a model-free reinforcement learning (RL) approach is used to learn optimal sequential charging/discharging decisions until the EV battery reaches its end of life. The goal is to minimize the charging cost for the individual user and maximize the use of the EV battery over successive charging and discharging cycles, while also accounting for the distance traveled by the vehicle. The proposed algorithm is evaluated on real-world data, and the learned charging and discharging strategies are examined to assess the effectiveness of the method. The experimental scenarios demonstrate that the RL approach is advantageous compared with alternative approaches for reducing the overall cost and maximizing the use of EV batteries.
KW - Battery degradation
KW - Charging/Discharging
KW - Reinforcement Learning
KW - Scheduling
KW - Vehicle to Grid
UR - http://www.scopus.com/inward/record.url?scp=85178480433&partnerID=8YFLogxK
U2 - 10.1016/j.segan.2023.101224
DO - 10.1016/j.segan.2023.101224
M3 - Article
AN - SCOPUS:85178480433
SN - 2352-4677
VL - 36
JO - Sustainable Energy, Grids and Networks
JF - Sustainable Energy, Grids and Networks
M1 - 101224
ER -