TY - JOUR
T1 - DIP-QL: A Novel Reinforcement Learning Method for Constrained Industrial Systems
T2 - IEEE Transactions on Industrial Informatics
AU - Park, Hyungjun
AU - Min, Daiki
AU - Ryu, Jong Hyun
AU - Choi, Dong Gu
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/11/1
Y1 - 2022/11/1
AB - Existing reinforcement learning (RL) methods have limited applicability to real-world industrial control problems because of the various constraints in such systems. To overcome this challenge, in this article, we devise a novel RL method that optimizes a policy while strictly satisfying the system constraints. By leveraging a value-based RL approach, our proposed method avoids the challenges faced when searching for a constrained policy. Our method has two main features. First, we devise two distance-based Q-value update schemes, incentive and penalty updates, which enable the agent to select controls in the feasible region by replacing an infeasible control with the nearest feasible continuous control. The proposed update schemes adjust the values of both the substituted continuous control and the original infeasible control. Second, we define the penalty cost as a shadow-price-weighted penalty to achieve efficient constrained policy learning. We apply our method to microgrid control, and a case study demonstrates its superiority.
KW - Constrained action space
KW - distance-based update schemes
KW - industrial control system
KW - microgrid control
KW - reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85126548738&partnerID=8YFLogxK
U2 - 10.1109/TII.2022.3159570
DO - 10.1109/TII.2022.3159570
M3 - Article
AN - SCOPUS:85126548738
SN - 1551-3203
VL - 18
SP - 7494
EP - 7503
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
IS - 11
ER -