DIP-QL: A Novel Reinforcement Learning Method for Constrained Industrial Systems

Hyungjun Park, Daiki Min, Jong Hyun Ryu, Dong Gu Choi

Research output: Contribution to journal › Article › peer-review

Abstract

Existing reinforcement learning (RL) methods have limited applicability to real-world industrial control problems because such problems are subject to various operational constraints. To overcome this challenge, in this article, we devise a novel RL method that optimizes a policy while strictly satisfying the system constraints. By leveraging a value-based RL approach, our proposed method avoids the challenges faced when searching for a constrained policy directly. Our method has two main features. First, we devise two distance-based Q-value update schemes, an incentive update and a penalty update, which enable the agent to select controls in the feasible region by replacing an infeasible control with the nearest feasible continuous control. The proposed update schemes adjust the values of both the replacing continuous control and the original infeasible control. Second, we define the penalty cost as a shadow price-weighted penalty to achieve efficient constrained-policy learning. We apply our method to microgrid control, and the case study demonstrates its superiority.
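
To make the abstract's two update schemes concrete, the sketch below illustrates the general idea in Python. It is a minimal, hypothetical reading under strong simplifying assumptions: a one-dimensional control constrained to a box [A_MIN, A_MAX] (so the nearest feasible control is a simple clip), a tabular Q-function, and a penalty cost taken as the shadow price times the projection distance. The names and exact update forms are illustrative assumptions, not the paper's equations.

```python
from collections import defaultdict

# Illustrative constants: a 1-D feasible interval and standard Q-learning
# hyperparameters. The paper addresses general industrial constraints; the
# box here is only to keep the projection step trivial.
A_MIN, A_MAX = 0.0, 1.0
ALPHA, GAMMA = 0.1, 0.99

# Tabular Q-function: Q[(state, action)] -> value.
Q = defaultdict(float)

def nearest_feasible(a):
    """Project a control onto the feasible interval (nearest feasible
    control under absolute distance)."""
    return min(max(a, A_MIN), A_MAX)

def update(state, action, reward, next_state, next_actions, shadow_price=1.0):
    """One distance-based update step (hypothetical form).

    Feasible controls get an ordinary Q-learning update. An infeasible
    control is replaced by its nearest feasible counterpart, which receives
    an incentive update, while the infeasible control itself receives a
    penalty update with a shadow-price-weighted, distance-scaled cost.
    """
    a_feas = nearest_feasible(action)
    dist = abs(action - a_feas)
    target = reward + GAMMA * max(Q[(next_state, a)] for a in next_actions)

    if dist == 0.0:
        # Already feasible: standard Q-learning update.
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    else:
        # Incentive update: credit the replacing feasible control.
        Q[(state, a_feas)] += ALPHA * (target - Q[(state, a_feas)])
        # Penalty update: depress the infeasible control's value by a
        # shadow-price-weighted penalty proportional to the distance.
        penalty = shadow_price * dist
        Q[(state, action)] += ALPHA * ((target - penalty) - Q[(state, action)])
```

For example, `update("s0", 1.4, reward=-2.0, next_state="s1", next_actions=[0.0, 0.5, 1.0])` would execute the projected control 1.0 in place of the infeasible 1.4, credit 1.0 via the incentive update, and penalize 1.4 in proportion to the distance 0.4, steering future selections toward the feasible region.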

Original language: English
Pages (from-to): 7494-7503
Number of pages: 10
Journal: IEEE Transactions on Industrial Informatics
Volume: 18
Issue number: 11
DOIs
State: Published - 1 Nov 2022

Bibliographical note

Publisher Copyright:
© 2005-2012 IEEE.

Keywords

  • Constrained action space
  • distance-based update schemes
  • industrial control system
  • microgrid control
  • reinforcement learning (RL)
