TY - JOUR
T1 - Explainable Artificial Intelligence for Fault Diagnosis of Industrial Processes
AU - Jang, Kyojin
AU - Pilario, Karl Ezra Salgado
AU - Lee, Nayoung
AU - Moon, Il
AU - Na, Jonggeol
N1 - Publisher Copyright: IEEE
PY - 2023
Y1 - 2023
N2 - Process monitoring is important for ensuring operational reliability and preventing occupational accidents. In recent years, data-driven methods such as machine learning and deep learning have been preferred for fault detection and diagnosis. In particular, unsupervised learning algorithms, such as auto-encoders, exhibit good detection performance, even for unlabeled data from complex processes. However, decisions generated by deep-neural-network-based models are difficult to interpret and cannot provide explanatory insight to users. We address this issue by proposing a new fault diagnosis method that uses explainable artificial intelligence to break the traditional trade-off between the accuracy and interpretability of deep learning models. First, an adversarial auto-encoder model for fault detection is built and then interpreted through the integration of Shapley additive explanations (SHAP) with a combined monitoring index. Using SHAP values, a diagnosis is conducted by allocating credit for detected faults, that is, deviations from a normal state, among the input variables. Unlike conventional methods, which evaluate only the reconstruction error, the proposed diagnosis method considers not only the reconstruction space but also the latent space. The proposed method was applied to two chemical process systems and compared with conventional diagnosis methods. The results highlight that the proposed method achieves exact fault diagnosis for single and multiple faults and also distinguishes the global patterns of various fault types.
AB - Process monitoring is important for ensuring operational reliability and preventing occupational accidents. In recent years, data-driven methods such as machine learning and deep learning have been preferred for fault detection and diagnosis. In particular, unsupervised learning algorithms, such as auto-encoders, exhibit good detection performance, even for unlabeled data from complex processes. However, decisions generated by deep-neural-network-based models are difficult to interpret and cannot provide explanatory insight to users. We address this issue by proposing a new fault diagnosis method that uses explainable artificial intelligence to break the traditional trade-off between the accuracy and interpretability of deep learning models. First, an adversarial auto-encoder model for fault detection is built and then interpreted through the integration of Shapley additive explanations (SHAP) with a combined monitoring index. Using SHAP values, a diagnosis is conducted by allocating credit for detected faults, that is, deviations from a normal state, among the input variables. Unlike conventional methods, which evaluate only the reconstruction error, the proposed diagnosis method considers not only the reconstruction space but also the latent space. The proposed method was applied to two chemical process systems and compared with conventional diagnosis methods. The results highlight that the proposed method achieves exact fault diagnosis for single and multiple faults and also distinguishes the global patterns of various fault types.
KW - CSTR
KW - Computational modeling
KW - Data models
KW - Explainable AI
KW - Fault detection
KW - Fault diagnosis
KW - Indexes
KW - Modeling
KW - Monitoring
KW - Tennessee Eastman process
KW - auto-encoder
UR - http://www.scopus.com/inward/record.url?scp=85148663839&partnerID=8YFLogxK
U2 - 10.1109/TII.2023.3240601
DO - 10.1109/TII.2023.3240601
M3 - Article
AN - SCOPUS:85148663839
SN - 1551-3203
SP - 1
EP - 8
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
ER -