Abstract
In recent years, there have been many studies on the application and implementation of machine learning techniques in the financial domain. Deploying such state-of-the-art models inevitably requires interpretability so that users can understand and trust the results. However, because most machine learning methods are “black-box,” explainable AI, which aims to provide explanations to users, has become an important research issue. This paper focuses on explanation by counterfactual example for a bankruptcy-prediction model. A counterfactual-based explanation offers users an alternative case that would yield a desired output from the model. This paper proposes a genetic algorithm (GA)-based counterfactual generation algorithm that uses feature importance while taking other key factors into account. Feature importance is derived from the prediction model, and the key factors for counterfactuals include closeness to the original instance and sparsity. In the experiments, the proposed method showed advantages over the nearest contrastive sample and a simple counterfactual generation algorithm. It also provides relevant and compact explanations that enhance the interpretability of the bankruptcy prediction model.
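The abstract does not give the authors' exact GA formulation, so the following is only a minimal sketch of how a GA-based counterfactual search might combine validity, importance-weighted proximity, and sparsity in its fitness function. All names here (`predict_proba`, `generate_counterfactual`, the toy logistic model, the weights, and the way feature importance enters the proximity term) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained bankruptcy-prediction model:
# returns the probability of the "bankrupt" class for each row of X.
def predict_proba(X):
    weights = np.array([2.0, -1.5, 0.5, 1.0, -0.8])
    return 1.0 / (1.0 + np.exp(-(X @ weights)))

def fitness(candidates, x_orig, feat_importance,
            target=0, w_valid=1.0, w_prox=0.5, w_sparse=0.2):
    """Higher is better: reach the target class while staying close and sparse."""
    proba = predict_proba(candidates)
    # Validity: how strongly the candidate is predicted as the desired class.
    validity = (1.0 - proba) if target == 0 else proba
    # Proximity: feature-importance-weighted L1 distance to the original
    # instance (one possible way to use importance; an assumption here).
    proximity = np.abs(candidates - x_orig) @ feat_importance
    # Sparsity: number of features that were changed.
    sparsity = (np.abs(candidates - x_orig) > 1e-6).sum(axis=1)
    return w_valid * validity - w_prox * proximity - w_sparse * sparsity

def generate_counterfactual(x_orig, feat_importance, target=0,
                            pop_size=100, n_gen=200, mut_rate=0.2, sigma=0.1):
    n_feat = x_orig.shape[0]
    # Initialize the population as small perturbations of the original instance.
    pop = x_orig + rng.normal(0.0, sigma, size=(pop_size, n_feat))
    for _ in range(n_gen):
        fit = fitness(pop, x_orig, feat_importance, target)
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Uniform crossover between consecutive parents.
        mask = rng.random((pop_size, n_feat)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation on a random subset of genes.
        mut_mask = rng.random((pop_size, n_feat)) < mut_rate
        children = children + mut_mask * rng.normal(0.0, sigma, (pop_size, n_feat))
        pop = children
    fit = fitness(pop, x_orig, feat_importance, target)
    return pop[np.argmax(fit)]

x = np.array([0.8, 0.2, 0.5, 0.9, 0.1])           # instance predicted "bankrupt"
importance = np.array([0.4, 0.1, 0.1, 0.3, 0.1])  # e.g. taken from the model
cf = generate_counterfactual(x, importance, target=0)
print("original      :", x, "p(bankrupt) =", round(float(predict_proba(x[None])[0]), 3))
print("counterfactual:", np.round(cf, 3), "p(bankrupt) =",
      round(float(predict_proba(cf[None])[0]), 3))
```

The sparsity and proximity penalties push the search toward counterfactuals that change few features by small amounts, which is the "relevant and compact" property the abstract emphasizes; the actual paper may weight or constrain these terms differently.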
| Original language | English |
|---|---|
| Article number | 119390 |
| Journal | Expert Systems with Applications |
| Volume | 216 |
| DOIs | |
| State | Published - 15 Apr 2023 |
Bibliographical note
Publisher Copyright: © 2022
Keywords
- Bankruptcy prediction
- Counterfactual-based explanation
- Explainable artificial intelligence