Deep neural networks are among the most important methods in machine learning. Their advantages are excellent predictive performance and versatility, owing to deep architectures and generalized input–output forms. However, neural networks are black-box models and therefore lack explanatory power for their predictions. In this study, we propose a new neural network architecture that makes predictions for multivariate time-series (MTS) data interpretable by employing a generalized additive approach. In addition, we examine parameter-sharing networks, including hard-shared networks, to decrease model complexity. Our experiments demonstrate that the interpretable architecture can quantify the contribution of each input value to the prediction at every time step and for every variable. Results on a toy example and four real-world datasets show that the proposed method predicts MTS data with performance comparable to that of state-of-the-art neural networks while providing reasonable importance scores for each input value.
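The core idea of the abstract — summing the outputs of per-input subnetworks so that each output is directly that input's contribution — can be illustrated with a minimal sketch. This is not the authors' implementation: the class names (`FeatureNet`, `AdditiveMTSModel`) and the use of plain NumPy forward passes with random, untrained weights are assumptions for illustration only; a real model would be trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureNet:
    """Tiny one-hidden-layer MLP applied to a single scalar input.

    Hypothetical sketch: in a neural additive model, each input (here,
    one variable at one time step) gets its own subnetwork, and the
    prediction is the sum of the subnetwork outputs, so each output is
    directly that input's contribution to the prediction.
    """
    def __init__(self, hidden=8):
        self.w1 = rng.normal(size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(size=(hidden, 1))

    def __call__(self, x):  # x: (batch,) values of one scalar feature
        h = np.maximum(0.0, x[:, None] @ self.w1 + self.b1)  # ReLU layer
        return (h @ self.w2)[:, 0]  # (batch,) contribution values

class AdditiveMTSModel:
    """Additive model over a (batch, T, D) multivariate time series."""
    def __init__(self, T, D, hidden=8):
        # one subnetwork per (time step, variable) input
        self.nets = [[FeatureNet(hidden) for _ in range(D)] for _ in range(T)]

    def contributions(self, X):
        """Per-(time step, variable) contributions, shape (batch, T, D)."""
        batch, T, D = X.shape
        C = np.empty((batch, T, D))
        for t in range(T):
            for d in range(D):
                C[:, t, d] = self.nets[t][d](X[:, t, d])
        return C

    def predict(self, X):
        # prediction = sum of all contributions, hence exact decomposition
        return self.contributions(X).sum(axis=(1, 2))

X = rng.normal(size=(4, 5, 3))  # 4 samples, 5 time steps, 3 variables
model = AdditiveMTSModel(T=5, D=3)
C = model.contributions(X)
y = model.predict(X)
# the prediction decomposes exactly into per-input contributions
assert np.allclose(y, C.sum(axis=(1, 2)))
```

The parameter-sharing variants mentioned above would reuse one subnetwork across time steps or variables instead of keeping a separate `FeatureNet` per input, trading some flexibility for far fewer parameters.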
Funding information: This work was supported by the Ewha Womans University Research Grant of 2023. It was also partially supported by the National Research Foundation of Korea (NRF) grant No. 2020R1F1A1075781 and by the Basic Science Research Program through the NRF funded by the Ministry of Education (No. 2022R1A6A3A13056750).
© 2023 Elsevier Ltd
Keywords:
- Deep learning
- Explainable artificial intelligence
- Multivariate time series prediction
- Neural additive models
- Parameter sharing