Conventional end-to-end learning algorithms consider only the final prediction output and ignore layer-wise relational reasoning during training. In this paper, we propose a forward and backward interacted-activation (FBI) loss function that regularizes the training of a CNN so that the prediction model can provide interpretable classification results. To the best of our knowledge, the proposed algorithm is the first to use a regularization function, without any prior knowledge or pre-defined terms, to make a CNN more explainable. Quantitative and qualitative analysis demonstrates that the proposed technique can efficiently train a CNN with greater interpretability on a well-known classification problem.
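The abstract describes the general pattern of adding a regularization term to the standard task loss during training. A minimal sketch of that pattern follows; note that the exact form of the FBI (forward and backward interacted-activation) term is not given in this abstract, so the `activation penalty` regularizer below is a hypothetical stand-in, not the paper's formulation.

```python
# Sketch of regularized training: total loss = task loss + lambda * regularizer.
# The regularizer here (a penalty on activation magnitude) is a placeholder
# for illustration only; it is NOT the paper's FBI loss.

def train_step(w, x, y, lam=0.1, lr=0.01):
    """One gradient step on a scalar linear model y_hat = w * x.

    Task loss: squared error. Placeholder regularizer: squared activation,
    standing in for an unspecified interpretability-oriented term.
    """
    y_hat = w * x                      # forward activation
    task_loss = (y_hat - y) ** 2       # standard prediction objective
    reg = y_hat ** 2                   # placeholder activation penalty
    total = task_loss + lam * reg
    # analytic gradient of the total loss w.r.t. w
    grad = 2 * (y_hat - y) * x + lam * 2 * y_hat * x
    return w - lr * grad, total

w = 0.0
for _ in range(200):
    w, loss = train_step(w, x=1.0, y=1.0)
print(w)
```

Without the regularizer, `w` would converge to the task optimum 1.0; the added term pulls it toward a smaller value, illustrating how an auxiliary loss reshapes what the model learns.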
Title of host publication: Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019
Publisher: IEEE Computer Society
Number of pages: 4
State: Published - Jun 2019
Event: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 - Long Beach, United States
Duration: 16 Jun 2019 → 20 Jun 2019
Series name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Conference: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019
Period: 16/06/19 → 20/06/19
Bibliographical note (Funding Information):
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2019R1C1C1010249).
© 2019 IEEE Computer Society. All rights reserved.