Abstract
Conventional end-to-end learning algorithms consider only the final prediction output and ignore layer-wise relational reasoning during training. In this paper, we propose a forward and backward interacted-activation (FBI) loss function that regularizes the training of a CNN so that the prediction model can provide interpretable classification results. To the best of our knowledge, the proposed algorithm is the first to use a regularization function, without any prior knowledge or pre-defined terms, to make a CNN more explainable. Quantitative and qualitative analyses demonstrate that the proposed technique can efficiently train a CNN with improved interpretability on a well-known classification problem.
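The abstract does not specify the form of the FBI loss, so the following is only a minimal, hypothetical sketch of the general pattern it describes: a standard cross-entropy classification loss augmented with a layer-wise regularization term computed from intermediate activations. The `SmallCNN` architecture, the `layer_regularizer` penalty, and the weight `lam` are illustrative assumptions, not the paper's actual method.

```python
# Generic sketch of regularized CNN training in PyTorch.
# NOTE: layer_regularizer is a placeholder penalty, not the paper's FBI loss.
import torch
import torch.nn as nn

def layer_regularizer(activations):
    # Placeholder layer-wise penalty (mean squared activation across layers).
    return sum(a.pow(2).mean() for a in activations) / len(activations)

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        # Return logits and the intermediate activations used by the regularizer.
        a1 = torch.relu(self.conv1(x))
        a2 = torch.relu(self.conv2(a1))
        logits = self.fc(self.pool(a2).flatten(1))
        return logits, [a1, a2]

model = SmallCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
lam = 0.1  # regularization weight (hypothetical)

x = torch.randn(8, 3, 32, 32)    # dummy image batch
y = torch.randint(0, 10, (8,))   # dummy class labels

logits, activations = model(x)
loss = criterion(logits, y) + lam * layer_regularizer(activations)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```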
Original language | English |
---|---|
Title of host publication | Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 |
Publisher | IEEE Computer Society |
Pages | 40-43 |
Number of pages | 4 |
ISBN (Electronic) | 9781728125060 |
State | Published - Jun 2019 |
Event | 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 - Long Beach, United States. Duration: 16 Jun 2019 → 20 Jun 2019
Publication series
Name | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
---|---|
Volume | 2019-June |
ISSN (Print) | 2160-7508 |
ISSN (Electronic) | 2160-7516 |
Conference
Conference | 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 |
---|---|
Country/Territory | United States |
City | Long Beach |
Period | 16/06/19 → 20/06/19 |
Bibliographical note
Publisher Copyright: © 2019 IEEE Computer Society. All rights reserved.