A Kernel Decomposition Architecture for Binary-weight Convolutional Neural Networks

Hyeonuk Kim, Jaehyeong Sim, Yeongjae Choi, Lee Sup Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

The binary-weight CNN is one of the most efficient solutions for mobile CNNs, but processing each image still requires a huge number of operations. To reduce this operation count, we propose an energy-efficient kernel decomposition architecture, based on the observation that many of these operations are redundant. In this scheme, all kernels are decomposed into sub-kernels to expose their common parts. By skipping the redundant computations, the operation count per image is reduced by 47.7%. Furthermore, a low-cost bit-width quantization technique is implemented by exploiting the relative scales of the feature data. Experimental results show that the proposed architecture achieves a 22% energy reduction.
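The core idea of sharing computation between decomposed sub-kernels can be illustrated with a minimal sketch. This is not the paper's architecture, only a toy NumPy model under assumed parameters (8 binary 3×3 kernels, row-wise decomposition into 1×3 sub-kernels): since only 2³ = 8 distinct binary 1×3 patterns exist, sub-kernels frequently coincide across kernels, and their partial sums can be computed once and reused.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the paper's configuration): 8 binary 3x3 kernels
# with weights in {-1, +1}, applied to a single 3x3 input patch.
K = 8
kernels = rng.choice([-1, 1], size=(K, 3, 3))
patch = rng.standard_normal((3, 3))

# Naive evaluation: every kernel computes its own full dot product,
# costing K * 9 = 72 multiply-accumulates.
naive = np.einsum('kij,ij->k', kernels, patch)

# Decomposition sketch: split each 3x3 kernel row-wise into three 1x3
# sub-kernels. Identical sub-kernels at the same row position share one
# cached partial sum, so redundant multiply-accumulates are skipped.
total_ops = 0
partial = np.zeros(K)
for r in range(3):
    cache = {}                      # partial sums for this row position
    for k in range(K):
        key = kernels[k, r].tobytes()
        if key not in cache:
            cache[key] = kernels[k, r] @ patch[r]
            total_ops += 3          # only unique sub-kernels cost ops
        partial[k] += cache[key]    # reuse the shared partial sum

assert np.allclose(naive, partial)  # same outputs as the naive path
print(f"naive ops: {K * 9}, shared ops: {total_ops}")
```

The savings grow with the number of kernels, because the pool of distinct binary sub-kernel patterns is fixed while the number of sub-kernels drawn from it increases.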

Original language: English
Title of host publication: Proceedings of the 54th Annual Design Automation Conference 2017, DAC 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781450349277
DOIs
State: Published - 18 Jun 2017
Event: 54th Annual Design Automation Conference, DAC 2017 - Austin, United States
Duration: 18 Jun 2017 - 22 Jun 2017

Publication series

Name: Proceedings - Design Automation Conference
Volume: Part 128280
ISSN (Print): 0738-100X

Conference

Conference: 54th Annual Design Automation Conference, DAC 2017
Country/Territory: United States
City: Austin
Period: 18/06/17 - 22/06/17

