TY - GEN
T1 - A PVT-robust customized 4T embedded DRAM cell array for accelerating binary neural networks
AU - Shin, Hyein
AU - Sim, Jaehyeong
AU - Lee, Daewoong
AU - Kim, Lee Sup
N1 - Funding Information:
This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NO. 2017R1A2B2009380).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Deep neural networks (DNNs) are widely used in real-world applications. However, the large amount of kernel and intermediate data incurs a memory wall problem in resource-limited edge devices. Recent advances in binary deep neural networks (BNNs) and computing in-memory (CIM) have effectively alleviated this bottleneck, especially when the two are combined. However, previous CIM-based BNN accelerators are highly vulnerable to process/supply voltage/temperature (PVT) variation, resulting in severe accuracy degradation that makes them impractical for real-world edge devices. To address this vulnerability, we propose a PVT-robust accelerator architecture for BNNs with a computable 4T embedded DRAM (eDRAM) cell array. First, we implement the XNOR operation of the BNN in a time-multiplexed manner by utilizing the fundamental read operation of the conventional eDRAM cell. Next, a PVT-robust bit-count based on charge sharing is proposed with the computable 4T eDRAM cell array. As a result, the proposed architecture achieves 6.9× less variation in PVT-variant environments, which guarantees stable accuracy, and a 2.03-49.4× improvement in energy efficiency over previous CIM-based accelerators.
AB - Deep neural networks (DNNs) are widely used in real-world applications. However, the large amount of kernel and intermediate data incurs a memory wall problem in resource-limited edge devices. Recent advances in binary deep neural networks (BNNs) and computing in-memory (CIM) have effectively alleviated this bottleneck, especially when the two are combined. However, previous CIM-based BNN accelerators are highly vulnerable to process/supply voltage/temperature (PVT) variation, resulting in severe accuracy degradation that makes them impractical for real-world edge devices. To address this vulnerability, we propose a PVT-robust accelerator architecture for BNNs with a computable 4T embedded DRAM (eDRAM) cell array. First, we implement the XNOR operation of the BNN in a time-multiplexed manner by utilizing the fundamental read operation of the conventional eDRAM cell. Next, a PVT-robust bit-count based on charge sharing is proposed with the computable 4T eDRAM cell array. As a result, the proposed architecture achieves 6.9× less variation in PVT-variant environments, which guarantees stable accuracy, and a 2.03-49.4× improvement in energy efficiency over previous CIM-based accelerators.
KW - Binary Convolutional Neural Network
KW - Processing in-memory
KW - eDRAM
UR - http://www.scopus.com/inward/record.url?scp=85077788939&partnerID=8YFLogxK
U2 - 10.1109/ICCAD45719.2019.8942072
DO - 10.1109/ICCAD45719.2019.8942072
M3 - Conference contribution
AN - SCOPUS:85077788939
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - 2019 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2019 - Digest of Technical Papers
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 38th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2019
Y2 - 4 November 2019 through 7 November 2019
ER -