Abstract
Resistive crossbar arrays can carry out energy-efficient vector-matrix multiplication, which is a crucial operation in most machine learning applications. However, practical computing tasks that require high precision remain challenging to implement in such arrays because of intrinsic device variability. Herein, we experimentally demonstrate a precision-extension technique whereby high precision is attained through the combined operation of multiple devices, each of which stores a portion of the required bit width. Additionally, carefully designed analog-to-digital converters are used to remove the unpredictable effects of noise sources. An 8×15 carbon nanotube transistor array can perform multiplication operations, where operands have up to 16 valid bits, without any error, making in-memory computing approaches attractive for high-throughput, energy-efficient machine learning accelerators.
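The precision-extension idea described in the abstract, in which a wide operand is split across several low-precision devices and the partial products are recombined by digital shift-and-add, can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation: the function name `bit_sliced_vmm`, the 4-bit slice width per device, and the use of exact integer arithmetic in place of analog conductances are all assumptions made for the sketch.

```python
import numpy as np

def bit_sliced_vmm(x, W, weight_bits=16, slice_bits=4):
    """Hypothetical sketch: vector-matrix multiply with the weight
    matrix W split into bit slices, each slice standing in for one
    low-precision crossbar array."""
    result = np.zeros(W.shape[1], dtype=np.int64)
    remaining = W.astype(np.int64)
    for s in range(weight_bits // slice_bits):
        # The low slice_bits of each weight are stored in one device array.
        W_slice = remaining & ((1 << slice_bits) - 1)
        remaining >>= slice_bits
        # In hardware this dot product would be the analog crossbar
        # readout digitized by an ADC; here it is exact integer math.
        partial = x @ W_slice
        # Shift-and-add recombines the partial products digitally.
        result += partial << (s * slice_bits)
    return result

# Dimensions echo the 8x15 array from the abstract; values are random.
rng = np.random.default_rng(0)
W = rng.integers(0, 1 << 16, size=(8, 15), dtype=np.int64)  # 16-bit weights
x = rng.integers(0, 1 << 8, size=8, dtype=np.int64)
assert np.array_equal(bit_sliced_vmm(x, W), x @ W)  # bit-exact result
```

In the real hardware, the resolution of the analog-to-digital converters bounds how many bits of each partial sum can be resolved reliably, which is presumably why the paper pairs the slicing scheme with converters designed to suppress noise-induced errors.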
| Original language | English |
| --- | --- |
| Article number | 9142231 |
| Pages (from-to) | 133597-133604 |
| Number of pages | 8 |
| Journal | IEEE Access |
| Volume | 8 |
| DOIs | |
| State | Published - 2020 |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Keywords
- Crossbar array
- dot product
- matrix multiplication
- precision extension