16-Bit Fixed-Point Number Multiplication with CNT Transistor Dot-Product Engine

Sungho Kim, Yongwoo Lee, Hee Dong Kim, Sung Jin Choi

Research output: Contribution to journal › Article › peer-review
Abstract

Resistive crossbar arrays can carry out energy-efficient vector-matrix multiplication, which is a crucial operation in most machine learning applications. However, practical computing tasks that require high precision remain challenging to implement in such arrays because of intrinsic device variability. Herein, we experimentally demonstrate a precision-extension technique whereby high precision can be attained through the combined operation of multiple devices, each of which stores a portion of the required bit width. Additionally, purpose-designed analog-to-digital converters are used to remove the unpredictable effects of noise sources. An 8 × 15 carbon nanotube transistor array can perform multiplication operations in which the operands have up to 16 valid bits, without any error, making in-memory computing approaches attractive for high-throughput, energy-efficient machine learning accelerators.
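The precision-extension idea described in the abstract can be illustrated in software: each 16-bit operand is sliced into several low-bit-width segments (one per device), the low-precision partial products are formed independently, and the full-precision result is recovered digitally by shift-and-add. The sketch below is a minimal illustration of this scheme; the 4-bit slice width and all function names are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of bit-sliced multiplication, illustrating the
# precision-extension idea: each low-bit-width "device" stores one
# slice of a 16-bit operand, and partial products are recombined
# digitally by shift-and-add. The slice width (4 bits) and all names
# here are illustrative assumptions, not details from the paper.

SLICE_BITS = 4                  # assumed bit width stored per device
NUM_SLICES = 16 // SLICE_BITS   # a 16-bit operand -> 4 slices
MASK = (1 << SLICE_BITS) - 1

def slice_operand(x: int) -> list[int]:
    """Split a 16-bit unsigned operand into low-bit-width slices,
    least-significant slice first (one slice per device)."""
    return [(x >> (SLICE_BITS * i)) & MASK for i in range(NUM_SLICES)]

def bit_sliced_multiply(a: int, b: int) -> int:
    """Multiply two 16-bit operands using only low-precision
    partial products, then recombine them with shift-and-add."""
    a_slices = slice_operand(a)
    b_slices = slice_operand(b)
    result = 0
    for i, ai in enumerate(a_slices):
        for j, bj in enumerate(b_slices):
            # Each partial product needs only SLICE_BITS of input
            # precision, matching what one device pair contributes.
            result += (ai * bj) << (SLICE_BITS * (i + j))
    return result

# Spot check: the recombined result matches a full-precision multiply.
assert bit_sliced_multiply(0xBEEF, 0x1234) == 0xBEEF * 0x1234
```

In hardware, the inner products would be computed in the analog domain by the crossbar, with the digitized partial sums shifted and accumulated afterward, which is why the per-device precision requirement stays low.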

Original language: English
Article number: 9142231
Pages (from-to): 133597-133604
Number of pages: 8
Journal: IEEE Access
Volume: 8
DOIs
State: Published - 2020

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Crossbar array
  • dot product
  • matrix multiplication
  • precision extension
