Abstract
Most machine learning algorithms involve many multiply-accumulate operations, which dictate the computation time and energy required. Vector-matrix multiplications can be accelerated using resistive networks, which map naturally onto a crossbar geometry: Kirchhoff's current law sums the products in a single readout step. However, practical computing tasks that require high precision remain very challenging to implement in a resistive crossbar array, because intrinsic device variability and unavoidable crosstalk, such as sneak-path currents through adjacent devices, limit the achievable precision. Here, we experimentally demonstrate a precision-extension technique for a carbon nanotube (CNT) transistor crossbar array. High precision is attained by operating multiple devices together, each storing a portion of the required bit width. A 10 × 10 CNT transistor array can perform vector-matrix multiplication with high accuracy, making in-memory computing approaches attractive for high-performance computing environments.
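To make the scheme concrete, here is a minimal NumPy sketch of the bit-slicing idea behind precision extension, assuming each weight's bit width is partitioned across several low-precision devices and the per-slice analog products are recombined digitally. The function names (`split_matrix`, `crossbar_vmm`, `precision_extended_vmm`) and the slice widths are illustrative assumptions, and the crossbar is modeled as an ideal multiply, ignoring the device variability and sneak-path effects the paper addresses.

```python
import numpy as np

def split_matrix(W, total_bits=8, bits_per_device=2):
    """Split an unsigned integer weight matrix into slices, each holding
    `bits_per_device` bits -- conceptually one crossbar device per slice."""
    n_slices = total_bits // bits_per_device
    mask = (1 << bits_per_device) - 1
    return [(W >> (s * bits_per_device)) & mask for s in range(n_slices)]

def crossbar_vmm(x, W_slice):
    """Idealized analog vector-matrix multiply of one low-precision slice.
    A physical crossbar would add variability and sneak-path errors here."""
    return x @ W_slice

def precision_extended_vmm(x, W, total_bits=8, bits_per_device=2):
    """Recombine the per-slice partial products with binary shifts."""
    y = np.zeros(W.shape[1], dtype=np.int64)
    for s, W_s in enumerate(split_matrix(W, total_bits, bits_per_device)):
        y += crossbar_vmm(x, W_s) << (s * bits_per_device)
    return y

# Usage: a 10 x 10 array, matching the demonstrated crossbar size.
rng = np.random.default_rng(0)
W = rng.integers(0, 256, size=(10, 10))   # 8-bit weights
x = rng.integers(0, 16, size=10)          # 4-bit input vector
assert np.array_equal(precision_extended_vmm(x, W), x @ W)
```

In this decomposition, an 8-bit weight split at 2 bits per device occupies four devices, and the exact digital recombination is what lets low-precision analog hardware reach high overall precision.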
| Original language | English |
| --- | --- |
| Pages (from-to) | 21449-21457 |
| Number of pages | 9 |
| Journal | Nanoscale |
| Volume | 11 |
| Issue number | 44 |
| DOIs | |
| State | Published - 28 Nov 2019 |
Bibliographical note
Publisher Copyright: © 2019 The Royal Society of Chemistry.