
Issue 44, 2019

Precision-extension technique for accurate vector–matrix multiplication with a CNT transistor crossbar array

Author affiliations

Abstract

Most machine learning algorithms involve many multiply–accumulate operations, which dictate the computation time and energy required. Vector–matrix multiplications can be accelerated using resistive networks, which can be naturally implemented in a crossbar geometry by leveraging Kirchhoff's current law in a single readout step. However, practical computing tasks that require high precision are still very challenging to implement in a resistive crossbar array owing to intrinsic device variability and unavoidable crosstalk, such as sneak path currents through adjacent devices, which inherently result in low precision. Here, we experimentally demonstrate a precision-extension technique for a carbon nanotube (CNT) transistor crossbar array. High precision is attained through multiple devices operating together, each of which stores a portion of the required bit width. A 10 × 10 CNT transistor array can perform vector–matrix multiplication with high accuracy, making in-memory computing approaches attractive for high-performance computing environments.
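The shift-and-add recombination underlying the precision-extension idea can be sketched in software. The following is a minimal NumPy sketch, not the authors' implementation: the function names, the 4-bit-per-device split, and the 10 × 10 integer weight matrix are illustrative assumptions. Each slice crossbar holds only a few bits of every weight; its vector–matrix product is computed in one step, and the full-precision result is recovered digitally by weighting each partial product with the appropriate power of two.

```python
import numpy as np

def slice_matrix(W, n_slices, bits_per_slice):
    # Decompose each integer weight into low-to-high bit slices;
    # each slice would be stored on a separate device in the array.
    W = W.copy()
    slices = []
    for _ in range(n_slices):
        slices.append(W % (1 << bits_per_slice))
        W >>= bits_per_slice
    return slices

def vmm_precision_extension(x, W, n_slices=2, bits_per_slice=4):
    # Each slice crossbar performs its own one-step vector-matrix
    # product; a digital shift-and-add recombines the partial results.
    partials = [x @ Ws for Ws in slice_matrix(W, n_slices, bits_per_slice)]
    return sum(p << (i * bits_per_slice) for i, p in enumerate(partials))

rng = np.random.default_rng(0)
W = rng.integers(0, 256, size=(10, 10))   # 8-bit weights, two 4-bit slices
x = rng.integers(0, 16, size=10)
assert np.array_equal(vmm_precision_extension(x, W), x @ W)
```

Because each device only has to resolve `bits_per_slice` levels rather than the full bit width, device variability and crosstalk errors smaller than one slice level do not corrupt the recombined result.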

Graphical abstract: Precision-extension technique for accurate vector–matrix multiplication with a CNT transistor crossbar array


Supplementary files

Publication details

The article was received on 06 Aug 2019, accepted on 27 Oct 2019, and first published on 28 Oct 2019.


Article type: Paper
DOI: 10.1039/C9NR06715A
Nanoscale, 2019, 11, 21449–21457


S. Kim, Y. Lee, H. Kim and S. Choi, Precision-extension technique for accurate vector–matrix multiplication with a CNT transistor crossbar array, Nanoscale, 2019, 11, 21449. DOI: 10.1039/C9NR06715A
