SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication

Jieui Kang, Soeun Choi, Eunjin Lee, Jaehyeong Sim

Research output: Contribution to journal › Article › peer-review

Abstract

We introduce novel sparsity-aware in-DRAM matrix mapping techniques and a corresponding DRAM-based acceleration framework, termed SpDRAM, which utilizes a triple-row-activation scheme to efficiently handle sparse matrix-vector multiplication (SpMV). We found that the extent to which sparsity reduces operations depends heavily on how matrices are mapped into DRAM banks, which operate row by row. From this insight, we developed two distinct matrix mapping techniques aimed at maximizing the reduction of row operations with minimal design overhead: Output-Aware Matrix Permutation (OMP) and Zero-Aware Matrix Column Sorting (ZMCS). Additionally, we propose a Multiplication Deferring (MD) scheme that leverages the prevalent bit-level sparsity in matrix values to decrease the effective bit-width required for in-bank multiplication. Evaluation results demonstrate that the combination of our in-DRAM acceleration methods outperforms the latest DRAM-based PIM accelerator for SpMV, achieving up to a 7.54× performance increase and a 22.4× improvement in energy efficiency across a wide range of SpMV tasks.
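To illustrate the kind of mapping the abstract describes, the sketch below shows a simplified, hypothetical take on zero-aware column sorting: columns are permuted by nonzero count so that zeros cluster together, letting row-by-row bank operations skip zero-heavy regions. The function name and the exact sorting criterion are assumptions for illustration; the paper's actual ZMCS scheme may differ.

```python
import numpy as np

def zero_aware_column_sort(M):
    """Permute columns of M by descending nonzero count.

    A simplified stand-in for the ZMCS idea: clustering zeros into
    contiguous columns lets a row-wise in-DRAM engine skip more
    all-zero segments. (Illustrative only, not the paper's algorithm.)
    """
    nnz_per_col = np.count_nonzero(M, axis=0)
    order = np.argsort(-nnz_per_col, kind="stable")
    return M[:, order], order
```

Because SpMV sums over columns, applying the same permutation to the input vector preserves the result: `M_sorted @ x[order]` equals `M @ x`, so the reordering changes only the memory layout, not the computed product.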

Original language: English
Pages (from-to): 176009-176021
Number of pages: 13
Journal: IEEE Access
Volume: 12
DOIs
State: Published - 2024

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • DRAM
  • Processing-in-memory
  • sparsity
  • SpMV

