A DRAM Bandwidth-Scalable Sparse Matrix-Vector Multiplication Accelerator with 89% Bandwidth Utilization Efficiency for Large Sparse Matrix

Hyunji Kim, Eunkyung Ham, Sunyoung Park, Hana Kim, Ji Hoon Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Sparse matrix-vector multiplication (SpMV) plays a crucial role in diverse engineering applications, including scientific/engineering modeling, machine learning, and information retrieval, as depicted in Fig. 1 [5]. To store sparse matrices efficiently and minimize memory waste, the widely used COO (Coordinate) compression format stores only the coordinates (row index, column index) of the non-zero elements in the matrix along with their corresponding values. However, the memory-intensive nature of SpMV operations, combined with the irregular memory access patterns and limited data reuse resulting from the COO format, poses significant challenges for achieving high-performance implementations [6]. To assess the performance and efficiency of FPGA-based SpMV accelerators [1]-[4], which are typically optimized for specific hardware platforms, Bandwidth Utilization (BU) serves as a key metric for fair comparisons across different hardware specifications [6].
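As background for the COO format the abstract describes, the following is a minimal software sketch of COO-based SpMV (illustrative only; it is not the paper's hardware accelerator, and the function name and example matrix are invented for demonstration). It shows why COO induces irregular memory accesses: each non-zero triggers a gather from the input vector `x` at an arbitrary column and a scatter into the output `y` at an arbitrary row.

```python
import numpy as np

def spmv_coo(rows, cols, vals, x, n_rows):
    """Compute y = A @ x where A is given as COO triples (row, col, value)."""
    y = np.zeros(n_rows)
    for r, c, v in zip(rows, cols, vals):
        # Irregular, data-dependent accesses to x[c] and y[r]:
        # this is the poor-locality pattern the abstract refers to.
        y[r] += v * x[c]
    return y

# Example: 3x3 sparse matrix [[2, 0, 0], [0, 0, 3], [0, 4, 0]]
rows = [0, 1, 2]
cols = [0, 2, 1]
vals = [2.0, 3.0, 4.0]
x = np.array([1.0, 2.0, 3.0])
print(spmv_coo(rows, cols, vals, x, 3))  # [2. 9. 8.]
```

Since the matrix is traversed one non-zero at a time and `x` is indexed in no particular order, the working set of `x` is reused poorly, which is what makes SpMV bandwidth-bound rather than compute-bound.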

Original language: English
Title of host publication: 2023 IEEE Asian Solid-State Circuits Conference, A-SSCC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350330038
DOIs
State: Published - 2023
Event: 19th IEEE Asian Solid-State Circuits Conference, A-SSCC 2023 - Haikou, China
Duration: 5 Nov 2023 - 8 Nov 2023

Publication series

Name: 2023 IEEE Asian Solid-State Circuits Conference, A-SSCC 2023

Conference

Conference: 19th IEEE Asian Solid-State Circuits Conference, A-SSCC 2023
Country/Territory: China
City: Haikou
Period: 5/11/23 - 8/11/23

Bibliographical note

Publisher Copyright:
© 2023 IEEE.
