AlphaAccelerator: An Automatic Neural FPGA Accelerator Design Framework Based on GNNs

Jiho Lee, Jieui Kang, Eunjin Lee, Yejin Lee, Jaehyeong Sim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This work introduces AlphaAccelerator, a graph neural network-based approach to automatically designing FPGA accelerators that optimize the inference performance of deep neural networks. We train a pair of neural networks: an encoder that embeds a neural architecture into a latent vector, and a decoder that generates a corresponding hardware architecture from that latent vector, optimized for a given FPGA. In this way, the process of building a highly efficient neural accelerator is fully automated. Evaluation results show that hardware accelerators produced by our method achieve higher performance with lower resource consumption than existing works.
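The paper does not include code, so the sketch below is only a rough illustration of the encoder/decoder pairing the abstract describes: a GNN encoder that pools a neural-architecture graph (node features plus adjacency) into a latent vector, and a decoder that maps that vector to a set of hardware design parameters. The class names, dimensions, and the choice of hardware parameters (e.g., PE-array and buffer sizes) are assumptions for illustration, written in plain PyTorch rather than the authors' actual framework.

```python
import torch
import torch.nn as nn


class GNNEncoder(nn.Module):
    """Embeds a neural-architecture graph into a single latent vector (hypothetical sketch)."""

    def __init__(self, node_dim: int, hidden_dim: int, latent_dim: int, num_layers: int = 3):
        super().__init__()
        self.input_proj = nn.Linear(node_dim, hidden_dim)
        self.mp_layers = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)
        )
        self.readout = nn.Linear(hidden_dim, latent_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_dim) layer descriptors; adj: (num_nodes, num_nodes) connectivity
        h = torch.relu(self.input_proj(node_feats))
        for layer in self.mp_layers:
            # Simple message passing: aggregate neighbor features via the adjacency matrix,
            # with a residual connection to keep node identity.
            h = torch.relu(layer(adj @ h) + h)
        # Mean-pool node embeddings into one graph-level latent vector.
        return self.readout(h.mean(dim=0))


class HardwareDecoder(nn.Module):
    """Maps the latent vector to assumed hardware parameters (e.g., PE-array size, buffer depths)."""

    def __init__(self, latent_dim: int, hidden_dim: int, num_hw_params: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_hw_params),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Returns a continuous vector; a real flow would quantize/round these into legal FPGA configs.
        return self.mlp(z)


if __name__ == "__main__":
    # Toy usage: a 5-node architecture graph with 8-dim node features.
    encoder = GNNEncoder(node_dim=8, hidden_dim=32, latent_dim=16)
    decoder = HardwareDecoder(latent_dim=16, hidden_dim=32, num_hw_params=4)
    feats, adj = torch.randn(5, 8), torch.eye(5)
    hw_params = decoder(encoder(feats, adj))
    print(hw_params.shape)  # torch.Size([4])
```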

Original language: English
Title of host publication: Proceedings - International SoC Design Conference 2024, ISOCC 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 143-144
Number of pages: 2
ISBN (Electronic): 9798350377088
DOIs
State: Published - 2024
Event: 21st International System-on-Chip Design Conference, ISOCC 2024 - Sapporo, Japan
Duration: 19 Aug 2024 - 22 Aug 2024

Publication series

Name: Proceedings - International SoC Design Conference 2024, ISOCC 2024

Conference

Conference: 21st International System-on-Chip Design Conference, ISOCC 2024
Country/Territory: Japan
City: Sapporo
Period: 19/08/24 - 22/08/24

Bibliographical note

Publisher Copyright:
© 2024 IEEE.

Keywords

  • Design Automation
  • FPGA
  • Neural Accelerator
