Abstract
This work introduces AlphaAccelerator, a graph neural network-based approach to automatically generating FPGA accelerators that optimize the inference performance of deep neural networks. We train a pair of neural networks: an encoder that embeds a neural architecture into a latent vector, and a decoder that generates a corresponding hardware architecture from that latent vector, optimized for a target FPGA. In this way, the process of building a highly efficient neural accelerator is fully automated. Evaluation results show that hardware accelerators produced by our method achieve higher performance with lower resource consumption than existing works.
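The encoder/decoder pipeline in the abstract can be sketched in miniature. This is only an illustrative toy, assuming a simple mean-aggregation GNN encoder and a decoder that emits a few hypothetical hardware parameters (`pe_rows`, `pe_cols`, `buffer_kb`); none of these names, dimensions, or choices come from the paper itself.

```python
# Illustrative sketch of the encoder/decoder idea: a toy GNN encoder maps a
# network graph to a latent vector, and a toy decoder maps that latent to a
# hardware configuration. All names and dimensions are assumptions, not the
# paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

def encode(node_features: np.ndarray, adjacency: np.ndarray,
           w: np.ndarray) -> np.ndarray:
    """Toy GNN encoder: one round of neighbor averaging, a linear
    projection with tanh, then mean pooling into a graph-level latent."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    h = (adjacency @ node_features) / deg   # aggregate neighbor features
    h = np.tanh(h @ w)                      # nonlinear projection
    return h.mean(axis=0)                   # fixed-size latent vector

def decode(latent: np.ndarray, w: np.ndarray) -> dict:
    """Toy decoder: project the latent vector and threshold the scores
    into a few hypothetical accelerator parameters."""
    scores = latent @ w
    return {
        "pe_rows": 8 if scores[0] > 0 else 4,
        "pe_cols": 8 if scores[1] > 0 else 4,
        "buffer_kb": 64 if scores[2] > 0 else 32,
    }

# A 3-layer chain-shaped network graph; node features are
# (in_channels, out_channels, kernel_size) per layer.
feats = np.array([[3, 16, 3], [16, 32, 3], [32, 64, 3]], dtype=float)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)

z = encode(feats, adj, rng.standard_normal((3, 8)))
config = decode(z, rng.standard_normal((8, 3)))
```

In the actual system, both networks would be trained jointly so that the decoded hardware architecture is efficient for the encoded neural network on the target FPGA; this sketch only shows the data flow.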
| Original language | English |
|---|---|
| Title of host publication | Proceedings - International SoC Design Conference 2024, ISOCC 2024 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 143-144 |
| Number of pages | 2 |
| ISBN (Electronic) | 9798350377088 |
| DOIs | |
| State | Published - 2024 |
| Event | 21st International System-on-Chip Design Conference, ISOCC 2024 - Sapporo, Japan. Duration: 19 Aug 2024 → 22 Aug 2024 |
Publication series
| Name | Proceedings - International SoC Design Conference 2024, ISOCC 2024 |
|---|---|
Conference
| Conference | 21st International System-on-Chip Design Conference, ISOCC 2024 |
|---|---|
| Country/Territory | Japan |
| City | Sapporo |
| Period | 19/08/24 → 22/08/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- Design Automation
- FPGA
- Neural Accelerator