Recently, the number of mobile applications that internally execute artificial intelligence workloads has been increasing. In mobile environments, performance degradation due to memory thrashing may occur while artificial intelligence workloads train on large datasets, owing to limitations in memory capacity. In this paper, we analyze the memory reference characteristics of artificial intelligence workloads and observe that they can degrade performance by generating a large volume of I/O (input/output) operations to NAND flash storage in mobile systems, due to weak temporal locality and irregular popularity bias in memory write operations. Based on this observation, we discuss system architectures that can efficiently execute artificial intelligence workloads in mobile systems. Specifically, we adopt a small persistent memory as a write accelerator and show how efficiently the memory write operations of mobile systems can be managed. Simulation experiments show that the proposed system architecture can improve I/O time significantly compared to existing mobile systems.
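The core idea above can be illustrated with a minimal sketch: a small persistent-memory (PM) write buffer absorbs repeated writes to resident pages, so only evictions and final flushes reach NAND flash. This is not the paper's simulator; the workload model, FIFO eviction policy, and all parameters (`num_writes`, `pm_capacity`, the Pareto skew) are illustrative assumptions.

```python
import random

def simulate(num_writes=10_000, num_pages=1_000, pm_capacity=64, seed=42):
    """Compare NAND flash write traffic with and without a small
    persistent-memory (PM) write buffer. All parameters are
    illustrative, not taken from the paper."""
    rng = random.Random(seed)
    # Skewed page popularity: a rough stand-in for the irregular
    # popularity bias observed in AI workloads' write references.
    pages = [int(rng.paretovariate(1.2)) % num_pages for _ in range(num_writes)]

    # Baseline: every write goes straight to NAND flash.
    nand_writes_baseline = len(pages)

    # With PM buffer: writes to pages resident in the buffer are
    # absorbed; on eviction (simple FIFO here) the page is flushed once.
    pm = []                        # FIFO of resident page ids
    nand_writes_pm = 0
    for p in pages:
        if p in pm:
            continue               # write absorbed by persistent memory
        if len(pm) >= pm_capacity:
            pm.pop(0)              # evict the oldest resident page
            nand_writes_pm += 1    # flush the evicted page to NAND
        pm.append(p)
    nand_writes_pm += len(pm)      # final flush of still-resident pages

    return nand_writes_baseline, nand_writes_pm
```

Under a skewed write stream, the PM buffer absorbs most repeated writes to hot pages, so `nand_writes_pm` is far below the baseline; this is the mechanism by which a write accelerator reduces I/O traffic to flash storage.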
|Title of host publication: Proceedings - IEIT 2023
|Subtitle of host publication: 2023 International Conference on Electrical and Information Technology
|Publisher: Institute of Electrical and Electronics Engineers Inc.
|Number of pages
|Published: 2023
|Conference: 2023 International Conference on Electrical and Information Technology, IEIT 2023 - Malang, Indonesia
Duration: 14 Sep 2023 → 15 Sep 2023
|Bibliographical note
Publisher Copyright: © 2023 IEEE.
|Keywords
- AI workload
- Mobile system
- Machine learning
- Memory reference
- Write acceleration