Abstract
Container-based deep learning has emerged as a cutting-edge trend in modern AI applications. Containers offer several merits over traditional virtual machine platforms in terms of resource utilization and portability. Nevertheless, containers still pose challenges in executing deep learning workloads efficiently with respect to resource usage and performance. In particular, the performance of container-based deep learning is vulnerable in multi-tenant environments due to conflicting resource usage. To quantify the effect of containers on deep learning, this article captures various event traces related to deep learning performance using containers and compares them with traces captured on a host machine without containers. By analyzing the invoked system calls and various performance metrics, we quantify the effect of containers in terms of resource consumption and interference. We also explore the effects of executing multiple containers to highlight the issues that arise in multi-tenant environments. Our observations show that containerization can be a viable solution for deep learning workloads, but resources must be managed carefully to avoid excessive contention and interference, especially for storage write-back operations. We also suggest a preliminary solution that avoids the performance bottlenecks of page faults and storage write-backs by introducing an intermediate non-volatile flushing layer, which improves I/O latency by 82% on average.
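The abstract summarizes a methodology of capturing event traces and system-call activity for containerized and host-based runs of the same workload. As a minimal sketch of how such traces could be collected (not the authors' actual tooling), the snippet below wraps a training command with `strace -c` for system-call counts and `perf stat` for page-fault counters; the image name `dl-workload:latest`, the `train.py` entry point, and the output file names are placeholder assumptions.

```python
import os
import subprocess

# Placeholder training command; substitute the actual deep learning workload.
TRAIN_CMD = ["python3", "train.py", "--epochs", "1"]

def trace_on_host():
    """Collect per-syscall counts (strace -c) and page-fault counters
    (perf stat) for a run of the workload directly on the host."""
    subprocess.run(["strace", "-c", "-f", "-o", "host_syscalls.txt", *TRAIN_CMD],
                   check=True)
    subprocess.run(["perf", "stat", "-e", "page-faults,major-faults",
                    "-o", "host_pagefaults.txt", "--", *TRAIN_CMD], check=True)

def trace_in_container(image="dl-workload:latest"):
    """Run the same workload inside a container; strace needs the
    SYS_PTRACE capability, and results are written to a mounted volume."""
    subprocess.run(["docker", "run", "--rm", "--cap-add", "SYS_PTRACE",
                    "-v", f"{os.getcwd()}:/output", image,
                    "strace", "-c", "-f", "-o", "/output/container_syscalls.txt",
                    *TRAIN_CMD], check=True)

if __name__ == "__main__":
    trace_on_host()
    trace_in_container()
```

Comparing the two summaries surfaces write-heavy system calls and major-fault counts, which the abstract identifies as the main sources of contention and interference in multi-tenant settings.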
| Original language | English |
|---|---|
| Article number | 11654 |
| Journal | Applied Sciences (Switzerland) |
| Volume | 13 |
| Issue number | 21 |
| DOIs | |
| State | Published - Nov 2023 |
Bibliographical note
Publisher Copyright: © 2023 by the authors.
Keywords
- container
- deep learning
- event trace
- performance
- system resource
- virtual machine