TY - GEN
T1 - Accelerating Storage Performance with NVRAM by Considering Application's I/O Characteristics
AU - Kim, Jisun
AU - Bahn, Hyokyung
N1 - Funding Information:
This work was supported by the Basic Science Research program through the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2016R1A2B4015750). Hyokyung Bahn is the corresponding author of this paper.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/5/25
Y1 - 2018/5/25
N2 - In this paper, we present a storage performance accelerator that utilizes a small amount of fast NVRAM along with an HDD. To do so, we first characterize the storage access patterns of different application types and make two prominent observations that can be exploited to manage NVRAM storage efficiently. The first observation is that the bulk of storage I/O does not occur on a single specific partition but varies significantly across application categories. Our second observation is that more than 40% of data in storage I/Os are accessed only once, owing to the host-side buffer cache. Based on these observations, we show that storage performance acceleration can be maximized by using NVRAM as a back-end storage partition (such as a file system, journal area, or swap area) rather than as a cache device. Specifically, we propose an architecture that uses NVRAM as a swap partition, a journal partition, and a file system partition for graph visualization, database, and multimedia streaming applications, respectively. Empirical evaluation results show that our storage architecture with application-aware NVRAM allocation reduces the total I/O time by 24% on average and up to 52% compared to using NVRAM as a cache device.
AB - In this paper, we present a storage performance accelerator that utilizes a small amount of fast NVRAM along with an HDD. To do so, we first characterize the storage access patterns of different application types and make two prominent observations that can be exploited to manage NVRAM storage efficiently. The first observation is that the bulk of storage I/O does not occur on a single specific partition but varies significantly across application categories. Our second observation is that more than 40% of data in storage I/Os are accessed only once, owing to the host-side buffer cache. Based on these observations, we show that storage performance acceleration can be maximized by using NVRAM as a back-end storage partition (such as a file system, journal area, or swap area) rather than as a cache device. Specifically, we propose an architecture that uses NVRAM as a swap partition, a journal partition, and a file system partition for graph visualization, database, and multimedia streaming applications, respectively. Empirical evaluation results show that our storage architecture with application-aware NVRAM allocation reduces the total I/O time by 24% on average and up to 52% compared to using NVRAM as a cache device.
KW - hybrid storage
KW - I/O
KW - NVRAM
KW - storage cache
KW - storage system
UR - http://www.scopus.com/inward/record.url?scp=85048470342&partnerID=8YFLogxK
U2 - 10.1109/BigComp.2018.00063
DO - 10.1109/BigComp.2018.00063
M3 - Conference contribution
AN - SCOPUS:85048470342
T3 - Proceedings - 2018 IEEE International Conference on Big Data and Smart Computing, BigComp 2018
SP - 383
EP - 389
BT - Proceedings - 2018 IEEE International Conference on Big Data and Smart Computing, BigComp 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Conference on Big Data and Smart Computing, BigComp 2018
Y2 - 15 January 2018 through 18 January 2018
ER -