NVRAM is being considered as an additional memory/storage component of future computer systems. This paper investigates how much performance gain can be obtained by adding NVRAM to the memory/storage hierarchy of computer systems. Specifically, we present a storage system accelerator that utilizes a small NVRAM cache. To this end, we formally define the NVRAM caching problem and analyze the storage access patterns that can be exploited in managing the NVRAM cache. Our analysis shows that more than 40% of the data in storage I/Os are written only once, due to periodic flushes triggered from the host side. Based on this observation, we show that storage performance acceleration can be maximized by using NVRAM as a selective storage cache device. Empirical evaluation shows that our storage architecture with a flush-aware NVRAM cache reduces total I/O time by 26% on average, and by up to 62%, compared to a system that does not use our scheme.
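The flush-aware selective caching idea summarized above can be illustrated with a minimal sketch: a write-back cache that bypasses blocks whose first write is triggered by a host-side flush, since such data is likely written only once. All names and the admission rule below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a flush-aware selective NVRAM cache.
# Assumption: single-write, flush-triggered blocks should bypass the
# cache and go straight to the backing store.
from collections import OrderedDict

class FlushAwareCache:
    """LRU cache that declines to admit single-write flush data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # block -> data held in NVRAM
        self.write_counts = {}       # block -> number of writes seen

    def write(self, block, data, from_flush=False):
        self.write_counts[block] = self.write_counts.get(block, 0) + 1
        # Selective admission: a first-time write caused by a periodic
        # host-side flush is likely written only once, so send it
        # straight to the backing store instead of polluting the cache.
        if from_flush and self.write_counts[block] == 1:
            return "bypass"          # write-through to disk
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict LRU block
        return "cached"

cache = FlushAwareCache(capacity=2)
print(cache.write(1, "a", from_flush=True))  # bypass: single flush write
print(cache.write(2, "b"))                   # cached: regular write
print(cache.write(1, "a", from_flush=True))  # cached: block rewritten
```

Keeping such single-write data out of the small NVRAM cache preserves its capacity for blocks that are rewritten or re-read, which is how a selective policy can outperform caching all writes indiscriminately.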