In this paper, we present a storage performance accelerator that utilizes a small amount of fast NVRAM alongside an HDD. To do so, we first characterize the storage access patterns of different application types and make two prominent observations that can be exploited to manage NVRAM storage efficiently. The first observation is that the bulk of storage I/O is not concentrated on a single specific partition; instead, the dominant partition varies significantly across application categories. The second observation is that more than 40% of the data involved in storage I/O is accessed only once, owing to the host-side buffer cache. Based on these observations, we show that storage performance acceleration can be maximized by using NVRAM as a back-end storage partition (such as a file system, journal, or swap partition) rather than as a cache device. Specifically, we propose an architecture that uses NVRAM as the swap, journal, and file system partition for graph visualization, database, and multimedia streaming applications, respectively. Empirical evaluation results show that our storage architecture with application-aware NVRAM allocation reduces the total I/O time by 24% on average, and by up to 52%, compared to using NVRAM as a cache device.
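
The application-aware allocation described above can be sketched as a simple policy table that maps each application category to the NVRAM partition role that benefits it most. The category names, the fallback to a cache role, and the function name below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of the application-aware NVRAM allocation policy:
# each application category is mapped to the back-end partition role
# (swap, journal, or file system) that accelerates it most.
# The mapping follows the paper's high-level assignment; the names and
# the fallback behavior are assumptions for illustration.

NVRAM_ROLE = {
    "graph_visualization": "swap",         # memory-intensive workload
    "database": "journal",                 # journal writes dominate I/O
    "multimedia_streaming": "file_system", # large sequential file accesses
}

def allocate_nvram(app_category: str) -> str:
    """Return the NVRAM partition role for an application category,
    falling back to a cache role for uncharacterized workloads."""
    return NVRAM_ROLE.get(app_category, "cache")
```

A policy table like this keeps the allocation decision explicit and easy to extend as additional application categories are characterized.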