Recently, Storage-Class Memory (SCM) has emerged as a new memory/storage medium, and legacy memory subsystems optimized for DRAM-HDD architectures need to be redesigned. In this paper, we revisit memory subsystems that use SCM as the underlying storage device and discuss the challenges and implications of such systems. Specifically, we analyze two memory layers influenced by fast storage devices: the buffer cache and the paging system. In the case of the buffer cache, our analysis shows that caching a file block pays off only when the block from SCM storage is accessed at least twice after entering the cache. This contrasts with the HDD case, in which even a single access to a cached block is beneficial. In the case of the paging system, we find that a small page size improves data access latency even though it does not reduce the page fault ratio. However, we further observe that a small page size increases the TLB miss ratio, which in turn lengthens the address translation latency. Thus, an appropriate page size should be determined by considering the trade-off between address translation latency and data access latency under SCM storage. We anticipate that the results of this paper will be helpful in designing memory subsystems with ever faster SCM storage devices.
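The break-even reasoning behind the buffer-cache result can be sketched with a simple cost model. All latency figures below are purely illustrative assumptions (not the paper's measurements): caching a block saves the storage-versus-cache latency gap on each subsequent hit, but incurs a one-time insertion/management overhead. When storage is nearly as fast as the cache, as with SCM, a single re-access may not amortize that overhead.

```python
# Illustrative break-even model for buffer caching over fast storage.
# All latency values are hypothetical and chosen only to demonstrate
# the qualitative difference between HDD-backed and SCM-backed caches.

def caching_pays_off(storage_latency_us: float,
                     cache_hit_latency_us: float,
                     caching_overhead_us: float,
                     hits_after_insert: int) -> bool:
    """Return True if caching a block is a net win.

    Each hit after insertion saves (storage - cache hit) latency;
    caching itself costs a one-time overhead for copying the block
    into the cache and managing its metadata.
    """
    saving = hits_after_insert * (storage_latency_us - cache_hit_latency_us)
    return saving > caching_overhead_us

# HDD-like storage (~10 ms access): even a single re-access in the
# cache dwarfs the caching overhead, so caching always pays off.
print(caching_pays_off(10_000, 1, 150, hits_after_insert=1))  # True

# SCM-like storage (close to DRAM speed): one re-access does not
# cover the overhead, but two re-accesses do.
print(caching_pays_off(100, 1, 150, hits_after_insert=1))     # False
print(caching_pays_off(100, 1, 150, hits_after_insert=2))     # True
```

Under these assumed numbers the model reproduces the abstract's claim: with SCM, a cached block must be accessed at least twice after insertion for caching to be worthwhile, whereas with HDD a single access suffices.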