HugeMap: Optimizing Memory-Mapped I/O with Huge Pages for Fast Storage

2020 
Memory-mapped I/O (mmio) is emerging as a viable alternative to explicit I/O through system calls for accessing directly-attached fast storage devices. Mmio removes the need for costly software lookups in the DRAM I/O cache on cache hits, since hits are served in hardware by the virtual-memory mechanism. In this work we present HugeMap, a custom mmio path in the Linux kernel that uses huge pages for file-backed mappings to accelerate applications with sequential I/O access patterns or large I/O operations. HugeMap uses huge pages to reduce CPU processing in the kernel I/O path compared to regular mmap. We explore the benefits and trade-offs of huge pages in HugeMap using microbenchmarks, IOR, and an in-house persistent key-value store designed for mmio. Our experiments show up to \(3.7\times\) higher throughput and up to \(4.76\times\) lower system time compared to regular page configurations.
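HugeMap itself is a kernel modification and its internals are not shown in the abstract. For context, the following is a minimal user-space sketch of the regular mmio baseline the paper compares against: a file is mapped with mmap and scanned sequentially, with advisory madvise hints. Note that MADV_HUGEPAGE is only a hint, and stock kernels honor it for file-backed mappings in limited configurations, which is precisely the gap a custom huge-page mmio path targets. All calls used here are standard Linux APIs; error handling is kept minimal for brevity.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file; page faults pull data into the page cache
     * and the hardware page table serves subsequent accesses. */
    unsigned char *buf = mmap(NULL, st.st_size, PROT_READ,
                              MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Advisory hints: declare the sequential pattern and request
     * transparent huge pages. Whether the kernel actually backs a
     * file mapping with huge pages depends on its configuration. */
    madvise(buf, st.st_size, MADV_SEQUENTIAL);
    madvise(buf, st.st_size, MADV_HUGEPAGE);

    /* Sequential scan: with regular pages every 4 KB faults
     * separately; with huge pages one fault covers 2 MB, which is
     * where per-page kernel CPU work is saved. */
    unsigned long long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += buf[i];
    printf("checksum: %llu\n", sum);

    munmap(buf, st.st_size);
    close(fd);
    return 0;
}
```

The scan illustrates why sequential workloads benefit most: the cost of faulting and mapping pages is amortized over far fewer, larger faults when huge pages back the mapping.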