VESPA: VIPT Enhancements for Superpage Accesses.

2017 
L1 caches are critical to the performance of modern computer systems. Their design involves a delicate balance between fast lookups, high hit rates, low access energy, and simplicity of implementation. Unfortunately, constraints imposed by virtual memory make it difficult to satisfy all of these attributes today. Specifically, the modern staple of supporting virtual-indexing and physical-tagging (VIPT) for parallel TLB-L1 lookups means that L1 caches are usually grown with greater associativity rather than more sets. This compromises performance, degrading access times without significantly boosting hit rates, and increases access energy. In response, we propose VIPT Enhancements for SuperPage Accesses, or VESPA. VESPA side-steps the traditional problems of VIPT by leveraging the increasing ubiquity of superpages; since superpages have more page offset bits, they can accommodate L1 cache organizations with more sets than baseline pages can. VESPA dynamically adapts to any OS distribution of page sizes to operate L1 caches with good access times, hit rates, and energy, for both single- and multi-threaded workloads. Since the hardware changes are modest, and there are no OS or application changes, VESPA is readily implementable. By superpages (also called huge or large pages) we refer to any page size supported by the architecture that is larger than the baseline page size.
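To make the indexing constraint behind this argument concrete, the following C sketch (not from the paper; the 32 KiB capacity and 64-byte line size are illustrative assumptions) computes how many set-index bits fit inside the page offset for several page sizes, and hence the minimum associativity a VIPT cache of that capacity must adopt. With 4 KiB pages only 6 index bits are available, which is what pushes designers toward higher associativity; 2 MiB or 1 GiB superpages relax the limit dramatically.

```c
#include <stdio.h>

/* Illustrative sketch of the VIPT indexing constraint: the set-index and
 * block-offset bits must lie within the page offset so that the virtual
 * and physical indices are guaranteed to be identical. */

static unsigned log2u(unsigned long long x) {
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void) {
    const unsigned long long line_size  = 64;        /* bytes per line (assumed)  */
    const unsigned long long cache_size = 32 * 1024; /* 32 KiB L1 (assumed)       */
    const unsigned long long page_sizes[] = { 4ULL << 10, 2ULL << 20, 1ULL << 30 };

    for (int i = 0; i < 3; i++) {
        unsigned long long page = page_sizes[i];
        /* Index bits available without virtual/physical divergence. */
        unsigned index_bits = log2u(page) - log2u(line_size);
        unsigned long long max_sets = 1ULL << index_bits;
        /* Ways needed to reach the target capacity with at most that many sets. */
        unsigned long long min_ways = cache_size / (max_sets * line_size);
        if (min_ways == 0) min_ways = 1;
        printf("page %10llu B: up to %8llu sets -> at least %llu-way for %llu KiB\n",
               page, max_sets, min_ways, cache_size >> 10);
    }
    return 0;
}
```

Under these assumed parameters, a 4 KiB page caps a 32 KiB VIPT L1 at 64 sets and therefore forces 8-way associativity, whereas a 2 MiB superpage leaves ample index bits for a lower-associativity, more-set organization, which is the headroom VESPA exploits.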