Cache Where You Want! Reconciling Predictability and Coherent Caching

2019 
Real-time and cyber-physical systems must interact with and respond to their physical environment in predictable time. While multicore platforms provide enormous computational power and throughput, they also introduce new sources of unpredictability. Large fluctuations in the latency to access data shared between multiple cores are an important contributor to overall execution-time variability. In addition to the temporal unpredictability introduced by caching, parallel applications that share data across multiple cores pay additional latency overheads due to data coherence. Analyzing the impact of data coherence on the worst-case execution time of real-time applications is challenging because manufacturers reveal only scarce implementation details. This paper presents application-level control over the level of the cache hierarchy at which data is cached. The rationale is that by caching shared data only in the shared cache, private caches are bypassed, so the access latency to data present in caches becomes independent of its coherence state. We discuss the existing architectural support as well as the hardware and OS modifications required to support the proposed cacheability control. We evaluate the system on an architectural simulator and show that the worst-case execution time of a single memory write request is reduced by 52%.
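As a rough illustration of how such per-region cacheability control might surface to user code, the sketch below assumes a hypothetical madvise-style advice value (MADV_LLC_ONLY is invented here for illustration; the abstract does not specify the paper's actual interface or the page-table attributes involved) that asks a modified OS to restrict caching of a shared buffer to the shared last-level cache, so that accesses bypass the private caches.

/* Hypothetical sketch, not the paper's actual API: mark a shared buffer
 * as "shared-cache-only" so that accesses bypass private L1/L2 caches
 * and complete in the shared last-level cache, making latency
 * independent of the line's coherence state. MADV_LLC_ONLY is an
 * invented advice value standing in for whatever OS interface the
 * paper's modified kernel would expose. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_LLC_ONLY
#define MADV_LLC_ONLY 0x100  /* hypothetical: cache this region in shared LLC only */
#endif

int main(void) {
    size_t len = 4096;
    /* Allocate a page-aligned region intended to be shared between cores. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ask the (hypothetically modified) OS to install page attributes
     * restricting caching of this region to the shared cache level.
     * On a stock kernel this advice value is unknown and fails with EINVAL. */
    if (madvise(buf, len, MADV_LLC_ONLY) != 0)
        perror("madvise (hypothetical MADV_LLC_ONLY)");

    /* Writes to buf would then complete with predictable latency,
     * independent of which core last touched the data. */
    memset(buf, 0, len);
    munmap(buf, len);
    return 0;
}

Keying the control to memory regions rather than individual instructions fits the OS-level mechanism the abstract describes: cacheability becomes a property of the mapping, so no recompilation of memory accesses is needed.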