Reconciling Predictability and Coherent Caching

2020 
Real-time systems must respond to their physical environment within predictable time bounds. While multi-core platforms provide substantial computational power and throughput, they also introduce new sources of unpredictability. For parallel applications that share data across multiple cores, the overhead of maintaining data coherence is a major cause of execution-time variability. This source of variability can be eliminated by application-level control that limits how data is cached at different levels of the cache hierarchy, removing the need for explicit coherence machinery for the selected data. We show that such control can reduce the worst-case write request latency on shared data by 52%. Benchmark evaluations show that the proposed technique has minimal impact on average performance.
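The sketch below is purely illustrative and is not the paper's mechanism: it shows one way application code on x86 can limit caching of selected shared data, by writing it with non-temporal (streaming) stores so the writes bypass the writer's caches and do not generate coherence invalidations for other cores' cached copies. The buffer name, sizes, and the producer/consumer split are assumptions for the example.

```c
/* Illustrative sketch, assuming an x86 platform with SSE2: selected shared
 * data is written with non-temporal stores so it is not allocated in the
 * writer's cache hierarchy, avoiding coherence traffic for that data. */
#include <immintrin.h>   /* _mm_stream_si32, _mm_sfence */
#include <stdio.h>

#define N 1024

/* Hypothetical shared buffer, produced on one core and consumed on another. */
static int shared_buf[N] __attribute__((aligned(64)));

/* Producer: streaming stores bypass the cache, so no cache lines for
 * shared_buf need to be invalidated on the consumer's core. */
static void produce(const int *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        _mm_stream_si32(&shared_buf[i], src[i]);
    _mm_sfence();   /* ensure the streaming stores are globally visible */
}

int main(void)
{
    int src[N];
    for (int i = 0; i < N; i++)
        src[i] = i;

    produce(src, N);

    /* Consumer side: ordinary loads read the data from memory. */
    long sum = 0;
    for (int i = 0; i < N; i++)
        sum += shared_buf[i];
    printf("sum = %ld\n", sum);
    return 0;
}
```

Compile with `gcc -O2 example.c`; on x86-64 the required SSE2 support is part of the baseline ISA. The paper's actual approach operates at the level of per-data cache-allocation control rather than streaming stores.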