Hiding memory latency using fixed priority scheduling

2014 
Modern embedded platforms contain a variety of physical resources, such as caches, interconnects, and main memory, which the processor must access during task execution. We argue that in real-time systems, task execution on the processor and accesses to physical resources should be co-scheduled to predictably hide resource access latency. In this work we focus on co-scheduling task execution and accesses to main memory in order to hide DRAM access latency. Since modern systems provide DMA controllers that operate independently of the processor, memory transfer latency can be hidden by scheduling DMA transfers in parallel with processor execution. The main contribution of this paper is a dynamic scheduling algorithm for a set of sporadic real-time tasks that efficiently co-schedules processor and DMA execution to hide memory transfer latency. The proposed algorithm can be applied to either uniprocessor or partitioned multiprocessor systems, and we demonstrate that it significantly improves processor utilization compared to existing scratchpad and cache management systems.
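
To make the idea concrete, the following is a minimal sketch, not the paper's algorithm, of how DMA transfers can be overlapped with processor execution under fixed-priority selection: while the current task runs out of one scratchpad partition, the next task's memory is staged into the other partition, so its transfer latency is hidden behind computation. The task set, the memcpy-based "DMA engine", and all identifiers are illustrative assumptions.

/*
 * Illustrative sketch: fixed-priority loop that overlaps a DMA load of the
 * next task's memory with execution of the current task, using two
 * scratchpad partitions in ping-pong fashion.
 */
#include <stdio.h>
#include <string.h>

#define SPM_HALF 1024                 /* bytes per scratchpad partition */
#define NTASKS   3

typedef struct {
    int         prio;                 /* lower value = higher priority    */
    int         footprint;            /* bytes to stage from main memory  */
    const char *name;
} task_t;

static unsigned char dram[NTASKS][SPM_HALF];  /* task images in DRAM        */
static unsigned char spm[2][SPM_HALF];        /* two scratchpad partitions  */

/* Stand-in for programming the DMA controller: a real port would start an
 * asynchronous transfer here and check for completion before dispatching
 * the task that was staged.                                               */
static void dma_load(int part, const task_t *tasks, int idx)
{
    memcpy(spm[part], dram[idx], (size_t)tasks[idx].footprint);
    printf("DMA: staged %s (%d B) into SPM partition %d\n",
           tasks[idx].name, tasks[idx].footprint, part);
}

/* Fixed-priority selection: highest-priority pending task, or -1 if none. */
static int pick_next(const int *pending, const task_t *tasks)
{
    int best = -1;
    for (int i = 0; i < NTASKS; i++)
        if (pending[i] && (best < 0 || tasks[i].prio < tasks[best].prio))
            best = i;
    return best;
}

int main(void)
{
    task_t tasks[NTASKS] = {
        { 1, 800, "tau_1" }, { 2, 600, "tau_2" }, { 3, 400, "tau_3" }
    };
    int pending[NTASKS] = { 1, 1, 1 };
    int part = 0;

    /* Keep the DMA engine one step ahead of the processor: while the
     * current task executes from one partition, the next task's memory is
     * transferred into the other, hiding its access latency.              */
    int cur = pick_next(pending, tasks);
    dma_load(part, tasks, cur);

    while (cur >= 0) {
        pending[cur] = 0;
        int nxt = pick_next(pending, tasks);
        if (nxt >= 0)
            dma_load(1 - part, tasks, nxt);   /* overlaps with execution */
        printf("CPU: running %s from SPM partition %d\n",
               tasks[cur].name, part);
        part = 1 - part;
        cur = nxt;
    }
    return 0;
}

In a real system the dma_load call would be non-blocking and the scheduler would verify completion of the staged transfer before dispatching that task; the sequential memcpy here only stands in for the transfer so the sketch stays self-contained.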