Energy-Performance Considerations for Data Offloading to FPGA-Based Accelerators Over PCIe

2018 
Modern data centers increasingly employ FPGA-based heterogeneous acceleration platforms because of their potential for sustained performance and energy-efficiency gains. FPGAs today offer more hardware parallelism than GPUs or CPUs, while C-like programming environments shorten development time to something close to software cycles. In this work, we address the limitations and overheads of accessing and transferring data to accelerators over common CPU-accelerator interconnects such as PCIe. We present three FPGA accelerator dispatching methods for streaming applications (e.g., multimedia, vision computing). The first uses zero-copy data transfers and on-chip scratchpad memory (SPM) for energy efficiency; the second also uses zero-copy transfers, but with copy engines shared among accelerator instances and local external memory. The third uses the processor's memory management unit (MMU) to obtain the physical addresses of user pages and performs scatter-gather data transfers with SPM. Although all three techniques scale well and relieve the processor of control overheads through integrated schedulers, the first method delivers the most energy-efficient acceleration for streaming applications.