Latency Critical Operation in Network Processors

2019 
This paper presents recent advancements made on the Advanced-IO-Processor (AIOP), a Network Processor (NPU) architecture designed by NXP Semiconductors. The base architecture consists of multi-tasking PowerPC processor cores combined with hardware accelerators for common packet-processing functions. Each core is equipped with dedicated hardware for rapid task scheduling and switching on every hardware accelerator call, thus providing very high throughput. A hardware pre-emption controller snoops on accelerator completions and sends task pre-emption requests to the cores. This reduces the latency of real-time tasks by quickly switching to the high-priority task on the core without any performance penalty. A novel concept of priority thresholding is further used to avoid latency uncertainty on lower-priority tasks. The paper shows that these features make the AIOP architecture very effective in handling the conflicting requirements of high throughput and low latency for next-generation wireless applications like WiFi (802.11ax) and 5G. In the presence of frequent pre-emptions, throughput reduces by only 3% on AIOP, compared to 25% on optimized present-day NPU architectures. Further, the absolute throughput and latency numbers are 2X better.
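The pre-emption policy described above can be illustrated with a minimal sketch. This is not the AIOP hardware logic, which the abstract does not detail; it assumes a higher number means higher priority, and that a single fixed threshold gates which tasks may pre-empt at all (so tasks at or below the threshold run until their next accelerator call, keeping their latency predictable). The function name and threshold value are illustrative.

```python
# Illustrative model of priority-thresholded pre-emption (assumptions:
# larger number = higher priority; one global threshold gates pre-emption).
PREEMPT_THRESHOLD = 5  # assumed value; the paper does not specify one


def should_preempt(running_prio: int, pending_prio: int,
                   threshold: int = PREEMPT_THRESHOLD) -> bool:
    """Return True if the pending task may pre-empt the running task.

    A pending task pre-empts only if it both exceeds the threshold and
    outranks the running task; otherwise the running task keeps the core
    until its next accelerator call (a natural switch point).
    """
    return pending_prio > threshold and pending_prio > running_prio
```

Under this model, frequent arrivals of sub-threshold tasks never disturb a running task, which is one plausible reading of how thresholding removes latency uncertainty for lower-priority work.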