Memory Network Architecture for Packet Processing in Network Functions Virtualization

2021 
Packet processing tasks in network functions issue memory requests to inspect packet information, update packet content, and search databases, which requires high-performance memory systems. While network functions virtualization (NFV) is expected to reduce the cost of network infrastructure by replacing dedicated network equipment with commercial off-the-shelf (COTS) hardware and virtual network functions (VNFs), VNF performance suffers from poor memory systems that lack function-dedicated memories and memory parallelism in COTS servers. Several works have presented parallel memories for packet processing based on three-dimensional (3D)-stacked dynamic random access memories (DRAMs), but the data transfer latency between the processors and memories is not considered. Although processing-in-memory (PIM) architectures offload part of the processing to memory to reduce data transfers, the majority of the processing remains in the processors, which still requires data transfers for multiple packet processing tasks. This paper proposes a memory network architecture using 3D-stacked DRAMs to increase throughput and reduce accumulated latency when there are multiple packet processing tasks. Packets that enter the memory network are processed at each 3D-stacked DRAM without data transfers between the processors and memories. The evaluation results show that the proposed architecture increases throughput and reduces the accumulated latency of memory accesses and data transfers under multiple packet processing tasks, compared to a conventional architecture with 3D-stacked DRAM-based parallel memory, where every memory access requires a data transfer.
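The latency argument in the abstract can be sketched with a simple cost model: in the conventional architecture each task pays a processor-memory round-trip transfer plus a memory access, while in the memory network a packet enters once, is processed at each 3D-stacked DRAM it traverses, and exits once. The latency constants below are hypothetical placeholders, not figures from the paper.

```python
# Illustrative accumulated-latency model; all numbers are assumed, not measured.
TRANSFER_NS = 50  # one processor<->memory data transfer (assumed)
ACCESS_NS = 30    # one 3D-stacked DRAM access (assumed)

def conventional_latency(num_tasks: int) -> int:
    """Every task moves data between processor and memory:
    a round-trip transfer plus the memory access itself."""
    return num_tasks * (2 * TRANSFER_NS + ACCESS_NS)

def memory_network_latency(num_tasks: int) -> int:
    """The packet enters the memory network once, receives processing
    at each 3D-stacked DRAM along its path, and exits once."""
    return 2 * TRANSFER_NS + num_tasks * ACCESS_NS

for n in (1, 4, 8):
    print(n, conventional_latency(n), memory_network_latency(n))
```

Under this toy model the two architectures tie for a single task, and the gap widens as the number of packet processing tasks grows, which mirrors the abstract's claim that the benefit appears with multiple tasks.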