FPGA Acceleration of Approximate KNN Indexing on High-Dimensional Vectors

2019 
Accurate and efficient Machine Learning algorithms are of vital importance to many problems, especially classification and clustering tasks. One of the most important algorithms used for similarity search is the K-Nearest Neighbor (KNN) algorithm, which is widely adopted for predictive analysis, text categorization, image recognition, and related tasks, but comes at a high computational cost. Large companies that process big data in modern data centers combine this technique with algorithm-level approximations in order to serve critical workloads every second. However, high-dimensional nearest neighbor queries add a further significant computation and energy overhead. In this paper, we deploy a hardware-accelerated approximate KNN algorithm built upon the FAISS framework (Facebook Artificial Intelligence Similarity Search) using FPGA-OpenCL platforms. The FPGA architecture in this framework addresses the problem of vector indexing when training and adding large-scale high-dimensional data. The proposed solution uses an in-memory FPGA format that outperforms other high-performance systems in terms of speed and energy efficiency. The experiments were carried out on a Xilinx Alveo U200 FPGA, achieving up to 115× accelerator-only speed-up over a single-core CPU and 2.4× end-to-end system speed-up over a 36-thread Xeon CPU. In addition, the performance per watt of the design was 4.1× that of the same CPU and 1.4× that of a Kepler-class GPU.
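The indexing workflow targeted here follows the standard FAISS train/add/search pattern. Below is a minimal CPU-side sketch of that pattern, assuming an IVF (inverted-file) index; the dimensionality, list count, and dataset sizes are illustrative assumptions and this is not the authors' accelerated implementation.

```python
# Minimal sketch of the FAISS train/add/search indexing workflow (CPU reference).
# All sizes below are illustrative assumptions, not values from the paper.
import numpy as np
import faiss

d = 128            # vector dimensionality (assumed)
nlist = 1024       # number of inverted lists / coarse centroids (assumed)
n_train = 100_000  # training-set size (assumed)
n_add = 1_000_000  # database size (assumed)

rng = np.random.default_rng(0)
train = rng.random((n_train, d), dtype=np.float32)
base = rng.random((n_add, d), dtype=np.float32)

quantizer = faiss.IndexFlatL2(d)                 # coarse quantizer over centroids
index = faiss.IndexIVFFlat(quantizer, d, nlist)  # inverted-file index

index.train(train)   # "train" stage: k-means clustering of the training vectors
index.add(base)      # "add" stage: assign database vectors to inverted lists

index.nprobe = 8     # approximate search: probe only a few lists per query
queries = rng.random((10, d), dtype=np.float32)
distances, neighbors = index.search(queries, 5)  # 5 nearest neighbors per query
print(neighbors)
```

The train and add stages, which dominate indexing time for large high-dimensional datasets, are the portions the paper offloads to the FPGA.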