Approximate Similarity Search with FAISS Framework Using FPGAs on the Cloud

2019 
Machine learning algorithms, such as classification and clustering techniques, have gained significant traction in recent years because they are vital to many real-world problems. The k-nearest neighbor (KNN) algorithm is widely used in text categorization, predictive analysis, data mining, and related areas, but comes at the cost of high computation. In the era of big data, modern data centers adopt this algorithm with approximate techniques to compute demanding workloads every day. However, high-dimensional nearest neighbor queries on billion-scale datasets still incur significant computational and energy overhead. In this paper, we describe and implement a novel design that addresses this problem: a hardware-accelerated approximate KNN algorithm built upon the FAISS framework (Facebook Artificial Intelligence Similarity Search) using FPGA-OpenCL platforms on the cloud. This is an original deployment of an FPGA architecture on this framework, and it also shows how the persistent index build times for similarity search on large-scale inputs can be handled in hardware and can even outperform other high-performance systems. The experiments were run on an AWS F1 cloud instance, achieving a 98× FPGA accelerator speed-up over a single-core CPU and a 2.1× end-to-end system speed-up over a 36-thread Xeon CPU. In addition, the performance per watt of the design was 3.5× that of the same CPU and 1.2× that of a Kepler-class GPU.
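The abstract does not include implementation details, but the inverted-file (IVF) indexing idea that FAISS popularized, and which the paper accelerates, can be illustrated with a minimal NumPy sketch. This is an assumed, simplified illustration (random centroids instead of trained k-means, squared-L2 distance only), not the authors' accelerated design or the FAISS API itself: the database is coarsely partitioned into `nlist` cells, and each query scans only the `nprobe` closest cells instead of the whole dataset.

```python
import numpy as np

def ivf_knn(xb, xq, k=4, nlist=16, nprobe=4, seed=0):
    """Approximate k-NN via an inverted-file (IVF) index sketch:
    partition the database around coarse centroids, then search
    only the nprobe cells closest to each query vector."""
    rng = np.random.default_rng(seed)
    # Coarse quantizer: pick nlist database points as cell centroids
    # (real FAISS trains these with k-means).
    centroids = xb[rng.choice(len(xb), nlist, replace=False)]
    # Assign every database vector to its nearest centroid,
    # forming one inverted list per cell.
    assign = np.argmin(((xb[:, None] - centroids) ** 2).sum(-1), axis=1)
    lists = [np.flatnonzero(assign == c) for c in range(nlist)]

    ids = np.empty((len(xq), k), dtype=int)
    for qi, q in enumerate(xq):
        # Probe only the nprobe closest cells (accuracy/speed trade-off).
        cells = np.argsort(((centroids - q) ** 2).sum(-1))[:nprobe]
        cand = np.concatenate([lists[c] for c in cells])
        d2 = ((xb[cand] - q) ** 2).sum(-1)
        ids[qi] = cand[np.argsort(d2)[:k]]  # k best among candidates
    return ids

rng = np.random.default_rng(1)
xb = rng.random((2000, 32), dtype=np.float32)  # database vectors
xq = rng.random((5, 32), dtype=np.float32)     # query vectors
I = ivf_knn(xb, xq)
print(I.shape)  # one row of k candidate neighbor ids per query
```

Raising `nprobe` toward `nlist` recovers exact search at full cost; the paper's contribution is moving this distance-computation-heavy inner loop onto FPGA hardware.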