Parallelization and Optimization of SIFT on GPU Using CUDA

2013 
The scale-invariant feature transform (SIFT) is widely used to extract features from images, and accelerating SIFT-based algorithms on the GPU is therefore very attractive. In this paper, we present several parallel computing strategies and implement and optimize the SIFT algorithm on the GPU using the CUDA programming model. Each stage of SIFT is analyzed in detail to choose a suitable parallelization strategy. Building on the elementary CUDA-SIFT and the CUDA architecture, we optimize the implementation in several respects to speed up CUDA-SIFT. Experimental results demonstrate that the optimized implementation is 2.5 times faster than before optimization, and our CUDA-based SIFT runs at 20 frames per second on most 1280 x 960 test images. On 1920 x 1440 images, it achieves 11 frames per second on average, which is about 60 times faster than the CPU implementation of SIFT. In short, our implementation attains appropriate accuracy and higher efficiency than CPU implementations and other GPU implementations, which we attribute to our dedicated optimization strategies.
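As a rough illustration of the kind of per-pixel parallelism the abstract describes (one thread per pixel in each SIFT stage), the following is a minimal CUDA sketch of the difference-of-Gaussians (DoG) step, not the paper's actual code. The kernel name, buffer names, image size, and launch configuration are assumptions made for the example; a real pipeline would first upload Gaussian-blurred octave images instead of zero-filled placeholders.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each thread subtracts one pixel of two adjacent Gaussian scales,
// producing one pixel of the DoG image used later for keypoint detection.
__global__ void dogKernel(const float* blurA, const float* blurB,
                          float* dog, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int idx = y * width + x;
        dog[idx] = blurB[idx] - blurA[idx];
    }
}

int main()
{
    // Resolution matching the paper's 1280 x 960 test images (assumed layout: row-major float).
    const int width = 1280, height = 960;
    const size_t bytes = size_t(width) * height * sizeof(float);

    float *dA, *dB, *dDog;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dDog, bytes);
    cudaMemset(dA, 0, bytes);   // placeholder data; a real pipeline would
    cudaMemset(dB, 0, bytes);   // copy in the blurred images here

    // 2-D tiling: 16x16 thread blocks, one thread per pixel.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    dogKernel<<<grid, block>>>(dA, dB, dDog, width, height);
    cudaDeviceSynchronize();

    printf("DoG kernel finished: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(dA); cudaFree(dB); cudaFree(dDog);
    return 0;
}
```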