Database Processing by Linear Regression on GPU

2011 
Parallel programming is increasingly important for achieving high performance, i.e., reducing the time required for data computation. The CPU (Central Processing Unit) imposes constraints on the degree of parallelism that can be achieved, whereas data parallelism can be obtained through SIMD (Single Instruction, Multiple Data) execution on a General-Purpose Graphics Processing Unit (GPGPU) operating alongside the CPU. Database processing is an active research area, and operations on large content-based databases, e.g., reading large images or datasets, are time-consuming. In this implementation, a Linear Regression algorithm is parallelized for database processing on images using the Compute Unified Device Architecture (CUDA) programming model, which employs multithreading. Linear Regression is an algorithm for predicting, forecasting, and mining large amounts of data, and implementing it with CUDA can yield high performance. Here, Linear Regression is implemented on both the Graphics Processing Unit (GPU) and the CPU to process an image database for data prediction, by computing the covariance matrix and its eigenvalues and eigenvectors; the strongest eigenvector gives the best-fit line. The computation time is compared across the two implementations.
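The abstract gives no code, but the approach it describes (covariance matrix, eigenvalues/eigenvectors, strongest eigenvector as the best-fit line) can be illustrated with a minimal CUDA sketch. This is an assumed reconstruction, not the authors' implementation: a kernel accumulates the five sums needed for the 2x2 covariance matrix of (x, y) points in parallel, and the host then solves for the dominant eigenvalue and eigenvector analytically. All kernel and variable names here are hypothetical.

```cuda
// Sketch (assumption, not the paper's code): parallel accumulation of the
// sums needed for a 2x2 covariance matrix, followed by a host-side
// computation of the dominant eigenvector, whose direction is the
// best-fit line of the (x, y) data.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void accumulateSums(const float *x, const float *y, int n,
                               float *sums /* sx, sy, sxx, syy, sxy */) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        atomicAdd(&sums[0], x[i]);
        atomicAdd(&sums[1], y[i]);
        atomicAdd(&sums[2], x[i] * x[i]);
        atomicAdd(&sums[3], y[i] * y[i]);
        atomicAdd(&sums[4], x[i] * y[i]);
    }
}

int main() {
    const int n = 1 << 20;
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) {            // synthetic data: y ~ 2x + noise
        hx[i] = (float)i / n;
        hy[i] = 2.0f * hx[i] + 0.01f * ((i % 7) - 3);
    }

    float *dx, *dy, *dsums;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMalloc(&dsums, 5 * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(dsums, 0, 5 * sizeof(float));

    accumulateSums<<<(n + 255) / 256, 256>>>(dx, dy, n, dsums);

    float s[5];
    cudaMemcpy(s, dsums, 5 * sizeof(float), cudaMemcpyDeviceToHost);

    // Covariance matrix [cxx cxy; cxy cyy] from the accumulated sums.
    float mx = s[0] / n, my = s[1] / n;
    float cxx = s[2] / n - mx * mx;
    float cyy = s[3] / n - my * my;
    float cxy = s[4] / n - mx * my;

    // Larger eigenvalue of the symmetric 2x2 covariance matrix.
    float tr  = cxx + cyy;
    float det = cxx * cyy - cxy * cxy;
    float lambda = 0.5f * (tr + sqrtf(tr * tr - 4.0f * det));

    // Corresponding (strongest) eigenvector; its slope is the best-fit line.
    float vx = cxy, vy = lambda - cxx;       // valid when cxy != 0
    printf("best-fit slope ~ %f\n", vy / vx);

    cudaFree(dx); cudaFree(dy); cudaFree(dsums);
    delete[] hx; delete[] hy;
    return 0;
}
```

For clarity the kernel uses one atomicAdd per element; a performance-oriented version would first reduce within each thread block in shared memory and issue a single atomic per block, which is the usual pattern for this kind of accumulation on the GPU.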