Leveraging the Hadoop Framework to Develop a Duplication Detector and Analysis Using MapReduce, Hive and Pig

2014 
The volume of digital data continues to grow exponentially in the age of the Internet of Things. As these datasets outgrow datacenter capacity, attention must shift to stored-data reduction methods, particularly for NoSQL databases, since traditional structured storage systems struggle to provide the storage, throughput and computational power needed to capture, store, manage and analyze this deluge of data. Conventional deduplication systems retain a single copy of redundant data on disk to save space, but they offer no way to keep selected copies intentionally and eliminate only the unwanted ones. This paper leverages the Hadoop framework to design and develop a duplication detection system that detects multiple copies of the same data at the file level, before transmission. The datasets are then tuned for better performance and analysed using MapReduce, Hive and Pig.
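The abstract does not include the system's implementation, but file-level duplicate detection of the kind it describes is commonly realized by hashing each file's contents and grouping identical digests. Below is a minimal sketch of that idea as a Hadoop MapReduce job in Java, assuming the job input is a text file listing one HDFS path per line; the class names DuplicateDetector, HashMapper and DupReducer, and the choice of MD5, are illustrative assumptions, not the authors' code.

import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DuplicateDetector {

  // Mapper: each input line is an HDFS file path; emit (checksum, path).
  public static class HashMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      try {
        Path file = new Path(value.toString().trim());
        FileSystem fs = file.getFileSystem(context.getConfiguration());
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        try (InputStream in = fs.open(file)) {
          int n;
          while ((n = in.read(buf)) > 0) {
            md5.update(buf, 0, n);
          }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) hex.append(String.format("%02x", b));
        context.write(new Text(hex.toString()), value);
      } catch (Exception e) {
        // Skip unreadable or missing paths rather than failing the job.
      }
    }
  }

  // Reducer: paths sharing a checksum are duplicates; report groups of size > 1.
  public static class DupReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text checksum, Iterable<Text> paths, Context context)
        throws IOException, InterruptedException {
      List<String> copies = new ArrayList<>();
      for (Text p : paths) copies.add(p.toString());
      if (copies.size() > 1) {
        context.write(checksum, new Text(String.join(",", copies)));
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "duplicate-detector");
    job.setJarByClass(DuplicateDetector.class);
    job.setMapperClass(HashMapper.class);
    job.setReducerClass(DupReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // list of file paths
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // duplicate report
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Run with, for example, hadoop jar dupdetector.jar DuplicateDetector /input/filelist.txt /output/dups (paths hypothetical); each output line pairs a checksum with the comma-separated paths of its duplicate copies, which is the kind of report that duplicates could be filtered or retained against before transmission.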