Optimization of Distributed Crawler under Hadoop

2015 
The web crawler is a key component of data acquisition on the World Wide Web. Facing the explosive growth of data, traditional crawling methods must be optimized to meet current needs. This paper introduces the process and the model of the current distributed crawler based on Hadoop, analyzes the factors that influence crawling efficiency, and points out defects in the parameter settings, the URL distribution, and the operating model of the distributed crawler. Working efficiency is first improved by optimizing the parameter configuration, and the gain is further enhanced by modifying the operating model. Experiments indicate that the working efficiency of the optimized distributed crawler increases by 23%, which achieves the expected result.
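The abstract does not enumerate which Hadoop parameters were tuned. As a hedged illustration only, the sketch below shows how such job-level settings are typically adjusted through the standard Hadoop 2.x MapReduce API; the parameter values and the job name are illustrative assumptions, not taken from the paper.

```java
// Illustrative sketch: tuning Hadoop job parameters for a crawl fetch phase.
// The concrete values below are hypothetical; the paper's actual parameter
// set is not listed in the abstract.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CrawlJobConfig {
    public static Job buildFetchJob() throws Exception {
        Configuration conf = new Configuration();

        // Match per-map memory and reducer count to the cluster's capacity.
        conf.setInt("mapreduce.map.memory.mb", 2048);
        conf.setInt("mapreduce.job.reduces", 8);

        // Disable speculative execution: duplicate fetch attempts waste
        // bandwidth and may hit the same hosts twice.
        conf.setBoolean("mapreduce.map.speculative", false);

        // Allow long-running fetch tasks (slow hosts) before the framework
        // kills them; value is in milliseconds.
        conf.setInt("mapreduce.task.timeout", 10 * 60 * 1000);

        Job job = Job.getInstance(conf, "crawler-fetch");
        job.setJarByClass(CrawlJobConfig.class);
        return job;
    }
}
```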