Multiple-signal duplicate detection for search evaluation

2007 
We consider the problem of duplicate document detection for search evaluation. Given a query and a small number of web results for that query, we show how to detect duplicate web documents with precision ~0.91 and recall ~0.77. In contrast, Charikar's algorithm, designed for duplicate detection in an indexing pipeline, achieves precision ~0.91 but with a recall of ~0.58. Our improvement in recall while maintaining high precision comes from combining three ideas. First, because we are only concerned with duplicate detection among results for the same query, the number of pairwise comparisons is small. Therefore we can afford to compute multiple pairwise signals for each pair of documents. A model learned with standard machine-learning techniques improves recall to ~0.68 with precision ~0.90. Second, most duplicate detection has focused on text analysis of the HTML contents of a document. For some web pages, the HTML is not a good indicator of the final contents of the page. We use extended fetching techniques to fill in frames and execute JavaScript. Including signals based on our richer fetches further improves the recall to ~0.75 and the precision to ~0.91. Finally, we also explore using signals based on the query. Comparing contextual snippets based on the richer fetches improves the recall to ~0.77. We show that the overall accuracy of this final model approaches that of human judges.
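To make the multiple-signal idea concrete, the following is a minimal sketch, not the paper's implementation: it computes a few pairwise signals (shingle overlap, a Charikar-style simhash distance, and query-contextual snippet overlap) for every pair of results returned for one query, producing feature vectors that a learned classifier could then score. All function names, the feature choices, and the result-dictionary fields are illustrative assumptions.

```python
# Sketch only: pairwise duplicate-detection signals for the results of a single query.
import hashlib
from itertools import combinations
from typing import List


def shingles(text: str, k: int = 5) -> set:
    """k-word shingles of a document's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def simhash(features: set, bits: int = 64) -> int:
    """Charikar-style simhash fingerprint over a feature set."""
    v = [0] * bits
    for f in features:
        h = int(hashlib.md5(f.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)


def hamming(x: int, y: int, bits: int = 64) -> int:
    """Number of differing bits between two fingerprints."""
    return bin((x ^ y) & ((1 << bits) - 1)).count("1")


def pair_signals(doc_a: str, doc_b: str, snip_a: str, snip_b: str) -> List[float]:
    """One feature vector per candidate pair: text, fingerprint, and snippet signals."""
    sa, sb = shingles(doc_a), shingles(doc_b)
    return [
        jaccard(sa, sb),                                    # shingle overlap of rendered text
        1.0 - hamming(simhash(sa), simhash(sb)) / 64.0,     # simhash closeness
        jaccard(shingles(snip_a, 3), shingles(snip_b, 3)),  # query-contextual snippet overlap
    ]


def candidate_pairs(results: List[dict]):
    """Because only results for the same query are compared, the number of pairs
    is small, so every pair can afford a full feature vector for a trained model."""
    for a, b in combinations(results, 2):
        yield a["url"], b["url"], pair_signals(a["text"], b["text"],
                                               a["snippet"], b["snippet"])
```

The "text" fields here would come from the richer fetches described above (frames filled in, JavaScript executed), and the resulting feature vectors would be fed to a standard classifier rather than thresholded individually.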