Finding pages on the unarchived Web
Presented at the IEEE/ACM Joint Conference on Digital Libraries, London, United Kingdom
Web archives preserve the fast-changing Web, yet are highly incomplete due to crawling restrictions, limits on crawling depth and frequency, or restrictive selection policies: most of the Web is unarchived and therefore lost to posterity. In this paper, we propose an approach to recover significant parts of the unarchived Web by reconstructing descriptions of these pages based on links and anchors in the set of crawled pages, and experiment with this approach on the Dutch Web archive. Our main findings are threefold. First, the crawled Web contains evidence of a remarkable number of unarchived pages and websites, potentially dramatically increasing the coverage of the Web archive. Second, the link and anchor descriptions have a highly skewed distribution: popular pages such as home pages have more terms, but the richness tapers off quickly. Third, the succinct representation is generally rich enough to uniquely identify pages on the unarchived Web: in a known-item search setting we can retrieve these pages within the first ranks on average.
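The core idea of the abstract, building descriptions of uncrawled pages from the anchor text of links inside the crawled set, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the data layout (a dict of crawled URLs to `(target_url, anchor_text)` outlinks) and the overlap-based ranking are assumptions for the example.

```python
from collections import defaultdict

def build_descriptions(crawled_pages):
    """Aggregate anchor text pointing at URLs absent from the crawl.

    crawled_pages: dict mapping each crawled URL to a list of
    (target_url, anchor_text) outlinks found on that page.
    Returns a dict mapping each *unarchived* target URL to the list
    of anchor-text terms that describe it.
    """
    archived = set(crawled_pages)
    descriptions = defaultdict(list)
    for source, outlinks in crawled_pages.items():
        for target, anchor in outlinks:
            if target not in archived:  # target was never crawled itself
                descriptions[target].extend(anchor.lower().split())
    return dict(descriptions)

def known_item_search(query, descriptions):
    """Rank unarchived URLs by term overlap with the query (a toy scorer)."""
    q = set(query.lower().split())
    scored = [(len(q & set(terms)), url) for url, terms in descriptions.items()]
    return [url for score, url in sorted(scored, reverse=True) if score > 0]
```

Pages linked from several crawled sources accumulate richer descriptions, which mirrors the skewed distribution the paper reports: well-linked pages such as home pages gather many terms, sparsely linked pages only a few.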
|Web Archives, Web Archiving, Web Crawlers, Anchor Text, Link Evidence, Information Retrieval|
|Information (theme 2)|
|Web Archives Retrieval Tools|
|IEEE/ACM Joint Conference on Digital Libraries|
|Organisation|Human-centered Data Analysis|
Kamps, J., Ben-David, A., Huurdeman, H. C., de Vries, A. P., & Samar, T. (2014). Finding pages on the unarchived Web. In Proceedings of the IEEE/ACM Joint Conference on Digital Libraries.