WebCite

WebCite is “an academic project, hosted at the University of Toronto / University Health Network’s Centre for Global eHealth Innovation,” whose aim is to cache web pages cited by academic writers and to provide not merely a stable URL but long-term, if not permanent, access to the cited material.

The writer can initiate the spidering and caching either on a cite-by-cite basis or by submitting a whole document, in which case every page it cites is archived at once. Publishers can join the consortium and submit their books and journals for caching in anticipation of their being cited by authors in the future. At the moment something like 100 medical journals have joined WebCite.
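For the curious, the cite-by-cite request is simple enough that it could be scripted. Here’s a rough Python sketch; the archive endpoint and the url/email parameter names are my assumptions about how WebCite’s public archiving form works, not a documented API:

```python
# A rough sketch of cite-by-cite archiving. The endpoint and the
# "url"/"email" parameter names are assumptions about WebCite's
# public archiving form, not a documented API.
from urllib.parse import urlencode
from urllib.request import urlopen

def request_archive(cited_url: str, author_email: str) -> str:
    """Ask WebCite to spider and cache a single cited page."""
    params = urlencode({"url": cited_url, "email": author_email})
    with urlopen(f"http://www.webcitation.org/archive?{params}") as response:
        # The confirmation page should contain the stable URL of the
        # freshly cached snapshot, suitable for use in a citation.
        return response.read().decode("utf-8", errors="replace")

# e.g. request_archive("http://example.com/cited-page", "author@example.org")
```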

Of course, where publishers hold the copyright, their joining the consortium does away with copyright problems. But I’d say that WebCite’s argument (see their FAQ page) about copyright where an author has a cited page archived is a bit of whistling past the graveyard: it relies on the fact that Google and the Internet Archive cache web pages, and on the U.S. case of Field v. Google. It seems to me, though without having given it the thought it deserves, that this explicit and targeted copying, far from supporting a fair use exemption or implied licence as WebCite argues, would simply be a direct violation of copyright.

WebCite respects no-cache / no-archive tags and robots exclusion policies, and will, like Google, remove archived material upon request. I don’t know that this last bit of “negative billing” cuts it, though.
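To make the first half of that concrete: “respecting” those signals means the spider checks robots.txt before fetching a page and then looks for a robots meta tag in the page itself. A minimal sketch using Python’s standard library, where the user-agent string and the particular directives honoured are my assumptions:

```python
# A minimal sketch of the pre-archiving check an archiving spider
# might make: honour robots.txt, then look for a noarchive/no-cache
# robots meta tag. The user-agent string is illustrative only.
import re
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_archive(page_url: str, page_html: str, agent: str = "WebCiteBot") -> bool:
    """Return False if robots.txt or a robots meta tag forbids caching."""
    parts = urlparse(page_url)
    robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()  # fetch and parse the site's robots exclusion policy
    if not robots.can_fetch(agent, page_url):
        return False
    # Crude check for <meta name="robots" content="... noarchive ...">
    # (assumes the name attribute precedes content, as it usually does).
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        page_html, re.IGNORECASE)
    if meta and re.search(r"noarchive|no-?cache", meta.group(1), re.IGNORECASE):
        return False
    return True
```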

What’s the more considered view of Slawyers?
