Google, Belgium, Pipes and Everything
A court of first instance in Belgium has ruled against Google in a suit by a group of newspapers operating in that country who alleged that Google’s News feature violated their copyright. The NY Times has the story, as does the EUobserver. I’ve been unable to find the decision online (anyone?), but it would seem that the complaint has to do with Google’s reproduction on its news pages of the titles of newspaper articles along with short excerpts.
It’s being reported in some places that this is a ruling against linking pure and simple, which doesn’t seem accurate to me. At any rate, this is a mere rock in a river that’s headed towards ever greater free use of others’ content on the web, and the eddy it creates will soon be smoothed out in the coming flood. Just look at Pipes, the subject of a Slaw post yesterday by Steve Matthews: the whole idea is to mash up what you can find into new forms. If it’s not locked down it will be available for re-use. Take a look at this piece on Read/WriteWeb that shows how with Pipes (and whatever else will be coming along) it’s possible to treat the entire web as a giant database able to be remixed.
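The "giant database" idea is concrete enough to sketch. Below is a minimal, hypothetical example of the kind of remixing Pipes does: pulling item titles and links out of an RSS 2.0 feed using only Python's standard library. The feed content and URLs are invented for illustration, not taken from any real publication.

```python
# Sketch of the "web as database" idea: extract (title, link) pairs
# from an RSS 2.0 feed using only the standard library.
import xml.etree.ElementTree as ET

# A made-up sample feed; a real mashup would fetch this over HTTP.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Paper</title>
    <item>
      <title>Court rules on news aggregation</title>
      <link>http://example.com/story1</link>
    </item>
    <item>
      <title>Search engines and copyright</title>
      <link>http://example.com/story2</link>
    </item>
  </channel>
</rss>"""

def extract_items(feed_xml):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in extract_items(SAMPLE_FEED):
    print(title, "->", link)
```

Once a feed is reduced to plain tuples like this, it can be filtered, merged with other feeds, or re-published in a new form, which is exactly the kind of re-use the post is describing.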

I’ve just found the judgment. It’s in French and available at: http://www.copiepresse.be/copiepresse_google.pdf. The scanning job is pretty poor, but it seems to be legible even so.
I get what you’re saying, Simon, and I’m sure there will be some companies that will attempt to lock things down, but is RSS/Pipes the best example?
Syndication efforts are typically intended to be public resources, where the company wants to have its content read, republished or repurposed. How different is a mashup that utilizes RSS from a corporate approved API?
Not sure which side you’re coming from, Steve. And I guess I wasn’t at all clear: I think it’s silly to put stuff on the web and imagine that it won’t get re-used — silly in the sense that the technology has moved past static pages, which are, after all, modeled on print. I see Pipes as one of the early tools for letting the public mix any content it can find. Tracing the results back to a copyrighted source and suing the mixer, or the users of the mix, won’t be any more successful than the music industry’s attempts to stop copying.
I wonder if the Google lawsuit wasn’t at its core about something other than copyright and the money to be made that way… I’m not sure what, but it seems surprising that newspapers would imagine they could make appreciably more money by keeping their stuff off Google News.
Thanks Simon. I get it now. … What I was referencing was the distinction between A) scraping/spidering and caching content (Google’s approach), and B) the use of public RSS feeds.
You’ve summed up my opinion pretty well. RSS should be considered a consent technology – if you publish a feed, you consent.
On the Google cache front though, I’ve questioned before whether Google has consent from web publishers.
There’s a very sensible editorial on the decision by Struan Robertson of the UK firm Pinsent Masons (though not written on behalf of the firm), who edits the firm’s e-news site, Out-Law.com:
http://www.out-law.com/page-7759
He says, reasonably enough, that it’s about the money: the Belgian newspapers want some of Google’s money. But there are plenty of well-known, easy ways to prevent indexing and caching, had they chosen to use them.
They (the newspapers) seem in this to be as short-sighted as the book publishers that reject Google Books – they are trying to cut off an enterprise that will create much more knowledge of and interest in their products, because they want some of Google’s profits as well.
That’s aside from the legal technicality, though: in principle these days, if you don’t use robots.txt or anti-caching technology, you’re as good as consenting to the processes those simple devices prevent.
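For readers unfamiliar with the convention referred to here: the Robots Exclusion Protocol is just a plain-text file served at a site’s root. A minimal sketch (the path is a made-up example, not any actual newspaper’s configuration):

```text
# robots.txt, served at http://example.com/robots.txt
# Ask all crawlers to skip the article archive entirely.
User-agent: *
Disallow: /archives/
```

Publishers who want their pages indexed but not cached can instead use a per-page meta tag such as `<meta name="robots" content="noarchive">`, which Google honours by omitting the cached copy. The point stands either way: the opt-out mechanisms are trivial to deploy.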
Those who program their bots to ignore such technical bars raise interesting questions of trespass to virtual chattels – a whole other issue…