Google cache raises copyright concerns: The NY Times is raising a fuss over the fact that Google’s caching feature allows people to view archived articles that they would have to pay for over at the Times’ site.
“Through a caching feature on the popular Google search site, people can sometimes call up snapshots of archived stories at NYTimes.com and other registration-only sites. The practice has proved a boon for readers hoping to track down Web pages that are no longer accessible at the original source, for whatever reason. But the feature has recently been putting Google at odds with some unhappy publishers.”
Where do you draw the line here, though? What’s to stop me as a reader from copying New York Times articles to my local machine while they’re in their seven-day “free” period? If an article is free for even one second, someone could copy it locally. If it later stops being free, am I in trouble for keeping a copy I made while it was? Dave Winer ran into this same problem — many, many blog links to Times articles were breaking after seven days — and struck a deal with the New York Times over their archive policy because of it.
Also, did you know you can stop Google from caching your site by adding a “noarchive” robots META tag to your pages?
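If you want to try it yourself, the tag goes in the `<head>` of each page you don’t want cached — something like this (Google honors the `noarchive` value; other values shown here, like `index` and `follow`, are just the usual defaults for illustration):

```html
<html>
  <head>
    <title>My Article</title>
    <!-- Tells search engine crawlers not to keep a cached copy of this page -->
    <meta name="robots" content="noarchive">
  </head>
  <body>
    <p>Article text that should not show up in Google’s cache.</p>
  </body>
</html>
```

With that in place, the page can still appear in search results — it just won’t have a “Cached” link pointing at Google’s archived snapshot.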