Posted by Cliff on Monday June 30, 2003 @02:58PM
from the if-google-and-wayback-can-do-it... dept.

Dyer asks: "I used to run several high-traffic anonymous surfing sites, and if I wasn't getting emailed by a lawyer telling me to block someone's site from being accessed, I was being woken at 2am by a telephone call from a crazy person yelling, sometimes swearing, at me under the impression that my site had copied theirs and that the copy resided on my server, when in actuality it was being fetched by my server at that instant and relayed to the user. This is my point: how do services like Archive.org and Google's cache get away with what they're doing? You can call their services whatever you like, but it doesn't change the fact that they are copying people's websites and saving them onto their servers for everyone to access."

Well, it should be legal/allowed. If you don't want it read and archived, don't put it on the Web.

You know, I've been wondering about Java/Shockwave games. Certainly most kids would love a CD full of those games, and many companies have many different games online which mostly disappear a few months later.

Is anybody archiving these? Do we need to start?

Would the companies object?

You can play The Hitchhiker's Guide to the Galaxy [douglasadams.com] on Douglas Adams' web site. As it happens, if you know what you're doing you can also download the .z5 file and play it offline in any Z-machine interpreter. Would the copyright owners object to that? I own that Infocom 33-game collection and all 5 books; the reason the game wasn't included in the collection is copyright hassles. Am I "entitled" to play it offline?

This ties in to today's "is ROM collecting wrong" story, except in this case you're actually offered the games, under mostly unclear terms.

Go into your /. user settings (preferences) and, on the comments page, set 'Display Link domains' to 'Always show link domains'.
It does at least give you the chance to think before you click. I mean, what do you think a link to a site named Goatsex is going to reveal?
Happy hunting.

How can I remove my site's pages from the Wayback Machine?
The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection. By placing a simple robots.txt file on your Web server, you can exclude your site from being crawled as well as exclude any historical pages from the Wayback Machine.
See our exclusion policy.
You can find exclusion directions at exclude.php. If you cannot place the robots.txt file, opt not to, or have further questions, email wayback2@archive.org.
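As a concrete sketch of the exclusion mechanism described above, a minimal robots.txt that blocks the Archive's crawler looks like this (`ia_archiver` is the user-agent name the Archive documents; blocking the whole site with `/` is illustrative - you can disallow narrower paths instead):

```
# Keep the Internet Archive's crawler out of the entire site
User-agent: ia_archiver
Disallow: /
```

The file must be served as /robots.txt at the root of the host for crawlers to find it.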

In other words, by NOT including a robots.txt file, you are implicitly granting them permission to cache your content. Also, the content is cached as it was published, complete with the appropriate markings, and only publicly accessible content is cached, so you'd be hard pressed to argue there is any economic harm from the caching, which means there would likely be no damages from a successful copyright suit, which means a copyright suit would be pretty damned unlikely.

No, but you do have a door. People are free to drive by your house and take a picture of it, or anything else out in public view. That reminds me of the girl that lost the lawsuit [yahoo.com] against Girls Gone Wild. If someone doesn't want their web page to be archived or cached, they can "put up a door" by using "robots.txt". If they really don't want to let the public at large see it, lock the door by protecting the content with a password. If they want to make absolutely sure no one sees it that they don't spe

As you correctly point out in another post, copyright law has an exception for caching Internet content.

That may be true in some places, I don't know. Regardless, if the archive continues after the original site is taken down, it is no longer a cache, it is an outright copy.

And yes, this could be damaging. To give a close-to-home example, consider a case where a site gets /.ed so only a few people can see the real content. If that site is then updated in some critical way, the numerous caches all over

Regardless, if the archive continues after the original site is taken down, it is no longer a cache, it is an outright copy.

I'm not sure what you mean by "an outright copy." It's always an outright copy. But if the archive continues after the original site is taken down, it's still a cache.

If that site is then updated in some critical way, the numerous caches all over the web won't be (at least not immediately, and it is clearly unreasonable to expect anyone publishing a website to notify them all).

I'm not sure what you mean by "an outright copy." It's always an outright copy. But if the archive continues after the original site is taken down, it's still a cache.

My point is that the term "cache", as commonly used in computing, implies that the cached material is identical to the original but faster to access. If the original is no longer there, or has changed, then you are no longer caching it; you are simply keeping a copy of the old data.

I gave references in two other posts, but here it is again, for the US at least. http://www4.law.cornell.edu/uscode/17/512.html It's part of the DMCA. In part, " the service provider described in paragraph (1) complies with rules concerning the refreshing, reloading, or other updating of the material when specified by the person making the material available online in accordance with a generally accepted industry standard data communications protocol for the system or network through which that person ma

OK, I've read the relevant parts of that. I fail to see how a web cache "...complies with rules concerning the refreshing, reloading, or other updating of the material when specified by the person making the material available online in accordance with a generally accepted industry standard data communications protocol for the system or network..."

The industry standard is that when you request information from a web site, you get the current version. (As I noted elsewhere, browser caching is quite differe

The industry standard is that when you request information from a web site, you get the current version.

Sometimes. If the current version isn't available (for instance because you're offline), then you get whatever is in the cache.

I'm mainly thinking of google, here. Google isn't intentionally displaying old content, and they take it down after a rather short period of time. Presumably they adhere to the "Expires" header and other relevant information. Certainly they adhere to the robots exclusion a

If the current version isn't available (for instance because you're offline), then you get whatever is in the cache.

Sorry, but I don't think this is reasonable. That caching is part of the browser software, and as I've noted repeatedly, that is a different issue to the web caches we're discussing here.

Nothing in the HTTP spec, or in any other relevant Internet standards, provides for any caching of old content and supplying it when a straightforward HTTP request for a file is sent. You get the current

Nothing in the HTTP spec, or in any other relevant Internet standards, provides for any caching of old content and supplying it when a straightforward HTTP request for a file is sent.

I don't think that's what the spec says. It says that you have to explicitly warn the end user when semantic transparency is relaxed by a cache. It only says that the request must be explicit when it is relaxed by the client or the origin server.
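For reference, this is what HTTP/1.1 (RFC 2616, section 13) actually requires of a cache that relaxes semantic transparency, e.g. by serving a stale entry: it must attach a Warning header to the response. Roughly like this (host name and dates illustrative):

```
HTTP/1.1 200 OK
Date: Mon, 30 Jun 2003 14:58:00 GMT
Warning: 110 cache.example.net "Response is stale"
Content-Type: text/html
```

Warning code 110 ("Response is stale") is the one defined for exactly this case; the end user is warned, but the stale content is still delivered.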

We're all above the law. The government derives its power from the people, not the other way around.

Those two statements aren't in any way equivalent.

Just to be clear, I do not volunteer my time just to toot my horn or make a profit. I have given thousands of hours over the past 5-6 years helping out in very technical forums (not just writing amusing anecdotes on Slashdot for my own entertainment, which is hardly the same thing) and never made a penny from it. I post here anonymously, and do you see m

Ah, but they are. When the government tries to force an unjust law on the people, the people are under no obligation to accept it. Now sometimes it's in our own best interests to follow it anyway, just as we would sometimes give up certain freedoms under other gunpoint situations, but it's not always the best idea.

Just to be clear, I do not volunteer my time just to toot my horn or make a profit. I have given thousands of hours over the past 5-6 years

When the government tries to force an unjust law on the people, the people are under no obligation to accept it.

And who is to say that it is unjust? You? Copyright is a well-established legal principle, and there are very good reasons for it. The fact that you don't like it doesn't make it unjust.

Perhaps you have a better idea for how to make laws? Or should we dispense with them altogether, since no doubt someone thinks every illegal thing should be legal, typically those who want to break the law an

OK, I give up. You're worse than RMS. You persistently ignore the positives of things you don't like, you exaggerate the negatives, you put words into people's mouths, you ignore the wording of the law or just dismiss it outright when you happen not to agree with it, and your arguments are illogical, emotional and utterly without objective merit. The best you can do is attack figures of speech and twist what I've written to give it meanings I did not, so as to set up a range of straw men at which you can sh

In a copyright case, the courts first establish whether infringement has taken place, and this is determined irrespective of economic issues. It is determined purely on issues of subsistence, ownership, duration, etc., in terms of the statutory provisions and the existing case law. Only then are exceptions (such as fair use, and specific exemptions, say, for public archives and libraries) considered.

Read more carefully. The implications of my posting: the cachers are providing a mechanism to have your work excluded at your request, providing you with a non-court means to remedy the caching if you choose. Since it is all publicly available information anyway, the potential economic damage is minimal. There are usually two remedies provided to a plaintiff after a lawsuit over copyright: the violator is ordered to stop violating, and the violator is ordered to provide monetary compensation. In this case,

In this case, the first remedy is provided by the potential violator...

Yes, but it places the burden in the wrong place and so is not likely to be considered an adequate remedy by the courts. More properly, the violator should be seeking permission prior to re-distributing the content, rather than essentially saying to the copyright holder "Stop me before I copy again!"

I'm not sure I think that caching sites should be subject to traditional copyright law--it has some nasty implications for anyone who cu

In other words, by your NOT including a robots.txt file, you are implicitly granting them permission to cache your content.

Riiiiight. See you in court.

As I've just posted elsewhere, it is quite feasible that a site owner could be damaged if caches maintain information after the original site has been changed or taken down. For example, if updated information is placed on the original, this leaves the "cached" versions out of date and misleading anyone who reads them thinking they're seeing a perfect c

As I've just posted elsewhere, it is quite feasible that a site owner could be damaged if caches maintain information after the original site has been changed or taken down.

Damaged in what way? Aren't there archives of newspapers, journals, and magazines? And if time-sensitive information is present on a website, does the public have a right to see what was previously there? Websites can get away with a lot of instant censorship that way - you can check out this site [thememoryhole.org] for an archive designed in response

Damaged in what way? Aren't there archives of newspapers, journals, and magazines? And if time-sensitive information is present on a website, does the public have a right to see what was previously there?

If I put up information on a web site, for free, as a volunteer, then the public has no rights whatsoever, either legally or morally. Why the hell should they? They didn't do anything to earn them.

If you have a specific example related to this problem, I would love to hear it.

If I put up information on a web site, for free, as a volunteer, then the public has no rights whatsoever, either legally or morally. Why the hell should they? They didn't do anything to earn them.

The fact that the public has a right to anything you produce is the reason that the public domain exists. Copyright is instituted by governments to keep creative people in a position to keep creating - but when you're dead, the information should go somewhere to enhance the public good. If the human race is to

I think you're sort-of-right. The mere fact that a search engine gives you this facility to opt out does not create an implicit licence to use content by itself: there is an old principle of law that silence does not mean consent. If this were not the case, I could, e.g., write to you offering an opportunity to engage in a Nigerian money-laundering scam with the rider that "if you don't reply to this I will take it to mean you have accepted my offer" and then enforce that through contract law if you didn't

On the day of 9/11, I began to think that maybe a lot of things would be online that would disappear on the next update, forever. We tend to think of 1880 newspaper clippings as being perishable, not online media, but the opposite is true. So all day on 9/11 I archived news sites and about two hundred blogs using "wget -p".

Over the next week I archived some 4,600 blogs. They've kind of been sitting around waiting for me to weed through and organize. I've also been wgetting 30 or so large news sites' front page every 15 minutes or so on the hunch that I'll grab something emerging even if I'm AFK. Well...what can I do with this data?
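For anyone wanting to try the same thing, here is a minimal sketch of that kind of periodic front-page snapshot, meant to be run from cron every 15 minutes or so (the URL, paths, and interval are all illustrative; the wget command is echoed as a dry run - drop the echo to actually fetch):

```shell
#!/bin/sh
# Snapshot a news site's front page into a timestamped directory.
url="http://news.example.com/"
stamp=$(date +%Y%m%d-%H%M)
dir="snapshots/$stamp"
mkdir -p "$dir"
# -p also fetches page requisites (images, stylesheets) so the saved page
# renders offline; -q keeps cron mail quiet.
echo wget -q -p -P "$dir" "$url"
```

Each run lands in its own snapshots/YYYYMMDD-HHMM directory, so nothing is overwritten and the capture times are self-documenting.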

The answer(s) to this question will definitely be of use to me. Thanks for asking it. Slash, thanks for posting it.

1) Burn it onto DVD. But I don't know which format is likely to survive the longest!

2) Hand it over in whatever form you can to your nearest major University and let them work out how to archive it. If they can find a way to do so reliably, it will be very valuable to their Faculty of History in a hundred years or so!

If you can do both, then great - you could distribute it to several Universities. Be sure to include a few European Unis that have already been around for at

Paper can potentially last a long time (the US Constitution is still intact, for example). However, the average paper archive the size of a CD (which would physically be quite substantial) would require enough upkeep to make the cost of storing and maintaining it much greater than the cost of burning a new copy of the CD every ten or twenty years.

Sure. Ummm. I'm going in for surgery today so it'll have to wait until tomorrow, but drop me an email at slash@php.us and I'll get you a URL where you can just dl it. If you have dialup or some such, let me know and I'll snailmail you a CD with the data.

"Sure. Ummm. I'm going in for surgery today so it'll have to wait until tomorrow, but drop me an email at slash@php.us and I'll get you a URL where you can just dl it. If you have dialup or some such, let me know and I'll snailmail you a CD with the data."

I say try and set up a server for all this. You personally may not have the money, but I'm betting that your local university would be willing to help. Now, if they don't, you could get people to donate money to help you set up a server for all that stuff. I'd love to see some of it, since its got to be an interesting cross-section of post-9/11 America and such. As others have said, the Smithsonian may be interested too, but giving everyone access to your archives would be a great public service. I know I'm

No, my post was in response to people who get angry when their sites are mirrored. They seem to feel that even though they are distributing the content on an international network with millions of users, they can still control the information, solely through litigation, and that is not how things SHOULD be.

It might be useful to note that the archive servers are located outside the US, and that they act on requests to have information and websites removed from their archive. (IIRC). I would state that the Archive serves a compelling public interest, both in the sense of free speech, and in the basic idea of keeping a history or record of the internet. The archive is a museum of sorts.

Google, on the other hand, is gathering data for its search engine, and, of necessity, must store what essentially amounts to a copy of each web page in order to provide this service. If site owners do not want their data in Google, they simply use robots.txt, and Google does not spider, cache, or store any data from the site. However, the site owner also denies themselves the ability to be listed, for 'free', in Google's search pages. This could be thought of as the cost of being listed.

So I don't think either of those two situations has any problem defending itself. An anonymizer could also be seen as providing a useful, protected service. An anonymizer is nothing more than a proxy service, and many ISPs use proxies now, not to mention caches and many other tools that store website information or meta-information without notifying anyone or requesting explicit permission to do so - they request implicit permission by sending a GET command.
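That implicit-permission argument rests on nothing more exotic than the request every browser, proxy, and crawler sends. A crawler's fetch differs only in the User-Agent header it presents (path, host, and agent string here are illustrative):

```
GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: ia_archiver
```

A server that answers this request with a 200 and the page body has, on this view, consented to that copy being made; a server that wants to refuse can check the User-Agent (or robots.txt compliance) and respond differently.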

Actually, the Internet Archive's main Wayback Machine [archive.org] servers are located in a co-location center in San Francisco, so it's not correct to say they're located outside the US. There is a mirror [bibalex.org] of the Archive's web content at the Library of Alexandria in Egypt, however - maybe that's what you're thinking of?

In any case, the Archive's work with the Library Of Congress and, increasingly, national libraries who want to archive the Web content of their countries, proves that the establishment also thinks Web a

Please note that robots.txt affects whether Google crawls various parts of your website at all. To prevent your pages from being stored in the Google cache (even if they are searchable using Google), you need to specify the META tag
<META NAME="GOOGLEBOT" CONTENT="NOARCHIVE"> in each and every of your pages.
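In context, that tag goes in each page's head section; a NAME="ROBOTS" variant applies the same directive to all compliant crawlers rather than just Googlebot (page title and body are illustrative):

```html
<html>
<head>
<title>Example page</title>
<!-- Allow indexing but forbid caching. GOOGLEBOT addresses only Google's
     crawler; use NAME="ROBOTS" to address all compliant crawlers. -->
<META NAME="GOOGLEBOT" CONTENT="NOARCHIVE">
</head>
<body>...</body>
</html>
```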

We do not accept email from lawyers as a legitimate form of communication.

Email from lawyers is /dev/null'd. As for being woken up in the middle of the night... um, turn off the ringer? Stop sleeping in the NOC? Maybe invest in a second phone line for your business instead of using mom's POTS line.

I'd be damn happy if someone made backups and mirrors of a site I made. People could visit my site without using bandwidth I pay for. Also, if disaster strikes, I can get my site back because someone else was kind enough to back me up. The more the merrier.

(FWIW, IANAL) Web site content is copyrighted. Therefore, you have a right to make your own personal copy, and backup copies, but it is not legal to redistribute those copies without the site owner's permission. I cannot imagine that the Wayback machine or the Google cache is legal. They are blatantly disregarding the site owners' copyright.

That said, I think the law should be changed or at least clarified, because it is patently (pun intended) obvious that those services are doing a vast social good, and should be encouraged.

Web site content is copyrighted. Therefore, you have a right to make your own personal copy, and backup copies, but it is not legal to redistribute those copies without the site owner's permission. I cannot imagine that the Wayback machine or the Google cache is legal. They are blatantly disregarding the site owners' copyright.

That would imply that every ISP running a public squid cache is breaking the law, and Akamai's entire business model is based on illegal content-smuggling. I really don't th

Mod parent up! This link to the US Code is very useful in this context.

Heck, it's so useful that I'm going to quote some of it here:

TITLE 17 > CHAPTER 5 > Sec. 512.

Sec. 512. - Limitations on liability relating to material online

(a) Transitory Digital Network Communications. -

A service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the provider's transmitting, routing, or providing connections for, material through a system or network controlled or operated by or for the service provider, or by reason of the intermediate and transient storage of that material in the course of such transmitting, routing, or providing connections, if -

(1)

the transmission of the material was initiated by or at the direction of a person other than the service provider;

(2)

the transmission, routing, provision of connections, or storage is carried out through an automatic technical process without selection of the material by the service provider;

(3)

the service provider does not select the recipients of the material except as an automatic response to the request of another person;

(4)

no copy of the material made by the service provider in the course of such intermediate or transient storage is maintained on the system or network in a manner ordinarily accessible to anyone other than anticipated recipients, and no such copy is maintained on the system or network in a manner ordinarily accessible to such anticipated recipients for a longer period than is reasonably necessary for the transmission, routing, or provision of connections; and

(5)

the material is transmitted through the system or network without modification of its content.

(b) System Caching. -

(1) Limitation on liability. -

A service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the intermediate and temporary storage of material on a system or network controlled or operated by or for the service provider in a case in which -

(A)

the material is made available online by a person other than the service provider;

(B)

the material is transmitted from the person described in subparagraph (A) through the system or network to a person other than the person described in subparagraph (A) at the direction of that other person; and

(C)

the storage is carried out through an automatic technical process for the purpose of making the material available to users of the system or network who, after the material is transmitted as described in subparagraph (B), request access to the material from the person described in subparagraph (A),

if the conditions set forth in paragraph (2) are met.

(2) Conditions. -

The conditions referred to in paragraph (1) are that -

(A)

the material described in paragraph (1) is transmitted to the subsequent users described in paragraph (1)(C) without modification to its content from the manner in which the material was transmitted from the person described in paragraph (1)(A);

(B)

the service provider described in paragraph (1) complies with rules concerning the refreshing, reloading, or other updating of the material when specified by the person making the material available online in accordance with a generally accepted industry standard data communications protocol for the system or network through which that person makes the material available, except that this subparagraph applies only if those rules are not used by the person described in paragraph (1)(A) to prevent or unreasonably impair the intermediate storage to which this subsection applies;

Interestingly, the law cited makes explicit provision for several of the concerns I expressed in earlier posts in this thread, notably the issues of keeping the data up-to-date and of the information provider getting information from those visiting their site directly.

The normal Internet convention is that when I update my site, changes are immediately visible to everyone. (NB: browser caching is not equivalent to web caching here for several reasons.) Also, visitors to my site normally leave information

All in all, if that is the exemption I was referred to earlier in this thread, it looks as though the web caches are skating on very thin ice. If they did something like cloning material on a web site that was later removed in order to publish it in a book, I imagine they could wind up having a serious dispute with the publisher, or perhaps the author himself, either of whom might have a strong case that they suffered financially because of the actions of the caching site.

I was referring to the post where someone said there was an exemption under copyright law for web caches. I assumed the parts of the DMCA that were cited here were that exemption. In that case the validity of the original claim appears to be less clear than was suggested.

(FWIW, IANAL) Web site content is copyrighted. Therefore, you have a right to make your own personal copy, and backup copies, but it is not legal to redistribute those copies without the site owner's permission. I cannot imagine that the Wayback machine or the Google cache is legal. They are blatantly disregarding the site owners' copyright.

This confuses fair use on purchased items that you own with what you are allowed to do with temporary copies for viewing. By the same logic, you could legally tak

I really think archiving is important, and it is one strength of the Internet: having your data archived without paying for it, or even asking for it.
I mean, there must be a lot of companies and organizations (I'm thinking of NASA, etc.) who probably have hundreds of terabytes of data and don't want to spend money or time making backups. Add that most archival media won't last more than a couple of decades, and you'll understand that archiving is great because everyone can back up a little something, and all those w

There are limited provisions in copyright law (at least in the UK, and I expect elsewhere in the world) for public libraries and archives. But these are indeed limited provisions and do not apply to a random commercial organisation that decides to provide such a service.

Firstly, in the general case of search engines providing indexing of content, this is legal and there are legal cases to back it up (in the UK: antiquesportfolio) so long as the indexes are not copies.

Secondly, in the case of USENET groups and mailing lists, then in the process of submitting a message to the mailing list or group, you have given an implicit license for the message to be reproduced within the nature of the particular technology at hand. This means if at a later date you object to a message in a mailing list that you wrote in the past, you don't really have the ability to retract it. In all cases, anyone deciding to use the material in another way (e.g. creating a commercial CDROM of USENET material for a marked-up price) would be violating your (and others') copyright. However, if they were providing that CDROM as a distribution service for USENET itself (e.g. "get your monthly USENET CDROM") then this is probably within the bounds of legality as it is still transfer via the USENET system, and the cost is likely to reflect media/distribution costs rather than some specific aim to make a commercial product out of your material.

Finally, in the specific case of copies of websites, yes this is a violation of copyright - but as far as I know this has not been tested in a court of law. The use of the Robots Exclusion Protocol and the NOARCHIVE, NOINDEX and NOFOLLOW elements allows a weasel argument suggesting that it is inherent in the WWW itself (as a new form of media / technology) that search engine indexing and archiving / caching is legal unless you specifically disallow it with this mechanism. It may also be the case that if this archiving / caching was carried out for profit, or at a price greater than fair for distribution/media, then a party is making an economic gain out of your material, and this suggests an inequitable violation of your economic rights.

Another point to remember is that in WTO treaties that resulted in DMCA provisions, as enacted in the UK and EU, there are specific fair use allowances for intermediate copies of a copyright work as necessary for the telecommunications medium itself (this would seem to allow things like store-and-forward systems, and caching).