
Hugh Pickens writes "Journalist Alan D. Mutter reports on his blog 'Reflections of a Newsosaur' that a coalition of traditional and digital publishers is launching the first-ever concerted crackdown on copyright pirates on the Web. Initially targeting violators who use large numbers of intact articles, the first offending sites to be targeted will be those using 80% or more of copyrighted stories more than 10 times per month. In the first stage of a multi-step process, online publishers identified by Silicon Valley startup Attributor will be sent a letter informing them of the violations and urging them to enter into license agreements with the publishers whose content appears on their sites. In the second stage Attributor will ask hosting services to take down pirate sites. 'We are not going after past damages' from sites running unauthorized content, says Jim Pitkow, the chief executive of Attributor. The emphasis, Pitkow says, is 'to engage with publishers to bring them into compliance' by getting them to agree to pay license fees to copyright holders in the future. Offshore sites will not be immune from the crackdown: almost all of them depend on banner ads served by US-based services, and the DMCA requires the ad service to act against any violator. Attributor says it can interdict the revenue lifeline at any offending site in the world." One possible weakness in Attributor's business plan, unless they intend to violate the robots.txt convention: they find violators by crawling the Web.

Seriously. Following robots.txt is not law, only convention. I'm sure it doesn't take much for them to convince themselves to ignore it: money, "doing the right thing", etc. If you view the copyright infringers as pirates, then why should Attributor follow their wishes?

I'd go even farther and say that sites that use robots.txt to prevent crawling are probably not major targets: if they don't show up in search engines, they probably don't generate enough traffic to be worth the effort. High-traffic sites are much better targets, since their revenue stream from ads is probably significant enough that they won't want to risk losing it. Once enough fall into line, they can worry about the ones that are not indexed; in fact, they may just want to kill those off to preserve traffic to licensed sites.

I can see ways that their service could be effective while respecting robots.txt settings. They'd simply need to crawl the indexes of other search engines. After all, if a violator is not accessible through Google or Bing, it's probably a low priority.

So... why would robots.txt, which advises me of your wishes but to which I never actually agree, carry any more legal authority than a ToS document to which I do supposedly agree as a condition of using your system?

Right... because a judge will find that offer, consideration, and acceptance of a contract took place between a webserver and a bot? The court case you cite is irrelevant to an automated program that has no understanding and cannot accept conditions presented online.

Awesome, so anyone can DoS a server, send mass spam or distribute a virus as long as a bot does it, because a judge will rule that the bot acted on its own and wasn't developed or set loose by anyone at all.

If the software wrote itself you might have a point, otherwise the people who wrote it are the ones responsible for how it acts.

If the infringing sites have a robots.txt that tells all crawlers to skip them, they will not show up in search engines. If they single out Attributor's crawler's user agent string, they would look very suspicious.
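To illustrate the point, a robots.txt that opts out of all crawling looks innocuous, while one that names only a copyright-enforcement bot is a red flag. (The "AttributorBot" user-agent string below is a guess for illustration; Attributor has not published the name its crawler uses.)

```
# Innocuous: opt out of all crawling, search engines included
User-agent: *
Disallow: /

# Suspicious: block only the (hypothetical) enforcement bot
User-agent: AttributorBot
Disallow: /
```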

That's only if their web crawler even looks at robots.txt. It's not required, only a courtesy. I'm sure they'll not be so courteous and claim that they need to do this because the violators they're looking for would block them anyway.

The surefire way to keep them out would be to find out what IP addresses Attributor is using and block those at your firewall. The trouble with that is they could easily change their IP addresses, or even employ something akin to a botnet to do their web crawling so that their requests come from constantly changing addresses.

Don't worry. They're going to break a lot of laws, break a lot of legs, and basically commit suicide. At least when it's through we'll have fewer dinosaur industries to deal with.

They're literally planning to go to domain providers and threaten DMCA action to get content taken down. Instead of, you know, DMCA'ing the website appropriately, this is an end run around the legal process. Expect a quick smackdown. Why would they host such a company in California of all places to do this, where California is the most clear abou

1) Put up a file sharing site with lots of music and movie files.
2) Craft a robots.txt to keep out the RIAA and MPAA....
3) Profit!!!

Robots.txt is a convention that was never intended to restrict checking for illegal content. The idea behind robots.txt is only to keep site indexers such as Google, Yahoo, etc. out of certain directories.
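For what it's worth, honoring the convention is trivial for any crawler that wants to; Python even ships a parser for it in the standard library. A minimal sketch (the URLs and crawler name are made up):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly; a live crawler would fetch it
# with set_url(...) and read() instead.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A well-behaved crawler checks before each fetch.
print(rp.can_fetch("MyCrawler/1.0", "http://example.com/articles/1"))  # True
print(rp.can_fetch("MyCrawler/1.0", "http://example.com/private/x"))   # False
```

Nothing enforces that check, of course; skipping it is one deleted line.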

What on earth is the DMCA supposed to achieve, in the context of Ad-providers?

Sounds pretty scary to me.

Agreed. I've never heard of this, and a quick scan of the legislation doesn't turn up anything that appears to relate to this; the categories of service it regulates appear to be (a) telecoms providers transmitting data at user request, (b) those hosting temporary copies of content (e.g. caches), (c) those hosting content at the request of third parties, and (d) search engines, directories and other link

Sounds like they've learned their lesson from the RIAA. I'm not saying I agree with them and think they are right to do this. But, if you're going to try to enforce your interpretation of the law, this is at least a sane philosophy of doing so. Not going after damages is a smart move.

A lot of aggregator sites like this one base a lot of their topical content on articles printed elsewhere. While most (including /.) don't print whole articles intact, a lot of them do quote heavily (what used to be called "fair use," back when that phrase actually meant anything). So their first step is to go after the sites that reprint the articles whole-cloth. But will they stop there?

Sorry, no, any website doing *that* should be shut down. I hate those assholes. They're the reason why a search for a given term in Google pops up thousands of sites with the *exact same content*, just ripped from one another.

80% is a reasonable starting point. If they start lowering it, we'll have to express our righteous indignation then. Fair use, when interpreted, is generally considered a LOT lower than routinely cutting-and-pasting 80% of articles, so they have a long way to lower it before we can honestly call our indignation righteous.

Seriously, this really isn't a "slippery slope" situation. It seems to be a well-thought-out and sane set of guidelines. If anything, they are being a bit generous for now, and they can still tighten this quite a bit without coming close to busting "fair use" or even "reasonable use".

Basically they are saying, "if you routinely use 80%+ of our articles as your own content, we're asking you to stop. We won't sue you for any past uses, we just want to make it clear that this isn't cool any more."

A fair usage (note the lack of quotes; I am not talking about the legal doctrine) would be to use about 20% of the source article (properly attributed) with a link back to the original article. Give credit where it's due (and cite your sources). Then add your own thoughts, or don't. But don't take whole-cloth articles and post them on your own site with your own ads.

Every discussion board I've ever participated in has pretty much recommended some really close variant to this anyway. It usually reads something like "cite a paragraph or two at most and have a link to the source article plainly visible nearby".

All this harassment is going to do is push small global internet publishers to services in other countries. Datacenters and ad services in the US will lose customers. There are already strong companies serving those markets in the EU, and the EU will be happy to receive that amount of business.

The stupor of American corporatism is overwhelming. They will even go to the extent of shooting themselves in the foot.

According to this [wikipedia.org], only Australia, Canada, USA, EU, Japan, South Korea, Mexico, Morocco, New Zealand, Singapore and Switzerland are currently part of that treaty. This (currently) leaves more than enough room for a whole lot of other countries (some of them as big as Russia and China) that are not part of it.

Are you kidding? ACTA's going to harmonise everything so closely to the US that they'll be able to prosecute anyone.

If you think Vanuatu et al are going to be signing up to ACTA, then I want some of what you're smoking.

Sure, most of the large economies will probably be signing, but there's no reason not to base an Internet business on a little island somewhere nice with friendly laws (and, as a nice side benefit, zero taxation).

As I understand it, advertisers targeting readers in the United States tend to choose ad networks that operate or at least have some sort of assets in the United States, not ad networks that operate in the European Union. Advertisers who target readers in the European Union probably will not want to pay to reach readers in the United States, especially for a product not available in the United States.

So.. when the ad is placed the customer selects the target country / region.

So I take it you're imagining an EU based ad network that deals with advertisers in foreign markets. But how would such an ad network efficiently deal with US advertisers while having zero assets in the US or in any other country with a takedown system remotely like that of the US?

You log on to the advertisers site and submit your ad along with your credit card number. And you do it from a country that is not subject to US or US like control. Remember with the internet you do not need a brick and mortar or even a flesh and blood presence anyway to do business.

And in the process find all the commercial sites using my copyrighted Flickr photos for their own purposes without my permission or payment. I'm tired of sending invoices and dealing with companies who tell you that your photo wasn't worth the $300 you charge and instead send you $50 thinking that it will clear up the matter.

I love the hypocrisy of all of this. They are just as much at fault as any of those aggregation blogs. They just have more money to be a pain in the ass.

I'm tired of sending invoices and dealing with companies who tell you that your photo wasn't worth the $300 you charge and instead send you $50 thinking that it will clear up the matter.

They’re basically giving you the finger. Don’t fuck around playing their little games... show them you mean business. Slap on a surcharge to cover your additional expense and send their name and remaining balance to a debt collector. It’s probably cheaper and less of a hassle than suing them in small claims court.

IANAL... you may want to ask a real lawyer what your options are, but seems to me you have a few.

"Offshore sites will not be immune from the crackdown: almost all of them depend on banner ads served by US-based services, and the DMCA requires the ad service to act against any violator. "

Not sure this is such a great idea: when you're broke, you don't cut off the little income you're still getting... I'm inclined to think that in the near future things will more likely go in the opposite direction, and grey-legal stuff will be fully legalized to provide as much extra economic stimulus as possible.

A coalition of traditional and digital publishers this month will launch the first-ever concerted crackdown on copyright pirates on the web, initially targeting violators who use large numbers of intact articles.

Details of the crackdown were provided by Jim Pitkow, the chief executive of Attributor, a Silicon Valley start-up that has been selected as the agent for several publishers who want to be compensated by websites that are using their content without paying licensing fees.

In a telephone interview yesterday, Pitkow declined to identify the individual publishers in his coalition, but said they include “about a dozen” organizations representing wire services, traditional print publishers and “top-tier blog networks.”

The first offending sites to be targeted will be those using 80% or more of copyrighted stories more than 10 times per month.

In the first stage of a multi-step process aimed at encouraging copyright compliance instead of punishing scofflaws, Pitkow said online publishers identified by his company will be sent a letter informing them of the violations and urging them to enter into license agreements with the publishers whose content appears on their sites.

If copyright pirates refuse to pay, Attributor will request the major search engines to remove offending pages from search results and will ask banner services to stop serving ads to pages containing unauthorized content. The search engines and ad services are required to immediately honor such requests by the federal Digital Millennium Copyright Act (DMCA).

If the above efforts fail, Attributor will ask hosting services to take down pirate sites. Because hosting services face legal liability under the DMCA if they do not comply, they will act quickly, said Pitkow.

“We are not going after past damages” from sites running unauthorized content, said Pitkow. The emphasis, he said, is “to engage with publishers to bring them into compliance” by getting them to agree to pay license fees to copyright holders in the future.

License fees, which are set by each of the individual organizations producing content, may range from token sums for a small publisher to several hundred dollars for yearlong rights to a piece from a major publisher, said Pitkow.

Attributor identifies copyright violators by scraping the web to find copyrighted content on unauthorized sites. A team of investigators will contact violators in an effort to bring them into compliance or, alternatively, begin taking action under DMCA.

He posted the article, cited it as the original article (knowing there was a proper citation link above), and posted less than 80% of it. This is a completely legitimate use of the article as per Attributor's new rules. Two or three more words from the article would have made it an "80% rule" bust, but would still have been OK as long as he didn't make a habit of it. It's repeated use of more than 80% of source article text that Attributor wants to go after.

Most discussion boards already limit direct citation to a paragraph or two, or approximately 20% of the article.

So Attributor's 80% limit is making a clear statement that they are really only interested in pursuing people who make a routine habit of copying entire articles. And if the bulk of your content is coming from copying 100% of someone else's original news articles, you aren't exactly someone I want to waste my righteous indignation defending.
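The threshold itself is easy to approximate mechanically. Here is a rough sketch of a "what fraction of the original appears in the copy" check; this is my guess at the kind of metric involved, not Attributor's actual algorithm:

```python
from collections import Counter

def reuse_fraction(original: str, copy: str) -> float:
    """Fraction of the original's words that also appear in the copy,
    counting duplicates -- a crude stand-in for real text matching."""
    orig_words = original.lower().split()
    copy_counts = Counter(copy.lower().split())
    if not orig_words:
        return 0.0
    matched = 0
    for w in orig_words:
        if copy_counts[w] > 0:      # consume one occurrence per match
            copy_counts[w] -= 1
            matched += 1
    return matched / len(orig_words)

article = "publishers launch crackdown on copyright pirates on the web"
scraped = "crackdown on copyright pirates on the web"
print(reuse_fraction(article, scraped))  # about 0.78 -> under the 80% line
```

A real system would match shingled phrases rather than a bag of words, but the 80%-of-the-article idea is the same.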

If a site posts articles yet has them excluded by robots.txt doesn't that defeat the purpose of posting the article where it can be indexed and found?

In other words, if an article is posted but robots.txt says not to index it, that article isn't going to show up in a search. It's a bit like rebroadcasting an NFL game in a movie theater with no one in the theater to watch it.

I've had an experience with Attributor myself, and it's given me a pretty low opinion of them. I'm the author of a CC-BY-SA-licensed calculus textbook, titled "Calculus." Someone posted a copy of the PDF on Scribd, as allowed by the license. So one day I got an email from one of the people who runs Scribd, saying that Attributor had sent them a takedown notice, which they were skeptical about. Attributor hadn't supplied any useful information about what they thought was a violation. I called Scribd, and they checked and said it was a mistake: they were working for Macmillan, which publishes another book titled "Calculus." So here they were, serving a DMCA notice under penalty of perjury, and they hadn't even checked whether the name of the author was the same, or whether any of the text was the same. Their bot just found that the title, "Calculus," was the same as the title of one of their client's books. Pretty scummy.

It's one thing to make a mistake, and entirely another to invoke the law to enforce a mistake. You're right, it's entirely possible the takedown was poorly written, but therein lies the problem with the takedown mechanism: there's no standard that it must reach before it can be served. Thus mistakes, honest or otherwise, threaten people with very real, very wide-ranging and scary/expensive actions, completely in error. As such, as reasonable people, we expect anyone taking action as serious as a takedown w

It's not just copyright. The slow but steady alignment of copyright holders, oppressive governments, legal changes, media pressure and surveillance technology has wound itself around the internet worldwide, and now the real pressure is being applied. This is a secular change, largely unobservable over smaller intervals, but the end result is that the web in 10 and 20 years time will be a noticeably less free place than it is today. Everything you do online will be monitored, everything will be logged, everything will be legally defined and controlled, and every infringement will be subject to criminal penalties.

The parties responsible have the support of the politicians, the censors, the press, the money men and most of the public. We used to have the support of the geeks and their creativity in bypassing censorship. But let's face it; geeks have not created a truly disruptive technology since BitTorrent almost ten years ago. While Geekdom slept, the likes of Cisco and the major Telcos have constructed a frightening array of technologies for surveillance and control of the internet, and the fruit of their efforts can be seen in China, Iran and now even countries like Australia. Soon it will be seen all over the world.

The Web has changed. Governments are no longer going to tolerate the freedom and anarchy that it grants to the population at large. They now have the means, method and opportunity to put this genie back in the bottle. This crackdown is the first offensive on what is going to be a wide front. Expect the free net to lose.

I suspect that many sites that are using this type of content will find ways of hiding that fact by using non-display characters, breaking the article into multiple pages and the like to cover the fact that they are using the content. Would love to see their system in action on some test sites to figure out how much you need to do to cover the content and make it not match the original.
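For what it's worth, trivial obfuscation (invisible characters, changed punctuation, pagination) is exactly what shingle-based matching is designed to survive: normalize the text down to plain words, then compare overlapping word n-grams rather than exact strings. A toy sketch of the idea, not Attributor's actual method:

```python
import re

def shingles(text: str, n: int = 4) -> set:
    """Normalize text and return the set of n-word shingles."""
    words = re.findall(r"[a-z0-9]+", text.lower())  # drops junk characters
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Similarity of two texts as overlap between their shingle sets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "a coalition of publishers will launch a crackdown on copyright pirates"
# Same article with a zero-width space, shouting, and extra punctuation:
copied = "A coalition\u200b of publishers WILL launch a crackdown... on copyright pirates!"
print(jaccard(original, copied))  # 1.0 -- the obfuscation changes nothing
```

Splitting an article across multiple pages only means each page matches a smaller but still-identifiable chunk of shingles.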

On the other hand, that's an utterly asinine comment to have made (the one you quote, not yours). Of course they'll ignore it, why on Earth wouldn't they? It is in no way binding, and robots are free to ignore it, just as site owners are free to block connections from specific incoming IP addresses, the owners of those IPs are free to switch to new ones, and so on, ad infinitum.

But in this case the court found the terms and conditions (including the forum selection clause) to be enforceable. In contrast to Specht, the ServiceMagic site did give immediately visible notice of the existence of the terms of the agreement.

If I write a robot to crawl a site looking for certain keywords (e.g. Metallica), I will not necessarily ever have had visibility of those terms.

Except that a recent RIAA case ruled that you don't need to have actually seen a copyright notice in order to be bound by it, due to the ubiquity of such notices. ToS are similarly ubiquitous, so by that logic you should be bound by them as well, seeing them or not.

The Robots exclusion standard [wikipedia.org]. Not that it will stop them; as others have pointed out, if they think they're "doing the right thing," I'm sure they will not be concerned about such a standard.

The worry here really isn't so much for the people who are hosting sites with infringing content. I'm sure a moral argument could be made that Attributor is well within its rights to disregard the wishes of those who are breaking copyright law. However, I run several sites that have no infringing content whatsoever, sites with content that, while not private, I don't particularly want spiders crawling. I'm not so naive as to think that they don't do it anyway; I have server logs proving that they do. But in this case, we have a company that claims to be legitimate completely ignoring the wishes of someone who is not infringing, and doing it anyway.

Put another way, by convention, my neighbors don't use binoculars to peer into my house windows to see what I'm doing although there's currently not really anything stopping them from doing so. Even though I don't particularly have anything to hide, if I find that they are violating our polite social contract, then I'll put up shades just because it's none of their damn business.

I don't think that the robots.txt convention will be the thing that stops Attributor. I think that it will be that it won't take long for web site authors to figure out what user agents, IP address, etc. that Attributor is using and will block access from Attributor to their sites. Like I said, I have no infringing content on my sites, but if Attributor is going to ignore me politely asking their robots not to scan my sites, then I'm fully in the right to take further steps to forcibly prevent them from doing so.

Since, as you say, robots.txt will likely do nothing against them, the bigger question becomes "how do they plan to do their crawling?". Crawling from a well defined IP block, using software with user agent Attributor_copy_cop, will be laughably simple to block or present false noninfringing content to.

Spoofing the UA strings and (if necessary) some of the behavior of common web browsers is a simple software problem, so I assume that they'll do that (unless they are terminally incompetent). Out of curiosity, though, does anybody know how easy and cheap it would be (using legitimate methods, not botnet-style stuff) for such a commercial entity to obtain a reasonably large number of, ideally "residential looking", IPs that change fairly often? Do you just call Verizon and say "I want 500 residential DSL lines brought out to so-and-so location"? Would you obtain the services of one of the sleazy datacenter operators who caters to spammers and the like and knows how to switch IP blocks frequently? Do you pay to have second lines installed at your employees' houses, with company scanner boxes attached?

One idea would be to use the many available cloud services like EC2, Google App Engine and Azure. The IP blocks those services come in are going to remain fairly regular, but they are so common that it might not be acceptable for a site to block everything from ghs.l.google.com (and whatever EC2 and Azure live on). It is still blockable, though, so it probably would have been better for them (from a technical standpoint) if they hadn't announced their existence and these sites had been slowly indexed by their service before anybody knew what was happening.
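Blocking an entire cloud provider is mechanically simple if you have its published address ranges; a sketch using the standard library (the CIDR blocks below are documentation placeholders, not real EC2/Azure ranges):

```python
import ipaddress

# Hypothetical published ranges for a cloud provider (placeholders).
CLOUD_RANGES = [
    ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")
]

def from_cloud(ip: str) -> bool:
    """True if the client address falls inside any known cloud range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUD_RANGES)

print(from_cloud("203.0.113.57"))  # True  -> request came from a cloud range
print(from_cloud("192.0.2.10"))    # False
```

The catch, as noted above, is collateral damage: those same ranges carry plenty of legitimate traffic.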

Another (better) idea would be to use a service like Tor. Sure, their latency is going to skyrocket, but that's not a big deal since interactivity isn't a primary concern of an indexing service. It's still blockable, if infringing site admins block Tor nodes. This may or may not be doable, as I would imagine many users of said infringing sites use anonymizing networks for their normal traffic.

Sure, either of the solutions I've come up with in five minutes can be circumvented, but the idea isn't to totally eliminate piracy, it's to make it inconvenient enough that getting the legitimate version is easier.

"Only the second of these four rights is widely accepted in the USA. In addition to these four pure privacy torts, a victim might recover under other torts, such as intentional infliction of emotional distress, assault, or trespass.

Unreasonable intrusion upon seclusion only applies to secret or surreptitious invasions of privacy. An open and notorious invasion of privacy would be public, not private, and the victim could then choose not to reveal private or confidential information. For exampl

I welcome them to crawl my sites and ignore my robots.txt files. They won't get very far though. When my server detects that behavior it passes the IP to my firewall which adds it to the "drop these packets into a black hole" list.

I have quite a large table of IP addresses of idiots that violated robots.txt.
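One common way to build such a table is a honeypot: list a path in robots.txt that no compliant crawler will ever touch, then treat any request for it as a violation. A sketch of the log-scanning half (the trap path and common-log-format lines are assumptions, not any particular server's setup):

```python
# robots.txt advertises "Disallow: /trap/" -- only violators ever request it.
TRAP_PREFIX = "/trap/"

def violators(log_lines):
    """Collect client IPs that requested the disallowed honeypot path.
    Assumes common-log-format lines: IP - - [date] "GET /path HTTP/1.1" ..."""
    bad = set()
    for line in log_lines:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request = parts[1]            # e.g. 'GET /trap/x HTTP/1.1'
        fields = request.split()
        if len(fields) >= 2 and fields[1].startswith(TRAP_PREFIX):
            bad.add(line.split()[0])  # client IP is the first field
    return bad

logs = [
    '198.51.100.7 - - [x] "GET /trap/secret HTTP/1.1" 200 512',
    '192.0.2.9 - - [x] "GET /index.html HTTP/1.1" 200 1024',
]
print(violators(logs))  # {'198.51.100.7'}
```

The resulting set can be fed straight into a firewall drop list, as the parent describes.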

Sometimes I really wish we could just go back to the early '90s, when big media thought the internet was a joke. We didn't need them then, and frankly I usually think we would be better off without them now.

Home Internet access in the early to mid 1990s was dial-up. Do you want to go back to that?

If that was the tradeoff needed to prevent the internet from becoming one big corporately guided, pay-as-you-go tour of what they want us to see or do... sure. Actually, by the mid '90s I was on ISDN.

That is exactly my point. I wasn't trying to troll, simply pointing out that the internet was supposed to be a great equalizer. Most media outlets have no desire to be part of the community; they want to be the community, and have gone out of their way to shut out anything that even resembles equality online. Linking has traditionally been the way that sites aggregate news, and many simply use RSS summaries provided by the original content source. What 80% are they going after? 80% of the RSS summary would often mean i