Web browsing is copyright infringement, publishers argue

Thankfully, Europe's top court rules against the publishers' "irrational" claims.

Europeans may browse the Internet without fear of infringing copyrights, as the EU Court of Justice ruled Thursday in a decision that ends a four-year legal battle threatening the open Internet.

It was the second wide-ranging cyber ruling from Europe's top court in less than a month. The court ruled May 13 that Europeans had a so-called "right to be forgotten," requiring Google to delete "inadequate" and "irrelevant" data upon requests from the public. That decision is spurring thousands of removal requests.

In this week's case, the court slapped down the Newspaper Licensing Agency's (NLA) claim that the technological underpinnings of Web surfing amounted to infringement.

The court ruled that "on-screen copies and the cached copies made by an end-user in the course of viewing a website satisfy the conditions" of infringement exemptions spelled out in the EU Copyright Directive. The NLA's opponent in the case was the Public Relations Consultants Association (PRCA). The PR group hailed the decision.

"We are utterly delighted that the CJEU has accepted all of our arguments against the NLA, which represents eight national newspapers. The Court of Justice, like the Supreme Court before them, understands that the NLA's attempts to charge for reading online content do not just affect the PR world, but the fundamental rights of all EU citizens to browse the Internet," PRCA Director General Francis Ingham said. "This is a huge step in the right direction for the courts as they seek ways to deal with the thorny issues of Internet use and copyright law."

The NLA is the body that distributes reproductions of newspaper content, including the Guardian's. Its main argument was that the licensing fees public relations companies pay for those reproductions should also cover what is temporarily copied on a reader's computer.

David Pugh, the NLA's managing director, said opponents had portrayed the case as if the sky were falling, but it wasn't. Pugh believed the issue was much narrower than portrayed.

"In our view, [the temporary copying] exception is designed to protect ISPs and telecoms companies when they're transmitting data from A to B in networks. The PR spin put on this case was that if our ruling was allowed to stand then users of the Internet would be criminalized for using a browser, but that's never been what it's about," he said.

Regardless, the case absolutely posed a fundamental question about Web surfing and infringement.

"Despite the ruling, one cannot overstate how irrational this case was to begin with. It’s hard to believe the question at stake was whether browsing the Internet is legal or not," said Jakob Kucharczyk, the Brussels director for the Computer & Communications Industry Association. "Even though the court has provided a clear answer to that question, one must wonder whether our copyright regime is apt for the digital era."

David Kravets
The senior editor for Ars Technica. Founder of TYDN fake news site. Technologist. Political scientist. Humorist. Dad of two boys. Been doing journalism for so long I remember manual typewriters with real paper. Email david.kravets@arstechnica.com // Twitter @dmkravets

Great for him to admit they were protecting telecoms. At least it's not open to interpretation from that angle. Still miffed that nearly every Western court is being inundated with awful legal crud. Doesn't help that our massive legislative libraries continue to grow. If you want to make a quick buck, you just have to dig a bit and find the right loophole.

Maybe I am missing something here, but if publishers don't want their data to be copied into web browsers, screen buffers and caches, perhaps they should not be publishing it using a web server.

They want it to be copied, but they want the charges to be per copy, and they want the local browser cache to count as a copy in addition to any print copies.

If they could charge you for remembering it, I'm sure they would.

Well, of course. You might remember the article in sufficient detail to relate its contents to someone, dissuading them from purchasing a properly-licensed periodical, thereby depriving that periodical of profit and endangering the livelihood of the original author!

I mean, if you're going to go down the rabbit hole, you might as well jump in head-first, right?

The NLA is the body that distributes reproductions of newspaper content, including the Guardian's.

Writing like that makes it look like this Ars article was copied a bit too directly from the Guardian's. That turn of phrase is normally used to disclose a potential conflict of interest between the reporting organisation and the parties in the court case; it's unnecessary here.

To answer the question at the end of the article: no, current copyright laws are not designed to work well in the digital world. Copyright laws are based on physically reproducing something, while in the digital world you create many copies every time you interact with content. So, literally, it is infringement; we just keep carving out exceptions and other weird legal excuses to keep existing laws somewhat relevant.

You know, technically, they are correct. Computers are file-copying machines, and the Internet is a file-copying network. When a file is requested from a server, it is copied into that server's memory, and then copied again and transmitted to the client, which assembles the file in its own memory. That file is then copied again into a disk cache, and then copied again into your PC's screen buffer. Technically, any system that blanket disallows unauthorized copying of copyrighted works would, indeed, give an author the right to prevent any of those copies from taking place, and make pirates of anybody who even viewed a webpage. Which should tell you how out-of-sync the traditional concept of copyright is with modern life.
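The copy chain described above can be sketched as a toy simulation. This is purely illustrative (not real browser or network internals): the stage names and the `fetch` function are made up here to make the point that an ordinary page view entails several copies of the same work.

```python
# Toy illustration: count the copies of a "page" made along the path the
# comment above describes: server memory -> network -> client memory ->
# disk cache -> screen buffer. Stages are simplified stand-ins, not real
# browser internals.
import os
import tempfile

def fetch(page_on_disk: bytes) -> list:
    copies = []
    server_memory = bytes(page_on_disk)   # 1: server reads file into RAM
    copies.append(server_memory)
    packet = bytes(server_memory)         # 2: copy into network buffers
    copies.append(packet)
    client_memory = bytes(packet)         # 3: client reassembles in RAM
    copies.append(client_memory)
    cache_path = os.path.join(tempfile.mkdtemp(), "cache.html")
    with open(cache_path, "wb") as f:     # 4: browser writes its disk cache
        f.write(client_memory)
    with open(cache_path, "rb") as f:
        copies.append(f.read())
    screen_buffer = bytes(client_memory)  # 5: rendered into the screen buffer
    copies.append(screen_buffer)
    return copies

copies = fetch(b"<html>hello</html>")
print(len(copies))  # prints 5: five copies of the same work, from one view
```

Under a literal "every copy needs authorization" reading, each of those five stages would be a separate act of reproduction, which is exactly why the temporary-copying exemption exists.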

Maybe I am missing something here, but if publishers don't want their data to be copied into web browsers, screen buffers and caches, perhaps they should not be publishing it using a web server.

They want it to be copied, but they want the charges to be per copy, and they want the local browser cache to count as a copy in addition to any print copies.

If they could charge you for remembering it, I'm sure they would.

No... I don't think remembering it is at issue. That's free. People confuse simple memory with copyright violations such as discussing the publication. Publishers charge per copy in the buffer plus a nominal fee to discuss the content. I recommend a subscription discussion fee. Saves money for those who talk a lot.

Maybe I am missing something here, but if publishers don't want their data to be copied into web browsers, screen buffers and caches, perhaps they should not be publishing it.

FTFY.

These greedy bastards want to license it; that way, when a newer, more efficient platform comes out, they can force you to "upgrade" to it. This is nothing more than stupidity wrapped up in a cash grab.

In addition, if that standard for copyright infringement were to be upheld, then everyone would be an infringer, including those self-same publishers.

It's nice to see that case get authoritatively squashed, but it's appalling and confounding that these types of cases get any traction and momentum in the first place.

I blame older justices who have no idea how the internet works and are easily baffled by pseudo technical jargon. Say what you will about millennials, a millennial judge would look at this case and laugh.

Maybe I am missing something here, but if publishers don't want their data to be copied into web browsers, screen buffers and caches, perhaps they should not be publishing it using a web server.

They want it to be copied, but they want the charges to be per copy, and they want the local browser cache to count as a copy in addition to any print copies.

I suggest publishers also charge for the two copies on my retinas. And science has shown that the visual cortex makes several additional copies.

Well, then they'll charge you for each additional copy. If your vision isn't acute, or you skim-read in a way that keeps you from understanding the book as it can objectively be read, they may sue for copyright infringement over the production of a derivative work.

Ars, I would like to officially apologize for the repeated infringements of your rights. Nearly every day, I copy the new articles to my computer! Sometimes several times! I'm a monster, I'm so sorry!

As do I. Filthy, intellectual property stealing pirates we are. As a matter of conscience, I'm going to have to stop reading altogether. From what I understand, the words in books and on screens are copied into my mind for interpretation. Clearly this is also infringement. We were all born dirty sinners and pirates.

I was hoping the ruling would fall in their favour, and that the courts would then force Google to show no links to these publications at all, just to watch these idiots sue Google to restore the links once their web traffic plummeted to near zero.

You know, technically, they are correct. Computers are file-copying machines, and the Internet is a file-copying network. When a file is requested from a server, it is copied into that server's memory, and then copied again and transmitted to the client, which assembles the file in its own memory. That file is then copied again into a disk cache, and then copied again into your PC's screen buffer. Technically, any system that blanket disallows unauthorized copying of copyrighted works would, indeed, give an author the right to prevent any of those copies from taking place, and make pirates of anybody who even viewed a webpage. Which should tell you how out-of-sync the traditional concept of copyright is with modern life.

Just think of all the routers and switches that had those packets temporarily cached as they carried it along their network. So many copies that need licenses!

To answer the question at the end of the article: no, current copyright laws are not designed to work well in the digital world. Copyright laws are based on physically reproducing something, while in the digital world you create many copies every time you interact with content. So, literally, it is infringement; we just keep carving out exceptions and other weird legal excuses to keep existing laws somewhat relevant.

Yep, the best way to make copyright sensible in the digital age is to take copying out of copyright, literally. Eliminate copying as one of the exclusive rights that copyright holders have, while preserving all the rest (like distribution, public performance, etc). The courts have already moved in that direction by ruling piece by piece that practically every case of copying that doesn't infringe on the other exclusive rights is fair use. But consumers and entrepreneurs shouldn't be burdened by having to defend themselves with a patchwork of court precedent and fair use arguments. We should make this swath of use-rights black and white law, while leaving the more fuzzy stuff (like amount copied, is it educational/parody, etc) for fair use to handle.

Wow. At least this is a good ruling. Greedy people want money. They will always find a way... I wonder how these things get traction anyway..

You answered your own question. They get traction because greedy people (i.e., politicians, judges, lawyers, not just rights organizations) want money and can extract it from parties on both sides of the issue.

I suspect what it comes down to is the publishers' desire for an Internet tax. If they put their content behind a paywall, people don't pay, don't read, and the publishers lose ad impressions. This request would be the first round of involuntarily collecting a "small" fee from everyone, in case they "might" look at a news page.