No linking to Japanese newspaper without permission


We've definitely entered an era of experiment when it comes to online content, as a number of publications with a tradition in the print world are testing out approaches like building paywalls, mixing free and paid content, and limiting the amount of content that's indexed by search engines.

But Japan's Nikkei newspaper has taken its attempts to control access to an entirely different level: it now requires a formal request for any inbound links to its site.

The New York Times, which reported on the new policy on Thursday, notes that the newspaper market in Japan is radically different from that in the US. Although some smaller outlets are experimenting with new ways of reaching readers, most papers require subscriptions to access online content, and the barriers have kept circulation of print editions quite high compared to the US.

Nikkei management appears worried that links could provide secret passages to content that should be safely behind the paywall, and this fear has led to the new approval policy.

So far, the rules haven't made it to the default site that's accessible from the US, but they do appear to have made it to the European version. Given the newsworthiness of the policy, it seems safe to risk the Nikkei's aggressive copyright stance by quoting them in full:

Please send an e-mail to e-media@eur.nikkei.com with information about your web site, web site address, aims of the link, your name and contact details etc prior to adding a link taken from our website. Generally, links from one's own website to the front page of our website are acceptable, though we retain the right to reject links to websites and links themselves of which we do not approve.

That text comes from this page. Obviously, we can't guarantee that the link will actually work, since it would be easy for Nikkei to determine that Ars Technica is referring readers to the page, compare our URL to the list of approved linkers, and block access to the content accordingly.

Also, for added irritation, Nikkei has disabled the right-click context menu so that copying links is more difficult. The context blocking takes place, at the moment, only on the Japanese language site; the English site appears unaffected.

Please send an e-mail to [address] with information about your web site, web site address, aims of the link, your name and contact details etc prior to adding a link taken from our website. Generally, links from one's own website to the front page of our website are acceptable, though we retain the right to reject links to websites and links themselves of which we do not approve.

So....you have to send an e-mail request for each story you want to link to?!? Awesome, this will work well for them.

Are there any lawful grounds for this type of demand? It seems to me anyone linking to them is free to entirely ignore this nonsense. It may have some impact on more formal news websites (mostly for reasons of professional courtesy), but the rest of the world may simply not give a toss about it.

That text comes from this page. Obviously, we can't guarantee that the link will actually work, since it would be easy for Nikkei to determine that Ars Technica is referring readers to the page, compare our URL to the list of approved linkers, and block access to the content accordingly.

Unless I copy and paste the link, of course. Wow, that's pointless (and annoying). And I especially love when they try to kill the context menu. This is my browser, dammit, and if I disable JavaScript I can do whatever I want to your damn page.

That's totally mind-boggling... What's the point of forbidding links to a given website? Do they not want readers? Is their paywall so badly made that going directly to the URL still lets non-paying users see the content? There is clearly something I just can't grasp in this kind of policy, and I'd like some useful insight into the reasoning behind such a choice.

It's their site and their content. If they want to enact draconian rules, by all means. It'll only end up undermining their relevance.

Internet traffic is like water: it takes the path of least resistance. Construct all the dikes and channels you want, that's fine. Traffic will just go somewhere else to get, essentially, the same news.

Internet traffic is like water: it takes the path of least resistance. Construct all the dikes and channels you want, that's fine. Traffic will just go somewhere else to get, essentially, the same news.

Brilliant! I'm going to keep that analogy in mind from now on when these topics come up.

That's totally mind-boggling... What's the point of forbidding links to a given website? Do they not want readers? Is their paywall so badly made that going directly to the URL still lets non-paying users see the content? There is clearly something I just can't grasp in this kind of policy, and I'd like some useful insight into the reasoning behind such a choice.

I think that's exactly it. They set up a paywall behind the front door and a curtain around the rest of the site, when what they actually want is a pay-fence around the entire site, except the front page.

That's an extremely lazy way of trying to accomplish what they want. Did they hire some American interns to make this change?

It's their site and their content. If they want to enact draconian rules, by all means. It'll only end up undermining their relevance.

Internet traffic is like water: it takes the path of least resistance. Construct all the dikes and channels you want, that's fine. Traffic will just go somewhere else to get, essentially, the same news.

It may be their site, but they don't and can't have copyright for the link.

It seems like an interesting experiment. When it craters fantastically nobody else will repeat it.

Agreed.

I'd also like to see the major news aggregators (Yahoo! & Google) opening up features such that users can filter out news from certain sources.

For example, I don't want to see clippings from WSJ because they are just teasers that lead to their pay wall. In essence, the clippings in Google News amount to little more than advertising for subscriptions.

How exactly would they even know what site was linking to them anyway? AFAIK the URL points to where you're going and has nothing to do with where you came from. They could insert some sort of randomly generated code into the URL to identify it, but that should be easily removed.

It's their site and their content. If they want to enact draconian rules, by all means. It'll only end up undermining their relevance. Internet traffic is like water: it takes the path of least resistance. Construct all the dikes and channels you want, that's fine. Traffic will just go somewhere else to get, essentially, the same news.

It may be their site, but they don't and can't have copyright for the link.

I'm not super smart on the legalities/technicalities of linking to someone else's content. I don't know if the hyperlink itself is fair game, with the content that it points to strictly protected, I just don't know. Any newsy types want to chime in?

My comment was more in the vein that if they want to play these silly reindeer games, by all means. They'll just discourage aggregators and sites that control vast mindshare from ever peering in their direction. It's the internet; plenty more fish in the sea.

How exactly would they even know what site was linking to them anyway? AFAIK the URL points to where you're going and has nothing to do with where you came from. They could insert some sort of randomly generated code into the URL to identify it, but that should be easily removed.

No, the URL doesn't carry any information about where a link came from. But the HTTP request your browser sends to the remote server includes, invisibly to the end user, a number of headers, among them the Referer header (HTTP_REFERER), which names the referring page. That information is kept in the server logs, and server-side scripting languages such as PHP, ColdFusion, and JSP, as well as the server software itself (Apache, IIS, etc.), can read it. The server can use that information to allow or block pages based on the referrer.
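A minimal sketch of what such a server-side check might look like, assuming a hypothetical whitelist (Nikkei's actual list of approved linkers, if one exists, is not public):

```python
from urllib.parse import urlparse

# Hypothetical whitelist of approved linking hosts -- illustrative only.
APPROVED_REFERRERS = {"approved-partner.example.jp", "nikkei.com"}

def is_allowed(referer_header):
    """Decide whether to serve a page based on the Referer header.

    Note that an empty Referer (a direct visit or a pasted URL) is
    allowed here -- which is exactly the loophole discussed below.
    """
    if not referer_header:
        return True  # direct navigation sends no Referer at all
    host = urlparse(referer_header).hostname or ""
    return host in APPROVED_REFERRERS
```

A request arriving with `Referer: http://arstechnica.com/some-story` would be rejected, while pasting the URL into the address bar (no Referer) would sail through.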

How exactly would they even know what site was linking to them anyway? AFAIK the URL points to where you're going and has nothing to do with where you came from. They could insert some sort of randomly generated code into the URL to identify it, but that should be easily removed.

No, the URL doesn't carry any information about where a link came from. But the HTTP request your browser sends to the remote server includes, invisibly to the end user, a number of headers, among them the Referer header (HTTP_REFERER), which names the referring page. That information is kept in the server logs, and server-side scripting languages such as PHP, ColdFusion, and JSP, as well as the server software itself (Apache, IIS, etc.), can read it. The server can use that information to allow or block pages based on the referrer.

As the article even suggested, they *could* have their software check a whitelist of valid referrers, erroring out if a request isn't on the list. But unless they also block blank referrers, you could easily get around that by just selecting the URL in your address bar and pressing Enter (not Refresh, since that would send the same referrer as before).

You can also trivially forge referrers, so somebody could make a Firefox extension that keeps track of websites with referrer filters and automatically rewrites the header when visiting those sites.
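To see how trivial forging is: the client controls every header it sends, so any referrer whitelist can be satisfied by simply claiming an approved origin. A sketch using Python's standard library (the URL and the forged referrer are made up for illustration):

```python
import urllib.request

# Build a request that claims to come from an "approved" site.
# Both the target URL and the Referer value are illustrative.
req = urllib.request.Request(
    "http://example.com/paywalled-article",
    headers={"Referer": "http://www.nikkei.com/"},  # forged value
)

# urllib will send the forged header verbatim with the request;
# the server has no way to verify where the click really came from.
print(req.get_header("Referer"))
```

This is why referrer filtering only ever deters casual visitors, not anyone who actually wants in.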

That's the English version. Click the Japanese characters in the upper right, then wait for the page to load fully (the click-trapping code apparently doesn't load until near the end).

Most browsers (at least Opera, Firefox, and Konqueror, off the top of my head) allow you to disable that JavaScript functionality. In fact, I generally always disable it (along with most of the other default options, like resizing/raising the window), since, other than Google Maps, I've NEVER seen a website that doesn't simply abuse that functionality.

As the article even suggested, they *could* have their software check a whitelist of valid referrers, erroring out if a request isn't on the list.

The overhead of managing that would be brutal. It's bad enough that a person's whole job will be to review these link requests; then they'd have to whitelist the referrer too? And then what if I reorganize my site, changing my URLs? Do I have to re-apply for permission, because the whitelisted referrer is my original URL?

Newspapers in Japan may have higher subscriber rates than in the US/Europe, but the rate of decline in readership has been significant over the last several years. Contrary to what NYT has posted, advertisements are a huge part of what keeps newspaper companies afloat, and the decline in spending over the last several years has dealt the industry a huge blow.

Also, I don't believe Nikkei will be the last company to take this hardline approach. The year-on-year growth rate for internet advertising in Japan last year was a little over 1%, which is pretty darn slow, even taking into account the current economic situation (see link below; site is in Japanese). If companies here want to keep their online profits up, they are more likely to opt for Nikkei's strict "electric fence garden" instead of Sankei's "news is free" method. They are, after all, companies with stockholders to answer to and a lot of employees to support.

As for Nikkei's website being forgotten because of it not being included in links, that isn't happening anytime soon. They have launched a blitz of TV commercials, online ads etc. to make sure that everyone knows about this new site, and I would say the awareness level is already pretty high around here.

In the end, this system might crumble, but for the time being it's hard to believe anyone but bloggers and competing newspapers will make a big deal out of this.

By the way, it's not Parry, it's Perry. Or if you're adamant about using Japanese characters, which most people viewing this site can't read, 「ペリー」. Also, I believe the definition of a 「デジタル鎖国」 (a "digital sakoku," i.e., a digitally closed country) would be China. Good luck trying to intimidate that country into openness by sending a couple of steam ships into their harbors.

Nikkei management appears worried that links could provide secret passages to content that should be safely behind the paywall, and this fear has led to the new approval policy.

Lol. Maybe they should take a little of the money they're paying their incompetent executives and hire a semi-competent web dev or two who can properly set up their site.

Unless your content is very good and very original, a paywall will just serve to drive people to your competitors. Someone should also tell them that the rest of the world is paying Google to put links to their sites wherever they can, not hide them.

Are there any lawful grounds for this type of demand? It seems to me anyone linking to them is free to entirely ignore this nonsense. It may have some impact on more formal news websites (mostly for reasons of professional courtesy), but the rest of the world may simply not give a toss about it.

It doesn't matter if they don't sue. They can claim the sun and the moon until they actually try to evict someone.

On the other hand, they are perfectly within their rights to refuse to serve the same page to someone with a referral URL that doesn't match their 'approved links' list (as John alluded to in his comment about the link to their linking policy).

Except that Google has rules against serving up different pages to its bot and to everyone else, so they could easily find themselves unindexed if they do it wrong.

As the article even suggested, they *could* have their software check a whitelist of valid referrers, erroring out if a request isn't on the list. But unless they also block blank referrers, you could easily get around that by just selecting the URL in your address bar and pressing Enter (not Refresh, since that would send the same referrer as before).

You can also trivially forge referrers, so somebody could make a Firefox extension that keeps track of websites with referrer filters and automatically rewrites the header when visiting those sites.

Oh, I know all this. I was just answering the question that was asked about how the site would know where a linked visitor was coming from.