from the EXPORTING-OUR-STUPIDITY dept

PayPal is ubiquitous. And that's unfortunate. Over the years, the payment platform has earned a reputation for acting in a way that can charitably be described as "hellishly inconsistent." For little to no reason, users have found their accounts shut down or suspended. And, thanks to US laws meant to prevent the PayPal-ing of material support to foreign terrorists, PayPal has been suspending accounts for innocuous payments containing certain trigger words in the descriptions.

A community newspaper's payment to enter a feel-good story about a family of Syrian refugees in an awards competition prompted PayPal to freeze the account of a national media organization after flagging the suspicious transaction, The Canadian Press has learned.

[...]

The weekly Flin Flon Reminder entered the article — titled "Syrian family adapts to new life" — last month as part of its submissions to the annual Canadian Community Newspaper Awards. The feature story from July 2016 outlines the challenges and triumphs as the family settled in the Manitoban town of 5,100 and the community's willingness to make them feel welcome.

The word "Syrian" set off PayPal's auto-monitor, which blocked the Flin Flon Reminder's $240 in entry fees. (To be considered for the awards, submitters must pay $60 per article submitted -- and it would appear Flin Flon submitted four of them.)

It would be one thing if the payment were flagged and then reviewed. But nothing in the story suggests PayPal took a second look at this until a larger media outfit -- the CBC -- started asking questions.

PayPal didn't limit itself to killing the sender's account. It suspended the receiver's account as well.

This week, Durnin called News Media Canada — formerly Newspaper Canada — to find out what had happened. They realized PayPal had frozen the News Media Canada account, said Nicole Bunt, who processes the awards entries.

PayPal supposedly reviews flagged payments within 72 hours. No one involved heard anything from PayPal until after the CBC's inquiries. The belated response from PayPal: "Um... US law mumble mumble mumble."

"You may be buying or selling goods or services that are regulated or prohibited by the U.S. government," PayPal said in an email to News Media Canada.

Oh, really? This is some spectacular review work by PayPal, considering both the sender and the receiver are located entirely in Canada. While US law may govern US transactions processed by the company, it should have little to no effect on completely extraterritorial transactions.

And the sole reason for PayPal's dual account nuking? The word "Syrian" being in the submission to the newspaper awards.

The note also requested a "complete and detailed explanation of the transaction" and the purpose of the payment, which was identified by the story's headline.

That's the problem with keyword flagging. All it's ever going to do is produce false positives and inconvenience hundreds of non-terrorists. The algorithms deployed by PayPal are looking for terms no terrorist is going to use when transferring funds to allies. It works on the stupidest of assumptions: that memo lines are going to be filled with suspicious keywords when actual nefarious transactions are taking place.
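To make the flaw concrete, here's a toy sketch of the kind of naive memo-line screening described above. The watchlist and memo lines are invented for illustration -- this is not PayPal's actual system -- but it shows how the same payment trips the filter or sails through depending entirely on wording:

```python
# Toy keyword flagger: screen a payment's memo line against a watchlist.
# WATCHLIST and the sample memos below are invented for this example.
WATCHLIST = {"syrian", "hezbollah", "isis"}

def flag_payment(memo: str) -> bool:
    """Flag a payment if any watchlist term appears in its memo line."""
    words = memo.lower().split()
    return any(w.strip(".,:\"'") in WATCHLIST for w in words)

# The innocuous news-story headline gets flagged...
print(flag_payment("Awards entry: Syrian family adapts to new life"))  # True
# ...while the same payment, blandly reworded, sails through.
print(flag_payment("Four article submissions, $60 each"))  # False
```

An actual bad actor just writes the second memo; the filter only ever catches people with no reason to hide.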

If you're going to build a US law-compliant service that relies on tragically flawed logic, the least you can do is actually review flagged transactions in a timely manner and provide actual people customers can talk to, to sort out these issues.

Instead, PayPal appears to have left this payment-vetting process to the machines and made it all but impossible to speak to someone who might be able to derive something from context. And it makes it worse by subjecting other countries to US law, whether or not the flagged transaction violates laws in the country where the funds are changing hands.

Then there's this kicker at the end of the CBC article.

PayPal did not immediately explain its process.

Yeah. Or EVER. That's the other problem. Go ahead and CYA by flagging keywords and keeping your Terms of Use vaguely written and open to often-baffling interpretations. But do your customers a favor and at least answer questions about the specifics of their flagged transactions. At the very least, it would show some human has eyes on the process. If you can't be proactive, at least be usefully reactive.

from the horrible-people dept

You probably recall that during the recent and ongoing Syrian refugee crisis, Petra Laszlo, a camerawoman for Hungarian news outlet N1, was recorded tripping refugees and kicking their children as they ran for their lives across the Hungarian border. Laszlo was ultimately fired by her employer, and initially "apologized" for her behavior by trying to claim that she wasn't an unnecessarily angry racist; she simply tripped and kicked refugees because she thought she was being attacked:

"The camera was shooting, hundreds of migrants broke through the police cordon, one of them rushed to me and I was scared," she wrote. "Then something snapped in me … I just thought that I was attacked and I have to protect myself. It's hard to make good decisions at a time when people are in a panic."

Except in this new video-crazed era, we're all simply more accountable, and all of the photos and videos taken of her that day pretty clearly show her being an absolutely legendary, insufferable asshole:

Apparently not content to quietly go down in history as arguably one of the worst people currently on the planet, Laszlo has now declared that she intends to file two lawsuits once her trial is over (Hungarian authorities have taken her to court for disturbing the peace). Laszlo says one lawsuit will be filed against Facebook for failing to take down the oceans of well-deserved criticism she received after the incident, and one will be filed against the refugee she kicked who has since started a new life in Italy:

"Laszlo told Izvestia that she plans to sue Facebook for allegedly refusing to remove threatening groups on the site while deleting groups that supported her. She has also directed her anger towards Osama Abdul Mohsen, one of the Syrian refugees she kicked, and says she plans to sue him. 'He changed his testimony, because he initially blamed the police,' Laszlo said, though she can be clearly seen in two different videos kicking him. 'My husband wants to prove my innocence. For him, it is now a matter of honor.'"

And really, what's more honorable than kicking and tripping children, then suing their families for good measure?

from the game-theory dept

Attention news agencies of Planet Earth. This is an all points bulletin for your benefit: stop passing off video game footage as real-life happenings. Yes, what seems like a thing that shouldn't be able to happen has actually happened several times in the past, from video game footage passed off as a terrorist attack to state news agencies passing off video game footage as a potential threat to a nation's enemies. Some nations appear to even be trying to take advantage of it all, such as when Russia tried to sucker world news groups into thinking that video game footage of a weapons cache was proof that America is arming Ukrainians. And, yet, it keeps happening.

The latest case is an Egyptian news agency bizarrely using footage from a Russian-made video game, Apache: Air Assault, published by Activision and featuring English-speaking characters, to proclaim Russian dominance against ISIS in Syria.

Now, I realize there are cultural and linguistic barriers here, but it shouldn't be terribly hard to understand that the voices in that footage are speaking English. And, though video games are becoming more realistic by the day, the footage and audio here are still video-game-ish enough that it's fairly easy to identify them as such with just a few minutes' watching. And yet, anchor Ahmed Moussa had this to say before airing the footage.

"Yes, this is Russia; this is the Russian army. This is Putin," he said. "This is the Russian federation. Are they confronting terrorism? Yes, they are. The Americans were too soft on ISIL. The US has been there for a year and a half, and we have seen not one bullet from them, nor have we seen anyone getting killed by them."

I'll give Moussa points for originality. After all, it's not every day you hear lamentations from the Middle East that Americans just aren't killing enough people.

from the https-matters dept

Norwegian writer Mette Newth once wrote that: "Censorship has followed the free expressions of men and women like a shadow throughout history." As we develop new means to gather and create information, new means to control, erase and censor that information evolve alongside it. Today, that means access to information through the internet, which motivates us to study internet censorship.

Organizations such as Reporters Without Borders, Freedom House, or the Open Net Initiative periodically report on the extent of censorship worldwide. But as countries that are fond of censorship are not particularly keen to share details, we must resort to probing filtered networks, that is, generating requests from within them to see what gets blocked and what gets through. We cannot hope to record all the possible censorship-triggering events, so our understanding of what is or isn't acceptable to the censor will only ever be partial. And of course it's risky, even outright illegal, to probe the censor's limits within countries with strict censorship and surveillance programs.
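A minimal sketch of what such probing can look like, under stated assumptions: the test URLs would be supplied by the researcher, the block-page markers below are invented placeholders, and real measurement platforms are far more careful about vantage points and false positives. Connection resets and timeouts are treated as their own signal, since many censors block by dropping traffic rather than serving an explicit block page:

```python
# Hypothetical censorship probe: fetch each test URL from inside the
# filtered network and classify the response. Markers are placeholders.
import urllib.request
import urllib.error

BLOCK_MARKERS = [b"access denied", b"this site is blocked"]  # assumed block-page text

def fetch(url: str, timeout: float = 10.0):
    """Return the response body, or None if the request failed outright."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (urllib.error.URLError, TimeoutError):
        return None  # resets/timeouts are themselves a censorship signal

def classify(body) -> str:
    """Classify a fetched body as reachable, block-paged, or unreachable."""
    if body is None:
        return "unreachable (possible reset/timeout blocking)"
    if any(marker in body.lower() for marker in BLOCK_MARKERS):
        return "block page served"
    return "reachable"
```

Comparing these classifications against the same URLs fetched from an uncensored vantage point is what separates "blocked by the censor" from "just down."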

This is why the leak of 600GB of logs from hardware appliances used to filter internet traffic in and out of Syria is a unique opportunity to examine the workings of a real-world internet censorship apparatus.

Leaked by the hacktivist group Telecomix, the logs cover a period of nine days in 2011, drawn from seven SG-9000 internet proxies. The sale of equipment like this to countries like Syria is banned by the US and EU. California-based manufacturer Blue Coat Systems denied making the sales but confirmed the authenticity of the logs – and Dubai-based firm Computerlinks FZCO later settled on a US$2.8 million fine for unlawful export. In 2013, researchers at the University of Toronto's Citizen Lab demonstrated how authoritarian regimes in Saudi Arabia, UAE, Qatar, Yemen, Egypt and Kuwait all rely on US-made equipment like those from Blue Coat or McAfee's SmartFilter software to perform filtering.

This technology is extremely powerful, as it can perform deep-packet inspection – that is, examining in detail the contents of network traffic. The appliances provide censors with a simple interface to fine-tune filtering policies, practically in real time.

Inside a censor's mind

Internet traffic in Syria was filtered in several ways. IP addresses (the unique addresses of web servers on the internet) and domain names (the URL typed into the address bar) were filtered to block single websites such as badoo.com or amazon.com, entire network regions (including a few Israeli subnets), or keywords to target specific content. Instant messaging, tools such as Skype, and content-sharing sites such as Metacafe or Reddit were heavily censored. Social media censoring was limited to specific content and pages, such as the "Syrian Revolution" Facebook page.

The appliances were sometimes misconfigured, meaning the filter caused some collateral damage – for instance, all requests with the keyword "proxy" were blocked, probably in an effort to curb the use of censorship-evading proxies, but this also had the effect of blocking adverts and certain plug-ins that had no relation to banned content.

We found that Syrian users did try to get around the filters, using tools such as Tor, or virtual private networks (encrypted tunnels between two computers using the public internet), and that these were fairly effective. We also noticed that some tools not necessarily designed with circumventing censorship in mind could also be used to access blocked content – for example using peer-to-peer programs such as BitTorrent to download blocked software (such as Skype) and using Google Cache to access (over HTTPS) cached and mirrored versions of blocked web pages.

Avoiding the censor's knife

What emerges is the importance of encrypting web traffic by using secure (HTTPS) rather than standard (HTTP) web browsing. Many requests caught by the filter were only possible because keywords in the content of unencrypted network traffic could be read by the appliances. If traffic is encrypted, neither the page requested from the target domain nor specific keywords in the request are accessible to the filter. Through their efforts to enforce HTTPS by default, providers like Google and Facebook are taking steps in the right direction. They also serve a double purpose: protecting users' privacy against mass-surveillance, while also making it harder to implement fine-grained censorship policies.
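A toy model of that difference, with invented request strings: under plain HTTP the full request line, path and keywords included, crosses the wire in cleartext for a middlebox to read, while under HTTPS (with SNI, as commonly deployed) essentially only the server's hostname is exposed:

```python
# Toy model of what a deep-packet-inspection box can read on the wire.
# The scheme/host/path values below are invented for illustration.
def visible_to_censor(scheme: str, host: str, path: str) -> str:
    """Return the request metadata a passive middlebox could observe."""
    if scheme == "http":
        # The entire request line and headers travel in cleartext.
        return f"GET {path} HTTP/1.1\r\nHost: {host}"
    # HTTPS: the TLS ClientHello exposes the server name (SNI), little else.
    return f"SNI: {host}"

print(visible_to_censor("http", "example.org", "/syrian-revolution"))
print(visible_to_censor("https", "example.org", "/syrian-revolution"))
```

In the HTTP case a keyword filter can match on "/syrian-revolution"; in the HTTPS case it can only block the whole domain or nothing, which is exactly why encryption blunts fine-grained censorship.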

We did consider that our work might help organizations on both sides of the censorship line. But we decided to publish because we believe that evidence-based analysis of censorship practices can help understand the underlying technologies, policies, strengths and weaknesses – and can inform the design of tools designed to evade the censor's knife.

While Western countries rely on export regulations and sanctions to restrict the worldwide availability of surveillance and censorship technologies – while apparently deploying them for their own use, as the Snowden files have revealed – it is time we had an open debate about their effectiveness and what can be done to limit their proliferation.

Emiliano De Cristofaro does not work for, consult for, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations. This article was originally published on The Conversation.
Read the original article.

from the rules?-fuck-the-rules dept

When the US finally set up some "rules" for its extrajudicial killing-via-drones (after years of no rules at all, which allowed the CIA to "acquire a taste for killing people with drones"), one of the "rules" was that drone bombs wouldn't be used unless there was a "near-certainty that no civilians will be killed or injured." As President Obama noted, this was "the highest standard we can set" to avoid civilian casualties via drones. This high standard upset some bloodthirsty hawks like Rep. Mike Rogers, who saw things like actually trying to prevent civilian casualties as unnecessary "red tape." And, in fact, soon after the rules were in place, the Obama administration itself started realizing that it didn't really like the restrictions it put on itself.

So it's just going to ignore them. Last week, we wrote about how the administration has been redefining pretty much everything to justify the attacks on Syria, including what is meant by "civilian." However, even with that new definition, they've run into some very obvious problems: namely that there's increasing evidence that (despite repeated denials) the bombings did, in fact, kill civilians.

No problem, apparently, for the Obama administration, which has now decided that the very rules it set up in the past to avoid killing civilians with drones... no longer matter. Basically, it looks like the Obama administration just added a big fat asterisk to the "near-certainty" standard for civilian deaths, whereby those rules can be ignored... because the Obama administration says "this is different."

At the same time, however, Hayden said that a much-publicized White House policy that President Obama announced last year barring U.S. drone strikes unless there is a “near certainty” there will be no civilian casualties — "the highest standard we can meet," he said at the time — does not cover the current U.S. airstrikes in Syria and Iraq.

The “near certainty” standard was intended to apply “only when we take direct action ‘outside areas of active hostilities,’ as we noted at the time,” Hayden said in an email. “That description — outside areas of active hostilities — simply does not fit what we are seeing on the ground in Iraq and Syria right now.”

It's not much of a rule when you can exempt it based on... deciding to exempt it.

from the because-of-course-it-was dept

You may recall that, back in 2012, Syria suddenly dropped off the face of the internet. It actually happened twice. There was all sorts of speculation about how it happened.

At the time, Cloudflare's analysis was one of the most thorough, noting that it almost certainly "was done through updates in router configurations" rather than a physical failure or a cable cut or something. Of course, everyone assumed that it was the Syrian government, trying to cut off access to the outside world.

One day an intelligence officer told him that TAO—a division of NSA hackers—had attempted in 2012 to remotely install an exploit in one of the core routers at a major Internet service provider in Syria, which was in the midst of a prolonged civil war. This would have given the NSA access to email and other Internet traffic from much of the country. But something went wrong, and the router was bricked instead—rendered totally inoperable. The failure of this router caused Syria to suddenly lose all connection to the Internet—although the public didn't know that the US government was responsible. (This is the first time the claim has been revealed.)

Inside the TAO operations center, the panicked government hackers had what Snowden calls an “oh shit” moment. They raced to remotely repair the router, desperate to cover their tracks and prevent the Syrians from discovering the sophisticated infiltration software used to access the network. But because the router was bricked, they were powerless to fix the problem.

Fortunately for the NSA, the Syrians were apparently more focused on restoring the nation’s Internet than on tracking down the cause of the outage. Back at TAO’s operations center, the tension was broken with a joke that contained more than a little truth: “If we get caught, we can always point the finger at Israel.”

Thus, it appears that Cloudflare's speculation that it was done through a router update was entirely correct -- just that no one realized it was the NSA updating the routers, rather than the Syrians.

from the i'm-sure-that-won't-be-abused-at-all... dept

Update: It appears that this story was misreported by a few sources, and the flames were fanned by UK government comments about censoring videos. YouTube has a program that lets trusted sources more easily flag videos, which are then reviewed fairly quickly by YouTube staff. However, these videos still get reviewed to see if they violate any of YouTube's terms of service, rather than being automatically pulled down. It's still concerning that the UK government seems to think that it should censor content that even it admits is not illegal, but it doesn't appear that YouTube is actually letting the UK government censor videos.

A few years ago, then-Senator Joe Lieberman went on a bizarre anti-free speech crusade against YouTube, arguing that by allowing "terrorists" to post videos to YouTube, people were watching those videos and magically turning into terrorists. Because YouTube videos are just that powerful. Given the public shaming, Google actually caved in and started banning "terrorist" videos. Of course, how do you define a "terrorist" video? The fact is we just don't know, and that's evidenced by the fact that Lieberman's efforts resulted in videos from a Syrian watchdog organization being taken down as terrorism -- when they were really reporting on the atrocities of that country's government. If anything, you'd think this would be a clear warning about the perils of trying to censor "terrorist" videos. You're going to get it wrong, and often block important and newsworthy videos.

But... instead it appears that this effort is only ramping up, and unfortunately, YouTube seems to be helping. Over in the UK, where the government has been gradually censoring more and more of the internet over the past few years, Google has apparently agreed to give the UK government broad powers to "flag" videos they argue are bad, even if they're not illegal. Ostensibly, the goal is to block videos that "proliferate jihadi material."

The YouTube permissions that Google has given the Home Office in recent weeks include the power to flag swaths of content “at scale” instead of only picking out individual videos.

They are in part a response to a blitz from UK security authorities to persuade internet service providers, search engines and social media sites to censor more of their own content for extremist material even if it does not always break existing laws.

And the UK government even admits that the videos it will be taking down are not illegal:

The UK’s security and immigration minister, James Brokenshire, said that the British government has to do more to deal with some material “that may not be illegal, but certainly is unsavoury and may not be the sort of material that people would want to see or receive”.

Of course, that kind of statement shows the program is wide open to abuse. The sort of material people would not want to see or receive? Well, then they just don't watch it. Besides, who gets to decide what people would not want to see? Because there's lots of important content that a government might not want its citizens to see, but which is kind of important to a functioning democracy and open society.

While I'm sure the pressure from the government here was quite strong, it's upsetting to see Google cave in to these kinds of requests. Giving the UK government a giant "censor this video" button seems like exactly the wrong approach.

from the that's-a-problem,-isn't-it? dept

The Atlantic is covering the fact that Syrian loyalists have been able to abuse Facebook's "abuse" policy to make various Syrian opposition Facebook pages disappear based on very questionable means. A number of activist pages have simply been wiped out, and even when the folks behind those pages appeal, the appeals get denied. Facebook has, effectively, wiped out much of the key information source for Syrian opposition activists. And this comes even as Facebook itself touted the fact that people in places like Syria were using Facebook to speak up and make themselves heard:

By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored.

Well, until Facebook deletes those very pages based on questionable reasons. An example is given of a photo of a man sitting in a chair with a young child on his lap. The man turns out to be an activist who was later killed -- but for reasons known only to Facebook censors, the entire post was deleted by Facebook, claiming that it violates their policies.

While the caption notes that the guy was killed by "thugs" it's hard to see how that violates Facebook's policies. This gets at a point that we've been concerned about for quite some time. When you rely on someone else's platform for your speech, you're entirely at the mercy of their terms of service. People use Facebook because it's easy to connect with others and build communities, and that has value, but you're risking having that speech disappeared.

This is why it's often important for people to have platforms that they themselves control -- though even then there are points of weakness and attack. You can host your own site, but people will go after upstream providers, including hosting companies and registrars. And service providers who have more open policies get hounded into creating "abuse" policies that appear to make sense at first... even though those abuse policies themselves are open to abuse.

For example, there were plenty of really good reasons why Twitter beefed up its abuse policies last year, after a bunch of people had very legitimate complaints about how it was dealing with incredibly abusive behavior on Twitter. But, of course, it's that kind of "abuse policy" that itself is now being abused by those in Syria seeking to stifle dissent.

And that's where this gets so tricky. When we see people use these platforms in such abusive ways, it's quite natural to want to see policies in place that let those abusive actions be stopped and taken down. But with such a process in place, you're almost guaranteeing that it will be abused as well, and legitimate speech -- such as that of these Syrian activists -- gets removed and deleted (including important historical documentation and discussions that are now gone forever).

from the aiding-the-enemy dept

Between Syria, Cuba, Iran and Sudan, Americans bothering to pay attention to the world around them are becoming increasingly familiar with how we sanction other countries and the intricacies of those sanctions. The intersection of sanctions and technology tends to revolve around the export of hardware, software, and services to nations with regimes we don't particularly care for. All of these sanctions are typically designed to achieve one or both of the following goals: altering the behavior of the regime in question and/or encouraging the people of that nation to rise up against the regime by making everyone completely miserable.

With that in mind, we can now conclusively say that at least some of the tech sanctions levied against some countries are completely useless and should be done away with, namely those that intermittently punish the people of Syria, Iran, Cuba and Sudan, preventing their people from accessing open education platforms.

Coursera, which according to its site aims “to change the world by educating millions of people by offering classes from top universities and professors online for free,” is now subject to a recent directive from the US federal government that has forced some MOOC (Massive Open Online Course) providers to block access for users in sanctioned countries such as Iran and Cuba. Coursera explains the change in its student support center:

"The interpretation of export control regulations as they related to MOOCs was unclear for a period of time, and Coursera had been operating under one interpretation of the law. Recently, Coursera received a clear answer indicating that certain aspects of the Coursera MOOC experience are considered ‘services’ (and all services are highly restricted by export controls). While many students from these countries were previously able to access Coursera, this change means that we will no longer be able to provide students in sanctioned countries with access to Coursera moving forward."

While updates to this post suggest that connectivity to Syrians has been reestablished, that isn't so with regard to countries like Iran, Cuba or Sudan. Examine for a moment the practical application of this kind of sanctioning. The US has identified a regime we do not like, which will almost by definition be relatively well-educated, affluent, and powerful. That regime oppresses its people. To combat this, part of our sanctions policy is designed to prevent the oppressed people from accessing educational services that would offer a ladder towards the educational standards enjoyed by the offending regime. Knowledge is power, of course, and the ability to learn about the world outside of the pens in which these countries have placed their own people is a tool that could be used to encourage change in these countries. Don't take that from me, take it from the governments in those nations which are quite busy censoring the internet out of fear of their people becoming more educated. And now the US is essentially joining the censoring party, too.

In September 2011, the Electronic Frontier Foundation called on the US to lift all restrictions “that deny citizens access to vital communications tools.” But the US has continued its piecemeal approach, going back and forth between blocking new ranges of transactions to allowing the export of certain services.

“These sorts of export restrictions are overbroad and contain elements which have no effect on the Syrian regime, while preventing Syrian citizens from accessing a wealth of tools that are available to their activist counterparts in neighboring countries and around the world,” EFF stated.

Likewise in Iran, Cuba, and Sudan. A common complaint one sometimes gets from peace activists is that sanctions should be lifted because they don't hurt the regime, only the innocent civilians. That complaint is usually moot, because oftentimes the entire point is to hurt the citizens to breed unrest building towards revolution. But in this case, the harm is repressing the capacity for change, and therefore serves no purpose. Even beyond the humanist concept of exporting information and education as a simple matter of human rights, these sanctions can only have the opposite effect of their intention.

Fortunately, Coursera appears to have a genuine interest in spreading education and, as it did with the Syrian issue, appears to want to work with the US government to get around these outdated sanctions.

Coursera ended the announcement of the changes that prevent access to their courses in sanctioned countries with the following note: “We value our global community of users and sincerely regret the need to take this action. Please know that Coursera is currently working very closely with the U.S. Department of State and Office of Foreign Asset Control to secure the necessary permissions to reinstate site access for users in sanctioned countries.”

If the US government has any interest in their sanctioning policies beyond using them as some kind of penis-measuring contest, they'll act quickly to give Coursera the ability to export education to the nations of oppressed people, otherwise known as the places where it is most sorely needed.

from the read-slower dept

There have been plenty of complaints about people who jump to conclusions too quickly online, but apparently at times that can actually have a material impact on things. Earlier today, the Israeli Defense Forces (who have been quite active on Twitter) put out a tweet commemorating the famed Yom Kippur war of 1973, in which Israel bombed Syria:

The tweet may have been somewhat ill-timed, and poorly thought-out, given the current "heightened tensions" with Syria. Either way, apparently some oil traders either skipped over or didn't understand the hashtag reference to "YomKippur73" (and the reference to the Soviet Union -- a country that hasn't existed in decades) and interpreted the tweet to mean that Israel was bombing Syria today. And, in response they started bidding up the price of oil. Because twitchy commodities dealers apparently do their pricing based on tweets.

Of course, the article notes that traders eventually realized their mistake... but the price of oil stayed up, rising over a dollar from $110.40 a barrel to $111.50 a barrel, and then continuing to rise a bit (though more slowly) after the mistake was understood. Isn't it great that key pieces of the global economy can change based on some people totally misreading a tweet? Makes me feel so comfortable about the state of the world today.