"I don't think the report is true, but these crises work for those who want to make fights between people." Kulam Dastagir, 28, a bird seller in Afghanistan

Apple and the Self-Surveillance State - NYTimes.com

Topic: Miscellaneous

1:31 pm EDT, Apr 11, 2015

Paul Krugman in favor of the surveillance state:

First, most people probably don’t have that much to be private about; most of us don’t actually have double lives and deep secrets — at most we have minor vices, and the truth is that nobody cares. Second, lack of privacy is actually part of the experience of being rich — the chauffeur, the maids, and the doorman know all, but are paid not to tell, and the same will be true of their upper-middle-class digital versions. The rich already live in a kind of privatized surveillance state; now the opportunity to live in a gilded fishbowl is being (somewhat) democratized.

Gosh, where do I sign up!

I posted this largely because of its absurd, "let them eat cake" quality, which was also echoed in another recent Krugman column in which he wrote:

There are almost no genuine libertarians in America — and the people who like to use that name for themselves do not, in reality, love liberty.

What an incredibly arrogant thing to say! There are many people involved with libertarianism who've worked hard to preserve individual liberties, and there are many people involved with the left who have authoritarian views associated with their own, personal economic and social interests and don't give a damn about level playing fields other than as a selling point.

In NSA-intercepted data, those not targeted far outnumber the foreigners who are - The Washington Post

Topic: Miscellaneous

10:26 pm EDT, Mar 31, 2015

There have been so many useless Snowden disclosures that I didn't notice this one. It is important, primarily because of the assurances that have been made that all of this data is extremely difficult to access without authorization.

Wikimedia v. NSA: Standing and the Fight for Free Speech and Privacy | Just Security

Topic: Miscellaneous

9:32 pm EDT, Mar 31, 2015

When I first saw this suit I ignored it, but it may have more merit than I originally thought.

the government itself has now acknowledged and confirmed many of the key facts about the NSA’s upstream surveillance, including the fact that it conducts suspicionless searches of the contents of communications for information “about” its targets. These facts fundamentally change the standing equation: now we know that the NSA isn’t surveilling only its targets, but it’s instead surveilling everyone, looking for information about those targets. Finally, the volume of the plaintiffs’ international communications is so incredibly large that there is simply no way the government could conduct upstream surveillance without sweeping up a substantial number of those communications. In short, the plaintiffs in Wikimedia v. NSA have standing because the NSA is copying and searching substantially all international text-based communications, including theirs.

If it's content, it's not metadata, so all the rationalizations about metadata go out the window. We're talking about US-to-foreign traffic. Although the border search exemption is extremely broad, allowing for this would undermine all the rationalizations from the courts over the years that there is some limit to it. What's that leave?

1. Richard Posner's fucked up argument that the 4th Amendment doesn't prohibit robots from watching you because they don't have emotions.

2. The idea that there is a general "intelligence collection" exception to the fourth amendment.

3. The idea that the Constitution requires the exact minimization procedures that happen to be in place. How prescient of them.

On Monday, Just Security published an article by Mike Schmitt titled Preparing for Cyber War: A Clarion Call. It's a great article that highlights a bunch of thorny, unresolved issues in international law that we ought to take the time to sort out before a conflict arises that demands immediate answers. The biggest of these, in my mind, is the question of whether, and when, destruction of data meets the criteria of an armed attack. I think Schmitt is absolutely right here - real world events are going to demonstrate that destruction of data can be significant enough to alter the strategic course of nation states.

One thing that struck me about the narrative of the article is how quickly the possibility of defending a nation against attacks is dismissed:

In kinetic warfare, it is usually possible to eventually develop a counter-measure that deprives a weapon of its effectiveness, at least until development of a counter-countermeasure. For instance, Israel’s Iron Dome has achieved a very high success rate against rockets fired at urban areas. In cyber space, however, such a “fix” with respect to protecting the civilian population is less likely for three reasons. First, malware is very diverse and one size fits all countermeasures are usually unattainable. Second, the general population does not patch and update systems with sufficient frequency and care to reliably protect them from attack. Finally, technical attribution can be very difficult in cyber space, thereby making shooting back problematic.

The article then proceeds to dig into the third point - looking at different ways in which strike back is complicated by attributional factors and the potential for collateral damage. Although those concerns raise a number of great legal questions, which is really the focus of the article, from a practical standpoint in terms of preparedness, I think the first two points demand greater scrutiny as well.

I've spent years designing Intrusion Detection technology, and I don't think the countermeasure situation is necessarily all that different from the kinetic example Schmitt references. A variety of aspects of an attacker's TTPs can be embedded into network signatures, including the vulnerabilities targeted, the malware, and the command and control points and protocols. Part of the trouble is the amount of time it takes to get that information embedded into network defenses (Schmitt's second point). However, that response time could be reduced by building better operational processes that allow threat information shared by the government to be put into production by network operators and managed security service providers in an automated fashion. The more integrated these systems are, the better equipped the government will be to respond rapidly when it's necessary. We need to tighten the OODA loop here. ... [ Read More (0.3k in body) ]
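To make the "automated fashion" point concrete, here's a minimal sketch of the kind of pipeline I have in mind: shared threat indicators rendered directly into Snort-style signatures without a human in the loop. The indicator format, field names, and rule template are my own assumptions for illustration; a real pipeline would consume a standard feed format (e.g. STIX/TAXII) and emit whatever rule language the deployed sensors speak.

```python
# Illustrative sketch: turning shared threat indicators into
# Snort-style rules automatically. The indicator schema below is
# a made-up example, not any real feed format.

def indicator_to_rule(ind, sid):
    """Render one shared C2 indicator as a Snort-style rule string."""
    if ind["type"] == "c2_ip":
        return (f'alert tcp $HOME_NET any -> {ind["value"]} any '
                f'(msg:"C2 traffic: {ind["campaign"]}"; sid:{sid};)')
    if ind["type"] == "dns":
        return (f'alert udp $HOME_NET any -> any 53 '
                f'(msg:"C2 lookup: {ind["campaign"]}"; '
                f'content:"{ind["value"]}"; sid:{sid};)')
    raise ValueError(f'unsupported indicator type: {ind["type"]}')

# A hypothetical government-shared feed (RFC 5737 / example.net values).
feed = [
    {"type": "c2_ip", "value": "203.0.113.7", "campaign": "demo"},
    {"type": "dns", "value": "bad.example.net", "campaign": "demo"},
]

rules = [indicator_to_rule(ind, 1000001 + i) for i, ind in enumerate(feed)]
for r in rules:
    print(r)
```

The point of the sketch is the shape of the process, not the rule syntax: the sooner an indicator can flow from the sharing channel into production sensors without manual translation, the tighter the OODA loop gets.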

I'm amazed by how sensitively people responded to some of the privacy issues. When someone walks into a bar wearing Glass ... there are video cameras all over that bar recording everything.

They STILL don't understand what went wrong with Google Glass!? I'll try to write more about this later, but this has the appearance of a serious cultural/institutional blind spot within Google. They really believe that privacy is irrelevant, and they just can't wrap their heads around evidence to the contrary. It reminds me of that Upton Sinclair quote: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

The problem is that, given the amount of information Google has been entrusted with, their failure to understand what went wrong here means that it may be repeated in other contexts where the stakes are higher.

The whitepaper takes the position that the exchange of exploits and vulnerability information across borders is completely outside the scope of what is controlled by Wassenaar. The whitepaper asserts that:

Exploitation is not concomitant with Intrusion Software nor is vulnerability research necessarily Intrusion Software development.

I'd like to think that's the case, but when I read the Wassenaar text I have trouble reaching the same conclusion. Even if Wassenaar didn't intend to cover vulnerability research, the text they wrote certainly seems to do so. I've come away with the conclusion that the Wassenaar authors may have crafted their policy under an erroneous understanding of how exploitation works.

Wassenaar defines "Intrusion Software" as follows:

"Software" specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', of a computer or network-capable device, and performing... the modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.

Let's expand the part about defeating 'protective countermeasures', as those are also defined specifically in the Wassenaar text:

"Software" specially designed or modified to defeat techniques designed to ensure the safe execution of code, such as Data Execution Prevention (DEP), Address Space Layout Randomisation (ASLR) or sandboxing, of a computer or network-capable device, and performing... the modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.

This seems to be a perfect description of an exploit. In fact, I don't think that I could have written a clearer legal definition for "exploit" if I tried.

An exploit is software that modifies the standard execution path of a program in order to allow the execution of externally provided instructions. These days, most operating systems have countermeasures that are designed to make it difficult to write an exploit. Data Execution Prevention (DEP) and Address Space Layout Randomisation (ASLR) are examples of exploit countermeasures. If you're going to write a successful exploit for a modern operating system, you have to contend with and defeat those countermeasures most of the time.
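The "modification of the standard execution path" in that definition can be illustrated with a toy simulation - this is a Python stand-in, not real exploit code, and the list-based "stack" layout is purely an assumption for illustration. A fixed-size buffer sits next to a slot holding where execution will "return" to; an unchecked copy that runs past the buffer overwrites that slot, so the program ends up running externally chosen code:

```python
# Toy simulation of a control-flow hijack. A real exploit does this
# against a machine stack (overwriting a saved return address) and
# must additionally defeat countermeasures like DEP and ASLR.

def intended():
    return "intended path"

def attacker_code():
    return "externally provided instructions"

def vulnerable_copy(stack, data):
    # Copies with no bounds check: elements beyond the 8-slot buffer
    # spill into the adjacent "return address" slot at index 8.
    for i, item in enumerate(data):
        stack[i] = item

def run(user_input):
    stack = [0] * 8 + [intended]   # buffer + "return address" slot
    vulnerable_copy(stack, user_input)
    return stack[8]()              # "return" through slot 8

print(run(list(b"short")))                 # fits in the buffer
print(run([0] * 8 + [attacker_code]))      # overflows into slot 8
```

Short input leaves the return slot alone and execution follows the intended path; the oversized input overwrites it, which is exactly the behavior the Wassenaar language describes.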

So, most exploits that are being written today meet both of these criteria. They defeat a countermeasure like DEP and then modify the execution path in order to ... [ Read More (1.0k in body) ]

I'm writing you because my understanding is that BIS is currently in the process of considering implementation of the new Wassenaar controls related to "Intrusion Software." These controls have started to raise some concerns within the professional community associated with information security vulnerability research. I asked XXXXXXXXXXXXX who I might reach out to in order to provide some input and he suggested that I start by emailing the two of you.

I appreciate your time in reading this. I have some experience working with the EAR as a technical SME within export compliance programs at IBM and Internet Security Systems, and I have a great deal of professional experience with security vulnerability research and coordination, so I believe I have sufficient experience to provide you with an informed perspective.

Although there are a number of different concerns that have been raised regarding these new controls, I want to focus my comments specifically on the Category 4.E.1.C controls on "technology" for the "development" of "intrusion software." I don't believe that the potential unintended consequences of the technology controls in particular have received enough emphasis in the comments that I have read to date by other parties.

Computer security professionals use the word "vulnerability" to refer to a flaw in a software system which allows another program, such as an "intrusion" program, to modify "the standard execution path of a program or process in order to allow the execution of externally provided instructions." A great deal of the work that we do in information security has to do with finding and fixing these vulnerabilities, and that work involves getting information about newly discovered vulnerabilities into the hands of people who are in a position to fix them before that information falls into the hands of computer criminals. The exchange of information about these vulnerabilities is the lifeblood of information security, and that exchange often happens behind closed doors, across international borders, and sometimes, in exchange for money.

Unfortunately, the technical information that you would provide another person about a security vulnerability if you wanted them to fix it is the exact same information that you would provide them if you wanted to enable them to write an "intrusion program" that exploits it. In fact, one of the jobs that I personally held at IBM and Internet Security Systems was to take information about vulnerabilities that was provided to us and use that information to implement a corresponding "intrusion program" so that we could verify that the vulnerability had been fixed properly.

Therefore, an export control on "technology" for the "development" of "intrusion software" may wind up also controlling the exchange of information needed to fix the flaws that "intrusion software" takes advantage of. Any export control regime that d... [ Read More (0.5k in body) ]

The objective of counter-extremism messaging should be to dissuade people from supporting violence, not to defend policy choices made by lawmakers and politicians. This messaging is best done by non-government actors,

This might be the single most intelligent thing I've read on counter terrorism since 9/11.

We've engaged in mountains of bullshit - preemptive wars, torture chambers, totalitarian surveillance. There is very little evidence that any of it is effective, and it's all stuff we should have known wasn't going to work.

What people want is "pre-crime." But "pre-crime" is by definition not criminal, and so it's something that law enforcement simply isn't equipped to deal with.

This is more like suicide counseling than law enforcement. Instead of identifying at-risk individuals and throwing them in dungeons, you identify at-risk individuals and you help them make better choices.

“Yup, in today’s inverted-neocon Left dumbery, it’s assumed you’re a *reactionary* if you care about sub-Saharan African victims of Arab/Muslim religious jihadis…It goes something like this: The US is the most powerful on the planet, and power is evil. So anything at all that is anti-American is good because it’s fighting Power; anything that distracts from that is evil; and anything that America professes to care about is even eviler, because of America’s monstrous hypocrisy.

“It makes you dumb just writing that down, but it’s Assange’s worldview and it’s pretty much the dominant Left’s as well.”

Sometimes it helps to keep in mind that most people just don't understand how to tell right from wrong, and nearly everyone is lying to them about it - but they are lies that they want to believe.

Introducing information sharing proposals with broad liability protections, increasing penalties under the already draconian Computer Fraud and Abuse Act, and potentially decreasing the protections granted to consumers under state data breach law are all unnecessary and unwelcome.