Posted by kdawson on Tuesday December 14, 2010 @08:36PM
from the all-your-vpn dept.

Aggrajag and Mortimer.CA, among others, wrote to inform us that Theo de Raadt has made public an email sent to him by Gregory Perry, who worked on the OpenBSD crypto framework a decade ago. The claim is that the FBI paid contractors to insert backdoors into OpenBSD's IPSEC stack. Mr. Perry is coming forward now that his NDA with the FBI has expired. The code was originally added ten years ago, and over that time has changed quite a bit, "so it is unclear what the true impact of these allegations are" says Mr. de Raadt. He added: "Since we had the first IPSEC stack available for free, large parts of the code are now found in many other projects/products." (Freeswan and Openswan are not based on this code.)

No, I didn't even know about that, but it was an interesting read. I just base that on the value that a backdoor would have. Imagine being able to spy on people who don't want anyone listening. It's just so valuable that I'm sure they would try very hard to get in on it.

Of course... your comment serves to underscore the importance of open source. While GP noted that it *should* have been caught in OpenBSD, at least the potential for it to have been caught was there. If it's in Linux as well, we'll know very soon, since it's reasonably certain that people are looking now. If it's in MS products... well, that's something we'll never know.

They didn't, but they wanted to. Secret foreign relations were something they thought characterised European autocracies. Later, President Wilson, in his Fourteen Points for peace, singled out secret diplomacy as a practice dangerous to peace.

Actually no, I was referring to the fact that the NSA helped in the development of Windows XP, Vista and 7... all publicly. It's not even a secret. They were also involved privately in 95 and 98.

Is Google really that hard to use? http://www.computerworld.com/s/article/9141105/NSA_helped_with_Windows_7_development

"Working in partnership with Microsoft and elements of the Department of Defense, NSA leveraged our unique expertise and operational knowledge of system threats and vulnerabilities to enhance Microsoft's operating system security guide without constraining the user to perform their everyday tasks, whether those tasks are being performed in the public or private sector," Richard Schaeffer, the NSA's information assurance director, told the Senate's Subcommittee on Terrorism and Homeland Security yesterday as part of a prepared statement.

This is not the first time that the NSA has partnered with Microsoft during Windows development. In 2007, the agency confirmed that it had a hand in Windows Vista as part of an initiative to ensure that the operating system was secure from attack and would work with other government software. Before that, the NSA provided guidance on how best to secure Windows XP and Windows 2000.

Oh, my sides! I guess that was an epic FAIL for the NSA then? (Either that, or Windows might actually have been more vulnerable to attack without their help.)

People always forget about the second mission of the NSA - securing the government computing infrastructure. That's why they cough up stuff like SELinux, or their hardening manuals.

Putting a backdoor into Windows would be stupid unless you at the same time make sure there is a backdoor-free version for government use. Ensuring that would mean no government office could ever buy Windows off the shelf; it would all have to be coordinated centrally. With an operation that size, I'm not sure you could keep it a secret.

Well, they can just build in backdoors to which only they have keys, and keep it secret.

They are a secret service. They know (from their own painful experience) that secrets do not stay secret forever, and the more people know about one, the less so.

Seriously, sending an agent with a lockpick set is several orders of magnitude cheaper than creating a secret cryptographic backdoor. I'm very certain the NSA is no stranger to every trick in the book. I do, however, think that they are too smart to do the obvious thing.

Ah, the old NSA DES conspiracy theory. The NSA suggested two changes to DES: 1) shorten the key, 2) change the S-boxes. They gave no public explanation for the latter, and for years the story was that this somehow introduced a backdoor into the algorithm. The truth came out over a decade later:

"Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes. According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret."

Except that in this case it's not so easy to audit. Only experts are likely to understand the changes that were put in, and even they probably won't be able to spot it immediately. I.e., a slight tweak to some table of numbers used by the encryption that makes it easy to decode.

Actually it would likely be harder. In the case of OSS, all you have to do is get people to contribute to the code. The FBI doesn't really have to be sneaky about it at all, other than having the people not reveal who they work for. They could even lie about who they are, as it is all done over the net anyhow. If it gets discovered, well, no big deal really. I mean, it is free and open; nobody made them accept those contributions. There are no legal problems that I can see.

In the case of a company, you have to either subvert or plant employees there. Doing that without a court order would be illegal. It also has to go on undetected, of course, and that is much harder since the employee works physically at the company. Then there's the problem that if it becomes known, you may have a lawsuit on your hands, or congressional inquiry, and so on. Big companies wield a lot of power and would likely not be amused in the slightest.

However what the GP is really saying overall is that if this turns out to be true (please note I am doubtful of that) it shows a weakness in the "many eyes" idea. That mantra is repeated over and over by OSS advocates almost like an incantation, that because something is open it means that all sorts of people are looking it over and there won't be anything evil in it. That is not the case, of course. Some OSS stuff is well audited, some is not. If this proves to be true it would show that even the pretty well audited stuff is not immune, that just having the source out in the open is not enough to guarantee security.

Zug, Switzerland. For four decades, the Swiss flag that flies in front of Crypto AG has lured customers from around the world to this company in the lake dis- [words missing] most sensitive diplomatic and military communications value Switzerland's reputation for business secrecy and political neutrality. Some 120 nations have bought their encryption machines here.

But behind that flag, America's National Security Agency hid what may be the intelligence sting of the century. For years, NSA secretly rigged Crypto AG machines so that U.S. eavesdroppers could easily break their codes, according to former company employees whose story is supported by company documents.

"The recent incident of "backdoors" in Microsoft software is indicative of a fundamental problem that electronic commerce will need to address very soon," Jerry Harold, president & co-founder of NetSec [...] Even if Microsoft has stringent internal requirements for software assurance, it's very difficult to catch a backdoor that may be hidden by a single coder deep inside hundreds of thousands of lines of code," said Harold"This is why NetSec builds its products on an operating system (OpenBSD) that has made security its number one goal," Harold told SOURCES. "The source for the operating system was re-built from the ground up for security and is publicly available. As a result, it is continuously subjected to rigorous security review by independent software engineers around the world. This has additional benefits because secure code often tends to be well designed, stable, and efficient."

They're still not even sure if the backdoor still works - the code gets edited often, and the subtle tricks that backdoors rely on can break quite easily that way.

And it's not like closed-source would be any better - then, the FBI can just pay the company to slip one in. I'm not worried about my OpenBSD box - it's already far more secure than my Windows rigs are. Hell, I haven't even bothered updating it in years - it's still running 3.6.

Basically, the idea is that bits of the key leak. And how is this accomplished?

For example - if a key bit is 0, you take one code path, if 1, another. Make the two paths different lengths. It may be possible to affect packet timing. Or... A function may end with "x - y" and then return "z". No leak? Not so clear, the carry/borrow may be leaking information to the caller (on x86 style hardware).

Anyway, it probably isn't a "back door"; some means of determining enough key bits to make brute force practical is enough. And this sort of thing can be subtle. It can even be based on the machine code generated for certain sequences by a particular compiler (the "x - y; return z" sequence above, for example).
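A minimal C sketch of the class of leak being described, not taken from any real stack: the branch-on-key-bit idea above generalizes to any secret-dependent early exit, the classic textbook instance being a comparison that stops at the first mismatching byte.

/* Hypothetical illustration only: an early-exit comparison whose running
 * time depends on secret data, next to a constant-time version that does not. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Leaky: returns as soon as a byte differs, so timing reveals how many
 * leading bytes of the guess were correct. */
static int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time: always touches every byte, decides only at the end. */
static int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

int main(void)
{
    const uint8_t secret[4] = { 0xde, 0xad, 0xbe, 0xef };
    const uint8_t guess[4]  = { 0xde, 0xad, 0x00, 0x00 };

    printf("leaky: %d, constant-time: %d\n",
           leaky_equal(secret, guess, 4), ct_equal(secret, guess, 4));
    return 0;
}

An auditor reading either function sees correct behaviour; only the timing differs, which is why this kind of defect is so hard to spot in review.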

No, but it was part of the post-Wassenaar agreement (Dec. 1998) that de-weaponized open source crypto. 10 years ago would have been around OpenBSD 2.8 (12/1/2000) which introduced AES and was the first release after the expiration of the RSA patent.

Why engage in mass speculation? Check out the code from the time period in question and audit it for a back door. I don't know why everyone should get up in arms over an allegation that may very well be unfounded.

You have to remember that something like that wouldn't be in the code with a /* evil shit goes here */ comment before it. To have survived it would need to be well hidden. The idea that you can just look at code and find problems is false. I mean, were that the case, no software would ever have any bugs.

So to find it could take a lot of work, even when you know there is something to look for.

This presumes, of course, there IS something to look for and this isn't just some guy making shit up. I'm leaning more towards that option since I don't see why the FBI wouldn't have a longer NDA. Classification generally runs for 50 years, and something like that would surely be classified.

If classified, it would be CIA. The FBI has neither the mandate nor the authority to declare anything 'classified'.

Citation needed. In addition to being a law-enforcement agency, the FBI is the USA's domestic intelligence agency (actually a slightly weird state of affairs, if you're used to countries that like to keep military and civilian stuff separate). That means that, in theory, it does the same sort of stuff the CIA does, if said stuff happens within the USA - the FBI and CIA being the American equivalents of MI5 and MI6, respectively.

Some years ago I was looking at a job at the FBI. Sysadmin-type stuff, mostly end user (it specifically noted you didn't need experience with "the mainframe"; you'd just be helping users connect to it). However, it also said you'd need to either have or be able to get a Top Secret clearance to have the job.

So even for a job that was non-investigative in nature, basically just doing tech support for agents, they wanted a TS clearance. That tells you something about the likelihood of coming into contact with classified material.

More seriously, some of the code obfuscation competitions out there show that code auditing alone may not be enough to track down every vulnerability - a single dedicated enough individual can probably slip something past that's too subtle to notice, especially if they're making a lot of 'good' commits at the same time.

Now realise that the article suggests that there may have been several people at this and the problem becomes evident.

The code obfuscation competitions aren't good examples, since obfuscated code looks hard to understand, which would make it more noticeable to auditors, or even "normal programmers" looking at the code.

If the backdoor was done well, it may be impossible to confirm. Not that this is how it was done, but many encryption routines define lots and lots of constants. Random large primes and that sort of thing. You could assume that these constants were chosen for cryptographically sound reasons, and you might be right. You could also assume that these constants were created using an external "secret key", and that anyone with this secret key would be able to decrypt data, and you might be right. Or maybe i

I'm not a crypto geek, so I only recently read about Nothing up my sleeve numbers [wikipedia.org] (here on Slashdot, in fact). After seeing that I'd guess seemingly random large constants would already be considered suspicious.
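For illustration, a small C sketch (not from any of the code under discussion) of what makes a constant "nothing up my sleeve": it can be re-derived from a public recipe. Here the first SHA-256 round constants are recomputed from the cube roots of the first primes.

/* Illustration only: "nothing up my sleeve" constants come from a public
 * recipe, so an auditor can regenerate them instead of trusting a table. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const unsigned primes[] = { 2, 3, 5, 7, 11, 13, 17, 19 };
    for (int i = 0; i < 8; i++) {
        double frac = cbrt((double)primes[i]);
        frac -= floor(frac);                           /* fractional part */
        uint32_t k = (uint32_t)(frac * 4294967296.0);  /* first 32 bits   */
        printf("K[%d] = 0x%08x\n", i, (unsigned)k);    /* 0x428a2f98, ... */
    }
    return 0;
}

A table of large "random-looking" constants that cannot be regenerated from such a recipe is exactly the kind of thing the parent suggests treating with suspicion.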

Many eyes only works when the many eyes give two shits and are not worthless lackeys who only pretend to have coding sK1llz. I know, I put all sorts of wacky references and useless nonsense into my Perl scriptings, and no one ever says a word. If my hat was black, someone's enterprise would be so screwed.

Because crypto is hard math and an absolute bitch to get right. The e-mail talks about inserting side-channel key-leaking mechanisms. Finding these may be nigh unto impossible because they simply could be a property of a specific mathematical function that has a subtle weakness.

In short, 99% of coders could audit this all day long and find absolutely nothing. You have to be a coder and a mathematician and a crypto specialist or you're probably just wasting your time.

from ftp://ftp.nluug.nl/pub/metalab/docs/linux-doc-project/linuxfocus/English/Archives/lf-2003_03-0273.html

I often like to point out an incomprehensible weakness of the protocol concerning the "padding" (known as covered channel): in both version 1 and 2 the packets have a length which is a multiple of 64 bits, and are padded with a random number. This is quite unusual and therefore sparing a classical fault that is well known in encrypting products: a "hidden" (or "subliminal") channel. Usually, we "pad" with a verified sequence, as for example giving the value n for the byte rank n (self describing padding). In SSH, the sequence being (by definition) randomized, it cannot be checked. Consequently, it is possible that one of the parties communicating could pervert / compromise the communication, for example used by a third party who is listening. One can also imagine a corrupted implementation unknown by the two parties (easy to realize on a product provided with only binaries, as commercial products generally are). This can easily be done and in this case one only needs to "infect" the client or the server. To leave such an incredible fault in the protocol, even though it is universally known that the installation of a covered channel in an encryption product is THE classic and basic way to corrupt the communication, seems unbelievable to me. It can be interesting to read Bruce Schneier's remarks concerning the implementation of such elements in products influenced by government agencies. (http://www.counterpane.com/crypto-gram-9902.html#backdoors).

I will end this topic with the last bug I found during the port of SSH to SSF (the French version of SSH); it is in the code of Unix versions before 1.2.25. The consequence was that the random generator produced... predictable... results (a regrettable situation in a cryptographic product; I won't go into the technical details, but one could compromise a communication simply by eavesdropping). At the time SSH's development team corrected the problem (only one line to modify), but curiously enough without sending any alert, not even a mention in the "changelog" of the product... had one not wanted it to be known, one wouldn't have acted differently. Of course there is no relationship with the link to the above article.
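As a rough sketch of that class of bug, in C and entirely hypothetical (it is not the actual SSH/SSF code): a generator seeded from a guessable value versus one fed from the operating system's entropy pool.

/* Hypothetical sketch only: why a predictable seed is fatal for key material. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Predictable: anyone who can guess roughly when this ran can try every
     * plausible seed and reproduce the whole "random" stream. */
    srand((unsigned)time(NULL));
    printf("weak 'random' byte: 0x%02x\n", rand() & 0xff);

    /* Better: read from the OS entropy pool (/dev/urandom here; OpenBSD code
     * would typically use arc4random() instead). */
    unsigned char byte = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f != NULL) {
        if (fread(&byte, 1, 1, f) != 1)
            byte = 0;
        fclose(f);
    }
    printf("OS-provided random byte: 0x%02x\n", byte);
    return 0;
}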

So what he was saying is that they are padding with a potentially unencrypted random number that can be used to guess earlier and later random numbers, and thus break SSH. The random number is a hint for crackers / PRNG guessers.

No, that a deliberately "broken" implementation of ssh (either on server or on client) could use the padding to leak the session key, and that without access to the code there would be no way to tell (... because the padding is "supposed" to be random...).
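A toy C sketch of that point (it is not the SSH wire format): with self-describing padding the receiver can check every pad byte, so there is nowhere to hide extra data; with "random" padding, bytes that actually carry key material are indistinguishable from honest ones.

/* Toy illustration of the covert channel, not real protocol code. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Verify self-describing padding: the byte at rank i must carry the value i. */
static int pad_is_clean(const uint8_t *pad, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (pad[i] != (uint8_t)i)
            return 0;
    return 1;
}

int main(void)
{
    uint8_t honest[8], covert[8];
    const uint8_t session_key_fragment[8] =
        { 0x13, 0x37, 0xc0, 0xff, 0xee, 0x42, 0x99, 0x01 };

    for (size_t i = 0; i < sizeof honest; i++)
        honest[i] = (uint8_t)i;               /* checkable padding          */
    memcpy(covert, session_key_fragment, 8);  /* "random" padding that      */
                                              /* actually leaks key material */

    printf("honest padding verifies: %d\n", pad_is_clean(honest, 8));
    printf("covert padding verifies: %d\n", pad_is_clean(covert, 8));
    /* With truly random padding there is nothing to verify at all, which is
     * exactly the covert-channel complaint quoted above. */
    return 0;
}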

Quite clever actually, and reminiscent of the way the French subverted the Luxembourgish Luxtrust system.

Luxtrust [luxtrust.lu] tokens are hardware crypto tokens containing a private key. The key (supposedly) is generated randomly by the token at initialization and never leaves the token, and can only be used to establish session keys and sign messages, where the critical calculation happens on the token. The key is used to secure banking transactions, so that, for example, the French tax administration cannot spy on the communication between French citizens and their Luxembourgish bank.

That's the theory. The catch is, the tokens are manufactured by the French company Gemalto [gemalto.fr], and each token's random number generator will only ever "generate" private keys from a limited set (different for each token, of course). So, French tax administration can trivially infer the private key by looking up the public key in a table provided by Gemalto.

The scheme is virtually undetectable, because:

The keyset is different for each token

Each token can only be initialized a very limited number of times (much smaller than the number of possible keys for that token)

The tokens supplied to BSI [bsi.bund.de] for audit didn't have this weakness. And moreover, the German tax authorities would be quite happy to listen in too :-)

Result: Luxembourg spent millions on an inconvenient crypto scheme, which works neither on modern 64-bit computers nor on mobiles, and which is useless for its purpose.
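To make the alleged weakness concrete, a deliberately toy C sketch (nothing like Gemalto's real design, with absurdly small key sizes): if each token can only ever pick its private key from a small per-token candidate set, whoever holds that set recovers the key from the public key by table lookup.

/* Toy model of a rigged key generator; all numbers are fake placeholders. */
#include <stdint.h>
#include <stdio.h>

#define CANDIDATES 16

/* Toy stand-in for real public-key derivation. */
static uint32_t derive_public(uint32_t priv) { return priv * 2654435761u; }

int main(void)
{
    /* The per-token candidate set, known to the manufacturer. */
    uint32_t table[CANDIDATES];
    for (int i = 0; i < CANDIDATES; i++)
        table[i] = 0xA5A50000u + (uint32_t)i;   /* tiny keyspace */

    /* The token "randomly" picks one of its 16 keys at initialization. */
    uint32_t priv = table[7];
    uint32_t pub  = derive_public(priv);        /* published with the cert */

    /* Whoever holds the table recovers the private key instantly. */
    for (int i = 0; i < CANDIDATES; i++)
        if (derive_public(table[i]) == pub)
            printf("recovered private key: 0x%08x\n", (unsigned)table[i]);
    return 0;
}

From the outside the token still looks random: each device has a different set, and you cannot re-initialize it enough times to notice the repetition, which is the point of the list above.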

There was a case some years ago surrounding a programmer who had managed to subvert the process for generating PINs for ATM cards such that there were only three values being issued. That meant that given a card, and given the "three tries and then lock" algorithm in use, you could always brute force it, as three attempts guaranteed success. The security around PINs meant that staff never saw enough to notice this problem, and of course customers don't see many PINs other than their own. It's written up

So, this is going to be interesting. Imagine there were no back doors; how would you prove it? Want to discredit OpenBSD? That's how you would do it. Assume there are backdoors; now we have the first known clear example of illegally placed malware by a US Govt. group. The FBI is not the NSA, but they definitely have access to good people. Assume this was rogue players: warrantless wiretapping against US Govt. lawyers! In the absence of any pointer to relevant code, I would go with it being FUD, but I expect to be proved wrong.

It doesn't have to be malware. A well-thought-out backdoor could be as simple as a single-byte buffer overflow, or a combination of many other minor code defects that would allow a knowledgeable person to use them as a backdoor. So it is possible that even if you found the code, it would still be questionable whether it was just a bug or intentional malevolence.
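To illustrate how deniable that can look, a hypothetical C fragment (not from any real IPSEC stack): a single "<=" where "<" was meant writes one byte past a stack buffer, which reads like an ordinary bug in review but can be exploitable by someone who knows it is there.

/* Intentionally buggy illustration of a deniable single-byte overflow. */
#include <stdio.h>

#define BUF_LEN 16

static void copy_option(const char *src)
{
    char buf[BUF_LEN];

    /* Off by one: when i == BUF_LEN this writes buf[16], one byte past the
     * end of the buffer, possibly clobbering whatever the compiler placed
     * next to it on the stack. */
    for (int i = 0; i <= BUF_LEN; i++)
        buf[i] = src[i];

    printf("first byte copied: 0x%02x\n", (unsigned char)buf[0]);
}

int main(void)
{
    char input[BUF_LEN + 1];
    for (int i = 0; i <= BUF_LEN; i++)
        input[i] = 'A';
    copy_option(input);   /* 17 bytes go into a 16-byte buffer */
    return 0;
}

Whether such a defect is an honest mistake or a plant is exactly the question the parent says cannot be settled from the code alone.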

If it is true, it was submitted as source code, subject to review, accepted by the community, and installed by users. I see nothing illegal here.

I also don't see where it's necessarily warrantless wiretapping. Sure, it could be used for that, but this kind of thing could also absolutely be used for warranted wiretapping. The FBI goes to a judge, gets a warrant, captures the traffic, and decrypts it using the backdoor. Again, nothing illegal.

There are ethical issues with intentionally subverting such a project, but I don't see legal issues such as you claim.

For example, Scott Lowe is a well-respected author in virtualization circles who also happens to be on the FBI payroll, and who has also recently published several tutorials for the use of OpenBSD VMs in enterprise VMware vSphere deployments.

I'd be more than a little surprised if any part of the US government would in fact agree to let non-disclosure agreements expire automatically. That alone makes me suspicious that the truth content of these allegations is a little thin.

Then again, I have to agree with Bob Beck (see http://marc.info/?l=openbsd-tech&m=129236730027908&w=2 [marc.info] ) that this is fairly likely to be part of a personal vendetta of some sort, possibly against the OpenBSD project or even something totally unrelated, using the OpenBSD project only as the attention-grabber in contexts such as /.

At this point we have only allegations and some finger-pointing; I for one look forward to any real information surfacing. The best way to draw out the real information behind this is to do what Theo did: publish the allegations and let the involved parties explain themselves in public.

Garibaldi: Think they'll ever find that transmitter you slipped G'Kar?
Sinclair: No. Because there isn't one.
Garibaldi: There isn't? Wait—
Sinclair: I lied. I figured if there were a transmitter, sooner or later they'd find it and remove it. But if I just told them there was, they'd keep looking. Indefinitely.
Garibaldi: Commander, do you have any idea of the tests they'll put him through, the things they'll do to him trying to find a transmitter that's not there?
Sinclair: Yes.

So this might mean Mac OS X is not affected? I'm not knowledgeable enough on *BSD to know.

While there is significant shared code between the BSDs, OS X, and even Linux distributions, OpenBSD ships with an IPv4 IPSec stack that is pretty much only used by OpenBSD. OS X and most other BSDs use the KAME stack.

So this might mean Mac OS X is not affected? I'm not knowledgeable enough on *BSD to know.

I don't believe that Mac OS X is affected since OpenBSD only used the IPv6 part of the Kame Project [wikipedia.org]. Apparently OpenBSD developed their own version of IPSec while the other BSD variants used the IPSec implementation from the Kame Project.

Since Mac OS X's IPSec is derived from the one in FreeBSD and NetBSD it's not directly linked to the IPSec in OpenBSD. This doesn't mean that it hasn't been compromised, all code is suspect - even implementations in Linux and Windows - simply because it seems like people ha

No. NeXTSTEP pre-dated NetBSD and FreeBSD. NeXTSTEP was based on BSD Tahoe 4.3, and OS X took code from all three codebases (OS X was NetBSD-heavy in the early days until Jordan Hubbard joined Apple and influenced further conversion to FreeBSD code).

To this day you can find BSD code from all BSD codebases, but not quite as much from OpenBSD. Run 'strings' on the libraries to get the skinny.

The implementation described herein appeared in WIDE/KAME IPv6/IPsec stack.

The KAME [kame.net] stack is the same stack used in NetBSD [gw.com] and FreeBSD [freebsd.org].

Even though NeXTSTEP was forked earlier [levenez.com] from the BSD codebase than the other BSD flavors there has still been considerable sharing between it, Mac OS X, and the other BSD flavors. OpenBSD is one exception to this since it tends to be a more closed ecosystem than the other BSD variants.

Well, I would HOPE that if they've secretly cracked all the crypto then they can monitor everything Al Qaeda and Wikileaks do or say. Since, to be honest, that level of crypto is mostly being used by schmucks these days.

Since that doesn't seem to be the case, I think it's quite likely that this claim is simply bogus. Why aren't they using these backdoors to punish enemies?

Except for side channel attacks, which many implementations of the crypto primitives are vulnerable to, since avoiding all of them is very hard.

But that would be flaws in the primitives. Primitives can be misused in creating a cryptographic scheme, but the scheme was specified outside OpenBSD so mistakes in the scheme would not be specific to OpenBSD. We also know that the scheme was implemented more or less correctly, or it would fail to inter-operate with other IPSec implementations. Hmm... so unless IPse