Posted by timothy on Sunday September 08, 2013 @03:47PM from the go-ask-theo-de-raadt dept.

New submitter deepdive writes "I have a basic question: What is the privacy/security health of the Linux kernel (and indeed other FOSS OSes) given all the recent stories about the NSA going in and deliberately subverting various parts of the privacy/security sub-systems? Basically, can one still sleep soundly thinking that the most recent latest/greatest Ubuntu/OpenSUSE/what-have-you distro she/he downloaded is still pretty safe?"

In Unix systems, there’s a program named “login”. login is the code that takes your username and password, verifies that the password you gave is the correct one for the username you gave, and if so, logs you in to the system.

For debugging purposes, Thompson put a back-door into “login”. The way he did it was by modifying the C compiler. He took the code pattern for password verification, and embedded it into the C compiler, so that when it saw that pattern, it would actually generate code that accepted either the correct password for the username, or Thompson’s special debugging password. In pseudo-Python:
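(A reconstruction of the sort of pseudo-code the post describes; the helper names are illustrative.)

    def compile(code):
        if looks_like_login_code(code):
            # emit a login that accepts the real password for the
            # username OR Thompson's special debugging password
            generate_login_with_backdoor(code)
        else:
            compile_normally(code)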

Now comes the really clever part. Obviously, if anyone saw code like what’s in that example, they’d throw a fit. That’s insanely insecure, and any manager who saw that would immediately demand that it be removed. So, how can you keep the back door, but get rid of the danger of someone noticing it in the source code for the C compiler? You hack the C compiler itself:
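(Again a reconstructed sketch with illustrative names.)

    def compile(code):
        if looks_like_login_code(code):
            generate_login_with_backdoor(code)
        elif looks_like_compiler_code(code):
            # emit a compiler that itself performs both of these
            # pattern-matching insertions
            generate_compiler_with_backdoor_generator(code)
        else:
            compile_normally(code)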

What happens here is that you modify the C compiler code so that when it compiles itself, it inserts the back-door code. So now when the C compiler compiles login, it will insert the back door code; and when it compiles the C compiler, it will insert the code that inserts the code into both login and the C compiler.

Now, you compile the C compiler with itself – getting a C compiler that includes the back-door generation code explicitly. Then you delete the back-door code from the C compiler source. But it’s in the binary. So when you use that binary to produce a new version of the compiler from the source, it will insert the back-door code into the new version.

So you’ve now got a C compiler that inserts back-door code when it compiles itself – and that code appears nowhere in the source code of the compiler. It did exist in the code at one point – but then it got deleted. But because the C compiler is written in C, and always compiled with itself, that means that each successive new version of the C compiler will pass along the back-door – and it will continue to appear in both login and in the C compiler, without any trace in the source code of either.

The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

"The moral is obvious. You can't trust code that you did not totally create yourself...."

I agree, but that doesn't really help us in the real world--writing our own code doesn't reasonably work out for most people. So, what's the solution to your dismal conclusion? Ferret out those that cannot be trusted--doing so is the closest we will ever come to being able to "trust the code".

So, how does one go about ferreting out those that cannot be trusted? The Occupy Movement had almost figured it out, but wandered around aimlessly with nobody to point a finger at when they should have been naming names.

The NSA has made it clear that making connections--following the metadata--is often enough to get an investigation started. So why not do the same thing? Turn the whole thing around? Start focusing on their networks. I can suggest a good starting point--the entities that train the "Future Rulers of the World" club. The "Consulting Firms" that are really training and placing their own agents throughout the global community. These firms are the world's real leaders--they have vast funding and no real limitations on whom and where they exert influence. In my opinion, they literally decide who runs the world.

Pay close attention to the people associated with these firms, the inter-relatedness of the firms, and the other organizations their "alumni" end up leading. Pay very close attention to the technologies involved and the governments involved.

Look through the lists of people involved, start researching them and their connections... follow the connections and you start to see the underlying implications of such associations. I'm not just talking the CEO of Red Hat (no, Linux is no more secure than Windows), but leaders of countries, including the US and Israel.

THIS is the 1%. These are the perpetrators of NSA surveillance, to further their needs... NOT yours. People with connections to these firms need to be removed from any position of power, especially government. Their future actions need to be monitored by the rest of society, if for no other reason than to limit their power.

As George Carlin once put it so well..."It's all just one big Club, and you are not in the fucking club."

Quoting Ken Thompson:
I would like to criticize the press in its handling of the "hackers," the 414 gang

God I guess...

The 414s gained notoriety in the early 1980s as a group of friends and computer hackers who broke into dozens of high-profile computer systems, including ones at Los Alamos National Laboratory, Sloan-Kettering Cancer Center, and Security Pacific Bank.

They were eventually identified as six teenagers, taking their name after the area code of their hometown of Milwaukee, Wisconsin. Ranging in age from 16 to 22, they met as members of a local Explorer Scout troop. The 414s were investigated and identified by the FBI in 1983. There was widespread media coverage of them at the time, and 17-year-old Neal Patrick, a student at Rufus King High School, emerged as spokesman and "instant celebrity" during the brief frenzy of interest, which included Patrick appearing on the September 5, 1983 cover of Newsweek.

A person would have to be absolutely arrogant to trust themselves alone to effect a secure environment. No one is that good, unless we are talking about "secure" systems that are essentially non-functional.

That's why we have communities of open source developers. Many minds and eyeballs enable a more comprehensive view of security, especially when they are watching changes incrementally accumulate. I think it is much harder to get even subtly surreptitious malware past developers this way.

I think you misunderstand the premise. You can trust code you yourself write not to be concealing deliberately malicious intent. It may still be INSECURE, but you can at least be sure of the INTENT of code you write yourself. This isn't the case with third-party software.

Pen-testing geniuses still "fuzz" binaries, rather than trawl through millions of lines of code.

Think about how Android vulnerabilities are discovered by Black Hat Briefings presenters. They don't usually delve into the monolithic available sources. Many vulns only make themselves evident when combined with microcode on devices or in combination with radio stacks, etc.

So C/C++ are a government conspiracy of the '60s so they could intercept data from the Internet, which hadn't been invented (or at least was only used within military circles) at the time?
You're too stupid to be a C/C++ programmer. Stop doing it.

This argument is much, much too complicated. Plus, it can indeed be tracked down in the compiler binary. Compiling the compiler with an unrelated compiler will remove the malware in the compiler binary. You can use a really slow one for this effort, as you only need to use it once.

In reality, there are more than enough bugs of the "Ping of death" style which can be used. Read "Confessions of a Cyber Warrior".

The worst thing Bell Labs brought into this world was the C and C++ languages and the associated programming style: char* pointers, the possibility of uninitialized pointers, and so on.

If Bell Labs had not foisted C and C++ on this world for "free", the government would have had to invent something to make their "cyber war space" possible. Wait, Bell Labs WAS the government.

If that's not enough, a single buffer overflow in firefox or Acrobat reader can trigger something like the Pentium F00F bug, and then they OWN THE CPU. Your stinking sandbox is wholly irrelevant at this time.

Before C, much less C++, there were languages like FORTRAN, COBOL, and PL/1. They were not as rigid about checking types and ranges as Java and Ada, for example. Even some versions of BASIC allowed definition of an "array" that was, in fact, a map of the entire system RAM. And, of course, peek() and poke. PL/1 has actual pointer support built into the language.

So don't blame C. The problems go way, way back. Some systems and languages were more secure than others, but none of them were all that airtight. The only commercial hardware architecture that I know of that approached being REALLY secure was the Intel iAPX 432, which practically gave each stackframe its own private address space. But that one never caught on.

If you're going to play it THAT way, then the exploits go back to assembler and every early digital computer. (Analog computers had different weaknesses.)

But please remember that early Fortrans (e.g., IBSYS FORTRAN II) discouraged using pointers at all. I will grant that they didn't check array bounds, but the location of the array WRT the rest of the program was not guaranteed, and was subject to being changed with different compiler options. I don't know COBOL well enough to really comment, but it's my i

I don't know that I'd call assembler "exploits", since in assembler you're allowed to do any darn thing you want to. High-level languages exist as much to limit that ability as anything else.

None of the early FORTRAN implementations I worked with supported pointers as such. But the Primos OS was mostly written in FORTRAN (in fact the instruction set was optimized for FORTRAN), and I think there was a pre-defined integer array whose first element was memory location 0 and each word in that array thus had a 1

Unless the "unrelated compiler" is also compromised. How far down does the rabbit hole go?

This is why you start by compiling a very simple, basic compiler like PCC using your choice of random, potentially compromised compiler, then use that PCC binary to compile a new copy of PCC. The resulting PCC-compiled PCC binary should be both small enough and simple enough instruction-wise for a few dozen people to feasibly audit it by hand. Use that to then build a verifiably source-clean copy of GCC. Use that, i
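A sketch of that bootstrap chain in Python (a gross simplification: real compilers are not single .c files, and the flags and file names here are illustrative):

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Stage 0: build PCC with the untrusted system compiler.
    run(["cc", "-o", "pcc0", "pcc.c"])
    # Stage 1: rebuild PCC with itself; this is the small, simple binary
    # that a few dozen people audit by hand, instruction by instruction.
    run(["./pcc0", "-o", "pcc1", "pcc.c"])
    # Stage 2: use the audited PCC to build a source-clean GCC, then
    # let that GCC rebuild itself.
    run(["./pcc1", "-o", "gcc1", "gcc.c"])
    run(["./gcc1", "-o", "gcc2", "gcc.c"])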

Maybe modern ones, but if you go back a few generations your chances of it existing drop drastically. So here's what you do for high security....

1 - rely on OLDER hardware. Stuff from before the past two administrations would have a significantly higher chance of not having government back doors. Clinton era computers to start with.

2 - use a completely different architecture. ARM is your best friend here, or SPARC. The chances of SPARC having this are insanely small.

3 - Get processors from your country's "enemy". Russians don't use Intel processors for their KGB and government operations. If they did, they would be the biggest morons on the planet. Find out what they use and try to source them through black- or grey-market channels.

Welcome to the new world of underground computer science. Oh and keep your mouths shut. Don't do stupid shit like bragging as to what you have and where you got it. I'd say "hack the planet" but the safest thing is to go off the net and transfer data via offline means for the highest security.

3 - Get processors from your country's "enemy". Russians don't use Intel processors for their KGB and government operations. If they did, they would be the biggest morons on the planet. Find out what they use and try to source them through black- or grey-market channels.

If your prescription for fixing the issues of low security is to trust the Russian (nee Soviet) Government, I'm pretty sure you're doing it wrong.

Perhaps he's thinking to configure it so you only have to trust the Russian *or* US government.
Dunno how it'd work for compute nodes --- but if you have one Russian firewall in front of one US firewall in front of one Chinese firewall, it seems you could set up a network where, unless all three of them collude, your combo-firewall is safe.


You forgot a 4th option. If you were TRULY paranoid, you could write your own CPU and emulate it in an FPGA. You would also have to do the FPGA design work on a wire-wrapped CPU, which would suck, but it's possible.

That's not realistically very likely. Microcode typically never gets updated after the CPU ships, which means that as soon as some critical part of the compiled binary looks slightly different, the microcode won't have the desired effect. It doesn't take a large compiler change to screw that sort of thing up. Even tiny optimization changes would prevent microcode from usefully changing the behavior of a particular binary. The microcode level is just way too fragile a place for that kind of attack.

Very pretty example, but badly flawed.
Thanks to login being open source, and the abundance of decompilers available from independent sources, shenanigans such as this can be readily detected by comparing the decompiled code against the freely available source code and noting significant variations, specifically blocks of additional logic not included in the source.
While behaviour like that illustrated would go unnoticed in the closed source (Windows) world, and very likely does, it doesn't wash in the FOSS world.

A laughable comment at best. The FOSS community does not have an army of people running around decompiling binaries just to check to see if it can match compiled code from source. This is significantly less useful than the argument that FOSS doesn't contain back doors because you can look at the source. Just a tip, the vast majority of users don't.

The vast majority of developers do, but as I said, the vast majority of developers don't routinely get up in the morning and decompile published binaries.

Ken Thompson's theoretical attack against the Unix ecosystem was only practical because, at the time, he controlled a major portion of binary distribution and simultaneously a major portion of the information which could be used to defeat the attack, that being compiler technology. Nowadays, there are tons of different, competing compilers and systems for code rewriting, any of which can be used to "return trust" to a particular OS's binary ecosystem (if someone would take the time and effort to actually do it).

The big worry is not building from source, but builds delivered by companies like Ubuntu, which you have absolutely no guarantee are actually built from the same source that they publish. Ditto Microsquishy, iOS, Android, et. al.
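For that worry, the check that reproducible builds aim to make possible is at least simple to state: rebuild the package from the published source in a pinned environment and compare digests. A minimal sketch (paths illustrative; in practice the toolchain, flags, and timestamps must also match bit-for-bit before this comparison means anything):

    import hashlib

    def sha256(path):
        # hash a file in chunks so large binaries don't blow up memory
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256("vendor/login") != sha256("rebuilt/login"):
        print("binaries differ: suspect the vendor, or fix your build env")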

Another one that concerns me is Chrome, which on Ubuntu insists on unlocking my keystore to access stored passwords. I'd much rather have a browser store its passwords in its own keystore, not my user account keystore. After all, once you've granted access to the keystore, any key can be retrieved.

And, in the case of a browser, you'd never notice that your keys are being uploaded.

In the Apple Keychain Access app the access to each key is restricted to a list of applications that are set by the user. You are allowed to grant access of a particular key to all applications, however.

It only unlocks the wallet for the user it's running as, it doesn't have crazy admin privileges.

If you care about security, you're already running the browser as a restricted user anyway--even if you did stupidly share passphrases between wallets (or accidentally mistype the wrong passphrase into the browser unlock window) it still shouldn't have FS permission to your primary wallet.

Plus you can run Chromium if you want to be able to audit the source, presuming you don't think someone's Ken Thompson'd Chromium.

Much better to use LastPass or whathaveyou instead of the Chrome keystore, IMHO. For one thing, you're right about separating that from your user account keystore, but also the Chrome keystore is pretty insecure. LastPass makes a point of this during installation, once you've OK'd the install it's able to silently access all your passwords.

Eventually you have to draw the line somewhere with regard to where you stop trusting. If the Linux kernel sources themselves contained a backdoor, I would be none the wiser, and neither would most of the world. Some of us have very little interest in coding, let alone picking through millions of lines of it to look for that kind of thing. And then of course there are syntactic ways of hiding backdoors that even somebody looking for one might miss (the 2003 attempt to slip "current->uid = 0" - an assignment where a comparison belonged - into the kernel's wait4() is the classic example).

You do, but if you're that worried, there's always truecrypt and keepassx. If you keep the database in a truecrypt encrypted partition, the NSA can't get at it within any reasonable period of time. You can also ditch the keepassx and just store it as plain text in the encrypted partition, but that's not very convenient.


Yes, that's actually my concern all the time. Of course, with open source, you could technically check the source of the system you are using. But then, you'd need to check every line of code, thinking exactly like the NSA (or what-not), in every piece of software you use, including the compiler you use to compile, and the compiler that compiled that compiler, etc, etc.

Additionally, you'd need to check the source of all the HW components that come with their own BIOS, including the system's BIOS, networking chips, and so on.

The reason you can boot from a raid card or network is because the BIOS loads and runs BIOS modules from those cards. You may be familiar with the Linux kernel, where most of the functionality is in modules that become part of the kernel. BIOS is the same. One differentiator between a server motherboard and a consumer one is how much BIOS memory it has, to load modules from many different pieces of hardware. I have one machine with at least four different pieces of hardware that include BIOS. MOST of the BIOS on that machine didn't come with the motherboard.
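On a Linux box you can get a rough feel for this via sysfs: the per-device "rom" file only appears for PCI devices that carry their own option ROM. A sketch (listing only; actually reading a ROM's contents additionally requires writing "1" to the file first):

    import glob, os

    # each entry under /sys/bus/pci/devices is one PCI device
    for dev in glob.glob("/sys/bus/pci/devices/*"):
        if os.path.exists(os.path.join(dev, "rom")):
            print("option ROM present on", dev)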

For the Linux kernel, that's how development is done already, for quality control and bloat reduction. Nobody can commit by themselves, it takes at least three people to get a change into mainline. Each developer has their own copy of the tree into which changes are pulled, so they can see all changes that are made, and who made them.

For each part of the kernel, there are a number of people particularly interested in that bit who watch it and work on it. For example, the people making NAS and SAN devices and services keep a close eye on the storage subsystems. Myself, I watch the dm storage stack generally, more specifically LVM, and even more specifically snapshots. There are a few dozen people around the world with special interest in that particular part of the code. No backdoors will come in without some of us spotting it. What COULD happen is that some code could come in that isn't quite as secure as it could be.

It just so happens that I'm a security professional who uses advanced Linux storage systems for a security product called Clonebox, so that's at least one security professional closely watching that part of the code. Thousands of others watch the other parts.

It's convenient that a lot of the development is done by companies like Netapp, Amazon (S3) and Google. You can bet that when Amazon submits code, Netapp and Google are looking closely at it. When RedHat submits something, Canonical will point out any reasons it shouldn't be accepted.

When RedHat submits something, Canonical will point out any reasons it shouldn't be accepted.

I had a good laugh when I read this.

Red Hat employs hundreds of software engineers, contributing a lot to the entire Linux ecosystem. Canonical's resources in terms of code contribution are laughable in comparison, and being a streamlined business, Canonical has few, if any, resources to review third-party code. They are happy to ride along, but the number of people at Canonical who actually write and read code outside the shiny UI field are hardly those with the expertise to review low-level kernel code.

One of our advantages is that I'm sure the Russians don't want NSA backdoors in Linux, the NSA doesn't want Russian backdoors in Linux, neither wants Chinese backdoors, and similarly the Chinese want neither NSA nor Russian backdoors. After all of this "Spy vs. Spy", Linux is unlikely to have backdoors. If your requirements are great enough that "unlikely" isn't good enough, you're probably shit outta luck, because nothing will be good enough for you.

What would you do if you were a Chinese or Russian spook and discovered an NSA backdoor in Linux? You could cry foul to Linus and get it fixed. However, a much more profitable action would be to silently fix it in your own security-critical machines and then exploit it as much as possible on your targets in the West.

The big worry is not building from source, but builds delivered by companies like Ubuntu, which you have absolutely no guarantee are actually built from the same source that they publish. Ditto Microsquishy, iOS, Android, et. al.

The big concern is back doors built into distributed binaries.

So what is the practical difference between a "back door" and a security vulnerability anyway? They both remain hidden until found and they both can easily result in total ownage of the (sub)system.

History demonstrates that the "open source" community is not immune from the injection of "innocent" security vulnerabilities into open source projects by way of human error. I find it illogical to assume intentional vulnerabilities would be detectable in source code where we have failed to detect innocent ones.

And what about the hardware? And how can you be sure the compilers aren't putting a little something extra into the binaries. There are so many places for NSA malware to hide it's scary. Could be in the BIOS, could be in the keyboard or graphics firmware, could be in the kernel placed there by a malicious compiler. Could be added to the kernel if some other trojan horse is allowed to run. And just because the kernel, etc. are open source doesn't mean they have perfect security. The operating system is incredibly complex, and all it takes is one flaw in one piece of code with root privileges (or without if a local privilege escalation vulnerability exists anywhere on the system, which it surely does), and that can be exploited to deliver a payload into the kernel (or BIOS, or something else). Really, if the NSA wants to see what you're doing on your Linux system, rest assured, they can.

The NSA is a big organization. They do plenty of things that don't violate the Constitution, international treaties, or common sense.

SELinux is the least of our worries. It's not impossible to hide backdoors or vulnerabilities in an open-source product, but it is pretty difficult. And if the spooks managed to do it, they certainly wouldn't be putting their name on this product, because the people that they're really interested in are even more paranoid than you.

Backdooring a CPU wouldn't actually be that difficult. You'd need it to recognise a specific command sequence (128 bits long should do it) when reading memory to trigger the backdoor - that way you could activate it by sending a network packet, or reading external media, or routing traffic. And all the backdoor needs to do is run a simple 'set instruction pointer to immediately after this trigger.' It'd be impossible to defend against short of using an un-backdoored CPU to filter the trigger out, and even then it could be snuck through in an SSL session or a fragmented packet.
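As a toy model of that trigger scheme (the magic value is made up, and a real implementation would live in the silicon load path, not in software):

    # 128-bit trigger the backdoored core would watch for on every read
    TRIGGER = bytes.fromhex("ba5eba11deadbeeffeedfacecafef00d")

    def watched_read(buf, ip):
        hit = buf.find(TRIGGER)
        if hit != -1:
            # redirect execution to just past the trigger bytes
            return hit + len(TRIGGER)
        return ip  # no trigger: execution continues normally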

And best of all, it would *never* be detected. The schematics for a CPU are practically impossible to reverse-engineer from the masks, and both schematics and masks are strictly internal company property. Plus the number of people who could understand them in enough detail to spot a backdoor without years of specialist study could probably fit in one conference hall.

But you'd have to prevent knowledge of the backdoor from leaking. Hundreds of engineers work on each CPU, each group produces and verifies a new CPU design every year or so, there is considerable employee turnover every few years, and nobody has ever reported such a thing. So I find it unlikely.

Disclaimer: I work as a hardware engineer for a major CPU manufacturer.

I never understood why people go for AES. Clearly, if the NSA recommends it, in my view it is something to be avoided (I personally go for Twofish instead). In Ubuntu, ecryptfs uses AES by default, so I would not trust that.

The last time that the NSA weakened an algorithm they recommended was by shortening the key for DES. Snowden confirms that properly implemented crypto still works, and Rijndael (AES) still seems strong. The problem isn't the algorithms, because the mathematics still check out. The thing to fear is the implementations. Any implementation for which we are not free to inspect its source is suspect.

The last time that the NSA weakened an algorithm they recommended was by shortening the key for DES.

Minor correction: They strengthened the DES algorithm by substituting a new set of S-boxes which protected against an attack that wasn't publicly known at the time. They shortened the key space which made it more susceptible to brute forcing the key. Full strength DES has held up very well against attacks overall until its key length became a problem. It lasted much longer in use than intended.

I seem to recall that DES was never approved for protecting classified data, but that AES does have that approval.

Is there any particular reason why people don't strengthen AES (or any other symmetric encryption) by just re-encrypting 1000 times? Perhaps interleaving the passes: encrypting with the first key, then the second, etc. It would make next to no difference for the end user, who's going to decrypt just once, but I imagine it would add a lot more time to the cracking of the encrypted data than increasing the size of the key.

Exponents are actually what protect information; multiplication just makes people feel good. Adding one key bit doubles the brute-force work, while re-encrypting 1000 times multiplies it by at most a factor of 1000, which is less than what 10 extra key bits buy you.
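The arithmetic, as a back-of-the-envelope sketch (this even ignores meet-in-the-middle attacks, which cut cascades down further):

    import math

    passes = 1000
    print(math.log2(passes))  # ~9.97: 1000 passes multiply work by < 2**10
    print(2 ** 10)            # 1024: ten extra key bits already beat that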

One Bruce Schneier is a (loud) advocate for increasing the number of rounds in AES. Currently AES uses 10 to 14 rounds depending on key size, and he has advocated raising that substantially (e.g., to 16 rounds for AES-128).
His main reason for this is that there's a differential cryptanalysis attack against known-plaintext data encrypted with reduced-round AES implementations.
In short: if you know or control some of the encrypted data, you can extract bits of the key by comparing changes between encrypted known data. The bits you gain reduce the keyspace you need to search.
AES at the full number of rounds isn't vulnerable to this. Yet.

AES consists of well studied algorithms. Whether or not the NSA recommends it, it's still known to be secure by independent researchers. From what I understand the only breaks to it are marginally better than brute force, and not likely to result in the data becoming available in a useful period of time.

If the whole world goes for one cipher, then the NSA can concentrate on creating and improving a single ASIC design for breaking it. We should be using hundreds of different algorithms. Then they'd have to design hundreds of types of ASICs, build 100x more datacentres, increase taxation in the USofA to 10x what it is now, the yanks would rebel and overthrow that government, and then there would be no more evil NSA. Simples.

The academic crypto community widely considers it secure after more than 10 years of effort to break it. (Note that Twofish does not look less secure, but what makes you think the NSA could break AES and not Twofish? In fact, nobody can break either of them.)

In fact, you are dumber than you appear. I've said it more than once: encryption is not a magic spell. Trust me, if anyone has the mathematicians and the hardware to break *ANY* encryption, it is the NSA. It's been their job for more than 60 years. If you can show me internal NSA documents that prove otherwise, I'll believe you. In the meantime, believe that no encryption algorithm is "secure".

You can sleep soundly if your computer is off and/or unplugged. Otherwise, you should always be on your guard.

Keep your confidential data behind multiple levels of protection, and preferentially disconnected when you are not using it. Never trust anything that is marketed as 100% safe. There will always be bugs to be exploited, if nothing else.

10000 laptops are stolen at airports every year. Presumably, they are off when that happens.

The NSA is not your problem; you are not important enough to be a target. When thinking about security, thieves are your problem. Theft happens, and happens often. Your computer is far more likely to get stolen than to be infiltrated by the NSA. And the solution is to encrypt your hard drive. Without encryption the thief will have access to everything you normally access from the computer - like your bank account. You wouldn't want that, would you? Today's CPUs all have AES-NI support, so there is no excuse for not encrypting your laptop's hard drive. Do it today and get some financial peace of mind.

Wrong. You may become important in the future. So you are important enough to target. They are collecting data on everyone, and holding on to it. They just might not be actively going through all the data from everyone (or they might be, if they have enough computing power). But if it's recorded it doesn't really matter if they do it today or in 20 years. They've got you. "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." --Richelieu

If any of them let in such major flaws they would be found out fairly quickly... and that would destroy the reputation of the subsystem leader, and he would be removed.

Having the entire subsystem subverted would cause bigger problems... but more likely the entire subsystem would be reverted. This has happened in the past; most recently, the changes made for Android were rejected en masse. Only small, internally compatible changes were accepted.

or "Privacy" anymore. Perhaps there hasn't been for the last decade or so. We just didn't know at the time. ---- Enjoy your 21st Century. As long as people fail to defend their basic rights, there will not be such a thing as "security" or "privacy" again. My 2 Cents...

Matt Mackall, kernel hacker and Mercurial lead dev, quit Linux development two years ago because Linus insulted him repeatedly. Linus called Matt a paranoid idiot because Matt would not allow RdRand into the kernel, since it was an Intel CPU instruction for random numbers that could not be audited. Linus thought Matt's paranoia was unwarranted and wanted RdRand for its improved performance. Recently Theodore Ts'o has undone most of the damage, but calls to RdRand still exist in Linux. I do not understand exactly whether there are lingering issues or not.
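A toy illustration of the mixing idea behind that fix (not the kernel's actual pool logic, which is far more involved): once the suspect source is hashed together with other entropy, a backdoored RdRand can no longer dictate the final output.

    import hashlib, os

    def mixed_output(rdrand_bytes):
        other_entropy = os.urandom(32)  # stand-in for the kernel's pool
        # output depends on both inputs, so neither source controls it alone
        return hashlib.sha256(other_entropy + rdrand_bytes).digest()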

Yeah yeah and I'm having to go through the last couple years of E-mails and tell the various paranoid whackos, slightly demented old relatives and that one guy with the tinfoil that they were right and I was wrong. How do you think that makes ME feel?

No! It seemed like an ENTIRELY reasonable position, at the time, that there was NO CONCEIVABLE WAY that "they" would be listening to EVERYONE! That would be a COMPLETELY USELESS waste of resources to catch then probably-less-than-a-thousand people who were ACTUAL THREATS to security! People, I might add, who already knew NOT TO USE THE INTERNET for communication! "Mom," I told mom, in a reassuring tone of voice, "go ahead and use the internet. 'They' already know you're not a threat. Their file on you says

It's sad but you can't trust any mainstream Linux distro created by a US company, and you likely can't trust any created in other countries either. I'm not saying that as a pro-windows troll because you can trust MS's efforts even less.

I believe you can trust OpenBSD totally but it lacks many of the features and much of the convenience of the main Linux distros. It is rock solid and utterly secure though, and the man pages are actually better than any Linux distro I've ever seen.


Three points:

1) See the above discussion: you cannot trust anything that you did not create and compile yourself. With a compiler you wrote yourself. On a machine you created yourself from the ground up, that is not connected to any network in any way. OpenBSD does not make any difference if your compiler or toolchain is compromised.

2) Speaking of which, I cannot but note that OpenBSD had a little kerfuffle a while back about a backdoor allegedly planted by the FBI in the OS (Source 1 [schneier.com]) (Source 2 [cryptome.org]). I am willing to bet (a) that it's perfectly possible (though not likely), (b) that if it was done, it was not by the FBI, and (c) that the devs @openbsd.org are, right now, taking another long and hard look at the incriminated code.

3) Finally OpenBSD lacking features and convenience? Care to support that statement? I have a couple of computers running OpenBSD here, and they are just as nice - or even nicer - to use than any Linux. Besides, you don't choose OpenBSD for convenience - you use it for its security. Period.

The possibly bigger problem is that no matter what OS you use you can't trust SSL's broken certificate system either because the public certificate authorities are corruptible. And before someone says create your own CA, sure, for internal sites, but you can't do that for someone else's website.
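One partial mitigation is pinning: record a server's certificate fingerprint out-of-band and treat any change as suspect, so a cert minted by a corrupted CA is at least detected. A minimal sketch (the hostname and pinned digest are illustrative):

    import hashlib, ssl

    HOST = "example.com"
    PINNED = "0123abcd..."  # sha256 fingerprint recorded on first trusted contact

    # fetch the cert the server presents right now and compare fingerprints
    pem = ssl.get_server_certificate((HOST, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    if hashlib.sha256(der).hexdigest() != PINNED:
        raise RuntimeError("certificate changed: possible MITM or rogue CA")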

This goes way beyond a simple question of OpenSSL certificates - think OpenSSH and VPN security being compromised, and you will have a small idea of the sh*tstorm brewing right now.

It's possible the NSA did something bad to the code, but it's not likely and it won't last.

For the "not likely" part, code accepted into Linux projects tends to be reviewed. The NSA can't be too obvious about any backdoors or holes they try to put in, or at least one of the reviewers is going to go "Hey, WTF is this? That's not right. Fix it," and the change will be rejected. That's even more true with the kernel itself, where changes go through multiple levels of review before being accepted and the people doing the reviewing pretty much know their stuff. My bet would be that the only thing that might get through would be subtle and exotic modifications to the crypto algorithms themselves to render them less secure than they ought to be.

And that brings us to the "not going to last" part. Now that the NSA's trickery is known, the crypto experts are going to be looking at crypto implementations. And all the source code for Linux projects is right there to look at. If a weakness were introduced, it's going to be visible to the experts and it'll get fixed.

That leaves only the standard external points of attack: the NSA getting CAs to issue it valid certificates with false subjects so they can impersonate sites and servers, encryption standards that permit "null" (no encryption) as a valid encryption option allowing the NSA to tweak servers to disable encryption entirely, that sort of thing. There's no technical solution to those, but they're easier to monitor for.

That won't even make it through the casual review. Most project maintainers don't like code that's impenetrable. Unless it's a fix for a critical bug that nobody else even has a proposal for, they're going to take one look at obfuscated code and toss it back with a "No thanks." Especially if it's coming from a source they don't recognize, because messy, complex, obfuscated code also tends to be buggy, unreliable, unmaintainable code, and they don't want the headache.

Obfuscated code is pretty obvious. There is a large body of conventions you have to follow to get anything into the kernel, precisely to prevent unreadable code. I have looked at a few kernel security patches and they were all clean and clear.

If you do not follow strong simplicity guidelines, a project the size of the Linux kernel will just fail by eventually becoming unmaintainable.

The NSA doesn't really need to have backdoors written into the systems; they have a lot of exploits in their bag of tricks that they've bought or found. Unfortunately the NSA only needs to find one exploit, but for truly secure systems we need to find and fix them all. :/

Specifically the leaks indicate - and this is based largely on speculation - that they have some sort of central database. That means they can collect keys opportunistically (Trojans, interception of cleartext communications containing the key like VM migrations, cracking via advanced mathematics, old-fashioned espionage, secret court orders, backdoors, etc) whenever they get a chance. So when they need to decrypt a communication, there's a chance the key is already in the database - even if they only obtai

No, but there's no reason to think that Linux is worse than anything else, and it's probably easier to fix.

If I were Linus I'd be putting together a small team of people who have been with Linux for years to begin assessing things. From Gilmore's posting it seems clear that IPsec and VPN functionality will need major change. Other things to audit include crypto libraries, both in Linux and the browsers, and the random number generators.

But certainly some examination of SELinux and other portions are also needed.

I don't see how anyone can answer the original question without doing some serious assessment. However I'm a bit skeptical whether this problem can actually be fixed at all. We don't know what things have been subverted, and what level of access the NSA and their equivalents in other countries have had to the code and algorithm design. They probably have access to more resources than the Linux community does.

Which means that any and every government that might possibly have any future dispute with the US is, right now, going over all their Windows servers and desktops in the military, diplomatic and intelligence services to see how much they can replace.

It'll take months just to write up the reports, and months more to run through the political committees, and even then it'll be very undiplomatic.

We are being told - and some of us suspected as much for a very long time - that the NSA & Co. track everything we do, and have the ability to decrypt much of what we think is secure; whether through brute force, exploits, backdoors, or corporate collusion.

Surely we should also assume that there are other criminal and/or hacker groups with the resources or skills to gain similar access? Another case of "once they know it can be done, you can't turn back."

I honestly believe that we're finally at the point where the reasonable assumption is that nothing is secure, and that you should act accordingly.

In one of the earlier stories today there was a post making all sorts of claims about compromised software, bad actors, and pointing to this paper: A Cryptographic Evaluation of IPsec [schneier.com]. I wonder if anyone bothered to read it?

IPsec was a great disappointment to us. Given the quality of the people that worked on it and the time that was spent on it, we expected a much better result. We are not alone in this opinion; from various discussions with the people involved, we learned that virtually nobody is satisfied with the process or the result. The development of IPsec seems to have been burdened by the committee process that it was forced to use, and it shows in the results. Even with all the serious criticisms that we have on IPsec, it is probably the best IP security protocol available at the moment. We have looked at other, functionally similar, protocols in the past (including PPTP [SM98, SM99]) in much the same manner as we have looked at IPsec. None of these protocols come anywhere near their target, but the others manage to miss the mark by a wider margin than IPsec.

I even saw calls for the equivalent of mole hunts in the opens source software world. What could possibly go wrong?

Criminals, vandals, and spies have been targeting computers for a very long time. Various types of security problems have been known for 40 years or more, yet they either persist or are reimplemented in interesting new ways with new systems. People make a lot of mistakes in writing software, and managing their systems and sites, and yet the internet overall works reasonably well. Of course it still has boatloads of problems, including both security and privacy issues.

Frankly I think you have much more to worry about from unpatched buggy software, poor configuration, unmonitored logs, lack of firewalls, crackers or vandals, and the usual problems sites have than from a US national intelligence agency. That is assuming you and 10 of your closest friends from Afghanistan aren't planning to plant bombs in shopping malls, or trying to steal the blueprints for the new antitank missiles. Something to keep in mind is that their resources are limited, and they have more important things to do unless you make yourself important enough for them to look at. And if you do make yourself that important, a "secure" computer won't stop them. You should probably worry more about ordinary criminal hackers, vandals, and automated probe/hack attacks.

They destroyed my trust in anything. I don't trust any operating system or software anymore; I don't trust the internet or any encryption method. The US Govt and all its elements have been proven to be a criminal gang of fascist kleptocratic totalitarian warmongering pigs.

Remember this [slashdot.org]? In December 2010 there was a scandal when a developer who had previously worked on OpenBSD wrote to Theo de Raadt and claimed that the FBI had paid the company he had been working with at the time, NETSEC Inc (since absorbed by Verizon), to insert a backdoor [linuxjournal.com] into the OpenBSD IPSEC stack. They particularly pointed to two employees of NETSEC who had worked on OpenBSD's cryptographic code, Jason Wright and Angelos Keromytis. In typically open-source fashion, de Raadt published [marc.info] the letter on an OpenBSD mailing list.
After the team began a code audit de Raadt wrote [marc.info],

"After Jason left, Angelos (who had been working on the ipsec stack alreadyfor 4 years or so, for he was the ARCHITECT and primary developer of the IPSEC stack) accepted a contract at NETSEC and (while travelling around the world) wrote the crypto layer that permits our ipsec stack to hand-off requests to the drivers that Jason worked on. That crypto layer contained the half-assed insecure idea of half-IV that the US govt was pushing at that time. Soon after his contract was over this was ripped out....

"I believe that NETSEC was probably contracted to write backdoors as alleged."

Over the years the NSA has contributed what seemed like positive things to computer security in general, and Linux specifically. They have helped correct some algorithms to make them more secure, and implemented things like SELinux.

However, now that their other actions and intentions have been starkly revealed, any and all things the NSA does (and has done) are now cast into steep doubt. Which is unfortunate, because the NSA has a lot of really smart cryptographers and mathematicians who could greatly contribute to information security.

Now, however, their ability to contribute in any positive way to the open source community, or even to the industry at large, is gone forever. No one will trust them again. A sad loss for them, but also a potential loss for everyone. Nothing will quite be the same from here on out. And in the long run, without the help of smart, honest mathematicians and cryptographers, our security across the board will suffer. It's not that the revelations caused the damage, but that the NSA sabotaged things. Shame on them. Kudos to Snowden for helping us learn the extent of the damage.

Every encryption protocol you use has been sabotaged to be readable by them. You don't really think they will try 200 trillion keys to break your stream, do you? No. They modified the protocols ("to make them more secure") and of course never explained the changes. They just mandated it.

Even the almighty NSA with its insanely high budget can't crack all the encryption. But it does make me wonder if I should avoid everything they recommend.

Even that's no good if the problem is flaws in the spec rather than how it's implemented by OSes. If the NSA did things correctly, they didn't have to meddle with actual Linux/BSD/etc. source; they got flaws into the crypto definition itself that reduce the work needed to crack it. The better an OS follows the spec... the easier for the NSA to punch through.