Posted
by
kdawson on Sunday February 10, 2008 @08:58AM
from the random-acts-of-kindness dept.

snake-oil-security writes "Last fall Amit Klein found a serious weakness in the OpenBSD PRNG (pseudo-random number generator) that allows an attacker to predict the next DNS transaction ID. The same flavor of PRNG is used in other places, such as the OpenBSD kernel network stack. Several other BSD operating systems copied the OpenBSD code for their own PRNG, so they're vulnerable too: Apple's Darwin-based Mac OS X and Mac OS X Server, and also NetBSD, FreeBSD, and DragonFlyBSD. All the above-mentioned vendors were contacted in November 2007. FreeBSD, NetBSD, and DragonFlyBSD committed fixes to their respective source trees; Apple refused to provide any schedule for a fix; and OpenBSD decided not to fix it at all. OpenBSD's coordinator stated in an email that OpenBSD is completely uninterested in the problem and that it is completely irrelevant in the real world. This was highlighted recently when Amit Klein posted to the BugTraq list."

If you think it's a problem, exploit it. Nothing says "fix it" faster than a few thousand compromised hosts. Release a PoC that gets r00t, inform the security lists, and stand back. That's what full disclosure is for.

If it isn't exploitable, then BSD can fix it at leisure. Or if that's not quick enough, well, it's Open Source: YOU fix it if you are that concerned.

Anyway, besides rudely just posting a link like that in response, I was going to say that proof-of-concept code has at least already been published, and his point is that FreeBSD, NetBSD, and DragonFlyBSD have fixes available. Apple is currently working on a fix for OS X. OpenBSD is not planning to fix this. More info can be found in my parent link.

Resistor thermal noise [wikipedia.org] is also inherently quantum in origin, and much easier to measure. All it takes is a resistor, a good analog amplifier, and an A/D converter -- which could all fit on a single piece of silicon if you wanted.

Yeah, other sources are similar, and Zener diodes are probably the easiest devices to produce noisier versions of (while still maintaining high noise quality). I mention resistors mainly because the physics behind them is the easiest to understand.
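Whatever the physical source, the raw bits it produces are usually biased and need whitening before use. A minimal sketch (my own illustration, not from the thread) of the classic von Neumann debiaser:

```python
import random

def von_neumann_debias(bits):
    """Read raw bits in pairs: emit the first bit of each unequal pair
    ((0,1) -> 0, (1,0) -> 1) and discard equal pairs. If the raw bits
    are independent, the output is unbiased regardless of the input bias."""
    out = []
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            out.append(a)
    return out

# A heavily biased "hardware" source (80% ones), whitened to ~50/50.
rng = random.Random(0)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
white = von_neumann_debias(raw)
```

The price is throughput: a source with bias p keeps only 2p(1-p) of its pairs, which is why real hardware RNGs more often feed raw bits into a cryptographic hash instead.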

I know waaay too little about quantum physics; I've never understood this "can't measure it without changing it" stuff or whatever =p, so I have a hard time arguing either way about it. But I do guess there must be a spin, and that you should be able to find out what it is, it's just that our methods of reading it interfere with it or something? Kind of useless to try to discuss it when I know so little about the subject, though. Would be interesting to know more.

Well, let me try to explain. Imagine a spin-1/2 particle (e.g. an electron). Such a particle has the peculiar property that if you measure its spin along any chosen axis, you'll always get either 1/2 ("spin up") or -1/2 ("spin down").

OK, let's assume we have just measured the spin in the z direction and got +1/2. Let me first note that this is stable: if we measure the z-spin of the same particle again (assuming it didn't interact in between), we will again get +1/2 each time. That is, once we measure +1/2 in z
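As a toy illustration of the behavior described above (my own sketch; the cos²(δ/2) measurement probability is the standard Born rule for spin-1/2, not something from the comment):

```python
import math
import random

def measure(state_angle, axis_angle, rng):
    """Measure a spin-1/2 state along an axis in the same plane.
    Angles are directions on the Bloch sphere; the Born rule gives
    P(+1/2) = cos^2(delta/2), where delta is the angle between state
    and axis. Measurement collapses the state onto the axis."""
    delta = state_angle - axis_angle
    if rng.random() < math.cos(delta / 2) ** 2:
        return 0.5, axis_angle            # collapsed to "up" along the axis
    return -0.5, axis_angle + math.pi     # collapsed to "down"

rng = random.Random(1)

# Repeated z measurements are stable: once +1/2, always +1/2.
state = 0.0                                # prepared "up" along z
results = []
for _ in range(10):
    r, state = measure(state, 0.0, rng)
    results.append(r)

# But measuring a fresh z-up state along x is a 50/50 coin flip.
ups = sum(measure(0.0, math.pi / 2, rng)[0] > 0 for _ in range(10_000))
```

The stability of repeated z measurements and the randomness along x fall out of the same rule: delta = 0 gives probability 1, delta = π/2 gives probability 1/2.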

Almost all processes in a computer are truly random. The number of electrons crossing this particular trace per second? It's certainly not constant.

The trick in computers is keeping the RAM and the harddisk from going random too fast, such that a temporary illusion of determinism can be achieved. A crossing cosmic-ray particle will flip every last bit at random -- it'll just take sufficiently many centuries that you can imagine your bits to be stable zeros and ones.

If you're working at the level where a friend has to explain the weaknesses in a PRNG class, one you roll yourself is highly unlikely to be better. There are many algorithms out there that have been very thoroughly analysed and explored by experts, and there's going to be one out there that's easy to find and better than your hand-rolled one. And, of course, what counts as "weaknesses" depends on the application. A PRNG that's great for Monte-Carlo simulation may be too predictable for cryptography. A PRNG that's sufficiently hard to predict for cryptography may be too slow for Monte-Carlo simulation.
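The distinction in those last two sentences can be made concrete in Python (my own sketch; `random.Random` is the Mersenne Twister, `secrets` wraps the OS CSPRNG):

```python
import random
import secrets

# Monte-Carlo work: speed and reproducibility matter, predictability doesn't.
# Mersenne Twister is fine here, but trivially predictable from its output.
mc = random.Random(42)
hits = sum(mc.random() ** 2 + mc.random() ** 2 <= 1.0 for _ in range(100_000))
pi_estimate = 4 * hits / 100_000

# Security work: unpredictability is everything, throughput usually isn't.
# secrets draws from the OS CSPRNG; never use random.Random for tokens/keys.
session_token = secrets.token_hex(16)
```

Using `secrets` for the simulation would be needlessly slow; using `random` for the token would be exploitable. Same interface, opposite requirements.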

I'm no security expert and I don't know anything about the attack vectors that he claims, so maybe I shouldn't say too much, but I do know this: TFA mentions that the PRNG is used for such fields as DNS transaction IDs and IP header fragment IDs, and these fields were never even meant to be random from the beginning. Verily, TFA even says that {Free,Net}BSD don't even use the PRNG by default, but use sequential numbers unless a certain sysctl is tweaked.

Thus, it is my guess that even if the attack vectors are deemed serious enough, the OpenBSD team has decided that it doesn't matter, since these protocols were never designed for security anyway, and that one should use DNSSEC and/or IPSEC (or TLS) if one actually wants to be secure (it does raise the question as to why they decided to use a PRNG for those fields from the beginning, though). My second guess is that they don't even consider the attack vectors serious, though, since they probably require a cracked router to be effective anyway.

Indeed, if they do require a cracked router, then I don't see the issue to begin with. One of the attacks was that the attacker could inject data into a TCP stream and such things, and if he has a router cracked, then I'm pretty sure he could forge all the data he wants anyway, without using any particular software attack at all, and likewise with DNS data.

The idea behind the suggested attack vector is to find a way of sending matching packets *without* sitting in the path of the data. If you can guess certain values which the server will send to other hosts with a high probability and do so just by looking at packets which the server sends as answers to your requests, then you can spoof packets and other hosts will accept your misleading payloads as though they were coming from the server.

Really? If that is truly so, then I'd argue that that is the actual security flaw, and not the non-randomness of the IDs. For sure, you won't be able to carry out any of the IP attacks that way, since fragment IDs are local to the sending host. To be honest, I didn't understand how the DNS vulnerability worked to begin with (I didn't see it explained anywhere), so I can't make any statements about it, though.

The reason that they weren't designed to be secure is that no one had thought of the "DNS poisoning" attack when the protocols were designed. If they had, they would have made the ID field longer. Since it is only 16 bits, I doubt that there is any very secure way of protecting someone from guessing the next value. The paper describes a method of narrowing it down to 8 possibilities by doing ~10^9 calculations.
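The arithmetic behind that concern (my own back-of-envelope figures, using the paper's claimed narrowing to 8 candidates):

```python
# A blind attacker spoofing a DNS reply must match the 16-bit transaction ID.
id_space = 2 ** 16                  # 65536 possible IDs
p_random_guess = 1 / id_space       # one forged packet, no PRNG knowledge

# Flooding k forgeries before the real reply arrives succeeds with ~k/65536,
# so even a truly random ID forces tens of thousands of packets per attempt.
packets_for_50_50 = id_space // 2   # 32768 forgeries for a coin-flip chance

# If PRNG analysis narrows the *next* ID to 8 candidates, as the paper
# claims, 8 forged packets per query are enough to guarantee a hit.
narrowed_candidates = 8
```

That's the whole point of the attack: it turns a brute-force flood into a handful of packets, and 16 bits leave no room to compensate.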

The exploit described in the paper doesn't require a cracked router, just a malicious website. Once you can inject fake DNS entries for bankofamerica.com or ebay.com on some ISP's DNS server, the exploit has paid for itself.

DNS poisoning and the like are more likely to be used to compromise the user than his computer. After all, they can just put up their fake Bank of America clone that, thanks to poisoned DNS, is identical to the real one and steal his password.

Is the summary just supposed to be as shocking as possible? How about some details on why specifically they decided not to patch it?

It is entirely believable to me. Back in 1995 I told Marc Andressen at Netscape that he had a serious problem with the random number generator used to choose session keys for SSL. There was simply not enough randomness going in for there to be 128 bits going out.

Marc had every reason to listen to me, I had broken SSL 1.0 in ten minutes when he tried to demonstrate it at MIT. But it took several weeks to drill the problem into his thick skull.

So they eventually asked me for a description of how to do the thing right.

A year later the exact same bug was discovered independently. By this time they had hired some competent crypto people. I spoke to Taher about the problem later and his explanation was that they found the design note on the PRNG which was so comprehensive that they didn't think it necessary to check the actual code.

Zeinfeld, aka Phillip Hallam-Baker, is the CTO of Verisign, and while he occasionally makes outlandish claims about himself and his past on Slashdot, most of those claims are well-grounded in reality. On SSL/TLS and similar security/crypto issues, he is always interesting and more likely to be right than not.

On supporting large scientific computing platforms, he is always interesting and more likely to be right than not. His system administration c.v. is impressive.

I have noticed that people are complete and utter idiots about two very important classes of cryptographic primitives: PRNGs and hash functions. I can't believe the number of people who still use a simple MD5 hash for software download verification. First, it isn't signed, so all someone has to do is alter both the hash and the code. Second, even if it were signed, it's not very hard to make two pieces of code, one innocuous and one malicious, that have the same MD5 hash, and that's been true for years.

Because that is why they aren't using webkit, apache, samba, cups (or employ the guy who writes it), and several others in their default install.

While I would agree with you on the matter of trolling, it really gets old when BSD users trumpet it constantly, whereas in my experience GPL supporters tend to realise there are limitations. Of course I'm sure it is seen the same way across the bridge.

Webkit is LGPL, Apache is under the Apache license, Samba is under the GPL, and CUPS (source code copyright, company name and other tangibles) was purchased by Apple a year ago this month (as well as hiring the main developer).

Out of the four items you mention, only one is GPL. You could have done much better to suggest such examples as GCC et al.

The great thing about the BSD license, is that when people do contribute back (and they do, even big companies like Apple), you know it's because they *want* to, not because they *have* to.

So, in other words, the grandparent poster's point is valid and the larger more important issue remains: proprietary derivatives of non-copylefted free software uses the free software community as a market instead of treating us as equals.

The great thing about the BSD license, is that when people do contribute back (and they do, even big companies like Apple), you know it's because they *want* to, not because they *have* to.

Nobody "has" to under the GPL; to the degree that what you said is true, the same is true of the GPL. Statements like yours ignore all the choices that lead up to distributing source code. There's nothing in the GPL that compels conveyance; there are only conditions that compel source code conveyance along with object code conveyance. It's trivially easy to not improve GPL-covered software, or to not distribute the improved version.

The larger issue here is whether the free software community owes Apple anything. We don't. If they want to join us and work with us, great; if not, they can write their own software. The GPL helps ensure that when people and organizations convey copies of programs, they do so as equals. NeXT (now owned by Apple) already tried distributing GCC-derivative software without distributing complete corresponding source code when GCC was under GPLv2. It made NeXT look like an ass and put them at risk of losing the right to distribute GCC at all. NeXT later rectified the situation by distributing complete corresponding source code in compliance with GPLv2.

Great argument, except none of the above are essential to their operating system, which is why they picked them up with a gpl license. It doesn't really matter if the source to any of those are shared or not.

Oh, and captain hater, last time I checked, the fix would be shared [apple.com].

... and if Apple wasn't using OSS at all, I'd bet that they'd be selling quite a few less laptops and desktops. I know I wouldn't have bought three laptops over the past 2 years. I also know several people who would not have gone the OS X route. GCC / FreeBSD / GNU are very strong selling points for Apple that they didn't have with OS 9. On that note, I think you're right to a large extent, if it came down to a choice between the GPL or closed source, I have a gut feeling Apple would have tried the close ro

People have different opinions on how things should be. When it's their license and their code, they get to decide. Nonetheless, maybe you should contact the Open Source Initiative. They're an organization which collects licenses and "certifies" them as to their openness. BSD's license is listed as open source.

Nuhuh. This is because the BSD license is semantically freer than GPL in precisely this case:

Apple are free to release their putative fix to the community, or not - their free choice. That's one more freedom, relative to being obliged to release any changes they make that lead to a binary release outside of Apple, which the GPL would oblige.

And besides, if computing moves away from code executing on local CPUs and onto central servers to be accessed by web clients (the "cloud"), then even GPL code modified by, for example, Google is not distributed, so the patches are not mandatorily available under the GPL either.

It's both more and less freedom, depending on whether you are the developer or the user. There are benefits to both, even though I see the BSD alternative as more "free", even though it doesn't guarantee the freedom.

It's about the developer's freedom and the user's freedom. The developer is free of leverage, and can act as they wish. The user is free of leverage, and can act as they wish. They're not allowed to use the legal system to enforce leverage around the code, obviously. But that doesn't prevent them doing anything they wish with the code; it just prevents them being bastards via the legal system.

Yes, that's always been one of my problems with the GPL. Way back before it existed, I used to release all my freeware projects with a license clause "You are free to use this code as you see fit, but any bug fixes you make must be sent back to me for incorporation into the master source." When the GPL came around I got a lot of pressure to relicense my code, because my license wasn't "free" enough, or didn't fit some johnny-come-lately organization's definition of "free software license." Freedom doesn't me

I never argued with the fact that with the BSD you don't have to contribute back; that's what my parent poster already pointed out. My point isn't pointless given that the situation he describes applies just as well to the GPL, just to a lesser extent. And yes, Affero GPLv3 would indeed make this apply to Google if they were using it as a server-side solution. But if it's in-house only - they still don't have to contribute back. If they (or anyone) develop for a specific client only, then only that client

The phrase "security through obscurity" has a well established meaning in the discussion of security measures. It refers specifically to systems that are only secure if the design is not known to the attacker.

Specific passwords (or other shared secrets like symmetric keys) are not part of the design. The design merely says that you use one, not which one you use - and security of the shared secret is only based on keeping which key / password

This most certainly WILL have an impact on OpenBSD's status as a "secure" OS. Indeed, OpenBSD claims to have a "proactive" approach towards security, whereas this issue should and will diminish some of OpenBSD's "security goodwill".

Perception is important -- most of the pointy-haired types don't really understand the issues; if the competition shouts loudly with noises about "not wanting to fix a security bug" they will believe it.

The next thing they'll believe is that anything *nix or open source is not really interested in security.

I am sorry for this vague subject, and I can't remember the exact topics or incidents anymore, but there were numerous, some even mentioned on Slashdot. But I wanted to show that most of today's security threats were first perceived as hard to use or totally unthinkable, even minor security problems which later were upgraded to the status of a serious threat, because the first look turned out to be wrong.

So when developers commit themselves to building the most secure OS, and then on the other hand show such no-interest

OpenBSD is on a fast track to losing its most favored secure OS status if they keep this up.

First they refused to implement WPA (despite the other BSDs having it), because it "doesn't provide real security" and "just use IPSEC".

Now they're refusing to address a weakness in their network stack (despite the other BSDs addressing it), again with the implication that everybody should just jump to IPSEC. What if you're in a situation where an IPSEC rollout is impractical or impossible?

Whatever happened to defense in depth? Whatever happened to "secure by default"? Whatever happened to constructive paranoia, such as randomizing of libc addresses, that was unlikely to have any real impact on security but was a nice extra, just in case? Why must I now upgrade to NetBSD to get security features that are lacking in OpenBSD? Isn't the shoe on the wrong foot?

What happened? Was there a change of management? Is OpenBSD under the thumb of a douchebag patch manager lately? Is this going to go away at some point? Those of us that sleep with OpenBSD firewalls like a gun under our pillow are taking notice.

First they refused to implement WPA (despite the other BSDs having it), because it "doesn't provide real security" and "just use IPSEC".

Umm, they're completely correct to take this stance. WPA is far inferior to IPSEC, security-wise. It's OpenBSD's job to help insulate you from insecure technologies. We could easily say, "Just because FreeBSD allows one-character passwords, OpenBSD should, too!" And you know what? We'd be wrong to think in that way.

IPSec is at OSI layer three, WPA at layer two. Accordingly, they are not substitutes for each other; they are complements.

So, OpenBSD is refusing to put a locking mechanism on the doorknob because it wants to make people use a deadbolt. Me, I'd want both; if it turns out my deadbolt had a defect and thus easily defeated, the doorknob lock would at least provide some security.

Theo has refused to implement other 'foreign' security changes in OpenBSD when they were first introduced, then turned around and implemented them after a while. He was contemptuous towards non-execute stacks when I spoke with him at Usenix many years ago, because he was convinced OpenBSD's code review policy made it irrelevant and because no-execute didn't stop all stack smashing attacks... but OpenBSD eventually picked it up.

Basically, he's very conservative, very resistant to change, and don't forget that's one of the things that made OpenBSD what it was to begin with... but if it really matters he'll come around.

From my impression that is an overstatement. OpenBSD will get WPA when someone writes it well enough for it to get in. Although the current devs don't want to write it themselves (as they don't feel they need it), they have left the door open for someone else to write it.

"doesn't provide real security" and "just use IPSEC" aren't reasons why it won't get in at all but reasons why that particular developer(s) isn't going to bother writing it themselves. OpenBSD is probably

But they tend to have a point.
They are right, ultimately, that the transport level is the "correct" level for security. WEP and WPA are both, ultimately, kind of pointless in that a determined attacker will be able to compromise them. It's just that WPA prevents a large class of casual attacks that WEP doesn't.
In theory, yes, someone concerned about secure network traffic will secure that traffic at the transport level -- the problem is that if you don't control both sides of the transaction, transport-

PRNG is used mostly by people who don't have a random number generator. PRNG is not needed by most (all?) current Unices and Linux distributions, as they have a random number generator at /dev/random and /dev/urandom. Even older versions of Unix have patches that add a random number generator.

Where do you think the data for /dev/urandom comes from? It's a pseudo-random number generator unless you've got a hardware random number generator, but even that probably uses a pseudo-random algorithm.
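In practice that distinction looks like this (my own sketch; on Unix, `os.urandom` reads the kernel CSPRNG behind /dev/urandom, a deterministic generator continuously reseeded with entropy from hardware event timings):

```python
import os

# The kernel pool is "pseudo-random at the core", as the comment says:
# a crypto-strength deterministic generator, but reseeded with entropy
# from interrupt, disk, and input timings (and a hardware RNG when one
# is present), which is what makes its output unpredictable in practice.
seed_material = os.urandom(16)   # suitable for keys, tokens, PRNG seeds
another = os.urandom(16)
```

So the parent is right that it's a PRNG underneath; the difference from the OpenBSD code under discussion is the cryptographic design and the continuous reseeding.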

Question for the cryptography slashdotters out there. I have only a superficial and mostly layman's knowledge of cryptography, so while I understand the need for random numbers exists, I don't know much about how they are created or used. I'm not clear on how it is possible for a digital machine, particularly a commodity hardware machine, to create random numbers that form the basis of seeds or simple one-time pads. It is my impression that any mathematical algorithm you can run in software is potentially g

If you use the Via x86 processors, they have a genuine hardware RNG built in (which seems to be based on thermal noise), and you can buy true RNG peripherals. But pretty much nobody uses them (Via chips are too slow, peripherals are an added expense), which means most systems have to default to PRNGs (because it's marginally cheaper).

People have done things like adding randomness via microphone noise, but I'm not really sure how reliable that is. The rest of it either isn't necessarily random, or isn't necessarily cheap enough / fast enough. And PRNGs can be made hard enough to guess that no one will. It's kind of like how RSA is possible to crack, if someone guesses the right prime factors, but with a sufficiently large key size, you can get to where all of the matter in the Universe, assembled into chips that vaguely resemble today's p

But it gets more interesting. Several other BSD operating systems copied the OpenBSD code for their own IP ID PRNG, so they're vulnerable too. This is particularly so with Apple's Mac OS X, Mac OS X Server and Darwin, but also with NetBSD, FreeBSD and DragonFlyBSD (the 3 latter O/S however only use this PRNG when the kernel flag net.inet.ip.random_id is set to 1; it is 0 by default, resulting in a sequential counter being used instead...).

This is really a ways out of my depth, but my naive understanding is that the PRNG is a problem because it is not actually random, and can therefore be predicted. Yet, the above states that the other BSDs in particular don't even use the randomization by default, and instead use the most predictable sequence possible. Am I missing something, or doesn't that mean the other BSDs are significantly more at risk (for whatever value of 'at risk' this threat actually corresponds to)?

After the Nth time someone has approached me talking about flaws in BIND's random number generator, I just have to ask myself: why the hell do the bind people, with no real cryptographic knowledge, think they can write their own? Bind doesn't seem to even have an option to use the OS's PRNG.

I had an interesting discussion with Amit regarding all the hacks people (including the Bind people) do to try to roll their own random number generator, and it prompted me to review our own IP randomization code (and the 'off' default). After review I was decidedly uneasy about its security, mainly because it was trying to use an algorithmically generated cycle for a tiny namespace (16 bits, actually 15 the way it was coded). The problem with the IP sequence space is that you can't just randomize it; you also have to ensure that sequence numbers are not immediately repeated. DNS has similar issues.

I gave up trying to improve the algorithm and decided to throw in the towel and allocate 128KB of memory to do a look-ahead running shuffle of the 65536 possible sequence numbers using the system's PRNG. It's not possible to do better than that, frankly. We also decided to turn on IP randomization by default.

So that brings me back to the question: why the hell doesn't bind have an option to use the system PRNG? Not all systems have a good random number generator, but I trust ours far more than the junk coded into bind. For that matter, I don't really mind if bind ate another 128K of memory to secure its own sequence space, if that is what it takes.
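The look-ahead running shuffle described above can be sketched like this (my own reconstruction of the idea, not the actual kernel code): keep the whole 16-bit space in a table and perform one Fisher-Yates step per ID drawn, so every value appears exactly once per pass and nothing repeats quickly.

```python
import secrets

class ShuffledIDs:
    """Running shuffle over the 16-bit sequence space. One Fisher-Yates
    swap per draw: each pass emits every ID exactly once, in an order
    driven by the system CSPRNG, so IDs are never immediately reused."""

    SPACE = 65536                    # the table costs on the order of 128KB

    def __init__(self):
        self.pool = list(range(self.SPACE))
        self.pos = 0

    def next_id(self):
        # Swap the current slot with a uniformly chosen slot at or after it.
        j = self.pos + secrets.randbelow(self.SPACE - self.pos)
        self.pool[self.pos], self.pool[j] = self.pool[j], self.pool[self.pos]
        val = self.pool[self.pos]
        self.pos = (self.pos + 1) % self.SPACE   # wrapping starts a new pass
        return val
```

One subtlety the comment hints at: across a pass boundary, an ID drawn near the end of one pass could reappear early in the next, so a real implementation would also enforce a minimum reuse distance.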

I know enough about cryptology to know that I am not a cryptographer. But regardless of that, I can still get a good feel for someone else's code, and what BIND does scares me. They need to change their code to default to something more secure, even if it is memory intensive. If they want to give their users the option to use the less memory-intensive algorithm, that's fine with me, but the default needs to be more secure.

DNS has its own design issues, but that is no excuse for software to exacerbate them.

I may be wrong, but I don't remember anyone claiming that OpenBSD is the "highest security OS." The last I checked, it wasn't on the list for A1. It's likely to be one of the most secure open source operating systems, but it's by no means the ultimate.

>If the OpenBSD developers say this isn't a security concern, I've got 100% confidence that they are correct.

I see you don't remember how OpenBSD developers downplayed the remote root vulnerability in the mbuf code, until Core Security gave them a working exploit :]. And this is that mega randomness the OpenBSD team was so proud of :] LOL.

If flawed, predictable PRNG code is so 'irrelevant in the real world' why does even Microsoft seek to improve upon it?

"Strengthens the cryptography platform with a redesigned random number generator, which leverages the Trusted Platform Module (TPM), when present, for entropy and complies with the latest standards. The redesigned RNG uses the AES-based pseudo-random number generator (PRNG) from NIST Special Publication 800-90 by default. The Dual Elliptical Curve (Dual EC) PRNG from SP 800-90 is also availa

If flawed, predictable PRNG code is so 'irrelevant in the real world' why does even Microsoft seek to improve upon it?

Because they have like six Turing award winners working for them including Butler Lampson? Of the top fifty people in network security you will find about a quarter work for Microsoft, more than for any other company, including IBM, RSA and VeriSign. They have the cash and they use it to buy the best.

Microsoft's problem is that you can't buy your way out of a shitty legacy code base in

I smell marketing droid oil. I do favor fixing security issues, but as soon as the TPM becomes involved, rational assumptions vanish. MS has a history of *fixing* things to include new technologies they are having a hard time pushing. TPM is a huge technology for them that they have had an incredibly difficult time pushing. Microsoft needs this technology to win for their game plan to succeed. Trusted Computing in general and remote control of custome

Ummm, no. Read the GP again: "...leverages the Trusted Platform Module (TPM) when present". That means it still works without the TPM, but presumably has to use other, non-hardware sources of entropy (e.g. hashes of time(NULL), thread ID, tick count, CPU performance counters, etc.).
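Those fallback sources would be mixed together roughly like this (my own illustrative sketch, not Microsoft's implementation; a real system should prefer the OS CSPRNG whenever it's available):

```python
import hashlib
import os
import threading
import time

def fallback_entropy():
    """Hash together several weak, machine-local quantities when no
    hardware entropy source is present. Each input alone is guessable;
    the hope is that their combination, sampled at an unpredictable
    moment, is not. This is a last resort, not a substitute for a TPM
    or the kernel entropy pool."""
    h = hashlib.sha256()
    h.update(time.time_ns().to_bytes(8, "little"))          # wall-clock time
    h.update(time.perf_counter_ns().to_bytes(8, "little"))  # tick counter
    h.update(threading.get_ident().to_bytes(8, "little"))   # thread ID
    h.update(os.getpid().to_bytes(8, "little"))             # process ID
    return h.digest()
```

Hashing the inputs (rather than concatenating or XORing them raw) matters: it stops an attacker who can guess some sources from learning anything about the contribution of the others.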

Your assertion that using hardware to reduce the determinism and thus reduce the predictability of a PRNG must be some sort of strategy to lock hardware and software together betrays an ignorance of the problems that comp

Time to start a new one. This meme got tiring ages ago... Still Alive, BSD version, sung to the tune of Jonathan Coulton's "Still Alive" from the game "Portal," originally vocalised by Ellen McLain in character as GLaDOS. I be asserting me fair use right of parody, yarr!

This was a triumph,
I'm logging a note here: Huge success,
We had to dummynet the heavy traffic,
BSD Unix (R),
We code what we must because we can,
For the good of all of us,
Including vendors as well,