Quantum Cryptography Cracked

This presentation will show the first experimental implementation of an eavesdropper for a quantum cryptosystem. Although quantum cryptography has been proven unconditionally secure, by exploiting physical imperfections (a detector vulnerability) we have successfully built an intercept-resend attack and demonstrated eavesdropping under realistic conditions on an installed quantum key distribution line. The actual eavesdropping hardware we have built will be shown during the conference.

While I am very interested in quantum cryptography, I have never been optimistic about its practicality. And it's always interesting to see provably secure cryptosystems broken.

Also keep in mind that this is not "mathematically proven" security, but that it relies on a number of physical assumptions (quantum theory) that are somewhat accurate, but not absolute, especially as they currently do not mesh with relativity and do not make a whole lot of sense in the first place (at least to me ;-). This means such a system is only conditionally secure, with a strong dependency on the accuracy of quantum theory. Physical theories have this tendency to only approximate reality, which is fine for physics, but not for crypto.

In other words, anybody believing in absolute security of quantum key exchanges (and no, it is not crypto, it is more "modulation") is waiting to be ripped off.

"Security Theater" is not about protecting passengers; it's about protecting revenue. If passengers aren't given a "show", fewer will fly after an "attack". Also, if by chance someone gets hurt, when the lawsuits come it helps the defendant (airline) if they can show they "did everything possible" to protect the paying public. Follow the money...

Nice find, Bruce! I knew this would happen eventually. This is why I've always resisted the use of QKD in my clients' systems and my own. Besides, it seems like quantum cryptographers are trying to solve a solved problem. What a waste of money...

Quantum cryptography is not a single item or theory. It is several theories and several physical components, braided into a system. The theories are proven but the components are not; nor is the complete system.

In any case, this exploit is not an attack on any of the underlying theories. Rather, it is an attack on a known vulnerability in one physical component (the detector). It will not work on other species of detector.

True, but the point remains: "provably secure" key exchange via unproven components leads to "proven insecure." This happens over and over in the crypto field, like SSL with odd client-side behavior and PGP on a keylogger-bait OS: ultra-strong crypto or provably secure software built on a weak foundation. In this case, the QKD companies have been pushing the mathematical proofs of quantum theory as the strength of their security chain, when we've known all along that the equipment would be its weakest link.

It just reinforces my old adage: tried-and-true methods are safer (or more secure) than novel technology any day. The reason is that we've ironed out the kinks and will have fewer surprises. I'm sure QKD has plenty more surprises to come.

In any decent cryptosystem, the weakest link is wetware. The next is hardware, with bad implementations following close behind. Hardware-based attacks such as TEMPEST and van Eck monitoring work against quantum crypto the same way they work against any other crypto.

Yeah, Bruce? Have you got an old piece on Full Disclosure somewhere that's apropos the A5/1 break -- and the fact that the encryption was actually cracked something like *five years ago*, and it doesn't make a lot of sense to cast the current arguments in a "why didn't they give the cellcos time to fix it" light?

Door Salesperson: "Look! We've got the world's strongest door! Nobody can break through it!"
Bystander: "Yes, but over there's a broken window and people are robbing you blind, and you're paying no attention!"
Door Salesperson: "Shut up!"

Substitute "quantum cryptography" for "door", and "hardware detector" for "window".

With the exception of the one-time pad, no generally used non-quantum cryptosystem can be proved secure. The most that could happen is that it could be proved NP-hard to solve. (If you do have a proof that NP-hard problems cannot be solved efficiently, please check in with the Clay Mathematics Institute on how to win a million dollars.) To the extent of my extremely limited knowledge, nobody's proved even that for any modern cryptosystem.

Quantum cryptography relies more on basic physics and less on computation, and it may be possible to prove a scheme unbreakable provided the system components embody certain physical principles. It's then the job of the cracker to figure out how to break that embodiment.

"With the exception of the one-time pad, no generally used non-quantum cryptosystem can be proved secure."

Do you have any proof of that statement? It's been done before a few times. One must use formal methods and use them correctly. Many elements, from the environment to attackers to functionality, must be accurately modeled. Then, invariants are created which model the desirable security properties. Proofs are then made that the model meets the security requirements. This has been done for numerous forms of software and has identified plenty of vulnerabilities. Most recently, seL4's isolation mechanism was verified down to the C level. Formal methods are a tough subject with real problems, but I believe their successes disprove the notion that only the OTP can be proven secure.

Personally, I don't need absolute and perfect proof at every level. The seL4 proofs, for instance, make me confident that the software will not fail on its own, but the processor wasn't proven and x86's flaws may break the proof. However, Green Hills' INTEGRITY (DO-178B) separation kernel was deployed on a processor with formally verified microcode (the Rockwell Collins AAMP7G). They produced mathematical proofs that the implementation met the security requirements. Goes to show that you can prove as much as you want if you have the time and money. The techniques exist, and using component approaches lets you layer proofs to simplify the process. Isn't the verified kernel on the verified processor "proven secure" against the threat model? Does a real-world OTP implementation fare any better?

You bring up recent news on an important topic. The GSM encryption has long been known to be insecure and one could crack the algorithm with about $1,000 worth of equipment, more if one wanted real-time. The best bet was to consider GSM calls as unencrypted and unauthenticated (just to be safe). Unfortunately, solutions like CryptoPhone aren't satisfactory either. They are built on an insecure OS (WinMobile or Symbian), have secret firmware/drivers, and are usually remotely controllable by the manufacturer.

I've been working on this problem for a while now. A dedicated secure VoIP appliance plus WiFi hotspots seems to be the only secure mobile solution, but a cellphone would be a lot more convenient. The only solution is an open-source phone design that can be made by any manufacturer, which reduces how much any one of them must be trusted. Then the firmware, drivers, phone OS and encryption software are user-controlled and can be verified.

The closest design I came up with, based on the MILS architecture, was the OpenMoko phone running the OKL4 microkernel with three user-mode partitions: paravirtualized Linux, an encryption/authentication app, and a trusted path. The trusted path has exclusive access to the human interface elements and controls which partitions get what data (e.g., keypresses). This ensures red-black separation: no plaintext touches the main (insecure) phone OS. My design adds periods processing to prevent chip-level data leaks and reuses CryptoPhone's code for the crypto, although I'd need a license if it were implemented. This design can be implemented with separation kernels as well.
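The routing discipline behind that red-black split can be sketched in a few lines. This is purely illustrative logic in Python; the partition names and the broker function are hypothetical stand-ins, not OKL4 or OpenMoko interfaces.

```python
# Minimal sketch of the trusted-path idea: a broker with exclusive access
# to the keypad decides which partition receives each keypress, so the
# insecure OS partition never sees secret input. Names are illustrative.

class Partition:
    def __init__(self, name):
        self.name = name
        self.received = []

    def deliver(self, data):
        self.received.append(data)

linux_os   = Partition("paravirtualized Linux")       # untrusted ("black")
crypto_app = Partition("encryption/authentication")   # trusted ("red")

def trusted_path(keypress: str, secure_mode: bool):
    # The broker alone decides routing; partitions cannot snoop each other.
    target = crypto_app if secure_mode else linux_os
    target.deliver(keypress)

for ch in "1234":
    trusted_path(ch, secure_mode=True)   # PIN entry: red side only
trusted_path("h", secure_mode=False)     # normal use: goes to Linux

assert crypto_app.received == ["1", "2", "3", "4"]
assert linux_os.received == ["h"]        # the PIN never touched Linux
```

The design point is that the untrusted OS partition never receives secret keypresses at all, rather than trying to filter them after delivery.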

We could do the same on a CryptoPhone using paravirtualized Windows Mobile if we had the firmware and driver source code and loaded it ourselves. Either design still isn't perfect because we don't have full access to the hardware. Without full inspection of privileged code, there still may be bugs or backdoors in there. I just don't trust any of them and will continue treating them like megaphones as far as confidentiality is concerned.

It is my understanding that proving the correctness of software, and proving the security of a cryptosystem are two rather different concepts. For example, you could potentially prove an implementation of RSA to be correct, according to a specification, but that does not mean that RSA itself was proven secure. RSA's security relies on the assumption that integer factorization is hard, but so far there is no proof that it is.

Here's my problem with quantum mechanics: it uses a mathematical model that assumes you don't perfectly understand what is going on in a system--namely statistics.

I don't say this to disparage quantum mechanics or statistics. Both have proven to be very successful at describing complex systems. I just think we're selling ourselves short by starting with the assumption that a part of a system is unknowable and calling that approach complete.

It seems to me that as long as quantum mechanics relies on statistics it will always remain an approximation theory.

"It seems to me that as long as quantum mechanics relies on statistics it will always remain an approximation theory."

Heh - All physical theories are approximations or "effective" theories. ALL of them. Relying on statistics means that a theory (or model, really) is the one most likely to make the most accurate and precise measurements. I would highly distrust any model that did NOT rely on statistics at its base. It is the most powerful, precise and accurate tool we have to deal with the real uncertainties of the physical world.

If you add one word to the assertion: "With the exception of the one-time pad, no generally used non-quantum cryptosystem can be proved _unconditionally_ secure,"
then it is correct. All other conventional (symbol-manipulation) cryptosystems can be broken by sufficient computation. And in general, the amount of computation required for a break is estimated or 'believed', but not proven.

Apart from OTP, security proofs in cryptography typically have the form "subject to assumptions {various}, breaking cryptosystem S by attack A requires at least as much computation as solving problem P," where no one has proven the computation cost of solving P ;) And in many cases, no one has proven that A is the best attack on S.

Looked at this way, it's kind of a pig's breakfast... but it's the best that ingenuity has been able to devise so far.

@Nick P: This proof of current unprovability works for known-plaintext attacks on cryptosystems with a key, which I think is pretty generally applicable (I excepted the one-time pad):

Given a plaintext, a ciphertext, and a key, it is possible to determine whether the ciphertext and key together produce the plaintext in polynomial time (one would hope O(n)). Therefore, the problem of what key connects plaintext and ciphertext is in NP. (Exactly where in NP is another question, and I haven't heard of a cryptosystem being proved NP-complete.)

There is no known proof that NP problems cannot be solved efficiently. It is generally believed to be the case, but finding an actual proof is very difficult. It's one of the Clay Math Institute's "millennium problems".

To apply this to unknown plaintext, it is necessary to make some statistical assumptions about the plaintext, but if you care to define what they are it again becomes a decision problem in NP. (You'd have to make some assumptions about the plaintext anyway, or you'd never know you'd solved it.)
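The step "verification is polynomial" is what places key recovery in NP. Here is a toy sketch using a stand-in repeating-key XOR cipher (not any real cryptosystem), just to show that checking a candidate key is a single O(n) pass while *finding* the key is the hard part:

```python
# Toy illustration: verifying a candidate key against a known
# plaintext/ciphertext pair takes one linear pass, which is what puts
# "find the key" in NP. The cipher is a stand-in (repeating-key XOR).

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def verify_key(plaintext: bytes, ciphertext: bytes, key: bytes) -> bool:
    # O(n) in the message length: one pass, no search.
    return encrypt(plaintext, key) == ciphertext

pt = b"attack at dawn"
key = b"k3y"
ct = encrypt(pt, key)

assert verify_key(pt, ct, key)         # the right key checks out in O(n)
assert not verify_key(pt, ct, b"bad")  # a wrong key is rejected just as fast
```

Nothing here says anything about how long the search over all keys takes; that gap between cheap verification and (believed) expensive search is exactly the P vs NP question referenced above.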

The large/bright pulse attack on the optics in Alice's or Bob's equipment is actually not a new attack but a subset of a known class of equipment enumeration attacks (Active TEMPEST, if you will).

A practical example of this can be bought for around 100 USD from any cheap digital camera store.

You know it as "anti red eye" technology.

A fundamental rule of focused optical systems is that if you shine a light into one, some of it about-faces (i.e., turns 180 degrees) at the "active part" (photo detector / film / retina / mirror in an SLR / etc.) and gets refocused back to the energy source (the flash on your camera). The thing is that there are always imperfections, so you get diffusion etc., which means that it's 180 +- X degrees where X can be quite large.

This is a specific case of a known problem with any transmission system that has to do with energy and an incorrectly matched termination load (look up VSWR, or sensing DC motor speed by back EMF).

That is any energy source that is not correctly matched to both the transmission media and the load will cause some of the energy to be reflected back towards the source of the energy supply.

A second issue is that any correctly matched system has to be matched from "DC to daylight and beyond" if it is not to reflect energy. In any practical system this is virtually impossible to achieve.

Then you have the third issue of Time Domain Reflectometry (TDR), which a certain journo (Campbell) developed into a practical anti-eavesdropping technology that the "MI's" under the Thatcher Government stole the design for (as is quite normal with security ideas).

To put it all together again (I think this is the fourth or fifth time I've mentioned this on Bruce's blog ;),

Any system that emits or receives EM (or other) energy is, for various practical reasons, going to consist of a series of parts. Each of these parts is imperfect in that it has not just a frequency response but a nonlinear amplitude response as well.

These failings will be different for each individual part of the emitter or receiver.

It is easily possible to enumerate (characterise) a system by its passive and active responses to energy pulses in various ways.

Now, if a system is active and parts change their characteristics in use, then this can easily be deduced by an outside observer (Eve) simply by "illuminating the target" with appropriate energy.

In the case of QC equipment the "photons" have to be modulated in some way to become "qbits". The problem is that 99% of designs go about this the wrong way.

They have the light emitter or detector as the first/last item in the line-up respectively; this is followed/preceded by filters to "enhance" performance. Then you have the modulator, and often nothing else after it except the transmission path (graded glass transmission line).

Now the bandwidth of the modulator is considerably wider than the bandwidth of the filters protecting the emitter and detector.

So if Eve uses a light frequency outside the filters' passband but just on the edge of the modulator's bandwidth, some of the light Eve sends down will be "modulated" by the state of the modulator.

The upshot being that Eve knows the state of the QC "polarisers" at both ends, so she will know if a "qbit" will make it from Alice's "single photon emitter" to Bob's "photo detector".

Oops: Eve knows before even Alice or Bob what is and is not going to have happened. But worse, neither Alice nor Bob knows, simply because they are not looking for this type of "out of band" attack...

The process of "enumeration by illumination" is the dirty little dark secret of TEMPEST and cannot be avoided in practical systems, only "maybe" detected.

This is not the first time, nor will it be the last, that this sort of attack is publicised, until equipment manufacturers wake up and start smelling the coffee.

Quantum cryptography was never proven secure against a man-in-the-middle attack where the attacker can change the data instead of only repeating it. To be secure against that kind of attack, it needs an algorithm to authenticate the messages posted on the public channel. Even if perfectly implemented, your cryptography is only as secure as the authentication algorithm on that channel.

"Here's my problem with quantum mechanics: it uses a mathematical model that assumes you don't perfectly understand what is going on in a system--namely statistics."

That's the problem with the whole world 8)

Maths is not a science; it is a series of rules with interesting properties, based on some fundamental (and sometimes conflicting) assumptions that are constrained by another set of rules (logic). At least one aspect of this is that it can be shown that, as a system, the rules cannot be used to prove that the system is consistent, amongst other things.

However, for all its many failings, it is still by far the best tool we have for describing the tangible and touchable aspects of our physical world, and importantly for predicting how they interact in fundamental ways. As such it is a very efficient modelling tool, BUT it is not reality...

That being said, as we enter the untouchable or intangible aspects of our Universe, things start to behave in odd ways. Our orderly world of physical forces, matter and energy is shown to be inaccurate.

For instance, we know very accurately the rate of decay of radioisotopes etc., and we give them half-life figures that are as accurate as we can measure. However, nobody yet can tell you when or why any particular atom will release the appropriate energy/particle. More importantly, even in a "significant mass" of atoms you cannot say with any degree of certainty when any one of them will release energy or a particle.

Which gives rise to a significant question: "is this something we will ever know?" If the answer is yes, then QC is not secure. If no, then QC is secure and a whole load of other aspects of our physical Universe will forever be beyond our knowledge...

Now what about the intangible entities in our Universe that are not constrained by energy/matter and the forces that arise as a consequence, which are all ultimately constrained by the speed of light and the forward progression of time?

Part of the "statistical" nature of our Universe is due to the constraints of energy/matter and their constraints of time and the speed of light.

Take information, for instance: it exists without a physically realisable form; it is intangible. It only becomes tangible when we try to capture it or use it (store / transmit / process).

One peculiar property of information is that it exists without being "known". Many of the laws of our Universe are still unknown to us even though their existence can be deduced.

The classic example is gravity: it has obviously existed since the dawn of the Universe, yet it was not until Newton wrote about its observable properties in a fundamental and "scientific" way that people actually realised it existed and had limitations...

However, Newton was wrong; his model is only 99% of the story. Einstein, from an entirely different perspective, took it a few 9's closer, but we still do not know what gravity is and we may never "know".

As Seth Lloyd has pointed out, all the "laws of nature" exist as information; however, they are not yet knowledge. And if he and others are right, we may never "know" them, simply because a system of which we are a part may not be fully describable by us (see Kurt Gödel's incompleteness theorems as to why, as well as the Church-Turing halting problem etc.).

All that being said, the OTP is "only provably secure" on the same assumption as QC; if either is shown to fail, then both fall...

Oh, and I have the same distrust of the OTP proof as I do of QC.

The whole assumption of the OTP is that there is "no way to know" the state of the next key bit, therefore... This raises the old question of "how can you prove a negative?"...

But what of quantum computing? Is it genuinely probabilistic, or does it have the ability to reach out of "our system" of a physical Universe, and thus be capable of accurately and completely modelling our Universe, which Gödel amongst others has suggested is not possible from within our physical system (that is, any sufficiently complex system cannot be used to describe itself)?

This sort of thinking is why we have "a cat in a box that might or might not be alive" and assertions that "God does not play dice". They can't both be true or can they 8)

We know little or nothing about information other than that it is not tangible and is not of necessity constrained by energy/matter, and thus may not be constrained by time or the speed of light. Our only real handle on it is "probabilistic".

Thus it may not be possible, as purely physical beings, for us to understand information. The likes of Roger Penrose have arguments that suggest our very thinking process is "quantum" in nature (as is our ability to smell, and to use energy from sunlight).

My distrust is a little more down to earth. One thing we have shown is that encryption in all its forms is possibly the ultimate form of "security by obscurity". However, the information contained in ciphertext is still there; it is not lost; all it requires is the appropriate information to recover it.

The last century (the 1900's) showed just how far we can go in ferreting out this structure; we currently have absolutely no idea how far we can go down this road, as we have barely placed the first few footfalls.

As with Newton and his view of gravity, we now know it is not right. There is an oldish statement about physics:

"You get taught a succession of lies, each a little nearer the truth, but still not the truth, until one day you may be lucky enough to tell more truthful lies of your own."

It begs the questions: can physicists actually tell the truth? And if so, what are the implications of such truths?

Time to go put the cat back in it's box for the night and go play dice ;)

"It seems to me that as long as quantum mechanics relies on statistics it will always remain an approximation theory."

QM does not really rely on statistics at all. It uses some terms that are borrowed from statistics, such as 'probability', but it means something entirely different by them.

For example, when I say "there's a 30% chance of rain" to mean that the conditions are such that about 30% of the time when they're like this it rains and about 70% of the time it doesn't, that's probability in the statistical sense. It expresses a lack of knowledge about the system's true current state or a lack of ability to infer future state from present state.

But you can also use probabilistic terms to precisely define a system state. For example, if you have seven coins sitting heads side up and three coins sitting tails side up, I can say "if you pick a coin at random, there's a 30% chance that it will be tails side up". That sounds very similar to the 30% chance of rain, but in fact, it's a perfect and complete description of the way the coins are facing.
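A two-line check of the coin arithmetic underlines the point: this probability is computed by enumerating a fully known configuration, not by expressing ignorance about a hidden state.

```python
from fractions import Fraction

# Seven coins heads-up, three tails-up: the 30% figure is the tails-up
# fraction (heads-up is 70%). It is an exact, complete description of
# this configuration -- enumeration, not uncertainty.
coins = ["H"] * 7 + ["T"] * 3
p_tails = Fraction(coins.count("T"), len(coins))
assert p_tails == Fraction(3, 10)
print(float(p_tails))  # 0.3
```

Exact rational arithmetic (`Fraction`) is used deliberately: there is no sampling and no error bar, which is precisely the contrast with the weather-forecast sense of "30%".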

By my understanding, presently available quantum key distribution systems are not ideal because they rely on non-perfect single photon sources. I believe that a scheme relying on a true single photon source or a source of entangled photon pairs should be secure (excluding man-in-the-middle attacks). Schemes utilizing pairs of entangled photons should permit entanglement purification and reduction of an attacker's information to zero. As far as I am aware, this type of key distribution system is not commercially available at present.

"presently available quantum key distribution systems are not ideal because they rely on non-perfect single photon sources. I believe that a scheme relying on a true single photon source or a source of entangled photon pairs should be secure"

First off, there are single photon sources available if you want them, and I believe they are in some QC systems.

However, what the students involved with this group have shown is that QC is not just the "q-bits"; it's a whole system. And importantly, if the system is not designed properly it has failings to the point of providing no security.

Let me explain,

If you imagine a Shannon channel with only one source and one detector, you can see it as,

[S]ource---{channel}---[D]etector

For QC to work (as originally described) you need two polarisers between the source and the detector, one at either end of the channel,

[S]-[P]--{channel}--[P]-[D]

Now, due to the susceptibility of both the source and the detector to stray photons etc. producing noise, which significantly affects the distance over which the channel will work, you also include filters, so you get the following,

[S]-[F]-[P]-{channel}-[P]-[F]-[D]

Or,

[S]-[P]-[F]-{channel}-[F]-[P]-[D]

Of the two, the first is likely to give a longer channel length for a given bit rate (which is one of the vital marketing specs).

So looking at it as units you have,

[SFP{channel}PFD]

Now have a think about the polariser: it is a wide-band device, as is the channel, whereas the filters make the source and detector narrow-band.

So Eve can shine light that is out of band (to the source and detector) down the channel in both directions and look at what gets reflected back,

Alice Eve Bob
[SFP{chan/^\nel}PFD]

Neither Alice's source nor Bob's detector sees this out-of-band light, due to the filters.

However, the state of both of the polarisers is reflected back to Eve, so she knows if the photon from Alice is polarised North-South or East-West.

If Alice's polariser and Bob's polariser are in the same state, Eve knows the photon will be detected by Bob. If they are different, she will know that Bob could not have received it; she also knows whether it was polarised NS or EW.

Eve knows everything that Alice and Bob know, except when the photon was sent.

However, if she monitors the control channel, the chances are she can work this out without any difficulty.

Thus she does not have to look at or change the q-bit in any way, and because she is using "out of band" photons she does not interfere with the narrow channel that Alice and Bob see.
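Under the comment's simplified rule (Bob detects a photon only when the two polarisers agree, and Eve's out-of-band probe reads back both polariser states), Eve's advantage can be simulated directly. The model below is a deliberately crude sketch of that assumption, not of any real QKD protocol:

```python
import random

# Toy model of the out-of-band probe: the assumption (from the comment)
# is that Eve's probe light, outside the filters' passband but inside the
# modulators' bandwidth, reflects back the state of BOTH polarisers.
random.seed(1)
NS, EW = 0, 1
slots = 16
alice_pol = [random.choice((NS, EW)) for _ in range(slots)]
bob_pol   = [random.choice((NS, EW)) for _ in range(slots)]

# What Alice and Bob learn: Bob detects a photon only when the two
# polarisers agree (the simplified rule above).
detected = [a == b for a, b in zip(alice_pol, bob_pol)]

# What Eve learns from the reflected probe: both settings, hence the
# polarisation of every photon AND which ones Bob will detect --
# without ever touching a single q-bit.
eve_view = [(a, a == b) for a, b in zip(alice_pol, bob_pol)]

assert [hit for _, hit in eve_view] == detected
assert all(p == a for (p, _), a in zip(eve_view, alice_pol))
```

Because Eve never measures a photon in the quantum channel, none of the disturbance that QKD security arguments rely on ever occurs; the leak is entirely classical.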

Now, there are a number of fairly simple things that the QC equipment designer can do to make Eve's life harder, but... none of them have currently been done. Also, Eve can perform a large number of other attacks against a QC system besides this one, and the designer is going to find life not just tough but very expensive, as well as "spec destroying", to combat what Eve can do.

The thing is, QC suffers very, very badly from the difference between theory and practice.

The early test setup on an optical bench was so noisy, not just audibly but electronically, that the experiment nearly didn't work. And even when it did, your ears could have told you what state the polarisers were in...

As I said, there are many other attacks QC is vulnerable to that I am aware of from working in other communications fields, most of which are "class attacks" that students/researchers have yet to think about, let alone test and write up, and many may never get written about.

The thing about QC is that it is like the quantum bank note serial number idea that gave rise to QC: it's perfect in theory, but is it ever likely to be practical?

Sadly not. One reason being "switches" -v- "point to point". Early phones worked point to point; it quickly became clear this was not practical for general usage outside of intercoms. The result was the telephone exchange or "switch". QC, for various practical reasons, is not likely to work securely with switches.

I need to correct you regarding your assumptions on the CryptoPhone. While indeed Windows Mobile is used as the base OS in the current models, it is stripped down to remove both known vulnerabilities (some of them found in our own research) and likely attack vectors for which there are no currently known exploits. There is no "remote control" capability at all. We invest a great deal of time and effort into securing the OS, something that no other non-government-affiliated encrypted phone vendor does. Just for reference: even the NSA has chosen the same approach - hardening Windows Mobile - for the current generation of secure phones for US government use.

Certainly, there is always room for improvement, which is why we aim to have our own custom-built OS distribution in one of our next products, to gain even more control of the platform. But your bold assumptions that we don't care about OS security and can remote-control phones are not correct.

Whoa, now, don't jump to conclusions. I never claimed you guys don't care about OS security: just that I don't trust the approach. I've praised your product on other forums for the quality of the crypto, the OS hardening, and your distribution security. As for remote control: you can't prove that claim, and that's the problem. While I won't name them, there have been two crypto companies with many big contracts that turned out to be NSA fronts. They all made the same claims and assurances you guys do, but they loaded the binaries on the phone and the people who did all the checks were employed by them. Nobody knew they were backdoored.

In other words, there's no real assurance for outsiders using an untrusted supplier with a closed loading process. I would trust your product if all trusted code was analyzed by security experts, hashed, and I could check, compile and load the code myself. This would prevent tampering. As of yet, nobody can prove your software isn't subverted. Compare this to the likes of PGP (the original) and OpenSSL: if I want, I can ensure the binary is as trustworthy as the code. You guys might be as great as you claim, but I can't prove it and many have lied in the past. Does my skepticism really surprise you?

On OS security, I'm glad you guys are looking into a replacement. Instead of rolling your own, I would suggest you use one of the many high-security separation kernels or microhypervisors that have recently been developed. CryptoPhone could currently easily be integrated into OKL4, which is something like 15k-50k LOC in the TCB. It recently proved its strong separation and performance in the Motorola Evoke. They already ported WinMobile to it running fully in user mode. Drivers can actually be used from the user-mode OS's (a neat trick, I might add). You could run CryptoPhone's crypto stuff in separate cells right alongside it to ensure a partial red-black separation, with OKL4's ultra-fast IPC ensuring performance. Separation kernels would provide much higher assurance, but you'd have to port a bunch of stuff. Examples include Green Hills INTEGRITY, LynxSecure, VxWorks MILS, and PikeOS. INTEGRITY is the best of them and I believe it already virtualizes Windows Mobile as well, which may reduce or eliminate the porting effort. Great middleware too, but they are not cheap.

I wish you guys the best of luck on improving your foundation (OS/firmware) and developing a way to prove no subversion for customers like me with trust issues. Don't let my concerns fool you: I really like your company's offerings, principles and processes. If OS security and subversion are my biggest complaints, then obviously you guys are doing something right. Most can't even implement public-key right. ;)

Thanks for clarifying your points. Just one necessary correction: we actually publish our source code (under a restrictive license that allows its use only for verification), so you can verify that the binary on your phone is compiled from the source you looked into.

We can, for legal reasons, not do the same thing for the OS. Which is why we will move in the direction of a base OS that is completely open source, thus maximizing the verifiable parts step by step.

Regarding hypervisors and separation kernels, we are actually looking into a variety of these options to further harden our products, but it is too early to discuss them here. Feel free to contact me by mail.

I'm aware of your published source code. That doesn't help, though. My field of study is designing "high robustness" systems. That is: systems that can protect critical assets against sophisticated attackers with a lot of resources. In our model, which is what the government and many of your customers face, the biggest threat in crypto products is subversion. This means we have to ensure there is no malicious code in the product, which may be introduced by the company or a spy at various points: design; coding; maintenance; swapping binaries at the distribution server. In our model, we can't entirely trust the manufacturer (CryptoPhone) or the distributors. Ideally, a process would be in place that would allow us to be sure of two conditions: (1) the software on the phone is correct or secure; (2) the right software is on the phone, not switched by an enemy or the manufacturer. Right now, there are quite a few VPNs and embedded devices like this, but no cell phones. I've been designing one, but I need totally open hardware and a reliable comm stack. (OpenMoko is disappointing...)

So, you've indicated you're taking this one step at a time, and you've taken a great step in the right direction by publishing the source. I've reviewed it myself and found it, based on my albeit limited coding experience, to be of exceptional quality and readability. However, with your current processes, I can't prove that's what's on the phone. Ever since I heard of CryptoPhone, I've been working on a solution to this problem. Here's an initial step that may make it very trustworthy, based on independent review.

First, the firmware and base OS (source, binaries, whatever) are hashed and signed by their distributors. You make your changes to the OS, with notes justifying each one, and submit this to a reputable IT security lab. They verify it and both of you compile it with the same software, hash and sign it. They publish their signature of the binary on their site. That lab, or maybe several, gets the published source code and integrates it with the signed copy of the OS. This is hashed and signed. Posted on the lab(s) web site. Any customer with subversion concerns will get copies of the signed hashes and can have at it checking what's on the phone. Possibly a process of sending the customer a basically blank phone and signed binaries and letting them load it themselves. If you made a rule that only the signed version is put on a phone, then regular customers who receive phones using the current approach receive greater assurances as well.
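The cross-checking step at the heart of this process can be sketched in a few lines. This is a toy illustration, not CryptoPhone's actual process: the image bytes, lab names, and `cross_check` helper are all hypothetical, and real lab statements would be signed documents verified against each lab's public key, not bare digests.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Digest we compare against the labs' published values."""
    return hashlib.sha256(data).hexdigest()

def cross_check(firmware: bytes, published: dict) -> bool:
    """Accept the image only if every lab's published digest matches
    the one we compute locally on the phone's binary."""
    digest = sha256_hex(firmware)
    return bool(published) and all(d == digest for d in published.values())

# Hypothetical firmware image and two labs' published digests.
image = b"\x7fELF...firmware image bytes..."
labs = {"lab_a": sha256_hex(image), "lab_b": sha256_hex(image)}
print(cross_check(image, labs))            # matching digests: True
print(cross_check(image + b"\x00", labs))  # tampered image: False
```

The point of requiring agreement from every lab is that an attacker must now subvert all of them, plus the manufacturer, to slip in a modified binary.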

Right now, we don't have a totally trusted cell phone and we definitely need one. The approach I've outlined increases assurance against subversion while minimizing the cost (only one lab and emailing signed binaries). The lab has to be in the US though and one that works with big US companies or defense contractors. Several labs, preferably in different countries, would be even better but might be too costly. The idea is that we have two or more reputable groups vouching for the integrity and authenticity of the software. If this was true, I'd recommend it to some guys relying on US crypto products.

You can get a crypto card that has been formally tested, certified and approved as secure to ITSEC E4+ by the likes of CESG (the public face of GCHQ).

"Very simplistically", what you do is encode your audio using a hardware codec and then chop it down using one of those cute NSA CELP speech-compression algorithms running on the Gumstix. You pipe this into the crypto card, which AES-256 etc. encrypts it. You take the encrypted output and pipe it down a GSM V.110 data pipe via the Motorola G24 or Java24 GSM/EDGE mobile phone unit.

Alternatively, somebody I used to work with years ago when "doing mobile phones etc." set up a company (INTSEC Ltd) and came up with a (supposedly) "Ultra Secure GSM Phone", which they call the Enigma; it has supposedly passed the CESG ITSEC E4 evaluation and is therefore approved for military use. The website is www.tripleton.com

And before you ask, the usual disclaimer: I'm not involved with the company in any way, nor have I used, let alone evaluated, their products.

Start of /usr/sbin/rant: OpenSSL, as trustworthy as any binary, on any hardware. Give me two cans and a wire, and I'll give you something better. Don't forget to use lots and lots of duct tape for shielding.

Security and pay-for-play in this "arena" is a stillborn baby, best of luck, unless you've got access to intelligence agency tech, and people, personally.

Subversion concerns, and signed binaries and load, and from a website, sure does not build any trust with me, but that is my .02 fiat money standard.

When man forgets god is in the equation, man gets flooded and left searching for high ground that is still useless.

Words to the whatever, the sooner you learn to accept that security is just another step in cost based negotiation, the sooner you are on your way to working the problem better.
End of /usr/sbin/rant:

Yeah, the thought occurred to me. I guess it's the expense and the fact that I have little experience with embedded hardware. I don't know how to design boards and stuff. (Prob really need to learn it.) Some of what you mentioned is in my deskside secure phone that we discussed on a previous blog post. The design originally used CELP, but I heard there was a patent on the MELP's and CELP's or some of them. So, I changed to the Speex codec, although I'd prefer CELP. I don't like having a crypto card or hardware compressor, etc. It's more people to trust your secrets to. In my design, I just use one or more regular processors and efficient algorithms. One involved several Gumstix boards chained together: red side (speech compression, instant messaging, etc.); multi-level crypto; black side (mainly transport). Alternatively, I use one really good processor and a MILS separation kernel. Need trusted drivers & peripherals, though. I know everyone says Intel solved this problem with VT-d technology, but Intel's processors have a ton of errata. Ergo: they don't work as advertised, have had several vulnerabilities at processor level, and shouldn't be trusted. If someone built an open-source IOMMU for Power or ARM, that would allow untrusted drivers/hardware on the phone & would solve a huge problem.

We can use COTS processors with encryption acceleration built in, like certain Freescale PowerPCs or VIA's Eden. The reason is that these aren't purpose-built for high-assurance crypto. Their main market is consumer goods (read: unimportant), so they're less likely to be subverted. The VIA is still my favorite, with an onboard TRNG (independently tested), AES (ind. tested), SHA-1, SHA-2, and RSA support. All but RSA are national standards, which makes the chip even more desirable. I use RSA in personal designs, though. It's lasted a long time, its weaknesses are well understood, and the patent expired (yay!). I think IDEA's patent expires this year, so Bruce can use his old favorite cipher without paying. I just wish we had a Gumstix-like board with the C7/Eden. So far, the ARTiGO is the only good box & it's definitely not a mobile solution. My inline-media-encryptor design can use an ARTiGO, though. Lots of things can. {evil grin}

Thanks for the tip on that encrypted phone. I didn't know there was a commercial one evaluated that high. For those who don't know, ITSEC E4 is about equal to EAL5 in the Common Criteria, which is quite hard to get. This makes it a viable alternative to CryptoPhone, but still no proof regarding subversion. As I mentioned before, at least one huge crypto comm. company that supplied militaries and embassies everywhere turned out to be an NSA front. Can't remember if the evidence was conclusive, but it was good enough to keep me from buying. They are in a neutral country and still in business. Stuff like that makes me want a phone verified down to the firmware: either totally open or at least evaluated and signed off by many parties. While that might not ease PackagedBlue's poetic fears, having to subvert half a dozen groups in different countries significantly raises the bar and the level of confidence.

I know it's heresy, but I've been thinking about the new Chinese processor (Loongson?) & hardware. They've come a long way with it, planning to release four-core chips this year, and are open-sourcing almost everything about the hardware. Processor, firmware, drivers... everything. It's almost scary seeing how far they've come and where it's likely to be going. While I prefer to buy American, there's no comparable offering over here. I figure if we verified the Chinese TCB, or just built a compatible one and verified it, we could then load trusted (i.e. verified) software on it and be subversion-resistant to the core. We'd have to use random fabs, maybe lie about what the chip logic is used for, and check a portion of them to ensure no tampering. What do you think about using their open-source stuff to develop trustworthy platforms?

"As I mentioned before, at least one huge crypto comm. company that supplied militaries and embassies everywhere turned out to be an NSA front. Can't remember if the evidence was conclusive, but it was good enough to keep me from buying. They are in a neutral country and still in business."

The neutral country would not happen to be the gun-happy inventors of "milk chocolate", run by banking gnomes, by any chance? And as for their policy on crypto, argh, don't you just hate them ;)

More seriously, though, on to the other problems you raise.

I started to write up a reply and, as on previous occasions, I was assuming you were not going to be the only reader...

Thus my reply started to cover quite a few points and get a bit in-depth.

However, it has taken on a life of its own, and is in danger of becoming a book (or at least a chapter in one ;)

So I'm going to chop it about a bit and post it up as a series of replies over the next day or so, and rely on Bruce's good nature in allowing all the bits.

Sounds good and I'm sure he doesn't care. Our discussions have attracted at least two new people that I can remember. A few times I've thought about just getting your email and doing it there, but I like having a record of these discussions to help other readers and spur more interest into high assurance products.

I'm sure breaking up your next posts will also ease the pain in your fingers. I just still can't get over the fact that you type all that stuff on a cell phone. My fingers hurt after just a few paragraphs of texting.

If I understand you correctly, you have used Google to translate into English.

Is this correct?

Also, if I understand what has been produced, you are saying the following:

1) You have been told that a CPU cannot be used as a True Random Number Generator (TRNG)?

2) You have written a program that you believe shows this is untrue (a fallacy)?

3) You believe your program is efficient, in that it produces over 70MB/s of truly random number sequence.

4) You include a Visual C++ program that, as a simple example, will generate 10MBytes of data into a file "rand.dat"?

5) The principle of your TRNG is that, even though the computer is deterministic, when used with a multi-tasking operating system the method by which tasks are scheduled can be used to provide truly random data?

6) That is, to any given task that has no knowledge of the rest of the system, the state when the task starts is indeterminate to the task. You use this "uncertainty" as the entropy for your TRNG?

7) Your described method is first, like ARC4, to build an array of 256 bytes, each holding the same value as its index position,

That is, ARY[i] = i for i = 0 to 255.

You then read the least significant byte of the system counter (syscnt & 0xFF, or mod 256) and use this as an array pointer (n1) along with the incrementing array pointer (i) to swap bytes in the array.

That is, like the ARC4 key-update algorithm, you swap bytes in the array.

However, unlike ARC4, which reads data from a key file or array, you use the least significant byte of the system counter.

Thus you are using the ARC4 key updater to continuously stir your array, which acts as an "entropy pool".
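One plausible reading of step 7) above can be sketched as follows, in Python rather than VC++; `time.perf_counter_ns` stands in for the Windows system counter, and the ARC4-style running index `j` is my interpolation of the description, not the original code.

```python
import time

def stir_pool(rounds: int = 4096) -> bytes:
    """Stir a 256-byte pool with ARC4-style swaps, keyed by the low byte
    of a high-resolution counter instead of a key file."""
    pool = list(range(256))                  # ARY[i] = i for i = 0..255
    j = 0
    for r in range(rounds):
        i = r % 256                          # incrementing array pointer
        n1 = time.perf_counter_ns() & 0xFF   # least significant counter byte
        j = (j + pool[i] + n1) % 256
        pool[i], pool[j] = pool[j], pool[i]  # ARC4 key-update style swap
    return bytes(pool)

# Whatever the counter does, the pool is always a permutation of 0..255,
# so its state can never exceed log2(256!) ~ 1684 bits.
pool = stir_pool()
```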

8) The VC++ code you provide does not match the translated description.

It has instead been used to generate an array (str[NN+1]) of 1E6 chars (bytes?) of data, rounded up to the next block of 256 chars.

You first initialise the entire array of 1E6 chars. However, after the first 256-char block you will be writing numbers greater than 255 in size. Therefore you are assuming that your compiler either has chars greater than 8 bits or produces code that automatically truncates ints to chars.

You then use this entire char array in blocks of 256 bytes.

When completed, this array is then written to file.

Each 256-byte block in the 1E6-char array (str) is a copy of the first 256 bytes as they get ARC4 key-updated from the lower 8 bits of the system counter.

However, your program, from visual inspection, contains a number of anomalies.

A) 'char ch' and 'int n1' get assigned initial values that are not used.

B) the first 256 bytes and last 256 will be copies of each other.

However I don't quite think you understand what it is you are trying to do.

If we assume that your program is run standalone on a single-process operating system, without any form of memory management such as paging or swapping, and with no hardware interrupts, then you would not have any entropy in your RNG.

That is, if you were to examine the machine code of your program, you would find that it uses an exact number of processor cycles to get around the inner loop and back to 'n1=Nm()%nk;'.

Thus, if the sub 'int Nm(void) {... ...}' reads a hardware timer that is driven by the same clock as the CPU, then you would expect to see the integer value returned be the counter updated by exactly the same increment each time.

However, if the sub 'int Nm(void) {... ...}' reads a hardware timer that is driven by a different clock UNRELATED to the CPU clock, then you would expect to see the integer value returned be the counter updated by exactly the same increment each time, PLUS OR MINUS the clock drift rates. The entropy in this, on such a fast loop, is incredibly small, with an expected crystal (XTAL) clock drift measured in parts per million (PPM).

However, in an operating system with hardware interrupts enabled, you would expect occasional jumps as an interrupt steals CPU cycles from the process. But if the interrupts simply update the system clock once every millisecond, then the jump is predictable, and can be seen on examining the state of the array in the data file.

If, however, the hardware interrupts are caused by external events, the entropy depends on how and when those events occur. An attacker can use this to their advantage to cause predictable changes.

If the OS uses memory management that "swaps" or "pages" to the hard drive, then yes, you start to get some real entropy beyond the control of an attacker.

If you have multi-tasking, then the jumps in the counter value become dependent on what those tasks are and how they behave. This may or may not give real entropy.

However, you need to remember that on most trips around your inner loop you will get either a fixed jump, or a fixed jump ± the PPM drift rate, which might only be a change of 1 bit every thousand loops.
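The "mostly a fixed jump" behaviour is easy to see empirically. A rough sketch, using Python's `perf_counter_ns` rather than a raw cycle counter, so the numbers are coarser, but the clustering is the point:

```python
import time
from collections import Counter

def delta_histogram(samples: int = 10000) -> Counter:
    """Histogram the deltas between consecutive counter reads in a tight
    loop; on an idle machine they cluster on a handful of values."""
    deltas = []
    prev = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        deltas.append(now - prev)
        prev = now
    return Counter(deltas)

hist = delta_histogram()
delta, count = hist.most_common(1)[0]
print(f"most common delta {delta} ns covers {count / sum(hist.values()):.0%} of reads")
```

The fraction covered by the top one or two deltas is a quick visual proxy for how little entropy each read actually contributes.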

Exactly this type of random number generator, using a timer, goes back at least to the early 1980s (and probably well before). I rejected two different designs in "electronic purses" for exactly this weak-entropy problem.

The use of the ARC4 key schedule to make a random number generator using clock drift was presented by me as part of a "group task presentation" I gave at the EU-sponsored IPICS 2000 course held in Stockholm.

However, the idea presented used two different CPUs, one using the ARC4 key update and the other a variation on the Mitchell-Moore generator, to produce a "roulette wheel" system that was superior to the hardware design in the Intel chips (which suffered from RF injection faults). It was used as an example of why not to use such TRNGs. I also presented a 100% deterministic PRNG design using the ARC4 key update and the BBS generator to produce a cryptographically secure PRNG, which is superior in many, many ways (especially against RF fault-injection attacks).

If you go and look up the Linux /dev/random on Wikipedia, you will find links to many useful sources of information on the faults and strengths of "entropy pool" TRNGs. You will also find a link to another Wikipedia page on hardware random number generators.

Go and have a good read of them, then go and look up stream generators such as SNOW 2 for further ideas. Also look in Donald Knuth's "The Art of Computer Programming", Vol. 2, 3rd edition, for more on random number generators.

In modern systems TRNGs are rapidly becoming deprecated, as they cannot be effectively evaluated or tested to see if they are working correctly. One system that is quite fast is to use AES in CTR mode, with the AES key updated by another AES in CTR mode. You actually clock the key-generating CTR between every 2^7 or 2^8 outputs (primes in that range, such as 211, are good numbers), whilst not resetting the output CTR counter. The Universe will have reached thermal equilibrium long before the system rolls around back to the beginning (such is the joy of Chinese clock mathematics).
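A sketch of that two-generator construction. The Python stdlib has no AES, so SHA-256 over (key || counter) stands in for the block cipher here; the rekeying structure, not the primitive, is what the paragraph describes, and the 211-block rekey interval is one of the primes suggested above. `CtrGen` and `keyed_ctr_stream` are illustrative names of mine.

```python
import hashlib

class CtrGen:
    """Counter-mode generator; SHA-256(key || counter) stands in for AES,
    which the Python stdlib lacks."""
    def __init__(self, key: bytes):
        self.key = key
        self.ctr = 0
    def block(self) -> bytes:
        out = hashlib.sha256(self.key + self.ctr.to_bytes(16, "big")).digest()
        self.ctr += 1                        # counter only ever advances
        return out

def keyed_ctr_stream(seed: bytes, blocks: int, rekey_every: int = 211) -> bytes:
    """Output generator whose key is refreshed from a second CTR generator
    every `rekey_every` blocks, without resetting the output counter."""
    keygen = CtrGen(seed)
    outgen = CtrGen(keygen.block())          # initial output key
    out = bytearray()
    for i in range(blocks):
        if i and i % rekey_every == 0:
            outgen.key = keygen.block()      # new key; counter keeps running
        out += outgen.block()
    return bytes(out)

stream = keyed_ctr_stream(b"example-seed", 512)
```

Because the output counter is never reset, identical keys at different counter positions still produce distinct blocks, which is what pushes the period out so far.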

Oh, and NEVER NEVER EVER think you can improve the quality of the output of an RNG by using a hash of any kind: you cannot magic entropy into existence.

Mr. Bruce Schneier:
Thank you for pointing out the program's defects. I would like to learn from them properly. Could you send me a letter by e-mail? That would help with my research questions.
Your student, shijian

Actually, Clive Robinson helped you. There are many people on this forum, not just Bruce.

@ Clive

Posts like that make me wonder why I didn't know of you before I came to this forum. You mention all these projects and presentations, but I've literally never heard of you. Initial searches show about 3 guys with your name, profile, estimated age and location. I couldn't narrow it down further, as you're an extremely hard guy to pin down. Please forgive my intrusion: I just like knowing who I'm talking to. With most big names in crypto or security engineering, I know who to address an email to. With you, I don't even know that much. So, who are you? If you want to give even a partial answer, then email it to Falconofglory@hotmail.com. Of course, if you're a really paranoid type, feel free to decline here and I won't hold it against you. ;)

@Clive Robinson
Thank you for your attention.
Here I have corrected some errors in the random generator source.
It no longer has the defect of generating the same head and tail blocks of the array.
If the array were not truly random, how could we achieve this result?

"Initial searches show about 3 guys with your name, profile, estimated age and location."

Ahgh huh, it's a few more than that (if you've read all my posts on this blog you will know there are a couple more, and worse, there are actually people out there who my "outlaws" think are me (but are not), and I actually know the people concerned). I once had a mix-up over passports with a friend when traveling in a group: a young lady working in a hotel gave mine to him and his to me, and we both traveled back through several EU passport control points to get home; it was only a few days after getting back that the problem was discovered... (which is why I have no faith in standard ID documents, and somewhat less in biometric IDs ;)

Like Bruce, I have an enquiring mind, and I've been told my CV is unbelievable even by those who know me and know it's 100% legit. One such person even shares my name (and is possibly one you have found).

I'm an "Englishman", and this is where the problem starts 8)

In England there are three types of "inventor": those who work for the Government or from the "public purse", those who work for companies, and those who work out of "garden sheds". I've been all three at one point or another.

In the US there is a difference, which is "no garden sheds": those with good ideas in the US generally do (did?) not have many problems getting financing etc.

This is because in the US those with an inventive mind tend to be regarded as a valuable resource by investors. In the UK, however, inventors are "mad scientists", engineers are "blue collar workers", and neither is regarded as "professional"; both are thus seen as an "exploitable resource", not something to invest in.

Then there is a major difference in patent law, which further discourages the inventive mind in the UK but partially encourages it in the US (except for submarine patents, ugh).

Because I don't like the academic community much (not their fault but that of the UK Government), I have worked under the Official Secrets Act and Non-Disclosure Agreements.

However, the crypto community was a bit of an oddity in the ivory towers and gleaming spires of academia. Until recently there was little or no kowtowing to those with status. That is, as a very young field of endeavour there were very few entrenched personalities and institutional ivory towers.

Oddly, although I don't have a PhD, I have held positions where I've been called "Professor" and "Doctor" (just the oddity of the way professional courtesy works).

And the reasons for not having a PhD: partly, I don't subscribe to what a friend at the LSE calls "academic masturbation", and the ideas I thought worthy of following were actually not understood by any would-be academic sponsors.

For instance, fault-injection attacks by modulation of RF carriers. The only academic who appeared interested, some ten years after I proposed it, was Ross Anderson, when I warned him about it as an attack against his asynchronous-logic idea for getting around DPA issues. And guess what: two students in his lab have just recently published a paper on the simple form of the attack.

However, there are a couple of other reasons I never really jumped up and down and found an appropriate academic sponsor. I'm too interested in moving on (thrill of the chase, I guess), and I have zip respect for certain types of authority (those that demand respect/authority get right up my left nostril). However, I don't help myself: I think up a problem, work out the various aspects, say "yup, that's what an engineer needs to know", and move on to something new. I prefer to let others with a fondness for doing the "formal paperwork" that gets papers published get on with it. My only real care is moving the "body of knowledge forward for all", not putting a fence around a bit of "field of endeavour" real estate.

Unfortunately, that attitude does not go down well with the OSA and NDA types.

I got upset and left one company over a patent for a multiple-hard-disk controller. The management decided to go for a very, very limited scope, just on parallel access to get high bandwidth. Me, I wanted them to include all sorts of things like parity and what we now call mirroring and striping. Thus the patent that could have covered all of the common RAID formats did not happen. The company's loss, not mine; I took the ideas forward to another company who were designing high-end graphics systems, where the ideas, with a little modification, provided ways to handle dynamic RAM (who apart from me remembers the why of CAS and RAS, and a cute technology, Content Addressable Memory (CAM)?)

Another area I was well ahead of the curve on is what we now call "Software Defined Radio". What few realise is that you can use a very fast ECL RAM chip and a few ECL logic circuits as a digital superhet line-up with a QPSK etc. demodulator (oh, and it also works the other way around).

Messing about with the ideas and proving them to the point where others could exploit them was what I enjoyed doing. Not profitable in monetary or reputational terms, but very satisfying nevertheless (Doug "DC" Coulter might want to say a word or two on this subject).

Looking back, I've probably had a PhD-level idea about every month or so for the past 20-30 years, in many different and very varied fields of endeavour. I just like spotting and solving puzzles. In the same way, I used to enjoy picking locks when I was eight, likewise faking fingerprints and a whole host of other things (making duck ponds with fertiliser ;) and nearly killing a friend when showing them how you can reveal fingerprints with solvents and superglue). Some got me very, very close to the wrong side of that dividing line where you might get your "collar felt" by various LEAs etc. (oh, and upsetting the likes of the DNA-testing scientists by showing how to game the DNA system is par for the course ;)

One biometric challenge I still think about is the likes of retina and iris scanning. I know how to game some of them, but not using what I call "under the sink" components that are available to all. However, I'm convinced it can be done; I've worked out how to do it for almost all other biometrics (which is why I know they are a failed technology looking to gull money from those who "want answers not problems").

Sadly, some of my best ideas are subject to various "official/legal rules" etc., which is a shame, as I get frustrated watching people scratching around in the dark when I could tell them where to shine the light (but I'm not allowed to until after they have stumbled over it in public 8(

It is also sad when you know somebody has done a very serious bit of work but nobody appears interested, and you see them get disheartened (like the problems relating to "efficiency" and "security", or the TEMPEST "red eye" issue which has recently blown a hole in current QC systems ;). I'd love to say "just do this and then people will sit up and listen", but hey, that's the rules of the game when you've taken the Queen's Shilling at some point in your past.

Anyway, Bruce has an email address for me that I look at occasionally (it's one I use as a social account to receive some interesting/funny/risky jokes etc. which a boss might take exception to, if given cause, in this HR-driven, PC-mad world that is work these days 8(

Computers were not originally able to generate random numbers, but because a certain class of functions (in C++) exists whose execution time differs on every call, we can generate random numbers programmatically: the execution time carries a certain randomness, and by using this property we can obtain the random numbers we need.

Such functions include SetWindowText(NULL), Beep(0,0), MessageBeep(MB_ICONQUESTION) and so on. We only need to monitor their execution time continuously. However, their execution time is very short, and ordinary methods of measuring time are too coarse, so we must design our own measurement. Every computer runs at a fixed clock frequency, with a corresponding clock cycle, so we can measure a function's running time in clock cycles since the computer started: read the current cycle count, let the function run, then read the cycle count again; the difference between the two counts represents the function's running time. A loop can then collect many such timings, and the processed running times form a random array. Test data on similar computers look alike.

Let us look at the experimental data:
Function running times: 115636, 114283, 114899, 115030, 114488, 114350, 114866, 115132, 114317, 114757
Grouped into 16-bit words (the times mod 2^16): 50100, 48747, 49363, 49494, 48952, 48814, 49330, 49596, 48781, 49221
Grouped into bytes: 180, 107, 211, 86, 56, 174, 178, 188, 141, 69
Clearly, grouping into 16-bit words is not good, because the maximum running time is only about 0.7 times larger than 2^16; but grouping into 8 bits works well, and the remaining bits of data can be combined.
Generating and testing large amounts of data revealed no cyclic behaviour; the output never repeats, so all elements appear with good random probability, and so they are true random numbers.
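The measurement loop described above looks roughly like this in Python, with `perf_counter_ns` in place of a cycle counter and a dummy workload in place of Beep()/MessageBeep(); the low byte of each timing corresponds to the 180, 107, 211, ... row of data:

```python
import time

def timing_bytes(n: int = 10) -> list:
    """Time a short workload repeatedly and keep the low 8 bits of each
    measurement, as the comment describes."""
    out = []
    for _ in range(n):
        start = time.perf_counter_ns()
        sum(range(1000))                 # stand-in for Beep()/MessageBeep()
        elapsed = time.perf_counter_ns() - start
        out.append(elapsed & 0xFF)       # keep only the low byte
    return out

print(timing_bytes())
```

Whether those bytes are uniformly distributed and attacker-independent is exactly what the earlier analysis in this thread questions; "no repeats were observed" is not by itself a test of randomness.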

Encryption
1. Build a long file of true random numbers to use as the key.
2. Plaintext + key = ciphertext.
3. Encrypt the key file with an encryption key.
4. Combine the ciphertext and the encrypted key file into a combined file, for transmission or storage.
5. End of encryption.
Decryption
1. Take the combined file.
2. Split the combined file into the ciphertext and the encrypted key file.
3. Decrypt the key file with its key.
4. Ciphertext - key = plaintext.
5. End of decryption.
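Steps 2 and 4 are usually realised with XOR, which makes "+" and "-" the same operation. A minimal sketch of just the pad step (the key-file encryption and the combined-file packaging are omitted, and `secrets.token_bytes` stands in for the "true random" key file):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # one key byte per plaintext byte
ciphertext = xor_bytes(plaintext, key)     # step 2: plaintext "+" key
recovered = xor_bytes(ciphertext, key)     # step 4: ciphertext "-" key
print(recovered == plaintext)              # True
```

Note that when the key file travels alongside the ciphertext, protected only by step 3, the scheme is only as strong as that key-file encryption, not a true one-time pad.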
The program is designed with reference to the one-time pad. The plaintext is encrypted with true random numbers to form the ciphertext. The key file is encrypted using a random function and exists independently. The key file is itself a heap of gibberish, and after encryption it is still a heap of gibberish; although the outputs of the random function have internal structure, the encryption target is a true random array, which obscures that defect. One could also say the true random array encrypts the pseudo-random sequence. The key cannot be cracked here, and you cannot recover the key file; if you cannot recover the key file, you cannot decrypt the ciphertext. The truth is that simple.