For the English readers without Dutch sources, here is some additional information which might or might not be in the article:

Police seized and operated a server of a company called Blackbox Security which offered 'crypto phones': basically, phones pre-installed with some software, sometimes with all other communication abilities disabled aside from that application. The price for these phones was 1500 EUR including a 6-month plan, and 750 EUR per 6 months of usage afterwards.

While the Dutch DA and police have not given any details as to how the security was broken, there are some clues (this is speculative as fuck):

* A user's guide that used to be published on the Blackbox Security website hints that their chat application was XMPP+OTR.
* Real-time access but no historical access hints at an MITM that swapped the previously exchanged OTR keys (a common way to 'break' into a conversation).
* The application seemed not to enforce and/or check the key signatures for changes.
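
If the speculation in the last bullet is right, the missing check is simple: pin a fingerprint of the peer's key and refuse to proceed when it changes. A minimal sketch of the idea (all names here are illustrative, not taken from the actual app):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short hex fingerprint of a peer's public key material."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def verify_peer_key(pinned: str, presented_key: bytes) -> bool:
    """Refuse to proceed if the presented key differs from the pinned one."""
    return fingerprint(presented_key) == pinned

alice_key = b"alice-public-key-material"
pinned = fingerprint(alice_key)

# A compromised server performing an MITM substitutes its own key:
mitm_key = b"attacker-public-key-material"
assert verify_peer_key(pinned, alice_key)      # legitimate key accepted
assert not verify_peer_key(pinned, mitm_key)   # swapped key rejected
```

An app that skips (or silently auto-accepts) this comparison lets whoever controls the server re-key the conversation unnoticed.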

The reason the DA gave for publishing this news is that, in the seized chat messages, owners of these devices made threats of violence and reprisal after police action was taken against them. They wrongly blamed the people they were communicating with for leaking the information. As tensions increased, retribution hits became likely and the public would be at risk. Hence: release the information.

To reiterate: there is so far no reason to believe the actual cryptographic protocols in use were broken, but yes, taking over the server allowed them to MITM the OTR key exchanges and/or impersonate another client. I could get more technical, but since this is all speculation I don't see much value in it.

I've been looking into this issue for the last 2 days:
The APK we all looked into is IronChat-3.40-release.apk [1]; after decompiling it, I think it is a copy/fork of Conversations version 1.14.6 with a few more commits until October 13th 2016 [2], with Secret Space Encryptor version 1.7.2c [3].

About unencrypted support: "supportUnencrypted() { return false; }" is in the config, so probably not [4].
There is a theory (which I kind of started [5]) that it might be MitM because of bad UX. This is because they used OTR without TOFU, which was only added in Conversations 1.15, after the fork [6].

I have designed cryptographic protocols for securing card payment transactions, card data, and PINs, and then took those systems through stringent certifications from various organizations, especially PCI.

Not enforcing signatures when exchanging keys? Is this crypto-kindergarten?

With a well designed payment system it is expected that attackers have access to basically all of the infrastructure -- servers, databases holding keys, the network -- and can disassemble terminals, bribe employees, etc., and still have no chance of injecting their own keys, reading PINs, or obtaining any cryptographic material of value.

Why can't you guys who claim to build secure systems spend some time reading the real requirements from PCI, Visa, or Mastercard, to get at least some idea of how real secure systems are built in at least one area?

I suppose that in the payment systems, there's a trusted party that has the ability to compromise transactions. The different parties in the system trust that party and rely on audit and potentially arbitration or litigation to resolve disputes about individual transactions or classes of transactions.

For end-to-end encryption for messaging applications, there may not be any such entity that everyone can trust. In that case, there needs to be a solution for key exchange to allow new parties and devices to join the system. In payments you could presumably say "the banks/banking association/cryptographic contractor of the banking association is the authority that certifies new entities that join the system". In messaging you probably can't do that if you're concerned that law enforcement will force that entity to add false certifications!

In other words, I think you're referring to a cryptographic problem with a somewhat different threat model.

There don't and should not be any trusted parties. Every trusted party is, by definition, a point of failure if the party is compromised.

PCI requires that organizations control cryptographic material using the rules of dual control and split knowledge. No individual should have access to an entire cryptographic key, and processes and devices should require at least two people to operate.

For example, HSMs are ALWAYS operated by at least two security officers. Cryptographic keys are generated by the HSM in the form of multiple components onto multiple smartcards. Each smartcard is stored in a separate safe that only the security officer(s) assigned to that component can access. An HSM to be injected with keys must be operated by multiple security officers with their components. The HSM is regularly inspected -- each security officer brings his key from his safe, and two keys are required to open the enclosure where the HSM is located. When a payment terminal is injected with keys, two operators are present, monitoring each other to prevent tampering with the process. Etc.
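
The split-knowledge part is commonly implemented with XOR key components: the final key is the XOR of components held by separate officers, so no single component reveals anything about the key. A rough sketch of the arithmetic (illustrative only, not any particular HSM's API):

```python
import secrets

def make_components(key: bytes, n: int = 2) -> list[bytes]:
    """Split a key into n components whose XOR equals the key."""
    parts = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for p in parts:
        last = bytes(a ^ b for a, b in zip(last, p))
    return parts + [last]

def combine(parts: list[bytes]) -> bytes:
    """Recombine components by XORing them together."""
    out = bytes(len(parts[0]))
    for p in parts:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

key = secrets.token_bytes(16)
parts = make_components(key, n=3)
assert combine(parts) == key                 # all three together recover the key
assert all(p != key for p in parts)          # no single component equals the key
```

Because each component is uniformly random on its own, an officer holding fewer than all components learns nothing about the final key; that is what makes the two-person ceremonies meaningful.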

With a good understanding of the concepts it is possible to build a secure system. It's not that hard.

I think we're talking about different levels of the system. The attack in this case was not about an individual employee unilaterally taking an improper action, but about a company being officially compelled by a government to take an action that was contrary to the interest of an end-user. In the financial system this happens all the time and is considered somewhat unremarkable.

If this company had had a dual control mechanism where multiple security officers had to be involved in order to issue signatures, presumably the company's executives would have told those security officers "we have to issue this signature because the government requires us to", and presumably the security officers would then have done it. It wasn't a rogue action from the organizational point of view, only from the customer's point of view.

Also, in a messaging application new public keys have to be certified extremely frequently because new users and devices are constantly joining the system with new keys. Presumably this happens in an automated online fashion (otherwise, the security officers aren't going to get much sleep). That makes it even more challenging to subdivide the responsibility for certification, for many reasons.

I don't mean to disparage the precautions that financial organizations have implemented, and I agree that some parts of the software world sometimes seem extremely cavalier in comparison. But I still think that in this particular case the threat models are extremely different.

1. They are choosing and then providing the devices. This is very important because it means they have physical contact with the device initially, so they have the means to bootstrap the cryptographic system by injecting keys, etc.

2. They are middlemen transferring messages between parties that use their devices; they only route the messages and never need to read the secret part.

3. They are paid well enough for the service that they should be able to cover expensive devices and processes like manual key injection or expensive hardware security modules.

4. The core of the business is security; if it is not provided, nothing else changes the fact that they have not delivered what they were paid for.

The only real difference from the payment industry is that here the threat comes from governments, too.

I forgot about the point that they physically provide the devices, which might be relevant somehow. But how could they use financial-industry-like controls to prevent themselves from being compelled by a government to certify a man-in-the-middle attack? How can the processes distinguish between "we believe this statement is true" and "the government compels us to state that we believe this statement is true"?

You'd think, but as I remember even the EMV specs can be a bit handwavy about the entropy of derivation components. Given the tendency to rely both on limited attempts in hardware and on the security of keys generated and stored in HSMs (which people essentially use HashiCorp Vault to replace these days), the attack surface is fairly broad for a police service.

There is a basic bootstrapping problem with all these systems, where either you have a TSM facility of some sort, or you accept the ostensibly very low likelihood that your key provisioning protocol gets compromised.

Trouble is, if you are a target of any intelligence interest, any linux remote zero day means your provisioning server is probably going to get owned out of the gate as soon as someone seizes one of your devices.

Not checking a signature seems like an unforced error, but really, there are so many plausible ways this could have happened.

Bootstrapping can be done correctly, but it requires a lot of planning and preparation to do right. We had to scrap a few attempts at bootstrapping. For example, we had to scrap our first attempt because we had allowed a single employee to receive the package with the HSM.

The security of a well designed system will not be impacted by any number of zero-days or by attackers having free access to your network, devices, databases, etc. This is because a well designed system will not base the security of the data it protects on components and mechanisms that cannot be trusted to be secure.

> Blackbox-security.com, the site selling IronChat and IronPhone, quoted Snowden as saying: "I use PGP to say hi and hello, i use IronChat (OTR) to have a serious conversation," according to Web archives. It wasn’t immediately known if the endorsement was authentic.

This strikes me as... very inauthentic. I think anyone with even a basic understanding of crypto would do things the other way around.

For all the flak it's received, PGP has no known flaws if you know how to use it properly. IronChat may have had some security merits (I can't say), but until there have been some in-depth audits you can't trust it.

It's not about the technical features that are theoretically available, it's more about how much you can believe they actually hold in reality.

This was all in the context of Edward Snowden allegedly praising IronChat over PGP. I trust the man to be able to use the latter properly, and thus to use it for secure communications, more than I trust IronChat.

OTR uses AES-128 with the Diffie–Hellman key exchange, and SHA-1 hashes to confirm integrity. So in terms of technology it is a pretty well travelled road. And the advantages of OTR over PGP make it worth seriously considering for secure messaging.
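
A toy sketch of the Diffie–Hellman step that OTR builds on, showing why the two sides derive the same secret without ever sending it. The parameters below are deliberately tiny and insecure, purely for illustration; real OTR uses a 1536-bit MODP group, then derives the AES-128 keys and the SHA-1 integrity hashes mentioned above from the shared secret:

```python
import secrets

p = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a small prime: fine for a demo, NOT for real use
g = 5

a = secrets.randbelow(p - 2) + 2   # Alice's private exponent
b = secrets.randbelow(p - 2) + 2   # Bob's private exponent

A = pow(g, a, p)                   # Alice sends A in the clear
B = pow(g, b, p)                   # Bob sends B in the clear

# Each side combines its own secret with the other's public value:
# B**a = (g**b)**a = g**(ab) = (g**a)**b = A**b  (mod p)
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both derive the same shared secret
```

Note this also shows where an MITM fits: an attacker sitting between the two and substituting its own public values gets a separate shared secret with each side, which is exactly why the key fingerprints must be verified.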

As to specific implementations I cannot say, but that's true with both OTR and PGP.

> As to specific implementations I cannot say, but that's true with both OTR and PGP.

That's exactly the root of the problem. When you say you use PGP, there's a very high chance you're using GnuPG from the command line, i.e. the one that has been reviewed by every security expert who wanted any bit of recognition. When you say you're using OTR, everything depends on the specific implementation, so there are infinitely more ways your setup can be compromised.

I do agree that from a purely technical point of view OTR is better than PGP (except maybe the need for both parties to be online at the same time, but that's a minor inconvenience compared to the additional security OTR provides). But in this case the technical merits are not really important; what really matters is the complete system, and in that view the old, crufty, hard-to-use PGP wins.

You seem to be taking the name of one specific company a little too literally. As I showed previously, it is a public protocol that interoperates with open source clients; it isn't a black box in any way, shape, or form.

I feel like this discussion is getting away from the original premise:

> anyone with even a basic understanding of crypto would do things the other way around.

i.e. use PGP instead of OTR. Nobody has yet even attempted to explain their reasoning as to why. Bringing up one specific vendor is a deflection, rather than an answer.

You seem to be inferring more focus on the name of the company than intended. Is this not closed-source software? Might it not have some weaknesses at any point in its implementation or update mechanisms -- by design or inadvertent -- that put it at a disadvantage to PGP?

How were the police able to seize the company that sold IronChat in the first place? That's like shutting down Open Whisper Systems for criminals using Signal. Do we know if the company knowingly did business with organized crime? Or defied a court order? Or is it simply illegal to sell encryption hardware in .nl?

Not sure whether this is how they did it, but a new law has given police 'hacking powers'.
Otherwise, it'd probably be a court order based on the claim they knowingly did business with organized crime.

The most obvious way this could have been done is by MITMing the key exchange. The giveaway is in the last paragraph: "The IronChat app, Schellevis reported, also failed to automatically check if the server it used to exchange messages with other users was the correct one."

Why would they even need to trust a server for key exchange? Wouldn't it be more reasonable to just exchange public keys at the time of adding the contact to the contact list, and afterwards only update keys using the previously used keys?
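
What's described here is essentially trust-on-first-use (TOFU) plus authenticated key rotation. A minimal sketch of the idea (all names are illustrative):

```python
# Pin a contact's key on first sight; afterwards only accept key changes
# that are authenticated by the previously pinned key.
known_keys: dict[str, bytes] = {}

def on_key_received(contact: str, key: bytes, signed_by_old_key: bool = False) -> bool:
    if contact not in known_keys:
        known_keys[contact] = key          # first contact: pin the key (TOFU)
        return True
    if known_keys[contact] == key:
        return True                        # unchanged key: fine
    if signed_by_old_key:
        known_keys[contact] = key          # rotation authenticated by the old key
        return True
    return False                           # unauthenticated change: reject

assert on_key_received("bob", b"key1")                        # pinned on first use
assert on_key_received("bob", b"key1")                        # same key accepted
assert not on_key_received("bob", b"key2")                    # silent server swap rejected
assert on_key_received("bob", b"key2", signed_by_old_key=True)  # legitimate rotation
```

With this in place, a server can still MITM the very first exchange, but it can no longer swap keys on an established conversation without being detected.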

Every reasonable person who seriously cares about privacy has a second device with access to the same account, and as soon as they lose a phone they tell everybody the account is compromised and make a new account. It's amazing how careless some criminals happen to be; I know many don't even bother to delete messages after reading.

I wonder why they would make this information public. Wouldn't it be advantageous for the police to let those with criminal intent continue using the platform, keeping an established surveillance method? Or would it be unavoidable that it becomes public in criminal complaint filings and such?

It was made public because the exposed operations of criminals led to threats being made against assumed 'leaky partners'. The police did not want these people to retaliate against other people and possibly endanger bystanders.

I know nothing about the IronChat service, but if what is shown in that archived page is the real device then it's pretty obvious that it was doomed to be cracked. A cellphone? Seriously... a freaking cellphone?

Cellphones -all of them- use closed binary blobs for their device drivers, and to date there is not a single cellphone in the known universe that is free of proprietary closed code. That includes the Librem 5, which is a wonderful step in the right direction but still not completely free of closed blobs, hence not secure.

So what's the problem with (closed) device drivers? Well, they run all the time, they run at maximum privileges (higher than root) and they cannot be audited to spot any malicious code, which makes them the most effective place to hide spyware code. If any government tells a hardware manufacturer to "put our spyware into your driver or your business ends tomorrow" they comply, nobody can spot the code and there's no anti malware software that will detect it.

But why should one care if all text is end-to-end encrypted? Well, on a bugged phone there's no such thing as safe encryption. Let me be clearer: if you type text on the virtual keyboard, or on any device connected to, say, the USB port or through Bluetooth, the text is read by the relevant drivers (higher priority, closed, not auditable) before it reaches the encryption code (a lower-priority user app), so it can be stored, transmitted (network drivers are closed too), etc. Closed device drivers can be used on most platforms (including PCs) to build a covert channel where information (text, sound, images, etc.) travels completely unbeknownst to the user, so a platform can't be considered secure until every single bit of software and firmware it contains can be checked.

So, how did the police decrypt that traffic? I can only speculate that they confiscated one of these devices, then built a bugged driver for some vital component within it, then went to the manufacturer and forced them to push that tampered driver as an online update for that model of phone, possibly installing only if certain conditions were met, to be sure it hit one of the targets.

If that scenario is half true, then there is not a single piece of computer hardware in the world one can safely assume to be secure. An Arduino-like board, maybe, until the day they'll build faster ones around bigger chips carrying closed blobs inside.

While what you're saying is technically correct, I think you vastly overestimate the resources available and likely to be committed by law enforcement, and the competence of the people making/selling these phones.

Closed source software can be secure. Whether there are binary blobs or not is not really relevant.

> So what's the problem with (closed) device drivers? Well, they run all the time, they run at maximum privileges (higher than root) and they cannot be audited ...

Modern phones have baseband/AP separation; closed source components are often running more like a peripheral on good phones.

For any of those components to exfiltrate the data it would have to somehow get access to the network, persistently store it, or use some other side-channel... Yeah, I'm sure those tiny bluetooth chips can do all that over the limited peripheral interface they use to communicate with the kernel.

> So, how did the police decrypt that traffic? I can only speculate that they confiscated one of these devices, then built a bugged driver for some vital devices within it, then got to the manufacturer and forced them to inject that tampered driver as an online update for that given model of phone, possibly installing only if some conditions were verified to be sure it was one of the targets.

Absolutely ridiculous. Why would you go to the effort of creating a malicious driver when, if you can push updates to the phones, you could just update the app code itself?

There's no reason anyone would use malicious drivers when they could use malicious application code; the latter is a darn sight easier to manage.

The most likely scenario, however, is that there was a bug in the cryptography that assumed the servers to be trusted or assumed some specific key had authority to mint new keys (e.g. a trusted CA that the police got the private key to).

Your post is a rant against closed-source driver blobs, when the reality is they're a difficult to exploit vector, at best.

I would like to direct you to tptacek's comment on the Librem5 [0] where he indicates that he, a security professional, believes the iPhone to be the most secure because of the level of auditing and security work they've put into it.

> ... there is not a single piece of computer hardware in the world one can safely assume to be secure ...

Things are not simply 'secure' or 'not secure'. They are secure against something.

Security is a continuum. An iPhone (with secure enclave, good disk encryption) is more secure than my laptop, which in turn is probably more secure than the average wordpress server.

I fully believe that my iPhone will withstand even motivated attackers with physical access. I don't think my laptop will. I don't know how it would fare against a nation-state specifically targeting me, but I don't really have to worry about that.