From what is written, I understand this to mean that users can select this feature for specific conversations; that is, not all messages are subject to this encryption.

I am not usually one for paranoia, but is anyone else becoming more suspicious about Facebook's motivations and involvement with government? This feature is a massive boost for intelligence services dealing with unsophisticated actors. It reduces the haystack significantly, with users self-flagging messages that may be incriminating. Many millions of FB messages must be sent every day; brute-forcing encryption on all of them is probably not possible. The small percentage marked as 'secret conversation'? Much easier.

Why doesn't FB just apply encryption to all messages? Surely they have the resources available. Is it because this feature makes somebody else's job a lot easier?
If my suspicions are correct, what sort of threats would this pick up? Are serious threats likely to use FB messages with 'secret conversation' enabled to co-ordinate actions?

Hundreds of millions use Messenger from a web browser. No secure way to verify code or store keys without routing through mobile.

I wouldn't use the web version if they had not disabled Jabber access... and then I could use OTR.

This trend makes me very sad... IM networks are getting more centralized than ever. I don't feel thankful for this kind of development. End-to-end encryption should not be a feature of the service provider, but of the client. The way this works with WhatsApp/FB/Google just requires us to believe that their proprietary client is actually doing what it promises. And for those of us who don't have a smartphone, they don't even promise anything.

I just wish I could use XMPP or Matrix with my non-nerdy friends. There was this time Google seemed to not be evil with GTalk/XMPP, but then... The business cynicism that dominates this industry, allowing people to claim that they "connect people" at the same time that they put everyone in digital prisons, makes me really want to leave computing and go live in a cave.

Enabling interoperable chat to non-nerdy friends is basically Matrix's raison d'être. We're not there yet (see https://matrix.org/blog/2016/07/04/the-matrix-summer-special... for the current status), but if we don't provide it we will have failed in the whole mission to defragment these silos, letting users choose which service to trust without losing interoperability.

The good news is that the Olm end-to-end cryptographic ratchet that Matrix is in the process of deploying (https://matrix.org/git/olm) is built using the same algorithms as Signal Protocol's ratchet (although it's an independent implementation) - so we're hopeful that at least technically the window is open in future for using Matrix to defragment all the services who have adopted Signal Protocol (WhatsApp, Google Allo, FB Messenger, Signal itself etc) without compromising the E2E privacy. Right now this is total sci-fi, and won't likely happen (whilst preserving E2E crypto) without cooperation from FB, Google etc.

However, we hope to get Matrix to the point where they see that the longer term benefits of participating in a healthy open ecosystem outweigh the short term benefits of trying to lock users into a silo - just as email eventually interoperated over the early internet. That's a while off, but this is still the goal.

The best way to make sure this happens is to play with Matrix in its current form, and help us write bridges (e.g. https://github.com/matrix-org/matrix-appservice-bridge/blob/...) to interface as many silos as possible into Matrix. The more bridges, the more useful Matrix is, and the higher the chance of building an ecosystem which eventually Google, FB and friends will find attractive.

On the serverside (which may be "local" depending on where your server is running), the history for the rooms you participate in is stored as a matter of course. Matrix operates by replicating that history between the servers participating in the conversation. If that history is e2e encrypted, any device with the necessary cryptographic ratchet state can decrypt it. (Note that rooms may be configured to rotate the ratchet frequently, deliberately limiting how much history any one key can decrypt and providing a form of PFS.)
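The rotation-for-PFS idea is simple to sketch: each step of a hash ratchet derives a one-off message key and a new chain key, and the derivation is one-way, so discarding an old chain key makes earlier message keys unrecoverable. This toy illustration shows the general shape of a symmetric hash ratchet, not Olm's actual construction:

```python
import hashlib
import hmac

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Advance the chain one step: derive a one-off message key plus the
    next chain key. HMAC is one-way, so earlier keys can't be recovered
    from later chain state."""
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain

# Start from a shared secret (in a real system, established via Diffie-Hellman).
chain = hashlib.sha256(b"shared-secret").digest()

message_keys = []
for _ in range(3):
    mk, chain = ratchet_step(chain)
    message_keys.append(mk)

# Three distinct per-message keys; once an old chain key is discarded
# (a "ratchet rotation"), the keys derived before it are gone for good.
assert len(set(message_keys)) == 3
```

Olm layers Diffie-Hellman steps and group-session machinery on top, but the forward-secrecy property discussed above comes from this kind of one-way chaining.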

On the client side, it depends on the client you're using. Many native Matrix clients and many bridged clients give the option to store local history (whether it was originally encrypted or not). Smarter clients will store it encrypted at rest by whatever mechanisms the OS and hardware provide.

I honestly find this centralization as worrying as mass surveillance itself. I'm as afraid of the Facebooks of this world as I am of any government, and I don't want all my communication locked in with one company.

This is why I will not use or recommend Signal. Moxie's anti-federation stance is unacceptable to me. It's replacing one problem with another.

I don't like this either. My long-time wish has been to have federated systems for instant messaging and for social networks. Neither seems to be getting any traction, with the big and rich corporations actively working against any kind of interoperability. All the pro-privacy solutions depend on getting all your contacts into another walled garden, albeit a nicer one than the likes of Facebook and Google.

Beyond federation, I would also like the solutions to have good features and usability. Right now I'm bouncing between a few walled systems:

1. Telegram - really fast development pace, poor crypto, E2E encryption is only for chosen chats and single device.

2. Signal - slow to deliver messages, not multi-device and has usability bugs and issues. I update it somewhat regularly and try using it, but still go back to Telegram because basic expectations aren't met.

3. Wire - I discovered this recently and like the feature set (it's a lot richer than Signal's). It claims to use the Signal protocol and supports voice and video too. All chats are E2E encrypted (unlike Telegram, where non-secret chats are by default not encrypted on the device or on Telegram's servers), and it has multi-device support with message sync (though only from the time a device is joined to the account). Clients are available for various mobile and desktop platforms. But this one also has poor usability in getting started, and simple things are missing - like message delivered or message read indicators (the latter could at least be a user-selectable option for privacy). I don't know exactly how slow it is to deliver messages, but it's definitely not as fast as Telegram. Not knowing whether a message arrived is unacceptable in this era.

I'm still waiting for stronger solutions to appear in this space. Seeing that more platforms are adopting the Signal protocol, it would be great to have some standardization in user identification and federation. I actually do not want any of these solutions to be completely free, and wish they would provide some way to support them monetarily (at least for the people who do want to help). I feel repelled by the "free forever" and "we'll sell premium things later, like stickers" parts. That also raises suspicions about the motives of the company/developers and the future viability of the application or platform. At least allow people to donate to you so that you feel some kind of obligation toward all your users!

You can't run your own Signal server. All accounts use phone numbers in the same namespace as IDs, and all messages go from the phone to whispersystems.org, onward to Google, and from Google to the destination phone (with lots of encryption being added and removed at various points). This has advantages (it's difficult for the Man to distinguish a received Signal message from other Android notifications) but also disadvantages (Moxie can do traffic analysis and you can't do anything about it).

Yeah, so instead of being in Whisper Systems' walled garden, I can set up my own and ask people to install Rvense's Magical Messenger App. Sit there in my treehouse with a bucket on my head and a NO DUMMIES sign or something.

You can sideload apps without a developer subscription. It's annoying but works. But you have an unsolved update problem on both Android and iOS. You really shouldn't do this if you're not 100% sure of the implications.

No, it's not a joke and you shouldn't treat it as such. Non-technical people really shouldn't be whining that their "free service" doesn't cater to a click-and-run crowd. The source is available to the public to create their own, and changing a URL in the code is a single regex command away.

Don't casually disregard him because you or others can't understand the basics of what it takes to alter and run a service in your own private space.

I don't casually disregard him. I thoughtfully and with consideration disregard him, and you as well. The idea that there is a priestly-class of technical people and "non-technical people shouldn't whine" is silly. This is not for technical people. This is for non-technical people. I've been doing this stuff for twenty years. But me being able to do it doesn't do a damned thing to help the people who actually need help.

I don't need Signal to communicate with knowledgeable people. We need something to communicate with everyone else.

And yet the same argument has been made time and time again against SMTP. Let's stand back for a second and understand why SMTP has stood the test of time. Yes, it has flaws that allow the "first contact" problem (i.e. spam). But the people working on SMTP at least understand its weaknesses and advantages.

The moment you open the door for federation, the protocol is written in stone forever. All it takes is one server in the federation network with a substantial user base that chooses not to update. (See SMTP).

OWS decided that relinquishing the ability to force updates (i.e. away from a broken cryptosystem) would sacrifice too much in the way of security to be consistent with the project's goals.

I didn't say this was a good way for normal users. Normal users don't care about federation and don't want to run their own server. But for people on HN it should be easy, and if you and your hacker friends don't trust Moxie, you can do it. I never said you should, just that it's possible and not hard.

Compiling the client is much less daunting than running your own server, so "easy" seems like a fair description in this context. I don't think there's a large intersection between "People who can't easily compile the client" and "People who would run or audit a secure messaging server."

My response is that we just have to try harder. He's focusing only on encryption and ignoring the massive social problem of one company or a few companies having a monopoly on digital communication. Just as whether or not I'm spied on is irrelevant to whether or not I have the right to private (encrypted) communications, it doesn't matter who has the monopoly or what they use it for. We should not be building infrastructure that encourages monopolies, and unfederated services are by definition monopolies.

Did you just coin that? It appropriately captures what is going on, but without having the positive connotation that comes from a 'walled garden'. I love the phrase.

As an example outside of messaging, I have a Fitbit, and 'digital prison' so aptly describes what happens with my personal health data. I can't get my heart rate data out of their prison, because the Fitbit warden doesn't see fit to grant me the privilege of accessing my raw data.

"Cult compound" is probably how I would describe it. Sure, you can leave, but there is immense social pressure to continue in what has become the norm, despite there clearly being something not okay with what is going on. And good luck convincing others to leave when you do.

I hadn't heard "cult compound" in this context before, and thinking about it, I really think it fits better than "walled garden". "Walled garden" sounds like you go there for its beauty or for the best crops, while in reality you go there because it's the most crowded place.

Facebook has convinced millions of people to give up vast amounts of private information to an apparatus that would make the Stasi or KGB drool.

Part of cult indoctrination is giving up personal and private information, documents, secrets and property to participate and become part of the whole. Meanwhile, leaders profit from the property and information given up and use secrets to blackmail or breakdown an individual's identity so they become dependent on the group.

People opted into several of these services before they were "locked in", and many of them never noticed the switch of XMPP backplanes in Facebook Messages and GTalk. Some haven't even seemed to notice the loss of a few technically minded friends from their messenger windows, changes in branding, or even changes in apps. The ambivalence is not a preference or a "want" for a locked-in experience. There isn't even a preference for a type of experience: the general human thought process is "I want to talk to my friend Jim", never "I want to use [Facebook Messenger/Google Hangouts/SMS/smoke signals] to talk to my friend Jim".

I've seen people that ritualistically open certain apps to talk to certain friends and networks of friends, but have no idea what apps they are using beyond the background navigation needs of "the one with the fuzzy green icon on my last page" and "the blue one with the annoying notifications".

Locked-in platforms are a consequence of network effects, not a "preference" for some mystical "guaranteed experience": the experience and the platform don't matter if the social interactions aren't there.

The diaspora of communications platforms hasn't hit home for the average consumer yet, and it's currently a background inconvenience that in some cases people are using five to ten different apps to communicate, but that doesn't mean average consumers are entirely ignorant of the situation either. (To some extent that's why OS-level notification systems have become so important to the average consumer; at least when you have a half-dozen messaging apps, all the notifications arrive in the same place.)

This is what I thought. If one were starting a new messaging platform, I wonder how you would implement the Signal protocol from scratch. I'm assuming that for people who don't have strong security backgrounds, this means dissecting the Signal source code from GitHub.

You mean inventing Signal from scratch (which is rough) or incorporating the libsignal protocol into a new messaging app?

All of the libsignal repos have a good readme that explains init [1][2], so you can start there. Browsing the Signal source is helpful not so much to understand the protocol, but to see whether any special precautions were taken against side-channel and other implementation pitfalls.

I mean, you could contract moxie (hey moxie, what's your price?)

EDIT: there's also this [3] independent implementation of libsignal in golang that tries to make some targeted modifications to fit their need. Can't vouch for its quality, but it's an interesting effort nonetheless.

No, I'm just repeating the oft-said maxim of 'Don't Roll Your Own Crypto' [1] (which applies in the form of know-what-you're-doing-while-implementing-someone-else's-crypto) while emphasizing that you should probably be a domain expert in the domain you're developing in. If I knew nothing about High Frequency Trading or Oil Exploration, I wouldn't want to be coding for it at all, and neither would my employer.

I believe it's a reasonable expectation that people who implement secure messaging be domain experts in crypto AND messaging.

There's a saying about sex: make one mistake and you have to support it for the rest of your life. Security and cryptography are orders of magnitude worse. It's bad enough that errors compound, but it doesn't end there. You can have subtle and counter-intuitive failure modes where a single step outside the happy path is enough to completely annihilate the security of your system.

I feel there is no analogy that could capture the absurd complexity and catastrophic failure potential.

To give some background - I've been working with applied crypto since the '90s, and professionally (on and off) since the early 2000s. That experience is still next to worthless: I know for a fact that I am not good enough to actually implement anything that could withstand the attacks of a motivated and well-funded adversary. (Or even those of a bored PhD student.)

The best I can do is find tools and components that have been battle-hardened by the handful of exceptional professionals. At least that way my hubris shouldn't amount to too much damage.

You don't say? (Not you, Facebook.) How about the dozens of times Facebook disrupted the user experience of the service for its own benefit? How about the dozen-plus times it changed people's settings from private to public, after people had manually set them to private, or after a setting had initially defaulted to private and people were led to believe the action was private? Wasn't THAT disruptive to users' experience?

Of course it was. But it benefited Facebook, and that's the difference here. They just don't want to "disrupt" the experience in a way that also hurts the company's bottom line, even if it's better for users.

In other words, it's just a weak excuse for not doing it by default, or at least allowing people to always set it as default (although knowing Facebook, I'd probably worry that they'd revert it back to non-E2E without even making it obvious that it did that. Announcing a new privacy policy change doesn't really count).

> No secure way to verify code or store keys without routing through mobile.

Weeeeeeell, not quite. Every device and browser the user uses can get its own private key, and then you use Facebook's central servers and SSL to exchange keys.

If I don't trust Facebook for key exchange, then I can't trust them with their app. Any encryption they implement, they can trivially circumvent by putting a backdoor in their app.

But if I trust their (closed-source, frequently updated) app at all, then I can trust them to relay people my public keys. Especially since a MITM can be discovered by comparing hashes over a hard-to-manipulate channel (telephone, video, or IRL).

And if you're really paranoid, you could think about using an open-source app and letting a trusted third party handle the keys.
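The hash-comparison defence mentioned above (compare key fingerprints over a phone call or in person) can be sketched in a few lines. This is a generic illustration in the spirit of Signal's safety numbers; the exact format and the key bytes are made up:

```python
import hashlib

def fingerprint(public_key_bytes: bytes, digits: int = 12) -> str:
    """Short, human-comparable fingerprint of a public key, rendered as
    groups of decimal digits so it can be read aloud."""
    digest = hashlib.sha256(public_key_bytes).digest()
    num = int.from_bytes(digest[:8], "big") % (10 ** digits)
    s = str(num).zfill(digits)
    return " ".join(s[i:i + 4] for i in range(0, digits, 4))

genuine_key = b"\x04" + bytes(range(64))      # stand-in for Bob's real key
tampered_key = b"\x04" + bytes(range(1, 65))  # key substituted by a MITM

# Alice and Bob read these digits to each other over the phone or in person;
# a mismatch reveals that the server (or anyone in between) swapped keys.
assert fingerprint(genuine_key) == fingerprint(genuine_key)
assert fingerprint(genuine_key) != fingerprint(tampered_key)
```

The point of truncating to a dozen digits is usability: a MITM has to be survivable against a casual human comparison, not against an offline attacker.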

Yes, I completely agree. Actually, what is the difference in security between an app that generates a key pair on first use and exchanges the public key via a third party's servers, and a web application doing exactly that? As long as the connection is secure and you trust the third party not to have you send them the private key, the model seems reasonably secure in both scenarios.

Though public key exchange can be improved on mobile by better direct communication capabilities like barcode scanning, RFID, Bluetooth, etc.

Am I missing something or is this more of a "JS is not reliably fast on all devices so we rather don't" kind of thing?

I assume the issue has more to do with private key storage on the device. JS provides several local storage mechanisms, but none of them have cryptographic guarantees, and all are potentially vulnerable to attacks like XSS and "attacks" like development console disclosure. There have been calls for the Web Application Platform to standardize an HTML5 "key store" for cryptography, but thus far it doesn't sound like any consensus has been reached on how that key store would operate.

Certainly, there are mitigations, such as very short-lived private keys and relying on the sandboxing and XSS protections browsers already provide for JS local storage, but it's easy to understand how, from a paranoia standpoint, there's no guaranteed safe key store in a browser just yet, especially not one backed by OS-level security guarantees as one would be able to use on mobile devices.

That's all fine, but why isn't there a setting somewhere, where I can just knowingly choose "always use end-to-end encryption" at least?

The way it's implemented now still makes it a pain/inconvenient thing to do when you want private conversations. So let's face it. Facebook just wants to get away with the minimum necessary to convey that it cares about privacy, while knowing that only 0.1% of the conversations will ever be encrypted this way.

The fact that "We don't want to disrupt people's current experience" did not previously outweigh other considerations does not mean it is not a consideration. It does imply an upper bound on its weight, though.

I recently started using Wire and liked the richer feature set. But I'd say it still has some way to go on usability, features and speed. I hope you look at Telegram as an inspiration on those (not on crypto though, where you seem to have the one commonly accepted as the best).

While it might be disappointing, I firmly believe this is a technical and business decision, not a conspiracy. If you look at the features Messenger offers, and the direction the product has been moving, it relies heavily on server-side technology.

Facebook have added contextual ride-share ordering, person-to-person payments, bots, etc. Unlike WhatsApp or Signal, Messenger also still works over the Web without an app or a smartphone-based login: how do you implement credible E2E over the Web, without using the phone as a crutch? How do you allow multi-device support, with a message history, and do E2E on all conversations?

I think they've (rightly, IMO) assessed that most people value those features and convenience over 100% E2E conversations, but want to offer the option to those who don't. Besides, if Facebook were really completely in bed with the surveillance state, why would they have just rolled out full E2E on WhatsApp?

> Unlike WhatsApp or Signal, Messenger also still works over the Web without an app or a smartphone-based login: how do you implement credible E2E over the Web, without using the phone as a crutch? How do you allow multi-device support, with a message history, and do E2E on all conversations?

Wire [1] does multi-device E2E encryption, sync of message history and allows users to use phone numbers and email addresses as identifiers. It also uses the Signal protocol.

Note: I do not work for Wire nor am I associated with it in any way, except as a user. I discovered it only recently and am trying it out, in addition to using Telegram as my most frequent client and Signal.

> I am not usually one for paranoia, but is anyone else becoming more suspicious about Facebooks motivations and involvement with gov?

I worked for Facebook; I am friends with the people who developed this. I would like to reassure you strongly (well, as much as an Internet stranger can) about their motives. They are the good guys, and this was developed with people being spied on by abusive governments in mind, because those people use Messenger and would like to use it for those conversations too. Don't trust me, but reach out to them if you are curious: few people appreciate their work, so they are generally happy to talk (about published work; unannounced products are very much off limits).

Facebook does collaborate with governments when the messages are not encrypted, the crimes are clear, and courts have issued warrants. I seriously doubt any of those are political dissidents. Facebook engineers receive very generous poaching offers all the time, and they would suffer minimal economic damage from leaving the company over something like this. The shit-show from engineers leaving over that would be massive (and many engineers are true believers).

I wasn't in the company six months ago when that project was decided, but from my experience of the internal culture, I know that there was a debate on whether this feature could cover crimes that Facebook would object to, and the need for protecting the good guys obviously had the upper hand.

> Why doesn't FB just apply encryption on all messages?
> Surely they have the resources avail.

Scaling. I know nothing about this particular project, but I have no doubt that this is the main reason. Any tiny amount of extra memory, computation, etc., times a billion, becomes massive. Facebook struggles to build enough capacity for all its services: the data center expansions you hear about are done at break-neck pace to meet service expansion deadlines. The engineers working on this are heroes internally, and the units they use are unheard of outside of astronomy.
More specifically, they probably want to test some scaling aspects, but it might be unethical to use the usual approach of A/B testing.

I'm former FB Infra and agree with your first point re: motives. There's a lot of true believers working in the security space at Facebook, and I have tons of respect for them. I used to work closely with many of them.

However, scaling and/or capacity is not the reason E2E encryption isn't applied to all messages. The crypto operations are relatively trivial in terms of CPU.

This comment summarizes what FB's CSO said about why they are not launching E2E broadly yet. It boils down to usability concerns. Sounds like they are working on it:

Using Facebook M, much like Google Allo, will require E2E encryption to be turned off in order to take advantage of those features. Alex didn't mention that, but I think that's the real reason. Also, FB wants to know what's going on in your conversations; that data can be used for advertising.

This might sound offensive, but please don't take it that way, I just don't know how else to phrase it:

I have no reason to trust you, or them. Just because you think people have good motives, doesn't mean it's true. I'm sure there are great people working there, but I'm also sure there are shady people working there. Just like at any big org. "Even though you don't know me, trust me, these guys are cool" arguments don't really help anything.

Hi. To move all messages to be E2E encrypted, we need a credible solution for web clients and every other platform, including old feature phones. This is easier said than done, but it is something we are thinking about.

I think people underestimate just how ludicrously hard it is to provide an encrypted experience that's as good as plaintext. Even showing a chat on multiple devices becomes a hard problem. I agree with you that it's a step in the right direction, and Viber and Whatsapp have a much easier problem to solve, given that both only support device-to-device messaging. The only app that supports multi-device chats that I know of is Silent Phone, but I admit to not being very up to date with the instant messaging landscape.

I'm building one. When you start with the idea that it's going to be encrypted and that you won't know what users are sending back and forth, you have to make some concessions, but functionally I don't think the average person would even know that my application is encrypted, from a UX perspective. You need to handle abuse client-side; it requires a bit more thought, but I'm sure that's not beyond Facebook.

Feature phone is actually oldspeak. It's what we called cell phones with features back when most cell phones didn't have features. Then smartphones came along and reset the expectations for what features a phone has, leaving feature phones in the dust. But there was a time where feature phone meant something positive.

The same reason Gmail can't work with end-to-end encryption--they want to advertise at you based on message content.

I highly doubt there is any government intervention in FB's business strategy, but there seems to be plenty of cooperation after the business decisions are made. (The same is largely true with Microsoft, Google, and yes, even Apple.) It's not really a conspiracy, or a matter of paranoia--it's been very widely reported for a while now, and people just generally don't seem to give a shit (myself increasingly included).

EDIT: I do care. But I think (a) people need to take privacy into their own hands, since companies will never be incentivized to do it and (b) we've entered a new cultural era, where the levels of privacy enjoyed in the past are no longer socially normal.

> The same reason Gmail can't work with end-to-end encryption--they want to advertise at you based on message content.

I wonder how they'd do if they were more open about it. "You're getting Gmail for free because we read your email and advertise to you. However, if you want to pay for a premium account (or Google Apps for Work) then we won't advertise to you, won't read your email and we'll even make end-to-end encryption easy and convenient".

I mean, it seems like a good compromise. I'd be happier with the rampant advertising and profiling going on if it was only for signed-in users and everyone was given the choice - use it for free or pay and get guaranteed privacy and no ads.

It would be a (financially) bad choice for the corporation. And the answer why is the same as in many similar 'why can't the company do this-and-that' questions.

Here is the answer (Google as an example only, simplified):

* Put two Googles side by side, competing.

* One is the current one, earning money from ads with you as the product, and keeping that under the radar (although it's in the fine print, etc.).

* The second is the one devised here: premium accounts plus free with ads.

* Wait 5 years and observe, through market evolution, which of the (competing) companies wins. The first one. In this specific scenario, the second loses because the time and money spent on 'premium accounts' will not be compensated by their revenues, while the first Google, spending that capital difference purely on its ads department, makes its ads (here, 100% of the business) much better than the ads side of the second company, thereby winning in the market.

In general (the Google example is SIMPLIFIED, given Alphabet's scale; please don't use Google Apps FOR WORK (emphasis mine ;) as a counterexample - it's a B2B product :), it is because we humans don't like companies that use fine print, etc., yet those companies win, case by case, against the 'moral, pure pricing and fine ethics' companies BECAUSE OF THE BIASES AND ERRORS of human brains during purchasing decisions, exploited day by day.

That analysis only makes sense if that is the only product of the company. Google would still be spending way more on making ads better than a pure gmail competitor just because it shows ads for so many other things.

I think if companies were fully transparent about what they're doing with the data, and made sure almost all of their users knew exactly what they're doing with it (meaning it shouldn't just be in a privacy policy no one ever reads, but should be actively "promoted" somehow within the service, every time the data is collected and used), people would be a lot more careful about what data they share through these services.

This is exactly why these companies don't want to be more transparent unless they absolutely have to (like what the EU is doing to Google), but it really should be more regulated by governments, because I think it's a very "fair" thing to do - telling people everything you're doing with their info. It's not about restricting data collection through regulation, just being transparent about it. That might lead to less useful information for the data collectors, but people should be informed, and that should trump everything else.

End-to-end means that your computer is the only one that can read your Gmail email, because the key is stored on your computer and not in the cloud. The whole reason we all moved to webmail from POP3 was to have access to our mailboxes on any device. If you put the key in the cloud and make it decryptable with some user-known data (like a password), there's very little point in having the key: brute-forcing a user's key becomes very simple. Also, any legitimate service provider is going to provide a backup way to "recover your password", which means they have a shortcut to decrypting your data. So you can have one or the other, not both: either simple, easy access to your data from multiple devices, or secure end-to-end encryption.
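To make the brute-force point concrete: if the cloud copy of the key is protected only by a password-derived key, an attacker who obtains the stored blob simply replays the same derivation over a wordlist. A minimal sketch (the salt, password, and wordlist here are all made up for illustration):

```python
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a key-wrapping key from a password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = b"per-user-salt"                  # stored next to the encrypted blob
cloud_key = derive_key("hunter2", salt)  # user chose a weak password

# An attacker who obtains the blob and salt just walks a wordlist; the
# provider's "recover your password" path is an even bigger shortcut.
wordlist = ["password", "123456", "letmein", "hunter2", "qwerty"]
cracked = next(p for p in wordlist if derive_key(p, salt) == cloud_key)
assert cracked == "hunter2"
```

Slow KDFs raise the cost per guess, but they can't rescue a low-entropy password: the security of the whole mailbox collapses to the strength of that one secret.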

Text advertisements are cheap, bandwidth-wise. I wonder if they couldn't just send everybody dozens of advertisements and decide on the client side what to show. Or even download dozens of MBs of graphical ads with each app update, or the first time you visit the site.

You could go further and do something like, email sent by people to people is off-limits and will be end-to-end encrypted and not looked at - but (almost) all automatically generated email is fair game and will be processed. Google already does that with Google Now (and shows you your flights, package deliveries etc.). I really prefer them making more explicit what they look at and what not.

> Text advertisements are cheap, bandwidth-wise. I wonder if they couldn't just send everybody dozens of advertisements and decide on the client side what to show. Or even download dozens of MBs of graphical ads with each app update, or the first time you visit the site.

Shh, don't tell everyone about my next project. I think it can be done with images as well: sending five of them for various demographic groups would be doable and should create enough noise (one tampon ad, a video game ad, an ad for Amazon, a Mountain Dew ad, and a retirement plan ad, for example). Those who care less about anonymity and more about bandwidth can opt out and only download the ads they need.
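The scheme being described, serving one ad per demographic bucket and picking locally so the server never learns which ad was actually shown, can be sketched in a few lines. The bucket names and ads below are invented for the example.

```python
# Sketch of client-side ad selection: the server ships one ad per
# demographic bucket, and the client chooses locally. The choice is
# never reported back, so the server only sees that a bundle was sent.
bundle = {
    "18-24_gaming": "video game ad",
    "25-34_outdoors": "mountain dew ad",
    "35-50_shopping": "amazon ad",
    "50+_finance": "retirement plan ad",
}

def pick_ad(local_profile: str) -> str:
    # Runs entirely on the client, against a locally stored profile
    return bundle.get(local_profile, "generic ad")

print(pick_ad("25-34_outdoors"))  # -> mountain dew ad
```

The privacy win is that targeting data stays on the device; the cost is bandwidth, which is exactly the trade-off the opt-out in the comment addresses.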

> The same reason Gmail can't work with end-to-end encryption--they want to advertise at you based on message content.

Facebook can do fine without that information: they have more information than they can develop ad services for at the moment; they could use a lot more engineers though. The goodwill from the dev community in allowing appropriate targeting is far more important to them.

Two good examples being location and language: you can advertise for people in a certain place or who speak a certain language, but there are many unserved combinations, like tourists, language minorities and commuters. In my case, I’d love to see ads on learning Swedish: Facebook can see some of my friends speak it, and this is new.

Source: worked for Fb, but in Engagement, not the Ads team; offered a lot of targeting suggestions, but most were "presumably not the biggest opportunities (they) could work on". At Facebook's scale, we are talking really big numbers.

I think the more likely scenario is they know the value of the conversations they datamine, but don't want to lose customers to these other message apps that tout their encryption. So they do the bare minimum to support encryption while still having access to most data.

Over here in the real world, Facebook does not need to worry about "[losing] customers to these other message apps that tout their encryption" because the latter are not even in the game. There are only a few messaging apps that matter, because the value of the app is in its network and who it can connect you to rather than some feature checklist for the HN crowd.

Dropping secure E2E encryption into an app that is deployed on hundreds of millions of phones across the globe is a game changer. Period. Once you climb down from your lofty ivory tower and consider the impact this will have globally, perhaps you will be a bit less dismissive of the goals and efforts of the team that convinced a company that lives on data to deliberately blind itself to some of it for the sake of their users' privacy and security.

I can't tell if you've deliberately misinterpreted evgen's comment or not. But this feature is being deployed to hundreds of millions of phones. Even if only 1% of people use this, that's millions more people using E2E secured communication. Hardly "barely anyone".

This does seem a likely scenario. Unfortunately, all the stories of the last few years have severely eroded the benefit of the doubt that I give these companies, which is a great shame. I imagine others, for the same reason, approach similar moves with great suspicion.

Sure, but those assumptions are implicit in the statement I replied to: "Multi-millions of FB messages must be sent every day, brute-forcing encryption on all of these is probably not possible. A small % marked as 'secret conversation'? Much easier."

If you don't trust Facebook, you don't need to evaluate the quality of their implementation of secret messaging, you just shouldn't use it.

The NSA most likely hasn't broken AES or ECC. And the rest of the arguments don't apply in this case; the parent was talking about how having only a few users use encryption makes it easier for Facebook to compromise security. Using gimped algorithms or sharing keys would affect both scenarios equally.

They have taken exactly the same approach as Google with the Allo service. This is most probably done because people want to use Chat Bots and services and these do not work with E2E encryption. Well they could work but it would be dishonest branding as the messages would leak.

It could work, but as I said, it would be dishonest. If a bot can read your messages, then you have to trust a third party not to leak or store them. I believe the goal of E2E encryption is to remove the need to trust somebody. (Of course you still have to trust FB not to backdoor or hinder the encryption, but you do not have to trust the third parties.)

I do not speak of chatbots such as Cleverbot. For example your bot will listen to your messages and provide contextual information, such as theatres airing a movie you are currently talking about. The bot also probably needs to keep track of the context, which probably means keeping the chat history for some time. In order to learn the chatbot needs to keep the history forever and scan it for ML purposes. All of these mean that the bot has to keep the history somewhere unencrypted, basically nullifying the benefit of E2E.

Brute-forcing is infeasible. The meta-data is certainly visible however, so intelligence agencies could glean a lot of information about certain users from other sources. You're right that "private conversations" does throw up a red flag.
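"Brute-forcing is infeasible" is easy to back up with arithmetic. Even granting an attacker an absurdly generous trillion guesses per second, exhausting a 128-bit keyspace takes time on a scale that dwarfs the age of the universe:

```python
# Back-of-envelope: time to exhaust a 128-bit AES keyspace
# at a (very generous) trillion guesses per second.
keyspace = 2 ** 128
guesses_per_second = 10 ** 12
seconds = keyspace // guesses_per_second
years = seconds // (60 * 60 * 24 * 365)
print(f"{years:.2e}")  # on the order of 10^19 years
```

Which is why, as the comment says, the practical attack surface is the metadata and the endpoints, not the cipher.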

You answered your own question. No, serious threats do not use Facebook Messenger. The reason for the limited deployment is technical (for now), not caginess. Metadata is more important than content. They will always have the metadata.

An excellent point; however, I will add to your thinking that we want our government to be able to eavesdrop on conversations provided it's done constitutionally, with probable cause, in an open court, and with a warrant.

If we can't get those things then it's pointless to fight back with encryption. We must have our constitutional protections.

There's no way for a site to securely store keys in the browser. The server can't put them there because then the server would have them too. A client-side script could generate them, but it can't store them without extensions (or the server via some JS it sends) also having access to them.

This is why Signal and WhatsApp require the client to run on the phone - the phones are doing the decryption for the web apps.

This is flaky, consumes a lot of battery and generally is somewhat error-prone - probably not something FB wants to deal with.

Yes. But let's assume FB doesn't want the keys (because if they have them, then it's no longer E2E encryption), then client-side generated keys in a browser are still exposed to XSS attacks and extensions.

Installing a malicious extension, tricking users into typing commands in the developer tools, XSSing FB, all of these are much easier to do than attacking a native app on a phone.

A false sense of security can be more damaging than no sense of security.

Certainly it can be important to know when you have a "no compromises" security option versus "mostly better than plaintext but maybe not secure".

It could be a UX judgment not to confuse users into thinking they have a "secure connection" when in fact they might not. Look at all the attempts browsers have made over the years to keep the UX semi-reliable and easy for users to understand whether or not their SSL connection is secure.

The apps are updated relatively infrequently, and you can put off updating to see if problems are found with each release. In the browser, you'd have to analyze every script Facebook ever sends you to see if it's exfiltrating your key.

- Signal does have some multi-device support (the Android and Desktop clients, iOS not yet). I still sometimes have minor issues but overall it works very well.

- Signal does include end-to-end encrypted voice calls (what used to be called RedPhone) that also work quite well. It's my go-to "call from Wifi abroad" solution to avoid roaming charges, and also works very well with a good 4G/3G signal

- The browser issue seems unsolved as of now, WhatsApp's web thingy (routing through the phone) seems to work quite well but obviously only if the phone is on, and WhatsApp requires a phone while FB messenger doesn't so this isn't an option for them.

"End-To-End Encrypted ‘Secret Conversations’" in software that is ordinarily used to harvest electronic phone books and rummage through user photos, from a company that made its whole fortune trying to obliterate privacy as a part of human culture?

It's going to take a pretty high standard of proof to give this anything that resembles credibility.

Credibility isn't binary. Facebook was way down to 0% on the credibility scale, and now it's up to 5%. Let's applaud this move instead of being hostile when they're trying to improve, because the only thing that will achieve is to discourage improvement.

The first question you should ask in evaluating any encryption or privacy-based software is, 'do I trust the provider?'

What sane person would ever trust Facebook to keep something private? If anything, the addition of "secret conversations" will be more intrusive, because this service delivers more nodes to the social graph: how many secret conversations, with whom, where, at what times, etc. Facebook and the global spy regimes it provides content for care more about building a network of associations than actual content. Who cares about the needle when you can control the haystack?

Yeah, but then, again, how much credibility does Open Whisper Systems actually deserve?

For me personally, they've lost a huge chunk of credibility by lending credibility to companies for apparently rewriting the definition of "end-to-end-encryption" from previously referring to the users as the end to now referring to some magical proprietary blackbox as the end.

And the other chunk of credibility sort of died off, as they apparently seem to think depending on proprietary Google software for their products is how you do privacy.

That is, their Android-client for Signal depends on Google Play Services for receiving notifications, even though LibreSignal exists, which is a fork specifically to remove that dependency, and from which they could have easily pulled that changed code in as a fallback.

And when they figured they should make a desktop client, apparently the best technology that they could think of, was a Google Chrome extension.

I've read the whole thread and I'm surprised that nobody mentioned how easy it would be for Facebook to store the secret keys.

Page 10 of the white paper mentions that there is a remote key stored on Facebook's servers which can be used to decrypt the local key. If Facebook still has to be trusted, I don't see what the deal is here.

I think that as soon as you put the words "end-to-end" encryption on a marketing material, you have to be ready to open-source your client. This is the cost that companies aiming to be credible can't escape.

End-to-end encryption without open-source has no value. It is a waste of energy for the company doing that too - or perhaps a marketing cost.

You've read it wrong. The local key is encrypted with the remote key. It means if someone steals your phone, they still have to pass facebook's authentication (e.g. 2fa) before being able to read your messages.
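The key arrangement being clarified here, a local key stored only in wrapped form, so a stolen phone yields nothing without the server releasing the remote key after authentication, can be sketched as follows. XOR stands in for a real cipher purely for illustration; an actual design would use something like AES.

```python
# Toy sketch of "local key encrypted with the remote key": the device
# stores only the wrapped blob, so possession of the phone alone is
# not enough to read message history. XOR is a stand-in for a real cipher.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

local_key = secrets.token_bytes(32)    # decrypts the message history
remote_key = secrets.token_bytes(32)   # held server-side, released after auth

wrapped = xor_bytes(local_key, remote_key)  # the only thing on the device

# After passing authentication (e.g. 2FA), the device fetches
# remote_key and unwraps its local key:
assert xor_bytes(wrapped, remote_key) == local_key
```

Note this protects against device theft, not against the server: whoever holds the remote key and the wrapped blob together can recover the local key, which is exactly the trust question the thread is debating.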

It's worth remembering that this does not protect metadata. It's believed (though not known for sure) that WhatsApp logs metadata for their encrypted messages, and it looks like Facebook do the same here.

you're suggesting that the metadata of "alice messaged bob at 1:20am" is worth more than "hey bob, I'm looking to have $100,000 laundered, same methods as last time. I'll drop to the usual location. thanks, Alice."
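Both sides of this exchange can be made concrete: even with message bodies unreadable, the (sender, recipient, timestamp) log alone reconstructs the association graph. The log entries below are invented for the example.

```python
# Toy: what metadata alone reveals. Given only (sender, recipient,
# timestamp) tuples - no message content - frequent pairs and odd
# hours stand out immediately.
from collections import Counter

log = [
    ("alice", "bob", "2016-07-09T01:20"),
    ("alice", "bob", "2016-07-10T01:18"),
    ("bob", "carol", "2016-07-10T09:00"),
]

edges = Counter((sender, recipient) for sender, recipient, _ in log)
print(edges.most_common(1))  # -> [(('alice', 'bob'), 2)]
```

Content tells you what one message said; metadata at scale tells you who talks to whom, how often, and when, which is why the two comments above are arguing about which matters more.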

I think we're going to have to get there (wherever there is: Utopia is a different place for different people) incrementally. It may be unsatisfactory in the short/near term and arguably give people a false sense of security but if we can start educating those non-techies around us bit by bit we may stand a chance at driving mass adoption. Honestly, I'm in no way claiming I can see the future or have even some of the answers, this shit is hard - perhaps even impossible.

https://ricochet.im/ is a good option if you're looking to minimize the amount of metadata that gets leaked. Desktop only at the moment, though, due to the difficulties of running a Tor hidden service from a mobile device.

The situation is considerably better on Marshmallow because the app only prompts for permission for the extra features right before they are used - and many of those never will be. The app covers a lot of functionality (too much?).

I use Swipe for Facebook ever since they got rid of messages. It needs no permission, and while the experience can't be as good as Messenger, it will at least let me reply to the occasional message I receive.

I don't want to be encouraging Facebook's user-hostile moves, so I'll stop using messages completely if they make my life hard enough, but Swipe is a nice stopgap.

PGP fills a niche that OTR and Axolotl do not. It provides a bunch of operations that are not supported by OTR (anything that requires signatures that are verifiable by third parties). PGP is used by the masses (package signatures and similar functions), it's just that they don't use PGP for communication.

PGP, while a decent technology, is very cumbersome. Worse, it's easy to shoot yourself in the foot if you aren't an advanced-level user. I've used it for years on a regular basis with various implementations. It's just never been user-friendly or easy to use. If it were more user-friendly, it might have had better adoption rates. As moxie has pointed out, we need something better for the masses, and it looks like we are finally seeing progress towards that goal now.

I'm not immediately seeing any insight into whether this covers conversations initiated in-browser. If this does exist, it'd be interesting to see how they've tackled the security of crypto logic in-browser and compare it to what Cyph has in place for in-browser code signing.

Edit: Yep, this seems device-to-device; there doesn't seem to be a web component here. Still useful given how many people use messenger primarily via phone, and I suspect implementation wasn't hard given WhatsApp did it first. It would be neat to see if Messenger and WhatsApp are ever bridged through this.

Because if you serve the library responsible for the encryption from the server, an attacker can perform a man-in-the-middle attack and change that library. This will change when browsers start implementing the Web Crypto API. https://www.w3.org/TR/WebCryptoAPI/

If you're dropping by defcon, you should catch the talk. There'll be very little focus on this mechanism specifically since we want to share a bunch of things people can actually freely use (and this isn't one of them), but you can catch Ryan afterwards and spark a conversation to find out more.

Well, the keywords there are next to. Like @remy_ implied, you need a mechanism for guaranteeing that the logic you're executing in-browser is protected from server compromise. That's where the Cyph example came in, since as far as I can tell, Cyph is the team to have hacked together a solution to that dilemma, though Cyph also is not using the Signal Protocol right now.

Anyway, there's an upcoming defcon talk which'll lightly touch on how web standards were mangled and viciously abused to make that happen, but since the talk is deliberately not vendor-specific, the focus on it will be brief. Disclaimer on my end is that I was involved in the initial review of their code-signing implementation. https://www.defcon.org/html/defcon-24/dc-24-speakers.html#Za...

Cyph's solution for this, last I saw it, is a Rube Goldberg machine of rolling HPKP pins used in an attempt to keep the application from reloading itself from its own servers. It's incredibly complicated. And no matter how much asm.js you use, it's still crypto running in a browser instance. There are other very significant problems with browser crypto, problems that even affect Chrome Applications.

You and I+Cure53+others will probably always disagree on this topic since you've always been the primary catalyst for dissent on this, but it's not at all complicated nor deserving of the analogy to a Rube Goldberg machine.

It's been well vetted enough. It might be a hack, but it's a sufficiently effective hack. We can agree to disagree :)

You coming to either Black Hat or DEF CON? Let me at least connect the two of you after the AppSec Glory talk.

I can't tell if you're talking about browser crypto (in which case my perspective is shared by a large number of crypto engineers) or Cyph's weird caching scheme, in which case a couple people from Cure53 are the only ones who have ever tried to evaluate it.

lol, I'm definitely biased given that I'm long since familiar with the idea and wrote the implementation, but I think it's more conceptually counterintuitive/clever/weird than it is actually complicated in terms of having many moving parts.

WebSign does have a patent pending on it (I think it's fair enough to say that the whole system of accomplishing in-browser code signing this way was non-obvious), but HPKP Suicide itself doesn't. Bryant (eganist) and I are actually disclosing and open sourcing implementations of a few non-code-signing applications of HPKP Suicide at Black Hat and DEF CON next month.

For one, device-native binaries have to be signed in order to actually run on the phone (well, without introducing some other tweak such as explicitly permitting unsigned applications).

Implementing this with JS is far, far, far more difficult, and the only solution known (touched on in my other comments) still pisses people off because it's running in a web context that, if improperly mitigated, can still facilitate disruption via code injection i.e. XSS.

That said, we're all conveniently ignoring the fact that all of this assumes that the devices themselves haven't been owned. If you think you're a target of entities capable of getting into a fully patched phone, you've got bigger problems.

So this is "whoever is signing the app gets compromised" vs "whoever is hosting the JS gets compromised". Right?

Facebook could afford a security team just as capable as Apple's security team, and make sure the JS server remains secure. And if they can't, their own signing procedure can get compromised before the app is uploaded to Apple for review.

So it would need a browser plug-in instead of just a web page? That seems fair enough. Extra kudos if they manage to use a standard plug-in for several websites. Maybe we can even get some kind of standard for this.

I made the comparison to Cyph for a reason. Every time Cyph comes up, there's some amount of lively debate about whether they've actually got working in-browser code signing (which they filed a patent on). I'd argue they do, but I'm also one of the guys who reviewed the implementation.

Even if they have been able to successfully get in-browser code signing working, how can they possibly guarantee no leakage of data? I mean, there are so many inroads, from MITM to rogue browser plugins... It seems (near) impossible to secure them all.

Rogue browser plugins can be discounted, as that's tantamount to a compromised device in the context of a downloaded E2EE app. MITM: well, it's E2EE with trust-on-first-use for the crypto logic, and it implements the works when it comes to web security protocols, such as HPKP. There's enough mitigation there. TOFU in this context means that once you visit once, the logic that's pinned in the browser performs signature validation on the crypto logic from then on.
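The trust-on-first-use idea described here can be sketched abstractly: pin a digest of the served logic on first visit, then refuse anything that doesn't match on later loads. This is a conceptual illustration only, not Cyph's actual mechanism (which involves HPKP and a persistent cached loader).

```python
# Conceptual sketch of TOFU for served code: pin a hash on first use,
# then reject any later version that doesn't match the pin.
import hashlib

pinned: dict = {}  # stands in for persistent browser-side storage

def load_script(origin: str, code: bytes) -> bool:
    digest = hashlib.sha256(code).hexdigest()
    if origin not in pinned:
        pinned[origin] = digest          # first visit: trust and pin
        return True
    return pinned[origin] == digest      # later visits: must match the pin

assert load_script("app.example", b"crypto v1")     # first visit: pinned
assert load_script("app.example", b"crypto v1")     # same code: accepted
assert not load_script("app.example", b"tampered")  # changed code: rejected
```

The obvious gap, and the reason the debate in this thread exists, is legitimate updates: a real system needs signature validation by the pinned loader rather than a frozen hash, so that signed new code is accepted while a server compromise is not.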

No one's gonna get it perfectly right. Signal's probably the closest in terms of the cleanness of the protocol. It'd be neat to see Cyph implement the Signal Protocol in-browser as well, but that's neither here nor there. Ultimately, all of this will shift an attacker's focus away from owning the messaging application (or any component of it, be it the servers, the connection, etc.) to owning the devices and the users directly.

Your next technical battles will be in device security and user-friendly secure authentication. Your existing wars against your users (think phishing) will continue to heat up.

This is a weird response. The comment you replied to suggested (very accurately) that there are lots of different ways that browsers leak information, and gave the example of rogue plugins. You (accurately) dismiss rogue plugins, and then go on to write as if that were the only, or even the most important, vector for leaks. But, obviously, no.

Can you cite some other ways browsers leak information besides browser plugins? It's you who are making the extraordinary claim, that Cyph has come up with a way to make browser crypto reliably secure. I think the onus should be on you to demonstrate how carefully you've thought this out.

I'll make you a deal: if you name all the ones I know about, I'll tell you so. :)

If the messenger is not open sourced, it's trivial for Facebook to add something to the client binary (now, with a flag or at some later date) before the Signal libraries are hit. I'm not saying they are doing so, but without a clear way to verify continually, this is just short of security theater. Then there's Facebook facilitating the key exchange which of course is another blind trust as well as all the juicy meta data. Maybe this will quiet some of the nerves of privacy conscious individuals already on the network, but it seems to me more like a marketing label.

I still find it hard to believe so many people trust what they believe to be private communication with close-lipped advertisement companies.

Sorry, but an implementation is not a specification. It is not only much harder to read, but it is also a moving target. It is borderline impossible to make an independent implementation of the protocol this way.

Besides, the fact that it is licensed under the GPL might mean that somebody porting the protocol to a different platform / language might be creating a "derived work" which also must be licensed under the GPL.

Genuinely thrilled to see the Signal protocol adopted for Facebook and Google stuff (albeit optionally). Now if we can just get Microsoft and Amazon to hop on-board, we might actually have a shot at getting this standard to be pervasive.

The Messenger app has the Android M permissions model now. You can say no to all of the prompts if you want. I have all of the Camera, Contacts, Location, Microphone, Phone, SMS, and Storage permissions disabled.

As much as this is a step in the right direction, you have to specifically enable encryption for individual conversations in Messenger. This implementation seems a little sketchy to me. They really should just encrypt every conversation automatically. Otherwise, encryption only encourages scrutiny.

WhatsApp for Android doesn't seamlessly fetch and decrypt messages on new installations; you have to set up Google Drive backups from your old phone, and then set up restore from those backups on your new phone.

"One of the controversial things we did with Signal early on was to build it as an unfederated service. Nothing about any of the protocols we've developed requires centralization; it's entirely possible to build a federated Signal Protocol based messenger, but I no longer believe that it is possible to build a competitive federated messenger at all."

If you need to ask this question then you don't have the skills to verify that your device is running a secure protocol even if I handed you the source code. There are people out there who do have the skills, and you can be certain that they will be reverse-engineering the distributed binaries as soon as possible. I would be extremely surprised if there were not a lightning talk or two at DEF CON or BH this year going over the actual on-device code and how closely it hews to the known protocol.

I'd be very glad. I keep facebook for the people who don't have any proper IM (and for the in my view much more legit micro blogging on FB).

I'm scared of what will be possible to extract from my chat logs in a few years, but the benefit of being able to IM people that only have FB feels greater right now.

The biggest problem I see so far is the multiple-devices issue, but for most people it will be just desktop and mobile, so why can't each message be sent twice, encrypted separately for each device (automatically, not manually)? Does OTR3 have this feature?
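The per-device fan-out being proposed is straightforward to sketch: the sender encrypts the same plaintext once per registered device, under that device's own key. The hash-based XOR keystream below is a stand-in for real per-device session encryption; keys and device names are invented.

```python
# Sketch of per-device fan-out: one ciphertext per registered device,
# each under that device's own key. The SHA-256 counter keystream is a
# toy stand-in for a real cipher.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR with the keystream; the same function also decrypts
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))

device_keys = {"desktop": b"key-desktop", "mobile": b"key-mobile"}
msg = b"hi there"

# The sender produces one ciphertext per device; each device can
# decrypt only its own copy.
fanout = {dev: encrypt(k, msg) for dev, k in device_keys.items()}
assert encrypt(device_keys["mobile"], fanout["mobile"]) == msg
```

The cost scales linearly with the number of devices, which is why it's manageable when "multiple devices" mostly means two.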

I wonder, by "end-to-end" do they mean they will be implementing the Signal Protocol? That would be a pretty awesome increase in security (for most people's threat models)... Whisper Systems are genuinely amazing at what they do; they have fewer than 5 staff and very little budget, but they are literally saving hundreds if not thousands of lives. Any chance some rich HN member will recognise this and open up the chequebook to OWS?