Security Tips Every Signal User Should Know

Illustration: Mark Pernice for The Intercept

This article is out of date. Check here for more recent information about Signal.

There are dozens of messaging apps for iPhone and Android, but one in particular continues to stand out in the crowd. Signal is easy to use, works on both iOS and Android, and encrypts communications so that only the sender and recipient can decipher them.

It also has open source code, meaning it can be inspected to verify security. You can download Signal from the Android Play Store and the iPhone App Store.

Although Signal is well-designed, there are extra steps you must take if you want to maximize the security for your most sensitive conversations — the ones that could be misinterpreted by an employer, client, or airport security screener; might be of interest to a snooping government, whether at home or abroad; or could allow a thief or hacker to blackmail you or steal your identity.

I discuss these steps at length below, in order of importance.

Lock Down Your Phone

Signal uses strong end-to-end encryption, which, when properly used, ensures that no one involved in facilitating your conversation can see what you’re saying — not the makers of Signal, not your cellphone or broadband provider, and not the NSA or another spy agency that collects internet traffic in bulk.

But Signal’s encryption scheme can’t stop someone from picking up your phone and opening the app to read through your conversations. You have to take additional precautions.

If you’re using Android:

Set up screen lock, which requires you to draw a pattern, type a numeric PIN, or type a password to unlock your phone. You can do this from the Settings app under Security > “Screen lock.” Try to make it random, and avoid using anything obvious such as birthdates. Don’t tell anyone how to unlock your phone unless you’re OK with them reading all of your encrypted messages.

Encrypt your phone’s storage. A screen lock is not much use if a thief can copy your phone’s data to a different device. Encrypting the flash memory on your phone blocks such an attack by scrambling your data so that it can only be unlocked using the same pattern, PIN, or password used to unlock your phone. You can do this from the Settings app under Security > “Encrypt phone.” Note that you need to have a full battery before Android lets you encrypt your phone, and you may have to wait up to an hour while your phone is encrypting.

Install all updates promptly. Updates fix security bugs, so every day you haven’t installed them is a day you’re vulnerable to attack. You can check for Android updates by opening the Settings app, and under System tap “About phone” > “System updates.” You should also update all of your apps from the Play Store promptly.

If you’re using an iPhone:

Set a strong passcode. iPhones automatically have encrypted storage, but this encryption only protects your data if you lock your device with a passcode. Everyone should use at least a six-digit passcode, and you should up that to 11 digits if you’re concerned that your phone might fall into the hands of a powerful attacker like a government. Avoid using anything obvious such as birthdates. I wrote about this in detail in February — skip to the bottom of that article for instructions on changing your passcode, and for considerations about using Touch ID.

Install updates promptly. Updates fix security bugs, so every day you haven’t installed them is a day you’re vulnerable to attack. You can check for iPhone updates in the Settings app under General > Software Update. You should also update all of your apps in the App Store app under the Updates tab.

Hide Signal Messages on Your Lock Screen

Signal’s powerful encryption won’t necessarily help you if other people can see incoming Signal messages displayed on your lock screen. Displaying messages on the lock screen is Signal’s default behavior, but you should change this if your phone is frequently in physical proximity to people who shouldn’t see your Signal messages — roommates, coworkers, or airport screeners, for example.

If you’re using Android:

Open the Settings app, and under “Device” > “Sound & notification,” select “When device is locked.”

The options are “Show all notification content,” “Hide sensitive notification content,” or “Don’t show notifications at all.” I recommend you choose “Hide sensitive notification content” — this way you’ll still be notified when you get a Signal message, but you’ll have to unlock your phone to see who it’s from and what it says.

If you’re using an iPhone:

Open the Signal app and tap the gear icon in the top-left to get to Signal’s settings. Under “Notifications” > “Background Notifications,” tap “Show.”

The options are “Sender name & message,” “Sender name only,” or “No name or message.” I recommend you choose “No name or message” — this way you’ll still be notified when you get a Signal message, but you’ll have to unlock your phone to see who it’s from and what it says.

To completely remove Signal notifications from your iPhone’s lock screen, open the Settings app, tap “Notifications,” scroll down to the list of apps, and tap Signal. From here you can turn off “Show on Lock Screen.”

Verify That You’re Talking to the Right Person

I said earlier that Signal ensures your communications stay private when it is properly used. Using Signal properly involves verifying that your communications are not subject to a “man-in-the-middle attack.”

A man-in-the-middle attack is where two parties (Romeo and Juliet, for example) think they’re speaking directly to each other, but instead, Romeo is speaking to an attacker, Juliet is speaking to the same attacker, and the attacker is connecting the two, spying on everything along the way. In order to fully safeguard your communications, you have to take extra steps to verify that you’re encrypting directly to your friends and not to impostors.
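The attack described above can be sketched in a few lines of Python. This is a toy model only — a hypothetical Diffie-Hellman-style exchange with a far-too-small prime, not Signal’s actual protocol — but it shows how an attacker who substitutes her own keys ends up sharing a secret with each victim, while the victims share nothing with each other:

```python
# Toy sketch (NOT real cryptography) of an unauthenticated key
# exchange and a man-in-the-middle attack against it.
import random

P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a prime far too small for real use
G = 5

def keypair():
    priv = random.randrange(2, P - 1)
    return priv, pow(G, priv, P)

def shared(priv, other_pub):
    return pow(other_pub, priv, P)

j_priv, j_pub = keypair()  # Juliet
r_priv, r_pub = keypair()  # Romeo

# Honest exchange: both sides derive the same secret.
assert shared(j_priv, r_pub) == shared(r_priv, j_pub)

# MITM: Mallory substitutes her own public key in each direction.
m_priv, m_pub = keypair()
juliet_secret = shared(j_priv, m_pub)  # Juliet thinks this is Romeo's key
romeo_secret = shared(r_priv, m_pub)   # Romeo thinks this is Juliet's key

# Mallory shares a secret with each victim, so she can decrypt and
# re-encrypt everything flowing between them.
assert juliet_secret == shared(m_priv, j_pub)
assert romeo_secret == shared(m_priv, r_pub)
```

The verification features described next exist precisely to detect this key substitution.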

Most messaging apps don’t provide any way to do this sort of verification. Signal provides two: one for verifying voice calls and one for verifying text conversations.

Verify Your Phone Contacts

It’s easy to verify the security of phone calls on Signal, but you have to verify every call.

For each call, the Signal app displays the same two words on each caller’s phone screen; for example, both screens might show the words “shamrock paragon.” Juliet and Romeo read these words to one another; if the words are the same, and they recognize one another’s voices, the call is secure. If the words are different, someone is attacking the encryption in the call, and you should hang up and try calling again, this time from a different internet connection.

It’s not required, but a popular convention is for the receiver to answer the phone by reading the first word, as in, “Shamrock?” The caller then responds with the second word, as in, “Paragon.”

I admit that this sounds like magic, but I assure you that it’s only mathematics. Here’s how it works: When Juliet calls Romeo using Signal, her app communicates with his app and comes up with a shared secret that no one else can possibly learn, even if they’re spying on this exchange — watch this five-minute video if you want to learn more about how this works. The Signal app on each phone takes this shared secret and converts it into the two-word authentication string. As long as the shared secret is exactly the same, the authentication string will be exactly the same as well.
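The conversion from shared secret to words can be sketched as follows. Signal’s actual derivation and word list differ from this; the four-word list and the hashing step here are made up purely for illustration:

```python
# Illustrative sketch only: hash the shared secret and use the digest
# to pick words from a list. Both phones hold the same secret, so
# both derive the same words; a MITM yields two different secrets,
# and therefore (almost certainly) different words.
import hashlib

WORDS = ["shamrock", "paragon", "lantern", "harvest"]  # tiny demo list

def auth_words(shared_secret: bytes) -> str:
    digest = hashlib.sha256(shared_secret).digest()
    first = WORDS[digest[0] % len(WORDS)]
    second = WORDS[digest[1] % len(WORDS)]
    return f"{first} {second}"

# Deterministic: the same secret always yields the same two words.
assert auth_words(b"secret-1") == auth_words(b"secret-1")
```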

Verify Your Text Contacts

It’s more complicated to verify the security of Signal text chats, but once you’ve verified a text chat correspondent, you won’t have to re-verify them again until they get a new phone or re-install Signal.

Each person you text with in Signal has something called an identity key. When Juliet sends Romeo a message for the first time, her Signal app downloads a copy of his identity key and stores it on her phone, and vice versa. So long as these identity keys are valid — the key that Juliet has stored for Romeo is actually Romeo’s real key and not some attacker’s key — the messages they send to each other are secure.

Because it’s unlikely that anyone is trying to attack your encrypted messages the very first time you send a contact a message, Signal automatically trusts the identity key that it downloads. This makes Signal easy to use: All you need to do to have an encrypted conversation is send someone a message, and that’s it. But if you discuss anything sensitive, you still might want to confirm.
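This policy — trust the first key you see, then warn on any change — can be sketched in a few lines. This is a hypothetical store for illustration, not Signal’s actual code:

```python
# Sketch of trust-on-first-use (TOFU): the first identity key seen
# for a contact is stored; any later mismatch is flagged rather than
# silently accepted.
known_keys = {}  # contact name -> stored identity key

def check_identity(contact: str, key: bytes) -> str:
    if contact not in known_keys:
        known_keys[contact] = key  # first contact: trust and remember
        return "trusted (first use)"
    if known_keys[contact] == key:
        return "ok"
    # Either the contact reinstalled Signal / got a new phone,
    # or an attacker is trying to insert themselves.
    return "WARNING: identity key changed!"

assert check_identity("romeo", b"key-A") == "trusted (first use)"
assert check_identity("romeo", b"key-A") == "ok"
assert check_identity("romeo", b"key-B") == "WARNING: identity key changed!"
```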

To verify the identity key, you first navigate to the verification screen.

If you’re using Android:

Open the Signal app and tap on a conversation to open it

Tap the contact’s name and phone number at the top of the screen

Tap “Verify identity”

If you’re using an iPhone:

Open the Signal app and tap on a conversation to open it

Long-press the contact’s name at the top of the screen until the verification screen appears

Next, you want to confirm you have the correct identity key for your contact. You can do this either by scanning “QR codes,” which work similarly to the bar codes used to ring up groceries, or by comparing “fingerprints,” which are 66-character blocks of text.

Verifying a Text Contact in Person

If you’re able to meet up in person, here’s how you verify identity keys using QR codes:

If you’re using Android:

To be verified, tap the barcode icon in the top-right of the verification screen and select “Display your QR code” (you may be prompted to install the Barcode Scanner app the first time you do this; it is safe to install).

To verify someone else, tap the barcode icon on the verification screen and choose “Scan contact’s QR code,” and then point your camera at the contact’s QR code.

If you’re using an iPhone:

To be verified, tap the QR code icon on the verification screen.

To verify someone else, tap the camera icon on the verification screen, and then point the iPhone camera at the person’s QR code.

When you successfully verify a contact, Signal should pop up a message that says, “Verified!”

Verifying a Text Contact Remotely

If you can’t meet up in person, you can still verify that you have the right identity key by comparing fingerprints — however, it’s kind of annoying.

You need to share your fingerprint with your contact using some out-of-band communication channel — that is, don’t share it in a Signal message. Instead, share it in a Facebook message, Twitter direct message, email, or phone call. You could also choose to share it using some other encrypted messaging app, such as WhatsApp or iMessage. (If you’re feeling paranoid, a phone call is a good option; it would be challenging for an attacker to pretend to be your contact if you recognize their voice.)

Once your contact gets your fingerprint, they need to navigate to the verification screen and compare, character by character, what you sent them with what they see. If they match, your conversation is secure.

Your contact should share their fingerprint with you in the same way, and you should confirm that what they sent you matches what’s on your verification screen as well.
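The manual character-by-character comparison could be expressed as a small helper. This is hypothetical code, not part of Signal; it simply mirrors the check described above:

```python
# Compare two fingerprints character by character, ignoring spacing
# and case, and report the index of the first mismatch (None if the
# fingerprints match).
def first_mismatch(fp_a: str, fp_b: str):
    a = fp_a.replace(" ", "").lower()
    b = fp_b.replace(" ", "").lower()
    if len(a) != len(b):
        return min(len(a), len(b))
    for i, (ca, cb) in enumerate(zip(a, b)):
        if ca != cb:
            return i
    return None

assert first_mismatch("05 ab cd", "05 AB CD") is None
assert first_mismatch("05 ab cd", "05 ab ce") == 5
```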

If you’re using Android, unfortunately there’s no way to copy your own fingerprint to your phone’s clipboard to paste into another app. If you want to share it using another app on your phone, you’ll have to manually type it.

If you’re using an iPhone, you can copy your own fingerprint to your phone’s clipboard like this: Open the Signal app and tap the gear icon in the top-left to get to Signal’s settings. Tap Privacy, then tap Fingerprint.

Verifying a Text Contact Who Gets a New Phone

From time to time, you might see a warning in a Signal conversation that says “Identity key changed. Tap to verify new key.” This can only mean one of two things:

Your Signal contact switched to a new installation of Signal, most likely because they bought a new phone, or,

An attacker is trying to insert themselves into your Signal conversations.

The latter is less likely, but the only way to rule it out completely is to again go through one of the verification processes for text contacts described above.

Archive and Delete Messages

After Juliet sends a message to Romeo using Signal, copies of this message exist in only two locations: on Juliet’s phone and on Romeo’s phone. Unlike other messaging apps, Signal doesn’t store a copy of your messages on internet servers (“in the cloud”). Still, if you have a sensitive conversation, it may be a good idea to delete it when you no longer need it.

You can also archive conversations that you want to keep around but don’t want cluttering your Signal app. Here’s how to delete and archive Signal conversations.

When you open the Signal app, you will see a list of your conversations — your inbox, essentially. You can swipe a conversation to the right to archive it, which moves it out of your inbox and into an “archived conversations” list. Deleting a message or conversation varies depending upon your phone’s operating system:

If you’re using Android:

To delete a message, open the conversation, pick the message you’d like to delete, and long-touch it. This will select the message and give you the option to delete it. Similarly, to delete a conversation, pick a conversation from your inbox and long-touch it. This will select the conversation and give you the option to delete it.

If you’re using an iPhone:

To delete a message, open the conversation, pick the message you’d like to delete, long-touch it, and choose “Delete.” To delete a conversation, pick the conversation you’d like to delete from your inbox and swipe to the left to delete it.

Deleting messages is permanent. If you delete a message from your Signal app, and the person you’re talking to deletes it from their Signal app, the message will be completely gone.


I don’t know if they put backdoors in. It’s possible but you are not providing evidence for this, just vague FUD. Conspiracy theorists are people who make vague assertions without evidence. Investigators and researchers are people who uncover details and provide evidence. You fall into the former category, I’m sorry to say.

You are making it seem like the truth is in the details.

Part of my point (which I will try to make even more explicit) is that they don’t even have to put backdoors in the hosting OS, because:

1) they don’t own it

2) it is fully or partially closed source and/or proprietary

3) to take just one example, Google (whose Android is for the most part open source) doesn’t even care about “security patches and upgrades,” leaving most of their phones at the mercy of the NSA (oh, wait, I have no proof for that and it sounds too sinister, sick. They do it “obviously” because they want people to upgrade to “their latest, greatest” …)

Android’s source code is released by Google under open source licenses, although most Android devices ultimately ship with a combination of open source and proprietary software, including proprietary software required for accessing Google services.[3]
…
At the same time, as Android has no centralised update system most Android devices fail to receive security updates: research in 2015, concluded that almost 90% of Android phones in use had known but unpatched security vulnerabilities due to lack of updates and support.[23][24]
~
RCL

All this is true, but it isn’t evidence that there are backdoors in Signal. Just because the app is distributed on Google Play does not mean they ship a backdoor in it.
Backdoors in Google Android are not the same as backdoors in Signal. I agree there probably are backdoors in Google (not AOSP/CM/Replicant) Android in the Google apps, but this does not affect the status of Signal.
It’s much easier just to watch people through the Google Play Services, which runs as root, than compel Open Whisper Systems, who might tell people that they are being coerced into making Signal malicious.

“… whose OS internals are owned by the NSA,” you should have said. I would also add that Signal very well knows that, as do “Sound” and “Symphony”

Even (someone pretending to be) your “Richard Stallman” hero, pointing to docs on his site, has been pointing out to Lee the wasteful irrelevance of trying to encrypt anything on a compromised system:

In that case, believe it or not, Lee was trying to persuade people into “outsmarting MS”!

I have a simple, basic question for you, smart people. All those “encrypted messaging ‘apps’” are technically doing basically the same damn thing Lavabit was. Lavabit’s business was not only shut down by the USG; they even tried to gag-order Ladar Levison

Canaries can help prevent web publishers from misleading visitors and prevent tech companies from misleading users when they share data with the government and are prevented from talking about it. One such situation arose — without a canary in place — in 2013, when the U.S. government sent Lavabit, a provider of encrypted email services apparently used by Snowden, a legal request to access Snowden’s email,

thwarting some of the very privacy protections Lavabit had promised users. This request included a gag order, so the company was legally prohibited from talking about it. Rather than becoming “complicit in crimes against the American people,” in his words,

Er… Do you have a better solution? Just because most people use a proprietary OS doesn’t mean that we have completely lost the fight against mass surveillance. Certainly, Signal should make it so Google apps are not required. Certainly, Signal should recommend Replicant and mobile OSs not controlled by Google. Yet Signal is better than nothing.
Signal has no access to the messages as far as I know, because they are meant to be encrypted client-side. So unlike Lavabit, which held the keys to the kingdom in its private keys, as most people weren’t using end-to-end encryption, Signal can’t divulge the messages.
I don’t know if they put backdoors in. It’s possible but you are not providing evidence for this, just vague FUD. Conspiracy theorists are people who make vague assertions without evidence. Investigators and researchers are people who uncover details and provide evidence. You fall into the former category, I’m sorry to say.

I wouldn’t be surprised if there’s a piece of classified legislation that makes it illegal to produce NSA-proof encryption and illegal to talk about it being illegal to make systems that are actually secure.

Joseph P. Nacchio was the only head of a communications company to demand a court order, or approval under the Foreign Intelligence Surveillance Act, in order to turn over communications records to the NSA.[9]
[9] “NSA has massive database of Americans’ phone calls”

“In June 2002, Nacchio resigned amid allegations that he had misled investors about Qwest’s financial health. But Qwest’s legal questions about the NSA request remained.”

Also, what is the point of citing our “representatives” when they themselves admitted to not even knowing what N-S-A stood for, let alone that it was recording the metadata and content of all domestic comms, and as much international comms as it could, en masse.

“Actually, all U.S.-based companies MUST, “by the rule of law”, submit to snitching.”

Fair enough; but that is a different issue, as it doesn’t address the OP’s concern: the fact that (to my knowledge) there exists no legislation yet “that makes it illegal to produce NSA-proof encryption and illegal to talk about it being illegal to make systems that are actually secure.”

Bull. The NSA can crack anything with ease. The fact that Signal exists is likely evidence that NSA can not only crack it, but has a back door.

I wouldn’t be surprised if there’s a piece of classified legislation that makes it illegal to produce NSA-proof encryption and illegal to talk about it being illegal to make systems that are actually secure.

These steps will keep kids, thieves, and opportunists out of your business, but nothing will keep an interested government agency out of your business if you’re communicating via electronic means. Nothing.

Don’t kid yourself. The protestations against supposedly strong encryption by the government are designed to ensure people use those systems to do their dirt so they can be exploited. Seriously. Secure against US government snooping. No. Wishful thinking. Cracking civilian and commercial systems is child’s play for the US government.

I’m sorry, but this is, to use your own words, “bull”.
The messages are encrypted by the client. What this means is that Signal has no access to the messages. The keys are stored on the device; Signal cannot decrypt the messages unless you were to give them the key. The source code is publicly available. This gives a reasonable guarantee that it does not include backdoors.
There is the possibility that compiled copies available from Google Play have a backdoor but this is unlikely. You can build from source and use it.
While there are legitimate criticisms of OWS’s attitude to versions of Signal they don’t control, you are not making these criticisms. You are spreading FUD, quite frankly.

Bull. The NSA can crack anything with ease. The fact that Signal exists is likely evidence that NSA can not only crack it, but has a back door.

Right? I’m sure that they have keys to my house, too. Why do I bother locking my doors? The fact that locks exist proves I’m right in thinking this way.

I wouldn’t be surprised if there’s a piece of classified legislation that makes it illegal to produce NSA-proof encryption and illegal to talk about it being illegal to make systems that are actually secure.

Not yet, so it wouldn’t be legislation, it would be “off-the-books” CIA/NSA policy. Ask Ed Snowden?

These steps will keep kids, thieves, and opportunists out of your business, but nothing will keep an interested government agency out of your business if you’re communicating via electronic means. Nothing.

Seems obvious now that you mention it; we should all simply give up trying.

“Not yet, so it wouldn’t be legislation, it would be “off-the-books” CIA/NSA policy. Ask Ed Snowden?”

I’m no Ed Snowden but my guess is their interest would likely be in identifying popular apps and taking advantage of things like weakening the encryption and/or finding ways to make use of stuff like the ‘Keys Under Doormats’ and ‘Trusting Trust’ research. Through Ed’s (I hope he wouldn’t object to me using his first name; ‘Snowden’s’ seems gauche to me too) disclosures, for instance, we know that they have a habit of targeting sysadmins and developers. It seems logical to me that they’d be more interested in attempting to subvert crypto at the compiling stage or thereabouts (one reason reproducible builds is such a big deal) and/or doing something with libs and/or PRNG so as to be in control of crypto in that way than trying to deal with a bunch of random stuff popping up all over the place (takes more man hours, more work). Winds up being especially easy on Windows where people almost never compile from source (even their crypto apps).

Forgot to add, but especially important given the topic of this article in particular: For as few people as compile things for Windows, not many more people compile their own on OSX, not many people compile even their own crypto on linux — and almost none compile their own packages for their smartphones. Which (ironically) is where so many people communicate in the first place (with regards to crypto specifically used in chat and web applications not just storage containers and the like).

Regarding compiling, currently there is a chain of trust. Everything with compiled copies requires trust – do you trust the people compiling more than the original source code? Personally, I trust the original source code more than the compiled copies, but not much more. From the point of view of a malicious figure, it is more practical to infect the source code with a backdoor than target one person who maintains the software on one particular operating system distribution (although this would be useful for malware distribution as well).
At any rate, the answer to mistrust of compiled copies is reproducible builds. This way, multiple trusted people from around the world can build the software independently, and then each cryptographically sign the release of the compiled software. Anyone can build it and achieve the same result. There are efforts to make the Tor Browser Bundle’s build process reproducible, and some GNU/Linux distributions such as GuixSD and NixOS have functional package managers. What this means is that the build process is treated as a mathematical problem, where identical inputs (library versions, dependencies, build options, etc.) always produce the same result. So, when you attempt to install a package, the package manager first attempts to find a compiled version with this output (a specific ID number is assigned to some version of the package with given conditions) and, if no compiled version is found, it builds from the source code. As more packages are made reproducible this gets better and better. This is a practical solution to these problems, in my opinion.
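The reproducible-builds idea in the comment above can be sketched as: if a build is a deterministic function of its inputs, independent builders get byte-identical artifacts and can cross-check each other’s hashes. The build() function below is a stand-in for a real build system, purely for illustration:

```python
# Toy illustration of reproducible builds: two independent "builders"
# run the same deterministic build on the same inputs and compare
# hashes of the resulting artifacts.
import hashlib

def build(source: str, options: tuple) -> bytes:
    # Stand-in for a deterministic build step: the output depends
    # only on the declared inputs, never on time, paths, or randomness.
    return f"{source}|{options}".encode()

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

builder_a = digest(build("signal-1.0.tar.gz", ("-O2",)))
builder_b = digest(build("signal-1.0.tar.gz", ("-O2",)))

# Identical inputs produce an identical artifact, so both builders
# can sign the same hash; a mismatch would reveal tampering.
assert builder_a == builder_b
```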

“Because it’s unlikely that anyone is trying to attack your encrypted messages the very first time you send a contact a message, Signal automatically trusts the identity key that it downloads.”

That’s pretty naive of Signal’s developers. With intercepts on over 95% of the fibre optic cables and connection points like ISPs around the world, with the NSA’s systems like XKEYSCORE and QUANTUMINSERT they can automatically fingerprint Signal traffic then automatically MITM that initial connection to make it display whatever public keys they want.

Also, if you’re attempting to verify a public key via an already easily compromisable method such as a phone call, an internet website, etc., then you’re doing it wrong. The only provably secure method is an in-person (face-to-face) meeting. There exist methods to automatically re-create or fake someone’s voice based on existing voice samples. The NSA would have those voice samples already if you’ve ever used an insecure phone or Skype, etc. Also, verifying a fingerprint via some random internet site like Twitter is MITMable. I may accept a blockchain-based identity such as keybase.io as somewhat more secure. But then again, quantum computers will ruin that. Also, you would need to download the application, verify its source code, then run it locally. Not just look up the identity on the keybase.io website to confirm the public key. HTTPS is terribly insecure against nation-state attackers, and they can intercept or alter data on the page.

For journalists dealing in very important data I would expect in-person meetings to verify the public keys as the preferable option, or an existing secure channel in which the public key fingerprints were verified properly e.g. OTR or PGP.

This method is called Trust On First Use (TOFU), and it’s most commonly used in SSH. The alternative is make it similar to OTR, where you start out in an “unverified” conversation and after you verify you can change it to “verified”. However, then nearly everyone will use it the way that most people use OTR, where nearly all their conversations are “unverified” and they won’t get meaningful warnings when there actually is an attack.

With intercepts on over 95% of the fibre optic cables and connection points like ISPs around the world, with the NSA’s systems like XKEYSCORE and QUANTUMINSERT they can automatically fingerprint Signal traffic then automatically MITM that initial connection to make it display whatever public keys they want.

All communication between the Signal app and server is itself encrypted with TLS, so all passive surveillance of ISPs and XKEYSCORE can pick up is the fact that a device is using Signal, not anything else. If they wanted to do active MITM attacks they would have to compromise the Signal server and do them from there. But also, this isn’t happening. If they were, those of us that regularly verify Signal fingerprints would discover the attack (thanks to the ability to verify that the encryption works), and no one seems to have discovered it.

Being as good as one can get at providing the desired security and safety might not be good enough; with proprietary software, it is impossible to assess the state of one’s safety or correct problems found to be inadequate (this is not the Signal hackers’ fault; these are problems only the OS licensing and distribution can solve).

I think both technically and ethically it is, to some extent, the Signal hackers’ fault to encourage people to entertain untrue illusions while knowing very well that they are lying.

I think there is plenty that can be done from both a hardware and software point of view. Yet we still have to truthfully educate “We the people” (instead of bullsh!ting them about this and that “app”) and try to persuade them out of “easiness” and “ubiquity” into some technical hippie “awareness” (which I don’t think we technical people are good at), but at the end of the day we will have to do it.

Basically what I have in mind is:

1) start a culture of like-minded people: one could start by him/herself, but the fewer people there are, the easier it is for them to mess with us. (They have set a pesky noise on my phone ever since right after I posted this comment.)

2) our culture would be of totally open individuals openly documenting everything in an accessible, readable way, so that even if we would, say, use Linux/Debian as primary distro, anyone could do it in other Linux or Unix Open source OSs

3) we should make everything culturally friendly

4) first, networking should be removed from the kernel (I tried to make them do so):

// __ RFE: moving networking out of the kernel and into to user land …

https://lkml.org/lkml/2013/8/12/476
~
5) we should strategize vertical contextualizations towards a clear separation of:
5.1) I and O features of the kernel/OS
5.2) networked and local services (such as the clipboard, keyboarding history …)
5.3) networked and local users (root, if existent in init 5, should not be networked)

6) we should exploit the initialization process to set up a totally stealthy logging context in the background (probably even dedicating a processor, USB ports, NIC … to it, which is not reported by the OS after you are past a certain init step)

7) include as part of the GUI easy ways for people to monitor bs (get snapshots of their screens, clips)

8) Linux/Debian should be totally encrypted and checks should be made to make sure BIOS and installation directories haven’t been tampered with

9) tor should not only be part of it, but also some sort of filtering web access interface should proxy all access to the Internet (so that, say, google’s redirecting URLs and all that crappy ad are parsed out) …

10) we will have to “fix” the Internet using functional FoF (friend-of-friend) networks (instead of Facebook’s so-called “AI”)

Richard Stallman, the founder of the free software movement, says that “freedom is worth the inconvenience”. We must make it as easy as possible to switch to an operating system that is free as in freedom, but people must be committed to freedom; they must want it. Our society puts too much emphasis on consumer qualities rather than on what is good for us. Signal is a good app, but it is not ideal, considering the policies of Signal’s developers. We must recommend and promote true libre alternatives such as ChatSecure and Silence, and the libre Android system Replicant.
I think we must educate people. Educating about privacy means being honest with them. Sometimes this makes people annoyed, because they see it as criticism. But if each of us can make even 3 people (and ideally, around 10) value privacy and software freedom (security starts with libre software, but it is a different issue to an extent), then that is very good, as they will help to spread the word.
One other thing I think is important is to mention the contribution of GNU. First, some history; this is addressed to everyone here. In the 1980s, Richard Stallman started work on the GNU Project, which aimed to produce an operating system made up of just free software. Over the years, they produced almost everything, including a C compiler (GCC), a C standard library (glibc), and various system utilities to replace UNIX tools (coreutils, as well as GNU make, the GNU autotools, text editors like GNU Emacs, and GNU Wget). All that was missing was the kernel, the program that controls the hardware.

In the 1990s, Linus Torvalds produced the Linux kernel (indeed, Torvalds said that if the GNU kernel had been ready, he would not have produced Linux), and on mailing lists people combined it with many of the GNU utilities and programs. They called this whole system Linux, but that name is not apt at describing the system, as it gives no credit to GNU, which provides the basic system. Although Linux is really important, it is one program only (it hosts a few others, such as util-linux, but those are not part of the kernel). You can’t actually run such an OS without GNU.

Therefore, the accurate way to describe modern “Linux” systems (such as Fedora, Debian, Ubuntu, SuSE, CentOS) is GNU/Linux. Think of it like a tower: the very bottom is Linux, the foundations and first few floors are GNU, and anything added further up is your miscellaneous programs and high-level tools. Mention GNU for their contribution to freedom, rather than those who merely made it more popular and are happy to take a lot of credit.
There are various other systems, such as Android, that use only Linux plus their own libraries or the Busybox utilities. These should not be called GNU/Linux if they use no GNU, and would instead more accurately be called Android/Linux and Busybox/Linux. Systems that use the GNU kernel, Hurd, or the FreeBSD kernel can be called GNU/Hurd (or just GNU, as Hurd is part of the GNU Project) or GNU/kFreeBSD.
Confusing it may be, but the contribution of GNU is really important.
Sorry for a long post, but as a like-minded person I thought it is beneficial to bring up, and for other people reading the comments.
More info found here: https://www.gnu.org/gnu/gnu.html

Thanks, Micah. Another informative article. I still use a VM set up per a previous article of yours; it’s pretty easy to use and update, which given my tech experience is mandatory.

On another note: I’ve been trying out Opera’s developer browser w/free VPN (at least they say it’s a VPN) and native ad-blocking and have found it to be quite stable – and more importantly, as fast as my current connection.

It would be nice to see you do a security and functionality review of this and similar browsers.

Happy national holiday, Micah, and thank you for everything. I hope you and those close to you can relax a bit and enjoy it like most others. I’m taking it easy this evening and just grillin’ brats and HN hot dogs.

Maybe I’ll have a Mitty or two – where I daydream my communications with anyone are truly private, again.

Research in Motion agreed to give access to private communications to the governments of United Arab Emirates[72] and Saudi Arabia[73] in 2010, and India in 2012.[74] The Saudi and UAE governments had threatened to ban certain services because their law enforcement agencies could not decrypt messages between people of interest.[75]

It was revealed as a part of the 2013 mass surveillance disclosures that the American and British intelligence agencies, the National Security Agency (NSA) and the Government Communications Headquarters (GCHQ) respectively, have access to the user data on BlackBerry devices. The agencies are able to read almost all smartphone information, including SMS, location, e-mails, and notes through BlackBerry Internet Service, which operates outside corporate networks, and which, in contrast to the data passing through internal BlackBerry services (BES), only compresses but does not encrypt data.[76]

Documents stated that the NSA was able to access the BlackBerry e-mail system and that they could “see and read SMS traffic.”[76] There was a brief period in 2009 when the NSA was unable to access BlackBerry devices, after BlackBerry changed the way they compress their data. Access to the devices was re-established by GCHQ.[76] GCHQ has a tool named SCRAPHEAP CHALLENGE, with the capability of “Perfect spoofing of emails from Blackberry targets”.[77][78]

In response to the revelations BlackBerry officials stated that “It is not for us to comment on media reports regarding alleged government surveillance of telecommunications traffic” and added that a “back door pipeline” to their platform had not been established and did not exist.[76]

It should be noted that similar access by the intelligence agencies to other mobile devices exists, using similar techniques to hack into them.[76]

The BlackBerry software includes support for the Dual_EC_DRBG CSPRNG algorithm, which, because it was probably backdoored by the NSA, the National Institute of Standards and Technology “strongly recommends” no longer be used. BlackBerry Ltd. has, however, not issued an advisory to its customers, because they do not consider the probable backdoor a vulnerability. BlackBerry Ltd. also owns US patent 2007189527, which covers the technical design of the backdoor.[79]

I also don’t trust JavaScript-dependent pages. At the very least, the reply capability should “degrade gracefully”; I am sure engineers working for TI know that very well.

I also agree with you when you point out that it doesn’t make any sense to talk about “securing” software running on a proprietary OS, BIOS, and hardware. I think Micah Lee knows this very well (notice the explanation he gives about MITM attacks on encryption not making any sense when they can already own you …).

First, it’s a shame that one needs Javascript (JS) to reply in the appropriate place in a thread. I don’t wish to run JS so perhaps an Intercept admin can reparent this post to avoid breaking a thread. Form submission over HTTPS certainly doesn’t require JS to work. Please Intercept code hackers, make it so that the JS is completely unnecessary to fully work with your site including posting followups in the right place so they thread properly. It’s great that the rest of the site (reading articles, getting the RSS feed, posting thread-starting articles) doesn’t require JS at all.

Being as good as one can get at providing the desired security and safety might not be good enough; with proprietary software, it is impossible to assess the state of one’s safety or to correct problems found to be inadequate (this is not the Signal hackers’ fault; these are problems only the OS licensing and distribution can solve). I encourage people not to get a tracker in the first place. As Richard Stallman, founder of the free software movement, points out in his talks, sometimes freedom requires a sacrifice; giving up a tracker isn’t a big sacrifice to make in exchange for increasing one’s software freedom (which allows one to gain more privacy, and more security). If users won’t do without their tracker, they should consider trackers to be devices which gather more data than they realize, and use their tracker only to help out free software developers for the time when we get trustworthy hardware running fully free OSes. Developers need feedback on how to get non-technical computer users to work securely, in addition to all of the normal debugging complex programs require.

Regarding the Unaphone: I was unable to find information on the UnaOS licensing. UnaOS appears to be a modified Android OS which is said (in its press release) to be “affordable”, “secure”, and have “No connections to Google’s office in Mountain View, no connections to US Department of Defence, no connections to US Department of Justice” but (as far as I can tell) is just another proprietary OS running on problematic hardware. UnaOS won’t let users install programs of their own choosing on the OS, as far as I can tell. UnaOS developers appear to have conflated security with locking computer owners out of controlling their own computers! I don’t see how that design choice respects a user’s freedom to run, share, and modify their computer’s software. A demo (https://www.youtube.com/watch?v=bPkg4Utr_yM) shows the user interface and appears to come with Adobe Reader, a proprietary PDF reader. Being proprietary is bad enough, but this company and this program are also notoriously bad at security. Keeping the user from completely controlling their own computer will not keep users safe from malicious activity. Free software is a prerequisite for the security computer users all need and deserve. Proprietary software users eventually learn that the software they aren’t free to inspect, alter, or share is the software which exposed them to malicious activity.

One convenient way to catch the Wahaabi and Salafi Muslim terrorists is to provide them with free public wifi under conveniently located security cameras.

Edward Snowden covered himself with a towel while typing in his password in the HK hotel as you may confirm. We don’t want to alert the ISIS-type folks to this method. Thanks Micah for not mentioning this.

This is a good, sane article, and I don’t agree with the repeated comments of the form,

– “signal could totally be intercepted, I could root you with a printer, omghax,” or
– “wah wah some other flashpan shatchat app is slightly more convenient,” or
– “I feel primarily allied with the brand identity of Blackberry, not rational adulthood.”

However, “Verifying a Text Contact Who Gets a New Phone” shows that the Signal devs have work to do. It should be possible to store a backup of your identity outside the phone, because “switched to backup identity” is vastly more useful to the other party than “switched to random identity,” and losing or wiping phones is common for everyone, but particularly for people under threat and, or because of, frequent travel. It isn’t a matter of how hard-core you are about doing the “extra work” of reverifying everyone. It’s about Signal dropping the ball on its core function of verifying endpoints. Most secure messaging systems (without a centralized and therefore vulnerable PKI) slowly fall back in practice to “unverified” because of device loss or multiscreen phonebook-skew chaos, and Signal makes it worse by fetishizing phones, binding credentials to them as if they were real TPMs. Signal should address the problem it has worsened, for the sake of security, not convenience.

While it’s great to try and bring privacy-supporting computer use to the masses, I’m not convinced this promise can be realized right now.

I find it unlikely such efforts as Signal succeed where the OS is proprietary (aka not freedom respecting, user-subjugating software), and the hardware is under the control of proprietors via schemes like a second processor that controls the processor running user-facing applications (see https://www.fsf.org/blogs/community/replicant-developers-find-and-close-samsung-galaxy-backdoor for an example). It seems to me writing such programs for non-free OSes on non-freedom-respecting hardware is akin to running a program to secure one’s privacy on a computer one cannot trust; it’s futile and at best ends up presenting a facade of security to the user while (perhaps inadvertently) helping share the user’s secrets.

It seems to me the best advice right now is to simply not trust one’s privacy to trackers (aka mobile phones, cell phones). Either don’t own one, or understand that whatever data one puts into the tracker (no matter the input mechanism: texting, anything in mic/camera range, GPS location data, etc.) can be spied on regardless of the program running atop the proprietary OS. Sadly, given the information in the aforementioned link I’m not convinced even Replicant OS apps can be trusted to keep one’s privacy either. But I don’t blame the Replicant hackers for that, that is the untrustworthy hardware design paying off for the proprietors.

I’d like to see development of free OSes (such as Replicant) and free software (free as in freedom) continue so that when trustworthy hardware is available we’ll have free software ready to use with those devices.

Hi,
I too agree that proprietary OSes pose a great risk to privacy and security, especially iOS, where Apple basically owns you. At least on Android you can install applications from outside of Google Play, from F-Droid.
However, I don’t agree that it is pointless to write libre software for these operating systems. It is good to do this as you can protect people’s privacy, security and freedom by providing libre alternatives to bad applications and services.
What I would say is that The Intercept should write about libre operating systems, such as GNU+Linux and Replicant. Similarly, I think they should promote ChatSecure and Silence over Signal, as these more adequately respect freedom and are available on F-Droid (unlike Signal, whose developers are hostile to independent builds they don’t control).
You actually can have mostly-trustworthy hardware already such as computers that have been reverse-engineered to support Libreboot. Similarly, although it would be nice to have free software running on the modems of mobile phones, location tracking can happen regardless. The bootloaders on mobile phones are proprietary but they don’t do much so aren’t a concern of mine. Things are bad especially with Intel hardware but it is possible to protect privacy and freedom if you know how.
I think Micah Lee should write about these issues. (Especially Replicant and Signal alternatives) What do you say, Micah?

Hi,
The Unaphone does not solve the problems of proprietary software and hardware. The sad thing is that all these projects are just to make money. We need proper investment to produce libre hardware and software.
To use a mobile phone in relative freedom, Replicant is the best we have.

I know we need libre baseband software. However, as it currently stands, although libre baseband software would prevent remote control, location tracking occurs regardless, and that is far more of a danger to everyday life; what matters is whether we can prove the baseband software has little access to the rest of the system (“modem isolation”). Remember, just because software is libre doesn’t mean it isn’t malicious, backdoored, or insecure, although obviously security starts with free software.

One cannot patch, update, modify, or even reflash (at least not easily, and certainly not with much actual trust) their own baseband for the most part. I wasn’t suggesting libre baseband is *the* answer, but I think it’s an important part of the answer, since IMHO one should be able to recover from anything at the firmware and BIOS (or equivalent) level for all of one’s devices (and be able to inspect that code). I guess I’m saying that while no one can currently (at least as things are now) wipe out EVERY possible way to backdoor or hack someone from ring 0, ring 1, ring 2, on up, it shouldn’t be impossible to rebuild a system that has been modified at such a low level.

I agree location tracking is one of the most egregious invasions of privacy we all face (not just people who might be considered ‘interesting’ or merely unfortunate). I’m not sure I see a solution if every option is either backdoored, cooptable, or exploitable, though. I’ve seen your other posts (I enjoy them :)) and I think we can both agree that it’s very, very difficult to rule out the possibility of wiping out every bug and every bug class (including bugdoors), but I think it’s far better for everyone if they have fully audited, open source options. I suspect those with a good knowledge of the concepts outlined in, e.g., TAOSSA would be greatly offended by at least some of the code.

I thought I’d mentioned reproducible builds in one of my other posts but I can’t find it; I’m sure it’s here somewhere. I saw your recent reply near the top though and wanted to say I agreed with your comments there as well. :)

I see your point. This is the ideal, but we are probably not going to get there without community hardware or some SoC manufacturer releasing full source code. This would probably be a Chinese one, or perhaps Texas Instruments, but never Qualcomm or Intel. This is partly because there are laws and regulations from the FCC making them compliant in order to “protect the network”.
Under the current circumstances, a mobile device with an isolated modem and a libre or extremely simple bootloader, with the rest fully free, is the best we have. Ironically this is probably less malicious than most Intel computers, providing the modem has no access to the hardware and the bootloader is so simple as to have less functionality than many circuits.
However, it is better to have a computer powered by Libreboot, in order to place VoIP calls e.g. through WebRTC and other secure alternatives, and messages through TorChat, Internet Relay Chat and XMPP. One could use Qubes OS or one of the free GNU+Linux distributions in order to do this.
At any rate, current mobile devices are very creepy. We should bin them as soon as possible, especially ones with Qualcomm chips (which have two processors, of which the modem processor can spy on your whole system), and get dumb phones with removable batteries, or phones supported by Replicant. And help Replicant by sending them devices they have evaluated as possible candidates, by funding them, and by helping to develop driver replacements.
Just my thoughts on mobile spying devices.

Real reply tomorrow — it’s getting late — but I wanted to ask you first (if you see this before my reply): are you not worried about the security and privacy implications (including information leaks) that can exist as a result of enabling (or not disabling) WebRTC? With Qubes, I’m (and have been) quite leery of the fact that it’s built on top of Xen, which has had quite a spate of vulnerabilities. I agree it’s still better than most of the other options, but it does worry me — the lack of a more generally secure virtualization solution worries me (one could make an argument for qemu on top of hardened Gentoo, but I suspect that if Qubes intimidates most people, that’d be a far more difficult sell than even Qubes, even to other techies).

I have a BlackBerry Passport, so Signal is unfortunately not among the options that I can consider. Because of this, I have a paid subscription for BBM Protected, which offers additional encryption on the regular BBM consumer version. Have you ever heard of this and if so, what is your opinion of it?

Get rid of your subscription; it is simply not worth it. It is not secure so long as BlackBerry can read it, as they probably can, because you use their crypto keys. It is secure from your average criminal, but not from spooks and state spies, or from employees of the BlackBerry corporation. You probably aren’t in much danger of targeted state surveillance, but I don’t think such power should be at the disposal of anyone.
I think you should get a dumb phone with an easily removable battery to prevent location tracking. If you need smartphone features, get a phone powered by Replicant. You can send encrypted text messages using Silence, and encrypted IMs using ChatSecure. You can make encrypted VoIP calls using Linphone and CSipSimple. These are not controlled by corporations (although corporations may contribute to and fund some of these applications), so you minimise surveillance and increase privacy.

Sad that it’s come to this, but it has. Unfortunately, whatever you do, it will be like locking up your possessions: that only dissuades the lazy crooks. For example, a $10 lock will secure a $100 bicycle. For a $7,000 bicycle they will come with cutters, grinders, torches, and freezing sprays, and the real bad guys will get what they want. We are in the “Brave New World” now, with bread and circuses sufficing, but as things get worse — and they will — the 1984 mailed fist will come out. When the SWAT squad comes to your door, they won’t ask nicely.

Signal has caused me great trouble with contacts not receiving my messages. If one of your contacts uninstalls Signal, they will no longer receive text messages from you if you are using Signal, and there is no warning on either end. The messages just disappear. When I send messages to Signal users who have iPhones, they are never notified. I’m afraid to uninstall Signal because I’m worried I will stop receiving text messages in the same way. I am also only able to text people who are also using Signal when I have a data connection. I find Signal to be very unreliable and I wish that I had not started using it. Encryption is great, when it doesn’t break standard operation.

I did find that tool after some searching when the problem was first identified. But I can’t expect my less-than-technical friends to even think to look for something like that after they uninstall the product. It’s great that this app provides end-to-end encryption, but in my opinion it is not worth recommending to others. You simply can’t offer an app that is going to make text messaging unreliable. I’m in trouble with my friends for suggesting they use Signal after they stopped receiving some of their text messages. Simply unacceptable; I will not recommend the app again.

It seems like cops could remove the SIM from a locked phone, unregister it, and receive future messages sent to it in plaintext. This is another way Signal’s poor handling of the “lost phone” case is a security problem, not a convenience problem.

Besides that, any agency with SMS passive surveillance capability could unregister phones at will. The target may notice this, but more likely she will lose it in the noise of general phonespaz, or blame Signal for being flakey.

These publicly-documented backdoors have never existed for more traditional encryption ecosystems like PGP, S/MIME, XMPP OTR, or Pond. Admittedly the traditional ecosystems came with other problems that Signal solves, but what happened to the spirit of “no new problems” and building on earlier work?

I think using phone numbers to identify endpoints was a poor decision because it gives Whisper Systems / Twitter an excuse to resist federation, but it also creates this security corner-case they have failed to solve. Signal is probably the most practical app currently available, but it doesn’t meet my expectations. Nothing I know of does. I think the expectations are pretty basic, but unfortunately it seems messaging app designers need to spend >95% of their effort on fashion and self-promotion to get anywhere, and Signal is certainly no exception to that.

Signal is anything but easy to use. My contacts with iPhones never get message notifications if they use Signal. If a contact uninstalls Signal because they are tired of not receiving text messages like normal, they simply stop receiving my Signal messages entirely. I have a friend who had not replied to my texts for some time. Finally we figured out that after he found Signal to be buggy and uninstalled it, messages from anyone who had previously been sending him Signal messages would simply not get through. Signal is a buggy POS and I wish I had never started using it. I’m afraid to uninstall it because then I will be stuck never receiving text messages anymore. I highly advise anyone reading this to think twice before installing Signal. It’s not ready for prime time.

It depends on your threat model, on how much resources you think an attacker might be willing to spend on hacking your phone. The Erase Data setting, as well as the progressively longer time delays, aren’t enforced in hardware. If an attacker is willing to take apart your phone and do some complicated attacks, they can bypass the Erase Data feature. Here’s more information about how that works: https://www.aclu.org/blog/free-future/one-fbis-major-claims-iphone-case-fraudulent

But having a long enough passcode protects you against even this, because the longer your passcode, the longer it will take your attacker to guess every possibility. 4 digits allow only 10,000 possible passcodes, but 11 digits allow 100,000,000,000 possibilities, and it shouldn’t be possible for anyone to crack that until long after you’ve died. More information: https://theintercept.com/2016/07/02/security-tips-every-signal-user-should-know/
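To make that arithmetic concrete, here is a small Python sketch of worst-case brute-force times. The 12-guesses-per-second rate is a hypothetical figure for a hardware-assisted attack, not a measured one; the point is how the keyspace, and therefore the time, grows by a factor of ten per digit.

```python
# Worst-case time to try every numeric passcode of a given length,
# assuming (hypothetically) ~12 guesses per second with hardware help.
GUESSES_PER_SECOND = 12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(digits: int) -> float:
    """Time in years to try every passcode of the given length."""
    keyspace = 10 ** digits  # each extra digit multiplies the work by 10
    return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for d in (4, 6, 11):
    print(f"{d} digits: {10 ** d:,} codes, ~{years_to_exhaust(d):.2e} years")
```

An 11-digit passcode works out to a few centuries even at this guess rate, which is the "long after you've died" figure the comment above refers to.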

Right now for a normal user, I would say either a Nexus Android phone or an iPhone.

If you’re a techie nerd, I’m particularly excited about CopperheadOS, which is a security-focused Android distribution that runs on Nexus phones. However, you can’t both enable all the security features and get Google Apps, which ironically makes using Signal way more difficult.

Hi Micah,
Have you heard of Replicant? It is an Android distribution that does not have any proprietary software in it. Most importantly, there is no proprietary code running in kernel space, so you are less likely to be spied on by manufacturers.
It also runs on phones with reputedly good modem isolation, meaning the Replicant developers are reasonably certain that the phone can’t be attacked and controlled remotely.
Will you write an article on Replicant?
Thanks!

Be careful with that Romeo & Juliet example. These days, Romeo would be considered a pedophile, and Juliet was a victim of child abuse and grooming. You don’t want to be teaching people how to abuse children now do you!?

– While Signal is great for easily contacting someone by text message or phone call based on their phone number, it is not anonymous. The Signal-like OMEMO protocol on XMPP is the next step up. ChatSecure/Conversations and Zom are great and are working on OMEMO, but have OTR at the moment. Zom is like XMPP for dummies. It sets you up with a free account at dukgo.com (a Duck Duck Go service) and a password (changeable) with auto-login, and it generates a 1024-bit DSA key and shows you its SHA-1 hash.

– WhatsApp is actually a more complete implementation of the Signal protocol. Correct me if I’m wrong, but it has key continuity for calls and uses the same Ed25519 key (a type of 256-bit ECC signing key, used with a SHA-256 hash) that is used for texts.

SAS is a pain to re-establish every time, and since it is the same app and the same contact, using a common key makes sense.

SAS is also a no-go for non-English speakers. I can’t believe this isn’t addressed more often. Not to mention a lot of people aren’t terribly bright in certain departments and won’t pronounce half of the words right or be comprehensible. “Paragon”? I wish all my friends knew how to say that word, let alone what it meant. Most do, but others don’t. Grown people who are college educated native speakers can’t spell “you’re” or pronounce/spell “intensive” or “conscience”. Having to explain to your friend how to pronounce the word defeats the purpose. And again, most people on Earth do not speak English fluently.

Birthday attacks are only applicable if the attacker makes two documents and asks you to sign one.

I make a contract saying I will buy your bicycle for $50, but I also make a contract saying I will buy your house for $50.

I add null characters at the end of each contract until I get a match. It is only by doing this to two documents that I make the chance of a collision exponentially greater (halving the theoretical security in bits).

If you are trying to come up with a collision when one document is already fixed, I still have 160 bits of security.
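The gap between the two cases can be shown with a toy demo. This sketch (hypothetical helper names, a deliberately truncated 24-bit hash so it finishes instantly) pads both contracts with a counter, exactly as described above, and finds a colliding pair after roughly 2^12 attempts — the birthday bound — whereas matching one fixed document would take on the order of 2^24:

```python
import hashlib
import itertools

def toy_hash(data: bytes, bits: int = 24) -> int:
    """SHA-256 truncated to `bits` bits -- a stand-in weak hash for the demo."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def birthday_collision(doc_a: bytes, doc_b: bytes, bits: int = 24):
    """Vary BOTH documents (trailing counter padding) until any pair from
    different documents collides. Expected work: ~2**(bits/2) hashes."""
    seen = {}  # hash value -> (base document, padded message)
    for counter in itertools.count():
        for doc in (doc_a, doc_b):
            msg = doc + b"\x00" + counter.to_bytes(8, "big")
            value = toy_hash(msg, bits)
            if value in seen and seen[value][0] != doc:
                return seen[value][1], msg  # two different contracts, same hash
            seen.setdefault(value, (doc, msg))

m1, m2 = birthday_collision(b"I will buy your bicycle for $50.",
                            b"I will buy your house for $50.")
assert toy_hash(m1) == toy_hash(m2) and m1[:20] != m2[:20]
```

Against a real 160-bit hash, the same attack would need about 2^80 hashes instead of 2^12, which is why the fixed-document case above still offers the full security level.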

SHA-1 has flaws and is less than 160-bit secure.

DSA-1024 and DH-1536, as used in OTR version 3, are also not good long-term. You should rotate your keys once a year, I’d say.

But! I made a note of the upcoming OMEMO, which uses HMAC-SHA256, Curve25519 (256-bit ephemeral ECC Diffie-Hellman keys), Ed25519 (a 256-bit ECC signing key with SHA-256), and AES-256 (extra protection from meet-in-the-middle attacks).

Hi Micah and thanks for the great article. What’s your take on MITM detectability when Snowden has repeatedly talked about government agencies such as the NSA stealing keys from the end points, and that smartphones can be taken over by sending a single SMS to it. What threat model is Signal solving? What threat models is it not solving?

Having malware on an endpoint isn’t the same as a MITM attack. If your iPhone or Android phone has been hacked (and rooted), the attacker can spy on everything you are doing on it. In that case, you wouldn’t detect a MITM attack in Signal because there wouldn’t be a MITM attack (the attacker would be logging keystrokes/reading files/looking at your screen, not attacking the crypto, so the fingerprints would match).
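A toy sketch of why the fingerprints would still match: a safety number is derived from the public keys actually in use on each end, so an active MITM who substitutes keys changes it, while endpoint malware (which reads messages after decryption) leaves it untouched. This is not Signal's actual safety-number format, and the key values are made up; it only illustrates the principle:

```python
import hashlib

def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    """Toy 'safety number': a short hash over both parties' public keys.
    (Signal's real derivation differs; this just shows the principle.)"""
    return hashlib.sha256(b"|".join(sorted((pub_a, pub_b)))).hexdigest()[:16]

alice, bob, mallory = b"alice-public-key", b"bob-public-key", b"mitm-public-key"

# No attack: both ends hold the same key pair, so the strings they
# compare (in person, or read aloud on a call) match.
assert fingerprint(alice, bob) == fingerprint(bob, alice)

# Active MITM: each victim actually holds the attacker's key for the
# other party, so the two sides compute different fingerprints and
# verification fails -- the attack is detectable.
assert fingerprint(alice, mallory) != fingerprint(mallory, bob)

# Hacked endpoint: the keys are untouched, so the fingerprints still
# match and reveal nothing, even though the attacker reads everything.
```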

Also, keep in mind that it’s not necessarily trivial to take over a phone. If you update the software on your phone promptly, you fix the known vulnerabilities. This means you’re only vulnerable to zero-day exploits, which are more rare, and often expensive, compared to public ones that anyone can find. iOS and Android also use sandboxing, which means exploiting an app often isn’t enough to root your phone. An attacker needs to string together multiple exploits: a vulnerability in the app, plus a vulnerability in the OS sandbox, before they can start spying on Signal messages undetected. (But it depends on the exploit — I think SMS vulns often exploit OS-level code rather than app-level code, which might give the attacker root with a single exploit.)

This is exactly what you want. You want to make it so 1) an attacker can’t passively spy on you (like they can with normal text message and phone calls) and 2) an attacker can’t do an active network attack (like a MITM) to spy on you. You want them to be stuck with hacking your endpoint as the last resort, the only option. And then you want to make your endpoint as hard as possible to hack. That’s the best that’s possible for anyone to do.

I understand the difference between CNE and MITM. I guess I’m wondering how difficult an invisible MITM would be if keys were stolen from the endpoint. The reason government agencies might want to do that is that, compared to a MITM, actively transmitting everything you do on the phone to the network would be much noisier. Over WiFi you could view the traffic yourself, and over mobile data the excessive data usage might be visible.

“This means you’re only vulnerable to zero-day exploits, which are more rare, and often expensive, compared to public ones that anyone can find”

They are more rare and expensive. However, as far as I know 0-days are not one-time use only, but they remain unpatched for a long time before they’re discovered. Very few users actively monitor their devices or run IDS (if there even is one for smartphones).

If there were a 0-day OS-level vulnerability in the SMS handling of the latest version of Android/iOS, how difficult would it be to scale the attack to mass surveillance, provided the exploit doesn’t raise warnings on the target system?

“You want them to be stuck with hacking your endpoint”

I understand. But it appears governments like the UK’s are moving toward bulk equipment interference, so it’s no longer just individual targets that are getting hacked; soon it will be entire cities. How do we drag governments back to targeting?

In their eyes they *are* targeting, which I think is part of the problem. By going for retroactive access, they’re basically giving themselves a free pass to say they’re not surveilling everybody when in fact (and I’m sure you know) they are. It’s part of what led to Clapper’s egregious forehead-sweating performance while being questioned about ‘collection’. So to answer your question, I think we first have to get them to see what ‘targeting’ even means. From what I’ve seen from a little bit of a distance, they’re pretty fast and loose about what might constitute ‘suspicious’; trying to avoid ubiquitous surveillance of everything, for instance, would in their eyes make you a ‘hard target’.

I’m not sure how we can roll that back. If it can be done, I think it has to be started at a local level. I’d really like it if there were towns and cities that flat-out banned street cameras and public transportation (and vehicular) cameras/mics, etc. — where certain people in certain parts of certain cities were willing to ‘risk’ a bit more crime, or a bit less crime being solved, in exchange for that level of privacy. Ironically, the lower-middle class may have the best chance at making a difference in that regard, since back/side streets and non-development-type suburban areas at least have fewer cameras in residential areas.

It’s funny to me how the wealthy and the impoverished both seem to be the most surveilled, for different reasons: the wealthy want it to protect them from the impoverished, but at the same time they’re giving up their privacy to get it. And of course the impoverished are the most affected, since they’re perceived to be shadier. That that extends to people waiting for a bus or in a subway or train station or what-have-you is outrageous.

The fact that those agencies have access to THIS data is something I’d like to see covered in more depth. I never really worried that much about surveillance cameras when they were basically separate and on taped loops that wrote over themselves every day or two (I didn’t like it, but I didn’t feel like my privacy was forever invaded). With the current setup, however, one can assume it’d be a combination of attacking the endpoints of the places with cameras (many of them connect theirs to their internal network, not realizing they’re exposing it to the entire internet if their router’s hacked) or just getting access via methods similar to those we saw in the Snowden documents about accessing online services (in the case of municipal and larger networks).

I’m a bit off-topic, but not really — you mentioned entire cities; can we be sure they’re not already hacked in that manner? I strongly suspect some are. Which goes back to my statement about localities: the best defense may be to encourage our localities NOT to network all of this stuff (even if they INSIST on it being done, something more like a black-box system might be imposed, creating more of a physical onus to check it ONLY in case of something truly egregious; it’s not a compromise I’d like, but it’s difficult to shake people’s mindset from ‘more is better’, unfortunately), to not even have it in the first place, and to pass more local laws preventing longer-term storage of surveillance footage and requiring the posting of such cameras in any location where they’re used.

With regards to 0-days, they are not one-time use only, but they’re rarely used en masse. They can be, and sometimes are, but my understanding is the government prefers not to use them, so as to make them last longer, unless they’re truly after someone or some piece of information — but “prefers not to use” doesn’t mean “doesn’t use,” and one assumes they have quite a stockpile at the NatSec level. I don’t believe the FBI often has that level of access unless it’s acting as a ‘customer’, but I’m sure it occasionally does (e.g. the iPhone case, where it kind of was still a customer; some of the Tor cases; FinFisher and its ilk, again as a customer). 0-days can last a long time or a little time depending on many factors, including luck and the skill of the person being attacked — not just how many times their use has been attempted. 0-days can die in a day.

There WAS a major SMS vulnerability a year or two ago, published by jduck & co. One assumes that was used here and there on various people, including by various governments (they actually probably prefer not to use a 0-day if they can help it, since a public exploit is even more deniable; it’s especially easy to avoid burning one in the case of mobile, since mobile devices are comparatively rarely up-to-date, or even updateable).

FWIW, you can hide both mobile data and WiFi data if you have the correct level of operating-system access, especially on a mobile device, where it’s hard to attach something outside of the device to monitor the traffic (one can argue there’s adb and the like, but then you risk breaching the integrity of any machine you’d attach it to in order to do said monitoring).

Forgot to mention — I find it horrible that various states have two-party consent laws for monitoring phone calls, but just walking outside or taking a bus assumes consent to being surveilled in video, audio, or both (and sometimes more).

As the author of the article says: keep your device (Android or iPhone) updated. The Android vulnerability being passed around by Apple fanboys has already been patched. Don’t take my word for it – check it out.

Sure — that’s all fine. But what I like is the belt-and-suspenders approach, because all software of any size has bugs. So I’d like to have the messages encrypted by Signal, and the whole device encrypted as well, because we don’t know how many vulnerabilities exist that we don’t know about yet and/or haven’t been patched yet.

UPDATES on this article ought to be ON THE READY from that main menu as you can bet your bippy the NSA CIA & F…B…I… have just finished reading this and have convened the teams to get past the security in all kinds of ways incl impersonation.

Disclaimer: good security is required for persons, businesses, and countries, and police authorities need to be able to stop serious crimes before they cause real damage. At the same time, corporations and governments should not be employing police services to stop competition, disrupt democratic processes, or violate freedom of speech and press.

Surely an even more important security tip is not to let anyone know if you know someone else’s password. I mean in some countries, when the moon is full and the wolfsbane blooms, you might be able to claim some kind of right against self-incrimination not to reveal your password. But it’s 100% sure that if you know someone else’s password, you’ll be offered a choice between betraying them and going to jail, and you won’t even get the lousy pieces of silver.

Yet people will constantly invade other peoples’ lives and harass and bully and stalk them online. I’ve been thinking there need to be new laws that equate using such things to being a form of digital rape. I’m talking about those ‘lulz’ and ‘activist’ and ‘hacker’ types who go after their own — the very sorts of people most likely to get thrown into that sort of predicament, I assume.

The laws against hacking were the first great mistake of the internet. We could have had companies humiliated into doing the right thing by punk teenagers posting little graphics on their front pages. Instead, the businessmen managed to keep their “respect” by such means as it is usually maintained (brutalizing those who would disagree) until such time as crews in nonextraditable countries got down to work.

The thing is, I don’t know if the American kids and coders still have the skills and the attitude and the free spirit to properly test corporate security, even if the laws were completely repealed tomorrow.

I think prohibiting hacking is a violation of freedom of speech. It’s manipulative speech, but regardless of what the hacker’s computer said to the recipient’s computer, anything the recipient’s computer does in response is essentially voluntary. It’s the technological equivalent of, “He said something confusing and I did something stupid in response! Punish him!”

I think you misinterpreted me. I said ‘lulz and activist and hacker types who go after their own’ — not hackers. The CFAA and most of the current hacking laws are awful. At the same time, we don’t have strong enough laws for cyber harassment. And it’s really screwed up when people help to destroy what they say they care about by eroding trust instead of building trust. And the current trend toward political correctness in the hacker community can be just as dangerous as the malicious stuff I was talking about above. The funny thing is, I am absolutely against extradition for hacking cases (btw, please support Lauri Love’s bid to not be extradited, if whoever reads this is in the UK) — especially when it’s by the US. But I think I might understand if it were for SWATting and/or online harassment of a citizen of another country (yes, that means I also think Americans should be extradited for the same sorts of offenses). Those cause actual grievous injury.

I don’t disagree with the meat of your argument — and SWAT teams now are totally overused and incredibly violent (and usually unnecessary; they’re terrifying to begin with). But that doesn’t change — and in fact it sort of emphasizes — my argument: SWATting is done specifically BECAUSE of all of these things. Nobody who does it is surprised when it happens. They find that part funniest of all. That’s flat-out sociopathy, and I’ve yet to hear of any way to rehabilitate that short of actual punishment. Too many kids consider it a joke — even when they SWAT one another — like it’s some sort of status symbol. They derive JOY from terrifying, terrorizing, and destroying property at someone else’s hands. If it were murder, there’d be a strong statute and punishment; I’m not sure why it shouldn’t be treated as though it were a contract kill by an unwitting bunch of killers. I won’t go into the erosion of public trust and the crying-wolf aspects. Sorry, I feel a bit strongly about this… mostly because I’m frustrated by seeing these sorts of things presented as ‘part of hacker culture’ to some of the new generation. It’s a horrific premise.

I think that maliciously providing false information to police or 911 operators should be a crime in itself, but the hacking is primarily the fault of whoever was in charge of securing the device, if it was expected not to get hacked. You can’t reasonably expect every single person in the entire world, without exception, to be nice, and everyone is safer if we fix these bugs instead of hiding them.

DDoS/spam is kind of a separate entity from unauthorized-access hacking, though.

Can we agree that SWAT teams and the people SWATting people are both, to varying degrees, legally, morally, and ethically responsible for harm (including of a psychological nature)? Can we further agree that (probably) most SWAT personnel think they’re doing something ‘good’ (despite how people who’ve been terrified by a SWAT team might feel), whereas someone SWATting someone probably knows what will at least happen, what they hope will happen, what might happen, and that they’re not being ‘good’ by doing it (putting aside the quite bad but slightly less horrible call to get a day off from school by calling in a false threat — still a crime, though)?

I think we can both agree that computer hacking can be not too bad, somewhat neutral, bad, really bad, and deadly — but also sometimes good (in certain cases), even by the criminal definition — and that ‘hacking’ as a concept in and of itself is not bad. But IMHO, unlike with guns, when people DO overstep they tend to conflate ease of attack with amount of harm — one reason I’ve been pretty much anti-Metasploit but not anti-hacking, personally. Learning how systems work for one’s own edification shouldn’t ever be illegal as long as you’re not accessing other people’s property; when you are, there should be varying levels based not only on intent but actual harm. I’ve seen these get mixed up all too often.

At the same time, I’d consider someone robbing a bank at gunpoint far more terrifying than robbing one online. And I think most people consider spam a nuisance (if occasionally a big one) but not in and of itself necessarily all THAT harmful, usually; illegal, yes, but not something many people would die over. I wouldn’t send it or want it, but I don’t think it’s something to extradite someone over — that should be handled locally.

We can only fix the bugs we know about — and sometimes we can’t even fix those. Yes, the owner of a system has some responsibility, but only some; at the end of the day, you’re not let off the hook for robbing someone just because they left their door wide open… it’s still an invasion. I don’t think it’s even intentionally malicious a lot of the time — but I do think the ease with which people can do things, by being enabled to do so, makes it very easy for a non-skilled, lightly skilled, or even moderately skilled person to unintentionally completely screw up a system. Sometimes that can have unintended consequences. I’m not sure how that should be punished. Certainly it shouldn’t be punished the way the US often punishes people. It shouldn’t get someone extradited.

I wouldn’t mind seeing some of the intelligence agency personnel who’ve hacked people in other countries (regardless of where they’re from) being extradited to those foreign countries to stand trial by their countries’ rules and laws, by their countries’ citizens. There’s a double standard there that’s a big part of the problem.

There’s also a double standard (back to the kids stuff) in claiming to encourage things like operational security and good privacy practices, but then going around destroying the lives of your peers by violating theirs: ‘doxing’ them publicly, affecting their physical lives and safety, harassing them online, and the other things that happen when factions of groups like Anonymous get out of control and turn on their own. I’ve never been a part of any of that (not a fan of drama), but some of those people are probably our future activists once they get a bit more mature, and maturity isn’t always rewarded in some subcultures; sometimes the opposite gets rewarded instead.

I agree one can’t reasonably expect, without exception, that every single person in the entire world will be nice (heck, I don’t even expect most people to be), but I think hacker ‘culture’ has taken a nosedive in the past decade or so, and that worries me — because at least some of those people are going to be the ones working for the three-letter agencies eventually.

Windows Phone is tough, even though I loved the UI. Microsoft has basically killed the platform off, and there aren’t many developers making things or updates for it (things you need for a secure messaging app). And Microsoft is not a good company when it comes to respecting its customers’ privacy with regard to the government.

Unfortunately, there isn’t a version of Signal for Windows Phone. But WhatsApp is available, it’s also end-to-end encrypted, and you can verify the crypto similarly to how I described in this article. Its privacy around metadata is somewhat worse (here’s my last article explaining it all), but it’s not bad, and a lot more people use it than use Signal.
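The out-of-band verification mentioned above boils down to both parties deriving the same short fingerprint from the two public keys and comparing it over another channel. This is a toy sketch of that idea, not the actual derivation Signal or WhatsApp uses, and the key values here are made up:

```python
# Toy sketch of out-of-band key verification ("safety numbers").
# NOT Signal's or WhatsApp's real algorithm -- just the general idea:
# hash both public keys into a short string both people can compare aloud.
import hashlib

def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    # Sort the keys so both sides compute the same value
    # regardless of who is "a" and who is "b".
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).hexdigest()
    # Break it into short chunks that are easy to read out loud.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

# Made-up key material for illustration:
alice_view = fingerprint(b"alice-pubkey", b"bob-pubkey")
bob_view = fingerprint(b"bob-pubkey", b"alice-pubkey")
assert alice_view == bob_view  # a match suggests no one swapped the keys
```

If a man-in-the-middle had substituted their own key, each side would derive a different fingerprint and the mismatch would show up when the two people compare values in person or over a call.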

Play around with it and have fun. Just use it as a phone/media player/GPS/computer. Don’t use it for anything that requires security, like banking.

Do a factory reset every so often. A rebuild agent would be nice, so that it reloads your installed apps after the reset. I used to have a rebuild script for Debian that would tear down the computer and rebuild it. That should wipe out most rootkits and malware.
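The core of such a rebuild agent is just snapshotting the app list before the wipe and diffing it afterward. Here is a minimal sketch of that bookkeeping; the package names are made up, and on Android you would feed it the output of `adb shell pm list packages` rather than hard-coded lists:

```python
# Hypothetical "rebuild agent" sketch: record installed apps before a
# factory reset, then compute what needs reinstalling afterward.
# Package names below are illustrative, not real inventory.
import json
from pathlib import Path

def snapshot(installed, path):
    """Save the current app list to a JSON file before the reset."""
    Path(path).write_text(json.dumps(sorted(installed)))

def to_reinstall(path, currently_installed):
    """After the reset, return snapshot apps that are now missing."""
    wanted = set(json.loads(Path(path).read_text()))
    return sorted(wanted - set(currently_installed))

snapshot(["org.thoughtcrime.securesms", "com.example.maps"], "apps.json")
print(to_reinstall("apps.json", ["com.example.maps"]))
# -> ['org.thoughtcrime.securesms']
```

Keeping the snapshot file off the device (on a laptop or printed out) matters here: a snapshot stored on the phone would be wiped by the reset along with everything else.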

All messages sent using Signal go through Signal’s servers. That means Signal is spying for the government. If they were good guys they would have created a peer-to-peer system. Of course, if they did that they wouldn’t earn (steal) any money…

It’s encrypted by the client, so they can’t read it (this is what end-to-end encryption means).
However, Signal is against federation, so you can’t use your own Signal server. Similarly, they don’t provide an .apk file without the malicious Google Play Services and library dependencies, and they took down rebuilds that removed the Google apps.
If you want a private messenger that also respects your freedom, try ChatSecure or Silence, both available on F-Droid, a FOSS app repository.
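The point that the server only relays ciphertext can be sketched with a toy key exchange. This is emphatically NOT the real Signal protocol (which uses X25519 and the Double Ratchet); the group parameters and stream cipher here are deliberately simple illustrations using only Python’s standard library:

```python
# Toy end-to-end encryption sketch: each device derives the shared key
# locally, so the relay server only ever sees public values and ciphertext.
# Parameters are far too weak for real use -- illustration only.
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; tiny by real-crypto standards
G = 3           # toy generator

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    # Classic Diffie-Hellman: both sides reach the same secret.
    s = pow(other_pub, priv, P)
    return hashlib.sha256(s.to_bytes(16, "big")).digest()

def xor_crypt(key, data):
    # Toy stream cipher: SHA-256(key || counter) as keystream.
    # XOR makes encryption and decryption the same operation.
    out = bytearray()
    for i, start in enumerate(range(0, len(data), 32)):
        block = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[start:start + 32], block))
    return bytes(out)

# Alice and Bob generate key pairs on their own devices; only the
# PUBLIC halves cross the server, so it never learns the shared key.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)

key = shared_key(a_priv, b_pub)
ciphertext = xor_crypt(key, b"meet at noon")  # all the relay sees
assert xor_crypt(key, ciphertext) == b"meet at noon"
```

This also shows why the commenter’s server-spying claim misses the mark for message *contents*, while leaving their metadata point intact: the server still sees who talked to whom and when, even though the key never passes through it.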

Tell them you want to use this for family conversations, since it makes them private. Normally there isn’t too much pushback on that. Going beyond that (friends using it, etc.) really depends on them, but once they know it makes things private, that helps.

Remind them of ‘the fappening’ and related incidents, and tell them their odds of a similar incident are reduced if they go to Signal/TextSecure/etc., and even further reduced if they go decentralized and communicate with XMPP/Jabber and OTR (e.g. ChatSecure). Ask them how they’d feel if you could see their messages if that happened ;). Chances are there’s something they do consider private, but they don’t really think about it till you mention their parents might see it (it may seem hyperbolic, but hacks are often random). Something that doesn’t store a chat history on a middle server is always better.

BTW, in the same vein (or similar, anyway), remind them that chances are you know their security questions/answers. This might get them to actually choose something harder to guess (than something their friends also know… high school is cruel; help them protect themselves from bullies and online harassment). It’s always nice when a parent wants to help a kid learn good habits. (And btw, sorry for the Trump comment; it wasn’t directed at you, just offhand, and I didn’t have time for a proper reply.)