On a bright April morning in Menlo Park, California, I became an Internet spy.

This was easier than it sounds because I had a willing target. I had partnered with National Public Radio (NPR) tech correspondent Steve Henn for an experiment in Internet surveillance. For one week, while Henn researched a story, he allowed himself to be watched—acting as a stand-in, in effect, for everyone who uses Internet-connected devices. How much of our lives do we really reveal simply by going online?

Henn let me into his Silicon Valley home and ushered me into his office with a cup of coffee. Waiting for me there was the key tool of my new trade: a metal-and-plastic box that resembled nothing more threatening than an unlabeled Wi-Fi router. This was the PwnPlug R2, a piece of professional penetration testing gear designed by Pwnie Express CTO Dave Porcello and his team and on loan to us for this project.

NPR's Steve Henn in his home office and studio, with the Pwnie Express PwnPlug R2 that collected his Internet traffic for a week.

The box would soon sink its teeth into the Internet traffic from Henn's home computer and smartphone, silently gobbling up every morsel of data and spitting it surreptitiously out of Henn's home network for our later analysis. With its help, we would create a pint-sized version of the Internet surveillance infrastructure used by the National Security Agency. Henn would serve as a proxy for Internet users, Porcello would become our one-man equivalent of the NSA’s Special Source Operations department, and I would become Henn's personal NSA analyst.

As Henn cleared a spot on his desk for the PwnPlug, he joked that it might not provide anything useful for us to analyze. In the year since Edward Snowden pulled back the curtain of secrecy around the NSA’s dragnet surveillance programs, many of the major Internet service providers targeted by the spy agency have publicly announced plans to better protect customers, often through the expanded use of encryption.

Our experiment would answer the question: could a passive observer of Internet traffic still learn much about a target in this post-Snowden world?

Henn dialed up Porcello and put him on speakerphone as we finalized the location and setup of the PwnPlug. As I snapped in an Ethernet cable, Henn turned on his iPhone and connected to the PwnPlug’s Wi-Fi network. Porcello watched remotely as data from Henn's network suddenly poured into a specially configured Pwnie Express server.

“Whoa,” Porcello said. “Yep, there’s Yahoo, NPR... there’s an HTTP request to Google... the phone is checking for an update. Wow, there’s a lot of stuff going on here. It's just thousands and thousands of pages of stuff... Are you sure you’re not opening any apps?”

Henn checked his phone and found that Mail, Notes, Safari, Maps, Calendar, Messages, Twitter, and Facebook were running in the background—and making connections to the Internet. The Safari Web browser proved the most revealing. Like most people who use the iPhone, Henn had left open dozens of websites; when his phone had connected to the PwnPlug’s network, the browser had refreshed them, revealing movies he was checking out for his kids, a weather report, and research he was doing for work.

In the first two minutes of our test, we had already captured a snapshot of Henn’s recent online life—and the real surveillance hadn't even begun.

Your own personal NSA

While the NSA runs hundreds of surveillance programs, its broad, passive surveillance of the Internet has just two key components: Turbulence, a network monitoring system that skims traffic from the Internet’s fiber-optic backbone, and XKeyscore, an analytics database that processes the captured traffic, using rules that look for specific strings of text or patterns in data (e-mail addresses, phone numbers, file attachments). According to leaked NSA documents and whistleblower testimony, pieces of both Turbulence and XKeyscore are scattered about the world near Internet chokepoints such as the infamous “secret room” at AT&T’s San Francisco offices that has been described by former AT&T employee Mark Klein.

To recreate this setup in miniature, the PwnPlug in Henn’s office was configured as a Wi-Fi access point; it acted as our equivalent of the NSA’s Turbulence. While the PwnPlug is generally used for network penetration testing, Porcello configured the device used in our test only to intercept traffic outbound to or inbound from the Internet, not traffic that began and ended on Henn's home network. The device captured every packet matching these criteria and sent it over a secure SSH connection back to a server at Pwnie Express headquarters in Berlin, Vermont.
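The boundary filter Porcello describes—capture traffic that crosses between the home network and the Internet, ignore traffic that stays local—can be sketched in a few lines. This is a minimal illustration; the home subnet and the per-packet check are assumptions for the example, not the PwnPlug's actual configuration:

```python
import ipaddress

# Hypothetical home LAN; the real PwnPlug configuration is not public.
HOME_NET = ipaddress.ip_network("192.168.1.0/24")

def crosses_boundary(src: str, dst: str) -> bool:
    """True if a packet goes to or comes from the Internet,
    i.e. exactly one endpoint is inside the home network."""
    src_local = ipaddress.ip_address(src) in HOME_NET
    dst_local = ipaddress.ip_address(dst) in HOME_NET
    return src_local != dst_local

# Internet-bound traffic is captured...
print(crosses_boundary("192.168.1.23", "8.8.8.8"))       # True
# ...while purely local traffic is ignored.
print(crosses_boundary("192.168.1.23", "192.168.1.50"))  # False
```

A real capture device applies this test per packet at the network interface; anything that passes is written out and forwarded over the SSH tunnel for analysis.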

The remote machine at Pwnie acted as our diminutive version of XKeyscore. To emulate the NSA's processing of captured traffic, Porcello ran a number of open source analytics tools against Henn's traffic, including the ngrep packet search tool, the tshark and Wireshark traffic analysis tools, the tcpflow data stream capture tool, the dsniff suite’s passive monitoring tools, and tcpxtract for capturing files within Internet traffic.
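The common thread in these tools is pattern matching over raw payloads. Below is a toy version of the ngrep-style step, scanning captured payloads for e-mail addresses; the selector regex and sample traffic are illustrative stand-ins, not actual XKeyscore rules:

```python
import re

# A crude e-mail-address selector, in the spirit of an XKeyscore rule.
SELECTOR = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+")

def grep_payloads(payloads):
    """Scan raw packet payloads and return every selector hit."""
    hits = []
    for data in payloads:
        hits.extend(SELECTOR.findall(data))
    return hits

traffic = [
    b"GET /inbox HTTP/1.1\r\nHost: mail.example.com\r\n",
    b"From: reporter@example.org\r\nTo: source@example.net\r\n",
]
print(grep_payloads(traffic))  # [b'reporter@example.org', b'source@example.net']
```

The real pipeline first reassembles TCP streams (tcpflow's job) so that matches spanning packet boundaries aren't missed, then runs selectors like this over the reassembled data.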

For more than a month before the experiment began, Ars Technica and NPR made technical and legal preparations to ensure that any data captured from Henn would be handled with confidentiality and care. The focus would be solely on Henn’s personal online activities; we explicitly did not attempt to penetrate NPR’s corporate network, to hack Henn’s computer or phone, or to grab traffic from Henn's other family members. We would simply watch the traffic passing between our test Wi-Fi network and the Internet in the same way that the NSA collects data from millions of Internet users around the world each day.

Our full access to Henn's activities lasted for several days while he reported a single story. To make Henn as accurate a proxy as possible for the average unsuspecting Internet user, one condition stipulated for the test was that when the PwnPlug was active, Henn wouldn’t take extra measures to avoid surveillance (though he followed his normal operational security protocols). Henn could also pull the plug on our test at any time.

The experiment unfolded in two phases. In the first, we simply observed Henn’s normal Internet traffic. In the second, Henn, Porcello, and I stopped the broad surveillance of Henn and turned our tools on specific traffic created by leading Web applications and services. Here's what we found.

Porcello found that both iOS apps and system updates appeared to be delivered to devices as unencrypted .zip files. Google Play Store content and apps and Android OS updates are also delivered unencrypted.

What the hell?

Sean, do you know whether these downloads are at least authenticated?

A phone platform's app store is the first link in the chain for security. If your local spy agency can replace your apps as you download them, then finding supposedly secure apps to download is meaningless. If the downloads are not authenticated, then the only way to remain (kinda, sorta) secure is to never add any apps to your phone, because every new app is potentially malware. A spy agency (heck, even an ISP) could simply inject a keylogger into every app download they detect on the networks they have control over.

App = vector

Sure, they (this includes your ISP and anyone in the forwarding path) can replace apps as you download them, but they cannot sign them with a private key that the phone has the public key for. You could potentially push a package that exploits a flaw in the parsing code and gets remote code execution without the private signing key, but you can attempt that with any data that goes into the phone.
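As the posts above note, authenticated downloads are what stop an in-path attacker from swapping in malware. Real app stores use asymmetric signatures; the sketch below shows the simpler but related idea of checking a download against a pinned SHA-256 digest, so that any modification in transit is detected. The payload and digest here are made up for the example:

```python
import hashlib
import hmac

# Digest the publisher is assumed to have distributed out of band.
EXPECTED_SHA256 = hashlib.sha256(b"genuine app bytes").hexdigest()

def verify_download(data: bytes, expected_hex: str) -> bool:
    """Reject the download unless its SHA-256 matches the pinned digest."""
    actual = hashlib.sha256(data).hexdigest()
    # compare_digest does a constant-time comparison.
    return hmac.compare_digest(actual, expected_hex)

print(verify_download(b"genuine app bytes", EXPECTED_SHA256))                 # True
print(verify_download(b"genuine app bytes" + b"keylogger", EXPECTED_SHA256))  # False
```

Unlike a true signature, a pinned digest must itself arrive over a trusted channel; the asymmetric scheme described in the comment above removes that requirement, since only the public key needs to be pre-installed on the phone.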

I think that's a little excessive. The Feds just compel for the keys if they have a target.

Other than Lavabit, are there any cases where major companies such as Microsoft, Google or Apple have been compelled to release their private signing keys? I don't believe so.

If they were under a gag order, would we even know about it? Multi-billion dollar companies are a bit less willing to break the law to serve conscience than a one-man company.

"We reached out to Google for comment. It turns out that we had found a bug in Google search—one that Google has since corrected."

Rather amazes me that a huge, wealthy, technically savvy company like Google doesn't perform or contract for penetration testing and analysis to catch things like what the author's experiment uncovered.

Let me take that a step further.

Apple will book about $182 billion of revenues this year; Amazon $91 billion; Microsoft $85 billion; Google $52 billion. To a greater or lesser extent, these companies' PR Departments all make a Big Deal® about protecting user information from predation by various black hats and unwanted government intrusion. And yet, a rather straightforward investigation (not to belittle the fine article in any way) finds a host of bugs, insecurities and “oopsies!”

The shocking thing about the OpenSSL Heartbleed bug was that such a high fraction of the world's commerce depends on a tip jar on the desks of a couple of over-dedicated volunteers. The shocking thing about THIS article is that with half a trillion dollars at stake from just a small handful of household names, the security efforts get so carried away with hypothetical or advanced exploits that nobody bothers to check whether the basement door is locked at night.

I may be wrong, but I believe Ars provides HTTPS for its subscribers.

We're actively working toward making SSL/TLS usable on Ars. As noted, subscribers can already toggle it on. Rolling it out for everyone will have to wait until we can do it without throwing mixed content warnings, and that is taking a considerable amount of time to do because it involves working with our ad folks in NYC and getting buy-in from all of the different sources that come together when an Ars page is rendered.

We'll get there, but we're not there yet and we have no timeframe. As Aurich noted a few months ago when this last came up in the feedback forum—and as Sean's piece here aptly demonstrates—doing it right is far more important than slapping an 80% solution together and calling it done.

My first response to the question of Ars encryption was, “If I REALLY cared about the privacy of my beliefs/interests, I wouldn't be a subscriber and I wouldn't be posting.”

My second response was, “Hmmm… I wonder what tracking Ars' ad services perform?” As a subscriber, I don't see these ads, but I'm regularly amused that because Google is the home page on my employer's IE, typing "delta" gets hijacked into a search that actually throws up a barrier to my getting to delta.com (ads for Delta airlines fill most of the screen), versus how my personal laptop/Safari immediately fills in the frequently used URL.

Every such incident provokes a raft of ads for the airline that I use 50X/year. (A popup just says it's time to leave the clubroom & head for the gate.) Ditto the search I did at B&H Photo for outdoor speakers; the models I looked at were in the ad at NYT.com.

Any web site that depends on ad revenues is incentivized to share as much personal info as possible, the better to get more ad revenue from the agencies. I hope Ars can write up what it takes to preserve its members' and users' personal info.

It seems from the number of original bugs you discovered in this limited test that this is not really an area of focus for many of these services. What are the limitations to more widespread testing and securing of these services?

I would definitely like to see a follow-up article on what we can do to prevent data leakage. A VPN will provide some security, but are there other options? And is a VPN a perfect solution?

...

A VPN will secure the data stream between you and the VPN server, ensuring that everything between those two points is encrypted. That provides a certain amount of protection against snooping on your local network or from your own ISP, as well as putting up a bit of a roadblock if anyone wants to try to trace your traffic back to you. I don't think it does a whole lot to protect you against NSA-type surveillance. They're not scanning the traffic on your local network; they're scanning Internet backbones and grabbing your data outside the protective tunnel of your VPN, and the tools they employ to identify your traffic, such as analyzing various cookies like the Google PREF cookie, also function outside the protection of your VPN.

That doesn't mean you shouldn't get a VPN. It's a valuable security tool, but I wouldn't assume that I was any kind of "safe" from government surveillance when using it.
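The cookie point is easy to demonstrate: over plain HTTP, identifiers such as Google's PREF cookie travel in a cleartext header that any observer past the VPN exit can read. A sketch of pulling cookies out of a raw request; the request and cookie value here are fabricated:

```python
def extract_cookies(raw_request: bytes) -> dict:
    """Pull the Cookie header out of a cleartext HTTP request."""
    cookies = {}
    for line in raw_request.split(b"\r\n"):
        if line.lower().startswith(b"cookie:"):
            for pair in line.split(b":", 1)[1].split(b";"):
                name, _, value = pair.strip().partition(b"=")
                cookies[name.decode()] = value.decode()
    return cookies

request = (b"GET /search?q=delta HTTP/1.1\r\n"
           b"Host: www.example-search.com\r\n"
           b"Cookie: PREF=ID=deadbeef1234:TM=1400000000\r\n\r\n")
print(extract_cookies(request))  # {'PREF': 'ID=deadbeef1234:TM=1400000000'}
```

An observer who sees the same identifier across many requests can link them all to one user, regardless of how the packets were tunneled earlier in their journey.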

I never use the Wi-Fi on my BlackBerry Q10 except at home, through my own network. I never use Internet Explorer, I use Firefox, and the OS on all but one of my computers is Linux. I'd love to know other people's experience, and especially the author's, with these attempts at home and mobile security. My mobile carrier is Sprint and my Internet connection is through AT&T U-verse. Thanks, Tony Renier

Yeah, yeah, I know it has nothing to do with the story. It just popped into my head and I was all like click, click, type type, aaand-send. No, it's not an attack on the National People's Republic of Radio. I kid, I kid. I've been a listener to public radio/NPR since I was fifteen ("What, for like two years, you noob? Go back to Instagram!" in the voice of South Park). No, longer than two years, but less than half the age of the sum of all pledge drives since 1975 for one radio station. But seriously, I've been a listener forever. It's all in good fun.

I'm surprised someone was surprised that commodity VoIP is generally in the clear. I don't even know of any common personal offerings that encrypt the audio streams. I'd bet those phones you see on Ars staffers' desks are sending audio in the clear as well.

A trip through wireshark's "decode audio" menu is scary the first few times you try it.
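Wireshark's "decode audio" feature is, at bottom, RTP reassembly followed by a G.711 codec pass over each payload byte. Below is a minimal mu-law (PCMU) decoder, which covers the most common case for unencrypted SIP calls; it is a sketch of the standard decode formula, not Wireshark's actual implementation:

```python
def ulaw_to_linear(byte: int) -> int:
    """Decode one G.711 mu-law byte to a 16-bit linear PCM sample."""
    byte = ~byte & 0xFF            # mu-law bytes are stored complemented
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    sample = ((mantissa << 3) + 0x84) << exponent
    sample -= 0x84                 # remove the decoder bias
    return -sample if sign else sample

# Silence encodes as 0xFF; the extremes map to the 16-bit limits.
print(ulaw_to_linear(0xFF))  # 0
print(ulaw_to_linear(0x00))  # -32124
```

Run every RTP payload byte through this and you have raw PCM audio, which is exactly why an unencrypted VoIP stream is a wiretap waiting to happen.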

A professional VoIP installation would have encryption. But some ad hoc multiuser free conference calling scheme probably wouldn't go the extra mile.

“Unencrypted VoIP calls from an app. While Uberconference only provided us one of Henn's phone numbers in the clear, Dave Porcello tested another VoIP app called RingCentral and found that it left everything unencrypted, including the call itself. Porcello was able to extract the full audio of a call from an iPhone’s Internet traffic—and says he won't be using that particular app anymore.”

Holy Freaking Cow! RingCentral sends calls in the clear? So essentially every two bit hacker with a packet sniffer can wiretap calls! Well that is a problem.

I’ve worked with several professional VoIP/SIP providers. (E.g., we were registered as a telephone company and made our living by selling VoIP services that interoperated with the PSTN.) SIP signaling/call setup information is almost always sent in the clear, as is VoIP media/audio (RTP). (The password is encrypted by hashing it with a server-provided salt, so account/billing security is taken care of.)

Encrypting the SIP signaling between a PBX/phone and the provider would be a trivial matter. However, most people are concerned with media/audio, and encrypting that probably won’t happen for a while. Why? People want VoIP to interoperate with the PSTN. Most VoIP providers have several upstream providers to which they have to terminate the traffic. Those upstream providers want and will only accept unencrypted RTP streams, so most VoIP providers won’t encrypt RTP, since they’d just have to decrypt it at the proxy server to send it unencrypted to the upstream providers. This is extra processing power and is more computationally intensive than just relaying the RTP stream to another provider.

Of course it is possible in theory to set up an infrastructure where each host that touches an RTP stream decrypts it with its private key and encrypts it with its upstream neighbor’s public key. But I’d be surprised if that would happen. This is a commodity business; people want low costs, and anything that adds to a provider’s cost is something that they’ll avoid, since people won’t pay for it. (Yes, some geeks will pay for it, but not enough of them.)

Oh, implementing a system where each telephone number/end point has a public/private key that the sender encrypts audio with to the endpoint would be illegal in the US. Federal law requires that providers be able to provide raw unencrypted audio.

A little late here, but I'd really like to see what you can see of Apple's encrypted texts (iPhone to iThing), and whether adding attachments (photos, which we know the NSA is mining from texts normally) gets encrypted as well.

And see what you can find from Apple's encrypted video chats?

This is what you can get without breaking encryption. Padding would prevent most of this, but that's expensive.
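Padding works by making many different plaintext lengths produce the same observable ciphertext size. A sketch that rounds message lengths up to fixed buckets; the bucket size is an arbitrary choice for illustration:

```python
BUCKET = 256  # bytes; an arbitrary illustrative bucket size

def pad_length(n: int) -> int:
    """Smallest multiple of BUCKET that holds an n-byte message."""
    return ((n + BUCKET - 1) // BUCKET) * BUCKET

# Distinct message sizes collapse into one observable size...
print(pad_length(12), pad_length(200), pad_length(256))  # 256 256 256
# ...at the cost of transmitting up to BUCKET-1 wasted bytes each.
print(pad_length(257))  # 512
```

That wasted bandwidth is the "expensive" part: every short message still costs a full bucket on the wire, which is why most protocols don't pad by default.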

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.