from the your-privacy-preferences-now-mean-absolutely-nothing dept

A few months ago, we noted how Verizon and AT&T were at the bleeding edge of the use of new "stealth" supercookies that can track a subscriber's web activity and location, and can't be disabled via browser settings. Though Verizon had been doing this for two years, security researchers only recently noticed that the company was actively modifying its wireless users' traffic to embed a unique identifier traffic header, or X-UIDH. This identifier effectively broadcasts user details to any website a subscriber visits, and the opt-out settings for the technology only stopped users from receiving customized ads -- not the traffic modification and tracking.
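The mechanics are simple enough to sketch: because the header is added by the carrier's network in transit, any receiving web server sees it like an ordinary HTTP header, no cookies required. Below is a minimal, illustrative Python sketch; the helper function, sample header value, and request dictionaries are assumptions for demonstration, not Verizon's actual values.

```python
# Illustrative sketch: what a third-party site sees when a carrier
# appends an X-UIDH header to an unencrypted HTTP request in transit.
# The sample value and helper are assumptions, not real Verizon data.

def extract_carrier_id(headers):
    """Return the injected tracking ID, if the carrier added one."""
    # Header names are case-insensitive in real HTTP stacks.
    for name, value in headers.items():
        if name.lower() == "x-uidh":
            return value
    return None

# A request as it leaves the phone has no such header...
original = {"Host": "example.com", "User-Agent": "Mobile Safari"}
# ...but the carrier's network rewrites it before it reaches the site.
modified = dict(original, **{"X-UIDH": "OTgxNTk2NDk0ADJVquRu5NS5irSbBANlrp"})

assert extract_carrier_id(original) is None
assert extract_carrier_id(modified) is not None
```

Note that the browser never sees the header at all, which is why browser settings and cookie controls can't remove it.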

AT&T responded to the fracas by claiming it was only conducting a trial, one AT&T has since claimed to have terminated. Verizon responded by insisting that the unique identifier was rotated on a weekly basis (something researchers found wasn't true) and that the data was perfectly anonymous (though as we've long noted, anonymous data sets are never really anonymous). While security researchers noted that third-party websites could use this identifier to build profiles of users without their consent, Verizon's website insisted that "it is unlikely that sites and ad entities will attempt to build customer profiles" using these identifiers.

As such, you'll surely be shocked to learn that sites and ad entities are building customer profiles using these identifiers.

Not only that, they're using the system to resurrect deleted tracking cookies and share them with advertising partners, making consumer opt-out preferences moot. According to security researcher Jonathan Mayer (and tested and confirmed by ProPublica), an online advertising clearinghouse by the name of Turn has been using Verizon's modifications when auctioning ad placement to websites like Google, Facebook and Yahoo for some time. When asked, Verizon pretends this is news to the company:

"When asked about Turn's use of the Verizon number to respawn tracking cookies, a Verizon spokeswoman said, "We're reviewing the information you shared and will evaluate and take appropriate measures to address." Turn privacy officer Ochoa said that his company had conversations with Verizon about Turn's use of the Verizon tracking number and said "they were quite satisfied."

Like Verizon's implementation of the program, Turn lets users opt out of receiving targeted ads, but users have no real way of opting out of being tracked or having their packets manipulated without their consent. As the EFF notes, your only options are to route all of your traffic through a VPN, or to use a browser add-on like AdBlock -- which doesn't fully address the issues with the use of a UIDH header. Amusingly, Turn tried to claim to ProPublica that it's actually using Verizon's UIDH to respect users' behavioral ad opt-out preferences, but the publication repeatedly found that this wasn't working:

"Initially, Turn officials also told ProPublica that its zombie cookie had a benefit for users: They said they were using the Verizon number to keep track of people who installed the Turn opt-out cookie, so that if they mistakenly deleted it, Turn could continue to honor their decisions to opt out. But when ProPublica tested that claim on the industry's opt-out system, we found that it did not show Verizon users as opted out. Turn subsequently contacted us to say it had fixed what it said was a glitch, but our tests did not show it had been fixed."

Even if Turn's being honest, there are plenty of companies that aren't going to bother being ethical. Verizon, which in 2008 insisted that consumer privacy protections weren't necessary because public shame would keep them honest, pretty clearly isn't interested in stopping the practice without legal or regulatory intervention. So yeah, again, we've got a new type of supercookie that tracks everything you do, can't be opted out of, and is turning consumer privacy completely on its ear, but there's absolutely nothing here you need to worry your pretty little head about.

from the trust-us,-we're-the-auto-industry dept

Hoping to assuage growing fears that vehicle data will be abused, nineteen automakers recently got together and agreed to a set of voluntary principles they insist will protect consumer privacy in the new smart car age. Automakers promise that the principles, delivered in a letter to the FTC (pdf), require that they "implement reasonable measures" to protect collected consumer data, both now and as the industry works toward car-to-car communications. The principles "demonstrate the industry's commitment to its customers" and "reflect a major step in protecting consumer information," insists the industry.

Should you bother to actually read the principles, the promised revolution in privacy protection quickly becomes less apparent. While the principles do require that automakers clearly communicate with customers (and by clear they mean "hey, here's some fine print saying we're selling your location data now"), many don't appear to actually do much of anything. Like this particular gem:

"Data Minimization, De-Identification & Retention: Participating Members commit to collecting Covered Information only as needed for legitimate business purposes. Participating Members commit to retaining Covered Information no longer than they determine necessary for legitimate business purposes."

With "legitimate business purposes" being whatever they see fit, that doesn't mean much. Similarly, the industry's "groundbreaking" promises are also heavily peppered with the ambiguous word "reasonable," which can of course mean whatever they'd like it to mean:

"Participating Members commit to implementing reasonable measures to protect Covered Information against unauthorized access or use."

Aka: we'll make some kind of ambiguous effort to secure your data. As with most efforts of this type, the goal is to preempt government from crafting new (or enforcing existing) privacy protections as the industry moves into more aggressive ways of monetizing location data. Said promises unsurprisingly aren't easing the worries of safety and privacy advocates alike as we move into the vehicle black box age, notes the Associated Press:

"Industry officials say they oppose federal legislation to require privacy protections, saying that would be too "prescriptive." But Marc Rotenberg, executive director of the Electronic Privacy Information Center, said legislation is needed to ensure automakers don't back off the principles when they become inconvenient. "You just don't want your car spying on you," he said. "That's the practical consequence of a lot of the new technologies that are being built into cars."

With many parts of this technology DRM locked, users won't have much control over or access to their own data (something the EFF is trying to fix with their latest slate of DMCA exemption requests). It's also worth noting this supposed circle of automotive trust was already quite rusted before cars became more intelligent; most car dealerships and garages are paid by Carfax to report vehicle mileage and accident repair, with Carfax in turn being paid for that data by insurance companies. Similarly most of the in-car infotainment systems rely on cellular chipsets from companies like AT&T and Verizon, who quite happily sell any and all location data that isn't nailed down, and consistently experiment with creative new privacy violations (despite very similar promises they'd be on their best behavior).

So while it's very sweet that the auto industry is promising to respect your privacy as it pushes into brave new data snoopvertising and location data tracking territory, like so many self-regulatory promises before them, these likely aren't worth the paper they're printed on.

from the you're-the-product----and-the-guinea-pig dept

As we noted a few weeks ago, Verizon and AT&T recently began utilizing a controversial new snoopvertising method that involves meddling with user traffic to insert a unique identifier traffic header, or X-UIDH. This header is then read by marketing partners to track your behavior around the Internet, data Verizon and AT&T hope to sell to marketers and other third parties. In addition to the fact that they're modifying user traffic, these headers can also be read by third parties -- even if customers opt out of carrier-specific programs.

After the practice received heat from security experts and groups like the EFF, AT&T has announced it's backing away from it. AT&T insists that unlike Verizon (which has been using this technology commercially for two years with clients like Twitter), AT&T's implementation was only a trial. That trial is now complete, insists AT&T, and while the company may return to the practice, it promises the system will be modified so user information isn't broadcast and opting out actually works:

"AT&T says it has stopped its controversial practice of adding a hidden, undeletable tracking number to its mobile customers' Internet activity. "It has been phased off our network," said Emily J. Edmonds, an AT&T spokeswoman....AT&T said it used the tracking numbers as part of a test, which it has now completed. Edmonds said AT&T may still launch a program to sell data collected by its tracking number, but that if and when it does, "customers will be able to opt out of the ad program and not have the numeric code inserted on their device."

The EFF confirms that the appearance of the header has indeed declined on AT&T's network. But while AT&T appears to have smelled the looming lawsuit on the wind, Verizon has so far stood tough on its use of the technology. Verizon says the program continues, but, as with any program, the company is "constantly evaluating." Years ago, when Verizon was fighting tougher privacy rules, the company proclaimed that "public shame" would keep it honest.

This particular privacy abuse took two years for savvy network engineers and security consultants to even spot, and so far there's no indication that two weeks of public scolding have done anything to thwart Verizon's ambitions. Cue the class actions and regulatory wrist slaps.

from the privacy-schmivacy dept

Over the last few years, Verizon has been ramping up its behavioral tracking efforts via programs like Verizon Selects and its Relevant Mobile Ad system, which track wireless and wireline subscribers' web behavior to deliver tailored ads and sell your information to third parties. Unknown until a few weeks ago, however, was the fact that as part of this initiative, Verizon has started using what many are calling controversial "stealth," "super" or "perma" cookies that track a user's online behavior covertly, without users being able to disable them via browser settings.

Lawyer and Stanford computer scientist Jonathan Mayer offered up an excellent analysis noting that Verizon was actively modifying its users' traffic to embed a unique identifier traffic header, or X-UIDH. This header is then read by marketing partners (or hey, anybody, since it's stamped on all of your traffic) who can then build a handy profile of you. It's a rather ham-fisted approach, argues Mayer, who notes that while you can opt out of Verizon selling your data, you can't opt out of having your traffic embedded with the unique identifier. He also offered up a handy graphic detailing precisely how these headers work.

As the story grew the last few weeks, ProPublica noted that Twitter's mobile advertising arm is already one of several clients using Verizon's "header enrichment" system, though Twitter didn't much want to talk about it. Several tools like this one have popped up since, allowing users to test their wireless connections (note it doesn't work if your cellular device is connected to Wi-Fi, and may be masked by the use of Google Mobile Chrome, Opera Mini, or if viewed through apps like Flipboard).

Kashmir Hill at Forbes also has a great article exploring the ramifications of the system and asked Verizon and AT&T (who has started trials of a similar system) what consumer protections are in place. Both companies proclaimed that the characters in their headers are rotated on a weekly and daily basis to protect user information. But as we've noted time and time again, there's really no such thing as an anonymized data set, and security consultant Ken White argues that only part of the data in the headers is modified, if at all:

"White has been tracked for the past 6 days across 550 miles with a persistent code from both Verizon and AT&T. He has a smartphone with Verizon service and a hotspot with AT&T service. In AT&T’s case, the code has four parts; only one part changes, he says. “It’s like if you were identified by a birth month, a birth year, a birth day, and a zip code, and they remove one of those things,” said White. You’d still be able to reasonably track that person with the other three. Verizon’s code meanwhile hasn’t changed for him, and it’s been almost a week."
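White's analogy holds up under rough arithmetic. The quick, illustrative calculation below (the population and field sizes are assumptions for demonstration, not measured figures) shows why removing just one quasi-identifier barely helps:

```python
# Rough arithmetic behind White's analogy: removing one quasi-identifier
# from a set barely reduces how identifying the rest are. Population
# and field sizes below are illustrative assumptions.

population   = 320_000_000   # roughly the U.S. population
birth_days   = 31
birth_months = 12
birth_years  = 80            # plausible age span
zip_codes    = 42_000        # approximate count of U.S. ZIP codes

# With all four fields, possible combinations vastly outnumber people,
# so a full birth date plus ZIP is effectively unique.
all_four = birth_days * birth_months * birth_years * zip_codes
assert population / all_four < 1   # well under one person per bucket

# Drop one field (birth day), as in White's "remove one of those things":
three = birth_months * birth_years * zip_codes
assert population / three < 10     # still only a handful of people per bucket
```

In other words, changing only one of four code segments still leaves each header value pointing at a small enough group to be tracked reliably.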

This is worth contrasting with Verizon's own past arguments against privacy regulation:

"A couple of years back during the debate on net neutrality, I made the argument that industry leadership through some form of oversight/self-regulatory model, coupled with competition and the extensive oversight provided by literally hundreds of thousands of sophisticated online users would help ensure effective enforcement of good practices and protect consumers."

Yet here we have an example where the behavior Verizon was engaged in was so surreptitious, even some of the best networking and security experts in the business didn't notice Verizon was doing it until two years after the effort was launched. Apparently, holding Verizon accountable is going to take a little more than a public scolding in the town square. The EFF has stated they're taking a look at possible legal action against Verizon for violating consumer privacy law.

from the our-post-cookie-era dept

ProPublica has a new story about the rise of "canvas fingerprinting," a new method of tracking users without using cookies. It's a method that is apparently quite difficult to block if you're using anything other than Tor Browser. In short, canvas fingerprinting works by sending some instructions to your browser to draw a hidden image -- but does so in a manner making use of some of the unique features of your computer, such that each resulting image is likely to be unique (or nearly unique). The key issue here is that the popular "social sharing" company AddThis, which many sites (note: not ours) use to add "social" buttons to their website, had been experimenting with canvas fingerprinting to identify users even if they don't use cookies. As ProPublica's Julia Angwin notes, it's very difficult to block this kind of thing -- and tons of sites make use of AddThis -- including WhiteHouse.gov (whose privacy policy does not seem to reveal this, saying it only uses Google Analytics as a third party provider).
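The core trick is easy to simulate. A real fingerprinting script draws hidden text to an HTML canvas, reads back the pixel data, and hashes it; subtle GPU, driver, and font-rendering differences make that hash stable per machine but different across machines. The Python sketch below stands in the rendered pixels with byte strings (an assumption for illustration), since the hashing step is what produces the cookie-free identifier:

```python
# Conceptual sketch of canvas fingerprinting, with the browser's canvas
# output simulated as byte strings. In a real attack, a script draws
# hidden text to an HTML canvas and hashes the pixels it reads back;
# the simulated pixel data below is an illustrative assumption.
import hashlib

def fingerprint(pixel_bytes: bytes) -> str:
    # No cookie is stored: the hash is recomputed on every visit
    # and comes out the same on the same machine.
    return hashlib.sha256(pixel_bytes).hexdigest()

# Two machines rendering identical drawing instructions produce subtly
# different pixels (simulated here), hence different stable identifiers.
machine_a = b"rendered: 'Cwm fjordbank glyphs' antialias=0.49 gpu=X"
machine_b = b"rendered: 'Cwm fjordbank glyphs' antialias=0.51 gpu=Y"

assert fingerprint(machine_a) != fingerprint(machine_b)
assert fingerprint(machine_a) == fingerprint(machine_a)  # stable per machine
```

This is also why the technique is so hard to block: there's nothing to delete, and blocking canvas reads outright breaks legitimate sites.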

The report does note that others who have tried canvas fingerprinting have found it's not necessarily accurate enough yet, though the technology appears to keep getting better. Still, AddThis says it's likely to drop the technique anyway, precisely because it's not accurate enough:

AddThis said it rolled out the feature to a small portion of the 13 million websites on which its technology appears, but is considering ending its test soon. “It’s not uniquely identifying enough,” Harris said.

AddThis did not notify the websites on which the code was placed because “we conduct R&D projects in live environments to get the best results from testing,” according to a spokeswoman.

The company also insisted it wasn't doing anything bad with the tracking, but even if you believe that's true, how long will it be until others make use of similar fingerprinting for more questionable behavior?

Given the attention this is getting, hopefully browsers will at least roll out features that give users more notification of and control over such practices. Cookies are hardly a perfect solution, but at least users have control over them.

from the government-states-black-toner-shortage-as-primary-motivator dept

Back in January 2013, the ACLU managed to pry loose two secret memos on the FBI's GPS tracking from the DOJ with a FOIA request. The only problem was that the request didn't actually free much information. The responsive documents consisted of a few scattered paragraphs … and 111 pages of black ink.

The ACLU objected to this mockery of the words "freedom" and "information," noting that secret interpretations of existing laws are exactly the sort of thing the Freedom of Information Act was designed to discourage, not protect. So, the ACLU sued the government in hopes of being given something a little less redacted.

Yesterday, a federal district court ruled that the Justice Department does not need to disclose two secret memos providing guidance to federal prosecutors and investigators regarding the use of GPS devices and other location tracking technologies…

The Justice Department drafted the memos to address those open questions, but it claimed in court that it should not have to turn them over because they contain attorney work-product and sensitive law enforcement information. The district court disagreed in part, holding that government guidelines for the use of GPS tracking do not qualify as sensitive law enforcement information, because “Law enforcement’s use of GPS tracking is well known by the public.” But it concluded that the government may nevertheless keep the guidelines secret, on the ground that the results of DOJ’s reasoning “will be borne out in the courts.”

The documents apparently contain the DOJ's arguments for warrantless GPS tracking, but the American public won't be allowed to find out anything about the government's justification. Instead, the court has decided to take a hands-off approach and "allow" defendants to "discover" these arguments as they're presented in court. Or not, if the government decides its arguments are too super-sensitive to be released and pushes to present these justifications under seal.

As the ACLU notes, it's only because the FBI's general counsel spoke of these two documents during a panel discussion at the University of San Francisco that anyone even knows the secret memos exist and what they contain. Until these arguments are tested in court, the government is free to determine how much privacy Americans are entitled to with regard to GPS location tracking. Fortunately, there are a few legislators exploring other options.

Senator Ron Wyden (D-OR) and Representative Jason Chaffetz (R-UT) have asked Attorney General Eric Holder to release the documents, reminding the attorney general that “there is no room in American democracy for secret interpretation of public law…" And if you want to skip right over interpretations of the law and get behind a strong Congressional fix, you can support legislation mandating a warrant for all location tracking here.

On a related note, the FBI has filed a motion for summary judgment in its legal battle over NGI (Next Generation Identification) documents sought in an EFF FOIA request. The agency is trying to keep more privacy-related information out of the public's hands, including more details on its facial recognition program and biometric database. The arguments deployed are largely familiar (releasing more info would allow criminals/terrorists to circumvent the new technology), but the end result (if the motion is granted) will be the same -- more secrecy for the government and less privacy for millions of Americans.

from the fourth-amendment...-we-hardly-knew-ye dept

More documents have been uncovered (via FOI requests) that show local law enforcement agencies in California have been operating cell phone tower spoofers (stingray devices) in complete secrecy and wholly unregulated.

Sacramento News10 has obtained documents from agencies in San Jose, Oakland, Los Angeles, San Francisco, Sacramento and Alameda County -- all of which point to stingray deployment. As has been the case in the past, the devices are acquired with DHS grants and put into use without oversight or guidelines to ensure privacy protections. The stingrays in use are mainly limited to collecting data, but as the ACLU points out, many manufacturers offer devices that also capture content.

Some of these agencies have had these devices for several years now. Documents obtained from the Oakland Police Dept. show the agency has had stingrays in use since at least 2007, citing 21 "stingray arrests" during that year. This is hardly a surprising development as the city has been pushing for a total surveillance network for years now, something that (until very recently) seemed to be more slowed by contractor ineptitude than growing public outrage.

The device manufacturer's (Harris) troubling non-disclosure agreement (which has been used to keep evidence of stingray usage out of court cases and has been deployed as an excuse for not securing warrants) rears its misshapen head again, mentioned both in one obtained document and by a spokesperson reached for comment. One document states:

"The Harris (REDACTED) equipment is proprietary and used for surveillance missions," the agreement reads. "Its capabilities can only be discussed with sworn law enforcement officers, the military or federal government. This equipment's capabilities are not for public knowledge and are protected under non-disclosure agreements as well as Title 18 USC 2512."

The Sacramento County Sheriff's Dept. had this to (not) say when asked about its stingray usage:

"While I am not familiar with what San Jose has said, my understanding is that the acquisition or use of this technology comes with a strict non-disclosure requirement," said Undersheriff James Lewis in an emailed statement. "Therefore it would be inappropriate for us to comment about any agency that may be using the technology."

Law enforcement agencies are conveniently choosing to believe a manufacturer's non-disclosure agreement trumps public interest or even their own protection of citizens' Fourth Amendment rights.

The devices aren't cheap, either. Taxpayers are shelling out hundreds of thousands of dollars for these cell tower spoofers, and the agencies acquiring them are doing very little to ensure the money is spent wisely. ACLU's examination of the documents shows that many of the agencies purchased devices without soliciting bids.

It's hard to know whether San José or any of the other agencies that have purchased stingray devices are getting good value for their money because the contract was "sole source," in other words, not put out to competitive bidding. The justification for skirting ordinary bidding processes is that Harris Corporation is the only manufacturer of this kind of device. (We are aware of other surveillance vendors that manufacture these devices, though a separate Freedom of Information Request we submitted to the Federal Communications Commission suggests that, as of June 2013, the only company to have obtained an equipment authorization from the FCC for this kind of device is Harris.)

With Harris effectively locking the market down, buyers are pretty much ensured prices far higher than the market would bear if opened to competition. (Not that I'm advocating for a robust surveillance device marketplace, but if you're going to spend taxpayers' money on products to spy on them, the least you can do is try to get the best value for their money…) Using federal grants also allows these departments to further avoid public scrutiny of the purchase and use by circumventing the normal acquisition process.

Beyond the obvious Fourth Amendment concerns looms the very real threat of mission creep. These agencies cite combating terrorism when applying for federal funds, but put the devices to use for ordinary law enforcement purposes. The documents cite stingray-related arrests, but since so little is known about the purchase, much less the deployment, there's really no way to tell how much data and content totally unrelated to criminal investigations has been collected (and held) by these agencies.

from the free-wifi dept

A few years ago, we wrote about why, for many years, whenever you were in a public place like an airport or a hotel, you'd often see an available WiFi option called Free Public WiFi -- though if you looked carefully, it was an ad hoc (computer-to-computer) network, rather than a WiFi access point. It turns out that this was because of a stupid bug in Windows XP, which also explains why it's a lot less common these days. Of course, some people always would joke that it was really spy agencies trying to get you to connect to their WiFi. Except, that might not be that much of a joke. The latest reporting on Snowden documents from Glenn Greenwald, in association with some reporters from the CBC, reveals that the Canadian equivalent of the NSA, the Communications Security Establishment Canada (CSEC), has been tracking people as they connect to WiFi networks in a variety of public places including airports, hotels, coffee shops and libraries. The main focus appears to be on airports, with that data then used to create a map of where someone goes.

CSEC's response was the standard denial:

“I can’t comment in detail on the intelligence operations or capabilities of ourselves or our allies. What I can tell you is that CSEC, under its legislation, cannot target Canadians anywhere in the world or anyone in Canada, including visitors to Canada.”

And yet, as this report shows, they absolutely are collecting tons of data on Canadians and visitors to Canada. And not just a few. The document shows that in a test that "swept a modest size city," they collected information on over 300,000 people. Also, they can then use that information to track where a person goes, creating profiles over time. The document reveals that they're just testing this capability (which was created in coordination with the NSA), but it indicates the plan is for all of the "Five Eyes" countries to use a similar system -- though the reporters say they've been told the system is now fully operational, and not just in test mode.
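A sketch of how sightings become a profile: once a device identifier (such as a WiFi MAC address) is logged at known hotspots, sorting the sightings by time yields a movement trail. The identifiers, locations, and timestamps below are illustrative assumptions, not anything from the leaked document:

```python
# Illustrative sketch of hotspot-based movement profiling: filter all
# logged sightings down to one device identifier, then sort by time.
# All identifiers, locations, and timestamps are made-up examples.

def movement_trail(sightings, device_id):
    """Return (location, timestamp) pairs for one device, in time order."""
    hits = [s for s in sightings if s[0] == device_id]
    hits.sort(key=lambda s: s[2])  # ISO timestamps sort lexically
    return [(loc, ts) for _, loc, ts in hits]

sightings = [
    ("aa:bb:cc:dd:ee:ff", "airport-wifi", "2012-05-01T08:10"),
    ("11:22:33:44:55:66", "library-wifi", "2012-05-01T09:00"),
    ("aa:bb:cc:dd:ee:ff", "hotel-wifi",   "2012-05-01T21:40"),
    ("aa:bb:cc:dd:ee:ff", "cafe-wifi",    "2012-05-01T12:05"),
]

trail = movement_trail(sightings, "aa:bb:cc:dd:ee:ff")
# Reconstructs airport -> cafe -> hotel purely from passive sightings.
assert [loc for loc, _ in trail] == ["airport-wifi", "cafe-wifi", "hotel-wifi"]
```

Scale that across every hotspot in a city and you get exactly the kind of per-person travel profile the document describes, with no cooperation from the device owner required.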

It's not clear from the document exactly how the CSEC is able to get this data. The CBC report questions a few potential sources, such as key airports and Boingo (the company that supplies WiFi to many public hotspots) and both deny providing the information. The Boingo denial seems reasonable, since in the presentation it actually indicates that they have trouble getting information on users on Boingo's network.

The reporters spoke to multiple experts who all say that there's no possible way that this effort is legal under Canadian law. After all, as Foster himself stated above, the CSEC cannot target anyone on Canadian soil, but they clearly do. The article even quotes Ontario's privacy commissioner Ann Cavoukian who seems horrified by this revelation:

Ontario's privacy commissioner Ann Cavoukian says she is "blown away" by the revelations.

"It is really unbelievable that CSEC would engage in that kind of surveillance of Canadians. Of us.

"I mean that could have been me at the airport walking around… This resembles the activities of a totalitarian state, not a free and open society."

For all the wonders of free WiFi, there's one more downside to keep in mind. If you're using it, you're almost certainly being tracked by a spy agency.

Of course, in between point A and point B, you have to imagine someone at the NSA rushed down to the FISA court seeking a Section 215 bulk "business records" order from every American car company for "mere metadata" on every driver in America, right? Just joking. Maybe.

Of course, even if Farley wasn't accurate in his initial statement, it's close enough to true anyway, since so many people carry mobile phones in their pockets, and those are easily tracked as well. In many cases, people are willing to accept the benefits of location information, but we don't have nearly enough transparency or knowledge about what's being done with that information, nor are we given the right to control or limit how it's shared or used.

In an age where so much information is shared with companies, those companies need to move to solutions that involve much greater transparency and control. Companies making use of your information need to start being upfront about the type of data they collect and how it's being used. The problem with the idea of Ford keeping track of which one of you has a lead foot isn't that this is possible. Everyone knew it was already possible. It had just been assumed that no one would actually do it. And that's the kind of thing that needs to change. Companies want to make use of our data, and sometimes it's for very useful purposes -- things that we're happy to get in exchange for the data. The problem is that too often, how the data is being used is hidden from us, and the "benefits" are not clearly laid out. Furthermore, once the data is gone... it's gone, and there are few to no controls over how it's used and shared.

Whether or not Ford in particular is tracking how fast you drive is barely the point. These days, someone is tracking how fast you drive, and as a driver, you should know who it is, and be able to limit how that information is used.

By September 2004, the NSA had developed a technique that was dubbed “The Find” by special operations officers. The technique, the Post reports, was used in Iraq and “enabled the agency to find cellphones even when they were turned off.” This helped identify “thousands of new targets, including members of a burgeoning al-Qaeda-sponsored insurgency in Iraq,” according to members of the special operations unit interviewed by the Post.

Google:

When a mobile device running the Android Operating System is powered off, there is no part of the Operating System that remains on or emits a signal. Google has no way to turn on a device remotely.

Google may not have a way, but that doesn't mean the NSA doesn't.

Nokia:

Our devices are designed so that when they are switched off, the radio transceivers within the devices should be powered off. We are not aware of any way they could be re-activated until the user switches the device on again. We believe that this means that the device could not be tracked in the manner suggested in the article you referenced.

Once again, we're looking at words like "should" and "not aware." This doesn't necessarily suggest Nokia does know of methods government agencies could use to track phones that are off, but it doesn't entirely rule it out either.

Samsung's response is more interesting. While declaring that all components should be turned off when the phone is powered down, it does acknowledge that malware could trick cell phone users into believing their phone is powered down when it isn't. Ericsson, which is no longer in the business of producing cell phones (and presumably has less to lose by being forthright), was even more expansive on the subject.

The only electronics normally remaining in operation are the crystal that keeps track of time and some functionality sensing on-button and charger connection. The modem (the cellular communication part) cannot turn on by itself. It is not powered in off-state. Power and clock distribution to the modem is controlled by the application processor in the mobile phone. The application processor only turns on if the user pushes the on-switch. There could, however, be potential risks that once the phone runs there could be means to construct malicious applications that can exploit the phone.

On the plus side, the responding manufacturers seem to be interested in ensuring a powered down phone is actually powered down, rather than just put into a "standby" or "hibernation" mode that could potentially lead to exploitation. But the implicit statement these carefully worded denials make is that anything's possible. Not being directly "aware" of something isn't the same thing as a denial.

Even if the odds seem very low that the NSA can track a powered down cell phone, the last few months of leaks have shown the agency has some very surprising capabilities -- some of which even stunned engineers working for the companies it surreptitiously slurped data from.

Not only that, but there's historical evidence from court cases showing the FBI has used suspects' phones as eavesdropping devices by remotely activating them and using the mic to record conversations. As c|net noted back in 2006, whatever the FBI utilized apparently worked even when phones were shut off.

The surveillance technique came to light in an opinion published this week by U.S. District Judge Lewis Kaplan. He ruled that the "roving bug" was legal because federal wiretapping law is broad enough to permit eavesdropping even of conversations that take place near a suspect's cell phone.

Kaplan's opinion said that the eavesdropping technique "functioned whether the phone was powered on or off." Some handsets can't be fully powered down without removing the battery; for instance, some Nokia models will wake up when turned off if an alarm is set.

While the Genovese crime family prosecution appears to be the first time a remote-eavesdropping mechanism has been used in a criminal case, the technique has been discussed in security circles for years.

Short of pulling out the battery (notably not an option in some phones), there seems to be little anyone can do to prevent the device from being tracked and/or used as a listening device. The responding companies listed above have somewhat hedged their answers to the researcher's questions, most likely not out of any deference to government intelligence agencies, but rather to prevent looking ignorant later if (or when) subsequent leaks make these tactics public knowledge.

Any powered up cell phone performs a lot of legwork for intelligence agencies, supplying a steady stream of location and communications data. If nothing else, the leaks have proven the NSA (and to a slightly lesser extent, the FBI) has an unquenchable thirst for data. If such exploits exist (and they seem to), it would be ridiculous to believe they aren't being used to their fullest extent.