from the this-is-no-longer-theoretical dept

Last month, we wrote about Bruce Schneier's warning that certain unknown parties were methodically testing ways to take down the internet: running carefully configured DDoS attacks against core internet infrastructure, with a focus on key DNS servers. And, of course, we've also been talking about the rise of truly massive DDoS attacks, fueled by poorly secured Internet of Things (IoT) devices and ancient, unpatched bugs.

That all came to a head this morning when large chunks of the internet went down for about two hours, thanks to a massive DDoS attack targeting managed DNS provider Dyn. Most of the downed sites are back (I'm still having trouble reaching Twitter), but the outage was widespread, taking lots of big-name sites down with it. Just check out this screenshot from Downdetector showing the outages on a bunch of sites:

You'll see that not all of them had downtime (and the big ISPs, as always, show a steady stream of complaints), but a ton of those sites show a giant spike in reported outages over a few hours.

So, once again, we'd like to point out that this is a problem the internet community needs to start solving now. The threat has been theoretical for a while, but it's not so theoretical anymore. Yes, some people point out that this is a difficult thing to deal with: even if we moved to a more distributed system for pointing people to websites, there would almost always be some kind of chokepoint, and those with malicious intent will always, eventually, target those chokepoints. But there has to be a better way -- because if there isn't, this kind of thing is going to get a lot worse.
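One way to see how concentrated those chokepoints actually are is to look at who answers authoritative DNS for the big sites. Here's a minimal sketch using the dnspython library -- the domain list is just an illustrative sample, not data from the attack:

import dns.resolver  # pip install dnspython

# An illustrative handful of well-known domains -- swap in any list.
domains = ["twitter.com", "github.com", "reddit.com", "spotify.com"]

for domain in domains:
    # NS records name the servers that are authoritative for the domain.
    answers = dns.resolver.resolve(domain, "NS")
    nameservers = sorted(str(rdata).rstrip(".") for rdata in answers)
    print(f"{domain}: {nameservers}")

# If one managed DNS provider's nameservers show up across many domains,
# a single sustained DDoS against that provider can take them all down
# at once -- exactly the kind of chokepoint the Dyn attack hit.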

from the let's-get-it-done dept

There's been a lot of buzz over respected computer security expert Bruce Schneier's recent warning that someone, or some organization, or (most likely) some state actor, is running a series of tests that appear to be probing for ways to take down the entire internet. Basically, a bunch of critical infrastructure providers have noticed attacks on their systems that look like probes designed to map out their defenses.

Recently, some of the major companies that provide the basic infrastructure that makes the Internet work have seen an increase in DDoS attacks against them. Moreover, they have seen a certain profile of attacks. These attacks are significantly larger than the ones they're used to seeing. They last longer. They're more sophisticated. And they look like probing. One week, the attack would start at a particular level of attack and slowly ramp up before stopping. The next week, it would start at that higher point and continue. And so on, along those lines, as if the attacker were looking for the exact point of failure.

The attacks are also configured in such a way as to see what the company's total defenses are. There are many different ways to launch a DDoS attack. The more attack vectors you employ simultaneously, the more different defenses the defender has to counter with. These companies are seeing more attacks using three or four different vectors. This means that the companies have to use everything they've got to defend themselves. They can't hold anything back. They're forced to demonstrate their defense capabilities for the attacker.

This article is getting a collective "oh, shit, that's bad" kind of reaction from many online -- and that's about right. But shouldn't it also be something of a call to action to build a better system? In many ways, it's still incredible that the internet actually works. There are still elements that feel held together by duct tape and handshake agreements. And while it's been surprisingly resilient, that doesn't mean it needs to stay that way.

Schneier notes that there's "nothing, really" that can be done about these tests -- and that's true in the short term. But it seems, to me, like it should be setting off alarm bells for people to rethink how the internet is built -- and to make things even more distributed and less subject to attacks on "critical infrastructure." People talk about how the internet was originally designed to withstand a nuclear attack and keep working. But the reality has always been that there are a few chokepoints. Now would be a good time to start fixing things so that those chokepoints are no longer so critical.

from the throw-away-your-phone dept

As you may have heard, if you have an iOS device (iPhone, iPad, even iPod Touch) you should have updated it, like, a few hours ago. Seriously, if you haven't done it yet, stop reading and go update. The story behind this update is quite incredible, and is detailed in a great article over at Motherboard by Lorenzo Franceschi-Bicchierai. Basically, after someone (most likely a gov't) targeted Ahmed Mansoor, a human rights activist in the United Arab Emirates, with a questionable text (urging him to click on a link to get info about prison torture), a team of folks from Citizen Lab (who have exposed lots of questionable malware) and Lookout (an anti-malware company, whose Mike Murray is quoted below) got to work on the text and figured out what it did. The short version: a single click exploits three separate 0day vulnerabilities to take over your phone. All of it. It secretly jailbreaks the phone and then accesses basically everything.

“It basically steals all the information on your phone, it intercepts every call, it intercepts every text message, it steals all the emails, the contacts, the FaceTime calls. It also basically backdoors every communications mechanism you have on the phone,” Murray explained. “It steals all the information in the Gmail app, all the Facebook messages, all the Facebook information, your Facebook contacts, everything from Skype, WhatsApp, Viber, WeChat, Telegram—you name it.”

So that's great.

The researchers believe they've traced the exploit back to a secretive hacking company called NSO Group. The full Citizen Lab writeup on all of this is quite fascinating as well. They estimate that this exploit from NSO probably costs in the range of a million dollars on the market, though obviously this particular hole is now closed. That doesn't mean that NSO or others don't have other exploits up their sleeves.

The report also notes that this kind of exploit is probably only used by nation states right now, but there's nothing to say it couldn't move down the stack before too long, letting all sorts of malicious characters completely pwn your phone. Pretty scary stuff, and yet another reminder of why it's so dangerous that folks like the NSA are hoarding 0days rather than revealing them, and that the FBI is trying to force tech companies to break encryption and the other tools that are necessary to block these kinds of attacks.

Intelligence agencies exist to gather information, analyze it, and deliver their findings to policymakers so that they can make decisions about how to deal with threats to the nation. Period. You can, and agencies often do, dress this up and expand on it in order to motivate the workforce, or more likely grab more money and authority, but when it comes down to it, stealing and making sense of other people’s information is the job. Doing code reviews and QA for Cisco is not the mission.

Suck it up, Cisco. That gaping hole uncovered by the Shadow Brokers was discovered at least three years ago by the NSA and if it chose not to tell you about it, it had its reasons. Namely: national security.

The Obama administration made sympathetic noises in the wake of the Snowden leaks, suggesting the NSA err on the side of disclosure. It simultaneously gave the agency no reason to ever do that by appending "unless national security, etc." to the statement.

But part of the phrase "national security" is the word "security." (And the other part -- "national" -- suggests this directive also covers protecting US companies from attacks, not just the more amorphous "American public.") Allowing tech companies that provide network security software and hardware to other prime hacking targets to remain unaware of security holes doesn't exactly serve the nation or its security. So, while Tanji may claim the NSA isn't in the QA business, it sort of is. The thing is, the NSA prefers to exploit QA issues rather than give affected developers a chance to patch them.

And if an NSA operative left behind a bag of tech tools in a compromised server, it really doesn't do much for the argument that the government can be trusted with encryption backdoors -- the sort of thing FBI Director James Comey is still hoping will materialize as a result of his never-ending "going dark" sales pitch. Julian Sanchez, writing for Cato, points out that the NSA's mistake should lead to some pretty severe trust issues.

This hack also ought to give pause to anyone swayed by the government’s assurances that we can mandate government backdoors in encryption software and services, allowing the “good guys” (law enforcement and intelligence agencies) to access the communications of criminals and terrorists without compromising the security of millions of innocent users. If even the NSA’s most closely guarded hacking tools cannot be secured, why would any reasonable person believe that keys to cryptographic backdoors could be adequately protected by far less sophisticated law enforcement agencies? The Equation Group hack is a disturbingly concrete demonstration of what network security experts have been saying all along: Once you create a backdoor, there is no realistic way to guarantee that only the good guys will be able to walk through it.

So, that's one huge problem with both the hoarding of exploits and the NSA's refusal to actually participate in the Vulnerabilities Equities Process. The definition the NSA has chosen for "national security" doesn't mesh with statements made by its cybersecurity overseers.

Back in 2014, federal cybersecurity coordinator Michael Daniel insisted in a post on the White House blog that the process is strongly weighted in favor of disclosure. The government, he assured the public, understands that “[b]uilding up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest.”

Maybe things have changed in the past couple of years, but they haven't changed as much as Michael Tanji claims. He states that the NSA is no longer charged with playing cyber-defense.

The one element in the intelligence community that was charged with supporting defense is no more. I didn’t like it then, and it seems pretty damn foolish now, but there you are, all in the name of “agility.” NSA’s IAD had the potential to do the things that all the security and privacy pundits imagine should be done for the private sector, but their job was still keeping Uncle Sam secure, not Wal-Mart.

That's simply not true. The NSA may secretly wish it had been completely rerouted to "attack" mode. That would more easily justify the hoarding of vulnerabilities and its ongoing refusal to hand over info to affected developers. But it's still supposed to be playing defense -- which means it has an obligation both to the American public, who use the software and hardware the NSA would rather see left unpatched, and to the developers it's purposefully leaving open to malicious attacks.

Because computers are now the easiest way to spy on people, and because everyone — even U.S. adversaries — uses the same Internet, there has long been what officials like to call a "healthy" or "creative" tension between the foreign espionage mission and the information assurance mission of the NSA.

Crudely put, the IA's cyber mission is to find security holes in Internet infrastructure and common software and patch them; the signals intelligence mission is to find the same holes and keep them open as long as possible so they can be used to spy on foreigners.

When the two directorates merge, some fear that the much larger and better funded signals intelligence mission will simply absorb the IA mission.

As it stands now, the offensive side of the NSA's cybersquad is roughly twice the size of its defensive team -- which clearly indicates which end of the equation the NSA believes is more important to its national security mission.

The NSA's actions regarding the Vulnerabilities Equities Process show it believes some forms of national security are more equal than others. It's far more interested in ensuring its collections continue to be fed than in patching security holes -- holes it has often created -- that affect millions of US citizens and dozens of hacker-tempting firms.

It also shows the government is not to be trusted when it demands "good guy only" access. It can't protect the backdoors it's already created and it has only the slightest interest in protecting the nation from the bad guys that will inevitably find its secret entrances.

from the speak-up,-feinstein,-we-can't-hear-you dept

Last week, we wrote about the leak of various NSA hacking tools, which showed the agency had zero-day exploits for a bunch of hardware, including some from Cisco. This has raised concerns about how long the NSA sat on these vulnerabilities without telling companies -- along with reaffirming what many people already suspected: that the supposed "Vulnerabilities Equities Process" (VEP), under which the NSA is supposed to disclose the vulnerabilities it finds so companies can patch them, is a complete joke.

But Marcy Wheeler has another important point about all of this. When the Snowden documents originally leaked three-plus years ago, the top members of the House and Senate Intelligence Committees -- the so-called Gang of Four -- were quick to speak out about (and condemn) the leak. But, oddly, this time they're staying pretty quiet.

Within hours of the first Snowden leak, Dianne Feinstein and Mike Rogers had issued statements about the phone dragnet. As far as I’ve seen, Adam Schiff is the only Gang of Four member who has weighed in on this:

U.S. Rep. Adam Schiff, the ranking Democrat on the House Intelligence Committee, also spoke with Mary Louise. He said he couldn’t comment on the accuracy of any reports about the leak.

But he said, “If these allegations were true, I’d be very concerned about the impact on the intelligence community. I’d also obviously want to know who the responsible parties were. … If this were a Russian actor — and again, this is multiple ‘ifs’ here — we’d have to ask what is causing this escalation.”

Say, Congressman Schiff. Aren’t you the ranking member of the House Intelligence Committee and couldn’t you hold some hearings to get to the bottom of this?

Meanwhile, both Feinstein (who is the only Gang of Four member not campaigning for reelection right now) and Richard Burr have been weighing in on recent events, but not the Shadow Brokers release.

If the House and Senate Intelligence Committees were really about "oversight" of the NSA, then shouldn't they have jumped on this immediately? Shouldn't they be looking into how the NSA manages the VEP? Shouldn't they be looking into how these tools got out? Why are they just staying silent or giving meaningless statements like Schiff's?

The question is whether the VEP is being used properly. If the NSA discovered its exploits had been accessed by someone other than its own TAO (Tailored Access Operations) team, why did it choose to keep those exploits secret rather than inform the affected developers? The vulnerabilities exposed so far seem to date as far back as 2013, but only now, after the details have been dumped by the Shadow Brokers, are companies like Cisco actually aware of these issues.

According to Lawfare's contributors, there are several reasons why the NSA would have kept quiet, even when confronted with evidence that these tools might be in the hands of criminals or antagonistic foreign powers. They claim the entire process -- which is supposed to push the NSA, FBI, et al. towards disclosure -- is broken. But not for the reasons you might think.

The Office of the Director of National Intelligence claimed last year that the NSA divulges 90% of the exploits it discovers. The statement offered no details as to what the NSA considers an acceptable timeframe for disclosure. It's always been assumed the NSA turns these exploits over to developers only after they're no longer useful. The Obama administration may have reiterated the presumption of openness when reacting to yet another Snowden leak, but it also made clear that national security concerns will always trump personal security concerns -- even if the latter has the potential to affect more people.

The main thrust of the Lawfare article is that the "broken" part of the equities process is the presumption of disclosure itself. The authors point out that it might take years to discover or develop a useful exploit, and -- given the nature of the NSA's business -- argue the agency should be under no pressure to make timely disclosures to the developers whose software and hardware it is exploiting.

[F]rom an operational standpoint, it takes about two years to fully utilize and integrate a discovered vulnerability. For the intelligence officer charged with managing the offensive security process, the VEP injects uncertainty by requiring inexpert intergovernmental oversight of the actions of your offensive teams, effectively subjects certain classes of bugs to time limits and eventual public exposure—all without any strategic or tactical thought governing the overall process.

[...]

Individual exploitable software vulnerabilities are difficult to find in the first place. But to engineer the discovered vulnerability into an operationally deployable exploit that can bypass modern anti-exploit defenses is far harder. It is a challenge to get policymakers to appreciate how rare the skills are for building operationally reliable exploits. The skillset exists almost exclusively within the IC and in a small set of commercial vendors (many of whom were originally trained in intelligence). This is not an area where capacity can be easily increased by throwing money at it—meaningful development here requires monumental investment of time and resources in training and cultivating a workforce, as well as crafting mechanisms to identify traits of innate talent.

The authors do point out that disclosure can also be useful to intelligence services. If these disclosures result in safer computing for everyone else, then that's apparently an acceptable side effect.

[T]here are three major, non-technical reasons for vulnerability disclosure.

First, disclosure can provide cover in the event that an OPSEC failure leads you to believe a zero-day has been compromised—if there is a heightened risk of malicious use, it allows the vendor time to patch. Second, disclosing to vendors allows the government to out an enemy’s zero-day vulnerability without disclosing how it was found. And third, government disclosure can form the basis of building a better relationship with Silicon Valley.

Saddling intelligence agencies with a presumption of disclosure is possibly a dangerous idea. Less-than-useful exploits that could be divulged to developers might be tied to other exploits still being deployed by intelligence services. Any suggested timeframe for mandatory disclosure would likely cause further harm by forcing the NSA, FBI, etc. to turn over exploits just as they're generating optimal results. On top of that, the authors point out that a push towards disclosure hamstrings US intelligence services as agencies in unfriendly nations will never be constrained by requirements to put the public ahead of their own interests.

But the process is definitely broken, no matter which side of the argument you take. The NSA says it discloses 90% of the vulnerabilities it discovers, but former personnel involved in these operations note that they never saw a vulnerability disclosed during their years at the agency.

It's unlikely that the process will ever be fixed to everyone's satisfaction. The most likely scenario is that the VEP will continue to trundle along doing absolutely nothing while being ineffectually attacked by those opposing intelligence community secrecy. As it stands now, the presumption of disclosure is completely subject to any national security concerns raised by intelligence and law enforcement agencies. Occasional political climate shifts may provoke transparency pledges from various administrations, but those should be viewed as sympathetic noises -- presidential pats on the head meant to fend off troubling questions and legislative pushes to put weight behind the administration's words.

The emails, reviewed by The Associated Press, show that State Department technical staff disabled software on their systems intended to block phishing emails that could deliver dangerous viruses. They were trying urgently to resolve delivery problems with emails sent from Clinton's private server.

"This should trump all other activities," a senior technical official, Ken LaVolpe, told IT employees in a Dec. 17, 2010, email. Another senior State Department official, Thomas W. Lawrence, wrote days later in an email that deputy chief of staff Huma Abedin personally was asking for an update about the repairs. Abedin and Clinton, who both used Clinton's private server, had complained that emails each sent to State Department employees were not being reliably received.

After technical staffers turned off some security features, Lawrence cautioned in an email, "We view this as a Band-Aid and fear it's not 100 percent fully effective."

While trial-and-error is generally useful when solving connection problems, the implication is undeniable: to make Clinton's private, insecure email server connect with the State Department's, the department's own system had to -- at least temporarily -- lower itself to Clinton's security level. The other workaround -- USE A DAMN STATE DEPARTMENT EMAIL ADDRESS -- was seriously discussed.

This latest stack of emails also exposed other interesting things... like the fact that Clinton's private email server was attacked multiple times in one day, resulting in staffers taking it offline in an attempt to prevent a breach. (h/t Pwn All The Things)

In addition to the security issues, there's also some discussion about why Clinton was choosing to use her own server.

In one email, the State Department's IT person explains the agency already has an email address set up for Clinton, but offers to delete anything contained in it -- and points out that using the State Dept. address would make future emails subject to FOIA requests.

[W]e actually have an account previously set up: SSHRC@state.gov. There are some old emails but none since Jan '11 -- we could get rid of them.

You should be aware that any email would go through the Department's infrastructure and subject to FOIA searches.

So, there's one reason Clinton would have opted to use a personal email address and server. More confirmation of the rationale behind this decision appears in an earlier email (2010) from Clinton to her aide, Huma Abedin.

Abedin: We should talk about putting you on state email or releasing your email to the department so you are not going to spam.

Clinton: Let's get separate address or device but I don't want any risk of the personal being accessible.

There appears to be some intent to dodge FOIA requests -- either by ensuring "no documents found" when Clinton's State Department email address was searched, or by being able to control any release by being the chokepoint for responsive documents.

To accomplish this, Clinton's team set up a private email server that was insecure and did not follow State Department guidelines. In fact, her team brushed off the agency more than once before finally informing it that they simply would not comply with State Department regulations.

In a blistering audit released last month, the State Department's inspector general concluded that Clinton and her team ignored clear internal guidance that her email setup broke federal standards and could leave sensitive material vulnerable to hackers. Her aides twice brushed aside concerns, in one case telling technical staff "the matter was not to be discussed further," the report said.

The FBI investigation that Clinton refuses to call an investigation continues. There may be no criminal charges forthcoming, but there's already plenty of evidence that Clinton's use of a private email server was not only dangerously insecure, but put into place in hopes of limiting her accountability.

from the wtf dept

Once the DOJ told the court in San Bernardino that it had succeeded in hacking into the iPhone of Syed Farook, the big question people asked was whether or not the FBI would then tell Apple about the vulnerability. After all, the administration set up the so-called "Vulnerabilities Equities Process" (VEP) with the idea of sharing most vulnerabilities it discovers with companies. The White House directly stated:

One thing is clear: This administration takes seriously its commitment to an open and interoperable, secure and reliable Internet, and in the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest. This has been and continues to be the case.

This spring, we re-invigorated our efforts to implement existing policy with respect to disclosing vulnerabilities – so that everyone can have confidence in the integrity of the process we use to make these decisions. We rely on the Internet and connected systems for much of our daily lives. Our economy would not function without them. Our ability to project power abroad would be crippled if we could not depend on them. For these reasons, disclosing vulnerabilities usually makes sense. We need these systems to be secure as much as, if not more so, than everyone else.

Still, one could make a strong case that this vulnerability should be disclosed... even if almost no one expected it to be. Amusingly, just a few days ago, Apple revealed that the FBI used the VEP to disclose a vulnerability for the very first time, on April 14th, just as everyone was arguing about this. Of course, the flaw it revealed was not about hacking into the iPhone, and was actually a flaw that Apple had discovered and fixed... nine months ago. But, again, if this is the very first time the FBI has disclosed something to Apple, it certainly suggests that the VEP process generally means nothing gets disclosed. In fact, the timing really suggests that someone in the DOJ recently flipped out and realized that there's now going to be scrutiny on the VEP, so they might as well disclose something. Thus, they found an old bug that had already been patched and "revealed" it.

“The F.B.I. purchased the method from an outside party so that we could unlock the San Bernardino device,” Amy S. Hess, executive assistant director for science and technology, said in a statement.

“We did not, however, purchase the rights to technical details about how the method functions, or the nature and extent of any vulnerability upon which the method may rely in order to operate. As a result, currently we do not have enough technical information about any vulnerability that would permit any meaningful review” by the White House examiners, she said.

Now, some are arguing that this suggests absolutely terrible bargaining on the part of the DOJ/FBI. But another interpretation is that this is how the DOJ knew it wouldn't have to reveal the flaw to Apple. It might also explain why the DOJ at one point appeared to claim that the hack in question only worked on Farook's phone. They later claimed that was a misstatement, and that it really only applied to that iPhone configuration. But if the FBI never actually got the technical details, then in some sense they'd be right that, for the FBI, the crack only worked on that one phone. And if they wanted to crack another phone, they'd have to shell out another ~$1 million or so...

from the sna.fu dept

TL;DR: short URLs produced by bit.ly, goo.gl, and similar services are so short that they can be scanned by brute force. Our scan discovered a large number of Microsoft OneDrive accounts with private documents. Many of these accounts are unlocked and allow anyone to inject malware that will be automatically downloaded to users’ devices. We also discovered many driving directions that reveal sensitive information for identifiable individuals, including their visits to specialized medical facilities, prisons, and adult establishments.

Freedom to Tinker has just released a study compiled over the last 18 months -- one in which researchers scanned millions of shortened URLs and documented what they unintentionally reveal. Microsoft's OneDrive -- which uses link-shortening -- could be made to reveal documents uploaders never intended to share with the public. Worse, the researchers discovered that a small percentage of brute-forced URLs linked to folders with "write" privileges enabled.
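To make "brute force" concrete, here's a rough sketch of the kind of traversal scan the report describes: generate random six-character tokens and see which ones resolve. This is an illustration of the general technique, not the researchers' actual tooling, and the endpoint behavior here is an assumption:

import random
import string

import requests  # pip install requests

ALPHABET = string.ascii_letters + string.digits  # 62 possible characters

def random_token(length=6):
    """Generate one random short-URL token."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def probe(token):
    """Return the target URL if the bit.ly token is live, else None."""
    resp = requests.head(f"https://bit.ly/{token}", allow_redirects=False)
    if resp.status_code in (301, 302):
        return resp.headers.get("Location")  # the unshortened URL
    return None

# Every live hit exposes whatever someone shortened: a OneDrive folder,
# a set of Google Maps directions, and so on.
for _ in range(100):
    token = random_token()
    target = probe(token)
    if target:
        print(f"{token} -> {target}")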

Around 7% of the OneDrive folders discovered in this fashion allow writing. This means that anyone who randomly scans bit.ly URLs will find thousands of unlocked OneDrive folders and can modify existing files in them or upload arbitrary content, potentially including malware.

And, because Microsoft's automatic virus/malware scanning for OneDrive contents is less than robust, it wouldn't take much for any random person to wreak havoc on any number of devices with access to those contents.

OneDrive “synchronizes” account contents across the user’s OneDrive clients. Therefore, the injected malware will be automatically downloaded to all of the user’s machines and devices running OneDrive.

Fortunately for OneDrive users, the scanning method deployed by the researchers no longer works as of March 2016. But this doesn't necessarily mean the accounts are completely secure -- just that one avenue of attack/access has been closed.

Just as disturbing -- but for different reasons -- is the automatic link shortening tied to Google Maps. The links could be manipulated to uncover all sorts of inferential information about people's private activities... or at least activities they never thought they were sharing with the world. The directions and searches uncovered by the scans potentially reveal plenty of sensitive information about Google Maps users.

Our sample random scan of these URLs yielded 23,965,718 live links, of which 10% were for maps with driving directions. These include directions to and from many sensitive locations: clinics for specific diseases (including cancer and mental diseases), addiction treatment centers, abortion providers, correctional and juvenile detention facilities, payday and car-title lenders, gentlemen’s clubs, etc. The endpoints of driving directions often contain enough information (e.g., addresses of single-family residences) to uniquely identify the individuals who requested the directions. For instance, when analyzing one such endpoint, we uncovered the address, full name, and age of a young woman who shared directions to a planned parenthood facility.

The same privacy concerns raised by law enforcement's indiscriminate use of automatic license plate readers and warrantless access to cell site location info are present here: the reconstruction of people's lives via the "tracking" of their movements. In this case, however, the information is more "voluntarily" generated than in either of those collections, which passively sweep up data; here, people handed over their movements simply by searching for directions using a web service provided by a company with an unquenchable thirst for data.

The good news is that the method used for the report no longer works on Google Maps-shortened links. But, once again, that does not mean the problems with link shorteners have been eliminated. The researchers point out that the March 2016 change by Microsoft (which claims the change had nothing to do with the vulnerability being reported to it) only affects links generated after that date. Any previously generated short URLs are still vulnerable to traversal scans.

Google, however, made a more serious attempt to prevent abuse of its shortened links.

All newly generated goo.gl/maps URLs have 11- or 12-character tokens, and Google deployed defenses to limit the scanning of the existing URLs.
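Those few extra characters matter more than they might appear to. A quick back-of-the-envelope calculation (the scan rate is an arbitrary assumption, purely for scale):

# Short-URL tokens draw from 62 characters (a-z, A-Z, 0-9).
ALPHABET_SIZE = 62
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in (6, 7, 11, 12):
    space = ALPHABET_SIZE ** length
    # Hypothetical sustained scan rate of 1,000 requests per second.
    years = space / 1_000 / SECONDS_PER_YEAR
    print(f"{length}-char tokens: {space:.1e} possibilities, "
          f"~{years:,.0f} years to exhaust")

Under those (made-up) assumptions, a six-character space can be exhausted in under two years -- and a random sample starts hitting live links almost immediately -- while an eleven-character space would take over a billion years.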

While this news should be of concern to users of these services, it has to be great news for law enforcement and intelligence agencies. So much for "going dark." Vulnerabilities in web services apparently provide access to otherwise "locked" cloud storage contents, and Google Maps -- at least until the fix -- generated tons of location data ripe for the taking.

It's also worth pointing out that the method used to compile this report is basically the same method used by Andrew "Weev" Auernheimer to expose AT&T users' email addresses: altering URLs to uncover data presumed to be hidden. Of course, AT&T's vindictiveness resulted in a 3.5-year prison sentence for Auernheimer. No legal threats have been made against the researchers here, but the sad thing is that security research is inherently risky, as you can never tell whether the affected entity will respond with a bug fix or a police report -- not until after it's been informed.

from the and-better-than-apple dept

Not surprisingly, Oliver's take is much clearer and much more accurate than many mainstream press reports on the issues in the case, appropriately mocking the many law enforcement officials who seem to think that, just because Apple employs smart engineers, they can somehow do the impossible and "safely" create a backdoor into an encrypted iPhone that won't have dangerous consequences. He even spends a bit of time reviewing the original Crypto Wars over the Clipper Chip and highlights cryptographer Matt Blaze's contribution in ending those wars by showing that the Clipper Chip could be hacked.

But the biggest contribution to the debate -- which I hope that people pay most attention to -- is the point that Oliver made in the end with his faux Apple commercial. Earlier in the piece, Oliver noted that this belief among law enforcement that Apple engineers can somehow magically do what they want is at least partially Apple's own fault, with its somewhat overstated marketing. So, Oliver's team made a "more realistic" Apple commercial which noted that Apple is constantly fighting security cracks and vulnerabilities and is consistently just half a step ahead of hackers with malicious intent (and, in many cases, half a step behind them).

This is the key point: building secure products is very, very difficult, and even the most secure products have security vulnerabilities that need to be constantly watched and patched. And what the government is doing here is not only asking Apple not to patch a security vulnerability it has found, but actively forcing Apple to create a new vulnerability and then effectively forcing Apple to keep it open. For all the talk of how Apple can just create the backdoor this once and throw it away, this is more like asking Apple to set off a bomb that blows the back door off every house in a city, and then saying, "okay, just throw away the bomb after you set it off."

Hopefully, as in cases like net neutrality, Oliver's piece does its job in informing the public about what's really going on.