from the they're-about-surveillance dept

For quite some time now, we've been warning about the government's questionable attempts to pass "cybersecurity" bills that focus on "information sharing," with names like CISA and CISPA. Defenders of these bills insist that they're "just voluntary" and are necessary because they would enable private companies to share threat information with the US government, so that the US government could help stop attacks. Of course, we've been asking for years (1) why, if this is so useful, companies can't already share this information and (2) what attacks these bills would have actually stopped. No one ever seems to have any answers.

Defenders of the bill also insist that there really shouldn't be any privacy concerns, because companies can just hand over limited information about the attacks, not any personal user info. However, in light of the recent revelations from ProPublica and the NY Times (via Snowden documents) about how the NSA uses "cyber signatures" to sniff through its upstream collection (i.e., all internet traffic tapped from the fiber backbones), computer security expert Jonathan Mayer notes that this completely changes the equation on just how bad these "information sharing" cybersecurity bills really are.

Before it was known that the NSA could do this, the argument was that sharing details of a cybersecurity threat would just lead to DHS and the NSA taking that "threat" information and then seeing if it could help them figure out ways to prevent the threat. But now that we know the NSA can sniff the entire upstream collection using such "cyber signatures," and is then allowed to collect and keep whatever it finds as "incidental" collection, this becomes very clearly a surveillance bill, just as Senator Ron Wyden warned.

That's because the new documents make it clear that the NSA not only wants to search based on these broad "cyber signatures" but then claims it gets to keep that data and can search through whatever it collects. These are the infamous "backdoor searches" that Senator Wyden has been warning about for ages.
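Mayer's point is easier to see with a toy model. Here's a minimal sketch (our own illustration, not drawn from any leaked document; the signature, traffic and query below are all made up) of how signature-based scanning of upstream traffic sweeps in bystanders: any flow that happens to contain the signature gets retained in full, and the retained store can then be searched for things that have nothing to do with the original threat:

```python
import re

# Hypothetical "cyber signature": a byte pattern believed to identify one exploit.
SIGNATURE = re.compile(rb"GET /admin\.php\?cmd=")

retained = []  # flows kept as "incidental collection"

def scan_upstream(flows):
    """Scan every flow crossing the tap; retain any flow matching the signature."""
    for flow in flows:
        if SIGNATURE.search(flow["payload"]):
            # The whole flow is retained -- sender, recipient, and full content --
            # even though only a few bytes matched the signature.
            retained.append(flow)

def backdoor_search(term):
    """Later, query the retained store for anything at all -- not just the threat."""
    return [f for f in retained if term in f["payload"]]

# A benign email gets swept in because a researcher quoted the exploit string:
scan_upstream([{
    "src": "alice@example.com",
    "dst": "bob@example.com",
    "payload": b"FYI, the attack used GET /admin.php?cmd= -- stay safe. Alice",
}])
print(backdoor_search(b"Alice"))  # Alice's private email is now searchable
```

The broader the signature, the more innocent traffic matches it, and once retained, that traffic is fair game for the second, unrelated query.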

So, these "information sharing" bills don't just give the NSA access to private information from companies, but really give the NSA the "cyber signatures" it needs to then snarf up a ton of other private information that it has long wanted access to. This is why closing the "backdoor search" loophole is so important as well -- and not letting any of these "information sharing" bills pass is also of utmost importance.

Oh, and one other sneaky thing in all of this that Mayer highlights: defenders of these information sharing bills insist that they're not surveillance bills because, as Rep. Adam Schiff noted, "this bill makes clear in black and white legislative text that nothing authorizes government surveillance in this act." But, as Mayer points out, that's incredibly misleading, because the government already has the authorization it needs under the secret program that was just revealed. What the information sharing does is make that authorization much more powerful, by making it easier for the NSA to collect the information it can then slide into the program in order to snarf up far more private information.

from the don't-destroy-privacy-in-the-name-of-cybersecurity dept

There's a big "White House Cybersecurity Summit" down the road at Stanford today, where the President will release the details of a new executive order promoting "a framework for sharing information about cyber threats," which the administration hopes will lead organizations to better protect their data from malicious hacks.

The new executive order encourages businesses to form "information sharing and analysis organizations," or ISAOs, which would gather data about hacking attacks and share it with companies and the government.

A number of companies will announce Friday that they are incorporating the administration's cybersecurity framework, which was created after a 2013 executive order, into their operations. The framework helps businesses decide how to prioritize cybersecurity investments, how to implement cybersecurity at new companies, and how to measure their programs against others'. Intel, Apple and Bank of America use the framework and will announce that they will require all their vendors to use it. Both QVC and Walgreens will say they will employ the framework in their risk management practices, while Kaiser Permanente will commit to using it as well.

Of course, if you've been following the big fights over the past few years on cybersecurity legislation, you'll know that such "information sharing" has been a key component in most of the proposed bills, none of which have become law. Most of the bills have focused on one key thing: giving companies liability protection, so that they can't be sued over the information they share. From the beginning, however, we've asked a pretty simple question that no one has answered: what is currently preventing companies from sharing such threat information?

The answer, as reinforced by this move today by the White House, is absolutely nothing. Companies can (and in some cases already do) share "threat" information, and having them do so in a more organized fashion to prevent malicious attacks is, in fact, a good idea. What's not needed is a law that basically gives blanket immunity for companies to share almost any information to any government agency. That's been the problem with CISPA, CISA and similar bills: they're not about truly making information sharing about threats easier, since that can be done already. They're about giving blanket cover for companies to share even more information with government agencies such as the NSA.

With this new executive order and companies adopting the suggested framework, many of the "benefits" backers of cybersecurity legislation talk about will happen without the need for any new legislation. True threat information can be shared and companies can get wiser about protecting their information. But it doesn't give them blanket immunity if they start handing over other information to the government for other purposes, such as surveillance. That's important.

Yes, working together to prevent the growing number of online attacks is important. But that should never be used as a backdoor process to enable greater surveillance. Doing it this way, rather than by passing a questionable law, seems like a much more reasonable first step.

from the safe-as-homelands dept

The US government has basically declared war over the Sony hacking, offering full-throated support for the beleaguered, embarrassed company. Why this one -- rather than the countless hacks of corporate networks (including those where credit card data and personal information were compromised) -- remains a mystery.

The end result has been a call for more government intrusion and a reanimation of CISPA's lumbering corpse. "Share with us," says the government. "Gird yourself for the cyber Pearl Harbor," say its supporters. "Let us handle it," say those whose desire for expanded government power exceeds their crippling myopia.

Yeah, let's do that. Let's allow the government to set the rules on cybersecurity. Let's give agencies like the DHS -- which can't even be bothered to secure its own assets -- more leeway to investigate and react to cyberthreats. (h/t to NextGov)

DHS lacks a strategy that: (1) defines the problem, (2) identifies the roles and responsibilities, (3) analyzes the resources needed, and (4) identifies a methodology for assessing this cyber risk. A strategy is a starting point in addressing this risk. The absence of a strategy that clearly defines the roles and responsibilities of key components within DHS has contributed to a lack of action within the Department. For example, no one within DHS is assessing or addressing cyber risk to building and access control systems particularly at the nearly 9,000 federal facilities protected by the Federal Protective Service (FPS) as of October 2014.

That's the Government Accountability Office's assessment of the DHS's qualifications as a potential cybersecurity agency. [pdf link] This is the agency tasked with securing federal assets and ensuring the safety of not only government employees, but Americans in general. And it can't do it. In fact, it can't even begin to do it.

Despite being specifically directed by 2002's Federal Information Security Management Act (FISMA) to periodically assess risks, report on them and DO SOMETHING ABOUT IT, the agency has managed to blunder into 2015 with no specific plan to tackle cyberthreats to the federal buildings under its protection.

And, while the President and those pushing the revived CISPA seem rather keen on "sharing info," it's a one-way street, apparently. The DHS can't even be bothered to share with other government agencies.

The Interagency Security Committee (ISC), which is housed within DHS and is responsible for developing physical security standards for nonmilitary federal facilities, has not incorporated cyber threats to building and access control systems in its Design-Basis Threat report that identifies numerous undesirable events.

Whatever the DHS/ISC has managed to glean from situations like 2009's hacking of a Dallas hospital's HVAC system or 2006's hacking of Los Angeles traffic signals hasn't been passed on to other government agencies because the ISC believes "active shooters" and "workplace violence" are bigger threats. Maybe so, in terms of actual physical violence, but that's no excuse for ignoring something the government as a whole considers to be its next battlefield.

So, why is the DHS so bad at this? It would seem to be two things: the DHS is too big to move at the speed the threat mandates and it's always someone else's job. Because it has failed to take charge of the situation (despite a federal mandate and a 2013 presidential policy directive [p. 8-9]), no one seems to know what to do, how to do it or even who should do it.

[B]ecause DHS has not developed a strategy, several components within DHS have made different assertions about their roles and responsibilities. For example, FPS’s Deputy Director for Policy and Programs said that FPS’s authority includes cybersecurity. However, FPS is not assessing cyber risk because, according to this official, it does not have the expertise. Furthermore, although ICS-CERT has developed a tool to assess cyber risk, it also is not assessing cyber risk to building and access control systems at federal facilities. Moreover, NPPD’s Federal Network Resilience is to, among other things, identify common cybersecurity requirements across the federal government, but it also is not working on issues regarding the cyber risk of building and access control systems in the federal government.

An official from the Office of the Under Secretary of NPPD acknowledged that NPPD has not yet determined roles and responsibilities, including what entity should conduct cyber risk assessments of FPS-protected facilities or what assessment tool should be used. This official said that the Department has not developed a strategy, in part, because cyber threats involving building and access control systems are an emerging issue.

Somehow, despite being well-financed and incredibly large, the DHS can't find the time to properly assess the facilities it's supposed to be "securing."

Moreover, GSA [the General Services Administration] has not conducted security control assessments for all of its systems that are in about 1,500 FPS-protected facilities. In November 2014, GSA information technology officials said that from 2009 to 2014, the agency conducted 110 security assessments of the building control systems that are in about 500 of its 1,500 facilities. GSA has not yet assessed the security of control systems with network or Internet connections in about 200 buildings. GSA officials stated that they plan to assess these systems during fiscal year 2015.

The GSA isn't just being outpaced by hackers. It's being outpaced by the government's own slow stagger into the connected future. 800 systems are expected to switch from "standalone" to networked in the near future. The GSA plans to re-assess these systems' security after the changeover, but it's still working its way through the last half-decade's backlog. With DHS unable to provide guidance and its component agencies unwilling to share information, the GSA becomes the third prong in this triumvirate of failure.

And what it does actually get around to assessing isn't much help, either. Being crossed off the GSA's to-do list means being no more safe than you were before the agency finally strolled through the door.

Further, our review of 20 of 110 of GSA’s security assessment reports (between 2010 and 2014) show that they were not comprehensive and not fully consistent with NIST guidelines. For example, in 5 of the 20 reports we reviewed, GSA assessed the building control device to determine if a user’s identity and password were required for login but did not assess the device to determine if password complexity rules were enforced. This could potentially lead to weak or insecure passwords being used to secure building control devices.

GSA also conducted its assessments of building control systems in a laboratory setting which allowed it to test components and to identify weaknesses in their default configuration. However, GSA does not conduct further assessments after installation when configuration settings may no longer reflect their default values. As a result, GSA has limited assurance that the configurations assessed reflect the configurations implemented in the facility, thereby increasing the risk that vulnerabilities in building control systems may not be detected.
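To make concrete what "enforcing password complexity rules" on a building control device actually means, here's a minimal sketch of the kind of check the GAO says GSA's assessments skipped. The specific thresholds below are illustrative assumptions, not GSA's or NIST's actual requirements:

```python
import re

def meets_complexity_rules(password: str) -> bool:
    """Illustrative check: minimum length plus four character classes.
    (Hypothetical thresholds -- not GSA's or NIST's actual parameters.)"""
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^a-zA-Z0-9]", password) is not None
    )

# A device can require a login (the part GSA checked) while still accepting
# weak credentials (the part GSA skipped):
print(meets_complexity_rules("admin1234"))       # False -- too short, no upper/special
print(meets_complexity_rules("Tr0ub4dor&3xyz"))  # True
```

Checking that a login prompt exists without checking what the prompt will accept is exactly the gap the GAO is describing.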

This is the government that wants the nation's companies to "partner up" against cyberthreats and cyberterrorism: the same government that can't even ensure its own infrastructure is protected. And no one cares because compromising control systems doesn't make for very sexy copy or hawkish soundbites about being "tough on cybercrime."

If you need a solid argument against the government's desire to play the part of (cyber)security guard to the nation's companies, look no further than the GAO's list of "Related GAO Products" (p. 34) that follows this report.

The government doesn't have the skills necessary to ply its wares in the cybersecurity business. If it can't lock down its own assets -- despite seemingly limitless funding and manpower -- it has nothing to offer the private sector but intrusiveness and harmful regulation.

Now, if you're a fan of bad news, you're going to love the worse news. The fight over who should head up the government's War on All Things Cyber doesn't put the DHS at the front of the list -- but it's not because the agency clearly can't handle the job. It's because agencies that are even more intrusive than the DHS want a piece of the action, namely the FBI and the NSA. If either of these two end up in that position, expect to find domestic surveillance rules relaxed. The latter agency defines cybersecurity as "peeking in at everyone," which is at odds with those on the receiving end (US companies) who believe being secure means removing backdoors or otherwise locking everyone out, not just the "bad guys." That isn't going to sit well with the FBI and NSA -- one of which believes no one should be able to "lock out" law enforcement, and the other of which intercepts hardware and inserts backdoors when it isn't deploying malware for the same purpose. So, the DHS may be the lesser of three evils, if only because its incompetence exceeds its reach.

from the not-the-public's dept

On Monday, President Obama gave a speech kicking off his big push on cybersecurity, with many of the details being released on Tuesday, and they don't look very good. There are a lot of different pieces, but we'll just highlight the two that concern us the most.

First up: information sharing/"cybersecurity." The key issue here: is it the return of CISPA? CISPA, of course, is the cybersecurity "information sharing" bill that is introduced each year, but which is really about giving the NSA a tool to pressure companies into sharing their information (by granting immunity from liability to those companies). In 2012, President Obama rejected the CISPA approach as not having enough protections for privacy and civil liberties. And, indeed, contrary to what some have said, the official proposal is not "endorsing CISPA." The approach is definitely more limited, and the biggest concern is addressed: rather than the information going to the NSA (or the FBI), Homeland Security gets it. DHS isn't wonderful, but it's better than the other two alternatives. Companies can still give the info to the NSA or FBI (or others), but they won't get full immunity from lawsuits if they do.

But, where the new proposal falls woefully short is in its lack of privacy protections. It basically handwaves its way through the privacy question, saying there will be guidelines, but the guidelines aren't written yet, and they're fairly important here. Instead, there's just a plan to make them:

The Attorney General, in coordination with the Secretary of Homeland Security and in consultation with the Chief Privacy and Civil Liberties Officers at the Department of Homeland Security and Department of Justice, the Secretary of Commerce, the Director of National Intelligence, the Secretary of Defense, the Director of the Office of Management and Budget, the heads of sector-specific agencies and other appropriate agencies, and the Privacy and Civil Liberties Oversight Board, shall develop and periodically review policies and procedures governing the receipt, retention, use, and disclosure of cyber threat indicators by a Federal entity obtained in connection with activities authorized in this Act.

Yes, it promises that those guidelines will limit the "acquisition, interception, retention, use and disclosure" of information, but it's still not entirely clear what the final guidelines will be. The second problem, still not addressed in any of this, is explaining why it's needed at all. People keep saying that we need "information sharing" because of "cyberthreats," but no one explains why that information sharing can't happen today, or points to the regulations that supposedly get in the way. That's because there aren't any. Companies can share information today; the focus of this bill is to grant them broad immunity in case they share the wrong (private) info and it gets out.

The second concerning proposal is the update to the CFAA (the Computer Fraud and Abuse Act). The CFAA, of course, is the widely misused "anti-hacking" law that has been stretched and twisted by law enforcement and prosecutors over time to argue that merely disobeying a terms of service can be seen as "hacking." While some courts have limited that ridiculous interpretation, the changes here seem fairly messy and could bring back that possibility. The language takes a lot of careful picking through to interpret, and it appears that it may fix some small issues with the CFAA while opening up other massive holes that are seriously problematic. The White House claims this fix would "enhance [the CFAA's] effectiveness against attacks on computers and computer networks."

But that's not the problem with the CFAA. The problem is that it's already seriously overbroad and used in dangerous ways, and that's barely addressed. The main "fix" is that if you "intentionally exceed authorized access," certain conditions must be met to trip the CFAA wire -- a key one being that the value of the information obtained must "exceed $5,000." But, of course, with the way the gov't inflates the value of information... that seems like a pretty small hurdle. The really big problem, though, comes in section (e)(6), which adds a troubling definitional change to "exceeds authorized access." This is the whole bit that's been used as evidence of "terms of service" violations. The key case that rejected this theory is the Nosal case, and that ruling seems to be completely wiped out with this little addition to exceeding authorized access:

for a purpose that the accesser knows is not authorized by the computer owner;

This is likely to be interpreted to mean that if a terms of service bans a certain type of use, users have "knowledge," and thus that kind of use is back to being a problem under the CFAA. As Orin Kerr argues, this could be read to mean that if your employer says you can only use a computer for work reasons, and you surf for personal reasons, you've broken the law. It is also possible to read this section to mean that using someone else's Netflix or HBO GO password... could violate the law. Yikes!

Of course, one hopes that law enforcement wouldn't go after those types of violations, but a more serious concern may be the impact on security research. Finding a hole in a website that allows you to access publicly exposed data could be seen as exceeding authorized access, on the basis that whoever finds it "knows [it] is not authorized by the computer owner." Basically, it requires the government to argue only that whoever they're going after should have known that the computer owner "wouldn't like" it. That... opens up a big can of worms that the DOJ will abuse like crazy.

The new bill also says that you can be charged with racketeering related to CFAA violations, so long as the government can tie you to other people and claim that it's an "organized crime group." It also ups the penalties for things that might be considered "actual hacking" (i.e., getting around technological barriers to access a computer) -- making it automatically a felony with up to 10 years in jail (rather than the existing law, under which it could be a misdemeanor or a felony and the limit is 5 years in jail). And, of course, it expands civil forfeiture procedures so that law enforcement can seize (and likely keep) all your computer equipment if it thinks you're violating the CFAA. Looks like law enforcement can now go "shopping" for computers.

Once again, we seem to be facing a situation where the administration is more focused on what law enforcement wants, while paying lip service to the protections of the public from likely law enforcement and intelligence community abuse.

from the because-bad-ideas-never-die dept

This isn't a huge surprise, but Rep. Dutch Ruppersberger, the NSA's personal Rep in Congress (NSA HQ is in his district), has announced that he's bringing back CISPA, the cybersecurity bill designed to make it easier for the NSA to access data from tech companies (that's not how the bill's supporters frame it, but that's the core issue in the bill). In the past, Ruppersberger had a teammate in this effort, Rep. Mike Rogers, but Rogers has moved on to his new career as a radio and TV pundit (CNN just proudly announced hiring him), so Ruppersberger is going it alone this time around.

Not surprisingly, he's using the Sony Hack as a reason for why this bill is needed:

“The reason I’m putting bill in now is I want to keep the momentum going on what’s happening out there in the world,” Rep. Dutch Ruppersberger... told The Hill in an interview, referring to the recent Sony hack, which the FBI blamed on North Korea.

Fair enough, then perhaps Ruppersberger could explain how CISPA would have prevented the Sony Hack? Of course, he can't, because it wouldn't have helped. CISPA is focused on getting companies to share more information with the government (including the NSA and DHS), but there's no indication that Sony would have actually opened up its network for the NSA to snoop through and find these hackers (wherever they might have come from). Even if Sony had opened up its system to the government, it seems unlikely that the NSA would have magically spotted this hack and done anything about it.

Instead, using the Sony Hack as a hook is a cynical political ploy for a losing idea that is designed to harm the public and take away their privacy.

from the and-for-not-giving-his-precious-nsa-your-data dept

Rep. Mike Rogers is just about out of Congress, but the NSA's biggest defender (despite his supposed role in "overseeing" the agency) is using his last days on Capitol Hill to keep pushing his favorite causes. Over the weekend, he complained that President Obama basically should have gone to "cyberwar" with North Korea over the Sony hack.

“Unfortunately, he’s laid out a little of the playbook,” Rogers said. “That press conference should have been here are the actions.” ...

Without discussing specifics, Rogers said the U.S. has the capability to cripple North Korea’s cyberattack capabilities, which have been rapidly improving over the last few years.

“I can tell you we have the capability to make this very difficult for them in the future,” he said.

And I can tell you that Mike Rogers is full of bluster with little basis. First off, there is still some fairly strong skepticism in the actual computer security field that North Korea was behind the hack; launching an all-out attack without more proof would seem premature. Second, Rogers is simply wrong or clueless. We don't have the capability to "cripple" anyone's "cyberattack capabilities" unless he means taking out the entire internet; there are always ways around anything short of that. Even the reports that do blame North Korea don't seem to think the full attack came from North Korea, so doing something like taking the country's few internet connections off the map wouldn't do much good if the actual attack came from, say, China or Eastern Europe or somewhere else.

Third, can we just get over this ridiculous idea that a hack of one company, which may or may not have been by actors working for a government, is an act of either "terrorism" or "war"? It's not. It's a hack. Tons of companies get hacked every day. Some have good security and still get hacked. Some, like Sony, appear to have terrible security and get hacked very easily. It's not terrorism. It's not war. It's a hack. We shouldn't be talking about retaliation or destroying countries over a hack. We should be talking about better security. Jim Harper does a good job explaining why an overreaction is a bad idea:

The greatest risk in all this is that loose talk of terrorism and “cyberwar” lead nations closer to actual war. Having failed to secure its systems, Sony has certainly lost a lot of money and reputation, but for actual damage to life and limb, you ain’t seen nothing like real war. It is not within well-drawn boundaries of U.S. national security interests to avenge wrongs to U.S. subsidiaries of Japanese corporations. Governments in the United States should respond to the Sony hack with nothing more than ordinary policing and diplomacy.

But, no, not Mike Rogers. Instead, he's using this as his opportunity to push for his favorite bad law: giving the NSA more power to sift through your data:

Rogers, who is retiring from Congress in just a few days, made a final plug for his bill to facilitate cybersecurity information sharing between the private sector and National Security Agency (NSA). The measure passed the House, but stalled in the Senate, held up by privacy concerns.

It’s necessary, Rogers argued, if the U.S. wants to protect itself from similar attacks in the future. Because of laws on the books, the NSA is limited in its ability to protect private critical infrastructure networks.

He's talking, of course, about his beloved CISPA, which would effectively remove any liability from companies for sharing your private data with the NSA (and the rest of the government). But, as per usual with Rogers, he's wrong about nearly all of the details. There is nothing in CISPA that would have made it so the NSA could have "protected" Sony. Sony's problem here was Sony's terrible computer security. So, no, we don't need CISPA or other cybersecurity legislation to better protect the internet.

And is Mike Rogers really trying to argue that Sony's private intranet is "critical infrastructure"?

Finally, there's nothing in the law today that stops a company from sharing "malicious source code" with the government or others. We already have a good way of dealing with that, and it doesn't require a new law that gives the NSA more access to everyone's data.

Either way, it looks like Rogers is going out in typical fashion -- shooting his mouth off in favor of his friends and pet projects, without actually understanding or caring about the details. No wonder he's going into AM talk radio. He'll be a perfect fit.

from the big-questions dept

Salon has published an excerpt from Shane Harris' new book (which looks excellent), @War: The Rise of the Military-Internet Complex. The specific excerpt is called: Google's secret NSA alliance: The terrifying deals between Silicon Valley and the security state, and it's an absolute must read. Frankly, Salon's title overstates the story. The article reveals more details about a ton of existing information sharing that goes on between the NSA and various tech companies to try to prevent malicious attacks from foreign threats (with the vast majority of them coming from China). The article focuses on some of the details behind Google's public admission that hackers in China had broken into Google's systems (as well as a number of other companies'). Harris' story reveals that Google's own tech team had effectively traced the hack back to some servers in Taiwan and had gotten into those servers themselves, discovering more information about what the Chinese hackers were up to (and that they'd hacked many other companies).

However, it also notes that this resulted in Google agreeing to work with the NSA on preventing such attacks in the future:

On the day that Google’s lawyer wrote the blog post, the NSA’s general counsel began drafting a “cooperative research and development agreement,” a legal pact that was originally devised under a 1980 law to speed up the commercial development of new technologies that are of mutual interest to companies and the government. The agreement’s purpose is to build something — a device or a technique, for instance. The participating company isn’t paid, but it can rely on the government to front the research and development costs, and it can use government personnel and facilities for the research. Each side gets to keep the products of the collaboration private until they choose to disclose them. In the end, the company has the exclusive patent rights to build whatever was designed, and the government can use any information that was generated during the collaboration.

It’s not clear what the NSA and Google built after the China hack. But a spokeswoman at the agency gave hints at the time the agreement was written. “As a general matter, as part of its information-assurance mission, NSA works with a broad range of commercial partners and research associates to ensure the availability of secure tailored solutions for Department of Defense and national security systems customers,” she said. It was the phrase “tailored solutions” that was so intriguing. That implied something custom built for the agency, so that it could perform its intelligence-gathering mission. According to officials who were privy to the details of Google’s arrangements with the NSA, the company agreed to provide information about traffic on its networks in exchange for intelligence from the NSA about what it knew of foreign hackers. It was a quid pro quo, information for information.

There's much more in there, including that this isn't a program, like PRISM, that gives the NSA access to emails or other such information, but rather is focused on helping detect potential holes and security risks within Google's hardware and software:

...it lets the NSA evaluate Google hardware and software for vulnerabilities that hackers might exploit. Considering that the NSA is the single biggest collector of zero day vulnerabilities, that information would help make Google more secure than others that don’t get access to such prized secrets. The agreement also lets the agency analyze intrusions that have already occurred, so it can help trace them back to their source.

As the article notes, this is a pretty big concern -- because of what else the NSA might eventually do with this information. It raises serious questions about the tradeoffs here. Yes, it's good if the NSA can better protect online services from foreign attacks, but many people certainly consider the NSA a big risk as well. As the article also makes clear, the NSA likes to hoard certain security holes for its own use -- and these kinds of information sharing arrangements are a pretty big concern on that front.

The NSA helps the companies find weaknesses in their products. But it also pays the companies not to fix some of them. Those weak spots give the agency an entry point for spying or attacking foreign governments that install the products in their intelligence agencies, their militaries, and their critical infrastructure. Microsoft, for instance, shares zero day vulnerabilities in its products with the NSA before releasing a public alert or a software patch, according to the company and U.S. officials. Cisco, one of the world’s top network equipment makers, leaves backdoors in its routers so they can be monitored by U.S. agencies, according to a cyber security professional who trains NSA employees in defensive techniques. And McAfee, the Internet security company, provides the NSA, the CIA, and the FBI with network traffic flows, analysis of malware, and information about hacking trends.

Companies that promise to disclose holes in their products only to the spy agencies are paid for their silence, say experts and officials who are familiar with the arrangements. To an extent, these openings for government surveillance are required by law. Telecommunications companies in particular must build their equipment in such a way that it can be tapped by a law enforcement agency presenting a court order, like for a wiretap. But when the NSA is gathering intelligence abroad, it is not bound by the same laws. Indeed, the surveillance it conducts via backdoors and secret flaws in hardware and software would be illegal in most of the countries where it occurs.

The excerpt notes, however, that the NSA has gotten really good at scaring the living daylights out of tech execs with special classified briefings, driving them into relationships with the NSA, separate from those kinds of paid relationships.

Starting in 2008, the agency began offering executives temporary security clearances, some good for only one day, so they could sit in on classified threat briefings.

“They indoctrinate someone for a day, and show them lots of juicy intelligence about threats facing businesses in the United States,” says a telecommunications company executive who has attended several of the briefings, which are held about three times a year. The CEOs are required to sign an agreement pledging not to disclose anything they learn in the briefings. “They tell them, in so many words, if you violate this agreement, you will be tried, convicted, and spend the rest of your life in prison,” says the executive.

[....]

But the NSA doesn’t have to threaten the executives to get their attention. The agency’s revelations about stolen data and hostile intrusions are frightening in their own right, and deliberately so. “We scare the bejeezus out of them,” a government official told National Public Radio in 2012. Some of those executives have stepped out of their threat briefings feeling like the defense contractor CEOs who, back in the summer of 2007, left the Pentagon with “white hair.”

This, in turn, leads them to team up with various private security companies, leading to a rather "symbiotic" relationship:

Unsure how to protect themselves, some CEOs will call private security companies such as Mandiant. “I personally know of one CEO for whom [a private NSA threat briefing] was a life-changing experience,” Richard Bejtlich, Mandiant’s chief security officer, told NPR. “General Alexander sat him down and told him what was going on. This particular CEO, in my opinion, should have known about [threats to his company] but did not, and now it has colored everything about the way he thinks about this problem.”

The NSA and private security companies have a symbiotic relationship. The government scares the CEOs and they run for help to experts such as Mandiant. Those companies, in turn, share what they learn during their investigations with the government, as Mandiant did after the Google breach in 2010. The NSA has also used the classified threat briefings to spur companies to strengthen their defenses.

In one 2010 session, agency officials said they’d discovered a flaw in personal computer firmware — the onboard memory and codes that tell the machine how to work — that could allow a hacker to turn the computer “into a brick,” rendering it useless. The CEOs of computer manufacturers who attended the meeting, and who were previously aware of the design flaw, ordered it fixed.

That's an example where this kind of information sharing has been helpful in protecting the security of the public. And that's a good thing. But there are concerns about the costs on the other end, and really how trustworthy the NSA is on its end of these arrangements.

But reading this excerpt, I kept going back to a key point in the big debates over the various cybersecurity bills that Congress has put forth in the past couple of years, mainly CISPA and CISA. In both of those bills, the key point that supporters kept making is that such bills were needed to facilitate "voluntary information sharing" between tech companies involved in "critical infrastructure" and the government (including the NSA -- though some of the bills have put Homeland Security in place as a filter rather than having it go directly to the NSA).

But Harris' book seems to confirm exactly what many of us have been arguing for years: that there doesn't seem to be anything stopping companies from doing this sort of "voluntary" information sharing today, so why do they suddenly need new laws? The answer, of course, is one of liability. The new laws don't really knock down any regulatory barriers to sharing information: they just make sure that the companies can't be sued over these arrangements. Right now, it's not clear that companies would really be legally liable for these info sharing programs, but the programs can still lead to lawsuits (and it wouldn't surprise me to see some class action suits filed using Harris' book as evidence). The point of the cybersecurity bills is to give companies blanket immunity, which would then encourage them to do more of this kind of sharing, with the NSA providing "incentives" by scaring companies as described above.

As for the promise of supporters of these bills that it's only focused on "critical infrastructure" and not the rest of the web? Harris tackles that issue as well:

To obtain the information, a company must meet the government’s definition of a critical infrastructure: “assets, systems, and networks, whether physical or virtual, so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof.” That may seem like a narrow definition, but the categories of critical infrastructure are numerous and vast, encompassing thousands of businesses. Officially, there are sixteen sectors: chemical; commercial facilities, to include shopping centers, sports venues, casinos, and theme parks; communications; critical manufacturing; dams; the defense industrial base; emergency services, such as first responders and search and rescue; energy; financial services; food and agriculture; government facilities; health care and public health; information technology; nuclear reactors, materials, and waste; transportation systems; and water and wastewater systems.

It’s inconceivable that every company on such a list could be considered “so vital to the United States” that its damage or loss would harm national security and public safety. And yet, in the years since the 9/11 attacks, the government has cast such a wide protective net that practically any company could claim to be a critical infrastructure.

There's a lot more in the excerpt, and I assume a lot more in the book itself, which seems worth reading. It delves deeply into these relationships and how the NSA gets access to lots of information from telcos and tech companies. Again, actually protecting US infrastructure seems like an important goal, but from all of this, it's not clear that the tradeoffs are fully recognized. More specifically, it seems quite troubling that this is being done by the NSA.

It is abundantly clear that the dual functions of the NSA absolutely must be split. The "cyber" protections side and the surveillance side need to be separated. Having the online protection side is important in protecting infrastructure, but tying it to the same organization looking for holes to spy on others just makes us all less safe. Furthermore, it makes it abundantly clear that no new cybersecurity laws are needed, since these companies are already quite free to share information with the government for the sake of cybersecurity.

from the bad-ideas dept

Reports are coming out that Congress is looking to push forward with bad cybersecurity legislation after the election, but before the new Congress takes over in January. We've discussed the bill in question, CISA, before. The main idea behind it is to immunize companies from liability if they share certain information with the government. Supporters of the bill note that the information sharing is entirely voluntary, but by taking away the liability risk, it also makes it a lot more likely that companies will choose to give information to the government, and it's not yet clear why the government really needs that information. But the FUD levels are high, with Senator Saxby Chambliss actually suggesting the entire economy is at stake here:

"If we wait another year, we are really risking the economy of the United States."

Oh, come on. People have been saying this for years -- along with all the "cyber Pearl Harbor" claims -- but have failed to present any explanation or details of (1) how there's a real risk to the economy or (2) how current laws block necessary solutions. On top of that, no one seems willing to explain how further information sharing will actually help stop online attacks. Remember, this is the same federal government that didn't even notice that the White House's own network had been breached until some other country told us about it. And yet we're now to believe that if only US companies were feeding more information to the NSA, it would magically be able to stop attacks (and save the economy?). That seems unlikely.

It also sounds like there may be some sort of potential trade-off, in which Congress will try to lump this bill with the USA Freedom Act, as the White House is said to be focused on surveillance reform over the cybersecurity bill. But, the reality is that the two are in many ways attached. And there are increasing worries that the final result on the USA Freedom Act will, in some ways, actually (yet again) enhance the NSA, rather than hold it back. Combine that with a cybersecurity bill that will give the NSA even more ways to get our data, and the end result could be the surveillance state increasing, rather than shrinking, with no actual benefit to the American public. There would be fewer privacy protections and just some arm waving about saving the US economy.

from the bad-ideas dept

One final story to highlight from James Bamford's really wonderful Wired profile of Ed Snowden. This one might not be that surprising, but the NSA was building an internal automated "cyberwar" system called MonsterMind, which would seek to detect an incoming "cyber attack" and then automatically launch a counterattack. Here's how Bamford describes Snowden's explanation in his article:

The massive surveillance effort was bad enough, but Snowden was even more disturbed to discover a new, Strangelovian cyberwarfare program in the works, codenamed MonsterMind. The program, disclosed here for the first time, would automate the process of hunting for the beginnings of a foreign cyberattack. Software would constantly be on the lookout for traffic patterns indicating known or suspected attacks. When it detected an attack, MonsterMind would automatically block it from entering the country—a “kill” in cyber terminology.

Programs like this had existed for decades, but MonsterMind software would add a unique new capability: Instead of simply detecting and killing the malware at the point of entry, MonsterMind would automatically fire back, with no human involvement.

Yeah, because false alarms never happen at all. Hell, just this week I was hearing about a series of false alarms in which the US thought that Russia had launched thousands of nuclear missiles at it. Imagine an automated system taught to respond to that.
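To see why automated retaliation is so dangerous, consider a toy sketch of the detect-and-respond loop Bamford describes. This is our own reconstruction under stated assumptions, not actual MonsterMind code (which remains classified): because attackers can trivially spoof source addresses, an automated "fire back" step can end up targeting an innocent third party, with no human in the loop to catch the mistake:

```python
# Toy model of an automated detect-and-respond pipeline (hypothetical -- nothing
# here is based on actual MonsterMind internals).

KNOWN_ATTACK_PATTERNS = [b"\xde\xad\xbe\xef"]  # stand-in for a traffic signature

def looks_like_attack(packet: dict) -> bool:
    return any(sig in packet["payload"] for sig in KNOWN_ATTACK_PATTERNS)

def block(packet: dict) -> None:
    print(f"blocked traffic from {packet['src']}")  # the defensible "kill"

def fire_back(packet: dict) -> None:
    # The dangerous step: retaliate at the *claimed* source address.
    # Source addresses are trivially spoofed, so an attacker can aim the
    # counterattack at any victim they like.
    print(f"counterattacking {packet['src']} -- no human review")

def monster_mind(packet: dict) -> None:
    if looks_like_attack(packet):
        block(packet)      # reasonable
        fire_back(packet)  # automated retaliation against a spoofable address

# An attacker spoofs a hospital's address; the system retaliates at the hospital:
monster_mind({"src": "203.0.113.7 (spoofed: hospital)",
              "payload": b"\xde\xad\xbe\xef..."})
```

The blocking half is ordinary intrusion-prevention logic; it's the unattended fire-back step, keyed to an address the attacker controls, that turns a false alarm into an act of aggression.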

And, of course, this only works... if the NSA has access to private companies' networks:

In addition to the possibility of accidentally starting a war, Snowden views MonsterMind as the ultimate threat to privacy because, in order for the system to work, the NSA first would have to secretly get access to virtually all private communications coming in from overseas to people in the US. “The argument is that the only way we can identify these malicious traffic flows and respond to them is if we’re analyzing all traffic flows,” he says. “And if we’re analyzing all traffic flows, that means we have to be intercepting all traffic flows. That means violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time.”

This puts into context some stories from last year, which noted that Keith Alexander seemed particularly focused on getting companies to give the NSA access to their networks. Last October, he gave a speech in which he pitched exactly that:

Drawing an analogy to how the military detects an incoming missile with radar and other sensors, Alexander imagined the NSA being able to spot "a cyberpacket that's about to destroy Wall Street." In an ideal world, he said, the agency would be getting real-time information from the banks themselves, as well as from the NSA's traditional channels of intelligence, and have the power to take action before a cyberattack caused major damage.

His proposed solution: Private companies should give the government access to their networks so it could screen out the harmful software. The NSA chief was offering to serve as an all-knowing virus-protection service, but at the cost, industry officials felt, of an unprecedented intrusion into the financial institutions’ databases.

The group of financial industry officials, sitting around a table at the Office of the Director of National Intelligence, were stunned, immediately grasping the privacy implications of what Alexander was politely but urgently suggesting. As a group, they demurred.

“He’s an impressive person,” the participant said, recalling the group’s collective reaction to Alexander. “You feel very comfortable with him. He instills a high degree of trust.”

But he was proposing something they thought was high-risk.

“Folks in the room looked at each other like, ‘Wow. That’s kind of wild.’ ”

This all should probably make you wonder why those very same financial institutions seem willing to shell out somewhere between $600,000 and $1 million per month for Alexander's "patent-pending" solutions to "cybersecurity."

Furthermore, this should shed some light on why the NSA was so in favor of CISPA and now CISA -- cybersecurity bills in Congress that would give private companies liability protections if they... shared network data with the NSA (and other parts of the federal government). The NSA needs those liability protections because, without them, some companies won't be willing to open up their networks to this kind of MonsterMind offering. It's also why Congress shouldn't pass such a bill.

from the because-of-course dept

We've written about the Senate's dangerous CISA bill -- which is Congress' latest (bad) attempt to help increase the NSA-led surveillance state by giving companies blanket immunity if they share private information with the government... all in the name of overhyped "cybersecurity." We, of course, have been through this fight before, with the CISPA bill, which passed the House a few times but couldn't get any traction in the Senate. This time around, the (really bad) Senate version passed out of the Senate Intelligence Committee by a 12-3 vote (held in secret, of course). Not surprisingly, two of the three who voted against it are Ron Wyden and Mark Udall.

By now you should know: if Ron Wyden and Mark Udall are against something related to surveillance, you should be against it too (and the opposite is true as well).

The "good" news is that despite the overwhelming support by the NSA's biggest cheerleaders on the rest of the Senate Intelligence Committee, it seems unlikely that the bill will have enough support in the overall Senate. And it will hopefully remain that way. This bill is a dangerous one, that is solely designed to give the NSA and some companies additional legal "cover" for aiding the NSA's surveillance efforts. Thanks to Snowden's revelations, companies are, in general, a lot less willing to do that these days anyway, but giving those companies blanket liability to do so is a bad, bad idea.

And while there's still little to no evidence that the "cybersecurity threat" is anywhere close to as big as the FUDmongers insist it is, even if it were true, no one has yet explained what laws actually get in the way of having companies share critical cybersecurity information as needed. And, if such laws really do exist, any solution should just be narrowly focused on fixing those laws, rather than granting broad immunity for sharing just about any info.