I focus on the strategic, economic and business implications of defense spending as the Chief Operating Officer of the non-profit Lexington Institute and Chief Executive Officer of Source Associates. Prior to holding my present positions, I was Deputy Director of the Security Studies Program at Georgetown University and taught graduate-level courses in strategy, technology and media affairs at Georgetown. I have also taught at Harvard University's Kennedy School of Government. I hold doctoral and master's degrees in government from Georgetown University and a bachelor of science degree in political science from Northeastern University. Disclosure: The Lexington Institute receives funding from many of the nation’s leading defense contractors, including Boeing, Lockheed Martin, Raytheon and United Technologies.

Cyberwars: Lessons For Limiting Government Data Losses

In 2012 the National Security Agency distributed an issue brief about preventing leaks of sensitive information that began with the warning, “Loss of control over sensitive and protected data by organizations is a serious threat to business operations and national security.” That point hardly needed elaboration two years after Army private Bradley Manning sent hundreds of thousands of classified files to WikiLeaks, an organization dedicated to revealing government secrets. But far worse revelations were to follow in 2013, when contractor Edward Snowden began leaking over a million files detailing many of NSA’s most sensitive operations.

Like Manning, Snowden was a modest player in the intelligence community — a 29-year-old computer specialist posted to a field activity thousands of miles from NSA headquarters. The fact that he was able to use widely available tools such as web crawlers and portable memory devices to cause devastating damage to an agency leading the nation’s cybersecurity efforts inevitably raised doubts about whether the government could cope with better-resourced cyber-agents acting on behalf of foreign nations. Whatever you may think about the role NSA plays in gathering electronic intelligence, lack of protection against malicious actors operating in cyberspace is dangerous. In fact, it is potentially a threat to national survival.

(Disclosure: Many companies engaged in providing cybersecurity services to the government contribute to my think tank.)

The good news is that NSA and the rest of the intelligence community understand this threat very well. The bad news is that like all bureaucratic cultures, intelligence agencies operate with hidden biases and defects that can go undetected until a crisis forces them to confront their weaknesses. After years of debating what to do about malicious insiders like Manning and Snowden, a few key lessons have emerged. Here are five of them.

1. It isn’t enough to have smart policies. NSA leaders and their counterparts at other federal agencies have been analyzing cyber threats since before President Obama entered the White House. The policies they have developed to cope with the danger are comprehensive and thoughtful. However, smart policies don’t mean much if you fail to follow through by implementing them. The Snowden case reveals multiple errors in applying cybersecurity standards that enabled a bit player to severely damage agency operations — not because of technical failures, but because data-handling policies were not followed. According to the New York Times, Snowden’s superiors noticed peculiarities in his behavior, but their concerns were allayed by facile excuses on his part. Clearly, enforcement of data-loss prevention policies needs to be more rigorous — not just at headquarters, but across the organization.

2. Too much sharing can be dangerous. During the Cold War, information collected by the intelligence community was compartmentalized, meaning access was restricted to users with a “need to know.” That practice protected sensitive data, but it also produced a stovepiped culture in which all the information didn’t come together until it reached the top of the organization. Lack of sharing across the lower reaches of the community was identified as a factor in the success of the 9/11 attacks, because few users had enough information to see the plot that was unfolding. So the barriers came down and a culture of sharing was encouraged. That helped prevent follow-on attacks, but it also enabled junior operators to gain access to copious amounts of data previously beyond their reach. It looks like we have traded one security problem for another, and the pendulum needs to start swinging back towards stronger protections.
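The “need to know” principle described above can be expressed as a simple rule: a user may read a document only if they hold every compartment the document carries. The sketch below is a minimal illustration of that rule; the compartment names and users are invented for the example, not drawn from any real system.

```python
# Minimal sketch of need-to-know compartmentalization. A document is tagged
# with one or more compartments; access is granted only when the requesting
# user's clearances cover ALL of the document's tags. Names are hypothetical.

def can_access(user_compartments: set, document_compartments: set) -> bool:
    """Allow access only if the user holds every compartment on the document."""
    return document_compartments <= user_compartments  # subset test

# A junior operator cleared for one compartment cannot read a file carrying
# two, no matter how freely data is shared elsewhere in the organization.
analyst = {"SIGINT-A"}
report = {"SIGINT-A", "HUMINT-B"}
print(can_access(analyst, report))                      # blocked
print(can_access({"SIGINT-A", "HUMINT-B"}, report))     # allowed
```

The point of the subset test is exactly the trade-off the paragraph describes: tightening the tags narrows who can see a file, at the cost of slower sharing across the lower reaches of the community.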

3. When it comes to insiders, trust but verify. Intelligence tends to be a collaborative enterprise, so a measure of trust is required among practitioners to get the best results. Having now suffered a series of major intelligence losses due to the actions of malicious insiders, though, it seems that Ronald Reagan’s dictum of “trust but verify” shouldn’t be applied just to outsiders. Mechanisms are required to ensure that operators inside the system are worthy of being entrusted with sensitive information. That means rigorous standards for awarding and renewing clearances, combined with ways of identifying patterns of behavior that might call the motives of insiders into question. Programs now exist for uncovering even the most subtle patterns that might point to bad intentions.
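One simple way such behavioral monitoring can work is to compare each user’s activity against their own historical baseline and flag large deviations. The sketch below is an illustrative toy, not a description of any actual program: it flags days on which a user’s download count sits far above that user’s own mean, using a standard-deviation threshold I chose for the example.

```python
# Toy behavioral-baseline check: flag days whose download count exceeds the
# user's own historical mean by more than `threshold` standard deviations.
# The data and threshold are invented for illustration.
from statistics import mean, stdev

def flag_outliers(daily_downloads, threshold=2.5):
    """Return the indices of days with anomalously high download volume."""
    mu, sigma = mean(daily_downloads), stdev(daily_downloads)
    return [i for i, n in enumerate(daily_downloads)
            if sigma > 0 and (n - mu) / sigma > threshold]

history = [12, 9, 14, 11, 10, 13, 8, 11, 10, 950]  # day 9: a mass download
print(flag_outliers(history))  # [9]
```

Real insider-threat programs fuse many more signals than download counts, but the underlying idea is the same: trust the operator, yet continuously verify that their behavior matches their own established pattern.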

4. Every exit must be monitored. There was a time not so long ago when malicious actors only had a handful of ways to surreptitiously remove data from a network. With the explosion of digital media, though, those days are gone forever. There are numerous methods of moving data across the network, and the devices for physically carrying it out the front door range from tablets to USB sticks to iPods to smartphones to DVDs. An effective data-loss prevention system must monitor data flows through all of these potential avenues of exfiltration, blocking transfers to unapproved users and unregistered devices, and alerting security personnel as to where potential breaches are occurring. As new digital technologies appear, monitoring systems must be continuously upgraded to ensure no exit is left unguarded.

5. Big money isn’t necessarily the answer. The standard reflex in Washington when a new threat arises is to throw money at it. That approach to cybersecurity probably can’t work over the long run, though, because there will be too many novel dangers to spend vast sums on dealing with each one. The government needs to aim for a cyberwar “cost-exchange ratio” in which its expenses to negate an emerging danger are not much greater than the cost to an aggressor of mounting the threat in the first place. That suggests two guiding principles: (1) leverage the full functionality of whatever network defenses have already been purchased rather than constantly buying more stuff, and (2) use commercial off-the-shelf software whenever possible because it is likely to cost less and be ready to deploy faster.
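The cost-exchange ratio is just the defender’s spending divided by what the attack cost the aggressor to mount. The numbers below are invented purely to illustrate the arithmetic, not estimates of any real program.

```python
# Cost-exchange ratio: defender's cost to negate a threat divided by the
# aggressor's cost to mount it. Values far above 1.0 mean the defender is
# losing the economic contest. Figures here are hypothetical.

def cost_exchange_ratio(defender_cost: float, attacker_cost: float) -> float:
    """Defender dollars spent per attacker dollar spent."""
    return defender_cost / attacker_cost

# A $50M defensive program against a threat that cost an adversary $1M to
# mount yields a 50:1 ratio -- unsustainable if repeated for every new danger.
print(cost_exchange_ratio(50_000_000, 1_000_000))  # 50.0
```

Seen this way, the two guiding principles in the paragraph above are both ways of driving the numerator down: squeezing more out of defenses already bought, and favoring cheaper off-the-shelf software over bespoke development.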

The latter point deserves some elaboration. The information revolution that has spawned today’s cyber threats is driven mainly by private entrepreneurs rather than government. There is no way that the protracted acquisition and budgetary cycles of the federal government can keep up with the pace at which new information technologies are advancing. The Snowden case, in which a bright young computer geek managed to circumvent the defenses of the government’s leading cybersecurity agency, is emblematic of this challenge. The government’s best hope of dealing with the danger is to turn for solutions to the same market forces that made Edward Snowden’s exploits possible. If it tries to defeat cyber threats from within or from abroad using traditional acquisition practices, it is done for.
