Homeland Insecurity

Luckily for the victims, this digital mayhem is mostly wreaked not by the master hackers depicted in Hollywood techno-thrillers but by "script kiddies"—youths who know just enough about computers to download and run automated break-in programs. Twenty-four hours a day, seven days a week, script kiddies poke and prod at computer networks, searching for any of the thousands of known security vulnerabilities that administrators have not yet patched. A typical corporate network, Schneier says, is hit by such doorknob-rattling several times an hour. The great majority of these attacks achieve nothing, but eventually any existing security holes will be found and exploited. "It's very hard to communicate how bad the situation is," Schneier says, "because it doesn't correspond to our normal intuition of the world. To a first approximation, bank vaults are secure. Most of them don't get broken into, because it takes real skill. Computers are the opposite. Most of them get broken into all the time, and it takes practically no skill." Indeed, as automated cracking software improves, it takes ever less knowledge to mount ever more sophisticated attacks.
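Schneier's point about round-the-clock probing can be made concrete with a toy calculation: even if each automated probe has only a minuscule chance of finding an unpatched hole, ceaseless scanning makes eventual compromise nearly certain. The probe rate and per-probe odds below are invented for illustration; only the "several times an hour" figure comes from the text.

```python
# Toy model of "doorknob-rattling": a network probed several times an hour.
# Each probe almost always fails, but the probes never stop.
# (probes_per_hour and p_success are illustrative assumptions.)

probes_per_hour = 4          # "several times an hour"
p_success = 0.0001           # assumed chance one probe finds an unpatched hole

def p_compromise(days):
    """Probability that at least one probe succeeds within `days` days."""
    n = probes_per_hour * 24 * days
    return 1 - (1 - p_success) ** n

for days in (1, 30, 365):
    print(f"{days:>4} days: {p_compromise(days):.1%}")
```

The exact numbers matter less than the shape of the curve: a risk that is negligible per probe compounds toward certainty, which is why "eventually any existing security holes will be found."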

Given the pervasive insecurity of networked computers, it is striking that nearly every proposal for "homeland security" entails the creation of large national databases. The Moran-Davis proposal, like other biometric schemes, envisions storing smart-card information in one such database; the USA PATRIOT Act effectively creates another; the proposed Department of Homeland Security would "fuse and analyze" information from more than a hundred agencies, and would "merge under one roof" scores or hundreds of previously separate databases. (A representative of the new department told me no one had a real idea of the number. "It's a lot," he said.) Better coordination of data could have obvious utility, as was made clear by recent headlines about the failure of the FBI and the CIA to communicate. But carefully linking selected fields of data is different from creating huge national repositories of information about the citizenry, as is being proposed. Larry Ellison, the CEO of Oracle, has dismissed cautions about such databases as whiny cavils that don't take into account the existence of murderous adversaries. But murderous adversaries are exactly why we should ensure that new security measures actually make American life safer.

Any new database must be protected, which automatically entails a new layer of secrecy. As Kerckhoffs's principle suggests, the new secrecy introduces a new failure point. Government information is now scattered through scores of databases; however inadvertently, it has been compartmentalized—a basic security practice. (Following this practice, tourists divide their money between their wallets and hidden pouches; pickpockets are less likely to steal it all.) Many new proposals would change that. An example is Attorney General John Ashcroft's plan, announced in June, to fingerprint and photograph foreign visitors "who fall into categories of elevated national security concern" when they enter the United States ("approximately 100,000" will be tracked this way in the first year). The fingerprints and photographs will be compared with those of "known or suspected terrorists" and "wanted criminals." Alas, no such database of terrorist fingerprints and photographs exists. Most terrorists are outside the country, and thus hard to fingerprint, and latent fingerprints rarely survive bomb blasts. The databases of "wanted criminals" in Ashcroft's plan seem to be those maintained by the FBI and the Immigration and Naturalization Service. But using them for this purpose would presumably involve merging computer networks in these two agencies with the visa procedure in the State Department—a security nightmare, because no one entity will fully control access to the system.
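The wallet-and-pouch analogy has a simple quantitative version. Splitting records across independently protected stores does not prevent any single breach, but it makes a *total* loss exponentially less likely, and that is exactly what merging scores of databases under one roof gives up. The breach probability below is an invented assumption, not a figure from the article.

```python
# Sketch of why compartmentalization limits damage. Assume each data
# store is breached independently with the same probability; a thief
# must then breach every store to get everything, just as a pickpocket
# must find both the wallet and the hidden pouch.
# (p_breach is an illustrative assumption.)

p_breach = 0.1  # assumed chance any single store is breached in a given period

def p_total_loss(num_stores):
    """Probability that *all* compartments are exposed at once."""
    return p_breach ** num_stores

for k in (1, 2, 5):
    print(f"{k} store(s): P(total loss) = {p_total_loss(k):.6f}")
```

The model also shows the cost of merging in the other direction: collapsing five compartments into one raises the chance of a total loss from the product of five small probabilities back to a single one.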

How Insurance Improves Security

Eventually, the insurance industry will subsume the computer security industry. Not that insurance companies will start marketing security products, but rather that the kind of firewall you use—along with the kind of authentication scheme you use, the kind of operating system you use, and the kind of network monitoring scheme you use—will be strongly influenced by the constraints of insurance.

Consider security, and safety, in the real world. Businesses don't install building alarms because it makes them feel safer; they do it because they get a reduction in their insurance rates. Building-owners don't install sprinkler systems out of affection for their tenants, but because building codes and insurance policies demand it. Deciding what kind of theft and fire prevention equipment to install are risk management decisions, and the risk taker of last resort is the insurance industry ...

Businesses achieve security through insurance. They take the risks they are not willing to accept themselves, bundle them up, and pay someone else to make them go away. If a warehouse is insured properly, the owner really doesn't care if it burns down or not. If he does care, he's underinsured ...

What will happen when the CFO looks at his premium and realizes that it will go down 50% if he gets rid of all his insecure Windows operating systems and replaces them with a secure version of Linux? The choice of which operating system to use will no longer be 100% technical. Microsoft, and other companies with shoddy security, will start losing sales because companies don't want to pay the insurance premiums. In this vision of the future, how secure a product is becomes a real, measurable, feature that companies are willing to pay for ... because it saves them money in the long run.

—Bruce Schneier, Crypto-Gram, March 15, 2001
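Schneier's CFO scenario above reduces to back-of-the-envelope arithmetic: once insurers price security, the choice of operating system becomes an ordinary budget line. Every dollar figure below is invented for illustration; only the hypothetical 50 percent premium cut comes from the quoted passage.

```python
# The CFO's calculation in Schneier's scenario, sketched with assumed
# numbers. The 50% discount is Schneier's hypothetical; the premium and
# migration cost are invented for illustration.

current_premium = 400_000      # assumed annual insurance premium, dollars
premium_discount = 0.50        # Schneier's hypothetical 50% reduction
migration_cost = 150_000       # assumed one-time cost to replace the insecure OS

annual_savings = current_premium * premium_discount
payback_years = migration_cost / annual_savings

print(f"annual savings: ${annual_savings:,.0f}")
print(f"payback period: {payback_years:.2f} years")
```

With these assumed figures the migration pays for itself in under a year, which is the sense in which security becomes "a real, measurable, feature that companies are willing to pay for."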

Equivalents of the big, centralized databases under discussion already exist in the private sector: corporate warehouses of customer information, especially credit-card numbers. The record there is not reassuring. "Millions upon millions of credit-card numbers have been stolen from computer networks," Schneier says. So many, in fact, that Schneier believes that everyone reading this article "has, in his or her wallet right now, a credit card with a number that has been stolen," even if no criminal has yet used it. Number thieves, many of whom operate out of the former Soviet Union, sell them in bulk: $1,000 for 5,000 credit-card numbers, or twenty cents apiece. In a way, the sheer volume of theft is fortunate: so many numbers are floating around that the odds are small that any one will be heavily used by bad guys.
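The bulk-market figures above can be checked directly, and the "safety in sheer volume" point follows from the same arithmetic. Only the $1,000-for-5,000 price comes from the text; the circulation and exploitation counts below are invented assumptions.

```python
# Checking the bulk-market arithmetic from the text, then illustrating
# why sheer volume dilutes the risk to any one cardholder.
# (The prices are from the article; the pool sizes are assumptions.)

bulk_price = 1_000           # dollars per lot (from the article)
numbers_per_lot = 5_000      # stolen card numbers per lot (from the article)
per_number = bulk_price / numbers_per_lot
print(f"per-number price: ${per_number:.2f}")

stolen_in_circulation = 10_000_000   # assumed pool of stolen numbers
actively_exploited = 50_000          # assumed numbers criminals actually use
odds = actively_exploited / stolen_in_circulation
print(f"chance a given stolen number is ever used: {odds:.1%}")
```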

Large-scale federal databases would undergo similar assaults. The prospect is worrying, given the government's long-standing reputation for poor information security. Since September 11 at least forty government networks have been publicly cracked by typographically challenged vandals with names like "CriminalS," "S4t4n1c S0uls," "cr1m3 0rg4n1z4d0," and "Discordian Dodgers." Summing up the problem, a House subcommittee last November awarded federal agencies a collective computer-security grade of F. According to representatives of Oracle, the federal government has been talking with the company about employing its software for the new central databases. But judging from the past, involving the private sector will not greatly improve security. In March, CERT/CC, a computer-security watchdog based at Carnegie Mellon University, warned of nineteen vulnerabilities in Oracle's database software. Meanwhile, a centerpiece of the company's international advertising is the claim that its software is "unbreakable." Other software vendors fare no better: CERT/CC issues a constant stream of vulnerability warnings about every major software firm.

Schneier, like most security experts I spoke to, does not oppose consolidating and modernizing federal databases per se. To avoid creating vast new opportunities for adversaries, the overhaul should be incremental and small-scale. Even so, it would need to be planned with extreme care—something that shows little sign of happening.

One key to the success of digital revamping will be a little-mentioned, even prosaic feature: training the users not to circumvent secure systems. The federal government already has several computer networks—INTELINK, SIPRNET, and NIPRNET among them—that are fully encrypted, accessible only from secure rooms and buildings, and never connected to the Internet. Yet despite their lack of Net access the secure networks have been infected by e-mail perils such as the Melissa and I Love You viruses, probably because some official checked e-mail on a laptop, got infected, and then plugged the same laptop into the classified network. Because secure networks are unavoidably harder to work with, people are frequently tempted to bypass them—one reason that researchers at weapons labs sometimes transfer their files to insecure but more convenient machines.

Remember Pearl Harbor

Surprise, when it happens to a government, is likely to be a complicated, diffuse, bureaucratic thing ... It includes gaps in intelligence, but also intelligence that, like a string of pearls too precious to wear, is too sensitive to give to those who need it. It includes the alarm that fails to work, but also the alarm that has gone off so often it has been disconnected. It includes the unalert watchman, but also the one who knows he'll be chewed out by his superior if he gets higher authority out of bed. It includes the contingencies that occur to no one, but also those that everyone assumes somebody else is taking care of. It includes straightforward procrastination, but also decisions protracted by internal disagreement. It includes, in addition, the inability of individual human beings to rise to the occasion until they are sure it is the occasion—which is usually too late. (Unlike movies, real life provides no musical background to tip us off to the climax.) Finally, as at Pearl Harbor, surprise may include some measure of genuine novelty introduced by the enemy, and possibly some sheer bad luck. The results, at Pearl Harbor, were sudden, concentrated, and dramatic. The failure, however, was cumulative, widespread, and rather drearily familiar. This is why surprise, when it happens to a government, cannot be described just in terms of startled people. Whether at Pearl Harbor or at the Berlin Wall, surprise is everything involved in a government's (or in an alliance's) failure to anticipate effectively.

—Thomas C. Schelling, foreword to Pearl Harbor: Warning and Decision (1962) by Roberta Wohlstetter

Schneier has long argued that the best way to improve the very bad situation in computer security is to change software licenses. The maker of a dangerously defective car or toaster can be sued; the owner of blatantly unsafe software has no such recourse, because the software is licensed rather than bought, and the licenses forbid litigation. It is unclear whether the licenses can legally do this (courts currently disagree), but as a practical matter it is next to impossible to win a lawsuit against a software firm. If some big software companies lose product-liability suits, Schneier believes, their confreres will begin to take security seriously.