Expert Voice: Lewis M. Branscomb on Fighting Terrorism

Information technology is critical to the effort to fight terrorism, says Harvard University Professor Lewis M. Branscomb. But government is not providing the needed leadership, and IT executives aren't cooperating.

In June, the National Academies' Committee on Science and Technology for Countering Terrorism issued its report on counterterrorism. Called Making the Nation Safer: The Role of Science and Technology in Countering Terrorism, the report analyzes a variety of systems and infrastructures, such as transportation, healthcare and energy, that are critical to the economic and social well-being of the nation, with the goal of making recommendations for short-term action and long-term research in the fight against terrorism. (The report is available at www.nap.edu/books/0309084814/html/.)

Lewis M. Branscomb, a professor of public policy and corporate management at Harvard University's Kennedy School of Government, cochaired the committee together with Richard D. Klausner, the executive director of global health for the Bill & Melinda Gates Foundation in Seattle. For Branscomb, who served as chief scientist at IBM Corp. from 1972 to 1986, information technology holds a special place in his head and heart. "IT is a wonderful industry because it embodies the economic conservatism of the libertarians together with the imagination and creativity of wild-eyed liberals." Yet it's exactly that combination that makes relations between the industry and the federal government so difficult, he says. Executive Editor Edward H. Baker recently chatted with Branscomb about the National Academies' proposed IT research agenda and the cloudy prospects for cooperation between industry and the federal government.

CIO Insight: The committee's report identified several vulnerable points in the IT infrastructure. Can you discuss them?

Branscomb: Two issues concerned us most. The first involved the use of a cyberattack to enhance the consequences of, and impede the response to, a terrorist attack of a different kind, one using conventional explosives, for example. You can think of a lot of ways for somebody who knew how to do it to create and send e-mail messages to the appropriate authorities and to the press that appear to come from the head of the FBI or some such authority, providing misleading information. The consequence might be the local authorities directing the public to do the wrong thing, making them more vulnerable. You could also imagine a cyberattack making communications among the first responders more difficult. So there's a whole set of possible scenarios involving a cyberattack used to amplify and complicate a conventional attack.

The second category of concern involves the elements of critical infrastructure, for instance, the Internet. It's very insecure, and it could be used to create messages that don't come from the people they're said to come from, and to bring Internet-based systems down with various kinds of worms and other devices such as denial-of-service attacks. When that kind of attack is detected, the word goes out, and the system is shut down. That, of course, is exactly the terrorists' objective. They're not trying to keep the system down for a week. They're happy if they can keep it down for two or three hours while the damage is done.

It's a serious problem because if and when we get the security system to the point that it's able to do complex networking of security sensors and things like that, well, that doesn't do you any good unless you have a network that connects them. And the network has to have the intelligence to be able to decide how many false positives you should tolerate, what to do about sensors that have been knocked offline, and how to combine all that information into a form that some real-world decision-maker can actually use and make a decision about. And if you knock those capabilities out by taking out the Internet, then you deny the emergency operations centers in cities access to the data that would presumably tell them what's going on.

Almost every technology that our report recommends (for example, better filters in large office buildings with sensors to detect toxic substances or biological warfare agents) doesn't do you any good unless the analysis of the information from the sensor gets back to the decision-makers. Once again, it probably depends on the Internet.

Is there a way to beef up the physical security of the Internet?

Well, first of all, you can beef up the security of the traffic over the Internet through encryption. That takes out a big part of the problem. Second, if the computers the system is using for emergency purposes are isolated from the normal servers and put in a place where physical security is available, you solve a lot that way. That's basically how the military does it.

From a much longer-range point of view, the fact that information systems tend to be insecure at some level is a concern, not only in a terrorist context, but in an everyday work context. Our conclusion, and it's certainly consistent with my experience, is that the market for computer security exists only up to about the current level of security available. There really isn't much of a market incentive for corporations and computer companies to spend a lot more money trying to build a secure operating system, for example. It's the same reason banks would rather allow 1 percent or 2 percent defalcations than spend the money it would require to make sure nobody could ever steal anything. The assumption is that the cost to do so is greater than the damage that's done by insecurity.

So the industry has done about as much in the way of secure architectures and operating systems as it feels justified in doing. And the universities do very little in this area. The experts in computer science in the universities tell me that only a handful of universities do much in the way of operating system security, and they are not, by and large, the strongest institutions. So we make a strong recommendation on this point: for the long term, this is really not a satisfactory situation. It's an area where long-term basic research is needed, the government certainly ought to fund it, and it certainly ought to be done in collaboration with the best people in the industry.

Do you see that situation changing in response to the events of last September?

Obviously, any place where the private sector sees a market for secure networks, for connecting security sensors or whatever, they're going to go for it, if the government ever gets its organizational act together to provide direction. Unfortunately, that's not the case today. But sooner or later the federal government will, indeed, have some kind of technical agency that has money, knows what it wants to buy and will be out there trying to buy security systems. That will create some kind of a market, though I don't know how much of one.

So we have a problem, and the problem is not all that different for IT than it is for energy, for transportation, for any of the other areas of infrastructure. The fact is the IT industry has, by and large, spent about as much on developing and deploying secure systems as the market justifies. Most people believe there's no way to predict a catastrophic terrorist attack. So there's no way to determine how much industry can afford to spend to secure itself. And anyway, they assume that if it happens, the government will simply have to make them whole somehow or other.

So right now, most of the infrastructure industries are sitting around waiting for the government to announce what their policy is going to be. To what extent is the government going to try to regulate industry into securing itself? To what extent is the government going to subsidize the technology the IT industry should work on? They'll probably do some of that, but not a lot. To what extent would some special antitrust waivers allow the IT industry to get together and agree on what it's going to do so it can maintain a level commercial playing field?

Do you see any other incentives on the horizon for the IT infrastructure industry to work to improve network security?

Well, the government does have some other kinds of leverage. One that ought to be worrisome if you're an industry executive is this: if it becomes a matter of record that a particular industry has been informed of and is aware of a gross area of vulnerability, and the world of science and technology can do something about it, yet that industry chooses not to do anything about it, then it would inherit a liability in the event of a disaster that causes economic loss. In some cases, the experts tell me they believe that the government simply has to inform the industry of its weaknesses, most of which the industry already knows about, and the industry will be forced to address them to avoid that liability. A related issue, one that hasn't been much explored, involves insurance. Insurance companies are probably going to have, and some of them have already started to have, a multilevel rate structure in which the rates go down if the company invests in better security. That might create a kind of market mechanism.

I spent a couple of hours with Swiss Re Co., one of the biggest reinsurance companies in the world, the other day in Switzerland. Let me tell you what this guy at Swiss Re told me. He basically said, "Look, even if we had an insurance industry price structure that provided an incentive to certain private sector industries to invest in certain kinds of technologies to reduce the risks, and we could prove to our clients that if they spent that money their insurance rates would go down enough that they would get a payout in two years, I don't think they'd do it." I said, "What do you mean they wouldn't do it, even with a payout in two years?" He said, "Look, spending money on insurance is something the CFO has to do, but it's not part of the business algorithm for the company." And I thought to myself, yeah, that's exactly how the environment was in big corporations 15 years ago.
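The payback arithmetic the Swiss Re executive describes is simple to state: the up-front security investment divided by the annual premium saving gives the years until the investment pays for itself. The dollar figures below are illustrative assumptions, not numbers from the interview:

```python
# Hypothetical payback calculation for a security investment that lowers
# insurance premiums. All dollar figures are illustrative assumptions.

def payback_years(investment: float, annual_premium_saving: float) -> float:
    """Years until cumulative premium savings cover the up-front investment."""
    if annual_premium_saving <= 0:
        raise ValueError("savings must be positive for a finite payback")
    return investment / annual_premium_saving

# A firm spends $1M on security; its annual premium drops by $500K.
years = payback_years(investment=1_000_000, annual_premium_saving=500_000)
print(f"Payback in {years:.1f} years")  # Payback in 2.0 years
```

Branscomb's point is that even a two-year payback like this one fails to move firms, because insurance spending sits outside what the executive called "the business algorithm for the company."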

I was a director of Mobil Corp. for many, many years, and in the early days they regarded the environment as this nuisance foisted off on them by the greens and a bunch of dumb politicians. They thought of it as an expense they had to bear to stay out of the newspapers, and they resented every penny of it.

Nowadays, that's not their attitude at all. The environment is part of the cost of doing business. And if you're smart about it, you'll spend less on it than your competitors, and you'll do a better job, and you'll save money.

But that attitude toward catastrophe is not there because most companies take the view that if there is a catastrophe, it lies way beyond their ability to self-insure and recover, and so it's up to the government.

Obviously, the problem isn't simple. I've implied that the problem was that one company can't afford to spend the money because the others won't, and then they're not competitive. But, in fact, even if everybody in the industry could get together and agree on what they had to do, it might turn out that the costs were such that, when passed on to consumers, they would just stop buying that service and the industry would die anyway.

You've touched on the fact that the federal government needs to provide guidance to private industry in the battle against terrorism. Can you elaborate?

First of all, the government certainly has to invest aggressively in the technical analysis, systems analysis, modeling and simulation of an attack, done, where you can, in cooperation with the industry, because a good bit of that is already going on in industry. The goal: to try to model and simulate both the threats and the vulnerabilities in order to help the private sector work out what the optimal technological solutions are.

But the government has a problem: It needs to be able to make some judgments to advise industry (without any guarantees that they're accurate) on where the priorities need to be put for making the country more secure. Our report looked at nine areas of vulnerability, and we thought we found the right technical priorities within each. However, we cannot say whether a biological attack is more likely than an explosion plus a cyberattack, because we don't know what the motives of the terrorists are, or what their capabilities are. That's an intelligence issue.

So since the government owns all the intelligence resources, the government is also going to have to pick out of this huge array of vulnerabilities the ones that deserve serious and immediate attention. In those areas where the vulnerabilities are very high and the fixes are very cheap, I must say I would have the government go out to industry and split the cost 50/50, and just get it done as quickly as you can. There are examples like that in electrical energy distribution, where the vulnerability is unnecessarily big and some of the fixes are cheap.

But from a slightly more long-term point of view, I think it's terribly important that government assign the right official who has the authority and the willingness to call in the appropriate trade associations and other industry groups and begin immediately a dialogue on the optimum balance of industry self-interest, government regulation, shared costs, antitrust waivers, insurance incentives and any other package of policies necessary. Then the industries can come forward and say to the government, "Look, we'll cooperate with you providing you're willing to work with us to pick out the most rational and relevant and least economically destructive policies for getting this work done."

And that has not happened. I've talked to people in the biotech industry and the pharmas. They've been wandering the streets of Washington, D.C., and they can't find anybody to talk to who can say anything dependable about how the government proposes to get vaccines made, for example. In my view, the number one problem is the dialogue between the government and industry that so far the administration has not undertaken, as best as I can tell.

Can you speak to the third part of the report's IT agenda: the use of analytical and intelligence technologies to detect terrorism beforehand or, as you refer to it in the report, information fusion?

We gave a high priority to the information fusion issues in the longer-term research agenda. There are, of course, some nearer-term things that need to be done, but the real problem there is that the various institutions that own the primary information are not yet cooperating with each other. So you can't actually get it down to common data standards and get this stuff together even to do the fusion. Even the FBI and the CIA aren't doing that yet, although they profess their intention to do so.

My impression, very strongly (and I speak for myself and not for the study), is that even if you weren't confronting the problem that some data is audio and some is video and some is pictures and some is text and some is in one language and some in another, even if everything were just text data, and even if all the people who had the data were willing to toss it into one database somewhere where it could be studied, which is not the present case, there would still be very difficult problems associated with knowing how reliable the data is and what to assume about the probable errors in it. All of the accuracy/reliability characterization is a huge problem. In science, that's just terribly obvious. Putting together data from multiple sources to draw scientific conclusions requires a huge amount of disciplined work at the front end in order to know what to do with the numbers, what to believe, whether the first digit is the only significant digit or whether three digits are significant. And doesn't that significance depend upon time of day and environment, or who said it or signed it, or whatever?
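One standard way to combine numbers of differing reliability, the kind of accuracy characterization Branscomb is pointing at, is inverse-variance weighting, where less certain sources count for less. The values and uncertainties below are illustrative assumptions, not data from the report:

```python
# Combine estimates of the same quantity from sources of differing reliability
# using inverse-variance weighting: noisier sources get smaller weights.
# The source values and uncertainties are illustrative assumptions.

def fuse_estimates(values: list[float], stddevs: list[float]) -> tuple[float, float]:
    """Return the precision-weighted mean and its standard deviation."""
    weights = [1.0 / s**2 for s in stddevs]          # weight = 1 / variance
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, (1.0 / total) ** 0.5

# Three reports of the same quantity: one precise, two increasingly rough.
mean, sd = fuse_estimates([10.0, 12.0, 30.0], [1.0, 2.0, 10.0])
print(f"{mean:.2f} +/- {sd:.2f}")  # 10.56 +/- 0.89
```

Note how the wildly off third source, with its large stated uncertainty, barely moves the result, which is exactly the discipline the front-end "scrubbing" is meant to supply: without honest error estimates attached to each source, no such weighting is possible.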

All that takes a huge amount of scrubbing and experience, which is one reason we put a special chapter in the report on systems analysis and the problems of modeling and simulation. All that is related, in my view, to the data mining and data fusion issues, because exercising the data, running simulations through the data to see what comes out the other end and whether it's sensible or not, is necessary in order to discipline the data.

As an expert in these thorny public policy issues, how much of a constraint, in your view, is the issue of privacy on the nation's efforts to combat terrorism?

Well, it's a good bit of a constraint. One obvious place is the President's Department of Homeland Security bill, which would exempt the entire department from the Freedom of Information Act. That's probably a little more liberty from Congressional and public scrutiny than Congress is going to buy. However, there's got to be something that allows private sector data to get to the government without automatically giving it to the world. And while the private sector initially worried about it getting to competitors, now you've got to worry about it getting to Al Qaeda, to the terrorists.

Another issue works in the other direction. If we had a large array of many different kinds of sensors capable of sensing the presence of nuclear material or certain biologics or toxic chemicals or explosives or heat or fire, and we had those sensors deployed in the right places and connected to the right decision algorithms, then you wouldn't need to search people because the sensors would do the searching for you passively. And you wouldn't need to open every carton and every suitcase. In that kind of world, you can make the argument that the sensors and the databases and the networks give you more civil liberties, not less. And I think that's a legitimate argument.

Do you see a slackening off in the urgency to counter terrorism?

I think it depends hugely on the pattern of terrorist activity around the world. If it goes away, the U.S. public won't sustain its urgency. If we have a sprinkling of incidents, some big, some middle-size, some small, around the world, and it looks like it's just not going away, I think people will get their acts together.

But right now, it's really tough. Take the nuclear threat. That's the worst threat we have. If the terrorists could get their hands on 50 kilograms of highly enriched uranium, they don't need a weapon; they can make one. It's well known how to do that. Not a very efficient one, but it'll work; it'll kill tens of thousands of people. They can get it into the country pretty easily, and they can assemble it without a lot of difficulty.

However, we have no way of calculating how hard it is to get hold of the stuff. And so the priority for the government is to make sure that that's impossible and make a deal with the Russians to mix their highly enriched uranium with ordinary uranium so that it's still useful in a power plant but not in a bomb. But how would you calculate the risk? It's unknowable.

You know, vulnerability to terrorism is a new problem as far as Al Qaeda is concerned, but it is not a problem that's going to go away even when Al Qaeda goes away. So long as the vulnerabilities are there and the demonstration of their accessibility is known, these threats are going to exist. We have a competitive economy that maximizes efficiency, and does so at the expense of resilience, and we need to restructure the whole economy over the next 20 years, and pay more than we're now paying, to make it more robust. There's nothing wrong with that; that's just the way it ought to be. So the government's view really should not be, "How do I bludgeon industry into countering catastrophic terrorism?" but "How do we gradually, slowly and intelligently restructure the whole economy so that it's much more resilient against all kinds of damage: human error, hurricanes, earthquakes and terrorism?"

The reason we never use the phrase "war on terrorism" in our report is that war is a very poor metaphor for what this is. This is a whole new condition of the economy with respect to risk management. Twenty years from now, this will all be normal.

Lewis M. Branscomb, who was trained as a physicist, is the Aetna Professor of Public Policy and Corporate Management (emeritus) at Harvard University's John F. Kennedy School of Government, where until recently he directed the school's Science, Technology and Public Policy Program. Between 1972 and 1986, he served as the chief scientist at IBM Corp. and a member of the company's Corporate Management Board.