
There’s a delicious irony in some of the testimony on cybersecurity that the Senate Homeland Security and Governmental Affairs Committee will hear today (starting at 2:30 Eastern — it’s unclear from the hearing’s page whether it will be live-streamed). Former National Security Agency general counsel Stewart Baker flubs a basic mathematical concept.

If Congress credits his testimony, is it really equipped to regulate the Internet in the name of “cybersecurity”?

Exponential growth occurs when a quantity’s growth rate is proportional to its current value. It’s nicely illustrated with rabbits. If in week one you have two rabbits, and in week two you have four, you can expect eight rabbits in week three and sixteen in week four. That’s exponential growth: the number of rabbits each week dictates the number of rabbits the following week. By the end of the year, the earth will be covered in rabbits. (The Internet provides us an exponents calculator, you see. Try calculating 2^52.)
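The rabbit arithmetic can be sketched in a few lines of Python (the function name and starting numbers are just for this illustration):

```python
# Exponential growth: the count in each week is a fixed multiple
# of the count in the week before.
def rabbits(week, start=2, factor=2):
    """Rabbit count in the given week, starting from `start` in week one."""
    return start * factor ** (week - 1)

for week in (1, 2, 3, 4):
    print(week, rabbits(week))  # 2, 4, 8, 16

# After 52 weeks, the count is 2^52:
print(rabbits(52))  # 4503599627370496
```

That last number, over four quadrillion, is the point of the 2^52 exercise: exponential growth runs away very quickly.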

The vulnerabilities of computers, networks, and data may be growing. But such vulnerabilities are not a function of the number of transistors that can be placed on an integrated circuit. Baker is riffing on Moore’s Law, which describes long-term exponential growth in computing power.

Instead, vulnerabilities will generally be a function of the number of implementations of information technology. A new protocol may open one or more vulnerabilities. A new piece of software may have one or more vulnerabilities. A new chip design may have one or more vulnerabilities. Interactions between various protocols and pieces of hardware and software may create vulnerabilities. And so on. At worst, in some fields of information technology, there might be something like cubic growth in vulnerabilities, but it’s doubtful that such a trend could last.

Why? Because vulnerabilities are also regularly closing. Protocols get ironed out. Software bugs get patched. Bad chip designs get fixed.

There’s another dimension along which vulnerabilities are also probably growing. This would be a function of the “quantity” of information technology out there. If there are 10,000 instances of a given piece of software in use with a vulnerability, that’s 10,000 vulnerabilities. If there are 100,000 instances of it, that’s 10 times more vulnerabilities—but that’s still linear growth, not exponential growth. The number of vulnerabilities grows in direct proportion to the number of instances of the technology.

Ignore the downward pressure on vulnerabilities, though, and put growth in the number of vulnerabilities together with growth in the propagation of vulnerabilities. Don’t you have exponential growth? No. You still have linear growth. Growth in vulnerability from new implementations of information technology and growth in new instances of that technology multiply together. Across technologies, the products sum. They don’t act as exponents to one another.
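The distinction between multiplying and exponentiating can be made concrete with a sketch (all counts here are invented for illustration): per technology, flaws per implementation multiply with deployed instances, and across technologies the products sum. Scaling either factor scales the total proportionally, which is linear, not exponential, growth.

```python
# Total vulnerabilities: per technology, flaws-per-implementation
# multiplied by deployed instances; across technologies, the products sum.
def total_vulnerabilities(technologies):
    return sum(flaws * instances for flaws, instances in technologies)

base = [(3, 10_000), (1, 100_000)]      # (flaws, instances) per technology
print(total_vulnerabilities(base))      # 130000

# Ten times as many instances means ten times the vulnerabilities:
scaled = [(flaws, 10 * instances) for flaws, instances in base]
print(total_vulnerabilities(scaled))    # 1300000 -- linear, not exponential
```

Nothing in this arithmetic compounds on itself; that is what separates it from the rabbit example above.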

Baker uses “vulnerability” and “threat” interchangeably, but careful thinkers about risk don’t. Vulnerability is the existence of a weakness. Threat is someone or something animated to exploit it (a “hazard” if that thing is inanimate). Vulnerabilities don’t really matter, in fact, if there isn’t anyone to exploit them. Do you worry about the number of hairs on your body being a source of pain? No, because nobody is going to come along and pluck them all. You need to have a threat vector, or vulnerability is just idle worry.

Now, threats can multiply quickly online. When exploits to some vulnerabilities are devised, their creators can propagate them quickly to others, such as “script kiddies” who will run such exploits everywhere they can. Hence, the significance of the “zero-day threat” and the importance of patching software promptly.

As to consequence, Baker cites examples of recent hacks on HBGary, RSA, Verisign, and DigiNotar, as well as weakness in industrial control systems. This says nothing about growth rates, much less how the number of hacks in the last year forms the basis for more in the next. If some hacks allow other hacks to be implemented, that, again, would be a multiplier, not an exponent. (Generally, these most worrisome hacks can’t be executed by script kiddies, so they are not soaring in numerosity. You know what happens to consequential hacks that do soar in numerosity? They’re foreclosed by patches.)

Vulnerability and threat analyses are inputs into determinations about the likelihood of bad things happening. The next step is to multiply that likelihood by consequence. The product gives a sense of how important a given risk is. That’s risk assessment.
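That last step is a one-line formula. The figures below are placeholders, not real estimates, but they show why the multiplication matters:

```python
# Risk assessment: risk is likelihood multiplied by consequence.
def risk(likelihood, consequence):
    """likelihood is a probability in [0, 1]; consequence is a loss magnitude."""
    return likelihood * consequence

# A frequent but minor event can carry less risk than a rarer, severe one:
print(risk(0.5, 1_000))        # 500.0
print(risk(0.125, 1_000_000))  # 125000.0
```

Neither likelihood nor consequence alone tells you how much to worry; only their product does.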

Do your representatives in Congress get the math involved here? Do they know the difference between exponential growth and linear growth? Do they “get” risk management? Chances are they don’t. They may even parrot the “statistic” that Baker is putting forth. How well equipped do you suppose a body like that is for telling you how to do your cybersecurity?

During a recent Congressional hearing on President Obama’s trade agenda, Rep. Sander Levin (D-Mich.) stated his continued objections to the FTA with Colombia:

“Union worker violence in Colombia remains unacceptably high - if not the highest in the world. Limited progress is being made in the investigation and prosecution of those responsible. Additionally, reports indicate that threats against union workers and others have increased, and there has been little concrete action today to pursue these cases.” [Emphasis added].

Levin warned that, despite signs of a more constructive approach to this issue from Colombia’s new president Juan Manuel Santos, “The only adequate measuring stick is progress on the ground.”

Rep. Levin should take a look at the Free Trade Bulletin that my colleague Dan Griswold and I published this week: “Trade Agreement Would Promote U.S. Exports and Colombian Civil Society.” When it comes to progress on the ground regarding violence against union members, Colombia already has a remarkable record. The number of assassinations of trade unionists has dropped 77% since its peak in 2001, compared to the total number of homicides in the country, which declined by 44% in the same period.

[Chart omitted. Sources: National Union School (ENS) and Ministry of Social Protection (MPS).]

If we look at the homicide rate as defined by the number of murders per 100,000 inhabitants, the rate for union killings was 5.3 per 100,000 unionists in 2010, more than six times lower than the homicide rate for the overall population (33.9 per 100,000 inhabitants).

In our paper, we present evidence that shows that union members enjoy greater security than other vulnerable groups of Colombian civil society, such as teachers, councilmen and journalists. Also, we highlight research conducted by economists Daniel Mejía and María José Uribe of the Universidad de los Andes in Colombia, which found no statistical evidence supporting the claim that trade unionists are targeted for their activities. Instead, their results show that “the violence against union members can be explained by the general level of violence and by low levels of economic development.”

As for Rep. Levin’s claim that there has been “little concrete action” to pursue crimes against trade unionists, once again the evidence says otherwise. In 2010 there were over 1,400 trade unionists under a government protection program—more than any other vulnerable group of Colombia’s civil society. In 2007, a special department was created in the Office of the Prosecutor General dedicated exclusively to solving crimes against union members and bringing the perpetrators to justice. Close to 85 percent of the sentences issued since 2000 for assassinations of trade unionists were issued after the creation of this department.

If Rep. Levin’s “adequate measuring stick is progress on the ground,” then he should recognize the tremendous achievements made by Colombia so far in reducing violence against trade unionists, and solving the crimes committed against them.

Our colloquy leaves somewhat open what should replace color-coding. Because most threat warnings are false alarms, and because exhortations to vigilance will tend toward the vagueness of the color-coding system, Ben hopes “DHS winds up being tighter-lipped.”

His points are good ones, but they don’t dissuade me from my belief that DHS should “begin informing the public fully about threats and risks known to the U.S. government.”

The right answer here centers on who is better at digesting threat information—experts in the national security bureaucracy or the public?

There is a great deal of expertise in the U.S. government focused on turning up threat information and digesting it for policymakers. However, that expertise has limits, often manifested as threat inflation, as Ben notes, and as myopia. Daniel Patrick Moynihan’s Secrecy: The American Experience illustrates the latter well (especially the edition with Richard Gid Powers’ fine introduction).

The public consists of hundreds of millions of subject matter experts in every walk of life. They include owners and operators of all our infrastructure, reporters and commentators in the professional and amateur press, academics, state and local law enforcement personnel, information networks, and social networks of all kinds. We have security-interested folk in the hundreds of millions spread out across the land, all in regular communication with each other. We’re a tremendously powerful information processing machine. I believe this public can do a better job of digesting threat information than “experts,” particularly when it comes to terrorism threats, which can—theoretically, at least—manifest themselves pretty much anywhere.

The public constantly digests risk and threat information from other walks of life. We digest information about ordinary crime, health and disease, finance and investment, driving and walking, etc., etc. There is nothing about terrorism that disables the public from making judgments about threat information and incorporating it into daily life. People can figure out what matters and what does not, and they can apply information in the spheres they know.

When I say “fully inform,” I don’t argue for broadcasting every speck of information the U.S. government collects. There are limited domains in which information sharing will reveal sources and methods, undercutting access to future information. Appropriate caveats are part of “fully” informing, of course. Natural pressure will cause threats that are too speculative to be winnowed from public release. But even opening a firehose will get people the water they need to drink.

Tight lips sink ships. The presumption should fall in favor of sharing information with the public. After a period of adjustment lasting from months to a year or more, the American information system would incorporate open threat information into daily life, and the country would be more secure. People made confident by the ability to consume and respond to threat information will feel more secure, which is the other half of what security is all about.