Policy Institutes

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

—

This week, the royal families of Clinton and Bush offered up their 2016 campaign insights on climate change. People have been very interested in what they would say because, as Secretary of State, Clinton gave hints that she was even more aggressive on the issue than her boss, and Bush is the son of George H.W. Bush, who got us into this mess in the first place by going to Rio in 1992 and signing off on the Climate Treaty adopted there.*

Hillary Clinton unveiled her “climate plan” first. As feared, it’s a step up from Obama’s, with an impossibly large target for electricity production from renewable energy. While her fans were exuberant, noticeably absent from her plan were her positions on the Keystone XL pipeline and a carbon tax.

Manhattan Institute scholar Oren Cass (whose take on the carbon tax we’ve featured previously) was, overall, less than impressed, calling Hillary’s climate plan a “fake plan” in that it would have no real impact on the climate. Cass identifies what Hillary’s “real” plan is—pushing for a $100+ billion annual international “Green Climate Fund” (largely populated with U.S. dollars) to be available to developing countries to fight/prepare for climate change.

Hillary Clinton has a real climate change plan and a fake climate change plan. She released the fake plan earlier this week to predictably rapturous media applause for its “far-reaching” and “comprehensive” agenda.

…The plan is most obviously fake because it is not really a climate plan at all. Clinton offers no estimated reductions in carbon dioxide emissions or future temperatures, probably because her plan cannot achieve any meaningful ones. Her ultimate goal to generate 33 percent of U.S. electricity from renewable sources by 2027 would reduce global emissions by less than 2 percent annually, even if every new kilowatt-hour of renewable power managed to replace coal-fired power. That is only a [tiny-eds] fraction of the increase expected from China during the same period.

Instead of claiming any climate success, Clinton’s campaign material emphasizes health benefits from reducing air pollutants (not carbon dioxide). It promotes job creation (though job losses would be at least as large). And it promises to “make the United States the world’s clean energy superpower,” whatever that means.

The plan is most importantly fake because it obscures an actual climate plan that Clinton has no interest in discussing with voters. The real plan, simply put, is to pay for other countries to reduce their emissions through an unprecedented transfer of wealth from the developed world to the developing world. This plan emerged from the international climate negotiations in Copenhagen in 2009, at which then-Secretary Clinton pledged the United States would help create a Green Climate Fund of at least $100 billion in annual aid – a commitment comparable in scale to all existing development aid from OECD countries.

The silly gap in Clinton’s climate plan is the continuing no-comment on the Keystone XL pipeline. The surprising one is the absence of a price on carbon. But the dangerous one is the omission of what she actually wants to do.

Clearly, Hillary is more interested in influencing public opinion than the actual climate.

Jeb Bush then offered up his thoughts about climate change. In an interview with Bloomberg BNA, Bush said, among other things, that “the climate is changing” and that “human activity has contributed to it” but that “we should not say the end is near.”

Sounds like a solid take!

Bush went on to offer his opinions on various aspects of energy regulations currently aimed at climate change. Keystone XL pipeline? “Yes.” Renewable fuel standard? “2022 is the law and is probably the good break point.” EPA’s Clean Power Plan? “[I]rresponsible and ineffective.”

For example, here’s his full answer to Bloomberg BNA’s question “Is climate change occurring? If so, does human activity significantly contribute to it?”:

The climate is changing; I don’t think anybody can argue it’s not. Human activity has contributed to it. I think we have a responsibility to adapt to what the possibilities are without destroying our economy, without hollowing out our industrial core.

I think it’s appropriate to recognize this and invest in the proper research to find solutions over the long haul but not be alarmists about it. We should not say the end is near, not deindustrialize the country, not create barriers for higher growth, not just totally obliterate family budgets, which some on the left advocate by saying we should raise the price of energy so high that renewables then become viable.

U.S. emissions of greenhouse gases are down to the same levels emitted in the mid-1990s, even though we have 50 million more people. A big reason for this success is the energy revolution which was created by American ingenuity—not federal regulations.

This is an encouraging stance from a Republican presidential candidate. And one that we think should come to dominate the issue—from both sides. It serves no one to deny that humans are causing climate change, nor to cry that we’re all going to die. Actions should be appropriate to the magnitude of the issue—in other words, lukewarm.

—

*Many people advised him not to go. But he did, anyway, probably thinking he would get yelled at if he didn’t, and lose votes in the upcoming Presidential election. How well did that work out for him?

Over the last couple of decades, reserve requirements all but vanished as a means of bank regulation and monetary control. But now a new variation on reserve requirements is being introduced through the capital and liquidity rules of the Basel Accords.

Canada, the UK, Sweden, Australia, New Zealand, and Hong Kong have all abolished traditional reserve requirements. In many other countries, reserve requirements have become a dead letter. In the U.S., for instance, the Fed under Alan Greenspan reduced all reserve requirements to zero except for transactions deposits (checking accounts), while permitting banks to evade reserve requirements on transactions balances by using sophisticated computer software to regularly “sweep” those balances into money market deposit accounts, which have no reserve requirement. In 2011 Congress went a step further by allowing the Fed to eliminate all reserve requirements if it so desired. The Eurozone, for its part, began with a reserve requirement of only 2 percent, which was reduced to 1 percent in January 2012.

There were good reasons for this deregulatory trend. Economists consider reserve requirements an implicit tax on banks, requiring them to hold non-interest-earning assets, while central banks considered changes in such requirements too blunt an instrument for monetary control. The Fed discovered the latter shortcoming when, in the midst of the Great Depression, having just gained control over the reserve requirements of national banks, it doubled them, contributing to the recession of 1937–38.

Ostensibly designed to keep banks more liquid, reserve requirements can prevent them from drawing on their liquidity when it is most needed. As Armen A. Alchian and William R. Allen point out in University Economics (1964): “To rely upon a reserve requirement for the meeting of cash-withdrawal demands of banks’ customers is analogous to trying to protect a community from fire by requiring that a large water tank be kept full at all times: the water is useless in case of emergency if it cannot be drawn from the tank.”

As reserve requirements became less fashionable, advocates of more stringent bank regulation resorted instead to risk-based capital requirements, as implemented through the international Basel Accords. More recently the increasingly widespread practice of paying interest on bank reserves has also given central banks an alternative and less burdensome means for inducing banks to hold more reserves.

But in Basel III, agreed upon in 2010-2011, there appeared a new kind of liquidity requirement that mimics reserve requirements in many respects. Known as the “Liquidity Coverage Ratio” or LCR, it requires banks to hold “high quality liquid assets” (HQLA) sufficient to cover potential net cash outflows over 30 days. In September 2014 the Fed, the Comptroller, and the FDIC finalized the rule implementing the Liquidity Coverage Ratio. The rule, which took effect at the beginning of 2015, must be fully complied with by January 2017.

Far from involving a simple ratio, as earlier reserve requirements did, the Liquidity Coverage Ratio is extremely complicated, filling 103 pages in the Federal Register. The rule does not apply to small community banks but instead to banks with more than $250 billion of assets, with a modified rule applying to the holding companies of both banks and savings institutions. The Fed also plans to impose a similar rule on non-bank financial institutions. But because a variant of the rule applies to bank holding companies on a “consolidated basis,” the Liquidity Coverage Ratio already affects most major investment banks, which are owned by bank holding companies.

Unlike traditional reserve requirements, the Liquidity Coverage Ratio does not call for any minimum quantity of cash reserves. Instead, it calls for a minimum quantity of various high quality liquid assets. Weighting bank assets according to their maturity, marketability, and riskiness, the LCR even counts as high quality some forms of corporate debt at half of face value. The LCR also differs in being applied, not just to bank deposits, but to nearly all bank liabilities, including large CDs, derivatives, and off-balance sheet loan commitments, according to their maturity.

In short, the Liquidity Coverage Ratio is designed to reduce maturity mismatches for large financial institutions in order to protect against the kind of panics in the repo and asset-backed commercial paper markets that occurred during the financial crisis of 2007-2008. In any case, the rule will still require banks to hold more reserves or short-term Treasury securities than they otherwise might prefer. Since the rule was under discussion by 2010, it could be another reason—along with interest on reserves and capital requirements—why U.S. banks have continued to hold more than 100-percent reserves behind M1 deposits.
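As a rough sketch of the arithmetic involved (the asset categories and balance-sheet numbers below are hypothetical; the only haircut taken from the rule itself is the 50 percent discount on qualifying corporate debt), the ratio compares haircut-weighted HQLA to projected 30-day net cash outflows:

```python
# Illustrative sketch of the LCR calculation, not the official rule.
# Haircut weights: the fraction of face value that counts toward HQLA.
HQLA_WEIGHTS = {
    "central_bank_reserves": 1.00,  # cash equivalents count in full
    "treasury_securities": 1.00,
    "corporate_debt": 0.50,         # counted at half of face value per the rule
}

def liquidity_coverage_ratio(holdings, net_outflows_30d):
    """Haircut-weighted HQLA divided by projected 30-day net cash outflows."""
    hqla = sum(HQLA_WEIGHTS.get(asset, 0.0) * face_value
               for asset, face_value in holdings.items())
    return hqla / net_outflows_30d

# A hypothetical balance sheet, in $ billions.
holdings = {
    "central_bank_reserves": 40.0,
    "treasury_securities": 50.0,
    "corporate_debt": 20.0,
}
ratio = liquidity_coverage_ratio(holdings, net_outflows_30d=100.0)
print(f"LCR = {ratio:.0%}")  # 40 + 50 + 0.5*20 = 100 vs. 100 → 100%, just compliant
```

Note that assets outside the high-quality categories count for nothing at all, which is one reason the rule pushes banks toward reserves and short-term Treasuries.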

Every time there is a financial crisis, the proposal to force banks to hold higher reserve ratios, if not 100-percent reserves, resurfaces. During the Great Depression, this proposal went under the name of the Chicago Plan and even received support from Milton Friedman in his early writings. The proposal was called “narrow banking” during the savings and loan crisis. Since the recent crisis, it has been advocated in one form or another by such economists as Laurence Kotlikoff of Boston University, John Cochrane of the University of Chicago, and Martin Wolf of the Financial Times. All of these proposals hinge on the government paying interest on bank reserves.

The new Liquidity Coverage Ratio in one sense is less restrictive than these proposals but in another is more so. It is less restrictive in that it allows deposits to be covered by liquid securities other than cash equivalents, and in that sense is a bit reminiscent of the discredited real-bills doctrine, which insisted that banks should make only short-term, self-liquidating loans.

But the Liquidity Coverage Ratio is more restrictive than conventional reserve requirements in so far as it applies to a much broader range of bank liabilities. Unlike such requirements, it is striving to prevent banks from engaging in significant maturity transformation, which involves bundling and converting long-term securities into short-term securities. That makes it closest in spirit to Cochrane’s reform proposal, which combines a 100-percent reserve requirement for deposits with a 100-percent capital requirement for all other bank liabilities. Cochrane’s proposal really would eliminate all maturity mismatches; indeed, it would make all banks resemble combinations of safe-deposit businesses on the one hand and mutual funds or, for that matter, Islamic banks, on the other.

Will the Liquidity Coverage Ratio ultimately work? Although the question requires further thought and study, I doubt it. Several monetary economists, considering the rule’s implementation in Europe, are more optimistic than I am, and a few even think that it will not be restrictive enough. But they may be overlooking the long-term downsides.

As with so many past banking regulations, this one could ultimately end up being non-binding. Banks may find loopholes in the rule, or may innovate around it, and the rule’s very complexity and supposed flexibility is likely to make doing these things easier. On the other hand, when the next financial crisis hits, by hobbling a bank’s discretionary control over its balance sheet, the rule may well exacerbate the crisis. To the extent that the rule is binding, it changes the fundamental nature of banking in a way that may curtail efficient financial intermediation. Whatever happens, it definitely increases the government’s central planning of the allocation of savings. In the final analysis, it is another futile attempt to use prudential regulation to overcome the excessive risk taking resulting from the moral hazard created by deposit insurance and too-big-to-fail.

News comes this morning that Beijing has been awarded the 2022 Winter Olympics, beating out Almaty, Kazakhstan. Which touches on a point I made in this morning’s Boston Herald:

Columnist Anne Applebaum predicted a year ago that future Olympics would likely be held only in “authoritarian countries where the voters’ views will not be taken into account” — such as the two bidders for the 2022 Winter Olympics, Beijing and Almaty, Kazakhstan.

Fortunately, Boston is not such a place. The voters’ views can be ignored and dismissed for only so long.

Indeed, Boston should be celebrating more than Beijing this week. A small band of opponents of Boston’s bid for the 2024 Summer Olympics beat the city’s elite – business leaders, construction companies, university presidents, the mayor and other establishment figures – because they knew what Olympic Games really mean for host cities and nations:

E.M. Swift, who covered the Olympics for Sports Illustrated for more than 30 years, wrote on the Cognoscenti blog a few years ago that Olympic budgets “always soar.”

“Montreal is the poster child for cost overruns, running a whopping 796 percent over budget in 1976, accumulating a deficit that took 30 years to repay. In 1996 the Atlanta Games came in 147 percent over budget. Sydney was 90 percent over its projected budget in 2000. And the Athens Games cost $12.8 billion, 60 percent over what the government projected.”

Bent Flyvbjerg of Oxford University, the world’s leading expert on megaprojects, and his co-author Allison Stewart found that Olympic Games differ from other such large projects in two ways: They always exceed their budgets, and the cost overruns are significantly larger than other megaprojects. Adjusted for inflation, the average cost overrun for an Olympics is 179 percent.
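The overrun figures quoted above follow the standard definition: (actual cost minus budgeted cost) divided by budgeted cost. The Athens budget figure below is derived from the quoted numbers ($12.8 billion at 60 percent over implies a projection of $8 billion); the Montreal inputs are illustrative:

```python
def overrun_pct(actual, budget):
    """Cost overrun as a percentage of the original budget."""
    return (actual - budget) / budget * 100.0

# Athens 2004: $12.8bn actual vs. an implied ~$8bn projection.
print(overrun_pct(12.8, 8.0))   # ≈ 60
# Montreal 1976: a 796% overrun means the Games cost roughly nine times
# their original budget (1 + 796/100 ≈ 8.96x).
print(overrun_pct(8.96, 1.0))   # ≈ 796
```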

Bostonians, of course, had memories of the Big Dig, a huge and hugely disruptive highway and tunnel project that over the course of 15 years produced a cost overrun of 190 percent.

It isn’t every day that a person can go to his or her job, work, not participate in any criminal activity, and still get a prison sentence. At least, that used to be the case: the overcriminalization of regulatory violations has unfortunately led to the circumstance that corporate managers now face criminal—not just civil—liability for their business operations’ administrative offenses.

Take Austin and Peter DeCoster, who own and run an Iowa egg-producing company called Quality Egg. The DeCosters pleaded guilty to violating certain provisions of the Food, Drug, and Cosmetic Act because some of the eggs that left their facilities contained salmonella enteritidis, a bacterium harmful to humans. They were sentenced to 90 days in jail and fined $100,000 for the actions of subordinates, who apparently failed, also unknowingly, in their quality-control duties.

In other words, the “crime” that the DeCosters were convicted of didn’t require them to have put eggs with salmonella into interstate commerce, or even to have known (or reasonably been able to foresee) that Quality Egg was putting such eggs into interstate commerce. It didn’t even require the quality-control operator(s) most directly involved in putting the contaminated eggs into interstate commerce to have known that they were contaminated.

Nearly a century of jurisprudence has held that imprisoning corporate officers for the actions of subordinates is constitutionally suspect, given that there’s neither mens rea (a guilty mind) nor even a guilty act—the traditional benchmarks of criminality since the days of Blackstone. Yet there are about 300,000 regulations that can trigger criminal sanctions. These rules are too often ambiguous or arcane, and many lack any requirement of direct participation or knowledge, imposing strict liability on supervisors for the actions (or inactions) of their subordinates.

In United States v. Quality Egg, the district court ruled that courts have previously held that “short jail sentence[s]” for strict-liability crimes are the sort of “relatively small” penalties that don’t violate constitutional due process. Such a sentence has only been imposed once in the history of American jurisprudence, however, and for a much shorter time on defendants with much more direct management of the underlying bad acts. Additionally, prison is not the sort of “relatively small” penalty—like a fine or probation—that the Supreme Court has allowed for offenses that lack a guilty mind requirement.

Joining the National Association of Manufacturers, Cato points out in an amicus brief supporting the DeCosters’ appeal that this case presents an opportunity for the U.S. Court of Appeals for the Eighth Circuit to join its sister court, the Eleventh Circuit, in holding that prison sentences constitute a due-process violation when applied to corporate officers being charged under a strict-liability regulatory regime.

This week, the United States and Turkey agreed on a deal to expand cooperation in the fight against ISIS, in part through the creation of an ‘ISIS-free zone’ in Northern Syria. The scope of the agreement is unclear, not least because Turkish officials are hailing it as a ‘safe zone’ and a possible area for refugees, while U.S. officials deny most of these claims. U.S. officials are also explicit that the agreement will not include a no-fly zone, long a demand of U.S. allies in the region.

But what’s not in doubt is that the United States and Turkey plan to use airstrikes to clear ISIS fighters from a 68-mile zone near the Turkish border. The zone would then be run by moderate Syrian rebels, although exactly who this would include remains undefined.

Over at the Guardian today, I have a piece talking about the many problems with this plan, in particular the fact that it substantially increases the likelihood of escalation and mission creep in Syria:

“The ambiguity around the ‘Isis-free zone’ creates a clear risk of escalation. It’s unclear, for example, whether groups engaged in fighting the regime directly will be allowed to enter the zone and train there, or only those US-trained and equipped rebels focused on Isis. US officials have been keen to note that Assad’s forces have thus far yielded to American airstrikes elsewhere in Syria – choosing not to use their air defense system and avoiding areas the US is targeting - but that is no guarantee that they would refrain from attacking opposition groups sheltering inside a safe zone.”

The plan is just another step in the current U.S. approach to Syria, which has been haphazard and ill-thought-out. The United States is engaged in fighting ISIS while most fighters on the ground want to fight the Assad regime, a key reason for the abysmal recruitment record of the U.S. military’s new train-and-equip programs in Syria. Increased U.S. involvement risks dragging America into another costly, open-ended civil war.

Renewed diplomatic efforts to find a settlement are the only way to effectively address the Syrian crisis. A negotiated settlement which sees Assad removed from power - while allowing some of his followers to participate in a unified Syrian government - would allow fighters inside the country to focus on fighting ISIS, while ensuring that Syria’s minorities are not entirely disenfranchised.

A successful diplomatic settlement will be difficult to achieve. Negotiations would by necessity involve other unpleasant states, including Assad’s Iranian and Russian patrons. But there have been recent indications that Moscow may be more willing to talk, and the ties forged during the U.S.-Iranian nuclear talks could prove valuable. The United Nations is once again trying to restart talks, an initiative the United States should support wholeheartedly. Nonetheless, diplomacy is infinitely better than the slippery slope to military intervention offered by this week’s agreement with Turkey.

You can find the whole article at the Guardian here. For more thoughts on how a U.S. diplomatic strategy for Syria might work, check out this podcast.

There is something fishy about the Cecil the lion story. Don’t get me wrong, I find trophy hunting nauseating. Still, why on earth would Walter Palmer pay $50,000 to kill a lion? Per capita GDP in Zimbabwe is $936 per year (2014 dollars). If Palmer wanted to do something illegal, he could have killed a lion for a fraction of the price. (I assume that any lion would do. Palmer happened to get “unlucky” and kill the most famous lion in Zimbabwe.)

Goodness knows that magnificent wild animals get slaughtered throughout Zimbabwe – for food, skin and horns – on a daily basis and for free. The culprits include hungry locals, corrupt parks officials, members of the military and government officials. It is very likely that Palmer believed (or wanted to believe) that he was buying a legal kill and outsourced the details (permits, etc.) to the locals. That does not make Palmer innocent. He should have known better than to go on a safari in a failed state – with no property rights and no rule of law. That said, the story should be understood in the proper context: it is not individual hunters, but poverty and anarchy that are destroying Zimbabwe’s wildlife.

Last year Narendra Modi won an unusually strong majority in India’s parliamentary election. Modi subsequently visited the U.S. and was warmly welcomed by both the Obama administration and Indian-Americans.

Although ethnic Indians have circled the globe as entrepreneurs and traders, the Delhi government turned dirigiste economics into a state religion. Mind-numbing bureaucracies, rules, and inefficiencies were legion.

Eventually modest reform came, but even half-hearted half-steps generated overwhelming political opposition. Last May the Hindu nationalist Bharatiya Janata Party, led by Modi, handed the venerable Congress Party its greatest defeat ever. He seemed poised to transform his nation economically.

As the anniversary of that visit approaches, the Modi dream is fading. He simply may not believe in a liberal free market.

Moreover, few reforms of significance have been implemented. The failures overshadow the Modi government’s successes and highlight its lost opportunities. Critics cite continuing outsize budget deficits and state direction of bank lending.

Former privatization minister Arun Shourie observed last December: “when all is said and done, more is said than done.” Unfortunately, Modi has missed the “honeymoon” period during which his political capital was at its greatest. Time is slipping away.

Indeed, Indian politics quickly began shifting back to business as usual. Modi has been forced to fend off charges of corruption and other misbehavior.

None of this is unusual by Indian standards, but voters are getting fed up. Disappointed Delhi voters gave a landslide victory to a new anti-corruption party in February.

Religious violence also is on the rise, largely instigated by Hindu extremists. While serving as Gujarat state’s chief minister, Modi was implicated in the 2002 riots which killed more than 1200 people, mostly Muslims. Since his election sectarian attacks are up, on Christians as well as Muslims.

Modi has not encouraged the rising violence, but his government has catered to Hindu nationalist sentiments. Only after an assault on a Christian school—the vast majority of whose students and teachers are Hindus—did he promise that his government would give “equal respect to all religions.”

Despite his disappointing economic record so far, Modi still has an opportunity to liberalize India’s economy. In upcoming years his party will take control of the appointive upper house, which has impeded some of his initiatives.

Argued Sadanand Dhume of the American Enterprise Institute, “in Gujarat, too, he started slowly, but ended up presiding over a long boom.” However, it is not enough for his government to tinker with nonessential reforms.

As I point out on Forbes online: “India desperately needs strong growth for years, even decades, to move to the first rank of nations, as China has done. India has extraordinary potential. But for decades the Indian government has squandered its future.”

Despite the high hopes generated after the BJP’s dramatic victory, nothing has really changed. While growth has picked up in India, that improvement is not sustainable absent far more fundamental and comprehensive reform.

Without sustainable growth, India will not follow China’s example to build a competitive manufacturing sector, generate broad-based income growth, and create a new great power capable of influencing global affairs. Such reforms will not be easy, but making tough decisions presumably is why the Indian people elevated Modi.

Some people predict the 21st Century will be the Chinese century. It is more likely to be the Asian Century, at least if Narendra Modi takes advantage of his unique opportunity. Leading India into a better, more prosperous future obviously would benefit India and the Indian people. It also would benefit the rest of the world.

Today the Hamilton County, Ohio prosecutor’s office released body camera footage showing University of Cincinnati police officer Ray Tensing shoot and kill 43-year-old Samuel DuBose during a routine traffic stop on July 19th. Tensing will face murder and voluntary manslaughter charges. Speaking about the killing, Hamilton County prosecutor Joe Deters used strong and condemning language, calling the killing “senseless” and “asinine.” He also said that the body camera footage of the killing was “invaluable” and that without it, he would probably have believed Tensing’s erroneous account of the incident.

DuBose’s death demonstrates once again that body cameras are not a police misconduct panacea. Tensing, who knew his body camera was on, shot an unarmed man in the head and then lied about being dragged down the street. Nonetheless, the tragic incident does provide an example of how useful body camera footage can be to officials investigating allegations of police misconduct.

Ahead of the release of the video, Cincinnati Police Chief Jeffrey Blackwell said that the video “is not good.” If convicted, Tensing faces life in prison.

I’ve seen many police body camera videos while researching and writing about the technology, and the video of DuBose’s death is certainly among the most disturbing that I have seen.

Watch the footage below.

Warning: this footage contains graphic violence.

Technology that highlights incidents of police misconduct ought to be welcomed by advocates of accountability and transparency in law enforcement. As Deters himself said in today’s press conference, the body camera led to Tensing’s murder indictment.

But in order for police misconduct to be adequately addressed there need to be significant reforms of police practices and training, specifically related to the use of force. Indeed, Deters said in the press conference today that Tensing should never have been a police officer. A man who quickly resorts to shooting an unthreatening man in the head during a stop prompted by a missing license plate should not be given a gun and a badge. Yet, if it weren’t for body camera footage, Tensing would still be employed as a University of Cincinnati police officer rather than being behind bars.

The use of body cameras does raise a host of serious privacy concerns that should not be taken lightly. However, as DuBose’s killing has shown, the cameras can be instrumental in investigating police misconduct and getting dangerous police officers off the streets.

I hope I’m wrong to see it as racism returning to the mainstream. Indeed, I hope that the long, agonizingly slow erosion of racial fixations from our society will continue. But I found it interesting to see a Washington Post blog post explaining a recently minted epithet—“cuckservative”—chiefly with reference to the president of a “white nationalist” organization.

Apparently, we have such things in the United States, credible enough to get online ink from a major newspaper. I’m not against reporter Dave Weigel’s use of the source. I take it as confirmation that some of our ugliest politicians have even uglier supporters.

I don’t think it’s likely, but one can imagine a situation where these currents join a worsening economic situation to sow public distemper that gives actual political power to racists. Were some growing minority of political leaders to gain by advocating for ethnic or racial policies, do not count on the “good ones” standing against them. Public choice economics teaches that politicians will prioritize election over justice, morality, or any other high-minded concept.

It is poor civic hygiene to install technologies that could someday facilitate a police state. That includes a national ID system. I’ve had little success, frankly, driving public awareness that the U.S. national ID program, REAL ID, includes tracking of race and ethnicity that could be used to single out minorities. But that’s yet another reason to oppose it.

If the future sees no U.S. national ID materialize, and no political currents to exploit such a system for base injustice and tragedy, some may credit the favorable winds of history. Others may credit the Cato Institute and its fans. We’re working to prevent power from accumulating where it can be used for evil.

Speaking of myths about U.S. banking, another that tops my list is the myth that the Federal Reserve, or some sort of central-bank-type arrangement, was the best conceivable solution to the ills of the pre-1914 U.S. monetary system.

I encountered that myth most recently in reading America’s Bank, Roger Lowenstein’s forthcoming book on the Fed’s origins, which I’m reviewing for Barron’s. Lowenstein’s book is well-researched and entertainingly written. But it also suffers from an all-too-common drawback: Lowenstein takes for granted that those who favored having a U.S. central bank of some kind (whatever they called it and however they chose to disguise it) were well-informed and right-thinking, whereas those who didn’t were either ignorant hicks or pawns of special interests. He has, in other words, little patience with history’s losers, whether they be people or ideas. Like other “Whig” histories, his history of the Fed treats the past as an “inexorable march of progress towards enlightenment.”

Don’t get me wrong: I’m no Tory, and I certainly don’t think that the pre-Fed U.S. monetary system was fine and dandy. I know about the panics of 1884, 1893, and 1907. I know how specie tended to pile up in New York after every harvest season, and that by the time it got there not one but three banks were likely to reckon it, or make claims to it, as part of their reserves. I also know how, when the harvest season returned, all those banks were likely to try to get their hands on the same gold, and how this made for tight money, if it didn’t spark a full-scale panic. Finally, I know that one way to avoid such panics, on paper at least, was to establish a central bank, or “federal” equivalent, capable of supplying banks with emergency cash when they needed it.

Yet I still think that the Fed was a lousy idea. How come? My reason isn’t simply that the Fed turned out to be quite incapable of preventing financial crises, though that’s certainly true. It’s that there was a much better way of fixing the pre-Fed system. That alternative was perfectly obvious to many who struggled to reform the U.S. system in the years prior to the Fed’s establishment. It could hardly have been otherwise, since it was then almost literally staring them in the face. But it should be equally obvious even today to anyone who delves into the underlying causes of the infirmities of the pre-Fed National Currency system.

What were these causes? Essentially there were two. First, ever since the Civil War, state banks had been prohibited from issuing circulating notes, while National banks could issue notes only to the extent that they backed them with specified U.S. government bonds. Those bonds were getting harder to come by (by the 1890s National banks had already acquired almost all of them). What’s more, it didn’t pay for National banks to acquire the costly securities just for the sake of meeting harvest-time currency needs, for that would mean incurring very high opportunity costs merely to have stacks of notes sitting idle in their vaults for most of the year.

The other, notorious cause of trouble was the fact that most U.S. banks, whether state or National, didn’t have branch networks of any kind. Instead, ours was for the most part a system of “unit” banks. This was so mainly owing to laws that prohibited them from branching, even within their own states. But even had branching been legal, the restrictions on banks’ ability to issue notes would have made it less economical by substantially raising the cost of equipping bank branches with inventories of till money.[1]

That unit banking limited U.S. banks’ ability to diversify their assets and liabilities, and thereby made the U.S. banking system much more fragile than it might have been, is (or ought to be) well-appreciated. Unit banking also encouraged banks to deposit their idle reserves with “reserve city” correspondents, who in turn sent their own surplus cash to New York. The National Banking Acts actually encouraged this practice by letting correspondent balances satisfy a portion of banks’ legal reserve requirements. The set-up kept money gainfully employed when it wasn’t needed in the countryside; but it also made for a mad scramble when cash was needed back home.

Far less well appreciated is how unit banking also contributed to the notorious “inelasticity” of the pre-Fed U.S. currency stock. Before I explain why, I’d better first lay another myth to rest, which is the myth that complaints concerning the “inelasticity” of the pre-Fed currency stock were a hobbyhorse of persons who subscribed to the “real-bills” doctrine — that is, the view that the currency supply could and should wax and wane in concert with the total quantity of “real bills” or short-term commercial paper presented to banks for discounting.

It’s true that many persons who complained about the “inelastic” nature of the U.S. currency system, including many who were instrumental in designing (and later in managing) the Federal Reserve System, also subscribed to the real bills doctrine, and that that doctrine is mostly baloney. But that doesn’t mean that the alleged inelasticity of the U.S. currency stock was a mere bugbear. The real demand for currency really did vary considerably, especially by rising a lot — sometimes by as much as 50 percent — during the harvest season, when migrant workers had to be paid to “move” the crops. And U.S. banks really were unprepared to meet such increases in demand by issuing more notes, even if doing so was only a matter of swapping note liabilities for deposit liabilities, owing to the legal restrictions to which I’ve drawn attention. In short, you don’t have to have drunk the real-bills Kool-Aid to agree that the pre-Fed U.S. currency system wasn’t capable of meeting the “needs of trade.”

How, then, did unit banking contribute to the problem of an inelastic currency stock? It did so by considerably raising the cost banks had to incur to redeem rival banks’ notes, and thereby limiting the extent to which unwanted banknotes made it back to their issuers. In a branch-banking system, note exchange and redemption are mostly a local, and therefore cheap, affair; add a few regional clearinghouses to handle items not settled locally, and you’ve got all that’s needed to see to it that unwanted currency is rapidly removed from circulation.

In the U.S., on the other hand, banks had to bear substantial costs of sorting and shipping notes to their sources, or to distant clearinghouses, which costs were made all the greater by the sheer number of National banks — tens of thousands, eventually — and resulting lack of economies of scale. These factors would normally have caused National banks to accept the notes of distant rivals at discounts sufficient to cover anticipated redemption costs, as antebellum state banks had been in the habit of doing. The authors of the 1863 and 1864 National Banking Acts were, however, determined to give the nation a “uniform” currency. Consequently they stipulated that every National bank had to accept the notes of all other national banks at par. That got rid of note discounts, sure enough. But it also meant that National banknotes would no longer be actively and systematically redeemed.[2] As I like to say, any fool can fix most any problem — so long as he ignores the others.

If my dog is limping, and I discover that she’s got a pebble wedged between her paw pads, I don’t think of calling for a team of stretcher bearers: I just pull the pebble out. In the same way any reasonable person, knowing the underlying causes of the infirmities of the pre-Fed U.S. currency system, would first consider removing those causes. And that was precisely what many advocates of currency reform tried to do before any dared to suggest anything like a U.S. central bank. That is, they tried to get bills passed — there must have been at least a dozen of them — calling for some combination of (1) repealing the bond-backing requirement for National banknotes; (2) allowing National banks to branch; and (3) restoring state banks’ right to issue currency. The restrictions on note issue had, after all, been put into effect for the sake of helping the Union government fund the Civil War — a purpose now long obsolete. The restrictions on branching, on the other hand, were widely understood to be another deleterious consequence of the unfortunate decision to model the National Banking Acts after earlier, state “free banking” laws.

Might deregulation alone, as was contemplated in such “asset currency” reform proposals (so-called because they would have allowed banks to issue notes backed by general assets, rather than by specific securities), really have given the U.S. a perfectly sound and stable currency and banking system? Yes. How can I be so confident? Because it would have given the U.S. a currency system like Canada’s. And Canada’s system was, in fact, famously sound and famously stable.[3]

“Don’t mention the war!” is what Basil Fawlty tells his staff, out of concern for the sensibilities of his German guests. (Basil himself nevertheless can’t help referring to it again and again.) “Don’t mention Canada!” is what a Whig historian of the Fed must tell himself, assuming he knows what went on there, lest he should broach a topic that would muddle up his otherwise tidy epic. For to consider Canada is to realize that there was, in fact, no need at all for all the elaborate proposals, hearings, secret meetings, and political wheeling-and-dealing, that ultimately gave shape to the Federal Reserve Act, if all that was desired was to equip the United States with a currency system worthy of a nation already on its way to becoming an economic powerhouse. Like Dorothy’s ruby slippers, the solution to the United States’ currency ills had been at hand, or at foot, all along. Legislators had only to repeat to themselves, “There’s no place like Canada,” while taking steps that would tap obstructive legal restrictions out of the banking system.

Of course that didn’t happen, thanks mainly to a combination of banking-industry opposition to branch banking and populist opposition — spearheaded by William Jennings Bryan — to any sort of non-government currency. “Asset currency” was, if you like, “politically impossible.”

So reformers at length turned to the alternative of a central bank. And how was that supposed to work? Though buckets of ink have been spilled for the sake of offering all sorts of elaborate explanations of the “science” behind the Federal Reserve, the essence of that solution, once considered against the backdrop of the “asset currency” alternative, couldn’t have been simpler. It boils down to this: instead of allowing already existing U.S. banks to branch and to issue notes backed by assets other than government bonds, the government would leave the old restrictions in place, while setting up a dozen new banks that would be uniquely exempt from those restrictions. If National banks (or state banks, if they chose to join the new system) wanted currency, but lacked the necessary bonds, they still couldn’t issue more of their own notes no matter what other assets they possessed. But they might now take some of those other assets to the Fed, to exchange for Federal Reserve Notes. The Fed was, in short, a sort of stretcher corps for banks lamed by earlier laws.

To an extent, the more centralized reform resembled an asset currency reform one step removed. But there were two crucial differences. First, by setting the “discount rate” at which they would exchange notes for commercial paper and other assets, the Federal Reserve Banks could either encourage or discourage other banks from acquiring their notes. Second, because member banks could count not just gold and greenbacks but Fed liabilities as reserves, the Fed’s discount rates influenced the overall availability of bank reserves and, hence, of money and credit. These differences, far from having been innocuous, were, as we now realize, portentous.

Still, the Fed did have one incontestable advantage over previous reform proposals. For it alone was politically possible. It alone was a winning solution.

But the fact that the Fed won in 1913 doesn’t mean that other, rejected options aren’t worth recalling. Still less does it warrant treating the Fed as sacrosanct. History isn’t finished. Just a few years before the Federal Reserve Act was passed, most people still believed that Andrew Jackson had put paid once and for all to the idea of a U.S. central bank. Today most people still consider the Federal Reserve Act the last word in scientific monetary control. As for what most people will think tomorrow, well, that’s partly up to us, isn’t it?

___________________________

[1] Although they typically appreciate the debilitating consequences of unit banking, many U.S. economists and economic historians appear unaware of the crucial role that freedom of note issue played historically in facilitating branch banking. That banking systems involving relatively few restrictions on banks’ ability to issue banknotes, like those of Scotland before 1845 and Canada until 1935, also had extremely well-developed branch networks, was no coincidence.

[3] For a very good review of the features and performance of the Canadian system in its heyday, see R.M. Breckenridge, “The Canadian Banking System, 1817-1890,” Publications of the American Economic Association, v. X (1895), pp. 1-476. Not long ago, when I spoke favorably of Canada’s system at a gathering of economic historians, one asked afterwards, rather superciliously, whether I realized how large Canada’s economy had been back around 1913. Apparently my interrogator thought that Canada’s small size made its success irrelevant. I can’t see why. Nor, evidently, could the many persons who proposed and lobbied for various asset currency proposals over the course of a decade or so.

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

—

Perhaps no other climatic variable receives more attention in the debate over CO2-induced global warming than temperature. Its forecast change over time in response to rising atmospheric CO2 concentrations is the typical measure by which climate models are compared. It is also the standard by which the climate model projections tend to be judged; right or wrong, the correctness of global warming theory is most often adjudicated by comparing model projections of temperature against real-world measurements. And in such comparisons, it is critical to have a proper baseline of good data; but that is easier acknowledged than accomplished, as multiple problems and potential inaccuracies have been identified in even the best of temperature datasets.

One particular issue in this regard is the urban heat island effect, a phenomenon by which urban structures artificially warm background air temperatures above what they normally would be in a non-urbanized environment. The urban influence on a given station’s temperature record can be quite profound. In large cities, for example, urban-induced heating can be as great as 10°C, as observed in Tokyo, making it all the more difficult to detect and discern a CO2-induced global warming signal in the temperature record, especially since the putative warming of non-urbanized areas of the planet over the past century is believed to be less than 1°C. Yet, because nearly all long-term temperature records have been obtained from sensors initially located in towns and cities that have experienced significant growth over the past century, it is extremely important that urbanization-induced warming – which can be a full order of magnitude greater than the background trend being sought – be removed from the original temperature records when attempting to accurately assess the true warming (or cooling!) of the natural non-urban environment. A new study by Founda et al. (2015) suggests this may not be so simple or straightforward a task.

Working with temperature records in and around the metropolitan area of Athens, Greece, Founda et al. set out to examine the interdecadal variability of the urban heat island (UHI) effect, since “few studies focus on the temporal variability of UHI intensity over long periods.” Yet, as they note, “knowledge of the temporal variability and trends of UHI intensity is very important in climate change studies, since [the] urban effect has an additive effect on long term air temperature trends.”

To complete their objective, the four Greek researchers compared long-term air temperature data from two urban, two suburban and two rural stations over the period 1970-2004. The UHI was calculated as the difference between the urban and suburban (or rural) stations for monthly, seasonal and annual means of air temperature (max, min, and mean).
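The differencing method described above can be sketched in a few lines. This is an illustrative reconstruction, not the study’s actual code, and the station temperatures below are invented:

```python
# Illustrative sketch of the UHI calculation described above: intensity is
# the urban-station mean minus the rural-station mean for each year.
# NOT Founda et al.'s code; the temperatures below are invented.

def annual_means(monthly_temps):
    """Collapse each year's 12 monthly mean temperatures to an annual mean."""
    return [sum(months) / len(months) for months in monthly_temps]

def uhi_intensity(urban_annual, rural_annual):
    """Per-year UHI intensity: urban annual mean minus rural annual mean."""
    return [u - r for u, r in zip(urban_annual, rural_annual)]

# Two invented years of monthly means (deg C) for an urban and a rural site.
urban = [[10.0] * 12, [10.5] * 12]
rural = [[9.0] * 12, [9.0] * 12]

uhi = uhi_intensity(annual_means(urban), annual_means(rural))
print(uhi)  # [1.0, 1.5] -- a widening urban-rural gap mimics a growing UHI
```

A trend fitted to such a per-year UHI series is what lets the authors say the urban bias is growing rather than static.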

Among their several findings, the authors report notable differences in the UHI’s intensity across the seasons and in comparing the UHI when calculated using maximum, minimum, or mean temperatures. Of significance to the discussion at hand, however, the authors note “the warming rate of the air temperature in Athens is particularly large during [the] last decades,” such that the “difference of the annual mean air temperature between urban and rural stations exhibited a progressively statistically significant increase over the studied period.” Indeed, as shown in the figure below for the stations (a) National Observatory of Athens (NOA) in the center of Athens and Tanagra (TAN), approximately 50 km north of the city, as well as for (b) the coastal urban station of Hellinikon (HEL) and again the rural station of Tanagra, the anthropogenic influence of urbanization on temperatures at these two urban stations is growing in magnitude with time such that “the mean values of UHI magnitude [calculated across the entire record] are not quite representative of the more recent period.”

Interdecadal variation and annual trends of the Athens, Greece UHI calculated between two urban and one rural station using mean annual temperatures over the period 1970-2004. The two urban stations were the National Observatory of Athens (NOA) in the center of Athens and Hellinikon (HEL), located near the urbanized coast. The rural station, Tanagra (TAN), was located approximately 50 km north of the city. Adapted from Founda et al. (2015).

Such findings as these are of significant relevance in climate change studies, for they clearly indicate the UHI influence on a temperature record is not static. It changes over time and is likely inducing an ever-increasing warming bias on the temperature record, a bias that will only increase as the world’s population continues to urbanize in the years and decades ahead. Consequently, unless researchers routinely identify and remove this growing UHI influence from the various temperature data bases used in global change studies, there will likely be a progressive overestimation of the influence of the radiative effects of rising CO2 on the temperature record.

One of the themes in my new study, “Why the Federal Government Fails,” is that the federal government has grown too large to manage with any reasonable level of efficiency and competence. Even if politicians worked diligently to advance the general interest, and even if federal bureaucracies focused on delivering quality services, the vast size of the government would still generate failure after failure.

Here’s an astounding fact: the federal government’s 2014 budget of $3.5 trillion was almost 100 times larger than the average state government budget of $36 billion, as shown in the figure. The largest state budget was California’s at $230 billion, but even that flood of spending was only one fifteenth the magnitude of the federal spending tsunami. Total state spending in 2014 was $1.8 trillion, which includes spending on general funds and nongeneral funds.
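The ratios cited above are easy to verify with back-of-the-envelope arithmetic, using only the figures in the text (in billions of dollars):

```python
# Quick check of the budget ratios cited above
# (all figures in billions of dollars, taken from the text).
federal_2014 = 3500   # federal budget: $3.5 trillion
avg_state = 36        # average state government budget
california = 230      # largest state budget (California)

print(round(federal_2014 / avg_state))   # 97 -- "almost 100 times larger"
print(round(federal_2014 / california))  # 15 -- "one fifteenth the magnitude"
```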

The federal government is not just large in size, but also sprawling in scope. In addition to handling core functions such as national defense, the government runs more than 2,300 subsidy and benefit programs, which is double the number in the 1980s. The federal government has many more employees, programs, contractors, and subsidy recipients to keep track of than any state government.

So even if federal officials spent their time diligently scrutinizing programs to prune waste, the job would be simply too large for them. With much of their time spent fundraising, meeting with lobbyists, and giving speeches, members of Congress have little time left to study policy, and they routinely miss all or most of their committee hearings. Congress grabs for itself vast powers over nonfederal activities, but then members do not have the time to see that their interventions actually work.

A really sad thing about American democracy is that we are squandering a huge built-in advantage that could greatly improve the nation’s governance. I’m talking about federalism, or allowing local and state governments to handle the great majority of governmental activities. Instead, politicians of both parties, and at all levels, have done their best over the past century to crush federalism and centralize power in Washington.

They have done so for no sound policy reason: centralization benefits politicians, not citizens. Consider that Congress has created hundreds of new federal programs to supposedly help the public since the 1960s. Yet, ironically, polling shows that the public has not grown fonder of the federal government. Quite the opposite, polling shows that Americans have become more alienated from the federal government, and more disgusted by its corruption and dysfunction.

In the midst of bitter bailout negotiations between Greece and Europe, warnings proliferated of a possible Greek Fifth Column. The European Union and even NATO would collapse should Athens turn toward Russia. It is one of the stranger paranoid fantasies driving U.S. foreign policy.

For five years Athens has been arguing with its European neighbors over debts and reform. The issue doesn’t much concern the U.S. A European economic crisis would be bad for America, but Grexit is not likely to set off such a cataclysm.

Nevertheless, some analysts speculated that Athens might fall out of the European Union and NATO as well as the Eurozone, resulting in geopolitical catastrophe. Thus, the U.S. should insist that Europe pay off Greece. Despite an apparent bailout agreement, another crisis seems inevitable, in which case the specter of a Greek Trojan Horse likely will reemerge.

This fear betrays an overactive imagination. “You do not want Europe to have to deal with a Greece that is a member of NATO but which all of a sudden hates the West and is cozying up to Russia,” warned Sebastian Mallaby of the Council on Foreign Relations.

Worse, Athens might leave the transatlantic alliance. Warned Robert D. Kaplan of the Center for a New American Security: “Europe will be increasingly vulnerable to Russian aggression if its links to Greece are substantially loosened.”

It sounds like the Cold War redux.

In fact, this all appears to be a grand bluff. To start, Russia poses little threat to Europe. President Vladimir Putin is an unpleasant authoritarian, but he is no Hitler or Stalin. While Moscow has ignored human rights and international law, so far Moscow’s aggressive interventions have reflected traditional Russian security concerns. Nothing suggests that Putin has lost his mind and hopes to rule over territory filled with Europeans.

Some worry about America’s access to naval and air bases. They are useful, not vital. After all, the Med is essentially a NATO lake and the Libya intervention was folly.

Bulgarian President Rosen Plevneliev raised another issue, complaining that “Russia uses every opportunity to divide and weaken the European Union.” Beyond a couple of friendly meetings, however, little has come from the supposed Athens-Moscow axis.

“There is fundamental value to Europe in having Greece as part of its orbit,” argued Stavridis, but the reverse also is true. Irrespective of the debt negotiations and Eurozone membership, Greece will continue to have much at stake with Europe.

Despite past anti-American feeling, Greece has remained with the West. Moreover, the Tsipras government has not obstructed continuation of sanctions against Russia. In fact, Athens has consistently affirmed its participation in Europe.

Defense Minister Panos Kammenos, head of Syriza’s small coalition partner, threatened: “If Europe leaves us in the crisis, we will flood it with immigrants, and it will be even worse for Berlin if in that wave of millions of economic immigrants there will be some jihadists.” However, the Syriza government would not want to open its border to terrorists.

Athens has criticized sanctions against Russia. But Greece is not alone in taking this position. Obviously the penalties have failed to reverse Russian policy in Ukraine. Best would be to use possible sanctions repeal to negotiate an admittedly imperfect compromise deal. Such an approach would be entirely consistent with Greece remaining part of the West.

The Greek saga is far from over. The paranoid panic that Greece’s economic problems could destroy Europe’s and America’s geopolitical standing should generate a mix of scorn and laughter.

As I point out in Forbes online, “Washington should calm down, leaving the Greeks and other Europeans alone to solve their problems. Greece subsidized or not, in the Eurozone or out, really isn’t America’s business.”

More than 50 countries agreed on Friday to eliminate tariffs on a wide range of technology goods like medical devices, navigation equipment and advanced semiconductors in a trade agreement that should benefit American manufacturers, consumers and the global economy.

Signatories to the Information Technology Agreement, which covers 201 product categories, include the United States, the European Union, China, South Korea and other members of the World Trade Organization. International trade in those goods totals about $1.3 trillion a year, or about 7 percent of all trade.

I worry that I’m speaking too soon, but so far at least, I have not seen any of the usual trade critics complain about this deal. With trade negotiations such as the Trans-Pacific Partnership and the Transatlantic Trade and Investment Partnership, there are lots of groups who are fired up about protesting every stage of the process. But with this deal to eliminate tariffs on tech goods, these same folks have not had much to say. That perhaps suggests a way forward for negotiating future trade deals – focus on lowering tariffs and other forms of pure liberalization, and stay away from “governance” issues such as intellectual property, labor and the environment. The benefits are greater with this approach, and the controversy appears to be lower.

Yesterday, the Senate passed a six-year transportation bill that increases spending on highways and transit but only provides three years of funding for that increase. As the Washington Post commented, “only by Washington’s low standards could anyone confuse the Senate’s plan with ‘good government.’”

Meanwhile, House majority leader Kevin McCarthy says the House will ignore the Senate bill in favor of its own five-month extension to the existing transportation law. Since the existing law expires at the end of this week, the two houses are playing a game of chicken to see which one will swerve first and approve the other house’s bill.

As I noted a couple of weeks ago, the source of the gridlock is Congress’ decision ten years ago to change the Highway Trust Fund from a pay-as-you-go system to one reliant on deficit spending. This led to three factions: one, mostly liberal Democrats, wants to end deficits by raising the gas tax; a second, mostly conservative Republicans, wants to end deficits by reducing spending; and the third, which includes people from both sides of the aisle, wants to keep spending without raising gas taxes.

This third group is no doubt the largest because it is politically the easiest position to take, and is the one responsible for the Senate bill. Gas taxes and other federal highway user fees bring in about $40 billion a year, while Congress is currently spending about $52 billion a year and wants to increase it by at least the rate of inflation. To make up the difference, the Senate bill includes a hodge-podge of ideas such as increasing customs fees and selling oil from the strategic petroleum reserve. As the Post noted, the one thing these sources of funds all have in common is that “none is related to surface transportation.”
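The funding gap described above is simple arithmetic, worth making explicit (figures in billions of dollars per year, from the text):

```python
# Arithmetic check of the Highway Trust Fund gap described above
# (figures in billions of dollars per year, from the text).
user_fees = 40   # gas taxes and other federal highway user fees
spending = 52    # current annual federal highway/transit spending

shortfall = spending - user_fees
print(shortfall)  # 12 billion per year that must come from elsewhere
# The multi-year gap is larger still, since the bill would grow spending
# at least with inflation while user-fee revenue stays roughly flat.
```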

According to the Congressional Budget Office’s analysis, these funding schemes will only be enough to last through 2018, after which Congress will have to find another $51 billion to keep the spending going for another three years. That shortfall alone is probably what killed the bill in the House, though it would be nice to think that House members were also wary of a 1,000-plus-page bill sprung on them at the last minute (scroll down to “SA 2266” or search for “DRIVE Act”).

Naturally, the Senate bill does nothing to fix any of the perverse incentives found in the current law, such as the fund that encourages transit agencies to choose the most expensive, rather than the most effective, transit solution in any corridor. Instead, it rewards transit agencies that have neglected their infrastructure by creating a new “state of good repair fund” to help restore that infrastructure, effectively telling the agencies that they can continue spending on new transit lines they can’t afford to maintain and Congress will bail them out.

These games won’t end until Congress does what is right rather than what is easy by returning to a true, pay-as-you-go system. While I agree with fiscal conservatives who think that the federal government doesn’t need to be involved in most transportation issues in the first place, as long as it is involved, the deficit spending is doing more harm than good by making state and local transportation agencies increasingly reliant on the federal government rather than on user fees. Opponents of the current system need to do more than support immediate devolution; they need to find a strategic path from the current system to one that is more responsive to transportation users.

Calls are mounting in Congress (and among some influential opinion groups) for escalating Washington’s military intervention against ISIS in Iraq and Syria and for possible military action against Iran if the new nuclear agreement with that country falls apart. Caution lights should be flashing about both the extent and durability of such sentiment for military action. As I note in a recent article in the National Interest Online, this country has an unfortunate history of launching ill-considered armed crusades, often initially with enthusiastic public support. But that support has a tendency to evaporate and turn to bitter recriminations unless certain conditions are met. Policymakers need to appreciate that history as they consider intensifying U.S. involvement in the Middle East’s turbulent affairs.

Because most Americans believe that the United States embodies the values of individual liberty, human rights, and government integrity, a foreign policy that seems to ignore or violate those values is almost certain to lose the public’s allegiance sooner or later. That is what happened with such missions as the Vietnam War, the Iraq War and, more recently, the counterinsurgency war in Afghanistan. It is not merely that the ventures failed to achieve quick, decisive results, although that aspect clearly played a role. It was also that the United States was increasingly seen as expending blood and treasure on behalf of odious clients and dubious causes that had little or nothing to do with the republic’s vital interests. A disillusioned public turned against those missions, and that development created or intensified bitter domestic divisions.

To sustain adequate public support for military ventures, the objective must be widely perceived as both worthy and attainable. Without those features, public support for a policy either proves insufficient from the outset or soon erodes, and either development is fatal in a democratic political system.

Preserving public support requires officials to make an honest assessment of the issues at stake. Too often, both during the Cold War and the post–Cold War eras, U.S. policymakers have hyped threats to American interests. The alleged dangers posed by such adversaries as North Vietnam, Serbia, Saddam Hussein, the Taliban, and Syrian dictator Bashar al-Assad bordered on being ludicrous. At times, it appears that U.S. officials have deliberately engaged in distortions to gin up public support for purely elective wars. On other occasions, officials seem to have succumbed to their own propaganda. In either case, public support dissipates rapidly when evidence mounts that the supposed security threat to America is exaggerated.

That troubling history should reinforce the need for caution as U.S. leaders consider new military interventions, especially in the Middle East. None of the proposed missions is likely to produce quick, decisive results—much less results with modest financial outlays and minimal casualties. Moreover, escalating America’s involvement in the region’s myriad troubles puts the United States in a close de facto partnership with Saudi Arabia and its Gulf allies—some of the most corrupt, brutal governments on the planet. Publics in the Middle East and around the world are watching, and the potential for unpleasant blowback is extremely high. And as we saw with the wars in Vietnam, Iraq, and Afghanistan, the reaction of the American people to associations with sleazy foreign clients can become one of profound revulsion. The conditions are in place for new foreign-policy debacles, if U.S. officials have not learned the appropriate historical lessons.

China is a country that the Communist revolutionaries who ruled only four decades ago would not recognize. True believers still exist. One spoke to me reverently of Mao’s rise to power and service to the Chinese people. However, she is the exception, at least among China’s younger professionals.

Indeed, younger educated Chinese could not be further from Communist cadres once determined to create a revolution. The former are socially active, desire the newest technologies, and worry about going to good schools and getting good jobs. Cynicism about corrupt and unelected leaders is pervasive.

If there is one common belief, it is hostility toward government Internet controls. Students have complained to me in class about their inability to get to many websites and readily shared virtual private networks to circumvent state barriers.

But such opinions are not held only by the young. A high school student told me that his father urged him to study in America because of Beijing’s restrictions on freedom.

While Chinese from all walks of life are comfortable telling foreigners what they think, sharing those beliefs with other Chinese is problematic. The media, of course, is closely controlled. Internet sites are blocked, deleted, and revamped. Unofficial intimidation, legal restrictions, and even prison time await those who criticize Communist officialdom on social media and blogs.

But increasingly globalized Chinese are aware of their online disadvantage compared to their peers in the West. Google, YouTube, and Twitter are verboten. Today Bloomberg and the New York Times are beyond reach.

Last week, as BBC television began to detail official abuses, my TV went black. A couple of minutes later the BBC was back, after the China report had finished.

While Internet and media restrictions have not prevented rapid economic growth, barring the PRC’s best and brightest from a world of information is likely to dampen innovation and entrepreneurship. Moreover, those denied their full freedoms are more likely to leave home. Many of China’s wealthiest citizens have been departing an authoritarian system unbounded by the rule of law.

Repression also stultifies China’s political evolution to a more mature and stable political order. Democracy provides an important safety valve for popular dissent.

The Chinese Communist Party’s control may not be as firm as often presumed. The oppressive establishment which most Chinese have faced for most of their lives is Communist.

Indeed, for many if not most party members, Communism is a means of personal advancement, even enrichment. President Xi Jinping’s anti-corruption campaign is popular, but is widely seen as politically motivated.

Moreover, Xi has abrogated the well-understood “deal” of the last four decades, that rulers can retire and be immune from future prosecution. Will incumbents so readily yield power in the future?

Perhaps even more threatening for the CCP is the potential for an economic slowdown and consequent political unrest. Already protests are common against local governments, which tend to be ostentatiously rapacious. What if that antagonism shifts against the center?

A poorer PRC means a poorer world: China is a major supplier and increasingly important source of global demand. A politically unstable Beijing would have unpredictable effects on its neighbors.

As I wrote for Forbes online: “Since Mao’s death in 1976, the PRC has changed dramatically—and dramatically for the better. But this second revolution has stalled. Economic liberalization remains incomplete. Political reform never started. Individual liberty has regressed.”

The Chinese people deserve to be free. The Chinese nation would benefit from their freedom. The rest of the world would gain from a freer Chinese nation. Everyone desiring a peaceful and prosperous 21st century should hope for the successful conclusion of China’s second revolution.

For almost 50 years, Dr. Ronald Hines has been a licensed veterinarian in Texas. After a spinal cord injury prevented him from continuing to provide in-person services, Dr. Hines started a website to provide advice on pet care. He never tried to be an animal’s primary veterinarian—he posted a disclaimer to that effect—and did not prescribe medication.

After a decade of such practice without any complaints or problems, the Texas State Board of Veterinary Medical Examiners charged Dr. Hines with violating state law by failing to be physically present at the location of the pets before providing veterinary services. The U.S. Court of Appeals for the Fifth Circuit upheld this restriction on Dr. Hines’s speech because, according to the court, any speech by a professional within the scope of his profession directed toward an individual’s circumstances isn’t protected by the First Amendment.

Dr. Hines has asked the Supreme Court to review the case and Cato has filed a brief supporting that petition, joined by the Mackinac Center for Public Policy.

The Fifth Circuit erroneously construed the Texas regulations as governing nonspeech conduct that only incidentally impacted speech. But everything that Dr. Hines did was speech—there was no nonspeech conduct to regulate. Even if the regulations were content-neutral restrictions that incidentally restricted speech, they should have been reviewed under heightened scrutiny—meaning that the government would need to show a strong justification for its enforcement action. But the restrictions at issue here are explicitly content-based: Dr. Hines could’ve talked about any topic he wanted, except the topic of veterinary care.

Under the lower court’s logic, the following people would be unknowingly violating Texas law: Dr. Sanjay Gupta, who provides health information online; Loveline Radio, which provides relationship and drug-addiction advice; The Mutual Fund Show, which provides financial advice; and the hosts of radio talk shows on pet care. All these people, and many others, would be expected to know and follow the detailed regulations of every single state.

The physical examination requirement doesn’t even make sense as a matter of basic veterinary practice. It only requires that vets visit a location, not that they actually examine a particular animal. It prevents a vet’s colleague from relying on notes and records when the primary-care vet is unavailable. Dr. Hines couldn’t even tell a client that her pet’s condition sounded serious and so the owner should, say, not let the animal drink water and bring it to him right away.

Moreover, someone who wasn’t a licensed veterinarian could have provided the same advice as Dr. Hines without a problem; the law prohibits good information from qualified individuals while allowing unqualified individuals to give bad advice. The regulation just ends up hurting the poor, who can’t afford to travel to Dr. Hines, and practically creates geographic limitations on speech.

The Supreme Court should take up Hines v. Alldredge and protect basic First Amendment rights in the context of occupational regulation.

Late last year, Reason magazine’s crack legal correspondent Damon Root chronicled the rise of the modern libertarian legal movement in his important new book, Overruled: The Long War for Control of the U.S. Supreme Court. In it, he focused especially on the struggle that some of us have been engaged in for more than four decades to recast the terms of the debate over the proper role of the courts from “judicial activism” and “judicial restraint” to “judicial engagement” and “judicial abdication.” That shift has been crucial because it refocused the debate from judicial behavior to where it should have been all along, namely, on the proper interpretation of the law before the court.

The struggle to bring about that shift, although much further along than when it began decades ago, is far from finished: Witness hearings just two days ago before the Senate Judiciary Committee’s Subcommittee on Oversight, Agency Action, Federal Rights and Federal Courts. Called by Subcommittee Chairman Ted Cruz in the wake of last month’s Supreme Court decisions in King v. Burwell, upholding Obamacare’s subsidies for insurance purchased through exchanges established by the federal government, and Obergefell v. Hodges, which made same-sex marriage the law of the land, the hearings were titled “With Prejudice: Supreme Court Activism and Possible Solutions.”

As the title suggests, committee conservatives, in the majority, remain focused on what they see as the Court’s activism. Their witnesses were two professional friends of mine, former Chapman Law Dean and now Professor John Eastman and Ethics and Public Policy Center President Ed Whelan. Nominally representing the liberal activist side was Duke Law Professor Neil Siegel.

I say “nominally” because Professor Siegel took pains early in his testimony to expose problems with the very idea of judicial activism. If defined in opposition to judicial deference, he said, many of the recent decisions of the Court’s “conservatives” would have to be called “activist.” But if the term is defined as engaging in legal infidelity, then we’re arguing not about activism or restraint but about whether the judge read the law correctly.

That’s right. In fact, “judicial engagement” emerged in libertarian thought mainly in opposition to calls from conservatives like Robert Bork and Antonin Scalia for courts to be more deferential to the political branches. But it was animated by the contention that the basic problem with conservative deference was its misreading of the law. In particular, under our Constitution, as Bork put it, majorities were entitled to rule in “wide areas” simply because they were majorities, even if in “some areas” minorities were entitled to be free from majority rule—to which many of us responded that that had the law exactly backwards, turning the Constitution on its head.

But having put his finger on the real source of the differences between the activist and restraint schools, Siegel then went on to illustrate why conservatives called the hearings in the first place, arguing that the Court got it right in both King and Obergefell. In King, Siegel said, Chief Justice John Roberts was right to ignore both the text at issue in the case and the rationale for that text and instead “to read the statute in context and as a whole.” Those, of course, are the kinds of words that enable courts to reach almost any conclusion they wish—to engage in the “activism” conservatives rightly condemn. On reading the law correctly here, credit the conservatives.

Obergefell, however, is another matter. Here too conservatives believe the Court got the law wrong, but they’re wrong. We see why in the two conservatives’ statements. Focusing almost entirely on the “possible solutions” part of the hearings’ title, Professor Eastman nonetheless noted almost in passing that the Constitution left most power with the states. That is true, but the Civil War Amendments made substantial changes to our federalism; the Fourteenth Amendment in particular, for the first time, provided federal remedies, through the courts, for state violations of our rights. Eastman appreciates that more than most conservatives, but he doesn’t go far enough in recognizing the countless unenumerated rights we retained when we reconstituted ourselves in 1787, which the Fourteenth Amendment made good against the states in 1868.

Like Ed Whelan, he would have left it to the states to define marriage in a way that excluded same-sex couples from its benefits. But the problem with that approach surfaced when Whelan rested it on the methodology of original understanding. “Every state,” he said, “had defined marriage as the union of a man and a woman when the Constitution was first adopted and when the Fourteenth Amendment was ratified.” True, but several states practiced segregation when that amendment was ratified and all prohibited interracial marriage.

When the Supreme Court finally put an end to those practices, therefore, it didn’t cite original understanding. It couldn’t, because that understanding supported those practices. Instead, it relied on the original meaning of the words the drafters wrote. And fortunately, that meaning was better than their actions. By its plain text, the Equal Protection Clause prohibits states from discriminating in their dispensation of privileges and benefits—including those pertaining to marriage—unless they have a good reason. And in that regard, the states’ policy reasons did not suffice, a point Siegel summarizes in his testimony.

Unfortunately, Justice Anthony Kennedy only touched on the equal protection rationale when he wrote for the Court in upholding same-sex marriage; nor did he draw a distinction between original understanding and original meaning. Had the Court drawn that distinction, it might have grounded Obergefell on the right foundation and reached the right result in King as well. For a fuller account of these issues, read the three statements in the link above for the hearings—and see here for a libertarian response. There is more work for modern libertarian legal theory to do.

The correct public policy response is implicit in this very good Wired article describing the whole thing. “Automakers need to be held accountable for their vehicles’ digital security,” writer Andy Greenberg says, quoting auto hacker Charlie Miller thus: “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers.”

That’s two very important consumer protection systems in a couple of brief sentences: In one, carmakers suffer lost sales if their cars are hackable or perceived as such. The market feedback system—including the article itself—causes automakers to work to make their cars less hackable.

In the other, carmakers suffer monetary damages if their cars are actually hacked in ways that cause injury. The common law tort system causes automakers to work to make cars less hackable. (I don’t know if this is what Greenberg had in mind for accountability, but it’s the legal accountability that’s already in place.)

Yes, these systems cause carmakers to seek to control perceptions of hackability and to deny responsibility when a harmful hack occurs. But on the whole they promote good behavior on the part of automakers, and safety for drivers.

Speaking of the common law, we are on the threshold of a sea change in how liability for software defects is apportioned by contract. Software has typically been sold or licensed without any guarantee of its fitness, letting the risk of software failures fall entirely on the purchaser. That model can’t apply where failures are dangerous, such as in driving controls and many implanted medical devices. There, software sellers are liable for failure.

As software grows more secure, and in applications where successful functioning is important, liability for flaws will shift to sellers. That shift should generally happen at the pace buyers demand, based on their willingness to pay.

As is typical, it is not the market processes and common law already husbanding automakers’ behavior that get the attention in Greenberg’s article. He writes of new legislation that would “set new digital security standards for cars and trucks.” Senators Markey (D-MA) and Blumenthal (D-CT) undoubtedly want drivers to be protected. What is open to question is whether any group of politicians in Congress and lawyers in federal agencies can set standards better than the myriad actors in the marketplace, allocating risks according to their desires and needs, under common obligations to protect others from harm.

About the Republican Liberty Caucus

The Republican Liberty Caucus is a 527 voluntary grassroots membership organization dedicated to working within the Republican Party to advance the principles of individual rights, limited government, and free markets. Founded in 1991, it is the oldest continuously operating organization within the Liberty Republican movement.