Monday, July 10, 2017

Over the next three years, local authorities in China are planning to build more than 900 airports for general aviation—the segment of the industry that includes crop dusting and tourism. The figure is nearly double the central government’s goal of “more than 500” over the period.

A news report has warned that’s just too many airports.

In May 2016, the State Council, China’s cabinet, announced that the country wanted to construct more than 500 general aviation airports to boost the size of the industry to over 1 trillion yuan (U.S.$146 billion).

General aviation covers flights on helicopters and light aircraft used in sectors such as tourism, agriculture, medical care, and disaster relief.

All provincial-level governments except Shanghai, Tibet, and the northeastern province of Jilin have since published their own plans for these airports, and their goal is far more ambitious than the central government’s. Together, they plan to build 934 general aviation airports, according to the 21st Century Business Herald.

The number put forward by each region ranges from seven to 200. The three places that intend to build the most general aviation airports are Guangxi in southern China, Heilongjiang in the northeast, and Xinjiang in the northwest—all remote and less developed areas, the newspaper said.

Despite government excesses in managing the public purse*, corruption in civil engineering works†, and the like‡, citizens are quite comfortable¶ with these expenditures (so long as the costs are not visibly recouped from them). It seems that if we see more and taller towers going up, we assume we are progressing, that there is material advance, and that most people are better off for it. My hypothesis is that the population's approval is fed by patriotism§ (very powerful in Han China) and redistributionism֍.

* “Infrastructure is a double-edged sword,” said Atif Ansar, a management professor at the University of Oxford who has studied China’s infrastructure spending. “It’s good for the economy, but too much of this is pernicious. ‘Build it and they will come’ is a dictum that doesn’t work, especially in China, where there’s so much built already.”

A study that Mr. Ansar helped write said fewer than a third of the 65 Chinese highway and rail projects he examined were “genuinely economically productive,” while the rest contributed more to debt than to transportation needs.

† In the past six years, anticorruption inquiries have toppled more than 27 Hunan transportation officials.

‡, § “The amount of high bridge construction in China is just insane,” said Eric Sakowski, an American bridge enthusiast who runs a website on the world’s highest bridges. “China’s opening, say, 50 high bridges a year, and the whole of the rest of the world combined might be opening 10.”

Of the world’s 100 highest bridges, 81 are in China, including some unfinished ones, according to Mr. Sakowski’s data. (The Chishi Bridge ranks 162nd.)

China also has the world’s longest bridge, the 102-mile Danyang-Kunshan Grand Bridge, a high-speed rail viaduct running parallel to the Yangtze River, and is nearing completion of the world’s longest sea bridge, a 14-mile cable-stay bridge skimming across the Pearl River Delta, part of a 22-mile bridge and tunnel crossing that connects Hong Kong and Macau with mainland China.

The country’s expressway growth has been compared to that of the United States in the 1950s, when the Interstate System of highways got underway, but China is building at a remarkable clip. In 2016 alone, China added 26,100 bridges on roads, including 363 “extra large” ones with an average length of about a mile, government figures show.

֍ “It’s very important to improve transport and other infrastructure so that impoverished regions can escape poverty and prosper,” President Xi Jinping said while visiting the spectacular, recently opened Aizhai Bridge in Hunan in 2013. “We must do more of this and keep supporting it.”

¶ Indeed, the new roads and railways have proved popular.

§ Who Will Fight? The All-Volunteer Army after 9/11. By Susan Payne Carter, Alexander Smith & Carl Wojtaszek. American Economic Review, May 2017, Pages 415-419, https://www.aeaweb.org/articles?id=10.1257/aer.p20171082.

Abstract: How natural disasters affect politics in developing countries is an important question, given the fragility of fledgling democratic institutions in some of these countries as well as likely increased exposure to natural disasters over time due to climate change. Research in sociology and psychology suggests traumatic events can inspire pro-social behavior and therefore might increase political engagement. Research in political science argues that economic resources are critical for political engagement and thus the economic dislocation from disasters may dampen participation. We argue that when the government and civil society response effectively blunts a disaster's economic impacts, then political engagement may increase as citizens learn about government capacity. Using diverse data from the massive 2010–11 Pakistan floods, we find that Pakistanis in highly flood-affected areas turned out to vote at substantially higher rates three years later than those less exposed. We also provide speculative evidence on the mechanism. The increase in turnout was higher in areas with lower ex ante flood risk, which is consistent with a learning process. These results suggest that natural disasters may not necessarily undermine civil society in emerging developing democracies.

Abstract: One characteristic of nondemocratic regimes is that leaders cannot be removed from office by legal means: in most authoritarian regimes, no institutional way of dismissing incompetent rulers is available, and overthrowing them is costly. Anticipating this, people who have a say in the selection of the leader are likely to resort to alternative strategies to limit his tenure. In this paper, we examine empirically the “strategic gerontocracy” hypothesis: Because selecting aging leaders is a convenient way of reducing their expected time in office, gerontocracy will become a likely outcome whenever leaders are expected to rule for life. We test this hypothesis using data on political leaders for the period from 1960 to 2008, and find that dictators have shorter life expectancies than democrats at the time they take office. We also observe variations in the life expectancies of dictators: those who are selected by consent are on average closer to death than those who seize power in an irregular manner. This finding suggests that gerontocracy is a consequence of the choice process, since it disappears when dictators self-select into leadership positions.

Abstract: Across the globe we witness the rise of populist authoritarian leaders who are overbearing in their narrative, aggressive in behavior, and often exhibit questionable moral character. Drawing on evolutionary theory of leadership emergence, in which dominance and prestige are seen as dual routes to leadership, we provide a situational and psychological account for when and why dominant leaders are preferred over other respected and admired candidates. We test our hypothesis using three studies, encompassing more than 140,000 participants, across 69 countries and spanning the past two decades. We find robust support for our hypothesis that under a situational threat of economic uncertainty (as exemplified by the poverty rate, the housing vacancy rate, and the unemployment rate) people escalate their support for dominant leaders. Further, we find that this phenomenon is mediated by participants’ psychological sense of a lack of personal control. Together, these results provide large-scale, globally representative evidence for the structural and psychological antecedents that increase the preference for dominant leaders over their prestigious counterparts.

Significance: Previous analyses have found that the most feasible route to a low-carbon energy future is one that adopts a diverse portfolio of technologies. In contrast, Jacobson et al. (2015) consider whether the future primary energy sources for the United States could be narrowed to almost exclusively wind, solar, and hydroelectric power and suggest that this can be done at “low-cost” in a way that supplies all power with a probability of loss of load “that exceeds electric-utility-industry standards for reliability”. We find that their analysis involves errors, inappropriate methods, and implausible assumptions. Their study does not provide credible evidence for rejecting the conclusions of previous analyses that point to the benefits of considering a broad portfolio of energy system options. A policy prescription that overpromises on the benefits of relying on a narrower portfolio of technology options could be counterproductive, seriously impeding the move to a cost-effective decarbonized energy system.

Abstract: A number of analyses, meta-analyses, and assessments, including those performed by the Intergovernmental Panel on Climate Change, the National Oceanic and Atmospheric Administration, the National Renewable Energy Laboratory, and the International Energy Agency, have concluded that deployment of a diverse portfolio of clean energy technologies makes a transition to a low-carbon-emission energy system both more feasible and less costly than other pathways. In contrast, Jacobson et al. [Jacobson MZ, Delucchi MA, Cameron MA, Frew BA (2015) Proc Natl Acad Sci USA 112(49):15060–15065] argue that it is feasible to provide “low-cost solutions to the grid reliability problem with 100% penetration of WWS [wind, water and solar power] across all energy sectors in the continental United States between 2050 and 2055”, with only electricity and hydrogen as energy carriers. In this paper, we evaluate that study and find significant shortcomings in the analysis. In particular, we point out that this work used invalid modeling tools, contained modeling errors, and made implausible and inadequately supported assumptions. Policy makers should treat with caution any visions of a rapid, reliable, and low-cost transition to entire energy systems that relies almost exclusively on wind, solar, and hydroelectric power.
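
The dispute in these two abstracts turns on a reliability metric, the probability of loss of load. For readers unfamiliar with the term, the toy Monte Carlo below sketches how such a probability can be estimated; every capacity, capacity-factor distribution, and load figure in it is invented for illustration and is not drawn from Jacobson et al. (2015) or from the critique.

```python
import random

# Toy Monte Carlo estimate of loss-of-load probability (LOLP).
# All capacities, capacity-factor distributions, and the load level are
# made up for illustration; none of them come from the papers quoted above.

WIND_CAP_GW = 800      # hypothetical installed wind capacity
SOLAR_CAP_GW = 600     # hypothetical installed solar capacity
LOAD_GW = 450          # hypothetical constant demand
N_HOURS = 100_000      # simulated hours

def simulate_lolp(seed: int = 0) -> float:
    rng = random.Random(seed)
    shortfall_hours = 0
    for _ in range(N_HOURS):
        wind_cf = rng.betavariate(2, 4)     # crude hourly capacity factor, mean ~0.33
        solar_cf = rng.betavariate(1.5, 5)  # crude hourly capacity factor, mean ~0.23
        supply = WIND_CAP_GW * wind_cf + SOLAR_CAP_GW * solar_cf
        if supply < LOAD_GW:
            shortfall_hours += 1
    return shortfall_hours / N_HOURS

if __name__ == "__main__":
    print(f"Estimated LOLP: {simulate_lolp():.3%}")
```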

“The amount of high bridge construction in China is just insane,” said Eric Sakowski, an American bridge enthusiast who runs a website on the world’s highest bridges. “China’s opening, say, 50 high bridges a year, and the whole of the rest of the world combined might be opening 10.”

Of the world’s 100 highest bridges, 81 are in China, including some unfinished ones, according to Mr. Sakowski’s data. (The Chishi Bridge ranks 162nd.)

China also has the world’s longest bridge, the 102-mile Danyang-Kunshan Grand Bridge, a high-speed rail viaduct running parallel to the Yangtze River, and is nearing completion of the world’s longest sea bridge, a 14-mile cable-stay bridge skimming across the Pearl River Delta, part of a 22-mile bridge and tunnel crossing that connects Hong Kong and Macau with mainland China.

The country’s expressway growth has been compared to that of the United States in the 1950s, when the Interstate System of highways got underway, but China is building at a remarkable clip. In 2016 alone, China added 26,100 bridges on roads, including 363 “extra large” ones with an average length of about a mile, government figures show.

In the country that built the Great Wall, major feats of infrastructure have long been a point of pride. China has produced engineering coups like the world’s highest railway, from Qinghai Province to Lhasa, Tibet; the world’s largest hydropower project, the Three Gorges Dam; and an 800-mile canal from the Yangtze River system to Beijing that is part of the world’s biggest water transfer project.
Leaders defend the infrastructure spree as crucial to China’s development.

“It’s very important to improve transport and other infrastructure so that impoverished regions can escape poverty and prosper,” President Xi Jinping said while visiting the spectacular, recently opened Aizhai Bridge in Hunan in 2013. “We must do more of this and keep supporting it.”

Indeed, the new roads and railways have proved popular, especially in wealthier areas with many businesses and heavy commuter traffic. And even empty infrastructure often has a way of eventually filling up, as early critics of the country’s high-speed rail and the Pudong skyscrapers in Shanghai have discovered.

Abstract: Why do some societies fail to adopt more efficient institutions in response to changing economic conditions? And why do such conditions sometimes generate ideological backlashes and at other times lead to transformative sociopolitical movements? We propose an explanation that highlights the interplay - or lack thereof - between new technologies, ideologies, and institutions. When new technologies emerge, uncertainty results from a lack of understanding how the technology will fit with prevailing ideologies and institutions. This uncertainty discourages investment in institutions and the cultural capital necessary to take advantage of new technologies. Accordingly, increased uncertainty during times of rapid technological change may generate an ideological backlash that puts a higher premium on traditional values. We apply the theory to numerous historical episodes, including Ottoman reform initiatives, the Japanese Tokugawa reforms and Meiji Restoration, and the Tongzhi Restoration in Qing China.

Highlights
• Propaganda can be effective at changing the behavior of all citizens even if most do not believe it.
• This effect is particularly strong when citizens care a lot about behaving in a similar manner as others.
• However, the government picks less propaganda when it is more effective.

Abstract: I develop a theory of propaganda which affects mass behavior without necessarily affecting mass beliefs. A group of citizens observe a signal of their government's performance, which is upwardly inflated by propaganda. Citizens want to support the government if it performs well and if others are supportive (i.e., to coordinate). Some citizens are unaware of the propaganda (“credulous”). Because of the coordination motive, the non-credulous still respond to propaganda, and when the coordination motive dominates they perfectly mimic the actions of the credulous. So, all can act as if they believe the government's lies even though most do not. The government benefits from this responsiveness to manipulation since it leads to a more compliant citizenry, but uses more propaganda precisely when citizens are less responsive.

Abstract: This paper studies the consequences of autocratic rule for social capital in the context of imperial China. Between 1660-1788, individuals were persecuted if they were suspected of subversive attitudes towards the autocratic ruler. Using a difference-in-differences approach, our main finding is that these persecutions led to an average decline of 38% in the number of charitable organizations in each subsequent decade.

To investigate the long-run effect of persecutions, we examine the impact that they had on the provision of local public goods. During this period, local public goods, such as basic education, relied primarily on voluntary contributions and local cooperation. We show that persecutions are associated with lower provision of basic education, suggesting that they permanently reduced social capital. This is consistent with what we find in modern survey data: persecutions left a legacy of mistrust and political apathy.

Abstract: In this paper, we demonstrate that when environmentalist niche parties compete in a given constituency over a number of elections, but continually fail to win seats, then environmental sabotage becomes more frequent in that constituency. When mainstream tactics fail, radical tactics are used more frequently. Using a new data-set on the success rates of all Green Party candidates in US states, we show that environmental sabotage occurs more often when Green Party candidates fail to win even minor offices. This is true even when we control for other political expressions of environmentalism, such as interest group activity, and when we define ‘success’ through votes not seats. We discuss the implications of this for environmental politics, for social movements and democracy, and for political violence in the US.

India’s long road to prosperity, by Martin Wolf
Martin Wolf is impressed by an analysis of what the world’s largest democracy must do in order to thrive
Financial Times, May 24, 2017
https://www.ft.com/content/d5cf8bb0-3fc3-11e7-9d56-25f963e998b2

India could do far better. That, in a sentence, is the conclusion of Vijay Joshi’s superb book. Joshi is an Indian economist who has spent most of his professional life at Oxford university. In this penetrating account of the past and present of Indian economic development, he casts a bright light on the prospects ahead. If India’s aim is to become a high-income country in the next generation, its economic, social and political performance needs to improve dramatically.

The good news is that there is room for improvement on many fronts. The bad news is that the obstacles to the needed improvement are huge. Worse, many emanate from the failures of the state and the political processes that guide it. Yet, as Joshi also notes, “The two fixed points in the socio-political setting of the Indian state’s development policies are that the country is a democracy, and an extremely diverse society.” The challenge is to improve performance within the constraints of these realities.

The success of Indian development matters, for at least three reasons: India will soon be the most populous country in the world; it is already far and away the largest democracy; and, above all, despite progress in the last three decades, between 270m and 360m Indians still lived in dire poverty (on slightly different definitions) in 2011 (that is, between 22 and 30 per cent of the population). If extreme poverty is to be eliminated from the world, it must be eliminated in India.
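
As a quick consistency check on those figures, here is a back-of-the-envelope sketch; the 2011 census population of roughly 1.21 billion is an outside assumption, not a number given in the review.

```python
# Rough consistency check of the poverty figures quoted above. The 2011
# census population (~1.21 billion) is an assumption, not from the review.
population_2011 = 1.21e9

for poor in (270e6, 360e6):
    share = poor / population_2011
    print(f"{poor / 1e6:.0f}m poor -> {share:.0%} of the population")
# Prints 22% and 30%, matching the "between 22 and 30 per cent" range.
```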

While the focus of India’s Long Road is on the economy, its analysis is appropriately comprehensive. It considers the post-independence growth record, the failure to create remunerative employment, the excessive role of publicly owned enterprises, the poor quality of Indian infrastructure and the inadequacy of environmental regulation. The book also analyses the successes and failures of macroeconomic management, the appalling quality of government-provided education and healthcare, the need for a better safety net for the poor, the long-term decay of the state, the prevalence of corruption and the role of India in the world economy.

In covering all these issues, Joshi combines enthusiastic engagement with the detachment of a scholar who has passed much of his life abroad. No better guide to India’s contemporary economy exists.

Over the past 70 years, India’s growth has shown two marked accelerations. The first followed independence in 1947. The second followed the economic liberalisation that began in the 1980s and accelerated dramatically after the balance of payments crisis of 1991. In the first period, growth averaged 3.5 per cent a year. In the second, it rose to 6 per cent (4 per cent per head). Unfortunately, after a further acceleration in the first decade of the 2000s, growth has slowed once again. The principal explanation for this recent slowdown is a marked weakening of investment by an over-indebted private sector.

"Joshi argues that India could provide a basic income to all by diverting resources wasted on subsidies"

So what should be the goal for the decades ahead? Joshi describes it simply as “rapid, inclusive, stable, and sustainable growth . . . within a political framework of liberal democracy”. More precisely, if incomes per head could grow at 7 per cent a year, India would achieve high-income status, at the level of Portugal, within a quarter of a century.
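
The target Joshi sets is a compound-growth calculation. Here is a small sketch of the arithmetic; the roughly fivefold India-Portugal gap in income per head used in the comment is an illustrative assumption in approximate PPP terms, not a figure from the book or the review.

```python
# Compound-growth arithmetic behind the "7 per cent for 25 years" target.
# The roughly fivefold India-Portugal gap in income per head (PPP terms)
# is an illustrative assumption, not a figure taken from the book.
growth_rate = 0.07
years = 25

multiple = (1 + growth_rate) ** years
print(f"Income per head grows {multiple:.1f}x over {years} years")
# ~5.4x, which is about the size of a fivefold per-capita income gap.
```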

Only three economies have achieved something close to this in the past: Taiwan, South Korea and China. It represents an enormous challenge that cannot be met with the current “partial reform model”. The basic flaw of that model, argues Joshi, “is a failure to put the role of the state, and the relation between the state, the market, and the private sector, on the right footing”. The state, in brief, does what it does not need to do and fails to do what it does need to do.

It is no longer enough for the state merely to get out of the way, important though that still is in crucial areas. Among these is the labour market, whose huge distortions and inefficiencies have turned the demographic dividend into a demographic disaster.

Thus, in the 10 years from 1999 to 2009, India’s workforce increased by 63m. “Of these, 44 million joined the unorganized sector, 22 million became informal workers in the organized sector, and the number of formal workers in the organized sector fell by 3 million.” This is a social catastrophe. It is due not only to labour-market distortions, but to a host of constraints on the creation, operation and, not least, closure of organised and large-scale businesses.

Yet India also needs an effective state able to supply the public goods, public services and competent regulation on which an efficient economy depends. Unfortunately, that is not what now exists. All international surveys give India a very low rank for the efficiency and honesty of the state and the ease of doing business. Joshi argues that while the economy is more dynamic and the quality of policy has indeed improved since the 1980s, the quality of the state has deteriorated in many respects.

Among the many failures is the waste of state resources on inefficient subsidies that, though often given in the name of the poor, actually go to the better off. Indeed, one of the most original and persuasive aspects of the book is the argument that it would in principle be possible to provide a basic income to all Indians sufficient to lift everybody out of extreme poverty merely by diverting resources wasted on grotesquely costly subsidies. Yet, to take just one example, state governments continue to bribe farmers with free power, at the expense of a reliable electricity supply.

Will prime minister Narendra Modi be the new broom that sweeps all these cobwebs away? Alas no. His government’s performance is “mixed at best”. It has some achievements. But it has shown insufficient energy in tackling both the immediate problems of inadequate private investment, excessive debt and feeble banks, and the longer-term problems of dreadful education, lousy healthcare, weak infrastructure, corruption, regulatory incompetence, excessive interference and government waste.

A great opportunity for radically improved performance is being missed. This is not bad just for the Indian economy. There is a real danger that if the economy fails to perform as needed and desired, the governing Bharatiya Janata party will find itself increasingly attracted to its “dark side” of communal and caste division. That way lies not just economic failure, but possibly the destabilisation of Indian democracy, one of the great political achievements of the post-second world war era.

Those who care about the future of this remarkable country and indeed the future of democracy itself must hope that Modi gets this right. If they want to understand what he needs to do and why, they should first read this book.

India’s Long Road: The Search for Prosperity, by Vijay Joshi, Oxford University Press, RRP£22.99, 360 pages
Martin Wolf is the FT’s chief economics commentator

Much to my surprise, I showed up in the WikiLeaks releases before the election. In a 2014 email, a staffer at the Center for American Progress, founded by John Podesta in 2003, took credit for a campaign to have me eliminated as a writer for Nate Silver’s FiveThirtyEight website. In the email, the editor of the think tank’s climate blog bragged to one of its billionaire donors, Tom Steyer: “I think it’s fair [to] say that, without Climate Progress, Pielke would still be writing on climate change for 538.”

WikiLeaks provides a window into a world I’ve seen up close for decades: the debate over what to do about climate change, and the role of science in that argument. Although it is too soon to tell how the Trump administration will engage the scientific community, my long experience shows what can happen when politicians and media turn against inconvenient research—which we’ve seen under Republican and Democratic presidents.

I understand why Mr. Podesta—most recently Hillary Clinton’s campaign chairman—wanted to drive me out of the climate-change discussion. When substantively countering an academic’s research proves difficult, other techniques are needed to banish it. That is how politics sometimes works, and professors need to understand this if we want to participate in that arena.

More troubling is the degree to which journalists and other academics joined the campaign against me. What sort of responsibility do scientists and the media have to defend the ability to share research, on any subject, that might be inconvenient to political interests—even our own?

I believe climate change is real and that human emissions of greenhouse gases risk justifying action, including a carbon tax. But my research led me to a conclusion that many climate campaigners find unacceptable: There is scant evidence to indicate that hurricanes, floods, tornadoes or drought have become more frequent or intense in the U.S. or globally. In fact we are in an era of good fortune when it comes to extreme weather. This is a topic I’ve studied and published on as much as anyone over two decades. My conclusion might be wrong, but I think I’ve earned the right to share this research without risk to my career.

Instead, my research was under constant attack for years by activists, journalists and politicians. In 2011 writers in the journal Foreign Policy signaled that some accused me of being a “climate-change denier.” I earned the title, the authors explained, by “questioning certain graphs presented in IPCC reports.” That an academic who raised questions about the Intergovernmental Panel on Climate Change in an area of his expertise was tarred as a denier reveals the groupthink at work.

Yet I was right to question the IPCC’s 2007 report, which included a graph purporting to show that disaster costs were rising due to global temperature increases. The graph was later revealed to have been based on invented and inaccurate information, as I documented in my book “The Climate Fix.” The insurance industry scientist Robert Muir-Wood of Risk Management Solutions had smuggled the graph into the IPCC report. He explained in a public debate with me in London in 2010 that he had included the graph and misreferenced it because he expected future research to show a relationship between increasing disaster costs and rising temperatures.

When his research was eventually published in 2008, well after the IPCC report, it concluded the opposite: “We find insufficient evidence to claim a statistical relationship between global temperature increase and normalized catastrophe losses.” Whoops.

The IPCC never acknowledged the snafu, but subsequent reports got the science right: There is not a strong basis for connecting weather disasters with human-caused climate change.

Yes, storms and other extremes still occur, with devastating human consequences, but history shows they could be far worse. No Category 3, 4 or 5 hurricane has made landfall in the U.S. since Hurricane Wilma in 2005, by far the longest such period on record. This means that cumulative economic damage from hurricanes over the past decade is some $70 billion less than the long-term average would lead us to expect, based on my research with colleagues. This is good news, and it should be OK to say so. Yet in today’s hyper-partisan climate debate, every instance of extreme weather becomes a political talking point.

For a time I called out politicians and reporters who went beyond what science can support, but some journalists won’t hear of this. In 2011 and 2012, I pointed out on my blog and social media that the lead climate reporter at the New York Times, Justin Gillis, had mischaracterized the relationship of climate change and food shortages, and the relationship of climate change and disasters. His reporting wasn’t consistent with most expert views, or the evidence. In response he promptly blocked me from his Twitter feed. Other reporters did the same.

In August this year on Twitter, I criticized poor reporting on the website Mashable about a supposed coming hurricane apocalypse—including a bad misquote of me in the cartoon role of climate skeptic. (The misquote was later removed.) The publication’s lead science editor, Andrew Freedman, helpfully explained via Twitter that this sort of behavior “is why you’re on many reporters’ ‘do not call’ lists despite your expertise.”

I didn’t know reporters had such lists. But I get it. No one likes being told that he misreported scientific research, especially on climate change. Some believe that connecting extreme weather with greenhouse gases helps to advance the cause of climate policy. Plus, bad news gets clicks.

Yet more is going on here than thin-skinned reporters responding petulantly to a vocal professor. In 2015 I was quoted in the Los Angeles Times, by Pulitzer Prize-winning reporter Paige St. John, making the rather obvious point that politicians use the weather-of-the-moment to make the case for action on climate change, even if the scientific basis is thin or contested.

Ms. St. John was pilloried by her peers in the media. Shortly thereafter, she emailed me what she had learned: “You should come with a warning label: Quoting Roger Pielke will bring a hailstorm down on your work from the London Guardian, Mother Jones, and Media Matters.”

Or look at the journalists who helped push me out of FiveThirtyEight. My first article there, in 2014, was based on the consensus of the IPCC and peer-reviewed research. I pointed out that the global cost of disasters was increasing at a rate slower than GDP growth, which is very good news. Disasters still occur, but their economic and human effect is smaller than in the past. It’s not terribly complicated.

That article prompted an intense media campaign to have me fired. Writers at Slate, Salon, the New Republic, the New York Times, the Guardian and others piled on.

In March of 2014, FiveThirtyEight editor Mike Wilson demoted me from staff writer to freelancer. A few months later I chose to leave the site after it became clear it wouldn’t publish me. The mob celebrated. ClimateTruth.org, founded by former Center for American Progress staffer Brad Johnson, and advised by Penn State’s Michael Mann, called my departure a “victory for climate truth.” The Center for American Progress promised its donor Mr. Steyer more of the same.

Yet the climate thought police still weren’t done. In 2013 committees in the House and Senate invited me to several hearings to summarize the science on disasters and climate change. As a professor at a public university, I was happy to do so. My testimony was strong, and it was well aligned with the conclusions of the IPCC and the U.S. government’s climate-science program. Those conclusions indicate no overall increasing trend in hurricanes, floods, tornadoes or droughts—in the U.S. or globally.

In early 2014, not long after I appeared before Congress, President Obama’s science adviser John Holdren testified before the same Senate Environment and Public Works Committee. He was asked about his public statements that appeared to contradict the scientific consensus on extreme weather events that I had earlier presented. Mr. Holdren responded with the all-too-common approach of attacking the messenger, telling the senators incorrectly that my views were “not representative of the mainstream scientific opinion.” Mr. Holdren followed up by posting a strange essay, of nearly 3,000 words, on the White House website under the heading, “An Analysis of Statements by Roger Pielke Jr.,” where it remains today.

I suppose it is a distinction of a sort to be singled out in this manner by the president’s science adviser. Yet Mr. Holdren’s screed reads more like a dashed-off blog post from the nutty wings of the online climate debate, chock-full of errors and misstatements.

But when the White House puts a target on your back on its website, people notice. Almost a year later Mr. Holdren’s missive was the basis for an investigation of me by Arizona Rep. Raul Grijalva, the ranking Democrat on the House Natural Resources Committee. Rep. Grijalva explained in a letter to my university’s president that I was being investigated because Mr. Holdren had “highlighted what he believes were serious misstatements by Prof. Pielke of the scientific consensus on climate change.” He made the letter public.

The “investigation” turned out to be a farce. In the letter, Rep. Grijalva suggested that I—and six other academics with apparently heretical views—might be on the payroll of Exxon Mobil (or perhaps the Illuminati, I forget). He asked for records detailing my research funding, emails and so on. After some well-deserved criticism from the American Meteorological Society and the American Geophysical Union, Rep. Grijalva deleted the letter from his website. The University of Colorado complied with Rep. Grijalva’s request and responded that I have never received funding from fossil-fuel companies. My heretical views can be traced to research support from the U.S. government.

But the damage to my reputation had been done, and perhaps that was the point. Studying and engaging on climate change had become decidedly less fun. So I started researching and teaching other topics and have found the change in direction refreshing. Don’t worry about me: I have tenure and supportive campus leaders and regents. No one is trying to get me fired for my new scholarly pursuits.

But the lesson is that a lone academic is no match for billionaires, well-funded advocacy groups, the media, Congress and the White House. If academics—in any subject—are to play a meaningful role in public debate, the country will have to do a better job supporting good-faith researchers, even when their results are unwelcome. This goes for Republicans and Democrats alike, and to the administration of President-elect Trump.

Academics and the media in particular should support viewpoint diversity instead of serving as the handmaidens of political expediency by trying to exclude voices or damage reputations and careers. If academics and the media won’t support open debate, who will?

---Mr. Pielke is a professor and director of the Sports Governance Center at the University of Colorado, Boulder. His most recent book is “The Edge: The Wars Against Cheating and Corruption in the Cutthroat World of Elite Sports” (Roaring Forties Press, 2016).

Sunday, April 17, 2016

The Great Recession Blame Game

Banks took the heat, but it was Washington that propped up subprime debt and then stymied recovery.

By Phil Gramm and Michael Solon
WSJ, April 15, 2016 6:09 p.m. ET

When the subprime crisis broke in the 2008 presidential election year, there was little chance for a serious discussion of its root causes. Candidate Barack Obama weaponized the crisis by blaming greedy bankers, unleashed when financial regulations were “simply dismantled.” He would go on to blame them for taking “huge, reckless risks in pursuit of quick profits and massive bonuses.”

That mistaken diagnosis was the justification for the Dodd-Frank Act and the stifling regulations that shackled the financial system, stunted the recovery and diminished the American dream.

In fact, when the crisis struck, banks were better capitalized and less leveraged than they had been in the previous 30 years. The FDIC’s reported capital-to-asset ratio for insured commercial banks in 2007 was 10.2%—76% higher than it was in 1978. Federal Reserve data on all insured financial institutions show the capital-to-asset ratio was 10.3% in 2007, almost double its 1984 level, and the biggest banks doubled their capitalization ratios. On Sept. 30, 2008, the month Lehman failed, the FDIC found that 98% of all FDIC institutions with 99% of all bank assets were “well capitalized,” and only 43 smaller institutions were undercapitalized.
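
Those comparisons can be unwound into the implied earlier ratios. A minimal sketch, using only the numbers quoted in the paragraph above:

```python
# Back out the earlier capital-to-asset ratios implied by the comparisons
# quoted above. Only the 2007 ratios and the "76% higher" / "almost double"
# relationships come from the text; the division is just arithmetic.
fdic_2007 = 10.2   # % of assets, insured commercial banks, 2007
fed_2007 = 10.3    # % of assets, all insured institutions, 2007

implied_1978 = fdic_2007 / 1.76   # "76% higher than it was in 1978"
implied_1984 = fed_2007 / 2.0     # "almost double its 1984 level"

print(f"Implied 1978 ratio: {implied_1978:.1f}%")   # ~5.8%
print(f"Implied 1984 ratio: {implied_1984:.1f}%")   # ~5.2%
```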

In addition, U.S. banks were by far the best-capitalized banks in the world. While the collapse of 31 million subprime mortgages fractured financial capital, the banking system in the 30 years before 2007 would have fared even worse under such massive stress.

Virtually all of the undercapitalization, overleveraging and “reckless risks” flowed from government policies and institutions. Federal regulators followed international banking standards that treated most subprime-mortgage-backed securities as low-risk, with lower capital requirements that gave banks the incentive to hold them. Government quotas forced Fannie Mae and Freddie Mac to hold ever larger volumes of subprime mortgages, and politicians rolled the dice by letting them operate with a leverage ratio of 75 to one—compared with Lehman’s leverage ratio of 29 to one.

Regulators also eroded the safety of the financial system by pressuring banks to make subprime loans in order to increase homeownership. After eight years of vilification and government extortion of bank assets, often for carrying out government mandates, it is increasingly clear that banks were more scapegoats than villains in the subprime crisis.

Similarly, the charge that banks had been deregulated before the crisis is a myth. From 1980 to 2007 four major banking laws—the Competitive Equality Banking Act (1987), the Financial Institutions Reform, Recovery and Enforcement Act (1989), the Federal Deposit Insurance Corporation Improvement Act (1991), and Sarbanes-Oxley (2002)—undeniably increased bank regulations and reporting requirements. The charge that financial regulation had been dismantled rests almost solely on the disputed effects of the 1999 Gramm-Leach-Bliley Act (GLBA).

Prior to GLBA, the decades-old Glass-Steagall Act prohibited deposit-taking commercial banks from engaging in securities trading. GLBA, which was signed into law by President Bill Clinton, allowed highly regulated financial-services holding companies to compete in banking, insurance and the securities business. But each activity was still required to operate separately and remained subject to the regulations and capital requirements that existed before GLBA. A bank operating within a holding company was still subject to Glass-Steagall (which was not repealed by GLBA)—but Glass-Steagall never banned banks from holding mortgages or mortgage-backed securities in the first place.

GLBA loosened federal regulations only in the narrow sense that it promoted more competition across financial services and lowered prices. When he signed the law, President Clinton said that “removal of barriers to competition will enhance the stability of our financial system, diversify their product offerings and thus their sources of revenue.” The financial crisis proved his point. Financial institutions that had used GLBA provisions to diversify fared better than those that didn’t.

Mr. Clinton has always insisted that “there is not a single solitary example that [GLBA] had anything to do with the financial crisis,” a conclusion that has never been refuted. When asked by the New York Times in 2012, Sen. Elizabeth Warren agreed that the financial crisis would not have been avoided had GLBA never been adopted. And President Obama effectively exonerated GLBA from any culpability in the financial crisis when, with massive majorities in both Houses of Congress, he chose not to repeal GLBA. In fact, Dodd-Frank expanded GLBA by using its holding-company structure to impose new regulations on systemically important financial institutions.

Another myth of the financial crisis is that the bailout was required because some banks were too big to fail. Had the government’s massive injection of capital—the Troubled Asset Relief Program, or TARP—been only about bailing out too-big-to-fail financial institutions, at most a dozen institutions might have received aid. Instead, 954 financial institutions received assistance, with more than half the money going to small banks.

Many of the largest banks did not want or need aid—and Lehman’s collapse was not a case of a too-big-to-fail institution spreading the crisis. The entire financial sector was already poisoned by the same subprime assets that felled Lehman. The subprime bailout occurred because the U.S. financial sector was, and always should be, too important to be allowed to fail.

Consider that, according to the Congressional Budget Office, bailing out the depositors of insolvent S&Ls in the 1980s on net cost taxpayers $258 billion in real 2009 dollars. By contrast, of the $245 billion disbursed by TARP to banks, 67% was repaid within 14 months, 81% within two years and the final totals show that taxpayers earned $24 billion on the banking component of TARP. The rapid and complete payback of TARP funds by banks strongly suggests that the financial crisis was more a liquidity crisis than a solvency crisis.
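
The comparison reduces to simple arithmetic. A short sketch using only the figures quoted in the paragraph above:

```python
# Arithmetic behind the TARP comparison quoted above; every input is a
# figure cited in that paragraph.
tarp_to_banks = 245e9           # disbursed to banks
s_and_l_cost = 258e9            # net cost of the 1980s S&L bailout (2009 dollars)
net_gain = 24e9                 # eventual taxpayer gain on the banking component

repaid_14_months = 0.67 * tarp_to_banks
repaid_2_years = 0.81 * tarp_to_banks

print(f"Repaid within 14 months: ${repaid_14_months / 1e9:.0f} billion")
print(f"Repaid within two years: ${repaid_2_years / 1e9:.0f} billion")
print(f"Taxpayer return on bank TARP: {net_gain / tarp_to_banks:.1%}")
```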

What turned the subprime crisis and ensuing recession into the “Great Recession” was not a failure of policies that addressed the financial crisis. Instead, it was the failure of subsequent economic policies that impeded the recovery.

The subprime crisis was largely the product of government policy to promote housing ownership and regulators who chose to promote that social policy over their traditional mission of guaranteeing safety and soundness. But blaming the financial crisis on reckless bankers and deregulation made it possible for the Obama administration to seize effective control of the financial system and put government bureaucrats in the corporate boardrooms of many of the most significant U.S. banks and insurance companies.

Suffocating under Dodd-Frank’s “enhanced supervision,” banks now focus on passing stress tests, writing living wills, parking capital at the Federal Reserve, and knowing their regulators better than they know their customers. But their ability to help the U.S. economy turn dreams into businesses and jobs has suffered.

In postwar America, it took on average just 2 1/4 years to regain in each succeeding recovery all of the real per capita income that had been lost in the previous recession. At the current rate of the Obama recovery, it will take six more years, 14 years in all, for the average American just to earn back what he lost in the last recession. Mr. Obama’s policies in banking, health care, power generation, the Internet and so much else have Europeanized America and American exceptionalism has waned—sadly proving that collectivism does not work any better in America than it has ever worked anywhere else.

Mr. Gramm, a former chairman of the Senate Banking Committee, is a visiting scholar at the American Enterprise Institute. Mr. Solon is a partner of US Policy Metrics.

Someone at Yale University should have dressed up as Robespierre for Halloween, as its students seem to have lost their minds over what constitutes a culturally appropriate costume. Identity and grievance politics keeps hitting new lows on campus, and now even liberal professors are being consumed by the revolution.

On Oct. 28 Yale Dean Burgwell Howard and Yale’s Intercultural Affairs Committee blasted out an email advising students against “culturally unaware” Halloween costumes, with self-help questions such as: “If this costume is meant to be historical, does it further misinformation or historical and cultural inaccuracies?” Watch out for insensitivity toward “religious beliefs, Native American/Indigenous people, Socio-economic strata, Asians, Hispanic/Latino, Women, Muslims, etc.” In short, everyone.

Who knew Yale still employed anyone willing to doubt the costume wardens? But in response to the dean’s email, lecturer in early childhood education Erika Christakis mused to the student residential community she oversees with her husband, Nicholas, a Yale sociologist and physician: “I don’t wish to trivialize genuine concerns,” but she wondered if colleges had morphed into “places of censure and prohibition.”

And: “Nicholas says, if you don’t like a costume someone is wearing, look away, or tell them you are offended. Talk to each other. Free speech and the ability to tolerate offence are the hallmarks of a free and open society.”

Some 750 Yale students, faculty, alumni and others signed a letter saying Ms. Christakis’s “jarring” email served to “further degrade marginalized people,” as though someone with a Yale degree could be marginalized in America. Students culturally appropriated a Puritan shaming trial and encircled Mr. Christakis on a lawn, cursing and heckling him to quit. “I stand behind free speech,” he told the mob.

Hundreds of protesters also turned on Jonathan Holloway, Yale’s black dean, demanding to know why the school hadn’t addressed allegations that a black woman had been kept out of a fraternity party. Fragile scholars also melted down over a visiting speaker who made a joke about Yale’s fracas while talking at a conference sponsored by the school’s William F. Buckley, Jr. program focused on . . . the future of free speech.

The episode reminds us of when Yale alumnus Lee Bass in 1995 asked the university to return his $20 million donation. Mr. Bass had hoped to seed a curriculum in Western civilization, but Yale’s faculty ripped the idea as white imperialism, and he requested a refund. Two decades later the alternative to Western civilization is on display, and it seems to be censorship.

According to a student reporting for the Washington Post, Yale president Peter Salovey told minority students in response to the episode that “we failed you.” That’s true, though not how he means it. The failure is that elite colleges are turning out ostensible leaders who seem to have no idea why America’s Founders risked extreme discomfort—that is, death—for the right to speak freely.

President Obama arrived in Kenya on Friday and will travel from here to Ethiopia, two crucial U.S. allies in East Africa. The region is not only emerging as an economic powerhouse, it is also an important front in the battle with al Qaeda, al-Shabaab, Islamic State and other Islamist radicals.

Yet grievances related to how the International Criminal Court’s universal jurisdiction is applied in Africa are interfering with U.S. and European relations on the continent. In Africa there are accusations of neocolonialism and even racism in ICC proceedings, and a growing consensus that Africans are being unjustly indicted by the court.

It wasn’t supposed to be this way. After the failure to prevent mass atrocities in Europe and Africa in the 1990s, a strong consensus emerged that combating impunity had to be an international priority. Ad hoc United Nations tribunals were convened to judge the masterminds of genocide and crimes against humanity in Yugoslavia, Rwanda and Sierra Leone. These courts were painfully slow and expensive. But their mandates were clear and limited, and they helped countries to turn the page and focus on rebuilding.

Soon universal jurisdiction was seen not only as a means to justice, but also a tool for preventing atrocities in the first place. Several countries in Western Europe including Spain, the United Kingdom, Belgium and France empowered their national courts with universal jurisdiction. In 2002 the International Criminal Court came into force.

Africa and Europe were early adherents and today constitute the bulk of ICC membership. But India, China, Russia and most of the Middle East—representing well over half the world’s population—stayed out. So did the United States. Leaders in both parties worried that an unaccountable supranational court would become a venue for politicized show trials. The track record of the ICC and European courts acting under universal jurisdiction has amply borne out these concerns.

Only when U.S. Defense Secretary Donald Rumsfeld threatened to move NATO headquarters out of Brussels in 2003 did Belgium rein in efforts to indict former President George H.W. Bush, and Gens. Colin Powell and Tommy Franks, for alleged “war crimes” during the 1990-91 Gulf War. Spanish courts have indicted American military personnel in Iraq and investigated the U.S. detention facility in Guantanamo Bay.

But with powerful states able to shield themselves and their clients, Africa has borne the brunt of indictments. Far from pursuing justice for victims, these courts have become a venue for public-relations exercises by activist groups. Within African countries, they have been manipulated by one political faction to sideline another, often featuring in electoral politics.

The ICC’s recent indictments of top Kenyan officials are a prime example. In October 2014, Kenyan President Uhuru Kenyatta became the first sitting head of state to appear before the ICC, though he took the extraordinary step of temporarily transferring power to his deputy to avoid the precedent. ICC prosecutors indicted Mr. Kenyatta in connection with Kenya’s post-election ethnic violence of 2007-08, in which some 1,200 people were killed.

Last December the ICC withdrew all charges against Mr. Kenyatta, saying the evidence had “not improved to such an extent that Mr Kenyatta’s alleged criminal responsibility can be proven beyond reasonable doubt.” As U.S. assistant secretary of state for African affairs from 2005-09, and the point person during Kenya’s 2007-08 post-election violence, I knew the ICC indictments were purely political. The court’s decision to continue its case against Kenya’s deputy president, William Ruto, reflects a degree of indifference and even hostility to Kenya’s efforts to heal its political divisions.

The ICC’s indictments in Kenya began with former chief prosecutor Luis Moreno-Ocampo’s determination to prove the court’s relevance in Africa by going after what he reportedly called “low-hanging fruit.” In other words, African political and military leaders unable to resist ICC jurisdiction.

More recently, the arrest of Rwandan chief of intelligence Lt. Gen. Emmanuel Karenzi Karake in London last month drew a unanimous reproach from the African Union’s Peace and Security Council. The warrant dates to a 2008 Spanish indictment for alleged reprisal killings following the 1994 Rwandan genocide. At the time of the indictment, Mr. Karenzi Karake was deputy commander of the joint U.N.-African Union peacekeeping operation in Darfur. The Rwandan troops under his command were the backbone of the Unamid force, and his performance in Darfur was by all accounts exemplary.

Moreover, a U.S. government interagency review conducted in 2007-08, when I led the State Department’s Bureau of African Affairs, found that the Spanish allegations against Mr. Karenzi Karake were false and unsubstantiated. The U.S. fully backed his reappointment in 2008 as deputy commander of Unamid forces. It would be a travesty of justice if the U.K. were to extradite Mr. Karake to Spain to stand trial.

Sadly, the early hope of “universal jurisdiction” ending impunity for perpetrators of genocide and crimes against humanity has given way to cynicism, both in Africa and the West. In Africa it is believed that, in the rush to demonstrate their power, these courts and their defenders have been too willing to brush aside considerations of due process that they defend at home.

In the West, the cynicism is perhaps even more damaging because it calls into question the moral capabilities of Africans and their leaders, and revives the language of paternalism and barbarism of earlier generations.

Ms. Frazer, a former U.S. ambassador to South Africa (2004-05) and assistant secretary of state for African affairs (2005-09), is an adjunct senior fellow for Africa studies at the Council on Foreign Relations.

Saturday, May 30, 2015

June marks the 800th anniversary of Magna Carta, the ‘Great Charter’ that established the rule of law for the English-speaking world. Its revolutionary impact still resounds today, writes Daniel Hannan

Eight hundred years ago next month, on a reedy stretch of riverbank in southern England, the most important bargain in the history of the human race was struck. I realize that’s a big claim, but in this case, only superlatives will do. As Lord Denning, the most celebrated modern British jurist, put it, Magna Carta was “the greatest constitutional document of all time, the foundation of the freedom of the individual against the arbitrary authority of the despot.”

It was at Runnymede, on June 15, 1215, that the idea of the law standing above the government first took contractual form. King John accepted that he would no longer get to make the rules up as he went along. From that acceptance flowed, ultimately, all the rights and freedoms that we now take for granted: uncensored newspapers, security of property, equality before the law, habeas corpus, regular elections, sanctity of contract, jury trials.

Magna Carta is Latin for “Great Charter.” It was so named not because the men who drafted it foresaw its epochal power but because it was long. Yet, almost immediately, the document began to take on a political significance that justified the adjective in every sense.

The bishops and barons who had brought King John to the negotiating table understood that rights required an enforcement mechanism. The potency of a charter is not in its parchment but in the authority of its interpretation. The constitution of the U.S.S.R., to pluck an example more or less at random, promised all sorts of entitlements: free speech, free worship, free association. But as Soviet citizens learned, paper rights are worthless in the absence of mechanisms to hold rulers to account.

Magna Carta instituted a form of conciliar rule that was to develop directly into the Parliament that meets at Westminster today. As the great Victorian historian William Stubbs put it, “the whole constitutional history of England is little more than a commentary on Magna Carta.”

And not just England. Indeed, not even England in particular. Magna Carta has always been a bigger deal in the U.S. The meadow where the abominable King John put his royal seal to the parchment lies in my electoral district in the county of Surrey. It went unmarked until 1957, when a memorial stone was finally raised there—by the American Bar Association.

Only now, for the anniversary, is a British monument being erected at the place where freedom was born. After some frantic fundraising by me and a handful of local councilors, a large bronze statue of Queen Elizabeth II will gaze out across the slow, green waters of the Thames, marking 800 years of the Crown’s acceptance of the rule of law.

Eight hundred years is a long wait. We British have, by any measure, been slow to recognize what we have. Americans, by contrast, have always been keenly aware of the document, referring to it respectfully as the Magna Carta.

Why? Largely because of who the first Americans were. Magna Carta was reissued several times throughout the 14th and 15th centuries, as successive Parliaments asserted their prerogatives, but it receded from public consciousness under the Tudors, whose dynasty ended with the death of Elizabeth I in 1603.

In the early 17th century, members of Parliament revived Magna Carta as a weapon in their quarrels with the autocratic Stuart monarchs. Opposition to the Crown was led by the brilliant lawyer Edward Coke (pronounced Cook), who drafted the first Virginia Charter in 1606. Coke’s argument was that the king was sidelining Parliament, and so unbalancing the “ancient constitution” of which Magna Carta was the supreme expression.

[Photo: United for the first time, the four surviving original Magna Carta manuscripts are prepared for display at the British Library, London, Feb. 1, 2015. Credit: UPPA/ZUMA Press]

The early settlers arrived while these rows were at their height and carried the mania for Magna Carta to their new homes. As early as 1637, Maryland sought permission to incorporate Magna Carta into its basic law, and the first edition of the Great Charter was published on American soil in 1687 by William Penn, who explained that it was what made Englishmen unique: “In France, and other nations, the mere will of the Prince is Law, his word takes off any man’s head, imposeth taxes, or seizes any man’s estate, when, how and as often as he lists; But in England, each man hath a fixed Fundamental Right born with him, as to freedom of his person and property in his estate, which he cannot be deprived of, but either by his consent, or some crime, for which the law has imposed such a penalty or forfeiture.”

There was a divergence between English and American conceptions of Magna Carta. In the Old World, it was thought of, above all, as a guarantor of parliamentary supremacy; in the New World, it was already coming to be seen as something that stood above both Crown and Parliament. This difference was to have vast consequences in the 1770s.

The
American Revolution is now remembered on both sides of the Atlantic as a
national conflict—as, indeed, a “War of Independence.” But no one at
the time thought of it that way—not, at any rate, until the French
became involved in 1778. Loyalists and patriots alike saw it as a civil
war within a single polity, a war that divided opinion every bit as much
in Great Britain as in the colonies.

The American
Revolutionaries weren’t rejecting their identity as Englishmen; they
were asserting it. As they saw it, George III was violating the “ancient
constitution” just as King John and the Stuarts had done. It was
therefore not just their right but their duty to resist, in the words of
the delegates to the first Continental Congress in 1774, “as Englishmen
our ancestors in like cases have usually done.”

Nowhere, at this
stage, do we find the slightest hint that the patriots were fighting
for universal rights. On the contrary, they were very clear that they
were fighting for the privileges bestowed on them by Magna Carta. The
concept of “no taxation without representation” was not an abstract
principle. It could be found, rather, in Article 12 of the Great
Charter: “No scutage or aid is to be levied in our realm except by the
common counsel of our realm.” In 1775, Massachusetts duly adopted as its
state seal a patriot with a sword in one hand and a copy of Magna Carta
in the other.

I recount these facts to make an important, if
unfashionable, point. The rights we now take for granted—freedom of
speech, religion, assembly and so on—are not the natural condition of an
advanced society. They were developed overwhelmingly in the language in
which you are reading these words.

When we call them universal
rights, we are being polite. Suppose World War II or the Cold War had
ended differently: There would have been nothing universal about them
then. If they are universal rights today, it is because of a series of
military victories by the English-speaking peoples.

Various early
copies of Magna Carta survive, many of them in England’s cathedrals,
tended like the relics that were removed during the Reformation. One
hangs in the National Archives in Washington, D.C., next to the two
documents it directly inspired: the Declaration of Independence and the
Constitution. Another enriches the Australian Parliament in Canberra.

But
there are only four 1215 originals. One of them, normally housed at
Lincoln Cathedral, has recently been on an American tour, resting for
some weeks at the Library of Congress. It wasn’t that copy’s first visit
to the U.S. The same parchment was exhibited in New York at the 1939
World’s Fair, attracting an incredible 13 million visitors. World War II
broke out while it was still on display, and it was transferred to Fort
Knox for safekeeping until the end of the conflict.

Could there
have been a more apt symbol of what the English-speaking peoples were
fighting for in that conflagration? Think of the world as it stood in
1939. Constitutional liberty was more or less confined to the
Anglosphere. Everywhere else, authoritarianism was on the rise. Our
system, uniquely, elevated the individual over the state, the rules over
the rulers.

When the 18th-century statesman Pitt the Elder
described Magna Carta as England’s Bible, he was making a profound
point. It is, so to speak, the Torah of the English-speaking peoples:
the text that sets us apart while at the same time speaking truths to
the rest of mankind.

The very success of Magna Carta makes it
hard for us, 800 years on, to see how utterly revolutionary it must have
appeared at the time. Magna Carta did not create democracy: Ancient
Greeks had been casting differently colored pebbles into voting urns
while the remote fathers of the English were grubbing about alongside
pigs in the cold soil of northern Germany. Nor was it the first
expression of the law: There were Sumerian and Egyptian law codes even
before Moses descended from Sinai.

What Magna Carta initiated,
rather, was constitutional government—or, as the terse inscription on
the American Bar Association’s stone puts it, “freedom under law.”

It
takes a real act of imagination to see how transformative this concept
must have been. The law was no longer just an expression of the will of
the biggest guy in the tribe. Above the king brooded something more
powerful yet—something you couldn’t see or hear or touch or taste but
that bound the sovereign as surely as it bound the poorest wretch in the
kingdom. That something was what Magna Carta called “the law of the
land.”

This phrase is commonplace in our language. But think of
what it represents. The law is not determined by the people in
government, nor yet by clergymen presuming to interpret a holy book.
Rather, it is immanent in the land itself, the common inheritance of the
people living there.

The idea of the law coming up from the
people, rather than down from the government, is a peculiar feature of
the Anglosphere. Common law is an anomaly, a beautiful, miraculous
anomaly. In the rest of the world, laws are written down from first
principles and then applied to specific disputes, but the common law
grows like a coral, case by case, each judgment serving as the starting
point for the next dispute. In consequence, it is an ally of freedom
rather than an instrument of state control. It implicitly assumes
residual rights.

And indeed, Magna Carta conceives rights in
negative terms, as guarantees against state coercion. No one can put you
in prison or seize your property or mistreat you other than by due
process. This essentially negative conception of freedom is worth
clinging to in an age that likes to redefine rights as entitlements—the
right to affordable health care, the right to be forgotten and so on.

It
is worth stressing, too, that Magna Carta conceived freedom and
property as two expressions of the same principle. The whole document
can be read as a lengthy promise that the goods of a free citizen will
not be arbitrarily confiscated by someone higher up the social scale.
Even the clauses that seem most remote from modern experience generally
turn out, in reality, to be about security of ownership.

There
are, for example, detailed passages about wardship. King John had been
in the habit of marrying heiresses to royal favorites as a way to get
his hands on their estates. The abstruse-sounding articles about
inheritance rights are, in reality, simply one more expression of the
general principle that the state may not expropriate without due
process.

Those who stand awe-struck before the Great Charter
expecting to find high-flown phrases about liberty are often surprised
to see that a chunk of it is taken up with the placing of fish-traps on
the Thames. Yet these passages, too, are about property, specifically
the freedom of merchants to navigate inland waterways without having
arbitrary tolls imposed on them by fish farmers.

Liberty and
property: how naturally those words tripped, as a unitary concept, from
the tongues of America’s Founders. These were men who had been shaped in
the English tradition, and they saw parliamentary government not as an
expression of majority rule but as a guarantor of individual freedom.
How different was the Continental tradition, born 13 years later with
the French Revolution, which saw elected assemblies as the embodiment of
what Rousseau called the “general will” of the people.

In that
difference, we may perhaps discern the explanation of why the Anglosphere
resisted the chronic bouts of authoritarianism to which most other
Western countries were prone. We who speak this language have always
seen the defense of freedom as the duty of our representatives and so,
by implication, of those who elect them. Liberty and democracy, in our
tradition, are not balanced against each other; they are yoked together.

In February, the four surviving original copies of Magna Carta were
united, for just a few hours, at the British Library—something that had
not happened in 800 years. As I stood reverentially before them, someone
recognized me and posted a photograph on Twitter with the caption: “If
Dan Hannan gets his hands on all four copies of Magna Carta, will he be
like Sauron with the Rings?”

Yet the majesty of the document
resides in the fact that it is, so to speak, a shield against Saurons.
Most other countries have fallen for, or at least fallen to, dictators.
Many, during the 20th century, had popular communist parties or fascist
parties or both. The Anglosphere, unusually, retained a consensus behind
liberal capitalism.

This is not because of any special property
in our geography or our genes but because of our constitutional
arrangements. Those constitutional arrangements can take root anywhere.
They explain why Bermuda is not Haiti, why Hong Kong is not China, why
Israel is not Syria.

They work because, starting with Magna
Carta, they have made the defense of freedom everyone’s responsibility.
Americans, like Britons, have inherited their freedoms from past
generations and should not look to any external agent for their
perpetuation. The defense of liberty is your job and mine. It is up to
us to keep intact the freedoms we inherited from our parents and to pass
them on securely to our children.

Mr. Hannan is a British
member of the European Parliament for the Conservative Party, a
columnist for the Washington Examiner and the author of “Inventing
Freedom: How the English-speaking Peoples Made the Modern World.”

White House officials can be oddly candid in talking to their liberal
friends at the New Yorker magazine. That’s where an unnamed official in
2011 boasted of “leading from behind,” and where last year President Obama
dismissed Islamic State as a terrorist “jayvee team.” Now the U.S. Vice
President has revealed the Administration line on human rights in
China.

In the April 6 issue, Joe Biden recounts meeting Xi Jinping
months before his 2012 ascent to be China’s supreme leader. Mr. Xi
asked him why the U.S. put “so much emphasis on human rights.” The right
answer is simple: No government has the right to deny its citizens
basic freedoms, and those that do tend also to threaten peace overseas,
so U.S. support for human rights is a matter of values and interests.

Instead,
Mr. Biden downplayed U.S. human-rights rhetoric as little more than
political posturing. “No president of the United States could represent
the United States were he not committed to human rights,” he told Mr.
Xi. “President Barack Obama would not be able to stay in power if he did
not speak of it. So look at it as a political imperative.” Then Mr.
Biden assured China’s leader: “It doesn’t make us better or worse. It’s
who we are. You make your decisions. We’ll make ours.” [not the WSJ's emphasis.]

Mr. Xi took the advice. Since taking office he has detained more
than 1,000 political prisoners, from anticorruption activist Xu Zhiyong
to lawyer Pu Zhiqiang and journalist Gao Yu. He has cracked down on Uighurs in Xinjiang, banning more Muslim practices and jailing scholar-activist Ilham Tohti for life. Anti-Christian repression and Internet controls are tightening. Nobel Peace laureate Liu Xiaobo remains in prison, his wife Liu Xia
under illegal house arrest for the fifth year. Lawyer Gao Zhisheng left
prison in August but is blocked from receiving medical care overseas.
Hong Kong, China’s most liberal city, is losing its press freedom and
political autonomy.

Amid all of this, Mr. Xi and his government have faced little challenge from Washington. That is consistent with Hillary Clinton's
2009 statement that human rights can’t be allowed to “interfere” with
diplomacy on issues such as the economy and the environment. Mr. Obama
tried walking that back months later, telling the United Nations that
democracy and human rights aren’t “afterthoughts.” But his
Administration’s record—and now Mr. Biden’s testimony—prove otherwise.

In the name of ‘affordable’ loans, the White House is creating the conditions for a replay of the housing disaster

The Obama administration's troubling flirtation with another mortgage
meltdown took an unsettling turn on Tuesday with Federal Housing Finance
Agency Director Mel Watt's testimony before the House Financial Services
Committee.

Mr. Watt told the committee that, having received "feedback from
stakeholders," he expects to release by the end of March new guidance on
the "guarantee fee" charged by Fannie Mae and Freddie Mac to cover the
credit risk on loans the federal mortgage agencies guarantee.

Here
we go again. In the Obama administration, new guidance on housing
policy invariably means lowering standards to get mortgages into the
hands of people who may not be able to afford them.

Earlier this
month, President Obama announced that the Federal Housing
Administration (FHA) will begin lowering annual mortgage-insurance
premiums “to make mortgages more affordable and accessible.” While that
sounds good in the abstract, the decision is a bad one with serious
consequences for the housing market.

Government programs to make
mortgages more widely available to low- and moderate-income families
have consistently offered overleveraged, high-risk loans that set up too
many homeowners to fail. In the long run-up to the 2008 financial
crisis, for example, federal mortgage agencies and their regulators
cajoled and wheedled private lenders to loosen credit standards. They
have been doing so again. When the next housing crash arrives, private
lenders will be blamed—and homeowners and taxpayers will once again pay
dearly.

Lowering annual mortgage-insurance premiums is part of a
new affordable-lending effort by the Obama administration. More
specifically, it is the latest salvo in a price war between two
government mortgage giants to meet government mandates.

Fannie
Mae fired the first shot in December when it relaunched the 30-year, 97%
loan-to-value, or LTV, mortgage (a type of loan that was suspended in
2013). Fannie revived these 3% down-payment mortgages at the behest of
its federal regulator, the Federal Housing Finance Agency (FHFA)—which
has run Fannie Mae and Freddie Mac since 2008, when both
government-sponsored enterprises (GSEs) went belly up and were put into
conservatorship. The FHA’s mortgage-premium price rollback was a
counteroffensive.

Déjà vu: Fannie launched its first price war
against the FHA in 1994 by introducing the 30-year, 3% down-payment
mortgage. It did so at the behest of its then-regulator, the Department
of Housing and Urban Development. This and other actions led HUD in 2004
to credit Fannie Mae’s “substantial part in the ‘revolution’ ” in
“affordable lending” to “historically underserved households.”

Fannie’s
goal in 1994 and today is to take market share from the FHA, the main
competitor for loans it and Freddie Mac need to meet mandates set by
Congress since 1992 to increase loans to low- and moderate-income
homeowners. The weapons in this war are familiar—lower pricing and
progressively looser credit as competing federal agencies fight over
existing high-risk lending and seek to expand such lending.

Mortgage
price wars between government agencies are particularly dangerous,
since access to low-cost capital and minimal capital requirements gives
them the ability to continue for many years—all at great risk to the
taxpayers. Government agencies also charge low-risk consumers more than
necessary to cover the risk of default, using the overage to lower fees
on loans to high-risk consumers.
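
A back-of-the-envelope sketch of how such a cross-subsidy works, using purely hypothetical default rates, loss severities, and fees (none of the numbers below come from the FHFA studies):

```python
# Illustrative only: hypothetical default rates, loss severities, and fees,
# not figures from the FHFA cross-subsidy studies.

def fair_fee(default_rate, loss_severity):
    """Risk-based fee: expected annual credit loss as a fraction of balance."""
    return default_rate * loss_severity

low_risk  = fair_fee(default_rate=0.002, loss_severity=0.25)   # 0.05% of balance
high_risk = fair_fee(default_rate=0.030, loss_severity=0.40)   # 1.20% of balance

flat_fee = 0.0055  # one fee charged to everyone, e.g. 55 basis points

print(f"low-risk borrower overpays by  {(flat_fee - low_risk) * 10_000:.0f} bps")
print(f"high-risk borrower underpays by {(high_risk - flat_fee) * 10_000:.0f} bps")
```

On these made-up numbers, the safer borrower pays about 50 basis points more than his risk warrants, and that overage funds roughly 65 basis points of underpricing for the riskier loan.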

Starting in 2009 the FHFA
released annual studies documenting the widespread nature of these
cross-subsidies. The reports showed that low down payment, 30-year loans
to individuals with low FICO scores were consistently subsidized by
less-risky loans.

Unfortunately, special interests such as the
National Association of Realtors—always eager to sell more houses and
reap the commissions—and the left-leaning Urban Institute were
cheerleaders for loose credit. In 1997, for example, HUD commissioned
the Urban Institute to study Fannie and Freddie’s single-family
underwriting standards. The Urban Institute’s 1999 report found that
“the GSEs’ guidelines, designed to identify creditworthy applicants, are
more likely to disqualify borrowers with low incomes, limited wealth,
and poor credit histories; applicants with these characteristics are
disproportionately minorities.” By 2000 Fannie and Freddie did away with
down payments and raised debt-to-income ratios. HUD encouraged them to
more aggressively enter the subprime market, and the GSEs decided to
re-enter the “liar loan” (low doc or no doc) market, partly in a desire
to meet higher HUD low- and moderate-income lending mandates.

On
Jan. 6, the Urban Institute announced in a blog post: “FHA: Time to stop
overcharging today’s borrowers for yesterday’s mistakes.” The institute
endorsed an immediate cut of 0.40% in mortgage-insurance premiums
charged by the FHA. But once the agency cuts premiums, Fannie and
Freddie will inevitably reduce the guarantee fees charged to cover the
credit risk on the loans they guarantee.
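
For a sense of the arithmetic behind these price moves, here is a rough sketch using a hypothetical $200,000 house, a 4% note rate, and an illustrative starting premium; only the 3% down payment, the 97% loan-to-value split, and the 0.40% cut are taken from the text above:

```python
# Hypothetical house price, note rate, and starting premium; only the 3% down
# payment / 97% loan-to-value split and the 0.40% premium cut come from the text.

def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

price        = 200_000
down_payment = 0.03 * price          # $6,000 down
loan         = price - down_payment  # $194,000 loan -> 97% LTV

base = monthly_payment(loan, annual_rate=0.04)

# Annual mortgage-insurance premium, assessed on the loan balance, paid monthly.
mip_before = 0.0135
mip_after  = mip_before - 0.0040     # the 0.40% cut endorsed above

print(f"payment with old premium: ${base + loan * mip_before / 12:,.0f}/mo")
print(f"payment with new premium: ${base + loan * mip_after / 12:,.0f}/mo")
```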

Now the other shoe appears poised to drop, given Mr. Watt’s promise on Tuesday to issue new guidance on guarantee fees.

This
is happening despite Congress’s 2011 mandate that Fannie’s regulator
adjust the prices of mortgages and guarantee fees to make sure they
reflect the actual risk of loss—that is, to eliminate dangerous and
distortive pricing by the two GSEs. Ed DeMarco, acting director of the
FHFA since March 2009, worked hard to do so but left office in January
2014. Mr. Watt, his successor, suspended Mr. DeMarco's efforts to comply
with Congress's mandate. Now that Fannie
will once again offer heavily subsidized 3%-down mortgages, massive new
cross-subsidies will return, and the congressional mandate will be
ignored.

The law stipulates that the FHA maintain a
loss-absorbing capital buffer equal to 2% of the value of its
outstanding mortgages. The agency obtains this capital from profits
earned on mortgages and future premiums. It hasn’t met its capital
obligation since 2009 and will not reach compliance until the fall of
2016, according to the FHA’s latest actuarial report. But if the economy
runs into another rough patch, this projection will go out the window.

Congress
should put an end to this price war before it does real damage to the
economy. It should terminate the ill-conceived GSE affordable-housing
mandates and impose strong capital standards on the FHA that can’t be
ignored as they have been for five years and counting.

Mr. Pinto,
former chief credit officer of Fannie Mae, is co-director and
chief risk officer of the International Center on Housing Risk at the
American Enterprise Institute.

The Department of Justice isn't known for a sense of humor. But on Monday it announced a civil settlement with Citigroup over failed mortgage investments that covers almost exactly the period when current Treasury Secretary Jack Lew oversaw divisions at Citi that presided over failed mortgage investments. Now, that's funny.

Though Justice, five states and the FDIC are prying $7 billion from the bank for allegedly misleading investors, there's no mention in the settlement of clawing back even a nickel of Mr. Lew's compensation. We also see no sanction for former Treasury Secretary Timothy Geithner, who allowed Citi to build colossal mortgage risks outside its balance sheet while overseeing the bank as president of the New York Federal Reserve.

The settlement says Citi's alleged misdeeds began in 2006, the year Mr. Lew joined the bank, and the agreement covers conduct "prior to January 1, 2009." That was shortly before Mr. Lew left to work for President Obama and two weeks before Mr. Lew received $944,518 from Citi in "salary, payout for vested restricted stock," and "discretionary cash compensation for work performed in 2008," according to a 2010 federal disclosure report. That was also the year Citi began receiving taxpayer bailouts of $45 billion in cash, plus hundreds of billions more in taxpayer guarantees.

While Attorney General Eric Holder is forgiving toward his Obama cabinet colleagues, he seems to believe that some housing transactions can never be forgiven. The $7 billion settlement includes the same collateralized debt obligation for which the bank already agreed to pay $285 million in a settlement with the Securities and Exchange Commission. The Justice settlement also includes a long list of potential charges not covered by the agreement, so prosecutors can continue to raid the Citi ATM.

Citi offers in return what looks like a blanket agreement not to sue the government over any aspect of the case, and waives its right to defend itself "based in whole or in part on a contention that, under the Double Jeopardy Clause in the Fifth Amendment of the Constitution, or under the Excessive Fines Clause in the Eighth Amendment of the Constitution, this Agreement bars a remedy sought in such criminal prosecution or administrative action." We hold no brief for Citi, which has been rescued three times by the feds. But what kind of government demands the right to exact repeated punishments for the same offense?

The bank's real punishment should have been failure, as former FDIC Chairman Sheila Bair and we argued at the time. Instead, the regulators kept Citi alive with taxpayer money far beyond what it provided most other banks as part of the Troubled Asset Relief Program. Keeping it alive means they can now use Citi as a political target when it's convenient to claim they're tough on banks.

And speaking of that $7 billion, good luck finding a justification for it in the settlement agreement. The number seems to have been pulled out of thin air, since it's unrelated to Citi's mortgage-securities market share or any other metric we can see, beyond its media impact.

If this sounds cynical, readers should consult the Justice Department's own leaks to the press about how the Citi deal went down. Last month the feds were prepared to bring charges against the bank, but the necessities of public relations intervened.

According to the Journal, "News had leaked that afternoon, June 17, that the U.S. had captured Ahmed Abu Khatallah, a key suspect in the attacks on the American consulate in Benghazi in 2012. Justice Department officials didn't want the announcement of the suit against Citigroup—and its accompanying litany of alleged misdeeds related to mortgage-backed securities—to be overshadowed by questions about the Benghazi suspect and U.S. policy on detainees. Citigroup, which didn't want to raise its offer again and had been preparing to be sued, never again heard the threat of a suit."

This week's settlement includes $4 billion for the Treasury, roughly $500 million for the states and FDIC, and $2.5 billion for mortgage borrowers. That last category has become a fixture of recent government mortgage settlements, even though the premise of this case involves harm done to bond investors, not mortgage borrowers.

But the Obama Administration's references to the needs of Benghazi PR remind us that it could be worse. At least Mr. Holder isn't blaming the Geithner and Lew failures on a video.

It is now five years since the end of the most recent U.S. financial crisis of 2007-09. Stocks have made record highs, junk bonds and leveraged loans have boomed, house prices have risen, and already there are cries for lower credit standards on mortgages to "increase access."

Meanwhile, in vivid contrast to the Swiss central bank, which marks its investments to market, the Federal Reserve has designed its own regulatory accounting so that it will never have to recognize any losses on its $4 trillion portfolio of long-term bonds and mortgage securities.

Who remembers that such "special" accounting is exactly what the Federal Home Loan Bank Board designed in the 1980s to hide losses in savings and loans? Who remembers that there even was a Federal Home Loan Bank Board, which for its manifold financial sins was abolished in 1989?

It is 25 years since 1989. Who remembers how severe the multiple financial crises of the 1980s were?

The government of Mexico defaulted on its loans in 1982 and set off a global debt crisis. The Federal Reserve's double-digit interest rates had rendered insolvent the aggregate savings and loan industry, until then the principal supplier of mortgage credit. The oil bubble collapsed with enormous losses.

Between 1982 and 1992, a disastrous 2,270 U.S. depository institutions failed. That is an average of more than 200 failures a year or four a week over a decade. From speaking to a great many audiences about financial crises, I can testify that virtually no one knows this.

In the wake of the housing bust, I was occasionally asked, "Will we learn the lessons of this crisis?" "We will indeed," I would reply, "and we will remember them for at least four or five years." In 2007 as the first wave of panic was under way, I heard a senior international economist opine in deep, solemn tones, "What we have learned from this crisis is the importance of liquidity risk." "Yes," I said, "that's what we learn from every crisis."

The political reactions to the 1980s included the Financial Institutions Reform, Recovery and Enforcement Act of 1989, the FDIC Improvement Act of 1991, and the very ironically titled GSE Financial Safety and Soundness Act of 1992. Anybody remember the theories behind those acts?

After depositors in savings and loan associations were bailed out to the tune of $150 billion (the Federal Savings and Loan Insurance Corporation having gone belly up), then-Treasury Secretary Nicholas Brady pronounced that the great legislative point was "never again." Never, that is, until the Mexican debt crisis of 1994, the Asian debt crisis of 1997, and the Long-Term Capital Management crisis of 1998, all very exciting at the time.

And who remembers the Great Recession (so called by a prominent economist of the time) in 1973-75, the huge real-estate bust and New York City's insolvency crisis? That was the decade before the 1980s.

Viewing financial crises over several centuries, the great economic historian Charles Kindleberger concluded that they occur on average about once a decade. Similarly, former Fed Chairman Paul Volcker wittily observed that "about every 10 years, we have the biggest crisis in 50 years."

What is it about a decade or so? It seems that is long enough for memories to fade in the human group mind, as they are overlaid with happier recent experiences and replaced with optimistic new theories.

Speaking in 2013, Paul Tucker, the former deputy governor for financial stability of the Bank of England—a man who has thought long and hard about the macro risks of financial systems—stated, "It will be a while before confidence in the system is restored." But how long is "a while"? I'd say less than a decade.

Mr. Tucker went on to proclaim, "Never again should confidence be so blind." Ah yes, "never again." If Mr. Tucker's statement is meant as moral suasion, it's all right. But if meant as a prediction, don't bet on it.

Former Treasury Secretary Tim Geithner, for all his daydream of the government as financial Platonic guardian, knows this. As he writes in "Stress Test," his recent memoir: "Experts always have clever reasons why the boom they are enjoying will avoid the disastrous patterns of the past—until it doesn't." He predicts: "There will be a next crisis, despite all we did."

Right. But when? On the historical average, 2009 + 10 = 2019. Five more years is plenty of time for forgetting.

Mr. Pollock is a resident fellow at the American Enterprise Institute and was president and CEO of the Federal Home Loan Bank of Chicago, 1991-2004.