Obesity remains a serious health problem and it is no secret that many people want to lose weight. Behavioral economists typically argue that “nudges” help individuals with various decision-making flaws to live longer, healthier, and better lives. In an article in the new issue of Regulation, Michael L. Marlow discusses how nudging by government differs from nudging by markets, and explains why market nudging is the more promising avenue for helping citizens to lose weight.

Armed with a computer model in 1935, one could probably have written, some 80 years ago, the exact same story on California drought that appears today in the Washington Post, prompted by the very similar outlier temperatures of 1934 and 2014.

Two long wars, chronic deficits, the financial crisis, the costly drug war, the growth of executive power under Presidents Bush and Obama, and the revelations about NSA abuses have given rise to a growing libertarian movement in our country – with a greater focus on individual liberty and less government power. David Boaz’s newly released The Libertarian Mind is a comprehensive guide to the history, philosophy, and growth of the libertarian movement, with incisive analyses of today’s most pressing issues and policies.


Archives: 09/2012

Back in May, I said that Congress would avoid the ‘fiscal cliff’ by agreeing to some sort of deal that would effectively kick the can down the road (yet again). According to Politico, a group of senators is considering a can-kicking idea that immediately brought to my mind the movie Groundhog Day:

Several sources said the lawmakers are working on legislative language that could be used to set up a process that would force Congress to act to diminish the debt. The last time the two parties came to such a deal, they set up the automatic spending cuts — the sequester — that many lawmakers are now scrambling to undo before they kick in at the start of next year.

And when that process fails, Congress can come up with another one. And when that one fails…

In a new essay for the Templeton Foundation’s “big questions online” site, I explore how social media can promote individual liberty:

When Iranian dissidents took to the streets to protest irregularities in the 2009 election, some observers dubbed it the “Twitter revolution” for the role the social networking site played in coordinating the demonstrations. The State Department asked Twitter to postpone scheduled maintenance to ensure that pro-democracy activists in Iran would have uninterrupted access. Thousands of Western supporters turned their Twitter icons green in solidarity with the “green revolution.”

These protests failed to bring down the Iranian regime. And the next year, Malcolm Gladwell took to the pages of the New Yorker to pour cold water on the idea that social media had the potential to transform societies. He pointed out that for all their apparent decentralization and spontaneity, the most effective social movements tend to operate according to a carefully designed plan and to be put into action by intensely committed volunteers. The sit-ins and bus boycotts of the civil rights movement, for example, were carefully orchestrated from NAACP headquarters in New York. “The civil-rights movement was more like a military campaign than like a contagion,” Gladwell wrote.

But as I explain, Gladwell is over-stating his case. While social media can’t topple oppressive regimes by itself, it can be a powerful tool for motivating and coordinating social protests.

Unfortunately, social media isn’t always a force for good. In China, the government has figured out how to tightly control social media sites and make them part of the state’s censorship apparatus. Read more at Big Questions Online.

Virulent identity politics are swirling across post-revolutionary North Africa, as seen on full display in Libya and Egypt. Some reports now point to a pro-al Qaeda group or other extremist elements as responsible for the attack in Libya, which was planned in advance and unrelated to the anti-Islam video. The protestors in Libya may have been acting separately. There are still many unknown details.

But the idea that a derogatory and clownish internet video justifies mob violence or murder can only be described as barbaric.

The U.S. government should make crystal clear to its Libyan and Egyptian counterparts that if they wish to have any relationship, let alone a functional relationship, with the United States in the future, we expect the perpetrators of these acts to be brought to justice swiftly and sufficient measures to be taken to ensure they cannot be repeated. Apologies are not enough.

For its part, the United States needs to figure out what went wrong in terms of operational security, and how the U.S. ambassador to Libya was killed and the Cairo embassy overrun. The past 10 years have blurred the line between warfighters and diplomats, but this experience is a reminder that the two are still distinct.

Finally, although their rights to free speech are sacrosanct and must be defended by all means possible, the filmmakers ought to consider the dangerous game that they are playing. The filmmaker’s statement to the Wall Street Journal that he raised $5 million from 100 Jewish donors to make the film threatens to fuel hatred, and the admission by a consultant to the film that “we went into this knowing this was probably going to happen” is cold comfort to the deceased’s families. Both are reminders that possession of a right is not an argument for the prudence of every possible exercise of that right.

The United States is a free society in which free speech is respected, but not every American enjoys every exercise of that right. The work of Andres Serrano and Robert Mapplethorpe infuriated and offended millions of Americans, but the right to free speech was protected and survived. One hopes that this standard can be reached by the citizens and governments of Libya and Egypt soon.

The Cato Institute is sad to report the death of the trailblazing and iconoclastic critic of psychiatry Thomas Szasz, professor of psychiatry emeritus at the Health Science Center, State University of New York and Cato adjunct scholar. He was 92.

Szasz advocated for individual liberty from a substantially different point of view than most libertarian intellectuals. Rather than focusing on economic arguments or political philosophy, Szasz focused on personal responsibility and how the institutions and practices of modern psychiatry fundamentally undermine the rights and responsibilities of individuals.

In the 1950s and ’60s, psychiatry was in a dark place. Thanks to movies like One Flew Over the Cuckoo’s Nest, people are now aware of the profoundly disturbing practices that took place within the walls of mental institutions: lobotomies, electroshock treatments, and involuntary medication. At the time, however, these practices were part of a profession that saw itself in a golden age. The emerging science of brain disorders—neatly categorized in the 1952 first edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-I)—seemed to promise the slow systemization and categorization of divergent behaviors into a predictable, scientific study of human behavior. Vague words like “neurosis” and “psychosis” were casually thrown about by doctors as objective classifications. If a doctor deemed someone “neurotic,” that person could be involuntarily committed and subjected to the aforementioned tortures.

In some ways, this “golden age” of psychiatry parallels the false “golden ages” of other disciplines—times when too much knowledge was presumed and too much power granted. The command and control economies of the 20th century are an obvious example: Brilliant people “cracked the code” of the economy and all they needed for a more rational social order was better data and more control. Such overestimations of knowledge often precede claims for broader power. In some sense, Szasz’s war against psychiatry can be viewed in the same light as Hayek’s war against planned economies: an opposition to state-backed conglomerations of power masquerading under the pretense of knowledge.

The psychiatric profession has a lot of power. Psychiatrists can exonerate murderers by deeming them insane. They can institutionalize people against their will. And they do all of this based on the trust the “system” places in their subjective determinations of what is or is not aberrant behavior.

Szasz had a problem with this system. As he wrote in the preface to the 50th anniversary edition of his most famous book, The Myth of Mental Illness:

I insisted that mental hospitals are like prisons, not hospitals; that involuntary mental hospitalization is a type of imprisonment, not medical care; and that coercive psychiatrists function as judges and jailers, not healers. I suggested that we view and understand “mental illnesses” and psychiatric responses to them as matters of law and rhetoric, not matters of medicine and science…If all “conditions” now called “mental illnesses” proved to be brain diseases, there would be no need for the notion of mental illness and the term would become devoid of meaning. However, because the term refers to judgments of some persons about the (bad) behaviors of other persons, the opposite is what actually happens: the history of psychiatry is the history of an ever-expanding list of “mental disorders.”

To expand on why Szasz believed mental illness to be a “myth”: If we call someone “mentally ill” without reference to a physical brain disorder but only as a “problem” with her behavior, then we are describing something that is difficult, if not impossible, to quantify objectively. We must invoke some norm to make our diagnosis more than a subjective opinion about “divergent” behavior. If homosexuality is a mental illness, then the norm of heterosexuality is presumed. If marital infidelity is a mental illness, then the norm of fidelity is presumed. Without any appeal to an objective criterion, we will inevitably institutionalize people based on our opinions about their personalities. As Szasz says, the obvious question always arises: “What kinds of behavior are regarded as indicative of mental illness, and by whom?”

Perhaps the most famous example of misusing the term “mental illness” is drapetomania, or “runaway slave syndrome.” But drapetomania was not the first misuse of mental illness, nor would it be the last. Szasz’s unique contribution to psychiatry was to continually refocus the question on whether there is a scientific, objective basis for asserting that certain “kinds of behavior are regarded as indicative of mental illness.” His unique contribution to libertarian thought was to focus on personal responsibility as the proper response to claims of “mental illness,” to be concerned about the involuntary incarceration of the “mentally ill” as an immoral deprivation of liberty, and to criticize the state as the most significant “whom” that defines mental illness.

Because of this focus on the state’s effect on social and scientific areas, rather than in just the economic and philosophical realms, Szasz’s work encourages libertarians to look to broader social criticisms of government. Szasz wisely questioned the implications of letting the government define “mental illness” and trusting the political forces that affect those determinations. As he wrote in The Myth of Mental Illness, “Debate about what counts as mental illness has been replaced by legislation about the medicalization and demedicalization of behavior. Old diseases such as homosexuality and hysteria disappear, while new diseases such as gambling and smoking appear, as if to replace them.”

There is something profoundly unsettling about the state having any say in defining “normality.” The state is never a passive player in these situations. Government officials have concerns and interests of their own that can fundamentally distort the perception of mental illness and “divergent” behavior. And, perhaps most importantly, the state has vast amounts of money it can use to fund research and institutions that skew the playing field in its favor. The medicalization of Attention Deficit Disorder in American public schools is perhaps the best recent example of this phenomenon. In retrospect, the mass overdiagnosis of ADD seems the inevitable result of a recalcitrant and monolithic public school system combined with a state-backed mental health establishment obsessed with psychopharmacology.

Despite the scathing criticism given to the book-length version of The Myth of Mental Illness (1961), Szasz’s critiques were arguably at the forefront of major changes in psychiatry that followed. Although many current psychology students have not heard of Szasz, they have assuredly read about the famous Rosenhan experiment. As documented in the essay “On Being Sane in Insane Places” (highly recommended), the Rosenhan experiment had eight sane, normal people admit themselves to mental institutions complaining of auditory hallucinations. They were then told to behave normally and try to convince the doctors that they were, in fact, sane. This proved nearly impossible to do. All behaviors were immediately categorized as manifestations of their subjectively diagnosed “neuroses.” Moreover, whereas many of the other patients sensed that the subjects were planted there, the doctors could not be convinced.

As a result of Szasz’s writings, One Flew Over the Cuckoo’s Nest, the Rosenhan experiment, and other work, American mental hospitals are no longer the horrific institutions of the 1950s. But things aren’t as much better as one might hope. The most disturbing development is the total politicization of the mental health profession. As Szasz wrote in 2011:

Since that time [1961], the formerly sharp distinctions between medical hospitals and mental hospitals, voluntary and involuntary mental patients, and private and public psychiatry have blurred into nonexistence. Virtually all medical and mental health care is now the responsibility of and is regulated by the federal government, and its cost is paid, in full or part, by the federal government. In short, psychiatry is medicalized, through and through. The opinion of official American psychiatry, embodied in the American Psychiatric Association, contains the imprimatur of the federal and state governments.

Even as his scientific studies slowly go out of date, Szasz’s work will always underscore the fact that the state does not only control, it distorts. Sometimes it distorts so much that the world starts to look, well, kind of insane.

As a fan of comedian Dennis Miller, I was astonished to discover that he became a supporter of U.S. government policies in fighting terrorism after the September 11th attacks. Perhaps I am in the minority on this issue, but the 9/11 attacks were what helped to erode my faith in government.

Few people bring this up, but in 2004, a CIA Inspector General report found a number of weaknesses in the Intelligence Community’s pre-9/11 counterterrorism practices, many of which “contributed to performance lapses related to the handling of materials concerning individuals who were to become the 9/11 hijackers.” Two al Qaeda terrorists who later became 9/11 hijackers, Nawaf al-Hazmi and Khalid al-Mihdhar, had attended a meeting of suspected terrorists in Malaysia in early 2000. The Inspector General probe uncovered that the CIA had learned that one of the operatives had a U.S. visa, and the other had flown from Bangkok to Los Angeles.

Yet, the Agency failed to forward that relevant information by “entering the names of suspected al-Qa’ida terrorists on the ‘watchlist’ of the Department of State and providing information to the Federal Bureau of Investigation (FBI) in proper channels.” Some 50 to 60 individuals—including Headquarters personnel, overseas officers, managers, and junior employees—had read the cables containing the travel information on al-Hazmi and al-Mihdhar.

The report said in a stark assessment, “The consequences of the failures to share information and perform proper operational follow-through on these terrorists were potentially significant.” Indeed. Had the names been passed to the FBI and the State Department through proper channels, the operatives could have been watchlisted and surveilled. In theory, those steps could have yielded information on financing, flight training, and other details vital to unraveling the 9/11 plot.

Corroborating these findings was a Joint Inquiry Report by the Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence. It found “persistent problems” with the “lack of collaboration between Intelligence Community agencies.” Of the FBI in particular, the report went so far as to say, as late as December 2002, that “…the Bureau—as a law enforcement organization—is fundamentally incapable, in its present form, of providing Americans with the security they require against foreign terrorist and intelligence threats.” Now that is a ringing endorsement of our government’s ability to protect us.

We often hear that the failure of 9/11 was government-wide. But few observers delve into why it failed, especially on 9/11 anniversaries, when, one would think, such explanations would be most helpful. A number of structural factors impede effective collaboration. For instance, many intelligence agencies operate under different legal authorities. Many of them have distinct customers and cultures, and jealously guard their turf, budgets, sources, and methods. Individuals within various agencies also share information by relying on trust and personal relationships.

Yet knowledge was so dispersed that there was no single person or “silver bullet” that could have enabled intelligence agencies to prevent the 9/11 attacks. As the CIA Inspector General report made clear, neither the U.S. government nor the Intelligence Community had a comprehensive strategic plan to guide counterterrorism efforts. Amid the pre-9/11 flurry of warnings, intelligence cables, and briefing materials on al Qaeda’s plot to hijack airliners and ram them into our buildings, a significant failure, concluded the 9/11 Commission, was one of imagination.

After 9/11, many Americans were quick to cede yet more power to government. While much has changed in eleven years, with agencies less reluctant to share critical data, a February 2011 Government Accountability Office report noted that the government “does not yet have a fully-functioning Information Sharing Environment,” that is, “an approach that facilitates the sharing of terrorism and homeland security information”:

GAO found that the government had begun to implement some initiatives that improved sharing but did not yet have a comprehensive approach that was guided by an overall plan and measures to help gauge progress and achieve desired results.

Over the decade, while our government focused narrowly on the problem of terrorism, it also embraced ambitious, wasteful, and counterproductive programs and policies that drained us economically and spread our resources thin. After 9/11, excluding the invasions and occupations of Iraq and Afghanistan, American taxpayers have shelled out over $1 trillion for a sprawling counterterrorism-industrial complex, replete with its thousands of federal, state, and local government organizations and the private companies that work with them.

Perhaps it is unsurprising that our government expanded after an attack that called into question its primary constitutional function: protecting our country. What is more remarkable is that the public continues to accept humiliating pat-downs and invasive full-body scans for airline travel, costly grant programs rolled out by the Department of Homeland Security, and reckless politicians who advocate endless wars against predominantly Muslim states that play directly into al Qaeda’s hands.

Now, many Americans ask: Are we safer? Certainly, but marginal increases in safety have come at an exceptionally high cost, have far exceeded the point of diminishing returns, and have encouraged a terrorized public to exalt a government that failed them.

On Sunday, I went to the “stakeholder” part of the ongoing trade negotiations for the Trans Pacific Partnership. This round of the talks was held at the Lansdowne resort in Leesburg. The “stakeholder” events allow the public—in other words, people without direct access to the actual policy-makers—to have its voice heard.

Back in the late 1990s, trade negotiations caused quite an uproar, with some violent protests in Seattle being the highlight. Things are much calmer these days. I was told there was a protest on Sunday, but I didn’t notice it (and it didn’t disturb anyone’s attempts to catch up on the NFL games going on).

Why has the furor over international trade rules died down so much?

One reason may be that this is the 14th round of talks for this particular agreement, with no end in sight. There may be some protest fatigue setting in, and it may be getting difficult to convince people this is worth worrying about.

Another reason may be that we already have trade agreements with many of the participants in these talks. To some extent, it just consolidates existing agreements into a strange grouping of various countries that touch the Pacific Ocean. Thus, there is nothing radically new here.

Despite the low profile of trade protests these days, there are still people who are upset with the policies the United States is pursuing in these agreements. For the most part, however, it is not the “free trade” parts that are controversial. It is the United States’ quest for ever stronger intellectual property protections, as well as the special provisions that allow foreign companies to sue governments in international tribunals for vaguely defined due process-type concerns, that have people upset.

All in all, it is easy to come away from the experience thinking, what are we doing here and what happened to free trade? This agreement may never be concluded; it covers mostly countries with whom we already have trade agreements; and so-called “trade” agreements are becoming less and less about free trade. At a certain point, you start to think that it may be time to scrap the existing approach and try something new.

Last week, American Public Media’s Marketplace posted an interactive map—attached to a much-appreciated interview with me—enabling users to see how much states spent per public-college student in 2011, and how that had changed over the last twenty-five years. It cites as its sources the State Higher Education Executive Officers and me. Unfortunately, it gives only half of the story I was trying to relate with my crunching of SHEEO’s data: per-pupil state and local spending has generally been on a downward trend, but that does not come close to fully explaining rising prices.

To get the full breakdown of the data, you can access my calculations here. Note, though—as I wrote in the blog post to which my crunching was originally attached—I didn’t put the spreadsheet together for widespread dissemination, at least not to appear authoritative. I think it’s on target, but I didn’t triple-check it as I would have a more formal data analysis. More importantly, the key point is that while most states have seen decreasing per-pupil allocation trends—primarily because of very large enrollment spurts—they have much more than made up for those losses through tuition increases, bringing in roughly two dollars for every dollar lost. Taxpayers aren’t cheap; colleges are greedy.