Obama and the Harvard “We”

When I attended Harvard Law School, just before Barack Obama and at the same time as his wife, it was (surprise) a place steeped in a particular sort of elitism. I think there are two kinds of elitism, a good one and a bad one, and that President Obama may have been corrupted by the latter during his time at Harvard. I don’t suppose that the Harvard of my and Obama’s generation intentionally aimed to produce the bad kind of elitism, but it soaked its students in a bad elitist culture. Even the vocabulary of the students changed to accommodate elitism. The best example is what I call the Harvard “we.”

You have heard the editorial “we,” as in “we believe that HLS promotes insidious elitism.” In that case “we” means “I,” because the editorialist thinks that “I” is bad style. You have also heard the nursing “we,” as in “have we had our daily enema?” The nursing “we” means “you,” and, I think, it derives from baby talk. Further, you have heard the spousal “we,” as in “we need to take out the trash,” which actually means “we,” but as between the two of us, it’s really your job. And you have heard the royal “we,” as in “bring us our scepter and our breakfast.” The royal “we” means “God and I,” because the king’s power derives from God. In addition, you have certainly heard the Harvard “we,” and I’m going to tell you what it means.

The Harvard “we,” as in “we need to make a rule prohibiting home schooling,” means “we” but not just any “we”; it means we who know better than you. It means we who have power, or should have power.

The “we” speakers themselves often are unaware of this, but any sentence in which the Harvard “we” occurs refers to the uses of state authority. It’s sort of the obverse of “they,” as in “they just passed a new law that says you can't drive and talk on your phone,” or “they say we don’t have enough information to make our own health insurance decisions,” or “check this out: they made somebody put a warning label on a toilet brush: ‘do not use for personal hygiene.’”

(By the way, when “we” elites become the all-powerful “they” of whom regular folk speak, you become an inferior “them” as in candidate Obama’s notorious observation, “It's not surprising then they get bitter. They cling to guns or religion or antipathy to people who aren't like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations.”)

At Harvard, the professors constantly use the elite “we,” and most of the students pick it up within the first month or two. Like their professors, they become the mighty “they,” at least in their own minds; and so when referring to the powerful “them,” they say “we.” The students don’t openly admit it; they simply assume that they are fit to make decisions for other people. The Harvard “we” is a paternalistic “we.”

Right now, unkempt, spotty geeks who got better grades than you did are sitting in Harvard (and Stanford and Princeton and Yale) lecture halls saying things like, “We should deconstruct the bundle of property rights into its constituent parts and eliminate the strands that impinge on legitimate community rights” — which when translated means, “The government should have the power to take your property in the name of certain social interests that my classmates and I consider to be worthy.”

By the end of the first year, the habit is ingrained. The students have become the “they” and have lost the natural fear of being told what to do by bureaucrats, agencies, and policemen — because they assume that they will now be making the rules. They no longer see any humor in Ronald Reagan’s famous line, “The nine most terrifying words in the English language are, ‘I'm from the government and I'm here to help.’”

I’m happy to say that when I was at Harvard Law, I didn’t go in for that “we” business. Despite my own snobbery and angry-young-man ardor, I didn’t want to be part of an elite class that would beneficently lord it over the little people. I still don’t.

The Harvard “we” is an elitist “we.” I admit that elitism isn’t always wrong. People in the good elite stand for good values and set an example that encourages good behavior. People in the bad elite use power to dictate your behavior, because they know better than you. Meanwhile, they exempt themselves from the constraints of values, because they think that their ends justify their means.

Barack Obama is the greatest living practitioner of the Harvard “we.” To understand that is to understand his presidency.

How would the elitist-in-chief govern? He would seek to expand his rule, intervening in important areas of life, without respect for process or checks or balances.

Are there examples?

Certainly there are. One is the fact that “we” want much more power over financial transactions, so “we” — that is, Obama — put Harvard Law professor Elizabeth Warren in charge of the new Consumer Financial Protection Bureau without the inconvenience of a Senate confirmation or any other kind of open political process. She probably would not have survived a confirmation hearing. Even the left-liberal Senator Chris Dodd warned Obama that she might not be confirmable and objected to his nomination maneuver. Naturally, she is one of “us,” having been a professor of Obama’s at Harvard.

This new bureau can grant itself its own budget and has independent rulemaking authority. It is not subject to the oversight involved in congressional appropriations. But it will largely determine how credit is extended by banks, other financial firms, and even small businesses that grant credit to consumers. It will be a huge office with extensive powers. Its director is an important officer of the government. What about the advice and consent clause? Article II, Section 2 of the Constitution says the president “shall nominate, and by and with the Advice and Consent of the Senate, shall appoint . . . Officers of the United States.” The Wall Street Journal put it this way: “To deflect this question, the president’s lawyers have cobbled together yet another legal fiction. The trick is to give her [Warren] a second appointment. In addition to serving as President Obama's special assistant, she will also serve as a special adviser to Treasury Secretary Timothy Geithner. This allows her to pretend she is Mr. Geithner’s humble consultant when she and her staff come up with an action plan for the new agency. This legalistic gambit serves as a fig leaf for a very different reality: Mr. Geithner will never reject any of Ms. Warren's ‘advice.’ The simple truth is that the Treasury secretary is being transformed into a rubber stamp for a White House staffer.”

Of course “we” also want power over the businesses of medicine and health insurance. By use of a recess appointment and without a debate in the Senate, Obama put Harvard professor and Harvard alumnus Donald Berwick in charge of Medicare. Under ObamaCare, Medicare has extensive new powers to reshape the business of medicine.

Obama and the man he chose to run the newly empowered agency don’t seem to see any difference between actual government-mandated rationing and the “rationing” that occurs through individual cost-based decisions resulting from a market for services. Berwick said, “The decision is not whether or not we will ration care — the decision is whether we will ration with our eyes open.” And the White House, according to the Wall Street Journal, issued an internal memo with this talking point: “The fact is, rationing is rampant in the system today, as insurers make arbitrary decisions about who can get the care they need. Don Berwick wants to see a system in which those decisions are transparent — and that the people who make them are held accountable.”

Stunning spin. That really is the same as my saying that Ferraris are being unfairly rationed, because I can’t afford one.

By the way, don’t ever think that Obama’s Harvard “we” means “my constituents and I” or even “my supporters and I.” To know how he really thinks and acts, observe him in a tight spot. My definition of “character” is how you behave under pressure. By October 2010, with midterm elections coming up and his party on the ropes, President Obama was under some pressure. So he said it would be “inexcusable” for Democrats to sit out the November 2nd elections, given the stakes for the country and the potential consequences for their own agenda. He went on to criticize the enthusiasm gap between energized Republicans and members of his own party. Asked about his party’s political troubles, he said, “And so part of the reason that our politics seems so tough right now, and facts and science and argument does [sic] not seem to be winning the day all the time, is because we’re hardwired not to always think clearly when we’re scared, and the country is scared, and they [sic] have good reason to be.”

What a linguistic nightmare. Trying to explain why so many of his supporters were abandoning his party, he used another “we” — we the lame-brained human animals who were not admitted to Harvard. Not for a second, though, did he sincerely include himself in the class of great apes not smart enough to “think clearly” when fear strikes. No, he made that very clear. In the same sentence, he ungrammatically shifted to the third person plural, saying, “They have good reason to be [scared].” There, I have to agree.

The idea is that the president is right and rational and, if you voted Republican in 2010, you are scared and irrational. But don’t worry. The president will take some falsely modest blame for the election results. As he told a reporter for the New York Times, “Given how much stuff was coming at us, we probably spent much more time trying to get the policy right than trying to get the politics right. There is probably a perverse pride in my administration — and I take responsibility for this; this was blowing from the top — that we were going to do the right thing even if short-term it was unpopular.” Allow me to translate that into Obama’s Harvard-we voice: “We spent all of our time figuring out how to make you do what is best for you, and not enough time telling you fairy tales.”

“It’s not that we believed our own press or press releases, but there was definitely a sense at the beginning that we could really change Washington,” another White House official told me. “‘Arrogance’ isn’t the right word, but we were overconfident.” (New York Times, October 17, 2010)

Yet the question remains: what were they “overconfident” about? What did they want to “change”? All the evidence indicates that these apparatchiks, as well as their boss, were overconfident about their ability to change “they” into “we,” to turn a set of blinkered, bigoted, undereducated elitists into a committee with absolute power over everyone else. Pardon me if I fail to sympathize.

About this Author

Michael Christian is a recovering lawyer trying to avoid working a real job.

The Metamorphosis

When the Cuban people awoke in April 2011, they did not find themselves transformed into giant insects. That change had already occurred. Over the course of the previous 50 years, Fidel Castro had transformed the island into one giant beehive or ant colony laboring single-mindedly for his vision of a Caribbean utopia. What they did wake up to find was something entirely novel: a vibrant options market in 1950s vintage Detroit automotive classics.

In “Cuba: Change We Can Count On?” (Liberty, December 2010), I reported the passage of enabling legislation by the Cuban government to guide the Congress of the Communist Party in implementing far-reaching reforms to the economy. Though the fine print of implementation had yet to be worked out, a big change was decreed. It included the legalization of self-employment in “dozens” of areas, the privatization of many small state-owned businesses as cooperatives, and the establishment of limited property rights in real estate and some bits of movable property such as cars, boats, and appliances, many of which can now be bought and sold.

The impetus for all this hope and change was money. Cuba’s economic and fiscal health was dire. The reforms aimed to eliminate one-fifth of the government work force (thereby cutting expenditures); to move former government employees into taxable petit-capitalist enterprises (thereby raising revenue); and — along with liberalized foreign investment reforms — to stimulate the economy and improve Cuba’s fiscal prospects.

In April 2011 the details of the new legislation were announced. In a recent paper entitled “Economic Impact of New Employment, Tax and Financial Policies in Cuba,” presented at the XXI Annual Meeting of the Association for the Study of the Cuban Economy (Miami, August 2011), Luis R. Luis, former director of the Latin America Department at the Institute of International Finance and chief economist at the Organization of American States (OAS) in Washington, applied macroeconomic analysis and a crystal ball to predict the effects of the reforms.

Given the market sophistication of the Congress of the Cuban Communist Party — akin to that of the Creation Science Institute sequencing the malaria genome — the reforms are still a work in progress. They aim primarily at improving state finances, but the use of price controls, size limits on firms, confiscatory tax rates, complicated monthly payment requirements, and petty regulatory activity “could result,” as Luis drily observes, “in even larger evasion than is usual in developing countries by single proprietorships and the self-employed, [and] will also result in many activities taking place wholly or partially underground, limiting tax revenue and fostering operation of undersized and inefficient activities.”

The very first modifications to the April bill were made a scant few weeks later, following a strike by cocheros (horse cart drivers) in Bayamo, Granma Province (née Oriente Province). The provincial capital is immortalized in Cuba’s national anthem as the birthplace of independence. It is a place redolent with symbolism, and a situation best handled with care. Bayamo cocheros, members of one of the newly privatized occupations, discovered that when they added their new tax liability to their clients’ bill, demand plummeted. So they went on strike.

The new self-employment taxes consist of four categories: social security tax, personal income tax, sales tax, and payroll tax. Let’s look at each.

1. The social security tax is levied at 25% of the tax base (in the US, it’s about 13% — with half paid by the employer). So far, so progressive.

2. The personal income tax gives a whole new meaning to “taxing the rich.” Marginal rates rise to 50% for annual incomes of $208! When combined with the social security levies, the personal tax nears 60%. Mindful of the reader’s attention span, I will skip all the qualifying fine print, ceilings, and permutations that complicate the base tax rate — except for business expenditures, aka deductions. These are limited to 20% or 40%, depending on the enterprise.

As Luis notes: “These rates discriminate against enterprises whose cost of inputs exceed[s] 40%, which will lead to curtailment of activity, firm creation, and widespread tax evasion.” Cocheros, for some unknown reason, were limited to a 20% business expenditures deduction.

To a populace that has never paid taxes, much less dealt with the fine points of business expense deductions and tax accounting protocols, the entire experience must have been far from “liberalizing.” It was reminiscent of a farcical zarzuela, the Spanish version of a Gilbert and Sullivan operetta, with a dose of Monty Python thrown in for gravitas. The Congress responded by raising cocheros’ allowable deductions from 20% to 40%.

3. Sales taxes for all products are levied at 10%, except for farm products, which are taxed at 5%. Simple enough.

4. The new payroll taxes are not only complex; they (along with the other taxes) actually, as Luis observes, “pose a formidable constraint on employment.” The following summary — through no fault of Luis — is beyond this author’s ability to make intelligible, much less fun:

A new 25% payroll tax is instituted. The base of the tax is the overall wage bill except that there is a minimum taxable amount equal to a multiple of the average wage for specific workers calculated by the appropriate local labor office. The base is made progressive as the minimum taxable amount increases with the size of the payroll. Thus for firms with 1 to 9 workers, the minimum equals 1.5 times, rising to 2 times for those between 10 and 15 workers and to 3 times for those firms that have more than 15 employees.
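The passage above can be made a little more concrete with arithmetic. The sketch below reflects only one plausible reading of the rule as Luis summarizes it; the per-worker interpretation of the minimum base, the 450-peso average wage, and the function itself are illustrative assumptions, not anything stated in the decree:

```python
def payroll_tax(wage_bill, workers, average_wage, rate=0.25):
    """One reading of Cuba's 2011 payroll tax: 25% of the wage bill,
    but never less than 25% of a minimum base equal to a size-dependent
    multiple of the average wage per worker (illustrative sketch only)."""
    if workers <= 9:
        multiple = 1.5
    elif workers <= 15:
        multiple = 2.0
    else:
        multiple = 3.0
    minimum_base = multiple * average_wage * workers
    return rate * max(wage_bill, minimum_base)

# A hypothetical 12-worker shop paying each worker the (assumed)
# average wage of 450 pesos per month: the actual wage bill is 5,400,
# but the minimum base is 2.0 * 450 * 12 = 10,800 pesos,
# so the tax is 0.25 * 10,800 = 2,700 pesos, half the real payroll.
print(payroll_tax(wage_bill=5400, workers=12, average_wage=450))  # 2700.0
```

On this reading, the lower a firm’s actual payroll relative to the official average wage, and the more workers it hires, the more punitive the minimum base becomes, which would help explain why Luis calls the tax “a formidable constraint on employment.”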

So much for the new taxes. Will Cuba’s vision of self-employment provide the fiscal salvation the government so desperately needs, or is it just a tempest in a teapot?

If the government succeeds in shifting 250,000 government workers into self-employment, and they pay all their taxes, Luis estimates a $40 million revenue windfall for the government (not to mention all the supplies and material that would not be pilfered or stolen from state companies and offices, as supplements for employees’ meager salaries — a point important enough that Luis footnotes it in his report). But so far, no more than 50,000 state employees have taken the bait.

Furthermore, it’s impossible to predict the tax compliance rate, which, worldwide, is low for the self-employed. “However,” Luis observes, “it is expected that the fiscal authorities will enforce the tax code with some vigor. Undoubtedly, the high tax rates will act as an incentive to evasion and to a reversion of business to the underground economy. Sizeable underreporting of revenues is to be anticipated.”

In 2011, Cuba’s population was 11 million. As of mid-May 2011, about 300,000 people (excluding farmers) were self-employed, which by slightly different counts comes to no more than 3.5% of the labor force. Though the passage of the new legislation doubled the number of self-employed, a large percentage of them were people who had come out of the black-market closet in the hope of becoming legal.

Luis’ analysis bears some contextual elaboration because, as Miguel Bretos, author of Matanzas: The Cuba Nobody Knows, has stated, “Those seeking to understand Cuban history in conventional ways are doomed to frustration.” He was referring to the eminent French art critic and father of surrealism, André Breton, who, visiting Cuba in the late 1920s, observed that, “Truly, Cuba is too surrealistic a country to be livable.”

What makes the details of the reforms so surreal is their schizophrenic set of objectives. When first proposed, the reforms were compared to the Chinese model: an infusion of capitalism to build wealth, with the Communist Party retaining absolute power. But, as the Chinese are discovering, when laissez faire markets infect a regime of total power, the liberty virus proves hard to cure.

The Chinese are a practical people with few Maoist ideologues left among them. No one, from the highest party apparatchik to the lowliest peasant, objects to becoming richer. Meanwhile, power is being incrementally ceded through a phenomenon usually foreign to absolutist regimes: limited but sensitive responses to popular dissatisfaction with corruption, judicial arbitrariness, environmental degradation, out-of-control eminent domain, and even — very slightly — the transfer of some political power. (For example, provincial officials in Wukan, Guangdong Province, are allowing local elections to take place.) Moreover, the Chinese are rather comfortable with duality; witness the Taoist concept of yin and yang.

It’s not quite so simple for Cubans.

The competing objectives of raising capital through economic liberalization while retaining absolute power are — in Cuba — complicated by a third factor that tips the reforms from the bipolar into the surreal: an anti-capitalist idealism so fervent that it equates private employment with involuntary servitude, profit with depravity, and self-employment with crimes against society. These attitudes not only saturate the nomenklatura — with their source and apogee in the moralist-in-chief, Fidel — but also pervade the majority of the Cuban population. Cubans are poor and unhappy; they sense that something is wrong with the system; they are starving for change. Yet they idolize St. Fidel’s idealism and venerate him as the conscience of the Revolution.

National character, along with its kinfolk — ethnic, religious, cultural, and racial character — has fallen into disrepute as a way of defining a population. Whatever validity it might once have possessed has evaporated. It has been dismissed for its oversimplification, unscientific methodology, racist undertones, and complete absence of political correctness. But it retains a great deal of insight and literary utility, when considered informally. Hedrick Smith was definitely onto something when he described the Russian character as a cross between German and Mexican temperaments.

Cuba was ruled by Spain for over 400 years — longer than any of Spain’s other colonies. During the Latin American wars of independence in the 1820s, Cuba remained staunchly Spanish. Until it won its independence in 1902, it was considered an integral part of Spain. That date is so recent that in 1966 the last surviving Afro-Cuban general of the War for Independence, Generoso Campos Marquetti (by then living in the US, in exile from Castro’s revolution), was asked to testify before the US Congress during hearings investigating the nature of the Castro Revolution. It’s as if Nathanael Greene or Henry Knox had still been alive within living memory, to comment on US current affairs.

The Cuban character is a diversely spiced mélange. Settled by immigrants from Galicia, Asturias, Catalonia, and the Basque Provinces in northern Spain, Cuba was infused with a strain of rigid, dour, doctrinaire, and humorless temperament. Fidel Castro is a second-generation Galician — he can’t dance, carry a tune, or tell a joke. Though he would reject the comparison (in spite of his early flirtations with Falangism and Fascism), Castro has much in common with the long-lived and long-ruling Francisco Franco and his Minister of Propaganda, José Millán-Astray — both Galicians.

General Millán-Astray was a serious parody of himself. Founder of the Spanish Foreign Legion and a decorated war hero who’d lost an arm and an eye, he personified Spanish fascism. He was obstinate and ruthless, yet impulsive; flamboyant, reckless, and self-aggrandizing. At rallies he resembled the mad Dr. Strangelove. Wearing one white glove and a black eye patch, he would exaggeratedly throw out his one arm in the Nationalist salute, while shouting his telltale mottoes, “Viva la muerte!” ("long live death") and “Death to intelligence!” ("death to the intelligentsia").

Ladino and Canary Islands immigrants added cunning, perspicacity, and some levity to the Cuban national character; Andalucians, Valencian gypsies, and West African slaves tempered the whole with rhythm and a wry sense of humor. Provincial and (in the case of the West Africans) tribal clubs, mutual aid societies, and other ethnic affiliations lasted well into the 1960s.

The Spanish component of the Cuban character alone suffices to explain the paradoxes inherent in holding multiple contradictory perspectives. Pepe Azcarraga, a 91-year-old Spaniard from a small village in Aragon (but now a retired college professor living in the US), personifies this Weltanschauung. He recounts that once, as a teenager, he accompanied a friend to the dry goods almacén to buy towels. On the way back, he helped her carry the goods, stacked on his doubled arms. As he passed by his own house, his mother, perched on the second-floor balcony, spotted him on the cobbled street below supporting the pile of towels in front of him as if they were the Blessed Sacrament and he was leading an Easter procession. She beckoned to him angrily. Puzzled, he detoured into his house.

Once inside, she asked him what the diablo he thought he was doing carrying a pile of towels for all the world to see. Before he could answer, she walloped the fear of propriety into him, moaning that “the whole town will think the Azcarraga family needs towels!”

Pepe tells the story without a hint of irony, as if his failure to anticipate the finer etiquette of towel buying in a gossipy small town were an obvious sign of his stupidity. At different times, depending on the context of the conversation, he’ll call himself a socialist, a capitalist, a libertarian, or simply a man of the left. He and his immediate family sided with Franco during the Civil War — for the sake of order and stability. Yet as members of the local militia guarding the frontier against infiltration from Republican guerrillas holding out in the French Pyrenees after the war, Pepe and his friends, when off-duty, would cross over and (avoiding politics) socialize with the enemy, many of whom were friends, family, and acquaintances. They shared snacks, smokes, stories, and beer. A devout Catholic who attends Mass every Sunday, he is nonetheless skeptical of the existence of an afterlife — and he harbors a sense of unworthiness that keeps him from communion.

Pepe stands on the shoulders of giant, original, way-outside-the-box thinkers: surrealist artist Salvador Dalí, whose melting clocks epitomize the persistence of memory; philosopher Miguel de Unamuno, who introduced doubt to faith, and found that they got along just fine; writer Miguel de Cervantes, whose Don Quixote — the patron saint of hopeless causes — made tilting at windmills not only intelligible but honorable; and Grand Inquisitor Tomás de Torquemada (literally, twist and burn), whose autos-da-fé melted heretics in order to save them. To an Anglo-Saxon who can only shake his head in perplexity, like a mental centrifuge spinning to separate the conflicting strains, little of this intellectual anarchy makes sense.

Fidel Castro, the Cuban Communist Party, and their recent economic reforms embody this cognitive dissonance. Luis’ assessment is not sanguine: “It is evident from the multiple constraints, prohibitions, regulations and high taxes involved in the new measures the authorities are striving to maintain tight control over the liberalization process. These controls will dampen or even fully contain the output and consumption gains from market opening.”

And the controls are extensive. One hundred seventy-eight self-employment occupations have been legalized (up from 157); most require little or no capital (animal caretaker, hairdresser, locksmith, plumber, mason, mattress repairman). A few others, such as room renting (though not to foreigners, and no subletting) and transportation services (truck and taxi driving), imply greater use of property or equipment. Restaurants are now allowed 50 tables, up from 20. Capital investment is capped at $800.

Even the most touted reform, the buying and selling of real estate, is less than meets the eye. Ownership is limited to domiciles — one residence and one vacation home — and possession is limited to citizens or foreigners permanently residing in Cuba.

Additionally, the domestic portion of the reforms requires that all transactions take place in nonconvertible pesos. (Cuba has dual currencies: convertible and nonconvertible pesos — one for tourists, the other for Cubans — both highly controlled.) Foreign investment in the newly allowed enterprises is forbidden, as are family and personal remittances (also subject to taxes), which may be used only for personal consumption. Wholesale activities, inter-provincial trade, and most intermediation among firms are also forbidden.

“Intermediation” — a fancy word to describe the place that banks (among other entities) hold between savers and investors: they take deposits, then lend them out to entrepreneurs. Cuba’s (official) private savings rate for the last six years is about 2% of income — not an important source of financing for new enterprises, though probably understated because of non-bank and in-kind savings. As Luis again drily notes, “Most bank loans are made to state enterprises. A vibrant self-employment sector would be helped greatly by access to credit from the banking system. This would require building-up a credit system, with an important role for micro-credits by local branches of banks with appropriate credit expertise . . . [as in] Asia.”

Any reforms along those lines are unlikely, because they would undermine the institutionalized apartheid system that attempts to minimize economic fraternization between Cubans and foreigners. Very few of the newly approved occupations affect the export or tourist sector, and the government monopoly on labor for joint venture and foreign enterprises has not been affected. It is surprising that the new employment and tax measures do not address Cuba’s external accounts, even though more foreign investment — under the pre-existing framework — is being attracted.

Luis boldly sums up his report with an estimate of the impact of the reforms on Cuba’s GDP. He admits he’s on shaky ground — with disclaimers, caveats, weasel words, and the assumption that many more black-market enterprises will come into the open. Despite the effects of government controls, he broadly predicts a 2% GDP increase as a low estimate, with a 6.4% GDP increase if all the hoped-for 250,000 state employees become successful entrepreneurs, make lots of money, and pay all their taxes.

The Cuban reforms are a tug-of-war among various conflicting objectives: on the practical level, increasing state revenue while maintaining total state power; on the philosophical level, allowing enough “human action” (in the Misesian sense) without diluting the “social justice” objectives of the Revolution by introducing greed, ambition, and a subversive focus on individuality.

On that last point — to paraphrase Charles Darwin, who, at the conclusion of The Origin of Species, foretold that “light will be thrown on the origin of man” — the Cuban reforms will shed much light on how far the capitalist goose that lays the eggs of prosperity can be starved, strangled, and robbed, without killing it.

About this Author

Robert H. Miller is a builder, outdoor adventure guide, and author of Kayaking the Inside Passage: A Paddler's Guide from Olympia, Washington to Muir Glacier, Alaska.

Insurance: For Me or Thee?

Once upon a time, before we got married, my wife Tina got a ticket for driving without insurance and decided to contest it pro se. Her argument to the judge was simple: insurance was designed to protect the insured from potential losses to herself — not to protect a third party. Anyone she might harm had recourse to indemnification by demanding recompense either voluntarily or through civil action — the traditional recourse for most torts. She added, for good measure, that compulsory insurance laws were a racket — nothing more than rent-seeking, insurance-industry full-employment legislation.

At the time, Tina was very poor and couldn’t afford insurance. Burdened by heavy student-loan debt and no job prospects, she was treading water running a one-woman cleaning business. Her $300 Chevy Nova was basic transportation. She had no other assets and was living in a $150-per-month apartment on the wrong side of town.

The judge — in a totally unnecessary flourish of engagement — cited these very reasons to show that mandatory insurance was necessary. Tina retorted that you can’t squeeze blood from a turnip; that, traditionally, once the perpetrator’s assets, however large or small these might have been, had been exhausted in compensation, that was all she wrote; that, in essence, mandatory insurance schemes forced the poor to cover wealthier people who could afford to insure themselves against damages perpetrated by those who could barely afford food and a roof.

Of course, she lost, but the judge admired her spunk and charged her only half the usual fine. While trying to settle up at the cashier's window, she argued her case to the cashier too. He asked her if she was black or Mexican or Indian and pregnant. She wasn’t, so she didn’t get off.

With the prospect of mandatory health insurance coming in 2014, will we get off? Where will the unfunded mandates stop?

Mandatory car insurance is premised on the assumption that driving is a privilege, not a right. Therefore, greater state control is justified. The counter-argument is that people have a right to travel; that driving a vehicle is the modern equivalent of using a horse, and that horse travel was never considered a privilege. It was a necessity.

Alongside the privilege argument (which actually came later) was the “assurance” argument, the argument that there is “no way of assuring that even though fault was assessed the victim of an automobile accident would be able to collect from the tortfeasor” (as Bill Long recounts in Automobile Insurance: A Brief History).

This argument prompts the question: since there is no assurance that a victim will be able to collect damages from a pedestrian, bicycle, equestrian, or horse-and-buggy accident — or from any other type of accident, including accidents on property normally covered by homeowner’s, renter’s, or liability insurance — will we one day be forced to buy these coverages also? I can just imagine governments requiring panhandlers and the homeless to carry liability insurance to make it easier for citizens to collect damages from unfortunate encounters with them.

The “assurance” argument is better described as a “convenience” argument: an argument about providing a convenience for insurance companies and the better-off, at the expense of the poor. (The uninsured better-off face serious loss, if not destitution, when at fault.)

With the invention of the automobile in the late 19th century came the inevitable side effect of automobile accidents. These were perceived — rightly or wrongly (and probably as a natural response to a new and untested technology) — as more frequent and more harmful than previous, more familiar torts. Therefore, it was thought, new laws were required to govern automobiles.

Connecticut led the way in 1925 with a modest “financial responsibility” law. This required any vehicle owner involved in an accident with damages over $100 to prove “financial responsibility to satisfy any claim for damages, by reason of personal injury, to, or death of, any person, of at least $10,000.” This early financial responsibility requirement applied to vehicle owners only after their first accident. In the same year, Massachusetts passed the first compulsory insurance law as a prerequisite to vehicle registration.

Mandatory insurance schemes force the poor to cover wealthier people who could afford to insure themselves against damages perpetrated by those who could barely afford food and a roof.

By and large, traditional tort practices remained effective, since — for over 30 more years — no other state saw a need to enact special automobile accident legislation. Then, in 1956, New York passed its compulsory insurance law, with North Carolina following suit the next year. Today, every state but New Hampshire has some sort of compulsory insurance scheme, and even it has a “personal responsibility” requirement.

Minimum insurance coverage requirements vary wildly from state to state, since estimating the cost of an accident before it occurs is very difficult. The requirements are often expressed in tripartite form — as, for example, in Alaska’s and Maine’s laws, with the highest requirement at 50/100/25, or in the District of Columbia’s, at 10/25/5. These numbers are shorthand for thousands of dollars and refer, in sequence, to: “bodily injury per person/bodily injury per accident/property damage.”

After an accident, and once these limits have been reached — again, that’s all she wrote. Limits on insurance coverage have no relationship to liability limits, which are determined only by a judgment and restricted only by one’s net worth.

How effective is the mandatory auto insurance system? An Insurance Research Council study estimated that about 15% of US motorists are uninsured — in Colorado, almost 23%.

Many of the logical shortcomings in the mandatory car insurance laws must be evident to people generally, because there is no political will to enforce them effectively. In most states, it’s pretty easy to evade the mandates. Most people who fail to comply with the laws do so because they cannot afford the additional cost. It doesn’t seem that the will exists to remove these people’s means of transportation, and often their means of earning a living. (California and New Jersey, however, have taken a perverse approach to incentivizing compliance: if uninsured drivers are victims in an accident, they are — by law — barred from recovering non-economic damages, such as damages for “pain and suffering,” from the perpetrator.)

Instead of being fined or having their vehicles taken away, motorists are ordinarily given a ticket, and the fine is waived when they show up in court with proof of insurance. Naturally, they can then cancel the coverage or cease making payments once the court date has passed. All this does is create a hassle for the uninsured who happen to get caught, and increase the paperwork for the insurance companies — a small price to pay, I assume — that minister to the captive market.

Do states that have more uninsured drivers actually have lower fatality rates or lower accident rates, because uninsured drivers will presumably drive more cautiously? This is a milder form of economist Walter Williams’ thought experiment, in which he mused that traffic accident rates would decline dramatically if every car’s steering wheel were equipped with a razor-sharp rapier extending from the center of the wheel to within a few inches of the driver’s sternum.

Many of the logical shortcomings in the mandatory car insurance laws must be evident to people generally, because there is no political will to enforce them effectively.

Would the costs to the auto liability system be lowered if we had no mandatory coverage? Perhaps. The narrowing of the base might work against the lowering, but the reduction in regulation would certainly promote it. On the other hand, rates might increase with a broader use of uninsured and underinsured coverage — a pittance to pay for greater freedom of choice and much more convenience.

Soon after the enactment of the first mandatory car insurance laws, the imposition of compulsory social insurance (or retirement insurance) in the form of Social Security became a reality. Lately, after some of the floods, hurricanes, and tornadoes that have devastated various regions of the country, precipitating massive federal and state relief programs, mandatory flood insurance has been proposed.

Today we are faced with the prospect of compulsory health insurance, beginning in 2014 — if the Supreme Court upholds the constitutionality of Obamacare, a program being challenged by several states because of its compulsory nature.

One major provision of the new Health Care Act requires employers above a certain size to buy health insurance for their employees — definitely a third-party mandate. The irony of this requirement is that the practice of employer-provided health insurance began during World War II as a way for businesses to get around government-imposed wage and price controls. Since employers couldn’t offer salary hikes, they began to offer perks which, through a loophole in the wage and price control legislation, were not considered pay raises. Yesterday’s dodge becomes today’s mandate.

Advocates of compulsory health insurance argue that it is in the best interest of every individual. It broadens the base of insured people, thereby lowering premiums. But this argument hides the underlying logic of compulsory health insurance: whether or not it actually benefits individuals, it benefits third parties — insurance companies, paying patients (mostly insured), hospitals, and taxpayers, all of whom, to one degree or another, now pick up the tab for deadbeat patients (mostly uninsured).

Only a small minority of uninsured patients are destitute. For the rest, being uninsured is a lifestyle choice made possible by the widespread requirement that hospitals treat the seriously ill regardless of their ability to pay. The repeal of such laws would provide the strongest incentive for everyone to choose to buy insurance, while the truly destitute would rely either on charity or on Medicaid.

Insurance was invented to protect people from unforeseen losses to themselves, not to protect third parties. Transferring the definition of insurance into the realm of bonding muddles the distinction. Some states, such as Arizona, recognize this and offer a bonding option — based on the premise that driving a car is a privilege, and on the state constitution’s prohibition against forcing an individual into any sort of a private contract. But it’s a messy compromise, with folks overwhelmingly choosing insurance instead of bonding.

And when it comes right down to it, isn’t it reprehensible for a majority that is mostly well-to-do to force a less well-off minority to buy insurance merely for the majority’s convenience?

About this Author

Robert H. Miller is a builder, outdoor adventure guide, and author of Kayaking the Inside Passage: A Paddler's Guide from Olympia, Washington to Muir Glacier, Alaska.

Censoring South Park

Earlier this year I read an interview with Matt Stone and Trey Parker, the creative duo behind the hit animated comedy South Park, in conjunction with their new Broadway musical The Book of Mormon. What struck me was one of them saying that in the episode of South Park in which they lambasted the cult of Scientology, they had wanted to say that Tom Cruise is in the closet. Their lawyer advised them that Cruise could sue them for defamation, so instead they put the cartoon version of Tom Cruise in a literal closet that he refused to come out of. The result was laugh-out-loud comedic gold, but it highlights one of my major peeves about legal causes of action: the law of defamation.

Defamation is a cause of action under which a plaintiff can sue a defendant for damage to his reputation. In For a New Liberty, Murray Rothbard wrote that he believed defamation law should be abolished, because a person’s reputation exists in the brains of other people and the plaintiff has no property right in other people’s minds. My concern is broader; I believe that defamation law scares people away from making statements that might offend those among us with the money to hire lawyers. This fear of being sued for defamation chills people's ability to say what they want. It scares them away from criticizing others, even when the criticism might be justified and deserved.

This danger is often poignant in the case of such artistic representations as South Park, which makes deep, meaningful social commentary by making jokes, often offensive ones, directed at people who could easily take offense and who generally have money. The strange thing is that the First Amendment guarantees freedom of speech. Why isn’t the First Amendment regarded as making charges of defamation unconstitutional?

There is a larger and a smaller answer. The larger answer is that the members of the Supreme Court, even the supposedly “textualist” and “originalist” conservatives, do not take the words of the Constitution literally. They make interpretations that twist and mangle it into something that looks like what they want, something that deforms the meaning of the words the Founders put on paper. The smaller, more specific answer is that the Supreme Court has grappled with the conflict between free speech and defamation, and has chosen a middle ground that tries to reconcile the two.

Why isn't the First Amendment regarded as making charges of defamation unconstitutional?

In the landmark case of New York Times v. Sullivan (1964), a public official who oversaw Southern police officers sued the Times and members of the civil rights movement under a defamation theory, accusing them of damaging the policemen’s reputation by publishing an ad indicating that the police had committed crimes against demonstrators. Instead of holding defamation unconstitutional, the Supreme Court found for the defendants, holding that when public officials assert defamation they must prove “actual malice,” meaning that the defendant knew his statement was false or acted with reckless disregard for the truth. This is a much higher standard than the “negligence” requirement that applies to defamation of private individuals on matters of public concern, or the mere “publication” requirement that applies to defamation of private citizens on matters of private concern. After Sullivan, however, the Supreme Court expanded the actual-malice rule to cover “public figures” as well as public officials, so most celebrities, such as Tom Cruise, must prove actual malice.

Actual malice was designed to prevent censorship. I am sure that the Court believed it was being quite generous by creating such a high barrier to recovery. But because defamation continued to exist, the fear of being sued and the expense of litigation remain a serious impediment to American free speech and to our ability to criticize people of political and social importance. Speaking freely about the flaws (real or alleged) of our political and cultural leadership is a basic requirement for democracy to function.

A more recent important case is Hustler Magazine & Larry Flynt v. Jerry Falwell, a 1988 United States Supreme Court case in which evangelist Jerry Falwell sued a pornographic magazine for printing a joke that accused him of having sex with his mother. The accusation was obviously a joke that no one could take seriously. It was also clearly an example of the use of charges of defamation to censor criticism and take revenge against people who offend you. The jury found against Falwell on his libel claim, but found against Hustler on the “intentional infliction of emotional distress” claim — a somewhat similar cause of action that is also used to censor criticism and punish offensive behavior — and awarded substantial monetary damages. The Supreme Court, however, found in favor of the magazine on the “IIED” (as lawyers call it) claim, citing the need to protect the American tradition of political satire cartoons, and held that the New York Times v. Sullivan “actual malice” standard for defamation against public figures should also be used in cases involving intentional infliction of emotional distress claims against public figures, in order to protect free speech and create breathing room for vigorous debate. Regarding the right to be offensive toward other people, the Court said that offensive speech is protected by the First Amendment.

But again, the Court refused to see the truth sitting right under its nose: the only real purpose of claims of defamation (or of intentional infliction of emotional distress claims brought against defendants because of what they say or write) is to censor speech, and this violates the First Amendment. The law of defamation has no place in a society that believes in intellectual freedom for all citizens. We libertarians are basically the only group of people in America who say that the emperor has no clothes and who criticize governmental mistakes that modern liberals and conservatives ignore or condone. Defamation is an obvious abuse of the law and of the state’s coercive power to repress independent thinking, and we should all get angry about it.

About this Author

Russell Hasan lives in Connecticut. He is a graduate of Vassar and graduated with Honors from the University of Connecticut School of Law. His passions include philosophy, libertarianism, computer programming, and the New York Yankees. His most recent books are the libertarian political treatise Golden Rule Libertarianism and the epistemological essay The Apple of Knowledge, available for Kindle, Nook, and iPad.

Office Complex

Recently, I missed a flight and ended up in a vast airport-adjacent suburb with a few hours to kill. My first stop was at a Starbucks to do some quick work. (Although I don’t drink coffee, I travel enough to know that Starbucks outlets almost always have clean bathrooms and reliable wireless Internet connections.)

One appeared quickly, at the corner of a big-box shopping center. It was larger than most Starbucks shops in the Pacific Northwest, so I figured there’d be plenty of room near noon on a weekday. But I was mistaken. The place was packed.

It took a few minutes to find an empty table where I could set down my tea and set up my laptop. In the meantime, I noticed dozens of commercial conversations, negotiations, and meetings going on in this semi-public space. It had the feel of a Middle Eastern bazaar.

A youngish man with a carefully cultivated scraggly beard and black apparel was plugged into his laptop via elaborate headgear. He was facing me, so I couldn’t see the screen of his computer; but, from the cadence of his talk, it was evident that he was participating in some sort of video conference. He made direct eye contact with me for a few moments — which I thought might be a reproach for looking at him — but then he changed his gaze to another person and spoke into his mic.

I noticed dozens of commercial conversations, negotiations, and meetings going on in this semi-public space. It had the feel of a Middle Eastern bazaar.

Something I’d read somewhere came back to me: videoconferencing veterans suggest choosing people or things in the room around you to represent the other participants in a conference call. On video, this creates the impression that you’re responding to specific others in the “meeting,” as if they were in a real room with you. He was just using me as an eye-contact avatar.

I couldn’t make out everything the fashionable man said. He was too far away and the Starbucks had too much background noise. But a few phrases made it across the space. “Elevations.” “Build-out.” “Retrofit.” “Improvements.” Because my wife is an architect, I recognized these as terms from a construction project — and, specifically, the expansion of an existing building.

Other snippets of words he used conveyed a certain fastidiousness; he constantly asked others what they thought and if they understood what someone else had said. Sounded like he was the construction manager or coordinator on the project.

I noticed that he’d chosen a seat with a blond panel wall behind it. The small video camera atop his computer screen would frame him in a background that could be from some fashionable office. And the elaborate headgear probably filtered out the background noise. Smart. His clients would have no idea he was sitting in a coffee shop.

Closer to me, a middle-aged salesman and saleswoman huddled at a smaller café-style table and swapped office gossip. The man did most of the talking — an overweight man with an overbearing voice: “The guy is so clueless that he has no idea Everett actually hates him. And he’ll never figure that out.” “I tried to give him some advice. Live on your draw and save your commissions. Don’t count on commissions for paying bills. But he doesn’t listen.” “I told him, ‘Look, it’s not my fault it’s like this. I mean, times are hard. We’re all cutting back.’”

The woman listened and nodded agreement with most of this. But she looked tired and clearly wished she were somewhere else.

As they reached the bottoms of their lattes, the salespeople plotted their afternoon. They were sharing one rental car but had separate appointments before their flight home that evening. He sketched out a plan for dropping her off at her next call while he made his and then switching driving chores, so that she’d drop him off at his last call while she made hers.

If times were better, they’d each have rented their own car.

Just behind me, two women — one older and very sharply dressed, one younger and casually dressed — talked about graphic design work. Their conversation was more about practical matters than aesthetics. The older woman opened a nice leather portfolio and showed the younger various business forms: letterhead, contracts, purchase orders and invoices.

It wasn’t clear whether the business forms were the product of the older woman’s practice or the forms that she used to deal with clients. And the younger woman’s questions were so elementary that they didn’t make matters any clearer.

This meeting seemed to be a “Can I pick your brain?” session. Perhaps the younger woman was the daughter of one of the older woman’s friends. The younger may have read somewhere that asking an established person for “advice” is the best way to get intelligence on employment.

Corporate America can’t afford to be the babysitter that it was for most of the last century. Working people understand this.

I’ve been on the older woman’s side of the table for a few of these meetings myself. I caught a glimpse of her face. She was in her late 40s or early 50s, quite attractive and carefully appointed. But her eyes looked sad. They squinted a lot — in contempt, I think — at the younger woman, whose childish questions and cadence made her sound simple-minded.

If the meeting behind me was a job interview, the younger woman wasn’t going to be hired. As the older woman folded up her portfolio, the younger asked her about any contract work that might be available. “It would be subcontract work,” the older said ruefully. “Give me a couple of business cards. I’ll keep them handy.”

Nearly finished with my emails, I took a break to use the men’s room. There were a couple of men ahead of me. While waiting, we listened to a white-haired man pitch four or five other older men and one woman on an investment scheme.

He’d handed each of his marks letters and information printed on heavy-stock paper which had a Baroque-style firm name ending in “Capital” at the top.

“ . . . our record speaks for itself, of course. But, like everyone, we are always looking for more business. And advertising on radio or television, frankly, isn’t something that interests us.”

The others nodded eagerly. This was a job interview. The white-haired man was selling them on becoming sales representatives for his firm — which was involved in some capacity with reverse mortgages. But my turn to use the bathroom came before I could hear the details.

Reverse mortgages are, essentially, the subprime loans of the coming decade. They are legal but unwise financial vehicles that are most effective at separating gullible people from their wealth. The gullible people, in this case, are seniors with real estate that they own outright or nearly outright; with a reverse mortgage, they get a monthly stipend in exchange for leaving their property to the mortgage company when they die.

If they die after just a few years of payments, the gullible old people have effectively sold their property for a fraction of its value.

Although he had the cheap sophistication of a game-show host, the white-haired man couldn’t have been very high up on the food chain of his shady industry. Multilevel marketing schemes are usually desperate to seem established, so they are rarely run out of coffee shops. But, hey, times are hard. And we’re all cutting back.

Back from the bathroom, I packed up my computer and scanned the place one last time on my way out. There were at least a dozen intense conversations going on; and another dozen or so people working intensely on computers or other devices. Did any of these people have “jobs” in the sense that the Department of Labor defines them?

Statist hacks like Robert Reich, Paul Krugman, and Barack Obama think of “jobs” as compliant proles lining up at the gates of General Motors for hourly-wage work, performing clearly defined tasks in clearly defined places. But this thinking is antiquated and wrong. Corporate America can’t afford to be the babysitter that it was for most of the last century. Working people understand this.

For most people, a “job” means — and will mean, for the foreseeable future — hustling for freelance work. Contracts and subcontracts. Commission sales. Multilevel marketing. There’s money in it, but that money doesn’t come easily. And, sometimes, it doesn’t come reliably.

New Light on a Great Libertarian

“I’ve just ordered your book on Garet Garrett, brother to my grandmother, Gertrude Garrett Graham, and my great uncle. There are a few anecdotes from his later years of retirement in Tuckahoe NJ and his relationships with his family and I’d enjoy talking with you, once I’ve read your book.”

It was signed, “Trudy Beth Bond.”

Garet Garrett (1878–1954), author of The Driver, The People’s Pottage, and The American Story, was one of the great libertarian journalists of the 20th century. I wrote a book about him, Unsanctioned Voice, published in 2008.

The book is more about Garrett as a writer than as a person, and necessarily so. Most of his papers were lost. He had no children. His third wife, Dorothy Williams Garrett, had a son from a previous marriage; he too had died, but I found his daughter. She had a few photos of Garrett but was too young to have known him.

Digging for details more than half a century after he died, I found one person who had known him: Richard Cornuelle, who had worked with Garrett in the early 1950s (and who died on April 26, 2011). Cornuelle had gone on to become an official at the National Association of Manufacturers — and had written a book, Reclaiming the American Dream, championing the nonprofit sector as an alternative to the welfare state. In 2007 I flew to New York City to meet him at his Greenwich Village townhouse and hear of his time with Garrett. Cornuelle, then 80, was delighted that someone wanted to know about the man who had been his mentor.

My book was more about Garrett as a writer than as a person and necessarily so. Most of his papers were lost.

Cornuelle gave me some personal details about Garrett, some of them incomplete. He told me one of Garrett’s sisters lived in a farmhouse on Garrett’s property at Tuckahoe. But which sister? Why was she there? What role had Garrett played in her family? He didn’t know. My book wasn’t mostly about things like that, but more personal details would have improved it.

After the book came out, I wondered whether I would hear from some lost relative. Three years went by. Then came the email from his grandniece, Trudy, who lives in the very town, Port Townsend, WA, in which Liberty was founded and published for more than 20 years — a town about two hours’ drive and a ferry ride from my house.

Trudy was born in 1941. Through her, I got to talk to her brother Marshall, born 1943, and her sister Connie, born 1945. They knew Garrett as kids, aged 9 to 13. They were also part of a family that would have some stories about him. They might fill in some of the blanks in my account.

I had known that Garrett and his third wife, Dorothy, lived somewhere along the Tuckahoe River in New Jersey. From Connie, I received a satellite photo of the property.

At the beginning of the book I had listed Garrett’s siblings from Census records: Gertrude, Mary, Sarah, Thomas, and “what looks like ‘Clarra.’” But I knew nothing about them. Now I had their full names — it was “Clara” — and the dates of their births. I noticed that all but two of Garrett’s siblings were born in different towns, most of them not far from Garrett’s birthplace at Pana, in central Illinois. Garrett’s father Silas was a tinker — an itinerant tinsmith — and his family moved around.

From my new acquaintances, I heard one story of the children’s youth. Garrett’s father, Silas, was a Protestant. What denomination was never said, though his funeral was in a Methodist church. According to Marshall, Garrett’s mother, Alice, was a devout Catholic.

“They had made a pact when they married that their children would be allowed to make up their own minds,” Marshall said. “But the priest prevailed on Alice, and she did certain things behind her husband’s back, such as putting them in catechism class.” The effort backfired, and Clara, Gertrude, and Marie embraced the new religion of the time, Christian Science.

Then came an email from the very town, Port Townsend, WA, in which Liberty was founded and published for more than 20 years.

“The Christian Science thing was a big schism in the family,” said Trudy, who was raised in that faith but abandoned it. She heard the story through her grandmother, Gertrude, who was zealous enough to become a Christian Science practitioner. Garrett was not a follower of Christian Science, and Trudy says his religious sisters disapproved of his drinking, smoking, and being married three times. But Gertrude and Marie also respected his achievement. They called him “Brother,” as if it were his name.

I learned little about Clara, who had stayed in Illinois. Sarah, whom they called Sadie, married a veterinarian and lived in Missouri and Iowa.

“I met my Aunt Sadie when I was 10,” Trudy said. “She seemed to have escaped the whole religion thing.”

Garrett’s younger brother Thomas had been an artist, and Connie has a painting of his (“landscape on the front, female nude on the back”). Thomas died young, in 1917, of some disease. He is buried in the same cemetery as Garet and Dorothy Garrett, in Tuckahoe, which suggests that his elder brother had taken care of him — and perhaps had moved his remains, because Garrett didn’t move from Egg Harbor, NJ, to Tuckahoe until the mid-1920s.

The sister living on Garrett’s property was Marie. Trudy recalled the story that Marie had been living in Chicago. She was a man’s secretary for many years, and had become his mistress. “Supposedly his wife was sickly and when she died, he said, he would marry Marie, but he did not.”

His religious sisters disapproved of his drinking, smoking, and being married three times. But they also respected his achievement.

Trudy’s sister Connie writes: “As mother told me, Marie fell in love with a successful lawyer in Chicago. He was married and his wife was in a mental institution, and he told Marie there was some law that prevented a spouse from divorcing a spouse who is in an institution. So Marie consented to stay with him (my impression was that he paid for her apartment), something she otherwise would never have done.” Connie visited her Aunt Marie there, and recalls “Uncle Walter” stopping by with candy for the kids.

When the man’s wife died, he wouldn't marry Marie. Connie continues: “Humiliated, Marie went east to Garet’s, where she stayed on, and where Joe French, Garet’s tenant farmer, fell for her and asked her to marry him.”

French was not an educated or worldly man. His claim to fame was playing baseball in the minor leagues in San Francisco, Topeka, Sioux City, Dubuque, Peoria, and Beaumont in the years before World War I. Trudy recalls that his fingers had been broken from playing catcher.

As Connie recalls the story, Marie went to Garet and asked his advice, telling him, “Imagine. That man is no more than a tenant farmer and he wants to marry me!” To which Garet replied, “At least he wants to make an honest woman out of you.” Trudy says that Marie married French “under duress from Garrett, who didn’t want to support Marie for the rest of his life.”

In none of Garrett’s writings does he talk about taking care of his family, or the stress of an obligation to do so. Garrett left his family in Iowa in his mid-teens. Several of the characters in his fiction are without family, and none of his fiction or his vast amount of journalism focuses on family issues or champions family loyalty. He addresses other things. Yet he takes care of his sister when the gamble of her life fails. He nudges her into a marriage with a man who loves her, and he provides them both with a house.

It is what honorable people do, if they can, when there is no welfare state.

Trudy, Marshall, and Connie recall Garrett as the success, the urban sophisticate, of the family — and, of course, much older than they. As preteens and early teens, they moved with their parents from suburban Chicago to New York City in 1953. Trudy believes they were the only relatives of her generation who lived close enough to visit Garrett, and the only ones today who remember him.

In none of Garrett’s writings does he talk about taking care of his family, or the stress of an obligation to do so.

From the summer of 1953 to Garrett’s death in late 1954, they visited Tuckahoe often, staying at Marie and Joe’s, in the farmhouse on Garrett’s property. It was not really a farmhouse, though, but a three-story brick house, covered with ivy. It had a ship’s binnacle on the porch, and Connie remembers it as “the captain’s house.” The place was the subject of a feature story in the Atlantic City Press. (Trudy has the clipping.) Part of the Stille Homestead, the house had been built in 1795 “by slaves,” the newspaper said, using “bricks brought from England.” It had thick walls and five fireplaces, two of them in the basement, “where in the cold winter time the first families cooked, ate and kept warm.” Upstairs it had a “borning room” where mothers gave birth, and outside was a small graveyard.

One of the side buildings had been made into a glassblowing studio for Garrett’s wife, Dorothy. Connie has a small bottle that Dorothy created, with a dime in it.

Trudy turned 13 in 1954. Of the captain’s house she remembers “a wonderful attic where I spent many, many hours reading old Saturday Evening Posts” that Garrett had kept in bundles. His articles were mostly above her head, but she saw his name on them in the Post. “It wasn’t really until then that I knew what Uncle Garrett did.”

Garet and his wife lived several hundred yards from the old house in a new house he had built. “Garet and Dorothy's house was enchanting to me,” Connie recalls. “It had the biggest fireplace I had ever seen and I remember very well the bust of Nefertiti that you mention in your book, as well as the high bookshelves that flanked the fireplace.”

Trudy remembers Garrett’s room with the two-story ceiling and the big fireplace. “The room had a cozy feel to it, almost like you would feel in a log home. Garrett was the boss of that room. Really he was the boss of the whole place. Dorothy was pretty much in her cups all the time.”

I had mentioned Dorothy’s alcoholism in the book, and all three of Gertrude’s descendants remembered it. They remembered Garrett drinking, too, but not being drunk.

They also remembered the outbuilding Garrett called his “cave,” where he wrote. “He built that,” Marshall recalled. “He was proud of that. He had a little storage area underground where he kept his ink cool. He had these big bottles of Scripps ink. Four of them. He’d refill the well on his desk.”

I had quoted Richard Cornuelle in the book about how Garrett would research a topic, keeping everything in his head, “muttering and fuming quietly. Then, suddenly, he would seize an old-fashioned pen holder, jam a new point into it, and scrawl on white foolscap, often for hours, panting and sweating, jabbing the pen in the ink now and then, until he had it all down.”

Scripps ink, from big bottles.

I had said in the book that Garrett sometimes hid in his “cave” from kids, and Connie, who turned 9 in Garrett’s last year, recalls:

“Although we did often go over to Garet and Dorothy's house, and I did catch my first fish off their dock, and Garet helped me unhook it, on the whole not too much happened of a family gathering nature when we were over there. We kids generally weren't allowed in his little building where he did his writing, but sometimes we were sent out there to fetch him or take him a snack, and I would look around in awe at all the books and papers around him.”

Trudy and Marshall remember Garrett showing them his artesian well, and how he had piped the water into his house. Garrett did his own plumbing.

In Unsanctioned Voice, I quote Cornuelle saying that Garrett had buckets of silver dollars under his porch as insurance against a feared inflation. (If he had lived another 20 years, they would have paid off.) Trudy and Marshall remember the stories of buried coins, either silver or gold. Marshall recalls that silver coins were found inside the ship’s binnacle, under a layer of sand.

When Garrett died, his property went to Dorothy. She died six months or so later, willing the property to her son, James. That was a setback for Marie and Joe, because they had to move out of the captain’s house into town.

Perhaps Garrett had an influence on the family. Connie was the closest to Garrett in her career: she became a writer and editor at Smithsonian magazine. Marshall is the closest to Garrett in his philosophy.

“I don’t think anybody knew about libertarianism then, if they called it that,” he says. “Most of us are still quite conservative, maybe not to the extent of libertarianism, but pretty near.”

“When I read about the attitudes of Garet Garrett, I see my brother,” Trudy says.

Trudy was an attorney, teaching classes in how non-attorneys could file papers and defend their rights. She recalls once going out with a man who admired Ayn Rand, and telling him, “I am the grandniece of Garet Garrett.”

About this Author

Bruce Ramsey is a Seattle writer and author of Unsanctioned Voice: Garet Garrett, Journalist of the Old Right.

Cash Poor

In spite of Liberty contributing editor Mark Skousen’s observation that “income isn’t distributed, it’s earned,” much handwringing has followed recent reports that income distribution in the US is becoming increasingly unequal.

To a libertarian, how much one earns or owns — so long as it’s acquired honestly and honorably — is nobody’s business. But at the other extreme are the still-influential Rawlsian redistributionists. They believe that since each person’s station in life is due to little more than a roll of the existential dice, income distribution ought to cluster tightly. If it doesn’t, it’s an indication of an unjust society, and something must be done about it. That’s the ideology.

Nevertheless, there’s a pea under that ideological mattress: the fear of revolution if the rich get richer, the poor get poorer, and the middle class disappears. Though probably a reductio ad absurdum, it is nonetheless a theoretical possibility under a laissez-faire economic system. Friedrich Hayek understood this, which is why — to the consternation of many libertarians — he advocated a minimal welfare state as a worst-case safety net.

Whatever the chances might be that a just economic system — laissez-faire capitalism — could lead to the penury of a majority, income distribution is a concern to the inner Hobbesian in all of us. So when income inequality rears its ugly head, it bears critical investigation.

In 1912 the Italian statistician and sociologist Corrado Gini devised the eponymous Gini coefficient, a statistical formula based on the Lorenz curve, to measure variability and mutability among data in any discipline. The index is now widely accepted in economics as a measure of income inequality. Values range from 0, total equality, to 1, maximal inequality.

As of 2009, Sweden scored .23, the lowest Gini coefficient, indicating the highest income equality, while Namibia rated a .74, indicating the largest income inequality. Most first-world nations have Gini coefficients from the high .20s to the mid .30s, with an EU average of .31. Back in 1980, the US rated a .40; today we rate around .47, a figure comparable to Russia or Turkey and a trend that alarms many.
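For readers who want to see the index in action, the coefficient just described can be sketched in a few lines of code. This is an illustrative computation only — the function and the sample incomes below are invented for demonstration, not drawn from the article:

```python
# Illustrative sketch of the Gini coefficient described above:
# 0 means total equality, 1 means maximal inequality.
# Sample incomes are invented for demonstration.

def gini(incomes):
    """Gini coefficient via the sorted-data form of the
    mean-absolute-difference definition."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # For 0-indexed sorted data: G = sum((2i - n + 1) * x_i) / (n * sum(x))
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

print(gini([50, 50, 50, 50]))  # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))    # one earner takes all -> 0.75
```

Note that with only four earners, one person holding all the income yields (n − 1)/n = 0.75; the figure approaches 1 only as the population grows.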

The fear of revolution, though probably a reductio ad absurdum, is nonetheless a theoretical possibility under a laissez-faire economic system.

Paul Krugman — who could be considered the Linus Pauling of economics, for his Nobel-recognized genius followed by a descent into crankhood through the advocacy of snake-oil remedies: vitamin C for Pauling, dirigisme for Krugman — refers to the period after 1979 as the “Great Divergence,” because of the rapid increase in inequality that occurred. According to a 2011 Congressional Budget Office report, “Real income (adjusted for inflation) in the US grew by 62% for all households between 1979 and 2007. However, after-tax income of households in the top 1% of earners grew by 275%, while income growth for the bottom fifth of earners was 18%.”

If, instead of income distribution, we look at wealth, the disparity is even greater. While the bottom 60% of the US population lost about 6% of its wealth, the net worth of the top 5% increased by 40% between 1983 and 2009.

Why the increasing inequality in a political and economic system deemed among the freest by classical economists?

* * *

Before addressing that question we must deal with a concept clamoring for immediate attention: fuzzy numbers.

Nearly all the figures quoted in this article come from Wikipedia compilations replete with references, from various other internet sites, The Economist, Scientific American, Money, Chris Martenson’s The Crash Course, and Puru Saxena’s Money Matters. Although these sources are ostensibly reliable, please take them with a pinch of salt. Some figures purporting to measure exactly the same thing vary wildly.

For instance, that 275% growth in after-tax household income for the top 1% of earners, derived from 2011 Congressional Budget Office figures for 1979 to 2007, becomes a 176% increase — from 1979 to 2005, nearly the same time range — in a 2006 New York Times article.

Exactly what is being measured, how it is defined, over what period of time it is measured (a variable often manipulated for calculating equity returns down to the day), and what statistical tools are employed can have considerable impact on the figures. Just the difference between “household” and “individual” income measurements can affect income inequality figures substantially.

Dodgy numbers can also lead to convertibility problems and out-of-this-world results — literally out of this world. Composite numbers (such as indices, among others), which are built up from subsidiary numbers, can become statistical black holes, swallowing endless data but illuminating little. As The Economist reports, “In theory, countries’ current-account balances should all sum to zero because one country’s export is another’s import. However, if you add up all countries’ current-account transactions, the world exported $331 billion more than it imported in 2010, according to the IMF. Are aliens buying Louis Vuitton handbags?”

And of course, ideology plays a big part. As the old saw goes, “He who frames the question determines the shape of the answer.”

On top of this are myriad unexamined assumptions. Statistician Joseph Locascio has identified what he calls “publication bias,” which means that academic journals “often give greater weight toward publishing articles that report statistically significant findings over those that don’t.” With this kind of review process, if out of 20 studies one shows a slight significance (perhaps because of chance), while the other 19 show none, that one will be published and the others ignored.

Not all people are driven to make the most money they can, all the time. Many earn and live below their possibilities, and spend the rest of their time pursuing their passions.

Hoover Institution economist Thomas Sowell suggests that many discussions of income equality are based on fallacious reasoning. For example, “an absolute majority of the people who were in the bottom 20% [of income] in 1975 have also been in the top 20% at some time since then. Most Americans don’t stay put in any income bracket. At different times, they are both ‘rich’ and ‘poor’ — as these terms are recklessly thrown around in the media.”

Finally, somewhere between the last two observations, lies individual choice. Not all people are driven to make the most money they can, all the time. Many earn and live below their possibilities, earning what they consider a sufficient amount, and spending the rest of their time pursuing their passions: music, rock climbing, walking around the world preaching the gospel . . . whatever.

So, to that pinch of salt, add a squeeze of lime and, what the hell, a shot of tequila.

* * *

But back to the ostensible causes of inequality, which remain unknown to many, even to the New York Times, as proclaimed in an article on June 5, 2005. Let’s consider these causes.

1. The rich work more than the poor. As of 2005, 42% of all US households had two or more income earners. In the top quintile of households, however, nearly twice as many (76%) had dual earners. Among the lower class, the most common source of income is not occupation but government welfare (according to the leftish Winner-Take-All Politics by Hacker and Pierson, 2010).

2. The rich are more educated than the poor. In the top quintile, 62% of householders are college graduates; while many at the bottom half of earners hold at most a high school diploma. Educational and occupational achievement and the possession of scarce skills correlate with higher income.

3. People of modest means keep giving their money to the rich. George Mason University economist Walter E. Williams recently recognized that “the millions of people who watch LeBron James play are the direct cause of LeBron’s earning $43 million and are thereby responsible for — in Paul Krugman’s terms — ‘undermining the foundations of our democracy.’” The same can be said of the millions of Walmart and Microsoft shoppers who keep enriching the Walton and Gates families.

4. Government policies. Among the usual partisan suspects — such as decreased expenditure on social services and labor’s diminishing political clout, suffering from declining union membership — are Republican tax policies, specifically the “low” progressivity of US tax rates. Reversing these factors through government diktat is a crude cure that doesn’t address the underlying ecology (Thomas Sowell’s term) of the marketplace. The enforcement of legislation such as higher redistributive taxes and the imposition of closed shops would require force, a road down which classical liberals would prefer not to travel. Might there be another cause of the increasing income inequality that hasn’t yet been identified? One whose correction does not require coercion?

I believe there is, and that culprit is inflation — in spite of the fact that nearly all of the above statistics are adjusted for inflation. And, as Milton Friedman recognized, since “inflation is always and everywhere a monetary phenomenon,” it is a direct result of government policy. Inflation is a word often misunderstood. It is the decreased purchasing power of currency, caused by the expansion of the money supply, and not to be confused with price increases caused by scarcity.

Krugman’s “Great Divergence” begins soon after America’s divorce from the gold standard and the subsequent collapse of the Bretton Woods currency exchange system in the early 1970s. After that, the Federal Reserve instituted a Keynesian monetary expansionist policy. In 1972, the price of a new house averaged $27,600; in 2010 (despite 2008’s deflation of the housing bubble), the average price was $272,900. A 1972 Coke cost a dime; today it’s a buck. In 1973, gold hit a high of $126 per ounce; in 2009 it topped $1,212. In 1972 the Dow Jones Industrial Average hit 1,000; by 2010 it had reached 11,000 — an elevenfold increase in under four decades. A $10 Hamilton from 1972 is today’s $100 Franklin. How does this stark change affect the disparity between rich and poor?
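Those comparisons all imply a remarkably similar average annual rate, which can be checked with the standard compound-growth formula. The figures are the ones cited above; the helper function and its name are mine, a sketch for verifying the arithmetic rather than any official calculation:

```python
# Implied average annual growth rate for the price changes cited above,
# via the compound-growth formula: rate = (end / start) ** (1 / years) - 1.
# Figures come from the text; the helper name is illustrative.

def implied_annual_rate(start, end, years):
    return (end / start) ** (1 / years) - 1

items = [
    ("new house", 27_600, 272_900, 2010 - 1972),
    ("Coke",        0.10,    1.00, 2010 - 1972),
    ("gold (oz)",    126,    1212, 2009 - 1973),
    ("Dow",        1_000,  11_000, 2010 - 1972),
]
for name, start, end, years in items:
    print(f"{name}: {implied_annual_rate(start, end, years):.1%} per year")
```

Every item works out to roughly 6–6.5% a year, compounded, which is the point of the passage: a steady erosion of the dollar rather than a few isolated price spikes.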

* * *

But first, more fuzzy numbers.

Without an objective anchor such as gold, the value of money is subject to fluctuation according to the active “monetarist” policy set by the central bank. That policy is based on many variables — prominently including the consumer price index (CPI), with a nod to gross domestic product (GDP), processed through complex formulae and topped with a generous dollop of intuition. The objective is a stable currency — a very difficult goal with such a capricious policy, and one whose results always lag policy implementation.

For a variety of reasons, the central bank considers deflation a greater evil than inflation. So, wishing to avoid deflation at any cost, the Federal Reserve sets an inflation goal of 1–2%. It often misses this goal. The October 2011 rate was 3.53%, according to the Bureau of Labor Statistics (BLS), as measured by the CPI.

How reliable is this number? Not very — as it is neither accurate nor even precise. Like an aging diva afflicted with weight gain, wrinkles, fatigue, loss of figure and overexposure, the CPI has been massaged, injected with Botox, subjected to fad crash diets, over-cosmetized, face-lifted, repackaged, and rebranded. Richard Nixon, for example, bequeathed us the so-called “core inflation” measure, which strips out food and fuel — a bit like weighing yourself without your belly. In 1996 Bill Clinton implemented three oddities in the measure of inflation: substitution, weighting, and hedonics.

Krugman’s “Great Divergence” begins soon after America’s divorce from the gold standard and the Federal Reserve’s institution of a Keynesian monetary expansionist policy.

With substitution, it is now assumed (for example) that if the price of salmon goes up too much, people will switch to something cheaper, such as hot dogs. So as the price of an individual item within a representative basket of thirty goods rises, that item is removed and substituted with something cheaper, chosen by a trained bureaucrat. According to the BLS, food costs rose 4.1% from 2007 to 2008. But according to the Farm Bureau, which tracks exactly the same shopping basket of 30 goods from one year to the next without substitution, food prices rose 11.3% for the same year.

Weighting is an even sharper tool for cutting the measure of inflation. Anything that rises too quickly in price is undercounted in the CPI, under the assumption that people will use less of those things. For example, although healthcare is about 17% of the economy, it is weighted as only 6% of the CPI basket.

But the most creative way to fiddle with inflation is hedonics. This adjustment is supposed to reflect quality improvements. Here’s how it works, based on a presentation by a commodity specialist at the BLS and explained by Chris Martenson:

The commodity specialist at the BLS noted that a 27-inch television selling for $329.99 in 2004 was selling for the same price in 2007, but was by then equipped with a better screen. After taking this subjective improvement into account, he adjusted the price of the TV downward by $135, concluding that the screen improvement was the same as if the price of the TV had fallen by 29%. The price reflected in the CPI was not the actual retail store cost of $329.99, which is what it would cost you to buy, but $195. Bingo! At the BLS, TVs cost less and inflation is heading down. But at the store, they’re still selling for $329.99.

Hedonics rests on the improbable assumption that new features are always beneficial and are synonymous with falling prices (never mind that most old rotary phones still work, while modern cell phones seldom seem to last three years). Hedonics is now used to adjust as much as 46% of the total CPI.

What would the inflation rate have been for, say, 2008, before all the fuzzy statistical manipulation gussied it up? John Williams of shadowstats.com, using early-1980s formulas, computed the figure at 13%; the BLS reported a 5% inflation rate for the same year — a stunning 8-percentage-point difference.

But that’s not all. During Alan Greenspan’s tenure at the Federal Reserve — particularly while the real estate bubble was growing gangbusters — some economists bemoaned that, with asset prices such as real estate and equities excluded from the CPI, the reported inflation rate was misleading, thereby skewing monetary policy.

While inflation has been massaged down, GDP has been steroided up, by similar sleight-of-hand manipulations — further inflating the money supply.

So, at this point, brace yourself with another shot — this time of hedonic Cuervo Añejo.

* * *

Inflation affects the poor and the rich in completely different ways, though both lose wealth. No one benefits — except for government and banks, which, having access to newly created money before it hits the streets and raises prices, can buy goods and services at the old, cheaper rate. By the time the surplus money has permeated the economy and reached the masses, prices have usually risen significantly.

Broadly speaking, the poor — for the purposes of this essay, people in the lower 40% of income distribution — have fewer assets, lack financial sophistication, and tend to hold, at most, a high school diploma. They deal in cash and its derivatives and equivalents — CDs, bonds, and interest-bearing accounts. In an inflationary regime, these lose value. In an underreported inflationary regime, the effect is not only greater but, because wages only grudgingly and loosely track the “official” inflation rate of the CPI (if at all), “much of the developed world’s workforce has been squeezed on two sides, by stagnant wages and rising costs,” as The Economist opined in its November 19, 2011 issue.

There is one factor leading to wealth disparity that Rawlsians and Marxists most seem to ignore, but classical economists believe is fundamental — productive innovation.

As if this situation were not bad enough, many of the poor were lured into buying homes by dodgy loans and government social engineering policies (such as the Community Reinvestment Act, Fannie Mae and Freddie Mac practices, and lower-than-historic interest rates) in the middle of a bubble. When the bubble burst, these folks lost whatever equity they had managed to cobble together, and ended up with ruined credit as well. And they couldn’t even rely on their savings (what little they might have saved between stagnant wages and rising costs), as the returns on those savings had dwindled along with the interest rates that had once made saving attractive.

So, without a doubt, the poor are getting poorer. What about the rich?

While cash loses its value, real goods such as commodities, equities, and real estate track the changing value of money and, long term — with the dips and highs of the business cycle evened out — generally keep pace. The rich, with more education, more financial sophistication, and more discretionary income, invest. The poor, on the other hand, save (if they can afford to). All other things being equal, inflation makes investments tread water, but savings lose. Without inflation, income inequality might not have become so pronounced over the last 40 years.

Though the above analysis might go a long way toward explaining the increasing income inequality in the United States, it still isn’t the full picture.

There will always be income inequality, if for no other reason than the fact that people’s work habits, education, and ambition vary tremendously. But the one factor leading to wealth disparity that Rawlsians and Marxists most seem to ignore, but classical economists believe is fundamental — productive innovation — also plays a big part.

A study done by University of Texas economists James K. Galbraith and Travis Hale found that

During the technology boom of the late 1990s, most of the gains enjoyed by the top 1% came from a small number of counties, rather than a national trend. Almost all of the richest 1%’s gains occurred in the economic hotbeds of Silicon Valley, and also New York City. If the top four counties in those regions are removed, there is almost no trend towards income inequality during the years studied (1994–2000). On this basis, the researchers ascribe the growth in income inequality in the late 1990s to the growth of information technology.

Earned income.

Definitely.

About this Author

Robert H. Miller is a builder, outdoor adventure guide, and author of Kayaking the Inside Passage: A Paddler's Guide from Olympia, Washington to Muir Glacier, Alaska.

Making Art from the History of Art

Hollywood has a way of both following trends and creating them. We can see these trends develop whenever we look back at the course of a Hollywood year. Filmmaking is now nearly a hundred years old, and while its beginning is not specific enough to generate any "100th anniversary" hoopla, several films this year looked back at the groundbreaking artistry of filmmaking that we often take for granted. These high-quality movies have worked on two levels — as entertaining stories that stand on their own, and as tributes to filmmaking itself.

Let's look at some of these stylish 2011 films that are still making news at the awards shows in 2012.

John le Carré's novels about the Cold War era are among the finest spy thrillers. His recurring espionage agent, George Smiley, is neither a caricatured James Bond nor a rough-and-tumble Jason Bourne. He demonstrates the true complexity and moral conflict of a man who protects his country and her way of life by infiltrating another country and often breaking its laws. He is a man who lives a life of quiet isolation. Gary Oldman plays him brilliantly in this version.

When I saw that Tinker Tailor was being remade, my first reaction was "Why now?" The Cold War has been over for a long time. Countries that once made up the Soviet bloc are no longer our enemies, and the political and economic philosophies that separated us then don't inform the conflict we now experience in the Middle East. Agent Smiley "came in from the Cold" a long time ago, and for good reason. I wondered whether this story about a mole in the upper leadership of MI6 would be updated or modified to offer a fresh look at current moral dilemmas.

The answer to "Why now?" surprised me. Tinker Tailor isn't just a remake of a spy thriller. It is a remake of a ’70s film, and another offering in this season's retro moviemaking trend. More than a movie about the ’70s, it is a movie made like a ’70s film. Filmed in Super 16mm, which was used for filming television shows and some movies during that time, Tinker Tailor has the grainy texture of a Bullitt or a French Connection, two films that represent the era. The direction is slow, and the pacing even slower — as in those films, which we once considered so tense and exciting.

Everything about this film makes it feel like a reissue rather than a remake. Its old-school communication equipment, Wang word processors, shaggy hairstyles, and polyester clothing feel natural and unobtrusive rather than recreations designed for retro effect. It was reported that Oldman searched diligently through several vintage shops to find just the right eyeglasses for Smiley to wear. Even the outdoor scenes of London have the grimy, dusty look of the ’70s, before London was scrubbed clean and white in the ’80s and kept that way through better emissions controls.

Tinker, Tailor, Soldier, Spy isn't as thrilling as the Bourne movies or as campy as the Bond films. But it is an impressive tribute to the books and films of the ’60s and ’70s, with an impressive cast of A-list actors as well.

Here is another 2011 film that celebrates the art of filmmaking. Anyone interested in the behind-the-scenes aspect of the movies will enjoy this one about the making of The Prince and the Showgirl (1957), as seen through the eyes of Colin Clark (Eddie Redmayne), a young aspiring filmmaker who worked on the production despite the disapproval of his aristocratic family.

In 1956 Marilyn Monroe (Michelle Williams) was the biggest star in Hollywood, if not the world. Laurence Olivier (Kenneth Branagh) was the greatest Shakespearean stage actor. They came together that summer to make The Prince and the Showgirl, directed by and starring Olivier. Through sheer will and determination (and the good fortune of having met the Oliviers at a society party), Clark secured a job on the film as third assistant director — little more than a go-fer, really. Nevertheless, Clark caught Monroe's eye and became her boy-toy for a week, in every sense of the phrase. And Clark kept a journal.

It was not an easy shoot. Monroe was constantly late to the set, constantly muffing her lines, and constantly close to tears. She brought her own acting coach with her, Paula Strasberg (Zoe Wanamaker) of the method school of acting, and this created conflict with Olivier as the actual director of the film. As Clark tells Marilyn when he tries to comfort her after one of Olivier’s biting criticisms, "It's agony because he's a great actor who wants to be a film star, and you're a film star who wants to be a great actress. This film won't help either of you."

Kenneth Branagh, who plays Olivier in this film, has similar aspirations for mixing media. Perhaps the greatest Shakespearean actor today, or at least the best known, Branagh has made it his goal to move Shakespeare from the stage to film, where the bard's plays are more accessible to the masses. He has succeeded by bringing seven of them to the screen. His Iago in Othello (1995), with Laurence Fishburne in the title role, is a masterpiece. Nevertheless, I had my doubts when I saw Branagh enter this film as Olivier. He just didn't look the part. But there comes a moment, as he is applying his makeup for the coming scene as the Prince, when his entire countenance changes and Olivier takes over the body they are sharing. At that moment Branagh simply disappears into the role. Remarkable.

Equally good is Dame Judi Dench as the gracious and gentle Dame Sybil Thorndike, grand dame of British stage and film in the first half of the 20th century, the actress who played the Queen Dowager in The Prince and the Showgirl. Where Olivier is critical, Thorndike is encouraging. When he reacts with exasperation to Marilyn's repeated flubs, Dame Sybil kindly praises her. She greets all the people on the crew by name and expresses genuine interest in them. Dench, the new grand dame of British film and stage, met Thorndike when Dench herself was new to acting. She said, "She came round to see us after [our presentation of Romeo and Juliet] and was so charming. We were young actors and she was lovely to us and strongly encouraging and gentle. I think they got very, very close to how Dame Sybil was in the script."

The best art and poetry can evoke an entire life in a single moment. This is the case with My Week with Marilyn. Through her affair with Clark we learn about the memories of childhood abandonment that led to Monroe’s lifelong insecurity and vulnerability. We see her fears about not measuring up as an actress, her dependence on pills, and her anxiety about being alone. We see how frustrating it was to work with her, and how hard it was for her to live up to being the idolized Marilyn. And we see the magic she created on film when she felt good about herself. It’s one of the best bio-flicks I’ve seen in a long time.

Clark predicted that making The Prince and the Showgirl would not help either of the principals’ careers. But he was wrong. Olivier's next project was The Entertainer, his memorable portrayal of Archie Rice, an aging vaudeville performer. He received numerous accolades for the stage production and was nominated for an Oscar when the play was adapted for film in 1960. He said of that role, "I am Archie Rice. I am not Hamlet."

And Marilyn Monroe? Her next film was Some Like It Hot.

Hugo (directed by Martin Scorsese; Paramount, 126 minutes)

This film is about a young boy who lives inside a Paris train station, fixing the clocks. It appears at its outset to be a charming fantasy. Populated by cartoonish characters and centered on an impossible premise, it simply can't be true. But underneath the magic tricks of fiction is the true story of Georges Méliès, one of the early pioneers of filmmaking. More than a hundred years ago, Méliès developed stop-action animation techniques to create special effects. He hand-painted individual frames to add color, and experimented with multiple exposures and time-lapse photography. You have probably seen snippets of his famous A Trip to the Moon, in which the man-in-the-moon is shot in the eye by a landing rocket ship. You probably haven't seen many of his other films, because the French government confiscated most of his works during the Great War and melted the celluloid down to make boot heels for the soldiers. At the time, filmmaking was considered a time-wasting entertainment. No one realized the great historical and artistic value of Méliès's work. And as far as Méliès himself knew, everything was gone.

Later, friends of Méliès set him up with a toy shop in the Montparnasse train station so that he could earn a living. Later still, a few copies of his works were recovered by journalists interested in his story. Eventually he was awarded the Legion of Honor for his work. All of this, as well as many of his groundbreaking film techniques, appears in Scorsese's marvelous film. The movie may purport to be about a fictional little boy who fixes clocks in the Paris train station, but it is no “kids’ flick.” It is one of the most satisfying films of the season.

Perhaps the most significant film in this category is The Artist, which won the Golden Globe for best picture this week. It was reviewed for Liberty by Gary Jason, but it is worth mentioning again from the perspective of its tribute to the art of filmmaking itself.

The technology necessary to record sound was available to filmmakers from the very beginning; after all, Edison invented the phonograph before he invented the motion-picture camera. What these early filmmakers lacked was the ability to synchronize sound with action. So movies remained silent, substituting music to complement the action and enhance emotion on the screen. In New York, full orchestras provided that music in ornate theaters for audiences of more than 5,000 people. Small-town theaters employed organists to play the soundtrack. Actors used body language, facial expressions, and outright pantomime to communicate conflict and exposition. Obviously, complex story lines heavy with dialogue were close to impossible. Emotion and physical comedy dominated.

The Artist is a silent movie whose story is set in 1927–32, when the stock market wasn't the only thing that crashed. Silent films also came tumbling down as the problem with synchronization was resolved and talkies took over. Like the marvelous Gene Kelly-Debbie Reynolds-Donald O'Connor musical Singin' in the Rain (1952), set in the same era, The Artist follows the careers of a handsome silent film star and a bubbly young ingénue whom he has discovered — in this case George Valentin (Jean Dujardin) and the aptly named Peppy Miller (Berenice Bejo). George is the quintessential ’30s film star with his pencil-thin mustache and dazzling smile. But he refuses to make the transition to talkies.

Without the dialogue and complex storyline that characterize modern filmmaking, director Michel Hazanavicius invites the audience to focus on the rich artistry of early filmmaking — the lighting, the use of shadows and reflections, the camera angles, the elegant costumes, and the stylized sets, among other features. Silent films are often parodied for their actors' broad pantomime and "mugging" for the camera, but this criticism is deftly contradicted by the emotional range portrayed by these actors' facial expressions and body language. Yes, there was some serious overacting in early films, but The Artist reminds us that there was astounding subtlety and depth as well.

The Artist really is a silent movie; with two very short but very important exceptions, the only sound you will hear is music. The soundtrack is splendid, and even includes a tribute to Alfred Hitchcock — a long section of Bernard Herrmann's soundtrack from Vertigo at the emotional climax of the film. So be prepared, but don't let this fact keep you away from the film. It is, as its title suggests, a work of art.

Super 8 (directed by J.J. Abrams; Paramount, 112 minutes)

Super 8 rounds out the list of 2011 tributes to filmmaking with a summer blockbuster paean to both filmmaking and filmmakers. As I wrote in my June 14 review: "Super 8 is the best Steven Spielberg movie to come along in years. And it isn't even a Spielberg film."

Written and directed by J.J. Abrams, it is the most Spielbergian film to come along in many years, a veritable homage to the master of blockbuster films inhabited by preadolescent protagonists. Among the Spielberg effects that Abrams incorporates in this science-fiction coming-of-age thriller are the trademark bicycles spinning into getaway mode, the classic suburban settings, the snappy potty-mouthed dialogue among kids, and the Orwellian military bad guys, reminiscent of E.T. Best of all, Abrams employs the particular kind of coming-of-age storyline for which Spielberg is known. Yes, there's a monster out there, but the real monster is at home, in the form of an unnamed tension between parent and child that has to be resolved.

Super 8 is an homage in a different way as well. Set aside the aliens attacking the city (admittedly hard to do in real life), and what remains is a classic “let’s put on a show” story that would have made Mickey Rooney and Judy Garland feel right at home. In movies like theirs, groups of kids were always transforming a barn into a stage in order to raise money for some worthy cause. The format gave the filmmakers an excuse to present rousing music-and-dance numbers that had nothing to do with the plot.

In Super 8, Charles (Riley Griffiths) is trying to make a film for a teen film festival. In the spirit of Judy and Mickey, he enlists all his friends to act as makeup artists, sound technicians, camera operators, actors, and writers. And in the spirit of this article about retro themes, they do it with a vintage Super 8 camera, the kind we used to use to film three minutes of family activities before sending the cartridge out to be developed. Many of today’s directors got their start with the family Super 8, including Spielberg, on whom Charles's character is based. Spielberg won a prize at a teen film festival when he was 13. Moreover, his first Super 8 film culminated in a train wreck created by his Lionel model trains, just as Super 8 does.

Like all the films highlighted in this article, Super 8 stands on its own as a well-made, entertaining movie. But these films become even more enjoyable when one recognizes the allusions they make and understands the background of the filmmakers they honor. As we pass the hundredth year of motion pictures, universities are legitimizing the art with degrees that focus on the history of filmmaking, not just on the technical aspects of making a film. I think this trend will continue to influence the quality of filmmaking, especially in works by independent filmmakers — and the influence will be all to the good.

Jo Ann Skousen teaches writing and literature at Mercy College and Sing Sing Correctional Facility, and is the founding director of the Anthem Libertarian Film Festival. She can be reached at jskousen@anthemfilmfestival.com.

Europe: The Problem and the Prospects

In 2004 the European Commission issued a formal warning to Greece, having found that it had falsified budget deficit data in advance of joining the Eurozone. That’s right, Greece had not just failed to meet the budget requirements for joining the new currency — lots of countries did that — but it had lied about it for the privilege of swapping drachmae for euros.

Over the next few years the Greek government's modest attempts to reform the coddled Greek labor market, particularly the obese public sector, met with massive protests, many of them violent.

In the late spring of 2009 I sat across from an old law school friend, drinking wine on the terrace of a Parisian bistro near the Bastille. It was a mild early evening with hours of sunlight left, yet as usual my friend was already in his cups. But then, this guy (call him “Jay”) was smarter drunk than I am sober.

As I drained my glass of Beaujolais Cru, the Greek debt crisis, arriving just a few years after Greece had joined the euro, was in full cry. Bailout negotiations between the EU and Greece had begun. Jay is a prominent international finance lawyer, and he represented the EU on the legal side of the negotiations. So I ordered another drink and got an inside view of the proceedings.

Jay and I debated the virtues, vices, and prospects of a bailout. It was all very speculative and academic, reminding me of so many college rap sessions in which my friends and I handily remade the world to no good (or ill) effect. The curious difference here, decades later, was that Jay really was involved in remaking the world.

As an aside, think of Professor Obama noodling over, say, the constitutionality of a federal mandate that everyone buy health insurance, the kind of seemingly harmless brain game that is played all day, every day in our universities and law schools. Most of the highly accomplished students who, like Obama, attended the top schools become convinced that they know what’s good for you. And some of them attain the power to give it to you. A student’s collectivist or paternalist nonsense is harmless. But with the stroke of a pen wielded by the nerd who used to sit next to you in Social Studies, governments convulse huge sectors of the economy. The difference is that the harmless nerd, the student Obama, for example, has become the hand of power.

At that early stage of the Greek debt crisis (which became the Italian, Irish, and Portuguese debt crisis, which became the euro crisis, which became the Europe crisis, which is becoming the second dip of the Great Recession, and which may doom the European Union to diminishment or dissolution and trash the feeble recovery in the US), it was hard for me to see the historical context of the problem. Jay went straight to it, talking about the German fear of inflation and profligacy, at odds with the German fear of the consequences of a divided Europe.

With the stroke of a pen wielded by the nerd who used to sit next to you in Social Studies, governments convulse huge sectors of the economy.

I know this is remedial history, but just in case: Germany suffered three great traumas in the 20th century, and enjoyed two great boons. The three traumas were the first war of Europe divided, WWI; the second war of Europe divided, WWII; and between them the hyperinflation of the Weimar Republic, which arguably helped bring about German National Socialism. The two boons were, first, Germany’s long, vigorous period of growth and prosperity, which persisted and accelerated in conjunction with the economic, monetary, and political integration of Europe; and second, German reunification, which came with the collapse of Soviet communism.

France, the other dominant player in the current crisis, has learned very different lessons from history. Of course it fears Germany as Germany fears itself, but it trusts government in a way that Germany does not. The French ruling class favors European unity, not just because it wants to restrain Germany but also because it thinks it can harness the Germans. This has made France the serial instigator of Euro-government activism.

At the center of France’s vision of European peace and unity is an organ grinder with an elephant instead of a monkey, but the elephant does not collect peanuts and coins; it distributes them. France is the organ grinder. Germany is the elephant. The rest of Europe stands around applauding, and collecting peanuts and coins.

Later in 2009, Greece’s credit rating was downgraded. Much bad news, “reforms,” and bailouts followed in a parade of horrors that continues now, more than two and a half years later, like shit hitting a fan in super slow motion. Greece, the EU, France, and Germany made and broke a series of promises about Greek debt. Greece was solvent. There would be no second (or third) bailout. Greece would never default. Greece would reform. Etc.

More of the same, until something really breaks, is a good prediction. Sarkozy the organ grinder will play furiously. Like an Indian mahout, he will even bring out the “ankus,” the goad. At the sharp end of the ankus are reminders of Germany’s behavior in World War II. The elephant will give out coins and peanuts in greater quantities but with greater reluctance, and with greater resentment of the crowd of client states that surrounds the center of Europe. In exchange, the crowd, and even France, will give up freedom, sovereignty, and independence. France does not like loss of sovereignty but believes it will always call the tune. The UK will congratulate itself for staying out of the euro and will refuse to sacrifice its own sovereignty to save the newish currency.

By helping us see how people in nation states see themselves, history helps us guess what they will do. But it does not tell us the results of their choices, which they themselves always fail to predict. After all, none of the EU, France, Germany, or Greece intended the Greek crisis or predicted it early enough to do anything to avoid it. How did that happen?

Descriptions of past economic crises reveal the historian’s perspective, bias, and even philosophy. The Great Depression is a good example, one over which commentators continue to fight. Was it caused or worsened by too much trade protection, too little Keynesian stimulus, a shrinking money supply, or the bursting of the credit bubble that preceded it?

Soon there will be as many descriptions of the euro crisis.

I see that crisis and America’s subprime mortgage debacle as symptoms of the same contradiction, one that has strained most of the developed economies for decades and seems to be reaching some kind of limit now. The contradiction is between the love of state largesse and the limits of governments’ ability to raise revenue. That is not a very original observation, but in diverse countries and regions, the fallout from this strain takes surprisingly diverse and original forms.

The form of the fallout seems to depend on the particular weaknesses of a country’s institutions. In Greece, the state overborrowed, overspent, cheated, lied to its creditors, and chronically failed to collect taxes due. In Germany, officials turned a blind eye, because European profligacy spurred Germany’s exports, and exporters had the ear of the German government.

More of the same, until something really breaks, is a good prediction.

In the United States we accepted war as an excuse for big deficits, and when the electorate showed resistance to faster growth of the welfare state, Congress contrived to finance it “off balance sheet” through Freddie Mac and Fannie Mae. And now the Great Recession gives us a reason to bail out financial institutions and automobile manufacturers and to print money (“monetary easing”).

In all these cases, the severity of the crises will partly depend on how and how thoroughly a state and its people fool themselves. The exact nature and severity of the crises are hard to predict. There may be cause for real fear.

I am afraid. For the first time in years, I feel financially insecure. I thought that, through work, good fortune, and saving, I had acquired financial security. Now I don’t know. Will quantitative easing cause high inflation? Will the markets where I store my wealth behave bearishly for long enough to beggar me before I die? Will the European crisis grow so deep and severe as to badly infect the world economy? Is Greece in effect a domino? I don’t know, but it’s falling. There will be no soft landing.

Michael Christian is a recovering lawyer trying to avoid working a real job.

The Bureaucrat and the Cellphone Ban

About a month ago, the National Transportation Safety Board (NTSB) chairman, Deborah A.P. Hersman, called for a “first-ever nationwide ban” on “the non-emergency use of portable electronic devices,” including hands-free cellphones, while driving. In a prepared statement introducing the proposed ban, Hersman told the story of a fatal multi-vehicle accident that had recently occurred in rural Missouri, set in motion by a pickup truck driver who’d been using a cellphone while driving:

“And it was over just like that. It happened so quickly. And, that’s what happened at Gray Summit. Two lives lost in the blink of an eye. And, it’s what happened to more than 3,000 people last year. Lives lost. In the blink of an eye. In the typing of a text. In the push of a send button.”

Quickly, critics of the Obama administration raised questions about that “3,000 lives lost” statistic. While some of these criticisms had a peevish tone, their basic point was valid. The 3,000 number was an exaggeration, based on an imprecise use of more defensible fatality numbers.

A few days later, the Washington Post published an opinion column under Hersman’s name that justified the NTSB’s proposal. (The Post’s opinion pages serve as a sort of free press-release service for columns supposedly written by high-level bureaucrats.) The column used most of the same language from Hersman’s earlier statement — but avoided specific figures:

“Washington residents remember well the 2009 Metro crash on the Red Line in which nine people were killed. The number of fatalities from distractions on U.S. roadways is the equivalent of one Metro crash every day of the year. . . . At the NTSB, our charge is to investigate accidents, learn from them and recommend changes. In Gray Summit and on highways across the United States, thousands of people were killed last year in the blink of an eye. In the typing of a text. In the push of a send button.”

There was still plenty of mendacious rhetoric at work in the column. It went on to imply that fatal accidents caused by cellphone use are a growing risk. It stated that cellphones and personal digital assistants have become “ubiquitous”; and it cited a study suggesting that 21% of drivers in the Washington, D.C., area have admitted to texting while driving.

Taken together, these emotionally fraught passages clearly implied that some 3,000 people a year are killed in motor-vehicle accidents caused by sending or receiving cellphone text messages. But that’s not true. The “3,000 lives lost” number comes from an NTSB study of “distracted driving” in general. Based on data from that study, the NTSB estimated that fewer than a third of those deaths could be connected to cellphone use. To repeat for emphasis, even that number is an estimate. (Of course, bureaucratic fiefdoms like the NTSB often issue regulatory decrees based on slight justification and without regard to practicality, effectiveness or cost.)

So, Hersman exaggerated the risk of cellphone use while driving by a factor of at least three — and repeated the exaggeration with carefully calibrated verbiage. And, most important, she used the exaggerations and imprecise rhetoric to support an invasive regulatory action.

She may have figured the mendacity was needed because the general trend has been toward greater safety on American highways. In 1990, about 44,600 people died in car crashes in the U.S.; by 2010, that number had dropped to fewer than 32,900. This drop is even more striking when you consider that the total number of licensed drivers in the U.S. rose significantly over the same period. According to the National Highway Traffic Safety Administration (NHTSA), there were 1.71 deaths per 100 million vehicle-miles driven in 1994 but only 1.09 in 2010. That’s a major improvement — though you’d never know it from Nanny Hersman.

Hersman exaggerated the risk of cellphone use while driving by a factor of at least three, and used the exaggerations to support an invasive regulatory action.

In significant ways, Hersman resembles other current and former Obama administration apparatchiks. Like Julius Genachowski, she is a career Beltway insider whose slavish devotion to big government overwhelms any notion of private-sector economy; like Elizabeth Warren, her background speaks more to bureaucratic credentialing than education in the classical liberal sense.

Hersman’s December decree urged state governments to prohibit text-messaging and other electronic device use while driving. (It calls, specifically, for the 50 states and the District of Columbia to ban “the non-emergency use of portable electronic devices.”) But her urgency was unnecessary: 35 states already have such rules in place.

If “distracted driving” is a problem, why are cellphones a more urgent issue than other sources of distraction — watching kids in the back seat, eating fast food, studying a GPS map, applying makeup, etc.? A cynic might say that a cellphone ban gives state agencies a broad excuse to harass citizens…and a new source of cash flow for government coffers. But statist hacks like Hersman are too earnest for that.

A more likely answer is that a ban on cellphone use in the privacy of one’s own car is a preemptive regulation. And preemptive regulations have two distinctive traits: they are often misused — and, particularly, overused — by state agencies; and they are often based on shaky logical foundations that sound good on first impression but don’t stand up well to rigorous inspection.

That second trait explains why bureaucrats like Hersman use emotional manipulations to promote preemptive regulations.

An important point: The feds’ own research underscored the futility of Hersman’s gesture. An NHTSA report on fatal accidents “involving” cellphones stated:

“Sixteen percent of fatal crashes in 2009 involved reports of distracted driving and ... of those people killed in distracted-driving-related crashes, 995 involved reports of a cellphone as a distraction (18% of fatalities in distraction-related crashes).”

So, Nanny Hersman proposed banning cellphones in cars to reduce a risk that causes — at most — 2.9% of traffic-related deaths.

There may have been other factors affecting her thinking. A few months before Hersman’s proposal, the U.S. Senate considered a Department of Transportation spending bill that set up a $10 million grant program aimed at helping states combat “distracted driving” — and especially texting behind the wheel. According to the bill (S. 1596):

“While there is no definitive data as to how many distracted driving deaths and injuries are caused by cellphone use and texting, 20% of the drivers involved in fatal accidents in 2009 were either using or in the presence of a cellphone at the time of the crash, and there is reason to be concerned about whether the recent rise in distracted driving fatalities is linked to the increasing use of electronic devices.”

Admitting they had “no definitive data” to support their actions, the Solons would bribe states to prohibit citizens from operating a vehicle while in the “presence of a cellphone.” Maybe Hersman wanted the NTSB to administer the grants to the states.

If “distracted driving” is a problem, why are cellphones a more urgent issue than other sources of distraction — watching kids in the back seat, eating fast food, studying a GPS map, applying makeup?

The Senate bill also required $5 million to be set aside “for the development, production, and use of broadcast and print media advertising to support enforcement of State laws to prevent distracted driving.” Maybe Hersman wanted the NTSB to produce those ads . . . and its chairman to star in them.

The Obama administration has never been shy about manipulating numbers and emotions to support its various statist schemes and bureaucratic boondoggles. Specifically:

According to the Census Bureau, more than 46 million Americans — about one in every seven — live in poverty. And that number is growing, in part, because the Obama administration has expanded the definition of the word “poverty.” The administration has worked to delink the concepts of poverty and deprivation…and redefined poverty instead as being “about inequality.” Traditional metrics of poverty have focused on absolute purchasing power — how much food or durable goods a person can buy; the Obama administration’s metrics focus instead on comparative purchasing power — how much food or durable goods a person can buy relative to other people. This is a statistical trick designed to ensure that a fixed portion of the population will always be poor.

In the spring of 2011, Obama administration officials publicized the possibility that “82% of U.S. schools” could be rated as failing, according to metrics established by the No Child Left Behind program. Education Secretary Arne Duncan repeated this statistic in numerous speeches — even though education experts called the number “unverified,” “likely exaggerated,” and “meaningless to the schools that are being rated.” Even after several education policy groups challenged Duncan’s emotional rhetoric, he and other administration officials showed no inclination to make more precise statements. Some observers suggested that the administration’s goal was not to issue reliable numbers but to scare Congress into approving its spending goals.

In the fall of 2011, a heated exchange between Rep. Connie Mack (R-FL) and Labor Secretary Hilda Solis made clear that tension between the Obama administration and congressional Republicans over the president’s efforts to bolster the clean energy economy was getting worse. Mack scoffed at administration projections that counted drivers of hybrid buses as “green jobs.” (This dispute occurred during the height of public outrage over Department of Energy loan guarantees — funded through Obama’s $825 billion stimulus plan — to bankrupt solar energy company Solyndra.) Some lawmakers argued that the Obama administration exaggerated the impact that its “green energy” policies had on improving the economy and creating jobs.

In late 2011, immigration policy groups noted that the Obama administration had inflated statistics to suggest that it had deported a “record-high number of illegal immigrants with criminal records.” In fact, the real deportation figure was closer to an historic low. In October 2011, Obama’s Immigration and Customs Enforcement (ICE) director had announced that nearly 55% of the record 396,906 illegal immigrants deported in FY 2011 had been convicted of crimes. But the real figure was less than 15%, according to federal records obtained through the Freedom of Information Act (FOIA) by the Transactional Records Access Clearinghouse (TRAC). Specifically, the average rate across the four quarters of FY 2011 was 14.9%.

In October 2011, the web site FactCheck.org caught the Obama administration exaggerating the impact of a proposed additional round of “stimulus” spending. (The administration had predicted that its previous stimulus plan would “save or create” millions of jobs. Those predictions turned out to be wrong — some 1.2 million American jobs had been lost during the two years following passage of the 2009 stimulus.) In 2011, Obama claimed that “independent economists” agreed that a new stimulus package would “create nearly 2 million jobs next year.” But FactCheck.org countered that the “median estimate in a survey of 34 economists showed 288,000 jobs could be saved or created over two years under the president’s plan.”

Focusing on this or that political prevarication is easy and, on a reptilian level, fun (on this topic, I commend to you Vaclav Havel’s great New Year’s Day 1990 speech on statist lies). But there’s also a bigger point raised by the meddling of bureaucratic schemers like Deborah Hersman and Barack Obama. Specifically: what burden of proof should be borne by a party who proposes a law or regulation?

The feds’ own research underscored the futility of Hersman’s gesture.

The statists who support Obama argue that the answer to that question is “none.” They argue that bureaucrats are by definition well-meaning and that the laws or regulations they propose should be presumed virtuous and effective. According to this peculiar logic, the burden of proof falls on those who question the proposed laws or regulations. Here’s one commenter’s defense of Nanny Hersman’s decree:

“Ms. Hersman was appointed to the NTSB in 2004. I can’t for the life of me figure out what possible political (or other nefarious) agenda she could possibly have in recommending that states ban cellphone usage while driving. I don’t see why we can’t assume that she is a conscientious officer who has looked at the question and sincerely believes that the evidence supports her recommendation. . . . I challenge you to find any study that shows that texting or mobile phone use does not impair driving ability. You won’t find any.”

A more coherent — and liberty-friendly — approach to government regulation would be that, if a state agency proposes restricting or banning some object or action, it must first prove that:

the object or action accounts directly for some demonstrable economic loss, and

restricting or banning the object or action will alleviate the loss.

If the agency can’t establish both points, its proposal should be ignored.

And even if the agency can establish both points, citizens should demand a cost-benefit analysis of the proposed regulation, establishing with some confidence that it will save more in economic losses than it will cost to enforce.

This approach would reduce the amount of statist noise generated by the present administration. And future ones, too.

Back to the point: statists claim that bureaucratic drivel like Hersman’s proposed cellphone ban should be presumed valid. And that those who question it must prove the validity of their questions.

The fruitless search for zero risk fits well into this warped thinking. Whether the particulars involve texting on cellphones, smoking cigarettes, wearing seatbelts, eating Big Macs, or anything else, statist busybodies justify their requirements, prohibitions and other petty tyrannies with good intent. And they imply that their opponents are in favor of the bad outcomes of risky behavior — or are “against safety.”

But a quick text message sent home or to work while driving on an empty country road or stopped in traffic might be as effective a safety measure as wearing a seat belt. Because text messages are time-stamped, people who care about you can know where you were at a given time, which matters if you don’t show up as expected.

This sort of effective communication may have something to do with the overall trend toward safer U.S. highways. (And most of the existing state laws that restrict or prohibit cellphone use while driving specifically exempt emergency use — such as calls to the highway patrol to report dangerous conditions, etc.)

As I’ve noted, Hersman’s decree was unnecessary. Most states already have laws in place restricting cellphone use by people driving cars; and all states have reckless driving laws that apply to situations in which cellphone use causes dangerous results. But, as one online commenter noted:

“Enforcing laws is so boring. Not only is it work, you get little political benefit from mundane enforcement stuff as it rarely makes the papers. And enforcement of laws may even upset people, causing political problems. But passing laws, now that’s sexy.”

Well, there’s no accounting for taste.

The most damning indictment of the proposed cellphone ban comes from a statistical study conducted by researchers at the Colorado School of Mines. They note:

“On July 1, 2008, California enacted a ban on hand-held cellphone use while driving. Using California Highway Patrol panel accident data for California freeways from January 1, 2008, to December 31, 2008, we examine whether this policy reduced the number of accidents on California highways. To control for unobserved time-varying effects that could be correlated with the ban, we use high-frequency data and a regression discontinuity design. We find no evidence that the ban on hand-held cellphone use led to a reduction in traffic accidents.”

This study is preliminary and based on limited data — but it doesn’t bode well for the cost-effectiveness of Hersman’s futile gesture.

Bureaucrats promulgate regulations. It’s their lifeblood, the air they breathe. A bureaucrat isn’t fulfilling her statist destiny unless she is banning or prohibiting something.

But free citizens need to keep in mind that the United States is a country built on the philosophical premise that everything not banned is permitted, not on the tyrannical axiom that everything not permitted is banned.

It’s right there, in the Tenth Amendment to the Constitution. Nanny Hersman and her current boss should take a look.