My thanks to Professor Charles Karelis, author of The Persistence of Poverty, for his excellent comment beneath my recent guest post at Psychology Today's Headcase blog. I'm reprinting the comment below, but first a recap and some additional thoughts.

My post focused on a 2008 study that Dan Ariely linked to a few weeks ago, and which found that premature deaths in the US are increasingly the result of bad personal decisions (rather than causes such as accidents or genetic diseases).

After explaining the study, I mentioned that some of these decisions are made disproportionately by people in poverty. This is where Karelis's book comes into play, as its main thesis is about how the decisions faced by the poor are different in nature than those faced by everybody else. I summarized his idea as follows:

If you have so many burdens in your life at the same time—joblessness, obesity, crime—then eliminating just one of them makes a difference so negligible that you'll simply choose not to address any of them.

As Mike Konczal explained in a great post last year, Karelis posits that solving the first of many problems brings very little satisfaction, or "utility" in economic speak. The problems that remain are so many, varied, and painful that someone in poverty will hardly notice. But if the first problem does get solved, then solving each additional problem would bring increasing satisfaction, as the issues become fewer and more manageable. This is an idea that reverses the traditional economic utility function, which predicts diminishing returns for additional goods. But the point is that because solving the first problem brings such little satisfaction to someone in poverty, it won't be worth the effort to get started. "Hence the persistence of poverty," Konczal writes. (That's an extremely simplistic overview, so do read Konczal's full post for more depth, or better yet buy the book.)
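To make the reversal concrete, here's a toy Python sketch (my own illustration, not from Karelis's book; the functional forms and the number of troubles are arbitrary assumptions) contrasting a conventional concave utility curve, where relieving the first trouble yields the biggest gain, with the convex curve Karelis hypothesizes, where the first relieved trouble barely registers:

```python
import math

def conventional_utility(k):
    """Standard concave utility: each relieved trouble adds less than the last."""
    return math.sqrt(k)

def karelis_utility(k):
    """Karelis-style convex utility: early relief is barely felt; later relief,
    once the plate is nearly clear, is felt strongly. (Toy form: k squared.)"""
    return k ** 2

def marginal_gain(utility, k):
    """Extra utility from relieving the k-th of six troubles."""
    return utility(k) - utility(k - 1)

conv = [marginal_gain(conventional_utility, k) for k in range(1, 7)]
kare = [marginal_gain(karelis_utility, k) for k in range(1, 7)]

# Conventional marginal gains shrink (1.0, 0.41, 0.32, ...), so tackling the
# first problem is most rewarding. Karelis-style gains grow (1, 3, 5, ...),
# so the first problem is the LEAST rewarding one to tackle -- hence inaction.
```

Under these toy assumptions the model predicts exactly the behavior Konczal describes: someone facing six troubles rationally declines to start, because the first fix pays off least.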

Anyway, the book generated additional discussion within the blogosphere (see also Tyler Cowen in 2007), with mostly favorable reviews. But some reviewers noted that although the book's ideas were intriguing, they hadn't been tested and consequently there didn't seem to be much evidence for them.

In my post at Psychology Today, I wondered why it hadn't been investigated. Despite the discussion of utility, along with Vaughan Bell I understood Karelis's theory to be as much about psychology as about economics. And I closed my post by suggesting that if social scientists were going to design new experiments and conduct new research about personal decision-making (as Ariely recommends), they should keep Karelis's theory in mind. In other words, this seems like a situation where it's not necessarily appropriate to isolate psychological influences from socioeconomic ones, as they might interact with each other in complicated ways.

Karelis writes below that the rationalist paradigm of economics needn't be discarded for his critique of the utility function to hold, and he also responds to the notion that his theory is unproven. I thank him again for the comment and reproduce it here in full:

From the author of the Persistence of Poverty, cited in these posts. My book doesn't link the life-shortening decisions discussed by Ariely (smoking, overeating) to the utility function I hypothesized. That extrapolation seems to have been made first by Ezra Klein in his Washington Post blog, and it was carried further by Mr. Coates in his Atlantic blog and by Andrew Sullivan in his blog. But though I didn't make it in my book, I find the extrapolation very plausible. After all, it takes discipline to resist many of these life-shortening activities. They're like work, or maybe you could even say they're a kind of work. So--on the rationalist paradigm of human behavior, which has been too much disparaged by behavioral economics, in my opinion-- the prudent decision will be taken if and only if the accompanying rewards are perceived as being greater than the effort required. What my hypothesis adds here is that for poor people, whose plates are heaped high with troubles--the felt relief brought about in the short run by losing a few pounds or breathing easier as you climb upstairs will be hardly noticeable. What's one or two fewer troubles on a plate heaped high with them? So we need not embrace the (in any case distasteful) hypothesis that poor people suffer from limited time horizons in order to explain the failure to resist these life-shortening temptations.

As for the question whether my book is "empirically confirmed," I would respectfully contend that my critics have a paradigm of "the empirical" that over-relies on laboratory evidence and under-weighs the question of which among competing theories offers the simplest explanation of undisputed facts about human behavior, such as the greater obesity, alcohol consumption, and smoking among the poor; and I'd add that positivist economics has a double-standard when it comes to introspective evidence. The conventional wisdom about the utility function rests on little else besides introspection. Take down your intro econ text and look if you doubt it.

My friend and Psychology Today Headcase blogger Eric Jaffe remains busy with his book tour, and because I have the flexible schedule of a freelance writer (read: a guy who lounges all day in his underpants waiting for editors to call him back), I agreed to contribute another guest post to his blog. It just went up, and here's an excerpt:

Most of us know that our bad choices can eventually kill us, especially when these choices become hard-to-break addictions like smoking or binge drinking or overeating. We're not usually thinking in such morbid terms each time we light up another cigarette or reach for a second piece of chocolate cake, but maybe we should be. ...

The paper (pdf here), written by operations research professor Ralph Keeney, defined a personal decision as "a situation where an individual can make a choice among two or more alternatives," and where a person is aware of these alternatives. Using data from public agencies and previous studies, the paper links causes of premature death to personal decisions:

The analysis indicates that over one million of the 2.4 million deaths in 2000 can be attributed to personal decisions and could have been avoided if readily available alternative choices were made. Separate analyses indicate 46% of deaths due to heart disease and 66% of cancer deaths are attributable to personal decisions, about 55% of all deaths for ages 15-64 are attributable to personal decisions, and over 94% of the deaths attributable to personal decisions result in the death of the individual making the decisions.

I go on to argue that if social scientists plan to do more research into the psychology of personal decision-making (as Dan Ariely recommends), they shouldn't conduct their experiments without regard to how socioeconomic factors alter the way people think.

Many of the great mistakes of history, including the problems financial markets have continually re-experienced, have been caused by a basic error of judgement – the idea that it’s possible to define, plan and control the outcomes of the world around us despite the rampant uncertainty we daily take in our stride. So instead of relying on expert judgement and feeling our way carefully towards outcomes, we’ve found ourselves traduced by people with tunnel vision and a strong but unjustified confidence in their ability to navigate unerringly to a correct solution, whatever that might be. ...

At the centre of Obliquity is the idea that we mostly don’t make decisions through some careful process of analysis – maximisation, or whatever term you want to apply to it – because the world is too complex to permit of such an approach in real life. This theory of direct decision making is not just wrong, but is also at the heart of some of the worst decisions in history, invariably made by people who thought that they knew what was right for everyone else.

Instead we end up with reasonable outcomes when we approach decision making obliquely – by using judgement and skill and, frankly, muddling our way through making the best of the situation as we find it on a day by day basis. The book gives example after example of corporations that have succeeded in making their shareholders very rich by setting themselves objectives that are nothing directly to do with wealth creation – and also shows how often direct attempts to generate wealth lead to the exact opposite outcome.

Much more here. This adds to the welcome proliferation of books in the last decade that challenge our understanding of how much control we have over outcomes in our lives.

People are inconsistent in how they view their own lives versus the lives of others. We tend to look upon another person's lot in life and, whether it's that of a celebrity or a destitute bum, assume it is mostly deserved. And we treat him that way. But when we look at the outcomes in our own lives, we're more likely to see them as a combination that, along with talent and effort, includes factors beyond our control. I don't know if books that make us aware of these biases lead to any actual changes in behavior, but it doesn't hurt to be reminded of them now and again.

My friend and housemate Eric Jaffe, proprietor of the Headcase blog at Psychology Today, recently published his first book (to great reviews!) and is now on book tour. While he's occupied with writerly obligations, I'll be filing a couple of guest posts for the blog. Here's an excerpt from today's post on why bosses turn into bullies:

Of course, Wall Street doesn't have a monopoly on this kind of boss. A survey (pdf here) conducted by Zogby in 2007 found that 37% of American workers had been bullied at some point in their careers, including 13% within the previous year. Two out of every five bullied workers eventually quit, representing some 21.6 million workers at the time.

Clearly the problem is widespread, so it seems a good idea to ask: why do bosses become bullies in the first place? Through a series of experiments, psychologists Nathanael Fast and Serena Chen tried to answer that very question, and they released the findings in a research paper for Psychological Science (pdf here) last year.

The authors found that power alone isn't enough to corrupt: it has to be accompanied by self-perceived feelings of incompetence on the part of the powerful. Furthermore, people in positions of power put added pressure on themselves to be competent, making their egos all the more defensive when they lack self-confidence.

Drunkenness and youth share in a reckless irresponsibility and the illusion of timelessness. The young and the drunk are both reprieved from that oppressive, nagging sense of obligation that ruins so much of our lives, the worry that we really ought to be doing something productive instead. It’s the illicit savor of time stolen, time knowingly and joyfully squandered. There’s more than one reason it’s called being “wasted.” ...

But drinking was also an excuse to devote eight consecutive hours to sitting idly around having hilarious conversations with friends, and I am still not convinced there is any better possible use of our time on earth. Lately, in these more temperate years, I’m reminded of Shakespeare’s Henry plays after Falstaff has died; it’s as if, having put riotous youth behind, there’s now a place in life for things like dignity and honor and even great accomplishment — but it also feels, sometimes, as if everything best and happiest and most human has gone out of the world.

More here. Even during riotous youth, I think most of us already do expect our lives to eventually have "dignity and honor and even great accomplishment"—but it seems like there is so much time left to get those things, and meanwhile everybody else is partying right now. Why miss out?

Of course, life doesn't work that way, but the point is that the younger we are, the greater the illusion that we can have it all. We assume that once we get to our late twenties and thirties, then we'll become serious about our careers, stop carousing until the wee hours, start a family, and put away money for retirement. We start focusing more on the future when there is less of it left.

Arthur Brooks, president of the AEI, has a new book out, The Battle, and he makes an interesting claim. He states that the key factor in one's happiness--not experiential happiness, but 'remembered happiness' that is more correlated with 'life satisfaction', see Kahneman on the difference--is 'perceived earned success'. This is the willingness and ability to create value in your life or the life of others. He states that if you ask someone if they feel like they are creating such value, they are happy, regardless of how much they make. Giving people money, via welfare or inheritance, does not make people happy, because this if anything discourages the effort needed to find and develop such a niche. ...

If you are really good at your job your day is filled with sincere gratitude from colleagues and customers, and hopefully you can also have a family that appreciates you as well (but for very different reasons).

In the end Marcus Aurelius notes that popularity counts for nothing, "we're all forgotten. The abyss of endless time that swallows it all. The emptiness of those applauding hands." So, live for creating value in the lives of those around you, value that is appreciated and will be missed because it is real. If you create real value, things that with the passage of time retain their admiration, as Brooks suggests, that will probably make you happy, which isn't nothing.

The whole post is here. I still say Eric Falkenstein remains underrated among economics bloggers. I disagreed with him in this post.

I'm doing a new book at the moment called "Epiphany," which is based on a series of interviews with people about how they discovered their talent. I'm fascinated by how people got to be there. It's really prompted by a conversation I had with a wonderful woman who maybe most people have never heard of, she's called Gillian Lynne, have you heard of her? Some have. She's a choreographer and everybody knows her work. She did "Cats," and "Phantom of the Opera." She's wonderful.

I used to be on the board of the Royal Ballet, in England, as you can see. Anyway, Gillian and I had lunch one day and I said, "Gillian, how'd you get to be a dancer?" And she said it was interesting, when she was at school, she was really hopeless. And the school, in the '30s, wrote to her parents and said, "We think Gillian has a learning disorder." She couldn't concentrate, she was fidgeting. I think now they'd say she had ADHD. Wouldn't you? But this was the 1930s, and ADHD hadn't been invented at this point. It wasn't an available condition. People weren't aware they could have that.

Anyway, she went to see this specialist. So, this oak-paneled room, and she was there with her mother, and she was led and sat on a chair at the end, and she sat on her hands for 20 minutes while this man talked to her mother about all the problems Gillian was having at school. And at the end of it -- because she was disturbing people, her homework was always late, and so on, little kid of eight -- in the end, the doctor went and sat next to Gillian and said, "Gillian, I've listened to all these things that your mother's told me, and I need to speak to her privately." He said, "Wait here, we'll be back, we won't be very long," and they went and left her.

But as they went out the room, he turned on the radio that was sitting on his desk. And when they got out the room, he said to her mother, "Just stand and watch her." And the minute they left the room, she said, she was on her feet, moving to the music. And they watched for a few minutes and he turned to her mother and said, "Mrs. Lynne, Gillian isn't sick, she's a dancer. Take her to a dance school."

I said, "What happened?" She said, "She did. I can't tell you how wonderful it was. We walked in this room and it was full of people like me. People who couldn't sit still. People who had to move to think." Who had to move to think. They did ballet, they did tap, they did jazz, they did modern, they did contemporary.

She was eventually auditioned for the Royal Ballet School, she became a soloist, she had a wonderful career at the Royal Ballet. She eventually graduated from the Royal Ballet School and founded her own company -- the Gillian Lynne Dance Company -- met Andrew Lloyd Webber. She's been responsible for some of the most successful musical theater productions in history, she's given pleasure to millions, and she's a multi-millionaire. Somebody else might have put her on medication and told her to calm down.

I also recently came across this article, which surveys some of the psychology research on creativity and concludes: "Unfortunately, once you leave school, society does not get much more supportive of really creative behavior."

Some commentators in Britain sneered at the “crocodile tears” of the masses over the death of Diana. On the contrary, Leader says, this grief is the same as the old public grief in which groups got together to experience in unity their individual losses. As a saying from China’s lower Yangtze Valley (where professional mourning was once common) put it, “We use the occasions of other people’s funerals to release personal sorrows.” When we watch the televised funerals of Michael Jackson or Ted Kennedy, Leader suggests, we are engaging in a practice that goes back to soldiers in the Iliad mourning with Achilles for the fallen Patroclus. Our version is more mediated. Still, in the Internet age, some mourners have returned grief to a social space, creating online grieving communities, establishing virtual cemeteries, commemorative pages, and chat rooms where loss can be described and shared.

I think this is interesting as an example of how an action that seems illogical---grieving for the death of a famous person you’ve never met and who would not have grieved at your death---has a non-obvious but useful social purpose.

It’s easy to be condescending toward the waves of sympathy that sometimes greet celebrity deaths, but public grief is for some people a natural and healthy process---and as O’Rourke explains, one that was quite common until just a century ago, when grief started to become a largely private matter.

And even if you do think public grief is silly, at least it's harmless. A related point was made in this blog post from last June after the death of Michael Jackson, in which Bryan Caplan argued that grieving for celebrities gives people an outlet to express feelings that would otherwise manifest in more troublesome ways. I don’t agree with everything Caplan says, but this seems right:

Samuel Johnson once wisely observed that, "There are few ways in which a man can be more innocently employed than in getting money." I'd like to add that "There are few ways in which a man can be more innocently hysterical than in grieving over a celebrity."