When you buy wine at a restaurant in front of clients, or a date, or someone else you need to impress, you’re likely to spend more than you would with a friend or a longtime partner. Part of what you’re paying for is a demonstration of how much you can pay — you’re buying the number next to the wine’s name on the wine list as well as the actual wine. But it seems improbable that the marginal value of spending an extra dollar is exactly one dollar. You could buy a seventy-dollar bottle that’s no better than a thirty-dollar bottle, yet the impressiveness you gain may be worth something other than the forty-dollar difference. If it’s worth less, then you’re simply miscalculating. If it’s worth more, then you might be willing to pay eighty dollars to get the wine plus the “70” next to the wine’s name. There’s no mechanism, however, for the restaurant to charge you that eighty.

On the other hand, you could imagine a restaurant with the regular wine list and an “enhanced prices” (EP) wine list. The wines listed are identical, but the EP list displays each price twenty dollars higher than the regular list, while the wine actually costs only five dollars more than its regular-list price. Suppose you want to buy seventy dollars of impressiveness (thirty-dollar wine quality plus forty-dollar impression). Order from the EP list the bottle that the regular list prices at fifty dollars: your guests see “70” next to its name, but instead of paying seventy dollars, you only have to pay fifty-five!
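To make the arithmetic concrete, here is a toy calculation. Every dollar figure is the hypothetical number from the example above, not real pricing data, and it assumes (as the example does) that the fifty-dollar-listed bottle delivers the same thirty dollars of wine quality.

```python
# Toy model of the "enhanced prices" (EP) wine list thought experiment.
# All figures are the hypothetical numbers from the example, not real data.

WINE_QUALITY_VALUE = 30   # what the wine itself is worth to you
IMPRESSION_VALUE = 40     # value of guests seeing "70" next to the wine

# Regular list: displaying "70" means ordering the seventy-dollar bottle.
regular_cost = 70

# EP list: printed prices are $20 higher than the regular list, but you
# actually pay only $5 more than the regular-list price. So the bottle
# listed at $50 on the regular list displays "70" on the EP list.
ep_displayed = 50 + 20    # 70 -- the number your guests see
ep_cost = 50 + 5          # 55 -- what you actually pay

total_value = WINE_QUALITY_VALUE + IMPRESSION_VALUE  # 70 either way

print(f"Regular list: pay {regular_cost}, get {total_value} of value "
      f"-> surplus {total_value - regular_cost}")
print(f"EP list:      pay {ep_cost}, get {total_value} of value "
      f"-> surplus {total_value - ep_cost}")
```

The EP diner captures the fifteen-dollar gap between the surcharge actually paid and the markup displayed, which is the whole appeal of the scheme.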

And then, of course, once this form of signaling is somewhat established, you can save your dough and impress your clients by countersignaling.

Bryan Caplan, a creative and blinkered thinker who at once interests and maddens me, proposed a few months ago an ideological Turing test as a heuristic for assessing the strength of a political or economic position. The test starts from the premise that “to state opposing views as clearly and persuasively as their proponents … is a genuine symptom of objectivity and wisdom,” and thus those who can best argue their opponents’ points are more likely to hold the better position.

I think Caplan is half right. The problem with his proposal, though, is that it privileges esoteric or fringe positions, and that it is entirely an input- rather than output-based measure. Austrian economics is a fringe position, for example. I personally think it’s largely meritless, in a similar way that intelligent design is meritless. Yet anyone who is well versed in Austrian economics is familiar enough with mainstream Keynesian economics to be able to articulate a reasonably convincing simulacrum of a Keynesian position; similarly, proponents of intelligent design (ID) are likely to be knowledgeable about the consensus in favor of evolution so that they might engage with people who are persuadable in one way or the other. People who are actual bioscientists, however, don’t bother learning all of the esoteric arguments in favor of intelligent design because intelligent design is silly and not worthy of spending any time on. So of course the IDers will do a better job of articulating the “evolutionist” argument than the “evolutionists” will of articulating, or, in Turing-test terms, simulating or imitating the ID arguments.

I recall my parents having bought me a logic/mathematics puzzle book that posed, among many others, the following problem: You wish to buy, from among three boats, the slowest of them. At first, each owner pilots their own boat, but the “race” is interminably slow because each tries to advance their boat as slowly as possible while still perceptibly moving. How do you identify which boat is truly the slowest?

The answer was to have each boat owner pilot somebody else’s boat. That way, each would drive the boat to its limit in an effort to make it upstage their own boat. I think this is a better model for an ideological Turing test than Caplan’s. Find an uninformed (but not stupid), disinterested third party, and convince him or her that he or she has the power to effect some change in the world that is meaningful to the two ideologically opposed parties. The party that genuinely favors X argues position Y to the third party, the party that genuinely favors Y argues position X to the third party, and whichever position the third party ends up favoring is the position whose prescriptions are *not* carried out.

I think this is a much better test of the quality of two sides’ respective arguments than is simply the ability to recite what the other side thinks.

Take two shuffled, standard 52-card decks. Draw a card from each. If you find them to be the same card, that’s rather surprising, no? But what if you find the first card to be the jack of hearts, and the second card the four of clubs? That’s not very surprising, even though it’s fifty-two times less likely to occur than getting two of the same card!
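The two probabilities are easy to check, both exactly and by simulation. In this sketch, each deck’s draw is represented by an integer from 0 to 51, and equal integers stand in for identical cards.

```python
import random

# Exact probabilities for drawing one card from each of two shuffled
# 52-card decks.
p_same_card = 52 * (1 / 52) ** 2   # some matching pair: 52 ways -> 1/52
p_specific_pair = (1 / 52) ** 2    # one named pair, e.g. Jh then 4c: 1/2704

# The matching pair is exactly 52 times as likely as any specific pair.
assert abs(p_same_card / p_specific_pair - 52) < 1e-9

# Quick Monte Carlo check of the 1/52 figure.
random.seed(0)
trials = 200_000
matches = sum(random.randrange(52) == random.randrange(52)
              for _ in range(trials))
print(f"simulated P(same card) ~ {matches / trials:.4f} (exact: {1/52:.4f})")
```

The simulated frequency should land within a fraction of a percentage point of 1/52 ≈ 0.0192 at this number of trials.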

I think part of the answer as to why the pair of identical cards is more surprising than the much rarer Jh4c draw is that we’re implicitly entertaining an alternative hypothesis. The null hypothesis, of course, is that both decks are completely fair and uncorrelated — nothing fishy’s going on. The alternative is that something fishy is indeed going on. And our notions of fishiness get triggered more by a recognizable pattern than by something that doesn’t fit into any pattern. Conceivably, we could have a different sense of patterns. Just as a draw of something like 9s9s is maximally concordant, a draw of Jh4c is maximally discordant. Jacks and fours are as far from each other as they can be in number (assuming circularity, with A lying between K and 2), and also in suit: the standard suit order is clubs, diamonds, hearts, spades, so, with similar circularity assumptions, clubs and hearts are antipodal. If you don’t fully buy this, well, they’re different colors in any event.
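The concordance idea above can be sketched as a toy metric: circular distance in rank plus circular distance in suit. The rank circle (A between K and 2), the suit order, and the two-character card notation come from the text; the particular scoring is just one assumed way to formalize it.

```python
# A sketch of the "discordance" metric: circular rank distance plus
# circular suit distance. The scoring scheme is one hypothetical
# formalization of the idea in the text.
RANKS = "A23456789TJQK"   # A=0 ... K=12, arranged on a circle of 13
SUITS = "cdhs"            # clubs, diamonds, hearts, spades; circle of 4

def circular_distance(i, j, n):
    """Shortest distance between positions i and j on a circle of size n."""
    d = abs(i - j)
    return min(d, n - d)

def discordance(card1, card2):
    """Rank distance (0..6) plus suit distance (0..2) between two cards."""
    rank = circular_distance(RANKS.index(card1[0]), RANKS.index(card2[0]), 13)
    suit = circular_distance(SUITS.index(card1[1]), SUITS.index(card2[1]), 4)
    return rank + suit

print(discordance("Jh", "4c"))  # 6 + 2 = 8, the maximum possible score
print(discordance("9s", "9s"))  # 0, maximally concordant
```

Under this metric, Jh4c sits at one extreme and a pair of identical cards at the other — both endpoints are equally “special,” even though only one of them trips our usual pattern detectors.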

You have to take my word on this, but when I picked the jack of hearts and the four of clubs, I tried to think of the most random-sounding pair of cards I could, and I ended up picking a maximally discordant pair, which is just as improbable as picking a maximally concordant pair (i.e., two of the same card). That’s pretty surprising, no? But only now is it surprising, since your sense of patterns has been updated to include this concordance metric as a means by which to evaluate draws.

A test that’s purely a test of the null hypothesis is impossible. You’re always comparing two hypotheses. It’s better to understand what both of them are than to understand only one.

Most of our preferences, if fulfilled, make us better off, at least for the span of time before hedonic treadmilling claws the benefit back. If it is given that I am to have a bowl of ice cream tonight, and I prefer vanilla over chocolate, we would expect me to benefit more from a bowl of vanilla than from a bowl of chocolate.

But let’s say that I’m at a very odd ice cream shop and they have only two flavors left, neither of which I’ve heard of before: fluzzleberry and kablazzo bean. Unbeknownst to me, I’ll like them equally, but I’m not allowed to sample, and so I flip a coin and end up with the kablazzo bean. I eat it and enjoy it, but I’m no better off than I would have been had the coin dictated the fluzzleberry.

We now change the scenario, renaming fluzzleberry to sufferberry. This time, the name “sufferberry” causes me to develop a preference for the kablazzo bean, even though my tastes and the ice creams have not changed at all. I order the kablazzo bean, eat it, and enjoy it, but my benefit is no greater in this second scenario than it was when sufferberry was called fluzzleberry.

A person’s preferences are usually good indicators of what would be best for that person, but creating a preference in someone and then fulfilling that preference without changing their actual benefit structure doesn’t make them better off.

When we create a desire in someone, we create a preference for something as well as some combination of a hunger and a drive to attain that object. This can benefit the person if the desire increases the probability of their attaining the desideratum, and attaining the desideratum benefits them — perhaps before desiring it they just didn’t have the energy to pursue the object, or they mistakenly thought it wouldn’t be worthwhile. But let’s assume that the person had sufficient energy and complete relevant knowledge, and correctly determined, pre-desire, that the object wasn’t worth what they’d have to do to attain it. In this case, unless we ensure that the person indeed attains the object, we do them a disservice. Creating a hunger in someone imposes a deficiency, and that deficiency is harmful until remedied. Indeed, even if they do attain the object and satisfy their hunger, they were still harmed during the period between the creation of the deficiency and the attainment of the desideratum.

To me at least, this still seems counterintuitive. That’s why I went through the ice cream thought experiment, to demonstrate that benefit need not depend on preference. Usually when we prefer something and have a hunger for it, it’s because we have reason to believe that getting it will bring us a benefit, and forgoing that benefit is painful. Beings that lacked the drive to attain good things for themselves would have been selected out in the course of evolution.

But sometimes our desires are unjustified. Inducing a desire in a child for a toy by advertising does the child no good unless the child would actually have been better off with the toy and the advertisement gave the child or its parents new information. And as adults, we’re hardly immune to the effects of advertising, no matter how much we’d like to believe otherwise. Our reason and prudence may promote in us desires for good things, but they are unreliable gatekeepers against having unhelpful desires planted in us. Our machinery of hunger and lust is operable by others who do not always have our best interests in mind.

From time to time, we make decisions for other people. Sometimes, they delegate the decisions to us — they consent to our deciding for them. Other times, they could have consented, but either we do not ask them or we decide for them against their wishes. And at other times still, it is impossible for us to obtain consent: the person may be hiking in the woods and unreachable, or perhaps they are in a coma and we have to make medical decisions on their behalf.

In this third case, where consent is impossible, we generally consider the right decision to be either the decision that the person would have made had circumstances been such that they could have made the decision for themselves, or the decision that we deem to be in their best interests.

The two most significant decisions we can make for someone else are whether to create them and whether to kill them (including whether to allow these things to happen). The latter question receives lots of attention, from the popular press, from professional philosophers, and from ordinary people faced with a dying relative. Regardless of one’s position on euthanasia, do-not-resuscitate orders, and medical heroics, pretty much everyone agrees that morally relevant criteria for end-of-life considerations center on the patient’s autonomy and interests. So a child’s wish to euthanize a parent because the parent has become a burden is not considered morally compelling, and neither is a child’s wish to keep a parent alive despite the parent’s wishes because the child likes visiting the parent. Indeed, these wishes are widely regarded as selfish.

The situation could not be more different when it comes to deciding whether to bring someone into the world. Deciding to have a child because you want to pass on your name or genes, or to appease your parents, or to have someone to care for you in your old age, or to have someone to propagate your religion or values: all of these are considered by most to be legitimate reasons to have children. Curiously, similarly selfish reasons for not having a child, such as the desire for freedom from the responsibility of caring for someone, are often recognized as selfish. What almost never enters the equation are the two questions that should be entirely controlling: (1) would a pre-existent person choose existence over continued non-existence, and (2) would it be better for a pre-existent person to become existent than to continue in non-existence?

Answering these questions is extremely difficult. For the first question, we lack access to pre-existent persons to pose the question to. For the second question, it’s hard to devise cardinal utility functions in general, and practically impossible to determine the utility of a life, even before you take into account the biases we all possess that make us misjudge how (dis)utile things in our life have been, which would presumably be the starting point for assessing the utility of a life. In the posts that follow, however, I’ll try to make some progress.