Tag Archives: NearFar

Many incoming college freshmen like “international studies” or “international business.” Far fewer like local studies or local business. Yet there will be more jobs in the latter areas than the former.

The media discusses national and international politics more than local politics, yet most of the “news you can use” is local.

Our economics department once estimated there’d be substantial demand for a “managerial economics” major. It would teach basically the same stuff as an economics major, but attract students because of the word “managerial.”

Within management, reorganization is usually higher status than managing within existing structures.

The ratio of students who do science majors relative to engineering majors is much larger than the ratio of jobs in those areas.

Within science, students tend to prefer “basic” sciences like particle physics to more “applied” sciences like geology or material science, relative to the ratio of jobs in such areas.

Compared to designing things from scratch, there is far more work out there maintaining, repairing, and making minor modifications to devices and software. Yet engineering and software schools focus mainly on designing things from scratch.

Within engineering, designing products is higher status than designing the processes that manufacture those products.

Designing new categories of products is seen as higher status than new products within existing categories.

Even when designing from scratch, most real work is testing, honing, and debugging a basic idea. Yet in school the focus is more on creating the basic idea.

There seems to be an overemphasis at school on designing tools that may be useful for other design work, relative to using tools to design things of more direct value.

Do these trends have something in common? My guess: we see wider-scope choices as higher status, all else equal. That is, things associated with choices that we think will influence and constrain many other choices are seen as higher status than things associated with those other more constrained choices. For example, we think managers constrain subordinates, world policy constrains local policy, physics constrains geology, product designs constrain product maintenance, and so on. Yes, reverse constraints also happen, but we think those happen less often.

The ability to control the choices of others is a kind of power, and power has long been seen as a basis for status. There may also be a far-view heuristic at work here, i.e., where choices that evoke a far mental view tend to be seen as high status. After all, power does tend to evoke a far view.

A lesson here seems to be that while it can raise your status to be associated with big scope choices, you should expect a lot of competition for that status, and a relative neglect of smaller scope choices. That is, more people may major in science, but there are more jobs in engineering. You might impress people by focusing on creating designs in school, but you are likely to spend your life maintaining pre-existing designs. If you want to get stuff done instead of gaining status, you should focus on smaller scope choices.

Now in my life I’ve spent a lot of time trying to reconsider basic big scope choices. For example, I’ve studied foundations of quantum mechanics, and proposed a new form of governance. And I’ve often thought of such topics as neglected. So how can I reconcile such views with the apparent lesson of this post?

One obvious reconciliation is that I’ve just been wrong, having succumbed to the big scope status bias.

Another possibility is that big scope topics tend more to be public goods where people tend to free-ride on the efforts of others. It is easier for a person or group to own the gains from better understanding smaller scope topics, and thus have strong incentives to deal with them. If so, there would be positive externalities from progress on such topics, to counter the negative externalities from status and signaling. I think this explanation has some truth, but only some.

A third possibility is that it is harder to reason well about big scope choices, which is part of why it impresses to do that well. But if good reasoning is harder as the topic gets more abstract, there should be fewer people who can handle such topics. Some topics will be so abstract that very few can deal well with them, or even evaluate the dealings of others. So those few people will tend more to be on their own, and not get much praise from others.

Recruiting a sample of Americans via the internet, researchers polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare, and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather than provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. (more; paper; HT Elliot Olds)

The question “why” evokes a far mode, while “how” evokes a near mode.

When I first got into prediction markets twenty five years ago, I called them “idea futures”, and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, here). Lately I’ve focused on what I see as the much higher value application of advising decisions and reforming governance (see here, here, here, here). I’ve also talked a lot lately about what I see as the main social functions of academia (see here, here, here, here). Since prediction markets don’t much help to achieve these functions, I’m not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any “theory” broadly conceived, like “grapes cure cancer.” (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments to test their theories, and in addition some such markets are subsidized by science patron orgs like the NSF. Bettors in each market vote on representatives who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. “Everyone in the field” must make a public prediction on the test. Then the test is done, winners paid, and a new market set up for a new test of the same question. Somewhere along the line private hedge funds would also pay for academic work in order to learn where they should bet.

That was the summary; here are some critiques. First, people willing to bet on theories are not a good source of revenue to pay for research. There aren’t many of them and they should in general be subsidized not taxed. You’d have to legally prohibit betting on these claims in other, untaxed markets, and even then you’d get few takers.

Second, Alexander says to subsidize markets the same way they’d be taxed, by adding money to the betting pot. But while this can work fine to cancel the penalty imposed by a tax, it does not offer an additional incentive to learn about the question. Any net subsidy could be taken by anyone who put money in the pot, regardless of their info efforts. As I’ve discussed often before, the right way to subsidize info efforts for a speculative market is to subsidize a market maker to have a low bid-ask spread.
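The subsidized market maker mentioned here can be implemented via a logarithmic market scoring rule, a standard automated-market-maker design. Below is a minimal Python sketch under that assumption; the class name and parameter values are illustrative, and a real market would need accounts, payouts, and conditional-claim handling. The key property for a patron: the worst-case subsidy is bounded by b · ln(number of outcomes), and raising b tightens the effective bid-ask spread.

```python
import math

class LMSRMarketMaker:
    """Minimal sketch of a subsidized automated market maker using a
    logarithmic market scoring rule. The patron's maximum loss (i.e.,
    the subsidy paid out to informed traders) is b * ln(num_outcomes)."""

    def __init__(self, num_outcomes, b=100.0):
        self.b = b                     # liquidity parameter: bigger b = bigger subsidy
        self.q = [0.0] * num_outcomes  # net shares sold per outcome

    def cost(self, q):
        # Total amount collected so far for holdings vector q.
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i):
        # Current market probability estimate for outcome i.
        total = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[i] / self.b) / total

    def buy(self, i, shares):
        # A trader buys `shares` of outcome i; returns what they pay.
        new_q = list(self.q)
        new_q[i] += shares
        payment = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return payment

mm = LMSRMarketMaker(num_outcomes=2, b=100.0)
print(mm.price(0))           # 0.5 before any trades
paid = mm.buy(0, 50.0)       # an informed trader moves the price up
print(mm.price(0) > 0.5)     # True
print(mm.b * math.log(2))    # patron's worst-case subsidy, ~69.3
```

Note the contrast with adding money to a betting pot: here the subsidy flows only to traders who move prices toward the truth, so it directly rewards info effort.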

Third, Alexander’s plan to have bettors vote to agree on a question tester seems quite unworkable to me. It would be expensive, rarely satisfy both sides, and seems easy to game by buying up bets just before the vote. More important, most interesting theories just don’t have very direct ways to test them, and most tests are of whole bundles of theories, not just one theory. Fourth, for most claim tests there is no obvious definition of “everyone in the field,” nor is it obvious that everyone should have an opinion on those tests. Forcing a large group to all express a public opinion seems a huge cost with unclear benefits.

OK, now let me review my proposal, the result of twenty five years of thinking about this. The market maker subsidy is a very general and robust mechanism by which research patrons can pay for accurate info on specified questions, at least when answers to those questions will eventually be known. It allows patrons to vary subsidies by questions, answers, time, and conditions.

Of course this approach does require that such markets be legal, and it doesn’t do well at the main academic function of credentialing some folks as having the impressive academic-style mental features with which others like to associate. So only the customers of academia who mainly want accurate info would want to pay for this. And alas such customers seem rare today.

For research patrons using this market-maker subsidy mechanism, their main issues are about which questions to subsidize how much when. One issue is topic. For example, how much does particle physics matter relative to anthropology? This mostly seems to be a matter of patron taste, though if the issue were what topics should be researched to best promote economic growth, decision markets might be used to set priorities.

The biggest issue, I think, is abstraction vs. concreteness. At one extreme one can ask very specific questions like what will be the result of this very specific experiment or future empirical measurement. At the other extreme, one can ask very abstract questions like “do grapes cure cancer” or “is the universe infinite”.

Very specific questions offer bettors the most protection against corruption in the judging process. Bettors need worry less about how a very specific question will be interpreted. However, subsidies of specific questions also target specific researchers pretty directly for funding. For example, subsidizing bets on the results of a very specific experiment mainly subsidizes the people doing that experiment. Also, since the interest of research patrons in very specific questions mainly results from their interest in more general questions, patrons should prefer to target the more general questions of direct interest to them.

Fortunately, compared to other areas where one might apply prediction markets, academia offers especially high hopes for using abstract questions. This is because academia tends to house society’s most abstract conversations. That is, academia specializes in talking about abstract topics in ways that let answers be consistent and comparable across wide scopes of time, space, and discipline. This offers hope that one could often simply bet on the long term academic consensus on a question.

That is, one can plausibly just directly express a claim in direct and clear abstract language, and then bet on what the consensus will be on that claim in a century or two, if in fact there is any strong consensus on that claim then. Today we have a strong academic consensus on many claims that were hotly debated centuries ago. And we have good reasons to believe that this process of intellectual progress will continue long into the future.

Of course future consensus is hardly guaranteed. There are many past debates that we’d still find hard to judge today. But for research patrons interested in creating accurate info, the lack of a future consensus would usually be a good sign that info efforts in that area were less valuable than in other areas. So by subsidizing markets that bet on future consensus conditional on such a consensus existing, patrons could more directly target their funding at topics where info will actually be found.

Large subsidies for market-makers on abstract questions would indirectly result in large subsidies on related specific questions. This is because some bettors would specialize in maintaining coherence relationships between the prices on abstract and specific questions. And this would create incentives for many specific efforts to collect info relevant to answering the many specific questions related to the fewer big abstract questions.

Yes, we’d probably end up with some politics and corruption on who qualifies to judge later consensus on any given question – good judges should know the field of the question as well as a bit of history to help them understand what the question meant when it was created. But there’d probably be less politics and lobbying than if research patrons choose very specific questions to subsidize. And that would still probably be less politics than with today’s grant-based research funding.

Of course the real problem, the harder problem, is how to add mechanisms like this to academia in order to please the customers who want accuracy, while not detracting from or interfering too much with the other mechanisms that give the other customers of academia what they want. For example, should we subsidize high relevant prestige participants in the prediction markets, or tax those with low prestige?

A new JPSP paper confirms that we are idealistic in far mode, and selfish in near mode. If you ask people for short abstract descriptions of their goals, they’ll say they have ideal goals. But if you ask them to describe in detail what it is like to be them pursuing their goals, their selfishness shines clearly through. Details:

Completing an inventory asks the respondent to take an observer’s perspective upon the self, effectively asking, “What do you look like to others?” Imagining watching a video of oneself driving a car, playing basketball, or speaking to a friend is an experience as the self-as-actor. Rating the importance of various goals also recruits the self-as-actor. Motivated to maintain a moral reputation, the self-as-actor is infused with prosocial, culturally vetted scripts.

Another way of accessing motivation is by asking people questions about their lives. Open-ended verbal responses (e.g., narratives or implicit measures) require the respondent to produce ideas, recall details, reflect upon the significance of concrete events, imagine a future, and narrate a coherent story. In effect, prompts to narrate ask respondents, “What is it like to be you?” Imagining actually driving a car, playing basketball, or speaking to a friend is an experience as the self-as-agent (McAdams, 2013). Asking people to tell about their lives also recruits the self-as-agent. Motivated by survival, the self-as-agent is selfish in nature. …

Taken together, this leads to the prediction that frames the current research: Inventory ratings, which recruit the self-as-actor, will yield moral impressions, whereas narrated descriptions, which recruit the self-as-agent, will yield the impression of selfishness. …

The motivation to behave selfishly while appearing moral gave rise to two, divergently motivated selves. The actor—the watched self— tends to be moral; the agent—the self as executor—tends to be selfish. Each self serves its own adaptive function: The actor helps people maintain inclusion in groups, whereas the agent attends to basic survival needs. Three studies support the thesis that the actor is moral and the agent is selfish. In Study 1, actors claimed their goals were equally about helping the self and others (viz., moral); agents claimed their goals were primarily about helping the self (viz., selfish). This disparity was evident in both individualist and collectivist cultures, albeit more so among individualists. Study 2 compared actors and agents’ motives to those of people role-playing highly prosocial or selfish exemplars. In content and in the impression they made upon an outside observer, actors’ motives were similar to those of the prosocial role-players, whereas agents’ motives were similar to those of the selfish role-players. In Study 3, participants claimed that their agent’s motives were the more realistic and their actor’s motives the more idealistic of the two. When asked to take on an idealistic mindset, agents became more moral; a realistic mindset made the actor more selfish. (more)

Imagine that you decide that this week you’ll go to a different doctor from your usual one. Or that you’ll get a haircut from a different hairdresser. Ask yourself: by how much do you expect such actions to influence the distant future of all our descendants? Probably not much. As I argued recently, we should expect most random actions to have very little long term influence.

Now imagine that you visibly take a stand on a big moral question involving a recognizable large group. Like arguing against race-based slavery. Or defending the Muslim concept of marriage. Or refusing to eat animals. Imagine yourself taking a personal action to demonstrate your commitment to this moral stand. Now ask yourself: by how much do you expect these actions to influence distant descendants?

I’d guess that even if you think such moral actions will have only a small fractional influence on the future world, you expect them to have a much larger long term influence than doctor or haircut actions. Furthermore, I’d guess that you are much more willing to credit the big-group moral actions of folks centuries ago for influencing our world today, than you are willing to credit people who made different choices of doctors or hairdressers centuries ago.

But is this correct? When I put my social-science thinking cap on, I can’t find good reasons to expect big-group moral actions to have much stronger long term influence. For example, you might posit that moral opinions are more stable than other opinions and hence last longer. But more stable things should be harder to change by any one action, leaving the average influence about the same.

I can, however, think of a good reason to expect people to expect this difference: near-far (a.k.a construal level) theory. Acts based on basic principles seem more far than acts based on practical considerations. Acts identified with big groups seem more far than acts identified with small groups. And longer-term influence is also more strongly associated with a far view.

So I tentatively lean toward concluding that this expectation of long term influence from big-group moral actions is mostly wishful thinking. Today’s distribution of moral actions and the relations between large groups mostly result from a complex equilibrium of people today, where random disturbances away from that equilibrium are usually quickly washed away. Yes, sometimes there’ll be tipping points, but those should be rare, as usual, and each of us can only expect to have a small fractional influence on such things.

Long ago our primate ancestors learned to be “political.” That is, instead of just acting independently, we learned to join into coalitions for mutual advantage, and to switch coalitions for private advantage. Our human ancestors added social norms, i.e., rules enforced by feelings of outrage in broad coalitions. Foragers used norms and coalitions to manage bands of roughly thirty members, and farmers applied similar behaviors to village communities of roughly a thousand.

In ancient politics, people learned to attract allies, to judge who else was reliable as an ally, to gossip about who was allied with who, and to help allies and hurt rivals. In particular we learned to say good things about allies and bad things about rivals, such as accusing rivals of violating key social norms, and praising allies for upholding them.

Today many people consider themselves to be very “political”, and they treat this aspect of themselves as central to their identity. They spend lots of time talking about related views, associating with those who share them, and criticizing those who disagree. They often feel especially proud of how boldly and freely they do these things, relative to their ancestors and those in “backward” cultures.

Trouble is, such folks are mostly “political” about national or international politics. Their interest fades as the norms and coalitions at stake focus on smaller scales, such as regions, cities, or neighborhoods. The politics of firms, clubs, and families hardly engage them at all. Of course such people are members of local coalitions, and do sometimes voice support for enforcing related norms. So they are political there to some extent. But they are much less bold, self-righteous, and uncompromising about local politics, and don’t consider related views to be central to their identity. Such folks are eager to associate with those who sacrifice to improve world politics, but are only mildly interested in associating with those who sacrifice to improve local politics.

This focus on politics at the largest scale is both relatively safe, and relatively useless. On the one hand, your efforts to take sides and support norm enforcement at very local levels are far more likely to benefit you personally via better local outcomes. On the other hand, such efforts are far more likely to bother opposing coalitions, leaving you vulnerable to retaliation. Given these risks, and the greater praise given to those who push politics at the largest scales, it is understandable if people tend to focus on safe-scale politics, unlikely to cause them personal troubles.

Near-far theory predicts that we’d tend to focus our ideals and moral outrage and praise more on the largest social scales. But a net result of this tendency is that we seem far less effective today than were our ancestors at enforcing very-local-level social norms, and at discouraging related harms from local coalitions. We chafe at the idea of letting our nation be dominated by a king, but we easily and quietly submit to local kings in firms, clubs, and families.

Our political instincts and efforts are largely wasted, because we are just much less able to coordinate to identify and right wrongs on the largest scales. Now to some extent this is healthy. There was a lot of destructive waste when most political efforts were directed at very local politics. But many wrongs were also detected and righted. The human political instinct does serve some positive functions. After all, human bands were much larger than other primate bands, suggesting that human politics was less destructive than other primate politics.

I’ve suggested that organizations use decision markets to help advise key decisions. And to illustrate the idea, I’ve discussed the example of how it could apply to national politics. I’ve done this because people seem far more interested in reforming national politics, relative to reforming local small organizations. But honestly, I see much bigger gains overall from smaller scale applications. And small scale application is where the idea needs to start, to work out the kinks. And such trials are feasible now. If only I could get some small orgs to try. Sigh.
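For concreteness, the decision-market idea can be sketched in a few lines: run one conditional prediction market per option (“will the key outcome metric improve, conditional on choosing option X,” with trades called off if X is not chosen), then recommend the option whose conditional price is highest. The option names and prices below are hypothetical.

```python
def decision_market_advice(conditional_prices):
    """Recommend the option whose conditional prediction market gives
    the highest estimated probability of success.
    conditional_prices: {option: market price of P(success | option)},
    where each market's trades are called off unless that option is chosen."""
    if not conditional_prices:
        raise ValueError("need at least one conditional market")
    return max(conditional_prices, key=conditional_prices.get)

# Hypothetical conditional market prices for a small firm's choice:
prices = {"launch product A": 0.62, "launch product B": 0.55}
print(decision_market_advice(prices))  # prints "launch product A"
```

Nothing here requires a nation-sized market; a firm, club, or department could run the same comparison over its own key decisions, which is the small-scale trial suggested above.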

I posted back in ’07 on a hero of local politics:

A colleague of my wife was a nurse at a local hospital, and was assigned to see if doctors were washing their hands enough. She identified and reported the worst offender, whose patients were suffering as a result. That doctor had her fired; he still works there not washing his hands. (more)

I’d admire you much more if you acted like this, relative to your marching on Washington, soliciting door-to-door for a presidential candidate, or posting ever so many political rants on Facebook. Shouldn’t you admire such folks far more as well?

When a man loves a woman, …. if she is bad, he can’t see it. She can do no wrong. Turn his back on his best friend, if he puts her down. (Lyrics to “When a Man Loves A Woman”)

Kristeva analyzes our “incredible need to believe”–the inexorable push toward faith that … lies at the heart of the psyche and the history of society. … Human beings are formed by their need to believe, beginning with our first attempts at speech and following through to our adolescent search for identity and meaning. (more)

This “to believe” … is that of Montaigne … when he writes, “For Christians, recounting something incredible is an occasion for belief”; or the “to believe” of Pascal: “The mind naturally believes and the will naturally loves; so that if lacking true objects, they must attach themselves to false ones.” (more)

We often shake our heads at the gullibility of others. We hear a preacher’s sermon, a politician’s speech, a salesperson’s pitch, or a flatterer’s sweet talk, and we think:

Why do they fall for that? Can’t they see this advocate’s obvious vested interest, and transparent use of standard unfair rhetorical tricks? I must be more perceptive, thoughtful, rational, and reality-based than they. Guess that justifies my disagreeing with them.

Problem is, like the classic man who loves a woman, we find it hard to see flaws in what we love. That is, it is easier to see flaws when we aren’t attached. When we “buy” we more easily see the flaws in the products we reject, and when we “sell” we can often ignore criticisms by those who don’t buy.

Why? Because we have near and far reasons to like things. And while we might actually choose for near reasons, we want to believe that we choose for far reasons. We have a deep hunger to love some things, and to believe that we love them for the ideal reasons we most respect for loving things. This applies not only to other people, but also to politicians, writers, actors, and ideas.

For the options we reject, however, we can see more easily the near reasons that might induce others to choose them. We can see pandering and flimsy excuses that wouldn’t stand up to scrutiny. We can see forced smiles, implausible flattery, slavishly following fashion, and unthinking confirmation bias. We can see politicians who hold ambiguous positions on purpose.

Because of all this, we are the most vulnerable to not seeing the construction of and the low motives behind the stuff we most love. This can be functional in that we can gain from seeming to honestly sincerely and deeply love some things. This can make others that we love or who love the same things feel more bonded to us. But it also means we mistake why we love things. For example, academics are usually less interesting or insightful when researching topics where they feel the strongest; they do better on topics of only moderate interest to them.

This also explains why sellers tend to ignore critiques of their products as not idealistic enough. They know that if they can just get good enough on base features, we’ll suddenly forget our idealism critiques. For example, a movie maker can ignore criticisms that her movie is trite, unrealistic, and without social commentary. She knows that if she can make the actors pretty enough, or the action engaging enough, we may love the movie enough to tell ourselves it is realistic, or has important social commentary. Similarly, most actors don’t really need to learn how to express deep or realistic emotions. They know that if they can make their skin smooth enough, or their figure toned enough, we may want to believe their smile is sincere and their feelings deep.

Same for us academics. We can ignore critiques of our research not having important implications. We know that if we can include impressive enough techniques, clever enough data, and describe it all with a pompous enough tone, our audiences may be impressed enough to tell themselves that our trivial extensions of previous ideas are deep and original.

Relatively minor technological change can move the balance of power between values that already fight within each human. [For example,] Beeminder empowers a person’s explicit, considered values over their visceral urges. … In the spontaneous urges vs. explicit values conflict …, I think technology should generally tend to push in one direction. … I’d weakly guess that explicit values will win the war. (more)

The goals we humans tend to explicitly and consciously endorse tend to be more idealistic than the goals that our unconscious actions try to achieve. So one might expect or hope that tech that empowers conscious mind parts, relative to other parts, would result in more idealistic behavior.

A relevant test of this idea may be found in the behavior of human orgs, such as firms or nations. Like humans, orgs emphasize more idealistic goals in their more explicit communications. So if we can identify the parts of orgs that are most like the conscious parts of human minds, and if we can imagine ways to increase the resources or capacities of those org parts, then we can ask if increasing such capacities would move orgs to more idealistic behavior.

A standard story is that human consciousness functions primarily to manage the image we present to the world. Conscious minds are aware of the actions we may need to explain to others, and are good at spinning good-looking explanations for our own behavior, and bad-looking explanations for the behavior of rivals.

Marketing, public relations, legal, and diplomatic departments seem to be analogous parts of orgs. They attend more to how the org is seen by others, and to managing org actions that are especially influential to such appearances. If so, our test question becomes: if the relative resources and capacities of these org parts were increased, would such orgs act more idealistically? For example, would a nation live up to its self-proclaimed ideals more if the budget of its diplomatic corps were doubled?

I’d guess that such changes would tend to make org actions more consistent, but not more idealistic. That is, the mean level of idealism would stay about the same, but inconsistencies would be reduced and deviations of unusually idealistic or non-idealistic actions would move toward the mean. Similarly, I suspect humans with more empowered conscious minds do not on average act more idealistically.

But that is just my guess. Does anyone know better how the behavior of real orgs would change under this hypothetical?

As suggested by sex is near, love is far, it seems that we don’t directly feel romantic love. Instead, we rather abstractly interpret our feelings as being love or not, depending on whether we think our relation fits our abstract ideal of love:

When adult women were asked about love and how they have experienced love in their own lives, … many women found it difficult to talk about their feelings generally and love in particular. There was an absence of falling in love stories and rather, women explained that they ‘drifted’ into relationships, or they ‘just happened’. …

Love continues to be used as the legitimating ideology for family, relationships and marriage. Moreover, the representation of love in society is omnipresent; it is depicted in blockbuster films, on daytime television, in novels, in music and in numerous other cultural formats. This ‘commercialization’ of love has commonly captured a specific form of love: one which promises salvation for both sexes, although perhaps more so for women. …

Love was mentioned often by the 23 young, mostly heterosexual (one woman identified as bisexual), adult women with whom this paper is concerned. Yet, the context in which love was mentioned was almost always in relation to abstract discussions about relationships and marriage. Romantic discourses were shunned in favour of pragmatic, objective assessments of emotion. When I asked them to tell me about their own relationships they often seemed to struggle to put their feelings into words and there was a distinct absence of falling in love stories. These women did not openly desire love and many accounts of relationships were based on ‘drifting’ into relationships with friends or finding that love ‘just happened’. …

Eleanor commented, ‘about a month ago I suddenly woke up and I just thought I’m in love with you. And I thought I was before that point but I just woke up and I just knew’. … The absence of love stories is documented in participants’ use of cover stories, metaphors and a ‘drift’ discourse. Yet when asked directly about love, respondents did not shy away from talking about their feelings. …

Narratives of whirlwind romances were rare but the significance and meaning of love, as well as the romantic image of ‘the one true love’, led the respondents to define love in a very specific way. Thus it was common for them to denounce the love they felt in past relationships in the form of ‘I thought it was love . . .’. Michelle was a good example of this: ‘I thought I was in love with him and in hindsight it was quite an inappropriate [relationship]’. Michelle later ‘realizes’ that it was not love at all. (Carter, 2013; ungated)

That is, these women don’t see love in the details of how their relationships started or grew. At some point they just decide they are in love. Later, if they change how they think about the relationship, they may change their mind about whether they were in love. So if they feel love, it is a feeling attached to and drawn mostly from an abstract interpretation of a situation, rather than from particular concrete details. Love is far indeed.

Decisions depend on both values and facts. Values are about us and what we want, while (beliefs about) facts are about everything else, especially the way everything else changes how what we get depends on what we do. Both values and facts are relevant to decisions.

But honestly, facts usually matter far more. Yes, sometimes we err by mistaking our values, and sometimes our values are more complex than we realize. But for the vast majority of our decisions, we have a good rough idea of what we value, and most of our decision problem (on the margin) is to figure out relevant facts. (If you review the last ten decisions you made, I think you’ll see this is obvious.)

Even when learning values is important, talking values with others usually helps less. To learn what we value, we mostly just need to try different things out and see how we feel about them. So compared to thinking about values, talking values seems even less useful for informing decisions. That is, we have better ways to coordinate to discover the world we share than to coordinate to learn our individual values. Yet we seem to spend an awful lot of time discussing values. Especially on grand topics like politics, religion, charity, sex/love, the arts, the future, etc., we seem to spend more time talking values than facts. (We also love to drop names on these topics.) Why?

Such topics tend to put us in a far mental mode, and far modes focus us on basic values relative to practical constraints. Which makes sense if far modes function more to manage our social impressions. That is, value-focused talk makes sense if such talk functions less to advise decisions, and more to help us look good. By talking values we can signal our loyalties and the norms we support, and we can covertly hint about norm violations we might overlook. (Dropping names also lets us covertly signal our loyalties.)

This is what bugs me personally about most discussions of grand topics — they are so full of value affirmations (and name dropping), and so empty of info to improve decisions. The modes that we prefer for such topics, such as stories, music, testimonials, and inspirational speeches, are much better for transmitting values than facts. Worse, people love to revisit the same value topics over and over, even though their values rarely change; it is facts that change, and so justify revisiting topics often. Also, the “experts” we prefer on these grand topics are mostly those whose main celebrated credentials concern their values and their ability to move values, not their understanding of facts.

I’m glad to be an academic, since our standard mode of talk is better suited to discerning and sharing facts than values. And I’m especially glad to be an economist, since using a standard value metric lets us focus most of our disagreement on differing views about facts. Of course even so, most academic discussion isn’t very well targeted at improving decisions; we are far more interested in getting credentialed as impressive. But at least we mostly talk facts.

If you think you are one of the rare folks who actually cares more about making better decisions than about signaling loyalties, and if you wanted to find other like-minded folks to work with, I’d think you’d tend to avoid talking values, as that would be a bad sign about your interests. But in fact most folks who say they are the rare ones who care mainly about better decisions, and who take lots of personal time to talk about it, seem in fact to spend most of their time talking values. They even tend to prefer the value-focused modes and experts. Why are so few folks willing to even pretend to focus on facts?