Tag Archives: Signaling

If morality is basically a package of norms, and if norms are systems for making people behave, then each individual’s main moral priority becomes: to avoid blame. While the norm system may be designed to produce good outcomes on average, when that system breaks, each individual has only weak incentives to fix it. They mainly seek to avoid blame according to the current broken system. In this post I’ll discuss an especially disturbing example, via a series of four hypothetical scenarios.

1. First, imagine we had a tech that could turn ordinary humans into productive zombies. Such zombies can still do most jobs effectively, but they no longer have feelings or an inner life, and from the outside they also seem dead inside, lacking passion, humor, and liveliness. Imagine that someone proposed to use this tech on a substantial fraction of the human population. That is, they propose to zombify those who do jobs that others see as boring, routine, and low status, like collecting garbage, cleaning bedpans, or sweeping floors. As in this scenario living people would be turned into dead zombies, this proposal would probably be widely seen as genocide, and soundly rejected.

2. Second, imagine someone else proposes the following variation: when a parent’s new child seems likely enough to grow up to take such a low status job, this zombie tech is applied very early, to the fetus. So no non-zombie humans are killed; they are just prevented from existing. Zombie kids are able to learn, and eventually learn to do those low status jobs. Thus technically this is not genocide, though it could be seen as the extermination of a class. And many parents would suffer from losing their chance to raise lively humans. Whoever proposed all this would probably be considered evil, and their proposal rejected.

3. Third, imagine combining this proposal with another tech that can reliably induce identical twins. This will allow the creation of extra zombie kids. That is, each birth to low status parents is now of identical twins, one of which is an ordinary kid, and the other a zombie kid. If parents don’t want to raise zombie kids, some other organization will take over that task. So now the parents get to have all their usual lively kids, and the world gains a bunch of extra zombie kids who grow up to do low status jobs. Some may support this proposal, but surely many others will find it creepy. I expect that it would be pretty hard to create a political consensus to support this proposal.

While in the first scenario people were killed, and in the second scenario parents were deprived, this third scenario is designed to take away these problems. But this third proposal still has two remaining problems. First, if we have a choice between creating an empty zombie and a living, feeling person who finds their life worth living, the second option seems to result in a better world. Which argues against zombies. Second, if zombies seem like monsters, supporters of this proposal might be blamed for creating monsters. And as the zombies look a lot like humans, many will see you as a bad person if you seem inclined to or capable of treating them badly. It looks bad to be willing to create a lower class, and to treat them like a disrespected lower class, if that lower class looks a lot like humans. So by supporting this third proposal, you risk being blamed.

4. My fourth and last scenario is designed to split apart these two problems with the third scenario, to make you choose which problem you care more about. Imagine that robots are going to take over most all human jobs, but that we have a choice about which kind of robot they are. We could choose human-like robots, who act lively with passion and humor, and who inside have feelings and an inner life. Or we could choose machine-like robots, who are empty inside and also look empty on the outside, without passion, humor, etc.

If you are focused on creating a better world, you’ll probably prefer the human-like robots, as that choice results in more creatures who find their lives worth living. But if you are focused on avoiding blame, you’ll probably prefer the machine-like robots, as few will blame you for that choice. In that case the creatures you create look so little like humans that few will blame you for creating such creatures, or for treating them badly.

I recently ran a 24-hour poll on Twitter about this choice, a poll to which 700 people responded. Of those who made a choice, 77% picked the machine-like robots:

If some kind of robot is going to replace humans on most jobs, would you prefer it to be 1) empty machine-like robots w/ no feelings or inner life, or 2) lively human-like robots full of passion, & humor? You probably feel guilty mistreating 2, but not 1.

Maybe my Twitter followers are unusual, but I doubt that a majority of a more representative poll would pick the human-like option. Instead, I think most people prefer the option that avoids personal blame, even if it makes for a worse world.

Those physicists go too far. They say conservation of momentum applies exactly at all times to absolutely everything in the universe. And yet they can’t predict whether I will raise my right or left hand next. Clearly there is more going on than their theories can explain. They should talk less and read more literature. Maybe then they’d stop saying immoral things like Earth’s energy is finite.

Sounds silly, right? But many literary types really don’t like economics (in part due to politics), and they often try to justify their dislike via a similar critique. They say that we economists claim that complex human behavior is “nothing but” simple economic patterns. For example, in the latest New Yorker magazine, journalist and novelist John Lanchester tries to make such a case in an article titled:

Can Economists and Humanists Ever Be Friends? One discipline reduces behavior to elegantly simple rules; the other wallows in our full, complex particularity. What can they learn from each other?

He starts by focusing on our book, The Elephant in the Brain. He says we make reasonable points, but then go too far:

The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. … Erving Goffman’s “The Presentation of Self in Everyday Life,” or … Pierre Bourdieu’s masterpiece “Distinction” … are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.

This intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). … Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve … or mb=mc. … These are powerful tools, which can be taken too far.

You might think that Lanchester would support his claim that we overreach by pointing to particular large claims and then offering evidence that they are false in particular ways. Oddly, you’d be wrong. (Our book mentions no math nor rules of any sort.) He actually seems to accept most specific claims we make, even pretty big ones:

Many of the details of Hanson and Simler’s thesis are persuasive, and the idea of an “introspective taboo” that prevents us from telling the truth to ourselves about our motives is worth contemplating. … The writers argue that the purpose of medicine is as often to signal concern as it is to cure disease. They propose that the purpose of religion is as often to enhance feelings of community as it is to enact transcendental beliefs. … Some of their most provocative ideas are in the area of education, which they believe is a form of domestication. … Having watched one son go all the way through secondary school, and with another who still has three years to go, I found that account painfully close to the reality of what modern schooling is like.

While Lanchester does argue against some specific claims, these are not claims that we actually made. For example:

“The Elephant in the Brain”… has moments of laughable wrongness. We’re told, “Maya Angelou … managed not to woo Bill Clinton with her poetry but rather to impress him—so much so that he invited her to perform at his presidential inauguration in 1993.” The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.

But we said nothing like “Angelou’s career amounts to nothing more than.” Saying that she impressed Clinton with her poetry is not remotely to imply there was “nothing more” to her career. Also:

More generally, Hanson and Simler’s emphasis on signalling and unconscious motives suggests that the most important part of our actions is the motives themselves, rather than the things we achieve. … The last sentence of the book makes the point that “we may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.” With that one observation, acknowledging that the consequences of our actions are more important than our motives, the argument of the book implodes.

We emphasize “signalling and unconscious motives” because that is the topic of our book. We don’t ever say motives are the most important part of our actions, and as he notes, in our conclusion we suggest the opposite. Just as a book on auto repair doesn’t automatically claim auto repair to be the most important thing in the world, a book on hidden motives needn’t claim motives are the most important aspect of our lives. And we don’t.

In attributing “overreach” to us, Lanchester seems to rely most heavily on a quick answer I gave in an interview, where Tyler Cowen asked me to respond “in as crude or blunt terms as possible”:

Wait, though—surely signalling doesn’t account for everything? Hanson … was asked to give a “short, quick and dirty” answer to the question of how much human behavior “ultimately can be traced back to some kind of signalling.” His answer: “In a rich society like ours, well over ninety per cent.” … That made me laugh, and also shake my head. … There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.

That quote is not from our book, and is from a context where you shouldn’t expect it to be easy to see exactly what was meant. And saying that a signaling motive is on average one of the strongest (if often unconscious) motives in an area of life is to say that this motive importantly shapes some key patterns of behavior in this area of life; it is not remotely to claim that this fact explains most of the details of human behavior in this area! So shaping key patterns in 90% of areas explains far less than 90% of all behavior details. Saying that signaling is an important motive doesn’t at all say that human behavior is “nothing more” than signaling. Other motives contribute, we vary in how honest and conscious we are of each motive, there are usually a great many ways to signal any given thing in any given context, and many different cultural equilibria can coordinate individual behavior. There remains plenty of room for complexity, as people like Goffman and Bourdieu illustrate.

Saying that an abstraction is important doesn’t say that the things to which it applies are “nothing but” that abstraction. For example, conservation of momentum applies to all physical behavior, yet it explains only a tiny fraction of the variance in behavior of physical objects. Natural selection applies to all species, yet most species details must be explained in other ways. If most roads try to help people get from points A to B, that simple fact is far from sufficient to predict where all the roads are. The fact that a piece of computer code is designed to help people navigate roads explains only a tiny fraction of which characters are where in the code. Financial accounting applies to nearly 100% of firms, yet it explains only a small fraction of firm behavior. All people need air and food to survive, and will have a finite lifespan, and yet these facts explain only a tiny fraction of their behavior.

Look, averaging over many people and contexts there must be some strongest motive overall. Economists might be wrong about what that is, and our book might be wrong. But it isn’t overreach or oversimplification to make a tentative guess about it, and knowing that strongest motive won’t let you explain most details of human behavior. As an analogy, consider that every nation has a largest export commodity. Knowing this commodity will help you understand something about this nation, but it isn’t remotely reasonable to say that a nation is “nothing more” than its largest export commodity, nor to think this fact will explain most details of behavior in this nation.

There are many reasonable complaints one can make about economics. I’ve made many myself. But this complaint that we “overreach” by “reducing complexity to simple rules” seems to me mostly rhetorical flourish without substance. For example, most models we fit to data have error terms to accommodate everything else that we’ve left out of that particular model. We economists are surely wrong about many things, but to argue that we are wrong about a particular thing you’ll actually need to talk about details related to that thing, instead of waving your hands in the general direction of “complexity.”

For millennia, we humans have shown off our intelligence via complicated arguments and large vocabularies, health via sport achievement, heavy drink, and long hours, and wealth via expensive clothes, houses, trips, etc. Today we appear to have more efficient signaling substitutes, such as IQ tests, medical health tests, and bank statements. Yet we continue to show off in the old ways, and rarely substitute such new ways. Why?

One explanation is inertia. Signaling equilibria require complex coordination, and those who try to change them via deviations can seem non-conformist and socially clueless. Another explanation is hypocrisy. As we discuss in our new book, The Elephant in the Brain, ancient and continuing norms against bragging push us to find plausible deniability for our brags. We can pretend that big vocabularies help us convey info, that sports are just fun, and that expensive clothes, etc. are prettier or more comfortable. It is much harder to find excuses to wave around your IQ test or bank statement for others to see.

It recently occurred to me that a sufficient lack of privacy would be an obvious fix for this problem. Imagine that it were easy to use face recognition to find someone’s official records, and from there to find out their net worth, IQ scores, and health test scores. In that case, observers could more cheaply acquire the same info that we now try to show off in deniable ways.

Yes, we say we want to keep such info private, but the big efforts most of us go through to show off our smarts, health, and wealth suggest that we doth protest too much there. And as usual, it is less that we don’t know what policies would make us better off, and more that we don’t much care about that when we choose our political efforts.

Added 7a: Of course there may also be big disadvantages to losing privacy, and our evolved preferences may be tied more to particular surface behaviors and cues than to their general underlying signaling functions.

Many firms fail to pass bad news up the management chain, and suffer as a result, even though simple fixes have long been known:

Wall Street Journal placed the blame for the “rot at GE” on former CEO Jeffrey Immelt’s “success theater,” pointing to what analysts and insiders said was a history of selectively positive projections, a culture of overconfidence and a disinterest in hearing or delivering bad news. …The article puts GE well out of its usual role as management exemplar. And it shines a light on a problem endemic to corporate America, leadership experts say. People naturally avoid conflict and fear delivering bad news. But in professional workplaces where a can-do attitude is valued above all else, and fears about job security remain common, getting unvarnished feedback and speaking candidly can be especially hard. …

So how can leaders avoid a culture of “success theater?” … They have to model the behavior, being realistic about goals and forecasts and candid when things go wrong. They should host town halls where employees can speak up with criticism, structuring them so bad news can flow to the top. For instance, he recommends getting respected mid-level managers to first interview lower-level employees about what’s not working to make sure tough subjects are aired. …

Doing that is harder than it sounds, making it critical for leaders to create systemic ways to offer feedback, rather than just talking about it. She tells the story of a former eBay manager who would leave a locked orange box near the office bathrooms where people could leave critical questions. He would later read them aloud in meetings — with someone else unlocking the box to prove he hadn’t edited its contents — hostile questions and all. “People never trusted anything was really anonymous except paper,” she said. “He did it week in and week out.”

When she worked at Google, where she led online sales and operations for AdSense, YouTube and Doubleclick, she had a crystal statue she called the “I was wrong, you were right” statue that she’d hand out to colleagues and direct reports. (more)

Consider what signal a firm sends by NOT regularly reading the contents of locked anonymous bad news boxes at staff meetings. They in effect admit that they aren’t willing to pay a small cost to overcome a big problem, if that interferes with the usual political games. You might think investors would see this as a big red flag, but in fact they hardly care.

I’m not sure how exactly to interpret this equilibrium, but it is clearly bad news for prediction markets in firms. Such markets are also sold as helping firms to uncover useful bad news. If firms don’t do easier, simpler things to learn bad news, why should we expect them to do more complex, expensive things?

For millennia, we humans have shown off our intelligence via complicated arguments and large vocabularies, health via sport achievement, heavy drink, and long hours, and wealth via expensive clothes, houses, trips, etc. Today we appear to have more efficient signaling substitutes, such as IQ tests, medical health tests, and bank statements. Yet we continue to show off in the old ways, and rarely substitute such new ways. Why?

One explanation is inertia. Signaling equilibria require complex coordination, and those who try to change them via deviations can seem non-conformist and socially clueless. Another explanation is hypocrisy. As we discuss in our new book, The Elephant in the Brain, ancient and continuing norms against bragging push us to find plausible deniability for our brags. We can pretend that big vocabularies help us convey info, that sports are just fun, and that expensive clothes, etc. are prettier or more comfortable. It is much harder to find excuses to wave around your IQ test or bank statement for others to see.

Now consider these comments by Tyler Cowen on Bryan Caplan’s new book The Case Against Education:

Bryan’s strangest assumption, namely a sociologically-rooted, actually anti-economics “conformity is stronger than you think” argument, which Bryan uses to assert the status quo will continue more or less indefinitely. It won’t. To the extent Bryan is correct (and that you can debate, but at least he is more correct than most people in the educational establishment will let on), competency-based learning and changes in employer behavior will in fact bring about a new equilibrium…not quickly, but certainly in well under two decades.

And what about on-line education? Well, a lot of students don’t like it because they have to actually work on their own and pay attention. To the extent education really is just signaling, that should give on-line options a brighter future all the more. But not in the Caplanian world view, as conformity serves once again as an intervening factor. For better or worse, Bryan’s book subverts economics as a science at least as much as it does education. Bryan of course is smart enough to see the trade-offs here, and he knows if the standard model of economic competition were allowed to reign supreme, we would (even with subsidies, relative to those subsidies) tend to see strong moves toward relatively efficient means of signaling, if only through changes in the relative sizes of institutions.

Tyler suggests that Bryan’s views imply competency-based learning and on-line education are more efficient signals, and so should win a market competition for customers. Yet I don’t see it. Yes, such approaches may let some learn more, faster, and signal what they have learned. But Bryan and I see school as less about learning.

Both competency-based learning and on-line education divorce learning from its usual social conformity context. You can use them to learn what you want when you want, and then to prove what you’ve learned. You don’t have to commit to and keep up with a standard plan of what to learn when shared by a large cohort, nor be visibly compared to this cohort.

Yes, such variations may let one better show initiative, independence, creativity, and self-actualization. And yes, we give lip service to admiring such features. But employers are not usually that eager to see such features in their employees. The usual learning plan, in contrast, is much more like a typical workplace, where workers have less freedom to choose their projects, must coordinate plans closely, and must deal with office politics and conformity pressures. It seems to me that success in the usual schooling plans works better as a signal of future workplace performance, and so would not be outcompeted by competency-based learning and on-line education, even if they let you learn some things faster, and even if change were easier than it is.

While we tend to say and think otherwise, in fact much of what we do is oriented toward helping us to show off. (Our new book argues for this at length.) Assuming this is true, what does a better world look like?

In simple signaling models, people tend to do too much of the activities they use to signal. This suggests that a better world is one that taxes or limits such activities. Say by taxing or limiting school, hospitals, or sporting contests. However, this is hard to arrange because signaling via political systems tends to create the opposite: subsidies and minimum required levels of such widely admired activities. (Though socializing such activities under limited government budgets is often effective.) Also, if we put most all of our life energy into signaling, then limits or taxes on just signaling activities will mainly result in us diverting our efforts to other signals.
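The over-investment claim in simple signaling models can be sketched numerically. This is a minimal toy model under my own illustrative assumptions (the function names and numbers are not from any cited source): status gains from signaling are zero-sum across identical agents, each gets a constant private marginal benefit per unit of signal but pays a quadratic cost, and a per-unit tax (with revenue rebated lump-sum) lowers the privately optimal signal level while society only bears the wasted cost.

```python
# Toy model: n identical agents, private marginal benefit b per unit of
# signal, quadratic cost s**2 / 2, status gains zero-sum across agents.
# A per-unit tax t on signaling (revenue rebated) reduces equilibrium effort.

def equilibrium_signal(b, t):
    """Each agent signals until marginal cost s equals net marginal benefit b - t."""
    return max(b - t, 0.0)

def social_welfare(s, n):
    """Status is zero-sum, so society as a whole only bears the signaling costs."""
    return -n * s**2 / 2

n, b = 100, 1.0
for t in (0.0, 0.5, 1.0):
    s = equilibrium_signal(b, t)
    print(f"tax={t:.1f}  signal={s:.2f}  welfare={social_welfare(s, n):.1f}")
# With no tax, everyone signals at s=1.0 and society loses 50 units of cost
# for zero net status gain; the tax shrinks that pure waste.
```

The point of the sketch is only the direction of the result: because the benefit is positional and the cost is real, private incentives push effort above the social optimum (here, zero), so a tax improves welfare.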

If some signaling activities have larger positive externalities, then it seems an obvious win to use taxes, subsidies, etc. to divert our efforts into those activities. This is plausibly why we try to praise people more for showing off via charity, innovation, or whistleblowing. Similarly, we tend to criticize activities like war and other violence with large negative externalities. We should continue to do these things, and also look for other such activities worthy of extra praise or criticism.

However, on reflection I think the biggest problem with signals today is the quality of our audience. When the audience that we want to impress knows little about how our visible actions connect to larger consequences, then we also need not attend much to such connections. For example, to show an audience that we care about someone by helping them get medicine, we need only push the sort of medicine that our audience thinks is effective. Similarly for using charity to convince an audience we care about the poor, politics to convince an audience we care about our nation, or creative activities to convince an audience we promote innovation.

What if our audiences knew more about which medicines helped health, which charities helped the poor, which national policies help the nation, or which creative activities promoted innovation? That would push us to also know more, and lead us to choose more effective medicines, charities, policies, and innovations. All to the world’s benefit. So what could make the audiences that we seek to impress know more about how our activities connect to these larger consequences?

One approach is to make our audiences more elite. Today our efforts to gain more likes on social media have us pandering to a pretty broad and ignorant audience. In contrast, in many old-world rags-to-riches stories, a low person rose in rank via a series of encounters with higher persons, each of whom was suitably impressed. The more that we expect to gain via impressing better-informed elites, the better informed will our show-off actions be.

But this isn’t just about who we seek to impress. It is also about whether we impress them via many small encounters, or via a few big ones. In larger encounters, our audience can take more time to judge how much we really understand about what we are doing. Yes, risk and randomness could dominate if the main encounters that mattered to us were too small in number. But we seem pretty far away from that limit at the moment. For now, we’d have a better world of signals if we tried more to impress via a smaller number of more intense encounters with better informed elites.

Of course to fill this role of a better informed audience, it isn’t enough for “elites” to merely be richer, prettier, or more popular. They need to actually know more about how signaling actions connect to larger consequences. So there can be outsized gains from better educating elites on such things, and from selecting our elites more from those who are better educated on them. And anything that distracts elites from performing well in this crucial role can have outsized costs.

Of course there’s a lot more to figure out here; I’ve just scratched the surface. But still, I thought I should plant a flag now, and show that it is possible to think more carefully about how to make a better world, when that world is chock full of signaling.

If you’ve laughed at “X is not about Y”, now is the time to take it seriously, as an equal.

Over the years, many seem to have found my “X is not about Y” arguments to be enjoyably mockable. As if I would be equally likely to say “Toasters are not about toast” or “Napkin holders are not about napkins.” Which seems to suggest that while my claims might be important if true, they are too silly to take seriously.

Now I don’t mind people having fun, but I do worry about the human habit to dismiss as unworthy of attention things that have been wittily mocked. (See the movie Ridicule.) If you worry about that too, and if you’ve at least smirked some at “X is not about Y” jokes, then perhaps I can appeal to your guilt or concern to take the time now to engage the argument.

Now publishers and the media usually coordinate to talk about new books near the day when hardback copies are officially released. Which for our book is January 2. Usually ebooks are also withheld until near that date. As a result, usually the only people who can say much about a book at its official release date are elites who have been given special access to pre-release copies. Those who talk about a book weeks or months later are clearly revealed as less elite, and get less attention.

But now for our book all of you can participate more as equals in that release date book conversation. If you read our book now, and then publicly post a review or engage our argument near the release date, and indicate that you’d like us to publicly engage your response, then we will try to do so. When time is limited we will of course focus more on responses that we think are better argued. But we will try to engage as many of you as possible, without giving undue priority to media and other elites.

So please, go read, and then join our debate. Just how often is it plausible that “X is not about Y”?

The vast majority of economic papers and books that offer explanations for human behaviors don’t bother to distinguish if their explanations are mediated by conscious intentions or not. (In fact, most papers on any topic don’t take a stance on most possible distinctions related to their topic.) …

Yet I’ve had even economics colleagues tell me that I should take more care, when I point out possible signaling explanations, to say if I am claiming that such signaling effects are consciously intended. But why would it be more important to distinguish conscious intentions in this context, compared to the rest of economics and social science?

Standard signaling models assume that people dislike sending the signal. It is this assumption that implies that signaling equilibria are highly inefficient – or even full-blown Prisoners’ Dilemmas. If people enjoyed signaling, in contrast, signaling equilibria could easily be ideal. What superficially appears to be a vast zero-sum game turns out to be fine because the players like playing the game.

So why don’t economists clearly acknowledge the centrality of conscious desire when they apply signaling models to the real world? Because we usually focus on cases where most people plainly don’t enjoy sending the signal. When I wrote The Case Against Education, I definitely double-checked this fact; but I probably wouldn’t have even launched the project if I’d spent a lifetime inside classrooms packed with jubilant learners.
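The point in the quote above, that the “dislike” assumption drives the inefficiency result, can be illustrated with a minimal Spence-style job-market sketch. All numbers and function names here are my illustrative assumptions, not from either book: education raises no productivity, it only separates high types from low types, and the deadweight loss is exactly the cost high types pay to signal.

```python
# Spence-style sketch: wages w_h > w_l reflect true productivity; education
# e adds no output, but costs less per year for high types, so it can
# credibly separate them from low types.

def min_separating_signal(w_h, w_l, c_low):
    """Smallest e that keeps low types from mimicking high types:
    w_h - c_low * e <= w_l  =>  e >= (w_h - w_l) / c_low."""
    return (w_h - w_l) / c_low

w_h, w_l = 2.0, 1.0
c_high, c_low = 0.5, 1.5   # "dislike": schooling is costly for both types

e = min_separating_signal(w_h, w_l, c_low)
waste = c_high * e          # pure social loss: education adds no output
print(f"signal years = {e:.2f}, deadweight cost per high type = {waste:.2f}")

# If high types instead *enjoyed* school (per-year cost <= 0), the same
# separation would impose no deadweight loss -- which is why the "dislike"
# assumption is what makes the equilibrium inefficient.
```

Here high types signal just enough that mimicking doesn't pay for low types, and the entire signaling expenditure is waste relative to a full-information world, since output is unchanged.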

I agree that, when explaining human behavior, it can often be important to be clear about the preferences that one is postulating. The same behavior explained by different preferences can have different implications for whether we should encourage or discourage that behavior.

But when explaining behavior, postulated preferences and conscious intent are just separate and independent topics on which one can be clear. There is no obvious or necessary relation between them.

For example, the real reason that people go to school could be because they like school, or it could be because they want to show off smarts, conformity, etc. And for either real motive, people could be fully conscious of that motive, or they could be self-deceived and in denial about it. For example, people could think that they enjoy school, but really go to show off, or they could think that they are trying to show off, but really go because they enjoy school.

While I’m pretty sure that Caplan claims that we go to school more to show off, I’m not actually sure if he has taken much of a stand on how conscious we are in choosing to go to school for this purpose. And that’s my point: I can love his new book (buy it) even without knowing this stance. Like most good economists, he doesn’t bother to distinguish how much his explanation of schooling is mediated by conscious intent.

Many people (including me) claim that we eat food and drink water because without nutrition and fluids we would starve and dehydrate. Imagine this response:

No, people eat food because they are hungry, and drink water because they are thirsty. We don’t need abstract concepts like nutrition and dehydration to explain something so elemental as following our authentic feelings and desires.

Yes hunger and thirst are direct proximate causes of eating and drinking. But we are often interested in finding more distal explanations of such proximate causes. So almost no one objects to the nutrition and dehydration explanations of eating and drinking.

However, one of the most common criticisms I get about signaling explanations of human behavior is that we are instead just following authentic feelings and desires. As in this exchange:

If you are high status, others care about your views on a wide range of topics. If low status, it is hard to get them to listen even on the topics on which you are most expert. So folks often express opinions on many topics, to try to seem high status.

Yes, people don’t need to consciously force themselves to express opinions on many topics. That habit comes quite naturally. Even so, we might want to explain that habit in terms of more basic distal forces.

I’m an economics professor, and the vast majority of economic papers and books that offer explanations for human behaviors don’t bother to distinguish whether their explanations are mediated by conscious intentions or not. (In fact, most papers on any topic don’t take a stance on most possible distinctions related to their topic.) Economists are in fact famously wary (too wary I’d say) of survey data, as they fear conscious thoughts can mislead about economic behaviors.

Yet I’ve had even economics colleagues tell me that I should take more care, when I point out possible signaling explanations, to say if I am claiming that such signaling effects are consciously intended. But why would it be more important to distinguish conscious intentions in this context, compared to the rest of economics and social science?

My best guess is that what is going on here is that our social norms disapprove mildly of consciously intended signaling. Just as we aren’t supposed to brag, we also aren’t supposed to do things on purpose to make ourselves look good. It is okay to look good, but only as a side effect of doing things for other reasons. And as we usually claim other reasons for these behaviors, if we are actually doing them for signaling reasons we could also be accused of lying, which is also a norm violation.

Thus many see my signaling explanation proposals as accusing them personally of norm violations. At which point, they become vastly more interested in defending themselves against this accusation than in evaluating my general claims about human behavior. Perhaps if I were a higher status professor publishing in a prestigious journal, they might be reluctant to publicly challenge my claimed focus on distal explanations of general behavior patterns. But for mere tweets or blog posts by someone like me, they feel quite entitled to read me as accusing them of being bad people, unless I explicitly say otherwise. (And perhaps even then.) Sigh.

For the record, the degree of conscious intent of any behavior is a mildly interesting facet, but I’m less interested in it than most people are. This is in part because I’m inclined to give people less of a moral or legal pass on the harms resulting from their behaviors when they do not consciously intend such consequences. It is just too easy for people to not notice such consequences, when they find it in their interest to not notice.

Casual conversation norms say to wander across many topics, with each person staying relevant to each current topic. This functions well to test individual impressiveness. Today, academic and mass media conversations follow similar norms, though they did this much less in the ancient world.

While ancient artists and musicians tried to perfect common styles, modern artists and musicians seek more distinctive personal styles. For example, while songs were once designed to sound good when ordinary folks sang them, now songs are designed to create a unique impressive performance by one artist.

Politicians often go out of their way to do “position taking” on many issues, even on issues where they have little chance of influencing policy while in office. Voters prefer systems like proportional representation where voters can identify more closely with particular representatives, even if this doesn’t give voters better outcomes overall. Knowing many of a politician’s positions helps voters to identify with them.

“Sophomoric” thinkers, typically college sophomores, are eager to take positions on as many common topics as possible, even if this means taking poorly considered positions. They don’t feel they are adult until they have an opinion ready for most common intellectual conversations. This is more feasible when opinions on each topic area are reduced to choices between a small number of standard “isms”, offering integrated packages of answers. Sophomoric thinkers love isms.

We often try to extract “isms” out of individuals, such as my colleagues Tyler Cowen or Bryan Caplan. We might ask “What is the Caplanian position on X?” That is, we wonder how they would answer random questions, presuming that we can infer a coherent style from past positions that would answer all future questions, at least within some wide scope. Intellectuals who desire wider attention often go out of their way to express opinions on many topics, chosen via a distinctive personal style.

We pretend that we search only for truth, picking each specific position only via the strongest specific evidence and arguments. And in many mundane contexts that’s not a bad approximation. But in many other grander contexts we seek more to become and associate with distinctive intellectual artists. Such artists are impressive both via the wide range of topics on which they can be impressive, and via having a distinctive personal style that they can bring to bear on this range of topics.

This all makes complete sense as an impressiveness contest, but far less sense as a way for the world to jointly produce accurate Bayesian estimates on each topic. I’m sure you can make up reasons why distinctive intellectual styles that imply positions on wide ranges of topics are really great ways to produce accuracy. But they will mostly sound like excuses to me.

Sophomoric thinkers often retain for a lifetime the random opinions they quickly generate without much thought. Yet they don’t want to just inherit their parents’ positions; they need to generate their own new opinions. I wonder which effect will dominate when young ems choose opinions; will they tend to adopt the standard positions of prior clan members, or generate their own new individual opinions?