Tag Archives: Academia

Imagine that you are a politically savvy forager in a band of thirty, or a politically savvy farmer near a village of a thousand. You have some big decisions to make, including who to put in various roles, such as son-in-law, co-hunter, employer, renter, cobbler, or healer. Many people may see your choices. How should you decide?

Well first you meet potential candidates in person and see how much you intuitively respect them, get along with them, and can agree on relative status. It isn’t enough for you to have seen their handiwork, you want to make an ally out of these associates, and that won’t work without respect, chemistry, and peace. Second, you see what your closest allies think of candidates. You want to be allies together, so it is best if they also respect and get along with your new allies.

Third, if there is a strong leader in your world, you want to know what that leader thinks. Even if this leader says explicitly that you can do anything you like, that they don't care, then if you get any hint whatsoever that they do care, you'll look closely to infer their preferences. And you'll avoid doing anything they'd dislike too much, unless your alliance is ready to mount an overt challenge.

Fourth, even if there is no strong leader, there may be a dominant coalition encompassing your band or town. This is a group of people who tend to support each other, get deference from others, and win in conflicts. We call these people “elites.” If your world has elites, you’ll want to treat their shared opinions like those of a strong leader. If elites would gossip disapproval of a choice, maybe you don’t want it.

What if someone sets up objective metrics to rate people in suitability for the roles you are choosing? Say an archery contest for picking hunters, or a cobbler contest to pick cobblers. Or public track records of how often healer patients die, or how long cobbler shoes last. Should you let it be known that such metrics weigh heavily in your choices?

You’ll first want to see what your elites or leader think of these metrics. If they are enthusiastic, then great, use them. And if elites strongly oppose, you’d best only use them when elites can’t see. But what if elites say, “Yeah you could use those metrics, but watch out because they can be misleading and make perverse incentives, and don’t forget that we elites have set up this whole other helpful process for rating people in such roles.”

Well in this case you should worry that elites are jealous of this alternative metric displacing their advice. They like the power and rents that come from advising on who to pick for what. So elites may undermine this metric, and punish those who use it.

When elites advise people on who to pick for what, they will favor candidates who seem loyal to elites, and punish those who seem disloyal, or who aren’t sufficiently deferential. But since most candidates are respectful enough, elites often pick those they think will actually do well in the role. All else equal, that will make them look good, and help their society. While their first priority is loyalty, looking good is often a close second.

Since humans evolved to be unconscious political savants, this is my basic model to explain the many puzzles I listed in my last post. When choosing lawyers, doctors, real estate agents, pundits, teachers, and more, elites put many obstacles in the way of objective metrics like track records, contests, or prediction markets. Elites instead suggest picking via personal impressions, personal recommendations, and school and institution prestige. We ordinary people mostly follow this elite advice. We don’t seek objective metrics, and instead use elite endorsements, such as the prestige of where someone went to school or now works. In general we favor those who elites say have the potential to do X, over those who actually did X.

This all pushes me to more favor two hypotheses:

We choose people for roles mostly via evolved mental modules designed mainly to do well at coalition politics. The resulting system does often pick people roughly well for their roles, but more as a side effect than a direct effect.

In our society, academia reigns as a high elite, especially on advice for who to put in what roles. When ordinary people see another institution framed as competing directly with academia, that other institution loses. Pretty much all prestigious institutions in our society are seen as allied with academia, not as competing with it. Even religions, often disapproved by academics, rely on academic seminary degrees, and strongly push kids to gain academic prestige.

We like to see ourselves as egalitarian, resisting any overt dominance by our supposed betters. But in fact, unconsciously, we have elites and we bow to them. We give lip service to rebelling against them, and they pretend to be beaten back. But in fact we constantly watch out for any actions of ours that might seem to threaten elites, and we avoid them like the plague. Which explains our instinctive aversion to objective metrics in people choice, when such metrics compete with elite advice.

Added 8am: I'm talking here about how we intuitively react to the possibility of elite disapproval; I'm not talking about how elites actually react. Also, our intuitive reluctance to embrace track records isn't strong enough to prevent us from telling specific stories about our specific achievements. Stories are way too big in our lives for that. We already have norms against bragging, and yet we still manage to make ourselves look good in stories.

Years ago I was surprised to learn that patients usually can't pick docs based on track records of previous patient outcomes. Because, people say, that would invade privacy and make bad incentives for docs picking patients. They suggest instead relying on personal impressions, wait times, "bedside" manner, and prestige of doc med school or hospital. (Yeah, those couldn't possibly make bad incentives.) Few ever study if such cues correlate with patient outcomes, and we actively prevent the collection of patient satisfaction track records.

For lawyers, most trials are in the public record, so privacy shouldn't be an obstacle to getting track records. So people pick lawyers based on track records, right? Actually no. People who ask are repeatedly told: no, practically you can't get lawyer track records, so just pick lawyers based on personal impressions or the prestige of their law firm or school. (Few study if those correlate with client outcomes.)

Despite being public record, court data is surprisingly inaccessible in bulk, nor is there a unified system to access it, outside of the Federal Courts. Clerks of courts refused Premonition requests for case data. Resolved to go about it the hard way, Unwin … wrote a web crawler to mine courthouse web sites for the data, read it, then analyze it in a database. …

Many publications run “Top Lawyer” lists, people who are recognized by their peers as being “the best”. Premonition analyzed the win rates of these attorneys, it turned out most were average. The only way that they stood out was a disproportionate number of appealed and re-opened cases, i.e. they were good at dragging out litigation. They discovered that even the law firms themselves were poor at picking litigators. In a study of the United Kingdom Court of Appeals, it found a slight negative correlation of -0.1 between win rates and re-hiring rates, i.e. a barrister 20% better than their peers was actually 2% less likely to be re-hired! … Premonition was formed in March 2014 and expected to find a fertile market for their services amongst the big law firms. They found little appetite and much opposition. …

The system found an attorney with 22 straight wins before the judge – the next person down was 7. A bit of checking revealed the lawyer was actually a criminal defense specialist who operated out of a strip mall. … The firm claims such outliers are far from rare. Their web site … shows an example of an attorney with 32 straight wins before a judge in Orange County, Florida. (more)
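The arithmetic behind the excerpt's correlation claim can be sketched with a line of code. Under a simple standardized linear fit, a correlation of r between win rates and re-hiring rates predicts a relative change of roughly r times the win-rate edge, which is how a 20% edge becomes about 2% lower re-hiring. The numbers below just replay the excerpt's figures; this is an illustrative sketch, not Premonition's actual method.

```python
# Illustrative only: with correlation r between (standardized) win rate
# and re-hire rate, a simple linear fit predicts a relative re-hiring
# change of r * delta for a barrister with win-rate edge delta.
r = -0.1        # reported win-rate vs re-hiring correlation
delta = 0.20    # a barrister 20% better than peers
predicted_change = r * delta
print(f"{predicted_change:+.0%}")  # about 2% less likely to be re-hired
```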

As a society we supposedly coordinate in many ways to make medicine and law more effective, such as via funding med research, licensing professionals, and publishing legal precedents. Yet we don’t bother to coordinate to create track records for docs or lawyers, and in fact our public representatives tend to actively block such things. And strikingly: customers don’t much care. A politician who proposed to dump professional licensing would face outrage, and lose. A politician who proposed to post public track records would instead lose by being too boring.

On reflection, these examples are part of a larger pattern. For example, I’ve mentioned before that a media firm had a project to collect track records of media pundits, but then abandoned the project once it realized that this would reduce reader demand for pundits. Readers are instead told to pick pundits based on their wit, fame, and publication prestige. If readers really wanted pundit track records, some publication would offer them, but readers don’t much care.

Attempts to publish track records of school teachers based on student outcomes have produced mostly opposition. Parents are instead encouraged to rely on personal impressions and the prestige of where the person teaches or went to school. No one even considers doing this for college teachers; we at most survey student satisfaction just after a class ends (and don't even do that right).

Regarding student evaluations, we coordinate greatly to make standard widely accessible tests for deciding who to admit to schools. But we have almost no such measures of students when they leave school for work. Instead of showing employers a standard measure of what students have learned, we tell employers to rely on personal impressions and the prestige of the school from which the student came. Some have suggested making standard what-I-learned tests, but few are interested, including employers.

For researchers like myself, publications and job position are measures of endorsements by prestigious authorities. Citations are a better measure of the long term impact of research on intellectual progress, but citations get much less attention in evaluations of researchers. Academics don't put their citation count on their vita (= resume), and when a reporter decides which researcher to call, or a department decides who to hire, they don't look much at citations. (Yes, I look better by citations than by publications or jobs, and my prestige is based more on the latter.)

Related is the phenomenon of people being more interested in others said to have the potential to achieve X, than in people who have actually achieved X. Related also is the phenomenon of firms being reluctant to use formulaic measures of employee performance that aren’t mediated mostly by subjective boss evaluations.

It seems to me that there are striking common patterns here, and I have in mind a common explanation for them. But I’ll wait to explain that in my next post. Till then, how do you explain these patterns? And what other data do we have on how we treat track records elsewhere?

Added 22Mar: Real estate sales are also technically in the public record, and yet it is hard for customers to collect comparable sales track records for real estate agents, and few seem to care enough to ask for them.

Almost all research into human behavior focuses on particular behaviors. (Yes, not extremely particular, but also not extremely general.) For example, an academic journal article might focus on professional licensing of dentists, incentive contracts for teachers, how Walmart changes small towns, whether diabetes patients take their medicine, how much we spend on xmas presents, or if there are fewer modern wars between democracies. Academics become experts in such particular areas.

After people have read many articles on many particular kinds of human behavior, they often express opinions about larger aggregates of human behavior. They say that government policy tends to favor the rich, that people would be happier with less government, that the young don’t listen enough to the old, that supply and demand is a good first approximation, that people are more selfish than they claim, or that most people do most things with an eye to signaling. Yes, people often express opinions on these broader subjects before they read many articles, and their opinions change suspiciously little as a result of reading many articles. But even so, if asked to justify their more general views academics usually point to a sampling of particular articles.

Much of my intellectual life in the last decade has been spent in the mode of collecting many specific results, and trying to fit them into larger simpler pictures of human behavior. So both I and the academics I’m describing above in essence present themselves as using these many results presented in academic papers about particular human behaviors as data to support their broader inferences about human behavior. But we do almost all of this informally, via our vague impressionistic memories of what has been the gist of the many articles we’ve read, and our intuitions about what more general claims seem how consistent with those particulars.

Of course there is nothing especially wrong with intuitively matching data and theory; it is what we humans evolved to do, and we wouldn’t be such a successful species if we couldn’t at least do it tolerably well sometimes. It takes time and effort to turn complex experiences into precise sharable data sets, and to turn our theoretical intuitions into precise testable formal theories. Such efforts aren’t always worth the bother.

But most of these academic papers on particular human behaviors do in fact pay the bother to substantially formalize their data, their theories, or both. And if it is worth the bother to do this for all of these particular behaviors, it is hard to see why it isn't worth the bother for the broader generalizations we make from them. Thus I propose: let's create formal data sets where the data points are particular categories of human behavior.

To make my proposal clearer let’s for now restrict attention to explaining government regulatory policies. We could create a data set where the datums are particular kinds of products and services that governments now provide, subsidize, tax, advise, restrict, etc. For such datums we could start to collect features about them into a formal data set. Such features could say how long that sort of thing has been going on, how widely it is practiced around the world, how variable has been that practice over space and time, how familiar are ordinary people today with its details, what sort of justifications do people offer for it, what sort of emotional associations do people have with it, how much do we spend on it, and so on. We might also include anything we know about how such things correlate with age, gender, wealth, latitude, etc.

Generalizing to human behavior more broadly, we could collect a data set of particular behaviors, many of which seem puzzling at least to someone. I often post on this blog about puzzling behaviors. Each such category of behaviors could be one or more data points in this data set. And relevant features to code about those behaviors could be drawn from the features we tend to invoke when we try to explain those behaviors. Such as how common is that behavior, how much repeated experience do people have with it, how much do they get to see about the behavior of others, how strong are the emotional associations, how much would it make people look bad to admit to particular motives, and so on.

Now all this is of course much easier said than done. It is a lot of work to look up various papers and summarize their key results as entries in this data set, or just to look at real world behaviors and put them into simple categories. It is also work to think carefully about how to usefully divide up the space of actions and features. First efforts will no doubt get it wrong in part, and have to be partially redone. But this is the sort of work that usually goes into all the academic papers on particular behaviors. Yes it is work, but if those particular efforts are worth the bother, then this should be as well.

As a first cut, I’d suggest just picking some more limited category, such as perhaps government regulations, collecting some plausible data points, making some guesses about what useful features might be, and then just doing a quick survey of some social scientists where they each fill in the data table with their best guesses for data point features. If you ask enough people, you can average out a lot of individual noise, and at least have a data set about what social scientists think are features of items in this area. With this you could start to do some exploratory data analysis, and start to think about what theories might well account for the patterns you see.
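The averaging step in this first-cut survey can be sketched in a few lines: each respondent guesses a feature value for each regulation category, and averaging across respondents cancels much of the individual noise. All category names, the feature, and the numbers below are hypothetical placeholders, not real survey data.

```python
import statistics

# Hypothetical survey: several social scientists each guess one feature
# (say, "how familiar are ordinary people with this?" on a 0-1 scale)
# for each regulation category; we average to get a consensus estimate.
guesses = {
    "occupational licensing":   [0.7, 0.6, 0.8, 0.7],
    "alcohol taxes":            [0.9, 0.8, 0.9, 1.0],
    "drug prescription rules":  [0.5, 0.6, 0.4, 0.5],
}

# Mean across respondents averages out individual noise.
consensus = {item: statistics.mean(vals) for item, vals in guesses.items()}

for item, score in sorted(consensus.items(), key=lambda kv: -kv[1]):
    print(f"{item:25s} {score:.2f}")
```

With enough respondents per cell, the resulting table could feed the kind of exploratory data analysis the paragraph above describes.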

Now one obvious problem with my proposal is that while it looks time consuming and tedious, it isn't obviously impressive. Researchers who specialize in particular areas will complain about your data entries related to their areas, and you won't be able to satisfy them all. So you will end up with a chorus of critics saying your data is all wrong, and your efforts will look too low brow to cow them with your impressive tech. So I can see why this hasn't been done much. Even so, I think this is the data set we need.

Grad students vary in their research autonomy. Some students are very willing to ask for advice and to listen to it carefully, while others put a high priority on generating and pursuing their own research ideas their own way. This varies with personality, in that more independent people pick more independent strategies. It varies over time, in that students tend to start out deferring at first, and then later in their career switch to making more independent choices. It also varies by topic; students defer more in more technical topics, and where topic choices need more supporting infrastructure, such as with lab experiments. It also varies by level of abstraction; students defer more on how to pursue a project than on which project ideas to pursue.

Many of these variations seem roughly explained by near-far theory, in that people defer more when near, and less when far. These variations seem at least plausibly justifiable, though doubts make sense too. Another kind of variation is more puzzling, however: students at top schools seem more deferential than those at lower rank schools.

Top students expect to get lots of advice, and they take it to heart. In contrast, students at lower ranked schools seem determined to generate their own research ideas from deep in their own “soul”. This happens not only for picking a Ph.D. thesis, but even just for picking topics of research papers assigned in classes. Students seem as averse to getting research topic advice as they would be to advice on with whom to fall in love. Not only are they wary of getting research ideas from professors, they even fear that reading academic journals will pollute the purity of their true vision. It seems a moral matter to them.

Of course any one student might be correct that they have a special insight into what topics are neglected by their local professors. But the overall pattern here seems perverse; people who hardly understand the basics of a field see themselves as better qualified to identify feasible interesting research topics than those nearby with higher status, and who have been in the fields for decades.

One reason may be overconfidence; students think their profs deserve to be at a lower rank school more than they themselves do, and so estimate a lower quality difference between themselves and their profs. More data supporting this is that students also seem to accept the relative status ranking of profs at their own school, and so focus most of their attention on the locally top status profs. It is as if each student thinks that they personally have so far been assigned too low of a status, but thinks most others have been correctly assigned.

Another reason may be like our preferring potential to achievement; students try to fulfill the heroic researcher stories they’ve heard, wherein researchers get much credit for embracing ideas early that others come to respect later. Which can make some sense. But these students are trying to do this way too early in their career, and they go way too far with it. Being smarter and knowing more, students at top schools understand this better.

I started my Ph.D. at the age of 34, and Tyler hired me here at GMU at the age of 40. So by my lights Tyler deserves credit for overcoming the age bias. Tyler doesn’t discuss why this bias might exist, but a Stanford history prof explained his theory to me when I was in my early 30s talking to him about a possible PhD. He said that older students are known for working harder and better, but also for being less pliable: they have more of their own ideas about what is interesting and important.

I think that fits with what I’ve heard from others, and have seen for myself, including in myself. People complain that academia builds too little on “real world” experience, and that disciplines are too insular. And older students help with that. But in fact the incentive for each prof in picking students isn’t to solve the wider problems with academia. It is instead to expand an empire by creating intellectual clones of him or herself. And for that selfish goal, older students are worse. My mentors likely feel this way about me, that I worked hard and did interesting stuff, but I was not a good investment for expanding their legacy.

Interestingly this explanation is somewhat the opposite of the usual excuses for age bias in Silicon Valley. There the usual story is that older people won't take as many risks, and that they aren't as creative. But the complaint about older Ph.D.s is exactly that they take too many risks, and that they are too creative. If only they would just do what they are told, and copy their mentors, then their hard work and experience could be more valued.

I find it hard to believe that older workers change their nature this much between tech and academia. Something doesn't add up here. And for what it's worth, I've been personally far more impressed by the tech startups I've known that are staffed by older folks.

Ten years ago today the GMU economics department voted to award me tenure. With that vote, I won my academic gamble. I can’t be sure what my odds reasonably were, so I can’t be sure it was a gamble worth taking. And I’m not sure tenure is overall good for the world. But I am sure that I’m very glad that I achieved tenure.

Many spend part of their tenure dividend on leisure. Many spend part on continuing to gain academic prestige as they did before. Many switch to more senior roles in the academic prestige game. And some spend tenure on riskier research agendas, agendas that are foolish for folks seeking tenure.

Though some may disagree, I see myself as primarily in this last category. And since that would not be possible without tenure, I bow in sincere supplication, and thank my colleagues for this treasured honor. THANK YOU for my tenure.

In the movie "My Big Fat Greek Wedding," when Toula was a little girl, she sat alone in the school cafeteria, frizzy haired, big nosed, and unpopular. The blonde girls at the next table asked her what she was eating, and Toula quietly said "moussaka." The popular girls laughed cruelly, saying "Ewwww, 'moose caca!'" (more)

Imagine that those cruel girls had gone on to tell other kids “Toula says she loves to eat moose caca!” That is how I feel when Noah Smith says:

Why is it that the sciences look like a feminist nirvana compared with the economics profession, which seems to have a built-in bias that prevents women from advancing?

Consider this 2011 blog post by George Mason University economist Robin Hanson. Hanson writes that “gentle, silent rape” of a woman by a man causes less harm than a wife cuckolding her husband:

I [am puzzled] over why our law punishes rape far more than cuckoldry…[M]ost men would rather be raped than cuckolded…Imagine a woman was drugged into unconsciousness and then gently raped, so that she suffered no noticeable physical harm nor any memory of the event, and the rapist tried to keep the event secret…Now compare the two cases, cuckoldry and gentle silent rape.

There was no outcry whatsoever over these remarks, nor any retraction that I could find. (more)

Now I’ve admitted as far back as 2006 that academia, economics included, is biased against women. (Having been in both physics and computer science before, I doubt the situation is much worse in econ.) This one post of mine that Smith points to did induce many negative responses in comments and elsewhere, and of my thousands of blog posts I’d be surprised if much more than a dozen had induced any blog responses by economists whatsoever. And I suggested that we consider that the harms of rape and cuckoldry might be similar; I didn’t claim I knew one to be definitely larger.

But more fundamentally, Noah Smith is plenty smart enough to understand that I was not at all minimizing the harm of rape when I used rape as a reference to ask if other harms might be even bigger. Just as people who accuse others of being like Hitler do not usually intend to praise Hitler, people who compare other harms to rape usually intend to emphasize how big are those other harms, not how small is rape.

But I’m pretty sure Smith knows that. Yet, like the girls who taunted Toula, Smith finds it suits him better to pretend to misunderstand.

Added 4p: My topic was the relative harm of cuckoldry & rape. Noah Smith says that this topic itself is innately offensive to most women, who think cuckoldry to be of such low harm that comparing it with rape suggests rape to be low harm. He is further offended that I would talk on a topic if I knew it might offend in this way. I said his presuming cuckoldry is of very low harm offends the many men who think it very high harm. He disagrees that there are many such men, and would bet on a poll on the subject, but thinks it offensive to make such a poll, and won’t help with that.

Added 10a Sunday: Heartiste has a poll with over 3700 respondents so far on preferring rape or cuckoldry. Express your opinion there, or start a new poll somewhere.

Added Tuesday: Now Noah Smith wonders out loud if I’m a fake nerd, who pretends not to understand political correctness so I can have an excuse to offend people. Cause people so admire nerds that of course everyone wants to look like one …

In this post I’ll talk primarily to people who, like me, lean libertarian. The rest of you can take a break.

Libertarians want to move more products and services from being provided directly by government, to being provided privately. And for those that are provided privately, libertarians want to weaken regulations. These changes would increase liberty.

Libertarians tend to offer arguments that are relatively abstract and theory-based. That is, they focus more on why more liberty is more moral, or why it should in theory give better outcomes. They focus less on showing that liberty has in practice worked out better. When libertarians do focus on data, they tend to be very broad, or randomly specific. That is, they talk about how West Germany is better than East Germany, or South Korea better than North Korea. Or they pick on very specific examples, like regulations limiting eyeglass ads, and leave audiences wondering how cherry-picked are such examples.

It seems to me that libertarians focus too much on trying to argue abstractly that liberty would be better, and not enough on just concretely describing how liberty would be different. Yes for you the abstract arguments seem best; they persuade you plenty, and they bring the most prestige in your circle. But typical libertarians today are a distinct personality type; most people are not like you. Most people just cannot be comfortable with a proposal for change if they cannot imagine it in some detail, and imagine that they'd like that detail. Such people don't need more abstract arguments and examples; they instead need credible concrete descriptions.

True, people have sometimes written fiction set in libertarian settings. But such fiction doesn’t usually come with a careful analysis of why one should believe in its many details. Yes, part of the attraction of liberty is that it frees up people to innovate in ways that one can’t anticipate in advance. But that doesn’t mean that we can’t go a long way to better describe a world of more liberty.

On reflection, I realize that when I try to imagine more liberty, I mostly draw on a limited set of iconic comparisons, such as comparing airlines, trucks, and phones before and after US deregulation, or comparing public to private schools and mail in the US. Alas, we and our audiences should worry that we cherry-pick such examples to support conclusions we like.

We should be able to do much better than this. By now there are vast literatures discussing many industries in many places before and after regulation or deregulation, and describing specific times and places where certain products and services were provided directly by governments, or provided privately. From this vast literature we should be able to identify many concrete patterns and "stylized facts" about how government-provision and heavy-regulation tends to change products and services.

I recall these suggestions for typical features of industries with more liberty:

Less “gold-plating” in materials and methods

More product variety, including more low quality versions

Faster innovation and product cycles

Fewer guarantees to workers or customers

Price, features vary more with customer features

Workers have less school and seniority

Less overhead spend on paperwork

more?

Some people should work to extract patterns like these from our vast related literatures – I’ve looked, and there just aren’t many such summaries today. With such patterns in hand, we would be in a much better position to credibly describe how familiar products and services would concretely change if we were to provide them privately, or to regulate them less. And such credible concrete descriptions might allow many more people to become comfortable with endorsing such expansions of liberty.

This sort of project seems well within the abilities of the median grad student. It doesn't require great creativity or technical skills. Instead, it just requires methodically surveying and summarizing related literatures. Perhaps some libertarian students should shy away from it in hopes of impressing via more difficult methods. But surely there must be other students for whom this sort of project is a good match.

Long ago, I first believed in religion as a young kid because I believed what I was told. Then I also believed because religious claims seemed to explain the strong emotions that religious contexts induced. This is how religion works – you feel strong emotions due to candles, buildings, clothes, music, well crafted and button-pushing words, charismatic empathetic leaders, social support and status. And if respected leaders and supporters around you then claim that your emotions are caused by God, well that makes sense. Even though many religious claims are transparently crazy, at least to people who well understand the world, they are easy for the young or inexpert to accept.

Recently while watching an emotional movie with political and moral overtones, I was reminded that the same is true for art. Art can make us feel strong emotions via all the same mechanisms. When high status artists and art supporters around us tell us these emotions are caused by our recognizing the emotional truth of art’s moral, political, and legal claims, that can make sense to us. Yet most of the channels by which art makes us feel emotions are irrelevant to the truth of its key claims. When we come to see this, we usually make excuses and tell ourselves that we aren’t fooled by all that other stuff; we really are just evaluating only the key moral/political/legal arguments. But the many correlations we see between features of art and who is persuaded when make it hard to believe this applies to most people most of the time.

The same likely also holds for essays like this one, or academic papers. While such writings may contain logical arguments, they also transmit writing styles, author charisma, status, and impressiveness, and clues about who supports or opposes them. You might think that you correct for all those influences when you read such writings and evaluate their claims, but the patterns above in religion and art suggest this is unlikely. The fact that people aren't very interested in the accuracy of their pundits suggests we usually give a high priority to presentation style.

Could we do better? On subjects that have implications for future observations we could use prediction markets. But what about other subjects? Well, we might try to control for presentation variation by having a group of neutral writers rewrite common arguments in a standard style. That is, a single neutral writer could present all the different arguments on some subject, all using the same writing style. Readers of such presentations would have a better chance of drawing conclusions on each subject based on the logic of arguments, instead of writing styles. The fact that we aren't very interested in this sort of presentation suggests that we aren't very interested in reducing the influence of other writing-style-related factors.

[Tarot card] readers claim to be able to describe a person’s life, his problems, hopes and fears, his personality and even his future. (more)

I recently watched a demonstration of Tarot card reading. The reader threw out various interpretations of the cards she placed, in terms of the subject's personality and life, and watched the subject carefully for reactions, moving the interpretation closer to options where the subject seemed more engaged. Though the subject was a skeptic, she admitted to finding the experience quite compelling.

Contrast such life readings to school career counselors. Economists have long been puzzled by the lack of student interest in career info. Career counselors usually refer to statistics about the income or graduation rates of broad categories of people given certain types of careers, colleges, or majors. Such advice may be evidence-based but it seems far less compelling to students. It is not connected to salient recent personal experiences of the subjects, or to outcomes in which subjects are very emotionally engaged. The advice is clear but uncertain, in contrast to the certainty and ambiguity of Tarot readings.

It seems obvious to me that many students would be more engaged by more Tarot-like career counseling. It also seems obvious that many parents and other citizens would loudly object, as this would be seen as unscientific and lower the status of this school, at least among elites, even if the process just took on the appearance of Tarot readings but mainly had the usual career counseling content.

The high status of science seems to push many people to have less compelling and engaging stories of their lives, even if such stories are more accurate.