Monthly Archives: February 2018

Many firms fail to pass bad news up the management chain, and suffer as a result, even though simple fixes have long been known:

The Wall Street Journal placed the blame for the “rot at GE” on former CEO Jeffrey Immelt’s “success theater,” pointing to what analysts and insiders said was a history of selectively positive projections, a culture of overconfidence and a disinterest in hearing or delivering bad news. … The article puts GE well out of its usual role as management exemplar. And it shines a light on a problem endemic to corporate America, leadership experts say. People naturally avoid conflict and fear delivering bad news. But in professional workplaces where a can-do attitude is valued above all else, and fears about job security remain common, getting unvarnished feedback and speaking candidly can be especially hard. …

So how can leaders avoid a culture of “success theater?” … They have to model the behavior, being realistic about goals and forecasts and candid when things go wrong. They should host town halls where employees can speak up with criticism, structuring them so bad news can flow to the top. For instance, he recommends getting respected mid-level managers to first interview lower-level employees about what’s not working to make sure tough subjects are aired. …

Doing that is harder than it sounds, making it critical for leaders to create systemic ways to offer feedback, rather than just talking about it. She tells the story of a former eBay manager who would leave a locked orange box near the office bathrooms where people could leave critical questions. He would later read them aloud in meetings — with someone else unlocking the box to prove he hadn’t edited its contents — hostile questions and all. “People never trusted anything was really anonymous except paper,” she said. “He did it week in and week out.”

When she worked at Google, where she led online sales and operations for AdSense, YouTube and Doubleclick, she had a crystal statue she called the “I was wrong, you were right” statue that she’d hand out to colleagues and direct reports. (more)

Consider what signal a firm sends by NOT regularly reading the contents of locked anonymous bad news boxes at staff meetings. They in effect admit that they aren’t willing to pay a small cost to overcome a big problem, if that interferes with the usual political games. You might think investors would see this as a big red flag, but in fact they hardly care.

I’m not sure how exactly to interpret this equilibrium, but it is clearly bad news for prediction markets in firms. Such markets are also sold as helping firms to uncover useful bad news. If firms don’t do easier, simpler things to learn bad news, why should we expect them to do more complex, expensive things?

For millennia, we humans have shown off our intelligence via complicated arguments and large vocabularies, health via sport achievement, heavy drink, and long hours, and wealth via expensive clothes, houses, trips, etc. Today we appear to have more efficient signaling substitutes, such as IQ tests, medical health tests, and bank statements. Yet we continue to show off in the old ways, and rarely substitute such new ways. Why?

One explanation is inertia. Signaling equilibria require complex coordination, and those who try to change them via deviations can seem non-conformist and socially clueless. Another explanation is hypocrisy. As we discuss in our new book, The Elephant in the Brain, ancient and continuing norms against bragging push us to find plausible deniability for our brags. We can pretend that big vocabularies help us convey info, that sports are just fun, and that expensive clothes, etc. are prettier or more comfortable. It is much harder to find excuses to wave around your IQ test or bank statement for others to see.

Now consider these comments by Tyler Cowen on Bryan Caplan’s new book The Case Against Education:

Bryan’s strangest assumption, namely a sociologically-rooted, actually anti-economics “conformity is stronger than you think” argument, which Bryan uses to assert the status quo will continue more or less indefinitely. It won’t. To the extent Bryan is correct (and that you can debate, but at least he is more correct than most people in the educational establishment will let on), competency-based learning and changes in employer behavior will in fact bring about a new equilibrium…not quickly, but certainly in well under two decades.

And what about on-line education? Well, a lot of students don’t like it because they have to actually work on their own and pay attention. To the extent education really is just signaling, that should give on-line options a brighter future all the more. But not in the Caplanian world view, as conformity serves once again as an intervening factor. For better or worse, Bryan’s book subverts economics as a science at least as much as it does education. Bryan of course is smart enough to see the trade-offs here, and he knows if the standard model of economic competition were allowed to reign supreme, we would (even with subsidies, relative to those subsidies) tend to see strong moves toward relatively efficient means of signaling, if only through changes in the relative sizes of institutions.

Tyler suggests that Bryan’s views imply competency-based learning and on-line education are more efficient signals, and so should win a market competition for customers. Yet I don’t see it. Yes, such approaches may let some learn more faster, and signal what they have learned. But Bryan and I see school as less about learning.

Both competency-based learning and on-line education divorce learning from its usual social conformity context. You can use them to learn what you want when you want, and then to prove what you’ve learned. You don’t have to commit to and keep up with a standard plan of what to learn when shared by a large cohort, nor be visibly compared to this cohort.

Yes, such variations may let one better show initiative, independence, creativity, and self-actualization. And yes, we give lip service to admiring such features. But employers are not usually that eager to see such features in their employees. The usual learning plan, in contrast, is much more like a typical workplace, where workers have less freedom to choose their projects, must coordinate plans closely, and must deal with office politics and conformity pressures. It seems to me that success in the usual schooling plans works better as a signal of future workplace performance, and so would not be outcompeted by competency-based learning and on-line education, even if they let you learn some things faster, and even if change were easier than it is.

The outcomes within any space-time region can be seen as resulting from 1) preferences of various actors able to influence the universe in that region, 2) absolute and relative power and influence of those actors, and 3) constraints imposed by the universe. Changes in outcomes across regions result from changes in these factors.

While you might mostly approve of changes resulting from changing constraints, you might worry more about changes due to changing values and influence. That is, you likely prefer to see more influence by values closer to yours. Unfortunately, the consistent historical trend has been for values to drift over time, increasing the distance between random future and current values. As this trend looks like a random walk, we see no obvious limit to how far values can drift. So if the value you place on the values of others falls rapidly enough with the distance between values, you should expect long term future values to be very wrong.
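The random walk claim can be illustrated with a small simulation. This is a sketch under loudly-labeled assumptions: the one-dimensional walk, unit step sizes, and trial counts are arbitrary stand-ins, not a model of how real values change. It only shows the qualitative point that expected drift distance keeps growing with time, with no obvious limit.

```python
import random

def drift_distance(steps, trials=2000):
    """Average absolute distance from the starting point of a simple
    one-dimensional random walk after a given number of steps."""
    total = 0.0
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos += random.choice((-1, 1))
        total += abs(pos)
    return total / trials

# Expected distance grows roughly with the square root of the number
# of steps, and has no fixed upper bound: longer horizons mean more drift.
for steps in (100, 400, 1600):
    print(steps, round(drift_distance(steps), 1))
```

Quadrupling the horizon roughly doubles the expected drift distance, which is the sense in which "we see no obvious limit to how far values can drift."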

What influences value change?

Inertia – The more existing values are tied to important entrenched systems, the less they change.
Growth – On average, over time civilization collects more total influence over most everything.
Competition – If some values consistently win key competitive contests, those values become more common.
Influence Drift – Many processes that change the world produce random drift in agent influence.
Internal Drift – Some creatures, e.g., humans, have values that drift internally in complex ways.
Culture Drift – Some creatures, e.g., humans, have values that change together in complex ways.
Context – Many of the above processes depend on other factors, such as technology, wealth, a stable sun, etc.

For many of the above processes, rates of change are roughly proportional to overall social rates of change. As these rates of change have increased over time, we should expect faster future change. Thus you should expect values to drift faster in the future than they did in the past, leading faster to wrong values. Also, people are living longer now than they did in the past. So while past people didn’t live long enough to see changes big enough to greatly bother them, future people may live to see much more change.

Most increases in the rates of change have been concentrated in a few sudden large jumps (associated with the culture, farmer, and industry transitions). As a result, you should expect that rates of change may soon increase greatly. Value drift may continue at past rates until it suddenly goes much faster.

Perhaps you discount the future rapidly, or perhaps the value you place on other values falls slowly with value distance. In these cases value drift may not disturb you much. Otherwise, the situation described above may seem pretty dire. Even if previous generations had to accept the near inevitability of value drift, you might not accept it now. You may be willing to reach for difficult and dangerous changes that could remake the whole situation. Such as perhaps a world government. Personally I see that move as too hard and dangerous for now, but I could understand if you disagree.

The people today who seem most concerned about value drift also seem to be especially concerned about humans or ems being replaced by other forms of artificial intelligence. Many such people are also concerned about a “foom” scenario of a large and sudden influence drift: one initially small computer system suddenly becomes able to grow far faster than the rest of the world put together, allowing it to quickly take over the world.

To me, foom seems unlikely: it posits an innovation that is extremely lumpy compared to historical experience, and in addition posits an unusually high difficulty of copying or complementing this innovation. Historically, innovation value has been distributed with a long thin tail: most realized value comes from many small innovations, but we sometimes see lumpier innovations. (Alpha Zero seems only weak evidence on the distribution of AI lumpiness.) The past history of growth rates increases suggests that within a few centuries we may see something, perhaps a very lumpy innovation, that causes a growth rate jump comparable in size to the largest jumps we’ve ever seen, such as at the origins of life, culture, farming, and industry. However, as over history the ease of copying and complementing such innovations has been increasing, it seems unlikely that copying and complementing will suddenly get much harder.

While foom seems unlikely, it does seem likely that within a few centuries we will develop machines that can outcompete biological humans for most all jobs. (Such machines might also outcompete ems for jobs, though that outcome is much less clear.) The ability to make such machines seems by itself sufficient to cause a growth rate increase comparable to the other largest historical jumps. Thus the next big jump in growth rates need not be associated with a very lumpy innovation. And in the most natural such scenarios, copying and complementing remain relatively easy.

However, while I expect machines that outcompete humans for jobs, I don’t see how that greatly increases the problem of value drift. Human cultural plasticity already ensures that humans are capable of expressing a very wide range of values. I see no obvious limits there. Genetic engineering will allow more changes to humans. Ems inherit human plasticity, and may add even more via direct brain modifications.

In principle, non-em-based artificial intelligence is capable of expressing the entire space of possible values. But in practice, in the shorter run, such AIs will take on social roles near humans, and roles that humans once occupied. This should force AIs to express pretty human-like values. As Steven Pinker says:

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.

If Pinker is right, the main AI risk mediated by AI values comes from AI value drift that happens after humans (or ems) no longer exercise such detailed frequent oversight.

It may be possible to create competitive AIs with protected values, i.e., so that parts where values are coded are small, modular, redundantly stored, and insulated from changes to the rest of the system. If so, such AIs may suffer much less from internal drift and cultural drift. Even so, the values of AIs with protected values should still drift due to influence drift and competition.

Thus I don’t see why people concerned with value drift should be especially focused on AI. Yes, AI may accompany faster change, and faster change can make value drift worse for people with intermediate discount rates. (Though it seems to me that altruistic discount rates should scale with actual rates of change, not with arbitrary external clocks.)

Yes, AI offers more prospects for protected values, and perhaps also for creating a world/universe government capable of preventing influence drift and competition. But in these cases if you are concerned about value drift, your real concerns are about rates of change and world government, not AI per se. Even the foom scenario just temporarily increases the rate of influence drift.

Your real problem is that you want long term stability in a universe that more naturally changes. Someday we may be able to coordinate to overrule the universe on this. But I doubt we are close enough to even consider that today. To quote a famous prayer:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

For now value drift seems one of those possibly lamentable facts of life that we cannot change.

Recently I posted on how many seek spiritual insight via cutting the tendency of their minds to wander, yet some like Scott Alexander fear ems with a reduced tendency to mind wandering because they’d have less moral value. On twitter Scott clarified that he doesn’t mind modest cuts in mind wandering; what he fears is extreme cuts. And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

On nature preserves, some fear eventually losing all of wild nature, but when arguing for any particular development others say we need new things and we still have plenty of nature. On military spending, some say the world is peaceful and we have many things we’d rather spend money on, while others say that societies who do not remain militarily vigilant are eventually conquered. On increasing inequality some say that high enough inequality must eventually result in inadequate human capital investments and destructive revolutions, while others say there’s little prospect of revolution now and inequality has historically only fallen much in big disasters such as famine, war, and state collapse. On value drift, some say it seems right to let each new generation choose its values, while others say a random walk in values across generations must eventually drift very far from current values.

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

Third, our ability to foresee the future rapidly declines with time. The more other things that may happen between today and some future date, the harder it is to foresee what may happen at that future date. We should be increasingly careful about the inferences we draw about longer terms.

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Someday we may be able to create brain emulations (ems), and someday later we may understand them sufficiently to allow substantial modifications to them. Many have expressed concern that competition for efficient em workers might then turn ems into inhuman creatures of little moral worth. This might happen via reductions of brain systems, features, and activities that are distinctly human but that contribute less to work effectiveness. For example Scott Alexander fears loss of moral value due to “a very powerful ability to focus the brain on the task at hand” and ems “neurologically incapable of having their minds drift off while on the job”.

The default mode network is active during passive rest and mind-wandering. Mind-wandering usually involves thinking about others, thinking about one’s self, remembering the past, and envisioning the future. … becomes activated within an order of a fraction of a second after participants finish a task. … deactivate during external goal-oriented tasks such as visual attention or cognitive working memory tasks. … The brain’s energy consumption is increased by less than 5% of its baseline energy consumption while performing a focused mental task. … The default mode network is known to be involved in many seemingly different functions:

It is the neurological basis for the self:

Autobiographical information: Memories of collection of events and facts about one’s self
Self-reference: Referring to traits and descriptions of one’s self
Emotion of one’s self: Reflecting about one’s own emotional state

Thinking about others:

Theory of Mind: Thinking about the thoughts of others and what they might or might not know
Emotions of others: Understanding the emotions of other people and empathizing with their feelings
Moral reasoning: Determining just and unjust results of an action
Social evaluations: Good-bad attitude judgments about social concepts
Social categories: Reflecting on important social characteristics and status of a group

Remembering the past and thinking about the future:

Remembering the past: Recalling events that happened in the past
Imagining the future: Envisioning events that might happen in the future
Episodic memory: Detailed memory related to specific events in time
Story comprehension: Understanding and remembering a narrative

In our book The Elephant in the Brain, we say that key tasks for our distant ancestors were tracking how others saw them, watching for ways others might accuse them of norm violations, and managing stories of their motives and plans to help them defend against such accusations. The difficulty of this task was a big reason humans had such big brains. So it made sense to design our brains to work on such tasks in spare moments. However, if ems could be productive workers even with a reduced capacity for managing their social image, it might make sense to design ems to spend a lot less time and energy ruminating on their image.

Psychologists and neuroscientists now acknowledge that the human mind tends to wander. … Subjects reported being lost in thought 46.9 percent of the time. … People are consistently less happy when their minds wander, even when the contents of their thoughts are pleasant. … The wandering mind has been correlated with activity in the … “default mode” or “resting state” network (DMN). … Activity in the DMN decreases when subjects concentrate on tasks of the sort employed in most neuroimaging experiments.

The DMN has also been linked with our capacity for “self-representation.” … [it] is more engaged when we make such judgements of relevance about ourselves, as opposed to making them about other people. It also tends to be more active when we evaluate a scene from a first person point of view. … Generally speaking, to pay attention outwardly reduces activity in the [DMN], while thinking about oneself increases it. …

Mindfulness and loving-kindness meditation also decrease activity in the DMN – and the effect is most pronounced among experienced meditators. … Expert meditators … judge the intensity of an unpleasant stimulus the same but find it to be less unpleasant. They also show reduced activity in regions associated with anxiety while anticipating the onset of pain. … Mindfulness reduces both the unpleasantness and intensity of noxious stimuli. …

There is an enormous difference between being hostage to one’s thoughts and being freely and nonjudgmentally aware of life in the present. To make this shift is to interrupt the process of rumination and reactivity that often keep us so desperately at odds with ourselves and with other people. … Meditation is simply the ability to stop suffering in many of the usual ways, if only for a few moments at a time. … The deepest goal of spirituality is freedom from the illusion of the self. (pp.119-123)

I see a big conflict here. On the one hand, many are concerned that competition could destroy moral value by cutting away distinctively human features of em brains, and the default net seems a prime candidate for cutting. On the other hand, many see meditation as a key to spiritual insight, one of the highest human callings, and a key task in meditation is cutting the influence of the default net. Ems with a reduced default net could more easily focus, be mindful, see the illusion of the self, and feel more at peace and less anxious about their social image. So which is it, do such ems achieve our highest spiritual ideals, or are they empty shells mostly devoid of human value? Can’t be both, right?

By the way, I was reading Harris because he and I will record a podcast Feb 21 in Denver.

It is a standard trope of fiction that people often get angry when they suffer life outcomes well below what they see as their justified expectations. Such sore losers are tempted to retaliate against the individuals and institutions they blame for their loss, causing increasing damage until others agree to fix the unfairness.

Most outcomes, like income or fame, are distributed with mean outcomes well above median outcomes. As a result, well over half of everyone gets an outcome below what they could have reasonably expected. So if this sore loser trope were true, there’d be a whole lot of angry folks causing damage. Maybe even most people would be this angry. It is hard to see how civilization could function here. This scenario is often hoped-for by those who seek dramatic revolutions to fix large scale social injustices.
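The claim that well over half of everyone falls below the mean is easy to check with a quick simulation. This is only a sketch: the lognormal shape is an assumed stand-in for right-skewed outcomes like income or fame, not data about any actual distribution.

```python
import random

def share_below_mean(sigma=1.0, n=100_000):
    """Share of draws from a lognormal 'outcome' distribution that
    fall below that sample's mean outcome."""
    draws = [random.lognormvariate(0.0, sigma) for _ in range(n)]
    mean = sum(draws) / n
    return sum(1 for d in draws if d < mean) / n

# With right-skewed outcomes, a clear majority lands below the
# average outcome, even though everyone might have "expected" it.
print(round(share_below_mean(), 3))
```

For this assumed distribution roughly two-thirds of draws fall below the mean, and the more skewed the distribution, the larger that majority.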

Actually, however, even though most people might plausibly see themselves as unfairly assigned to be losers, few become angry enough to cause much damage. Oh, most people will have resentments and complaints, and this may lead on occasion to mild destruction, but most people are mostly peaceful. In the words of the old song, while they may not get what they want, they mostly get what they need.

Not only do most people achieve much less than the average outcomes, they achieve far less than the average outcomes that they see in media and fiction. Furthermore, most people eventually realize that the world is often quite hypocritical about the qualities it rewards. That is, early in life people are told that certain admired types of efforts and qualities are the ones with the best chance to lead to high outcomes. But later people learn that in fact other, less cooperative or fair, strategies are often rewarded more. They may thus reasonably conclude that the game was rigged, and that they failed in part because they were fooled for too long.

Given all this, we should be somewhat surprised, and quite grateful, to live in such a calm world. Most people fall below the standard of success set by average outcomes, and far below that set by typical media-visible outcomes. And they learn that their losses are caused in part by winners taking illicit strategies and lying to them about the rewards to admired strategies. Yet contrary to the common fictional trope, this does not induce them to angrily try to burn down our shared house of civilization.

So dear mostly-calm near-median person, I respectfully salute you. Without you and your stoic acceptance, civilization would not be possible. Perhaps I should salute men a bit more, as they are more prone to violent anger, and suffer higher variance and thus higher mean to median outcome ratios. And perhaps the old a bit more too, as they see more of the world’s hypocrisy, and can hope much less for success via big future reversals. But mostly, I salute you all. Humans are indeed amazing creatures.
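The link between variance and the mean-to-median ratio can be made concrete with a worked example. This is a sketch assuming a lognormal outcome distribution; real outcome distributions differ in detail, but the qualitative point carries over to other right-skewed shapes.

```python
import math

# For a lognormal distribution with parameters mu and sigma,
# median = exp(mu) while mean = exp(mu + sigma**2 / 2), so the
# mean-to-median ratio exp(sigma**2 / 2) grows with sigma:
# higher variance pushes the average further above the median.
for sigma in (0.5, 1.0, 2.0):
    print(sigma, round(math.exp(sigma**2 / 2), 2))
```

So under this assumption a group facing higher outcome variance sees a notably larger gap between typical (median) and average outcomes, which is the sense in which higher variance means a higher mean-to-median ratio.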

People keep suggesting that I can’t possibly present myself as an expert on the future if I’m not familiar with their favorite science fiction (sf). I say that sf mostly pursues other purposes and rarely tries much to present realistic futures. But I figure I should illustrate my claim with concrete examples from time to time. Which brings us to Altered Carbon, a ten episode sf series just out on Netflix, based on a 2002 novel. I’ve watched the series, and read the novel and its two sequels.

Altered Carbon’s key tech premise is a small “stack” which can sit next to a human brain collecting and continually updating a digital representation of that brain’s full mental state. This state can also be transferred into the rest of that brain, copied to other stacks, or placed and run in an android body or a virtual reality. Thus stacks allow something much like ems who can move between bodies.

But the universe of Altered Carbon looks very different from my description of the Age of Em. Set many centuries in future, our descendants have colonized many star systems. Technological change then is very slow; someone revived after sleeping for centuries is familiar with almost all the tech they see, and they remain state-of-the-art at their job. While everyone is given a stack as a baby, almost all jobs are done by ordinary humans, most of whom are rather poor and still in their original body, the only body they’ll ever have. Few have any interest in living in virtual reality, which is shown as cheap, comfortable, and realistic; they’d rather die. There’s also little interest in noticeably-non-human android bodies, which could plausibly be pretty cheap.

Regarding getting new very-human-like physical bodies, some have religious objections, many are disinterested, but most are just too poor. So most stacks are actually never used. Stacks can insure against accidents that kill a body but don’t hurt the stack. Yet while it should be cheap and easy to back up stack data periodically, inexplicably only rich folks do that.

It is very illegal for one person to have more than one stack running at a time. Crime is often punished by taking away the criminal’s body, which creates a limited supply of bodies for others to rent. Very human-like clone and android bodies are also available, but are very expensive. Over the centuries some have become very rich and long-lived “meths”, paying for new bodies as needed. Meths run everything, and are shown as inhumanly immoral, often entertaining themselves by killing poor people, often via sex acts. Our hero was once part of a failed revolution to stop meths via a virus that kills anyone with a century of subjective experience.

Oh, and there have long been fully human level AIs who are mainly side characters that hardly matter to this world. I’ll ignore them, as criticizing the scenario on these grounds is way too easy.

Now my analysis says that there’d be an enormous economic demand for copies of ems, who can do most all jobs via virtual reality or android bodies. If very human-like physical bodies are too expensive, the economy would just skip them. If allowed, ems would quickly take over all work, most activity would be crammed into a few dense cities, and the economy could double monthly. Yet while war is common in the universe of Altered Carbon, and spread across many star systems, no place ever adopts the huge winning strategy of unleashing such an em economy and its associated military power. While we see characters who get away with violating the rule against copying for long periods to seek minor local advantages, no one ever tries to do this to get vastly rich, or to win a war. No one even seems aware of the possibility.

Even ignoring the AI bit, I see no minor modification to make this into a realistic future scenario. It is made more to be a morality play, to help you feel righteous indignation at those damn rich folks who think they can just live forever by working hard and saving their money over centuries. If there are ever poor humans who can’t afford to live forever in very human-like bodies, even if they could easily afford android or virtual immortality, well then both the rich and the long-lived should all burn! So you can feel morally virtuous watching hour after hour of graphic sex and violence toward that end. As it happens, hand-to-hand combat, typically producing big spurts of blood, and often among nudes, is how most conflicts get handled in this universe. Enjoy!