Tag Archives: Disaster

I’m a big fan of Nick Bostrom; he is way better than almost all other future analysts I’ve seen. He thinks carefully and writes well. A consistent theme of Bostrom’s over the years has been to point out future problems where more governance could help. His latest paper, The Vulnerable World Hypothesis, fits in this theme:

Consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? … Maybe … ban all research in nuclear physics … [Or] eliminate all glass, metal, or sources of electrical current. … Societies might split into factions waging civil wars with nuclear weapons, … end only when … nobody is able any longer to put together a bomb … from stored materials or the scrap of city ruins. …

The ​vulnerable world hypothesis​ [VWH] … is that there is some level of technology at which civilization almost certainly gets destroyed unless … civilization sufficiently exits the … world order characterized by … limited capacity for preventive policing​, … limited capacity for global governance.​ … [and] diverse motivations​. … It is ​not​ a primary purpose of this paper to argue VWH is true. …

Four types of civilizational vulnerability. … in the “easy nukes” scenario, it becomes too easy for individuals or small groups to cause mass destruction. … a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction. … counterfactual in which a preemptive counterforce [nuclear] strike is more feasible. … the problem of global warming [could] be far more dire … if the atmosphere had been susceptible to ignition by a nuclear detonation, and if this fact had been relatively easy to overlook …

two possible ways of achieving stabilization: Create the capacity for extremely effective preventive policing.​ … and create the capacity for strong global governance. … While some possible vulnerabilities can be stabilized with preventive policing alone, and some other vulnerabilities can be stabilized with global governance alone, there are some that would require both. …

It goes without saying there are great difficulties, and also very serious potential downsides, in seeking progress towards (a) and (b). In this paper, we will say little about the difficulties and almost nothing about the potential downsides—in part because these are already rather well known and widely appreciated.

I take issue a bit with this last statement. The vast literature on governance shows both many potential advantages of and problems with having more relative to less governance. It is good to try to extend this literature into futuristic considerations, by taking a wider, longer-term view. But that should include looking for both novel upsides and downsides. It is fine for Bostrom to seek not-yet-appreciated upsides, but we should also seek not-yet-appreciated downsides, such as those I’ve mentioned in two recent posts.

While Bostrom doesn’t in his paper claim that our world is in fact vulnerable, he released his paper at a time when many folks in the tech world have been claiming that changing tech is causing our world to in fact become more vulnerable over time to analogies of his “easy nukes” scenario. Such people warn that it is becoming easier for smaller groups and individuals to do more damage to the world via guns, bombs, poison, germs, planes, computer hacking, and financial crashes. And Bostrom’s book Superintelligence can be seen as such a warning. But I’m skeptical, and have yet to see anyone show a data series displaying such a trend for any of these harms.

More generally, I worry that “hard cases make bad law”. Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy. It may be very hard to weigh extreme but unlikely scenarios suggesting more governance against extreme but unlikely scenarios suggesting less governance. Perhaps the best lesson is that we should make it a priority to improve governance capacities, so we can better gain upsides without paying downsides. I’ve been working on this for decades.

I also worry that existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.

The power of an individual to kill others has not increased over time. To restate that: An individual — a person working alone today — can’t kill more people than say someone living 200 or 2,000 years ago.

Plotting Dupuy’s Theoretical Lethality Index of weapons shows a nice (?) historical superexponential rise up to H-bombs… but TLI per dollar puts the AK-47 and nukes roughly on par, and the heavier weapons require big support teams. I have not found any strong trend in damage done by small actors.

If your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn’t turn to suicide. Your ability to do this is aided by the fact that your mind is internally divided; while in many months part of you wants to commit suicide, it is quite rare for a majority coalition of your mind to support such an action.

In the movie The Lord of the Rings, Denethor, Steward of Gondor, is in a suicidal mood when enemies attack the city. If not for the heroics of Gandalf, that mood might have ended his city. In the movie Dr. Strangelove, the crazed General Ripper “believes the Soviets have been using fluoridation of the American water supplies to pollute the ‘precious bodily fluids’ of Americans” and orders planes to start a nuclear attack, which ends badly. In many mass suicides through history, powerful leaders have been able to make whole communities commit suicide.

In a nuclear MAD situation, a nation can last unbombed only as long as no one who can “push the button” falls into a suicidal mood. Or into one of a thousand other moods that in effect lead to misjudgments and refusals to listen to reason, and that eventually lead to the same end. This is a serious problem for any nuclear nation that hopes to survive for a time long relative to the timescale on which moods change, divided by the number of people who can push the button. When there are powers large enough that their suicide could take down civilization, then the risk of power suicide becomes a risk of civilization suicide. Even if the risk is low in any one year, over the long run this becomes a serious risk.

This is a big problem for world or universal government. We today coordinate on the scale of firms, cities, nations, and international organizations. However, the fact that we also fail to coordinate to deal with many large problems on these scales shows that we face severe limits in our coordination abilities. We also face many problems that could be aided by coordination via world government, and future civilizations will be similarly tempted by the coordination powers of central governments.

But, alas, central power risks central suicide, either done directly on purpose or as an indirect consequence of other broken thinking. In contrast, in a sufficiently decentralized world, when one power commits suicide, its place and resources tend to be taken by other powers who have not committed suicide. Competition and selection are a robust long-term solution to suicide, in a way that centralized governance is not.

This is my tentative best guess for the largest future filter that we face, and that other alien civilizations have faced. The temptation to form central governments and other governance mechanisms is strong, to solve immediate coordination problems, to help powerful interests gain advantages via the capture of such central powers, and to slake the ambition thirst of those who would lead such powers. Over long periods this will seem to have been a wise choice, until suicide ends it all and no one is left to say “I told you so.”

Divide the trillions of future years over which we want to last by the increasingly short periods over which moods and sanity change, and you see a serious problem, made worse by the lack of a sufficiently long view to make us care enough to solve it. For example, if the suicide mood of a universal government changed once a second, then it needs about 10^20 non-suicide moods in a row to last a trillion years, as a trillion years is about 3×10^19 seconds.
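
To make those numbers concrete, here is a minimal back-of-the-envelope sketch in Python; the per-period risk used below is a made-up placeholder, not an estimate:

```python
import math

# Back-of-the-envelope numbers for the mood-survival argument above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600                  # about 3.16e7

months_in_83_years = 83 * 12                           # 996, i.e. ~1000 mood-months
seconds_in_trillion_years = 1e12 * SECONDS_PER_YEAR    # ~3.2e19, roughly 10^20

# If each period carries an independent chance p of a fatal mood (p here is a
# hypothetical placeholder; with many button-pushers it scales up with their
# number), then surviving n periods in a row has chance (1 - p)**n.
p = 1e-6
n = seconds_in_trillion_years
survival = math.exp(n * math.log1p(-p))                # (1-p)**n, computed in logs
print(months_in_83_years)                              # 996
print(f"{seconds_in_trillion_years:.1e}")              # ~3.2e19
print(survival)                                        # underflows to 0.0: doomed
```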

Twenty years ago today, I introduced the phrase “The Great Filter” in an essay on my personal website. Today Google says 300,000 web pages use this phrase, and 4.3% of those mention my name. This essay has 45 academic citations, and my related math paper has 17 cites.

These citations are a bit over 1% of my total citations, but this phrase accounts for 5% of my press coverage. This press is mostly dumb luck. I happened to coin a phrase on a topic of growing and wide interest, yet others more prestigious than I didn’t bother (as they often do) to replace it with another phrase that would trace back to them.

I have mixed feelings about writing the paper. Back then I was defying the usual academic rule to focus narrowly. I was right that it is possible to contribute to many more different areas than most academics do. But what I didn’t fully realize is that to academic economists non-econ publications don’t exist, and that publication is only the first step to academic influence. If you aren’t around in an area to keep publishing, giving talks, going to meetings, doing referee reports, etc., academics tend to correctly decide that you are politically powerless and thus you and your work can safely be ignored.

So I’m mostly ignored by the academics who’ve continued in this area – don’t get grants, students, or invitations to give talks, to comment on paper drafts, or to referee papers, grants, books, etc. The only time I’ve ever been invited to talk on the subject was a TEDx talk a few years ago. (And I’ve given over 350 talks in my career.) But the worst scenario of being ignored is that it is as if your paper never existed, and so you shouldn’t have bothered writing it. Thankfully I have avoided that outcome, as some of my insights have been taken to heart, both academically and socially. People now accept that finding independent alien life simpler than us would be bad news, that the very hard filter steps should be roughly equally spaced in our history, and that the great filter gives a reason to worry about humanity’s future prospects.

Many people have been working hard for a long time to develop tech that helps to read people’s feelings. They are working on ways to read facial expressions, gazes, word choices, tones of voice, sweat, skin conductance, gait, nervous habits, and many other body features and motions. Over the coming years, we should expect this tech to consistently get cheaper and better at reading subtler feelings of more people in more kinds of contexts, more reliably.

Much of this tech will be involuntary. While your permission and assistance may help such tech to read you better, others will often be able to read you using tech that they control, on their persons or in the buildings around you. They can use tech integrated with other complex systems, and thus hard to monitor and regulate. Yes, some defenses are possible, such as wearing dark sunglasses or burqas, and electronically modulating your voice. But such options seem rather awkward, and I doubt most people will be willing to use them much in most familiar social situations. And I doubt that regulation will greatly reduce the use of this tech. The overall trend seems clear: our true feelings will become more visible to people around us.

We are often hypocritical about our feelings. That is, we pretend to some degree to have certain acceptable public feelings, while actually harboring different feelings. Most people know that this happens often, but our book The Elephant in the Brain suggests that we still vastly underestimate typical levels of hypocrisy. We all mask our feelings a lot, quite often from ourselves. (See our book for many more details.)

These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws. In this post I want to outline some of the problems we face.

Long ago, I was bullied as a child. And so I know rather well that one of the main defenses that children develop to protect themselves against bullies is to learn to mask their feelings. Bullies tend to see kids who are visibly scared or distraught as openly inviting them to bully. Similarly, many adults protect themselves from salespeople and sexual predators by learning to mask their feelings. Masked feelings also help us avoid conflict with rivals at work and in other social circles. For example, we learn to not visibly insult or disrespect big people in rowdy bars if we don’t want to get beaten up.

Tech that unmasks feelings threatens to weaken the protections that masked feelings provide. That big guy in a rowdy bar may use new tech to see that everyone else there can see that you despise him, and take offense. Your bosses might see your disrespect for them, or your skepticism regarding their new initiatives. Your church could see that you aren’t feeling very religious at church service. Your school and nation might see that your pledge of allegiance was not heartfelt. And so on.

While these seem like serious issues, change will be mostly gradual, and so we may have time to flexibly search the space of possible adaptations. We can try changing whom we meet, how, and for what purposes, and what topics we consider acceptable to discuss where. We can be more selective about whom we make more visible, and how.

I worry more about collisions between better tech for reading feelings and common social norms, rules, and laws. Especially norms and laws that we adopt for more symbolic purposes, instead of to actually manage our interactions. These things tend to be less responsive to changing conditions.

For example, today we often consider it to be unacceptable “sexual harassment” to repeatedly and openly solicit work associates for sex, especially after they’ve clearly rejected the solicitor. We typically disapprove not just of direct requests, but also of less direct but relatively clear invitation reminders, such as visible leers, sexual jokes, and calling attention to your “junk”. And of course such rules make a great deal of sense.

But what happens when tech can make it clearer who is sexually attracted how much to whom? If the behavior that led to these judgements was completely out of each person’s control, it might be hard to blame anyone for it. We might then socially pretend that it doesn’t exist, though we might eagerly check it out privately. Unfortunately, our behavior will probably continue to modulate the processes that produce such judgements.

For example, the systems that judge how attracted you are to someone might focus on the moments when you directly look at that person, when your face is clearly visible to some camera, under good lighting, and without your wearing sunglasses or a burqa. So the longer you spend directly looking at someone under such conditions, the better the tech will be able to see your attraction. As a result, your choice to spend more time looking directly at them under favorable reading conditions might be seen as an intentional act, a choice to send the message that you are sexually attracted to them. And thus your continuing to do so after they have clearly rejected you might be seen as sexual harassment.

Yes, a reasonable world might adjust rules on sexual harassment to account for many complex changing conditions. But we may not live in a reasonable world. I’m not making any specific claims about sexual harassment rules, but symbolic purposes influence many of the norms and laws that we adopt. That is, we often support such rules not because of the good consequences of having them, but because we like the way that our personal support for such rules makes us look personally. For example, many support laws against drugs and prostitution even when they believe that such laws do little to discourage such things. They want to be personally seen as publicly taking a stand against such behavior.

Consider rules against expressing racism and sexism. And remember that the usual view is that everyone is at least a bit racist and sexist, in part because they live in a racist and sexist society. What happens when we can collect statistics on each person regarding how their visible evaluations of the people around them correlate with the race and sex of those people? Will we then punish white males for displaying statistically-significantly low opinions of non-whites and non-males via their body language? (That’s like a standard we often apply to firms today.) As with sexual harassment, the fact that people can moderate these readings via their behaviors may make these readings seem to count as intentional acts. Especially since they can be tracking the stats themselves, to see the impression they are giving off. To some degree they choose to visibly treat certain people around them with disrespect. And if we are individually eager to show that we personally disapprove of racism and sexism, we may publicly support strict application of such rules even if that doesn’t actually deal well with real problems of racism and sexism in the world.
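
For concreteness, here is a minimal sketch of the kind of statistical flagging described above, in Python; the scores, group labels, and threshold are entirely invented, and a real system would face confounders this toy ignores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-interaction "warmth" scores that feeling-reading tech
# might assign to one person's reactions, split by a group label of the
# people reacted to. All numbers here are made up for illustration.
scores_group_a = rng.normal(0.55, 0.15, size=400)
scores_group_b = rng.normal(0.50, 0.15, size=400)

# The crude "statistically significant difference" standard mentioned above:
# a two-sample t-test. It says nothing about causes or context.
t_stat, p_value = stats.ttest_ind(scores_group_a, scores_group_b)
gap = scores_group_a.mean() - scores_group_b.mean()
print(f"mean gap = {gap:+.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("flagged: significant difference in displayed warmth by group")
```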

Remember that this tech should improve gradually. So for the first cases that set key precedents, the tech will be weak, and thus will flag very few people as clearly harassers or racists or sexists. And those few exceptions are much more likely to be people who actually did intend to harass and express racism or sexism, and who embody extreme versions of such behavior. While they will also probably tend to be people who are weird and non-conformist in other ways, this tech for reading feelings may initially seem to do well at helping us identify and deal with problematic people. For example, we may be glad that tech can identify the priests who most clearly lust after the young boys around them.

But as the tech gets better it will slowly be able to flag more and more people as sending disapproved messages. The rate will drift upward from one person in ten thousand, to one in a thousand, to one percent, and so on. People may then start to change their behavior in bigger ways, to avoid being flagged, but that may be too little too late, especially if large libraries of old video and other recordings are available to process with new methods.

At this point we may reach a “hypocralypse”, where rules that punish hypocrisy collide in a big way with tech that can expose hypocrisy. That is, where tech that can involuntarily show our feelings intersects with norms and laws that punish the expression of common but usually hidden feelings. Especially when such rules are in part symbolically motivated.

What happens then, I don’t know. Do white males start wearing burqas, do we regulate this tech heavily, or do we tone down and relax our many symbolic rules? I’ll hope for the best, but I still fear the worst.

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth, like a big asteroid or nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.

Thus even a disaster that kills most all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could in principle go many ways, but which lock in forever once they pick a particular way. But it still seems hard to imagine such steps in the last half billion years.
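
As an aside, the equal-spacing claim above can be illustrated with a quick Monte Carlo sketch; the step count and hardness below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0          # habitable window, normalized
k = 4            # number of hard try-try steps (arbitrary)
mean = 3.0 * T   # each step alone takes ~3 windows on average (arbitrary)

# Rejection-sample histories in which all k hard steps finish inside the window.
times = rng.exponential(mean, size=(2_000_000, k))
lucky = times[times.sum(axis=1) < T]
print(f"{len(lucky)} lucky histories out of 2,000,000")

# Conditional on success, expected time per step is roughly equal, about
# T/(k+1) each, with about T/(k+1) of the window left over at the end.
print("mean step durations:", lucky.mean(axis=0).round(3))
print("equal-spacing prediction:", round(T / (k + 1), 3))
```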

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such a disaster in part on our failure to empower a world government to prevent it.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as an advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.

I’ve long puzzled over the fact that most of the concern I hear expressed on inequality is about the smallest of (at least) seven kinds: income inequality between the families of a nation at a time (IIBFNAT). Expressed concern has greatly increased over the last half decade. While most people don’t actually know that much about their income ranking, many seem to be trying hard to inform those who rank low of their low status. Their purpose seems to be to induce envy, to induce political action to increase redistribution. They hope to induce these people to identify more with this low income status, and to organize politically around this shared identity.

Many concerned about IIBFNAT are also eager to remind everyone of and to celebrate historical examples of violent revolution aimed at redistribution (e.g., Les Misérables). The purpose here seems to be to encourage support for redistribution by reminding everyone of the possibility of violent revolution. They remind the poor that they could consider revolting, and remind everyone else that a revolt might happen. This strengthens an implicit threat of violence should redistribution be insufficient.

Now consider this recent news:

Shortly before the [recent Toronto van] attack, a post appeared on the suspect’s Facebook profile, hailing the commencement of the “Incel Rebellion”. … There is a reluctance to ascribe to the “incel” movement anything so lofty as an “ideology” or credit it with any developed, connected thinking, partly because it is so bizarre in conception. … Standing for “involuntarily celibate”, … it [has] mutate[d] into a Reddit muster point for violent misogyny. …

It is quite distinctive in its hate figures: Stacys (attractive women); Chads (attractive men); and Normies (people who aren’t incels, i.e. can find partners but aren’t necessarily attractive). Basically, incels cannot get laid and they violently loathe anyone who can. Some of the fault, in their eyes, is with attractive men who have sex with too many women. …

Incels obsess over their own unattractiveness – dividing the world into alphas and betas, with betas just your average, frustrated idiot dude, and omegas, as the incels often call themselves, the lowest of the low, scorned by everyone – they then use that self-acceptance as an insulation.

Basically, their virginity is a discrimination or apartheid issue, and only a state-distributed girlfriend programme, outlawing multiple partners, can rectify this grand injustice. … Elliot Rodger, the Isla Vista killer, uploaded a video to YouTube about his “retribution” against attractive women who wouldn’t sleep with him (and the attractive men they would sleep with) before killing six people in 2014. (more)

One might plausibly argue that those with much less access to sex suffer to a similar degree as those with low income, and might similarly hope to gain from organizing around this identity, to lobby for redistribution along this axis and to at least implicitly threaten violence if their demands are not met. As with income inequality, most folks concerned about sex inequality might explicitly reject violence as a method, at least for now, and yet still be encouraged privately when the possibility of violence helps move others to support their policies. (Sex could be directly redistributed, or cash might be redistributed in compensation.)

Strikingly, there seems to be little overlap between those who express concern about income and sex inequality. Among our cultural elites, the former concern is high status, and the latter low status. For example, the article above seems not at all sympathetic to sex inequality concerns.

Added 27Apr: Though the news article I cite focuses on male complaints, my comments here are about sex inequality in general, applied to both men and women. Not that I see anything particularly wrong with focusing on men sometimes. Let me also clarify that personally I’m not very attracted to non-insurance-based redistribution policies of any sort, though I do like to study what causes others to be so attracted.

Added 10pm 27Apr: A tweet on this post induced a lot of discussion on Twitter, much of which accuses me of advocating enslaving and raping women. Apparently many people can’t imagine any other way to reduce or moderate sex inequality. (“Redistribute” literally means “change the distribution.”) In the post I mentioned cash compensation; more cash can make people more attractive and better able to afford legalized prostitution. Others have mentioned promoting monogamy and discouraging promiscuity. Surely there are dozens of other possibilities; sex choices are influenced by a great many factors, and each such factor offers a possible lever for influencing sex inequality. Rape and slavery are far from the only possible levers!

Many people are also under the impression that we redistribute income mainly because recipients would die without such redistribution. In rich nations this can account for only a tiny fraction of redistribution. Others say it is obvious that redistribution is only appropriate for commodities, and sex isn’t a commodity. But we take from the rich even when their wealth is in the form of far-from-commodity unique art works, buildings, etc.

Also, it should be obvious that “sex” here refers to a complex package that is desired, which in individual cases may or may not be satisfied by sexbots or prostitutes. But whatever the package people want, we can and should ask how we might get more of it to them.

Finally, many people seem to be reacting primarily to some impression they’ve gained that self-identified “incels” are mostly stupid rude obnoxious arrogant clueless smelly people. I don’t know if that’s true and I don’t care; I’m focused on the issue that they help raise, not their personal or moral worth.

I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason I know of to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in fewer but larger packages than usual, rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even though there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

I emphasize how all the mighty human edifice of Go knowledge … was entirely discarded by AlphaGo Zero with a subsequent performance improvement. … Sheer speed of capability gain should also be highlighted here. … you don’t even need self-improvement to get things that look like FOOM. … the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.

I replied that, just as seeing an unusually large terror attack like 9-11 shouldn’t much change your estimate of the overall distribution of terror attacks, and seeing one big earthquake shouldn’t much change your estimate of the overall distribution of earthquakes, seeing one big AI research gain like AlphaGo Zero shouldn’t much change your estimate of the overall distribution of AI progress. (Seeing two big lumps in a row, however, would be stronger evidence.) In his recent podcast with Sam Harris, Eliezer said:

Y: I have claimed recently on facebook that now that we have seen Alpha Zero, Alpha Zero seems like strong evidence against Hanson’s thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. …

H: What’s the best version of his argument, and then why is he wrong?

Y: Nothing can prepare you for Robin Hanson! Ha ha ha. Well, the argument that Robin Hanson has given is that these systems are still immature and narrow, and things will change when they get general. And my reply has been something like, okay, what changes your mind short of the world actually ending? If your theory is wrong, do we get to find out about that at all before the world does?

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

The citation distributions of papers published in the same discipline and year lie on the same curve for most disciplines, if the raw number of citations c of each paper is divided by the average number of citations c0 over all papers in that discipline and year. The dashed line is a lognormal fit. …

The probability of citing a paper grows with the number of citations that it has already collected. Such a model can be augmented with … decreasing the citation probability with the age of the paper, and a fitness parameter, unique to each paper, capturing the appeal of the work to the scientific community. Only a tiny fraction of papers deviate from the pattern described by such a model.

It seems to me quite reasonable to expect that fields where real research progress is lumpier would also display a lumpier distribution of citations. So if CS, AI, or ML research is much lumpier than in other areas, we should expect to see that in citation data. Even if your hypothesis is that only ML research is lumpier, and only in the last 5 years, we should still have enough citation data to see that. My expectation, of course, is that recent ML citation lumpiness is not much bigger than in most research fields through history.
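
To make the proposed test concrete, here is a sketch of one way to run it, assuming you have a table of papers with field, year, and citation counts; the file name and column names below are hypothetical:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical input: one row per paper, with columns field, year, citations.
papers = pd.read_csv("citations.csv")  # placeholder path

# Normalize each paper's citations by its field-year average, as in the
# Science paper quoted above.
papers["c_norm"] = papers["citations"] / papers.groupby(
    ["field", "year"])["citations"].transform("mean")

# Compare the normalized distribution for ML against everything else.
# A fatter right tail would mean "lumpier" progress by this measure.
ml = papers.loc[papers["field"] == "ML", "c_norm"]
rest = papers.loc[papers["field"] != "ML", "c_norm"]
for name, x in [("ML", ml), ("rest", rest)]:
    logx = np.log(x[x > 0])
    print(name, "skew of log(c/c0):", stats.skew(logx).round(3),
          "99th pct of c/c0:", np.quantile(x, 0.99).round(1))

# A two-sample KS test on the normalized values:
print(stats.ks_2samp(ml, rest))
```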

Added 24Mar: You might save the hypothesis that research areas vary greatly in lumpiness by postulating that the number of citations of each research advance goes as the rank of the “size” of that advance, relative to its research area. The distribution of ranks is always the same, after all. But this would be a surprising outcome, and hence seems unlikely; I’d want to see clear evidence that the distribution of lumpiness of advances varies greatly across fields.

Added 27Mar: More directly relevant might be data on distributions of patent value and citations. Do these distributions vary by topic? Are CS/AI/ML distributed more unequally?

The very readable book The Wizard and the Prophet tells the story of environmental prophet William Vogt investigating the apocalypse-level deaths of guano-making birds near Peru. When he discovered the cause in the El Niño weather cycle, his policy recommendations were to do nothing to mitigate this natural cause; he instead railed against many much smaller human influences, demanding their reversal. A few years later his classic 1948 screed Road To Survival, which contained pretty much all the standard environmental advice and concepts used today, continued to warn against any but small human-caused changes to the environment, while remaining largely indifferent to even huge natural changes.

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after a yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
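
As a minimal illustration of such a process (a sketch in Python, assuming a simple Gaussian random walk with drift; the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
drift, vol, horizon = 0.02, 1.0, 10_000   # arbitrary trend, step size, steps

# Many sample paths of a basic trend plus a random walk (no mean reversion).
steps = drift + vol * rng.standard_normal((1_000, horizon))
paths = steps.cumsum(axis=1)

# Typical displacement grows without bound: drift*t, plus spread ~ vol*sqrt(t).
for t in (100, 1_000, 10_000):
    x = paths[:, t - 1]
    print(f"t={t:>6}: mean {x.mean():7.1f}, spread {x.std():6.1f}"
          f" (~{vol * np.sqrt(t):.1f} predicted)")
```

Under this model the expected size of change scales up with the time horizon, which is the sense in which small local changes and large global changes differ mainly in how long you wait.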

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

But of course few are very good at resolving their near versus far incoherences. And so the positions people take end up depending a lot on how they first framed the key issues, as in terms of short or long term changes.

Many people argue that we should beware of foreigners, and people from other ethnicities. Beware of visiting them, trading with them, talking to them, or allowing them to move here. The fact that so many people are willing to argue for such conclusions is some evidence in favor of them. But the fact that the arguments offered are so diverse, and so often contradict one another, takes away somewhat from the strength of this evidence. This pattern looks like people tend to have a preconceived conclusion for which they opportunistically embrace any random arguments they can find.

Similarly, many argue that we should be wary of future competition, especially if that might lead to concentrations of power. I recently posted on my undergrad law & econ students’ largely incoherent fears of one group taking over the entire solar system, and how Frederick Engels expressed related fears back in 1844. And I’ve argued on this blog with my ex-co-blogger regarding his concerns that if future AI results from competing teams, one team might explode to suddenly take over the world. In this post I’ll describe Ted “Unabomber” Kaczynski’s rather different theory on why we should fear competition leading to concentration, from his recent book Anti-Tech Revolution.

Kaczynski claims that the Fermi paradox, i.e., the fact that the universe looks dead everywhere, is explained by the fact that technological civilizations very reliably destroy themselves. When this destruction happens naturally, it is so thorough that no humans could survive. Which is why his huge priority is to find a way to collapse civilization sooner, so that at least some humans survive. Even a huge nuclear war is preferable, as at least some people survive that.

Why must everything collapse? Because, he says, natural-selection-like competition only works when competing entities have scales of transport and talk that are much less than the scale of the entire system within which they compete. That is, things can work fine when bacteria that each move and talk across only meters compete across an entire planet. The failure of one bacterium doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, then there are always only a few such systems that matter, and breakdowns often have global scopes.

Kaczynski dismisses the possibility that world-spanning competitors might anticipate the possibility of large correlated disasters, and work to reduce their frequency and mitigate their harms. He says that competitors can’t afford to pay any cost to prepare for infrequent problems, as such costs hurt them in the short run. This seems crazy to me, as most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to completely destroy the whole world. The world has had global-scale correlation for centuries, with the world economy growing enormously over that time. And yet we’ve never even seen a factor-of-two decline, while at least thirty factors of two (a factor of 2^30, about a billion) would be required for a total collapse. And while it should be easy to test Kaczynski’s claim in small complex systems of competitors, I know of no supporting tests.

Yet all of the dozen reviews I read of Kaczynski’s book found his conclusion here to be obviously correct. Which seems to me evidence that a great many people find the worry about future competitors to be so compelling that they endorse most any vaguely plausible supporting argument. Which I see as weak evidence against that worry.

Yes, of course correlated disasters are a concern, even when efforts are made to prepare against them. But it’s just not remotely obvious that competition makes them worse, or that all civilizations are reliably and completely destroyed by big disasters, so much so that we should prefer to start a big nuclear war now that destroys civilization but leaves a few people alive. Surely if we believed his theory a better solution would be to break the world into a dozen mostly isolated regions.

Kaczynski does deserve credit for avoiding common wishful thinking in some of his other discussion. For example, he says that we can’t much control the trajectory of history, both because it is very hard to coordinate on the largest scales, and because it is hard to estimate the long term consequences of many choices. He sees how hard it is for social movements to actually achieve anything substantial. He notes that futurists who expect to achieve immortality and then live for a thousand years too easily presume that a fast changing competitive world will still have need for them. And while I didn’t see him actually say it, I expect he’s the sort of person who’d make the reasonable argument that individual humans are just happier in a more forager-like world.

Kaczynski isn’t stupid, and he’s more clear-headed than most futurists I read. Too bad his low mood leans him so strongly to embrace a poorly-argued inevitable collapse story.

Apparently the causal path from simple dead matter to an expanding visible civilization is very unlikely. Almost everything that starts along this path is blocked by a great filter, which might be one extremely hard step, or many merely very hard steps. The most likely location of this great filter is that the origin of life is very, very hard. Which is good news, because otherwise we’d have to worry a lot about our future, via what fraction of the overall huge filter still lies ahead of us. And if we ever find evidence of life in space that isn’t close to the causal path that led to us, that will be big bad news, and we’ll need to worry a lot more.

One of the more interesting future filter scenarios is a high difficulty of traveling between the stars. As we can easily see across the universe, we know that photons have few problems traveling very long distances. And since stars drift about at great speeds, we know that stars can also travel freely suffering little harm. But we still can’t be sure of the ease of travel for humans, or for the sort of things that our descendants might try to send between the stars. We have collected a few grains of interstellar dust, but still know little about them, and so don’t know how easy was their travel. We do know that most of the universe is made of dark matter and dark energy that we understand quite poorly. So perhaps “Here Be Dragons” lie in wait out there for our scale of interstellar travelers.

Many stars, like ours, are surrounded by a vast cloud of small icy objects. Every once in a while one of these objects falls into a rare orbit where it travels close to its star, and then it becomes a comet with a tail. Even more rarely, one should fall into an orbit that throws it out away from its star (almost always without doing much else to it). Such an object would then travel at the typical star speed between stars, and after billions of years it might perhaps pass near one other star; the chance of two such encounters is very low. And if the space between stars is as mild as it seems, it should arrive looking pretty much as it left.

Astronomers have been waiting for a while to see such an interstellar visitor, and were puzzled to have not yet seen one. They expected it to look like a comet, except traveling a lot faster than most comets do. Within roughly a year of a new instrument coming online that could see such things better, we have finally seen such a visitor in the last few months. It looked like what we expect in some ways: it is traveling at roughly the speed we’d expect, its size is unremarkable, and its color is roughly what we expect from ancient small space objects. But it is suspiciously weird in several other apparently unrelated ways.

First, its orbit is weird. Its direction of origin is 6 degrees from the sun’s motion vector; only one in 365 random directions is closer. And among the travel paths where we could have seen this object, only one in 100 such paths would have traveled closer to the sun than did this one (source: Turner). But one must apparently invoke very strange and unlikely hypotheses to believe these parameters were anything but random. For now, I won’t go there.
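
The one-in-365 figure is just the solid angle of a 6-degree cone as a fraction of the whole sky; a quick check:

```python
import math

# Fraction of random directions on a sphere that lie within 6 degrees of a
# given direction: the spherical-cap solid angle over the full sphere.
theta = math.radians(6)
frac = (1 - math.cos(theta)) / 2
print(f"fraction = {frac:.5f}, i.e. about 1 in {1 / frac:.0f}")  # ~1 in 365
```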

Second, the object itself is weird. It does not have a comet tail, and so has apparently lost most of its volatiles like water. If this is typical, it explains why we haven’t seen objects like this before. The object seems to be very elongated, much more than any other natural object we’ve ever seen in our solar system. And it is rotating very fast, so fast that it would fly apart if it were made out of the typical pile of lightly attached rubble. So at some point it experienced an event so dramatic as to melt away its volatiles, fuse it into a solid object, stretch it to an extreme, and set it spinning at an extreme rate. After which it drifted for long enough to acquire the usual color of ancient space objects.

This raises the suspicion that it perhaps encountered a dangerous “dragon” between the stars, making it “dragon debris.” If the timing of this event were random, we should see roughly one such visitor a year in the future, and with new better instruments coming online in a few years we should see them even faster. So within a decade we should learn if this first visitor is very unusual, or if we should worry a lot more about travel dangers between the stars.

Added 30Oct2018: The object is even more interesting: it started out at rest relative to the galaxy, and seems to be paper-thin.