
How to Design Beacons for Humanity's Afterlife

To teach future civilizations about humanity, we'll need to craft messages that transcend all cultural context.


Let’s say we had a way to distribute beacons around our solar system (or beyond) that could survive for billions of years, recording what our civilization has achieved. What should they be like?

It’s easy to come up with what I consider to be sophomoric answers. But in reality I think this is a deep—and in some ways unsolvable—philosophical problem that’s connected to fundamental issues about knowledge, communication and meaning.

Still, a friend of mine recently started a serious effort to build little quartz disks and have them hitch rides on spacecraft, to be deposited around the solar system. At first I argued that it was all a bit futile, but eventually I agreed to be an advisor to the project, and at least try to figure out what to do to the extent we can.

But, OK, so what’s the problem? Basically it’s about communicating meaning or knowledge outside of our current cultural and intellectual context. We just have to think about archaeology to know this is hard. What exactly was some arrangement of stones from a few thousand years ago for? Sometimes we can pretty much tell, because it’s close to something in our current culture. But a lot of the time it’s really hard to tell.

OK, but what are the potential use cases for our beacons? One might be to back up human knowledge so things could be restarted even if something goes awfully wrong with our current terrestrial civilization. And of course historically it was very fortunate that we had all those texts from antiquity when things in Europe restarted during the Renaissance. But part of what made this possible was that there had been a continuous tradition of languages like Latin and Greek—not to mention that it was humans that were both the creators and consumers of the material.

But what if the consumers of the beacons we plan to spread around the solar system are aliens, with no historical connection to us? Well, then it’s a much harder problem.

In the past, when people have thought about this, there’s been a tendency to say, “just show them math: it’s universal, and it’ll impress them!” But actually, I think neither claim about math is really true.

To understand this, we have to dive a little into some basic science that I happen to have spent many years working on. The reason people think math is a candidate for universal communication is that its constructs seem precise, and that at least here on Earth there’s only one (extant) version of it, so it seems definable without cultural references. But if one actually starts trying to work out how to communicate about current math without any assumptions (as, for example, I did as part of consulting on the Arrival movie), one quickly discovers that one really has to go “below math” to get to computational processes with simpler rules.

And (as seems to happen with great regularity, at least to me) one obvious place one lands is with cellular automata. It’s easy to show an elaborate pattern that’s created according to simple well-defined rules.

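For concreteness, here is a minimal sketch (in Python, my own illustration rather than the exact rule pictured) of an elementary cellular automaton; rule 30 is a standard example of a trivially simple, well-defined rule that produces an elaborate pattern:

```python
# Elementary cellular automaton: each cell's new value depends only on
# itself and its two neighbors, via an 8-entry lookup table decoded
# from the rule number (rule 30 is a classic example).
def ca_run(rule, width=63, steps=31):
    table = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    row = [0] * width
    row[width // 2] = 1  # start from a single "on" cell
    history = [row]
    for _ in range(steps - 1):
        # cyclic boundary conditions: indices wrap around at the edges
        row = [table[(row[i - 1], row[i], row[(i + 1) % width])]
               for i in range(width)]
        history.append(row)
    return history

for r in ca_run(30):
    print("".join("#" if cell else " " for cell in r))
```

Running this prints a growing triangular pattern whose interior looks effectively random, even though nothing beyond the eight-case rule went into it.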

But here’s the problem: there are plenty of physical systems that basically operate according to rules like these, and produce similarly elaborate patterns. So if this is supposed to show the impressive achievement of our civilization, it fails.

OK, but surely there must be something we can show that makes it clear that we’ve got some special spark of intelligence. I certainly always assumed there was. But one of the things that’s come out of the basic science I’ve done is what I call the Principle of Computational Equivalence, which basically says that once one’s gotten beyond a very basic level, every system will show behavior that’s equivalent in the sophistication of the computation it exhibits.

So although we’re very proud of our brains, and our computers, and our mathematics, they’re ultimately not going to be able to produce anything that’s beyond what simple programs like cellular automata—or, for that matter, “naturally occurring” physical systems—can produce. So when we make an offhand comment like “the weather has a mind of its own,” it’s not so silly: the fluid dynamic processes that lead to the weather are computationally equivalent to the processes that, for example, go on in our brains.

It’s a natural human tendency at this point to protest that surely there must be something special about us, and everything we’ve achieved with our civilization. People may say, for example, that there’s no meaning and no purpose to what the weather does. Of course, we can certainly attribute such things to it (“it’s trying to equalize temperatures between here and there,” etc.), and without some larger cultural story there’s no meaningful way to say if they’re “really there” or not.

OK, so if showing a sophisticated computation isn’t going to communicate what’s special about us and our civilization, what is? The answer, in the end, is details. Sophisticated computation is ubiquitous in our universe. But what’s inevitably special about us is the details of our history and what we care about.

We’re learning the same thing as we watch the progress of artificial intelligence. Increasingly, we can automate the things we humans can do—even ones that involve reasoning, or judgement, or creativity. But what we (essentially by definition) can’t automate is defining what we want to do, and what our goals are. For these are intimately connected to the details of our biological existence, and the history of our civilization—which is exactly what’s special about us.

But, OK, how can we communicate these things? Well, it’s hard. Because—needless to say—they’re tied into aspects of us that are special, and that won’t necessarily be shared with whatever we’re trying to communicate with.

At the end of the day, though, we’ve got a project that’s going to launch beacons on spacecraft. So what’s the best thing to put on them? I’ve spent a significant part of my life building what’s now the Wolfram Language, whose core purpose is to provide a precise language for communicating knowledge that our civilization has accumulated in a way that both humans and computers can understand. So perhaps this—and my experience with it—can help. But first, we should talk about history to get an idea of what has and hasn’t worked in the past.

Lessons from the Past

A few years ago I was visiting a museum and looking at little wooden models of life in ancient Egypt that had been buried with some king several millennia ago. “How sad,” I thought. “They imagined this would help them in the afterlife. But it didn’t work; instead it just ended up in a museum.” But then it struck me: “No, it did work! This is their ‘afterlife’!” They successfully transmitted some essence of their life to a world far beyond their own.


Of course, when we look at these models, it helps that a lot of what’s in them is familiar from modern times. Cows. A boat with oars. Scrolls. But some isn’t that familiar. What are those weird things at the ends of the boat, for example? What’s the purpose of those? What are they for? And here begins the challenge—of trying to understand without shared context.

I happened last summer to visit an archaeological site in Peru called Caral, which has all sorts of stone structures built more than 4000 years ago. It was pretty obvious what some of the structures were for. But others I couldn’t figure out. So I kept on asking our guide. And almost always the answer was the same: “It was for ceremonial purposes.”


Immediately I started thinking about modern structures. Yes, there are monuments and public artworks. But there are also skyscrapers, stadiums, cathedrals, canals, freeway interchanges and much more. And people have certain almost-ritual practices in interacting with these structures. But in the context of modern society, we would hardly call them “ceremonial”: we think of each type of structure as having a definite purpose which we can describe. But that description inevitably involves a considerable depth of cultural context.

When I was growing up in England, I went wandering around in woods near where I lived—and came across all sorts of pits and berms and other earthworks. I asked people what they were. Some said they were ancient fortifications; some said at least the pits were from bombs dropped in World War II. And who knows: maybe instead they were created by some process of erosion having nothing to do with people.

Almost exactly 50 years ago, as a young child vacationing in Sicily, I picked up an object on a beach.


Being very curious what it was, I took it to my local archaeology museum. “You’ve come to the wrong place, young man,” they said, “it’s obviously a natural object.” So off I went to a natural history museum, only to be greeted with “Sorry, it’s not for us; it’s an artifact.” And from then until now the mystery has remained (though with modern materials analysis techniques it could perhaps be resolved—and I obviously should do it!).

There are so many cases where it’s hard to tell if something is an artifact or not. Consider all the structures we’ve built on Earth. Back when I was writing A New Kind of Science, I asked some astronauts what the most obvious manmade structure they noticed from space was. It wasn’t anything like the Great Wall of China (which is actually hard to see). Instead, they said it was a line across the Great Salt Lake in Utah (actually a 30-mile-long railroad causeway built in 1959, with algae that happen to have different colors on its two sides).


Then there was the 12-mile-diameter circle in New Zealand, the 30-mile one in Mauritania, and the 40-mile one in Quebec (with a certain Arrival heptapod calligraphy look).


Which were artifacts? This was before the web, so we had to contact people to find out. A New Zealand government researcher told us not to make the mistake of thinking their circle followed the shape of the cone volcano at its center. “The truth is, alas, much more prosaic,” he said: it’s the border of a national park, with trees cut outside only, i.e. an artifact. The other circles, however, had nothing to do with humans.

(It’s fun to look for evidence of humans visible from space. Like the grids of lights at night in Kansas, or lines of lights across Kazakhstan. And in recent years, there’s the 7-mile-long palm tree rendering in Dubai. And, on the flip side, people have also tried to look for what might be “archaeological structures” in high-resolution satellite images of the moon.)

But, OK, let’s come back to the question of what things mean. In a cave painting from 7000 years ago, we can recognize shapes of animals, and hand stencils that we can see were made with hands. But what do the configurations of these things mean? Realistically at this point we have no serious idea.


Maybe it’s easier if we look at things that are more “mathematical”-like. In the 1990s I did a worldwide hunt for early examples of complex but structured patterns. I found all sorts of interesting things (such as mosaics supposedly made by Gilgamesh, from 3000 BC—and the earliest fractals, from 1210 AD). Most of the time I could tell what rules were used to make the patterns—though I could not tell what “meaning” the patterns were supposed to convey, or whether, instead, they were “merely ornamental.”


The last pattern above, though, had me very puzzled for a while. Is it a cellular automaton being constructed back in the 1300s? Or something from number theory? Well, no, in the end it turns out it’s a rendering of a list of 62 attributes of Allah from the Koran, in a special square form of Arabic calligraphy.


About a decade ago, I learned about a pattern from 11,000 years ago, on a wall in Aleppo, Syria (one hopes it’s still intact there). What is this? Math? Music? Map? Decoration? Digitally encoded data? We pretty much have no idea.

I could go on giving examples. Lots of times people have said “if one sees such-and-such, then it must have been made for a purpose.” The philosopher Immanuel Kant offered the opinion that if one saw a regular hexagon drawn in the sand, one could only imagine a “rational cause” for it. I used to think of this whenever I saw hexagonal patterns formed in rocks. And a few years ago I heard about hexagons in sand, produced purely by the action of wind. But the biggest hexagon I know is the storm pattern around the north pole of Saturn—which presumably wasn’t in any usual sense “put there for a purpose.”


In 1899 Nikola Tesla picked up all sorts of elaborate and strange-sounding radio emissions, often a little reminiscent of Morse code. He knew they weren’t of human origin, so his immediate conclusion was that they must be radio messages from the inhabitants of Mars. Needless to say, they’re not. Instead, they’re just the result of physical processes in the Earth’s ionosphere and magnetosphere.

But here’s the ironic thing: they often sound bizarrely similar to whale songs! And, yes, whale songs have all sorts of elaborate rhyme-like and other features that remind us of languages. But we still don’t really know if they’re actually for “communication”, or just for “decoration” or “play.”

One might imagine that with modern machine learning and with enough data one should be able to train a translator for “talking to animals.” And no doubt that’d be easy enough for “are you happy?” or “are you hungry?”. But what about more sophisticated things? Say the kind of things we want to communicate to aliens?

I think it’d be very challenging. Because even if animals live in the same environment as us, it’s very unclear how they think about things. And it doesn’t help that even their experience of the world may be quite different—emphasizing for example smell rather than sight, and so on.

Animals can of course make “artifacts” too. Like this arrangement of sand produced over the course of a week or so by a little puffer fish.


But what is this? What does it mean? Should we think of this “piscifact” as some great achievement of puffer fish civilization, that should be celebrated throughout the solar system?

Surely not, one might say. Because even though it looks complex—and even “artistic” (a bit like bird songs have features of music)—we can imagine that one day we’d be able to decode the neural pathways in the brain of the puffer fish that lead it to make this. But so what? We’ll also one day be able to know the neural pathways in humans that lead them to build cathedrals—or try to plant beacons around the solar system.

Aliens and the Philosophy of Purpose

There’s a thought experiment I’ve long found useful. Imagine a very advanced civilization that’s able to move things like stars and planets around at will. What arrangement would they put them in?

Maybe they’d want to make a “beacon of purpose.” And maybe—like Kant—one could think that would be achievable by setting up some “recognizable” geometric pattern. Like how about an equilateral triangle? But no, that won’t do. Because for example the Trojan asteroids actually form an equilateral triangle with Jupiter and the Sun already, just as a result of physics.

And pretty soon one realizes that there’s actually nothing the aliens could do to “prove their purpose.” The configuration of stars in the sky may look kind of random to us (except, of course, that we still see constellations in it). But there’s nothing to say that looked at in the right way it doesn’t actually represent some grand purpose.

And here’s the confusing part: there’s a sense in which it does! Because, after all, just as a matter of physics, the configuration that occurs can be characterized as achieving the purpose of extremizing some quantity defined by the equations for matter and gravity and so on. Of course, one might say “that doesn’t count; it’s just physics.” But our whole universe (including ourselves) operates according to physics. And so now we’re back to discussing whether the extremization is “meaningful” or not.

We humans have definite ways to judge what’s meaningful or not to us. And what it comes down to is whether we can “tell a story” that explains, in culturally meaningful terms, why we’re doing something. Of course, the notion of purpose has evolved over the course of human history. Imagine trying to explain walking on a treadmill, or buying goods in a virtual world, or, for that matter, sending beacons out into the solar system—to the people thousands of years ago who created the structures from Peru that I showed above.

We’re not familiar (except in mythology) with telling “culturally meaningful stories” about the world of stars and planets. And in the past we might have imagined that somehow whatever stories we could tell would inevitably be far less rich than the ones we can tell about our civilization. But this is where basic science I’ve done comes in. The Principle of Computational Equivalence says that this isn’t true—and that in the end what goes on with stars and planets is just as rich as what goes on in our brains or our civilization.

In an effort to “show something interesting” to the universe, we might have thought that the best thing to do would be to present sophisticated abstract computational things. But that won’t be useful. Because those abstract computational things are ubiquitous throughout the universe.

And instead, the “most interesting” thing we have is actually the specific and arbitrary details of our particular history. Of course, one might imagine that there could be some sophisticated thing out there in the universe that could look at how our history starts, and immediately be able to deduce everything about how it will play out. But a consequence of the Principle of Computational Equivalence is what I call computational irreducibility, which implies that there can be no general shortcut to history; to find how it plays out, one effectively just has to live through it—which certainly helps one feel better about the meaningfulness of life.

The Role of Language

OK, so let’s say we want to explain our history. How can we do it? We can’t show every detail of everything that’s happened. Instead, we need to give a higher-level symbolic description, where we capture what’s important while idealizing everything else away. Of course, “what’s important” depends on who’s looking at it.

We might say “let’s show a picture.” But then we have to start talking about how to make the picture out of pixels at a certain resolution, how to represent colors, say with RGB—not to mention discussing how things might be imaged in 2D, compressed, etc. Across human history, we’ve had a decent record in having pictures remain at least somewhat comprehensible. But that’s probably in no small part because our biologically determined visual systems have stayed the same.
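Even a toy sketch (hypothetical, in Python) makes the pile-up of conventions visible: resolution, scan order, channel order, and bit depth are all arbitrary choices a recipient would have to reverse-engineer.

```python
# A 2x2 "picture", encoded with a stack of arbitrary conventions:
# row-major scan order, one (R, G, B) triple per pixel, 8 bits per channel.
width, height = 2, 2
pixels = [
    (255, 0, 0), (0, 255, 0),      # top row: red, green
    (0, 0, 255), (255, 255, 255),  # bottom row: blue, white
]

# Flatten to raw bytes -- which is all a recipient would actually
# receive, with none of the conventions above attached.
raw = bytes(channel for pixel in pixels for channel in pixel)
print(len(raw))  # 12 bytes for 4 pixels * 3 channels
```

Handed only those 12 bytes, there is nothing that says where one pixel ends and the next begins, let alone what “red” is.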

(It’s worth mentioning, though, that pictures can have features that are noticed only when they become “culturally absorbed.” For example, the nested patterns from the 1200s that I showed above were reproduced but ignored in art history books for hundreds of years—until fractals became “a thing” and people had a way to talk about them.)

When it comes to communicating knowledge on a large scale, the only scheme we know (and maybe the only one that’s possible) is to use language—in which essentially there’s a set of symbolic constructs that can be arranged in an almost infinite number of ways to communicate different meanings.

It was presumably the introduction of language that allowed our species to begin accumulating knowledge from one generation to the next, and eventually to develop civilization as we know it. So it makes sense that language should be at the center of how we might communicate the story of what we’ve achieved.

And indeed if we look at human history, the cultures we know the most about are precisely those with records in written language that we’ve been able to read. If the structures in Caral had inscriptions, then (assuming we could read them) we’d have a much better chance of knowing what the structures were for.

There’ve been languages like Latin, Greek, Hebrew, Sanskrit and Chinese that have been continuously used (or at least known) for thousands of years—and that we’re readily able to translate. But in cases like Egyptian hieroglyphs, Babylonian cuneiform, Linear B, or Mayan, the thread of usage was broken, and it took heroic efforts to decipher them (and often the luck of finding something like the Rosetta Stone). And in fact today there are still plenty of languages—like Linear A, Etruscan, Rongorongo, Zapotec and the Indus script—that have simply never been deciphered.

Then there are cases where it’s not even clear whether something represents a language. An example is the quipus of Peru—which presumably recorded “data” of some kind, but might or might not have recorded something we’d usually call a language.


Math to the Rescue?

OK, but with all our abstract knowledge about mathematics, and computation, and so on, surely we can now invent a “universal language” that can be universally understood. Well, we can certainly create a formal system—like a cellular automaton—that just consistently operates according to its own formal rules. But does this communicate anything?

In its actual operation, the system just does what it does. But where there’s a choice is in what the actual system is: what rules it uses, and what its initial conditions are. So if we were using cellular automata, we could for example decide that these particular ones are the ones we want to show.


What are we communicating here? Each rule has all sorts of detailed properties and behavior. But as a human you might say: “Aha, I see that all these rules double the length of their input; that’s the point.” But to be able to make that summary again requires a certain cultural context. Yes, with our human intellectual history, we have an easy way to talk about “doubling the length of their input.” But with a different intellectual history, that might not be a feature we have a way to talk about, just as human art historians for centuries didn’t have a way to talk about nested patterns.
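To make the point about summaries concrete, here is a toy substitution system (my own stand-in, in Python, not one of the cellular automaton rules just discussed) with an analogous property. Noticing that its rule “doubles the length of the input” already presupposes having concepts like length and doubling to hand.

```python
# A toy substitution system: the single rule replaces every cell with
# two copies of itself, so each step exactly doubles the string's length.
def step(s):
    return "".join(c + c for c in s)

s = "AB"
for _ in range(3):
    s = step(s)
print(s)  # AAAAAAAABBBBBBBB -- length 16 after three doublings of "AB"
```

The system itself just rewrites strings; “it doubles lengths” is a description we bring to it.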

Let’s say we choose to concentrate on traditional math. We have the same situation there. Maybe we could present theorems in some abstract system. But for each theorem it’s just “OK, fine, with those rules, that follows—much like with those shapes of molecules, this is a way they can arrange in a crystal.” And the only way one’s really “communicating something” is in the decision of which theorems to show, or which axiom systems to use. But again, to interpret those choices inevitably requires cultural context.

One place where the formal meets the actual is in the construction of theoretical models for things. We’ve got some actual physical process, and then we’ve got a formal, symbolic model for it—using mathematical equations, programs like cellular automata, or whatever. We might think that that connection would immediately define an interpretation for our formal system. But once again it does not, because our model is just a model, that captures some features of the system, and idealizes others away. And seeing how that works again requires cultural context.

There is one slight exception to this: what if there is a fundamental theory of all of physics, that can perhaps be stated as a simple program? That program is then not just an idealized model, but a full representation of physics. And the point is that that “ground truth” about our universe describes the physics that govern absolutely any entity that exists in our universe.

If there is indeed a simple model for the universe, it’s essentially inevitable that the things it directly describes are not ones familiar from our everyday sensory experience; for example they’re presumably “below” constructs like space and time as we know them. But still, we might imagine that we could show off our achievements by presenting a version of the ultimate theory for our universe (if we’d found it!). But even with this, there’s a problem. Because, well, it’s not difficult to show a correct model for the universe: you just have to look at the actual universe! So the main information in an abstract representation is in what the primitives of the abstract representation end up being (do you set up your universe in terms of networks, or algebraic structures, or what?).

Let’s back off from this level of philosophy for a moment. Let’s say we’re delivering a physical object—like a spacecraft, or a car—to our aliens. You might think the problem would be simpler. But the problem again is that it requires cultural context to decide what’s important, and what’s not. Is the placement of those rivets a message? Or an engineering optimization? Or an engineering tradition? Or just arbitrary?

Pretty much everything on, say, a spacecraft was presumably put there as part of building the spacecraft. Some was decided upon “on purpose” by its human designers. Some was probably a consequence of the physics of its manufacturing. But in the end the spacecraft just is what it is. You could imagine reconstructing the neural processes of its human designers, as you could imagine reconstructing the heat flows in the annealing of some part of it. But what is just the mechanism by which the spacecraft was built, and what is its “purpose”—or what is it trying to “communicate”?

The Molecular Version

It’s one thing to talk about sending messages based on the achievements of our civilization. But what about just sending our DNA? Yes, it doesn’t capture (at least in any direct way) all our intellectual achievements. But it does capture a couple of billion years of biological evolution, and represent a kind of memorial of the 10^40 or so organisms that have ever lived on our planet.

Of course, we might again ask “what does it mean?”. And indeed one of the points of Darwinism is that the forms of organisms (and the DNA that defines them) arise purely as a consequence of the process of biological evolution, without any “intentional design”. Needless to say, when we actually start talking about biological organisms there’s a tremendous tendency to say things like “that mollusc has a pointy shell because it’s useful in wedging itself in rocks”—in other words, to attribute a purpose to what has arisen from evolution.

So what would we be communicating by sending DNA (or, for that matter, complete instances of organisms)? In a sense we’d be providing a frozen representation of history, though now biological history. There’s an issue of context again too. How does one interpret a disembodied piece of DNA? (Or, what environment is needed to get this spore to actually do something?)

Long ago it used to be said that if there were “organic molecules” out in space, it’d be a sign of life. But in fact plenty of even quite complex molecules have now been found, even in interstellar space. And while these molecules no doubt reflect all sorts of complex physical processes, nobody takes them as a sign of anything like life.

So what would happen if aliens found a DNA molecule? Is that elaborate sequence a “meaningful message,” or just something created through random processes? Yes, in the end the sequences that have survived in modern DNA reflect in some way what leads to successful organisms in our specific terrestrial environment, though—just as with technology and language—there is a certain feedback in the way that organisms create the environment for others.

But, so, what does a DNA sequence show? Well, like a library of human knowledge, it’s a representation of a lot of elaborate historical processes—and of a lot of irreducible computation. But the difference is that it doesn’t have any “spark of human intention” in it.

Needless to say, as we’ve been discussing, it’s hard to identify a signature for that. If we look at things we’ve created so far in our civilization, they’re typically recognizable by the presence of things like (what we at least currently consider) simple geometrical forms, such as lines and circles and so on. And in a sense it’s ironic that after all our development as a civilization, what we produce as artifacts look so much simpler than what nature routinely produces.

And we don’t have to look at biology, with all its effort of biological evolution. We can just as well think of physics, and things like the forms of snowflakes or splashes or turbulent fluids.

As I’ve argued at length, the real point is that out in the computational universe of possible programs, it’s actually easy to find examples where even simple underlying rules lead to highly complex behavior. And that’s what’s happening in nature. And the only reason we don’t see that usually in the things we construct is that we constrain ourselves to use engineering practices that avoid complexity, so that we can foresee their outcome. And the result of this is that we tend to always end up with things that are simple and familiar.

Now that we understand more about the computational universe, we can see, however, that it doesn’t always have to be this way. And in fact I have had great success just “mining the computational universe” for programs (and structures) that turn out to be useful, independent of whether one can “understand” how they operate. And something like the same thing happens when one trains a modern machine learning system. One ends up with a technological system that we can identify as achieving some overall purpose, but where the individual parts we can’t particularly recognize as doing meaningful things.

And indeed my expectation is that in the future, a smaller and smaller fraction of human-created technology will be “recognizable” and “understandable”. Optimized circuitry doesn’t have nice repetitive structure; nor do optimized algorithms. Needless to say, it’s sometimes hard to tell what’s going on. Is that pattern of holes on a speakerphone arranged to optimize some acoustic feature, or is it just “decorative”?

Yet again we’re thrust back into the same philosophical quandary: we can see the mechanism by which things operate, and we can come up with a story that describes why they might work that way. But there is no absolute way to decide whether that story is “correct”—except by referring back to the details of humans and human culture.

Talking about the World

Let’s go back to language. What really is a language? Structurally (at least in all the examples we know so far) it’s a collection of primitives (words, grammatical constructs, etc.) that can be assembled according to certain rules. And yes, we can look at a language formally at this level, just like we can look, say, at how to make tilings according to some set of rules. But what makes a language useful for communication is that its primitives somehow relate to the world—and that they’re tied into knowledge.

In a first approximation, the words or other primitives in a language end up being things that are useful in describing aspects of the world that we want to communicate. We have different words for “table” and “chair” because those are buckets of meaning that we find it useful to distinguish. Yes, we could start describing the details of how the legs of the table are arranged, but for many purposes it’s sufficient to just have that one word, or one symbolic primitive, “table”, that describes what we think of as a table.

Of course, for the word “table” to be useful for communication, the sender and recipient of the word have to have shared understanding of its meaning. As a practical matter, for natural languages, this is usually achieved in an essentially societal way—with people seeing other people describing things as “tables.”

How do we determine what words should exist? It’s a societally driven process, but at some level it’s about having ways to define concepts that are repeatedly useful to us. There’s a certain circularity to the whole thing. The concepts that are useful to us depend on the environment in which we live. If there weren’t any tables around (e.g. during the Stone Age), it wouldn’t be terribly useful to have the word “table.”

But then once we introduce a word for something (like “blog”), it starts to be easier for us to think about the thing—and then there tends to be more of it in the environment that we construct for ourselves, or choose to live in.

Imagine an intelligence that exists as a fluid (say the weather, for example). Or even imagine an aquatic organism, used to a fluid environment. Lots of the words we might take for granted about solid objects or locations won’t be terribly useful. And instead there might be words for aspects of fluid flow (say, lumps of vorticity that change in some particular way) that we’ve never identified as concepts that we need words for.

It might seem as if different entities that exist within our physical universe must necessarily have some commonality in the way they describe the world. But I don’t think this is the case—essentially as a consequence of the phenomenon of computational irreducibility.

The issue is that computational irreducibility implies that there are in effect an infinite number of irreducibly different environments that can be constructed on the basis of our physical universe—just like there are an infinite number of irreducibly different universal computers that can be built up using any given universal computer. In more practical terms, a way to say this is that different entities—or different intelligences—could operate using irreducibly different “technology stacks,” based on different elements of the physical world (e.g. atomic vs. electronic vs. fluidic vs. gravitational, etc.) and different chains of inventions. And the result would be that their way of describing the world would be irreducibly different.

Forming a Language

But OK, given a certain experience of the world, how can one figure out what words or concepts are useful in describing it? In human natural languages, this seems to be something that basically just evolves through a process roughly analogous to natural selection in the course of societal use of the language. And in designing the Wolfram Language as a computational communication language I’ve basically piggybacked on what has evolved in human natural language.

So how can we see the emergence of words and concepts in a context further away from human language? Well, in modern times, there’s an answer, which is basically to use our emerging example of alien intelligence: artificial intelligence.

Just take a neural network and start feeding it, say, images of lots of things in the world. (By picking the medium of 2D images, with a particular encoding of data, we’re essentially defining ourselves to be “experiencing the world” in a specific way.) Now see what kinds of distinctions the neural net makes in clustering or classifying these images.

In practice, different runs will give different answers. But any pattern of answers is in effect providing an example of the primitives for a language.
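As a toy illustration of how category primitives can emerge from unlabeled data, here’s a minimal k-means clustering sketch in Python (the data, its three hidden sources and the two-dimensional feature encoding are all invented purely for illustration; a real neural net would of course work with far richer features):

```python
import random

# A toy stand-in for "feeding a neural net images": 150 unlabeled points
# drawn from three hidden sources (the sources are invented for illustration).
random.seed(0)
data = [(random.gauss(c, 0.5), random.gauss(c, 0.5))
        for c in (0.0, 4.0, 8.0) for _ in range(50)]

def kmeans(points, k, steps=20):
    """Minimal k-means: the clusters it finds play the role of emergent
    "words", buckets of meaning chosen by the data itself."""
    centroids = random.sample(points, k)
    for _ in range(steps):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                            + (p[1] - centroids[i][1]) ** 2)
                  for p in points]
        # move each centroid to the mean of its members
        for i in range(k):
            members = [p for p, lab in zip(points, labels) if lab == i]
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return labels, centroids

labels, centroids = kmeans(data, k=3)
print(sorted(labels.count(i) for i in range(3)))  # items per emergent "bucket"
```

Run it with different seeds and, as the text notes, different runs can carve the data up differently, yet each run still yields a workable set of “buckets of meaning.”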

An easy place to see this is in training an image identification network. We started doing this several years ago with tens of millions of example images, in about 10,000 categories. And what’s notable is that if you look inside the network, what it’s effectively doing is homing in on features of images that let it efficiently distinguish between different categories.

These features then in effect define the emergent symbolic language of the neural net. And, yes, this language is quite alien to us. It doesn’t directly reflect human language or human thinking. It’s in effect an alternate path for “understanding the world”, different from the one that humans and human language have taken.

Can we decipher the language? Doing so would allow us to “explain the story” of what the neural net is “thinking.” But it won’t typically be easy to do. Because the “concepts” that are being identified in the neural network typically won’t have easy translations to things we know about—and we’ll be stuck in effect doing something like natural science to try to identify phenomena from which we can build up a description of what’s going on.

OK, but in the problem of communicating with aliens, perhaps this suggests a way. Don’t try (and it’ll be hard) to specify a formal definition of “chair.” Just show lots of examples of chairs—and use this to define the symbolic “chair” construct. Needless to say, as soon as one’s showing pictures of chairs, not providing actual chairs, there are issues of how one’s describing or encoding things. And while this approach might work decently for common nouns, it’s more challenging for things like verbs, or more complex linguistic constructs.

But if we don’t want our spacecraft full of sample objects (a kind of ontological Noah’s Ark), maybe we could get away with just sending a device that looks at objects, and outputs what they’re called. After all, a human version of this is basically how people learn languages, either as children, or when they’re out doing linguistic fieldwork. And today we could certainly have a little computer with a very respectable, human-grade image identifier on it.

But here’s the problem. The aliens will start showing the computer all sorts of things that they’re familiar with. But there’s no guarantee whatsoever that they’ll be aligned with the things we (or the image identifier) have words for. One can already see the problem if one feeds an image identifier human abstract art; it’s likely to be even worse with the products of alien civilization:

The Metropolitan Museum of Art

What the Wolfram Language Does

So can the Wolfram Language help? My goal in building it has been to create a bridge between the things humans want to do, and the things computation abstractly makes possible. And if I were building the language not for humans but for aliens—or even dolphins—I’d expect it to be different.

In the end, it’s all about computation, and representing things computationally. But what one chooses to represent—and how one does it—depends on the whole context one’s dealing with. And in fact, even for us humans, this has steadily changed over time. Over the 30+ years I’ve been working on the Wolfram Language, for example, both technology and the world have measurably evolved—with the result that there are all sorts of new things that make sense to have in the language. (The advance of our whole cultural understanding of computation—with things like hyperlinks and functional programming now becoming commonplace—also changes the concepts that can be used in the language.)

Right now most people think of the Wolfram Language mainly as a way for humans to communicate with computers. But I’ve always seen it as a general computational communication language for humans and computers—that’s relevant among other things in giving us humans a way to think and communicate in computational terms. (And, yes, the kind of computational thinking this makes possible is going to be increasingly critical—even more so than mathematical thinking has been in the past.)

But the key point is that the Wolfram Language is capturing computation in human-compatible terms. And in fact we can view it as in effect giving a definition of which parts of the universe of possible computations we humans—at the current stage in the evolution of our civilization—actually care about.

Another way to put this is that we can think of the Wolfram Language as providing a compressed representation (or, in effect, a model) of the core content of our civilization. Some of that content is algorithmic and structural; some of it is data and knowledge about the details of our world and its history.

There’s more to do to make the Wolfram Language into a full symbolic discourse language that can express a full range of human intentions (for example what’s needed for encoding complete legal contracts, or ethical principles for AIs). But with the Wolfram Language as it exists today, we’re already capturing a very broad swath of the concerns and achievements of our civilization.

But how would we feed it to aliens? At some level its gigabytes of code and terabytes of data just define rules—like the rules for a cellular automaton or any other computational system. But the point is that these rules are chosen to be ones that do computations that we humans care about.

It’s a bit like those Egyptian tomb models, which show things Egyptians cared about doing. If we give the aliens the Wolfram Language we’re essentially giving them a computational model of things we care about doing. Except, of course, that by providing a whole language—rather than just individual pictures or dioramas—we’re communicating in a vastly broader and deeper way.

The Reality of Time Capsules

What we’re trying to create in a sense amounts to a time capsule. So what can we learn from time capsules of the past? Sadly, the history is not too inspiring.

Particularly following the discovery of King Tutankhamun’s tomb in 1922, there was a burst of enthusiasm for time capsules that lasted a little over 50 years, and led to the creation—and typically burial—of perhaps 10,000 capsules. Realistically, though, the majority of these time capsules are even by now long forgotten—most often because the organizations that created them have changed or disappeared. (The Westinghouse Time Capsule for the 1939 World’s Fair was at one time a proud example, but last year the remains of Westinghouse filed for bankruptcy.)

My own email archive records a variety of requests in earlier years for materials for time capsules, and looking at it today I’m reminded that we seem to have created a time capsule for Mathematica’s 10th anniversary in 1998. But where is it now? I don’t know. And this is a typical problem. Because whereas an ongoing archive (or library, etc.) can keep organized track of things, time capsules tend to be singular, and have a habit of ending up sequestered away in places that quickly get obscured and forgotten. (The reverse can also happen: People think there’s a time capsule somewhere—like one supposedly left by John von Neumann to be opened 50 years after his death—but it turns out just to be a confusion.)

The one area where at least informal versions of time capsules seem to work out with some frequency is in building construction. In England, for example, when thatched roofs are redone after 50 years or so, it’s common for messages from the previous workers to be found. But a particularly old tradition—dating even back to the Babylonians—is to put things in the foundations, and particularly at the cornerstones, of buildings.

Often in Babylonian times, there would just be an inscription cursing whoever had demolished the building to the point of seeing its foundations. But later, there was for example a longstanding tradition among Freemasons to embed small boxes of memorabilia in public buildings they built.

More successful, however, than cleverly hidden time capsules have been stone inscriptions out in plain sight. And indeed much of our knowledge of ancient human history and culture comes from just such objects. Sometimes they are part of large surviving architectural structures. But one famous example (key to the deciphering of cuneiform) is simply carved into the side of a cliff in what’s now Iran:

Ali Majdfar/Getty Images

Such inscriptions were common in the ancient world (as their tamer successors are common today). But somehow their irony was well captured by what is probably my single all-time favorite poem, Shelley’s “Ozymandias” (named after Ramses II of Egypt):

“I met a traveller from an antique land,
Who said—Two vast and trunkless legs of stone
Stand in the desert.
…
And on the pedestal, these words appear:
‘My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!’
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.”

If there was a “Risks” section to a prospectus for the beacon project, this might be a good exhibit for it.

Of course, in addition to intentional “showoff” inscriptions, ancient civilizations left plenty of “documentary exhaust” that’s still around in one form or another today. A decade ago, for example, I bought off the web (and, yes, I’m pretty sure it’s genuine) a little cuneiform tablet from about 2100 BC:

Stephen Wolfram

It turns out to be a contract saying that a certain Mr. Lu-Nanna is receiving 1.5 gur (about 16 cubic feet) of barley in the month of Dumuzi (Tammuz/June-July), and that in return he should pay out certain goods in September-November.

Most surviving cuneiform tablets are about things like this. One in a thousand or so are about things like math and astronomy, though. And when we look at these tablets today, it’s certainly interesting to see how far the Babylonians had gotten in math and astronomy. But (with the possible exception of some astronomical parameters) after a while we don’t really learn anything more from such tablets.

And that’s a lesson for our efforts now. If we put math or science facts in our beacons, then, yes, it shows how far we’ve gotten (and of course to make the best impression we should try to illustrate the furthest reaches of, for example, today’s math, which will be quite hard to do). But it feels a bit like job applicants writing letters that start by explaining basic facts. Yes, we already know those; now tell us something about yourselves!

But what’s the best way to do that? In the past the channel with the highest bandwidth was the written word. In today’s world, maybe video—or AI simulation—goes further. But there’s more—and we’re starting to see this in modern archaeology. The fact is that pretty much any solid object carries microscopic traces of its history. Maybe it’s a few stray molecules—say from the DNA of something that got onto an eating utensil. Maybe it’s microscopic scratches or cracks in the material itself, indicating some pattern of wear.

Atomic force microscopy gives us the beginning of one way to systematically read such things out. But as molecular-scale computing comes online, such capabilities will grow rapidly. And this will give us access to a huge repository of “historical exhaust.”

We won’t immediately know the name “Lu-Nanna.” But we might well know their DNA, the DNA of their scribe, what time of day their tablet was made, and what smells and maybe even sounds there were while the clay was drying. All of this one can think of as a form of “sensory data”—once again giving us information on “what happened,” though with no interpretation of what was considered important.

Messages in Space

OK, but our objective is to put information about our civilization out into space. So what’s the history of previous efforts to do that? Well, right now there are just four spacecraft outside our solar system (and another one that’s headed there), and there are under 100 spacecraft more-or-less intact on various planetary surfaces (not counting hard landings, melted spacecraft on Venus, etc.). And at some level a spacecraft itself is a great big “message”, illustrating lots of technology and so on.

Wolfram Language

Probably the largest amounts of “design information” will be in the microprocessors. And although radiation hardening forces deep space probes to use chip designs that are typically a decade or more behind the latest models, something like the New Horizons spacecraft launched in 2006 still has MIPS R3000 CPUs (albeit running at 12 MHz) with more than 100,000 transistors.

There are also substantial amounts of software, typically stored in some kind of ROM. Of course, it may not be easy to understand, even for humans—and indeed just last month, firing backup thrusters on Voyager 1 that hadn’t been used for 37 years required deciphering the machine code for a long-extinct custom CPU.

The structure of a spacecraft tells a lot about human engineering and its history. Why was the antenna assembly that shape? Well, because it came from a long lineage of other antennas that were conveniently modeled and manufactured in such-and-such a way, and so on.

But what about more direct human information? Well, there are often little labels printed on components by manufacturers. And in recent times there’s been a trend of sending lists of people’s names (more than 400,000 on New Horizons) in engravings, microfilm or CDs/DVDs. (The MAVEN Mars mission also notably carried 1000+ publicly submitted haikus about Mars, together with 300+ drawings by kids, all on a DVD.) But on most spacecraft the single most prominent piece of “human communication” is a flag:

NASA

A few times, however, there have been explicit, purposeful plaques and things displayed. For example, on the leg of Apollo 11’s lunar module this was attached (with the Earth rendered in a stereographic projection cut in the middle of the Atlantic around 20°W):

NASA

Each Apollo mission to the Moon also planted an American flag (most still “flying” according to recent high-res reconnaissance)—strangely reminiscent of shrines to ancient gods found in archaeological remains:

NASA

The very first successful moon probe (Luna 2) carried to the Moon this ball-like object—which was intended to detonate like a grenade and scatter its pentagonal facets just before the probe hit the lunar surface, proclaiming (presumably to stake a claim): “USSR, September 1959”:

Courtesy of the Cosmosphere, Hutchinson, KS

On Mars, there’s a plaque that seems more like the cover sheet for a document—or that might be summarized as “putting the output of some human cerebellums out in the cosmos” (what kind of personality analysis could the aliens do from those signatures?):

NASA/JPL-Caltech/MSSS

There’s another list of names, this time an explicit memorial for fallen astronauts, left on the Moon by Apollo 15. But this time it comes with a small figurine, strangely reminiscent of the figurines we find in early archaeological remains:

NASA

Figurines have actually been sent on other spacecraft too. Here are some LEGO ones that went to Jupiter on the Juno spacecraft (from left to right: mythological Jupiter, mythological Juno, and real Galileo, complete with LEGO attachment):

NASA/JPL-Caltech/KSC

Also on that spacecraft was a tribute to Galileo—though all this will be vaporized when the spacecraft deorbits Jupiter later in 2018 to avoid contaminating any moons:

NASA/JPL-Caltech/KSC

There are “MarsDials” on several Mars landers, serving as sundials and color calibration targets. The earlier ones had the statement “Two worlds, one sun”—along with the word “Mars” in 22 languages; on later ones the statement was the less poetic “On Mars, to explore”:

NASA/JPL-Caltech/MSSS

As another space trinket, the New Horizons spacecraft that recently passed Pluto has a simple Florida state quarter on board—which at least was presumably easy and cheap to obtain near its launch site.

But the most serious—and best-known—attempts to provide messages are the engraved aluminum plaques on the Pioneer 10 and 11 spacecraft that were launched in 1972 and 1973 (though are sadly now out of contact):

QAI Publishing/UIG/Getty Images

NASA

I must say I have never been a big fan of this plaque. It always seemed to me too clever by half. My biggest beef has always been with the element at the top left. The original paper (with lead author Carl Sagan) about the plaque states that this “should be readily recognizable to the physicists of other civilizations.”

But what is it? As a human physicist, I can figure it out: it’s an iconic representation of the hyperfine transition of atomic hydrogen—the so-called 21-centimeter line. And those little arrows are supposed to represent the spin directions of protons and electrons before and after the transition. But wait a minute: electrons and protons are spin-1/2, so they act as spinors. And yes, traditional human quantum mechanics textbooks do often illustrate spinors using vectors. But that’s a really arbitrary convention.

Oh, and why should we represent quantum mechanical wavefunctions in atoms using localized lines? Presumably the electron is supposed to “go all the way around” the circle, indicating that it’s delocalized. And, yes, you can explain that iconography to someone who’s used to human quantum mechanics textbooks. But it’s about as obscure and human-specific as one can imagine. And, by the way, if one wants to represent 21.106-centimeter radiation, why not just draw a line precisely that length, or make the plaque that size (it actually has a width of 22.9 centimeters)!

I could go on and on about what’s wrong with the plaque. The rendering conventions for the (widely mocked) human figures, especially when compared to those for the spacecraft. The use of an arrow to show the spacecraft direction (do all aliens go through a stage of shooting arrowheads?). The trailing (binary) zeros to cover the lack of precision in pulsar periods.

The official key from the original paper doesn’t help the case, and in fact the paper lays out some remarkably elaborate “science IQ test” reasoning needed to decode other things on the plaque:

NASA

After the attention garnered by the Pioneer plaques, a more ambitious effort was made for the Voyager spacecraft launched in 1977. The result was the 12-inch gold-plated Voyager Golden Record, with an “album cover”:

NASA

In 1977, phonograph records seemed like “universally obvious technology.” Today of course even the concept of analog recording is (at least for now) all but gone. And what of the elaborately drawn “needle” on the top left? In modern times the obvious way to read the record would just be to image the whole thing, without any needles tracking grooves.

But, OK, so what’s on the record? There are some spoken greetings in 55 languages (beginning with one in a modern rendering of Akkadian), along with a 90-minute collection of music from around the world. (Somehow I imagine an alien translator—or, for that matter, an AI—trying in vain to align the messages between the words and the music.) There’s an hour of recorded brainwaves of Carl Sagan’s future wife (Ann Druyan), apparently thinking about various things.

Then there are 116 images, encoded in analog scan lines (though I don’t know how color was done). Many were photographs of 1970s life on Earth. Some were “scientific explanations”, which are at least good exercises for human science students of the 2010s to interpret (though the real-number rounding is weird, there are “9 planets”—and it’s charming to see the stencil-and-ink rendering):

NASA

NASA

Among efforts after Voyager have been the (very 1990s-style) CD of human Mars-related “Visions of Mars” fiction on the failed 1996 Mars 96 spacecraft, as well as the 2012 “time capsule” CD of images and videos on the EchoStar 16 satellite in geostationary orbit around Earth:

Wolfram Language

Yes, when I proposed the “alien flashcards” for scientists in the movie Arrival, I too started with binary—though in modern times it’s easy and natural to show the whole nested pattern of successive digit sequences:

The Planetary Society
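For the curious, the nested pattern in question is easy to reproduce: write successive integers in binary, one per row, and the columns of digits form a self-similar pattern. A minimal Python sketch (the dot-and-hash rendering is just for display):

```python
# Write 0, 1, 2, ... in binary, one number per row, padded to equal width.
# The columns form a nested, self-similar pattern: the rightmost bit
# alternates every row, the next bit every two rows, and so on.
n = 16
width = (n - 1).bit_length()          # 4 bits are enough for 0..15
rows = [format(i, f"0{width}b") for i in range(n)]
for r in rows:
    print(r.replace("0", ".").replace("1", "#"))
```

The point, as with the flashcards, is that the pattern exhibits its own rule: no prior agreement about “what binary is” should be needed to notice the regularity.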

A slightly different kind of plaque was launched back in 1976 on the LAGEOS-1 satellite that’s supposed to be in polar orbit around the Earth for 8.4 million years. There are the binary numbers, reminiscent of Leibniz’s original “binary medal”. And then there’s an image of the predicted effect of continental drift (and what about sea level?) from 268 million years ago, to the end of the satellite’s life—that to me gives off a certain “so, did we get it right?” vibe:

NASA

There was almost an engraved diamond plaque sent on the Cassini mission to Saturn and beyond in 1997, but as a result of human disagreements, it was never sent—and instead, in a very Ozymandias kind of way, all that’s left on the spacecraft is an empty mounting pedestal, whose purpose might be difficult to imagine.

Still another class of artifacts sent into the cosmos are radio transmissions. And until we have better-directed radio communications (and 5G will help), we’re radiating a certain amount of (increasingly encrypted) radio energy into the cosmos. The most intense ongoing transmissions remain the 50 Hz or 60 Hz hum of power lines, as well as the perhaps-almost-pulsar-like Ballistic Missile Early Warning System radars. But in the past there’ve been specific attempts to send messages for aliens to pick up.

The most famous was sent by the Arecibo radio telescope in 1974. Its repetition length was a product of two primes, intended to suggest assembly as a rectangular array. It’s an interesting exercise for humans to try to decipher the resulting image. Can you see the sequence of binary numbers? The schematic DNA, and the bitvectors for its components? The telescope icon? And the little 8-bit-video-game-like human?

Wolfram Language
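The decoding hint here is purely arithmetic, and easy to check: 1679, the number of bits in the message, is a semiprime, so the only nontrivial rectangular layouts are 23 × 73 and 73 × 23. A quick Python sketch (using a placeholder bitstream, since the actual message bits aren’t reproduced here):

```python
# The Arecibo message is 1679 bits long. 1679 is a semiprime (23 * 73),
# so a recipient trying rectangular layouts has only two candidates,
# one of which makes the pictures "come out right".
length = 1679
factor_pairs = [(a, length // a)
                for a in range(2, int(length ** 0.5) + 1) if length % a == 0]
print(factor_pairs)  # the single nontrivial factorization

# Placeholder bitstream (the real message's bits aren't reproduced here):
bits = [0] * length
grid = [bits[r * 23:(r + 1) * 23] for r in range(73)]  # 73 rows of 23 bits
print(len(grid), len(grid[0]))
```

The same trick recurs in many “first contact” proposals: pick a message length whose factorization is unambiguous, and the geometry of the layout comes along for free.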

(There’ve been other messages sent, including a Doritos ad, a Beatles song, some Craigslist pages and a plant gene sequence—as well as some arguably downright embarrassing “artworks”.)

Needless to say, we pick up radio transmissions from the cosmos that we don’t understand fairly often. But are they signs of intelligence? Or “merely physics”? As I’ve said, the Principle of Computational Equivalence tells us there isn’t ultimately a distinction. And that, of course, is the challenge of our beacons project.

It’s worth mentioning that in addition to what’s been sent into space, there are a few messages on Earth specifically intended for at least a few thousand years in the future. Examples are the 2000-year equinox star charts at the Hoover Dam, and the long-planned-but-not-yet-executed 10,000-year “stay away; it’s radioactive” warnings (or maybe it’s an “atomic priesthood” passing information generation to generation) for facilities like the WIPP nuclear waste repository in southeastern New Mexico. (Not strictly a “message”, but there’s also the “10,000 year clock” being built in West Texas.)

A discussion of extraterrestrial communication wouldn’t be complete without at least mentioning the 1960 book “Lincos: Design of a Language for Cosmic Intercourse”—my copy of which wound up on the set of Arrival. The idea of the book was to use the methods and notation of mathematical logic to explain math, science, human behavior and other things “from first principles”. Its author, Hans Freudenthal, had spent decades working on math education—and on finding the best ways to explain math to (human) kids.

Lincos was created too early to benefit from modern thinking about computer languages. And as it was, it used the often almost comically abstruse approach of Whitehead and Russell’s 1910 Principia Mathematica—in which even simple ideas become notationally complex. When it came to a topic like human behavior Lincos basically just gave examples, like small scenes in a stage play—but written in the notation of mathematical logic.

Yes, it’s interesting to try to have a symbolic representation for such things—and that’s the point of my symbolic discourse language project. But even though Lincos was at best just at the very beginning of trying to formulate something like this, it was still the obvious source for attempts to send “active SETI” messages starting in 1999, and some low-res bitmaps of Lincos were transmitted to nearby stars.

Science Fiction and Beyond

For our beacons project, we want to create human artifacts that will be recognized even by aliens. The related question of how alien artifacts might be recognizable has been tackled many times in science fiction.

Most often there’s something that just “doesn’t look natural,” either because it’s obviously defying gravity, or because it’s just too simple or perfect. For example, in the movie 2001, when the black cuboid monolith with its exact 1:4:9 side ratios shows up on Stone Age Earth or on the Moon, it’s obvious it’s “not natural.”

On the flip side, people in the 1800s argued that the fact that, while complex, a human-made pocket watch was so much simpler than a biological organism meant that the latter could only be an “artifact of God.” But actually I think the issue is just that our technology isn’t advanced enough yet. We’re still largely relying on engineering traditions and structures where we can readily foresee every aspect of how our system will behave.

But I don’t think this will go on much longer. As I’ve spent many years studying, out in the computational universe of all possible programs it’s very common that the most efficient programs for a particular purpose don’t look at all simple in their behavior (and in fact this is a somewhat inevitable consequence of making better use of computational resources). And the result is that as soon as we can systematically mine such programs (as Darwinian evolution and neural network training already begin to), we’ll end up with artifacts that no longer look simple.

Ironically—but not surprisingly, given the Principle of Computational Equivalence—this suggests that our future artifacts will often look much more like “natural systems.” And indeed our current artifacts may look as primitive in the future as many of those produced before modern manufacturing look to us today.

Some science fiction stories have explored “natural-looking” alien artifacts, and how one might detect them. Of course it’s mired in the same issues that I’ve been exploring throughout this post—making it very difficult for example to tell for certain even whether the strangely red and strangely elongated interstellar object recently observed crossing our solar system is an alien artifact, or just a “natural rock.”

The Space of All Possible Civilizations

A major theme of this post has been that “communication” requires a certain sharing of “cultural context.” But how much sharing is enough? Different people—with at least somewhat different backgrounds and experiences—can usually understand each other well enough for society to function, although as the “cultural distance” increases, such understanding becomes more and more difficult.

Over the course of human history, one can imagine a whole net of cultural contexts, defined in large part (at least until recently) by place and time. Neighboring contexts are typically closely connected—but to get a substantial distance, say in time, often requires following a quite long chain of intermediate connections, a bit like one might have to go through a chain of intermediate translations to get from one language to another.

Particularly in modern times, cultural context often evolves quite significantly even over the course of a single human lifetime. But usually the process is gradual enough that an individual can bridge the contexts they encounter—though of course there’s no lack of older people who are at best confused by the preferences and interests of the young (think modern social media, etc.). And indeed were one suddenly to wake up a century hence, it’s fairly certain that some of the cultural context would be disorientingly different.

But, OK, can we imagine making some kind of formal theory of cultural contexts? To do so would likely in effect require describing the space of all possible civilizations. And at first this might seem utterly infeasible.

But when we explore the computational universe of possible programs we are looking at a space of all possible rules. And it’s easy to imagine defining at least some feature of a civilization by some appropriate rule—and different rules can lead to dramatically different behavior, as in these cellular automata:

Wolfram Language
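The divergence between rules is easy to see even in a minimal simulation. Here is a purely illustrative Python sketch of elementary cellular automata (the particular rules, grid width and step count are my choices): from the same single-cell start, rule 90 gives a nested pattern, rule 30 apparent randomness, and rule 254 simple uniform growth.

```python
def ca_step(cells, rule):
    """Apply one step of an elementary cellular automaton (zero boundary)."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right
        out[i] = (rule >> neighborhood) & 1  # standard Wolfram rule numbering
    return out

def run(rule, width=31, steps=15):
    """Evolve from a single 'on' cell in the middle; return the list of rows."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        history.append(row)
    return history

# Nested pattern, apparent randomness, uniform growth:
for rule in (90, 30, 254):
    print(f"rule {rule}:")
    for row in run(rule):
        print("".join(".#"[c] for c in row))
```

Even with identical initial conditions, the three histories look qualitatively nothing alike—which is the point: the rule, not the input, carries the character of the behavior.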

But, OK, what would “communication” mean in this context? Well, as soon as these rules are computationally universal (and the Principle of Computational Equivalence implies that except in trivial cases they always will be), there’s got to be some way to translate between them. More specifically, given one universal rule, there must be some program for it—or some class of initial conditions—that makes it emulate any other specified rule. Or, in other words, it must be possible to implement an interpreter for any given rule in the original rule.

We might then think of defining a distance between rules, determined by the size or complexity of the interpreter necessary to translate between them. But while this sounds good in principle, it’s certainly not an easy thing to carry out in practice. And it doesn’t help that questions of interpretability can be formally undecidable, so there’s no general upper bound on the size or complexity of the translator between rules.

But at least conceptually, this gives us a chance to think about how a “communication distance” might be defined. And perhaps one could imagine a first approximation for the simplified case of neural networks, in which one just asks how difficult it is to train one network to act like another.
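That neural-network version of the idea can be caricatured in a few lines: treat one function as a “teacher,” fit a simpler “student” to its behavior, and take the residual error as a stand-in for how hard the teacher is to emulate in the student’s model class. Here is a toy pure-Python sketch; the linear student, the particular teacher functions and all the hyperparameters are my own invented choices, not anything canonical.

```python
import math
import random

def train_linear_student(teacher, steps=3000, lr=0.05, n_samples=200, seed=0):
    """Fit a linear 'student' y = w*x + b to a teacher function by gradient
    descent on mean squared error; return the residual error, a crude proxy
    for how hard the teacher is to emulate with a linear model."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
    ys = [teacher(x) for x in xs]
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            gw += 2 * err * x / n_samples
            gb += 2 * err / n_samples
        w -= lr * gw
        b -= lr * gb
    return sum(((w * x + b) - y) ** 2 for x, y in zip(xs, ys)) / n_samples

# An almost-linear teacher is "close"; a highly nonlinear one is "far":
print(train_linear_student(lambda x: 2 * x + 1))        # tiny residual
print(train_linear_student(lambda x: math.sin(5 * x)))  # large residual
```

The residual plays the role of a “communication distance”: near zero when the teacher already lives in the student’s world, and irreducibly large when it doesn’t.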

As a more down-to-earth analogy to the space of cultural contexts, we could consider human languages, of which there are about 10,000 known. One can assess similarities between languages by looking at their words, and perhaps by looking at things like their grammatical structures. And even though to a first approximation all languages can talk about the same kinds of things, languages can at least superficially have significant differences.

But for the specific case of human languages, there’s a lot determined by history. And indeed there’s a whole evolutionary tree of languages that one can identify, that effectively explains what’s close and what’s not. (Languages are often related to cultures, but aren’t the same. For example, Finnish is very different as a language from Swedish, even though Finnish and Swedish cultures are fairly similar.)
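The crudest version of word-based similarity is just set overlap. Here is a toy sketch using five-word samples (exact string match is of course a very rough proxy for real cognate analysis, and the word lists are purely illustrative), which at least reproduces the Finnish/Swedish point:

```python
def jaccard(a, b):
    """Jaccard similarity between two word sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Words for: dog, house, water, book, mother
swedish   = ["hund", "hus", "vatten", "bok", "mor"]
norwegian = ["hund", "hus", "vann", "bok", "mor"]
finnish   = ["koira", "talo", "vesi", "kirja", "äiti"]

print(jaccard(swedish, norwegian))  # high: closely related languages
print(jaccard(swedish, finnish))    # no overlap at all in this sample
```

Real computational historical linguistics uses much subtler measures (sound correspondences, grammatical features), but the shape of the exercise—distances in a space of languages—is the same.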

In the case of human civilizations, there are all sorts of indicators of similarity one might use. How similar do their artifacts look, say as recognized by neural networks? How similar are their social, economic or genealogical networks? How similar are quantitative measures of their patterns of laws or government?

Of course, all human civilizations share all sorts of common history—and no doubt occupy only some infinitesimal corner in the space of all possible civilizations. And in the vast majority of potential alien civilizations, it’s completely unrealistic to expect that the kinds of indicators we’re discussing for human civilizations could even be defined. So how might one characterize a civilization and its cultural context? One way is to ask how it uses the computational universe of possible programs. What parts of that universe does it care about, and what not?

Now perhaps the endpoint of cultural evolution is to make use of the whole space of possible programs. Of course, our actual physical universe is presumably based on specific programs—although within the universe one can perfectly well emulate other programs.

And presumably anything that we could identify as a definite “civilization” with a definite “cultural context” must make use of some particular type of encoding—and in effect some particular type of language—for the programs it wants to specify. So one way to characterize a civilization is to imagine what analog of the Wolfram Language (or in general what symbolic discourse language) it would invent to describe things.

Yes, I’ve spent much of my life building the single example of the Wolfram Language intended for humans. And now what I’m suggesting is to imagine the space of all possible analogous languages, with all possible ways of sampling and encoding the computational universe.

But that’s the kind of thing we need to consider if we’re serious about alien communication. And in a sense just as we might say that we’re only going to consider aliens who live within a certain number of light years of us, so also we may have to say that we’ll only consider aliens where the language defining their cultural context is within a certain “translation distance” of ours.

How can we study this in practice? Well, of course we could think about what analog of the Wolfram Language other creatures with whom we share the Earth might find useful. We could also think about what AIs would find useful—though there is some circularity to this, insofar as we are creating AIs for the purpose of furthering our human goals. But probably the best path forward is just to imagine some kind of abstract enumeration of possible Wolfram-Language analogs, and then to start studying what methods of translation might be possible between them.

What Should We Actually Send?

OK, so there are lots of complicated intellectual and philosophical issues. But if we’re going to send beacons about the achievements of our civilization into space, what’s the best thing to do in practice?

A few points are obvious. First, even though it might seem more “universal,” don’t send lots of content that’s formally derivable. Yes, we could say 2+2=4, or state a bunch of mathematical theorems, or show the evolution of a cellular automaton. But other than demonstrating that we can successfully do computation (which isn’t anything special, given the Principle of Computational Equivalence), we’re not really communicating anything by sending material like this. In fact, the only real information about us is in our choice of what to send: which arithmetic facts, which theorems, etc.
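That “information in the choice” can be made quantitative: if the recipient can derive all n candidate facts for themselves, then sending some particular k of them conveys at most log2(C(n, k)) bits, i.e. just enough to say which ones we picked. A quick illustration (the numbers here are arbitrary):

```python
import math

def bits_in_choice(n, k):
    """Bits conveyed by selecting k items from n derivable candidates:
    log2 of the number of possible selections."""
    return math.log2(math.comb(n, k))

# Choosing 10 theorems out of 1000 derivable ones conveys under 80 bits—
# however many pages the theorems themselves might fill.
print(bits_in_choice(1000, 10))
```

So a beacon full of derivable mathematics is, informationally, a very short message about us, however long it looks.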

Here’s an ancient Egyptian die. And, yes, it’s interesting that they knew about icosahedra, and chose to use them. But the details of the icosahedral shape don’t tell us anything: it’s just the same as any other icosahedron:

The Metropolitan Museum of Art

OK, so an important principle is: if we want to communicate about ourselves, send things that are special to us—which means all sorts of arbitrary details about our history and interests. We could send an encyclopedia. Or if we have more space, we could send the whole content of the web, or scans of all books, or all available videos.

There’s a point, though, at which we will have sent enough: where basically there’s the raw material to answer any reasonable question one could ask about our civilization and our achievements.

But how does one make this as efficient as possible? Well, at least for general knowledge I’ve spent a long time trying to solve that problem. Because in a sense that’s what Wolfram|Alpha is all about: creating a system that can compute the answers to as broad a range as possible of questions.

So, yes, if we send a Wolfram|Alpha, we’re sending knowledge of our civilization in a concentrated, computational form, ready to be used as broadly as possible.

Of course, at least the public version of Wolfram|Alpha is just about general, public knowledge. So what about more detailed information about humans and the human condition?

Well, there’re always things like email archives, and personal analytics, and recordings, and so on. And, yes, I happen to have three decades of rather extensive data about myself, that I’ve collected mostly because it was easy for me to do.

But what could one get from that? Well, I suspect there’s enough data there that at least in principle one could construct a bot of me from it: in other words, one could create an AI system that would respond to things in pretty much the same way I would.
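As the barest caricature of such a bot, one could build a word-bigram Markov model over an archive of someone’s writing, so that generated replies at least echo their word choices and turns of phrase. Here is a sketch (far short of anything that would actually “respond as I would”; the sample archive and all names are invented for illustration):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Word-bigram Markov model: map each word to its observed successors."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain from a starting word, picking successors at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

archive = ("I think this is a deep problem. I think communication needs "
           "shared context. I suspect shared context is hard to send.")
chain = build_chain(archive)
print(generate(chain, "I"))
```

A serious version would of course need vastly more data and a vastly better model—but the underlying move is the same: compress a person’s recorded behavior into something that can produce new behavior.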

Of course, one could imagine just “going to the source” and starting to read out the content of a human brain. We don’t know how to do that yet. But if we’re going to assume that the recipients of our beacons have advanced further, then we have to assume that given a brain, they could tell what it would do.

Indeed, perhaps the most obvious thing to send (though it’s a bit macabre) would just be whole cryonically preserved humans (and, yes, they should keep well at the temperature of interstellar space!). Of course, it’s ironic how similar this is to the Egyptian idea of making mummies—though our technology is better (even if we still haven’t solved the problem of cryonics).

Is there a way to do even better, though? Perhaps by using AI and digital technology, rather than biology. Well, then we have a different problem. Yes, I expect we’ll be able to make AIs that represent any aspect of our civilization that we want. But then we have to decide what the “best of our civilization” is supposed to be.

It’s very related to questions about the ethics and “constitution” we should define for the AIs—and it’s an issue that comes back directly to the dynamics of our society. If we were sending biological humans then we’d get whatever bundle of traits each human we sent happened to have. But if we’re sending AIs, then somehow we’d have to decide which of the infinite range of possible characteristics we’d assign to best represent our civilization.

Whatever we might send—biological or digital—there’s absolutely no guarantee of any successful communication. Sure, our person or our AI might do their best to understand and respond to the alien that picked them up. But it might be hopeless. Yes, our representative might be able to identify the aliens, and observe the computations they’re doing. But that doesn’t mean that there’s enough alignment to be able to communicate anything we might think of as meaning.

It’s certainly not encouraging that we haven’t yet been able to recognize what we consider to be signs of extraterrestrial intelligence anywhere else in the universe. And it’s also not encouraging that even on our own planet we haven’t succeeded in serious communication with other species.

But just like Darius—or even Ozymandias—we shouldn’t give up. We should think of the beacons we send as monuments. Perhaps they will be useful for some kind of “afterlife.” But for now they serve as a useful rallying point for thinking about what we’re proud of in the achievements of our civilization—and what we want to capture and celebrate in the best way we can. And I’ll certainly be pleased to contribute to this effort the computational knowledge that I’ve been responsible for accumulating.