In his 2007 book I Am a Strange Loop, author Douglas Hofstadter–best known for writing the Pulitzer Prize-winning Gödel, Escher, Bach: An Eternal Golden Braid–discusses his reaction to the untimely death of his wife, Carol, who died of a brain tumor in 1993 while the family was on sabbatical together in Italy. If you’re interested in simulation and how it might work within the human brain, I highly recommend reading this book, especially the chapter devoted to this subject. He writes:

I realized… that although Carol had died, [a] core piece of her had not died at all, but that it lived on very determinedly in my brain…

The name “Carol” denotes, for me, far more than just a body, which is now gone, but rather a very vast pattern, a style, a set of things including memories, hopes, dreams, beliefs, loves, reactions to music, sense of humor, self-doubt, generosity, compassion, and so on. Those things are to some extent sharable, objective, and multiply instantiatable, a bit like software on a diskette.

This is interesting enough, but Hofstadter goes further and asserts not only that his wife’s “very vast pattern” exists, at least in piecemeal form, in a variety of locations, including within his brain, but that this pattern is capable of executing itself (or being executed):

Along with Carol’s desires, hopes, and so on, her own personal sense of “I” is represented in my brain, because I was so close to her, because I empathized so deeply with her, co-felt so many things with her, was so able to see things from inside her point of view when we spoke, whether it was her physical sufferings… or her greatest joys… or her fondest hopes or her reactions to movies or whatever.

For brief periods of time in conversations, or even in nonverbal moments of intense feeling, I was Carol, just as, at times, she was Doug. So her “personal gemma” (to borrow Stanislaw Lem’s term in his story “Non Serviam”) had brought into existence a somewhat blurry, coarse-grained copy of itself inside my brain, had created a secondary Gödelian swirl inside my brain (the primary one of course being my own self-swirl), a Gödelian swirl that allowed me to be her, or, said otherwise, a Gödelian swirl that allowed her self, her personal gemma, to ride (in simplified form) on my hardware.

In other words, without conscious effort, Hofstadter was running a “coarse-grained” simulation of his wife inside his own brain. I think this is a profound insight. If you’re married or similarly partnered, and especially if you’ve been partnered a long time, I’m willing to bet that you do the same all the time–that, unbidden, you imagine how your partner might react to a certain situation. And that’s because, over the years, your subconscious has constructed a simulation of his or her personality that has gotten better and better (i.e., has come to resemble reality more and more closely). And you don’t have conscious control over the simulations your subconscious is constantly running. None of us does.

I find it interesting that in the book, Hofstadter uses the word “simulation” only twice, and in a curious tone:

Without going into more detail, let me simply say that it makes perfect sense to discuss living animals and self-guiding robots in the same part of this book, for today’s technological achievements are bringing us ever closer to understanding what goes on in living systems that survive in complex environments. Such successes give the lie to the tired dogma endlessly repeated by John Searle that computers are forever doomed to mere “simulation” of the processes of life. If an automaton can drive itself a distance of two hundred miles across a tremendously forbidding desert terrain, how can this feat be called merely a “simulation”? It is certainly as genuine an act of survival in a hostile environment as that of a mosquito flying about a room and avoiding being swatted.

In the same paragraph, Hofstadter is both a) arguing that artificial brains carry no fundamental traits that would prevent them from being considered life forms and b) (seemingly) using the word “simulation” as a pejorative. Yet what is the “coarse-grained copy of itself” created by the “personal gemma” of his late wife that “rides” on the “hardware” of his brain but a simulation?

If we begin thinking of brains as multi-layered, overlapping, nesting simulations, both hierarchical and non-hierarchical–if we think of brains as simulation engines–then the idea of carrying around a coarse-grained copy of the person we know and love the most, a copy that, unbidden, is invoked (or invokes itself), becomes much easier to understand and accept. And the word “simulation” no longer carries a negative connotation in the context of discussing cognition, nor should it.

The terms “simulation” and “emulation” are often used incorrectly, and I’d like to do my part to set the record straight, because the distinction is an important one.

Simulation is, in the words of the Wikipedia entry for the term, “the imitation of the operation of a real-world process or system over time”. A simulator is a computing device or software program that accomplishes this.

Emulation is the reproduction of the behavior of a computing device or software program by executing its defining code within the context of another device or program. The Wikipedia entry for emulator describes it as “hardware or software or both that duplicates (or emulates) the functions of a first computer system (the guest) in a different second computer system (the host), so that the emulated behavior closely resembles the behavior of the real system”.

When we say we’re going to simulate something, we mean we’re going to imitate the behavior of the original, to whatever level of fidelity is required to accomplish the high-level goals of the simulation. When we say we’re going to emulate something, we mean we’re going to actually execute its code within a larger simulation so that the emulated device or program behaves exactly as it would in a comparable real-world scenario.

Only some types of objects can be emulated. In order for emulation to be possible, an object has to be either a software program, or a computing device that can be reduced to a software program. We can’t emulate an automobile, since an automobile is a physical object with physical processes. We have to simulate the operation of the engine, the performance of the suspension, how the tires react to road conditions, and so on. But in theory, we could emulate computer-based subsystems on the auto, were doing so desirable.
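
To make the distinction concrete, here’s a minimal sketch in Python. Both toy examples are my own inventions: the first function imitates a physical process numerically, while the second actually executes the defining “code” of a made-up machine.

```python
# A minimal sketch of the simulation/emulation distinction.

# SIMULATION: numerically imitate the behavior of a physical system
# (a falling object). Nothing here executes the object's "code" --
# a physical object has none; we just model its dynamics.
def simulate_fall(height_m, dt=0.001, g=9.81):
    t, v, h = 0.0, 0.0, height_m
    while h > 0:
        v += g * dt
        h -= v * dt
        t += dt
    return t  # approximate time to hit the ground, to model fidelity

# EMULATION: actually execute the defining code of a computing device.
# Here a made-up two-instruction machine ("ADD n" / "PRINT") runs,
# instruction by instruction, on our host machine.
def emulate(program):
    accumulator = 0
    for line in program:
        op, *args = line.split()
        if op == "ADD":
            accumulator += int(args[0])
        elif op == "PRINT":
            print(accumulator)

print(round(simulate_fall(100.0), 2))  # ~4.52 s, within integration error
emulate(["ADD 2", "ADD 3", "PRINT"])   # prints 5, exactly as the "real"
                                       # machine would
```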

In an earlier post on the question of whether we’re living in a simulation, I wrote this:

It could… be argued that if we’re living inside a simulation, and if we were to discover evidence of this fact, it wouldn’t make any difference, since there would be no chance of “breaking out”. (I’m always amused by the endless stories in pop culture—from Tron to Star Trek: The Next Generation to The Matrix and many others—in which there appears to be a one-to-one correlation between software portrayals of entities and their physical equivalents, and it’s possible to transition from one state to the other, usually (and conveniently) looking quite similar in the process.)

Why do you think there would be no chance of breaking out? I too, have wondered before if we are living in some entity’s simulation or petri dish (and, sometimes wondered, am I a robot?) but my experience in this universe/simulation is that programmers/creators take shortcuts all the time and cannot design/implement things flawlessly. (Note that I’m not using creator to refer to a deity, but any creative entity.) There is inevitably an error in the networking and overlap of the larger simulation’s parts, where we can slip out of the simulation.

I replied in the comments, but I’d like to expand upon my comment here.

Any simulation would be inescapable because the medium of a simulation would be so very different from the medium of the beings that create it.

The reason that I believe that any simulation would be inescapable (absent external intervention, as described below) is not that I think any simulation creator might be infallible; far from it. I believe that any simulation would be inescapable because my strong hunch is that the medium (or substrate, or nature of the underlying universe, as you prefer) of a simulation would be so very different from the medium of the beings that create it.

Let’s try a gedankenexperiment. If you’re not familiar with John Conway’s Game of Life, it’s probably the most famous example of a cellular automaton. A two-dimensional grid of cells that can be “on” or “off” comprises the “universe” of the game. The grid is updated from generation to generation all at once, and four simple rules determine whether a given cell is on or off in the next generation. Conway’s Game of Life (which I’ll refer to here as the Game of Life) has been implemented on virtually every general-purpose computing device known to mankind.
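
If you’ve never seen it, the entire rule set fits in a few lines of code. Here’s a minimal sketch of one generation update, using a set of live-cell coordinates as the representation (one common choice among many):

```python
# A minimal sketch of the Game of Life: the "universe" is a set of
# (x, y) coordinates of "on" cells; everything else is "off".
from itertools import product

def neighbors(cell):
    """The eight cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation: a live cell with two or three live neighbors
    survives; a dead cell with exactly three live neighbors is born;
    every other cell is off in the next generation."""
    candidates = live.union(*(neighbors(c) for c in live))
    next_gen = set()
    for cell in candidates:
        n = len(neighbors(cell) & live)
        if n == 3 or (n == 2 and cell in live):
            next_gen.add(cell)
    return next_gen

# A "glider" pattern travels diagonally forever: after four
# generations it reappears shifted one cell down and to the right.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, offset by (1, 1)
```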

I once saw an estimate that if one had the Game of Life running on a monitor the size of the Solar System, that might be large enough for self-replicating life forms to emerge through natural selection. I can’t find this reference, but it’s not necessary, since my thought experiment doesn’t depend on the specific requirements but rather on the basic concept: that the Game of Life provides the substrate for a virtual universe rich enough that, given sufficient virtual time and space, intelligent life could evolve. (1) Let’s imagine that we have a virtual universe large enough, and the computing power to run it, and that we run a version of the Game of Life that does lead to self-replicating life forms.

In our thought experiment, we can run this simulated universe at super speed and so we enable the equivalent of hundreds of millions of years of evolution in a very short time. Perhaps we use artificial selection to accelerate the process of moving from simple to sentient creatures. Whatever the specific process, we reach the point where we have intelligent life forms living inside Conway’s Game of Life. It’s easy for me to imagine the existence of such life forms and vastly more difficult for me to imagine how they might work in practice. What would the equivalent of potential energy be in their world—organized blocks of specific patterns? Would they “consume” less intelligent life forms for their stored patterns? How would they perceive the world around them? What would they believe to be the nature of the universe? Yikes, this is complex!

Anyway, let’s imagine that we have in our version of the Game of Life intelligent, self-replicating life forms, and let’s imagine that they invent their equivalent of “technology”. We can imagine individual pixels in the Game of Life screen as roughly equivalent to atoms (or even sub-atomic particles), and very large structures (spanning millions or billions of pixels) that accomplish tasks under the command of the life forms. We can even imagine them creating computing devices, which is a pretty cool thought.

So now that our imaginary life forms have devices, advanced technologies, and even computers, they start to wonder, “Could we be living in a simulation? Could our very universe be nothing more than code running in some unfathomably large and powerful computer?” Of course, the answer is yes, which we know because we’re the ones running the simulation.

With all that as backdrop, two questions:

1. How would these life forms (could we call them “Game-of-Life-forms”?) develop technology sophisticated enough to enable them to perceive and demonstrate the existence of the simulation in which they exist?

2. If the Game-of-Life-forms manage to perceive the existence of the simulation in which they exist, and decide that they want to break out of it, that they want to inhabit a physical universe as opposed to a virtual universe, that they want not to exist simply at our whim, how would they do so? (2) How would they, using technology available to them, translate themselves from the Game of Life into our universe?

I can imagine answers to the first question. A Game-of-Life-form which is that universe’s equivalent of a physicist might posit that certain computational errors would creep into a simulation, and Game-of-Life-forms might search for such computational errors as proof of the simulation-based nature of their existence. This is analogous to humans who have posited some form of jitter at the most basic level of our own universe as a telltale sign that we are, in fact, simulated.

But for the life of me, even as a science fiction question, I can’t imagine a possible answer to the second, at least not without our cooperation. And perhaps that’s the only answer—a sympathetic human who wishes to fulfill their desires and comes up with a way to transfer them to physical form. Without that sympathetic human, how would our Game-of-Life-forms ever make the transition from their universe to ours?

In 2004, while I was serving as Chief Operating Officer for 3Dsolve (later acquired by Lockheed Martin), I wrote an article for Training and Simulation Journal called “A Moon Shot for e-Learning”. [Download Paper (PDF)] The unabridged version of this article was titled “A Moon Shot for Simulation Learning”, which more accurately captured the idea of it.

In the article, I proposed that the Department of Defense commit to having complete simulation-based e-learning courseware for every enlisted and non-commissioned specialty in all the branches of the armed forces within five years:

How lofty a goal is this? According to the US Army’s recruiting site, there are currently 189 Military Occupational Specialties (MOSes) in that service alone. Most MOSes have multiple levels. For example, 3Dsolve is currently building simulation-based e-learning courseware for the US Army Signal Center and School’s 74B10 Information Systems Operator/Analyst course. This is only the first level of training for 74Bs, who can go on to 74B20, 74B30, 74B40, and 74B50. Assuming an average of five levels per MOS, that gives us a rough estimate of 945 individual courses within the Army. Extrapolating out over the Navy, Air Force, and Marine Corps, we get a total of 3,780 individual courses throughout the DoD.

The most complex form of Interactive Multimedia Instruction (IMI) is Level IV, which is simulation-based. Assuming an average of 160 hours of Level IV instruction per specialty, at a nominal $25,000 per finished hour, we arrive at a figure of approximately $15 billion for the entire program. According to the Congressional Budget Office, the DoD budget over the period 2005-2009 will total $2.437 trillion. $15 billion represents just over one-half of one percent of that total. In other words, for about 1/160th of its overall budget over the next five years, the DoD would receive complete simulation-based e-learning courseware for every enlisted military specialty—nearly 4,000 courses in all. Not the page-turning, screen-scrolling, filmstrip-in-a-browser courseware of a few years ago, but challenging, compelling, and engaging courseware based on interactive simulations of the real world—courseware for the “Nintendo generation.”
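
The arithmetic behind that estimate is easy to reconstruct:

```python
# Reconstructing the back-of-the-envelope arithmetic quoted above.
courses = 189 * 5 * 4        # MOSes x levels x services = 3,780 courses
hours = 160                  # Level IV instruction hours per specialty
cost_per_hour = 25_000       # nominal dollars per finished hour
total = courses * hours * cost_per_hour
budget = 2.437e12            # projected DoD budget, 2005-2009

print(f"${total / 1e9:.2f} billion")    # $15.12 billion
print(f"{total / budget:.2%}")          # 0.62% -- just over half a percent
print(f"1/{budget / total:.0f}")        # 1/161 -- roughly 1/160th
```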

Did this happen? Absolutely not. We certainly have more in the way of simulation-based e-learning than we had eight years ago, but by no means are we close to having simulation-based courseware for every specialty in every branch.

I’d be interested in hearing from people with more of an insider view, but from my perspective, the biggest barriers to adopting a program such as I’ve outlined above have been essentially technical:

Standardization. I hoped that an effort such as I described would lead to much greater standardization between tools, on issues such as 3D file formats, scripting systems, and the like. Without the effort, the standardization simply hasn’t happened. As far as I can tell, we’re no closer to having universally accepted standards in the field of simulation-based e-learning than we were eight years ago.

Reusability. This is closely related to standardization and is another issue I had hoped would be forced to resolution by a moon shot-type effort. With enough standards in place—with agreement on 3D file formats, image formats, scripting languages, physics meta-data, and other issues—we could literally drag-and-drop assets from one training program to another. We could define a rifle, a building, a truck, or even a soldier once and not have to redefine it for each new program. We’re just not there.

Browser compatibility. The simple fact is that an effort such as I described is only possible if content can be played back in native browsers. Unlike the previous two items, this is an issue that I didn’t fully appreciate at the time. Web browsers are the one true universal client software standard we have. Every desktop computer, every laptop computer, every tablet, and every smartphone has a browser. For us to get to the promised land, our content has to play in browsers, no plug-ins or extensions needed. We’re close on this one, closer than we’ve ever been, but it has been a long haul.

From what I have seen, we may be on the verge of addressing all these issues, through a combination of a variety of browser improvements plus the creation of the Virtual World Framework. (I’ll defer a more technical discussion of VWF to my colleague Rett Crocker.) It may be time to revive my original call for a “moon shot” for simulation-based e-learning.

If you’ve seen the film Apollo 13—and if you haven’t, you should—you know the story of how the astronauts worked together with NASA personnel to devise a method of keeping their spacecraft alive to get them back to Earth after an explosion. But you may not know the entire story, nor how simulation played a critical role in the operation.

In the film, as in real life, an explosion aboard the service module led to an emergency in which the crew had perhaps 15 minutes to execute a complex set of operations that would enable them to use the lunar module as a lifeboat:

GENE KRANZ (FLIGHT DIRECTOR – WHITE)
Okay. Okay, guys! Listen up! Here’s the drill! We’re moving the astronauts over to the LM, we gotta get some oxygen up there.

MOCR ENGINEER
Right.

GENE KRANZ (FLIGHT DIRECTOR – WHITE)
TELMU, Control, I want an emergency power procedure, the essential hardware only!… GNC, EECOM! When we’re shutting down the Command Module at the same time they have to transfer the guidance system from one computer to the other, so I want those numbers up and ready when our guys are in position.

GNC – WHITE
Roger that.

SY LIEBERGOT (EECOM – WHITE)
Okay, we gotta transfer all control data over to the LM computer before the Command Module dies.

In 2005, the 35th anniversary of the Apollo 13 mission, IEEE Spectrum printed a series of articles on the event, going into a great deal of depth on how the crew and ground personnel reacted to the problems they faced. We can transition from the screenplay above — Flight Director Gene Kranz ordering a procedure to transfer power to the lunar module before the command module dies — to IEEE Spectrum’s account:

It doesn’t sound like a tall order—the lunar module had big, charged batteries and full oxygen tanks, all designed to last the duration of Apollo 13’s lunar excursion, some 33 hours on the surface—so it should have been a simple matter of hopping into the Aquarius, flipping a few switches to turn on the power and getting the life-support system running, right?

Unfortunately, spaceships don’t work like that. They have complicated, interdependent systems that have to be turned on in just the right sequence, as dictated by lengthy checklists. Miss a step and you can do irreparable damage.

So what the crew needed was a “lengthy checklist” telling them exactly how to power on the lunar module, and they needed it immediately. And since the crew made it home safely, we know they must have been given just such a checklist, and it must have worked. But an operation like this wasn’t standard procedure. Where would such a checklist come from?

As it turns out, the lunar module controller team “had already been working on that problem for months”.

A year earlier, during a pre-Apollo 10 simulation, the simulator controllers had given the crew and their support personnel a scenario that was “uncanny” in how much it resembled what actually happened when the oxygen tank exploded aboard Apollo 13. As in the actual emergency, in the simulation, the crew had to transfer power to the lunar module with a damaged command module attached, and as in the actual emergency, they had a short window of time in which to do so. The team called in reinforcements from the lunar module controllers but even together they were unable to figure out a solution in time, and in the simulation, the crew died. But this wasn’t seen as a problem by management:

“Many people had discussed the use of the LM as lifeboat, but we found out in this sim,” that exactly how to do it couldn’t be worked out in real time, [lunar module controller Bob] Legler says. At the time, the simulation was rejected as unrealistic, and it was soon forgotten by most. NASA “didn’t consider that an authentic failure case,” because it involved the simultaneous failure of so many systems, explains [lunar module branch chief James] Hannigan.

But the simulation nagged at the lunar module controllers. They had been caught unprepared and a crew had died, albeit only virtually. “You lose a crew, even in a simulation, and it’s doom,” says Hannigan. He tasked his deputy, Donald Puddy, to form a team to come up with a set of lifeboat procedures that would work, even with a crippled command module in the mix…

For the next few months after the Apollo 10 simulation, even as Apollo 11 made the first lunar landing and Apollo 12 returned to the moon, Puddy’s team worked on the procedures, looking at many different failure scenarios and coming up with solutions. Although the results hadn’t yet been formally certified and incorporated into NASA’s official procedures, the lunar module controllers quickly pulled them off the shelf after the Apollo 13 explosion. The crew had a copy of the official emergency lunar module activation checklist on board, but the controllers needed to cut the 30-minute procedure to the bare minimum.

The lunar module team’s head start stood them in good stead… By the time the crew actually got into the Aquarius and started turning it on, the backroom controllers estimated there were just 15 minutes of life left in the last fuel cell onboard the Odyssey.

To put it concisely:

During a simulation exercise a year prior to the Apollo 13 mission, an artificial disaster similar to the actual Apollo 13 event resulted in the simulated deaths of the crew. NASA management felt the simulation was unrealistic and did not mandate any follow-up. The lunar module controller team was unsatisfied with this and spent the next year devising a procedure that would allow the crew to survive such an event. When the explosion occurred aboard Apollo 13, the existence of this procedure enabled the crew to successfully transfer power to the lunar module and survive the return trip to Earth.

If it hadn’t been for the simulator, and for the dedication of the lunar module controller team, it’s quite likely that the Apollo 13 crew would have died.

If it hadn’t been for the simulator, and for the dedication of the lunar module controller team, it’s quite likely that the Apollo 13 crew would have died, unable to transfer power in time and trapped in a powerless, rapidly freezing spacecraft headed away from Earth at something like 25,000 miles per hour. Beyond what would have been a tragic loss of life, this could have had a dramatic effect on the US space program:

NASA DIRECTOR
This could be the worst disaster NASA’s ever experienced.

GENE KRANZ (FLIGHT DIRECTOR – WHITE)
With all due respect, sir, I believe this is gonna be our finest hour.

This is from my keynote address at MODSIM World Canada, delivered in June of 2010:

So what do soft drinks and exponential technology growth have to do with one another?

From John Sculley we know that however much you can persuade people to buy, that’s how much they’ll consume. This rule was formed around snack foods, but given how much we all use the Internet, my strong suspicion is that it applies equally to computing and communications. Let’s call this “Sculley’s Law”.

From Gordon Moore and Carver Mead, we know that computing power is doubling every 18 months, which equates to the price-performance of computing doubling in the same amount of time. This is Moore’s Law, but for the moment, let’s be generous to Dr. Mead and call it the “Moore-Mead Law”.

From Ray Kurzweil, we know that the Moore-Mead Law extends back to the beginning of the 20th Century, offering powerful historical evidence that exponential growth in computing power can survive technological paradigm shifts. This is an aspect of what is known as “Kurzweil’s Law of Accelerating Returns”, which tells us that in certain domains — specifically biology and technology — evolutionary processes tend to accelerate the pace of innovation.

From George Gilder, we know that total telecommunications bandwidth will triple every year for at least the next decade. This is “Gilder’s Law of the Telecosm”. Let’s simplify things and include in Gilder’s Law the related point that telecommunications bandwidth over any specific medium will double on the same time scale as the Moore-Mead Law.

And from Chris Anderson, we know that as the price of a commodity approaches zero, it becomes, in his words, “too cheap to matter”.

Do you see where all this is going? Computing and communications show every sign of continuing to increase in performance and decrease in cost at exponential rates for the foreseeable future. A single cycle of a CPU or a single bit of data delivered costs a hundredth of what it did a decade ago and a ten-thousandth of what it did two decades ago. And whatever we buy, that’s how much we consume.

However much faster Intel and other microprocessor vendors make their chips, we’ll use every cycle they give us. However much faster telecommunications vendors make their networks, we’ll use every bit they give us. And they’re going to keep giving us more and more. We need a new law that sums up all of this. How about this: “Generally speaking, as the price of a consumer commodity approaches zero, usage approaches infinity.”

Of course, that’s Economics 101. Price at zero consumption is infinity, and consumption at zero price is infinity. We need to qualify our law slightly. After all, do we expect that Pepsi could be made available in exponentially increasing amounts? Do we expect that if Pepsi were to lower its price to zero, people would consume infinite amounts of it? In 2005, the world consumed almost half a trillion liters of soft drinks. Were that to double every 18 months, in less than two decades, we’d be drinking the entire volume of Lake Ontario in soft drinks every year. Obviously there are limits to the production and consumption of tangible goods.
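
A quick back-of-the-envelope check of that claim, assuming a commonly cited figure of roughly 1,640 cubic kilometers for the volume of Lake Ontario:

```python
import math

# Rough check of the Lake Ontario claim. Assumes ~1,640 cubic
# kilometers for the lake's volume (a commonly cited figure).
consumption = 0.5e12                 # liters of soft drinks per year
lake_ontario = 1_640 * 1e12          # km^3 -> liters

doublings = math.log2(lake_ontario / consumption)  # ~11.7 doublings
print(f"{doublings * 1.5:.1f} years")              # ~17.5 years, i.e.,
                                                   # less than two decades
```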

So we’ll modify our law slightly. Let’s say this: “For a unit of any given intangible commodity, over time, its price tends to approach zero and its usage tends to approach infinity.” I’ll call this “Boosman’s Law of Accelerating Usage”.

You can find the complete keynote address, including the reasoning behind each of the points made above, here.

Reading this text fresh, two years later, it seems to me to hold up well. I’d change a word or two here or there, but the basic conclusion still feels right.

The obvious question posed by the law I’ve hypothesized is what we’ll do with all the computing power and bandwidth we’ll have. If, 10 years from now, our computers and our connections to the Internet, both wired and wireless, are 2^6.67 times faster (about 101.59 times faster), what will we do with that power and bandwidth? I’ll comment on that in future posts, or you can get an idea of where I’m going by reading the original address.
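
For the curious, the figure falls out of the doubling period:

```python
# Ten years at one doubling every 18 months: 10 / 1.5 = 6.67 doublings.
print(2 ** (10 / 1.5))   # ~101.59
```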

When it comes to simulations, the focus of our work here is on how to apply them to real-world problems. On this blog, we write about the future of simulation, and we speculate as to whether human brains are composed of simulations themselves. But a deeper, more metaphysical question has been asked by some, and it’s worth consideration: are we living in a simulation? Is everything we perceive nothing more than code running on an unfathomably powerful computer in an unknown universe?

It could be argued that if we’re living inside a simulation, then there’s no way of knowing it, so any discussion of the topic is pointless. Personally, I’m sympathetic to this point of view in the sense that pondering the answer to this question, while taking some level of effort, probably won’t make life any better (or worse) in the here and now, and we have a huge backlog of real-world (or simulated-real-world, as the case may be) problems to solve. But I’m reminded that I once believed it was impossible for us to know what might have preceded the existence of our universe, and yet physicists are making interesting progress on that very question.

It could also be argued that if we’re living inside a simulation, and if we were to discover evidence of this fact, it wouldn’t make any difference, since there would be no chance of “breaking out”. (I’m always amused by the endless stories in pop culture—from Tron to Star Trek: The Next Generation to The Matrix and many others—in which there appears to be a one-to-one correlation between software portrayals of entities and their physical equivalents, and it’s possible to transition from one state to the other, usually (and conveniently) looking quite similar in the process.) Again, I’m sympathetic to this point of view.

But perhaps the question of “are we living in a simulation?” is not one of physics or cosmology or software but of philosophy. And for thousands of years, we as a species have generally ascribed value to philosophy. The search for truth—the word “philosophy” itself comes from the Greek for “love of wisdom”—is a worthy pursuit. And so even if it turns out to be impossible for us to definitively answer this question, and even if we do answer it but it turns out there is nothing to be done about it—well, information wants to be free, yo.

I’ll return to this topic in the future with more on this question and what those who have considered the question have to say about it.

[I recently wrote here on the question of the future of science and the scientific method and how they would be influenced by simulation. In that entry, I referred to Dr. Rick Satava, Professor Emeritus at the University of Washington, who has written more on this topic than anyone I know. Rick was kind enough not only to read my piece but to comment quite thoughtfully on it. I asked him for permission to post his comments as a guest blog entry and he graciously agreed. Rick's comments are below. -- Frank Boosman]

Guest Post: Comments on Simulation-Based Science

Dr. Rick Satava, Professor Emeritus, University of Washington

I purposefully chose 10^8 because scientists (especially those in healthcare or involved in clinical or other research with patients or human subjects) routinely use 8 to 10 or so subjects (n) on the first iteration of an experiment, and if the results are good, then go higher, especially in genetics and similar complex fields. So if n = 10^8, that is 100 million experiments; by doing something 100 million times (each a little different from before, as in Monte Carlo analysis), we can optimize, as you point out, for the most likely best results. But just as importantly, there are usually only a few among the millions of results of the simulation that don’t ‘fit’ the hypothesis. (Read The Black Swan by Nassim Taleb about one-in-a-million events and see the implications for creativity and perhaps generating intelligence by ‘discovering’ the random events (outliers) that point to where the new discovery should be, as opposed to the other 999,999 that support one’s conventional idea.) This is possible now, especially when we have — literally — supercomputers at our fingertips thanks to parallel computing, the grid, and other technologies.

Using the above methodology of massive simulation is an attempt to bring creativity, imagination, intuition, etc., into the mainstream of science and the ‘scientific method’. I concur with Thomas Kuhn and Karl Popper that “the scientific method is dead” — that each new age of science (e.g., Classical, Renaissance, Age of Enlightenment, and Industrial Age) not only brought new technology, but depended upon an extension of the scientific method in order to accomplish the next revolution in science — hence Classical (observation), Renaissance (phenomenology and taxonomy), Enlightenment (experimentation), and Industrial (current hypothesis-driven scientific method). Each ‘age’ does not destroy the previous concept of science, but rather ‘stands on the shoulders of those who have gone before’ to create a more comprehensive understanding (and process) of the scientific method. (The scientific method is dead, long live the [new] scientific method). The Information Age is actually extending the scientific method through the use of simulation to integrate creativity, etc., as a formal part of the discovery process, as proposed above. Hence, as a structured approach to scientific discovery, massive simulation of current natural ‘laws’ to (re)prove their validity will result in outliers. Rather than discard these outliers, they are then ‘chosen’ (as a human brain would) as the ‘creative new idea’ which is the hypothesis (beginning) of the scientific method.

How has the scientific method changed?

Why is the outlier important, and why does it occur? One possible explanation is that as our (incomplete) understanding of science and the natural world expands, this understanding is becoming more complex. Initially an observation results in a fact, which is then investigated to reveal that the fact is actually part of a system-of-facts (a phenomenon), and after further investigation, that system-of-facts is actually but a small part of multiple other systems (i.e., a system-of-systems — hence the investigation of science as ‘systems-of-systems’). As the level of complexity increases with multiple different associated systems, not only does the amount of known facts increase, but there is an unknown process from which ‘emergent properties’ of the system-of-systems arise that are not properties of the sum of the individual systems — which is the accepted belief that ‘the whole is greater than the sum of the parts’. In essence, an emergent property is the unknown property, the ‘association’ (whatever that might be), that ‘binds’ two (or more) complex systems together.

Is there any practical value to this concept? Of course. By challenging current ‘irrefutable’ laws or common knowledge, we can re-evaluate knowledge which has been proven (validated) through hundreds or thousands of experiments. This is done through millions or hundreds of millions of ‘virtual experiments’ with computational simulation — and looking for the outliers, as indicated above. One could consider this approach a scientific methodology for generating a new hypothesis (artificial creativity). There are various ways of inserting the outlier (new idea) into the scientific method — through analogy, metaphor, exception to the rule, etc. — as the expression of a new hypothesis, which then uses the scientific method to prove or disprove the new hypothesis in real-world experimentation.
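
[As a toy illustration of this “run millions of virtual experiments, then mine the outliers” methodology, here’s a minimal sketch; the ‘law’ and its one-in-a-million hidden effect are invented for the example. — FB]

```python
import random
import statistics

# A million "virtual experiments" of a toy natural law. Almost all
# results obey the expected distribution; a rare, unmodeled effect
# occasionally shifts a result far from expectation.
def virtual_experiment():
    x = random.gauss(0, 1)           # what the accepted "law" predicts
    if random.random() < 1e-5:       # the rare hidden phenomenon
        x += 10
    return x

results = [virtual_experiment() for _ in range(10**6)]
mu = statistics.fmean(results)
sd = statistics.stdev(results)

# Rather than discarding the outliers, collect them: they are the
# candidate "black swans" that become new hypotheses for real-world
# experimentation.
outliers = [r for r in results if abs(r - mu) > 6 * sd]
print(f"{len(outliers)} candidate black swans out of {len(results)}")
```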

A current approach in scientific investigation is to use a multi-disciplinary approach and look at a problem or unknown phenomenon from many different ‘views’, i.e., the ‘360 degree’ approach. While I was at DARPA, an interesting concept arose for discovering something new, based upon the observation that as a new idea is refined, validated, expanded, and investigated through multiple iterations, there is less and less improvement with each iteration until a final ‘product’ occurs which has little if any improvement. This is a ‘pendulum effect’, in which an idea, product (or a company) has reached the maximum potential for the product, and unless there is a change in the product/approach through innovation, it will become irrelevant, obsolete, or replaced by a new and better one — Clayton Christensen’s “disruptive technology”. Awareness of this concept has revitalized Schumpeter’s expansion of Marxist economic theory of ‘creative destruction’ (old systems of wealth are destroyed so new systems can arise) to business and now to scientific inquiry as well.

On a separate yet parallel note (pun intended) to the previous concept of supercomputing (through massive parallel processing), about two-thirds of our brains are devoted to ‘visualization’, i.e., using the occipital lobe for ‘acquiring the image’ (retinal stimulation by light) and the forebrain for ‘interpreting the image’ (perception defined as the interpretation of retinal sensory signals). This ability to organize neural signals into a ‘model’ is what distinguishes primates (principally humans) from lower species. Data show that bees navigate by serial processing (orienteering from one point to the next, without a global model or map of their world). However, humans likely use ‘parallel processing’, massively comparing one image (a picture is worth a thousand words) to similar stored images using visual pattern matching. This hypothesis seems to be supported by fMRI [functional magnetic resonance imaging] and DTI [diffusion tensor imaging] of brain tracts rather than neurons. Pattern matching is also what intelligent image-processing analytic programs, say for mammograms, use to discover critical features within the very ‘noisy’ environment of the breast. I believe most humans almost always parallel process, especially for complex problems like visualizing or consolidating/comparing abstract concepts.

All of life is a simulation.

So, the point to all of this is that optimization of a theory (or an idea) through the use of supercomputers for simulation might be a more ‘natural’ approach (i.e., the way the human brain works) in an absolutely fundamental way, supporting your hypothesis that the human brain is like a simulation machine. In addition, simulation is the method or process by which creativity occurs, and combining simulation with ‘creative destruction’ may be one pathway for innovation. And I can answer your contention about disambiguating multiple similar possibilities with emotion, culture, politics, etc. that seem irrational and not susceptible to simulation — it is a matter of context. Hence, all of life is a simulation. (The movie Total Recall, anyone?)

Thus, simulation not only allows us to optimize (almost predict) the best alternative (reducing the expense of real-world experimentation) in real-time or near-real-time, but also may be the basis of creativity (i.e., instead of ignoring the ‘outliers’, explore them as the method to discover something new — something that does not fit into what is ‘absolute truth’ based upon eons of ‘evidence’). The black swan, according to the pre-Renaissance world, could absolutely not exist — the evidence was that in thousands of years, no one had ever seen a black swan, therefore they could not be possible. One day, a black swan appeared, and since then, although they are uncommon (unless specifically bred), they do exist. (I actually have seen some in Stratford-upon-Avon, England.) Massive simulation is the essence of creativity (intuition, imagination, discovery, etc.). However, ‘observation’ or ‘models’ will rarely provide a ‘black swan-like result’ — but simulation usually will, if it is massive enough. We are exploring this in the next generation of programming called ‘big data’ — well beyond meta-analysis.

So why is a human brain like a simulation? Because it is the same process (software program) but a billion times more powerful (and faster, hence real-time) than existing computers. This is because of parallel processing billions of neurons, each connecting with thousands of other neurons (4 x 10^9!, or 4 x 10^9 factorial, possible combinations), which makes Avogadro’s number (6.02 x 10^23) seem infinitesimally small. That is how we will eventually simulate emotions, culture, politics, etc., and all the other seemingly impossible things to simulate. You might want to look into Dylan Schmorrow’s human social, cultural, behavioral (HSCB) program that he started while at DARPA and now continues as a totally new office in the Office of Naval Research in the Department of Defense.

These days, when I think about simulation, I tend to find myself thinking about interconnected simulations. I do think about individual simulations, but more and more it seems like I want to find a way for those simulations to talk. “Why?”, you might ask. Well, it comes down to my approach to code.

I started my programming life in procedural languages. It was simple and straightforward. Do this. Do that. Nothing to it. Eventually, though, I started to think of code as little life forms, and those life forms would talk. This is probably because I was in college at the time, working towards a career in molecular biology. Then I heard about object-oriented programming, in a presentation that Bill Gates gave at BMUG (the Berkeley Macintosh Users Group) in 1989 or 1990. I walked away from that night with many thoughts about how my life forms (now objects) could talk and inherit and grow.

Years later, when I was actually working as a developer, I would always try to think in terms of objects. Keep everything discrete and simple. Don’t make an object do more than it needs to do. Provide clean, simple interfaces for how those objects interact. These were my silent orders to myself. They’ve carried forward to today.

Of course, now we all think in terms of objects. At this point, I feel like it’s just a question of semantics. Do you want strong typing and compilation? Here’s C#. Oh, you want lower-level access than that? All right, here’s C++. You want dynamic properties? Fine, how about JavaScript? Oh, you really like parentheses. Lisp should do. Even more than our language choices, we write whole systems with OOP and interfaces in mind. Your startup doesn’t really count until it has an API.

So back to simulations…

Since I do a lot of my work in game engines, it’s patently obvious to me that there are multiple simulations running in tandem. In a first-person 3D game, as my player moves around within the game world, a physics simulation is running to handle collisions. The entire 3D visualization system is a simulation of the environment. AI is managing the non-human characters in the environment. And all of these combine to form the overall experience. They do so with carefully crafted interfaces and a well-developed framework. That said, it’s insanely easy for things to go off the rails. Modern games are very complex systems–so complex that it’s very difficult, if not impossible, for any one person to keep the entire system in their head. This means that when two, three, or fifty subsystems are interacting with a single in-world object, they can end up pushing and pulling in different directions. One need only search for “World of Warcraft exploits” to see hundreds of examples of this phenomenon.
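
Here’s a stripped-down sketch of that coupling, with all names invented (real engines are vastly more elaborate):

```python
# A toy frame loop: physics, AI, and "rendering" are separate
# simulations that all read and write the same in-world objects
# each frame. All names here are invented for illustration.

class Entity:
    def __init__(self, x, vx):
        self.x, self.vx = x, vx

def physics_step(e, dt):
    e.x += e.vx * dt                    # integrate motion

def ai_step(chaser, target):
    # The AI subsystem mutates the same object physics integrates.
    chaser.vx = 2.0 if target.x > chaser.x else -2.0

def render_step(entities):
    # Stand-in for the 3D visualization subsystem.
    print(" | ".join(f"{e.x:7.3f}" for e in entities))

player, monster = Entity(0.0, 1.0), Entity(10.0, 0.0)
dt = 1 / 60                             # fixed 60 Hz timestep
for frame in range(5):
    ai_step(monster, player)            # subsystem 1 pushes...
    physics_step(player, dt)            # ...subsystem 2 pulls...
    physics_step(monster, dt)
    render_step([player, monster])      # ...subsystem 3 observes
```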

That’s games, but what about “real” simulations, and why do simulations need to interact?

Let me start by introducing you to two current simulation projects.

OpenWorm is an interesting project designed to fully simulate the nematode Caenorhabditis elegans. Their target is smart: C. elegans has only about 1,000 cells in its whole body, including 302 neurons and 95 muscle cells, wired together by some 50k synapses. (1) That’s not many elements to simulate. Even better, C. elegans has been heavily studied, so there is a ton of data on which to base the simulations.

The project seems to have been started by two Ph.D. students at UC San Diego: Stephen Larson and Marius Buibas. Nowadays, at least according to their web site, it has 10 active developers and six “contributors” (whatever that means). This is what’s supposed to happen when you open-source something cool.

OpenWorm looks promising. There’s even a Chrome Experiment for exploring the visual model called the OpenWorm Browser. YMMV if you open this in something other than Chrome.

Then there’s a team at Stanford that has fully simulated all the systems in a Mycoplasma genitalium cell. (2) M. genitalium is a single-celled pathogen with 525 genes; compare that with E. coli’s 4,288 genes. So again, the target chosen is smart. This project dives as far down into the details as it can: every gene is included in the simulation, and every gene function is there, too.

For reference, that’s a ton of computation. They’re running on a cluster of 128 computers, and it takes them 10 hours to perform a complete cell division. Coincidentally, that’s how long it takes M. genitalium to split in real life. E. coli divides a couple of times an hour, though, so it isn’t some rule of the universe we’re seeing.

If we start building our simulations with the idea they’re somewhere in the strata then we will have far more interesting simulations.

If I look at these two different simulations, I see two beautiful accomplishments, but they’re still islands unto themselves. They’re bespoke. What if I want to integrate the two simulations? It would be another very large task. Let’s imagine for a second that the simulation of M. genitalium were actually a simulation of a C. elegans muscle cell. If the projects didn’t plan ahead, then when they finished and had to integrate, they would likely be in for a huge amount of work.

That shouldn’t be the case. There should be a way for us to design simulations so that their inputs and outputs are explicitly defined: a description of how and what we’ve simulated and how it fits into the (virtual) world.
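
Here’s one way such a contract might look, as a minimal Python sketch. The Simulation protocol and the placeholder metabolism model are hypothetical; neither OpenWorm nor the Stanford project exposes this API.

```python
# A minimal sketch of a composable-simulation contract. The protocol
# and the placeholder model below are hypothetical.
from typing import Protocol

class Simulation(Protocol):
    inputs: dict[str, str]     # quantity name -> units, e.g. "mol/L"
    outputs: dict[str, str]

    def step(self, dt: float, inputs: dict[str, float]) -> dict[str, float]:
        """Advance the model by dt and return its outputs."""
        ...

class CoarseMetabolism:
    """A crude stand-in for a cell-metabolism model, with declared I/O."""
    inputs = {"glucose": "mol/L"}
    outputs = {"atp": "mol/L"}

    def step(self, dt, inputs):
        return {"atp": 0.3 * inputs["glucose"] * dt}

# Any model declaring the same inputs and outputs satisfies the same
# contract, so a far more detailed metabolism simulation could be
# dropped in without the host simulation changing at all.
host: Simulation = CoarseMetabolism()
print(host.step(0.1, {"glucose": 2.0}))   # {'atp': 0.06...}
```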

If we have that then we get more accuracy. We get more expandability. We get strata.

So that’s where it gets really interesting to me. Imagining a time when we can easily interconnect simulations makes all the individual simulations more valuable. Take OpenWorm: say I’m simulating a worm moving through the earth, eating and living. If I’m interested in studying the soil in compost piles, then I grab a simulation of a compost pile, drop OpenWorm in, and the compost pile will use OpenWorm instead of its more limited C. elegans representation. Or suppose I want to understand more about how C. elegans breaks down its food as it chomps through that compost, and OpenWorm only has a simplistic simulation of that process. I’d just drop in a more detailed simulation of that process, and OpenWorm would start using it. The phrase I like for this concept is “simulation strata.” (3)

If we start building our simulations with the idea they’re somewhere in the strata then we will have far more interesting simulations. How does that happen? How do we define cells relative to galaxies? I don’t know yet, but I’m certain we can.

I have long believed there to be more in common between the states of K-12 education and healthcare in the US than might commonly be thought. (Bear with me here, as I’ll bring this back to the simulation focus of the blog at the end.) Let’s look at the similarities:

In K-12 education, the US spends… (1)

…more on a per-pupil basis than all but one other nation, Switzerland, within the OECD (Organisation for Economic Co-operation and Development),

…53 percent more on a per-pupil basis than the OECD average, and,

…42 percent more on a per-pupil basis than the OECD nation with the best combined math and science scores (Finland).

And yet despite all this spending on education, the US scores… (2)

…14th out of 34 OECD nations in reading,

…24th out of 34 OECD nations and “statistically significantly below the OECD average” in mathematics, and,

…16th out of 34 OECD nations in science.

(In the first group of three items above, spending comparisons are based on cumulative per-student expenditures between 6 and 15 years of age as of 2008. In the second group of three items, scores are based on the 2009 Programme for International Student Assessment, or PISA.)

In healthcare, the US spends… (3)

…more in total on health as a percentage of GDP than any other nation within the OECD,

…46 percent more in total on health as a percentage of GDP than the next-highest-spending nation within the OECD, the Netherlands,

…85 percent more in total on health as a percentage of GDP than the OECD average, and,

…151 percent more on a per-capita purchasing power parity basis than the OECD average.

And yet despite all this spending on healthcare, the US… (3, 4, 5)

…has the fourth-highest infant mortality rate in the OECD (only Chile, Turkey, and Mexico are higher) and an infant mortality rate 41 percent higher than the OECD average and 177 percent higher than that of the best-scoring OECD nation, Iceland,

…ranks below the OECD average and 29th out of 39 OECD nations in health-adjusted life expectancy (HALE) at birth, and,

…has the seventh- or eighth-highest rate of mortality amenable to healthcare (defined as “premature deaths that should not occur in the presence of effective and timely care”) of 31 OECD nations surveyed, depending on the methodology used.

As the authors of a 2010 paper from The New England Journal of Medicine put it, “It is hard to ignore that in 2006, the United States was number 1 in terms of health care spending per capita but ranked 39th for infant mortality, 43rd for adult female mortality, 42nd for adult male mortality, and 36th for life expectancy… Why do we spend so much to get so little?” (6)

(On the subject of infant mortality, to translate the percentages into concrete terms, in 2010, the year to which the data above applies, the US had 24,548 infant deaths. (7) If our infant mortality rate were the same as the OECD average, we would have suffered 17,409 infant deaths, or 7,139 fewer than actually occurred. If our infant mortality rate were the same as the best-scoring OECD nation, Iceland, we would have suffered 8,829 infant deaths, or 15,719 fewer.)
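
Reconstructing that arithmetic (small differences from the quoted figures come from the rounding of the percentages above):

```python
# Reconstructing the parenthetical's arithmetic. Small differences from
# the quoted figures come from rounding the percentages in the text.
us_deaths = 24_548
print(round(us_deaths / 1.41))   # ~17,410 deaths at the OECD-average rate
print(round(us_deaths / 2.77))   # ~8,862 deaths at Iceland's rate
```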

When I look at a market where the spending and results are so out of whack, what I see is a highly disruptable market — that is, a market which is vulnerable to change based on radically better approaches. As the Wikipedia entry for “disruptive innovation” puts it, “A disruptive innovation is an innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in the new market and later by lowering prices in the existing market.” (8)

If there are two markets that are in need of disruptive innovation, they must be education and healthcare.

If there are two markets that are in desperate need of disruptive innovation, they must be education and healthcare. And I believe that simulation could be the key to disruption in both markets.

In education, simulation gives us the opportunity to deliver students the benefits of experiential learning in almost any subject imaginable. Using simulations, students can explore topics from history to mathematics, from language to chemistry, all in simulated environments that enable them not only to try different strategies but even — given the proper tools — to change the underlying assumptions of the simulation.

In healthcare, simulation gives us the opportunity to improve performance at a systemic level by modeling healthcare systems and virtually prototyping changes to them. Using simulations, clinicians, clinical engineers, and other healthcare professionals can gain, as I co-wrote in a 2010 paper, “the ability to essentially develop the equivalents of flight simulators, systems integration laboratories, and intelligent cockpits for clinical environments”. (9)

I think the chances are excellent that we will make use of simulation to improve and even remake our education and healthcare systems. To me, the question is not “if”, but rather “when” and “how”.
