Turing-completeness (TC) is (avoiding the rigorous formal definition) the property of a system being able to, under some simple representation of input & output, compute any program of interest, including another computer in some form.

TC, besides being foundational to computer science and understanding many key issues like “why a perfect antivirus program is impossible”, is also weirdly common: one might think that such universality, a system being smart enough to run any program, would be difficult to achieve, but it turns out to be the opposite: it is difficult to write a useful system which does not immediately tip over into TC. “Surprising” examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult.

I like demonstrations of TC lurking in surprising places because they are often a display of considerable ingenuity, and feel like they are making a profound philosophical point about the nature of computation: computation is not something esoteric which can exist only in programming languages or computers carefully set up, but is something so universal to any reasonably complex system that TC will almost inevitably pop up unless actively prevented.

It turns out that given even a little control over input into something which transforms input to output, one can typically leverage that control into full-blown TC. This matters because, if one is clever, it provides an escape hatch from a system which is small, predictable, controllable, and secure, to one which could do anything. It’s hard enough to make a program do what it’s supposed to do without giving anyone in the world the ability to insert another program into your program, which can then interfere with or take over its host. Even if there is no way to outright ‘escape’ the sandbox, such hidden programs can be dangerous, by extracting information about the surrounding program (eg JS embedded in a web page which can extract your passwords by using RowHammer to attack your hardware directly, even if it can’t actually escape your web browser), or by taking the host into strange & uncharted (and untested) territories. That we find these demonstrations surprising is itself a demonstration of our lack of imagination and understanding of computers, computer security, and AI. We pretend that we are running programs on these simple abstract machines which do what we intuitively expect, but they run on computers which are bizarre, and our programs themselves turn out to be computers which are even more bizarre. Secure systems have to be built by construction; once the genie of TC has been let out of the lamp, it’s difficult to patch the lamp.

An active area of research is into languages & systems carefully designed and proven not to be TC (eg. total functional programming). Why this effort to make a language in which many programs can’t be written? Because TC is intimately tied to Gödel’s incompleteness theorems & Rice’s theorem, and allowing TC means forfeiting all sorts of provability properties: in a non-TC language, one may be able to easily prove all sorts of useful things; for example, that programs terminate, that they are type-safe or not, that they can be easily converted into a logical theorem, that they consume a bounded amount of resources, that one implementation of a protocol is correct or equivalent to another implementation, or that a program lacks side-effects and so can be transformed into a logically-equivalent but faster version (particularly important for declarative languages like SQL, where the query optimizer being able to transform queries is key to acceptable performance; of course, one can do a surprising amount in SQL anyway, like 3D raytracing or gradient descent for fitting machine learning models, and some SQL extensions make it TC outright by allowing a cyclic tag system to be encoded, by the model DSL, or by calling out to PL/SQL), etc.
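
To make the tradeoff concrete, here is a minimal illustrative sketch in Python (not any particular total language; the names are made up for illustration) of the discipline a non-TC language imposes: repetition is only allowed when its number of iterations is fixed up front, so every program provably terminates, and the price is that unbounded searches simply cannot be expressed.

```python
# Illustrative sketch only: a "total" mini-language embedded in Python.
# Every repetition must declare a finite iteration count up front, so every
# program written with these combinators provably terminates -- the same
# bargain a total functional language makes (names here are hypothetical).

def bounded_iterate(f, x, n):
    """Apply f to x exactly n times; n is a concrete non-negative integer,
    so this always halts after n steps."""
    for _ in range(n):
        x = f(x)
    return x

# Example: addition and (bounded) multiplication built only from successor.
def succ(x):
    return x + 1

def add(a, b):
    return bounded_iterate(succ, a, b)                       # a + b in b steps

def mul(a, b):
    return bounded_iterate(lambda acc: add(acc, a), 0, b)    # a * b in b rounds

assert add(3, 4) == 7 and mul(3, 4) == 12

# What cannot be written here is an unbounded "loop until some condition
# holds" -- exactly the feature that general recursion/TC adds, and exactly
# what makes termination undecidable (Rice's theorem) once you have it.
```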

Languages or systems which unintentionally cross over the line into being TC can be amusing or useful (although usually not), but they also have some serious implications: such systems, because they were never expected to be programmable, can be harmful, or extremely insecure & a cracker’s delight, as exemplified by the “language-theoretic security” paradigm, based on exploiting “weird machines”; some of the literature:

Most recently, Spectre & generalizations (McIlroy et al 2019) can be interpreted as providing a whole ‘shadow computer’ in the CPU via speculative execution, which can be programmed to do things like run malware without visibly executing any of the malware instructions, while still having side-effects in the real computer. Spectre is interesting in being a class of vulnerabilities which had existed for decades in CPU architectures that were closely scrutinized for security problems, but just sort of fell into a collective human blind spot. Nobody thought of controllable speculative execution as being a ‘computer’ which could be ‘programmed’. Once someone noticed, because it was a powerful computer and of course TC, it could be used in many ways to attack the rest of the system.

Many cases of discovering TC seem to consist of simply noticing that a primitive in a system is a little too powerful/flexible. For example, if Boolean logic can be implemented, that’s a sign that more may be possible, and that one could turn those Boolean circuits into the full-blown circuit logic of a Turing machine. Substitutions, definitions/abbreviations, regular expressions (especially with any extensions or custom features), or any other kind of ‘search and replace’ functionality is another red flag, as they suggest that a cellular automaton or tag system is lurking. This applies to anything which can change state based on ‘neighbors’, like a spreadsheet cell or a pixel. Any sort of scripting interface or API, even if locked down, may not be locked down quite enough. An actual scripting language or VM is so blatant as to be boring when (not if) someone finds a vulnerability or escape from the sandbox. Operations which take variable lengths of time, or whose completion can’t easily be predicted from the start, are another source of primitives, as they may ‘depend’ on the data they are operating over in some way, implementing different operations on different data, which may mean that they can be made equivalent to Boolean conditionals based on a careful encoding of data.
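
As a toy illustration of the “too-powerful primitive” pattern, here is a short Python sketch (purely illustrative): once a system hands you anything behaving like a NAND gate, every other Boolean gate, and hence arbitrary combinational circuits such as adders, follows mechanically, and from there it is a short conceptual step to the control logic of a Turing machine.

```python
# Sketch: NAND is functionally complete -- every Boolean gate can be built
# from it, which is why spotting even one such primitive in a system is a
# red flag that much more computation may be lurking.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# A 1-bit full adder wired purely out of NAND-derived gates: the kind of
# combinational circuit you can stamp out repeatedly once the primitive exists.
def full_adder(a, b, carry_in):
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

# Exhaustive check that the gate identities hold.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
```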

What is “surprising” may differ from person to person. Here is a list of accidentally-Turing-complete systems that I found surprising:

Wang tiles: multi-colored squares, whose placement is governed by the rule that adjacent colors must be the same (historically, not surprising to Wang, but it was surprising to me and, I think, to a lot of other people)

A StarCraft buffer overflow was used by the SC community to implement complicated maps, tower defense games, Mario, and Mario level editors; emulating the hack to avoid breaking the mods in updated SC versions caused Blizzard quite a bit of trouble.

SVG: PostScript is TC by design, but what about the more modern vector graphics image format, SVG, which is written as XML, a (usually) not-TC document language? SVG also allows a slow encoding of Rule 110 and so is TC.

one category of weird machines doesn’t quite count, since these require an assumption along the lines of a user mechanically clicking, or making the only possible choice, in order to drive the system into its next step; while the user provides no logical or computational power in the process, such examples are less satisfying for this reason:

CSS: was designed to be a declarative style-sheet language for tweaking the visual appearance of HTML pages, but CSS declarations interact just enough to allow an encoding of the cellular automaton Rule 110 (a minimal sketch of Rule 110 itself appears after this list), under the assumption of mechanical mouse clicks in the web browser to advance state (CSS hacks honorable mention: Kevin Kuchta’s “CSS-Only Chat”, which uses no JS by outsourcing computation to the server)

Microsoft PowerPoint animations (excluding macros, VBScript etc) can implement a Turing machine when linked appropriately (Wildenhain 2017; video; PPT), under the assumption of a user clicking on the only active animation trigger at each step

Possibly accidentally or surprisingly Turing-complete systems:

CSS without the assumption of a driving mouse click (perhaps some sort of Wang tile using reflections and conditionals?)

Human visual illusions: Changizi 2008 presents ambiguous images in a circuit-like format, whose depth perception ‘flips’ based on the ‘input’ at the top of the circuit, making them analogous to OR/AND/NOT/XOR computations; the existence of these ‘visual circuits’ hints at the possibility of Turing-complete images (although many pieces, like a working memory, are missing)
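
Since Rule 110 comes up repeatedly above (in the SVG and CSS items), here is a minimal Python sketch of the automaton itself, to show just how little machinery those encodings need to smuggle in: a row of cells and a fixed 8-entry lookup table applied in lockstep. (Rule 110 was proven Turing-complete by Matthew Cook, via simulating cyclic tag systems.)

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbors, via a fixed 8-entry lookup table (the binary expansion of 110).
# Anything that can express this little -- plus a way to advance steps --
# has (slowly) captured universal computation.

RULE = 110
TABLE = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    """One synchronous update of a row of cells (edges padded with 0s)."""
    padded = [0] + list(cells) + [0]
    return [TABLE[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# Run a small example: a single live cell, printed as a space-time diagram.
row = [0] * 30 + [1]
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```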

Why are there so many places for backdoors and weird machines in your “computer”? Because your computer is in fact scores or hundreds, perhaps even thousands, of computer chips, many of which are explicitly or implicitly capable of Turing-complete computations (many more powerful than desktops of bygone eras), working together to create the illusion of a single computer. Backdoors, bugs, weird machines, and security do not care about what you think—only where resources can be found and orchestrated into a computation.

A curious fallacy circulating in discussion of AI or AI risk or especially the possibility of a “hard takeoff” is a pseudo-debate about whether an AI will be “one” AI or “a community/ecosystem” of AIs, with some concluding that risk is minimal or hard takeoffs impossible because AIs will inevitably be implemented as a ‘community’ or ‘network’—as if this made any difference or reflected any kind of principled distinction.

What is important are the inputs and outputs: how capable is the system as a whole and what resources does it require? No one cares if Google is implemented using 50 supercomputers, 50,000 mainframes, 5 million servers, or 50 million embedded/mobile processors, or a mix of any of the above exploiting a wide variety of chips from custom “tensor processing units” to custom on-die silicon (implemented by Intel on Xeon chips for a number of its biggest customers) to FPGAs to GPUs to CPUs to still more exotic hardware like prototype D-Wave quantum computers—as long as it is competitive with other tech corporations and can deliver its services at a reasonable cost. (A “supercomputer” these days mostly looks like a large number of rack-mounted servers with unusual numbers of GPUs & connected by unusually high-speed InfiniBand connections and is not that different from a datacenter.) Any of these pieces of hardware could support multiple weird machines on many different levels of computation depending on their internal dynamics & connectivity. Similarly, it is foolish to insist on defining whether an AI is ‘one’ AI or ‘many’: any AI system might be implemented as a single giant neural network, or as a sharded NN running asynchronously, or as a heterogeneous set of micro-services, or as a “society of mind” etc—but it doesn’t especially matter, from a complexity or risk perspective, how exactly it’s organized internally as long as the totality works; the question for AI risk is how dangerous is the totality. The system can be seen on many levels, each equally invalid but useful for different purposes. Are you a ‘single biological intelligence’ or a community/ecosystem of human cells/neurons/bacteria/yeast/viruses/parasites? And does it matter in the slightest bit to anyone you might wrong?

Here is an example of the ill-defined nature of the question: on your desk or in your pocket, how many computers do you currently have? How many computers are in your “computer”? Did you think just one? Let’s take a closer look—it’s computers all the way down.

You might think you have just the one large CPU occupying pride of place on your motherboard, and perhaps the GPU too, but the computational power available goes far beyond just the CPU/GPU, for a variety of reasons: transistors and processor cores are so cheap now that it often makes sense to use a separate core for realtime or higher performance, for security guarantees, to avoid having to burden the main OS with a task, for compatibility with an older architecture or existing software package, because a DSP or core can be programmed faster than a more specialized ASIC can be created, or because it was the quickest possible solution to slap down a small CPU and they couldn’t be bothered to shave some pennies6. Further, many of these components can be used as computational elements even if they were not intended to be or hide that functionality. (For example, I believe I’ve read that the Commodore 64’s floppy drive’s CPU running Commodore DOS was used as a source of spare compute power & for defeating copy-protection schemes.)

Thus:

A common AMD/Intel CPU has billions of transistors, devoted to a large number of tasks:

Each of the 2-8 main CPU cores can run independently, powering on or off as necessary, has its own private L1 & L2 caches (plus a share of L3; these caches are often bigger than desktop computers’ entire RAM a few decades ago7, and likely physically bigger than those CPUs were to boot), and must be regarded as an individual.

any floating-point unit may be Turing-complete through encoding computations into floating-point operations, in the spirit of FRACTRAN8 (see the FRACTRAN sketch after this list)

the MMU can be programmed into a page-fault weird machine driven by a CPU stub, as previously mentioned

DSP units, custom silicon: ASICs for video formats like H.264 probably are not Turing-complete (despite their support for complicated deltas and compression techniques which might allow something like Wang tiles), but, for example, Apple’s A9 mobile system-on-a-chip goes far beyond simply a dual-core ARM CPU and a GPU: like Intel/AMD desktop CPUs, it includes a secure enclave (a physically separate dedicated CPU core), but it also includes an image co-processor, a motion/voice-recognition coprocessor (partially to support Siri), and apparently a few other cores. These ASICs are sometimes there to support AI tasks, and presumably specialize in matrix multiplications for neural networks; and as recurrent neural networks are Turing-complete… Other companies have rushed to expand their system-on-chips as well, like Motorola or Qualcomm.

These management or debugging chips may be ‘accidentally’ left enabled on shipping devices, like the Via C3 CPUs’ embedded ARM CPUs.

GPUs have several hundred or several thousand simple cores, which can run neural networks well (and neural networks are highly expressive, even Turing-complete) or do general-purpose computation (albeit more slowly than the CPU)9

network chips do independent processing for DMA. (This sort of independence is why features like Wake-on-LAN for netboot work; see the magic-packet sketch after this list.)

smartphones: in addition to all the other units mentioned, there is an independent baseband processor running a proprietary realtime OS for handling radio communications with the cellular towers/GPS/other things, or possibly more than one virtualized using something like L4. Baseband processors have been found with backdoors, in addition to all their vulnerabilities.

SIM cards for smartphones are much more than simple memory cards recording your subscription information, as they are smart cards which can independently run Java Card applications (apparently NFC chips may be like this as well), somewhat like the JVM in the IME. Naturally, SIM cards can be hacked too and used for surveillance etc.

USB or Thunderbolt cables or devices, or motherboard-attached devices: an embedded processor on the device is needed for negotiation of data/power protocols, at the least for cables/batteries/chargers10, and devices like WiFi adapters, keyboards, mice, or SD cards may be even more heavy-duty, with multiple additional specialized processors of their own. In theory, most of these are separate and are at least prevented from directly subverting the host via DMA by in-between IOMMU units, but the devil is in the details…

monitor-embedded CPU (part of a tradition going back to smart teletypes)
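
To make the earlier FRACTRAN remark concrete, here is a minimal Python sketch of a FRACTRAN interpreter: a program is nothing but an ordered list of fractions, and execution is nothing but exact multiplication and an integrality test, which is the sense in which hardware that can multiply and compare numbers exactly already contains the seeds of a universal computer. The example program is Conway’s PRIMEGAME, whose powers-of-2 outputs have exactly the primes as exponents.

```python
from fractions import Fraction

# FRACTRAN (John Conway): to run a program on integer n, repeatedly replace
# n by n*f for the first fraction f such that n*f is an integer; halt when
# no fraction applies.  Despite consisting of nothing but multiplication,
# FRACTRAN is Turing-complete.

def fractran(program, n, max_steps=100_000):
    for _ in range(max_steps):
        for f in program:
            m = n * f
            if m.denominator == 1:
                n = int(m)
                yield n
                break
        else:
            return  # no fraction applies: halt

# Conway's PRIMEGAME: starting from 2, the powers of 2 that appear in the
# output (2^2, 2^3, 2^5, 2^7, ...) have exactly the primes as exponents.
PRIMEGAME = [Fraction(a, b) for a, b in
             [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29),
              (95, 23), (77, 19), (1, 17), (11, 13), (13, 11), (15, 14),
              (15, 2), (55, 1)]]

primes = []
for value in fractran(PRIMEGAME, 2):
    if value > 1 and (value & (value - 1)) == 0:   # a power of two
        primes.append(value.bit_length() - 1)      # its exponent is prime
        if len(primes) == 5:
            break
print(primes)   # -> [2, 3, 5, 7, 11]
```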
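
And as a small concrete illustration of how independent the network hardware is, here is roughly what a Wake-on-LAN “magic packet” looks like (a sketch only: the broadcast address and UDP port 9 are just the conventional defaults, and the MAC address in the usage comment is hypothetical). The NIC, not the powered-down main CPU, watches the wire for 6 bytes of 0xFF followed by the target MAC repeated 16 times, and powers the machine on when it sees them.

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Send a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times.
    The listening NIC recognizes this pattern itself while the host CPU is off."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 48-bit MAC address")
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

# Example usage (hypothetical MAC address):
# send_magic_packet("00:11:22:33:44:55")
```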

So a desktop or smartphone can reasonably be expected to have anywhere from 15 to several thousand “computers” in the sense of a Turing-complete device which can be programmed in a usefully general fashion with little or no code running on the ‘official’ computer, which is computationally powerful enough to run many programs from throughout computing history and which can be exploited by an adversary for surveillance, exfiltration, or attacks against the rest of the system.

None of this is unusual historically, as even the earliest mainframes tended to be multiple computers, with the main computer doing batch processing while additional smaller computers took care of high-speed I/O operations that would otherwise choke the main computers with interrupts.

In practice, aside from the computer security community (as all these computers are insecure and thus useful hidey-holes for the NSA & VXers), users don’t care that our computers, under the hood, are insanely complex and more accurately seen as a motley menagerie of hundreds of computers awkwardly yoked together (was it “the network is the computer” or “the computer is the network”…?); as long as it is working correctly, they perceive & use it as a single powerful computer.

Although linear NNs exploiting round-to-zero floating point mode in order to encode complex, potentially Turing-complete (for RNNs) behavior, which is invisible when run normally, would be both ‘accidentally’ Turing-complete and a good example of langsec.↩︎

Dwarf Fortress provides clockwork mechanisms, so TC is unsurprising; but the water is implemented as a simple cellular automaton, so there might be more ways of getting TC in DF! The DF wiki currently lists 4 potential ways of creating logic gates: the fluids, the clockwork mechanisms, mine-carts, and creature/animal logic gates involving doors+pressure-sensors.↩︎

The full PDF specification is notoriously bloated. For example, in a PDF viewer which supports enough of the PDF spec, like the Google Chrome Browser, one can play Breakout (because PDF includes its own weird subset of JavaScript). The Adobe PDF viewer includes functionality as far afield as 3D CAD support.↩︎

Reminds me of when I was doing firmware development and the ASIC team would ask if they could toss in an extra Cortex-M3 core to solve specific control problems. Those cores would be used as programmable state machines. For the ASIC team, tossing in an extra core was free compared to custom logic design. However, for the firmware team it would be another job to write and test that firmware. We had designs with upwards of 10 Cortex-M3 cores. I’ve heard from a friend that another employer had something like 32 such cores and it was a pain to debug.

Ken Shirriff amusingly notes in a Macbook charger analysis: “One unexpected component is a tiny circuit board with a microcontroller, which can be seen above. This 16-bit processor constantly monitors the charger’s voltage and current. It enables the output when the charger is connected to a Macbook, disables the output when the charger is disconnected, and shuts the charger off if there is a problem. This processor is a Texas Instruments MSP430 microcontroller, roughly as powerful as the processor inside the original Macintosh. [The processor in the charger is a MSP430F2003 ultra low power microcontroller with 1kB of flash and just 128 bytes of RAM. It includes a high-precision 16-bit analog to digital converter. More information is here.]”↩︎