The Busy Beaver Game

This month, a bunch of ‘logic hackers’ have been seeking to determine the precise boundary between the knowable and the unknowable. The challenge has been around for a long time. But only now have people taken it up with the kind of world-wide teamwork that the internet enables.

A Turing machine is a simple model of a computer. Imagine a machine that has some finite number of states, say N states. It’s attached to a tape, an infinitely long tape with lots of squares, with either a 0 or 1 written on each square. At each step the machine reads the number where it is. Then, based on its state and what it reads, it either halts, or it writes a number, changes to a new state, and moves either left or right.

The tape starts out with only 0’s on it. The machine starts in a particular ‘start’ state. It halts if it winds up in a special ‘halt’ state.
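To make this concrete, here is a minimal sketch of such a machine in Python. The encoding is my own convention for this post, not anything standard: a table mapping (state, symbol) to (write, move, next state), with 'H' as the special halt state.

```python
def run(machine, max_steps=10**8):
    """Run a Turing machine on an all-0 tape.

    `machine` maps (state, symbol) -> (write, move, next_state),
    where move is -1 (left) or +1 (right) and 'H' is the halt state.
    Returns (steps_taken, number_of_1s) if it halts within max_steps.
    """
    tape = {}            # sparse tape: unwritten squares read as 0
    pos, state = 0, 'A'  # start at square 0 in the start state A
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == 'H':  # the halting transition still writes and moves
            return step, sum(tape.values())
    raise RuntimeError("no halt within max_steps")

# The known 2-state champion: halts after 6 steps, leaving four 1s.
champ2 = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}
```

With this convention, `run(champ2)` returns (6, 4): the machine halts after 6 steps, leaving four 1s on the tape. We'll meet that 6 again below.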

The Busy Beaver Game is to find the Turing machine with N states that runs as long as possible and then halts.

The number BB(N) is the number of steps that the winning machine takes before it halts.

In 1962, Tibor Radó introduced the Busy Beaver Game and proved that the sequence BB(N) is uncomputable. It grows faster than any computable function!

A few values of BB(N) can be computed, but there’s no way to figure out all of them.

As we increase N, the number of Turing machines we need to check increases faster than exponentially: it’s

(4(N+1))^{2N},

since for each of its N states and each of the 2 symbols it might read, a machine must specify a symbol to write, a direction to move, and one of N+1 next states (counting the halt state).
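Assuming the usual count (4(N+1))^{2N} of N-state, 2-symbol machines, here is a quick sketch to tabulate it:

```python
def num_machines(n):
    """Number of n-state, 2-symbol Turing machines: each of the 2n
    (state, symbol) pairs chooses a symbol to write (2 options),
    a direction to move (2), and a next state including halt (n + 1)."""
    return (4 * (n + 1)) ** (2 * n)

for n in range(1, 6):
    print(n, num_machines(n))
```

For N = 5 this gives 63,403,380,965,376, about 63 trillion machines.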
Of course, many could be ruled out as potential winners by simple arguments. But the real problem is this: it becomes ever more complicated to determine which Turing machines with N states never halt, and which merely take a huge time to halt.

Indeed, no matter what axiom system you use for math, as long as it has finitely many axioms and is consistent, you can never use it to correctly determine BB(N) for more than some finite number of cases.

So what do people know about BB(N)?

For starters, BB(0) = 0. At this point I should admit that people don’t count the halt state as one of our N states. This is just a convention. So, when we consider BB(0), we’re considering machines that only have a halt state. They instantly halt.

Next, BB(1) = 1.

Next, BB(2) = 6.
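For 2 states the search space is small enough to verify by brute force. The sketch below is my own code, using the convention that the halting transition still writes and moves, and relying on the known fact that every halting 2-state machine stops within the cutoff (so machines that survive the cutoff are assumed to run forever — something a cutoff alone could never prove).

```python
from itertools import product

def run_steps(machine, max_steps):
    """Return the number of steps before halting, or None if still
    running after max_steps (treated here as 'never halts')."""
    tape, pos, state = {}, 0, 0
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == 'H':
            return step
    return None

def busy_beaver_2():
    """Brute-force BB(2) by enumerating all 2-state machines.

    Each of the 4 (state, symbol) pairs picks one of 12 actions,
    so there are 12^4 = 20,736 machines in all.
    """
    actions = [(w, m, s) for w in (0, 1) for m in (-1, +1)
               for s in (0, 1, 'H')]
    keys = [(q, b) for q in (0, 1) for b in (0, 1)]
    best = 0
    for choice in product(actions, repeat=4):
        machine = dict(zip(keys, choice))
        steps = run_steps(machine, 50)   # cutoff: 50 steps is plenty
        if steps is not None:
            best = max(best, steps)
    return best
```

Running `busy_beaver_2()` returns 6, confirming BB(2) = 6 under this convention.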

Next, BB(3) = 21. This was proved in 1965 by Tibor Radó and Shen Lin.

Next, BB(4) = 107. This was proved in 1983 by Allan Brady.

Next, BB(5). Nobody knows what BB(5) equals!

The current 5-state busy beaver champion was discovered by Heiner Marxen and Jürgen Buntrock in 1989. It takes 47,176,870 steps before it halts. So, we know

BB(5) ≥ 47,176,870.

People have looked at all the other 5-state Turing machines to see if any does better. But there are 43 machines that do very complicated things that nobody understands. It’s believed they never halt, but nobody has been able to prove this yet.

We may have hit the wall of ignorance here… but we don’t know.

That’s the spooky thing: the precise boundary between the knowable and the unknowable is unknown. It may even be unknowable… but I’m not sure we know that.

Next, BB(6). In 1996, Marxen and Buntrock showed it’s at least 8,690,333,381,690,951. In June 2010, Pavel Kropitz proved that

BB(6) ≥ 7.412 × 10^{36,534}.
You may wonder how he proved this. Simple! He found a 6-state machine that runs for about

7.412 × 10^{36,534}

steps and then halts.
How do machines like this work? I don’t understand them very well. All I can say at this point is that many of the record-holding machines known so far carry out processes reminiscent of the famous Collatz conjecture. The idea there is that you can start with any positive integer and keep doing two things:

• if it’s even, divide it by 2;

• if it’s odd, triple it and add 1.

The conjecture is that this process will always eventually reach the number 1. Here’s a graph of how many steps it takes, as a function of the number you start with:

Nice pattern! But this image shows how it works for numbers up to 10 million, and you’ll see it doesn’t usually take very long for them to reach 1. Usually less than 600 steps is enough!
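For the curious, here is a minimal sketch of the step count being discussed:

```python
def collatz_steps(n):
    """Number of Collatz steps for n to reach 1:
    halve if even, triple-and-add-one if odd."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

For example, `collatz_steps(27)` returns 111 — a famously long excursion for such a small starting number — while `collatz_steps(6)` returns 8.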

So, to get a Turing machine that takes a long time to halt, you have to take this kind of behavior and make it much more long and drawn-out. Conversely, to analyze one of the potential winners of the Busy Beaver Game, people must take that long and drawn-out behavior and figure out a way to predict much more quickly when it will halt.

Next, BB(7). In 2014, someone who goes by the name Wythagoras showed that

BB(7) > 10^{10^{10^{10^{18,705,352}}}}.
It’s fun to prove lower bounds on BB(N). For example, in 1964 Milton Green constructed a sequence of Turing machines that implies

BB(2N) ≥ 3 ↑^{N−2} 3.
Here I’m using Knuth’s up-arrow notation, which is a recursively defined generalization of exponentiation, so for example

3 ↑↑↑ 3 = 3 ↑↑ (3 ↑↑ 3) = 3^{3^{3^{⋰^3}}}

where there are 3 ↑↑ 3 = 7,625,597,484,987 threes in that tower.
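The recursion behind up-arrow notation is easy to state in code; here is a sketch (careful: anything beyond the smallest inputs is astronomically large).

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑^n b, where ↑^1 is ordinary exponentiation
    and each higher arrow iterates the one below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))
```

So `up(3, 2, 3)` computes 3 ↑↑ 3 = 3^3^3 = 7,625,597,484,987 — but don’t try `up(3, 3, 3)`.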

But it’s also fun to seek the smallest N for which we can prove BB(N) is unknowable! And that’s what people are making lots of progress on right now.

Sometime in April 2016, Adam Yedidia and Scott Aaronson showed that BB(7910) cannot be determined using the widely accepted axioms for math called ZFC: that is, Zermelo–Fraenkel set theory together with the axiom of choice. It’s a great story, and you can read it here:

Briefly, Yedidia created a new programming language, called Laconic, which lets you write programs that compile down to small Turing machines. They took an arithmetic statement created by Harvey Friedman that’s equivalent to the consistency of the usual axioms of ZFC together with a large cardinal axiom called the ‘stationary Ramsey property’, or SRP. And they created a Turing machine with 7910 states that seeks a proof of this arithmetic statement using the axioms of ZFC.

Since ZFC can’t prove its own consistency, much less its consistency when supplemented with SRP, their machine will only halt if ZFC+SRP is inconsistent.

Since most set theorists believe ZFC+SRP is consistent, this machine probably doesn’t halt. But we can’t prove this using ZFC.

In short: if the usual axioms of set theory are consistent, we can never use them to determine the value of BB(7910).

The basic idea is nothing new: what’s new is the explicit and rather low value of the number 7910. Poetically speaking, we know the unknowable starts here… if not sooner.

However, this discovery set off a wave of improvements! On the Metamath newsgroup, Mario Carneiro and others started ‘logic hacking’, looking for smaller and smaller Turing machines that would only halt if ZF—that is, Zermelo–Fraenkel set theory, without the axiom of choice—is inconsistent.

By just May 15th, Stefan O’Rear seems to have brought the number down to 1919. He found a Turing machine with just 1919 states that searches for an inconsistency in the ZF axioms. Interestingly, this turned out to work better than using Harvey Friedman’s clever trick.

Thus, if O’Rear’s work is correct, we can only determine BB(1919) if we can determine whether ZF set theory is consistent. However, we cannot do this using ZF set theory—unless we find an inconsistency in ZF set theory.


51 Responses to The Busy Beaver Game

You mentioned you wanted to see an analysis of the 6 state record holder. I don’t have one, but toward the bottom of this page, there are analyses of two previous 6-state record holders. You get a feeling at least for the kind of thing that is going on in these long-lasting state machines.

I think you’re slightly mischaracterizing it to say there are “contestants”. Scott Aaronson and Adam Yedidia published the 7918-state TM using Friedman’s statements and the program which generated it, and made claims about the minimum complexity of a TM using FOL directly that I disagreed with. So I wrote a proof of concept of a direct version, that couldn’t actually be evaluated because I misjudged how Adam’s compiler worked; but it still sparked interest, and Mario suggested improvements to it. I was able to get it working using a compiler of my own, resulting in the 5349-state version, then after spending a few days improving it got it to 1919 states; Peter Taylor made a few small improvements to get to 1895 states. I still haven’t used Mario’s recommendations; once I do that, it’ll shrink quite a bit (I’m guessing 1500 states, but we won’t know until it happens).

To say “contestants” implies we are competing, but this is a cooperative project using a single codebase.

Terminological nitpick: “Metamath” is not the name of a newsgroup. It is the name of a software program (a computer proof verifier, very broadly similar to Coq, HOL Light, Mizar, etc); the newsgroup does not have a proper name, but we call it “the Metamath group” because it exists to discuss Metamath.

Also, ZF and ZFC are proof-theoretically essentially the same system. Each is consistent iff the other is. I tend to say ZF because I’m one of those people who gets a bad aftertaste from the axiom of choice, and the consistency checker omits it because it serves no purpose, but to describe it as “a simpler system” is a bit odd.

As far as I am aware, nobody has tried to optimize the original SRP version using the tricks that worked well for me, so we don’t really know how wide the gap is, or in which direction(!). Harvey Friedman’s work area of “natural arithmetical statements that imply Con(ZF)” is also, IIUC, in its infancy, so there could also be substantial simplicity improvements on that front.

I’m flattered by the continuing interest in what I did there. Let me know if I can help with general understanding of the results or anything else.

BB(n) as described here is actually counting the number of 1s the machine prints starting from all 0s, rather than the number of steps it takes. For instance, the 3-state machine given in the post takes 13 steps and prints 6 1s.

• Σ(n) is the maximum number of 1’s printed by an n-state Turing machine that halts.

• S(n) is the maximum number of steps an n-state Turing machine can take before it halts.

People usually use BB(n) as an abbreviation for Σ(n), but Scott Aaronson has taken to using BB(n) as an abbreviation for S(n), including in his recent blog article, where he writes:

Recall that BB(n), or the nth Busy Beaver number, is defined to be the maximum number of steps that any n-state Turing machine takes when run on an initially blank tape, assuming that the machine eventually halts.

So that’s what I’m trying to do in this article, but I may have gotten mixed up at times. If so, I’ll try to fix it.

Actually it looks like the values of BB(n) that I listed here are consistent with the definition I gave, namely BB(n) = S(n). If there are inconsistencies, I’d like to hear about them, so I can fix them.

So there must be an issue somewhere, because the machine you printed definitely does not halt in 6 steps. It halts in 13 steps, and prints 6 1s, making it the 3-state BB machine by most definitions I’ve come across. In particular, I don’t think the C as denoted by the machine you’ve drawn counts as the halt state.

There’s still an issue, in that the machine listed wins the “most 1s” game, but not the “longest running” game! It prints 6 1s in 13 steps. In fact, according to Pascal Michel’s page, the machines that run for 21 steps only print 5 1s.

When Scott published his blog post, I thought a bit about how one might go about explaining this puzzle to people unfamiliar with Turing machines (e.g., smart high school students, since I had just such a talk coming up). Unfortunately, I couldn’t think of a good way to do it.

I think it’s easy to explain the Halting Problem in a high-level way, since it can be re-expressed in a high-level programming language. But as defined, the Busy Beaver problem depends more directly on the “hardware” of Turing Machines.

Is there a compromise? What’s the most natural way to get a BB-like problem in, say, the Lambda Calculus? Or can I define a tiny subset of a language like Python and define BB(n) to be the most output I can create from an n-line program in this Python subset?

Hi! You can take whatever programming language you like, from COBOL to Python to the lambda calculus, and define a version of BB(n) to be either the maximum number of steps that a halting program written with n symbols can run, or the maximum number of symbols that a halting program written with n symbols can print. As long as your programming language lets you compute all Turing-computable functions and nothing more—and all the ones I listed have this property—the fundamental theorems will always be true:

• BB(n) will grow faster than any computable function.

• You’ll eventually hit a value of n such that BB(n) cannot be proved to have any particular value using the usual axioms of set theory… or indeed, any recursively enumerable axiom system in first-order logic… as long as those axioms are a consistent extension of Peano arithmetic.

So yes, for this you can use whatever programming language your audience likes best!

But the exact value of BB(n) will, of course, depend on the details. And this particular post was all about the exact value when we use 2-symbol Turing machines: what we know about it, what we don’t know about it, and what we ‘can’t know’ about it. So I had to give a crash course on Turing machines.

By the way, you have to be careful about “the most output you can create from an n-line program”, since if the lines can be arbitrarily long, this could be undefined.

Yes, that’s sort of what I’m getting at. Obviously there’s a busy beaver style problem for every (Turing complete) programming language, expressed very simply in terms of the number of symbols you use. But that’s not as satisfying to contemplate as it is with pure Turing machines. To begin with, it takes a lot longer before you get any output at all, because of the overhead of writing any non-trivial program. That doesn’t invalidate the process, but it makes it less fun to play with. Second, with an expressive language I feel like it becomes more of an exercise of “who can name the largest number” than a process of unraveling the esoteric behaviour of (say) a 6-state TM that nevertheless manages to run on and on.

Inasmuch as Busy Beaver is a game or puzzle, it needs to be expressed in a language that makes the game fun to play.

So this is a bit nit picky but: if we’ve got an n-state Turing machine encoding of a ZFC contradiction searcher, isn’t it the case that BB(n) is unknowable if all we know is ZFC, unless we run it and it actually stops within an amount of time we’re prepared to wait (which it can only do if ZFC is inconsistent, of course)? If that happens we likely still don’t know BB(n), since there are probably lots of other programs of length n we don’t know how to prove termination of, but one obstacle is gone.

That’s an important nit, and a well-known one. Almost everything Gödelian requires an assumption of “if X, and Y, and your logic is consistent”, because if ZFC is inconsistent, then ZFC will prove Con(ZFC) (and everything else).

The OP gave that caveat twice that I could find:

Indeed, no matter what axiom system you use for math, as long as it has finitely many axioms and is consistent,

However, we cannot do this using ZF set theory—unless we find an inconsistency in ZF set theory.

So this is a bit nit picky but: if we’ve got an n-state Turing machine encoding of a ZFC contradiction searcher, isn’t it the case that BB(n) is unknowable if all we know is ZFC, unless we run it and it actually stops within an amount of time we’re prepared to wait (which it can only do if ZFC is inconsistent, of course)?

Yes, I think I made that type of point here:

Thus, if O’Rear’s work is correct, we can only determine BB(1919) if we can determine whether ZF set theory is consistent. However, we cannot do this using ZF set theory—unless we find an inconsistency in ZF set theory.

I should add that most mathematicians are far more scared of being hit by an asteroid when they walk out the door than discovering an inconsistency in ZFC, so you’ll often find statements that omit the clause “unless ZFC is inconsistent”. I did that here:

In short, BB(7910) is unknowable if all we know is ZFC.

Perhaps you are reacting against that momentary lack of caution. Since I pride myself on being clear about precisely this issue, I’ll change my statement to this:

In short: if the usual axioms of set theory are consistent, we can never use them to determine the value of BB(7910).

Less flashy, but more accurate.

By the way, it’s even possible that the propositional calculus is inconsistent. There are proofs that it’s not… but if the propositional calculus is inconsistent, we can’t trust anything, not even these proofs!

In practice, every logician I know wisely acts like the propositional calculus is consistent—and most mathematicians scold me for even mentioning the possibility that it’s not. It’s not something I lose sleep over, but I think it’s interesting.

Thanks, Stefan and John. My point was more along the lines that running the machine might also give you some information (but was unlikely to, since it would only do that if ZFC was inconsistent and that inconsistency was small). I’m not a good enough logician to know if there are any statements independent of ZFC which can actually be phrased as search problems that might stop in finite time, but if they exist they’d be unprovable-to-terminate if you only have ZFC, unless running them actually reveals a stop after a short time.

My point was more along the lines that running the machine might also give you some information (but was unlikely to since it would only do that if ZFC was inconsistent and that inconsistency was small.)

True. Part of the problem is that in this game people are trying to minimize the number of states in their Turing machines, not minimize the run time before a contradiction is spotted if one exists. If you actually wanted to find a contradiction before our civilization ends, you’d write a different sort of program.

Here’s something else I wanted to add. If we really believe ZFC is consistent, we can add the axiom Con(ZFC) to ZFC and get an axiom system that proves that a Turing machine that searches for an inconsistency in ZFC never halts. This does not necessarily enable us to determine the value of BB(1919) or BB(7910)! However, it raises a curious question: could some set of plausible extra axioms enable us to determine the value of BB(n) for some n where we’d otherwise be stuck?

A concrete example might be BB(5). The obstacle to determining its value is that there are about 40 Turing machines with 5 states that seem to run forever… but we don’t know how to prove they do. If we thought hard about these, might we either settle these cases using ZFC or invent plausible new axioms that settle them?

Possibly… but I doubt it will go that way. We could, if we wanted, add the Riemann Hypothesis or the Collatz Conjecture as an extra axiom to ZFC. But we probably won’t—not unless they’re proved independent of ZFC.

I’m not a good enough logician to know if there are any of the statements which are independent of ZFC which can actually be phrased as search problems which might stop in finite time.

The only statements of this form that leap to mind are “ZFC + X is inconsistent” for various axioms X. We can always search for a proof of 0=1 in ZFC + X.

If we really believe ZFC is consistent, we can add the axiom Con(ZFC) to ZFC and get an axiom system that proves that a Turing machine that searches for an inconsistency in ZFC never halts. This does not necessarily enable us to determine the value of BB(1919) or BB(7910)! However, it raises a curious question: could some set of plausible extra axioms enable us to determine the value of BB(n) for some n where we’d otherwise be stuck?

The answer is yes! By starting with ZFC and repeatedly adding axioms that say “all my previous axioms are consistent”, and doing this infinitely many times, we can eventually get axioms that determine BB(n) for all n!

In fact we don’t even need to start with ZFC: Peano arithmetic will suffice!

It’s fairly subtle, since we need to use transfinite induction and choose an indexing scheme for countable ordinals, and the results can depend on our indexing scheme. I would like to learn this and explain it sometime.

Wow! Throughout this whole topic I’d been unsure about the dual role being played by ZFC (or ZF, or PA) as both the means by which we prove termination/non-termination and also being encoded in what the Turing machine is computing. It wasn’t clear to me if this was considering excessively complete programs and that there might not be much simpler programs which aren’t provable to terminate under any reasonable theory.

But the above is talking about determining BB(n) — ie behaviour of all programs of length n — from this infinitely augmented theory, not just the behaviour of a program of length n derived from ZFC (or one of its augments).

It wasn’t clear to me if this was considering excessively complete programs and that there might not be much simpler programs which aren’t provable to terminate under any reasonable theory.

The ‘logic hackers’ have found a 1919-state Turing machine whose termination cannot be decided using ZFC. There might be much simpler programs whose termination cannot be decided using ZFC. Somewhere between 5 and 1919 lies the smallest N such that there’s an N-state Turing machine whose termination is undecidable in ZFC. This is also the smallest N such that BB(N) cannot be proven to have the value it has using ZFC. (All this is assuming ZFC is consistent!)

It’s an annoyingly large gap, but shrinking it a lot will probably require new ideas, not just writing more clever programs that check ZFC for contradictions.

As far as I can tell, nobody has a clue whether or not we’ll ever be able to find the smallest N such that there’s an N-state Turing machine whose termination is undecidable in ZFC.

But all this is about ZFC, not “any reasonable theory”.

The logic hackers tried asking the same questions for PA. To find some number N such that BB(N) cannot be proved to have the value it has using PA, it suffices to create an N-state Turing machine that seeks a contradiction in PA. However, even though PA seems simpler than ZFC in some respects, it’s apparently more complicated in other ways, which have so far made this number N larger than 1919.

Some of the theories I described, where we take ZFC or PA and repeatedly add axioms that say “all my previous axioms are consistent”, infinitely many times, are not exactly what I’d call “reasonable” axioms. They’re believable, but I don’t think it’s possible to work with them in practice. The fact that they can be used to settle the Busy Beaver problem shows that no computer program can enumerate all the consequences of these axioms.

Hmm, if I understand the discussion correctly, the problem is that once you get to transfinite ordinals the method you use to code up the ordinal can contain lots of information, so you can do some prestidigitation to be able to prove every statement. I guess one might then ask how much one can prove if one uses “natural” representations of ordinals, and don’t use any “tricks to code up extra information”. But then the problem is that the terms in quotes are not well-defined.

I don’t know any papers about it. There should be some philosophy and logic papers about the concept of “logical certainty”, and the question of when we can trust a consistency proof: they are well-known issues that have been discussed for a long time. But I don’t know any papers raising doubts about the usual consistency proofs for the propositional calculus.

Personally I think the consistency of very fundamental principles like the propositional calculus, the predicate calculus and Peano arithmetic is widely believed for the following reasons: 1) they haven’t yet led to any contradictions, and 2) we can prove certain subsets of these principles won’t lead to contradictions if we assume certain other subsets won’t lead to contradictions. So, everyone has decided that these principles are consistent. And I think that’s fine — as long as we notice what we are doing!

The following paper, due to appear in the December issue of Cognitive Systems Research, gives a finitary proof of consistency for the first-order Peano Arithmetic PA, as sought by Hilbert in the second of his twenty-three problems posed at the ICM in 1900:

I’m just saying that if there’s a derivation of F = T in the predicate calculus, we can probably derive almost anything in any system of logic, so we can’t trust the arguments usually given to prove the consistency of mathematical systems.

“…there are about 40 [BB(5)] machines that do very complicated things that nobody understands. It’s believed they never halt, but nobody has been able to prove this yet.”

I see from the referenced historical survey that the number of “holdouts” had been reduced to ~100 (out of ~63 trillion, right?) by 1990 via a combination of machine and manual analyses, so presumably an additional 60 machines were proven not to halt in the ensuing 26 years. Can anyone describe the basis for the belief that the remaining 40 never halt? Also I am curious as to recent progress in identifying non-halters: Is it fair to say that it’s stalled, or are successive cases still being resolved?

Thanks for digging into this. I think the details will actually be more interesting than the “world records” I listed in my post. What have people done to study those 40 holdouts that prevent us from computing BB(5)? It’s bound to be interesting.

Unfortunately many of the people trying to understand BB(n) seem to be “hackers”, more interested in solving problems than writing nice expository papers.

The study of Turing machines with 5 states and 2 symbols is still going on. Marxen and Buntrock (1990), Skelet, and Hertel (2009) created programs to detect never halting machines, and manually proved that some machines, undetected by their programs, never halt. In each case, about a hundred holdouts were resisting computer and manual analyses. See Skelet’s study in

My program, called bbfind, enumerates the Turing machines and then tries to prove their infiniteness. As a side effect, stopping machines are identified and checked for a BB record.

Current version properly evaluates S(n) for class TM(4) and an important subclass of TM(5), called RTM(5) (reversal Turing machines with 5 states).

The function S(n) is the more technical name for what I’m calling BB(n) in this post. TM(5) is the set of 5-state Turing machines, which is what arch1 was asking about.

For class TM(5) the program leaves 164 machines unproven (some are isomorphic to others for trivial reasons).

This is larger than the number “about 40” that I mentioned in my blog post, which I got from the Wikipedia article, which says: “The current 5-state busy beaver champion produces 4,098 1s, using 47,176,870 steps (discovered by Heiner Marxen and Jürgen Buntrock in 1989), but there remain about 40 machines with non-regular behavior which are believed to never halt, but which have not yet been proven to run infinitely”.

The Wikipedia article refers to Skelet for this fact, and indeed, later on this page Skelet mentions that he’s been unable to determine whether 43 machines halt or not. I guess he uses his program bbfind for an initial analysis and then works harder!

Current version works slowly, and full scanning for TM(5) may take 1 or 2 weeks.

For class TM(5) the program enumerates about 150M machines.

I don’t know where that number comes from. The total number of n-state Turing machines should be

(4(n+1))^{2n},

and thus for n = 5,

(4 · 6)^{10} = 24^{10} = 63,403,380,965,376,
which is indeed about 63 trillion, as arch1 wrote. However, experts know ways to rule out most of these right away!

There’s a lot of overlap, but combined they appear to have resolved 26 of the 42, leaving 16 unresolved. However, Briggs notes that one of the machines labelled “BL_2” should also be classified as an HNR machine, bringing the total back up to 17.

That is, if you trust Skelet, Briggs, univerz, and Cloudy176. Probably some extra verification will be necessary.

Cool! Thanks for the update! Like you, I hope someone publishes this work, because it’s a bit risky to have results scattered around like this, a bit hard to assemble, relying on us to trust various pseudonymous authors.

I’m not sure I’d say we’re “extremely close” to solving BB(5). It all depends on how hard the remaining 17 math problems are. If they’re as difficult as the Collatz conjecture, it could take centuries to solve them.

I don’t understand: the halt of the Turing machine happens with an untouched part of the tape containing only 0s, and a terminal tape with 2^N possible symbols, whose last printed symbol is 0 or 1, together with a second-to-last state.
If a single state and a symbol are chosen, then there is a Turing machine halt; but if the terminal tape is great enough, then the halt state can be impossible for each possible string of symbols on the tape, and for each initial state (if this happens the machine does not stop). I think that the number of possible terminal strings is less than the number of possible steps of the Turing machine, so it can be tested whether it stops; but there must be a mistake of mine somewhere.

I try to think of the final state of the Turing machine: the final tape is a string filled with 1s and 0s up to the terminal symbol, which is 1 or 0, and the remaining tape is full of 0s (the positions where the head of the Turing machine didn’t write before the halt); then the Turing machine halts, at a possible inner point.
It is possible to study the final part of the Turing machine (like the asymptotic behaviour of a function): one can choose an initial cell (near the terminal part of the tape) and an initial state, and verify whether the Turing machine halts (searching whether the final string is a terminal string). I think that the number of steps is lower than the number of steps of the full sequence (although the halting cell could be in the middle of the tape): it is not possible to verify the number of steps of the Turing machine, but it is possible to verify whether there are sequences that give the halt (and one of these is a possible busy beaver game), and to know the final values (like the asymptotes).
The symbols on the tape change with the dynamics, but if one considers all the possible final strings, one of these must be the terminal string (with a different initial state) with minimum steps to obtain the halting (for example, if one chooses a cell, a terminal string, and an inner state, then there may be only a few steps needed to verify whether there is the halt).

I am thinking that there are many Turing machines that have the same BB(N). For example, a machine that makes the same moves to the right instead of the left, or one with an initial tape full of 1s (an arbitrary redefinition of the starting tape, giving a busy beaver game with the same dynamics and opposite symbols), or a Turing machine with a permutation of the state symbols (or a combination of all these conditions). But if this is a dynamic, is it possible to increase BB(N) with an arbitrary initial condition, an initial string with some 1s and 0s, of finite length? If the initial string is not a string that the Turing machine writes on the tape starting from the empty tape, then it is possible to increase the length of BB(N).
If the Turing machine does not halt, then there is a periodicity in the written string together with the written states (another tape).

Slightly off topic, but the latest article in Quanta describes a recent result showing that anything that can be proved using the infinite Ramsey theorem applied to pairs can also be proved without assuming the existence of any infinite sets. But for the infinite Ramsey theorem applied to triples, this isn’t true!

But then in January, Patey and Yokoyama, young guns who have been shaking up the field with their combined expertise in computability theory and proof theory, respectively, announced their new result at a conference in Singapore. Using a raft of techniques, they showed that Ramsey’s theorem for pairs is indeed equal in logical strength to primitive recursive arithmetic, and therefore finitistically reducible.

The article nicely explains what this statement says; since it concerns arbitrary infinite sets I presume it’s the consequences of Ramsey’s theorem for pairs for arithmetic that, when combined with very weak axioms, are equivalent to primitive recursive arithmetic.

Primitive recursive arithmetic or PRA is a set of axioms for arithmetic much weaker than Peano arithmetic, which doesn’t even include the quantifiers ∀ and ∃. Let me take this opportunity to help myself remember it. According to Wikipedia:

The logical rules of PRA are modus ponens and variable substitution. The non-logical axioms are:

• S(x) ≠ 0;

• S(x)=S(y) ⇒ x=y,

and recursive defining equations for every primitive recursive function as desired.

So, very simple!

The Ackermann function is computable but not primitive recursive; no primitive recursive function can grow that fast, so PRA is not only finitistic but also limited in what functions it can talk about.
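For reference, here is a sketch of the standard two-argument Ackermann–Péter variant of that function:

```python
def ackermann(m, n):
    """The two-argument Ackermann-Peter function: computable,
    but it grows faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

The growth is already visible at tiny inputs: `ackermann(2, 3)` is 9 and `ackermann(3, 3)` is 61, but A(4, 2) = 2^65536 − 3 has 19,729 digits.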

In fact, I just read that a function

f: ℕ → ℕ

is primitive recursive iff there is a natural number k such that f(n) can be computed by a Turing machine that always halts within A(k, n) or fewer steps, where A is the Ackermann function! So, there’s a nice connection between primitive recursive arithmetic and the Busy Beaver Game!

The idea of having a separate symbol for every primitive recursive function sounds like a mess but it’s just a way of avoiding the need to set up a framework to define new functions. There’s even a way to describe PRA that doesn’t mention logical connectives at all!

We can prove PRA is consistent if we can do induction up to the ordinal ω^ω. By comparison, to prove Peano arithmetic consistent we need to do induction up to ε₀.

The article touches on, but doesn’t dwell on, the big project of classifying which statements can be proved in which kinds of arithmetic. It’s called reverse mathematics, and among people working on ‘conventional’ foundations of mathematics—that is, not topos theory, homotopy type theory or other category-flavored approaches—this project has become quite popular.

We have a sort of ongoing project where we explore the use of the human body for “programming” or “steering” tasks, which is intended to go beyond what you can do with a screen and typical input devices (which mainly use audio and visual interaction and “finger tapping or swishing”, i.e. interactions with a keyboard, mouse or mousepad). In our recent experiment you can program a 2-symbol Turing machine with hand gestures: http://www.randform.org/blog/?p=6184
The application itself (programmed by Tim) is however intended to also be used with other sensory input (the most banal is of course again the keyboard, or buttons and mouse, and in fact those were used while prototyping the application). The experiment is summarized in a blog post: http://www.randform.org/blog/?p=6184
Some examples of “dance gestures” are also given there.
