Space Is Easier Than Time

Anil Ananthaswamy is the author of the book The Edge of Physics. He also consults for Britain’s New Scientist weekly magazine, and wrote an interesting article on space and time entitled, “Space vs time: One has to go—but which?” He started the article with this:

TO ISAAC NEWTON, space was the “sensorium of God,” the organ through which the deity surveyed His creation. It was absolute, unchanging, infinite. Flowing through it “equably without regard to anything external,” as Newton wrote in his great physical treatise, was another similarly absolute heavenly creation: time.

Today I wish to talk about what we know about space, not space as viewed by Newton, but space as used in complexity theory. For us complexity theorists, space is a measure of the cost of a computation of a Turing Machine (TM). It is the number of squares that a TM uses during a computation. It is not an “organ,” nor is it absolute and unchanging; it is simply the number of different cells written to by the computation. Meanwhile time is the number of steps that a TM takes during a computation.

I am currently teaching complexity theory at Tech, and realized that there is a huge gap between our understanding of space and time. I am starting to wonder if there should be a textbook that makes this viewpoint clearer, but that is another whole project.

Let’s just look at the contrast between what we know about space and time.

The Differences

Models: For space the exact TM model used is relatively unimportant. The key is to have an input tape that is read-only and one working tape of length S(n) to handle inputs of length n. In the remaining comments we will not worry about the function S(n). It needs to grow at least as log n to avoid some technical issues, and in some cases one needs S(n) to be “constructible,” meaning that a TM on length-n inputs can mark the S(n)-th cell as a boundary using O(S(n)) space. But just think of S(n) as some reasonable function.

With time, however, it is necessary to have more than one working tape, and whether more tapes beyond two help save you time is unknown. Worse yet, it is possible that having two-dimensional planes or higher-dimensional spaces, or giving the tape a tree structure, or having some “random access” mechanism, allows you to compute more functions in less time, but nobody knows. For space complexity alone, all of these models are equivalent; 3D is no more powerful than 2D or 1D. In this sense computational space is already “holographic.”

Simulations: For space we have simple simulations of one model by another. For example, a space-bounded TM with a hundred tapes can be simulated in the same space by another TM with just one tape. The proof of this is quite simple.
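The flavor of the many-tapes-to-one-tape simulation can be sketched in a toy Python model. This is only an illustration of the track-packing idea, not a real TM: each cell of the single tape holds one symbol per simulated tape (in the actual construction each track would also carry a head-marker symbol; here we just record head positions directly).

```python
# Toy illustration of the k-tapes-to-one-tape idea for space:
# pack k work tapes into a single tape whose cells hold k "tracks"
# (one symbol per original tape). The packed tape touches no more
# cells than the longest original tape, so space is preserved up to
# a constant-size alphabet blowup.

class PackedTape:
    def __init__(self, k, blank='_'):
        self.k = k
        self.blank = blank
        self.cells = {}          # position -> list of k track symbols
        self.heads = [0] * k     # head position of each simulated tape

    def _cell(self, pos):
        if pos not in self.cells:
            self.cells[pos] = [self.blank] * self.k
        return self.cells[pos]

    def read(self, tape):
        return self._cell(self.heads[tape])[tape]

    def write(self, tape, sym):
        self._cell(self.heads[tape])[tape] = sym

    def move(self, tape, delta):  # delta in {-1, 0, +1}
        self.heads[tape] += delta

    def space_used(self):
        # number of distinct cells touched on the single packed tape
        return len(self.cells)

t = PackedTape(k=3)
t.write(0, 'a'); t.move(0, 1); t.write(0, 'b')   # tape 0 uses cells 0, 1
t.write(1, 'x')                                   # tape 1 reuses cell 0
print(t.space_used())  # 2 cells, not 3: tracks share cells
```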

For deterministic time we have the beautiful amortized method of Frederick Hennie and Richard Stearns that shows that any number of tapes can be simulated by two tapes, with the time going from T(n) to O(T(n) log T(n)). This is one of the more complex simulations in complexity theory, and it used the notion of amortized cost years before the idea was formalized in the data-structures community. The latter was done by Bob Tarjan—see here and here.
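The amortized flavor of their argument can be illustrated with the textbook binary-counter example. This is a toy stand-in for the notion of amortized cost, not the Hennie-Stearns construction itself:

```python
# Toy illustration of amortized cost: incrementing a binary counter
# n times flips at most 2n bits in total, even though a single
# increment can flip many bits at once. The *amortized* cost per
# increment is O(1), just as Hennie-Stearns charge expensive tape
# reorganizations against many cheap steps.

def increment(bits):
    """Increment a little-endian bit list; return the number of bits changed."""
    flips, i = 0, 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # clear trailing ones
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(0)       # grow the counter when it overflows
    bits[i] = 1              # set the first zero bit
    return flips + 1

bits, total, n = [0], 0, 100
for _ in range(n):
    total += increment(bits)
print(total, total <= 2 * n)   # total bit flips stay below 2n
```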

Diagonalization: For space we know that more space yields new languages, even if the growth is tiny.

Theorem:
Deterministic space S'(n) is more powerful than deterministic space S(n) provided that S(n) = o(S'(n)).

For deterministic time, and even for nondeterministic time, there are no separation theorems that are even close to this. For example: we cannot prove that deterministic time T(n) log T(n) is more powerful than deterministic time T(n).

Determinism vs Nondeterminism: For space we know the following beautiful theorem due to Walter Savitch:

Theorem:
Nondeterminism helps at most a polynomial for space bounds.

Of course, he actually proved that any nondeterministic TM that uses space S(n) can be simulated by a deterministic one that uses space O(S(n)²). This amazing result shows that “guessing” for space is not very powerful. Indeed, consider:

The corresponding theorem for time would be that nondeterministic time T(n) can be simulated in deterministic time T(n)², which would imply P = NP.

It is even worse. For space it is still open whether nondeterminism helps at all. That is, it might be that a nondeterministic TM with space S(n) could be simulated by a deterministic one with the same space.
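The recursive midpoint idea behind Savitch’s proof can be sketched as graph reachability. This is a toy deterministic version; the graph and vertex names are made up for illustration, and a nondeterministic machine would guess the midpoint rather than try them all.

```python
# Sketch of the idea behind Savitch's theorem: decide whether there is
# a path from s to t of length at most 2**k by recursing on a midpoint.
# The recursion depth is k, and each stack frame remembers only a few
# vertices, so on a configuration graph with 2**S nodes the space used
# is O(S**2) rather than proportional to the number of configurations.

def adj_vertices(adj):
    vs = set(adj)
    for outs in adj.values():
        vs.update(outs)
    return vs

def reach(adj, s, t, k):
    """True iff there is a path from s to t of length <= 2**k."""
    if k == 0:
        return s == t or t in adj.get(s, ())
    # Try every possible midpoint m; only the current (s, m, t, k)
    # needs to live on the recursion stack, which is the space saving.
    return any(reach(adj, s, m, k - 1) and reach(adj, m, t, k - 1)
               for m in adj_vertices(adj))

g = {1: [2], 2: [3], 3: [4]}
print(reach(g, 1, 4, 2))   # True: path 1->2->3->4 has length 3 <= 2**2
print(reach(g, 4, 1, 3))   # False: no path back from 4
```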

Boolean Operations: For space we know the following theorem that was found independently by Neil Immerman and Robert Szelepcsényi:

Theorem:
Nondeterministic space is closed under all boolean operations.

It is easy to show this for union and intersection, but the deep result is that it also works for complement.
This terrific result shows that for space, guessing a path in a maze requires the same amount of space as proving there is no path. I discussed this ages ago in detail here. Whereas:

The corresponding theorem for time would be that NP = coNP.
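The inductive-counting idea behind the Immerman-Szelepcsényi proof can be sketched deterministically. This toy enumerates where a real NL machine would guess witnesses; the graph is made up for illustration.

```python
# Sketch of inductive counting: compute c_i = |{v reachable from s in
# <= i steps}| from c_{i-1} using only membership tests, then use the
# final count to *certify* non-reachability: t is unreachable iff we
# can exhibit the full count of reachable vertices without including t.

def reachable_in(adj, s, v, i):
    """Membership test: is v reachable from s in at most i steps?"""
    if i == 0:
        return v == s
    return reachable_in(adj, s, v, i - 1) or any(
        reachable_in(adj, s, u, i - 1) for u in adj if v in adj[u])

def vertices(adj):
    vs = set(adj)
    for outs in adj.values():
        vs.update(outs)
    return vs

def count_reachable(adj, s, n):
    """Inductively compute c_0, c_1, ..., c_{n-1}."""
    vs = vertices(adj)
    return [sum(1 for v in vs if reachable_in(adj, s, v, i))
            for i in range(n)]

def certify_unreachable(adj, s, t, n):
    """t is unreachable iff the full count is achieved without t."""
    c = count_reachable(adj, s, n)[-1]
    witnesses = [v for v in vertices(adj)
                 if v != t and reachable_in(adj, s, v, n - 1)]
    return len(witnesses) == c

g = {1: [2], 2: [3], 3: []}
print(count_reachable(g, 1, 3))      # [1, 2, 3]: one new vertex per step
print(certify_unreachable(g, 3, 1, 3))  # True: 1 is not reachable from 3
```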

Open Problems

Why is space so much “easier” than time? Will we ever understand time as well as space? I could go on and add even more examples, but will stop with the above.

Finally, do you know that Frederick Hennie was the PhD advisor of the famous computer architect who (with James Rottsolk) co-founded what is now Cray, Inc.? No, that person was not Seymour Cray.

Since one of the more useful functions of a Turing Machine is to determine the values of number-theoretic functions (which is why Turing’s focus in his paper was on identifying computable numbers, not merely on calculating numbers), perhaps we should evaluate the space and time characteristics of a Turing Machine in functional terms, based on finitarily defined arithmetical properties (i.e., as opposed to non-finitarily defined set-theoretical properties).

For instance, if F(x) is an algorithmically computable arithmetical function, and ALGSPACE(F, n) is the size of the smallest algorithm that computes the first n values F(1), F(2), …, F(n) of F(x), then ALGSPACE(F, n) is bounded as n -> infinity.

So F(x) can be said to be well-behaved as to algorithmic space.

However if G(x) is an algorithmically verifiable but not algorithmically computable arithmetical function, and ALGSPACE(G, n) is the size of the smallest algorithm that computes the first n values G(1), G(2), …, G(n) of G(x), then ALGSPACE(G, n) -> infinity as n -> infinity.

So G(x) cannot be said to be well-behaved as to algorithmic space.
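The commenter’s claim about computable F can be illustrated concretely. Here F(x) = x*x is a hypothetical stand-in for an arbitrary algorithmically computable function; the point is only that one fixed-size program produces the first n values for every n, so the smallest-algorithm size stays bounded as n grows.

```python
# Illustration of the bounded-ALGSPACE claim: for a computable F,
# the same constant-size program computes F(1), ..., F(n) for every n.
# Only the input n grows; the program does not. F(x) = x*x is just an
# example stand-in for an arbitrary computable arithmetical function.

def F(x):
    return x * x

def first_n_values(n):
    # Constant-size code, independent of n.
    return [F(x) for x in range(1, n + 1)]

print(first_n_values(5))   # [1, 4, 9, 16, 25]
```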

What about time?

If F(x) is an algorithmically computable arithmetical function, and ALGTIME(F, n) is the time taken by the smallest algorithm to compute the first n values F(1), F(2), …, F(n) of F(x), then ALGTIME(F, n) is also an algorithmically computable arithmetical function.

So F(x) could be said to be also well-behaved as to algorithmic time.

However if G(x) is an algorithmically verifiable but not an algorithmically computable arithmetical function, and ALGTIME(G, n) is the time taken by the smallest algorithm to compute the first n values G(1), G(2), …, G(n) of G(x), then ALGTIME(G, n) is also not definable by an algorithmically computable arithmetical function.

So G(x) cannot be said to be well-behaved as to algorithmic time.

That there are algorithmically verifiable arithmetical functions which are not algorithmically computable is a consequence of Cantor’s diagonal argument and the instantiational properties of Goedel’s Beta-function.

I’ve been pondering this dichotomy of space & time in TMs quite a bit over the years, possibly spurred in part by one of your earlier blogs on the subject. it is fascinating that TMs have a time/space tradeoff & continuum something like in real physics.

yes a total review/rewrite of TM theory wrt perspective of space/time tradeoffs would be a fantastic reference and maybe a futuristic perspective as theory seems to be moving/developing in that direction. few people could write such a reference, you are probably one of the few. would *defn* buy that one!

something about space/time seems to be tightly coupled to the issue of reversibility. if there is large time and low space on a computation, it also implies high reversing of squares, i.e. squares that are overwritten. TM reversibility seems to be the “hidden order” linking space and time [shades of Bohm’s “implicate order”]. it hasn’t been studied a lot in isolation; there are many famous theorems about space/time but very few about reversibility.

another very interesting place this manifests is in “time/space tradeoffs on SAT” pioneered by van Melkebeek & Williams.

yes I agree with you very much that it seems some basic theorems linking them are unknown and remaining to be discovered. also in a deep sense P=?NP is probably tightly linked to the area.

a sample problem I have used to look into time/space tradeoff & interconnection/coupling is this one, which looks at TM “run sequences”. it seems to have a deep link, but I haven’t written up the idea yet, just looking for some motivation/excuse.

chee yap has a great chapter on TM reversibility in his online ref; it’s one of the most comprehensive surveys I’ve seen. also Williams has an interesting paper on TM reversibility. there are also many isolated tcs.se questions on the subject. might try to collect all these refs someday in a blog but they are scattered.

overall from a future pov it might seem our study of complexity theory in terms of Time or Space alone is confusingly/unnecessarily specialized, compartmentalized and reductionistic [but that is an overall problem with current science/math].

Definition “Quantum systems engineering” is the restriction of quantum physics to the question: “What are the most efficient (classical) Turing Machines (TMs) that predict the output of quantum simulations?”

By this definition, it is computationally natural to step outside the realm of absolute space — via the lattice models of greyhole dynamics, for example — while continuing to respect the notion of absolute time (in its geometric instantiation as symplectomorphic flow). QSE thereby loosens the tethers of space and dimensionality as follows:

(1) QSE loosens the tethers of too-rigid macroscopic Euclidean state-space by replacing its Cartesian metric with the more-flexible Riemannian geometry of real manifolds.

These concrete space-loosening and dimension-loosening steps are associated with gains in mathematical naturality, computational efficiency, and practical engineering scope that are considerable and still accelerating, needless to say.

Aside from the already-mentioned tactic of replacing global definitions of absolute time with local definitions of symplectomorphic flow, how else might 21st century STEM enterprises seek to loosen the tethers of time?

Dick’s (terrific!) essay closes by asking “Why is space so much ‘easier’ than time?” Yes, the tethers of time seem somehow more hard-to-loosen and even hard-to-describe than the tethers of space and dimensionality … perhaps they are mathematically more fundamental?

Uhhh … heck … it’s not clear to me what Neel is talking about, but if perhaps it means (as I read it) that 21st century HOTT-like methods are acting to untether complexity theorists from generations of wasted effort, by clarifying the boundary between decidable and undecidable propositions regarding complexity theory classes, then this would make a terrific column for Gödel’s Lost Letter and P=NP. Perhaps there might be a Neel Krishnaswami GLL guest-column?

Oliver Twist would like to hear some more from Pip, please!

And finally, Dick Lipton and Ken Regan, please accept the continuing appreciation and thanks (from me and many) for your outstanding sustainment of the enlightened and enlightening forum that is GLL.

Perhaps I had better mention too, that the study of quantum greyhole dynamics seeks to clarify those aspects of quantum black hole dynamics that can be concretely reduced to Turing Machine simulations of symplectomorphic flows.

“Here I would like to commend some striking remarks of Neel Krishnaswami, in which he suggests that “canonicity criteria save true theorems from ex falso quodlibet.” Uhhh … heck … it’s not clear to me what Neel is talking about, but if perhaps it means (as I read it) that 21st century HOTT-like methods are acting to untether complexity theorists from generations of wasted effort, by clarifying the boundary between decidable and undecidable propositions regarding complexity theory classes…”

The problem with deciding whether ‘canonicity criteria save true theorems from ex falso quodlibet’ is that you need to decide which definition of ‘truth’ you are using in your theorems.

Contrary to our current paradigm, we actually use two—apparently contradictory, but perhaps complementary—definitions of ‘truth’ in our mathematical reasoning, without clearly distinguishing between them (and possibly sometimes interchangeably even within the same argument)!

The first is a non-finitary, algorithmically verifiable, ‘truth’, which is reflected in the standard (so far accepted as canonical) model of PA.

Gentzen has given a non-finitary proof of consistency for PA that indirectly justifies the Axiom Schema of Finite Induction in this model.

It is a model beloved of Set Theorists, since it allows ‘undecidable’ arithmetical propositions and seemingly assures—as Goedel noted—that the “formation of higher and higher types can be continued into the transfinite”.

The second is a finitary, algorithmically computable, ‘truth’, which is reflected in the algorithmic (contender for canonicity) model of PA defined in the Birmingham paper.

It is a model that Computationalists should love, since it yields a finitary proof of consistency for PA by giving a computationalist justification for the Axiom Schema of Finite Induction.

It does not, moreover, admit troublesome ‘undecidable’ arithmetical propositions (of course, at the cost of not being able to comfortably conclude the existence of a contrafactual from a negation)!

A human intelligence has the choice of declaring algorithmically verifiable ‘truth’ as ‘canonical’ when wearing the non-finitary hat and reasoning set-theoretically; and declaring algorithmically computable ‘truth’ as ‘canonical’ when wearing the finitary hat and reasoning arithmetically.

A mechanical intelligence may have no choice but to declare only algorithmically computable ‘truth’ as ‘canonical’!

Whether there is an intelligence that could view a different ‘truth’ as ‘canonical’ is an intriguing question.

Someone else I know (an American) also proved the collapse at the second level, a year or so earlier, but thought it was possibly a bug in a proof of something else he and his co-authors were after. A third person had an inkling at level 3. Anyway, I think all this credit stays at the second level.

Hard disks tend to be filled up… but the processor is rarely used to its full capacity.
Cities tend to be crowded… but too many people are left unemployed.
Maybe that’s because space is in one, two, or three dimensions… but the ways of using time are of infinite dimension.