
Saturday, September 13, 2014

Is there a smallest length?

Good ideas start with a question. Great ideas start with a
question that comes back to you. One such question that has haunted scientists
and philosophers for thousands of years is whether there is a smallest unit
of length, a shortest distance below which we cannot resolve structures. Can we
look ever closer into space, time, and matter? Or is there a limit,
and if so, what is it?

I picture our distant ancestors sitting in their cave
watching the world in amazement, wondering what the stones, the trees, and they themselves
are made of – and starving to death. Luckily, those smart enough to hunt down
the occasional bear eventually gave rise to a human civilization sheltered enough
from the harshness of life to let the survivors get back to watching and
wondering what we are made of. Science and philosophy in earnest are only a few
thousand years old, but the question of whether there is a smallest unit has always been
a driving force in our studies of the natural world.

The ancient Greeks invented atomism, the idea that there is an
ultimate and smallest element of matter that everything is made of. Zeno’s
famous paradoxes sought to shed light on the possibility of infinite divisibility.
The question came back with the advent of quantum mechanics, with Heisenberg’s
uncertainty principle that fundamentally limits the precision with which we can
measure. It became only more pressing with the divergences in quantum field
theory, which are due to the inclusion of infinitely short distances.

It was in fact Heisenberg who first suggested that the divergences
in quantum field theory might be cured by the existence of a fundamentally
minimal length, and he introduced it by making position operators non-commuting
among themselves. Just as the non-commutativity of momentum and position operators
leads to an uncertainty principle, so the non-commutativity of position
operators limits how well distances can be measured.

In the mid-1960s Mead reinvestigated Heisenberg’s microscope,
the argument that led to the uncertainty principle, with (unquantized) gravity
taken into account. He showed that gravity amplifies the uncertainty so that it
becomes impossible to measure distances below the Planck length, about 10^-33
cm. Mead’s argument was forgotten, then rediscovered in the 1990s by string
theorists who had noticed that using strings to prevent divergences by avoiding
point-interactions also implies a finite resolution, if in a technically
somewhat different way than Mead’s.
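The Planck length is built from ħ, G, and c alone. A quick numerical check of the value quoted above (the constants are standard CODATA figures; the variable names are mine):

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: the scale below which, by Mead's argument,
# gravity forbids any finer resolution of distances
l_planck = math.sqrt(hbar * G / c**3)

print(f"Planck length: {l_planck:.2e} m")  # ~1.6e-35 m, i.e. ~1.6e-33 cm
```

Note how absurdly small this is: about twenty orders of magnitude below the size of a proton.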

Since then, the idea that the Planck length may be a fundamental
length beyond which there is nothing new to find, ever, has appeared in other
approaches to quantum gravity, such as Loop Quantum Gravity or
Asymptotically Safe Gravity. It has also been studied as an effective theory by
modifying quantum field theory to include a minimal length from scratch, an
approach that often runs under the name “generalized uncertainty”.
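One common effective form of the generalized uncertainty principle is Δx ≥ ħ/(2Δp) + β l_P² Δp/ħ: the extra term turns the usual trade-off around at the Planck scale, so that the position uncertainty has a minimum. A minimal numerical sketch (the parameter β and all names are my assumptions for illustration, not from any specific model in the post):

```python
import math

l_p = 1.616e-35   # Planck length in metres
beta = 1.0        # dimensionless GUP parameter (model-dependent assumption)

def delta_x(p):
    """Position uncertainty vs momentum uncertainty, both in Planck units:
    the usual 1/(2p) Heisenberg term plus the GUP correction beta*p."""
    return 1.0 / (2 * p) + beta * p  # in units of l_p

# Scan momentum uncertainties: ordinary QM lets delta_x -> 0 as p grows,
# but the correction term produces a floor at the Planck scale.
ps = [10**(k / 100) for k in range(-300, 301)]
dx_min = min(delta_x(p) for p in ps) * l_p

print(f"minimal resolvable length ~ {dx_min:.2e} m")
print(f"analytic value sqrt(2*beta)*l_p = {math.sqrt(2 * beta) * l_p:.2e} m")
```

Minimizing the expression analytically gives Δx_min = √(2β) l_P, which the scan reproduces: no matter how hard you squeeze the momentum, the resolution never improves past roughly a Planck length.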

One of the main difficulties with these theories is that a
minimal length, if interpreted as the length of a ruler, is not invariant under
Lorentz-transformations due to length contraction. This problem is easy to
overcome in momentum space, where it is a maximal energy that has to be made
Lorentz-invariant, because momentum space is not translationally invariant. In
position space one either has to break Lorentz-invariance or deform it and give
up locality, which has observable consequences, and not always desired ones. Personally,
I think it is a mistake to interpret the minimal length as the length of a
ruler (a component of a Lorentz-vector), and it should instead be interpreted
as a Lorentz-invariant scalar to begin with, but opinions on that matter
differ.

The science and history of the minimal length have now been
covered in a recent book by Amit Hagar:

Amit is a philosopher but he certainly knows his math and physics. Indeed, I
suspect the book would be quite hard to understand for a reader without at
least some background knowledge in math and physics. Amit has made a
considerable effort to address the topic of a fundamental length from as many perspectives
as possible, and he covers a lot of scientific history and philosophical
considerations that I had not previously been aware of. The book is also
noteworthy for including a chapter on quantum gravity phenomenology.

My only complaint about the book is its title because the
question of discrete vs continuous is not the same as the question of finite vs
infinite resolution. One can have a continuous structure and yet be unable to
resolve it beyond some limit (this is the case when the limit makes itself
noticeable as a blur rather than a discretization). On the other hand, one can
have a discrete structure that does not prevent arbitrarily sharp resolution (which
can happen when localization on a single base-point of the discrete structure
is possible).

(Amit’s book is admittedly quite pricey, so let me add that
he said, should sales reach 500 copies, Cambridge University Press will put a
considerably less expensive paperback version on offer. So tell your library to
get a copy, and let’s hope we make it to 500 so the book becomes affordable for
more of the interested readers.)

Every once in a while I think that maybe there is no
fundamentally smallest unit of length, that all these arguments for its
existence are wrong. I like to think that we can look infinitely closely into
structures and will never find a final theory, turtles upon turtles, or that
structures are ultimately self-similar and repeat. Alas, it is hard to make
mathematical sense of the romantic idea of universes within universes within
universes, not that I didn’t try, and so the minimal length keeps coming
back to me.

64 comments:

Thanks for pointing this book out! I'll definitely ask my library to order it.

There are two statements in the eighth paragraph which I found puzzling. The first one is that momentum space is not translation invariant. In order to understand it correctly, is it legitimate to reformulate it as "the physics is not invariant under translations in momentum space"? As the dual space of the Lorentz-invariant tangent space, I have always thought of momentum space itself as Lorentz invariant, too. That is, in the classical picture...

Secondly, I have never heard in this definiteness that any deformation of the Poincaré algebra will lead to non-locality. Is this really true in all generality? I have in mind deformations not only in the category of Lie algebras but also Hopf algebras.

Translations are not part of the Lorentz-group. You're confusing this with the Poincare-group. Physics is of course not invariant under translations in momentum space. A translation in momentum space is momentum non-conservation. Think of it this way: The zero-vector in momentum space is a special place. The zero-vector in position space is not.

As to non-locality. You can deform the action of the Poincare-algebra off-shell. You get problems with locality if you want the action to be deformed (not the standard action) on-shell, for free particles. Best,

If we search for QG by asking the question "what is the smallest geodesic?", then we inextricably incorporate GR from the very beginning. The smallest geodesic is a Planck mass black hole. This implies that we cannot resolve space-time below the Schwarzschild radius of a Planck mass BH. We then proceed to ask whether this is the highest or lowest energy state of space-time. It is of course the state of highest stress and 4-momentum, and the state of least stress is a vacuum state containing dark energy. This low energy state of space-time has the largest geodesic. Therefore the states in between must have energies that are integral multiples of the ground state. Hence geodesics are quantized. A simple calculation will show that there are 10^60 quantized states for the geodesics.

To include massless particles we consider not space-time but phase space. Here the uncertainty principle reigns supreme and governs the relationship between a particle's four-momentum and its wavelength. A particle's wavelength can be considered as its resolution of space-time, or how it observes space-time. A high energy particle observes a higher energy state of space-time than a low energy particle does.

Sabine, this is such an excellent blog post I thought it required a little more formal comment. You have a way of stating the issues clearly, and if some of us look deep enough, I agree with many of your answers too.

* * *

In itself and of the higher language level of physics, consistency in the logic of models and mathematics, as well as what seems to be a fuzzy-like logic of "uncertainty" suggesting at times some phenomena are illusions, is not necessarily a guarantee of our most general concept of evidently stable, complete, and consistent laws of Being, thus of experiments and measure.

Why should we expect, in a most comprehensive view between grounding simplicity and deep complexity, that the dynamics of a physical universe in its evolving could be otherwise? In the face of growing understanding, why must we despair at the given and enduring doubts as we face grounding mysteries of the unknown, or, for an early retreat, heads in the comfort of the sand, dare to ponder beyond the practical living only so far and no further?

At the frontiers of speculation, a cautious shift of eras as scientific, it is difficult not so much to build radically new theories - if possible reassured by proofs and experiments - but that in the partial headway in theory, concepts can run hauntingly parallel yet distinct as interpretations.

For the issue of minimum distance, a small lambda suggested by quantum theory considerations, that divided by the velocity of light was thought a needed minimum duration as well. So this issue stands in the state of our visions today. It was mentioned in the popular book Thirty Years that Shook Physics. Then follows a long plateau of rich speculation. Since then, a gridlock of what we intuit as less than a more general theory, at times coming to a point where we question our enterprise of inquiry and even thought itself.

Let us consider an article I saw but yesterday in Wired Magazine and the drama reported, including what the author of the concept, while exploring by standard quantum means and with predictions verified in measure, must have felt. This new era (recall, friend Erion, our talks on Lacan and the Borromean rings in the philosophy chat forum?) could be contrasted with the drama felt upon "discovery" - a deep desire to find and focused probabilities half realized for the Higgs in the standard model "found". Surely an important step. So now we have another clue, not alternative but part of the bigger picture, at least extended three levels of recursive scales where adjacent ones reflect the virial measures of things obeying the inverse square law in the measures squared.

So what can this mean as we look back on our models, or questions, so as to adapt and adjust them accordingly? For one thing, the astute application to centering physics on the weak force level of generations on this foundational level - issues of information and thermodynamic equivalences really - may not explicitly hold. This also applies to those models that try to resolve things by the CPT concept and its core logic.

It may be that nature, for some still undeclared reason three dimensional plus one, condenses space as physical density as well as extends it indefinitely as compactification. In nature, questions of renormalization may exist as the process finds its use and not only as projections of our dreams.

The same goes for what is near or distant, that within or excluded as touching, and all the in between. We as nature see but the rim or flange of our topological structures, as well as imagine extended ideas of voids or dimensions to deeper infinities of symmetry or broken symmetry over singularities and spacious singularities that we, by trivial addition in so many things (such as breaking the algebra of complex numbers or heat transfer into real plus imaginary parts), think the ancients saw in the pi put by design in the square base of their pyramids.

* * *

@Sabine, the statement "A high energy particle observes a higher energy state of space-time than a low energy particle"

implies that a relativistic muon, for example, sees space-time differently from a non-relativistic muon. Here also consider that in this phase space the ground state is that due only to the presence of dark energy. This is the absolute energy reference point. The other states of space-time occupy energy levels that are multiples of the ground state, the highest being the Planck state, which is 10^60 times the energy of the ground state.

"Implies that a relativistic muon, for example, sees space-time differently from a non-relativistic muon."

Yeah, as I said, it's not Lorentz-invariant, so better bury this idea.

Which btw isn't new but runs under the name rainbow gravity.

"Here also consider that in this phase space the ground state is that due only to the presence of dark energy. This is the absolute energy reference point."

I hate to break the news to you, but dark energy is actually (locally) Lorentz-invariant. You are taking words literally without understanding their technical meaning. Dark energy is expressed through a cosmological constant term in the field equations. This term is proportional to the metric tensor and has the same symmetries. The constant also doesn't have dimension of energy, but (depending on definition) some power of an energy scale. This scale however is not an energy in the sense of being the zero-component of a four-vector. In summary, you're not making sense. Best,

Good post Bee... foundations is always what came to mind, and with minimal length I had thought a standard with which measure runs into problems. The theoretical position is then realized with parameters reached as we know it in science.

In a sense a condensed matter theorist might find a foundations approach as a selective example of the building blocks one chooses to use, so in that sense does condensed matter theory recognize a foundation?

"Implies that a relativistic muon, for example, sees space-time differently from a non-relativistic muon." This is an experimental fact. Dark energy implies there is a background curvature even in the absence of matter fields. I do not see where the confusion resides.

Thank you for the thread and your technical paper; it was most useful to me. In my own crackpot-like theory, if I use integers in the simulation I find that the proton has five units for its size, i.e. the minimum length is 0.2 of the proton size.

I wonder if you could help me. I saw a paper a long time ago, which I am not able to find now, that says there are theories that predict this minimum length, but there was no reference. Do you know of any such reference? Thanks in advance.

Zephir, noise can enhance what we hear, and there are two approaches to how we deal with noise, one being Shannon's informational entropy. Links from Kover's fb group are relevant to the discussions here as to point or line distances. In 2D+1, I. Fuentes et al. discuss the dynamic Casimir effect slowing down the time dilation of the accelerated twin (thus dark energy QM issues). Uncle Al, the link involving scaled Borromean rings tends to see the physics as molecular in nature, so what you know of actual molecules may apply on this recursive scale. How did Fuentes derive the integer 240 in the formula? That suggests to me a link to the 2D+1 representation as if a surface. There can be point representations, and Poincaré, unless an artifact of pure gravity QM, is not the most foundational system.

qsa, in two minimal knot dimensions in three-space not all simplex values are divisible by prime 5 condensed natural dimensions. So it cannot be exactly 5 but a little more. Three rings and 6 compactified. But maybe we should just ask good questions (like: does a photon's wave front not exceed the speed of light?). Sometimes our theories are embarrassingly not crackpot enough.

"A minority held that there could be no fundamental length at all, but most were then convinced that a [different] fundamental length . . . , of the order of the proton Compton wavelength, was the wave of the future. Moreover, the people I contacted seemed to treat this much longer fundamental length as established fact, not speculation, despite the lack of actual evidence for it." ([224], p. 15)

Otto, I will comment on your comment after I hopefully receive a reply from Sabine.

It is easy to underestimate this structure and overestimate its range. In theory, at this null thermodynamic law, two such bodies can be interlinked, as in Whitehead's rings. The difference of 2- and 3-event systems could be a whole new quantum measure of this quasic space. In a sense a single ring could be the raw stuff of inertial gravity in both the loopy and stringy formulations. The double null thermodynamic law or its hot mirror equivalent. Thus SM 5-fold quasi scaling, n-thermodynamic laws, over the Omnium.

It is possible that Planck's constant is the minimum unit of action only for the atomic scale of nature's fractal hierarchy.

In such a paradigm, each fundamental discrete cosmological scale [..., subquantum, atomic, stellar, galactic, metagalactic, ...] has its own distinct Planck's "constant" that can be quantified if one knows the scaling laws of the paradigm.

On the question of whether the Planck length should be thought of as a scalar or not: in the thought experiments that put a minimum resolution length on measurements of position, is that minimum length a scalar, or will I get a different resolution in a different Lorentz frame? And would you say that the answer to this question has any bearing on the Planck length being a scalar?

Cheers,
Nirmalya

During the weekend I discussed with a mathematician who had a very interesting idea about how discreteness could be realised in physics. Neither of us believed in discretisation of space-time at the fundamental level.

His question was: "What is discrete and could be behind the continuum concept?" Coxeter diagrams could be the answer. They are discrete objects with a natural metric. From these one can deduce Lie algebras and groups, associated coset spaces, even Boolean algebras, and much more. His wild idea was that maybe the fundamental formulation might start from these diagrams. If one accepts some form of the "Mathematics is reality" vision, then this might be one possible starting point.

These arguments rely on spherical symmetry. If you boost them, you can't apply them in any obvious way. Besides this, these thought experiments consider elastic scattering (deflection in a microscope and all), while at (com) energies close to the Planck scale you should see deeply inelastic processes. Having said that, the variables that should quantify how strong the gravitational interaction is are the curvature (or powers thereof, respectively), and that is a Lorentz-invariant statement. The particles at their closest approach either do or don't form a strong gravity background. However, you may question whether this argument should hold for strong quantum gravity to begin with, because we don't know the full theory, and that's where it becomes fuzzy.

I'd say (and I have argued in some papers) that you should reasonably expect the relevant quantities that determine the experiment outcome to be Lorentz-invariants, and that there should be a Planck-scale bound on the cross-section (a scalar) rather than a limit on the resolution length of structures (not a scalar). The former more or less implies that there is no modification for free particles. (And that means that the total momentum is preserved as usual, as Giovanni has recently rediscovered.) The problem with my interpretation is that it doesn't lead to any spectacular predictions that would make it testable any time soon. Best,

We know that matter is discrete and consists of atoms. The atom is the smallest indivisible part of matter. Democritus said, "The more any indivisible exceeds, the heavier it is." From the point of view of information theory, atoms must exist: otherwise, a smaller object could contain more information than a larger one. What about geometry? Our world is 3+1 dimensional. Is there a smallest physical element of volume? It turns out the concept of two-dimensional (2D) area is distinctive. The point is that at the Planck scale all physics in a region of space is described by data located not in its volume but on its boundary surface. This is called the holographic principle. It is one of the most important pillars of quantum gravity. It states that a region with a boundary of area A is completely described by not more than A/4l^2 Boolean degrees of freedom, or about 1 bit of information per surface element of the Planck size ∆A = l^2. Is there a smallest area? I think if information theory is true, then the smallest area must exist.
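To get a feeling for the bound quoted in the comment above (at most A/4l^2 Boolean degrees of freedom for a boundary of area A), here is a quick numerical illustration; the choice of a 1-metre sphere and the variable names are mine:

```python
import math

# Holographic bound from the comment: a region bounded by area A carries
# at most A / (4 * l_p^2) bits, about one bit per Planck-sized patch.
l_p = 1.616e-35            # Planck length in metres
A = 4 * math.pi * 1.0**2   # surface area of a sphere of radius 1 m, in m^2

bits = A / (4 * l_p**2)
print(f"maximum information on a 1 m sphere: {bits:.2e} bits")  # ~1.2e70 bits
```

The striking feature is that the count scales with the surface area, not the volume, which is exactly what makes the 2D area distinctive in this argument.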

At least could you tell me what was the rationale for the early researchers to consider the minimum length to be of the order of the proton Compton wavelength? Thanks.

Matti,

I am interested , do you have any reference?

Panda,

Are you new to physics forums? I'll tell you a secret: people are trying to affirm and confirm their ideas (in their own minds) and NOT promote them, since almost all are basic ideas and not full-fledged theories.

Sabine, I am not sure I agree. Theory is a matter of prediction of what we can expect physics to describe, and in a sense why not in the present moment, say, as it were, for at an experimental level applications are intimately present as our deeper levels of understanding emerge?

The question is one of mathematics as the foundations also - all these issues you address, which as far as I can see stand out above those whose less than complete models are attracted to theoreticians, or perhaps their common objective and unbiased sense: that desire to ground the self-coherence of the views they have struggled with to make sense of the world, society, and themselves.

The comments here, which you so generously have allowed, are representative of these deeper concerns. As with Pitkanen, we can see arithmetical issues as central, yet among the radical theories this has not been deeply and directly explored, and thus remains in a state of controversy.

Let us say for example that what is physical density is the same or a similar idea as arithmetical density (as in the connections - at five points the density is 2 cycles) of the infinite group of two-dimensional polygons. But if we ask this of the star polyhedra, as Riemann did, there is a much higher, but structurally limited, concept of multi-ply re-entrant density for the five-fold star polyhedra.

Would Einstein's theories be just metaphysics, or Maxwell's radio waves without atom bombs or radios? "What hath the God particle wrought?" is the usual question and message.

We divide the plane into quadratics or some other models of symmetry, so, thinking of spherical grounding, we can ask if the universe is elliptical, flat, or hyperbolic. Or we can embed the spherical things in a flat Euclidean space (which Uncle Al holds is complete by congruence as far as it can apply), so we get several topologies along the lines of René Thom's theories, also controversial.

Theories compete and adapt much along the lines of animal symbiosis. But are the animals aware of what they are pondering? Yet they have an ability to discern what other species, including us, are doing.

RLO's points are well taken if we look deeper into what the nature of scale is when we limit ourselves to ideas of scalars. Kneemo's simple question I find rather profound, for it can be expanded from good to great questions.

If there can be multiple h's on some level, then there can be multiple c's, so why would Lorentz not hold in such structural models between the one and the many? While from some view we can see things as discrete, they can also abstractly be seen as continuous - the distinction is the question.

Feynman rotates his bounded region by a right angle. What we think of as the trace is so rotated. A little further thought, now that the 2D+1 is established as a unique grounding view: we can rotate things along the lines of the 720 degrees and so on, which leads to quaternions, stringy theories and so on. Arithmetic density seems lost along the way, or we have measures of containment, a more general idea of asymptotic freedom, and where does the unity and lack of symmetry go?

But I am not promoting this as a unified theory of everything. As we sit here in our caves or on our rocks contemplating the universe so wide, like our ancient ancestors or those to come, some QM physicists imagined they are in fact the same thinking person - like the similar yet one electron.

My current question is whether, in the ideas of maximum symmetry for an abstract structure, there is some changing value of a minimum distance or duration - the monster group as center and abstractly discrete - to which our unity parameters, h, c and so on, are also the center of a stable measure? The polar or Cartesian descriptions at some deeper known physics can be seen as more general and unified without the usual distinctions applied to things like momentum. If the discoverer of the proof of the Poincaré conjecture rejects the reward, it seems to me this is an example of the best of theoreticians, who know something we do not.

I hope this is not off topic, but it would be a good one for a paper in itself. I sat down at the PC and did not think I had so much to say (part of this was rejected, for I reached the interesting 4096-character limit). Pardon if I have intruded on the continuity of your contemplation and important work.

There's one more thing that confuses me. When one speaks of the possible tension between an invariant minimal length and Lorentz transformations, is one speaking in the context of (a) flat-space field theory? That would seem odd, because we've chosen to neglect gravity there in the first place, and if gravity became important, global Lorentz symmetry would be lost anyway. Or (b) Lorentz invariance in a small coordinate patch? I suppose I'm asking about the regime of validity of theories which incorporate an invariant Planck length in a flat-space field theory framework.

Cheers,
Nirmalya

Kneemo, between the holophrastic and the staccato (old linguistic terms), or the holographic and fractal emergent views, quasi-contiguity both incorporating these symmetry concepts or not - your great question occurred to me too. Minimum distance grounded when the Planck context is higher volume.

Andrii, one bit is not enough to clarify things, as it can be fuzzy in between, or 1 or 0 to show there must be a minimum length or duration, as these can represent a line or a point in the context of a more general space. As we add more groups of bits, this paradox still remains for each unitary bit.

Well, if you want a modification for a free particle then gravity is weak (for all particles we know), so (global) Lorentz-invariance is what you expect when you change coordinate frames. Now why you would expect there to be any effect for that particle if gravity is weak is a different question and I'm the wrong person to ask. The argument that I normally get to hear is that it's supposedly a deformation of the symmetry of the space-time ground state.

In the rather boring interpretation that I favor, the Lorentz-invariance is that of the asymptotically flat space. You have in/outgoing particles with momenta, the cross-sections are computed from these (and some quantum numbers etc). You have Lorentz-invariance of the resulting observables in the standard sense under transformations acting on these in/outgoing states.

Now what you can do is construct a local momentum that you trace through the interaction region, and this momentum then will no longer transform under the normal Lorentz-transformation because the space-time is curved (as you expect). In this paper with Xavier and Roberto we argued that in Asymptotically Safe Gravity space-time becomes effectively energy-dependent (depending on the energy of the collision in this case), and then the momentum and its transformation are 'deformed' in the collision region - which is where you expect strong gravity. In particular, it has an upper bound, and cross-sections stagnate once you reach that limit: they become asymptotically constant.

I don't like to refer to this as a smallest length exactly because of the reason you mention. I normally refer to it as a limit on the 'resolution of structures', which means that com energies beyond the Planck energy cannot deliver any new information, in the above explained meaning that cross-sections run towards some limiting value and then just stay there. Best,

If phase space is "quantized" - as in, has a minimum volume where it doesn't make sense to talk of smaller volumes - and one of spatial length, area or volume is also "quantized", then so must momentum be, and vice versa?

Has anyone ever considered that, as one approaches the smallest length (presumably the Planck length), space might turn inside out? And that all those superparticles might be hiding in that inside out space?

Marek Abramowicz's research on black holes suggests such a possibility. See for example his paper on the "Relativity of Inwards and Outwards".

DocG, the universe itself as a physical totality can be so conceived. All things vibrate some day, but bounded by what? Such an external principle may actually exist as a metaphysical mirror, say light speed negative and, as with time, reversed. So, can we imagine exceeding cardinality and ordinality, complex spaces aside, and zero to an open-ended arrow of time?

Even with recursive fractal-like scale we could imagine different kinds of fundamental strings and develop new laws of interactions between them, as well as condensates a step above Fermi and pair production, and that above any unity of a dimensionless place of synced cycles of beginnings that may decohere randomly in development, even return in sync.

Mass and gravity, as if external to each other, both involving at least five-fold symmetry, do seem to show inertia can approach something near singularity, as if concrete and independently self-contained. The standard model and deep gravity stand in this conceptual relation as such mirrors. Physics for now is the lesser physics, and needs a study of that in between, about which we can debate and legitimately ask: where does the information go?

In short, it is a brilliant but very incomplete idea.

Thanks Mr. Otto, but your response is extremely vague and fanciful -- compared with the work of Abramowicz, which is very specific and solidly based on both observation and logic. It looks to me as though Abramowicz has achieved an important conceptual breakthrough -- comparable in some ways to that of Einstein in formulating GR. And I'm wondering why there appears to have been no followup to his work in all these years.

Are you familiar with his work, Bee? And if so can you provide a reason why it's apparently been ignored?

DocG, I wonder what Abramowicz would make of these ideas if he applied them to this interesting new observation?

http://spacetelescope.org/news/heic1419/

"Additionally, the results could affect theories of how such UCDs form. "This finding suggests that dwarf galaxies may actually be the stripped remnants of larger galaxies that were torn apart during collisions with other galaxies, rather than small islands of stars born in isolation," explains Seth. "We don't know of any other way you could make a black hole so big in an object this small."

Lorentz and Einstein hold when taken into this century. More than that we do not have: the vague answers and worries whether the universe falls apart by some experiment or shift of a vacuum, and in a unified theory where quantum holds as well. Still, we can see that the electrical part of that unity, as in Einstein's first vague vision, transcends issues of local and non-local, as if some intrinsic curved or flat surface were the better description of a deeper relativity... all concerns he pondered, these entities charged or not.

Now if A senses a top-down or bottom-up flow of things, both being the case, does his theory apply to scale? The grid of tensors is a unity, but it is also an infinite span, as in quantum matrices. Our vague dreams do have explanations: that they escape from the depths and are not always what we see as the same-size clumps of space and matter. Would the twin paradox hold in some case where we limit matrices as solutions of the physical?

DocG, I notice he has very recent papers on these issues. So why did you offer such a primitive paper? The symmetry looks like the one that preserved Lorentz invariance as hyperbolic. The idea of rotating gyroscopes does not address deeper issues of chirality, whenever that paper was offered.

If I ask - given the solar system, with the characteristic mass scale being that of the Sun or Jupiter, the characteristic length scale being that of a planetary orbit (a few A.U.), and Newton's constant - what is the characteristic time, based purely on dimensional analysis?

The answer is, of course, that the characteristic time is a planetary orbital period - i.e., of the order of years.

But somehow the system also produces much longer time scales - of the order of 100s of millions of years for the stability of the planetary system. (I believe the system is chaotic, but one can estimate a probability that two planets have collided, or a planet has been ejected.)

"Jacques Laskar and his colleague Mickaël Gastineau in 2009 took a more thorough approach by directly simulating 2500 possible futures. Each of the 2500 cases has slightly different initial conditions: Mercury's position varies by about 1 metre between one simulation and the next. In 20 cases, Mercury goes into a dangerous orbit and often ends up colliding with Venus or plunging into the Sun. Moving in such a warped orbit, Mercury's gravity is more likely to shake other planets out of their settled paths: in one simulated case its perturbations send Mars heading towards Earth".

So, e.g., one could define a characteristic time when there is an exp(-1) probability that the system has "decayed", and that time would be 100s of millions of orbital periods.

* Can dynamical systems produce even larger ratios? I think so - think of nuclear reaction times compared with radioactive nuclei decay times.

* Is there some way to produce such large ratios of time scales via purely dimensional analysis?
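The dimensional-analysis estimate in the question above is easy to check numerically. This is a minimal sketch; the constants (G, the solar mass, 1 AU, the length of a year) are standard values assumed here, not taken from the comment itself:

```python
import math

# Assumed physical constants (standard CODATA/IAU values)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
YEAR = 3.156e7       # one year, s

# Dimensional analysis: the only time scale you can build from (G, M, a)
# is sqrt(a^3 / (G M)); with Kepler's factor 2*pi this is the orbital period.
T_orbit = 2 * math.pi * math.sqrt(AU**3 / (G * M_sun))
print(T_orbit / YEAR)  # close to 1: the characteristic time is ~1 year

# The stability time scale quoted in the comment (~100s of millions of years)
# cannot be reached by dimensional analysis alone; the ratio is enormous.
T_stability = 1e8 * YEAR
print(T_stability / T_orbit)  # ~1e8
```

This makes the commenter's point concrete: the dimensionless ratio of roughly 10^8 between the stability time and the orbital period is not something a pure dimensional argument can produce.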

DocG, I can speak English; even as a minor poet we invent English. You asked Sabine a damn good physics question (do we really want to know the answer?). Thanks for the issue and inspiration. I made it the lead poem for my series in Facebook notes, SphereFarmer. I mean no disrespect.

If one assumes that the spacetime is a PL manifold corresponding to a triangulation, then the metric is determined by the edge lengths. If the space is a compact 3d manifold, then there will be a minimal edge length L. If the number of 4-simplices is large, then one can approximate the effective quantum dynamics by the effective action for the QFT with a momentum cutoff hbar/L corresponding to GR coupled to matter. Since the LHC experiments have not seen any QG effects, this means that l_P << L < 10^{-20} m.

Not correct. If you have any fixed edge length it violates Lorentz-invariance and the constraints on this are much tighter, already well beyond the Planck scale. (Depends on the coupling in the effective limit.) Best,

On this general question of minimum distance, and of the distinction between things and notions: while drawing, I said to my artist roommate some decades back, "It seems to me an artist tries to express things beginning with a minimum of lines to start with." He said that was right. I was thinking where exactly I could color things between the lines, as the borders to a keen eye can seem so fuzzy. So for a question, one of the issues between quality and quantity, the continuous and the discrete, to consider would be this: as in poetry with its concentration of meanings, is there an analog to minimum distance in the sense of such notions of minimum dimensions in art? For unity of the possibility algebra of theory and thought, are these strictly equivalent, do they only locally appear so, or do they vary?

I did a new theory idea on minimum distance that included many of the ideas posters here considered - in the form of a poem, with reversals of all the current issues of art and science funding and so on. It is on my blog and Facebook notes.

Sabine: In the model I am talking about there is a local Lorentz invariance, which is not violated, i.e. the edge lengths of a 4-simplex are invariant under the Lorentz transformations in that 4-simplex (recall that the spacetime metric is flat in each 4-simplex, but it is not a smooth function). What is violated are the smooth diffeomorphism transformations.

Global Lorentz transformations are only defined in flat spacetimes, and since gravity exists, global Lorentz transformations are an idealization. Hence, I would appreciate it if you could give me a reference supporting your claim that the experimental bounds on Lorentz symmetry violations restrict the minimal length to be much smaller than the Planck length.

I checked in Wikipedia, and the experimental results you are talking about concern the speed of light dispersion. Nontrivial dispersion of "c" occurs only if you violate the local Lorentz symmetry.

In Regge quantum gravity this does not happen, because the local Lorentz symmetry is preserved. Hence Regge QG is consistent with Lorentz symmetry, and a minimal length L in Regge QG satisfies l_P << L < 10^{-20} m.
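The hierarchy asserted in this exchange is easy to make concrete. Here is a minimal sketch computing the Planck length from standard constants (the numerical values are assumed, not taken from the thread) and comparing it to the 10^{-20} m bound quoted above:

```python
import math

# Assumed CODATA values (not from the thread itself)
hbar = 1.0546e-34   # reduced Planck constant, J s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

# Planck length: l_P = sqrt(hbar * G / c^3)
l_P = math.sqrt(hbar * G / c**3)
print(l_P)  # ~1.6e-35 m

# LHC-scale bound on a minimal edge length quoted in the comment
L_bound = 1e-20     # m

# The window l_P << L < 10^{-20} m spans roughly 15 orders of magnitude
print(L_bound / l_P)  # ~6e14
```

So "l_P << L" is comfortably satisfied for any L near the quoted bound; the debated question in the thread is whether such an L survives the Lorentz-invariance constraints, not whether the window exists arithmetically.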

Regarding minimal length and length contraction: isn't it obvious that the value of the Planck length is defined in a rest frame? It is the minimum proper length that any observer can measure in his or her own rest frame. Other frames may see that minimum length shortened, but that does not bring any physical implications or cause any problem.
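The scenario in the comment above can be illustrated with a quick calculation. This is a minimal sketch under the comment's own assumption that the Planck length is a proper (rest-frame) length; the boost factor chosen is arbitrary:

```python
import math

# Planck length treated as a proper (rest-frame) length, m
l_P = 1.616e-35

def contracted_length(proper_length, beta):
    """Lorentz-contracted length seen by an observer moving at v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return proper_length / gamma

# An observer boosted to beta = 0.99 assigns the rod a coordinate length
# below l_P -- exactly the situation the comment argues is harmless.
seen = contracted_length(l_P, 0.99)
print(seen < l_P)  # True: the coordinate length falls below l_P
```

Whether a frame-dependent length below l_P is in fact harmless is the crux of the Lorentz-invariance debate earlier in this thread; the calculation only shows that a boost-invariant minimal length cannot be a naive proper length without some modification of the transformations.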