Sunday, November 18, 2007

The Cosmological Constant

Dead or Alive

The Cosmological Constant

Aka: Einstein's biggest blunder [1], or the vacuum energy

Committed Crimes: Being 'the most embarrassing observation in physics' [2]; being 'the worst prediction in physics' [3]; being either too small, or too large, or too coincidental; being bad for astronomy; and being generally an annoyance

Last seen: In high redshift supernovae and the WMAP data

Preliminaries

Watch out, here comes an equation!

Apologies if I scared any unprepared readers, but I *really* can't do without it. These are Einstein's field equations [4] of General Relativity, and aren't they just pretty? Here is, in a nutshell, what they say:
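For reference, a standard way to write them - up to signature-dependent signs - is:

```latex
R_{\mu\nu} \;-\; \frac{1}{2}\, g_{\mu\nu}\, R \;=\; 8\pi G \, T_{\mu\nu}
```

with the pieces named in the following paragraphs: the metric g, the curvature terms R, the stress-energy tensor T, and the coupling constant G.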

On the left side there is the g, which is the metric of our spacetime. The metric tells us how we measure angles and distances. Then there are the R's with varying numbers of indices. They describe the curvature of spacetime, and are built up of second-order derivatives of the metric. Thus, the left side has dimension Energy² [5]. If spacetime is flat, the curvature is identically zero.

On the other side of the equation we have a T which is called the stress-energy tensor, and describes the matter and energy content of the spacetime. It has dimension energy per volume, and contains the energy density as well as pressure and energy flux; in energy units it has dimension Energy⁴. The G is a coupling constant, and one now easily concludes it has dimension 1/Energy². If one investigates the Newtonian limit, one finds that G = 1/M_P², where M_P is the Planck mass.

Thus, the equations say how matter and energy (right side) affect the curvature of the spacetime we live in (left side). If spacetime is flat, there are no matter sources (Tμν = 0). An important point is that you cannot just choose matter that moves as you like, because that will generally not be consistent with what the equations say. You can only choose an initial configuration, and the equations will tell you how that system will evolve, matter and spacetime together. Different matter types have different effects, and result in different time evolutions.

That's the thing to keep in mind for the next section: different stuff causes different curvature. The details are what you need the PhD for.

Now cosmology is an extremely simplified case in which one describes the universe as if it was roughly speaking the same everywhere (homogeneous), and the same in all directions (isotropic). This is called the Cosmological Principle, and if you look around you, it is evidently complete nonsense. However, whether or not such a description is useful is a matter of scales.

Look e.g. at the desk in front of you. It looks like a plane surface with a certain roughness. If you look really closely you'd find lots of structure, but if you are asking for some large-scale effect - like how far your coffee cup will slide - the exact shape of single tiny hills or dips in the surface doesn't matter. It's the same with the universe. If you look from far enough away, the finer details don't matter; galaxies are roughly equally distributed over the sky. With the Cosmological Principle, one neglects the details of the structures. One describes matter by an average density ρ and pressure p that do not depend on the position in spacetime. They have the same value everywhere, but can depend on time.

We have today extremely strong evidence that the universe is expanding, thus its volume grows. The amount of this expansion is usually measured with the scale factor a(t), a dimensionless, increasing function of time. The universe's expansion is the same in all three spatial directions, so a given volume grows with ~ a(t)³. When the volume grows, stuff inside it thins out. The energy density of ordinary matter drops just inversely to the volume, ~ 1/a(t)³.

The energy density of radiation drops even faster, because not only does the volume increase - in addition, the wavelength also gets stretched, and therefore the frequency drops with an additional factor of 1/a(t). Taken together, the energy density of radiation drops with 1/a(t)⁴. Thus, the density of all kinds of matter that we know and have observed on Earth should drop.
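This scaling behavior is simple enough to put into a few lines of code. The sketch below (in Python, with the rough present-day density fractions Ω_m ~ 0.3, Ω_r ~ 10⁻⁴, Ω_Λ ~ 0.7 as assumed round inputs) just evaluates the dilution laws, and anticipates the constant term that is the subject of the next section:

```python
# Toy model: how different energy components dilute as the universe expands.
# Densities are in units of today's critical density; today a = 1.
# The fractions below are assumed round numbers, not precise fit values.
def densities(a, omega_m=0.3, omega_r=1e-4, omega_l=0.7):
    matter = omega_m / a**3      # dilutes with the growing volume
    radiation = omega_r / a**4   # volume dilution plus redshift of each photon
    cc = omega_l                 # the Cosmological Constant stays constant
    return matter, radiation, cc

# In the past (small a) matter and radiation dominated; in the future
# (large a) only the constant term is left over.
for a in (0.1, 1.0, 10.0):
    m, r, l = densities(a)
    print(f"a = {a:5.1f}: matter {m:10.3g}, radiation {r:10.3g}, CC {l:.3g}")
```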

Because the expansion of the universe causes light to be stretched to longer wavelengths, and thus shifted towards lower - 'redder' - frequencies, cosmologists like to date events not by the time t, but by the redshift, commonly denoted z.
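Concretely, the redshift is defined by the ratio of the scale factor at observation to the scale factor at emission:

```latex
1 + z \;=\; \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{emit}})} \;=\; \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
```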

The Cosmological Constant and its Relatives

The Cosmological Constant (CC) is usually denoted with the Greek symbol Lambda, Λ. It is the constant in front of an additional term that can be added to Einstein's field equations. Depending on your taste, you can either interpret it as belonging on the left 'space-time' side, or the right 'matter' side of the equation. For the calculations this doesn't matter, so let's put it on the matter side:
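On the matter side, the equations pick up an extra piece proportional to the metric; schematically (up to signature-dependent signs, and writing the CC as a vacuum energy density ρ_Λ of dimension Energy⁴):

```latex
R_{\mu\nu} \;-\; \frac{1}{2}\, g_{\mu\nu}\, R \;=\; 8\pi G \left( T_{\mu\nu} + \rho_\Lambda\, g_{\mu\nu} \right)
```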

What have we done? Well, consider as before the case of 'empty' space, where Tμν is equal to zero. This empty space can no longer be flat: if it was, the curvature and thus the left side would vanish, but the right side doesn't. Thus, with the CC term, empty space is no longer flat. It is therefore very tempting to interpret this term as the energy density of the vacuum, which creates curvature even if 'nothing' is really there. As is appropriate for an energy density, the dimension of the CC is Energy⁴.

If one plugs the matter content and the CC term into Einstein's field equations, one obtains the Friedmann equations that relate the behavior of the scale factor to the density and pressure
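In the conventions above, the Friedmann equations take the standard form (with Λ here in its geometric normalization of dimension Energy², i.e. Λ = 8πG ρ_Λ):

```latex
\left(\frac{\dot a}{a}\right)^{2} \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{\kappa}{a^{2}} \;+\; \frac{\Lambda}{3},
\qquad
\frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + 3p\right) \;+\; \frac{\Lambda}{3}
```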

The constant κ that appears is either 0, +1 or -1, depending on the spatial curvature. The first equation has a square on the left side, meaning that the right side needs to be positive. The second equation determines the acceleration of the universe. Note that for usual matter, energy density and pressure are positive. Thus, the only term that can make a positive contribution to the acceleration is the CC.

The most stunning fact about the Cosmological Constant is that it is constant. No kidding: remember, we've seen before that all kinds of matter that we know dilute when the universe expands.

But the Cosmological Constant is constant.

The corollary of this insight is that if you start with an arbitrary amount of usual matter, sooner or later its density will have dropped to the value of the CC term. And if you wait even longer, the CC term will eventually dominate, causing an eternally accelerating universe. Who could possibly want that?

A term with a CC is not the only way to get a positive contribution to the universe's acceleration. Unusual equations of state that relate ρ with p in a way no standard matter could do would have a similar effect. The family of stuff with such behavior is the mafia of 'dark energy'. A lot of creativity has gone into its investigation. The suspects in the family include quintessence, k-essence, h-essence, phantom fields, tachyon fields, Chaplygin gas, ghost condensates, and probably a couple more aunts and uncles that I haven't been introduced to.

Observational Evidence

The CC appears in Einstein's field equations and can be treated as a source term. For this analysis it is irrelevant whether the term might actually be of geometrical origin; in this context, the constant Λ is just a parameter to describe a specific behaviour.

Supernovae redshift

Supernovae of type Ia show a very uniform and reliable dependence of luminosity on time. This makes them ideal candidates for observation, as they allow one (to a certain extent) to disentangle the source effects from the effects occurring during light propagation. The emitted light travels towards us, and while it does so it has to follow the curvature of spacetime. The frequency and luminosity that we receive then depend on the way the universe has evolved while the light was traveling. From the observation one can then extract knowledge about the curvature, and thus about the matter content of the universe.

As it turns out, distant supernovae (z > 0.5) are fainter than would be expected for a decelerating universe with vanishing CC. If one explains the data with the dynamics of the scale factor, one is led to the conclusion that the universe presently undergoes accelerated expansion. As we have seen above, this can only be caused by a positive cosmological constant. In addition, the data also show that the transition from deceleration to acceleration happened rather recently, i.e. around z ~ 0.3. For more details see Ned Wright's excellent tutorial.

Last November, the observations of high redshift supernovae could be extended above z~1, which allows us to conclude that the dark energy component at these times didn't do anything too spectacular. For more, see Sean's post Dark Energy has long been Dark-Energy-Like.

The Age of the Universe

A recurring argument for a CC has come from the age and present expansion rate of the universe. The presence of the CC influences the expansion of the universe. If it exists, the age of the universe one can extract from today's value of the Hubble parameter would be larger than without a CC. The age of the oldest stars that have been observed seems to indicate the necessity of the CC. However, this analysis is not without difficulties since determining the age of these stars, as well as the present value of the Hubble parameter, is still subject to uncertainties that affect the conclusion [6].

The Cosmic Microwave Background

WMAP measures the temperature inhomogeneities imprinted in the Cosmic Microwave Background (CMB). These very low temperature photons have been traveling freely since the time the universe became transparent for them, called the 'surface of last scattering'. The photons' distribution shows small fluctuations on specific scales, a snapshot of the universe as it was only a few hundred thousand years old. Commonly depicted are the temperature fluctuations as a function of the multipole moment, roughly speaking the inverse of the angular size of the spots (for a brief intro, see here). Recall from above that different stuff causes different curvature. Thus, from these structures in the CMB one can draw conclusions about the evolution, and thus about the matter content, of the universe.

Since the CC only became important recently, its dominant effect is to change the distance to the last scattering surface, which determines the angular scale of the observed CMB anisotropies. Most prominently, it affects the position of the first peak in this figure. Based on current measurements of the Hubble scale, the WMAP data is best fitted by a spatially flat universe in which 70% of the energy content is described by the CC term.

The value of the CC that can be inferred from the presently available data is approximately Λ^(1/4) ~ 10⁻¹² GeV.
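This number can be checked with a quick back-of-the-envelope computation: take today's Hubble parameter, form the critical density ρ_c = 3H₀²/(8πG), and attribute 70% of it to the CC. The sketch below does this in natural units; H₀ ~ 70 km/s/Mpc and the 70% fraction are assumed round inputs, so only the order of magnitude is meaningful:

```python
import math

# Back-of-the-envelope value of Lambda^(1/4) in natural units (hbar = c = 1).
hbar_GeV_s = 6.582e-25           # GeV * s
Mpc_in_km = 3.086e19
H0 = 70.0 / Mpc_in_km            # Hubble constant in 1/s (assumed round input)
H0_GeV = H0 * hbar_GeV_s         # ~1.5e-42 GeV
M_P = 1.22e19                    # Planck mass in GeV

# Critical density: rho_c = 3 H0^2 / (8 pi G), with G = 1/M_P^2
rho_c = 3 * H0_GeV**2 * M_P**2 / (8 * math.pi)
rho_cc = 0.7 * rho_c             # the CC makes up ~70% of it
print(f"Lambda^(1/4) ~ {rho_cc**0.25:.1e} GeV")   # of order 1e-12 GeV
```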

Committed Crimes

First Crime: Diverging

In my previous post on the Casimir Effect, I briefly talked about the vacuum in quantum field theory. It is not just empty. Instead, there are constantly virtual particles created and annihilated. One can calculate the stress-energy tensor of these contributions. It is proportional to the metric, like the CC term. If you calculate the constant itself, the result is, embarrassingly, infinite.

Second Crime: Being large

Infinity is not a particularly good result. Now you might argue that we can't trust quantum field theory up to arbitrarily high energy scales, because quantum gravity should become important there. If you drop virtual particles with energy higher than the Planck scale, you find that Λ should be of the order M_P⁴, a huge value.

If one neglects gravity, one can argue that the absolute value of the energy density can't be observed, and one should only be interested in energy differences. One can thus set 'by hand' the vacuum energy to zero, a process that is called renormalization. Unfortunately, one can't do this with gravity, because all kinds of energy create curvature, and it is not only energy differences that are observable. However, one can't take the above huge value seriously. If the CC was really that large, we wouldn't exist. In fact, this value for the CC is some 120 orders of magnitude larger than the observed one. This is the reason why it has been called 'the worst prediction in physics'.
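The famous mismatch can be reproduced by naive arithmetic: compare the cutoff estimate M_P⁴ against the observed (10⁻¹² GeV)⁴. With the round numbers below one gets about 124 orders of magnitude; quoted values between 10¹²⁰ and 10¹²⁴ differ only by conventions, such as which definition of the Planck mass one uses:

```python
import math

# The 'worst prediction': vacuum energy with a Planck-scale cutoff is
# roughly M_P^4, the observed value roughly (1e-12 GeV)^4.
M_P = 1.22e19                  # Planck mass in GeV
predicted = M_P**4             # crude cutoff estimate, in GeV^4
observed = (1e-12)**4          # from Lambda^(1/4) ~ 1e-12 GeV
orders = math.log10(predicted / observed)
print(f"mismatch: about {orders:.0f} orders of magnitude")
```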

Third Crime: Being Small

It had long been hoped that the CC was actually zero, possibly protected by some yet unknown symmetry. Supersymmetry, e.g., would do it if it was unbroken. When I finished high school the status was: the CC is zero. However, observations now show that it is not actually zero. Instead it is very small, but nonzero, far away from any naturally occurring scale. Who ordered that?

Fourth Crime: Why now?!

Why is the CC such that it just recently became important, as can be inferred from the supernovae data? This is also called 'the coincidence problem'.

Fifth Crime: Other Coincidences

Some other coincidences that make my colleagues scratch their heads:

The CC is the fourth power of an energy scale, which happens to be close to the scale at which the (differences of the squared) neutrino masses have been measured, and at which the absolute masses of the lightest neutrinos are believed to fall. Coincidence?

It further turns out that the ratio of that mass scale Λ^(1/4) to the vacuum expectation value (VEV) of the Higgs is about the same as the ratio of the Higgs VEV to the Planck mass. Coincidence?

And then the CC is about the geometric mean of the (fourth power of the) Hubble scale and the Planck mass [7]. Coincidence?
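This last coincidence is easy to check numerically: it amounts to Λ^(1/4) ~ sqrt(H₀ · M_P). With rough round inputs for the Hubble scale and the Planck mass one indeed lands at the 10⁻¹² GeV quoted above:

```python
import math

# Check: Lambda ~ geometric mean of H0^4 and M_P^4, i.e.
# Lambda^(1/4) ~ sqrt(H0 * M_P). Inputs are assumed round numbers.
H0 = 1.5e-42     # Hubble scale in GeV (from H0 ~ 70 km/s/Mpc)
M_P = 1.22e19    # Planck mass in GeV
print(f"sqrt(H0 * M_P) ~ {math.sqrt(H0 * M_P):.1e} GeV")  # of order 1e-12 GeV
```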

But

This data analysis is of course not completely watertight. To begin with, one has to notice that all of the above rests on General Relativity. If instead the equations of motion were modified, it might be possible to do without dark energy. However, studies in this direction have so far not been particularly convincing, as consistent modifications of GR are generally hard to come by.

But besides this, the data interpretation is still a subject of discussion. An input in all the analyses is the value of the Hubble constant. It has been argued for various reasons that the presently used value should be corrected by taking into account local peculiar motions, or possible spatial variations. Measuring the Hubble value through time delays in gravitationally lensed systems, e.g., yielded a significantly lower result.

Likewise, the supernovae data could be biased through the sample, or the effect could stem from other effects during propagation, like dust with unusual properties. For a recent summary of all such uncertainties, see [6].

Generally, one can say that there remains the possibility that the data can be fitted by other models. But to date the CC term is the simplest, and most widely accepted, explanation we have. One should keep in mind however that the desire to come up with a theory that produces a kind of dark energy uses GR as a relay station. Instead it might be possible that a satisfactory explanation reproduces the observational facts, yet is not cast into the form of GR with dark energy, because it eludes the parametrization implied in writing down the Friedmann equations.

Summary

Our observations can currently be best described by the so-called ΛCDM model. It has a Cosmological Constant, and a significant amount of cold 'dark matter' (explained in our earlier post). The parameters in this model have been well constrained by experiments, but the theoretical understanding is still very unsatisfactory. ΛCDM is a macroscopic description, but we have so far no good theory explaining the microscopic nature of dark energy or dark matter.

[1] George Gamow, My World Line (Viking, New York, 1970), p. 44.
[2] Ed Witten, quoted from Renata Kallosh's talk "Towards String Cosmology", slide 5.
[3] Lawrence Krauss, quoted from "Physicists debate the nature of space-time", NewScientist Blog, 02/20/07.
[4] It's a plural because there is one equation for every choice of indices μν, and each index runs over three space and one time dimension, from 0 to 3. This would make for sixteen equations, but the set is symmetric under exchange of the indices, so actually there are only ten different equations.
[5] This is a theoretical physics blog, so Planck's constant and the speed of light are equal to one. This means the dimension of length is that of time, and both are an inverse of energy. If that doesn't make sense to you, don't worry, it's not really relevant. Just accept it as a useful way to check the order of coupling constants.
[6] For details, see arXiv:0710.5307.
[7] See T. Padmanabhan, hep-th/0406060.

TAGS: PHYSICS, DARK ENERGY, VACUUM ENERGY, COSMOLOGY, COSMOLOGICAL CONSTANT

By the way, if you want to look at de Sitter's original 1917 paper [in which he introduced de Sitter spacetime, which is what you get when the only matter is the CC] you can see it if you look for "Einstein's theory of gravitation and its astronomical consequences. Third paper" at http://adsabs.harvard.edu/ads_abstracts.html

One amusing thing you will find there is that de Sitter thought that the topology of space should be that of real projective space, which means that "de Sitter spacetime" *doesn't* have the de Sitter group as its isometry group! [Schwarzschild thought the same; to him, spherical topology should be rejected in favour of projective space, on the grounds that projective space is "simpler".]

Thanks for the reference! I will have a look, I never actually read de Sitter's paper.

Hi Plato:

Glad you enjoyed it. Yeah, I neglected the question of spatial curvature, the post was getting too long anyhow, sorry.

If spacetime is flat, "parallel lines exist" and so does Euclidean geometry?

Well, you might want to keep in mind that the cosmological description only holds on large scales. On smaller scales, like e.g. our solar system, the classification into cosmological models is inappropriate. Best,

Hi Bee, you are welcome. If you do have a chance to look at de Sitter's paper, note that he actually argues that S3 is bad, and RP3 is good, for some physical [ie not aesthetic] reason. But I don't really understand his argument. Schwarzschild argued that S3 should be rejected because geodesics emanating from a point would eventually converge somewhere else, which would mean that things happening at the antipode would be controlled by things happening here! Not so silly actually.

I see T. Padmanabhan cited - if I'm not mistaken, he's the guy who ran an extra-curricular physics class when he was in the University of Kerala, and where/from whom I learned about the calculus of variations, Lagrangians, and so on, when I was in high school.

I appreciate your including the many arguments against a CC. All I can say is that if Einstein said something is a blunder, he was probably right. He and de Sitter in 1932 adopted an "Einstein-de Sitter" universe without any cosmic constants. The solution can be easily described to a grandmother.

A question regarding your Einstein field equation. I see your version's r.h.s. is not negative, and in some other places (texts, papers, etc.) it's written with a negative sign. Is there some time where it's appropriate to have it and other times not?

Well, I admit on being biased. I don't think dark energy has anything to do with quantum effects.

Padmanabhan has written a lot of really interesting papers (not only about cosmology). I find them sometimes hard to read (not very well structured imho), but he has something to say. In case you haven't seen it, try his homepage, I just love this article:

"God expects us to be moral, kind to others and brush our teeth twice each day. [...] obviously there are variations in the theme; some Gods expect us to brush the teeth only once a day. Such differences of opinion, of course, have led to major conflicts and wars."

It's a convention that depends on the signature of the metric. For this case it is (+,-,-,-). If you instead use (-,+,+,+), like e.g. Weinberg does, you get a minus on the right side. It's very bloggy-sloppy of me to write down the equations without specifying the signature, so my apologies. The convention I use is more frequently chosen in particle physics (so the time-like component is positive), whereas the other one seems to be more frequently used in the GR literature.

Either way, the choice doesn't affect the calculation in any way. Best,

Well the criterion that a model can be easily described to the average grandmother isn't exactly one that we should make priority. I include pros and cons because I dislike the tendency to present matters of ongoing research in black and white. Science in the making doesn't have any easy answers, and those who ask for simple Yes or No-s are often just severely confused. I think it's a very bad consequence of oversimplified scientific journalism that people think theories that are still under development can be explained without mentioning uncertainties.

What is actually worse is that even in our community the theoretical side is in many cases blissfully unaware of the actual analysis of the data and what it implies. This is to blame on both sides though; I find it often extremely hard to extract the relevant information out of experimentalists' papers, like, what were the assumptions that went into the analysis. Small wonder many theories just try to reproduce the parameters used for the fits, instead of directly fitting the data - which has become almost impossible to do, esp. without connections to the community.

De Sitter spacetime actually varies quite nicely between flat Minkowski with no CC and a conformal conespace with all CC. It's quite liminocentric, as Plato would say using a John Fudjack term. It also fits nicely with MacDowell-Mansouri gravity in an E8 GUT (always nice to look for the good parts of a paper too!). This isn't the first time I've been a clicked link away from a great place where it took an unusual event to actually get me here; apparently my surfing skills need to improve (I'll stick to the web kind though).

Hee, hee. I once wrote up the field equations on a blackboard during a lecture on cosmology to a bunch of Australian category theorists, knowing full well they wouldn't recognise it. Mind you, that was the only thing in the lecture even vaguely related to manifold spacetimes.

Here are a couple of quotes from Frank Wilczek relevant to general relativity and the CC:

Any theory of gravity that fails to explain why our richly structured vacuum, full of symmetry-breaking condensates and virtual particles, does not weigh much more than it does is a profoundly incomplete theory. [“Scaling Mount Planck III: Is that all there is?” Physics Today, 55(8), 10 (2002).]

Since gravity is sensitive to all forms of energy it really ought to see this stuff. . . . A straightforward estimation suggests empty space should weigh several orders of magnitude (no misprint here!) more than it does. It “should” be much denser than a neutron star, for example. The expected energy of empty space acts like dark energy, with negative pressure, but there’s much too much of it. To me this discrepancy is the most mysterious fact in all of physical science, the fact with the greatest potential to rock the foundations. [“The universe is a strange place,” astro-ph/0401347.]

Wilczek is an exception. No matter how many problems arise in the standard cosmology, it seems politically incorrect to suggest they might reflect a failing of general relativity. We "know" general relativity is correct, therefore there must be "new physics." This paper outlines a cosmology due to a general relativity alternative, with no CC problem.

Kris, there are Irving Segal MOND-like explanations for Pioneer that leave GR OK where you expect GR to be OK. As for gravity not being as strong as you would think, there are proposals which could eat gravitons (like virtual Planck mass black holes).

Low Math, you're welcome :-) I hope you didn't misunderstand my post on anonymity. As I said there, I don't mind pseudonymity. It's just that the most exasperating property of the blogosphere is an incredibly short memory span. I find myself re-re-re-repeating the same things over and over again, which is very annoying. Anonymity contributes its share to this effect because it doesn't allow one to connect opinions to a history. Best,

Bee said: " I don't think dark energy has anything to do with quantum effects. "

Yow! OK, you got my attention! Care to explain?

I often laugh when I see "quantum gravity" as the default explanation of all mysteries. Supersymmetry breaking? Arrow of time? Cosmological constant? We don't understand them, but all will be made clear when we understand quantum gravity. When my wife recently lost her keys in a completely inexplicable way, I said that it was obviously a quantum gravity effect. I really wish people would stop using quantum gravity as an excuse to avoid thinking about things....

There isn't much to explain. I happen to believe the CC is a macroscopic effect, and in addition that the data is misunderstood. It might very well be that the 'Hubble bubble' we seem to sit in significantly affects our data analysis. We've had a previous post on the anomalous alignments, and if I take all that together - local anisotropy, the Hubble constant being measured to have different values with different techniques yet being essential to determine the other parameters, not well understood features in the CMB data - I just find it suspicious. It's nothing that I could pin down; call it a gut feeling. I could be totally wrong, time will tell. Best,

Hey Bee, on the being-large crime, do you know any good textbooks/articles that discuss this? I'm particularly thinking about putting things on a Planck lattice; then a unit volume has 1/l_p^4 points, and I roughly guess that one can understand it as the 1/2 hbar omega ground state energy of harmonic oscillators attached to these points.

This should be morally the same thing as vacuum loops with a momentum cutoff, right?

Thanks for exposing the CC, and not least because it gives me the chance to ask again some questions I had raised after your Casimir Effect post (it's #40, last one in that thread).

First I asked

"After having seen the EM Casimir effect measured to be real, what interests/worries me most is the implications for vacuum energy, and by extension cosmology. We are now reasonably sure that _changes_ in vacuum energy are real and match with what would be described in QFT as zero-point modes. But at the same time we know that the absolute value of vacuum energy, if GR is to be believed, is much lower than these zero-point modes would imply. Does this strike you as a worrisome contradiction?"

So I think you've answered that in this post!But now is the more pointed example:

"Another way to approach this question is to ask: if I hold up two sheets of aluminum foil parallel and let them go, they will fall together under the Casimir effect force; eventually they will collide and presumably wind up being (slightly) heated. Standing in my kitchen, how do I understand where this heat energy came from? Did the cosmological constant of the universe just go down by a little bit?"

"What's to prevent me from repeating the process and "mining" an arbitrary amount of vacuum energy into mechanical/heat energy? I need a lot of aluminum foil which is already separated at some large distance; but what is the fundamental limit on creating all those conducting sheets? Does the Casimir logic also say that it costs/yields energy to take a single conductor and change its shape? Or, does it cost/yield energy to change a piece of material from conducting to non-conducting (as could be done chemically)?"

Dear Bee,

I think we are missing the biggest crime of Lambda: it shatters our illusion that we understand the standard physics. If it were an indication that our knowledge fails above some UV scale, where we have to unite gravity with the other interactions, or that we don't know what happens near the singularities, we could live with it. These regions are far away from any conceivable experiment, so they are not so urgently pressing on us. Or let's say that Lambda has nothing to do with the vacuum energy, but is some kind of slowly varying scalar field - still there is a big unsolved problem. Here it is:

We have our normal well-behaved Standard Model degrees of freedom. We think that we understand them from the Electroweak VEV (~1/4 TeV) down almost to zero energies (cosmological times and distances). But these degrees of freedom have vacuum energy (zero-point quantum oscillations ...). If we put a UV cutoff at the Electroweak VEV, we'll get an enormously large Lambda. In fact, to get the observable Lambda, we need to put the UV cutoff at an energy scale that corresponds to a centimeter spatial scale!!! All higher-energy degrees of freedom would have to be cut off. We couldn't get nuclei, atoms, molecules, microbes, cells, even wrist watches! The vacuum energy of their degrees of freedom will exceed the observed Lambda. Something has to cancel this vacuum energy, but we don't know what.

Supersymmetry wouldn't help, because although when unbroken it does the trick, it is broken at such a high energy scale that it is irrelevant for the "standard" degrees of freedom. Are there some new unknown dof at our scales that do the cancellation (hence have negative vacuum energy)? But then the picture looks like unbroken SUSY. How can it be that these dof are hidden from us and unobservable? Or do we need to modify quantum mechanics? Or also modify gravity in the EW VEV to the 10^-4 eV range? Or maybe deep in the infrared, somehow the vacuum energy integral becomes so large and negative that it can compensate the positive contribution from all "normal" degrees of freedom from the Standard Model, and maybe even up to the Planck scale? But I haven't heard about any theory that has a negative IR contribution to the vacuum energy!

Oh, and I have forgotten the chiral condensate, which gives mass to the hadrons and hence more than 99% of the mass around us (not counting dark matter). What compensates the enormous energy density of the chiral condensate, so that Lambda is not 600 MeV? The same thing for the Higgs condensate.

Here, half sleeping, I am doing some naive counting. Of course, the free fermions have negative 1/2 for the zero modes. We have a lot of fermions, so it is interesting to make a simple balance for OUR vacuum: 4x4x3x2 = 96 negative DoF (including right neutrinos and left antineutrinos) versus only 2x(8+3+1) = 24 gauge + 4 Higgs = 28 positive DoF. But then there are the positive condensates (the vacuum is not perturbative) - the chiral one and the Higgs one. Then there are the fermion zero modes due to the QCD and the SM instantons. I have to figure out how they contribute to the vacuum energy. But in this naive picture it seems that we have more negative DoF that contribute as the cutoff^4, while the condensates have a constant contribution. Then maybe we also have to add gravity's DoF. (We shouldn't add SUSY, because we know that all free-field DoF add to 0, but the condensates, especially the SUSY-breaking one, remain.) Still, it seems that we have more negative degrees of freedom, so we can integrate the free-field part of the vacuum energy to some UV cutoff so that it compensates the positive energy of the condensates (zero vacuum energy and the observed Lambda are the same at this level of precision). The cutoff should be around the VEV scale. But this is also the cutoff for the physics that we know. Then maybe I am not right about Lambda destroying the known physics? I drop my charge.

You are of course right, very embarrassing. I've fixed that. What I actually meant to say is that flat space is a vacuum solution, which is no longer the case with a CC. I didn't want to elaborate on the differences between the curvature tensor and its contractions; it didn't seem like the right place to do so.

you are of course right that the CC's first and second crime should really make us worry about our 'understanding' of quantum mechanics.

Are there some new unknown dof at our scales that do the cancellation (hence have negative vacuum energy)? But then the picture looks like unbroken SUSY. How can it be that these dof are hidden from us and unobservable?

Well, the hope would be that they are not unobservable.

Or also modify gravity in the EW VEV to the 10^-4 eV range? Or maybe deep in the infrared,

No sorry, I am afraid I can't give you a good reference (my husband might, if he'd find time to answer some comments.... WINK WINK). If you put things on a lattice, you'll have a preferred frame and break Lorentz invariance. That's why I've been playing around with the DSR stuff. Best,

Sorry for not having answered to your question at the Casimir posting. I was terribly busy that week and must have missed it. Yes, the answer to your first question is one should worry about it because it says clearly we don't really understand the vacuum in QFT.

About your second question. First, you have assumed there that the CC *is* the quantum vacuum energy, which is far from sure. In your thought experiment, energy is conserved. If you pull the foils away from each other, you'll have to invest energy. Then you can ask where that energy comes from. But well, where does energy come from anyway? (I think there are several people holding patents on energy extraction from the vacuum. Ohhm. If you ask me, I wouldn't exactly invest my money into that.) What you've done is convert it into a form more useful to you, but I don't see what's mysterious about that. It's hard to tell whether changing a shape would generally cost energy. Best,

Forgive me if these questions are obtuse, but about those equations... I think I understand most of what you're getting at, and these are probably quite ancillary questions to the main point.

Anyway, it looks like the Gaussian Curvature and the pressure are somehow quite related. Also, why must k only be -1, 0, or 1? Does that relate somehow to the nature of pressure? By that I mean, one can have the absence or presence of pressure, and pressure may push or pull at you, but it makes no sense to say there's sort-of-pressure acting on something. Whatever the magnitude or "direction" it acts, either there's pressure or there isn't. Am I on the right track here?

I am actually not sure what track you are on. k says something about the space-time, the pressure is a property of the matter. k is either +/-1 or 0 because these are the possible values that fulfil the symmetry requirements of the cosmological principle. Does that help?

Well, that tells me I'm probably not on the right track, but I'll probably have to read about it on my own (if able) before I can formulate the question more intelligently, assuming it can be.

Anyway, this cosmological principle... A quick Wiki read tells me the principle states the universe looks the same in all directions (at sufficiently large scales), and no place in it has preferred status. So, any other value than 0 or +/-1 (greater, lesser, non-integer) would yield a somehow lopsided universe? I take it then that the principle is input? Meaning one could choose other values and not do something mathematically forbidden, just observationally? Sort of like you can unquantize certain equations and get a classical universe, but we don't happen to inhabit such a place. Or would deviating from these symmetry requirements render something truly absurd, roughly in the same way probabilities greater than 1 are absurd?

k is just not a continuous parameter. It's a label for the different geometries - open, closed, flat - that remain possible after the requirements of homogeneity and isotropy. There is an excellent chapter in Weinberg's book which explains these classifications (I think it's chapter 13 or so), and I am perfectly sure Misner, Thorne, Wheeler also explains that, but no idea where to find it. Deviations from the cosmological principle can, and have been, considered, keyword 'Swiss Cheese Universe'. Best,

I think that values of k other than -1/0/+1 do not make much sense. By rescaling r to \sqrt{|k|} r and a to a/\sqrt{|k|}, one can arrange for k to be -1/0/+1, it seems. Unless that's forbidden for some reason I can't see...
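For what it's worth, that rescaling can be made explicit (a short sketch, standard Robertson-Walker conventions assumed). Starting from the line element with arbitrary $k \neq 0$,

```latex
ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\, d\Omega^2\right],
```

and substituting $\tilde r = \sqrt{|k|}\,r$, $\tilde a = a/\sqrt{|k|}$ gives

```latex
ds^2 = -dt^2 + \tilde a^2(t)\left[\frac{d\tilde r^2}{1-(k/|k|)\,\tilde r^2} + \tilde r^2\, d\Omega^2\right],
```

so only the sign of $k$ survives: the curvature index can always be normalized to $\pm 1$, or it is $0$ to begin with.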

Thanks for your attention. I appreciate that you must be busy, and am constantly amazed that you can hold down a day job while also turning out these excellent posts and thoughtful replies.

Now, I don't think you've answered my question about the Casimir Effect and dark energy directly; or I haven't been as clear as I should be. Here's the essential question, re-stated in a few steps:

1. In a general two-component view, the energy density of the Universe is the sum of two types: in column A we have ordinary matter & radiation with non-negative pressure; and in column B we have other stuff -- "dark energy" -- with significant negative pressure. Within GR, the acceleration of the expansion implies a large helping from B.

2. Within column B, two most-often mentioned possibilities are the official cosmological constant, ie a term in the Einstein field equations; and some kind of "vacuum energy" such as the zero-point energy of QFT. Either might be zero or non-zero, for all we know; and operationally the two are indistinguishable in their effects, so we can just refer to their sum as "dark energy".

3. In the aluminum foil experiment, where I let the two sheets come together and collide under the influence of the EM Casimir effect, I certainly don't dispute that energy is conserved. What's interesting, to me at least, is to observe that the newly appearing kinetic energy is in Column A, which means that an equivalent amount of energy must have come _out_ of Column B. Do you agree? or would you argue that the transaction is somehow entirely within Column A? (This amounts to asking how the stress-energy tensor in my kitchen changes during the process, which is a very well-defined question.)

4. Assuming the former, then it follows that I have, heroically and from my kitchen, reduced the total amount of dark energy in the Universe and so slowed its acceleration ever so slightly! Alternatively, I could take the rest of the roll of aluminum foil where the sheets are still pressed together and pull them apart against the Casimir force, which will result in a transfer of energy from Column A back to Column B. So my second question here is, is there any obvious fundamental limit on how much energy we can slosh back and forth this way, using the Casimir effect forces?
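For a sense of the magnitudes being "sloshed" in this kitchen experiment, the textbook ideal-conductor result for parallel plates can be evaluated numerically. This is the idealized formula only; as discussed elsewhere in this thread, real metals and the Jaffe-style van der Waals picture complicate the story:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s

def casimir_pressure(d):
    """Ideal-conductor Casimir pressure between parallel plates a
    distance d (metres) apart: F/A = -pi^2 hbar c / (240 d^4)."""
    return -math.pi**2 * hbar * c / (240 * d**4)

def casimir_energy_per_area(d):
    """Corresponding zero-point energy shift per unit plate area:
    E/A = -pi^2 hbar c / (720 d^3)."""
    return -math.pi**2 * hbar * c / (720 * d**3)

# Plates one micron apart:
print(f"{casimir_pressure(1e-6):.2e} Pa")          # about -1.30e-03 Pa
print(f"{casimir_energy_per_area(1e-6):.2e} J/m^2")
```

At micron separations the attraction is of order a millipascal, so the energy transferred per square metre of foil is tiny compared to any astrophysical energy budget, whatever bookkeeping column one puts it in.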

Mm. I mean, I get it a little. That is to say, if k is just telling you something like "flat", "open", or "closed", then of course it makes no sense to talk about surfaces that are opener than open, closeder than closed, or flatter than flat. Nor does it make any sense to talk about a surface that's sort-of-closed but not flat or open either, for example.

What throws me is puzzling over the various equations that give you k on, say, Mathworld. I mean, goodness, look at Eq. 9. To my unschooled eye, it's a bunch of stuff on the right, and k on the left, and nothing about it makes it obvious that k must wind up being only +1, -1, or 0. Conceptually, I think I'm there, but computationally, no way, and won't be without devoting an inordinate amount of time to understanding it fully, sadly. Differential geometry goes way past my maximum of calculus and a smidgen of linear algebra, I'm afraid, so it's probably something I'm just not going to be able to appreciate fully.

unfortunately, in this case I'll have to disappoint the hope of my wife ;-), because on your question

do you know any good textbooks/articles that discuss this? I'm particularly thinking about [putting harmonic oscillators in Planck volumes]

I can't say much. I don't know of any textbooks with a more detailed discussion - Anthony Zee has a few pages about the Cosmological Constant - that book is very nice and worth a closer look anyway!

But on the other hand, what you suggest is essentially to use the Planck length as a UV cut-off? Your oscillators could then be identified with the modes with the highest allowed frequency in the UV?

concerning the Casimir effect (sorry, the comment may also fit better in that thread) - and trying to understand this series of recent papers by Jaffe et al, I'm wondering the following:

May it be that the "standard" Casimir situation with the parallel plates is a configuration where just by some weird coincidence the force which is actually caused by the van der Waals like interaction between quantum fluctuations of charges "looks like" being caused by the pressure due to the "missing modes" between the plates?

After all, Jaffe argues that the Casimir effect has nothing to do with zero-point fluctuations, and as far as I know, the comparatively simple calculation of counting allowed modes to determine energy density, hence pressure and attraction, works only for the parallel plates? In other geometries there can be repulsion as well, after all, which is quite hard to understand in the "standard" explanation, since all kinds of extra boundary conditions should, naively thinking, reduce the number of allowed modes and hence always lead to attraction?

I have no idea what Equation 9 you are referring to, so no clue what's on its left or right side. It's a symmetry requirement; if you drop that, you could parametrize all kinds of metrics that don't have this symmetry. Look, the procedure is essentially as follows. You write down the field equations, full form. They are nasty to deal with. You impose your symmetry requirements. These allow you to reduce your metric to a very specific form. It can be shown that every metric that fulfils these requirements can be described with a(t) and with k being either +/-1 or 0. If you just choose k arbitrarily, possibly as a function of the r and/or t coordinates, then either you can make a coordinate transformation to get the metric into the usual form (like rescaling r), or you can't, in which case it doesn't fulfil the requirements.

"Assuming the former, then it follows that I have, heroically and from my kitchen, reduced the total amount of dark energy in the Universe and so slowed its acceleration ever so slightly!"

The expansion rate, velocity, acceleration is given by the whole source term. This includes all kinds of energy densities. You can convert one into the other as long as you respect the conservation laws. The scenario you have in mind seems to be a very local one, for which cosmology is inappropriate to begin with. What you are basically saying is, when I shift energy from here to there, then I change locally the way the universe evolves. Yes. Definitely. You don't need dark energy for that. If you take two charged plates and let them fall onto each other - no quantum effects - you have converted the electrical field (it gravitates) into another kind of energy as well.

About the point you mention, on a deeper level there remains however the question I raised in the earlier posting, namely how Casimir energy actually couples to gravity. As I have expressed there, I don't think this is really clear, and I am very sceptical about the opinion pushed forward in the paper I cited (very bottom of the post). Best,

OK, the CC would have units a/r (as if it were a density of matter with negative gravity), which translates into units of 1/T^2. We can extract a time etc. from that. I suppose, those horrendously too-high predicted values of the CC come from the idea that the equivalent time (lambda^-1/2) was equal to the Planck time? Well, the ratio must really be some tiny number - so we have another dimensionless ratio to "explain" right? BTW, what is the current estimated value of lambda, and what proportion to Planck unit?
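On the last question, one can plug in round numbers. The values below are the commonly quoted observed ones (from Omega_Lambda of roughly 0.7 and H0 of roughly 70 km/s/Mpc); treat them as illustrative inputs for the sketch, not precision fits:

```python
import math

LAMBDA_OBS = 1.1e-52     # observed cosmological constant, m^-2 (round number)
L_PLANCK   = 1.616e-35   # Planck length, m
C          = 2.998e8     # speed of light, m/s

# Lambda in Planck units: the dimensionless combination Lambda * l_P^2
ratio = LAMBDA_OBS * L_PLANCK**2
print(f"Lambda * l_P^2 ~ {ratio:.1e}")            # ~ 2.9e-122

# The associated timescale 1/(c sqrt(Lambda)), converted to years:
t_lambda_s  = 1.0 / (C * math.sqrt(LAMBDA_OBS))   # seconds
t_lambda_yr = t_lambda_s / 3.156e7
print(f"timescale ~ {t_lambda_yr:.1e} yr")        # ~ 1e10 yr, i.e. the Hubble time
```

So yes: if one expected the equivalent time to be the Planck time, the observed value misses by roughly 122 orders of magnitude in these dimensionless units, and the "tiny dimensionless ratio to explain" is about 10^-122.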

BTW, I have a question about the applicability of the "accelerating elevator" to model the uniform field around a very extended planar mass. Greg Egan at Cosmic Variance tells me (in the thread " arxiv Find: Universal Quantum Mechanics") that the field of the planar mass (PM field) is not equivalent to the true accelerating Rindler field, even locally. He seems credible, and says that the acceleration of a body moving rapidly transverse to g is not "the same" as for a simply dropped body (as we expect from the equivalence principle and elevator - that they hit the floor at the same time.) He says, the acceleration is equal to g(1 + v^2/c^2). That really baffled me. Is he right?

BTW, whether that is true or not, "gravitomagnetism" means the EP is wrong anyway (!) - for rapidly moving bodies to accelerate differently than straight-falling ones at a given point (for any reason related to gravitational issues) is a local distinction (since it can't be transformed away by acceleration), not a distinction about large regions (like tidal fields, etc.). G-m and presumably Greg's distinction have been known for a fairly long time, so somehow folks just didn't appreciate the implications.

BTW, check out the poll on "What's your favorite example of quantum chicanery?" at Uncertain Principles. I said decoherence (albeit a middle brow who barely knows on what basis to pick it, but hear me out ...)

OK, maybe it is time to speak of possible *solutions* of the CC problem. I must say that to me, Polchinski's discussion

http://arxiv.org/abs/hep-th/0603249

is very convincing.

I hesitated to raise this point, because it might trigger off a riot if the A-word gets mentioned. Please, *please* don't do that, everyone! Bee will shoot me if you do! :-) The fact remains that Polchinski's argument is the currently most widely accepted solution of the CC problem. However, there are others, see for example http://arxiv.org/abs/hep-th/0510123

I will have a look at the paper. If you're referring to Weinberg's anthropic argument, I have made very clear here what I think about it. It's a consistency requirement, that might be useful as a constraint on parameters, but doesn't actually explain anything. I.e. it doesn't give you a mechanism for how so, and is in this sense rather unsatisfactory as it doesn't really improve our understanding. Best,

Oh no, Bee, now you have done it! :-) Before the customary anthropic riot starts, let me say that I have never really seen the relevance of anthropic stuff here. We have an observation: the CC is small in our Universe. Therefore we have to come up with a theory that explains how small CCs are possible. Polchinski's argument does just that. It's a bit like developing a theory of the scattering of light in planetary atmospheres: your theory had better allow for the possibility that some planets have blue skies, because we know that this is possible. Nothing anthropic about that. Similarly here. The Landscape allows for the existence of universes with small CCs. I repeat: nothing particularly anthropic about that, it just means that the Landscape passes a basic test. Polchinski's paper makes it very clear that this is not an easy thing to do.....

To answer Paul's questions about extracting energy from the vacuum using Casimir plates.

You are of course correct that once you have constructed a pair of plates, you can let them collide, extract that energy, shove them off somewhere, and then repeat with a new pair of plates. In effect this trades very small regions of large negative energy density (between the collided plates) for arbitrary amounts of positive energy extracted.

Now obviously this is impossible. For one thing, there are positive energy theorems it would violate. For another, any quantum theory which allowed that sort of process would be unstable.

So what saves you? It's actually rather subtle, but it has to do with not being able to build a perfect mirror (or conducting plate) without expending an infinite amount of energy to assemble it. I could expound further, but not in this format.

Incidentally, there's an even cooler paradox if you make a box out of perfect mirrors. You can make it balance on the fountain of Hawking radiation from a black hole, which means it will accelerate forever in empty flat space (think of the Rindler coordinates).

Bee: Well, you might want to keep in mind that the cosmological description only holds on large scales. On smaller scales, like e.g. our solar system, the classification into cosmological models is inappropriate.

Well, that's one of the reasons why I am glad you're not anonymous. If you hadn't been here before I'd be rather annoyed.

Hi Plato,

A circle doesn't have curvature.

Hi Paul, Hi Sol,

Energy density (00-component of T) is not a conserved quantity.

Hi Dr. Who,

Look, would you please do me the favor and read my previous post first. If you have anything to add to my argument, feel free to do so but I am not willing to re-re-repeat the same useless exchange again and again. One has a model with a parameter (CC). We have an observation: we exist. If you don't like the complications with life, consider galaxies form or something. This observation allows you to constrain the parameter. Now we have other observations that constrain the parameter, and luckily they are in agreement. What this says is that the model you're using works well. It does not, as you say, explain the parameter.

I don't see why one would have to investigate the Newtonian limit to conclude G = 1/M_P^2, considering M_P is defined to be 1/sqrt(G)...

Upon re-reading what I wrote I have to admit it sounds rather confused. What I meant to say is that one takes the Newtonian limit to identify the energy scale appearing in the coupling (M_P) with the Newtonian gravitational constant via G = 1/M_P^2. It is not a priori clear that the constant one puts in the Field Equations is the same as the one in Newtonian gravity.
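A sketch of that identification, with conventional factors of $8\pi$ absorbed into the definition of $M_P$: linearizing around flat space for a weak, static field and slowly moving matter, one has

```latex
g_{00} \simeq -(1+2\phi), \qquad R_{00} \simeq \nabla^2\phi, \qquad T_{00} \simeq \rho,
```

so the $00$-component of the field equations reduces to

```latex
\nabla^2\phi = 4\pi G\,\rho,
```

which is Newton's Poisson equation. Matching the coupling in front of $\rho$ to the Newtonian constant is what fixes $G = 1/M_P^2$.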

If one could really create conductive plates at minimal energy cost, one could allow them to fall together, extract the energy, throw them away, repeat, and that's a perpetual motion machine. You can extract arbitrary amounts of energy from the vacuum that way.

And as I mentioned, one can also make a forever-accelerating box if you follow the same naive Casimir-type analysis.

I believe the resolution lies in the fact that it's not possible to build a perfect conducting plate without expending infinite energy.

This is a fine summary, Bee. It takes me back to 1972 when I was doing a thesis on gravitational collapse. It is nice seeing some classical GR discussed for a change(ie easier for my neural net to process).

OK for the following someone correct the math concepts if I'm misremembering something I've read (I am reviewing a little as I type) I think curvature is a different thing than what the CC causes, the CC would be torsion, right? Lie Manifolds allow both a curvature connection (Levi-Civita) and a torsion connection (absolute parallelism). I think that while the curvature connection is a very GR thing, the torsion one is more like that Mobius Transformation Lubos wrote briefly on today. The two connections would then kind of allow you to vary your DeSitter spacetime between Minkowski/Kerr and Conformal Conespace, right? Perhaps the more Minkowski/Kerr area would be lots of matter areas (like Pioneer on Earth) and the more Conformal Conespaceish area might be a less matter dominated area like Pioneer out past Uranus (less Sun dominated)? This kind of creates a Swiss Cheese-like universe too. Tony Smith uses the analogy of an expanding balloon with pennies taped to it (the matter dominated areas that don't expand).

I think curvature is a different thing than what the CC causes, the CC would be torsion, right?

I think you are confusing some concepts here. Let me start from the beginning, with a metric and a base manifold. If you want to have a covariant derivative on the manifold, you need a connection. If you assume that the connection is metric compatible (i.e. parallel transport conserves angles and distances) and torsion-free, then there is exactly one connection, the Levi-Civita one, which is given by partial derivatives of the metric. This is standard GR.
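For reference, this unique metric-compatible, torsion-free connection is given by the Christoffel symbols,

```latex
\Gamma^{\lambda}{}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\sigma}\left(\partial_{\mu} g_{\nu\sigma} + \partial_{\nu} g_{\mu\sigma} - \partial_{\sigma} g_{\mu\nu}\right),
```

which is symmetric in $\mu \leftrightarrow \nu$; torsion, if one allows it, sits precisely in the antisymmetric part $\Gamma^{\lambda}{}_{[\mu\nu]}$.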

If you wish, you can drop this requirement and allow for connections with torsion. There are several good papers by Hehl on such generalized scenarios.

The curvature is constructed out of the connection, whether or not this has a torsion. If it has, the curvature tensor gets more complicated though.

That however has nothing to do with the CC as I have introduced it above. It enters as an additional term in the field equations. This is standard classical GR, and what I've been talking about has nothing to do with torsion whatsoever.

If you were talking about something different, then I'd need to know better what you were referring to.

"4-form dA /\ dA - HyperVolumeIn American Journal of Physics 39 (1971) 901-904, David Finkelstein showed that in Unimodular Relativity the Cosmological Constant is an unavoidable Lagrange Multiplier beloging to a constraint that expresses the existence of a Fundamental Volume Element of Spacetime Hypervolume at every point of Spacetime. Unimodular SL(4) is related to SU(2,2) which is isomorphic to the Conformal Group Spin(2,4)."

Gravitational torsion was under the anti-deSitter group Spin(2,3) heading not the full Conformal Group Spin(2,4) so I guess that wouldn't be a CC thing even for this CC, but Non-linear Mobius Fractional Projective Transformations were under the full Conformal Group so I guess it's just the Mobius Transformation that goes with this CC not torsion. This seems like a CC different from one that can fit with GR, doesn't it?

I think I do need to read about torsion more, I've messed up talking about it before over the years; I think for Tony it's a Higgs Mechanism/strong gravity thing and I tend to lump all the not well understood stuff together too much like Higgs/strong gravity/negative energy,etc. It can be a pain being a layperson (electrical engineer) reading above one's abilities just to get the general idea.

Folks, you can't understand the CC/dark energy etc. unless you get GR, and I'm trying to understand GR at some middle to upper-middle brow level. I heard something weird about the acceleration of bodies moving rapidly relative to distributions of mass and ask for your take. Gravitomagnetism apparently isn't the only example of unequal acceleration applying to bodies at different velocities transiting a small region. On sci.physics.research, I asked as OP: "How unlike real elevator (Rindler field) is field from planar mass?" That's because Greg Egan on thread http://cosmicvariance.com/2007/11/13/arxiv-find-universal-quantum-mechanics/ assures me of the following surprising distinction: The metrics of the Rindler Field (RF) of constant proper acceleration, and that of the basically parallel and uniform field (PF) around an extended planar mass, are not the same.

I mean, not just regarding the extended field structure (like the hyperbolic g = -c^2/Z relation), but it also matters for *local* experiments. He says the lab-frame acceleration of a body moving transverse to g in a PF is: g(moving) = g(1 + v^2/c^2). I thought, WTF?! In an accelerating elevator, a dropped body and a bullet fired parallel to the floor hit at the same time. But he seems savvy, and wrote:

Greg Egan: "In the Rindler "elevator", transverse motion is just an extra degree of freedom that has no effect whatsoever in the Z direction. In the curved space-time near a planar mass, the geometry is sliced differently by world lines with different transverse velocities."

Is he right about this weird effect seen in the lab frame where planar mass is at rest (so not "gravitomagnetism"), that makes the lab-standard acceleration g(moving) = g(1 + v^2/c^2)? But I thought of an energy conservation problem, now a better one than first proposed to him: What if you accelerate a ring from rest to rapid rotation, using up its own mass-energy. The total mass-energy per unit ds of the ring, seen in the lab, stays the same (and we can use discrete points to avoid stretching.) In my original understanding, the close to 1/r gravity field near the ring current therefore stays the same value. That avoids free energy tricks like raising/lowering parallel static mass rings before/after acceleration of the first ring.

But if Greg is right, then we can speed up the first ring, get g(new) = g(rest)*(1 - v^2/c^2), lower in some sandwiching static rings, decelerate the first ring (keeping the energy there for same mass-energy per unit), then raise the other rings back out and keep the extra energy. It would be worth 0.36 g*delta h if the main ring got up to 0.6c, etc.

Below are some references, one of which he claims supports his contention - but I feel like something is not right about the increased acceleration.

As much as I appreciate and encourage your interest in understanding GR, this is not a forum. Please post questions about Rindler space or elevators elsewhere - I am really sorry, but I just don't have time to be an "ask-the-expert" service.

I want to kindly ask you to only comment on the topic. This, I should add, would require that you read what I wrote. Generally, you might want to ask yourself why I would want to take the time reading your comments, if you don't care about reading my postings.

May it be that the "standard" Casimir situation with the parallel plates is a configuration where just by some weird coincidence the force which is actually caused by the van der Waals like interaction between quantum fluctuations of charges [...]

This reminds me of some comments by Gordon Pusch on Usenet a while ago. At the time, at least, he maintained that the Casimir effect was due to ordinary van der Waals forces. I searched for a couple of representative comments of his:

This one (third comment down) makes mention of a paper from the 1980's

OK, point taken. So, here's a question directly about the CC: how does it affect redshift, if at all? I always wondered how only Doppler-type considerations could be used to get the relative redshift between galaxies. Consider: If I consider myself the standard for the center of mass density, and emit light, that light has to climb up through the gravity of each sphere it keeps reaching the edge of (since the field of all mass above that cancels out). I heard this pseudo-Newtonian approach can be used for such things.

So, light from me becomes red shifted by *my* standards (direct, not "corrected standards" where the energy you define changes with potential etc.) by the time it reaches another galaxy. That interacts with the motion of the other galaxy to get the net red shift from me it observes.

Now, I think that normally all cooperates to get the expected usual relations for Hubble constant, like the frequency ratio being the scale ratio, etc. But how would dark energy affect the interaction of "inherent" red shift and velocity effect? If the CC affects acceleration, it seems it must also affect the change of energy of light - true? As like "negative mass density", does the CC make the light get bluer as it proceeds out? How do these things interact?

It seems to me, it would reduce the net apparent red shift. So, given a certain radial velocity (coordinate expansion * distance) relative to me at a given "cosmic time," the receiving galaxy gets less red shift looking at me than with no CC. Did I figure this right?

The effect of the CC on the redshift is what is measured in the supernovae data. This however is not a Doppler effect and has nothing to do with the potential of the supernovae. Roughly speaking the net effect is that the light is emitted, but it has to travel through a lot of space to reach us. While it does so, the space expands and redshifts the wavelength of the light. Yes, the CC affects this redshift because it affects the way the space stretches while the light is traveling. Best,
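The way the supernova measurement exploits this can be illustrated numerically: at a fixed redshift, the luminosity distance depends on the expansion history, so a universe with a CC puts the same supernova farther away (dimmer) than a matter-only universe. A rough sketch for a flat FRW universe follows; the density parameters are illustrative round numbers, not fits to data:

```python
import math

def luminosity_distance(z, omega_m, omega_l, h0=70.0, n=2000):
    """Luminosity distance in Mpc for a flat FRW universe.
    Comoving distance D_C = c * integral_0^z dz'/H(z') via the trapezoid
    rule; for a flat universe D_L = (1+z) * D_C."""
    c = 299792.458                       # speed of light, km/s
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        hz = h0 * math.sqrt(omega_m * (1 + zi)**3 + omega_l)  # H(z), km/s/Mpc
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        integral += w * dz / hz
    return (1 + z) * c * integral

d_no_cc = luminosity_distance(1.0, 1.0, 0.0)   # matter-only universe
d_cc    = luminosity_distance(1.0, 0.3, 0.7)   # with a cosmological constant
print(d_cc > d_no_cc)                          # True: supernovae look dimmer
```

The redshift itself is just the stretch of the wavelength with the scale factor; what the CC changes is the distance (hence apparent brightness) that belongs to a given redshift.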

thanks for the pointer. No, I was not aware of these comments by Gordon Pusch. But it seems that they go in the very same direction as the papers by Jaffe. This would simply imply that we should stop bringing up the Casimir effect when discussing the vacuum. Hm, will have to think about it...

"David Finkelstein showed that in Unimodular Relativity the Cosmological Constant is an unavoidable Lagrange Multiplier..."

After reading the 78th listing in a Yahoo search for "unimodular relativity", it looks like unimodular relativity just means conventional GR with the Cosmological Constant allowed to be variable and that makes it different than conventional GR and the conventional CC.

Well, I did predict that talking about explanations of the CC would lead to a riot....little did I suspect that the rioter would be Bee herself! I did of course read your stuff about the AP, but as I said -- twice! -- I don't think that the AP really has anything to do with the Landscape theory of the CC. As I said -- you did read my post, right? -- what we need is a theory that can explain the existence of universes with very small values of the CC. The Landscape does that. The question as to why *we* find ourselves in such a universe is a completely different matter, which you may find interesting -- I don't, really. In short, your post is about the AP, and my post was aimed at declaring that the AP is irrelevant to the problem of the CC. Which is what this thread is about, right? I do wish people would stick to the point....

little did I suspect that the rioter would be Bee herself [...] my post was aimed at declaring that the AP is irrelevant to the problem of the CC

Well. If I wanted to cause a riot, believe me I could do better :-) I was just trying to clarify my point of view. I don't know why the mentioning of the AP always comes with funny remarks like the A-word, or something. It completely obscures any sensible discussion. Yes, I read your comments. If you're saying the AP is irrelevant for the post, why then did you bring it up? Sorry if I misinterpreted it and thought you meant to say something about the topic.

Declaring that something is possible because anything is possible might be possible, but I try to avoid wasting time with it, if possible. Therefore I'll stop 'rioting' here. Best,

It's been a while since I read it, but overall, no, not particularly. He has some nice lines in there that I like (e.g. the one with the teeth brushing, and the one about acquiring balance on the bike). However, his conclusion ("one day you may realise that the natural state is one without mind or thoughts!") is equally as ill defined as the vacuum in QFT. It's full of bubbles, and I am all for renormalization. I tend to agree however that many people (me included) make life unnecessarily complicated, and try to convince themselves into doing things that actually make them unhappy (that includes too much self-reflection). Most notice sooner or later, but what good is it to be sixty and realize that all the money doesn't make you happy or young again?

Similarly, the intellectual and mental responses for every situation life throws at you should come from you naturally, spontaneously and without the mind --- as you know it today --- playing any role.

I find it somewhat too radical, as I think self-control is not an entirely useless trait. It can avoid a lot of unnecessary confusion and insults, that only cause more bubbles. On the other hand, I am afraid we are living in a society where people have forgotten how to listen to the immediate 'intellectual and mental' responses. Thus the weird split that I often encounter, e.g. people saying they dislike the 'political games' they have to play to work in science, but try to convince themselves that's just the way it is, nothing can be done about it, and playing the game is necessary. Ask them why? Because they have to think of their career (how can you be so naive to ask!). Ask them whether that makes them happy, and - depending on the stability of their mental balance - they will start being very disturbed, or call you an 'idealistic dreamer'. It's kind of funny to me that so many intelligent people are not able to realize how they are lying to themselves.

Perhaps more about this in some different time and place; but self-control implies that one part of you wants to do something and another part of you has to overrule it. If Padmanabhan is going by traditional narratives, at this point it is believed that there are no warring parts of you. :)

---------

It's kind of funny to me that so many intelligent people are not able to realize how they are lying to themselves.

This is a more immediate and practical issue and I have no idea how to tackle it. Even within myself introspection fails at times.

Yeah, maybe some other time, otherwise I will have to advise myself to stick with the topic ;-) I see it more as a matter of time scales than of different parts. Some reactions come instantaneously, others take more time. Both should be taken into account. Best,

Bee: Yes, a circle (or "curve" like a parabola) has curvature at every point if it has continuous derivatives there. Not the same kind, but yes in the general sense. It is equal to d(theta)/ds, which is the change in tangent angle per unit of progression along the curve (and can be derived from regular Cartesian derivatives).
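The d(theta)/ds definition is easy to check numerically for a circle of radius R, where the extrinsic curvature should come out as 1/R. A minimal sketch with plain finite differences (nothing GR-specific; the radius and grid size are arbitrary choices):

```python
import numpy as np

# Extrinsic curvature kappa = d(theta)/ds for a circle of radius R,
# computed numerically; the expected answer is 1/R everywhere.
R = 2.0
t = np.linspace(0, 2 * np.pi, 100001)
x, y = R * np.cos(t), R * np.sin(t)

dx, dy = np.gradient(x, t), np.gradient(y, t)
theta = np.unwrap(np.arctan2(dy, dx))   # tangent angle along the curve
ds = np.sqrt(dx**2 + dy**2)             # arc-length element per unit t

kappa = np.gradient(theta, t) / ds      # d(theta)/ds
print(kappa[50000])                     # ~ 1/R = 0.5 at an interior point
```

This is the *extrinsic* curvature of the embedding in the plane, which is the point of contention in the following exchange; it is not the intrinsic (Riemann) curvature relevant for GR.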

Plato: This will raise eyebrows, but really, the equivalence principle is not strictly universal! Since this is a direct reply to another commenter's issue, I hope it's sufficiently on-topic even if not directly related to the CC:

The very existence of "gravitomagnetism" means the strong EP (as I understand it) is wrong. (I mean the gravitational *analog* of magnetism, not the perhaps outré idea that EM effects can produce gravity effects over and above standard GR concepts like the mass-energy of the fields, etc.) Let's say you're standing on a floor on the Earth etc., with a stream of matter flowing rapidly below you. So, you've got the basic gravity field, tidal fields and all, but also the g-M field from the flow. That means the acceleration in the lab frame of a particle zipping by you is not the same as for one you drop straight down. But from the Einstein elevator, we expect a dropped body and a bullet fired horizontally to hit at the same time.

OK, we understand why that is so. But for rapidly moving bodies to accelerate differently from straight-falling ones at a given point (for any reason related to gravitational issues) is a local distinction (since it can't be transformed away by acceleration), not a distinction about large regions (like tidal fields, or the difference between a genuinely uniform field and the g = -c^2/Z of Rindler, etc.).

If so, then GR may at least not be presented well, which could have implications for teaching about complications such as the CC.

I am talking about GR, the curvature of a manifold, not the extrinsic curvature of an embedding in a higher dimensional space. In this context, a circle is flat. I am not impressed by the quotation from that website you give:

"Einstein summarizes this in his Principle of Equivalence: There is no way to distinguish between the effects of acceleration and the effects of gravity - they are equivalent!"

It is so sloppy it is simply wrong. There is no way to locally distinguish between the effects of acceleration in flat space and the effects of gravitation. You can however measure the difference by leaving the very local neighborhood. As so often, I have no idea what you are trying to tell me. Best,

Yes, a circle (or "curve" like a parabola) has curvature at every point if it has continuous derivatives there.

To you too: you are confusing intrinsic with extrinsic curvature. I am talking about the curvature defined through the metric as discussed in the post above, the one that is relevant for GR. The curvature of any 1-dimensional curve is zero. If you don't believe it, go compute it. Best,
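Taking up the "go compute it" invitation: in one dimension the Riemann tensor has a single component, and the antisymmetry of its last two indices forces every term to cancel pairwise, whatever the metric. A small symbolic sketch (arbitrary metric component f(x); a sketch of the index-counting argument, not a full tensor-package computation):

```python
import sympy as sp

# Intrinsic curvature of an arbitrary 1D metric g_xx = f(x) vanishes.
x = sp.symbols('x')
f = sp.Function('f', positive=True)(x)   # arbitrary 1D metric component

# The only Christoffel symbol: Gamma^x_xx = g^xx * d_x g_xx / 2
Gamma = sp.simplify(f.diff(x) / (2 * f))

# R^a_bcd = d_c Gamma^a_db - d_d Gamma^a_cb + Gamma*Gamma - Gamma*Gamma;
# with a single index value available, the terms cancel identically:
Riemann = Gamma.diff(x) - Gamma.diff(x) + Gamma * Gamma - Gamma * Gamma
print(sp.simplify(Riemann))   # 0, for any f(x)
```

The Christoffel symbol is generically nonzero, but the curvature is identically zero: a 1D curve can look bent in an embedding while being intrinsically flat, which is exactly the intrinsic/extrinsic distinction drawn above.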

You missed the most important fact about the CC in the original context in which it was introduced - namely, it represents the global length scale that Riemannian geometry allows, and as a corollary, the reducibility of the metric into 9 direction cosines (dynamical) plus 1 volume element (non-dynamical). In Riemannian geometry, directions are localized while length is not. In Weyl's calibration-invariant geometry both are localized, and theories built on it do not have the CC issue. This simple fact, known to Weyl in 1918, is completely overlooked in today's world of bad physics and poorly educated physicists.

Thanks, Stefan, for the link. One thing I wonder about and haven't seen much on (not saying it isn't there) is the quantization of the dark energy field. What sort of "quanta" make up the DE field, and how are they affected by its being a scalar field? If there are quanta of DE, then doesn't Hawking radiation involve generating them too from the EH?

(Speaking of which, what about gravitons for that? And the Unruh effect should also include all quanta, not just EM, but the UE seems to be spoken of in terms of EM and its temperature equivalent. Yes, I know Bee takes a skeptical view of the UE.)