Pages

Saturday, July 28, 2007

Consistency

I am guilty of frequently using physics-speech in daily life, an annoying habit I have also noticed among many of my colleagues [1]. You'll find me stating "My brain feels very Boltzmannian today", or "The customer density in this store is too high for my metastable mental balance". I have a friend who calls Chinese take-out "the canonical choice", and another friend who, when asked whether he had made a decision, famously explained "I don't yet want my wave-function to collapse". My ex-boyfriend once called it "the physicist's Tourette syndrome" [2].

One of my favourite physics-speech words is self-consistent. Self-consistency is tightly related to nothing. You know, that "nothing" that causes your wife to conclude her whole life is a disaster, we're all going to die in a nuclear accident, her glasses have vanished (again!), and btw that's all your fault (obviously). But if you ask her what's the matter? Well, nothing.

"There's nothing I hate more than nothing
Nothing keeps me up at night
I toss and turn over nothing
Nothing could cause a great big fight
Hey -- what's the matter?
Don't tell me nothing."

Science is our attempt to understand the world we live in. We observe and try to find reliable rules upon which to build our expectations. We search for explanations that are useful to make predictions, a framework to understand our environment and shape our future according to our needs. If our observations disagree with our rules, or observations seemingly disagree with each other (I swear I left my glasses in the kitchen), we are irritated and try to find a mistake. Something being in contradiction with itself [3] is what I mean by not self-consistent (What's the matter? - Nothing!).

On a mathematical basis this is very straightforward. E.g., if you assume my mood is given by a real-valued continuous function f on the compact interval [now, then] with f(now)f(then) smaller than 0, this isn't self-consistent with the expectation that f has no zero on that interval [4]. For more details on my mood, see sidebar.
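Footnote [4]'s Bolzano's theorem is also constructive in spirit: a sign change of a continuous function brackets a zero, and bisection will home in on it. A minimal sketch in Python, with an obviously made-up 'mood' function standing in for f:

```python
# Bisection: if f is continuous on [a, b] and f(a)*f(b) < 0,
# Bolzano's theorem guarantees a zero somewhere in between.
def bisect(f, a, b, tol=1e-9):
    assert f(a) * f(b) < 0, "need a sign change on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # sign change is in the left half
            b = m
        else:                  # otherwise it is in the right half
            a = m
    return (a + b) / 2

# A made-up 'mood' function: bad now (negative), good later (positive).
mood = lambda t: t**3 - 2      # zero at t = 2**(1/3)
t0 = bisect(mood, 0.0, 2.0)
print(round(t0, 6))            # → 1.259921
```

Drop the continuity assumption and the conclusion fails, which is precisely why the word matters.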

Self-consistency is a very powerful concept in theoretical physics: if one talks about a probability, that probability had better not be larger than one. If one starts with the axioms of quantum mechanics, it's not self-consistent to talk about a particle's definite position and momentum. The speed of light being observer independent is not compatible with Galilean invariance and the standard addition law for velocities. Instead, self-consistency requires the addition law to be modified. This led Einstein to develop Special Relativity.
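As a quick numerical illustration (my own, in units where c = 1): the relativistic addition law w = (u + v)/(1 + uv) never yields a speed above c, while the Galilean sum happily does.

```python
# Relativistic velocity addition in units where c = 1:
# w = (u + v) / (1 + u*v) stays below 1 for any |u|, |v| < 1.
def add_velocities(u, v):
    return (u + v) / (1 + u * v)

w = add_velocities(0.9, 0.9)
print(w)            # ≈ 0.9945, still below c
print(0.9 + 0.9)    # Galilean sum: 1.8, faster than light
```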

A particularly nice example comes from multi-particle quantum mechanics, where an iterative approach can be used to find a 'self-consistent' solution for the electron distribution, e.g. in a crystal or for an atom with many electrons (see self-consistent field method or Hartree-Fock method). A state of several charged particles will not be just a tensor product of the single-particle states, since the particles interact and influence each other. One starts with the tensor product as a 'guess' and applies the 'rules' of the theory. That is, by solving the Schrödinger equation with the mean-field potential which effectively describes the interaction, a new set of single-particle wave functions can be computed. This result will however in general not agree with the initial guess: it is not self-consistent. In this case, one repeats the procedure using the result as an improved guess. Given that the differential equations behave nicely, this iterative procedure leads to a fixed point with the property that the initial distribution agrees with the resulting one: it is self-consistent.
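Stripped of the physics, the procedure is plain fixed-point iteration: feed the guess through the rules until input and output agree. The sketch below uses a toy update map in place of the actual mean-field Schrödinger step, which would be far more involved:

```python
import math

# Self-consistency as fixed-point iteration: start with a guess x0,
# apply the 'rules' F, and repeat until input and output agree.
def self_consistent(F, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) < tol:   # guess agrees with result
            return x_new
        x = x_new
    raise RuntimeError("no self-consistent solution found")

# Toy 'update rule' standing in for the mean-field step (a contraction,
# so the iteration converges); its fixed point solves x = cos(x).
x = self_consistent(math.cos, 1.0)
print(round(x, 6))   # → 0.739085
```

Whether the iteration converges at all depends on the map being well behaved, which is exactly the "given that the differential equations behave nicely" caveat above.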

A similar requirement holds for quantum corrections. A theory that is subject to quantum corrections but whose initial formulation does not take into account the existence of such extra terms is strictly speaking not self-consistent (see also the interesting discussion to our recent post on Phenomenological Quantum Gravity).

There are some subtleties one needs to consider, most importantly that our knowledge is limited in various regards. Self-consistency might only hold under certain assumptions or in certain limiting regimes, like small velocities (relative to the speed of light), large distances (relative to the Planck length) or at energies below a certain threshold. Likewise, not being self-consistent might be the result of having applied a theory outside these limits (typically, using an expansion outside a radius of convergence). In some cases (gravitational backreaction), violations of self-consistency can be negligible.

However, one might argue that if it is possible at all to arrive at such a disagreement, then at least one of the assumptions was unnecessary to begin with, and could have been replaced by requiring self-consistency. Unfortunately, this is often more easily said than done -- physics is not mathematics. We rarely start by writing down a set of axioms which one could check for self-consistency. Instead, in many cases one starts with little more than a patchwork of hints, and an idea of how to connect them. Self-consistency in this case is somewhat more subtle to check. My friends and I often kill each other's ideas by working out nonsensical consequences. Here, at least as important as self-consistency is that a theory in physics also has to be consistent with observation.

The classical Maxwell-Lorentz theory is self-consistent. However, it is in disagreement with the stability of the atom. According to the classical theory, an electron circling around the nucleus should radiate off energy. The solution to this problem was the development of quantum mechanics. The inconsistency in this case was one with observation. Without quantizing the orbits of the electron, atoms would not be stable, and we would not exist.

This requirement is specific to sciences that describe the real world out there. Such a theory can be 'wrong' (not consistent with observation) even though it is mathematically sound. Sometimes, however, these two issues get confused. E.g., in a recent issue of Discover, Seth Lloyd wrote:

"The vast majority of scientific ideas are (a) wrong and (b) useless. The briefest acquaintance with the real world shows that there are some forms of knowledge that will never be made scientific [...] I would bet that 99.8 percent of ideas put forth by scientists are wrong and will never be included in the body of scientific fact. Over the years, I have refereed many papers claiming to invalidate the laws of quantum mechanics. I’ve even written one or two of them myself. All of these papers are wrong. That is actually how it should be: What makes scientific ideas scientific is not that they are right but that they are capable of being proved wrong."

"I was taken aback by Seth Lloyd's assertion that "99.8 percent of ideas put forth by scientists are [probably] wrong" and even more so by his statement that "of the 0.2 percent of ideas that turn out to be correct ... [t]he great majority of them are relatively useless." His thesis omits a basic trait of what we call science -- that it is a continuous fabric, weaving all provable knowledge together [...] we do science for a science sake, because a fundamental principle of science is that we never know when a discovery will be useful"

~Eric Fisher, Springfield, IL.

Well, the majority of my scientific ideas are definitely (a) wrong and (b) useless, but these usually don't end up in a peer review process. However, the reply letter apparently took the word 'correct' to mean 'provable knowledge', and science to be the 'weave' of all that knowledge. It might indeed be that the mathematical framework of a theory that is not consistent with observation turns out to be useful later, but that doesn't change the fact that the idea is 'wrong' in the sense that it does not describe nature. Peer review today seems to be mostly concerned with checking self-consistency, whereas not being consistent with observation is ironically increasingly tolerated as a 'known problem'. Like, the cosmological constant being 120 orders of magnitude too large is a known problem. Uhm, actually the result is just infinity. But, hey, you've turned your integration contour the wrong way, the result is not infinity, but infinity + 2 Pi.

The requirement of consistency with observation was for me the main reason to choose theoretical physics over maths. The world of mathematics, so I found, is too large for me, and I got lost following runaway thoughts, or generalizing concepts just because it was possible. It is the connection to the real world, provided by our observations, that can guide physicists through these possibilities and lead the way. (And, speaking of observations and getting lost, I'd really like to know where my glasses are.)

Unlike maths, theoretical physics aims to describe the real world out there. This guiding principle is an advantage, but it can also be a weakness when it comes to the quantities we deal with. Mathematics deals with well-defined quantities whose properties are examined. In physics one wants to describe nature, and the exact definitions of the quantities are in many cases a subject of discussion as well. Consider how our understanding of space and time has changed over the last centuries!

In physics it has often happened that the concepts of a theory's constituents only developed with the theory itself (e.g. the notion of a tensor or the Fock space). Thus it happens in physics that one deals with quantities even though the framework does not itself define them. One might say in such a case the theory is incomplete, or not self-contained.

Due to this complication, I've known more than one mathematician who frowned upon approaches in theoretical physics as too vague, whereas physicists often find mathematical rigour too constraining, and instead prefer to rely on their intuition. Joe Polchinski expressed this as follows:

"[A] chain of reasoning is only as strong as its weakest step. Rigor generally makes the strongest steps stronger still - to prove something it is necessary to understand the physics very well first - and so it is often not the critical point where the most effort should be applied. [A]nother problem with rigor [is]: it is hard to get it right. If one makes one error the whole thing breaks, whereas a good physical argument is more robust."

When it comes to formulating an idea, physicists often set different priorities than mathematicians. In some cases it might just not be necessary to define a quantity because one can sit down and measure it (e.g. the parton distribution functions, PDFs). Or one can just leave a question open (to be studied in a forthcoming publication) and get a useful theory nevertheless. All of our present theories leave questions open. Although this is possible, it is unsatisfactory, and the attempt to make a theory self-contained has led to many insights throughout the history of science.

Newton's dynamics deals with forces, yet there is nothing in this framework that explains the origin of a force. It contains masses, yet does not explain the origin of masses. Maxwell's theory provides an origin of a force (electromagnetic). It has a source term (J), yet it does not explain the dynamics of the source term. This system has to be closed, e.g. with minimal coupling to another field whose dynamics is known. The classical Maxwell-Lorentz theory does this; it is self-contained and self-consistent. However, as mentioned above, this theory is not consistent with observation. Today we know the sources of the electromagnetic field are fermions, which obey the Dirac equation and Fermi statistics. However, if you look at an atom closely enough, you'll notice that quantum electrodynamics alone isn't able to describe it satisfactorily either...

Besides the existence of space and time per se, the number of space-time dimensions is one of those open questions that I find very interesting. It has most often been an additional assumption. An exception is string theory, where self-consistency requires space-time to have a certain number of dimensions. However, if it also contains an explanation of why we observe only three of them, nobody has yet found it. So again, we are left with open questions.

The last guiding principle that I want to mention is simplicity, or the question whether one can reduce a messy system of axioms and principles to something simpler. Is there a way to derive the parameters of the standard model from a single unified approach? Is there a way to derive the axioms of quantization? Is there a way to derive that our space has dimension three, or that spacetime has Lorentzian signature?

In my opinion, simplicity is often overrated compared to the first three points I listed. We tend to perceive simplicity as elegance or beauty, concepts we strive to achieve, but these guidelines can turn out to be false friends. If you can find your glasses, look around and you'll notice that the world has many facets that are neither elegant nor simple (like my husband impatiently waiting for me to finish). Even if you'd expect the underlying laws of nature to be simple, you'd still have to make the case that a certain observable reflects the elementary theory rather than being a potentially very involved consequence of a complex dynamical system, or an emergent feature. A typical example is the average distances of the planets from the sun, a Sacred Mystery of the Cosmos that today nobody would try to derive from a theory of first principles (restrictions apply).
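The 'Sacred Mystery' presumably refers to the old Titius-Bode rule, d = 0.4 + 0.3 · 2^n astronomical units, a numerological fit to the planetary distances. A few lines show both its charm and its failure (the rule and the distances are standard; the code is just my illustration):

```python
# Titius-Bode rule: d = 0.4 + 0.3 * 2**n astronomical units
# (Mercury is the special case n = None, giving d = 0.4).
def titius_bode(n):
    return 0.4 if n is None else 0.4 + 0.3 * 2**n

# (name, n in the rule, actual mean distance in AU)
planets = [("Mercury", None, 0.39), ("Venus", 0, 0.72),
           ("Earth", 1, 1.00), ("Mars", 2, 1.52),
           ("Ceres", 3, 2.77), ("Jupiter", 4, 5.20),
           ("Saturn", 5, 9.54), ("Uranus", 6, 19.2),
           ("Neptune", 7, 30.1)]
for name, n, actual in planets:
    print(f"{name:8s} rule: {titius_bode(n):6.2f} AU  actual: {actual:6.2f} AU")
```

The fit looks impressive up to Uranus, then predicts ~38.8 AU for Neptune against the actual ~30.1 -- a nice reminder that a 'simple' pattern need not reflect the elementary theory.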

Also, we tend to find things simpler the more familiar we are with them, up to the level of completely forgetting about them (did you say something?). E.g. we are so used to starting with a Lagrangian that we tend to forget that its usefulness rests on the validity of the action principle. It is also quite interesting to note that researchers who are familiar with a field often find it 'simple' and 'natural'... I therefore support Tommaso's suggestion to renormalize simplicity to the generalized grandmother.

In this regard I also want to highlight the argument that one can allegedly derive all the parameters in the standard model 'simply' from today's existence of intelligent life. Notwithstanding the additional complication of 'intelligent', could somebody please simply explain 'existence' and 'life'?

Bottom line

Much like classical electrodynamics, Einstein's field equations too have a source term whose dynamics one needs to know. The system can be closed with an equation of state for each component. This theory is self-consistent [6], and it is consistent with all available observations. It reaches its limits if one asks for the microscopic description of the constituents. The transition from the macro- to the microscopic regime can be made for the sources of the gravitational field, but not for the gravitational field they couple to (oh, and then there's the cosmological constant, but this is a known problem).

Two theories that yield the same predictions for all observables I'd call equivalent (if you don't like that, accept it as my definition of equivalence.) But our observations are limited, and unlike the case of classical electrodynamics not being consistent with the stability of the atom, there is presently no observational evidence in disagreement with classical gravity.

For me this then raises the question:

Is there more than one theory that is self-consistent, self-contained and consistent with all present observations?

In a recent comment, Moshe remarked: "To paraphrase Ted Jacobson, you don't quantize the metric for the same reason you don't go about quantizing ocean waves." That certainly sounds reasonable, but if I look at water closely enough I will find the spectral lines of the hydrogen atom and evidence for its constituents. And their quantization. To me, this just doesn't satisfactorily answer the question of what the microscopic structure of the 'medium', here space-time, is.

And what have we learned from all this...?

Let me go back to the start: If you ask a question and the answer is 'Nothing', you most likely asked the wrong question, or misunderstood the answer.

Ah... Stefan found my glasses (don't ask).

See also: Self-Consistency at The Reference Frame

[1] This habit is especially dominant -- and not entirely voluntary -- among non-native English speakers, whose vocabulary is naturally most developed in the job-related area.
[2] Unintentional cursing and uttering of obscenities, called coprolalia, is actually only a specific feature of the Tourette syndrome.
[3] However, some years ago I was taught that the word 'self-consistency' has a different meaning in psychology; it refers to a person accumulating knowledge from his/her own behaviour. A person whose thoughts and actions are in agreement and not in contradiction is called 'clear'. (At least in German. I couldn't find any reference to this online, and I'm not a psychologist, so better don't trust me on that.)
[4] See: Bolzano's theorem.
[5] "Woman on Window", by F.L. Campello. For more, see here.
[6] Note that this theory is self-consistent at arbitrary scales as long as you don't ask for the microscopic origin of the sources.

TAGS: PHYSICS, SCIENCE, MATHEMATICS

84 comments:

Moshe
said...

Hi B., this is fun to read. Let me paraphrase my comment regarding the quantization of the metric and the analogy to ocean waves (Ted actually referred to sound waves in his "Einstein equation of state" paper...).

The description of ocean waves using hydrodynamics is obviously an approximation that will not describe the medium at short distances. In that case we are lucky and we have the resolution to see it directly. In gravity we do not.

However, this is where your distinction between self-consistency and consistency with observation is crucially important. Even if we do not look at water closely, the hydrodynamical description is NOT self-consistent when applied to short-distance questions. It has corrections (irrelevant terms) that become large at short distances and invalidate the whole description.

Exactly the same statement applies to Einstein gravity, it is then not unreasonable to draw the conclusion that classical GR is simply an effective hydrodynamical description of something else, something that bears no similarity to GR whatsoever.

If we take this clue seriously, trying to quantize GR (or something like it) is simply misguided and will never lead to "quantum gravity". This is the one most crucial difference between string theory and alternative approaches to QG, so I feel it is an important point to appreciate.

(for the record, Ted's point in gr-qc/9504004 is slightly different, but I had to borrow his wonderful phrasing)

Hi Bee, interesting reading. So I take it the observable Universe didn't simply emerge out of 'nothing'. And we are still not clear why the observable Universe has an event horizon (or indeed whether said event horizon exists).

Gravity & Time are features of the observable Universe; we have Earth gravity and Terrestrial Time (solar years) -- and we have Atomic Time. It's all a fun (or not) pastime.

Somehow it seems to fit in with the theme of your blog post. The theme of the preprint is that even in classical gravity we don't find it easy to map physical principles into mathematical representations.

Q2: How would I describe the radiation of a radio transmitter (say the one that is broadcasting WNYC 820 AM) in terms of quantum processes?

To me this question is related to the "quantizing ocean waves". You see, to arrive at the quantum theory of radiation, we have to study atomic (and molecular and other) quantized transitions, among other things. We are lucky to have such things accessible to us along with classical EM fields.

But with gravity, all we have is fields generated by extremely classical (i.e., hardly quantum) objects like the earth and so on.

So we have to repeat the conceptual steps without an experimental guide that would take us from a radio transmitter to a quantum theory description of a radio transmitter.

--- I had just promised myself that I'd deaddict myself to blog-hopping, and here you post again! :) :)

Since Moshe is on, let me ask another question - our Standard Model QFT has divergences. String perturbation theory is supposed to lack divergences - at least to two loops. Does string theory suggest where the divergences of the Standard Model supposedly somehow embedded in it arise from?

Seems to me that in QFT terms, string theory adds an infinity of irrelevant terms to the Standard Model action and would thereby get rid of the divergences.

Of course, the question also has a high probability of being gobbledygook, but at this point, my attitude is to shrug.

I will also comment that it is starting to seem paradoxical to me that the reason I can compute the gyromagnetic ratio of the electron to four or five loops and be confident of the answer -- a sub-electron-volt phenomenon, by the way, in terms of measuring it -- is that the divergences of the Feynman integrals involved, at infinite energy scales, are so kind as not to require new counterterms. "Well-behaved" ultra-ultraviolet is needed, or else no predictions!

Yes, I have to throw in the muon to get the correct answer with today's experimental precision. But that doesn't lessen the feeling of paradox.

Gravity is a very weak force and the experimental evidence for GR is also weak.

"Simple", well, Newton's assumption of flat space is certainly simpler than Einstein's complicated mess. If space is indeed flat, then the speed of light has to be variable, but that's hardly a big deal. Ocean wave speeds are variable for example, and so are sound waves.

The Cambridge geometry group put together a theory of gravity on a flat space that is exactly equivalent to GR in all situations where they can be compared (but not equivalent inside event horizons where they cannot be compared).

Thanks for the clarification, I thought you might have referred to a talk or something. I think I just misunderstood the statement that gravity doesn't have to be quantized. You're saying there is evidence that GR is not the correct description for short-distance questions, but the answer to these questions might not be to quantize it, say, the way we quantize the electric field. I would agree on both counts. In this sense, the hydrodynamical analogy fits in. Just as I wrote: it doesn't answer the question of what the microscopic structure of the 'medium', here space-time, is. Maybe Jacobson's way is the way to make progress, I don't know.

I admittedly don't fully understand your argument, because I thought of irrelevant terms etc as being properties of a quantum theory, and corrections becoming necessary because one has to renormalize. How does that translate into a classical theory like hydro? Is it the description at high frequencies that breaks down there? In what sense is classical gravity not self-consistent in this regime?

One way or the other, the actual reason for this post was that I'm still trying to figure out what indication we have that gravity (or make that the structure of space-time) is quantized at all. Even knowing that it is not self-consistent at smallest distances does not mean the solution needs to be quantization. So, basically I wasn't able to answer my own question. The only outcome was this post.

See - that's what I mean. 'Simple' is a very observer-dependent statement. I personally find Special and General Relativity anything but a 'complicated mess'. It's elegant, it's beautiful, it's simple (unlike, say, cosmological perturbation theory). But I see no point in arguing for or against theories based on how each of us perceives beauty.

If space is indeed flat, then the speed of light has to be variable, but that's hardly a big deal.

Space in SR is flat, but the speed of light is a constant. If it's not, that's a big deal, because all observations say it is indeed constant.

The Cambridge geometry group put together a theory of gravity on a flat space that is exactly equivalent to GR in all situations where they can be compared (but not equivalent inside event horizons where they cannot be compared).

Hmm. Interesting. Though I can't help but wonder why anybody would want to spend time constructing a theory A that is exactly equivalent to an already existing theory B except in situations where one can't compare the two.

..by the way: the condition that a real-valued function f:[a,b] -> R satisfies f(a)f(b) < 0 is in perfect agreement with the assumption that it has no zero -- take any appropriate non-continuous function. This possibly exemplifies what divides physical reasoning from mathematical: installing precision of statements 'a posteriori' instead of using precision as a constitutive agent.

Furthermore, Bee, I just noticed that I was not able to get the actual 'point' of your text until you explained that you worried about whether quantization of gravity was the right strategy for 'quantum gravity', which is a perfectly clear point. But assuming familiarity with the points that 'self-consistency' is a necessary but most probably not sufficient condition for a model to describe a given set of observations, that a theory/model should 'explain' its own constituents ('self-containment'), and that it should be as simple as its correctness allows, I did not get your 'new' point. Finally, I remark that assuming 'forces' a la Newton is not necessarily intellectually less honest or less 'self-contained' than assuming practically fictional strings carrying information about the states of elementary particles; both regimes constitute what I would call a 'mathematical model'. Newton's 'force' is, in addition, still closer to what Sartre would call 'the being of the phenomenon', consequently more abstract than the object of a 'vibrating string', which relies on a well-founded notion of 'phenomenal being', i.e. is closer to a 'cognitive', anthroposophic construction than the former.

For me one of the beautiful parts of string theory is indeed the fact that spacetime is "emergent". Until there is a blog post on that, maybe the semi-popular talk by Nati Seiberg (hep-th/0601234) may fill the gap.

Bee, right now it doesn't look like modifications of QM are needed, but that may change. In any event, whatever replaces QM should apply uniformly to all of physics; I cannot imagine having a consistent theory which treats gravity differently from everything else. Maybe something like that works, hard to judge without any concrete details, but the specific idea of classical gravity coupled to quantum-mechanical matter does not work (see Jacques' post for example).

Thanks so much. How embarrassing that the word continuous dropped out while editing, I assure you it's in the draft on my hard-disk. I've corrected that sentence.

Regarding the point of my writing: though the reason for me to think about these matters was my attempt to make up my mind whether or not gravity should be quantized (or what that means), I thought it might be worthwhile to summarize what requirements a theory in physics should fulfil. It is also historically interesting why scientists found a current status unsatisfactory, and what led them to ask further questions -- that is, where progress came from and how it was made. Besides this, I hope that my writing captured the essence of what we call a 'scientific' theory.

I've read that post by Jacques. Actually, I've read it repeatedly because various people have referred me to it, but it doesn't address my questions. As I've said before, I agree with you that a modification of qm doesn't sound too convincing, so I'm not even in the mood to defend such an attempt. I am mostly trying to collect arguments. The post by Jacques, though, I find confusing, because I don't see, classically, why it is inconsistent to set all the higher-order terms to zero. You said previously one can't do that in a quantized theory, because the contributions sneak in again through the back door. In the classical case one could make an argument from naturalness (why are all these terms absent/parameters zero?), but in what sense would it not be self-consistent?

B., the situation is slightly different from the Lorentz violation case, where people concentrate on the smaller effects (irrelevant terms) while neglecting the ones that are much larger (relevant and marginal)...

Here the situation is much better: there is no experimental evidence for higher-order terms beyond the Einstein-Hilbert term, but even the leading term is non-renormalizable, so the theory still breaks down at short distances, at least in perturbation theory (maybe some miracle happens non-perturbatively; I'll believe it when I see it).

Also, the renormalization story is the same for classical theories, such as hydrodynamics. Renormalization has to do with the scale dependence of physics, and is not specific to quantum mechanics or quantum field theory.

First: superstring theory is proven to be finite not only at two loops but to all orders and beyond. And there is of course no contradiction between the finiteness of this full theory and divergences in its approximations, such as effective field theories.

It's always like that: a more complete theory is free of divergences, and a less complete theory is divergent. A more complete theory that has no short-distance divergences is called a UV completion of the original, divergent theory.

It is important to realize that the problem is not really the divergences' being infinite but the indeterminacy of the results. One can always make some cutoff, as Jacques and others explain, and infinite things become finite. The problem is exactly transformed into another one -- namely the dependence of the results on the exact way the cutoff is made.

There is no principle that would make one set of choices for the cutoff and regularizations more correct than the others. One ends up with an infinite-dimensional space of possible results, labeled by the coefficients of all nonrenormalizable terms. None of these results is better than the others. The divergent theory may be used to calculate rather accurate results at low energies but breaks down at the cutoff scale and higher.

String theory - or more generally any UV completion - is simply a recipe to choose a privileged answer among the regulated divergent theories. In other words, it picks THE right way to regularize the original incomplete theory. It can do so because it has other principles that lead to no divergences, i.e., after imposing a cutoff, to no infinite-dimensional uncertainty. The same is true not only for string theory but also e.g. for the gauge-theory completion of the four-fermion interaction (both are field theories), or any other UV completion for that matter.

Dear Bee,

what you wrote here directly contradicts several sections of your newest article about Consistency. First of all, your newest comment is internally inconsistent. First, you promise us that you won't promote the idea that quantum mechanics may be abandoned -- but despite this pledge, that's exactly what you're doing in the second part of the comment, when you encourage people to talk about classical gravity even though every good undergrad student must know that our world -- I mean the whole world, not the world minus gravity or some ludicrous castration of this kind -- is quantum.

Second, even if you incorrectly assumed that gravity is or may be classical in this world, there is no fully justifiable way to set the higher-derivative terms to zero (except for meaningless constructs that are discussed below). The only reason why Einstein threw these terms away was simplicity. But as you correctly write in your newer text about consistency - even though you probably don't understand what you're correctly saying - simplicity is not as important as consistency.

Even at the classical level, there is nothing much better about a theory where the higher-order terms are set to zero (except for a possible scale invariance that may work classically). Such a theory is "simpler" on paper but this kind of simplicity is not a physical argument; it is just an aesthetic argument. In the purely classical world, theories with all possible values of coefficients of higher-derivative terms are equally good.

In modern physics after the 1970s, all these poetic arguments about simplicity have been eliminated and we rely on physical arguments only. The simplicity was replaced by the rules of the renormalization group. The higher derivative terms have small coefficients because effective gravity has to work well up to energy scales that are very high, namely the Planck scale. That's the real reason why all higher-derivative terms have small coefficients: because the cutoff where the theory is allowed to break down is very high. Simplicity is not a direct argument anymore.
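The renormalization-group logic in this paragraph can be made explicit with the standard effective-action expansion for gravity. The following is a schematic textbook form, not taken from the comment itself:

```latex
S_{\text{eff}} = \int d^4x \,\sqrt{-g}\,\Big[
    \Lambda
  + \tfrac{1}{2} M_P^2\, R
  + c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu}
  + \tfrac{c_3}{M_P^2}\, R^3 + \dots \Big]
```

With the dimensionless coefficients $c_i$ of order one at the cutoff, a term with $2n$ derivatives is suppressed at energy $E$ by $(E/M_P)^{2n-2}$: negligible at accessible energies, but of order 100% at the Planck scale.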

At any rate, these higher-derivative terms may have small impact on long-distance physics but they have an order of 100% impact on the Planckian physics. The Planckian physics is by very definition quantum physics. Physics over there follows the quantum rules, and there is no distinction between setting higher terms to zero or something else because you generate them by RG flow anyway, choosing a different renormalization scheme and scale.

You should realize that the cutoff energy scale goes to infinity in the classical limit and this whole discussion becomes vacuous. Indeed, you may view a classical theory as a very low-energy limit of a quantum theory that is required to work at arbitrarily high scales, while you scale hbar to zero.

If you do so, you indeed find a rational argument why the higher-derivative terms in a partial limiting theory should be zero. But exactly by the same assumption, your theory in its regime of validity completely avoids the problem of quantum gravity because quantum gravity occurs at the cutoff scale, and you sent the cutoff scale to infinity.

Your comment illustrates very clearly a widespread logical fallacy - that all kinds of Smolins and Jacobsons keep on spreading all the time. You want to keep the exact rules of classical physics but at the same moment, you want to say that by studying classical physics of gravity, you study quantum gravity. That's not possible. Classical gravity is where hbar is zero, cutoffs are infinite, and the terms are not generated by any RG flows. Quantum physics is where all these things are finite and nonzero. You can't have both at the same moment. It's just silly.

I assure you that T-duality in string theory is not only proven in all formulations of string theory we have where T-duality is claimed to exist, but students are led to prove it in roughly the 4th lecture of the 1st semester of string theory courses at various places.

You may find T-duality counterintuitive and I can recommend you chapter 10 of The Elegant Universe by Brian Greene who explains why the intuition is wrong and why there's no contradiction between the "classicality" of the underlying manifold and T-duality.

The winding modes get exchanged with the momentum modes and vice versa. There's no physical way to distinguish whether a mode is momentum mode or winding mode unless one settles a convention. The exact formula for masses - and similarly interactions - is invariant under the exchange of R with 1/R and exchange of momenta and windings, and masses and interactions of objects are everything that can be measured.
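For reference, the mass formula behind this claim is the standard spectrum of a closed bosonic string compactified on a circle of radius $R$, in units where $\hbar = c = 1$ (this equation is a textbook supplement, not part of the original comment):

```latex
M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2}
    + \frac{2}{\alpha'}\left(N + \tilde{N} - 2\right)
```

Here $n$ is the momentum number, $w$ the winding number, and $N$, $\tilde{N}$ the oscillator levels. The spectrum is manifestly invariant under $R \to \alpha'/R$ together with $n \leftrightarrow w$, which is exactly the exchange of momentum and winding modes described above.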

Labels can't be directly measured. They're not physics.

In conformal field theory, T-duality is simply the reflection of a spatial coordinate that is only applied to the right-movers but not left-movers (or vice versa), and it is a symmetry largely because the holomorphic and antiholomorphic sectors of CFT are decoupled (up to the zero modes that had to be rearranged - that's the change of the radius from R to 1/R).

what you wrote here directly contradicts several sections of your newest article about Consistency. First of all, your newest comment is internally inconsistent. First, you promise us that you won't promote theories in which quantum mechanics may be abandoned - but despite this pledge, it's exactly what you're doing in the second part of the comment when you encourage people to talk about classical gravity even though every good undergraduate student must know that our world - I mean the whole world, not the world minus gravity or some ludicrous castration of this kind - is quantum.

You are criticising me for trying to understand a matter that I regard complicated. An explanation of the kind 'every good undergrad must know' is not helpful, neither are your accusations that I said things I never said.

I have not promoted any approach, but I also never 'promised' I wouldn't promote one approach or the other. Instead, I have said repeatedly that I am trying to understand arguments for and against. If I have encouraged people to do anything, then it's hopefully asking questions - something that you apparently want to discourage by making the person asking feel stupid for not having been born understanding UV fixed points in QFTs. This is a behaviour I do not welcome on my blog.

Besides this, I am not a theory, and I am aware that I am frequently in disagreement with myself.

Your comment illustrates very clearly a widespread logical fallacy - that all kinds of Smolins and Jacobsons keep on spreading all the time. You want to keep the exact rules of classical physics but at the same moment, you want to say that by studying classical physics of gravity, you study quantum gravity. That's not possible.

I have not said anything like this. I am merely looking into the arguments that I frequently hear why gravity has to be quantized. I have not written about these in the post above, because I don't think I have yet fully understood all points. Instead, the content of my writing above are requirements that I think a theory in physics has to fulfill, requirements that I regard essential, and that I have been thinking about in this context. I certainly have never claimed that 'by studying classical physics of gravity [I] study quantum gravity'.

Speaking of logical fallacies: you have defined quantum gravity to be gravity at the Planck scale ("The Planckian physics is by very definition quantum physics."), so it seems even somebody who would argue against quantization of gravity at the Planck scale, in your definition would indeed do quantum physics.

You promised, in your comment, that you wouldn't be talking about the dumb "theory" that quantum gravity is the same thing as classical gravity, when you wrote to Moshe:

"As I've said before, I agree with you that a modification of qm doesn't sound too convincing, so I'm not even in the mood to defend such an attempt."

But defending this unjustifiable, silly attempt to not only modify but deny all of quantum mechanics is exactly what you did in the rest of the comment when you were trying to squeeze an irrelevant discussion about classical GR and present it as a part of the discussion about quantum gravity.

I don't think that this is an attempt to understand something. Quite on the contrary, it is an attempt to do the maximum - emit enough protective fog - not to understand anything. Could you please try to accept that the problem that you should understand exists in the first place - quantum gravity - and that this quantum gravity is not equivalent to classical physics or anything else from 19th century or high school physics and that you must learn particular new insights that you haven't known so far?

If you always end up with some ludicrous doubts whether the problem exists at all whenever a single, even elementary, question about the subject is discussed or whenever someone like Moshe tries to explain anything about it to you, be sure that you will never understand anything about the subject.

The rest of your text just shows that you still don't understand absolutely anything.

First of all, the fact that gravity in the quantum world has to be quantum gravity is not a difficult argument that physicists spend years with. It is supposed to be an extremely elementary thing that takes a few minutes for a student who joins the field. It's baffling if someone spends so much time with it.

All types of phenomena in the world have the same hbar because hbar is a universal constant. The commutator of x and p is i.hbar, and this hbar is the same thing as the commutator of the vector potential and the electric field, with i times delta function removed. Gravity is no different because it is a phenomenon, too. Saying that gravity is not quantum is equivalent to saying that 0=1 because hbar in nature, in natural units, is 1, while saying that gravity is classical is saying that hbar is zero.

If you understood that quantum is the same thing as "finite hbar" while classical is "hbar equals zero", by definition, you would have to understand that classical gravity in a quantum world is simply a contradiction because hbar is a universal constant and it can't be zero and nonzero at the same moment.

I have wasted literally hours just with you and this extremely simple thing - that can be explained in thousands of different ways - and you're still not getting it. It's just extremely frustrating.

Your conclusion about "fallacy" in the last paragraph is equally absurd and based on the same kind of rudimentary misunderstanding. When we say that quantum gravity by definition is an order 100% phenomenon at the Planck scale, it also implies that it is where quantum phenomena make order 100% contributions to the results.

It's because this is how the Planck scale is defined. The Planck length is sqrt(hbar.G/c^3). Neglecting quantum phenomena is equivalent to sending hbar to zero which means that the Planck length would drop to zero meters, too. The Planck scale *is* the scale where the relative quantum fluctuations of the metric are of order one whether someone likes this mathematical fact or not. This is why the Planck scale is defined in this way. Just make the simple calculation. Anyone who says that quantum corrections may be neglected at the Planck scale is simply a moron. This is not a topic for serious discussions.
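The claim that the Planck length vanishes with hbar is a one-line calculation; here is a minimal numerical sketch (the constant values below are the usual approximate SI ones):

```python
import math

# Approximate SI values of the constants
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

def planck_length(hbar_value):
    """l_P = sqrt(hbar * G / c^3); depends on hbar explicitly."""
    return math.sqrt(hbar_value * G / c**3)

l_p = planck_length(hbar)
print(f"{l_p:.3e} m")      # roughly 1.6e-35 m, i.e. ~1.6e-33 cm

# The classical limit hbar -> 0 sends the Planck length to zero as well:
print(planck_length(0.0))  # 0.0
```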

Proposing that quantum mechanics could be modified, deformed, or something like that is a priori plausible - even though I think it has nearly been established that it is simply impossible when all knowledge is taken into account. But you wanted much more: you were talking about the people who deny *any* "quantization" at the Planck scale. That's just plain stupidity, can't you see that?

The relative role of QM in QG increases as you go to shorter distances, and it just becomes 100% near the Planck scale. Is it really so difficult to get this point? This discussion is completely irrational.

I am not an expert at all in these things, but from my limited understanding of "classical gravity", I would agree with Lubos when he says "Classical gravity is where hbar is zero, cutoffs are infinite, and the terms are not generated by any RG flows."

Leaving aside possible subtleties concerning singularities, does this not mean that classical gravity is self-consistent in the sense Bee has used above?

But then, I am puzzled about Moshe's statement concerning classical hydrodynamics, "Even if we do not look at water closely, the hydrodynamical description is NOT self-consistent when applied to short-distance questions. It has corrections (irrelevant terms) that become large at short distances and invalidate the whole description."

How does this go together? Or is classical hydrodynamics different from classical gravity in some essential point?

I mean, I could imagine that it makes sense in classical hydrodynamics to look at Fourier decompositions of pressure or velocity fields and so on, to integrate out high frequencies, and to obtain a formulation where transport coefficients depend on cutoff frequencies (do they blow up?), and that's how the RG formalism comes into play. But then, I do not quite see how, and why, as Moshe writes "Exactly the same statement applies to Einstein gravity, it is then not unreasonable to draw the conclusion that classical GR is simply an effective hydrodynamical description of something else, something that bears no similarity to GR whatsoever."

Sorry, confused - and thank you for any helpful hint!

stefan

BTW: Are there recent textbooks which explain how the RG formalism is used in application to classical hydrodynamics?

Hi Stefan, away from my office now so cannot remember references. I heard about the hydrodynamics from Weinberg, maybe it made it to his book. In any event the way I remember it is the procedure you outlined: you approximate the complicated structure of the fluid by continuous variables such as density and velocity fields etc., in effect integrating out short distance fluctuations. This results in the most general action (or equations of motion) consistent with all symmetries.

Now, at long distances one can systematically expand in order of relevance, so keeping only the relevant and marginal perturbations will give conventional fluid dynamics. Of course, since this is just an approximation there will be corrections, which presumably are measurable. These will be irrelevant terms which are an intrinsic indication hydrodynamics is not the full story.

(this is the story of hydrodynamics as an EFT; scale dependence and RG flow are something I don't remember seeing explicitly, but it seems to me it will work the same way - when integrating out a momentum shell your phenomenological parameters will flow, with or without quantum mechanics, no?)

The difference from gravity I have in mind is the following: there is a limit in which hydrodynamics - just the relevant and marginal terms - is self-consistent (but wrong). Just take the cutoff away and then all irrelevant terms go away. There is no such limit for gravity because even the first term in the action is already non-renormalizable. Take the Planck mass to infinity, and you get rid of dynamical gravity altogether (though depending on what you keep fixed you may still be in curved space).
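Moshe's description of hydrodynamics as an effective theory can be summarized by the standard gradient expansion of the stress tensor. The following is a textbook schematic (mostly-plus convention, with $\Delta^{\mu\nu} = g^{\mu\nu} + u^\mu u^\nu$), not part of his comment:

```latex
T^{\mu\nu} =
  \underbrace{\epsilon\, u^\mu u^\nu + p\, \Delta^{\mu\nu}}_{\text{ideal fluid}}
  \;-\;
  \underbrace{\left(\eta\, \sigma^{\mu\nu}
     + \zeta\, \Delta^{\mu\nu} (\partial \cdot u)\right)}_{\text{first order in gradients}}
  \;+\; \mathcal{O}(\partial^2)
```

Each additional gradient is suppressed by the microscopic length (the mean free path); once gradients are of order that length, all the "irrelevant" terms compete and the description breaks down - the analogue of the higher-derivative terms in gravity becoming order-one at the Planck scale.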

If the underlying classical manifold is just a label, then I am not sure how I can interpret string perturbation theory spin-2 massless particles as perturbations of the metric of this non-quite-real classical spacetime.

I have no doubt that the mathematical results of string theory are well-grounded, even when rigorous proofs are not available. It is the physical interpretation that bothers me.

I'm not a professional physicist, so if it is too frustrating, simply ignore my questions here. My understanding of this is not crucial to my career or to anything else.

Dear Lumo, I have noticed that, both on your own blog and here, you never put forward any ideas of your own; all of your posts are attacks on the ideas of other people. I would strongly advise you, in your own interests, to ponder whether this behavior is causally related to your failure to publish anything for a period of time which has turned out to be fatal to your academic career. It's really quite a waste.

Every experiment is an interaction between a system and a detector, and the result depends on the physical properties of both. Usually we want to eliminate the detector dependence as far as possible, but not further than that. There is one detector property which cannot be eliminated in the presence of gravity: its mass M. QFT does not depend on M. QG must hence be described by a deformation of QFT, where M is the deformation parameter.

To see that we cannot eliminate the M dependence, consider the limits where it disappears. If M = 0, the detector moves at the speed of light. If M = infinity, the detector interacts with gravity and collapses into a black hole. Both limits are qualitatively wrong, and hence QG must explicitly depend on the detector's mass. OTOH, if we ignore gravity, nothing prevents us from putting M = infinity. Then the detector just sits still, in good agreement with reality. In this situation, QFT works well.

I had decided to leave the fucked-up Academia as far back as 2004. Also, I don't like to generate bogus ideas just to generate them. If I have nothing to say, I say nothing.

And I have resigned mostly because of these things and especially the influence of political and other scum - such as the feminists - on my life that I couldn't get rid of however much I tried - although technically it was because of visa expiration.

I've been living under a terror of a left-wing totalitarian system for the first half of my life so far and I definitely don't intend to return to something similar.

And if you're interested, I despise anonymous posters and people who are doing theory for their career. Whether or not you're in Academia and whether I know you or not, I think that you are at the moral bottom.

The spreading of junk scientists in Academia, masked by all kinds of politically correct cliches, is something some people don't care about, but it certainly makes it impossible for people like me to act. Society is already paying a huge price for it and it's getting worse.

Dear Stefan, classical gravity is surely self-consistent, up to the existence of singularities that inevitably evolve under some circumstances.

But it is not consistent with the experimental observations. It is not consistent with the world being quantum. The observations say hbar is nonzero. Classical gravity means hbar is zero which is not the case.

Stefan, I don't understand what you find strange on Moshe's analogy. Moshe's point was exactly that the case of hydrodynamics and gravity are analogous.

The classical equations - Einstein's equations or Navier-Stokes equations - have pathological behavior at short distances as long as generic terms are included which is where quantum phenomena kick in and regulate the picture into a full theory without problems.

The only relevant difference here is that hydrodynamics with the leading term only can be renormalizable as a non-relativistic quantum theory. Einstein's gravity is worse in this respect - already the Einstein-Hilbert term is nonrenormalizable - which proves that dynamics beyond the classical equations is needed.

Dear Arun,

what I primarily meant by "labels" was the difference between momentum modes - particles moving along compact dimensions - and winding modes - strings wrapped w times around it. The difference between them is a matter of words.

Yes, all other properties of the background manifold are labels in some sense, too.

String theory with all winding modes removed is exactly equivalent to a "local" quantum field theory on the relevant manifold. One can easily see it. All normal fields in spacetime - including perturbations of metric - come from non-winding strings. That's why the non-winding portion of string theory exactly includes a local theory on a background manifold that you may imagine classically.

Winding modes may be neglected as long as they're too heavy to be produced i.e. as long as the radius vastly exceeds the string scale. However, when the circle is string-length short, their influence on physics becomes of order 100% and they imply dramatic change of qualitative conclusions, including the equivalence between R and 1/R. Not sure what else may be unclear, you can always ask.

Well, actually, just criticising other people really amounts to "nothing". Really. Think about it: people remember you for your new ideas; nobody ever got the Nobel just for being a critic. It's so easy. It's a zero achievement. So I suggest that you follow your own advice and say nothing.

"And I have resigned mostly because of these things and especially the influence of political and other scum - such as the feminists - on my life that I couldn't get rid of however much I tried - although technically it was because of visa expiration."

Really? Harvard gives tenure to people who fail to write papers? When did they become so kind?

"And if you're interested, I despise anonymous posters "

That's ok. Maybe it will help you to feel better about yourself.

"and people who are doing theory for their career."

How noble. I observe however that this sentiment is surprisingly common among academic failures.

"Whether or not you're Academia and whether I know you or not, I think that you are at the moral bottom."

What about people who deliberately try to discourage other people who are doing real work, in order to make themselves feel better about not doing any work themselves? What is their moral standing, LM?

I mean, seriously, Sabine H is tough enough and experienced enough to know how to deal with your nonsense. What if she were some young student, who might be demoralized by your writings?

"Hmm. Interesting. Though I can't but wonder why anybody would want to spend time on constructing theory A that is exactly equivalent to an already existent theory B except for situation where one can't compare both."

Given an arbitrary GR solution (a set of coordinate charts that cover all spacetime), pick a single event. Pick a Minkowski coordinate system in the neighborhood of that event.

My understanding is that the Cambridge geometry group figured out how to uniquely extend that local Minkowski coordinate system into a flat coordinate system that works for all the charts, except possibly for the ones inside of black holes.

They figured out how to extend the local structure of spacetime to a fully global structure. It was quite a tour de force. It makes the usual formulation of GR look a little sick. And it's a LOT easier to do calculations on a flat background.

Instead of tensors, they use what you might call the Dirac operator, i.e. gamma^mu d_mu. I'm familiar with their work because there are so few of us who play with the "geometric algebra".

In the right world, people who can't do certain things simply shouldn't do these things. I don't want to just demoralize these people. I want these people to be gone because they have no business being around. It has nothing to do with toughness. Toughness or non-toughness shouldn't be the deciding factor.

In the broader Academia, I was surrounded by a lot of tough imbeciles and I say, never more. Constant intimidation by anonymous and semianonymous trash like you has been a part of it, too.

I don't care about achievements in this context. People who are in this setup and who help to build it are parasites. My counting of achievements is very different from yours. Most of the things that I consider values are probably not valuable for you and vice versa.

"People who are in this setup and who help to build it are parasites."

But, you see, a parasite is exactly what you are --- or were, before you got fired for underperforming. I mean, if people out there were not doing real work, there would be nothing for you to do, correct? Because you only criticize. Isn't that the definition of a parasite, Lubos? Now why don't you run along, and write something on your blog that isn't parasitic on someone else's work?

When I was a kid I once agreed to take care of a friend's canaries over the summer. There were five of them. My mother wasn't too happy about it but I was quite excited. I didn't have any pets then. So every morning I'd go, take away the blanket and see what the birds were up to. Some days they were peacefully chirping next to each other, some days they just stupidly sat on a bar, occasionally they had gotten into a fight with each other and feathers were flying around.

This blog reminds me of these days. I get up in the morning and see what you've been up to over night. It seems today feathers are flying.

It turned out though I can't stand canaries. They never shut up, shit literally everywhere, and one of them was stupid enough to fly against the glass door and break its neck which caused me a lot of trouble.

So, how about you try chirping a bit nicer, and not shit around everywhere you go. And watch those glass doors. Thanks,

Thanks for the explanation. I didn't read the paper, so excuse me for asking. From what you say it sounds not like a 'new' theory but like a reformulation of GR. Maybe one that has advantages in some situations, could be. But your explanation doesn't make sense to me:

My understanding is that the Cambridge geometry group figured out how to uniquely extend that local Minkowski coordinate system into a flat coordinate system that works for all the charts, except possibly for the ones inside of black holes.

They figured out how to extend the local structure of spacetime to a fully global structure.

If you can find a globally flat coordinate system then the spacetime is flat. You can probably take out some parts (maybe even points), extend the local system as far as possible, and it might work, but I fail to see what this would have to do with black holes.

Show me a globally flat coordinate system on the sphere. You're allowed to take out black holes.

Let me quote Seth Lloyd: "The fraction of information that is truly useful is small and getting smaller, but it is not zero. Yet."

With this I therefore ask: how small is small? And how big is big?

I have the impression it's not that the fraction of useful information is getting smaller, but that the fraction of potentially useful information is getting more and more useless. A human being can only process a limited amount of information. The information that is available today far exceeds what one can possibly read/learn/process - so there needs to be some kind of sensible filter. Many people do that by relying on certain sources, or on friends' opinions, or - a particularly tricky situation - rate importance higher if they've heard something repeatedly. The problem today is not that the amount of information is too 'small' but that it's hard to find the important and relevant fraction.

What we'd need is sensible means to deal with the information we have.

Your 'answer' to my comment has hardly anything to do with what I wrote. As so often, you either deliberately or mistakenly misinterpret me, construct an opinion I don't have, and then use it to confirm your already present belief that I am stupid.

I don't think that this is an attempt to understand something. Quite on the contrary, it is an attempt to do the maximum - emit enough protective fog - not to understand anything. Could you please try to accept that the problem that you should understand exists in the first place - quantum gravity - and that this quantum gravity is not equivalent to classical physics or anything else from 19th century or high school physics and that you must learn particular new insights that you haven't known so far?

I have never denied that the 'problem' of quantum gravity exists, neither did I say it is equivalent to classical physics. The question I was asking is in what sense is classical GR coupled to classical sources inconsistent, and I thank Moshe and you for your explanations on that matter. I don't know why you interpret that as me denying quantization at the Planck scale. I can't even imagine a reason why I would want to pretend not to understand something, neither do I want to spend time on figuring out why you are accusing me of doing that.

If you always end up with some ludicrous doubts whether the problem exists at all whenever a single, even elementary, question about the subject is discussed or whenever someone like Moshe tries to explain anything about it to you, be sure that you will never understand anything about the subject.

Actually it is you who doubts that a problem exists because in your world everything is clear and you have no open questions whatsoever, lucky man.

It seems to me we have a misunderstanding about the question, so let me clarify

It's because this is how the Planck scale is defined. The Planck length is sqrt(hbar.G/c^3). Neglecting quantum phenomena is equivalent to sending hbar to zero which means that the Planck length would drop to zero meters, too.

Indeed. So, the Planck length is a length. It's 10^-33 cm or something, defined with the hbar, G, and c that we've measured (to avoid any running arguments here). Effects of quantum gravity are expected to become important at this scale. Nobody knows what that would look like; I'd really like to just look and see, but unfortunately that is not possible.

The question for me is what happens at this length scale.

The Planck scale *is* the scale where the relative quantum fluctuations of the metric are of order one whether someone likes this mathematical fact or not.

Look, Lubos, this is exactly the reason why I was trying to sort out my thoughts, something that you have apparently never done. A mathematical *fact* holds under certain assumptions. One of the assumptions one makes to derive the Planck scale is that QM remains unaltered on all scales down to 10^-33 cm. The derivation of the Planck scale is an extrapolation over 16 orders of magnitude (from the TeV scale) in which you assume nothing happens, even though we have no experimental evidence for that. Note, I am not saying there is anything wrong with that. I am just trying to find out what assumptions go into this argument.

You apparently are not able to understand the logical mistake you are making, even after I pointed it out. You have defined the Planck scale to be the scale where QG becomes important (and not as that particular length scale 10^-33 cm which you extrapolate from classical GR and QM considerations), and QG to be what becomes important at the Planck scale. According to your definition, of course QG sets in at the Planck scale, but it's a circular definition.

Since Lubos has already replied to your comments and seems to have fun with it, I see no point in scrapping them. However, I want to encourage you to be polite, and to remind you that we don't tolerate anonymously made insults.

Sabine H is tough enough and experienced enough to know how to deal with your nonsense. What if she were some young student, who might be demoralized by your writings?

Ha. The correct wording would have been, I'm demoralized enough so there's no damage left to do ;-)

But seriously, I think you are right. I am not so much bothered by Lubos' insults as by his arrogant manner of saying everything is crystal clear and if you don't get it you're stupid. Not everybody says it that openly, but it's an attitude that is not constructive and indeed discourages many young people - or results in them not daring to ask their questions. I've seen it happening, again and again.

Wow Bee, thanks! Now that is what I call a strong, even irresistible attractive force (second only to you of course - lol!)

I've been trying to fathom out where the possible limit is with the 'sight' of an evaporating black hole at the LHC. There probably is not enough significant mass to create any discernible or measurable black hole at the LHC. Any photon energetic enough to precisely measure a Planck-sized object could actually create a particle of that dimension, but it would be massive enough to immediately become a black hole, thus completely distorting that region of space and swallowing the photon.

From wikipedia: According to the Hot Big Bang Model (also called the Standard Model), during the first few moments after the big bang, pressure and temperature were extremely great. Under these conditions, simple fluctuations in the density of matter may have resulted in local regions dense enough to create black holes. Although most regions of high density would be quickly dispersed by the expansion of the universe, primordial black holes would be stable, persisting to the present.

One way to detect primordial black holes is by their Hawking radiation. All black holes are believed to emit Hawking radiation at a rate inversely proportional to their mass. Since this emission further decreases their mass, black holes with very small mass would experience runaway evaporation, creating a massive burst of radiation.

A regular black hole (of about 3 solar masses) cannot lose all of its mass within the lifetime of the universe (it would take roughly 10 to the 69 years to do so). However, since primordial black holes are not formed by stellar core collapse, they may be of any size. A black hole with a mass of about 10 to the 11 kg would have a lifetime about equal to the age of the universe. If such low-mass black holes were created in sufficient number in the Big Bang, we should be able to observe some of them exploding today.
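The orders of magnitude in this Wikipedia excerpt can be sanity-checked with the standard evaporation-time estimate t ≈ 5120 π G² M³ / (ħ c⁴). This is a back-of-the-envelope sketch assuming emission of massless quanta only; the constants are approximate SI values:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg):
    """Hawking evaporation time t = 5120*pi*G^2*M^3/(hbar*c^4), in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return t_seconds / YEAR

# A ~3 solar mass black hole: vastly longer than the age of the universe
print(f"{evaporation_time_years(3 * M_SUN):.1e}")   # roughly 6e68 years

# A ~2e11 kg primordial black hole evaporates in about the age of the universe
print(f"{evaporation_time_years(1.7e11):.1e}")      # roughly 1e10 years
```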

General relativity predicts the smallest primordial black holes would have evaporated by now, but if there were a fourth spatial dimension — as predicted by string theory — it would affect how gravity acts on small scales and "slow down the evaporation quite substantially."

This could mean there are several thousand black holes in our galaxy. To test this theory, scientists will use the Gamma-ray Large Area Space Telescope (GLAST), to be put in orbit by NASA in 2007. If they observe specific small interference patterns within gamma-ray bursts, it could be the first indirect evidence for primordial black holes and string theory.

"At the Planck scale, the strength of gravity is expected to become comparable to the other forces, and it is theorized that all the fundamental forces are unified at that scale, but the exact mechanism of this unification remains unknown."

Microscopic black holes are thus a paradigm for convergence. At the intersection of astrophysics and particle physics, cosmology and field theory, quantum mechanics and general relativity, they open up new fields of investigation and could constitute an invaluable pathway towards the joint study of gravitation and high-energy physics.

Luboš wrote: String theory with all winding modes removed is exactly equivalent to a "local" quantum field theory on the relevant manifold. One can easily see it. All normal fields in spacetime - including perturbations of the metric - come from non-winding strings. That's why the non-winding portion of string theory exactly includes a local theory on a background manifold that you may imagine classically.

Dear Luboš : this is good.

However, if it were up to me, I'd take this low-energy local QFT and perform experiments to decide whether the universe is in string theory A or its T-dual B. Then I'd interpret my very-high-energy probes to be probing the appropriate manifold.

You're welcome to discuss. My apologies that I don't have time to follow the details.

Dear Quasar:

How charming :-) But I'm admittedly not a fashion geek, so I doubt I will ever end up in a Vogue issue on 'ageless beauty'.

Regarding the primordial black holes that could evaporate 'right now' and be potentially observable: well, in principle I'd think it is possible. But what about all those that would have already evaporated? Their decay would have made a contribution to the CMB that we haven't seen, and many processes in the early universe would have been modified by their evaporation. Whether one thinks that makes the topic more interesting, or more unlikely to be true, is another question. Bernard Carr has done quite some work on the topic; he had a recent paper that I found very readable, see

The creepy part of this kind of discussion is that one doesn't say that RHIC collisions "create" black holes, but that nucleus-nucleus collisions, and even proton-proton collisions, are in some sense black holes, albeit black holes in some sort of "dual" space which makes the theory easier.

New physics happening all the time? :) What are the relativistic conditions within the superfluids?

You may find this interesting. Peter Woit posted this link to a 't Hooft talk.

"My problem with this is the same di±culty I have with claims from both supergravity and string theory today. Just imagine that indeed the integrals that one needs to perform would converge, or at least become su±ciently regular, whenever the momentum variables k¹ would tend to infinity. This would mean that some kind of decoupling would take place at very high momentum, or equivalently, at very tiny distance scales. This decoupling would indicate that, at tiny distance scales, our system would linearize, decouple, simplify. We would be able to describe a smooth and comprehensible world at distance scales smaller than, say,the Planck length. Why not try to imagine such a world directly? The point is that this is impossible. Newton's constant would tend to infinity there, or, distances in space and time, as what we are familiar with, cease to make sense there. This is characteristic for atopological theory. Thus, gravity must become purely topological at small distances. As long as we do not have such a topological theory, chances that we stumble upon one by blindly manipulating superpropagators, supergravity diagrams or string world sheets, are remote. Our searches should be well directed ones."

Hi Bee, thanks for the Carr link. You find the topic of microscopic black holes interesting: "I would even hope they constitute a pathway towards the understanding of gravity at the Planck scale."

And yet you argue that Their decay would have made a contribution to the CMB that we haven't seen, and many processes in the early universe would have been modified by their evaporation.

In general relativity there is no minimum mass for a black hole and there is no gravitational time dilation limit, other than (the unrealistic) zero seconds per second. Very small-mass black holes would look like elementary particles because they would be completely defined by their mass, charge and spin.

Bee, I am not arguing that microstate black holes (with a singularity) do not 'exist', but rather that they are not likely to exist 'alone' (for any length of time).

Bubbles - massive primordial black holes ("bubbles") resulting from the merger of smaller black holes. Under these conditions, simple fluctuations in the density of matter may have resulted in local regions dense enough to create black holes. Although most regions of high density would be quickly dispersed by the expansion of the universe, primordial black holes would be stable, persisting to the present.

Physicist Brian Greene has suggested that the electron may be a micro black hole (or black hole electron). Small black holes would look like elementary particles because they would be completely defined by their mass, charge and spin. On this view, the significance of the Planck mass is that it marks a transition where the Hawking semi-classical approximation breaks down, and a fully quantum mechanical description of the system becomes required. Gravitationally dominated "black hole"-like structures might still exist with these lower masses, but the emission of Hawking radiation would be suppressed by quantum effects, just as an electron constantly accelerating round an atom does not radiate, despite the apparent predictions of classical electrodynamics.

All that can be said with certainty is that current predictions for the functioning of a black hole with a mass less than Planck mass are inconsistent and incomplete.

Wheeler worked on the concept that the things we call particles (electrons) can be accounted for by an inner spacetime structure. A quote from the book Gravitation by Misner, Thorne and Wheeler (page 1215) relates to this concept; "What else can a particle (electron) be but a fossil from the most violent event of all, gravitational collapse?".

As a description, the black hole electron theory is incomplete. The first problem is that black holes tend to merge when they meet. Therefore, a collection of black-hole electrons would be expected to become one big black hole. Also, an electron-positron collision would be expected to produce a larger neutral black hole instead of two photons as is observed. These problems reflect the non-quantum nature of general relativity theory.

A more serious issue is Hawking radiation. According to Hawking's theory, a black hole the size and mass of an electron should vanish in a shower of photons (not just two photons of a given energy) within a small fraction of a second. Again, the current incompatibility of general relativity and quantum mechanics at electron scales prevents us from understanding why this never occurs.
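To put some numbers on why general relativity and quantum mechanics clash at electron scales, here is a small illustration of my own (standard constants, rounded): the Schwarzschild radius of an electron-mass object lies dozens of orders of magnitude below both the Planck length and the electron's reduced Compton wavelength, so the semiclassical black-hole picture cannot apply there.

```python
# Sketch: compare the length scales relevant to a "black hole electron".
import math

G, HBAR, C = 6.674e-11, 1.055e-34, 2.998e8
M_E = 9.109e-31  # electron mass, kg

r_s      = 2 * G * M_E / C**2          # Schwarzschild radius of electron mass
lam_c    = HBAR / (M_E * C)            # reduced Compton wavelength
l_planck = math.sqrt(HBAR * G / C**3)  # Planck length

# r_s ~ 1e-57 m sits far below the Planck length ~ 1.6e-35 m, which in
# turn is far below the Compton wavelength ~ 4e-13 m.
print(f"r_s      = {r_s:.2e} m")
print(f"l_planck = {l_planck:.2e} m")
print(f"lambda_C = {lam_c:.2e} m")
```

The huge gap between r_s and the Compton wavelength is a compact way of seeing the claim above: below the Planck mass, the quantum "size" of the particle dwarfs its would-be horizon.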

You said you are collecting arguments or viewpoints, so let me add one more view here that will undoubtedly seem extreme, but, what the heck, it doesn't hurt to have a few persons taking less traveled roads and working in uninhabited areas of physics.

I am working on the assumption that we need to backtrack to the starting points of the two main branches of metaclassical physics, relativity and quantum theory, and reexamine things in order to understand the physical meaning of these phenomena.

I think that physics today is still reeling from the effects of the experiments that created metaclassical physics, i.e. the Michelson-Morley experiment on the one hand, and Black Body Radiation and the Photoelectric Effect on the other. The phenomena that came to light through these experiments were hugely different from the physical reality of everyday experience, and very quickly physicists despaired of ever being able to physically understand what their mathematics told them. Their attitude was, "Nobody can make sense of this, but we can't stop working and ponder things forever. We have to go on".

The lack of a physical understanding led to an inordinate emphasis on mathematics, which was used without a clear understanding of the underlying physical reality it was attempting to describe, to the point that physics today could arguably be more accurately described as "physical mathematics" than as "mathematical physics". It has also led to things that, imho, cannot be reasonably defended, such as renormalization - a clear case of "nothing works, but we must do something even if it is blatantly wrong".

Now, about what I have done and what I am trying to do. So far I have proven that the phenomena of special relativity are due to the curved expanding universe, and that the Minkowski spacetime is in fact a projection of the curved expanding universe on a straight coordinate system. Of course, someone ( :-) ) said that this is just a geometric construction of the Lorentz transformation, but anyone can easily see and prove for themselves that the equations of the Lorentz transformation are the equations of an expanding sphere with radius t (details here). If it barks like a dog, it may be a dog after all. Now, quantum field theory depends on particle fields embedded in the flat spacetime of special relativity. What are the implications of the fact that the "flat" spacetime of special relativity is in fact a projection of the curved expanding universe?

Next I will try to introduce simple Newtonian gravity into the above formulation of special relativity and see where it leads me. And working in the opposite direction, I want to build an extrinsic formulation of General Relativity and see if it agrees with Special Relativity with gravity. I think that in this way things will be much clearer as to what curved spacetime means (what is curved with regard to what?).

Having arrived at such a formulation, the next step will be to apply it to electromagnetic waves, and see what I can come up with. I suspect EM waves will prove to be fluctuations of the curvature produced in spacetime by electric charge. And then I may be able to see how and why these fluctuations may be quantized as to the energy they carry. Yes, I am moving in the opposite direction, using something that seems more understandable to me (general relativity) to understand something that seems much more strange (quantization of EM waves).

You may say that I am trying to reinvent the wheel, but it is not so. I want to know the physical reality of gravity and quantum phenomena, and the present theories do not tell me how and why gravity and quantum phenomena work as they do. They have not advanced our understanding of physical reality, they have only advanced our epicyclic descriptions of it.

I know that I have not answered your question, why should gravity be quantized, and what does that really mean. It is an answer that I do not have yet. But I am also of the school that wants to know what something "really means" and why, and I just thought I'd give you and your readers my take on things.

So this is my way to quantum gravity (one of the fifty or so :-) ). A hundred years of failed efforts, so scratch everything and start from the beginning because things have gone astray at some point and we have ended up at an impasse.

Excuse me for insisting, but I am still thinking about your reply. I can follow your arguments, and I think you are correct, and so is Lubos - classical GR certainly is not a candidate for a fundamental theory. There is a priori no good reason to throw away higher order terms in GR - in principle they should be there. You are saying that if one integrates out high frequencies, the effective description one gets has all these higher order terms as well. This seems to me a theory one would not trust, one that is sensitive to all that higher frequency stuff. It would be a theory that is unstable at small distances/high energies etc. These are all good indicators that classical GR is not a fundamental theory, and (despite what Lubos thinks I think) I don't think it is.

But it seems to me you are referring to the word self-consistent in a different meaning than I do. Do I understand correctly that you mean the regularized theory, after integrating out, would obtain all these additional contributions that were not initially present? By integrating out, one does not change the theory. One just obtains an effective description that might be physically more appropriate, but is mathematically equivalent to the original one. This can illuminate pathological or nasty properties that we don't want - but it doesn't make the theory inconsistent in the mathematical sense. We know that classical GR coupled to classical sources has singularities, and it's not possible to avoid these within the context of the theory. That also is a physically nasty property that we don't want, but it doesn't mean the theory is not self-consistent in the meaning that I've outlined above: that its assumptions are in disagreement with each other.

So in regard to my use of the meaning of self-consistent, may I repeat my question: why would classical GR coupled to a classical source not be self-consistent? I still fail to see what would be an inconsistency in the formulation of that theory. Let me repeat again that I don't dispute the theory has a behaviour at short distances that is physically unacceptable and indicates it should be replaced with something else; it's just not the question that I was asking.

"I want to know the physical reality of gravity and quantum phenomena, and the present theories do not tell me how and why gravity and quantum phenomena work as they do. They have not advanced our understanding of physical reality, they have only advanced our epicyclic descriptions of it."

I can only repeat what I've already told you. If you're unsatisfied with the present interpretation of quantum theory or gravity, you're welcome to think about an improvement. But if your theory is equivalent to the existing description in all observables, then it is the same theory (or the same dog if you like, even if you didn't hear it barking before). In this case, you've found a new formulation, which might indeed be more useful in one case or the other. If it's not equivalent, then you will have to convince people your formulation is mathematically sound (pictures won't do) and to carefully consider all the known observables. That is to say, your theory should be self-consistent and consistent with observation.

then you will have to convince people your formulation is mathematically sound (pictures won't do)

Well, the pictures are produced mathematically by the simulation, and the mathematics is also presented along with the pictures, but anyway, the post was not intended as an endorsement request, just a reminder of another possible way of going about this. I'll be getting out of your way now. It is good, though, to see you trying to take on quantum gravity. Keep us posted.

About the 'fi' syllable mystery: probably the text was written in a non-standard LaTeX or PDF font where 'fi' is rendered as a ligature, a combination of two letters in one symbol (other frequent examples are oe and th), placed at a position where standard fonts have the funny symbol we end up seeing.

Hi B., sorry for misunderstanding. When people talk about not quantizing gravity I assume they keep everything else we know fixed, including quantum mechanical matter. The result then is not self-consistent. You apparently mean something else by not quantizing gravity...

So, regarding a completely classical world, the interest is of course a little academic. But in any event, I would call classical physics not self-consistent (with or without gravity), because it leads to physically nonsensical results. I don't need to go out and observe anything to know that the total radiation from a black body is not infinite, and that the curvature inside black holes is not infinite.

Maybe that is not lack of self-consistency in your definition, since mathematically speaking there is no logical contradiction in having infinite radiation or singularity. In any event, this is a choice of words, we don't disagree on any point of substance.

Thanks. Yes, I think this clarifies the issue for me. And yes, the question is merely academic. As I've written above, classical GR with a classical source is clearly not consistent with observation if one takes the microscopic limit.

Hi Arun:

That would be an example. There one would have classical EM, and a point charge results in nonsense. Another example would be a classical fluid stabilized inside its Schwarzschild radius. Or a static n-body solution in GR (I once sat through a seminar about these, believe it or not). Or any kind of theory that predicts probabilities that are infinite.

Anyway, I think I've managed to clear that point with the classical sources. Sorry if my insistence appeared to be nit-picking, it's been very helpful for me.

Some postulates in physics can be challenged without challenging logical consistency or known behavior. One postulate that I doubt is the idea that we can't get better than binary answers to questions about the polarization of a single photon (basically, the projection postulate). Knowing that isn't like knowing position and momentum accurately, since the latter would violate the consistency of the Fourier composition of the wave function. It seems odd to me, since we can produce photons with definite polarization traits, like "elliptical: 80% RH and 20% LH at major axis angle 15 degrees", meaning an equivalent pass filter would have to let it through. So why must it not be possible to "back engineer" such a photon if you didn't already know that?

I proposed a way to perhaps do that, based on known properties of half-wave plates and no unusual theories or assumptions. The discussion I started about that on sci.physics.research made its way to #1 etc. on Google search for "quantum measurement paradox". (For example, see hit #2, which gives a good rundown of my original proposal - which sadly wouldn't be easy to do in practice.)

Most commenters agreed it was at least a good try and brain-teaser, and maybe not with a simple accepted answer to be affirmed and noted. My later version of that paradox involves sending a polarized photon through the same half-wave plate many times, with a corrector to reverse its circularity again after each pass, instead of using a set of many HW plates. The point is, I said you could measure the magnitude of circular polarization - plus, zero for linear, negative, etc. - not just yes/no results, based on how much angular momentum was collected after many passes. Hey, if many photons of given polarization passing through once each would give that final result, then one photon passing through a plate many times should also: photons don't have "identity." Since it would take millions of passes to get measurable angular momentum, this is hard to do - but it also thereby doesn't contradict known experiments, almost all of which are based on single interactions with particles! This is related to the now-hip concept of "weak measurements", which looks more valid all the time, AFAIK. Comments?

Lubos: What do you mean that you despise people doing "theory" for their career? That is a pretty broad swath and would include people like Weinberg, Witten, Gross, Hartle, etc. Was something left out? I agree about anonymous posters. Why not eliminate the category?

thank you for your comments! Following the discussion, things become a little bit clearer to me. Moreover, simply googling for "Navier-Stokes" and "renormalization group" produces some useful links; I could have thought about it before ;-) ... and will have to try to study and think about this...

Excuse me for being brief, but what you write doesn't make sense, and being a Google hit doesn't prove anything. The only thing of your writing I agree on is the first sentence. You apparently haven't really understood what the spin of a particle is. It's a quantum number; it doesn't mean the photon 'points' in any specific direction. If you doubt that the photon's spin can take only the values +/-1, then you will have to a) let the electron have values other than +/- 1/2, b) violate conservation laws, or c) let the photon couple to electrons only occasionally. Either way, it's not consistent with observation. I encourage you to look up a textbook on quantum mechanics to figure out what the quantum number of spin has to do with the polarization of a light wave. Best,

OK, I need to explain the point of my “new quantum measurement paradox” better. Sure, there’s a quantum number for spin, and photons are always “found” in non-weak (ordinary “collapse” producing) measurements to have spin either plus or minus (spin axis parallel or anti-parallel to motion, using standard conventions.) I didn’t imply that photons can point in any other directions. However, pls. "look up a textbook on quantum mechanics to figure out what the quantum number of spin has to do with the polarization of a light wave" indeed! I based everything I said on what we already know, except for the idea of what kind of final result we can get with multiple interactions. I say multiple interactions don’t produce the same sort of results that a single detection would, that’s my main point.

Did you think we couldn't produce a "linearly polarized photon" because of the spin issue about particles? Of course we can, or else we couldn't ensure its passing through a matching linear polarization filter. Maybe you forgot about superposition of states. Photons can be superpositions of the pure circularly polarized plus and minus spin base states. The general representation is [photon wavefunction] = Az |+> + B |->, using typical keyboard approximations for bra-ket etc. A and B are amplitudes (so A^2 + B^2 = 1) and z is a complex number of modulus one that expresses the relative phase between the constituent base states. This composite wave is certainly "real" in the sense that vector addition of the equivalent E and B vectors produces the resultant polarization. If A = B, then we have a linearly polarized wave, with z determining the angle. We know that photons of this type are not to be confused with a mixture of purely plus and minus CP photons, since all of the former will go through the matching linear filter instead of random 50/50 hits.

You're thinking: if we set up a conventional experiment to measure spin, we get a hit of either plus or minus and no middle ground. That's true, so far. But consider what happens if thousands of photons of a given composite A and B makeup go through a half-wave plate. HWPs flip the spin of photons: they switch the values of A and B for the spin components (so 0.8, 0.6 turns into 0.6, 0.8 etc., not just for pure states). The photon may come out with a different polarization angle, but the circularity value (A^2 - B^2) is always of the same magnitude but opposite sign. We know, because after passage we can always still get a linear photon to go through the matching linear filter, etc. However, despite there being no "collapse" of the passing photons, their going through a HWP still causes an average fractional increase in spin, based on the average angular momentum for the combined state. IOW, since the spin is flipped, thus doubling the transfer to the plate: Delta S = 2n(A^2 - B^2)hbar, where n is the number of photons passing through. We know that has to happen, from either experiments (like the Beth experiment) or optical theory.

Now here's the rub: suppose that, instead of sending many photons of a given mixed state through the HWP, we send the same photon over and over again, by reflecting it around a course. (We need to re-invert the circularity, which can be done with either a correcting second HWP or mirrors.) According to QM, that should produce the same result as sending the same number of photons through "once each", due to indistinguishability. So if we start with a photon with A = 0.8 and B = 0.6, it comes out from the first pass with A = 0.6 and B = 0.8. Then it is corrected back to A = 0.8 and B = 0.6, passes again, etc. Now suppose the photon goes through one million times. That should be the same as one million photons going through once each, for a total angular momentum transfer to the plate of 560,000 hbar. That's enough angular momentum to measure classically, and now we find out approximately the circularity of the photon, not a mere yes/no answer to "is it plus or minus circular" with 64% chance of getting "plus" and 36% chance of getting "minus." If you are worried about evolution of the wave during repeated passes, that is a good question. It may happen, and I want investigation of that. However, if the photon starts linear or close to that, the wave should tend to wander back and forth in circularity and not grow towards one or the other eigenstate.
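The repeated-pass bookkeeping described above can be written out as a toy calculation. This is only a sketch of the commenter's own accounting rule (HWP swaps the circular amplitudes, plate absorbs 2(A^2 - B^2)*hbar per pass), not an established experimental result, and the function name is made up for illustration:

```python
# Toy model: photon with circular-basis amplitudes (a, b), a^2 + b^2 = 1,
# passes a half-wave plate n times.  Each pass flips |+> <-> |->, so the
# plate absorbs twice the photon's spin expectation value; a "corrector"
# then restores the state for the next pass.
HBAR = 1.0  # work in units of hbar

def repeated_hwp_passes(a, b, n):
    """Total angular momentum transferred to the HWP after n passes."""
    transferred = 0.0
    for _ in range(n):
        s_before = (a**2 - b**2) * HBAR  # photon spin expectation value
        a, b = b, a                      # HWP flips the circularity
        transferred += 2 * s_before      # plate absorbs the difference
        a, b = b, a                      # corrector re-flips for next pass
    return transferred

# A = 0.8, B = 0.6, one million passes: 2 * 1e6 * 0.28 ~ 560,000 hbar,
# the figure quoted in the comment.
print(repeated_hwp_passes(0.8, 0.6, 1_000_000))
```

The toy model makes the linear case explicit too: with A = B the transfer per pass is zero, so no amount of repetition accumulates angular momentum, consistent with the claim that the readout measures circularity.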

PS: I don’t think my point is valid because it racks up top hits on Google search. I think it’s valid because I did a good job analyzing the situation and making the argument.

Anonymous, why is it so difficult for you to reveal yourself? It's so very cowardly to throw out libelous statements anonymously.

Are you the epitome of "academic success"? If so, then why would anyone in his right mind want to be like you, too cowardly to own your words? Are you ashamed because you know that you are really lying, or is it because you're just too stupid to accept the truth?

On the one hand you accuse Lubos of being a parasite, but on the other hand aren't you the anonymous parasite with nothing to offer but lies and insults? You certainly aren't bringing forth any original ideas yourself. Everything you said about Lubos is actually true of yourself and that's why you're too embarrassed to show yourself.

The original theme of Bee's post was "consistency". Just reading through this comment thread it's very clear who's consistent and who isn't.

wow, in this blog canaries can really send feathers around the cage (Bee, great metaphorical story btw! Laughed at it). But it was interesting to read the exchanges anyhow.

I have an addition to the prologue of the post - Bee, you mention "physics talk" in ordinary life. I think one thing that is used a lot - and confuses matters a bit - is the word "conservative". As an experimentalist, I use it a lot when I choose a loose cut rather than an aggressive one, when I set a systematic uncertainty at 10% when I know it is probably 6% or 7%, and when I choose a simpler technique over a more complex one.

But I also use the word when I choose the same pizza over and over, or the same restaurant - or in tens of other instances.

Is it me and my flawed English, or are other readers here using it at large?

Who was the one using the obscenities? Not me. Fact: people like LM actually harm other people, by demoralizing them. It's time somebody told him this truth. Do you actually approve of the way he addressed himself to our hostess here? Fact: quite a bit of what LM writes about *physics* is actually nonsensical. People should be notified of that too. Fact: in English there is a saying: "Give him some of his own medicine." Maybe there is no analogous saying in Czech. Finally: yes, my academic career has been more successful, though that is not much of a boast... Anyway, I'm pretty sure LM got the message, so enough is enough.

Hi Tommaso, interesting you mention the word 'conservative'. I also frequently use it, mostly in the literal meaning of 'to conserve/conservare' - i.e. similar to what you say, same pizza :-) It often irritates my German friends though, since in German 'the conservatives' (die Konservativen) usually refers to the party CDU (politically close to the US Republicans). Best,

I should make my quantum measurement paradox easier by posting a short summary as an alternative to all that stuff up there. OK, first what we already know (conservation of angular momentum, averaging, the Beth experiment): if lots of photons go through a HW plate (a retarder that delays one linear component more than the orthogonal one), the total angular momentum transferred to the HWP is a function of the photons' average angular momentum:

delta spin = 2n(A^2-B^2)hbar,

using A and B for base circular state amplitudes.

Here's my adjustment: suppose we use one photon instead, but send it through the HWP many (n) times. That should be the same as n photons going through once each. IOW, the formula should be the same as before, only that now "n" means how many times instead of how many photons!

I say we could use that to find out the photon's circularity along a continuum, which tradition says we can't do.

I hope this helps, and I also appeal to sympathy and open mindedness about this, tx.

(Quick note: the transiting photon has to be re-flipped back to its original circular sign, so it can keep re-entering the HWP with the same sign, if any, as it had before. That can be done with mirrors or another HWP.)

Thanks for your explanations. Interestingly, Stefan and I discussed a very similar experiment only last week. My suggested setup was different (to begin with, it's not with photons), and my motivation also has a different origin, but the basic idea was the same: instead of making a quantum measurement with many particles, repeat it with the same particle. Neither Stefan nor I could recall such an experiment having been done before. Since it seems you have spent some thought on the situation, do you happen to know whether anything like this has been done before, and what sources have you already checked? Best,

It's great to hear of someone else thinking about multiple interactions for quantum measurement. I got the idea for my measurement paradox around 1999, just trying to push the envelope, not prompted by other investigations I can recall. What originally got me thinking was the way half-wave plates handle polarized light. They interact with it (because of how they build up angular momentum, per the Beth experiments [R. A. Beth, Phys. Rev. 48, 471 (1935); Phys. Rev. 50, 115 (1936)]). So, each photon is "detected" in some sense by a HW plate, because some increment of angular momentum is imparted (even if that increment can't be detected separately, we must assume that there is some shift in the average expectation value for the AM of the plate, so that we can finally get enough change to be measured). Conservation requires that the amount transferred equal the doubled net average AM of the beam, hence the formula delta spin = 2n(A^2 - B^2)hbar. However, unlike the expectation of "collapse" for a detection, the transiting photon is not converted into one or the other circular eigenstate. I thought that might have odd consequences for measurement.

For comparison here's the intuitive model, applying to traditional single-hit measurements. In this case it turns out to be wrong: a photon of arbitrary composition (A, B), referencing amplitude of |+> and |-> bases, enters a "detector" and makes a hit. It transfers either plus or minus spin, with probability A^2, B^2, and then is converted into either a pure |+> or |-> photon accordingly (note it would be reversed angular momentum, so the amplitude for plus would be converted into how many minus photons etc.) Well, we know that isn't what happens! Looking at both the classical wave theory and known experiment, a HWP converts light (and thus per each photon) according to well-known rules of transformation. Simply, the light circularity changes sign, so A -> B and B -> A. (For example a linear photon (A = B) comes out still linear, and we know because we can arrange to have it pass a properly oriented linear filter (polarization axis is reflected around the fast axis of the plate.) If the traditional behavior applied instead, linear pol. light would randomly produce 50/50 plus and minus circular photons, which would have 50/50 chance of passing any given linear filter.

Then it was only natural to ask: why not just pass the exiting photon back through, again and again? We should be able to measure its circularity, long thought impossible, from the cumulative transfer of angular momentum. I had a cumbersome stack of multiple plates in my original top-Google paradox, but later I thought to send the photon through the same HWP over and over (with mirrors or a second plate to re-invert the sign of circularity). Well, the indistinguishability principle says "the same photon" can't be tracked, so the HWP can't react like "What, *you* again? I can't take on your spin a second time", to put it cutely. Hence, the plate must build up spin proportionately over many passes.

Well, the possible bug is the evolution of the wave function, but even then: if the photon had the usual chance of ending up plus or minus, a linear photon has an equal chance of going either way, and almost equal if it's only slightly circular. Hence, it would keep bobbing near the linear state, like the drunkard's walk problem. The net AM transferred for large n would still be near zero, proving it started out "linear."
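The scaling argument above can be sketched numerically. This is a toy comparison of my own making, under a deliberately naive assumption: the deterministic swap rule transfers exactly 2n(A^2 - B^2)hbar after n passes, while a "collapse on every pass" caricature (each pass an independent +/-2hbar transfer with probabilities A^2, B^2) is a drunkard's walk whose total only grows like sqrt(n), so for a linear photon it stays near zero.

```python
import random

def swap_model(n, A2, B2):
    """Deterministic swap rule: each pass transfers 2(A^2 - B^2) hbar,
    so n passes transfer 2n(A^2 - B^2) hbar exactly (in units of hbar)."""
    return 2 * n * (A2 - B2)

def collapse_model(n, A2, rng):
    """Naive per-pass collapse: each pass independently transfers
    +2 hbar with probability A^2, else -2 hbar (a drunkard's walk)."""
    return sum(2 if rng.random() < A2 else -2 for _ in range(n))

rng = random.Random(0)
n = 10_000

# Linear photon: A^2 = B^2 = 1/2.
print(swap_model(n, 0.5, 0.5))           # 0: no net transfer, exactly
print(abs(collapse_model(n, 0.5, rng)))  # of order sqrt(n), tiny next to 2n
```

The point is only the scaling: a cumulative transfer near zero after many passes is consistent with the photon having started linear, which is what the argument above relies on.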

For similar ideas, search "weak measurements" on Google and Google Scholar. The work of Yakir Aharonov concerns similar issues; see for example this Link. However, I haven't found authors who actually say we can get around the projection postulate and find out things beyond the expected binary measurements. I still can't find a good example of the same particle interacting repeatedly, but I will look some more. I still hear talk of "measuring the wave function," which sounds like being able to do that.

This is uncharted territory. Please let me know what you folks are up to. If you want to send email instead of posting, try paradoxer42 at lycos dot com, after dropping the number. Thanks.

Well... I had a rather depressing encounter with a Google search for "quantum measurement paradox" a few days ago. The discussion thread on sci.physics.research that I referred to above as being in the top 2-4 hits for the past few years has dropped to no higher than 10th place. This happened in the past few days. Can anyone imagine why? Did Google change their way of ranking things? It can't be that just one site started getting linked more (maybe due to a deliberate effort), because several got ahead of my reference. Any ideas?

PS: The measurement paradox referred to is still quite interesting, IMHO. Indeed, you can read up on it better at the following page Link than at the links formerly getting the top Google nod, because those somehow went to later replies. (Maybe the replies linked to academic web sites of the commenters, no longer in force?)

Sorry for the delay. I currently don't have the time to really think about the experiment, but I will come back to it sooner or later. Thanks for leaving your email, so I know how to contact you. Regarding Google: I'm not interested enough in Google to really know the details of their search algorithm, but it seems to imply some kind of ordering by date. I'm not sure though why you are bothered by it. Google ranking is not the kind of quality rating you should rely on. Being a top Google hit doesn't mean anything. I've been the top hit for 'Denzel Washington's marital status', even though I have no idea whether he's single, married, straight, gay or even dead.

Regarding Google ranking, you might also be interested in my recent post on The Right not to Know and the comments there. Best,

Hi Bee, You have a point. Still, I think there is a "media advantage" to being able to say I have the top pick in Google for a subject I care about and am trying to get attention for. Well, somehow that thread comment about "quantum measurement paradox" moved from #9 back up to #7 today. In any case, I look forward to hearing from you about your experiment in repeated quantum interactions. (Er, I hope you guys wouldn't mind at least mentioning me as someone who talked about doing such experiments...) The orthodoxy just doesn't pay enough attention to repeated interactions, being hung up on the "big bang" protocol of collapse-producing events.

Could you please suggest a book that defines and elaborates on what a Fock space is? I am a physics student interested in, let's say, high energy physics, so I come across Fock spaces all the time... but I still don't know how it is defined (although I know the practical rules for how to do calculations... which sometimes seem to be the only thing that matters in physics textbooks).

Bee, Volume 3 by SW is QTF: Supersymmetry. Can you recommend reading it? I do look forward to reading Volume 2, and wondered if a break after that wouldn't be warranted to read Anthony Zee's Quantum Field Theory in a Nutshell.