Sunday, December 15, 2013

Last week, on December 6, 2013, I defended my PhD Thesis.
The Thesis is named Singular General Relativity, and can be found at arXiv:1301.2231.

Thesis Abstract:

This work presents the foundations of Singular Semi-Riemannian Geometry and
Singular General Relativity, based on the author's research. An extension of
differential geometry and of Einstein's equation to singularities is reported.
Singularities of the form studied here allow a smooth extension of the Einstein
field equations, including matter. This applies to the Big-Bang singularity of
the FLRW solution. It applies to stationary black holes, in appropriate
coordinates (since the standard coordinates are singular at the singularity, hiding
the smoothness of the metric). In these coordinates, charged black holes have
the electromagnetic potential regular everywhere. Implications for Penrose's
Weyl curvature hypothesis are presented. In addition, these singularities
exhibit a (geo)metric dimensional reduction, which might act as a regulator for
the quantum fields, including for quantum gravity, in the UV regime. This opens
the perspective of perturbative renormalizability of quantum gravity without
modifying General Relativity.

The Thesis is based on a series of papers, several of which are published or accepted.

Friday, October 4, 2013

After briefly reviewing the so-called black hole wars and expressing my doubts about black hole complementarity, there are still many things to be said. However, I would like to skip over the various solutions proposed in recent decades, and discuss the one that I consider most natural.

All the discussions taking place within the last year around black hole complementarity and the firewall are concentrated near the event horizon. But why look for the information at the event horizon, when it was supposedly lost at the singularity?

Remember the old joke about the policeman helping a drunk man search for his lost keys under a streetlight, only to find out later that the man had actually lost them in the park? When asked why he searched for the keys under the streetlight, the drunk man replied that the park was too dark. In science, this behavior is called the streetlight effect.

By analogy, the dark place is the singularity, because it is not well understood. The lighted place is the event horizon. This is Schwarzschild's equation describing the metric of the black hole:

$${d} s^2 = -\left(1-\frac{2m}{r}\right){d} t^2 + \left(1-\frac{2m}{r}\right)^{-1}{d} r^2 + r^2\, {d}\sigma^2,$$

where ${d}\sigma^2 = {d}\theta^2 + \sin^2\theta\, {d} \phi^2$ is the metric of the unit sphere $S^2$, $m$ is the mass of the body, and the units were chosen so that $c=1$ and $G=1$.

Schwarzschild's metric has two singularities, one at the event horizon, and the other one at the "center".

But in coordinates like those proposed by Eddington and Finkelstein, or by Kruskal and Szekeres, the metric becomes regular at the event horizon, showing that this singularity is due to the coordinates used by Schwarzschild. Fig. 1 represents the Penrose-Carter diagram of the Schwarzschild black hole. The yellow lines represent the event horizon, and we see that the metric is regular there.

Figure 1. Penrose-Carter diagram of the Schwarzschild black hole.

While at the event horizon the darkness was dispersed by finding appropriate coordinates, it persisted at the central singularity, represented in red. This is a spacelike singularity, and it is not actually at the center of the black hole, but in the future. This kind of singularity cannot be removed completely, because it is not due exclusively to the coordinates.

However, in my paper Schwarzschild Singularity is Semi-Regularizable, I showed that we can eliminate the part of the singularity due to coordinates, by the transformation $r = \tau^2$, $t = \xi\tau^4$. The Schwarzschild metric in the new coordinates becomes

$${d} s^2 = -\frac{4\tau^4}{2m-\tau^2}\,{d}\tau^2 + (2m-\tau^2)\,\tau^4\left(4\xi\, {d}\tau + \tau\, {d}\xi\right)^2 + \tau^4\, {d}\sigma^2.$$

The metric is still singular, because it is degenerate, but the coordinate singularity was removed. The metric extends analytically through the singularity $r=0$, and the Penrose-Carter diagram becomes as in Fig. 2.
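The coordinate change can be verified mechanically. The sketch below (SymPy, restricted to the $(t,r)$ block of the metric, which is where the transformation acts) pulls the Schwarzschild metric back through $t=\xi\tau^4$, $r=\tau^2$, and checks that the resulting components are finite, and in fact all vanish, at $\tau=0$:

```python
import sympy as sp

m, tau, xi, r = sp.symbols('m tau xi r', positive=True)

# (t, r) block of the Schwarzschild metric: ds^2 = -(1 - 2m/r) dt^2 + dr^2/(1 - 2m/r)
g = sp.diag(-(1 - 2*m/r), 1/(1 - 2*m/r))

# Coordinate change t = xi*tau^4, r = tau^2, and its Jacobian w.r.t. (tau, xi)
t_new, r_new = xi * tau**4, tau**2
J = sp.Matrix([[sp.diff(t_new, tau), sp.diff(t_new, xi)],
               [sp.diff(r_new, tau), sp.diff(r_new, xi)]])

# Pull-back of the metric: g_new = J^T g J, with r replaced by tau^2
g_new = (J.T * g.subs(r, r_new) * J).applyfunc(sp.cancel)

print(g_new)               # rational in tau, with denominator 2m - tau^2 only
print(g_new.subs(tau, 0))  # all entries vanish: degenerate (but finite) at tau = 0
```

The components no longer blow up at $\tau=0$; instead the whole matrix becomes zero there, which is exactly the degeneracy the text describes.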

In the new coordinates, the singularity behaves well. Although the metric is degenerate at the singularities, in arXiv:1105.0201 I showed that this kind of metric allows the construction of invariant geometric objects in a natural way. These objects can be used to write evolution equations beyond the singularity.

The Schwarzschild metric is eternal, but in the case relevant to the problem of information loss, the black hole is created and then evaporates. The analytic extension through the singularity presented earlier also works for this case, and the Penrose-Carter diagram is shown in Fig. 3.B.

Figure 3. A. Penrose diagram for the evaporating black hole, standard scenario. B. Penrose diagram for the evaporating black hole, when the solution is analytically extended through the singularity (as in arXiv:1111.4837). In the new solution, the geometry can be described in terms of finite quantities, without changing Einstein's equation. Fields can go through the singularity, beyond it.

Information is no longer blocked at the singularity. The physical fields can evolve beyond the singularity, carrying the information, which is therefore recovered if the black hole evaporates.

This is not a modification of General Relativity; it is just a change of variables. The proposed objects remain finite at the singularity, and the standard equations can be rewritten in terms of these new, finite objects. These objects are natural, and don't require a modification of Einstein's General Relativity. The proposed fix is made not by changing the theory, but by changing our understanding of the mathematics expressing the theory.

A principal inspiration for me in finding this solution was the work of David Finkelstein, especially his brilliant solution to the problem of the apparent singularity at the event horizon. Imagine how happy I was when, at the end of December 2012, I received by email the following encouragement from him:

Dear Cristi Stoica,

I write concerning your paper "Schwarzschild Singularity is Semi-Regularizable" (arXiv 1111.4837v2). I write first to thank you for the deep pleasure that this paper afforded me. Your regularization of the central true singularity of the Schwarzschild metric is a remarkable and beautiful example of thinking outside the box. It is a natural, generally covariant, and deep result on a problem that has drawn wide attention, that of gravitational singularities. You found your solution easily once you conceived the idea, and yet it has been overlooked for these many decades by the truly great minds in the field.

[...]

With good wishes for your future explorations,
David Finkelstein

Thursday, October 3, 2013

In this post, which continues Black Hole Paradox 1. Susskind vs. Hawking, I will explain my reasons for not accepting the black hole complementarity principle (BHC). I will argue that, in the process of inventing this principle, Susskind, Thorlacius and Uglum (STU) found an important result, but ignored it.

The principle of information conservation was in fact what had to be proven for the case of black hole information. To allow conservation, STU assumed that an external observer will see that information in the Hawking evaporation. They assumed implicitly that information remains outside the event horizon, at least for an external observer. So, they implicitly replaced 1. with

1'. The principle of information conservation by avoiding falling in the black hole.

The equivalence principle is the fundamental principle of General Relativity. It states that inertia and gravity are two faces of the same coin. Accelerated motion behaves like gravity, and gravity is due to the fact that spacetime is curved, so that no reference frame can be entirely free of acceleration.

The quantum xerox principle is in fact the no-cloning theorem, which states that unknown quantum states cannot be copied. If we were able to copy an unknown quantum state, the linearity of Quantum Mechanics would be violated. It is amazing that this simple but profound result was discovered only 30 years ago, given that the proof is so simple. There is a funny story about this. Asher Peres was an anonymous reviewer for Foundations of Physics, and refereed a paper which predicted superluminal communication in Quantum Mechanics. He explained in the report that the result must be wrong, and that even the author was aware of this. However, realizing that this mistaken result would stimulate research, and that a more important result would follow from it, he recommended publication. His intuition was right.
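The proof really is just a few lines; here is the standard sketch from linearity. Suppose a unitary $U$ could clone an arbitrary state into a blank register:

```latex
U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle
\quad \text{for every } |\psi\rangle.

% Cloning the basis states gives
U\bigl(|0\rangle|0\rangle\bigr) = |0\rangle|0\rangle, \qquad
U\bigl(|1\rangle|0\rangle\bigr) = |1\rangle|1\rangle.

% By linearity, cloning the superposition must give
U\Bigl(\tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr)|0\rangle\Bigr)
  = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle|0\rangle + |1\rangle|1\rangle\bigr),

% but the cloning rule applied directly demands
\tfrac{1}{2}\bigl(|0\rangle + |1\rangle\bigr)\bigl(|0\rangle + |1\rangle\bigr)
  = \tfrac{1}{2}\bigl(|0\rangle|0\rangle + |0\rangle|1\rangle
                    + |1\rangle|0\rangle + |1\rangle|1\rangle\bigr).
```

The two results differ, so no linear $U$ can clone arbitrary unknown states; only states from a known orthogonal set can be copied.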

Assume Alice dives into the black hole. For an external observer Bob, she never reaches the event horizon. This is how things look, according to General Relativity, from Bob's point of view. In Bob's coordinates, Alice never reaches the horizon, because GR predicts she gets closer more and more slowly, as in a Zeno paradox. But in Alice's reference frame, she crosses the horizon in a finite time. This apparent contradiction is due to the different coordinates used by Alice and Bob. Bob's coordinates are singular at the horizon. So he is wrong: Alice crosses the horizon in finite time, but because he is accelerating continuously to avoid falling in the black hole, there is a redshift of the light coming from Alice, so that in Bob's frame, her time stops.
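This slowing down can be made quantitative with a small computer-algebra check (a sketch in units $G=c=1$, taking $m=1$ for concreteness, so the horizon is at $r=2$): the Schwarzschild coordinate time that Bob assigns to an ingoing light ray diverges logarithmically as the ray approaches the horizon, even though the proper time along an infalling worldline is finite.

```python
import sympy as sp

r, eps = sp.symbols('r epsilon', positive=True)

# Ingoing radial light ray in Schwarzschild coordinates with m = 1:
# dt/dr = -1/(1 - 2/r).  Coordinate time to fall from r = 3 to r = 2 + epsilon:
t_coord = sp.integrate(1/(1 - 2/r), (r, 2 + eps, 3))

print(sp.simplify(t_coord))       # contains a -2*log(epsilon) term
print(sp.limit(t_coord, eps, 0))  # diverges: in Bob's time, the crossing never completes
```

The $-2\log\epsilon$ term is the Zeno-like behavior in the text: each halving of the remaining distance to the horizon costs Bob roughly the same amount of coordinate time.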

Moreover, because Bob sees the event horizon as being hot, he would see Alice being vaporized. This would be OK from Bob's point of view, because otherwise he would experience a violation of the no-cloning theorem. But this also takes place in Bob's coordinate system, which is singular at the event horizon. So, he should again be wrong. However, let's go with STU and assume that Bob is right.

But the equivalence principle implies that Alice would not experience anything special when she crosses the horizon. So, in fact, the information describing her would cross the event horizon.

This amounts to an apparent contradiction between what Bob sees and what Alice experiences. On the other hand, STU want the information describing Alice to remain outside the horizon. This can't happen unless the information is cloned, one copy going with Alice, and the other remaining available to Bob in the Hawking radiation.

For me, this is a proof that 1' is wrong. If we admit that 1' is true, we have to choose between no-cloning and the equivalence principle, and everybody agreed that we should not contradict these two principles. This means that the hypothesis that information survives by remaining outside the horizon was wrong. Please note that this doesn't mean that 1 is wrong, only that 1' is wrong. 1 and 1' are not equivalent, although 1' implies 1. In other words, information may be preserved, but not as STU wanted.

I think this is a great result, found by STU, but they decided to ignore it. They didn't stop here. They didn't want to give up 1', because they believed it was the only way to save information. In other words, they believed that 1 is equivalent to 1'. Because this led them to a contradiction, they decided to accept 1' together with the contradiction. The way out was to admit cloning of the information, so that it is shared by Alice and Bob, but to claim at the same time that this is not a violation of 2.

STU saw that there is a contradiction between Alice and Bob, so they decided to apply the solution from the Sufi joke with Mulla Nasrudin, and agree with both of them. But, unlike the dervish, they did not go beyond dualism, and proposed instead black hole complementarity. Essentially, it says that, even though Alice has a copy and Bob has a copy, this doesn't contradict the no-cloning theorem, because Alice can't see Bob's copy, and vice versa.

Now, call this however you want, but to me, it's a contradiction. Susskind even claimed that this is in fact just Bohr's complementarity, applied to this new case. It is true that Bohr stretched his idea of complementarity until he saw it everywhere, and others stretched it even more. But there is no connection between Bohr's and STU's complementarity. In Bohr's complementarity, there is no contradiction. Sometimes light behaves like waves, sometimes like point particles, but this is not a contradiction. If, in a particular experiment, light behaves like waves for Alice, it does the same for Bob.

STU said that Alice and Bob can never meet to compare their notes, hence there will be no proof that no-cloning was violated. In other words, Nature can break her own laws whenever she wants, as long as we can't catch her in the act.

But a question was raised: what if Bob dives into the black hole, following Alice, to compare observations? Susskind found an answer relatively quickly: before Bob can meet Alice, they will both be destroyed by the singularity. Indeed, calculations for Schwarzschild black holes show that Susskind is right about this.

But what if the black hole has the tiniest electric charge or rotation? In this case, the singularity is not spacelike, as in the Schwarzschild black hole. The singularity is timelike, and Alice and Bob can, in principle, avoid reaching it indefinitely. So, there is plenty of time to meet and compare notes. For some reason, this situation is never mentioned, only the Schwarzschild case, for which there is an answer.

There is another reason why I disagree with BHC: it violates the equivalence principle. I explained this already in 2011. Ironically, although BHC was invented to allow 2 to coexist with 1' and 3, it actually contradicts 2. Here is why. According to the equivalence principle, an experiment involving gravity should give the same result as an experiment in which we replace gravity with acceleration. Consider for example that Alice is moving inertially (in free fall), and Bob's frame is accelerated. This can happen at the black hole, when Bob sees Alice crossing the event horizon, while his accelerated motion helps him avoid falling. But it can also happen far from any black hole. In this case, due to his acceleration, Bob will see something similar to the event horizon: the Rindler horizon. If he sees Alice crossing the Rindler horizon, he will see her evaporating. This is the equivalent of what happens in the case of a black hole, according to the equivalence principle. There is one big difference from the case when Alice falls into a Schwarzschild black hole: if Bob goes after her, he will find her alive and in good health. He will realize that she was not destroyed when she crossed the Rindler horizon. So, the equivalence principle tells us that even though Bob sees Alice being destroyed near the event horizon, he is again wrong, as he was in the case of the Rindler horizon. Hence, we have to choose between BHC and the equivalence principle.

Last year (in 2012), Almheiri, Marolf, Polchinski, and Sully (AMPS) wrote the paper Black Holes: Complementarity or Firewalls?, in which they showed, by a different argument, that BHC doesn't solve the problem. They proposed instead that Alice is actually destroyed at the horizon, by a firewall (a possibility formerly considered by Susskind, who called it a "brick wall"). The price paid is that this sacrifices the equivalence principle.

So, if AMPS are right, and the solution is to admit the firewall, then why should we keep BHC? It is sometimes answered that BHC is still needed, to explain why Bob sees Alice never crossing the horizon, while she actually crosses it, in a finite proper time. But, as I explained, this is just an effect of GR, due to the fact that Bob's coordinates are singular at the horizon.

All the discussions taking place within the last year around black hole complementarity and the firewall are concentrated near the event horizon. Information is supposed to be destroyed by the singularity, but it is hoped that, somehow, the event horizon plays the major role in recovering it. Black hole complementarity is based on the idea that Nature makes a backup of the information on the stretched horizon. The firewall proposal suggests that the event horizon is a shield that burns whatever may fall in the black hole, in order to make the information immortal.

To me, these are Deus ex machina explanations; it appears as if the supporters of these ideas see a purpose in the universe, and that purpose is eternal life for the information falling into the black hole, at any cost. It looks like God found a problem after he patched together General Relativity and Quantum Mechanics, and decided to fix it somehow. For instance, if God were a programmer, he would make a backup of the information on the horizon, to fix the memory leak caused by the singularities. Or, if God were a plumber, he would connect a pipe at the event horizon, to divert the information and prevent it from leaking through the singularity. Fixing a bug, or a leak, would reveal intention in creating the universe, a watchmaker who made an imperfect work and then repaired it with an improvisation.

In most of this post I explained why I don't buy BHC. I also said that, during the process, STU found that 1' contradicts 2 and 3; I consider this the correct result, and the attempt to remove the contradiction by embracing it and giving it a name did not actually remove it. So, my main point was that, to save the lost information, copying it at the horizon is not the solution. I also don't think it is a solution to break the principle of equivalence by building a firewall in a place where GR and QFT work well. In fact, as I will explain in a future post, I think that this whole endeavor was misguided: why search for the lost information in a place other than where it was lost? Giving up this assumption will reveal that there is no contradiction between 1, 2, and 3 at the event horizon, without having to invoke mystical principles like "no contradiction is a contradiction, until it is an observed contradiction". We will see this in the next post, named Look for the information where you lost it.

Sunday, September 29, 2013

Stephen Hawking showed that there is a big problem with information if black holes are present. If singularities swallow matter without returning it, information vanishes too. If the initial state of a system is pure, then after a part of the system falls into the black hole, it is entangled with the part remaining outside. After the infalling part vanishes in the singularity, the outside part remains in a mixed state (its state depends on the state of the inside of the black hole). This was not considered a problem before Hawking's analysis, because it was assumed that the infalling part is still there, somewhere inside the black hole (I represented this in figure 1, A., in a Penrose-Carter diagram). It became an issue after Hawking (and his precursors, Zel'dovich and Starobinsky) found that black holes evaporate. The problem is that the black hole may completely evaporate, vanishing with the information it swallowed, while the outside part remains in a mixed state. This means that the information is lost forever, and unitary evolution is broken (fig. 1, B.).

Figure 1. A. In the case of a non-evaporating black hole, by choosing an appropriate foliation of spacetime in space+time, unitarity is not broken. B. Unitarity is broken for an evaporating black hole, because when the initial state is pure, the final state is mixed.
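The step from a pure total state to a mixed exterior state can be seen in a two-qubit toy model (an illustration only, not Hawking's actual computation): take a maximally entangled pair, let one qubit stand for the infalling part, and trace it out.

```python
import numpy as np

# Toy model of entanglement across the horizon: a Bell pair, with the first
# qubit outside and the second qubit falling in.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2), a pure state
rho = np.outer(bell, bell.conj())            # density matrix of the total state

# Trace out the infalling qubit: what remains outside the horizon
rho_out = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.trace(rho @ rho).real)          # ~1.0: the total state is pure
print(np.trace(rho_out @ rho_out).real)  # ~0.5: the exterior alone is maximally mixed
```

The purity $\mathrm{Tr}\,\rho^2$ drops from 1 to 1/2: if the infalling qubit is truly destroyed at the singularity, no unitary acting on the exterior alone can restore the pure state, which is the information loss described above.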

Hawking's result was bad news for many physicists of good faith. He obtained it by combining the most trusted and well-tested theories we have, General Relativity and Quantum Theory. So, his result became acknowledged as a paradox, because it seemed to force us to choose between things we considered to be true. On the one hand, Quantum Theory is based on unitary evolution, and we don't want it to be broken. On the other hand, Hawking impeccably combined GR and QT to prove that the radiation obtained during evaporation doesn't carry information out of the black hole.

Two sides formed. While many didn't accept the idea that information is lost, the other side, made mostly of General Relativists (who also happen to support Quantum Field Theory), was more willing to accept that information is lost. Among those who don't think that a violation of unitarity is a catastrophe that would destroy the world were Hawking, Kip Thorne, and Roger Penrose, who explains his position in Cycles of Time; Robert Wald, in his recent talk at the Fuzzorfire workshop, also seems not to be bothered by it. The side rejecting the possibility that information may be lost included John Preskill, Leonard Susskind, Gerard 't Hooft, and others. In 1997, Thorne and Hawking bet against Preskill that information is really lost in the black hole.

Damn it, I don't want to talk about CGHS or RST. It's a dead end. I want to do something that really will shake things up. Let's go way out on a limb and say something very bold that will really get their attention.

In fact, Hawking was probably not convinced by black hole complementarity, but rather by Maldacena's AdS/CFT correspondence, and wrote a paper in which he explained his own solution for saving the information. In the paper, Hawking didn't need to use black hole complementarity, but he mentioned Maldacena's results, which rely on Susskind's and 't Hooft's holographic principle.

In his proposal, Hawking uses the method of sum over histories, originated by Feynman, but developed as an approach to quantum gravity by himself and J. B. Hartle. To solve the information loss problem, he proposes that one should sum over topologies of spacetime. While it is clear that the trivial topologies, those without black holes, don't break unitarity, the non-trivial ones, including those containing black holes, break it. So, Hawking tries to argue that the non-trivial topologies don't contribute to the sum over histories. In other words, if a possible history has black holes, information is lost, but if we consider all of them, it is not lost, because the histories losing information don't contribute to the overall sum.

His solution was criticized, for instance by John Preskill, for not being well supported mathematically. Indeed, Hawking's paper from 2005 looks rather like a sketch of a research program, where the key points are merely conjectured. Not much progress has been made in that direction since then.

My main objection to his proposal is the following. In general, when we sum over histories, we have to impose some boundary conditions. For example, in the case of the two-slit experiment, we take into consideration only paths that go through the slits. Similarly, if we want to see what happens with information in the presence of black holes using the sum over histories, we should impose conditions compatible with the presence of the black hole. Yet Hawking claims that the only contributions are given precisely by the histories which are not compatible with the presence of black holes, and ignores exactly the histories which should actually be considered.
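The role of the boundary conditions in the sum over histories can be illustrated with a toy two-slit computation (a sketch with made-up units; the slit positions, screen distance, and wavelength are arbitrary choices): only paths passing through a slit contribute, and summing the amplitudes of the two contributing path classes produces the interference pattern.

```python
import numpy as np

# Toy sum over histories for the two-slit experiment: the boundary condition
# restricts the sum to paths passing through one of the two slits.
wavelength = 0.5                       # arbitrary units
k = 2 * np.pi / wavelength             # wavenumber
slits = np.array([-1.0, 1.0])          # slit positions on the barrier
L = 20.0                               # barrier-to-screen distance
x = np.linspace(-10, 10, 201)          # positions on the screen

# Each slit contributes a phase exp(i*k*path_length); sum, then take |.|^2.
amp = sum(np.exp(1j * k * np.sqrt(L**2 + (x - s)**2)) for s in slits)
intensity = np.abs(amp)**2             # interference fringes

print(intensity[100])                  # center of the screen: constructive, close to 4
```

Dropping one slit from `slits` (a different boundary condition) kills the fringes, which is why which histories are admitted into the sum is not a detail one can choose freely, as the objection above points out.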

When making his proposal, it seems that Hawking was aware neither that he was the "evil side" in a black hole war, nor that Susskind had defeated him. Recently, at the Fuzzorfire workshop, Hawking asked for the definition of the stretched horizon, to which Susskind replied that he had already given the definition 20 years ago, then left (minute 45).

I think that, when Hawking raised the problem of information loss, he did a great job. It is a very good problem indeed, and it fueled plenty of research. In the following posts, I will argue that in this process Susskind did an excellent job too, by finding something very important, in my opinion; but then he lost it, by inventing the black hole complementarity principle. Next, in Stretched Complementarity, I will explain why I don't buy Susskind's solution.

Tuesday, September 24, 2013

My brother-in-law is very passionate about history. He sent me a text showing that Nicole Oresme discovered the law of uniformly varied motion two to three hundred years before Galileo.

Nicole Oresme (image from Wikipedia)

In fact, he discovered several other things before others did. This should not be big news to historians of science, but it was to me. I checked his Wikipedia page, and found written that "Oresme manages to anticipate Galileo's discovery". So, I replied to my brother-in-law, who also seemed a bit disappointed that Oresme is presented only as a "precursor":

it seems that the words "precursor" and "anticipate" have different meanings than we thought. The Romanian prime minister is accused of plagiarism, because he copied almost his entire PhD thesis from another guy's book. Does this make the other guy a precursor of the prime minister?

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath.

Even if this law apparently was given by God, it seems unfair to me. But why do scientists abide by such an unfair law?

My only explanation is that it is more practical. When speaking with someone about an effect, we call it by the name used by the majority. We do so also when writing papers, so that interested people can find it in databases using the most common name. So, the reasons seem to be practical. But, even in this case, you can use the names of both persons, and this is a common practice too. I, for example, make an effort to write out all the names in "Friedmann-Lemaître-Robertson-Walker singularity", and do this in chronological order, because it is fairer than just calling it "Robertson-Walker".

I think it is very important, especially in a published work, every time you mention the well-known scientists and inventors, to also acknowledge their "poorer relatives", the "precursors" who merely "anticipated". Otherwise, after a time, they will be completely erased from history. When you mention them, everybody will say "I haven't heard of him or her, and this name is not mentioned in any textbook or paper I've read". Check for example the history and talk around the Wikipedia page about the Bohr magneton. The value of this physical constant was first found by Ştefan Procopiu; this is a historical fact.

Ştefan Procopiu (image from Wikipedia)

Three years ago there was a "war", because somebody decided to "get rid of Procopiu" (his own words) on the Wikipedia page of the magneton. I will not reproduce the exchanged words, but I think that the main reason he made the removal was that he had never encountered Procopiu's name in relation to the magneton (in particular in Pais's biography of Bohr, which obviously was not a biography of Procopiu). There was a reference to Procopiu's paper, but he removed it too. I posted on the talk page, in addition to citations of two papers by Procopiu, a list of textbooks by experts. I explained that I have nothing against keeping the reference to Bohr, but why should we remove someone who really was the first to find it, and who published this in two papers? Eventually, Ştefan Procopiu was accepted back into history, as a humble precursor who just happened to find the magneton first.

The precursor effect: when one wants to avoid acknowledging that a person is the real author of a discovery or invention, one calls that person a "precursor".

Friday, September 13, 2013

Sean Carroll blogged recently, in Is Work Necessary?, about a quote attributed to Buckminster Fuller, which seems to be trendy (or, as it is trendy to say, "it became a meme"). I reproduce the picture from Sean's lucid blog.

I very much agree with the part of the quote saying that technological progress should allow us to work less. Indeed, since we could make a living before the invention of machines, and especially computers, it seems logical that now we could make a living by working, say, one day a month or so. Because it is indubitable that technological progress has multiplied our productivity dozens of times. And at this rate, who knows, maybe in twenty years or so there will be robots doing 99.99% of our jobs.

So, why can't we be unemployed in this society? If you don't pay, you can't get even a glass of water, or a place to sit, not to mention the luxury of medical care. So, with all this progress, why do we still need jobs? Some of us need them just to live. Others, to live and, in addition, to be able to buy the most recent stuff: the latest iPhone version, a new car, TV, computer, a game console you will never have enough time to use, etc. Add to this that having a job is sometimes fun, even if only during the lunch breaks. At a job, you make friends, some for a lifetime, others just for the duration of the job. But the sad truth is that many of us can't even conceive of being unemployed, simply because it would be boring. It takes imagination to work at your own dream, instead of building your employer's dream.

Obviously, if we decide to consume less, we can work less and still make a living. We can choose downshifting (this is something I did). But, paradoxically, whenever you try to work less, the employer tends to consider you lazy (even if you are more productive than some full-time colleagues). Your salary remains small forever, because you lack full-time experience. And you can't find another employer to hire you part-time with a reasonable salary, because wanting more free time raises suspicions. In the meantime, the expenses keep increasing, so eventually you have to give up and become another brick in the wall. Of course, some of us can build a successful business that allows them to do nothing for the rest of their lives. But how many can do this? How many small businesses have failed, bankrupting entire families, for a single one to be successful?

So I think that the main idea from the quote, that most of us can do what we like instead of working, is a romantic lie.

But one can say, "did you test Buckminster Fuller's advice before criticizing it?". Well, this is precisely what I did for several years.

Here is his concluding remark:

The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.

After living a Bohemian life as a student and a high school math teacher, I had to find a better-paid job. So I built a successful career as a computer programmer, specialized in something that guarantees high salaries even in Romania (geometric algorithms, especially for CAD/CAM).

After several years, I decided to go back to school and do my Master's and PhD in something I like: geometry and mathematical physics. Soon I will defend my PhD (the thesis has been done for almost a year). I like physics, I love to think about unsolved problems in foundational physics and to try to solve them. I do this for fun, without being paid (not that I don't want to be paid).

For my thesis, I researched the problem of singularities in General Relativity, but in the meantime I was also active in the foundations of Quantum Mechanics. Against all standard approaches to the problem, I wrote and published several papers in well-rated, peer-reviewed physics journals (here is a continuously updated list of my papers). In the meantime, I have to earn a living for myself and my four-member family, to pay the mortgage and the bills, and sometimes to attend conferences. So, I have to work, as part-time as I can, as a computer programmer. There is the alternative that, after I finish my PhD, I join a team as a postdoc, and get paid to do what I love. I like this idea, but will I find a position that guarantees me the freedom to research what I want? Or is the only way to help senior researchers make their dreams come true?

So, Mr. Fuller, thirty years after your death, your beautiful idea is still a romantic lie. And if, in another thirty years, robots will be able to do 99.99% of our work, chances are that society will still find a way to keep us busy.

Quantum mechanics forces us to reconsider certain aspects of classical
causality. The 'central mystery' of quantum mechanics manifests in different
ways, depending on the interpretation. This mystery can be formulated as the
possibility of selecting part of the initial conditions of the Universe
'retroactively'. This talk aims to show that there is a global, timeless,
'bird's view' of the spacetime, which makes this mystery more reasonable. We
will review some well-known quantum effects from the perspective of global
consistency.

Thursday, September 5, 2013

The Weyl curvature hypothesis of Penrose attempts to explain the high homogeneity and isotropy, and the very low entropy of the early universe, by conjecturing the vanishing of the Weyl tensor at the Big-Bang singularity.

In previous papers, an equivalent form of Einstein's equation was proposed, which extends it and remains valid at an important class of singularities (including, in particular, the Schwarzschild, FLRW, and isotropic singularities). Here it is shown that if the Big-Bang singularity is of this class, it also satisfies the Weyl curvature hypothesis.

As an application, we study a very general example of cosmological models, which generalizes the FLRW model by dropping the isotropy and homogeneity constraints. This model also generalizes isotropic singularities, and a class of singularities occurring in Bianchi cosmologies. We show that the Big-Bang singularity of this model is of the type under consideration, and therefore satisfies the Weyl curvature hypothesis.

In a previous post, Picasso is so overrated!, I criticized Picasso's painting Family of Saltimbanques for containing several childish mistakes. Or at least I consider them mistakes; others may think they were made on purpose, to send a message which only they can see.

The above-mentioned painting was not the only one with mistakes. For instance, below is an annotated image of Boy with a Dog, painted in the same year. Again, we see a disregard for proportions: this boy's hands are disproportionately long, hanging below his knees! The dog is fine.

Were these so-called mistakes really mistakes, or did they serve a higher purpose, sending a message which could not be sent by conforming to the arid laws of proportion, perspective, and anatomy? Last time I argued that they are mistakes, because they were made before Picasso's cubist period. One might say that Picasso deliberately broke the rules, being one of the founders of cubism; well, he was not always a cubist. As a cubist, Picasso deliberately violated the rules, but before that, why would he try to be so conformist in all his paintings, only to break the rules occasionally? A possible explanation is that he had not fully mastered the techniques, and lacked a solid intuition of how objects sit in space. Of course, to respect anatomy, he could have used models or wooden mannequins, and he could have made some sketches first. Maybe he was lazy, or thought it beneath him to do this, or feared that it would affect his inspiration. Perhaps he observed, or was told, that something was wrong with the positions and proportions, that they don't fit well, but was too lazy to redo the entire painting, or thought that it represented what he meant so well that he wouldn't change anything.

Anyway, if he was making such childish mistakes, then he may have found his salvation in cubism. He found in cubism the freedom of expression, not because the classical means were too limited, but rather because he could not master them. So, it is not excluded that he thought he had something to say, but couldn't, because he was "illiterate" in painting. Like an aspiring poet who doesn't know grammar and spelling, and decides to invent his own grammar and spelling. He couldn't play the game, so he changed the rules and invented his own game. It seems that, by this, he was able to find many others willing to play by his rules, and even to spend real fortunes on his works. If there is a public, then this is, after all, art.

Audiences at those vast concerts are there for an excitement which, I
think, has to do with the love of success. When a band or a person
becomes an idol, it can have to do with the success that that person
manifests, not the quality of work he produces. You don't become a
fanatic because somebody's work is good, you become a fanatic to be
touched vicariously by their glamour and fame. Stars—film stars, rock
'n' roll stars—represent, in myth anyway, the life as we'd all like to
live it. They seem at the very centre of life. And that's why audiences
still spend large sums of money at concerts where they are a long, long
way from the stage, where they are often very uncomfortable, and where
the sound is often very bad.

Thursday, August 22, 2013

Recently, a study showed that, at classical music competitions, judges seem to evaluate based on the package in which the music is delivered, rather than on the music itself (I found this at John Baez's blog). Prior to that, it was revealed that judges of abstract art don't guess so well which paintings were made by human artists, and which by monkeys. Wine tasters prefer French wines over Californian ones, unless the tasting is blind.

So, I finally got the courage to say something I have wanted to say for years, but was afraid of being accused of art blasphemy. I think Picasso is so overrated! In my opinion, in just one painting, Family of Saltimbanques, “the masterpiece of Picasso’s Rose Period”, he severely broke several laws of perspective, physics, and even the saltimbanques’ bones (before his cubist period). Below, I annotated a picture from Wikipedia:

The leftmost clown seems to have a problem with his shoulder, or else his left humerus is way too long. The young boy carries a barrel by holding its bottom, which is behind, with a hand that is in front. Probably he is using the Force, like the lady on the right, who makes her hat levitate while she probably trains her left hand for a contortion number. And like the overweight red clown, who holds a bag that doesn't hang below the point where it's held, even though he obviously doesn't have a right leg! Although I am by no means an expert, I think that these are elementary mistakes.

Monday, August 5, 2013

The law of conservation of energy forbids perpetual motion. On the other hand, the small magnets in this image seem to be attracted or repelled by the large magnet, producing motion. As a child, I imagined this machine, and for a while I was perplexed: on the one hand, I knew it couldn't move forever, because that would break conservation of energy, but on the other hand, it is not at all obvious what exactly happens to make it stop.
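One way to see what makes such machines stop: if each small magnet keeps a fixed orientation, the force on it is the gradient of a potential, $F = \nabla(m\cdot B)$, so the net work around any closed cycle is zero. Here is a minimal numeric sketch of that fact, under the assumption that the magnets behave as ideal dipoles (units and physical prefactors dropped; all the names are mine, not from any textbook):

```python
import numpy as np

M = np.array([0.0, 0.0, 1.0])   # moment of the large magnet, placed at the origin
m = np.array([0.0, 0.0, 1.0])   # moment of a small magnet, orientation held fixed

def B(r):
    """Dipole field of M at position r (prefactor set to 1)."""
    d = np.linalg.norm(r)
    rhat = r / d
    return (3.0 * np.dot(M, rhat) * rhat - M) / d**3

def U(r):
    """Potential energy of the small magnet: U = -m . B."""
    return -np.dot(m, B(r))

def F(r, h=1e-5):
    """Force F = -grad U, by central finite differences."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (U(r + e) - U(r - e)) / (2.0 * h)
    return -g

# Closed circular path of radius 0.2 around the point (0.5, 0, 0),
# safely away from the large magnet at the origin.
N = 400
theta = np.linspace(0.0, 2.0 * np.pi, N + 1)
pts = np.stack([0.5 + 0.2 * np.cos(theta),
                0.2 * np.sin(theta),
                np.zeros(N + 1)], axis=1)

work = 0.0    # net work of F around the closed loop
scale = 0.0   # integral of |F| |dl|, a reference magnitude
for a, b in zip(pts[:-1], pts[1:]):
    f = F(0.5 * (a + b))
    work += np.dot(f, b - a)
    scale += np.linalg.norm(f) * np.linalg.norm(b - a)

print(work, scale)  # the net work is tiny compared with the reference scale
```

The loop gains exactly as much energy on the approach as it loses on the retreat, which is why the wheel only redistributes energy and eventually stops (friction dissipating what little motion it has).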

Thursday, June 6, 2013

Scott Aaronson recently uploaded a mind-boggling paper, full of challenging ideas regarding free will, quantum mechanics and computing, big philosophical questions, neuroscience, and many other hot topics. The title is The Ghost in the Quantum Turing Machine, and it will be a chapter in the book The Once and Future Turing, edited by S. Barry Cooper and Andrew Hodges, 2013.

His paper is like a storm of puzzle pieces, which fit together perfectly in an amazing tapestry, centered around his idea of Knightian freedom.

Here is the abstract:

In honor of Alan Turing's hundredth birthday, I unwisely set out some
thoughts about one of Turing's obsessions throughout his life, the question of
physics and free will. I focus relatively narrowly on a notion that I call
"Knightian freedom": a certain kind of in-principle physical unpredictability
that goes beyond probabilistic unpredictability. Other, more metaphysical
aspects of free will I regard as possibly outside the scope of science. I
examine a viewpoint, suggested independently by Carl Hoefer, Cristi Stoica, and
even Turing himself, that tries to find scope for "freedom" in the universe's
boundary conditions rather than in the dynamical laws. Taking this viewpoint
seriously leads to many interesting conceptual problems. I investigate how far
one can go toward solving those problems, and along the way, encounter (among
other things) the No-Cloning Theorem, the measurement problem, decoherence,
chaos, the arrow of time, the holographic principle, Newcomb's paradox,
Boltzmann brains, algorithmic information theory, and the Common Prior
Assumption. I also compare the viewpoint explored here to the more radical
speculations of Roger Penrose. The result of all this is an unusual perspective
on time, quantum mechanics, and causation, of which I myself remain skeptical,
but which has several appealing features. Among other things, it suggests
interesting empirical questions in neuroscience, physics, and cosmology; and
takes a millennia-old philosophical debate into some underexplored territory.

Recently, a new paper by Maldacena and Susskind appeared, named Cool horizons for entangled black holes (arXiv:1306.0533). In the paper, the two authors propose that two entangled particles are connected by an Einstein-Rosen bridge, a wormhole. Their proposal is in fact related to the black hole information paradox, the Maldacena correspondence, and the recent idea of black hole firewalls. It was covered, among others, by Sean Carroll.

AN EXPLICIT LOCAL VARIABLES TOPOLOGICAL MECHANISM FOR THE EPR CORRELATIONS

It is based on a non-trivial topology (wormholes).

Cut two spheres out of our space, and glue the two boundaries of the space together. This wormhole can be traversed by a source-free electric field, and used to model a pair of electrically charged particles of opposite charges as its mouths (Einstein-Rosen 1935, Misner-Wheeler's charge-without-charge 1957, Rainich 1925).

For EPR we need a wormhole which connects two electrons instead of an electron-positron pair. A wormhole having as mouths two equal charges can be obtained as follows: instead of just gluing together the two spherical boundaries, we first flip the orientation of one of them. Since the electric field is a bivector, the change in orientation changes the sign of the electric field, and the two topological charges have the same sign.

Now associate to the two electrons your favorite local classical description. The communication required to obtain the correlation can be done through the wormhole.

----------------

This may be the basis of a mathematically correct local hidden variable theory. Also, it seems to disprove, or rather circumvent, Bell's theorem. For Bohm's hidden variable theory, it provides a mechanism to obtain the correlation without faster-than-light signals. I proposed it here for theoretical purposes only, as an example. My favorite interpretation is another one.

Cristi

I did not want to spend more time on this, only to risk my neck proposing local models of entanglement, especially since I did not find the idea of hidden variables relevant.

Wednesday, May 1, 2013

Let every point of $\mathbb Z^2$ be surrounded by a mirrored disk of radius $r\lt 1/2$, except leave the origin (0,0) unoccupied by a disk.
Q. Is it the case that every disk can be hit by a lightray emanating from the origin and reflecting off the mirrored disks?
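Before turning to the solution, one can get a feel for the reflection dynamics numerically. The sketch below (the function names and the finite scanning window are my own shortcuts, not part of the problem statement) fires a light ray from the origin into the lattice of mirrored disks and follows a few specular reflections, recording which disks get hit:

```python
import math

R = 0.3  # disk radius, well below 1/2

def first_hit(p, d):
    """Nearest disk hit by the ray p + t d (t > 0), or None.

    Only centres in a finite window around p are scanned, which is
    enough for a short demonstration (not a complete search).
    """
    best = None
    px, py = p
    dx, dy = d
    for i in range(int(px) - 40, int(px) + 41):
        for j in range(int(py) - 40, int(py) + 41):
            if i == 0 and j == 0:
                continue  # the origin carries no disk
            # solve |p + t d - c|^2 = R^2 for the smallest positive t
            fx, fy = px - i, py - j
            b = fx * dx + fy * dy
            disc = b * b - (fx * fx + fy * fy - R * R)
            if disc < 0.0:
                continue
            t = -b - math.sqrt(disc)
            if t > 1e-9 and (best is None or t < best[0]):
                best = (t, (i, j))
    return best

def trace(angle, bounces):
    """Follow a ray from the origin through at most `bounces` reflections."""
    p = (0.0, 0.0)
    d = (math.cos(angle), math.sin(angle))
    hit_disks = []
    for _ in range(bounces):
        hit = first_hit(p, d)
        if hit is None:
            break
        t, centre = hit
        q = (p[0] + t * d[0], p[1] + t * d[1])
        hit_disks.append(centre)
        # specular reflection: d' = d - 2 (d . n) n, with n the outward normal
        nx, ny = (q[0] - centre[0]) / R, (q[1] - centre[1]) / R
        dot = d[0] * nx + d[1] * ny
        d = (d[0] - 2.0 * dot * nx, d[1] - 2.0 * dot * ny)
        p = q
    return hit_disks

disks = trace(0.1, 20)
print(disks)  # the disks visited, in order; the first is (1, 0)
```

A single ray already wanders over many disks, but proving that *every* disk can be reached this way is another matter; hence the reformulation below.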

Here is an example:

I liked the solution I gave, so I would like to share it here. It assumes the radius to be about $1/3$ or less. The idea is, instead of a light ray and reflections, to think in terms of rings and a rope connecting them. We assume the rings and the rope satisfy suitable idealizations. The following picture shows that any ring in the lattice $\mathbb Z^2$ is reachable.

Moreover, we can use this method to connect with the origin all rings in the plane, with a single rope.

Of course, if the radius gets close to $1/2$, the rope overlaps the rings, and this solution no longer works. But we can still use it to find a correct solution. We start with a radius of $1/3$, then gradually increase it. At some point, the rope will become tangent to one or more rings. In this case, just wrap it more, using the moves in the following picture