Pages

Wednesday, February 26, 2014

Gravity is an exceedingly weak force compared to the other known forces. It dominates at long distances only because, in contrast to the strong and electroweak forces, it cannot be neutralized. When they are not neutralized, however, the other forces easily outplay gravity. The electrostatic repulsion between two electrons for example is about 40 orders of magnitude larger than their gravitational attraction: Just removing some electrons from the atoms making up your hair is sufficient for the repulsion to overcome the gravitational pull of the whole planet Earth.
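This ratio is easy to check numerically. A minimal sketch (constants rounded to four digits; the separation between the electrons cancels out because both forces fall off as 1/r²), which comes out at a few times 10^42, consistent with the "about 40 orders of magnitude" quoted above:

```python
# Ratio of electrostatic repulsion to gravitational attraction
# between two electrons. Both forces scale as 1/r^2, so the
# distance drops out of the ratio.
k_e = 8.988e9      # Coulomb constant, N m^2 / C^2
G   = 6.674e-11    # Newton's constant, N m^2 / kg^2
e   = 1.602e-19    # electron charge, C
m_e = 9.109e-31    # electron mass, kg

ratio = (k_e * e**2) / (G * m_e**2)
print(f"{ratio:.2e}")  # about 4e42
```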

Alongside the search for observable consequences of quantum gravity – often referred to as the ‘phenomenology’ of quantum gravity – the field of analogue gravity has recently seen a large increase in activity. Analogue gravity deals with the theory and experiment of condensed matter systems that resemble gravitational systems, yet can be realized in the laboratory. These systems are “analogues” for gravity.

If you take away one thing from this post it should be that, despite the name, analogue gravity does not actually mimic Einstein’s General Relativity. What it does mimic is a curved background space-time on which fields can propagate. The background however does not itself obey the equations of General Relativity; it obeys the equation of whatever fluid or material you’ve used. The background is instead set up to be similar to a known solution of Einstein’s field equations (at least that is presently the status).

If the fields propagating in this background are classical fields it’s an analogue to a completely classical gravitational system. If the fields are quantum fields, it’s an analogue to what is known as “semi-classical gravity”, in which gravity remains unquantized. Recall that the Hawking effect falls into the territory of semi-classical gravity and not quantum gravity, and you can see why such analogues are valuable.
From the perspective of quantum gravity phenomenology, the latter case of quantized fields is arguably more interesting. It requires that the analogue system can have quantum states propagating on it. It is mostly phonons in Bose-Einstein condensates and in certain materials that have been used in the experiments so far.

The backgrounds that are most interesting are those modelling black hole evaporation or the propagation of modes during inflation in the early universe. In both cases, the theory has left physicists with open questions, such as the relevance of very high (transplanckian) modes or the nature of quantum fluctuations in an expanding background. Analogue gravity models allow a different angle of attack on these problems. They are also a testing ground for how some proposed low-energy consequences of a fundamentally quantum space-time, such as deviations from Lorentz-invariance or space-time defects, might come about and affect the quantum fields. It should be kept in mind though that global properties of space-time cannot, strictly speaking, ever be mimicked in the laboratory if space-time in these solutions is infinite. As we discussed recently, for example, the event horizon of a black hole is a global property: it is defined with reference to the infinite future. This situation can only be approximately reproduced in the laboratory.

Another reason why analogue gravity, though it has been around for decades, is receiving much more attention now is that approaches to quantum gravity have diversified as string theory is slowly falling out of favor. Emergent and induced gravity models are often based on condensed-matter-like approaches in which space-time is some kind of condensate. The big challenge is to reproduce the required symmetries and dynamics. Studying what is possible with existing materials and fluids in analogue gravity experiments certainly serves as both inspiration and motivation for emergent gravity.

While I am not a fan of emergent gravity approaches, I find the developments in analogue gravity interesting from an entirely different perspective. Suppose mathematics is not in fact a language able to describe all of nature. What would we do if we had reached its limits? We could cut out maths as the middle-man and directly study systems that resemble more complicated or less accessible systems. That’s exactly what analogue gravity is all about.

Monday, February 24, 2014

Thanks to all my readers, the new ones and the regulars, the occasionals and the lurkers, and most of all our commenters: Without you this blog wouldn't be what it is. I have learned a lot from you, laughed about your witty remarks, and I appreciate your feedback. Thanks for being around and enriching my life by sharing your thoughts.

If you have a research result to share that you think may be interesting to readers of this blog, you can send me a note, email is hossi at nordita dot org. I don't always have time to reply, but I do read and consider all submissions.

Friday, February 21, 2014

I am always disappointed by the media coverage of my research area. It forever seems to misrepresent this, forgets to mention that, and raises a wrong impression about something. Ask the science journalists and they'll tell you they have to make concessions in accuracy to match the knowledge level of the average reader. The scientist will argue that if the accuracy is too low there's no knowledge transferred at all, and that a little knowledge is worse than no knowledge at all. Then the journalist will talk about the need to sell and point to the capitalism executed by their editor. In the end everybody is unhappy: the scientists because they're being misrepresented, the journalists because they feel misunderstood, and the editors because they are being blamed for everything.

We can summarize the problem in this graph:

The black curve is the readership as a function of accuracy. Total knowledge transfer is roughly the amount of readers times the information conveyed. An article with very little information might have a large target group, but not much educational value. An article with very much information will be read by few people. The sweet spot, the maximum of the total knowledge transfer as a function of accuracy, lies somewhere in the middle. Problem is that scientists and journalists tend to disagree about where the sweet spot lies.
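The argument can be made concrete with a toy model. The functional forms below are invented for illustration, not taken from any data: readership is assumed to fall off with accuracy, the conveyed information to grow with it, and their product then peaks at an interior sweet spot.

```python
import numpy as np

# Toy model of the graph: readership falls with accuracy, information
# grows with it, total knowledge transfer is their product.
accuracy = np.linspace(0.01, 1.0, 1000)   # 0 = pure fluff, 1 = the technical paper
readership = np.exp(-4 * accuracy)        # assumed: readers drop off with accuracy
information = accuracy                    # assumed: conveyed information grows with it

transfer = readership * information       # total knowledge transfer
sweet_spot = accuracy[np.argmax(transfer)]
print(f"sweet spot at accuracy = {sweet_spot:.2f}")  # interior maximum near 0.25
```

Shifting either assumed curve shifts the sweet spot, which is exactly the tug of war: scientists and journalists draw the readership curve differently and so land at different maxima.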

Scientists are on average more pessimistic about the total amount of information that can be conveyed to begin with, because they don't just believe but know that you cannot really understand their research without getting into the details, yet the details require background knowledge to appreciate. I sometimes hear that scientists wish for more accuracy because they are afraid of the criticism of their colleagues, but I think this is nonsense. Their colleagues will assume that the journalist is responsible for a lack of accuracy, not the scientist. No, I think they want more accuracy because they correctly know it is important, and because if one is familiar with a topic one tends to lose perspective on how difficult it once was to understand. They want, in short, an article they themselves would find interesting to read.

So it seems this tug of war is unavoidable, but let us have a look at the underlying assumptions.

To begin with I've assumed that science writers and scientists likewise want to maximize information transfer and not simply readership, which would push the sweet spot towards the end of no information at all. That's a rosy world-view disregarding the power of clicks, but in my impression it's what most science journalists actually wish for.

One big assumption is that most readers have very little knowledge about the topic, which is why the readership curve peaks towards the low-accuracy end. This is not the case for other topics. Think for example of the sports section. It usually just assumes that the readers know the basic rules and moves of the games, and journalists do not hesitate to comment extensively on these moves. For somebody like me, whose complete knowledge about basketball is that a ball has to go into a basket, the sports pages aren't only uninteresting, they're impenetrable vocabulary. However, most people seem to bring more knowledge than that, and thus the journalists don't hesitate to assume it.

If we break down the readership by knowledge level, for scientific topics it will look somewhat like shown in the figure below. The higher the knowledge, the more details the reader can digest, but the fewer readers there are.

Another assumption is that this background level is basically fixed and readers can't learn. This is my great frustration with science journalism: the readership is rarely if ever exposed to the real science, and thus the background knowledge never increases. Readers never hear the technical terms, never see the equations, and never get the figures explained. I think that popular science reporting just shouldn't aim at meeting people in their comfort zone, at the sweet spot, because the long-term impact is nil. But that again hits the wall of must-sell.

The assumption that I want to focus on here is that the accuracy of an article is a variable independent of the reader. This is mostly true for print media because the content is essentially static and not customizable. However, for online content it is possible to offer different levels of detail according to the reader's background. If I read popular science articles in fields I do not work in myself, I find it very annoying if they are so dumbed down that I can't match them to the scientific literature because technical terms and references are missing. It's not that I do not appreciate the explanation at a low technical level, because without it I wouldn't have been interested to begin with. But if I am interested in a topic, I'd like to have a guide to find out more.

So then let us look at the readership as a function of knowledge and accuracy. This makes a three-dimensional graph roughly like the one below.

If you have a fixed accuracy, the readership you get is the integral over the knowledge-axis in the direction of the white arrow. This gives you back the black curve in the first graph. However, if accuracy is adjustable to meet the knowledge level, readers can pick their sweet spot themselves, which is along the dotted line in the graph. If this match is made, then the readership is no longer dependent on the accuracy, but just depends on the number of people at any different knowledge background. The total readership you get is the sum of all those.
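The same toy-model spirit as above makes the comparison explicit. Everything below is invented for illustration: readers are spread over knowledge levels, a reader is assumed to tolerate only articles whose accuracy roughly matches their knowledge, and we compare the best fixed-accuracy article against one whose accuracy adapts to each reader.

```python
import numpy as np

# Toy model: readers are distributed over knowledge levels, and a reader
# at knowledge level k is most likely to read an article whose accuracy
# matches k. All functional forms are assumptions for illustration.
knowledge = np.linspace(0.0, 1.0, 1000)
dk = knowledge[1] - knowledge[0]
density = np.exp(-3 * knowledge)          # assumed: fewer readers at high knowledge

def readership(article_accuracy):
    # tolerance window: readers drift away as the mismatch grows
    match = np.exp(-((knowledge - article_accuracy) / 0.15) ** 2)
    return np.sum(density * match) * dk   # integral along the knowledge axis

fixed = max(readership(a) for a in np.linspace(0, 1, 101))  # best single accuracy
adaptive = np.sum(density) * dk           # accuracy matched per reader: everyone reads
print(adaptive > fixed)  # True: the adjustable article reaches more readers
```

How large the gain is depends entirely on the assumed width of the tolerance window, which is the point made in the next paragraph.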

How much larger this total readership is than the readership in the sweet spot of fixed accuracy depends on many variables. To begin with it depends on the readers' flexibility of accepting accuracy that is either too low or too high for them. It also depends on how much they like the customization and how well that works etc. But I'm a theoretician, so let me not try to be too realistic. Instead, I want to ask how that might be possible to do.

A continuous level of accuracy will most likely remain impossible, but a system with a few layers - call them beginner, advanced, pro - would already make a big difference. One simple way towards this would be to allow the frustrated scientist whose details got scrapped to add explanations and references in a way that readers can access them when they wish. This would also have the benefit of not putting more load on the journalist.

So I am cautiously hopeful: Maybe technology will one day end the eternal tug of war between scientists and science writers.

Tuesday, February 18, 2014

My prof was fond of saying there are no elementary particles, we should really call them “elementary things” - “Elementardinger”. After all the whole point of quantum theory is that there’s no point - there are no classical particles with a position and a momentum, there is only the wave-function. And there is no particle-wave duality either. This unfortunate phrase suggests that the elementary thing is both a particle and a wave, but it is neither: The elementary thing is something else in its own right.

That quantum mechanics is built on mathematical structures which do not correspond to classical objects we can observe in daily life has bugged people ever since quantum mechanics came, saw, and won over the physics departments. Attempts to reformulate quantum mechanics in terms of classical fields or particles go back to the 1920s, to Madelung and de Broglie, and were later continued by Bohm. This alternative approach to quantum mechanics has never been very popular, primarily because it was unnecessary. Quantum mechanics and quantum field theory as taught in the textbooks proved to work enormously well and there was much to be done. But despite its unpopularity, this line of research never went extinct and carried on until today.

Today we are reaching the limits of what can be done with the theories we have and we are left with unanswered questions. “Shut up and calculate” turned into “Shut up and let me think”. Tired of doing loop expansions, still not knowing how to quantize gravity, the naturalness-issue becoming more pressing by the day, most physicists are convinced we are missing something. Needless to say, no two of them will agree on what that something is. One possible something that has received an increasing amount of attention during the last decade is that we got the foundations of quantum mechanics wrong. And with that the idea that quantum mechanics may be explainable by classical particles and waves is back en vogue.

Enter Yves Couder.

Couder spends his days dropping silicone oil. Due to surface tension and chemical potentials the silicone droplets, if small enough, will not sink into the oil bath of the same substance, but hover above its surface, separated by an air film. Now he starts oscillating the oil up and down and the drops start to bounce. This simple experiment creates a surprisingly complex coupled system of the driven oscillator that is the oil and the bouncing droplets. The droplets create waves every time they hit the surface and the next bounce of the droplets depends on the waves they hit. The waves of the oil are both a result of the bounces as well as a cause of the bounces. The drops and the waves, they belong together.

Does it smell quantum mechanical yet?

The behavior is interesting even if one looks at only one particle. If the particle is given an initial velocity, it will maintain this velocity and drag the wave field with it. The drop will anticipate and make turns at walls or other obstacles because the waves in the oil had previously been reflected. The behavior of the drop is very suggestive of quantum mechanical effects. Faced with a double-slit, the drop will sometimes take one slit, sometimes the other. A classical wave by itself would go through both slits and interfere with itself. A classical particle would go through one of the slits. The bouncing droplet does neither. It is a clever system that converts the horizontal driving force of the oil into vertical motion by the drops bouncing off the rippled surface. It is something else in its own right.

You can watch some of the unintuitive behavior of the coupled drop-oil system in the video below. The double-slit experiment is at 2:41 minutes.

Other surprising findings in these experiments have been that the drops exert an attractive force on each other, that they can have quantized orbits, and that they mimic tunneling and Anderson localization. In short, the droplets show behavior that was previously believed to be exclusively quantum mechanical.

But just exactly why that would be so, nobody really knew. There were many experiments, but no good theory. Until now. In a recent paper, Robert Brady and Ross Anderson from the University of Cambridge delivered the theory:

While the full behavior of the drop-oil system is so far not analytically computable, they were able to derive some general relations that shed much light on the physics of the bouncing droplets. This became possible by noting that in the range in which the experiments are conducted the speed of the oil waves is to good approximation independent of the frequency of the waves, and the equation governing the waves is linear. This means it obeys an approximate Lorentz-symmetry, which enabled them to derive relations between the bounce-period and the velocity of the droplet that fit very well with the observations. They also offer an explanation for the attractive force between the droplets, due to the periodic displacement between the waves and their source, and tackle the question of how the droplets are bounced off barriers.

These are not technically very difficult calculations, their value lies in making the theoretical connection between the observation and the theory which now opens the possibility of using this theory to explain quantum phenomena as emergent from an underlying classical reality. I can imagine this line of research to become very fruitful also for the area of emergent gravity. And if you turn it around, understanding these coupled systems might give us a tool to scale up at least some quantum behavior to macroscopic systems.

While I think this is interesting fluid dynamics and pretty videos, I remain skeptical of the idea that this classical system can reproduce all achievements of quantum mechanics. To begin with, it gives me pause that the Lorentz-symmetry is only approximate, and I don’t see what this approach might have to say about entanglement, which for me is the hallmark of quantum theory.

Ross Anderson, one of the authors of the above paper, is more optimistic: “I think it's potentially one of the most high-impact things I've ever done,” he says, “If we're right, and reality is fluid-mechanical at the deepest level, this changes everything. It consigns string theory and multiple universes to the dustbin.”

In a nutshell, Rovelli and Vidotto are suggesting that the horizon of the black hole vanishes, due to quantum gravitational effects, at a radius much larger than the Planck length. They call the remaining object a Planck star.

To understand why this is a really radical proposal, let me first give you some context. When matter collapses to a black hole, its radius shrinks and its density increases. Quantum gravitational effects are expected to become strong when the curvature reaches the Planckian regime. Curvature has the dimension of an inverse length squared, so this means the curvature radius becomes comparable to the Planck length or smaller. At which radius the collapsing matter reaches this regime depends on the total mass: the higher the mass, the larger the radius.

The radius at which the collapsing matter reaches the Planckian regime is larger than the Planck length if the mass is larger than the Planck mass. This radius is however always smaller than the horizon radius, so it doesn’t really matter exactly what happens there because it’s not in causal contact with the exterior. The curvature at the horizon is weak as long as the total mass of the black hole is larger than the Planck mass. This is somewhat unintuitive, but the curvature at the black hole horizon goes with the inverse of the mass squared, i.e., the higher the mass of the black hole, the smaller the curvature. Thus the often-made remark that you wouldn’t notice crossing the black hole horizon: there’s nothing there, and space-time can be almost flat if the black hole is large. In particular, you don’t expect any quantum gravitational effects at the horizon.
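The inverse-mass-squared scaling of the horizon curvature can be made concrete with a small sketch in Planck units (c = G = ħ = 1), using the standard Kretschmann scalar of the Schwarzschild metric as the curvature measure:

```python
# Curvature at the Schwarzschild horizon in Planck units (c = G = hbar = 1).
# The Kretschmann scalar of the Schwarzschild metric is K = 48 M^2 / r^6;
# at the horizon r = 2M this gives K = 3/(4 M^4), so the curvature scale
# sqrt(K) falls off as 1/M^2: the heavier the black hole, the flatter the horizon.
def horizon_curvature(mass):              # mass in Planck masses
    r_horizon = 2 * mass
    return (48 * mass**2 / r_horizon**6) ** 0.5

solar = horizon_curvature(1e38)           # a solar-mass black hole, ~10^38 Planck masses
planck = horizon_curvature(1.0)           # a Planck-mass black hole
print(f"{solar:.1e} {planck:.2f}")        # tiny for a solar mass, order one at the Planck mass
```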

But the mass of the black hole decreases due to Hawking radiation. Keep in mind that Hawking radiation is not a quantum gravitational effect; it’s quantum fields in a classical gravitational background, a combination often referred to as ‘semi-classical’. If the mass of the black hole has shrunk to the Planck mass, the curvature reaches the Planckian regime, and that’s when the semi-classical limit breaks down and quantum gravity becomes important. At that point Hawking’s calculation also breaks down and information can be released. However, the standard argument goes that by this time it’s already too late to get all the information out. Details are subtle, but that’s a different story. Suffice it to say that Rovelli and Vidotto want information release to be possible earlier, when the radius of the black hole is still much larger than the Planck length and its mass much above the Planck mass.

The only way to do this is to have strong quantum gravitational effects in a region where the curvature of the semi-classical metric is small, much below the Planck scale. In the paper they don’t explicitly say that this is what they do, but of course they have to. You see this most easily when you look at the metric they suggest, equation (14). The third term (containing α) is the correction term that supposedly has a quantum gravitational origin. The validity of the semi-classical limit means essentially that the third term should be smaller than the second as long as the second term is smaller than one. If you convert this into inequalities you find α < m, and that is explicitly the situation they do not consider. Instead, α is supposed to start at m and then increase. They do not give any reason as to why this should be so, what the meaning of α is, or what the necessary source terms are.

At this point you are probably ready to throw the paper away. There is a reason one of the postulates of black hole complementarity is the validity of the semi-classical approximation near the horizon of a black hole with mass above the Planck mass. That’s because the curvature there is small and no quantum gravitational effects are at your disposal to screw up the semi-classical limit. However, allow me to exercise some good will. I think what Rovelli and Vidotto suggest may be possible if the Planckian-density core behind the horizon displays a very unusual behavior, though that’s a big “if”.

The behavior would have to be such that as the total mass is shrinking, the mass is taken from the center only, leaving behind an increasingly thin shell of high density at a constant radius (or even an increasing one). This shell would eventually intersect the horizon of the black hole, and could conceivably do so at a radius much above the Planck radius. This isn’t a priori in conflict with the semi-classical limit because the high-density shell is now also a high-curvature region.

However, the metric that is used by Rovelli and Vidotto does not describe such a scenario. (The metric inside a shell has to be flat while their metric is actually singular at the center.) Besides this, there exists no approach to quantum gravity that suggests such a hollow-core behavior. There doesn’t even exist a model that describes such a situation. I also strongly suspect that such a solution, even if it can be created by help of some quantum gravitational pressure (this is almost certainly possible), would be unstable under non-spherical perturbations and just recollapse to form a smaller Planckian-density core. Iterate and end at Planck scale radius as usual.

In summary, this is an ad-hoc proposal. It is not based on anything we know of quantum gravity. Neither is it a complete model. I am reasonably sure that the metric they use cannot describe the situation they want while still maintaining energy-conservation. They do not calculate the curvature that belongs to that metric to check whether their modification is consistent. Neither do they calculate the necessary source that presumably contains a quantum-gravitationally induced stress-energy. It is an interesting suggestion, but I do not think it is very plausible. Planck stars almost certainly do not exist.

Acknowledgements: Carlo Rovelli has been very patient explaining his idea by email, but as you can tell I’ve remained unconvinced...

Sunday, February 09, 2014

Physicists love good problems, they take them out for dinner and sleep with them. Unsolved problems are their raison d’être. And yet it pains me considerably if somebody dismisses a paper or research project with the remark:

“Well, what problem does that solve?”

Indeed, this came up in the discussion of the workshop I attended last week, the criticism that much of current research doesn’t seem to solve any existing problem.

I agree on the underlying sentiment. Yes, most of what gets published in physics these days will almost certainly turn out to be useless to the end of describing nature. But it’s always been this way and will always be this way. It’s in the nature of trial and error that you must try and err.

I disagree though that research is only worthy if it solves, or at least attempts to solve, a known problem.

To begin with, good problems don’t grow on trees. Yes, it is often the case that the solution of one problem grows up to be the next problem, ready to pick. But that isn’t always so. Sometimes you have to go and hunt them down. And many problems are found just because researchers – both theorists and experimentalists – followed their curiosity and stumbled upon something interesting. The generation of problems is so important to progress that physicists sometimes are tempted to create problems where there are none, just so they have a target for their methods. Think of superluminal neutrinos, the Pioneer anomaly, or the pentaquark.

So research is clearly also important if it draws attention to a problem rather than solving one. A recent example is the black hole firewall. And really, what problem did that solve?

The biggest part of research is dedicated to finding or solving problems, but that still isn’t all of it. Some of research is failed solution attempts. Failing and sharing failure is valuable not only because it can save other people’s time, but also because a failed solution to one problem can turn out to solve another problem. The post-it glue’s failure to stick was also its success. Einstein’s “blunder” eventually turned out to have its use when we discovered the universe’s expansion accelerates. Bubble wrap was conceived as washable wallpaper. Research in string theory was originally pursued to understand the strong nuclear force.

Somebody else’s failure of yesterday might be your solution tomorrow.

And then there is just free-wheeling curiosity that is often a by-product of researchers trying to better understand their gadgets or models. It might or might not turn out to be useful for anything. These are failed attempts to find a problem, or solutions without a problem.

I too used to be cynical about the irrelevance of most papers and their failure to address existing problems. Now I think of them as exercises, as documentations of physicists learning or improving their methods. In fact, often these papers are exactly this: projects given to students or postdocs. Others are reports on somebody’s current interests and thoughts, or their progress in understanding particular relations that will or will not lead anywhere. They might have been out hunting and now want to show off what they found, even if it wasn’t what they were hoping for. Or their idea of a good problem might just not agree with mine.

In the long run, science is much better off with a diversity of interests than with the streamlined attack favored by the dismissive comment “Well, what problem does that solve?”

Monday, February 03, 2014

I’ll be traveling for the rest of the week, so be warned of a period of silence.

Wednesday I’m giving a seminar in Nottingham, and after that I’m attending a workshop in Oxford. The workshop topic is “The Structure of Gravity and Space-time” and it’s part of the project “Establishing the Philosophy of Cosmology”. Sounds more ominous than it is: They’ll have a session on the question whether there exists a “fundamental length”, which is what brought me on their invitation list. There will also be sessions on bi-metric gravity, massive gravity and strings and space-time structure, which sounds very promising to me. We’ll see how much philosophy infiltrates the physics. A preliminary program is here.

The girls are doing well, now attending Kindergarten. Our pediatrician didn’t raise any concerns at the 3-year checkup, except for Lara’s vision problems. She’ll get new glasses next week. The ones she has now always slip down and hang on the very tip of her nose, so we hope that the new ones will stay put better.

Lara and Gloria can open and remove all our children's safety locks now, and I’ve put away the door keys because I’m afraid they’ll lock themselves in. They have also picked up lots of swear words since they started attending Kindergarten. They don’t really know how to use them properly, which is often unintentionally funny. We’ve made a little progress with the potty training, but unfortunately the kids declare plainly they’re “too lazy” to go without diaper. It is similarly unfortunate that several older children at the Kindergarten still use binkies. Gloria told me the other day she will learn to use the toilet when she can “reach the ceiling”. She also declared that since Gloria came out of mommy’s belly, Lara must have come out of daddy’s belly. Everything far away is “Stockholm” and that’s a magical place where mommy goes and brings back gifts. They’re getting more entertaining by the day.

I finally replaced my old digital camera because some of the buttons were broken, and now have a Canon DSLR (EOS 1100D) which I am so far very happy with, though the learning curve is steep. I used to have an SLR camera 15 years ago. You know, one of these things where you had to wind back the film and carry it to some store and wait a week just to see how badly you did. Remember that? The DSLR looks and feels quite different from that, what with all the menus that I keep getting lost in. Maybe reading the manual would help. In any case, I spent some weeks hunting after the kids. Below are some of my favorite photos.