The claim in the paper is that Loop Quantum Gravity (LQG), the most popular approach to quantum gravity after string theory, must be wrong because it violates the Holographic Principle. The Holographic Principle requires that the number of different states inside a volume is bounded by the surface of the volume. That sounds like a rather innocuous and academic constraint, but once you start thinking about it it’s totally mindboggling.

All our intuition tells us that the number of different states in a volume is bounded by the volume, not the surface. Try stuffing the Legos back into your kid’s toy box, and you will think it’s the volume that bounds what you can cram inside. But the Holographic Principle says that this is only approximately so. If you tried to pack more and more, smaller and smaller Legos into the box, you would eventually fail to get anything more inside. And if you measured what bounds the success of your stuffing of the tiniest Legos, it would be the surface area of the box. In more detail, the entropy – the logarithm of the number of different states – has to be less than a quarter of the surface area measured in Planck units. That’s a huge number and so far off our daily experience that we never notice this limit. What we notice in practice is only the bound by the volume.
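To get a feeling for the numbers, here is a rough back-of-the-envelope sketch in Python. The one-meter box is of course just an illustrative choice, and the “one degree of freedom per Planck volume” count is a deliberately naive stand-in for our volume intuition:

```python
l_p = 1.616e-35  # Planck length in meters (approximate CODATA value)

def holographic_bound(side):
    """Maximum entropy of a cubic box with the given side length:
    a quarter of the surface area, measured in Planck units."""
    area = 6 * side**2
    return area / (4 * l_p**2)

def naive_volume_count(side):
    """Naive 'one degree of freedom per Planck volume' estimate."""
    return side**3 / l_p**3

side = 1.0  # a one-meter box -- an arbitrary illustrative choice
print(f"surface bound:   {holographic_bound(side):.2e}")
print(f"volume estimate: {naive_volume_count(side):.2e}")
# For any box much larger than the Planck scale, the naive volume
# count vastly exceeds the surface bound -- which is why the bound
# is so counterintuitive, and why we never notice it in daily life.
```

Both numbers are astronomically large, which is why in practice only the volume bound ever matters for toy boxes.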

The Holographic Principle is a consequence of black hole physics, which does not depend on the details of quantizing gravity, and it is therefore generally expected that the entropy bound must be obeyed by all approaches to quantum gravity.

Physicists have tried, of course, to see whether they can find a way to violate this bound. You can consider various types of systems, pack them as tightly as possible, and then calculate the number of degrees of freedom. In this, it is essential that you take into account quantum behavior, because it’s the uncertainty principle that ultimately prevents arbitrarily tight packing. In all known cases however, it was found that the system will collapse to a black hole before the bound is saturated. And black holes themselves saturate the bound. So whatever physicists tried, they only confirmed that the bound holds indeed. With every such thought-experiment, and with every failure of violating the entropy bound, they have grown more convinced that the holographic principle captures a deep truth about nature.

The only known exceptions that violate the holographic entropy bound are the super-entropic monster-states constructed by Hsu and collaborators. These states however are pathological in that not only will they inevitably go on to collapse to a black hole, they also must have come out of a white hole in the past. They are thus mathematically possible, but not physically realistic. (Aside: That the states come out of a white hole and vanish into a black hole also means you can’t create these super-entropic configurations by throwing in stuff from infinity, which should come as a relief to anybody who believes in the AdS/CFT correspondence.)

So if Loop Quantum Gravity violated the Holographic Principle, that would be a pretty big deal, making the theory inconsistent with all that’s known about black hole physics!

In the paper, the authors redo the calculation for the entropy of a particular quantum system. With the usual quantization, this system obeys the holographic principle. With the quantization technique from Loop Quantum Gravity, the authors get an additional term but the system still obeys the holographic entropy bound, since the additional term is subdominant to the first. They conclude “We have demonstrated that the holographic principle is violated due to the effects coming from LQG.” It’s a plain non-sequitur.

I suspect that the authors mistook the maximum entropy of the quantum system under consideration, previously calculated by ‘t Hooft, for the holographic bound. This is strange because in the introduction they have the correct definition for the holographic bound. Besides this, the claim that in LQG it should be more difficult to obey the holographic bound is highly implausible to begin with. LQG is a discretization approach. It reduces the number of states, it doesn’t increase them. Clearly, if you go down to the discretization scale, the number of states should drop to zero. This makes me think that not only did the authors misinterpret the result, they probably also got the sign of the additional term wrong.

(To prevent confusion, please note that in the paper they calculated corrections to the entropy of the matter, not corrections to the black hole entropy, which would go onto the other side of the equation.)

You might come away with the impression that we have here two unfortunate researchers who were confused about some terminology, and I’m being an ass for highlighting their mistakes. And you would be right, of course, they were confused, and I’m an ass. But let me add that after having read the paper I did contact the authors and explained that their statement that LQG violates the Holographic Principle is wrong and does not follow from their calculation. After some back and forth, they agreed with me, but refused to change anything about their paper, claiming that it’s a matter of phrasing and in their opinion it’s all okay even though it might confuse some people. And so I am posting this explanation here because then it will show up as an arxiv trackback. Just to avoid that it confuses some people.

In summary: Loop Quantum Gravity is alive and well. If you feed me papers in the future, could you please take into account my dietary preferences?

Thursday, September 24, 2015

What images does the word “power” bring to your mind? Weapons? Bulging muscles? A mean-looking capitalist smoking cigar? Whatever your mind’s eye brings up, it is most likely not a fragile figure hunched over a keyboard. But maybe it should.

Words have power, today more than ever. Riots and rebellions can be arranged by text-messages, a single word made hashtag can create a mass movement, 140 characters will ruin a career, and a video gone viral will reach out to the world. Words can destroy lives or they can save them: “If you find the right tone and the right words, you can reach just about anybody,” says Cid Jonas Gutenrath who worked for years at the emergency call center of the Berlin police force [1]:

“I had worked before as a bouncer, and I was always the shortest one there. I had an existential need, so to speak, to solve problems through language. I can express myself well, and when that’s combined with some insights into human nature it’s a valuable skill. What I’m talking about is every conversation with the police, whether it’s an emergency call or a talk on the street. Language makes it possible for everyone involved to get out of the situation in one piece.”

Words won’t break your bones. But more often than not, words – or their failure – decide whether we take to weapons. It is our ability to convince, cooperate, and compromise that has allowed us to conquer an increasingly crowded planet. According to recent research, what made humans so successful might indeed not have been superior intelligence or skills, but instead our ability to communicate and work together.

In a recent SciAm issue, Curtis W. Marean, professor of archeology at Arizona State University, lays out a new hypothesis to explain what was the decisive development that allowed humans to dominate Earth. According to Marean, it is not, as previously proposed, handling fire, building tools, or digesting a large variety of food. Instead, he argues, what sets us apart is our willingness to negotiate a common goal.

The evolution of language was necessary to allow our ancestors to find solutions to collective problems, solutions other than hitting each other over the head. So it became possible to reach agreements between groups, to develop a basis for commitments and, eventually, contracts. Language also served to speed up social learning and to spread ideas. Without language we wouldn’t have been able to build a body of knowledge on which the scientific profession could stand today.

“Tell a story,” is the ubiquitous advice given to anybody who writes or speaks in public. And yet, some information fits badly into story-form. In popular science writing, the reader inevitably gets the impression that much of science is a series of insights building onto each other. The truth is that most often it is more like collecting puzzle pieces that might or might not actually belong to the same picture.

The stories we tell are inevitably linear; they follow one neat paragraph after the other, one orderly thought after the next, line by line, word by word. But this simplicity betrays the complexity of the manifold interrelations between scientific concepts. Ask any researcher for their opinion on a news report in their discipline and they will almost certainly say “It’s not so simple…”

There is a value to simple stories. They are easily digestible and convey excitement about science. But the reader who misses the entire context cannot tell how well a new finding is embedded into existing research. The term “fringe science” is a direct visual metaphor alluding to this disconnect, and it’s a powerful one. Real scientific progress must fit into the network of existing knowledge.

The problem with linear stories doesn’t only make writing about science difficult; it also makes writing in science difficult. The main difficulty when composing scientific papers is the necessity to convert a higher-dimensional network of associations into a one-dimensional string. It is plainly impossible. Many research articles are hard to follow, not because they are badly written, but because the nature of knowledge itself doesn’t lend itself to the narrative.

I have an ambivalent relation to science communication because a good story shouldn’t make or break an idea. But the more ideas we are exposed to, the more relevant good presentation becomes. Every day scientific research seems a little less like a quest for truth, and a little more like a competition for attention.

In an ideal world maybe scientists wouldn’t write their papers themselves. They would call for an independent reporter and have to explain their calculations, then hand over their results. The paper would be written up by an unbiased expert, plain, objective, and comprehensible. Without exaggerations, without omissions, and without undue citations to friends. But we don’t live in an ideal world.

What can you do? Clumsy and often imperfect, words are still the best medium we have to convey thought. “I think therefore I am,” Descartes said. But 400 years later, the only reason his thought is still alive is that he put it into writing.

Freese and Savage in their 2012 paper estimated the interaction rate of dark matter with the human body for weakly interacting massive particles (WIMPs). They came to the conclusion that the risk of getting cancer from damage caused by dark matter to the genetic code is much smaller than the risk posed by the cosmic radiation we are constantly exposed to.

Yes, dark matter can cause cancer. That’s because literally everything can cause cancer: The probability that a particle collision breaks a molecular bond is never strictly speaking zero, and such damage can potentially turn a cell into a cancerous reproduction machine. Even doing nothing at all can cause cancer, just because a bond may break simply due to quantum fluctuations. It’s not fair, I know. It’s also so unlikely to happen that it didn’t even make it onto the Daily Mail’s List of Things That Can Give You Cancer. Should dark matter go onto the list? After all, the idea that dark matter may lead to “biological phenomena having sometimes fatal late effects” dates back at least to 1990.

In the new paper the authors estimate the interaction probability with the human body for a different type of dark matter. They looked specifically at mirror dark matter whereas Freese and Savage had looked at one of the presently most popular dark matter models, the WIMPs. I can see a whole industry growing out of this.

But what is mirror dark matter and why have you never heard of it?

Mirror dark matter is a complex type of dark matter, a complete copy of the standard model that describes our normal matter. The mirror dark matter interacts only gravitationally with us, or at least only very weakly. This sounds like a nice idea, the next best thing you may think of after just having a single particle. The problem is that we know dark matter does not behave just like normal matter, which renders mirror dark matter immediately implausible.

To begin with, there is more dark matter in the universe than normal matter. But more importantly, observations tell us that dark matter must be only weakly interacting with itself, otherwise the cosmic microwave background would not have the observed spectrum of temperature fluctuations. Our normal matter interacts much too strongly with itself to achieve that. Then there are case studies like the Bullet Cluster, whose gravitational lensing images reveal that dark matter does not have as much friction among itself as normal matter. Dark matter also doesn’t form galaxies in the same way that normal matter does, but rather it acts as a seed for our galaxies. If it didn’t, structure formation wouldn’t come out correctly.

So clearly, dark matter that just does the same as normal matter doesn’t work. On the other hand, a copy of the standard model is a large set of particles with many emergent parameters (like particle abundances) that allow a lot of freedom to make the model fit the data.

You can try for example to adapt the mirror matter model by making changes to the initial conditions, so that they differ from the initial conditions of normal matter. The mirror dark matter is assumed to start in the early universe from a specifically chosen configuration, which in particular implies that the two types of matter do not have the same temperature later on. This can solve some problems and make mirror dark matter fit many of our observations. It brings up the question though: Why these initial conditions?

As has been argued, probably most vocally by Paul Davies, the distinction between initial conditions and evolution laws is fuzzy. If you fabricate your initial conditions smartly enough, you can make pretty much any model fit the data. (You can take the state that we observe today and evolve it backwards in time. Then pick whatever you get as initial state. Voila.) So I don’t actually doubt that it is possible to explain the observations with mirror dark matter. But cherry-picking initial conditions doesn’t seem very convincing to me.

In any case, leaving aside that mirror dark matter is not particularly popular because dark matter just doesn’t seem to behave anything like normal matter, it’s a model, and it has equations and so on, and now you can go and calculate things.

To estimate the cancer risk from mirror dark matter, the authors assume that the mirror dark matter forms atoms, which can bind together to “mirror micrometeorites” that contain about 10^15 mirror atoms. They then estimate the energy deposited by the mirror micrometeorites in the human body and find that they can leave behind more energy than weakly interacting single-particle dark matter. These mirror-objects can thus damage multiple bonds on their path. The reason is basically that they are larger.

So how likely is mirror dark matter to give you cancer? Well, unfortunately in the paper they only estimate the energy deposited by the micrometeorites, but not the probability for these objects to hit you to begin with. I wrote an email to one of the authors and inquired if there is an estimate for the flux of such objects through earth, but apparently there is none. But one thing we know about dark matter is how much there has to be of it in total. So if dark matter is clumped to pieces larger than WIMPs this means that there must be fewer of these pieces. In other words, the flux of the mirror nuclei relative to that of WIMPs should be lower. Without a concrete model though, one really can’t say anything more.
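The scaling argument in the paragraph above can be made explicit with a toy calculation. A minimal sketch, assuming standard ballpark values for the local dark matter density and speed; the micrometeorite mass (10^15 mirror atoms of roughly a GeV each) is an illustrative assumption, not a number from the paper:

```python
rho = 0.3   # local dark matter mass density in GeV/cm^3 (standard ballpark)
v = 2.2e7   # typical dark matter speed in cm/s (~220 km/s)

def number_flux(mass_gev):
    """Number of objects per cm^2 per second, at fixed mass density:
    flux = (rho / m) * v. Heavier chunks means fewer of them."""
    return rho / mass_gev * v

wimp_flux = number_flux(100)    # a 100 GeV WIMP, for comparison
clump_flux = number_flux(1e15)  # an illustrative 10^15 GeV mirror micrometeorite
print(f"flux ratio: {wimp_flux / clump_flux:.0e}")
# At fixed total mass density, the number flux drops in inverse
# proportion to the mass of the individual pieces -- here by a
# factor of about 10^13.
```

This is only the counting part of the argument; without a concrete model for how mirror matter clumps, the actual flux of such objects remains unknown, as said above.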

In the new paper, the authors further speculate that dark matter may account for some types of cancer:

“We can thus speculate that the mirror micrometeorite, when interacting with the DNA molecules, can lead to multiple simultaneous mutations and cause disease. For instance, there is an evidence that individual malignant cancer cells in human tumors contain thousands of random mutations and that normal mutation rates are insufficient to account for these multiple mutations found in human cancers [...]”

Whatever the risk of getting cancer from dark matter however, it probably hasn’t changed much for the last billion years or so. One could then try to turn the argument around and argue that if there were too many of such mirror micrometeorites then the dinosaurs would have died from cancer, or something like that. I am not very excited about such biological constraints, the uncertainties are much too large in this area. You almost certainly get better accuracy looking at traces in minerals or actual particle detectors.

In summary, the paper estimates an energy deposition, not an actual cancer risk – and that for an unconfirmed model. And in any case, short of moving to the center of the Earth there isn’t anything you could do about it anyway.

Tuesday, September 15, 2015

1. “Dark” means it doesn’t emit light.

It means it doesn’t emit any electromagnetic radiation for all we can tell. Astronomers have found no light visible to the eye, no radiation in the radio range or x-ray regime, and nothing at even higher energies either.

2. “Matter” doesn’t just mean it’s stuff.

What physicists classify as matter must behave like the matter we are made of, at least as far as its motion in space and time is concerned. This means in particular that dark matter dilutes when it spreads into a larger volume, and causes the same gravitational attraction as ordinary, visible matter. It is easy to think up “stuff” that does not do this. Dark energy for example does not behave this way.

3. It’s not going away.

You will not wake up one day and hear physicists declare it’s not there at all. The evidence is overwhelming: Weak gravitational lensing demonstrates that galaxies have a larger gravitational pull than visible matter can produce. Additional matter in galaxies is also necessary to explain why stars in the outer arms of galaxies orbit so quickly around the center. The observed temperature fluctuations in the cosmic microwave background can’t be explained without dark matter, and the structures formed by galaxies wouldn’t come out right without dark matter either.

Even if all of this was explained by a modification of gravity rather than an unknown type of matter, it would still have to be possible to formulate this modification of gravity in a way that makes it look pretty much like a new type of matter. And we’d still call it dark matter.

4. Rubin wasn’t the first to find evidence for dark matter.

Though she was the first to recognize its relevance. A few decades before Vera Rubin noticed that stars rotate inexplicably fast around the centers of galaxies, Fritz Zwicky pointed out that the roughly thousand galaxies bound together by gravity into the “Coma Cluster” also move too quickly. The velocity of galaxies in a gravitational potential depends on the total mass in this potential, and the too-large velocities already indicated that there was more mass than could be seen. However, it wasn’t until Rubin collected her data that it became clear this isn’t a peculiarity of the Coma Cluster, but that dark matter must be present in almost all galaxies and galaxy clusters.
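The velocity argument can be turned into a quick Newtonian estimate. A minimal sketch with made-up but representative numbers for a Milky-Way-like galaxy (the visible mass, orbital radius, and dark-to-visible mass ratio are all illustrative assumptions):

```python
import math

G = 6.674e-11      # Newton's constant in m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass in kg
kpc = 3.086e19     # one kiloparsec in meters

def circular_velocity(mass_kg, radius_m):
    """Speed of a circular orbit around enclosed mass M at radius r:
    v = sqrt(G M / r)."""
    return math.sqrt(G * mass_kg / radius_m)

# Illustrative numbers: visible mass of a Milky-Way-like galaxy
# and a star orbiting in its outer regions.
M_visible = 6e10 * M_sun
r = 15 * kpc

v_visible = circular_velocity(M_visible, r)      # visible matter only
v_with_dm = circular_velocity(3 * M_visible, r)  # assume extra dark mass inside r
print(f"visible matter only: {v_visible/1000:.0f} km/s")
print(f"with dark matter:    {v_with_dm/1000:.0f} km/s")
# Roughly: ~130 km/s from visible matter alone; a few times more
# enclosed mass brings the speed up toward the observed ~220 km/s.
```

The point is only the scaling: orbital speed grows with the square root of the enclosed mass, so stars moving faster than the visible mass allows directly signal additional unseen mass.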

5. Dark matter doesn’t interact much with itself or anything else.

If it did, it would slow down and clump too much and that wouldn’t be in agreement with the data. A particularly vivid example comes from the Bullet Cluster, which actually consists of two clusters of galaxies that have passed through each other. In the Bullet Cluster, one can detect both the distribution of ordinary matter, mostly by emission of x-rays, and the distribution of dark matter, by gravitational lensing. The data demonstrates that the dark matter is dislocated from the visible matter: The dark matter parts of the clusters seem to have passed through each other almost undisturbed, whereas the visible matter was slowed down and its shape was noticeably distorted.

The same weak interaction is necessary to explain the observations on the cosmic microwave background and galactic structure formation.

6. It’s responsible for the structures in the universe.

Since dark matter doesn’t interact much with itself and other stuff, it’s the first type of matter to settle down when the universe expands and the first to form structures under its own gravitational pull. It is dark matter that seeds the filaments along which galaxies later form when visible matter falls into the gravitational potential created by the dark matter. If you look at some computer simulation of structure formation, what is shown is almost always the distribution of dark matter, not of visible matter. Visible matter is assumed to follow the same distribution later.

7. It’s probably not smoothly distributed.

Dark matter doesn’t only form filaments on supergalactic scales, it also isn’t entirely smoothly distributed within galaxies – at least that’s what the best understood models say. Dark matter doesn’t interact enough to form objects as dense as planets, but it does have ‘halos’ of varying density that move around in galaxies. The dark matter density is generally larger towards the centers of galaxies.

8. Physicists have lots of ideas what dark matter could be.

The presently most popular explanation for the puzzling observations is some kind of weakly interacting particle that doesn’t interact with light. These particles have to be quite massive to form the observed structures, about as heavy as the heaviest particles we know already. If dark matter particles weren’t heavy enough they wouldn’t clump sufficiently, which is why they are called WIMPs for “Weakly Interacting Massive Particles.” Another candidate is a particle called the axion, which is very light but leaves behind some kind of condensate that fills the universe.

There are other types of candidate particles that have more complex interactions or are heavier, such as WIMPzillas and other exotic stuff. Macro dark matter is a type of dark matter that could be accommodated in the standard model; it consists of macroscopically heavy chunks of unknown types of nuclear matter.

Then there are several proposals for how to modify gravity to accommodate the observations, such as MOG, entropic gravity, or bimetric theories. Though very different in motivation, the more observations have to be explained, the more similar the explanations through additional particles have become to the explanations through modified gravity.

9. And they know some things dark matter can’t be.

We know that dark matter can’t be constituted by dim brown dwarfs or black holes. The main reason is that we know the total mass dark matter brings into our galaxy, and it’s a lot, about 10 times as much as the visible matter. If that amount of mass was made up of black holes, we should constantly see gravitational lensing events – but we don’t. It also doesn’t quite work with structure formation. And we know that neutrinos, even though weakly interacting, can’t make up dark matter either because they are too light and they wouldn’t clump strongly enough.

10. But we have no direct experimental evidence.

Despite decades of search, nobody has ever directly detected a dark matter particle and the only evidence we have is still indirectly inferred from gravitational pull. Physicists have been looking for the rare interactions of proposed dark matter candidates in many Earth-based experiments starting already in the 1980s. They also look for astrophysical evidence of dark matter, such as signals from the mutual annihilation of dark matter particles. There have been some intriguing findings, such as the PAMELA positron excess, the DAMA annual modulation, or the Fermi gamma-ray excess, but physicists haven’t been able to link any of these convincingly to dark matter.

Friday, September 11, 2015

I get a lot of email asking me for advice on paper publishing. There’s no way I can make time to read all these drafts, let alone comment on them. But simple silence leaves me feeling guilty for contributing to the exclusivity myth of academia, the fable of the privileged elitists who smugly grin behind the locked doors of the ivory tower. It’s a myth I don’t want to contribute to. And so, as a sequel to my earlier post on “How to write your first scientific paper”, here is how to avoid roadblocks on the highway to publication.

There are many types of scientific articles: comments, notes, proceedings, reviews, books and book chapters, just to mention the most common ones. They all have their place and use, but in most of the sciences it is the research article that matters most. It’s what we all want, to get our work out there in a respected journal, and it’s what I will focus on.

Before we start. These days you can publish literally any nonsense in a scam journal, usually for a “small fee” (which might only be mentioned at a late stage in the process, oops). Stay clear of such shady business, it will only damage your reputation. Any journal that sends you an unsolicited “call for papers” is a journal to avoid (and a sender to put on the junk list). When in doubt, check Beall’s list of potential, possible, or probable predatory publishers.

1. Picking a good topic

There are two ways you can go on a road trip: Find a car or hitchhike. In academic publishing, almost everyone starts out a hitchhiker, coauthoring a work typically with their supervisor. This moves you forward quickly at first, but sooner or later you must prove that you can drive on your own. And one day, you will have to kick your supervisor off the copilot seat. While you can get lucky with any odd idea as topic, there are a few guidelines that will increase your chances of getting published.

1.1 Novelty

For research topics, as for cars, a new one will get you farther than an interesting one. If you have a new result, you will almost certainly eventually get it published in a decent journal. But no matter how interesting you think a topic is, the slightest doubt that it’s new will prevent publication.

As a rule of thumb, I therefore recommend you stay far away from everything older than a century. Nothing reeks of crackpottery as badly as a supposedly interesting find in special relativity or classical electrodynamics or the foundations of quantum mechanics.

At first, you will not find it easy to come up with a new topic at the edge of current research. A good way to get ideas is to attend conferences. This will give you an overview on the currently open questions, and an impression where your contribution would be valuable. Every time someone answers a question with “I don’t know,” listen up.

1.2. Modesty

Yes, I know, you really, really want to solve one of the Big Problems. But don’t claim in your first paper that you did, it’s like trying to break a world record the first time you run a mile. Except that in science you don’t only have to break the record, you also have to convince others you did.

For the sake of getting published, by all means refer to whatever question it is that inspires you in the introductory paragraph, but aim at a solid small contribution rather than a fluffy big one. Most senior researchers have a grandmotherly tolerance for the exuberance and naiveté of youth, but forgiveness ends at the journal’s front door. As encouraging as they may be in personal conversation, a journal reference serves as quality check for scientific standard, and nobody wants to be the one to blame for not keeping up the standard. So aim low, but aim well.

1.3. Feasibility

Be realistic about what you can achieve and how much time you have. Give your chosen topic a closer look: Do you already know all you need to know to get started, or will you have to work yourself into an entirely or partially new field? Are you familiar with the literature? Do you know the methods? Do you have the equipment? And lastly, but most importantly, do you have the motivation that will keep you going?

Time management is chronically hard, but give it a try and estimate how long you think it will take, if only to laugh later about how wrong you were. Whatever your estimate, multiply it by two. Does it fit in your plans?

2. Getting the research done

Do it.

3. Preparing the manuscript

Many scientists dislike the process of writing up their results, thinking it only takes time away from real science. They could not be more mistaken. Science is all about the communication of knowledge – a result not shared is a result that doesn’t matter. But how to get started?

3.1. Collect all material

Get an overview on the material that you want your colleagues to know about: calculations, data, tables, figures, code, what have you. Single out the parts you want to publish, collect them in a suitable folder, and convert them into digital form if necessary, i.e. type up equations, make vector graphics of sketches, render images, and so on.

3.2. Select journals

If you are unsure what journals to choose, have a look at the literature you have used for your research. Most often this will point you towards journals where your topic will fit in. Check the website to see whether they have length restrictions and if so, if these might become problematic. If all looks good, check their author guidelines and download the relevant templates. Read the guidelines. No, I mean, actually read them. The last thing you want is that your manuscript gets returned by an annoyed editor because your image captions are in the wrong place or similar nonsense.

Select the four journals that you like best and order them by preference. Chances are your paper will get rejected at the first, and laying out a path to continue in advance will prevent you from dwelling on your rejection for an indeterminate amount of time.

Show your paper to several colleagues and ask for feedback, but only do this once you are confident the content will no longer substantially change. The amount of confusion returning to your inbox will reveal which parts of the manuscript are incomprehensible or, heaven forbid, just plainly wrong.

If you don’t have useful contacts, getting feedback might be difficult, and this difficulty increases exponentially with the length of the draft. It dramatically helps to encourage others to actually read your paper if you tell them why it might be useful for their own research. Explaining this requires that you actually know what their research is about.

If you get comments, make sure to address them.

3.5. Pre-publish or not?

Pre-print sharing, for example on the arxiv, is very common in some areas and less common in others. I would generally recommend it if you work in a fast moving field where the publication delay might limit your claim to originality. Pre-print sharing is also a good way to find out whether you offended someone by forgetting to cite them, because they’ll be in your inbox the next morning.

3.6. Submit the paper

The submission process depends very much on the journal you have chosen. Many journals now allow you to submit an arxiv link, which dramatically simplifies matters. However, if you submit source-files, always check the compiled pdf “for your approval”. I’ve seen everything from half-empty pages to corrupted figures to pdfs that simply didn’t load.

Some journals allow you to select an editor to whom the manuscript will be sent. It is generally worth checking the list to see if there is someone you know. Or maybe ask a colleague which editors they have had good or bad experiences with. But don’t worry if you don’t know any of them.

4. Passing peer review

After submission your paper will generally first be screened to make sure it fulfills the journal requirements. This is why it is really important that the topic fits well. If you pass this stage your paper is sent to some of your colleagues (typically two to four) for the dreaded peer review. The reviewers’ task is to read the paper, send back comments on it, and to assign it one of four categories: publish, minor changes, major changes, reject. I have never heard of any paper that was accepted without changes.

In many cases some of the reviewers are picked from your reference list, excluding people you have worked with yourself or who are in the acknowledgements. So stop and think for a moment whether you really want to list all your friends in the acknowledgements. If you have an archenemy who shouldn’t be commenting on your paper, let the editor know about this in advance.

Never submit a paper to several journals at the same time. Also don’t do this if the papers have even partly overlapping content. You might succeed, but trying to boost your publication count by repeating yourself is generally frowned upon and not considered good practice. The exception is conference proceedings, which often summarize a longer paper’s content.

When you submit your paper you will be asked to formally accept the ethics code of the journal. If it’s your first submission, take a moment to actually read it. If nothing else, it will make you feel very grown up and sciency.

Some journals ask you to sign a copyright form already at submission. I have no clue what they are thinking. I never sign a copyright form until the revisions are done.

Peer review can be annoying, frustrating, infuriating even. To keep your sanity and to maximize your chance of passing try the following:

4.1. Keep perspective

This isn’t about you personally, it’s about a manuscript you wrote. It is easy to forget, but in the end you, the reviewers, and the editor have the same goal: to advance science with good research. Work towards that common end.

4.2. Stay polite and professional

Unless you feel a reviewer makes truly inappropriate comments, don’t complain to the editor about strong wording – you will only waste your time. Inappropriate comments are everything that refers to your person or affiliation (or absence thereof), any type of ideological argument, and opinions not backed up by science. Go through all other comments and address them one by one, in a reply attached to the resubmission and by changes to the manuscript where appropriate. Never ignore a question posed by a referee; doing so provides a perfect reason to reject your manuscript.

In case a referee finds an actual mistake in your paper, be reasonable about whether you can fix it in the time given until resubmission. If not, it is better to withdraw the submission.

4.3. Revise, revise, revise

Some journals have a maximum number of revisions that they allow, after which an editor will make a final decision. If you don’t meet the mark and your paper is eventually rejected, take a moment to mull over the reason for rejection and revise the paper one more time before you submit it to the next journal.

Goto 3.6. Repeat as necessary.

5. Proofs

If all goes well, one day you will receive a note saying that your paper has been accepted for publication and you will soon receive page proofs. Congratulations!

It might feel like a red light five minutes from home when you have to urgently pee, but please read the page proofs carefully. You will never forgive yourself if you don’t correct that sentence which a well-meaning copy editor rearranged to mean the very opposite of what you intended.

6. And then…

… it’s time to update your CV! Take a walk, and then make plans for the next trip.

Monday, September 07, 2015

In June I attended the “Continuation of the Space-time Odyssey,” a meeting dedicated to the “celebration of the remarkable advances in the fields of particle physics and cosmology from the turn of the millennium to the present day.” At least that’s what the website said. In reality the meeting was dedicated to celebrating Katie Freese’s arrival in Stockholm. She spared no expense; pricy hotels, illustrious locations, and fancy food were involved. So of course I went.

It was an “invitation only” conference, but it wasn’t difficult to get on the list because, for reasons mysterious to me, the conference system listed me as a “chair” of the meeting, whatever that might mean. I assure you I had nothing to do with either the chairs or the organization, other than pointing out that some organization might be beneficial. So please don’t blame me that there was no open registration.

So much for academia. Now let me say something about the science.

Two talks from the conference were particularly memorable, for they stood in stark contrast to each other. The one talk was by Lisa Randall, the other by Glenn Starkman. Both spoke about dark matter, but that’s about where the similarities end.

Randall talked about her recent work on “Partially Interacting Dark Matter” (slides here). Her research in collaboration with JiJi Fan, Andrey Katz, and Matthew Reece is based on a slightly more complicated version of existing particle dark matter models. They consider some so-far undetected particles which are not in the Standard Model. But in contrast to most of the presently used models, these additional particles are not assumed to be unable to interact with each other. Instead, the particles exert forces among themselves, which has several consequences.

First, it allows them to explain why dark matter hasn’t been seen in direct detection experiments: the stuff just isn’t as simple as assumed for the estimates of detection rates. Second, it means that dark matter has friction and thus at least partly forms rotating disks during galaxy formation, the “dark disks.” If they are right, our galaxy too should have a dark disk, and our solar system would traverse it periodically, every 35 million years or so. If you trust Nature News, which maybe you shouldn’t, then this can explain the extinction of the dinosaurs. Right or wrong, it’s a story catchy enough to spread like measles, and equally spotty. Third, and most important, partially interacting dark matter introduces additional parameters which you can use to fit unexplained data like the 130 GeV Fermi gamma ray line.

If the model they are using has any theoretical motivation, Randall didn’t mention it. Instead her motivation was that it’s a model which might be testable in the near future. It’s a search under the lamp post, phenomenological model building at its worst. You do it because you can and you’ll get it published – it certainly won’t harm your peer review assessment if you are one of the best-cited particle physicists ever. But just exactly why this interaction model and not some other that won’t be observed until the return of the Boltzmann brains, I don’t know.

In other words, I wasn’t very convinced that partially interacting dark matter is anything more than something you can publish papers about.

Starkman set out by alluding to Odysseus’ odyssey, which led to strange and distant countries that he likened to the proposed dark matter particles like WIMPs and axions. In the end though, Starkman pointed out, Odysseus returned to where he started from. Maybe, he suggested, things have gotten strange enough and it is time to return to where we started from: the Standard Model.

His talk was about “macro dark matter,” a dark matter candidate that has received little if any attention. I had only become aware of it shortly before, through a paper by Starkman together with David Jacobs and Amanda Weltman. Unlike the commonly considered particle dark matter, macro dark matter isn’t composed of single particles, but of macroscopically heavy chunks of matter with masses that are a priori anywhere between a gram and the mass of the Sun.

It is often said that observations indicate dark matter must be made of weakly interacting particles, but that is only true if the matter is thinly dispersed into light, individual particles. What we really know isn’t that the particles are weakly interacting but that they are rarely interacting; you never measure a cross-section without a flux. Dark matter could be rarely interacting because it is weakly interacting. That’s the standard assumption. Or it could be rarely interacting because it is clumped together into tiny, dense blobs that are unlikely to meet each other. That’s macro dark matter.

But what is macro dark matter made of? It might for example be a type of nuclear matter that hasn’t been discovered so far, blobs of quarks and gluons that were created in the early universe and lingered around ever since. These blobs would be incredibly dense; at this density the Great Pyramid of Giza would fit inside a raindrop!

If you think nuclear matter is last-century physics, think again. The phases and properties of nuclear matter are still badly understood and certainly can’t be calculated from first principles even today.

Physicists were stunned for example when the quark gluon plasma turned out to have lower viscosity than anybody expected. They still argue about the equations governing the behavior of matter in neutron stars. Nobody has any idea how to calculate lifetimes of unstable isotopes. I recently talked to a nuclear physicist who told me that the state-of-the-art for composites is 20 nucleons. Twenty. This brings you just about up to neon in the periodic table. And that is, needless to say, using an effective model, not quarks and gluons. The Standard Model interactions are well-understood at LHC energies or higher, yes. But once quarks start binding together physicists are back to comparing models with data, rather than making calculations in the full theory.

So matter of nuclear density containing some of the heavier quarks is a possibility. But Starkman and his collaborators prefer not to make specific assumptions and to keep their search as model-independent as possible. They were simply looking for constraints on this type of dark matter, which are summarized in the figure below.

On the vertical axis you have the cross-section, on the horizontal axis the mass of the macros. The grey and green diagonal lines are useful references marking atomic and nuclear density. In general the macro could be made up of a mixture, and so they wanted to keep the density a variable to be constrained by experiment. The shaded regions are excluded by various experiments.

To arrive at the experimental constraints one takes into account two properties of the macros that can be inferred from existing data. One is the total amount of dark matter, which we know from a number of observations, for example gravitational lensing and the CMB power spectrum. This means that if we look at a particular mass of the macro, we know how many of them there must be. The other property is the macros’ average velocity, which can be estimated from the mass and the strength of the gravitational potential that the particles move in. From the mass and the density one gets the size, and together with the velocity one can then estimate how often these things hit each other – or us.
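The rate estimate sketched above takes just a few lines. The local dark matter density and the macro speed below are my assumed inputs (standard galactic ballpark values, not numbers from the talk):

```python
# Back-of-envelope estimate of how often a macro of mass M passes through
# Earth, following the recipe above: number density x speed x cross-section.
# RHO_DM and V_MACRO are assumed standard galactic values.
import math

GEV_IN_GRAMS = 1.78e-24           # 1 GeV/c^2 expressed in grams
RHO_DM = 0.4 * GEV_IN_GRAMS       # assumed local DM density, g/cm^3
V_MACRO = 2.5e7                   # assumed macro speed, cm/s (~250 km/s)
R_EARTH = 6.37e8                  # Earth radius, cm
SECONDS_PER_YEAR = 3.15e7

def earth_hits_per_year(mass_g):
    """Expected number of macros of mass mass_g hitting Earth per year."""
    number_density = RHO_DM / mass_g          # macros per cm^3
    cross_section = math.pi * R_EARTH ** 2    # Earth's geometric cross-section, cm^2
    return number_density * V_MACRO * cross_section * SECONDS_PER_YEAR

# For a 10^9 g macro this comes out to roughly one passage per year;
# lighter macros hit proportionally more often.
print(earth_hits_per_year(1e9))
```

With these inputs the estimate lands at about one Earth passage per year for a 10^9 g macro, consistent with the number quoted below.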

The grey-shaded left upper region is excluded because the stuff would interact too much, causing it to clump too efficiently, which runs into conflict with the observed large scale structures.

The red regions are excluded by gravitational lensing data. These would be the macros that are so heavy they’d result in frequent strong gravitational lensing, which hasn’t been observed. These constraints are also the reason why neutron stars, brown dwarfs, and black holes have long been excluded as possible explanations for dark matter. There are two types of lensing constraints from two different lensing methods, and right now there is a gap between them, but it will probably close in the near future.

The yellow shaded region excludes macros of small mass, which is possible because these would be hitting Earth quite frequently. A macro with mass 10^9 g for example would pass through Earth about once per year, the lighter ones more frequently. Searches for such particles are similar to searches for magnetic monopoles. One makes use of natural particle detectors, such as the mineral mica, which forms neatly ordered layers in which a passing heavy particle would leave a mark. No such marks have been found, which rules out the lighter macros.

What about that open region in the middle? Could macros hide there? Starkman and his collaborators have some pretty cool ideas how to look for macros in that regime, and that’s what my New Scientist piece with Naomi is about. (Want me to keep interesting stories on this blog? Please use the donate button that’s in your face in the top right corner, thank you.)

Macro dark matter of course leaves many open questions. As long as we don’t really know what it’s made of, we have no way of knowing whether it can form in sufficient amounts or is stable enough. But its big advantage is that it doesn’t necessarily require us to conjure up new particles.

Do I like this idea? Holy shit, no, I hate it! Like almost all particle physicists, I prefer my interactions safely in the perturbative regime where I can calculate cross-sections the way I learned it in kindergarten. I fled from the place where I did my PhD because everybody there was doing nuclear physics and I wanted nothing of that. I wanted elementary particles, grand unification, and fundamental truths. I would be deeply disappointed if dark matter wasn’t a hint of physics beyond the standard model, but instead drifted into the realm of lattice QCD.

But then I got thinking. If everybody feels this way, we might be missing the solution to an 80-year-old puzzle because we focus on answers that we like and answers that are simple, not answers that are in our face yet complicated. Yes, macros are the most conservative and, in a sense, most depressing dark matter model around. But at least they didn’t kill the dinosaurs.

Thursday, September 03, 2015

Malcolm Perry’s lecture that summarizes the idea he has been working on with Stephen Hawking is now on YouTube:

The first 17 minutes or so are a very well done, but also very basic introduction to the black hole information loss problem. If you’re familiar with that, you can skip to 17:25. If you know the BMS group and only want to hear about the conserved charges on the horizon, skip to 45:00. Go to 56:10 for the summary.

Last week, there furthermore was a paper on the arxiv with a very similar argument: BMS invariance and the membrane paradigm, by Robert Penna from MIT, though this paper doesn’t directly address the information loss problem. One is led to suspect that the author was working on the topic for a while, then heard about the idea put forward by Hawking and Perry and made sure to finish and upload his paper to the arxiv immediately... Towards the end of the paper the author also expresses concern, as I did earlier, that these degrees of freedom cannot possibly contain all the relevant information: “This may be relevant for the information problem, as it forces the outgoing Hawking radiation to carry the same energy and momentum at every angle as the infalling state. This is usually not enough information to fully characterize an S-matrix state...”

“Contacted via telephone Tuesday evening, Strominger said he felt confident that the information loss paradox was not irreconcilable. But he didn't think everything was settled just yet. He had heard Hawking say there would be a paper by the end of September. It had been the first he'd learned of it, he laughed, though he said the group did have a draft.”

(Did Hawking actually say that? Can someone point me to a source?)

Meanwhile I’ve pushed this idea back and forth in my head and, lacking further information about what they hope to achieve with this approach, have tentatively come to the conclusion that it can’t solve the problem. The reason is the following.

The endstate of black hole collapse is known to be simple and characterized only by three “hairs” – the mass, charge, and angular momentum of the black hole. This means that all higher multipole moments – deviations of the initial mass configuration from perfect spherical symmetry – have to be radiated off during collapse. If you disregard actual emission of matter, they will be radiated off in gravitons. The angular momentum related to these multipole moments has to be conserved, and there has to be an energy flux related to the emission. In my reading the BMS group and its conserved charges tell you exactly that: the multipole moments can’t vanish, they have to go to infinity. Alternatively you can interpret this as the black hole not actually being hairless, if you count all that’s happened in the dynamical evolution.

Having said that, I didn’t know the BMS group before, but I never doubted this to be the case, and I don’t think anybody doubted this. But this isn’t the problem. The problem that occurs during collapse exists already classically, and is not in the multipole moments – which we know can’t get lost – it’s in the density profile in the radial direction. Take the simplest example: one shell of mass M, or two concentric shells of half the mass. The outside metric is identical. The inside metric vanishes behind the horizon. Where does the information about this distribution go if you let the shells collapse?
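To make the shell example concrete (this is just the standard application of Birkhoff’s theorem, not part of the original argument): whether you take one static shell of mass $M$ or two concentric shells of mass $M/2$ each, every observer outside the outermost shell at radius $r_s$ sees the same Schwarzschild metric,

```latex
% Exterior region, r > r_s: identical for both shell configurations
ds^2 = -\left(1 - \frac{2GM}{r}\right) dt^2
       + \left(1 - \frac{2GM}{r}\right)^{-1} dr^2
       + r^2 \, d\Omega^2 .
```

The two configurations differ only in the region enclosed by the outer shell, where the mass function is $M/2$ rather than $M$; once that region is behind the horizon, no exterior measurement of the metric distinguishes them.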

Now as Malcolm said in his lecture, you can make a power-series expansion of the (Bondi) mass around the asymptotic value, and I’m pretty sure it contains the missing information about the density profile (which, however, already misses information about the quantum state). But this isn’t information you can measure locally, since you need an infinite number of derivatives or infinite space to make your measurement, respectively. And besides this, it’s not particularly insightful: If you have a metric that is analytic with an infinite radius of convergence, you can expand it around any point and get back the metric in the whole space-time, including the radial profile. You don’t need any BMS group or conserved charges for that. (The example with the two shells is not analytic; it’s also pathological for various reasons.)

As an aside, that the real problem with the missing information in black hole collapse is in the radial direction and not in the angular direction is the main reason I never believed the strong interpretation of the Bekenstein-Hawking entropy. It seems to indicate that the entropy, which scales with the surface area, counts the states that are accessible from the outside and not all the states that black holes form from.

Feedback on this line of thought is welcome.

In summary, the Hawking-Perry argument makes perfect sense and it neatly fits together with all I know about black holes. But I don’t see how it gets one closer to solving the problem.

Tuesday, September 01, 2015

If you tie your laces, loops and strings might seem like parts of the same equation, but when it comes to quantum gravity they don’t have much in common. String Theory and Loop Quantum Gravity, both attempts to consistently combine Einstein’s General Relativity with quantum theory, rest on entirely different premises.

String theory posits that everything, including the quanta of the gravitational field, is made up of vibrating strings which are characterized by nothing but their tension. Loop Quantum Gravity is a way to quantize gravity while staying as closely as possible to the quantization methods which have been successful with the other interactions.

The mathematical realization of the two theories is completely different too. The former builds on dynamically interacting strings which give rise to higher dimensional membranes, and leads to a remarkably complex theoretical construct that might or might not actually describe the quantum properties of space and time in our universe. The latter divides up space-time into spacelike slices, and then further chops up the slices into discrete chunks to which quantum properties are assigned. This might or might not describe the quantum properties of space and time in our universe...

String theory and Loop Quantum Gravity also differ in their ambition. While String Theory is meant to be a theory for gravity and all the other interactions – a “Theory of Everything” – Loop Quantum Gravity merely aims at finding a quantum version of gravity, leaving aside the quantum properties of matter.

Needless to say, each side claims their approach is the better one. String theorists argue that taking into account all we know about the other interactions provides additional guidance. Researchers in Loop Quantum Gravity emphasize their modest and minimalist approach that carries on the formerly used quantization methods in the most conservative way possible.

They’ve been arguing for three decades now, but maybe there’s an end in sight.

In a little-noted paper out last year, Jorge Pullin and Rodolfo Gambini argue that taking into account the interaction of matter on a loop-quantized space-time forces one to use a type of interaction that is very similar to that found in effective models of string interactions.

The reason is Lorentz-invariance, the symmetry of special relativity. The problem with the quantization in Loop Quantum Gravity comes from the difficulty of making anything discrete Lorentz-invariant and thus compatible with special relativity and, ultimately, general relativity. The splitting of space-time into slices is not a priori a problem as long as you don’t introduce any particular length scale on the resulting slices. Once you do that, you’re stuck with a particular slicing, thus ruining Lorentz-invariance. And if you fix the size of a loop, or the length of a link in a network, that’s exactly what happens.

There have been twenty years of debate about whether or not the fate of Lorentz-invariance in Loop Quantum Gravity is really problematic, because it isn’t so clear just how it would make itself noticeable in observations as long as you are dealing with the gravitational sector only. But once you start putting matter on the now-quantized space, you have something to calculate.

Pullin and Gambini – both from the field of LQG it must be mentioned! – argue that the Lorentz-invariance violation inevitably creeps into the matter sector if one uses local quantum field theory on the loop quantized space. But that violation of Lorentz-invariance in the matter sector would be in conflict with experiment, so that can’t be correct. Instead they suggest that this problem can be circumvented by using an interaction that is non-local in a particular way, which serves to suppress unwanted contributions that spoil Lorentz-invariance. This non-locality is similar to the non-locality that one finds in low-energy string scattering, where the non-locality is a consequence of the extension of the strings. They write:

“It should be noted that this is the first instance in which loop quantum gravity imposes restrictions on the matter content of the theory. Up to now loop quantum gravity, in contrast to supergravity or string theory, did not appear to impose any restrictions on matter. Here we are seeing that in order to be consistent with Lorentz invariance at small energies, limitations on the types of interactions that can be considered arise.”

In a nutshell it means that they’re acknowledging they have a problem and that the only way to solve it is to inch closer to string theory.

But let me extrapolate their paper, if you’ll allow. It doesn’t stop at the matter sector, of course, because if one doesn’t assume a fixed background like they do in the paper, one should also have gravitons, and these need to have an interaction too. This interaction will suffer from the same problem, unless you cure it by the same means. Consequently, you will in the end have to modify the quantization procedure for gravity itself. And while I’m at it anyway, I think a good way to remedy the problem would be not to force the loops to have a fixed length, but to make them dynamical and give them a tension...

I’ll stop here because I know just enough of both string theory and loop quantum gravity to realize that technically this doesn’t make a lot of sense (among many other things because you don’t quantize loops, they are the quantization), and I have no idea how to make this formally correct. All I want to say is that after thirty years maybe something is finally starting to happen.

Should this come as a surprise?

It shouldn’t if you’ve read my review on Minimal Length Scale Scenarios for Quantum Gravity. As I argued in this review, there aren't many ways you can consistently introduce a minimal length scale into quantum field theory as a low-energy effective approximation. And pretty much the only way you can consistently do it is using particular types of non-local Lagrangians (infinite series, no truncation!) that introduce exponential suppression factors. If you have a theory in which a minimal length appears in any other way, for example by means of deformations of the Poincaré algebra (once argued to arise in Loop Quantum Gravity, now ailing on life-support), you get yourself into deep shit (been there, done that, still got the smell in my nose).
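To sketch what such a Lagrangian looks like (my illustration, not an equation from the review or from Pullin and Gambini), the kinetic term carries an entire function of the d’Alembertian, for example for a scalar field:

```latex
% Infinite-derivative kinetic term with minimal-length scale \Lambda;
% signs depend on metric convention
\mathcal{L} = \frac{1}{2}\, \phi \, \Box \, e^{\Box/\Lambda^2} \phi - V(\phi),
\qquad
e^{\Box/\Lambda^2} = \sum_{n=0}^{\infty} \frac{1}{n!}
\left(\frac{\Box}{\Lambda^2}\right)^{n}.
```

The point of using an entire function is that, in Euclidean momentum space, the propagator picks up a factor $\sim e^{-p^2/\Lambda^2}$ that exponentially suppresses modes above the scale $\Lambda$ without introducing new poles, i.e. without ghosts. Truncating the series at any finite order destroys exactly this property, which is why the full infinite series has to be kept.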

Does that mean that the string is the thing? No, because this doesn’t actually tell you anything specific about the UV completion, except that it must have a well-behaved type of non-local interaction that Loop Quantum Gravity doesn’t seem to bring, or at least it isn’t presently understood how it would. Either way, I find this an interesting development.

The great benefit of writing a blog is that I’m not required to contact “researchers not involved in the study” and ask for an “outside opinion.” It’s also entirely superfluous because I can just tell you myself that the String Theorist said “well, it’s about time” and the Loop Quantum Gravity person said “that’s very controversial and actually there is also this paper and that approach which says something different.” Good thing you have me to be plainly unapologetically annoying ;) My pleasure.