Wednesday, October 22, 2014

Unless you've been living under a reasonably sizable rock for the last few months, it can't have escaped your attention that the world has yet another terror to throw on the mountain of things we should be scared of: Ebola. The ongoing situation in Africa is the largest Ebola outbreak in history and has seen the disease spread beyond Africa for the first time. At the time of writing this, nearly 10,000 people have become infected, almost half of whom have died. This number is growing...rapidly.

Ebola cases and deaths in the 2014 outbreak.

In this post, I will describe what Ebola is, why it is so scary, and what chances we have of defeating it.

What is Ebola?

'Ebola' as a biological term actually refers to a group of five viruses within the Filoviridae family, of which four can cause the disease generally called Ebola, but more specifically known as Ebola virus disease. The recent outbreak has been caused by just one of these viruses, which used to be known as Zaire Ebolavirus, but is now simply 'Ebola virus' given that it is the most common among humans, and Zaire no longer exists! It doesn't look a whole lot like most viruses, it has to be said - with long, tubular filaments waving around rather than the tight, spherical viruses we're used to seeing for 'flu, HIV, and most others.

Friday, September 19, 2014

In case anyone reading this doesn't recall, back in March an experiment known as BICEP2 made a detection of something known as B-mode polarisation in the cosmic microwave background (CMB). This was big news, mostly because this B-mode polarisation signal would be a characteristic signal of primordial gravitational waves. The detection of the effects of primordial gravitational waves would itself be a wonderful discovery, but this potential discovery went even further in the wonderfulness because the likely origin of primordial gravitational waves would be a process known as inflation which is postulated to have occurred in the very, very early universe.

The B-mode polarisation in the CMB as seen by BICEP2. Seen here for the first time in blog format without the arrows. Is it dust, or is it ripples in space-time? Don't let Occam's razor decide!

I said at the time, and would stand by this now, that if BICEP2 has detected the effects of primordial gravitational waves, then this would be the greatest discovery of the 21st century.

However, about a month after BICEP2's big announcement a large crack developed in the hope that they had detected the effects of primordial gravitational waves and obtained strong evidence for inflation. The problem is that light scattering off dust in the Milky Way Galaxy can also produce this B-mode polarisation signal. Of course BICEP2 knew this, and had estimated the amplitude of such a signal and found it to be much too small to explain their measurement. The crack was that it seemed they had potentially underestimated this dust signal. Or, more precisely, it was unclear how big the signal actually is. It might be as big as the BICEP2 signal, or it might be smaller.

Either way, the situation a few months ago was that the argument BICEP2 made for why this dust signal should be small was no longer convincing and more evidence was needed to determine whether the signal was due to dust, or primordial stuff.

Tuesday, August 26, 2014

(and it probably isn't explained by a supervoid; although it is still anomalous)

In the cosmic microwave background (CMB) there is a thing that cosmologists call "The Cold Spot". However, I'm going to try to argue that its name is perhaps a little, well, wrong. This is because it isn't actually very cold. Although, it is definitely notably spotty.

Why care about a cold spot?
This spot has become a thing to cosmologists because it appears to be somewhat anomalous. What this means is that a spot just like this has a very low probability of occurring in a universe where the standard cosmological model is correct. Just how anomalous it is and how interesting we should find it is a subject for debate and not something I'll go into much today. There are a number of anomalies in the CMB, but there is also a lot of statistical information in the CMB, so freak events are expected to occur if you look at the data in enough different ways. This means that the anomalies could be honest-to-God signs of wonderful new physical effects, or they could just be statistical flukes. Determining which is true is very difficult because of how hard it is to quantify how many ways in which the entire cosmology community have examined their data.
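This "look-elsewhere" effect is easy to demonstrate numerically. The toy Monte Carlo below (my own illustration, not anything from the actual CMB analyses) estimates the chance that at least one of many independent Gaussian statistics fluctuates beyond 2 sigma:

```python
import random

random.seed(1)  # fixed seed so the estimate is reproducible

def fluke_probability(n_tests, threshold=2.0, trials=20000):
    """Estimate the chance that at least one of n_tests independent
    Gaussian statistics fluctuates beyond `threshold` sigma."""
    hits = 0
    for _ in range(trials):
        if any(abs(random.gauss(0, 1)) > threshold
               for _ in range(n_tests)):
            hits += 1
    return hits / trials

# One 2-sigma result is only a ~5% fluke; look at 50 independent
# statistics and such a "fluke" turns up roughly 90% of the time.
print(fluke_probability(1))
print(fluke_probability(50))
```

The real difficulty, as noted above, is that nobody knows the effective `n_tests` for the whole cosmology community poring over one dataset.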

However, if the anomalies are signs of new physics, then we should expect two things to happen. Firstly, some candidate for the new physics should come up that can create the observed effect while still reproducing the much greater number of other measurements that the standard cosmological model fits well. If this happens, then we would look for additional ways in which the universe described by this new model differs from the standard one, and look for those effects. Secondly, as we take more data, we would expect the unlikeliness of the anomaly to increase. That is, it should become more and more anomalous.

In this entry, I'm not going to be making any judgement on whether the cold spot is a statistical fluke or evidence of new physics. What I want to do is explain why, although it still is anomalous, and is definitely a spot, the cold spot isn't very cold. Then, briefly, I'll explain why, if it is evidence of new physics, that new physics isn't a supervoid.

Friday, June 27, 2014

In my last post in this series I described some of the ways in which gene therapy is beginning to help in the treatment of genetic disorders. A caveat of this (which was discussed further in the comments section of that post) is that currently available gene therapies do not remove the genetic disorder from the germline cells (i.e. sperm or eggs) of the patient and so do not protect that person's children against inheriting the disease. This could be a problem in the long run as it may allow genetic disorders to become more common within the population. The reason for this is that natural selection would normally remove these faulty genes from the gene pool, as their carriers would be less likely to survive and reproduce. If we remove this selection pressure by treating carriers so that they no longer die young, then the faulty gene can spread more widely through the population. If something then happened to disrupt the supply of gene therapies - conflict, disaster, etc. - then a larger number of people would be adversely affected and could even die.

Although this is a significant problem to be considered, it is one that is fairly simply avoidable by screening or treating the germline cells of people undergoing gene therapy in order to remove the faulty genes from the gene pool. This is currently beyond our resources on a large scale, but will almost certainly become standard practice in the future.

All of this got me thinking: are there any other genes that might be becoming more or less prevalent in the population as a result of medical science and/or civilisation in general? If so, can we prevent/encourage/direct this process and at what point do we draw the line between this and full-blown genetic engineering of human populations? This is the subject of this post, but before we get into this, I want to first give a little extra detail about how evolution works on a genetic scale.

Imperfect copies

Evolution by natural selection, as I'm sure you're aware, is simply the selection of traits within organisms based on the way in which those traits affect that organism's fitness. An organism with an advantageous trait is more likely to survive and reproduce and so that trait becomes more and more common within the population. Conversely, traits that disadvantage the organism are quickly lost through negative selection as the organism is less likely to reproduce. The strength of selection in each case is linked to how strongly positive or negative that trait is - i.e. a mutation that reduces an animal's strength by 5% might be lost only slowly from a population, whereas one that reduces it by 90% will probably not make it past one generation. In turn, the strength of that trait is determined by the precise genetic change that has occurred to generate it.
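To make the 5% versus 90% comparison concrete, here is a minimal sketch using the standard haploid replicator update. It is a toy illustration of selection strength only, not a population-genetics-grade model, and the starting frequency is a made-up number:

```python
def allele_frequency(selection_cost, generations=20, start_freq=0.1):
    """Frequency of a deleterious allele after selection.

    selection_cost: fitness reduction of carriers (0.05 = 5% disadvantage).
    Applies the standard haploid update p' = p(1 - s) / (1 - p*s)
    once per generation.
    """
    p = start_freq
    for _ in range(generations):
        p = p * (1 - selection_cost) / (1 - p * selection_cost)
    return p

# A 5% cost erodes the allele slowly; a 90% cost removes it almost at once
print(allele_frequency(0.05))  # reduced, but still present after 20 generations
print(allele_frequency(0.90))  # effectively zero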

Monday, May 5, 2014

The major theme of my 'human machine' series of posts has been that we are, as the name suggests, machines; explicable in basic mechanical terms. Sure, we are incredibly sophisticated biological machines, but machines nonetheless. So, like any machine, there is theoretically nothing stopping us from being able to play about with our fundamental components to suit our own ends. This is the oft-feared spectre of 'genetic modification' that has been trotted out in countless works of science fiction, inextricably linked to concepts of eugenics and Frankenstein-style abominations. Clearly genetic modification of both humans and other organisms is closely tied to issues of ethics and biosafety, and must obviously continue to be thoroughly debated and assessed at all stages, but in principle there is no mechanistic difference between human-driven genetic modification and the mutations that arise spontaneously in nature. The benefit of human-driven modification, however, is that it has foresight and purpose, unlike the randomness of nature. As long as that purpose is for a common good and is morally defensible, then in my eyes such intervention is a good thing.

One fairly obvious beneficial outcome of genetic modification is in the curing of various genetic disorders. Many human diseases are the result of defective genes that can manifest symptoms at varying times of life. Some genetic disorders are the result of mutations that cause a defect in a product protein, others are the complete loss of a gene, and some are caused by abnormal levels of gene activity - either too much or too little. A potential means to cure such disorders is to correct the problematic gene within all of the affected tissue. The most efficient means to do that would be to correct it very early in development, since if you corrected it in the initial embryo then it would be retained in all of the cells that subsequently develop from that embryo. This is currently way beyond our technical capabilities for several reasons. Firstly, we don't routinely screen embryos for genetic abnormalities and so don't know which ones might need treatment. Secondly, the margin for error in this kind of gene therapy is incredibly narrow as you have to ensure that every single cell that the person has for the rest of their life will not be adversely affected by what you do to the embryonic cells in this early stage - we're not there yet. Thirdly, our genetic technology is not yet sophisticated enough to allow us to remove a damaged gene and replace it with a healthy one in an already growing embryo - the best we can do is stick in the healthy gene alongside the defective one and hope it does the job. There is certainly no fundamental reason why our technology could not one day reach the stage where this kind of procedure is feasible, but we are a long way off yet.

So, for the time being what can we do? Well instead of treating the body at the embryonic stage, the next best approach is to treat specifically the affected cells later on in life. This involves identifying the problematic gene and then using a delivery method to insert the correct gene into whatever tissues manifest the disease, preferably permanently. This is broadly known as gene therapy, and is one of the most promising current fields of 'personalised' medicine.

Thursday, March 27, 2014

One of the consequences of the BICEP2 data from last week, should it hold up to scrutiny, and be seen by other experiments (I hope it holds up to scrutiny and is seen by other experiments), is that there is a significant lack of "power" in the temperature anisotropies on large angular scales.

This was already somewhat the case before the BICEP2 discovery, but BICEP2 made it much more significant. The reason for this will hopefully turn into a post of its own one day, but, essentially, the primordial gravitational waves that BICEP2 has hopefully discovered would themselves have seeded temperature anisotropies on these large angular scales. Previously, we could just assume that the primordial gravitational waves had a really small amplitude and thus didn't affect the temperature much at all. Now, however, it seems like they might be quite large and therefore, this apparent lack of power becomes much more pertinent.

That's all fine, and it is something that any model of inflation hoping to explain the origin of these gravitational waves will need to account for, despite what many cosmologists already writing papers on the arXiv seem to want to believe (links withheld). As a side, ever-so-slightly-frustrated, note, the only papers I've seen that have actually analysed the data, rather than repeating old claims, have confirmed this problem, which was clear from, at the latest, the day after the announcement.

But why does it imply a "cosmological coincidence problem"? And why is it a new coincidence problem? What's the old one?

Monday, March 24, 2014

All good machines need sensors, and we are no different. Everyone is familiar with the five classic senses of sight, smell, touch, taste, and hearing, but we often forget just how amazingly finely tuned these senses are, and many people have little appreciation of just how complex the biology behind each sense is. In this week's post, I hope to give you an understanding of how one of our senses, smell, functions and why, in light of recent evidence, it is far more sensitive than we previously thought.

Microscopic sensors
The olfactory system is an extremely complex one, but it is built up from fairly simple base units. The sense of smell is of course located in the nose, but more specifically it is a patch of tissue approximately 3 square centimetres in size at the roof of the nasal cavity that is responsible for all of the olfactory ability in humans. This is known as the olfactory epithelium and contains a range of cell types, the most important of which is the olfactory receptor neuron. There are roughly 40 million of these cells packed into this tiny space and their job is to bind odorant molecules and trigger neuronal signals up to the brain to let it know which odorants they've detected. They achieve this using a subset of a huge family of receptors that I've written about before, the G protein-coupled receptors (GPCRs). These receptors are proteins that sit in the membranes of cells and recognise various ligands (i.e. molecules for which they have a specific affinity) and relay that information into the cell. There are over 800 GPCRs in the human genome and they participate in a broad range of processes, from neurotransmission to inflammation, but the king of the GPCRs has to be the olfactory family, which make up over 50% of all the GPCRs in our genome.

Wednesday, March 19, 2014

If anybody is interested, I'm currently drip-tweeting some of the constraints one can obtain from considering Planck and BICEP2 data together. BICEP2 did do a bit of this in their paper, but they only considered specific scenarios. They were also often a bit coy about the implications of the combined analysis. I'll try not to be ;-).

The results should only be seen as indicative, these aren't published, and never will be in this form (maybe they could be cited if used in a paper though!). They were provided to me by Sussex Uni's resident obtaining-cosmology-from-the-CMB expert Antony Lewis, after a hurried Tuesday adding the BICEP2 data to the Planck cosmology pipeline (i.e. CosmoMC) and may contain mistakes.

Questions here, or on Twitter are most welcome. If you want to see specific cosmologies, I'll do my best to show them (if I have them), or ask Antony very nicely to provide them (no guarantees, of course).

Friday, March 14, 2014

[Added note (on Monday): Well, wow, the rumours were, if anything, understated. I'm happy to go on record that, unless a mistake has been made, this is the greatest scientific discovery of the 21st century, and may remain so even once the century is over. I (and others) will write many more detailed summaries of what was observed over time, but BICEP2 have announced a discovery of primordial B-modes, which is extremely strong evidence of cosmological inflation (if it turns out to be scale invariant, inflation is as true as most accepted science). Matt Strassler has a good hastily written summary here. As does Liam McAllister at Lubos Motl's blog, here. Of course, this is just one experiment and maybe they've made a mistake, but the results look very robust at the moment. Congratulations on being alive today readers! We just learned about how particles work at energies \(10^{13}\) times greater than even the LHC can probe, and about what was happening at a time much, much less than a nanosecond after the beginning of the Big Bang.]

[Added note (on Sunday): It seems highly probable that these rumours are essentially true. Although the precise details of the results aren't yet public, the BICEP2 PI, John Kovac, has sent a widely distributed email with the following information: data and scientific papers with results from the BICEP2 experiment will go public and be viewable here at 2:45pm GMT on Monday. At the same time a technical webcast will begin at this address. It's going to be an exciting day!]

This is the only hard evidence of anything interesting on the way, and it could be an announcement of anything that fits under the label of "astrophysics". This is important to keep in mind. However, for one reason or another (that is hard to nail down), cosmologists are suggesting that it is going to be about cosmology. The speculation is that it will be about the BICEP2 experiment, which has been measuring the polarisation in the CMB. The speculation is that BICEP2 have seen primordial "B-mode" polarisation.

If this speculation is true, this would be a result immense in its significance.

Primordial B-modes would be a smoking gun signal of primordial gravitational waves. This, alone, makes such a discovery important. Gravitational waves have not yet been observed, but are a prediction from general relativity. Therefore, such a discovery would be on the same level of significance as the discovery of the Higgs particle. We were almost certain it would be there, but it is good to finally see it.

However, the potential significance of such a result goes further because these primordial gravitational waves would need a source. The theory of cosmological inflation would/could be such a source. Inflation is a compelling theory, not without some problems, for how the universe evolved in its very earliest stages. If it occurred when the universe had a large enough temperature, it would generate primordial gravitational waves large enough to tickle the CMB enough to make these B-modes visible in the polarisation. As of yet, inflation has passed quite a few observational tests, but nothing has been seen that could be described as smoking gun evidence. A spectrum of primordial gravitational waves would very nearly be such a smoking gun. If the spectrum was scale invariant (i.e. if the gravitational waves have the same amplitude on all distance scales) that would be a smoking gun for inflation and accolades, Nobel Prizes, etc, etc, would flow accordingly.
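For concreteness (my addition; the notation below is the standard one, not from the original post), the primordial tensor spectrum is conventionally written as a power law,

\[
\mathcal{P}_t(k) = A_t \left(\frac{k}{k_*}\right)^{n_t},
\]

so "scale invariant" means \(n_t = 0\): the same amplitude \(A_t\) on every distance scale. Single-field slow-roll inflation in fact predicts a small tilt through the consistency relation \(n_t \simeq -r/8\), where \(r\) is the tensor-to-scalar ratio, which is one reason a measured spectrum close to scale invariant would point so strongly at inflation.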

All of this is just speculation, but some of it does seem to be coming from reputable sources. And some of my colleagues have been talking about tip-offs from people who wish to remain anonymous, so I figured I'd collect all the speculation I know of here in a post (let me know if I've missed anything):

The PI of BICEP2, John Kovac, gave a talk at the annual COSMO conference last year that had some pretty ambitious claims for how sensitive BICEP2 and similar experiments were going to be, so... well... we'll know on Monday. It should also be noted that, although the existence of these gravitational waves is a prediction of inflation, their amplitude is a free parameter and an amplitude this big is potentially a little surprising (for me, lower temperature inflation models just seem more compelling, others might disagree).

Friday, March 7, 2014

[The following is a guest post from Bjoern Malte Schaefer. Bjoern is one of the curators of the Cosmology Question of the Week blog, which is worth checking out. This post is a historical look at some of the early parts in the history of quantum mechanics, in particular, the black-body spectrum. Questions are welcome and I'll make sure he sees any of them. Image captions (and hyper-links, in this case) are, as usual, by me, because guest posters don't ever seem to provide their own.]

Two unusual systems

Quantum mechanics surprises with the statement that the microscopic world works very differently from the macroscopic world, and so it took a while until quantum mechanics was formally established as the theory of the microworld. In particular, although two of the natural systems on which theories of quantum mechanics could initially be tested were very simple, even from the point of view of the physicists of the time, one needed to introduce a number of novel concepts for their description. These two physical systems were the hydrogen atom and the spectrum of a thermal radiation source. The hydrogen atom was the lightest of all atoms, with the most simply structured spectrum; it exhibited many regularities involving rational numbers relating its discrete energy levels; and it could only be ionised once, implying that it had only a single electron. For these reasons it was the obvious test case for any theory of mechanics in the quantum regime. Werner Heisenberg was the first to be successful in solving this quantum mechanical analogue of the Kepler-problem, i.e. the equation of motion of a charge moving in a Coulomb-potential, paving the way for a systematic understanding of atomic spectra, their fine structure, the theory of chemical bonds, interactions of atoms with fields and ultimately quantum electrodynamics.

The Planck-spectrum was equally puzzling: it is the distribution of photon energies emitted from a body at thermal equilibrium and does not require any further specification of the body apart from the requirement that it be black, meaning ideally emitting and absorbing radiation irrespective of wave length. From this point of view it is really the simplest macroscopic body one could imagine, because its internal structure does not matter. In contrast to the hydrogen atom it is described by a continuous spectrum. In fact, there are at least two beautiful examples of Planck-spectra in Nature: the thermal spectrum of the Sun and the cosmic microwave background. The solution to the Planck-spectrum involves quantum mechanics, quantum statistics and relativity, and unites three of the four great constants of Nature: the Planck-quantum \(h\), the Boltzmann-constant \(k_B\) and the speed of light \(c\).

The spectrum (basically intensity against wavelength or frequency) of the light from the sun (in yellow) and a blackbody with the same temperature (grey). I'm actually surprised by how similar they are.
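That similarity can be checked directly. Here is a minimal sketch (my own illustration, not part of the guest post) that evaluates the Planck law at the Sun's approximate surface temperature and locates the peak of the spectrum by brute force; it lands near 500 nm, squarely in the visible:

```python
import math

H = 6.626e-34   # Planck constant (J s)
KB = 1.381e-23  # Boltzmann constant (J / K)
C = 2.998e8     # speed of light (m / s)

def planck(wavelength_m, temp_k):
    """Planck spectral radiance of a black body, per unit wavelength."""
    prefactor = 2.0 * H * C**2 / wavelength_m**5
    occupation = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return prefactor / occupation

T_SUN = 5778.0  # approximate surface temperature of the Sun (K)

# Brute-force scan from 100 nm to 2000 nm for the peak of the spectrum
wavelengths = [i * 1e-9 for i in range(100, 2001)]
peak = max(wavelengths, key=lambda w: planck(w, T_SUN))
print(round(peak * 1e9), "nm")  # peaks near 500 nm, in the visible
```

The same scan with the CMB temperature (about 2.7 K) would put the peak at millimetre wavelengths instead, which is why it takes a microwave telescope rather than an optical one to see it.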

Limits of the Planck-spectrum

Although criticised at the time by many physicists as phenomenological, the high-energy part of the Planck-spectrum is relatively straightforward to understand, as had been realised by Wilhelm Wien. Starting with the result that photons, as relativistic particles, carry energies proportional to their frequency as well as momenta inversely proportional to their wave length (the constant of proportionality in both cases being the Planck-constant h), imposing isotropy of the photon momenta and assuming a thermal distribution of energies according to Boltzmann leads directly to Wien's result. This is an excellent fit at high photon energies but shows discrepancies at low photon energies, implying that in this regime the system exhibits quantum behaviour of some type.
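In modern notation (added by me for reference), the full Planck law and the high-energy limit that Wien found are

\[
B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T} - 1}
\;\longrightarrow\;
\frac{2h\nu^3}{c^2}\, e^{-h\nu/k_B T}
\qquad (h\nu \gg k_B T),
\]

i.e. when the photon energy \(h\nu\) greatly exceeds the thermal energy \(k_B T\), the Bose-Einstein occupation factor reduces to a simple Boltzmann one, which is exactly the assumption in Wien's derivation. The two expressions part company at low photon energies, where the occupation numbers are large and the quantum statistics of photons can no longer be ignored.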

Monday, February 3, 2014

Over the course of my 'human machine' series of posts I've tried to convey the intricacy and beauty of our biological engineering, and demonstrate that we are incredibly well-engineered machines whose complexity and originality go all the way down to the atomic level. In this week's post, I will be exemplifying this with one of the best cases that I can think of: how we transport oxygen around our bodies. I feel that this is a great story to tell because it is one that most people might think they know well, but that is actually far more complex and subtle than it appears, and that demonstrates how our lives are highly dependent on perfectly evolved processes working on the subatomic scale.

"It will have blood, they say."

I'm sure that anyone reading this blog is fully aware that we need oxygen to survive (although if you want a more detailed explanation of exactly why then I direct your attention to a previous post of mine available here), and anyone remembering their primary school biology will know that oxygen is transported around the body by the circulatory system, i.e. the blood. Most of the cells within your blood are the famous red blood cells (so named to distinguish them from the immune cells - the white blood cells), which are, unsurprisingly, responsible for blood's distinctive colour - earning them the respect of horror movie aficionados everywhere. You have roughly 20-30 trillion red blood cells in you as you read this, each of which is about 7 microns (i.e. 7 millionths of a metre) in diameter. They shoot around your body, taking roughly 20 seconds to make one circulation, and have just one job: take oxygen from the lungs (where there's lots of it) to the tissues (where there's not). So specific are they to this job that they don't even bother having a nucleus, thereby removing all possibility of them doing anything else.

Wednesday, January 22, 2014

The video above is a trailer of an upcoming documentary about CERN and the discovery of the Higgs particle. This documentary looks wonderful and important. CERN has triumphed again at outreach and is simply leagues ahead of basically everyone else in science when it comes to this sort of thing. If anyone is surprised or wonders how CERN is able to get such a relatively large sum of science funding (though only relative to other science funding) then don't be. This sort of thing matters and makes a difference. People care about CERN because they know about CERN and they know about CERN because documentaries like this are made, made well, marketed well and received well.

The documentary itself will be released March 5, in New York, and hopefully will be viewable in most major locations, eventually, after that.

My only gripe is that it is coming 18 months after the Higgs discovery. I know that part of the motivation for this is that people want to make sure the science is definitely true before disseminating it, otherwise things can become confusing for the less engaged viewer. However, in July 2012 those guys were reasonably sure that they'd found something. This research is owned as much by the public as it is by the researchers. CERN did do a great job on that day by holding press conferences, announcing the discovery live, with live web-streams, and with public-level discussions, in the moment, of what the implications were. And, of course, this is all great, and I love CERN for it. But maybe it can be done even better.

Here's (potentially) how...

This documentary will probably be reasonably widely viewed. It looks like it is potentially headed for some major awards and it is being reviewed very favourably by a bunch of major newspapers and film critics.

Imagine if the film had been released, and widely viewed, immediately prior to the discovery's announcement, and the climax of the film was all the researchers, scientists, students, engineers, and everyone involved in this experiment waiting, full of anticipation, not knowing the result. The viewer now has a reasonable understanding of what the researchers were looking for and how they were hoping to find it. Now everyone is waiting, full of anticipation, not knowing the result. Then, we cut to the actual, live, not-even-the-majority-of-the-scientists-know-the-result announcement of the detection. The general viewer would then share in this discovery, which their taxes paid for (and whose future taxes will pay for future experiments), in the moment.

That's not just great for science outreach, it is genuinely good theatre for everyone involved (even if there isn't a detection). But most importantly it allows this sharing of not just the result, but the acquisition of the result. The public feels like they were there, like they took part, like it is also their discovery. And, to bring back the bottom line, when funding is next being decided, they want to be able to contribute to, and participate in, more discoveries like this.

Instead, people could tune in to the discovery and see the researchers, scientists, and everyone else, and their excitement, without really being able to share in it.

Having said all of that, 18 months isn't that long. So, when the documentary is released, go watch it, and remember that this stuff happened less than two years ago. This is the present.

Tuesday, January 14, 2014

[This carries on from a post yesterday where I attempted to explain what inflation has to do with a multiverse]

Is that it?

You might be thinking: "OK, that's a toy-toy model about how a multiverse might come from an inflationary model. Cool. But are there any non-toy models?"

As far as I'm aware, no. And this is where I definitely agree with Peter that, although it is certainly possible to generate a multiverse, it definitely isn't inevitable. In fact, if anyone reading this does know of any full models where a multiverse is generated, with a set of vacua with different energies, please let me know (even if it's just a complete toy model).

In which case, you might now be wondering why there is so much excitement amongst some cosmologists about multiverses. Why do some physicists want it so much? There are two reasons I can think of. The first is that the multiverse, coupled with an anthropic principle, can explain why the cosmological constant has the value it does. If the true model of inflation generated Big Bangs in many vacua (i.e. more than \(10^{130}\) vacua), then, even though most of those vacua will have large vacuum energies, the Big Bangs that occur in them can't support life. Therefore we would expect to find ourselves in a Big Bang bubble where the cosmological constant was small, but just big enough to be detected. And this is actually exactly what we see. [Edit: As Sesh points out in a comment, an additional assumption is required to conclude that the cosmological constant should be both small and measurable. This assumption is that the distribution of vacuum energies in the multiverse favours large energies. See the comment and replies for discussion. Thanks Sesh.]

The second reason multiverses are popular is that there is a candidate for where this absurdly large number of possible minima could come from, and this is string theory. In fact, string theory predicts many more than \(10^{130}\) possible vacua.

Summary

So, that's it. A multiverse needs two things: a way that multiple possible types of universe are possible; and a way to make sure that these universes all actually come into existence. String theory suggests that there may indeed be multiple possible types of "universe" (i.e. sets of laws of physics), but it is eternal inflation that would cause many Big Bangs to occur and thus, potentially, to populate these "universes".

Some parting words...

There are some (perhaps even many) scientists who hate the idea of a multiverse and demand that multiverses be stricken from science for being "unfalsifiable" or "unpredictive" (because we can't ever access the other Big Bangs).

I don't understand this mentality.

Forgetting about whether a multiverse is "scientific" or not, what if it is true? What if we do live in a universe that, it just so happens, is part of a multiverse? Would we not want whatever method we use to try to learn about our existence to be able to deal with it? If we want "science" to be something that examines reality, then (if we are in a multiverse) should it not be able to deal with a multiverse? We might not be able to directly measure other Big Bangs, but we can infer their probable existence by measuring other things. [Edit(06/02): I just want to clarify that I'm not meaning to suggest here that science needs to be changed to be able to talk about untestable things, but instead that scientists are justified in trying hard to find ways to test this idea. And that there are ways to test it.]

Suppose we all lived 500 years ago and wanted to know why the Earth is exactly the right distance from the sun to allow life to occur. What explanations could we have come up with?

About a week ago, Peter Coles, another cosmology blogger (who also happens to be my boss' boss' boss - or something), wrote a post expressing confusion about the association of inflation with the multiverse. His post was a reaction to a set of lectures posted on the arXiv by Alan Guth, one of the inventors of inflation (and the person who coined its name). Guth's lectures claimed, in title and abstract, that there is a very obvious link between inflation and a multiverse. Peter had some strong comments to make about this, including the assertion that at some points he is inclined to believe that any association between inflation and a multiverse amounts to a thought pattern of: quantum physics ---> woo ---> a multiverse!

I have some sympathy for Peter's frustration when people over-sell their articles/papers, and I would agree that inflation does not require a multiverse to exist, nor does inflation by itself make a multiverse seem particularly likely or obvious. However, it is also true that, in a certain context, inflation and a multiverse are related. Put simply, through "eternal inflation", inflation provides a mechanism to create many Big Bangs. To get the sort of multiverse this post is about, these different Big Bangs need to have different laws of physics, which is not generic. However, it can occur if the laws of physics depend on how inflation ends, in a way which I will describe below.

As with Peter though, I am unaware of any complete inflationary model that will generate a multiverse. We could both have a blindspot on this, but my understanding is that people expect (or hope?) that complete models of inflation derived from string theory are likely to generate a multiverse, for reasons that I will describe below.

Before that, you're probably wondering what this inflation thing is...

Inflation

The inflationary epoch is a (proposed - although the evidence for it is reasonably convincing) period in the past where the energy density of the universe was almost exactly constant and homogeneous (i.e. the same everywhere) and the expansion of the universe was accelerating. After this inflationary epoch ended, the expansion was decelerating (which isn't surprising given that gravity is normally attractive) and the universe gradually became less and less homogeneous, until it looked like it does today. We like inflation for all sorts of reasons, but for the purpose of this post, the preceding two sentences are all you need to know.
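The claim that the expansion accelerates during inflation and decelerates afterwards can be made precise with the acceleration equation of a homogeneous, expanding universe (this is standard FRW cosmology, not anything specific to this post):

\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right),
\]

where \(a\) is the scale factor, \(\rho\) the energy density, and \(p\) the pressure. The expansion accelerates (\(\ddot a > 0\)) only when \(p < -\rho/3\). During inflation the nearly constant energy density behaves like a cosmological constant with \(p \approx -\rho\), giving acceleration; ordinary matter and radiation have \(p \geq 0\), which is why the expansion decelerates once inflation ends.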

This is the "potential energy density" stored by a hypothetical inflationary field, \(\phi\). The x-axis is the value of \(\phi\). The y-axis is the energy density. The hatched region is where the conditions for "eternal inflation" would be satisfied.
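For the curious, the condition usually quoted for the hatched "eternal inflation" region is that the quantum jitter of the field per Hubble time exceeds its classical roll (this is the standard slow-roll estimate, valid only up to order-one numerical factors, and is not derived in this post):

\[
\frac{H}{2\pi} \gtrsim \frac{|\dot\phi|}{H} \approx \frac{|V'(\phi)|}{3H^2},
\]

which, using \(H^2 \approx V/(3 M_{\rm Pl}^2)\), becomes roughly \(V^3 \gtrsim V'^2 M_{\rm Pl}^6\). Wherever the potential satisfies this, quantum fluctuations push some regions of space back up the potential as fast as they roll down, so some regions always keep inflating even as others exit inflation and reheat into Big Bangs.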