
Our current model of cosmology—the origin and structure of the whole Universe—has survived another major test, with the release of the first 15 months of data from the Planck mission. Planck is a European Space Agency mission, designed to study the cosmic microwave background (CMB), which preserves information about the conditions that persisted immediately after the Big Bang.

Combined with results from prior experiments, Planck has revealed a Universe a little older than previously thought, and with a slightly different balance of ingredients. Although there were no major surprises, some of its data provided stronger hints about inflation, a popular model that explains why the modern Universe looks the way it does. Other measurements ruled out extra neutrinos, provided even stronger evidence for the existence (though not the identity) of dark matter, and indicated that there's a bit less dark energy than previous measurements had suggested.

But amid these incremental changes, there was a bit of a surprise: despite the best hopes of researchers, Planck data does not rule out the existence of anomalous temperature fluctuations at large scales. These may hint at either new physics that influenced the Universe's expansion, or previously unknown foreground sources that alter the CMB.

Planck is a phenomenally sensitive instrument, kept at 0.1 kelvin above absolute zero by the use of liquid helium. By placing it at a stable point beyond Earth's orbit, the probe has a nearly uninterrupted view of the whole sky (this is better than Earth orbit, where it would experience day-night cycles and blockage of much of the sky by Earth's bulk). Since its launch in 2009, Planck has mapped the entire sky in microwave and submillimeter light, creating a 5 million pixel image of the temperature fluctuations from when the Universe was very young.

That image helps tell us the Universe's age. Analysis of Planck data showed a slightly slower cosmic expansion rate, which means it took a bit longer for the Universe to reach its current state. This nudges the age of the Universe upward to 13.81 billion years (with a margin of error of 50 million years either way), a marginal increase over the 13.77 billion year estimate provided by the WMAP mission in combination with other observations.
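The link between expansion rate and age can be sketched with the Hubble time, 1/H0, which sets the scale of the Universe's age (the precise age also depends on the mix of matter and energy). A minimal illustration, assuming a Hubble constant of roughly 67.3 km/s/Mpc, approximately the value Planck reported:

```python
# Rough order-of-magnitude check: the Hubble time 1/H0 sets the scale of
# the Universe's age. H0 ~ 67.3 km/s/Mpc is roughly the Planck value.

MPC_IN_KM = 3.0857e19        # kilometers per megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds per billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Return 1/H0 in billions of years."""
    h0_per_sec = h0_km_s_mpc / MPC_IN_KM   # convert to 1/s
    return 1.0 / h0_per_sec / SECONDS_PER_GYR

print(hubble_time_gyr(67.3))  # ~14.5, the same ballpark as 13.8 billion years
print(hubble_time_gyr(70.0))  # a faster expansion implies a younger Universe
```

A smaller H0 gives a longer Hubble time, which is why a slightly slower measured expansion nudges the inferred age upward.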

Tiny fluctuations have big meanings

After the Big Bang, the cosmos was an opaque plasma, with too much energy to allow electrons and protons to form stable atoms. As it expanded, the Universe cooled, with stable atoms forming around 380,000 years after the Big Bang in an event known as recombination. This act both turned the Universe transparent and released photons, many of which have traveled uninterrupted into the modern era.

While these photons were at far shorter wavelengths (visible and near-infrared light) at the time of recombination, the expansion of the Universe stretched and cooled the photons as well, so today they lie in the microwave portion of the spectrum. For this reason, the light is known as the cosmic microwave background.
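The cooling can be sketched with two standard numbers, a recombination temperature near 3000 K and a redshift near 1090 (both approximations, not figures from this article):

```python
# Sketch of how expansion redshifts the CMB, assuming the standard numbers:
# recombination at T ~ 3000 K and redshift z ~ 1090 (both approximate).

T_RECOMBINATION_K = 3000.0   # plasma temperature when atoms formed (approx.)
Z_RECOMBINATION = 1090.0     # redshift of the last-scattering surface (approx.)

# Photon temperature falls as 1/(1+z) as the Universe expands.
t_today = T_RECOMBINATION_K / (1.0 + Z_RECOMBINATION)
print(t_today)  # ~2.75 K, close to the measured 2.725 K

# Wien's law: the peak wavelength scales inversely with temperature.
WIEN_B_M_K = 2.898e-3  # Wien displacement constant, m*K
peak_wavelength_mm = WIEN_B_M_K / t_today * 1000
print(peak_wavelength_mm)  # ~1 mm: microwave/submillimeter, as Planck observes
```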

The power spectrum measured by Planck, showing the fluctuations in temperature at a range of size scales on the sky. The anomaly previously seen by WMAP lies at the left edge. The three major peaks show the relative contributions of dark energy, ordinary matter, and dark matter.

Planck's mission is to record these photons, which tell us about fluctuations in temperature at the earliest times in the Universe's history. These fluctuations exist at many scales; their physical size on the sky and temperature variation reveal the composition and structure of the Universe. Together, these fluctuations are known as the power spectrum. The largest fluctuations are the result of the total energy content of the Universe—dark energy, dark matter, ordinary matter, light, and anything else. Smaller fluctuations are due to matter alone, and they reveal the relative amounts of ordinary compared to dark matter and how that matter clumps together.

The composition of the cosmos, before and after today's Planck data release.

By comparing theoretical models to the real CMB, cosmologists determined that dark energy—the mysterious substance driving cosmic acceleration—comprises 68.3 percent of the energy content of the Universe, down slightly from earlier estimates of 72.8 percent. Similarly, dark matter's contribution was boosted from 22.7 percent to 26.8 percent, while ordinary matter's share went from 4.5 percent to 4.9 percent.

These adjustments largely came from an increase in the trustworthiness of the power spectrum at smaller scales, where the effect of dark matter becomes more important. Comparing the earlier data on the third peak of the power spectrum to the Planck data leaves no doubt about the existence of dark matter (not that there was much prior to that). The fluctuations at even smaller scales agree with surveys of galaxies that measure the effect of sound waves in the early Universe, known as baryon acoustic oscillations.
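The percentages above are fractions of the critical density, rho_c = 3H0^2/(8*pi*G). A minimal sketch of converting them into physical densities, again assuming H0 of about 67.3 km/s/Mpc (roughly the Planck value):

```python
import math

# Turning the pie-chart fractions into physical densities via the
# critical density rho_c = 3 H0^2 / (8 pi G). H0 ~ 67.3 km/s/Mpc is
# approximately the Planck value; the fractions are from the article.

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.3 * 1000 / 3.0857e22  # Hubble constant converted to 1/s
M_PROTON = 1.673e-27          # proton mass, kg

rho_crit = 3 * H0**2 / (8 * math.pi * G)  # kg/m^3
print(rho_crit)  # ~8.5e-27 kg/m^3, a few proton masses per cubic meter

for name, fraction in [("dark energy", 0.683), ("dark matter", 0.268),
                       ("ordinary matter", 0.049)]:
    density = fraction * rho_crit
    print(name, density, density / M_PROTON, "proton masses per m^3")
```

The three fractions sum to (essentially) 100 percent: in the standard flat-Universe model, everything together must add up to the critical density.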

The weird side of the CMB

One of the major challenges for any CMB measurement is confusion from the foreground. The Milky Way and other sources emit light at some of the same wavelengths as the CMB, so these effects need to be subtracted before a full CMB analysis can be performed. However, even with these foregrounds gone, both the Planck and earlier WMAP data still showed an anomalous set of fluctuations on very large scales (meaning they take up large areas of the total sky). In this case, one part of the sky seems warmer than the rest but contains an unusual "cold spot."

To put it mildly, these anomalies are not well understood. Planck researcher George Efstathiou even suggested in the announcement press conference that they could be hints of an earlier stage of the Universe preceding the Big Bang. Other more prosaic explanations are also possible, including foreground microwave sources from objects that aren't known and haven't been predicted.

Earlier cosmology data hinted that a fourth species of neutrino could exist (in addition to the known electron, mu, and tau neutrino species), but those hints were ambiguous. The new Planck results ruled that idea out, with the most likely number of neutrino species standing at 3. Additionally, Planck placed an upper bound on the sum of all three neutrino masses to be 0.85 eV, or about 0.0002 percent of the mass of an electron.
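The "0.0002 percent" comparison is easy to verify, since the electron rest mass is 511 keV:

```python
# Quick arithmetic check of the neutrino bound quoted above: the sum of
# neutrino masses is at most 0.85 eV, versus an electron mass of 511 keV.

NEUTRINO_SUM_BOUND_EV = 0.85
ELECTRON_MASS_EV = 511_000.0  # 511 keV expressed in eV

fraction = NEUTRINO_SUM_BOUND_EV / ELECTRON_MASS_EV
print(fraction * 100)  # ~0.00017 percent, i.e. about 0.0002 percent
```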

One final piece from the power spectrum, known as the spectral index, indicates how fluctuations depend on size. Models for inflation predict a number slightly smaller than 1 for the spectral index; Planck data shows a value of about 0.96, with errors small enough so they don't overlap with 1. While it's too soon to say that this is a verification of inflation, it's a strong hint in that direction.
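A sketch of what the spectral index means, assuming the standard parameterization of the primordial spectrum, P(k) proportional to (k/k0)^(n_s - 1), where k0 is a reference ("pivot") scale and n_s = 1 would be exactly scale-invariant:

```python
# Illustration of the spectral tilt. The parameterization and the pivot
# scale k0 = 0.05 (in inverse megaparsecs) are conventional assumptions,
# not values from the article; n_s = 0.96 is the quoted Planck result.

def primordial_power(k, k0=0.05, a_s=1.0, n_s=0.96):
    """Relative primordial power at wavenumber k (arbitrary units)."""
    return a_s * (k / k0) ** (n_s - 1.0)

# With n_s = 0.96 < 1, power tilts slightly toward larger scales
# (smaller k), exactly the kind of tilt simple inflation models predict:
print(primordial_power(0.005))  # large scale: slightly more power than 1
print(primordial_power(0.5))    # small scale: slightly less power than 1
```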

The true tests of inflation—and of Planck's full prowess—will be revealed with the release of the polarization data, currently slated for early 2014. The polarization of different regions of the CMB may record a direct indication of the Universe's rapid expansion (called inflation) in its earliest moments. Planck's supply of liquid helium will be exhausted shortly after that, bringing the scientific mission to an end. But the data it has and will provide will keep cosmologists busy for a while.

The full list of Planck papers may be found at the ESA site; all will be published by Astronomy and Astrophysics.

Why does the pie chart have Ordinary Matter, Dark Matter, and Dark Energy but no Ordinary Energy?

Does anyone know if e=mc^2 is only true for ordinary matter and ordinary energy? Is it possible that there is a different formula for the dark versions? How about conversion between the two? Does m(dm) = m(om)? That is, can one unit of dark matter be converted into one unit of ordinary matter?

Why does the pie chart have Ordinary Matter, Dark Matter, and Dark Energy but no Ordinary Energy?

Because while Energy and Mass are interchangeable (and thus you can lump them together), there is no relationship between Matter, Dark Matter, and Dark Energy that we know of.

Put simply, we have *ideas* on what Dark Matter might be, but we can't have formulas or equations without knowing how it behaves or what it is. Right now all we know is that it has mass, but we can't observe it in the same way we observe Matter like stars, gas and dust.

Dark Energy is even worse. We know there's some sort of measurable effect impacting the growth of spacetime itself, and we can even trace it a bit through time, and tell you what the effect is... but we are even more in the dark about what it is.

In this sense, Dark Matter and Dark Energy aren't matter and energy, but rather two things we only vaguely know about that behave *similarly* to matter and energy, but may be unrelated to either one.

Why does the pie chart have Ordinary Matter, Dark Matter, and Dark Energy but no Ordinary Energy?

Does anyone know if e=mc^2 is only true for ordinary matter and ordinary energy? Is it possible that there is a different formula for the dark versions? How about conversion between the two? Does m(dm) = m(om)? That is, can one unit of dark matter be converted into one unit of ordinary matter?

It's all in terms of energy really. If I remember correctly, the contribution to the total energy of the universe from massless things (photons) is negligible. The "ordinary energy" contribution is almost entirely due to baryons (atoms).

As to dark matter, it should not have a different formula. Mass is mass. What makes dark matter special is that it interacts only very weakly with ordinary matter.

cdclndc wrote:

What bothers me is that's about the image resolution of my cell phone. Sounds orders of magnitude off, but I'm not an astronomer.

The two aren't comparable because your camera does not measure radiation in the microwave region. The wavelengths are millimeter scale, instead of nanometers.

Why does the pie chart have Ordinary Matter, Dark Matter, and Dark Energy but no Ordinary Energy?

Does anyone know if e=mc^2 is only true for ordinary matter and ordinary energy? Is it possible that there is a different formula for the dark versions? How about conversion between the two? Does m(dm) = m(om)? That is, can one unit of dark matter be converted into one unit of ordinary matter?

Photon energy density (i.e. "ordinary energy") is much, much smaller than the others; it wouldn't even show up on the pie graph (I forget the exact number, but it's something like 4 orders of magnitude below matter).

The reason for this is that as the universe expands, volume increases according to the cube of the expansion (which reduces matter density), but photon energy density falls according to the fourth power: the wavelengths get stretched, so photons lose energy, in addition to there being fewer of them per unit volume.

Edit: found it. The density of ordinary energy is currently about 0.008% of the total energy affecting the universe. So, negligible.
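The scaling argument in the comment above can be sketched directly, using the quoted ~0.008% figure for the photon contribution today:

```python
# As the scale factor a grows, matter density falls as a^-3 while photon
# energy density falls as a^-4 (the extra power is each photon redshifting).
# The 0.008% starting figure is taken from the comment above.

def matter_density(rho0, a):
    """Matter density at scale factor a (a = 1 today)."""
    return rho0 / a**3

def photon_density(rho0, a):
    """Photon energy density at scale factor a (a = 1 today)."""
    return rho0 / a**4

matter_today, photons_today = 1.0, 0.008 / 100

# Run the clock backward to when the Universe was a tenth its current size:
a = 0.1
ratio = photon_density(photons_today, a) / matter_density(matter_today, a)
print(ratio)  # 10x today's ratio: photons matter ever more in the past
```

The ratio grows as 1/a going back in time, which is why the very early Universe was radiation-dominated even though photons are negligible today.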

The LHe is (was) used to keep the detectors in the High Frequency Instrument (HFI) cold. They are bolometers, which run at a few hundred millikelvin. Once you run out of LHe, they warm pretty quickly. They still work in principle, but above that temperature range they are no longer sensitive enough to see the cosmic microwave background. WMAP -- the previous generation satellite -- didn't observe at these frequencies, so didn't need to be kept as cold. That's why it ran for 9 years before it was shut down due to loss of funding, while Planck only ran for a bit over a year.

Note that the LHe isn't enough to cool something to 100 milliK -- it wraps an inner cooling stage that does the bit from 3K to 0.1K. But that part can't keep up once the LHe is gone, so everything warms up. I don't know the steady state temperature once the coolant is gone, but based on the mirror temperature of Herschel (which is in a similar orbit), I think it's probably 70-80 K. There is a technology to cool things without LHe (pulse-tube cooling), but it uses a lot of power. Still, someday, hopefully we won't need to use LHe on such missions and so they will be able to last a lot longer.

Why does the pie chart have Ordinary Matter, Dark Matter, and Dark Energy but no Ordinary Energy?

Does anyone know if e=mc^2 is only true for ordinary matter and ordinary energy? Is it possible that there is a differnet formula for the Dark version? How about conversion between the? Does m(dm) = m (om)? That is, can 1 unit of dark matter be converted in one unit of ordinary matter?

'Ordinary' Energy is basically part of 'Ordinary' Matter. If there are photons energetic enough to create particle pairs, they do so. In fact, just after the Big Bang, this is what happened: the huge outpouring of energy created all the normal matter we see today, with the later assembly of that matter into light nuclei known as Big Bang Nucleosynthesis (BBN). Actually WAY more than the matter we see today: the Big Bang energy created LOTS of matter and antimatter, and for some yet-to-be-discovered reason (though there are theories) there was *slightly* more matter than antimatter. Most of the matter and antimatter annihilated, with that energy basically becoming the CMB.

The initial Big Bang energy also created dark matter, but how it did that (i.e., the process of getting from normal energy/matter to dark matter), no one knows. That's what experiments like Planck, the LHC, etc. are trying to unravel.

The LHe is (was) used to keep the detectors in the High Frequency Instrument (HFI) cold. They are bolometers, which run at a few 100 milliKelvin. Once you run out of LHe, they warm pretty quickly. They still work in principle, but above the milliK range they are no longer sensitive enough to see the cosmic microwave background.

Indeed. I used a MGTS detector on a run-of-the-mill FTIR spectrometer a few years back that required cooling to 77K. It'd be interesting to know how many hundreds of litres of liquid He that Planck was originally equipped with.

Since its launch in 2009, Planck has mapped the entire sky in microwave and submillimeter light, creating a 5 million pixel image of the temperature fluctuations from when the Universe was very young.

Is that number remotely right? I tried a quick Google search and got nothing, but at least I tried.

What bothers me is that's about the image resolution of my cell phone. Sounds orders of magnitude off, but I'm not an astronomer.

Others have mentioned millimeter wavelengths but I will take it a step further. You are talking about very minute differences in intensity.

Think about taking a picture of a white wall with your cellphone. It would appear almost completely white and very smooth in your cell phone's picture.

Now take a picture with a sensor equivalent to this one. It would appear bumpy, with gray and black spots all over it, and not smooth at all.

In both cases the megapixels are the same, but the sensitivity of the sensor is different. One costs a lot more and requires a lot more effort than the other, like dousing the sensor in liquid helium to keep it colder than the vacuum of space.

It's always delightful to learn that the universe is still "interesting" to some people... Fortunately, not everyone chooses to sit on the uncomfortable pinhead of the faux "omniscience" of early 21st-century science. Science used to be a tool for discovery and the unmasking of mysteries and the vehicle through which we learned that the list of things unknown is far longer than the list of that which is known. Science used to keep us in awe. Now, it is mainly invoked from hubris, and the closed mind is too often its main legacy. Let's hope another dark age is not just around the corner.

It also used to be a tool to gain status, and any competing ideas from "lesser" scientists weren't treated with the respect they deserved, even with evidence to prove their theories.

I don't see it as dismally as you and if anything the average person is more motivated than ever to have at least an elementary understanding of science. A dark age isn't going to happen in an information age; let's worry about that possibly ending instead?

A 5 MP image of the CMB is incredibly accurate. We have 5 million data points in it.

EDIT NOTE: Per Y's response below... we're not even talking about visible light here! END EDIT

Understand these three things:
1) The bigger the sensor, the bigger the lens needs to be.
2) Bigger sensors with lower resolution have more space to receive each pixel.
3) More resolution on a smaller sensor does not mean a better picture.

1) You need a big fat piece of glass (several, in the optics of a lens, actually) to capture light deserving of a big fat sensor. This is why the lens on a pro camera is bigger than the one on the iPhone. The bigger the lens, the more problems for the team, I'm sure.

2) Think about it. You have an area the size of a quarter to record information about a pixel, or an area the size of a pinhead. It's easier to do a better job (capture quality and reliability) with more space.

3) Go look at raw images out of an iPhone vs. raw images out of a professional camera at the same resolution from years ago (back when that resolution was pro). Zoom in and look at them pixel by pixel. They're the same resolution, sure, but the iPhone's resolution was crammed into a tiny dot.

Sensor and lens size > resolution. They probably looked at the first two and asked, "How much resolution can we actually use given the first two, and does that do the job?"

How do you read the angular scale on the power spectrum plot? I noticed that the error bars get progressively larger as this value approaches 90°, and I'm assuming there's a connection...

I am by no means an expert, but there is a fundamental limit to the accuracy we can measure the larger angular scales because at some point you are looking at such a large fraction of the sky that you cannot get a large enough sample size (we only have one universe to look at.) This is called the cosmic variance, and in Weinberg's Cosmology textbook he states that measurements for l<5 probably tell us very little about cosmology.
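The cosmic variance mentioned above has a simple closed form: each multipole l offers only 2l+1 independent samples on one sky, so the fractional uncertainty on the power C_l can never beat sqrt(2/(2l+1)):

```python
import math

# Cosmic variance: with only one sky to measure, multipole l gives just
# 2l+1 independent modes, so the best possible fractional uncertainty on
# the measured power C_l is sqrt(2 / (2l + 1)).

def cosmic_variance_fraction(l):
    """Irreducible fractional uncertainty on the power at multipole l."""
    return math.sqrt(2.0 / (2 * l + 1))

for l in (2, 5, 10, 100, 1000):
    print(l, cosmic_variance_fraction(l))
# l = 2 (the largest scales) is uncertain at the ~60% level no matter how
# good the instrument; by l = 1000 the floor drops below 5%.
```

This is why the error bars in the power spectrum plot balloon toward the largest angular scales, and why low multipoles constrain cosmology so weakly.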

Since its launch in 2009, Planck has mapped the entire sky in microwave and submillimeter light, creating a 5 million pixel image of the temperature fluctuations from when the Universe was very young.

Is that number remotely right? I tried a quick Google search and got nothing, but at least I tried.

What bothers me is that's about the image resolution of my cell phone. Sounds orders of magnitude off, but I'm not an astronomer.

Optical instruments are imaging EM radiation with a far shorter wavelength than the microwaves being observed by Planck. The shorter the wavelength you are imaging, the finer the resolution you can observe with a given size of instrument.

Planck views the universe at between 350µm and 10,000µm which is roughly between 700x and 20,000x longer wavelength than what an optical telescope would observe. All things being equal, the resolution would be 700 to 20,000 times worse than an optical telescope of equivalent aperture. Useful pixel count within the image varies with the square of the resolution so you can see why it would capture far less detail than even a very small optical imaging system such as a cell phone camera.
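The comment above can be made quantitative with a back-of-envelope sketch, assuming Planck's roughly 1.5 m primary mirror and a ~5 arcminute effective beam (approximate figures, for illustration only):

```python
import math

# Back-of-envelope check of the "5 million pixel" figure, under two
# assumptions: diffraction limit theta ~ 1.22 * lambda / D with a ~1.5 m
# mirror, and an effective beam of ~5 arcminutes at the highest frequency.

FULL_SKY_SQ_DEG = 41253.0  # area of the whole sky in square degrees
beam_arcmin = 5.0

# Independent resolution elements = sky area / beam area
sky_sq_arcmin = FULL_SKY_SQ_DEG * 60.0**2
n_pixels = sky_sq_arcmin / beam_arcmin**2
print(n_pixels)  # ~5.9 million, consistent with the article's figure

# Diffraction limit at 857 GHz (350 um wavelength) with a 1.5 m aperture:
theta_rad = 1.22 * 350e-6 / 1.5
print(math.degrees(theta_rad) * 60)  # ~1 arcminute, so a 5' beam is plausible
```

So "5 megapixels" is not a small number here: it is close to the maximum number of independent sky pixels the instrument's beam allows.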

The difference is that Planck measures at frequencies far, far lower than visible light, so the achievable angular resolution is much coarser. The maps can be pixelated as fine as you like, but the highest-resolution features (from the highest frequency channel, 857 GHz) are limited to around 5 arcminutes.

Y

MrDetermination wrote:

Dilbert wrote:

cdclndc wrote:

Quote:

Since its launch in 2009, Planck has mapped the entire sky in microwave and submillimeter light, creating a 5 million pixel image of the temperature fluctuations from when the Universe was very young.

Is that number remotely right? I tried a quick Google search and got nothing, but at least I tried.

What bothers me is that's about the image resolution of my cell phone. Sounds orders of magnitude off, but I'm not an astronomer.

5MP image of the CMB is incredibly accurate. We have 5 million data points in it.

Understand these three things:
1) The bigger the sensor, the bigger the lens needs to be.
2) Bigger sensors with lower resolution have more space to receive each pixel.
3) More resolution on a smaller sensor does not mean a better picture.

1) You need a big fat piece of glass (several, in the optics of a lens, actually) to capture light deserving of a big fat sensor. This is why the lens on a pro camera is bigger than the one on the iPhone. The bigger the lens, the more problems for the team, I'm sure.

2) Think about it. You have an area the size of a quarter to record information about a pixel, or an area the size of a pinhead. It's easier to do a better job (capture quality and reliability) with more space.

3) Go look at raw images out of an iPhone vs. raw images out of a professional camera at the same resolution from years ago (back when that resolution was pro). Zoom in and look at them pixel by pixel. They're the same resolution, sure, but the iPhone's resolution was crammed into a tiny dot.

Sensor and lens size > resolution. They probably looked at the first two and asked, "How much resolution can we actually use given the first two, and does that do the job?"

The LHe is (was) used to keep the detectors in the High Frequency Instrument (HFI) cold. They are bolometers, which run at a few hundred millikelvin. Once you run out of LHe, they warm pretty quickly. They still work in principle, but above that temperature range they are no longer sensitive enough to see the cosmic microwave background. WMAP -- the previous generation satellite -- didn't observe at these frequencies, so didn't need to be kept as cold. That's why it ran for 9 years before it was shut down due to loss of funding, while Planck only ran for a bit over a year.

Note that the LHe isn't enough to cool something to 100 milliK -- it wraps an inner cooling stage that does the bit from 3K to 0.1K. But that part can't keep up once the LHe is gone, so everything warms up. I don't know the steady state temperature once the coolant is gone, but based on the mirror temperature of Herschel (which is in a similar orbit), I think it's probably 70-80 K. There is a technology to cool things without LHe (pulse-tube cooling), but it uses a lot of power. Still, someday, hopefully we won't need to use LHe on such missions and so they will be able to last a lot longer.

Hi,

Planck has a multi-stage all-active cooling chain, and does not use LHe in the traditional "wet" sense of cooling everything by boiling off a large bucket of cryogenic liquid. HFI was cooled to 100 millikelvin by a custom dilution refrigerator using a mixture of a few liters of liquid helium isotopes. The other refrigerators are completely closed and are still operational.

BTW the HFI operated for nearly 2.5 years - only the first 15 months were released today.

Finally! The one-year delay seems to be partly the instrument performing well (meaning they now have five sky passes instead of the nominal two), partly problems extracting the polarization (still being worked on), and partly the large-scale anomalies.

That said, the results are fantastic. Not only do they see inflation directly and test it unambiguously (scalar index < 1) at 6 sigma, they see dark energy directly and test it unambiguously _from the CMB alone_ at 10 sigma. And the observation of, and fit to, 7 acoustic peaks is ridiculously good.

This is far better than WMAP of course, which Planck only narrowly beats on resolving inflation within the constraints. (Scalar index < 1 went from below 3 sigma to just above 5 sigma in WMAP's final 9-year data release in 2012.)

In this context I must note that I'm not sure why polarization data should be considered "the true test of inflation". I think it may show you niceties like whether spacetime can fluctuate (tensor modes) or if all structure formation comes from primordial fluctuations in the inflaton field (scalar modes).

But as I interpret the Planck papers the latter case is the primary standard cosmology model: "In the base ΛCDM model, the fluctuations are assumed to be purely scalar modes." [ http://arxiv.org/abs/1303.5076 , p39.] (If tensor modes exist, they can be nice in that they give you the inflation expansion rate and energy scale: http://cosmology.berkeley.edu/~yuki/CMBpol/CMBpol.htm .) And the smoothness of spacetime beyond Planck scales, i.e. no spacetime fluctuations, is tentatively consistent with timing measurements of cosmological supernova photons. (And, I hear, their polarization, if supersymmetry is what gives us dark matter.)

The precision of the cosmological age is also ridiculous, 0.3% or 40 million years, rivaling our dating of the Earth at some 1% or 50 million years. [ http://en.wikipedia.org/wiki/Age_of_the_Earth ] A nitpick on the article here: we should compare the consensus age from Planck and other instruments of 13.80 Ga with the similar pre-Planck consensus age of 13.77 Ga. [ http://arxiv.org/abs/1303.5062 , p36.]

EDIT: The earlier links had been repointed to arxiv papers, with no conversion at the site. ESA, that is stupid. :-/ I have replaced them and hope they will keep this time.

In this context I must note that I'm not sure why polarization data should be considered "the true test of inflation". I think it may show you niceties like whether spacetime can fluctuate (tensor modes) or if all structure formation comes from primordial fluctuations in the inflaton field (scalar modes).

Well it's those tensor modes etc that come from the polarization data that will allow for much tighter constraints and thus rule out more inflationary models and find out which ones are more likely correct. So in that sense the true test for inflationary models will only come with the polarization data.

As a test of the general concept of inflation, yeah, this has pretty much nailed it, not that it was in any significant danger beforehand.

Coming back to the pie chart, when these three things are compared and their proportions are determined, what is it that's being measured to add them up to 100%? If it's mass, I get how ordinary and dark matter has mass, but I don't get how dark energy has mass that can be compared against the other two. How do you compare a repulsive energy vs. a mass, it seems like we're adding apples and oranges.

The difference is that Planck measures at frequencies far, far lower than visible light, so the achievable angular resolution is much coarser. The maps can be pixelated as fine as you like, but the highest-resolution features (from the highest frequency channel, 857 GHz) are limited to around 5 arcminutes.

Y

Yes. Of course you're right! Duh! Argh... newborn baby brain. Not think good with no sleep

The liquid He is used for what?
- Guarding the sensor from the on-board electronics?
- Active cooling below the temperature floor of the part of space it's in?

Active cooling. The passively cooled Webb telescope is only going to get down to ~20 K even with massive sunshields.

I'd remembered something slightly different. A little googling shows that the passive cooling is good for about 40 K, and that there is active cooling for the sensors. It's called a cryocooler, but I can't find any references to a cryogen; it appears to be a very sophisticated heat pump.

"The MIRI has three Arsenic-doped Silicon (Si:As) detector arrays. The camera module provides wide-field broadband imagery, and the spectrograph module provides medium-resolution spectroscopy over a smaller field of view compared to the imager. The nominal operating temperature for the MIRI is 7K. This level of cooling cannot be attained using the passive cooling provided by the Thermal Management Subsystem. Instead, there is a two-step process: A Pulse Tube precooler gets the instrument down to 18K, and a Joule-Thomson Loop heat exchanger knocks it down to 7K."

It does not appear quite accurate to say that the Webb telescope is passively cooled, it has both passive and active components.

- I expect the dipole asymmetry will grab people, despite the Planck team being able to test the isotropic standard cosmology better than ever before on large spatial scales.

What is notable is that the high-l (small spatial scale) data test as constant such parameters as spatial flatness to larger scales than the observable universe (give or take cosmic variance). So maybe the dipole is an observation of some nearby anomaly that inflation expansion didn't quite erase. Meaning it could be compatible but hard to nail down.

- The now ascertained cold spot, which like the dipole was present in the WMAP data but could be attributed to instrument uncertainties, will likely make many (especially crackpots) claim premature evidence of multiverses in the form of vacuum bubble collisions. Beyond that it will be interesting when the teams that have looked at this in the uncertain WMAP data look again:

"Cutting to the chase, we were first able to use simulated CMB data containing bubble collisions to rule out a range of parameter space as inconsistent with WMAP data. As it turned out, the existence of a temperature discontinuity at the boundary of the disc greatly increases our ability to make a detection. We did not find any circular temperature discontinuities in the WMAP data.

While we didn’t make any clear detections of bubble collisions, we did find four features in the WMAP data that are better explained by the bubble collision hypothesis than by the standard hypothesis of fluctuations in a nearly Gaussian field. We assess which of the two models better explain the data by evaluating the Bayesian evidence for each. The evidence correctly accounts for the fact that a more complex model (the bubble collisions, in this case) will generally fit the data better simply because it has more free parameters. This is the self-consistent statistical equivalent of applying Ockham’s Razor. In addition, using information from multiple frequencies measured by the WMAP satellite and a simulation of the WMAP experiment, we didn’t find any evidence that these features can be attributed to astrophysical foregrounds or experimental systematics.

One of the features we identified is the famous Cold Spot, which has been claimed as evidence for a number of theories including textures, voids, primordial inhomogeneities, and various other candidates. A nice aspect of our approach is that it can be used to compare these hypotheses, without making arbitrary choices about which features in the CMB need explaining (focusing on the Cold Spot is an a posteriori choice). We haven’t done this yet, but plan to soon.

While identifying the four features consistent with being bubble collisions was an exciting result, these features are on the edge of our sensitivity thresholds, and so should be considered only as a hint that there might be bubble collisions to find in future data. The good news is that we can do much more with data from the Planck satellite, which has better resolution and lower noise than the WMAP experiment. There is also much better polarization information, which provides a complementary signal of bubble collisions (found by Czech et. al. – arXiv:1006.0832). We’ll be gearing up to analyze this data, and hopefully there will be more to the story then."

- The work to constrain inflation physics which started with WMAP is going on strong with Planck. Already the simplest models are excluded at 2 sigma, but slow roll single field is likeliest and so are _concave_ (no or low tensor mode) hill like field potentials. [ http://arxiv.org/abs/1303.5082 ; fig 1, p 10] Phunny physics.

And as always, supersymmetric models are marginal. :-|


That said, the results are fantastic. Not only do they see inflation directly and test it unambiguously (scalar index < 1) at 6 sigma, they see dark energy directly and test it unambiguously _from the CMB alone_ at 10 sigma. And the observation of, and fit to, 7 acoustic peaks is ridiculously good.

I think "directly" is a little overstated. It's more accurate to say that the results are consistent (to those precisions) with the current models of the CMB with regards to the effect of inflation and the effect of dark energy. That's still pretty solid evidence, but especially in the case of dark energy, we still haven't detected the actual phenomenon directly.

I'd sure like to see that picture on a sphere I could rotate so I don't have to deal with the projection idiosyncrasies....

Actually, the projection they use in the map (Hammer-Aitoff) was used precisely to ameliorate projection idiosyncrasies, insofar as areas are equal across the map (angles, however, are not).

If you're really het up for a flat map to wrap around a sphere, there are several graphics packages that can convert a Hammer projection into a rectangular projection (Matthew's Map Projection Software comes to mind.)
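For the curious, the Hammer-Aitoff projection mentioned above has a simple closed form. A minimal sketch (longitude in [-pi, pi], latitude in [-pi/2, pi/2]; this is the standard textbook formula, not code from any of the packages named):

```python
import math

# Forward Hammer-Aitoff projection: equal-area, which is why CMB maps
# use it -- every patch of sky gets the same map area (angles distort).

def hammer_project(lam, phi):
    """Map longitude lam and latitude phi (radians) to planar (x, y)."""
    denom = math.sqrt(1 + math.cos(phi) * math.cos(lam / 2))
    x = 2 * math.sqrt(2) * math.cos(phi) * math.sin(lam / 2) / denom
    y = math.sqrt(2) * math.sin(phi) / denom
    return x, y

print(hammer_project(0.0, 0.0))      # center of the map -> (0.0, 0.0)
print(hammer_project(math.pi, 0.0))  # right edge of the map, on the equator
```

Inverting these equations (or resampling pixel by pixel) is all the graphics packages mentioned above need to do to re-wrap such a map onto a sphere.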