
For decades physicists have been convinced that most of our universe is invisible, but how do we know that if we can’t see it? I want to explain the thought process that leads one to believe in a theory via indirect evidence. To keep this post under 3000 words, I will simply say that either our theories of gravity are wrong, or the vast majority of the matter in our universe is invisible. That most of the matter in the universe is invisible, or “dark”, is actually well supported. Dark matter as a theory fits the data much better than modifications to gravity (with a couple of possible exceptions, like mimetic dark matter). This isn’t necessarily surprising; frankly, it would be a bit arrogant to assume that only matter similar to us exists. Particle physicists have known for a long time that not all particles are affected by all the fundamental forces. For example, the neutrino is invisible as it doesn’t interact with the electromagnetic force (or the strong force, for that matter). So the neutrino is actually a form of dark matter, though it is much too quick and light to make up most of it.

The standard cosmological model, ΛCDM, has had tremendous success explaining the evolution of our universe. This is what most people refer to when they think of dark matter: the CDM stands for “cold dark matter”. It is this consistency, the ability to explain observations from almost every cosmological epoch, that makes dark matter so compelling. We see the effect of dark matter across the sky in the CMB, in the helium formed in primordial nucleosynthesis, in the very structure of the galaxies. We see dark matter a minute after the big bang, a million years later, a billion years later, and even today. Simply put, when you add in dark matter (and dark energy), almost the entirety of cosmological history makes sense. While there are some elements that seem to be lacking in the ΛCDM model (small-scale structure formation, core vs cusp, etc.), these are all relatively small details that seem to have solutions in either simulating normal matter more accurately or making small changes to the exact nature of dark matter.

Dark matter is essentially like a bank robber: the money is gone, but no-one saw the theft. Not knowing exactly who stole the money doesn’t mean that someone isn’t living it up in the Bahamas right now. The ΛCDM model doesn’t really care about the fine details of dark matter: things like its mass, exact interactions and formation are mostly irrelevant. To the astrophysicist, there are really two features that they require: dark matter cannot have strong interactions with normal matter (electromagnetic or strong forces), and dark matter must be moving relatively slowly (or “cold”). Anything that has these properties is called a dark matter “candidate” as it could potentially be the main constituent of dark matter. Particle physicists try to come up with these candidates, and hopefully find ways to test them. Ruling out a candidate is not the same as ruling out the idea of dark matter itself, it is just removing one of a hundred suspects.

Being hard to find is a crucial property of dark matter. We know dark matter must be a slippery bastard, as it doesn’t interact via the electromagnetic or strong forces. In one sense, assuming we can discover dark matter in our lifetime is presumptuous: we are assuming that it has interactions beyond gravity. This is one of a cosmologist’s fondest hopes as without additional interactions we are screwed. This is because gravity is by far the weakest force. You can test this yourself – go to the fridge, and get a magnet. With a simple fridge magnet, weighing only a few grams, you can pick up a paperclip, overpowering the 6*10^24 kg of gravitational mass the earth possesses. Trying to get a single particle, weighing about the same as an atom, to show an appreciable effect only through gravity is ludicrous. That being said, the vast quantities of dark matter strewn throughout our universe have had a huge and very detectable gravitational impact. This gravitational impact has led to very successful and accurate predictions. As there are so many possibilities for dark matter, we try to focus on the theories that link into other unsolved problems in physics to kill two birds with one stone. While this would be great, and is well motivated, nature doesn’t have to take pity on us.
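The fridge-magnet comparison can be made quantitative. As a minimal sketch (using standard physical constants; none of these numbers appear in the post), compare the electric and gravitational attraction between two protons:

```python
# Compare the electrostatic and gravitational forces between two protons.
# Constants are standard SI values; this is an illustrative estimate only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
m_p = 1.673e-27      # proton mass, kg
q_p = 1.602e-19      # proton charge, C

# Both forces fall off as 1/r^2, so their ratio is independent of distance.
ratio = (k_e * q_p**2) / (G * m_p**2)
print(f"electric/gravitational force ratio ~ {ratio:.1e}")
```

The ratio comes out around 10^36, which is why a few grams of magnet can beat the entire Earth, and why detecting a single particle through gravity alone is hopeless.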

So what do we look for in indirect evidence? Essentially, you want an observation that is predicted by your theory, but is very hard to explain without it. If you see an elephant-shaped hole in your wall, elephant-shaped footprints leading outside, and all your peanuts gone, you are pretty well justified in thinking that an elephant ate your peanuts. A great example of this is the acoustic oscillations in the CMB. These are huge sound waves, the echo of the big bang in the primordial plasma.

The exact frequency of this is related to the amount of matter in the universe, and how this matter interacts. Dark matter makes very specific predictions about these frequencies, which have been confirmed by measurements of the CMB. This is a key observation that modified gravity theories tend to have trouble explaining.

The combination of the strong indirect evidence for dark matter, the relative simplicity of the theory and the lack of serious alternatives means that research into dark matter theories is the most logical path. That is not to say that alternatives should not be looked into, but to disregard the successes of dark matter is simply foolish. Any alternative must match the predictive power and observational success of dark matter, and preferably have a compelling reason for being ‘simpler’ or philosophically nicer than dark matter. While I spoke about dark matter, this situation actually occurs all the time in science: natural selection, atomic theory and the quark model have all been in the same position at one time or another. A direct discovery of dark matter would be fantastic, but is not necessary to form a serious scientific consensus. Dark matter is certainly mysterious, but ultimately not a particularly strange idea.

Disclaimer: In writing this for a general audience, of course I have to make sacrifices. Technical details like the model dependent nature of cosmological observations are important, but really require an entire blog post to themselves to answer fully.

It seems some disagreements are interminable: the Anabaptists versus the Calvinists, capitalism versus communism, the Hatfields versus the McCoys, or string theorists versus their detractors. It is the latter I will discuss here although the former may be more interesting. This essay is motivated by a comment in the December 16, 2014 issue of Nature by George Ellis and Joe Silk. The comment takes issue with attempts by some string theorists and cosmologists to redefine the scientific method by eliminating the need for experimental testing and relying on elegance or similar criteria instead. I have a lot of sympathy with Ellis and Silk’s point of view but believe that it is up to scientists to define what science is and that hoping for deliverance by outside people, like philosophers, is doomed to failure.

To understand what science is and what science is not, we need a well-defined model for how science behaves. Providing that well-defined model is the motivation behind each of my essays. The scientific method is quite simple: build models of how the universe works based on observation and simplicity, then test them by comparing their predictions against new observation. Simplicity is needed since observations underdetermine the models (see, for example, Willard Quine’s (1908 – 2000) essay “Two Dogmas of Empiricism”). Note also that what we do is build models: the standard model of particle physics, the nuclear shell model, string theory, etc. Quine refers to the internals of the models as myths and fictions. Henri Poincaré (1854 – 1912) talks of conventions, and Hans Vaihinger (1852 – 1933) of the philosophy of “as if”, otherwise known as fictionalism. Thus it is important to remember that our models, even the so-called theory of everything, are only models and not reality.

The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

Partly filled valence orbitals for both neutrons and protons appear at energies over the filled inert core orbitals, in the shell model of the atomic nucleus

It is the feedback loop of observation, model building and testing against new observation that defines science and gives it its successes. Let me repeat: the feedback loop is essential. To see why, consider the example of astrology and why scientists reject it. Its practitioners consider it to be the very essence of elegance. Astrology uses careful measurements of current planetary locations and mathematics to predict their future locations, but it is based on an epistemology that places more reliance on the eloquence of ancient wisdom than on observation. Hence there is no attempt to test astrological predictions against observations; that would go against its fundamental principles of eloquence and the superiority of received knowledge over observation. Just as well, since astrological predictions routinely fail. Astrology’s failures provide a warning to those who wish to replace prediction and simplicity with other criteria. The testing of predictions against observation and simplicity are hard taskmasters, and it would be nice to escape their tyranny, but that path is fraught with danger, as astrology illustrates. The feedback loop from science has even been picked up by the business management community and built into the very structure of management standards (see ISO Annex SL, for example). It would be a shame if management became more scientific than physics.

But back to string theory. Gravity has always been a tough nut to crack. [Sir] Isaac Newton (1643 – 1727) proposed the decidedly inelegant idea of instantaneous action at a distance, and it served well until 1905 and the development of the special theory of relativity. Newton’s theory of gravity and special relativity are inconsistent, since the latter rules out instantaneous action at a distance. In 1916, Albert Einstein (1879 – 1955), with an honorable mention to David Hilbert (1862 – 1943), proposed the general theory of relativity to solve the problem. In 1919, the general theory’s prediction for the bending of light by the sun was confirmed by an observation by [Sir] Arthur Eddington (1882 – 1944). Notice the progression: conflict between two models, proposed solution, confirmed prediction, and then acceptance.

Like special relativity and Newtonian gravity, general relativity and quantum mechanics are incompatible with one another. This has led to attempts to generate a combined theory. Currently string theory is the most popular candidate, but it seems to be stuck at the stage general relativity was in 1917, or maybe even 1915: a complicated (some would say elegant, others messy) mathematical theory unconfirmed by experiment. Although progress is definitely being made, string theory may stay where it is for a long time. The problem is that the natural scale of quantum gravity is the Planck mass, and this scale is beyond what we can explore directly by experiment. However, there is one place where quantum gravity may have left observable traces, and that is in its role in the early Universe. There are experimental hints that may indicate a signature in the cosmic microwave background radiation, but we must await further experimental results. In the meantime, we must accept that current theories of quantum gravity are doubly uncertain. Uncertain, in the first instance, because, like all scientific models, they may be rendered obsolete by a new understanding, and uncertain, in the second instance, because they have not been experimentally verified through testable predictions.

Let’s now turn to the question of multiverses. This is an even worse dog’s breakfast than quantum gravity. The underlying problem is the fine-tuning of the fundamental constants needed in order for life as we know it to exist. What is needed for life as we do not know it to exist is unknown. There are two popular ideas for why the Universe is fine-tuned. One is that the constants were fine-tuned by an intelligent designer to allow for life as we know it. This explanation has the problem that by itself it can explain anything but predict nothing. An alternative is that there are many possible universes, all existing, and we are simply in the one where we can exist. This explanation has the same problem: by itself it can explain anything but predict nothing. It is ironic that to avoid an intelligent designer, a solution based on an equally dubious just-so story is proposed. Since we are into just-so stories, perhaps we can compromise by having the intelligent designer choose one of the multiverses as the one true Universe. This leaves the question of who the one true intelligent designer is. As an old farm boy, I find the idea that Audhumbla, the cow of the Norse creation myth, is the intelligent designer to be the most elegant. Besides, the idea of elegance as a deciding criterion in science has a certain bovine aspect to it. Who decides what constitutes elegance? Everyone thinks their own creation is the most elegant. This is only possible in Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average (A Prairie Home Companion, Garrison Keillor (b. 1942)). Not being in Lake Wobegon, we need objective criteria for what constitutes elegance. Good luck with that one.

Some may think the discussion in the last paragraph is frivolous, and quite by design it is. This is to illustrate the point that once we allow the quest for knowledge to escape from the rigors of the scientific method’s feedback loop all bets are off and we have no objective reason to rule out astrology or even the very elegant Audhumbla. However, the idea of an intelligent designer or multiverses can still be saved if they are an essential part of a model with a track record of successful predictions. For example, if that animal I see in my lane is Fenrir, the great gray wolf, and not just a passing coyote, then the odds swing in favor of Audhumbla as the intelligent designer and Ragnarok is not far off. More likely, evidence will eventually be found in the cosmic microwave background or elsewhere for some variant of quantum gravity. Until then, patience (on both sides) is a virtue.

How do you make a world? This is the purview of theologians, science fiction authors and cosmologists. Broadly speaking, explaining how the universe evolves is no different from any other problem in science: we need to come up with an underlying theory, and calculate the predictions of this theory to see if they match the real world. The tricky part is that we have no observations of the universe earlier than about 300,000 years after the big bang. Particle colliders give us a glimpse of conditions far earlier than that, but to a cosmologist even the tiniest fraction of a second after the big bang is vitally important. Any theorist who tries their hand at this is left with a trail of discarded models before reaching a plausible vision of the universe. Of course, how and why one does this is deeply personal, but I would like to share my own small experience with trying to make a universe.

For me, the end was very clear; I wanted to design a universe that explained the existence of dark and visible matter in a particular way. Asymmetric dark matter is a class of theories that try to link the origins of dark and visible matter, and my goal was to explore a new way of creating matter in the universe. So what do you start with? As a particle physicist, the most obvious (but not the only) building blocks at our disposal are particles themselves. Starting with the Standard Model, the easiest way to build a new theory is to just start adding particles.

The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

While adding a new particle every time you want to explain a new phenomenon seems indulgent (and some people take this to excess), historically this is a very successful tactic. The neutrino, W and Z bosons, the charm, bottom and top quarks, and Higgs boson were all introduced before they were discovered to explain various theoretical or experimental problems. While back in 1930 [Wolfgang] Pauli apologised for the bad grace of introducing a particle no one had ever seen, theoretical physicists have well and truly overcome this reticence.

So what ingredients does a dark matter model need? Clearly there must be a viable candidate for dark matter, so at least one new particle must be introduced. While the simplest case is for dark matter to be made of one particle, there is no reason for a substance that makes up 85% of matter in the universe to be any simpler than the matter that we are made of. But, for the sake of simplicity, let us say just one particle for now. For my work to explain the creation of visible matter as well as dark matter, there must also be some interaction between the two. To do this there must be a “mediator”, something that communicates between the visible and the dark. So at least two particles are necessary. Now, two particles doesn’t sound so bad, not when we already know of 17.

The model I was originally going to study (one that already existed) was like this, with dark matter interacting with neutrons. Unfortunately, this is also when the realities of model building sank in; it is rare for any model to be this simple and still work as advertised. Under closer scrutiny it turned out that there was no satisfactory way to make the dark matter stick around for the lifetime of the universe – it quickly decayed away unless you made some theoretical sacrifices I wasn’t comfortable making. Thus began my first foray into model building.

The first hurdle for a first-time model builder is simply the vast size of the literature itself. I was constantly worried that I had missed some paper that had beaten me to it, or had already considered some aspect of my work. Even though this turned out not to be the case, even the simplest of possible universes has a lot of complicated physics going on in a variety of areas, and any single aspect of the model failing could mean scrapping the whole thing. Most of these failure points are already known to those experienced in these matters, but a first-timer has to find them out the hard way.

In the weeks I spent trying to come up with a model worth studying in detail, I had almost a dozen “Eureka” moments, which were almost always followed by me finding a flaw in the next few days. When you have no strict time limits, this is simply disheartening, but occasionally you find flaws, or potential flaws, when you are already significantly invested and close to a deadline (such as thesis submission). Unfortunately, the only real way to avoid this is to develop a level of care bordering on paranoia, and to try to think of all the possible ways a theory might implode before getting bogged down in calculations. Of course, some things are inherently unforeseeable (otherwise, why call it research?), but many can be divined beforehand with enough experience and thought. This was driven home to me after spending a month working out the predictions of a theory, only to discover that I had drastically underestimated the consequences of a minor change to the model. Fortunately, in research little is wasted; even though no part of that work appeared in the final version of my thesis, the methods I learnt certainly did.

My pride and joy, a model of ADM via a lepton portal. Leptons (like electrons and neutrinos) interact with scalar dark matter (the phi) to create the matter we see today.

Trying to come up with a theory yourself also forces you to confront your theoretical biases – naturalness, simplicity, renormalisability, testability and fine tuning are all considered by theorists to be important considerations, but it is almost impossible to satisfy all of these at once. Even worse, there are often many different competing interpretations of all of these. So, almost inevitably, sacrifices must be made. Perhaps your theory has to give up on technical naturalness, or has a hell of a hierarchy problem (which mine definitely did). That being said, this is not always an issue; many models are made to explore a particular avenue, or to provide a working example. The fact that some of these traits cannot be satisfied is important information. You have to pick and choose what you care about, because if the history of physics has shown us anything, it is that theoretical biases, even very well grounded ones, can simply be wrong. The discovery of CP (and consequently time reversal) violation and the non-deterministic (or the apparently non-deterministic, depending on whether you prefer a many worlds interpretation) nature of quantum mechanics are just a couple of examples where “essential” elements of a proper theory turned out to simply not apply.

While this seems like a frustrating experience, I actually greatly enjoyed model building. Too much of university coursework is rushed – you have to learn all of a subject in 12 weeks, and are tested in an exam that only lasts four hours, sometimes in quite shallow ways. This kind of research emphasises patience and care, and allows (or requires) you to deeply understand the physics involved. Calculations are irrelevant for a large part of the process. You simply don’t have time to try and brute force your way through dozens of theories, so you must devise more elegant ways to discriminate and choose those worth the time. I very much doubt that the model I worked on is the underlying truth of our world, but it was very fun to try.

This is the first part of a series of three on supersymmetry, the theory many believe could go beyond the Standard Model. First I explain what the Standard Model is and show its limitations. Then I introduce supersymmetry and explain how it would fix the main flaws of the Standard Model. Finally, I will review how experimental physicists are trying to discover “superparticles” at the Large Hadron Collider at CERN.

The Standard Model describes what matter is made of and how it holds together. It rests on two basic ideas: all matter is made of particles, and these particles interact with each other by exchanging other particles associated with the fundamental forces.

The basic grains of matter are fermions and the force carriers are bosons. The names of these two classes refer to their spin – or angular momentum. Fermions have half-integer values of spin whereas bosons have integer values as shown in the diagram below.

The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

Fermions come in two families. The lepton family has six members, the electron being the best known of them. The quark family also contains six particles; the up and down quarks are found inside protons and neutrons. These twelve fermions are the building blocks of matter, and each one has a spin value of ½.

These particles interact with each other through fundamental forces. Each force comes with one or more force carriers. The strong nuclear force comes with the gluon and binds the quarks within protons and neutrons. The photon is associated with the electromagnetic force. The weak interaction, responsible for radioactivity, comes with the Z and W bosons. All of these force carriers have a spin of 1.

The main point is: there are grains of matter, the fermions with spin ½, and force carriers, the bosons with integer values of spin.
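The fermion/boson split described above can be written down directly. A minimal sketch (the particle list is as in the diagram; the spin values are as stated in the text):

```python
# Spins of the Standard Model particles, as described in the text.
spins = {
    # fermions (matter), spin 1/2
    "electron": 0.5, "muon": 0.5, "tau": 0.5,
    "electron neutrino": 0.5, "muon neutrino": 0.5, "tau neutrino": 0.5,
    "up": 0.5, "down": 0.5, "charm": 0.5,
    "strange": 0.5, "top": 0.5, "bottom": 0.5,
    # force carriers, spin 1
    "photon": 1.0, "gluon": 1.0, "W": 1.0, "Z": 1.0,
    # the Higgs boson, spin 0
    "Higgs": 0.0,
}

def is_fermion(spin):
    """Half-integer spin -> fermion; integer spin -> boson."""
    return (2 * spin) % 2 == 1

fermions = [name for name, s in spins.items() if is_fermion(s)]
bosons = [name for name, s in spins.items() if not is_fermion(s)]
print(len(fermions), "fermions;", len(bosons), "bosons")  # 12 fermions; 5 bosons
```

Counting them up recovers the 17 known particles mentioned later in this post: twelve spin-½ grains of matter and five bosons.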

The Standard Model is both remarkably simple and very powerful. There are complex equations expressing all this in a mathematical way. These equations allow theorists to make very precise predictions. Nearly every quantity that has been measured in particle physics laboratories over the past five decades falls right on the predicted value, within experimental error margins.

So what’s wrong with the Standard Model? Essentially, one could say that the whole model lacks robustness at higher energy. As long as we observe various phenomena at low energy, as we have done so far, things behave properly. But as [particle] accelerators are getting more and more powerful, we are about to reach a level of energy which existed only shortly after the Big Bang where the equations of the Standard Model start getting shaky.

This is a bit like with the laws of physics at low and high speed. A particle moving at near the speed of light cannot be described with the simple laws of mechanics derived by [Isaac] Newton. One needs special relativity to describe its motion.

One major problem of the Standard Model is that it does not include gravity, one of the four fundamental forces. The model also fails to explain why gravity is so much weaker than the electromagnetic or nuclear forces. For example, a simple fridge magnet can counteract the gravitational attraction of a whole planet on a small object.

This huge difference in the strength of fundamental forces is one aspect of the “hierarchy problem”. It also refers to the wide range in mass for the elementary particles. In the table shown above, we see the electron is about 200 times lighter than the muon and 3500 times lighter than the tau. Same thing for the quarks: the top quark is 75 000 times heavier than the up quark. Why is there such a wide spectrum of masses among the building blocks of matter? Imagine having a Lego set containing bricks as disparate in size as that!
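The mass ratios quoted above are easy to verify from rounded particle masses (illustrative PDG-style values in MeV, my own inputs rather than numbers from the post):

```python
# Rounded particle masses in MeV (approximate published values).
masses = {"electron": 0.511, "muon": 105.7, "tau": 1776.9,
          "up": 2.2, "top": 172_700.0}

print(f"muon/electron ≈ {masses['muon'] / masses['electron']:.0f}")   # ≈ 207
print(f"tau/electron  ≈ {masses['tau'] / masses['electron']:.0f}")    # ≈ 3477
print(f"top/up        ≈ {masses['top'] / masses['up']:.0f}")          # ≈ 78500
```

These land close to the “about 200”, “3500” and “75 000” figures in the text (the up-quark mass is only loosely known, so that last ratio is especially rough).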

The hierarchy problem is also related to the Higgs boson mass. The equations of the Standard Model establish relations between the fundamental particles. For example, in the equations, the Higgs boson has a basic mass to which theorists add a correction for each particle that interacts with it. The heavier the particle, the larger the correction. The top quark, being the heaviest particle, adds such a large correction to the theoretical Higgs boson mass that theorists wonder how the measured Higgs boson mass can be as small as it was found to be.
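To give a feel for the size of the mismatch, here is a hedged order-of-magnitude sketch using the standard one-loop top-quark estimate, where the correction to the Higgs mass-squared scales roughly as 3y_t²Λ²/(8π²). Taking the cutoff Λ at the Planck scale is my choice of illustration, not something stated in the post:

```python
import math

# Order-of-magnitude size of the top-loop correction to the Higgs
# mass-squared, |delta m_H^2| ~ 3 * y_t^2 * Lambda^2 / (8 * pi^2),
# with the cutoff Lambda taken at the Planck scale. Illustrative only.
y_t = 1.0              # top Yukawa coupling (close to 1)
Lambda = 1.22e19       # Planck scale, GeV
m_H = 125.0            # measured Higgs mass, GeV

correction = 3 * y_t**2 * Lambda**2 / (8 * math.pi**2)
print(f"|delta m_H^2| ~ {correction:.1e} GeV^2")
print(f"m_H^2 (measured) = {m_H**2:.1e} GeV^2")
```

The raw correction dwarfs the measured value by dozens of orders of magnitude, which is why the observed 125 GeV looks so finely tuned without new particles to cancel it.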

This seems to indicate that other yet undiscovered particles exist and change the picture. In that case, the corrections to the Higgs mass from the top quark could be cancelled out by some other hypothetical particle and lead to the observed low Higgs boson mass. Supersymmetry just happens to predict the existence of such particles, hence its appeal.

Last but not least, the Standard Model only describes visible matter, that is all matter we see around us on Earth as well as in stars and galaxies. But proofs abound telling us the Universe contains about five times more “dark matter”, a type of matter completely different from the one we know, than ordinary matter. Dark matter does not emit any light but manifests itself through its gravitational effects. Among all the particles contained in the Standard Model, none has the properties of dark matter. Hence it is clear the Standard Model gives an incomplete picture of the content of the Universe but supersymmetry could solve this problem.
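The “five times more” figure can be checked against the commonly quoted cosmological energy-budget fractions (rounded Planck-era values, supplied by me for illustration rather than taken from the post):

```python
# Approximate fractions of the Universe's content (rounded values).
dark_matter = 26.8    # percent in dark matter
ordinary    = 4.9     # percent in ordinary (baryonic) matter

print(f"dark/ordinary ≈ {dark_matter / ordinary:.1f}")  # ≈ 5.5
```

With these numbers the ratio is about 5.5, consistent with the post's "about five times" phrasing.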

THIS QUANTUM DIARIES POST IS PRESENTED IN ITS ENTIRETY BECAUSE OF ITS IMPORTANCE.

April 4th, 2013
Pauline Gagnon

“After 18 years spent building the experiment and nearly two years taking data from the International Space Station, the Alpha Magnetic Spectrometer or AMS-02 collaboration showed its first results on Wednesday to a packed audience at CERN. But Prof. Sam Ting, one of the 1976 Nobel laureates and spokesperson of the experiment, only revealed part of the positron energy spectrum measured so far by AMS-02.

Positrons are the antimatter of electrons. Given that we live in a world where matter dominates, it is not easy to explain where the observed excess of positrons comes from. There are currently two popular hypotheses: either the positrons come from pulsars, or they originate from the annihilation of dark matter particles into electron-positron pairs. To tell these two hypotheses apart, one needs to see exactly what happens at the high-energy end of the spectrum. But this is where fewer positrons are found, making it extremely difficult to achieve the needed precision. Yesterday, we learned that AMS-02 might indeed be able to reach the needed accuracy.

The fraction of positrons (measured with respect to the sum of electrons and positrons) captured by AMS-02 as a function of their energy is shown in red. The vertical bars indicate the size of the uncertainty. The most important part of this spectrum is the high-energy end (above 100 GeV, i.e. 10² GeV), where the results of two previous experiments are also shown: Fermi in green and PAMELA in blue. Note that the AMS-02 precision exceeds that obtained by the other experiments. The spectrum also extends to higher energy. The big question now is whether the red curve will drop sharply at higher energy or not. More data is needed before AMS-02 can give a definitive answer.

Only the first part of the story was revealed yesterday. The data shown clearly demonstrated the power of AMS-02. That was the excellent news delivered at the seminar: AMS-02 will be able to measure the energy spectrum accurately enough to eventually be able to tell where the positrons come from.

But the second part of the story, the punch line everyone was waiting for, will only be delivered at a later time. The data at very high energy will reveal if the observed excess in positrons comes from dark matter annihilation or from “simple” pulsars. How long will it take before the world gets this crucial answer from AMS-02? Prof. Ting would not tell. No matter how long, the whole scientific community will be waiting with great anticipation until the collaboration is confident their measurement is precise enough. And then we will know.

If AMS-02 does manage to show that the positron excess has a dark matter origin, the consequences would be equivalent to discovering a whole new continent. As it stands, we observe that 26.8% of the content of the Universe comes in the form of a completely unknown type of matter called dark matter but have never been able to catch any of it. We only detect its presence through its gravitational effects. If AMS-02 can prove dark matter particles can annihilate and produce pairs of electrons and positrons, it would be a complete revolution.”

“The LHC has been shut down for about two months now, but that really hasn’t made anyone less busy. It is true that we don’t have to run the detector now, but the CMS operations crew is now busy taking it apart for various refurbishing and maintenance tasks. There is a detailed schedule for what needs to be done in the next two years, and it has to be observed pretty carefully; there is a lot of coordination required to make sure that the necessary parts of the detector are accessible as needed, and of course to make sure that everyone is working in a safe environment (always our top priority).

A lot of my effort on CMS goes into computing, and over in that sector things in many ways aren’t all that different from how they were during the run. We still have to keep the computing facilities operating all the time. Data analysis continues, and we continue to set records for the level of activity from physicists who are preparing measurements and searches for new phenomena. We are also in the midst of a major reprocessing of all the data that we recorded during 2012, making use of our best knowledge of the detector and how it responds to particle collisions. This started shortly after the LHC run finished, and will probably take another couple of months.

There is also some data that we are processing for the very first time. Knowing that we had a two-year shutdown ahead of us, we recorded extra events last year that we didn’t have the computing capacity to process in real time, but could save for later analysis during the shutdown. This ended up essentially doubling the number of events we recorded during the last few months of 2012, which gives us a lot to do. Fortunately, we caught a break on this — our friends at the San Diego Supercomputer Center offered us some time on their facility. We had to scramble a bit to figure out how to integrate it into the CMS computing system, but now things are happily churning away with 5000 processors in use.”

“The ALICE experiment is dedicated to the study of the quark-gluon plasma. Each year, the LHC operates for a few weeks with lead ions instead of protons. ALICE collects data both during proton-proton collisions and heavy-ion collisions. Even when only protons collide, the projectiles are not solid balls like on a billiard table but composite objects. By comparing what is obtained from heavy-ion collisions with proton collisions, the ALICE physicists must first disentangle what comes from having protons in a bound state inside the nucleus as opposed to “free protons”.

So far, it appears that the quark-gluon plasma only forms during heavy-ion collisions, since they provide the necessary energy density over a substantial volume (namely, the size of a nucleus). Some of the effects observed, such as the number of particles coming out of the collisions at different angles or momenta, depend in part on the final state created. When the plasma is formed, it reabsorbs many of the particles created, such that fewer particles emerge from the collision.
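The suppression described here is commonly quantified by what physicists call the nuclear modification factor, which compares the particle yield in heavy-ion collisions to a scaled-up proton-proton baseline (the post does not spell out the formula; this is the standard textbook definition):

```latex
R_{AA}(p_T) \;=\; \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\langle N_{\mathrm{coll}}\rangle \; \mathrm{d}N_{pp}/\mathrm{d}p_T}
```

Here \(\langle N_{\mathrm{coll}}\rangle\) is the average number of individual nucleon-nucleon collisions in one heavy-ion event. If the heavy-ion collision were just a sum of independent proton collisions, \(R_{AA}\) would equal 1; a value well below 1 at high transverse momentum \(p_T\) is the signature that the plasma has absorbed particles that would otherwise have escaped.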

By colliding protons and heavy ions, scientists hope to discern what comes from the initial state of the projectile (bound or free protons) and what is caused by the final state (like the suppression of particles emitted when a quark-gluon plasma forms).

A “snapshot” of the debris coming out of a proton-lead ion collision captured by the ALICE detector showing a large number of various particles created from the energy released by the collision.

The ultimate goal is to study the so-called ‘structure function’, which describes how quarks and gluons are distributed inside protons, when they are free or embedded inside the nucleus.
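For readers who want a bit more detail: at leading order, the structure function mentioned here can be written in terms of the probability distributions of quarks and antiquarks inside the proton. This is the standard textbook form, not something given in the post itself:

```latex
F_2(x, Q^2) \;=\; x \sum_q e_q^2 \left[\, q(x, Q^2) + \bar{q}(x, Q^2) \,\right]
```

Here \(x\) is the fraction of the proton's momentum carried by the quark, \(e_q\) is the quark's electric charge, and \(q(x,Q^2)\), \(\bar{q}(x,Q^2)\) are the quark and antiquark distributions. Comparing \(F_2\) for free protons with that for protons bound inside a nucleus reveals the nuclear modifications the ALICE physicists are after.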

More will be studied during the two-month running period with protons colliding on heavy ions planned for the beginning of 2013.”

“Last week, Seth and I met up to discuss the latest results from the Hadron Collider Physics (HCP) Symposium and what they mean for the Higgs searches. We have moved past discovery and now we are starting to perform precision measurements. Is this the Standard Model Higgs boson, or some other Higgs boson? Should we look forward to a whole new set of discoveries around the corner, or is the Higgs boson the final word for new physics that the LHC has to offer? We’ll find out more in the coming months!

Here are Aidan and Seth in their latest video. The sound is a touch weak due to the outdoor location, but you can get plenty from what they report.

IT HAS BEEN A WHILE SINCE I HAVE BEEN ABLE TO PRESENT A POST FROM QUANTUM DIARIES. MY AUDIENCE IS A MORE GENERALIST PUBLIC – INTERESTED, EDUCATED, BUT NOT PROFESSIONAL SCIENTISTS. NOW COMES A POST WHICH I BELIEVE MIGHT BE APPROACHABLE FOR MY READERS.

2012.09.28
Pauline Gagnon

“Finding an experimental anomaly is a great way to open the door to a new theory. It is such a good trick that many of us physicists are bending over backward trying to uncover the smallest deviation from what the current theory, the Standard Model of particle physics, predicts.