Category Archives: LHC Background Info

In the long and careful process of restarting the Large Hadron Collider [LHC] after its two-year nap for upgrades and repairs, another milestone has been reached: protons have once again collided inside the LHC’s experimental detectors (named ATLAS, CMS, LHCb and ALICE). This is good news, but don’t get excited yet. It’s just one small step. These are collisions at the lowest energy at which the LHC operates (450 GeV per proton, to be compared with the 4000 GeV per proton in 2012 and the 6500 GeV per proton they’ve already achieved in the last month, though in non-colliding beams). Also, the number of protons in the beams, and the number of collisions per second, are still very, very small compared to what will be needed. So discoveries are not imminent! Yesterday’s milestone was just one of the many little tests that are made to assure that the LHC is properly set up and ready for the first full-energy collisions, which should start in about a month.

But since full-energy collisions are on the horizon, why not listen to a radio show about what the LHC will be doing after its restart is complete? Today (Wednesday May 6th), Virtually Speaking Science, on which I have appeared a couple of times before, will run a program at 5 pm Pacific time (8 pm Eastern). Science writer Alan Boyle will be interviewing me about the LHC’s plans for the next few months and the coming years. You can listen live, or listen later once they post it. Here’s the link for the program.

As promised in my last post, I’ve now written the answer to the second of the three questions I posed about how the Large Hadron Collider [LHC] can search for dark matter. You can read the answers to the first two questions here. The first question was about how scientists can possibly look for something that passes through a detector without leaving any trace! The second question is how scientists can tell the difference between ordinary production of neutrinos — which also leave no trace — and production of something else. [The answer to the third question — how one could determine this “something else” really is what makes up dark matter — will be added to the article later this week.]

In the meantime, after Monday’s post, I got a number of interesting questions about dark matter, why most experts are confident it exists, etc. There are many reasons to be confident; it’s not just one argument, but a set of interlocking arguments. One of the most powerful comes from simulations of the universe’s history. These simulations start from the nearly uniform universe of billions of years ago and use equations for known physics, including Einstein’s gravity, the behavior of gas and dust when compressed and heated, the effects of various forms of electromagnetic radiation on matter, etc.

The output of these simulations is a prediction for the universe today — and indeed, it roughly has the properties of the one we inhabit.

Here’s a video from the Illustris collaboration, which has done the most detailed simulation of the universe so far. Note the age of the universe listed at the bottom as the video proceeds. The left side of the video shows dark matter. It quickly clumps under the force of gravity, forming a wispy, filamentary structure with dense knots, which then becomes rather stable; moderately dense regions are blue, highly dense regions are pink. The right side shows gas. After the dark matter structure begins to form, it attracts gas, also through gravity, and galaxies (blue knots) then form around the dense knots of dark matter. The galaxies then form black holes with energetic disks and jets, and stars, many of which explode. These much more complicated astrophysical effects blow clouds of heated gas (red) into intergalactic space.
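The gravitational clumping shown on the left side of that video can be caricatured in a few lines of code. Below is a toy two-dimensional N-body sketch of my own; the particle number, time step, and softening length are arbitrary choices, and real simulations like Illustris are vastly more sophisticated, tracking gas, radiation, star formation, and much more.

```python
import numpy as np

def evolve_toy_universe(n=200, steps=400, dt=0.01, soft=0.05, seed=0):
    """Toy 2D N-body sketch: equal-mass 'dark matter' particles under
    softened Newtonian gravity, integrated with the leapfrog method.
    Purely illustrative; not a cosmological simulation."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, size=(n, 2))   # nearly uniform start
    vel = np.zeros((n, 2))

    def accel(p):
        d = p[None, :, :] - p[:, None, :]      # vectors from each particle to the others
        r2 = (d ** 2).sum(-1) + soft ** 2      # softened squared distances
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)          # no self-force
        return (d * inv_r3[:, :, None]).sum(axis=1) * (1.0 / n)

    a = accel(pos)
    for _ in range(steps):                     # leapfrog: kick-drift-kick
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos)
        vel += 0.5 * dt * a
    return pos

final = evolve_toy_universe()
```

Plotting `final` after a run should show the initially uniform points gathering into clumps and strands, a cartoon of the filamentary structure that gravity produces in the video.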

Meanwhile, the distribution of galaxies in the real universe, as measured by astronomers, is illustrated in this video from the Sloan Digital Sky Survey. You can see by eye that the galaxies in our universe show a filamentary structure, with big nearly-empty spaces, and loose strings of galaxies ending in big clusters. That’s consistent with what is seen in the Illustris simulation.

Now if you’d like to drop the dark matter idea, the question you have to ask is this: could the simulations still give a universe similar to ours if you took dark matter out and instead modified Einstein’s gravity somehow? [Usually this type of change goes under the name of MOND.]

In the simulation, gravity causes the dark matter, which is “cold” (cosmo-speak for “made from objects traveling much slower than light speed”), to form filamentary structures that then serve as the seeds for gas to clump and form galaxies. So if you want to take the dark matter out, and instead change gravity to explain other features that are normally explained by dark matter, you have a challenge. You are in danger of not creating the filamentary structure seen in our universe. Somehow your change in the equations for gravity has to cause the gas to form galaxies along filaments, and do so in the time allotted. Otherwise it won’t lead to the type of universe that we actually live in.

Challenging, yes. Challenging is not the same as impossible. But everyone should understand that the arguments in favor of dark matter are by no means limited to the questions of how stars move in galaxies and how galaxies move in galaxy clusters. Any implementation of MOND has to explain a lot of other things that, in most experts’ eyes, are efficiently taken care of by cold dark matter.

Dark Matter. Its existence is still not 100% certain, but if it exists, it is exceedingly dark, both in the usual sense — it doesn’t emit light or reflect light or scatter light — and in a more general sense — it doesn’t interact much, in any way, with ordinary stuff, like tables or floors or planets or humans. So not only is it invisible (air is too, after all, so that’s not so remarkable), it’s actually extremely difficult to detect, even with the best scientific instruments. How difficult? We don’t even know, but certainly more difficult than neutrinos, the most elusive of the known particles. The only way we’ve been able to detect dark matter so far is through the pull it exerts via gravity, which is big only because there’s so much dark matter out there, and because it has slow but inexorable and remarkable effects on things that we can see, such as stars, interstellar gas, and even light itself.

About a week ago, the mainstream press was reporting, inaccurately, that the leading aim of the Large Hadron Collider [LHC], after its two-year upgrade, is to discover dark matter. [By the way, on Friday the LHC operators made the first beams with energy-per-proton of 6.5 TeV, a new record and a major milestone in the LHC’s restart.] There are many problems with such a statement, as I commented in my last post, but let’s leave all that aside today… because it is true that the LHC can look for dark matter. How?

When people suggest that the LHC can discover dark matter, they are implicitly assuming

that dark matter exists (very likely, but perhaps still with some loopholes),

that dark matter is made from particles (which isn’t established yet) and

that dark matter particles can be commonly produced by the LHC’s proton-proton collisions (which need not be the case).

You can question these assumptions, but let’s accept them for now. The question for today is this: since dark matter barely interacts with ordinary matter, how can scientists at an LHC experiment like ATLAS or CMS, which is made from ordinary matter of course, have any hope of figuring out that they’ve made dark matter particles? What would have to happen before we could see a BBC or New York Times headline that reads, “Large Hadron Collider Scientists Claim Discovery of Dark Matter”?

Well, to address this issue, I’m writing an article in three stages. Each stage answers one of the following questions:

How can scientists working at ATLAS or CMS be confident that an LHC proton-proton collision has produced an undetected particle — whether this be simply a neutrino or something unfamiliar?

How can ATLAS or CMS scientists tell whether they are making something new and Nobel-Prizeworthy, such as dark matter particles, as opposed to making neutrinos, which they do every day, many times a second?

How can we be sure, if ATLAS or CMS discovers they are making undetected particles through a new and unknown process, that they are actually making dark matter particles?

My answer to the first question is finished; you can read it now if you like. The second and third answers will be posted later during the week.

But if you’re impatient, here are highly compressed versions of the answers, in a form which is accurate, but admittedly not very clear or precise.

Dark matter particles, like neutrinos, would not be observed directly. Instead their presence would be indirectly inferred, by observing the behavior of other particles that are produced alongside them.
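This inference rests on momentum conservation in the plane transverse to the beams. As a toy illustration (the event and its numbers are made up for the example, not real data), one can add up the transverse momenta of all the visible particles; a large imbalance is the telltale sign that something undetected carried momentum away:

```python
import math

def missing_transverse_momentum(visible):
    """Sum the transverse momenta (px, py) of the detected particles.
    The total transverse momentum is ~zero before the collision, so a
    large imbalance signals undetected particles -- neutrinos, or
    perhaps something new. Illustrative sketch only."""
    px = sum(p[0] for p in visible)
    py = sum(p[1] for p in visible)
    return math.hypot(px, py)   # magnitude of what's "missing"

# A hypothetical event: two jets and a muon, (px, py) in GeV
event = [(120.0, 10.0), (-45.0, -80.0), (-20.0, 15.0)]
met = missing_transverse_momentum(event)
```

Here the visible momenta sum to (55, -55) GeV, so about 78 GeV of transverse momentum went undetected.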

It is impossible to directly distinguish dark matter particles from neutrinos or from any other new, equally undetectable particle. But the equations used to describe the known elementary particles (the “Standard Model”) predict how often neutrinos are produced at the LHC. If the number of neutrino-like objects is larger than the predictions, that will mean something new is being produced.
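In sketch form, that comparison is just a counting experiment. Here is a rough illustration (the event counts are invented, and a real analysis folds in systematic uncertainties far more carefully than this simple Poisson estimate):

```python
import math

def excess_significance(observed, predicted):
    """Rough Poisson significance of an excess of 'invisible' events
    over the Standard Model prediction. Illustrative only: a real
    analysis must include systematic uncertainties on the prediction."""
    return (observed - predicted) / math.sqrt(predicted)

# Hypothetical counts of neutrino-like (missing-momentum) events:
z = excess_significance(observed=1250, predicted=1000)
```

With these made-up numbers the excess would be nearly eight standard deviations, far beyond the usual five-sigma discovery threshold; with, say, 1030 observed events, it would be statistically unremarkable.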

To confirm that dark matter is made from the LHC’s new undetectable particles will require many steps and possibly many decades. Detailed study of LHC data can allow properties of the new particles to be inferred. Then, if other types of experiments (e.g. LUX or COGENT or Fermi) detect dark matter itself, they can check whether it shares the same properties as the LHC’s new particles. Only then can we know if the LHC discovered dark matter.

I realize these brief answers are cryptic at best, so if you want to learn more, please check out my new article.

Triggering is an essential part of the Large Hadron Collider [LHC]: there are so many collisions happening each second at the LHC, compared to the number that the experiments can afford to store for later study, that the data from most of the collisions (99.999%) have to be thrown away, completely and permanently, within a second after the collisions occur. The automated filter, partly hardware and partly software, that is programmed to decide what to keep and what to discard is called “the trigger”. This all sounds crazy, but it’s necessary, and it works. Usually.
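The scale of the problem is easy to sketch with round numbers (illustrative figures of my own, not the experiments’ official rates):

```python
# Back-of-envelope trigger budget. If collisions occur at roughly
# 100 million per second and an experiment can record on the order of
# 1000 events per second, the trigger must discard all but about one
# collision in 100,000 -- i.e. 99.999% of the data.
collision_rate = 100_000_000   # collisions per second (rough)
recorded_rate = 1_000          # events stored per second (rough)

kept_fraction = recorded_rate / collision_rate        # one in 100,000
discarded_percent = 100 * (1 - kept_fraction)         # 99.999
```

The point of the arithmetic: whatever the precise rates, the kept fraction is tiny, so a collision the trigger was not programmed to recognize is lost forever.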

Let me give you one very simple example of how things can go wrong, and how the ATLAS and CMS experiments [the two general purpose experiments at the LHC] attempted to address the problem. Before you read this, you may want to read my last post, which gives an overview of what I’ll be talking about in this one.

Day 2 of my visit to CERN (host laboratory of the Large Hadron Collider [LHC]) was a pretty typical CERN day for me. Here’s a rough sketch of how it panned out:

1000: After a few chores, arrived at CERN by tram. Worked on my ongoing research project #1. Answered an email about my ongoing research project #2.

1100: Attended a one-hour talk, much of it historical, by Chris Quigg, one of the famous experts on “quarkonium” (atom-like objects made from a quark and an antiquark, generally referring specifically to charm and bottom quarks). Charmonium (charm quark/antiquark atoms) was discovered 40 years ago this week, in two very different experiments.

1200: Started work on the talk that I am giving on the afternoon of Day 3 to some experimentalists who work at ATLAS. [ATLAS and CMS are the two general-purpose experimental detectors at the LHC; they were used to discover the Higgs particle.] It involves some new insights concerning the search for long-lived particles (hypothesized types of new particles that would typically decay only after having traveled a distance of at least a millimeter, and possibly a meter or more, before they decay to other particles.)

1230: Working lunch with an experimentalist from ATLAS and another theorist, mainly discussing triggering, and other related issues, concerning long-lived particles. Learned a lot about the new opportunities that ATLAS will have starting in 2015.

1400: In an extended discussion with two other theorists, got a partial answer to a subtle question that arose in my research project #2.

1415: Sent an email to my collaborators on research project #2.

1430: Back to work on my talk for Day 3. Reading some relevant papers, drawing some illustrations, etc.

1600: Two-hour conversation over coffee with an experimentalist from CMS, yet again about triggering, regarding long-lived particles, exotic decays of the Higgs particle, and both at once. Learned a lot of important things about CMS’s plans for the near-term and medium-term future, as well as some of the subtle issues with collecting and analyzing data that are likely to arise in 2015, when the LHC begins running again.

[Why triggering, triggering, triggering? Because if you don’t collect the data in the first place, you can’t analyze it later! We have to be working on triggering in 2014-2015, before the LHC takes data again in 2015-2018.]

1800: An hour to work on the talk again.

1915: Skype conversation with two of my collaborators in research project #1, about a difficult challenge which had been troubling me for over a week. Subtle theoretical issues and heavy duty discussion, but worth it in the end; most of the issues look like they may be resolvable.

2100: Noticed the time and that I hadn’t eaten dinner yet. Went to the CERN cafeteria and ate dinner while answering emails.

2130: More work on the talk for Day 3.

2230: Left CERN. Wrote blog post on the tram to the hotel.

2300: Went back to work in my hotel room.

Day 1 was similarly busy and informative, but had the added feature that I hadn’t slept since the previous day. (I never seem to sleep on overnight flights.) Day 3 is likely to be as busy as Day 2. I’ll be leaving Geneva before dawn on Day 4, heading to a conference.

It’s a hectic schedule, but I’m learning many things! And if I can help make these huge and crucial experiments more powerful, and give my colleagues a greater chance of a discovery and a reduced chance of missing one, it will all be worth it.

First things first. As with all major claims of discovery, considerable caution is advised until the BICEP2 measurement has been verified by some other experiment. Moreover, even if the measurement is correct, one should not assume that the interpretation in terms of gravitational waves and inflation is correct; this requires more study and further confirmation.

The media is assuming BICEP2’s measurement is correct, and that the interpretation in terms of inflation is correct, but leading scientists are not so quick to rush to judgment, and are thinking things through carefully. Scientists are cautious not just because they’re trained to be thoughtful and careful but also because they’ve seen many claims of discovery withdrawn or discredited; discoveries are made when humans go where no one has previously gone, with technology that no one has previously used — and surprises, mistakes, and misinterpretations happen often.

In my last post, I expressed the view that a particle accelerator with proton-proton collisions of (roughly) 100 TeV of energy, significantly more powerful than the currently operational Large Hadron Collider [LHC] that helped scientists discover the Higgs particle, is an obvious and important next step in our process of learning about the elementary workings of nature. And I described how we don’t yet know whether it will be an exploratory machine or a machine with a clear scientific target; it will depend on what the LHC does or does not discover over the coming few years.

What will it mean, for the 100 TeV collider project and more generally, if the LHC, having made possible the discovery of the Higgs particle, provides us with no more clues? Specifically, over the next few years, hundreds of tests of the Standard Model (the equations that govern the known particles and forces) will be carried out in measurements made by the ATLAS, CMS and LHCb experiments at the LHC. Suppose that, as it has so far, the Standard Model passes every test that the experiments carry out? In particular, suppose the Higgs particle discovered in 2012 appears, after a few more years of intensive study, to be, as far as the LHC can reveal, a Standard Model Higgs — the simplest possible type of Higgs particle?

Before we go any further, let’s keep in mind that we already know that the Standard Model isn’t all there is to nature. The Standard Model does not provide a consistent theory of gravity, nor does it explain neutrino masses, dark matter or “dark energy” (also known as the cosmological constant). Moreover, many of its features are just things we have to accept without explanation, such as the strengths of the forces, the existence of “three generations” (i.e., that there are two heavier cousins of the electron, two for the up quark and two for the down quark), the values of the masses of the various particles, etc. However, even though the Standard Model has its limitations, it is possible that everything that can actually be measured at the LHC — which cannot measure neutrino masses or directly observe dark matter or dark energy — will be well-described by the Standard Model. What if this is the case?

Michelson and Morley, and What They Discovered

In science, giving strong evidence that something isn’t there can be as important as discovering something that is there — and it’s often harder to do, because you have to thoroughly exclude all possibilities. [It’s very hard to show that your lost keys are nowhere in the house — you have to convince yourself that you looked everywhere.] A famous example is the case of Albert Michelson, in his two experiments (one in 1881, a second with Edward Morley in 1887) trying to detect the “ether wind”.

Light had been shown to be a wave in the 1800s; and like all waves known at the time, it was assumed to be a wave in something material, just as sound waves are waves in air, and ocean waves are waves in water. This material was termed the “luminiferous ether”. As we can detect our motion through air or through water in various ways, it seemed that it should be possible to detect our motion through the ether, specifically by looking for the possibility that light traveling in different directions travels at slightly different speeds. This is what Michelson and Morley were trying to do: detect the movement of the Earth through the luminiferous ether.

In Michelson’s case, the failure to discover the ether was itself a discovery, recognized only in retrospect: a discovery that the ether did not exist. (Or, if you’d like to say that it does exist, which some people do, then what was discovered is that the ether is utterly unlike any normal material substance in which waves are observed; no matter how fast or in what direction you are moving relative to me, both of us are at rest relative to the ether.) So one must not be too quick to assume that a lack of discovery is actually a step backwards; it may actually be a huge step forward.

Epicycles or a Revolution?

There were various attempts to make sense of Michelson and Morley’s experiment. Some interpretations involved tweaks of the notion of the ether. Tweaks of this type, in which some original idea (here, the ether) is retained, but adjusted somehow to explain the data, are often referred to as “epicycles” by scientists. (This is analogous to the way an epicycle was used by Ptolemy to explain the complex motions of the planets in the sky, in order to retain an earth-centered universe; the sun-centered solar system requires no such epicycles.) A tweak of this sort could have been the right direction to explain Michelson and Morley’s data, but as it turned out, it was not. Instead, the non-detection of the ether wind required something more dramatic — for it turned out that waves of light, though at first glance very similar to other types of waves, were in fact extraordinarily different. There simply was no ether wind for Michelson and Morley to detect.

If the LHC discovers nothing beyond the Standard Model, we will face what I see as a similar mystery. As I explained here, the Standard Model, with no other particles added to it, is a consistent but extraordinarily “unnatural” (i.e. extremely non-generic) example of a quantum field theory. This is a big deal. Just as nineteenth-century physicists deeply understood both the theory of waves and many specific examples of waves in nature and had excellent reasons to expect a detectable ether, twenty-first century physicists understand quantum field theory and naturalness both from the theoretical point of view and from many examples in nature, and have very good reasons to expect particle physics to be described by a natural theory. (Our examples come both from condensed matter physics [e.g. metals, magnets, fluids, etc.] and from particle physics [e.g. the physics of hadrons].) Extremely unnatural systems — that is, physical systems described by quantum field theories that are highly non-generic — simply have not previously turned up in nature… which is just as we would expect from our theoretical understanding.

[Experts: As I emphasized in my Santa Barbara talk last week, appealing to anthropic arguments about the hierarchy between gravity and the other forces does not allow you to escape from the naturalness problem.]

So what might it mean if an unnatural quantum field theory describes all of the measurements at the LHC? It may mean that our understanding of particle physics requires an epicyclic change — a tweak. The implications of a tweak would potentially be minor. A tweak might only require us to keep doing what we’re doing, exploring in the same direction but a little further, working a little harder — i.e. to keep colliding protons together, but go up in collision energy a bit more, from the LHC to the 100 TeV collider. For instance, perhaps the Standard Model is supplemented by additional particles that, rather than having masses that put them within reach of the LHC, as would inevitably be the case in a natural extension of the Standard Model (here’s an example), are just a little bit heavier than expected. In this case the world would be somewhat unnatural, but not too much, perhaps through some relatively minor accident of nature; and a 100 TeV collider would have enough energy per collision to discover and reveal the nature of these particles.

Or perhaps a tweak is entirely the wrong idea, and instead our understanding is fundamentally amiss. Perhaps another Einstein will be needed to radically reshape the way we think about what we know. A dramatic rethink is both more exciting and more disturbing. It was an intellectual challenge for 19th century physicists to imagine, from the result of the Michelson-Morley experiment, that key clues to its explanation would be found in seeking violations of Newton’s equations for how energy and momentum depend on velocity. (The first experiments on this issue were carried out in 1901, but definitive experiments took another 15 years.) It was an even greater challenge to envision that the already-known unexplained shift in the orbit of Mercury would also be related to the Michelson-Morley (non)-discovery, as Einstein, in trying to adjust Newton’s gravity to make it consistent with the theory of special relativity, showed in 1915.

My point is that the experiments that were needed to properly interpret Michelson-Morley’s result

did not involve trying to detect motion through the ether,

did not involve building even more powerful and accurate interferometers,

and were not immediately obvious to the practitioners in 1888.

This should give us pause. We might, if we continue as we are, be heading in the wrong direction.

Difficult as it is to do, we have to take seriously the possibility that if (and remember this is still a very big “if”) the LHC finds only what is predicted by the Standard Model, the reason may involve a significant reorganization of our knowledge, perhaps even as great as relativity’s re-making of our concepts of space and time. Were that the case, it is possible that higher-energy colliders would tell us nothing, and give us no clues at all. An exploratory 100 TeV collider is not guaranteed to reveal secrets of nature, any more than a better version of Michelson-Morley’s interferometer would have been guaranteed to do so. It may be that a completely different direction of exploration, including directions that currently would seem silly or pointless, will be necessary.

This is not to say that a 100 TeV collider isn’t needed! It might be that all we need is a tweak of our current understanding, and then such a machine is exactly what we need, and will be the only way to resolve the current mysteries. Or it might be that the 100 TeV machine is just what we need to learn something revolutionary. But we also need to be looking for other lines of investigation, perhaps ones that today would sound unrelated to particle physics, or even unrelated to any known fundamental question about nature.

Let me provide one example from recent history — one which did not lead to a discovery, but still illustrates that this is not all about 19th century history.

An Example

One of the great contributions to science of Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali was to observe (in a 1998 paper I’ll refer to as ADD, after the authors’ initials) that no one had ever excluded the possibility that we, and all the particles from which we’re made, can move around freely in three spatial dimensions, but are stuck (as it were) as though to the corner edge of a thin rod — a rod as much as one millimeter wide, into which only gravitational fields (but not, for example, electric fields or magnetic fields) may penetrate. Moreover, they emphasized that the presence of these extra dimensions might explain why gravity is so much weaker than the other known forces.

Fig. 1: ADD’s paper pointed out that no experiment as of 1998 could yet rule out the possibility that our familiar three-dimensional world is a corner of a five-dimensional world, where the two extra dimensions are finite but perhaps as large as a millimeter.

Given the incredible number of experiments over the past two centuries that have probed distances vastly smaller than a millimeter, the claim that there could exist millimeter-sized unknown dimensions was amazing, and came as a tremendous shock — certainly to me. At first, I simply didn’t believe that the ADD paper could be right. But it was.

One of the most important immediate effects of the ADD paper was to generate a strong motivation for a new class of experiments that could be done, rather inexpensively, on the top of a table. If the world were as they imagined it might be, then Newton’s (and Einstein’s) law for gravity, which states that the force between two stationary objects depends on the distance r between them as 1/r², would increase faster than this at distances shorter than the width of the rod in Figure 1. This is illustrated in Figure 2.
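In sketch form, with the crossover idealized as sharp and the units chosen for convenience (both simplifications of the real, smooth behavior):

```python
def gravity_force(r, width=1e-3):
    """Sketch of the idea behind Figure 2: Newton/Einstein's 1/r**2 law
    at separations larger than the 'rod' width, crossing over to a
    faster 1/r**4 rise (the behavior expected with two extra
    dimensions) at separations below it. Units are arbitrary and the
    sharp crossover is an idealization for illustration."""
    if r >= width:
        return 1.0 / r ** 2
    # match the two forms at r = width, then steepen below it:
    # (1/width**2) * (width/r)**4  ==  width**2 / r**4
    return (1.0 / width ** 2) * (width / r) ** 4

# Far apart: halving the separation multiplies the force by 4, as usual.
# Inside the extra dimensions: halving it multiplies the force by 16.
```

This is why the tabletop experiments are decisive: measuring the force between test masses at ever-shorter separations directly distinguishes the blue curve (inverse-square all the way down) from the red one.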

Fig. 2: If the world were as sketched in Figure 1, then Newton/Einstein’s law of gravity would be violated at distances shorter than the width of the rod in Figure 1. The blue line shows Newton/Einstein’s prediction; the red line shows what a universe like that in Figure 1 would predict instead. Experiments done in the last few years agree with the blue curve down to a small fraction of a millimeter.

These experiments are not easy — gravity is very, very weak compared to electrical forces, and lots of electrical effects can show up at very short distances and have to be cleverly avoided. But some of the best experimentalists in the world figured out how to do it (see here and here). After the experiments were done, Newton/Einstein’s law was verified down to a few hundredths of a millimeter. If we live on the corner of a rod, as in Figure 1, it’s much, much smaller than a millimeter in width.

But it could have been true. And if it had been, it might not have been discovered by a huge particle accelerator. It might have been discovered in these small, inexpensive experiments, which could have been performed years earlier. The experiments weren’t carried out earlier mainly because no one had pointed out quite how important they could be.

Ok Fine; What Other Experiments Should We Do?

So what are the non-obvious experiments we should be doing now or in the near future? Well, if I had a really good suggestion for a new class of experiments, I would tell you — or rather, I would write about it in a scientific paper. (Actually, I do know of an important class of measurements, and I have written a scientific paper about them; but these are measurements to be done at the LHC, and don’t involve an entirely new experiment.) Although I’m thinking about these things, I do not yet have any good ideas. Until I do, or someone else does, this is all just talk — and talk does not impress physicists.

Indeed, you might object that my remarks in this post have been almost without content, and possibly without merit. I agree with that objection.

Still, I have some reasons for making these points. In part, I want to highlight, for a wide audience, the possible historic importance of what might now be happening in particle physics. And I especially want to draw the attention of young people. There have been experts in my field who have written that non-discoveries at the LHC constitute a “nightmare scenario” for particle physics… that there might be nothing for particle physicists to do for a long time. But I want to point out that on the contrary, not only may it not be a nightmare, it might actually represent an extraordinary opportunity. Not discovering the ether opened people’s minds, and eventually opened the door for Einstein to walk through. And if the LHC shows us that particle physics is not described by a natural quantum field theory, it may, similarly, open the door for a young person to show us that our understanding of quantum field theory and naturalness, while as intelligent and sensible and precise as the 19th century understanding of waves, does not apply unaltered to particle physics, and must be significantly revised.

Of course the LHC is still a young machine, and it may still permit additional major discoveries, rendering everything I’ve said here moot. But young people entering the field, or soon to enter it, should not assume that the experts necessarily understand where the field’s future lies. Like FitzGerald and Lorentz, even the most brilliant and creative among us might be suffering from our own hard-won and well-established assumptions, and we might soon need the vision of a brilliant young genius — perhaps a theorist with a clever set of equations, or perhaps an experimentalist with a clever new question and a clever measurement to answer it — to set us straight, and put us onto the right path.

As I explained on Tuesday, I’m currently writing articles for this website that summarize the results of a study, on which I’m one of thirteen co-authors, of various types of decays that the newly-discovered Higgs particle might exhibit, with a focus on measurements that could be done now with 2011-2012 Large Hadron Collider [LHC] data, or very soon with 2015-2018 data. See Tuesday’s post for an explanation of what this is all about.

On Tuesday I told you I’d created a page summarizing what we know about possible Higgs decays to two new spin-zero particles, which in turn decay to quark pairs or lepton pairs according to our general expectation that heavier particles are preferred in spin-zero-particle decays. A number of theories (including models with more Higgs particles, certain non-minimal supersymmetric models, some Little Higgs models, and various dark matter models) predict this possibility.

Today I’ve added to that page (starting below figure 4) to include possible Higgs decays to two new spin-zero particles which in turn decay to gluon or photon pairs, according to our general expectation that, if the new spin-zero particles don’t interact very strongly with quarks or leptons, then they will typically decay to the force particles, with a rate roughly related to the strengths of the corresponding forces. While fewer known theories directly predict this possibility compared to the one in the previous paragraph, the ease of looking for Higgs particles decaying to four photons motivates an attempt to do so in current data.

I have a few other classes of Higgs particle exotic decays to cover, so more articles on this subject will follow shortly!
