A Few Tidbits from Nagoya, including OPERA news

The Nagoya (Japan) conference in celebration of the Inauguration of the Kobayashi-Maskawa Institute came to a close this morning. There was a pleasant little ceremony yesterday in which Kobayashi and Maskawa took up shovels to place dirt around a newly-planted tiny apple tree outside the institute — an apple tree descended directly from “Newton’s apple tree” at Trinity College, Cambridge (you know the one, the tree whose apple is said to have inspired Newton’s theory of gravity — as though he’d never seen a dropped fork before).

Professors Kobayashi (right) and Maskawa await the shovels they will use to place dirt around the roots of the newly-planted descendant of Newton's apple tree.

Meanwhile, back indoors there were numerous talks on a wide variety of research topics. Several of these addressed Japan’s broad experimental particle physics program, which covers neutrinos, bottom quarks, dark matter, cosmic rays, and the development of new experimental devices. Here are a few tidbits I heard about yesterday.

First, the one you all want to know: there was some very good news from the OPERA experiment (the one with the speedy neutrinos), in which Nagoya is a participant. A key problem with the experimental method used in that measurement (one that I and many others expressed concerns about immediately) is that the pulses of neutrinos sent from CERN to OPERA were 10,000 nanoseconds long, while the effect observed by OPERA involved a shift of only 60 nanoseconds; the measurement therefore required precise knowledge of the neutrino pulse shape, but this had to be inferred from the shape of the pulse of protons that leads to the pulse of neutrinos. (Recall how you make a neutrino beam.) There have been widespread concerns that a very small error in that inference could potentially cause a fake shift. So the obvious thing to do instead is to have CERN send a series of short pulses — a couple of nanoseconds long, with big gaps between them. It’s like sending a series of loud and isolated clicks instead of a long blast on a horn; in the latter case you have to figure out exactly when the horn starts and stops, but in the former you just hear each click and then it’s already over. In other words, with the short pulses you don’t need to know the pulse shape, just the pulse time. And you also don’t need to measure thousands of neutrinos in order to reproduce the pulse shape, getting the leading and trailing edges just right; you need only a small number — maybe even as few as 10 or so — to check the timing of those few pulses for which a neutrino makes a splash in OPERA (recall how you detect neutrinos). OPERA didn’t want to do this because it comes at the cost of a large reduction in the sheer number of neutrinos, and this affects OPERA’s main research program (which involves neutrino oscillations).
But apparently the concerns raised by the community have been strong enough to prompt OPERA to request that the CERN neutrino beam operators (remember OPERA is not part of CERN, despite press reports to the contrary) send them short pulses. This process has already begun, as of last week, and according to the speaker, Nagoya’s own Professor Mitsuhiro Nakamura, it will be a matter of only a few weeks before OPERA will have enough neutrinos to make this important cross check. So this is very good news.
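For scale, it may help to see how these numbers fit together. The sketch below, a back-of-envelope check using the roughly 730 km baseline and the 60-nanosecond shift quoted above (the exact OPERA figures differ slightly), computes the light travel time and the fractional speed excess the shift would imply:

```python
# Back-of-envelope check of the OPERA numbers (illustrative only).
c = 299_792_458.0      # speed of light in vacuum, m/s
baseline = 730_000.0   # approximate CERN-to-Gran-Sasso distance, m
shift = 60e-9          # reported early-arrival time, s

light_time = baseline / c        # time light would take over the baseline
fraction = shift / light_time    # implied fractional speed excess, (v - c)/c

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"implied (v - c)/c: {fraction:.2e}")
```

The 60-nanosecond shift is a few parts in 100,000 of the roughly 2.4-millisecond journey, which is why the timing of the pulse edges has to be known so precisely.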

Meanwhile, I personally am still quite confused about what the Minos experiment [which measures similar neutrinos, at only slightly lower energies, traveling from Fermilab (near Chicago) to a mine in Minnesota — just about the same distance as from CERN to OPERA] can and can’t do to check the OPERA measurement. I have not run into a Minos expert and have heard conflicting information. So please set me straight if what I now say is wrong. What I was told yesterday is that Minos does not need to take any new data in order to check the OPERA measurement; the data is fine. All that is needed is to calibrate the clocks, which can be done in relatively short time: months, not years. So that too would be good news… though I did not hear what the expected level of precision would be. [UPDATE — see a statement and a link from a more knowledgeable person in the comments below.]

There were many other talks, and here are just a few I enjoyed hearing about.

There was a very nice talk that included a description of the Super-KEK B-factory (a machine for making bottom quark/antiquark pairs in a controlled environment, at a rate 100 times higher than its predecessor machines). [Why make so many bottom quarks? The decays of hadrons that contain bottom quarks are a well-known opportunity for high-precision tests of the equations that describe the known particles and forces, tests that are often complementary to what can be done at a very high-energy machine such as the Large Hadron Collider.]

Nagoya University, which has a long and distinguished history in experimental high-energy physics, is involved in the development of innovative very high-precision devices for detecting particle tracks. These have several potential important applications to particle experiments. A previous generation of these devices was used in the OPERA experiment.

There was a presentation of this summer’s result from the T2K neutrino oscillation experiment (which sends neutrinos from one Japanese laboratory to another — Tokai to Kamioka [hence “T2K”]). This very important result, which still has rather low statistical significance and therefore is potentially subject to considerable change, suggests that the oscillation of muon neutrinos into electron neutrinos might be just below what was excluded by previous experiments. If this is true, it will not only be important in and of itself, it will also mean that other interesting neutrino-oscillation measurements will be easier than feared. (By the way — because they use rather low-energy neutrinos and because of certain timing uncertainties at Kamiokande, it appears they are unlikely to be competitive in checking the OPERA result on neutrino speeds.)

A very nice talk, mainly focused on the Fermi/LAT satellite’s results, covered many interesting topics. One that caught my eye was a very interesting limit (i.e. no signal was observed) on collisions of dark-matter particles (in which they are converted to known particles, which are then observable). Specifically, none were seen occurring in dwarf galaxies near the Milky Way, a good place to look because backgrounds from astrophysical sources are small in dwarf galaxies. And another result that was quite striking put a very powerful limit on the possibility that high-energy photons travel at a different speed from lower-energy photons — confirming to a new level of precision that the speed of light does not vary with the energy of the photons that make up the light.

There were interesting presentations on “lattice gauge theory” (computer simulations of the physical behavior of forces like the strong nuclear force, involving particles such as quarks, antiquarks and gluons) as applied to hypothetical worlds in which the number of types of lightweight quarks (lighter than the proton) is larger than we have in nature. Such studies might be relevant for understanding the Higgs field itself. (The buzzword here is a speculation for the origin of the Higgs field called “technicolor”.) Personally I find this very interesting, as I’ve been part of a community of theorists who for well over a decade have been urging lattice gauge theory experts to do these studies. Computer power seems to be reaching the point where useful results on the tougher cases are possible.

One of the world’s experts on technicolor (Professor Elizabeth Simmons) talked about a [relatively!] simple version of technicolor, called topcolor-assisted-technicolor, and presented evidence that current LHC data (from the search for the Higgs) already essentially excludes this possibility. This means a more complex version of this class of models (such as top-seesaw-assisted technicolor) is needed.

As far as I understood, MINOS will pursue both approaches: re-analysing existing data using improved methods (whatever that means), and taking new data with a setup optimized for this measurement.
The timescale for the re-analysis seems to be several months; the new data will of course take longer, 1-2 years.

Sorry, I forgot all the details from the presentation; in particular, I don’t remember how much improvement they expect for each analysis.

The slides from the talks do not appear to be posted on the web, unfortunately. I am not sure there is any plan to do so: I spoke at the Sakata Conference yesterday and no one asked me about making my talk public. In the case of the experimental results, you are most likely to find the results by going to the experiment’s website (which you can always find easily using search engines). In the case of theoretical results, you are most likely to find them by going to http://inspirehep.net/ , a powerful search engine; you can type in the author’s name (syntax: “a strassler, m” where “a” is for author) and you’ll get all the papers in reverse chronological order. If you fail and need help with a specific one, I can try to assist over the weekend.

I personally gave the talk at the Advance Neutrino Technologies Workshop a few weeks back. I am happy to answer any questions on MINOS. You will also find a few slides with information provided by T2K, with their very official statement.

More informally, as far as we can tell right now MINOS can reduce the systematic error on our previous measurement by a factor of 2 (maybe 3) with existing data. MINOS has an interesting beam structure that will help significantly, plus it makes a measurement which is neutrino to neutrino as opposed to relying on the proton waveforms. Work is also in progress to take all new data with better timing which will help us verify systematics in old data. Finally, the timing of the whole experiment will be updated for future running in the MINOS+ era.

I don’t know the precise answer; I do not have a reliable historical reference handy. It’s not hard to do it; Michelson reached the part per 100,000 level back in the 1920s, before we had electronics and lasers and atomic clocks.

The maximum likelihood fit is done without binning and the interior structure of the proton pulses affects the fit, not just the edges, in contradiction to the assumptions made by the paper you referenced. The paper relies exclusively on information gleaned from the original OPERA preprint and makes incorrect assumptions about the fitting methodology.

Can you elaborate further on this statement from your Oct. 26th post?
“And another result that was quite striking put a very powerful limit on the possibility that high-energy photons travel at a different speed from lower-energy photons — confirming to a new level of precision that the speed of light does not vary with the energy of the photons that make up the light.”

This seems to be the latest result : http://arxiv.org/abs/0908.1832 ; I thought, from what the speaker said, that they had an update, but maybe not. I’ll try to find time to explain this in more detail.

Nice explanation, Matt, which has been picked up in the press. The few-ns spikes will mean that every neutrino will count (not just mainly the ones at the ends, as per the Palmer analysis).

I’m not sure, however, that MINOS replicates the OPERA set-up, as I’ve commented at http://t.co/lOzF0IYF . As far as I understand, if there’s funny business across the hadron stop, MINOS (up to now) won’t have picked it up, because they rely on near detector / far detector deltas.

John — thanks! Could I ask you to explain that last bit about the hadron stop a little more slowly, for my non-expert readers (and hey, for me too…!)? I would have thought that relying only on the comparison of the near detector timing with the far detector timing would be an *advantage*… that this would make MINOS less sensitive to OPERA’s potential difficulties.

I would just like to bring to your attention the following paper: http://arxiv.org/abs/1110.3783 . The author questions the statistical procedure used to compute the global emission PDF. He claims that the OPERA team performed the two operations of summing and normalizing the single waveforms in the wrong order. I did not catch all the details, but the idea sounds interesting. What do you think?

[Actually, since the earth is curved, the neutrino beam is much deeper during much of its travels. And gravity far inside the earth is actually weaker than at the surface, so I’m not sure that part of your hypothesis would hold.]

In Einstein’s theory of gravity, there are certainly effects on time due to gravitational fields. And the GPS system, in fact, has to correct for it — it is a big enough effect that the GPS system would start giving you wrong directions very quickly if you didn’t account for it. But by the same token, the effect would be too small to give any effect on the OPERA neutrinos. Conversely, if you said Einstein is wrong and the effect is larger than he predicted, then you’d expect you’d already have seen signs of that in the GPS system.

Hi Matt, the second document I referred to suggests that gravity increases initially with depth (extract: “The value of g rises to a maximum of 999 gal at a depth of about 6 to 700km”) before then falling off. The beam does not go below that depth in its journey.

It strikes me that, if we assume travel at greater than the speed of light in a vacuum is impossible, it might still be possible in solid rock. If the neutrinos are passing their momentum to other neutrinos on the journey, they could plausibly travel at any speed, much like hitting a 732 km long piece of rock with a very large hammer: the movement observed at one end in comparison to the other is faster than the speed of light.

The general idea that the rock might be part of the story isn’t a crazy one; most of our measurements of speeds are done in vacuum (or air, at worst). Your specific idea doesn’t make sense, I’m afraid: in fact, your premise isn’t correct, for an interesting reason. If, in fact, you hit a 732 km piece of rock with a hammer, a shock wave will travel down the rock at the speed of sound in rock, and the other end won’t move until the shock wave arrives. (That’s how earthquakes work, right? The earth cracks at one point, but it is seconds or minutes before the shaking arrives at any given place.) Since the shock waves travel below the speed of light, the effect of the hammer-blow travels slower than light. And the same would be true for any imaginable effect of neutrinos on matter, or vice versa: within Einstein’s theory of relativity, you can prove [looking at the mathematics of the equations] that any such effect would be at or slower than the ultimate speed limit. So if OPERA is right, we really do have to modify Einstein’s equations somewhere, even if the effect is somehow due to the rock.
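To put rough numbers on the hammer analogy: taking a typical seismic P-wave speed of about 6 km/s (a round figure assumed here for illustration; real rocks vary), the far end of a 732 km rock would not budge for about two minutes, while light crosses the same distance in a couple of milliseconds:

```python
# Compare signal travel times across a 732 km "rod" of rock (illustrative).
c = 299_792_458.0    # speed of light in vacuum, m/s
v_sound = 6_000.0    # rough seismic P-wave speed in rock, m/s (assumed)
length = 732_000.0   # length of the rock, m

t_light = length / c        # a few milliseconds
t_sound = length / v_sound  # roughly two minutes

print(f"light:      {t_light * 1e3:.2f} ms")
print(f"shock wave: {t_sound:.0f} s (~{t_sound / 60:.1f} minutes)")
```

The hammer-blow signal is slower than light by a factor of tens of thousands, which is the point of the earthquake comparison above.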

If the experimental results from OPERA are correct, then, since we know from astronomical observations that the speed of neutrinos in empty space is the same as the speed of light, it must imply that the speed of neutrinos through rock is marginally faster than their speed through empty space. A surprising result, but not a contradiction of Special and General Relativity. It would be a great follow-up test to try firing the neutrinos through the centre of the Earth.

On the contrary, Special and General relativity DO imply that any effect of the rock can only slow the neutrinos down. You can prove this. So even if the rock has a role to play, one must change Einstein’s equations somehow.

Hi, another thought (too much coffee!!). Is there a wave/particle duality issue here? The neutrinos are apparently arriving 60 ns too soon. Assuming the speed of light, this is equivalent to approximately 18 metres too soon. What if this were the wavelength (or a factor of it) of the neutrinos? If the muon detectors at CERN trigger when the neutrino leaves (the tail of the wave) and the OPERA detectors trigger on the leading edge of the wave at arrival, this could create an error. I tried the maths and it appears to work for low-energy neutrinos (~10 to the minus 27 eV), depending on the detector functionality. I can’t make the maths work for the high-energy (17 GeV) neutrinos fired from CERN, though . . .

However, regarding the 18 meter/61ns discrepancy, no matter how accurately the geodesy distance between locations was determined, neutrino propagation is subject to many potential influences including relativistic and dynamic gravitational variations involving the Earth’s geometry, the moon and the Sun.

Critically, the actual path taken by propagating neutrinos and especially the distance that they traversed cannot be definitively and precisely determined.

How can any approximation of the traversal time over an indeterminable distance (estimated to be 730 km) be so precisely determined to be 61 ns less than light would require in a vacuum over the presumably identical distance?

While the actual traversal path and distance of neutrinos cannot be determined by any experiment subject to so many other potential uncontrolled influences, similar experiments at other geographically dispersed facilities may at least produce unexplained variations in their supposed discrepancy with the speed of light, hopefully drawing attention to the fundamental indeterminable-traversal-distance issue.

I hope this explains my concerns, although I’m merely an inelegant layman…

We know that light can slow down as it passes through different mediums and we know that the distance measured between the A and B points in the experiment uses lasers at least in part.

Is it possible that in fact the reverse has actually occurred … that “light slowed down rather than neutrinos sped up” … that something in the facility is causing the distance measured to be “60 ns greater” than it actually is, and consequently the particles are indeed traveling at the speed of light, just over a miscalculated, apparently greater distance?

The distances were measured using the global positioning satellite system, using electromagnetic waves passing mainly through the vacuum of space; if the distances were wrong because of the speed of light being slowed down somehow, we would certainly have detected that, in other ways, long ago.

Not to be argumentative, but GPS and standard geodesy routines were used to determine the distance between two locations, NOT the path taken by neutrinos in flight! The actual distance traversed by any neutrino is indeterminable!

Unlike laser beams following an optical fiber, neutrinos are free to follow the shortest path between two points, but they do interact gravitationally. As I understand it, gravitational influences can vary at small scales within the Earth, depending on the path previously taken. Unless the geodesy routines used determine distance in the same manner that neutrinos select their flight path, the incorrect distance is being used to evaluate speed.

As a layperson I cannot assess the potentially variable uncontrolled factors dynamically affecting the paths taken by neutrinos, but some may be addressed in the report: Wolfgang Kundt (2011), “Speed of the CERN Neutrinos released on 22.9.2011 – Was stated superluminality due to neglecting General Relativity?”, http://arxiv.org/abs/1111.3888v1

Assume the speed of light is constant. If the time taken is accurate, then the distance travelled is incorrect. Is the discrepancy observed actually the speed of our solar system or our galaxy through space?

Maybe I should elaborate. Perhaps the discrepancy is the result of a directional motion of the Earth that to date we have been unaware of. Assuming the experiment has allowed for the Earth’s rotational speed, orbital speed, solar-system rotation and galactic speed, then by performing the experiment repeatedly with all these factors accounted for, it would be possible to calculate this new speed and directional movement of the Earth.

I am simply arguing that any distance estimate used to calculate the speed of neutrinos passing through the Earth, for example, cannot be confirmed. As a result, neutrino paths through the Earth’s locally variable gravitational fields are not determinable.

Since the path and distance actually traversed by neutrinos are not precisely determinable, no conclusive result can be obtained.

In my preceding comment I also refer to Wolfgang Kundt’s argument that the OPERA experiment “is the first experiment to test Einstein’s theory for the (weak) gravity field of Earth, with the result that the neutrinos propagated (just) luminally.”

In combination: Kundt argues that the 61 ns discrepancy is an effect of general relativity on precisely timed measurements, while I simply dismiss as invalid any attempt to precisely determine the distance traversed by neutrinos over such relatively small distances through complex gravitational fields.

IMO, If we were to set up a neutrino detector facility on the moon (accounting for a much larger dispersion angle) perhaps we could obtain reliably conclusive timing results for detected neutrinos, since the precise distance to the moon can be experimentally confirmed through laser reflection. Repeatable results should allow calibration of any gravitational effects.
