(Although I am not a physicist) I understand that this is talking about the concept of "time" from a frame of reference between the GPS satellites and the ground stations. However, the original paper's implementation did not measure time with GPS satellites (that would be silly). Instead, it used the satellites to obtain very precise distances and when they did this, they accounted for relativity. The time recording devices were atomic clocks at the locations of the facilities on the surface of the Earth. As the second article notes, they just said they did this and you assume they did it correctly. However, if they miscalculated relativity between the satellites and ground stations, it's going to be in the form of the distance being incorrectly measured -- not the actual time itself. And that distance (which would be slightly shorter than they calculated) should then result in an explanation of the nanosecond difference.

The paper doesn't claim that the distances were measured incorrectly; it claims the timing was inaccurate due to special relativity (not general relativity, which another poster in this thread was confused by).

In essence, the paper makes the claim that the experiment is using GPS as a reference clock, and that the reference clock (the satellite) is in a different state of motion relative to both the neutrino source and the detector.

60 ns translates into 18 meters at the speed of light. If the error was that large any car GPS device would be showing you as driving on some other street.
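That conversion is easy to sanity-check; a minimal Python sketch:

```python
# How far does light travel in 60 ns?
C = 299_792_458.0   # speed of light in m/s (exact, by SI definition)
dt = 60e-9          # 60 nanoseconds, in seconds

distance = C * dt
print(f"{distance:.2f} m")  # just under 18 m
```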

I was working with some high precision GPS receivers [trimble.com], and they can place you on the map with accuracy of a couple of centimeters. The shape of the Earth is also pretty well understood now.

One unfortunate possibility would be that the clocks are wrong. They had to move them between sites, since they weren't willing or able to synchronize them over the radio where they are (the varying propagation paths would be hard to deal with.) A more pleasing (to me) outcome would be that FTL is real.

60 ns translates into 18 meters at the speed of light. If the error was that large any car GPS device would be showing you as driving on some other street.

Not if the map is also off by 18 meters. How do they put the roads on the map in the first place? Most likely, by GPS, or whatever GPS was calibrated against when it was implemented.

I'm not qualified to judge whether the paper is right, but it is very easy to fall into circular reasoning when it comes to standards. Deriving true standards to that level of precision is hard, so lots of stuff gets derived from other measurements, and if anything on the chain goes wrong you can get lots of results that agree and yet are all wrong.

Not if the map is also off by 18 meters. How do they put the roads on the map in the first place? Most likely, by GPS, or whatever GPS was calibrated against when it was implemented.

The current datum [about.com] for GPS is WGS84. Locations of many places on Earth were carefully measured for centuries, using astronomy and trigonometry. I don't know if they are accurate enough to calibrate the GPS.

A systematic, uniform error, like a translation of the entire datum, would have no effect on the OPERA experiment - however you slide or rotate the outer shell of a sphere it doesn't change the distance between two points. It would require a systematic but non-uniform error to cause this effect. I guess it is possible, since there is no explanation so far of the OPERA results. Such an error has to be location-specific and it should be invisible to the WAAS.

According to Wikipedia [wikipedia.org], the maximum distance from Earth's center is at the summit of Chimborazo, 6,384.4 kilometers from the center. The lowest is the floor of the Arctic Ocean, 6,352.8 kilometers from the center. This makes for a difference of 31.6 kilometers. Dividing this difference by either of these -- or any of the various mean radii on the linked page -- gives a deviation of less than 0.5 percent from a perfect sphere.
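The arithmetic checks out; a minimal sketch using the figures quoted above:

```python
# Deviation of Earth from a perfect sphere, from the extremes quoted above
r_max = 6384.4   # km, summit of Chimborazo (distance from Earth's center)
r_min = 6352.8   # km, floor of the Arctic Ocean

diff = r_max - r_min                 # 31.6 km
deviation = diff / r_max * 100.0     # percent of the larger radius
print(f"{diff:.1f} km, {deviation:.2f}%")  # 31.6 km, under 0.5%
```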

Unless you believe that map-making started with GPS in 1994, then obviously not.

Most map making is done with aerial photographs mixed with good-old-fashioned triangulation-based surveying, which is then reconciled with GPS. The error ratios in GPS are well understood, as the system has been used to check against other methods in this way almost continuously since it came online. If GPS positioning was throwing up 18 metre errors left and right it would have been noticed many many times.

I don't understand this either. Either this is simply a preexisting inaccuracy in all GPS readings due to relativity that has never been taken into consideration (highly unlikely), or there's something else going on that I don't grasp. I don't see how the neutrino motion relative to the motion of the satellites is a factor here, as no direct measurement between the two is being made in that way.

One of the things the GPS system helped prove was that relativity is real and must be accounted for in systems o

Maybe it's because GPS understands relativity well enough to get planes to the correct runway, and cruise missiles to their target, but the people who designed it didn't anticipate measuring the speed of neutrinos.

Maybe it's because GPS understands relativity well enough to get planes to the correct runway...

GPS understands relativity well enough to require general-relativistic corrections. This paper suggests that the GPS clock is inaccurate and suffers a lag based on location; since GPS requires accurate timing to pinpoint your location, a 64 ns time difference would put you 20 m off your correct location. In addition, the author uses a very simplistic model of the GPS clock and satellite for getting the clock. I would also have assumed that the GPS clock is based on multiple satellites, since it has to know your location to calculate the propagation delay, and it does this by comparing one satellite clock to another.

However, the final nail in the coffin is that he doesn't know how to spell photon (it is not spelt foton!)... so I have extreme doubts that this paper is correct. In fact, I'd need to hear from a GPS expert that his simplistic model is reasonable, because I don't believe that it is (but then I'm not a GPS expert!).

Maybe it's because GPS understands relativity well enough to get planes to the correct runway...

GPS understands relativity well enough to require general-relativistic corrections. This paper suggests that the GPS clock is inaccurate and suffers a lag based on location; since GPS requires accurate timing to pinpoint your location, a 64 ns time difference would put you 20 m off your correct location. In addition, the author uses a very simplistic model of the GPS clock and satellite for getting the clock. I would also have assumed that the GPS clock is based on multiple satellites, since it has to know your location to calculate the propagation delay, and it does this by comparing one satellite clock to another.
However, the final nail in the coffin is that he doesn't know how to spell photon (it is not spelt foton!)... so I have extreme doubts that this paper is correct. In fact, I'd need to hear from a GPS expert that his simplistic model is reasonable, because I don't believe that it is (but then I'm not a GPS expert!).

I'm not an expert either, although I have worked on GPS aircraft navigation and augmentation systems. You are right that the GPS clock is based on multiple satellites. A GPS fix needs a minimum of four satellites, and the receiver triangulates position in four dimensions: the three spatial dimensions and time (four unknowns, four data points). What's more, those four will not be in the same plane (the satellites themselves form 6 orbital planes), so the bit in the article about "The orbits of these satellites are at 20.2 × 10^6 m from the earth's surface in fixed planes inclined 55° from the equator with an orbital period of 11 h 58 min [3]. This implies that they fly predominantly West to East when they are in view of CERN and Gran Sasso, which is roughly parallel to the line CERN-Gran Sasso" looks to me like a fundamental misunderstanding of the satellite orbits. The satellites on which a time fix is based will not all be travelling in the same direction. It is possible to use other position information as data points, and so reduce the number of satellites needed for a fix, but I'm not sure why anybody would do that when they can improve accuracy by using all visible satellites (and anyway, even if they did use a single satellite plus an accurately known spatial position, the author of the paper still wouldn't know which orbital plane the satellite used was in, and so wouldn't know the direction of movement).
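The "four unknowns, four data points" fix described above can be sketched numerically. This is a toy illustration with made-up satellite positions (not real GPS geometry or signal structure), solving for position and receiver clock bias by Newton iteration, which is essentially what real receivers do:

```python
import numpy as np

# Hypothetical satellite positions (meters). The point here is the
# four-unknown solve, not a realistic constellation.
sats = np.array([
    [20e6,  0.0,  0.0],
    [0.0,  20e6,  0.0],
    [0.0,   0.0, 20e6],
    [12e6, 12e6, 12e6],
])
truth = np.array([1e6, 2e6, 3e6])  # "true" receiver position (m)
bias = 300.0                        # receiver clock bias, in meters (c * dt)

# Pseudorange = geometric range + receiver clock bias
rho = np.linalg.norm(sats - truth, axis=1) + bias

# Newton iteration on (x, y, z, bias), starting from Earth's center,
# which is the usual cold-start initial guess.
x = np.zeros(4)
for _ in range(20):
    d = np.linalg.norm(x[:3] - sats, axis=1)
    res = rho - (d + x[3])                           # measurement residuals
    J = np.hstack([(x[:3] - sats) / d[:, None],      # d(range)/d(position)
                   np.ones((4, 1))])                 # d(range)/d(bias)
    x += np.linalg.solve(J, res)

print(np.round(x[:3]), round(x[3], 3))  # recovers position and bias
```

With four satellites the system is exactly determined; real receivers use every visible satellite and solve the overdetermined system by least squares, which is why more satellites means better accuracy.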

However the final nail in the coffin is that he doesn't know how to spell photon (it is not spelt foton!)...

Not every language spells photon as such. For example, Dutch, the language of the author, spells it as foton [google.com]. So yes, photon is spelled "foton", just not in English.

As to the paper, the sort of error that supposedly happened with GPS, strikes me as the sort of error that would not be corrected by the system since it's not relevant to GPS's primary task, positioning to within tens of meters. It's particularly suspicious given that it is of the right size to explain the anomaly.

As to the paper, the sort of error that supposedly happened with GPS, strikes me as the sort of error that would not be corrected by the system since it's not relevant to GPS's primary task, positioning to within tens of meters. It's particularly suspicious given that it is of the right size to explain the anomaly.

Not sure why you think that GPS's primary task is positioning to within tens of meters. For military and scientific research, GPS is capable of getting down to within +/- 6 inches. The article doesn't state, but I would assume they are using the far more accurate Differential GPS often referred to as DGPS. [wikipedia.org]

The only time GPS was accurate to within tens of meters, was when they had the SA turned on to limit enemy use of the system. That was abandoned years ago.

Not sure why you think that GPS's primary task is positioning to within tens of meters. For military and scientific research, GPS is capable of getting down to within +/- 6 inches. The article doesn't state, but I would assume they are using the far more accurate Differential GPS often referred to as DGPS.

DGPS includes one or more local reference points and hence, isn't strictly a function of GPS. I did forget that military and certain other uses do get more accurate placement, but they don't get the accuracy of DGPS.

However the final nail in the coffin is that he doesn't know how to spell photon (it is not spelt foton!)...so I have extreme doubts that this is paper is correct. In fact I'd need to hear from a GPS expert that his simplistic model is reasonable because I don't believe that it is (but then I'm not a GPS expert!).

The author is Dutch. In Dutch, it is spelled foton [wikipedia.org]. You can't blame everyone for speaking English as a second language.

(I *am* a physicist) Actually, the original paper *did* measure time with GPS - more to the point, they use GPS to establish a common frame between the two locations. Look at Figure 5 of the OPERA paper (http://arxiv.org/pdf/1109.4897v1).

Having said that, as other replies have noted, this kind of correction is well understood, so while it isn't explicitly laid out as far as I can tell, it's unlikely the OPERA group screwed this up. What may well be true, though, is that there may be systematic offsets either in the GPS timing system, the implementation at Gran Sasso (they actually have a big waveguide that they run from the Earth's surface all the way to the GPS receivers they have by their detector deep underground), or any of the myriad corrections that were needed to determine the time-of-flight baseline (although as far as I can tell they worked very hard to get this measurement right...).

It's also rather suggestive that the author of this paper has no particle physics (or even physics) credentials. So he/she probably doesn't know the OPERA collaboration's processes very well (admittedly, these details should be in the paper, but the tradition of the community is not to go into that sort of detail in announcement papers like this...).

The summary uses the word 'infamous' to describe the original announcement. In my mind it could only be considered infamous if they made a simple and blatant error. As it looks to a layman, they have made an error, but in finding the error physics will gain some interesting knowledge it didn't have before. If that is the case, I'd imagine this would be remembered for all the right reasons, not all the wrong ones.

The word "infamous" is often used by scientists to describe problems that a large number of people have attempted to solve and have failed. I suspect that this is the sense the submitter was using it in.

Um, isn't the resolution that GPS can give you ridiculously low for doing physics experiments, even with access to the military-quality signal?

The civilian specification is that absolute time accuracy is +/- 100 ns (95%, if I recall correctly), although the system routinely achieves +/- 10 ns accuracy (not least because the 100 ns specification was set when only L1 was available to civilians, so GPS on its own couldn't be corrected for ionospheric effects). What's more, for these experiments it's not the absolute time that matters, it's drift over a very short time that matters. In other words, standard GPS is more than good enough for the CERN experiment.

CERN has its own very accurate time product. There is a team at JPL that uses it and others to determine earth orientation. Altogether it might be the most accurate time product in the world. It's definitely more accurate than GPS and definitely available to scientists at CERN. If they were using GPS for this they are a bunch of mooks even if GPS was "good enough".

This is another easy-to-digest paper written by someone who doesn't have the first clue about what was actually done in the experiment, trying to explain it with undergrad physics. And the press jumps on each and every one of these, no matter how bad they are.

In this case, GPS clock synchronization to nanosecond levels is regularly done in metrology, the relativistic effects are well known and compensated for (because it wouldn't work at all if they weren't), and the synchronization was confirmed by a non-GPS method.

I won't call it "garbage", but otherwise I was thinking along similar lines (disclaimer -- I have a Master's in Physics but I haven't bothered to do the math). 60ns is an eternity in an experimental setup, and while the two sites are at different latitudes (and a straight-line three-space trajectory sends the neutrinos along a curved path in spacetime), I can't see earth's relatively weak gravity accounting for such a discrepancy. It's a curved 4-space path, but it's not *that* curved.

In order for their experiment to succeed they needed extreme targeting accuracy, to within 1 meter. This claim requires them to be off by 20 metres. The fact that their experiment succeeded at all for its original purpose kills this bullshit right off.

They don't automatically work out the precise timing. So if one is using GPS for timing, one can't just rely on the standard GPS software and calculations.

That must be bad news for manufacturers of GPS timing receivers [trimble.com]. As a matter of fact, I was working with this very receiver; it's tiny but it tells you the time with 15 ns accuracy -- more accurate than the error in the experiment.

But as I understand it, the OPERA people weren't using the GPS timing directly; they physically moved a synchronized clock (a portable atomic clock) between the sites.

Yes, sorry, imprecise on my part. I mean they don't do precise timing that is universal. They can be used as I understand for short timing in any specific location but not for timing that is by itself correct for two different locations. Is that correct? In any event, I agree with you that it seems like this shouldn't be an issue given how OPERA was doing the timing.

I mean they don't do precise timing that is universal. They can be used as I understand for short timing in any specific location but not for timing that is by itself correct for two different locations. Is that correct?

I'm not quite sure how to approach this. First of all, we must set aside large relativistic effects, because with those the notion of common timing becomes moot. (What time is it now at Proxima Centauri? Well, it depends on how fast you fly the clock there.)

My money is on the fact that the true path of the beam was not from one city to the other, but from the spot where one city was when it started to where the other city was when it stopped. If the path was opposite the rotation of the earth, that'd be very slightly shorter right? Earth doesn't spin fast compared to the speed of light, but this error wasn't very large either

Earth is not just rotating around its axis. Earth is also rotating around the Sun, and the Sun moves on its orbit within the Galaxy, etc. etc.

Earth's orbital speed is about 30 km/s. The test distance is 730 km. Neutrinos traveled that path in about 2.4 ms. During this time Earth moved by 30 km/s * 2.4 ms = 72 meters. Since the neutrinos were emitted at (nearly) the speed of light, even though the source was receding, this can be interpreted as if the receiver were closer to the source than anticipated.
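For what it's worth, those numbers are easy to check (the light-speed flight time comes out closer to 2.44 ms, so the displacement is about 73 m, close to the 72 m quoted):

```python
C = 299_792_458.0   # speed of light, m/s
v_orbit = 30e3      # Earth's orbital speed around the Sun, ~30 km/s
d = 730e3           # CERN-Gran Sasso baseline, meters

t = d / C                                   # light-speed flight time
print(f"flight time: {t * 1e3:.2f} ms")     # ~2.44 ms
print(f"Earth moves: {v_orbit * t:.0f} m")  # ~73 m
```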

Only relative motion matters, i.e., in the inertial frame of the source when the leading edge of the beam was emitted, how much has the target moved by the time the leading edge of the beam reaches the detector?

At two points opposite each other on the equator, that distance would be less than 2.3 meters; with only 730 km separating the source and target, that distance is only a few centimeters (the source and target have nearly the same velocity).

If neutrinos were faster than c, the neutrinos from SN1987A would have arrived "five years sooner," [newscientist.com] while they were measured arriving "3 hours before the dying star's light caught up" as expected...
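A rough check of the SN1987A argument, assuming OPERA's reported fractional excess of about (v-c)/c = 2.5e-5 and a distance of roughly 168,000 light-years:

```python
# If neutrinos exceeded c by OPERA's claimed fraction, the SN1987A burst
# would have led the light by years, not hours.
delta = 2.5e-5          # OPERA's reported fractional speed excess (v-c)/c
distance_ly = 168_000   # approximate distance to SN1987A, light-years

lead_years = distance_ly * delta   # head start over light, to first order
print(f"{lead_years:.1f} years")   # ~4.2 years, i.e. "years", not hours
```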

If neutrinos were faster than c, the neutrinos from SN1987A would have arrived "five years sooner," [newscientist.com] while they were measured arriving "3 hours before the dying star's light caught up" as expected...

You are making the assumption that the neutrinos from SN1987A were excited to the same or higher energy level by the supernova that the LHC neutrinos were excited to. My bet is this assumption is false.

Sorry, but suggesting that the CERN and OPERA clocks are the GPS satellites, and
adjusting for their speed, is just wrong. CERN and OPERA used GPS for accurate
geophysics and timing measurements, but they have their own synchronized clocks in the
Earth's frame. The faster-than-light measurement isn't going to go away that easily.

My personal solution is that neutrinos feel a fifth force (mainly at low energy), and
this fifth force has left enough binding energy for the Scharnhorst effect to increase
the speed of the neutrinos.

It's bogus. (Yes, I am a physicist.) OPERA used portable atomic clocks, which were moved to the two labs and then synchronized via GPS (see this article [nature.com]). GPS thoroughly incorporates general relativity (which includes special relativity). It has incorporated GR ever since it was first built, because if it didn't, it wouldn't work. At all. No, not even well enough for hiking and driving. Here [livingreviews.org] is a review article on relativity in GPS. GPS uses coordinates called Earth-Centered Inertial (ECI). These are coordinates (t,r,theta,phi), where the spatial coordinates are spherical coordinates that rotate along with the earth, and t is the time coordinate of a hypothetical observer in a nonrotating frame at rest relative to the center of the earth. General relativity is completely agnostic about what coordinate system you use, so this choice of a coordinate system is not a choice that has any physical significance; it's just a bookkeeping thing. Van Elburg assumes that GPS was constructed by people who didn't understand relativity, and therefore GPS times need to be corrected for relativistic effects. That's just completely wrong.
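For reference, the two relativistic effects on a GPS satellite clock can be estimated from textbook values (the orbit radius and GM below are approximate; the commonly quoted result is roughly -7 us/day from orbital velocity and +46 us/day from sitting higher in Earth's potential, netting about +38 us/day):

```python
import math

C = 299_792_458.0        # speed of light, m/s
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6        # mean Earth radius, m
r_sat = 2.6562e7         # approximate GPS orbital radius, m

v_sat = math.sqrt(GM / r_sat)   # circular orbital speed, ~3.87 km/s

# Special relativity: the moving satellite clock runs slow
sr = -0.5 * v_sat**2 / C**2
# General relativity: the clock higher in the potential runs fast
gr = (GM / C**2) * (1.0 / R_earth - 1.0 / r_sat)

sr_day = sr * 86400 * 1e6   # microseconds per day
gr_day = gr * 86400 * 1e6
print(f"SR: {sr_day:+.1f} us/day, GR: {gr_day:+.1f} us/day, "
      f"net: {sr_day + gr_day:+.1f} us/day")
```

Without the net ~38 us/day correction built into the satellite clocks, position errors would accumulate at roughly 10 km per day, which is the parent's point: an uncorrected GPS wouldn't work at all.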

Just like you don't need to remove the air in a "tunnel" between point A and point B to send a beam of light between them, you don't need to remove the rock in a "tunnel" between point A and point B to send a beam of neutrinos between them. Of course enough air will block the light, and several hundred light-years of solid rock would block the neutrinos. 900 km of rock, however, is not going to do anything; digging a tunnel would make no difference at all.

The authors have documented their whole procedure here: http://www.ohwr.org/projects/cngs-time-transfer/wiki [ohwr.org]
The author of the bogus paper assumes that the people who designed GPS, and those who use it in metrology labs around the world, do not know anything about relativity. He also proceeds to an analysis without first checking his very basic premises with the authors of the neutrino velocity paper, or anybody close to the actual experiment. Is it that hard to check one's assumptions first?

I think it's fair to assume that the researcher would read the original paper before publishing a reaction to it.

The original paper does not go into detail about the procedures, because that is beyond the scope of the paper. You are supposed to go look these things up for yourself, and the person who wrote this paper very clearly didn't.

There are projects looking for changes in the constants, cosmologically speaking. It's not something they haven't thought about, it's just really hard to detect. Nobody knows if this is the case and it's surely not ruled out, nor assumed to be constant everywhere, but it surely seems to be everywhere local to us.

This attitude is not helpful. This is part of the reason why biblical literalists get away with what they do. They say "hurp, we don't know anything at all, so you may as well believe Genesis word-for-word."

It is anti-reason and a cop-out.

And you cap it off with a complete misunderstanding about what a theory is.

Well, if one particle goes one direction at near C, and another particle goes the opposite direction at near C, then they are traveling apart at nearly 2C (relative to the original frame, that is). IANAP, but that's how I understand it.

That could help your lag, because you could position an intermediate server between Earth and Mars serving as the host. Then it would only be a minimum of 1 minute 33 seconds lag, perfectly acceptable for a 1993 game of Doom.:)

I don't think so. If a particle is traveling away from me at 185K miles per second, after 1000 seconds it will be at a distance of 185M miles. If another particle is traveling in the opposite direction at the same speed, after 1000 seconds it will also be at a distance of 185M miles. (All distance/time measurements in my frame.)

So in my frame, they will be 370M miles apart after 1000 seconds, and having started 0 miles apart, the separation rate is 370K mi/sec. Which is nearly 2C (372K mi/sec).

And to be clear, I understand the concepts of time dilation and length contraction. So I'm still acknowledging that each of those particles will, in their own frame, see the other particle leaving at a speed of less than C. They will see the distance between themselves as less than the distance I see between the two.
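The distinction drawn here (a near-2C separation rate in one frame, versus the sub-C relative velocity each particle measures) is exactly the relativistic velocity-addition formula. A minimal sketch:

```python
C = 299_792_458.0   # speed of light, m/s

u = 0.99 * C   # particle 1 speed in the lab frame (illustrative)
v = 0.99 * C   # particle 2 speed, opposite direction

closing = u + v                        # separation rate, lab frame
w = (u + v) / (1.0 + u * v / C**2)     # speed of one particle as seen by the other

print(f"{closing / C:.2f}c")   # 1.98c -- separation can approach 2c
print(f"{w / C:.5f}c")         # still below c in either particle's frame
```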

And this doesn't account for the time light takes to travel back to me. That has to be accounted for and calculated as I receive the information back. It also doesn't speak to the relativity of simultaneity.

I don't get the point of your post, or why you claim I don't understand relativity. I never claimed that anything would actually travel FTL, or that I have made some huge discovery.

You even confirmed what I did say: that it is possible to observe separation speeds at or near 2C without violating relativity or the principle of invariant light speed. My post was a response to a post that assumed I was talking about the speed of any single particle relative to another point in space... which isn't true.

No worries then - what you say is completely correct then, you can observe the distance between two particles increasing at 2c. Somewhere in the conversation thread, I thought someone was saying that this would equate to a speed of 2c.

Suppose Alice and Bob are one light-minute apart, they both agree to measure the state of entangled particles, and Bob sends Alice his measurement immediately. Alice predicts the value Bob will send a minute prior to receiving it, which means she knew it in the past according to the principle of causality.

Well... that's how I understand it. But this is an area that I really don't get very well... my interpretation is probably fundamentally wrong somewhere. I know that Alice couldn't send any new information this way.

Ok, but even if Alice and Bob found a way to transmit new information instantly, I'm still not seeing how a paradox could exist. Alice could send Bob a message, and Bob could calculate a response and send it back. Alice would receive the calculated response before light-speed would allow, but that wouldn't seem to violate causality by creating a paradox... from what I can tell, it would only violate the principle that nothing could travel faster than the speed of light. But instantaneous information transfer (FTL) is the supposition, therefore nothing has been proven.

The reason it seems unproblematic is that you think about the problem in regular Euclidean space. However, our universe is not Euclidean, it is (ignoring general relativity) Minkowski space. In Minkowski space, the ordering of events in spacetime is much harder to pin down. Sending a signal "faster than light" allows you to send signals back through time, causing paradoxes.

Try finding a good introduction to special relativity, it should have some thought experiments to further demonstrate how this works.
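One standard thought experiment from those introductions: apply a Lorentz boost to a hypothetical faster-than-light signal, and the order of emission and reception flips. A minimal sketch in units where c = 1:

```python
import math

def boost(t, x, v):
    """Lorentz-transform an event (t, x) into a frame moving at speed v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

# A hypothetical 10c signal: emitted at (t=0, x=0), received at (t=1, x=10).
# The separation is spacelike, so frames disagree on the time ordering.
t_emit, _ = boost(0.0, 0.0, v=0.5)
t_recv, _ = boost(1.0, 10.0, v=0.5)

print(t_emit)            # 0.0
print(round(t_recv, 3))  # negative: reception happens BEFORE emission
```

In the boosted frame the "reply" to such a signal can arrive before the original question was asked, which is the causality paradox the parent describes.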

Yeah, I understand all of that, but (after a little more research) it doesn't make any difference when all parties are in the same inertial reference frame. It makes a difference when you add another inertial reference frame. I like the explanation at http://www.theculture.org/rich/sharpblue/archives/000089.html [theculture.org], which begins with the situation I proposed with Alice and Bob transmitting with an ansible (which would be equivalent to instantaneous transmission via some mechanism like quantum teleportation).

Ok, but even if Alice and Bob found a way to transmit new information instantly, I'm still not seeing how a paradox could exist. Alice could send Bob a message, and Bob could calculate a response and send it back. Alice would receive the calculated response before light-speed would allow, but that wouldn't seem to violate causality by creating a paradox... from what I can tell, it would only violate the principle that nothing could travel faster than the speed of light. But instantaneous information transfer (FTL) is the supposition, therefore nothing has been proven.

You're 100% correct: there is no contradiction with regard to FTL communication as an abstract idea. The devil's in the details (how you actually accomplish it).

This can be done by sending your message through a region of space with negative energy density; it will get there faster than if sent through a region of normal space... (wormholes).

Or, along the same lines, you can use a warp drive to beat light.

The problem is going faster than light without playing games with space, coherence loopholes, extra dimensions, etc.

Alice predicts the value Bob will send, a minute prior to receiving it

Bob is superfluous here; he is not sending anything that Alice doesn't know. Alice can flip a coin and write the outcome down. Then she copies it on a Post-It note and sticks it to the monitor. Then she looks at it a minute later, compares with the earlier record and finds that they are identical. This only means that she sent the information into the future, which is not very unusual.

This is information wise identical to me giving two sealed envelopes to Alice and Bob. Each contains an identical sheet of paper with either a 0 or a 1 written on it. Bob opens his envelope and sends the number to Alice. Alice then opens hers and after a minute notes it's the same as what Bob sent her.

The situation you describe can't violate causality. In order to have become entangled, the particles must have been near to each other in the past before one of them was put on the slow boat to Bob's lab.

The measured difference was only something like a factor of 1.00001 faster, so this is of total insignificance.

You're wrong. The error bars on experimental data are a statistical thing. By the very fact that their margins of error didn't allow their confidence intervals to capture the speed of light, the speeds of the neutrinos were statistically significant in their difference from (in this case, above) c at some significance level. I don't know what their level was, but it was probably 0.1, 0.05, or tighter, since these are pretty standard significance levels. That means that if all their instruments were calibrated correctly, the deviation is very unlikely to be a statistical fluke.
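For the record, plugging in the numbers reported in the OPERA preprint (60.7 ns early arrival, 6.9 ns statistical and 7.4 ns systematic uncertainty) gives the roughly six-sigma significance they claimed:

```python
# Significance of OPERA's early-arrival result, from the reported numbers
delta = 60.7           # ns early relative to light travel time
stat, syst = 6.9, 7.4  # ns, statistical and systematic uncertainties

sigma = (stat**2 + syst**2) ** 0.5   # combined in quadrature
print(f"{delta / sigma:.1f} sigma")  # ~6 sigma
```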

The reason the speed of light is an unbreakable barrier is that it would take theoretically infinite energy to accelerate anything massive to the speed of light, let alone past it. It's the place where the equations break down into infinity, and we can't predict exactly what's going on.

If there's evidence that the speed of light isn't an absolute barrier, it means our current understanding of relativity is wrong.

Exactly. Newtonian physics wasn't "wrong" at the time, because it explained what people could observe; it just couldn't explain everything we could see once we could look further into space. Maybe this is us looking further.

I think what you're missing is the fact that experiments done to calculate the speed of photons are completely different from experiments done to calculate the speed of neutrinos. It's quite feasible for an error to crop up in one group of experiments and not in the other.
Calculating the speed of photons via experimentation is *much* easier, since they're not nearly as hesitant to interact with other particles. Additionally, humanity has done way more experiments calculating the speed of light. We're g