This site is the blogging component for my main site Crank Astronomy (formerly "Dealing with Creationism in Astronomy"). It will provide a more interactive component for discussion of the main site content. I will also use this blog to comment on work in progress for the main site, news events, and other pseudoscience-related issues.

Sunday, August 19, 2012

what part of "The whole point about this page is to show that in practice the drift of the satellite clock rates compared to a ground clock does not accumulate if the position is determined by the difference between satellite timings rather than a comparison to a ground clock" you don't understand? Do you have practical knowledge of the software implementation running the gps structure? The "correction" made by general relativity amounts to 0.5 cm, so completely irrelevant for a discussion of the experimental validity of this theory. Furthermore, as is clearly stated in the Thomas Smid page and in countless pages with gps info (google it, don't be lazy), the position of a receiver is not calculated having as reference a ground clock, but satellite clocks, which have the same "important to the problem" special relativity factor (ex. time dilation).

I've studied the books and documents used as training resources for those who wish to develop products using GPS signals (such as the references listed below). I also know a few people responsible for maintaining the precision timing systems needed for space flight, including GPS. I also have to keep track of timing issues in my day job, where I must often assemble simulations and satellite datasets synchronized to different types of clocks, requiring use of the IAU standards on reference systems (see references below). The newest ephemerides I use (Wikipedia: Jet Propulsion Laboratory Development Ephemeris) include an ephemeris time which needs relativistic corrections to synchronize with Earth clocks.

According to the references and documentation for GPS signals, Dr. Smid uses the wrong equations for GPS triangulation. It is not clear what the GPS 'operating model' is for the equations used by Dr. Smid but it is inconsistent with the GPS documentation (see references below).

Actually the importance (%) of General Relativity in the accuracy of the gps system is close to zero. Have you read "Relativity in the Global Positioning System" by Neil Ashby? Be aware that one of the relativistic corrections was wrongly implemented in the first gps satellites, but it didn't make much of a difference for the practical accuracy of the system.

I've been to talks by Dr. Ashby and read the Ashby reference, as well as many others. Perhaps the commenter should read the reference as well:

"There is an interesting story about this frequency offset. At the time of launch of the NTS-2 satellite (23 June 1977), which contained the first Cesium atomic clock to be placed in orbit, it was recognized that orbiting clocks would require a relativistic correction, but there was uncertainty as to its magnitude as well as its sign. Indeed, there were some who doubted that relativistic effects were truths that would need to be incorporated [5]! A frequency synthesizer was built into the satellite clock system so that after launch, if in fact the rate of the clock in its final orbit was that predicted by general relativity, then the synthesizer could be turned on, bringing the clock to the coordinate rate necessary for operation. After the Cesium clock was turned on in NTS-2, it was operated for about 20 days to measure its clock rate before turning on the synthesizer [11]. The frequency measured during that interval was +442.5 parts in 10^12 compared to clocks on the ground, while general relativity predicted +446.5 parts in 10^12. The difference was well within the accuracy capabilities of the orbiting clock. This then gave about a 1% verification of the combined second-order Doppler and gravitational frequency shift effects for a clock at 4.2 earth radii."

Therefore, the relativistic correction IS necessary. Two different frequency synthesizers were installed in the first satellites because of the doubters. Considering the commenter included the author and the article title of a publicly available resource so easy to check, the question arises as to whether the commenter is just blindly repeating something from another (false) source, or intentionally lying. Funny that they accuse me of laziness when they apparently never read the article.

Without the relativistic correction in clock rate, the satellite time will drift out-of-sync with ground clocks by 38 microseconds per day in the timestamp transmitted by the satellite clock. In the GPS pseudo-range equation (Wikipedia: pseudorange), a daily accumulating offset of 38 microseconds between a satellite clock and receiver clock increases the value of the pseudo-range between the satellite & receiver by 38e-6 s * 3e5 km/s = 11.4 km, accumulated each day. This means the three or more ranges used to triangulate (Wikipedia: Trilateration) an Earth position from the orbiting satellites will deviate from an optimal solution. This is just the difference in the satellite clock relative to the ground clock, needed to define a consistent time in the geocentric inertial reference frame. There is an additional correction that must be done at the receiver to account for, among other things, the fact that the receiver is in a rotating frame on the surface of the Earth, that the GPS satellite orbit is not perfectly circular, etc. The requirement to include relativistic effects holds true even if you use the 4 (or more)-satellite solution that doesn't explicitly require the receiver time. It doesn't matter that the satellites are all in the same frame; the correction comes about because the satellites and the receivers are in different reference frames (for more details, see Relativity Denial: The GPS 4-Satellite Solution).
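The arithmetic above can be checked in a few lines. This is a back-of-the-envelope sketch using the rounded numbers from the text, not anything resembling the actual GPS software:

```python
# Back-of-the-envelope estimate of the pseudo-range error that would
# accumulate if the 38 microsecond/day relativistic clock drift were
# never corrected. Rounded constants, as in the text above.

C_KM_PER_S = 3.0e5          # speed of light, km/s (rounded)
DRIFT_S_PER_DAY = 38e-6     # net relativistic clock drift, seconds/day

def accumulated_range_error_km(days):
    """Pseudo-range error (km) after `days` of uncorrected clock drift."""
    return DRIFT_S_PER_DAY * days * C_KM_PER_S

one_day = accumulated_range_error_km(1.0)    # ~11.4 km after one day
one_week = accumulated_range_error_km(7.0)   # ~80 km after one week
```

After a single day the ranges are off by roughly the 11.4 km quoted above, and the error just keeps growing.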

What pisses me off is the propagandization of science in misleading and inaccurate ways. What you should say is that atomic clocks orbiting the earth suffer a 38 microsecond delay per day, and this is consistent (within experimental accuracy) with special relativity and general relativity. NOT that the practical accuracy of the GPS or GALILEO or GLONASS systems are dependent on relativity, because they aren't!!!

What I find funny is this commenter goes from claiming the relativistic effects are ignorable in the first two paragraphs, to now claiming that the match between the (NOT ignorable) 38 microsecond/day relativity prediction and the actual GPS offset is NOT support for relativity!

The fact that the numbers agree is evidence for the theory and therefore supports the theory. Those who use this complaint tactic usually have no theory that can actually produce the numbers. When pressed, they might try an ad hoc solution, such as relabeling the computation as a 'correction' or 'fudge' factor of unknown origin. This can work only so long as you don't need to know what is happening in some different case such as the satellites being in an elliptical rather than circular orbit. How do you calculate it then?

The other problem with the argument is that you could make a similar argument against Maxwell's equations or quantum theory. The fact that the theory gives numbers for configurations we can actually measure could be just a coincidence. The fact that these mathematical tools work isn't proof that the behavior of particles and fields isn't being controlled by magical pixies who just happen to let it work that way and may change its operation at any time. This ploy is a common excuse, especially when opponents of the science under discussion can provide nothing testable which can explain the effects. Of course, I've yet to see one of these individuals actually do real work with these types of claims. They can only manufacture an explanation after someone else has gone through the effort of making it work!

The beauty of relativity is that it defines things at the very fundamental measurements of position and time in a way that we can use to derive the corrections for a variety of experimental and engineering configurations. As represented above, many who want to deny relativity construct ad hoc 'corrections' and 'adjustments' which are needed to make the technology work. Such a solution might work for copying a technology, but it is totally unclear how you would apply it to a new technology, in realms where it has not been applied before, without building the technology first and then running it for a time to determine the 'adjustment' needed.

I've read a report, which I have yet to get confirmed, that the European Union's GALILEO system will not be adjusting their satellite clocks to account for the relativistic corrections. Instead they will require that all the corrections be performed in the receiver (the GPS signal can actually transmit the satellite orbit ephemeris so the correction can be computed). One advantage of this is that you can decide to place your satellites in different orbits at launch time, or perhaps even change orbits after launch, rather than building the satellites specifically for a given orbit. But could this also be a clever way to lock the relativity-denying cranks out of the business of building GPS receivers?

So to return to the commenter's opening paragraph, where do I get my information? I've answered that. I get my information from the books and references written by those who designed, built, and maintain the system, and used by others who need to access and interpret the raw GPS signals, such as those listed below.

Now it is certainly fair to ask: what are the commenter's credentials? The commenter seems to rely on random pages found online.

15 comments:

Anonymous said...
There is some truth in the claims of the deniers: the relativistic position correction is small compared with the claimed accuracy of GPS. But a crucial part is always missing from that claim. It is true only if you start with synchronized clocks and calculate the correction for a single measurement. They tend to forget that the offsets grow because the relativistic error accumulates with time; that's why drift values are given per day. An error that is negligible for a single measurement, if left to accumulate or pile up with no correction, will at some point grow from negligible to dramatic, and for GPS that time is short enough to require the correction. In essence, quoting the error for a single measurement can lead to wrong conclusions...

There are also periodic relativistic corrections, due to the changing relative motions of the satellites and receiver, which act separately from the accumulating clock difference. The 38 microsecond/day correction defines a standard reference frame, but deviations from this frame in altitude and speed can require a number of additional corrections in the nanosecond range.

While relativistic corrections are on the order of nanoseconds compared to the total signal travel time from the satellites (about 80 milliseconds), those nanoseconds are vital to the precision of the system. Each nanosecond of error in the range computation corresponds to a distance of about one foot. As I understand it, the algorithms that convert the satellite-receiver distances into an actual position fold any mismatch into the uncertainty of the receiver position.
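The nanosecond-per-foot figure is just light travel time. A quick sanity check (rounded constants, my own throwaway snippet, not anything from a receiver):

```python
# Convert a signal-timing error into a range error. One nanosecond of
# timing error corresponds to roughly one foot of distance.

C_M_PER_S = 2.99792458e8   # speed of light, m/s
FT_PER_M = 3.28084         # feet per metre

def timing_error_to_distance_ft(err_seconds):
    """Range error (feet) corresponding to a signal-timing error."""
    return err_seconds * C_M_PER_S * FT_PER_M

one_ns_ft = timing_error_to_distance_ft(1e-9)   # ~0.98 ft, "about one foot"
```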

Would you want a system with position errors in tens of feet guiding your car? That's more than enough to put you into an oncoming lane or off the edge of a mountain road.

What about for navigating an airliner during landing or a cruise missile heading towards a target?

How would you feel about a 46 foot difference in selecting the property line with your neighbor?

And that's just the near-Earth applications. Relativistic corrections become even more important in applications where the GPS constellation is used to determine accurate positions of orbiting satellites, whose relative motions are much faster.

I don't disagree. When I said "the claimed accuracy of GPS", I had in mind the accuracy most of us make use of with our car devices.

I know there are other applications requiring supreme accuracy, where centimeters count - I don't take seriously the claims of the deniers. What I wrote was one example of how statements that may be true to some extent can mislead when put in the wrong context.

"Would you want a system with position errors in tens of feet guiding your car?"

Of course not, which is exactly why none of the off-the-shelf GPS receivers available are used for controlling a car.

Some receivers will assume you are on a road and will simply continue to show you travelling on a road in case the position shifts a bit to the side.

This becomes very evident when driving on uncharted new roads. My satnav will happily show me travelling on an intersecting road (that was there before they built the new road) until it gives up and places me back in the middle of nowhere. (thus >50m inaccuracy at some point -- this is despite the fact that my car's satnav system gets speed information via the CAN-bus)

So no, I don't want a positioning system that guides my car. I want a system that provides me with a hint about where I am and where I should turn next. I drive the car, not the GPS.

Good luck finding a civilian GPS unit with the accuracy you mention.

(FWIW: I don't know what "electrical universe" entails, but I find it difficult to find a need for the theory of relativity with regards to GPS -- It looks to me that you are barking up the wrong tree here)

While GPS is not used for controlling automobiles TODAY, that may change before too long.

Civilians are not the only GPS users. It is used for determining positions of aircraft and other moving systems, including orbiting satellites. It is also used for surveying, where centimeter precision becomes important. Another high-precision application is the determination of ground motions (used for earthquake monitoring and tracking the flow of ice sheets). See the GRACE mission.

There is a whole class of current GPS applications where this higher precision is important, and where relativistic corrections of the signals become even more important. Each nanosecond of error in signal propagation time translates to about one foot of distance precision.

While relativity is certainly correct, I am not sure that failure to incorporate it into GPS would result in large accumulated errors.

The fundamental reason is that GPS is a closed-loop system. The satellites are continually monitored by a set of well-surveyed ground reference stations, so the positions you get from a GPS are actually with respect to those stations. They monitor the GPS orbits and clocks, and their observations are used to generate the navigation message each satellite sends at 50 bits per second and updated every 2 hours. Any errors in the system (including failure to correct for relativity) would therefore be detected by the ground stations and the navigation message automatically updated to correct for them.

IS-GPS-200H Section 20.3.3.3.3.1 specifies a quadratic polynomial used to correct the spacecraft clock from the parameters in the navigation message. Although the polynomial specifically excludes the (special?) relativistic correction and requires the user to compute it, this is just a convention. In principle it *could* have been folded into the polynomial coefficients. It probably wasn't because the SR correction is not well modeled by a simple quadratic (Ek is time varying, so it varies sinusoidally) so more polynomial terms and/or more frequent ephemeris updates would be needed to maintain accuracy.
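The convention being described can be sketched as follows. The coefficient names follow the spec's notation, but the function itself is my own illustration, not receiver firmware:

```python
# Sketch of the SV clock correction polynomial from IS-GPS-200
# section 20.3.3.3.3.1. The coefficients a_f0, a_f1, a_f2 come from
# the broadcast navigation message; by convention the polynomial
# excludes the relativistic term, which the user computes separately.

def sv_clock_polynomial(t, t_oc, a_f0, a_f1, a_f2):
    """Clock offset (seconds) at time t, relative to reference epoch t_oc."""
    dt = t - t_oc
    return a_f0 + a_f1 * dt + a_f2 * dt ** 2
```

The receiver evaluates this polynomial and then adds the separately computed relativistic term to get the total satellite clock offset.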

It does appear -- please correct me if I'm wrong -- that the polynomial would automatically correct a failure to account for GR by biasing the clocks low before launch, though again this would require more bits in the correction polynomial (specifically the first-order term.)

Also, it is a general principle of communications engineering (my field) to not waste capacity on information that the receiver already has or can easily generate for itself, such as SR corrections from the orbital parameters elsewhere in the navigation message. A channel (especially one at only 50 bps) should be reserved for truly uncertain information, such as the random noise in a caesium clock.

While the polynomial you describe might work for GPS satellites in circular orbits (where the special and general relativistic corrections would be roughly constant), there would be problems for satellites in elliptical orbits, where these parameters would vary in a way that a simple polynomial cannot accommodate. Elliptical and other orbit changes might be used as a method of protecting the GPS system in times of conflict.

How do you know your reference stations are actually fixed? Earthquakes, continental drift, underground water depletion, etc. can alter the station positions. The GPS is used for detecting these types of ground changes.

Also, I've heard, but have not been able to confirm, that the ESA Galileo system will not include the relativistic correction in the on-board clock. Such a system would need the entire relativistic calculation to be performed in the receiver, using the information packet transmitted by the satellite.

Note that the relativity correction polynomial specified in 20.3.3.3.3.1 does include the eccentricity, e. In fact, the correction is zero for a circular orbit:

F e sqrt(A) sin(Ek)
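Plugging rough GPS orbit numbers into that expression shows its size. The eccentricity and semi-major axis below are my assumed ballpark values (e near the maximum the system allows, A of roughly 26,560 km), not figures from the comment:

```python
import math

# Magnitude of the periodic relativistic term delta_t_r = F*e*sqrt(A)*sin(Ek).
# F is the constant from IS-GPS-200 (= -2*sqrt(mu)/c^2); e and A below are
# assumed, representative GPS orbit values for illustration.

F = -4.442807633e-10          # s / sqrt(m)
e = 0.02                      # assumed eccentricity, near the allowed maximum
sqrt_a = math.sqrt(26560e3)   # sqrt of semi-major axis in metres, ~5154

# The peak occurs when sin(Ek) = +/- 1:
peak_seconds = abs(F) * e * sqrt_a          # ~4.6e-8 s, i.e. ~46 ns
peak_metres = peak_seconds * 2.99792458e8   # ~14 m of range error
```

So even this "small" periodic term swings the range by about 14 meters over an orbit for an eccentric satellite, and it vanishes only when e = 0.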

I don't see how the GPS satellites could be protected by orbit changes, since they have to transmit their own positions for the system to work. Such changes would also affect coverage that depends on having the satellites in distinct planes with the satellites evenly spaced out along each orbit.

The other factors you talk about are real, but significant only with special survey-quality equipment performing long-term observations at the centimeter level. They're insignificant for typical consumer or commercial grade GPS, where the major error is usually incompletely modeled ionospheric delay.

For users needing greater accuracy, differential GPS is available. One example is the FAA's Wide Area Augmentation System (WAAS). Fixed reference stations (distinct from those that are part of the GPS itself) monitor the satellites and broadcast corrections over radio links to mobile users, who then apply them to their own measurements. This removes systematic errors such as ionospheric dispersion and unmodeled relativistic effects, leaving only receiver noise.

In section 20.3.3.3.3.1, which you cite, you're proposing that you could absorb the eccentricity term, deltat_r, into the polynomial coefficient a_f0 and fit that, without explicitly including the relativity correction.

As I see it, the problem would be that deltat_r is periodic based on the orbital period since it is a function of the eccentric anomaly, E_k. E_k varies between 0 and 2pi over the course of the orbit.

Meanwhile, you're assuming the coefficients a_f0, a_f1, a_f2 depend on slowly varying clock parameters, with timescales much longer than the orbital period. Absorbing deltat_r would turn a_f0 into a periodic function instead of a constant, so your fitting method would have to be different.

Depending on the relative sizes of these parameters, that might work, but adding parameters to these types of fits often improves convergence at the expense of larger error bars.
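The mismatch is easy to demonstrate numerically. The sketch below (synthetic amplitude, my own plain-Python least-squares fit, nothing from the actual control-segment software) fits a quadratic to one full cycle of a sinusoid and checks the leftover error:

```python
import math

# Illustration of why a quadratic fits a sinusoid poorly over a full
# period: least-squares fit of c0 + c1*x + c2*x^2 to one cycle of
# AMP*sin(2*pi*x), with x the orbit phase in [0, 1]. The ~46 ns
# amplitude is an assumed size for the periodic relativistic term.

AMP = 46.0   # nanoseconds, assumed amplitude
N = 400
xs = [i / (N - 1) for i in range(N)]
ys = [AMP * math.sin(2 * math.pi * x) for x in xs]

# Build the 3x3 normal equations for the quadratic least-squares fit.
S = [sum(x ** k for x in xs) for k in range(5)]
A = [[S[i + j] for j in range(3)] for i in range(3)]
b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in (1, 2) if c > i)) / A[i][i]
    return x

c0, c1, c2 = solve3(A, b)
worst = max(abs(y - (c0 + c1 * x + c2 * x * x)) for x, y in zip(xs, ys))
# `worst` stays a large fraction of the 46 ns amplitude: the quadratic
# cannot absorb the periodic term without many more terms or much more
# frequent refits.
```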

Oh, and changing orbits probably wouldn't protect a satellite from a LASER or beam weapon which could quickly adjust for the position. However, it might provide some protection from other interceptor satellites which might require more time to move to an intercept trajectory.

Exactly -- a low-order polynomial is a poor fit to a sinusoid, but it's probably a good fit to the actual clock errors. F0 is the time offset and F1 is the frequency offset, and those two are probably enough to model most of the errors.

So I agree, they did it the right way. Because the correction is zero when the eccentricity is zero, this seems to be a second-order relativistic correction. I'm not sure, but I now think this is a *general* relativistic correction caused by an eccentric orbit moving the satellite up and down in the earth's gravity well.

I'm not sure what the SR corrections look like, or how they compare in magnitude to the GR corrections, but they would probably be observer dependent. It's my impression that GR effects are easier to detect with atomic clocks because you don't usually encounter the high velocities needed to get significant SR effects.

By the way, here's proof that an individual (suitably knowledgeable, motivated and equipped) can demonstrate GR gravitational time dilation. And during a family outing with the kids, no less. I saw Tom give a very entertaining talk on this a couple of years ago.

A number of errors are easy to make by treating GR and SR effects as separate corrections. Neil Ashby recommends using the metric, which keeps the effects together and avoids some of the double counting possible with separate terms.

Thanks for the leapsecond link. I've used that site occasionally but had never noticed that interesting piece, and I guess it's been nearly a decade since I would have seen it in Physics Today.


About Me

I obtained my doctorate in physics and astronomy in 1994. I currently work in scientific data visualization for the media and public outreach. For more information on how I became involved in the creationism issue, visit my main page