GPS – Measuring the Distance to a Satellite

GPS is conceptually easy to explain, but not all of its aspects are equally interesting. Unfortunately, many explanations focus on the trivial part, trilateration: how to get your position when you know the distances to three known points.

But (besides the relativistic effects experienced by the satellites) I think there is an interesting issue that deserves a short post: how to actually measure the distance to a satellite.

Timing a signal

In theory this is easy: the receiver calculates the difference between the time the signal was sent and the time it was received. But in practice the satellite and the receiver have two different clocks showing two different times. Why is that?

Time – like everything – can only be measured with a certain accuracy.[1] The GPS satellites have a few atomic clocks on board, each with a precision of around 1 part in 10¹⁴ – that’s an error of less than a millisecond in 1000 years, which is as accurate as it gets at the moment. But the receiver doesn’t have an atomic clock. At best it has a quartz clock, which has a precision of around 1 part in 10⁹ – that’s 30 milliseconds each year. In everyday life that’s enough, but when calculating a distance from the light’s travel time, an error of 30 ms becomes an error of 9000 km.
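As a sanity check of those numbers, here is a minimal Python sketch converting the clock error into a distance error (the 30 ms figure is the quartz drift from the paragraph above):

```python
# Convert a receiver clock error into a distance error at the
# speed of light. 30 ms is roughly one year of quartz drift.
C = 299_792_458          # speed of light in m/s
clock_error_s = 30e-3    # 30 ms
distance_error_km = clock_error_s * C / 1000
print(round(distance_error_km))  # roughly the 9000 km from the text
```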

The fourth satellite

So how does GPS do it? The absolute time measurement will be off, so GPS uses multiple satellites. The receiver may not be able to measure the exact time a single signal takes to reach it, but it can quite accurately measure the differences between the arrival times of the different satellites’ signals. That is, the receiver doesn’t know the distance to satellite A, but it does know that, e.g., the signal from satellite B arrived 16 ms after the signal from satellite A, which means satellite B must be about 5000 km farther away than satellite A. With time differences from four satellites it can construct a probability distribution for its position.[2]
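The same conversion turns that (hypothetical) 16 ms arrival-time gap into a distance difference:

```python
# The receiver can't time a single signal absolutely, but an
# arrival-time difference between two satellites translates
# directly into a difference in distance.
C = 299_792_458          # speed of light in m/s
delta_t_s = 16e-3        # sat B's signal arrives 16 ms after sat A's
delta_d_km = delta_t_s * C / 1000
print(round(delta_d_km))  # roughly the 5000 km from the text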

If the receiver had a clock synced to the satellites’ clocks, it could get a position from just three satellites. The trilateration would in fact return two points, one of which would have an improbable speed or altitude and could be discarded. But in practice the fourth satellite is needed to correct the clock error.
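A minimal numerical sketch of that idea, with made-up satellite positions and receiver state (real GPS also corrects for satellite clock and orbit errors, which this ignores): with four pseudoranges, Gauss-Newton least squares can solve for the three position coordinates and the receiver clock bias simultaneously.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

# Hypothetical satellite positions in metres, roughly at GPS orbit radius.
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])

true_pos = np.array([0.0, 0.0, 6_370e3])  # a point on Earth's surface
true_bias = 1e-4                          # receiver clock error: 0.1 ms

# Pseudoranges: geometric distance plus the clock-bias term c*b.
rho = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

# Gauss-Newton on the unknowns (x, y, z, b), starting at Earth's centre.
est = np.zeros(4)
for _ in range(15):
    d = np.linalg.norm(sats - est[:3], axis=1)
    residual = rho - (d + C * est[3])
    # Jacobian rows: derivative of each pseudorange w.r.t. (x, y, z, b).
    J = np.hstack([-(sats - est[:3]) / d[:, None], np.full((4, 1), C)])
    est += np.linalg.lstsq(J, residual, rcond=None)[0]

print(est[:3], est[3])  # recovered position (m) and clock bias (s)
```

Note that the clock bias falls out of the solve as a fourth unknown, which is exactly why the fourth satellite is needed.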

As a byproduct, this process gives the receiver the time as measured by an atomic clock in space, moving so fast and so far from Earth that it has to be corrected for relativistic effects (!).

[1] So every clock shouldn’t actually display one exact time, but rather a time interval containing the true time with a certain probability.