Several GPS applications, like this one or this one, acquire multiple (lat, lon) samples of a given location, assuming that the GPS unit is not moving, and then average the samples in order to compute a "more precise" 2D location.

(We do not care about the elevation/altitude position here!)

The second app (GPS Averaging) uses the accuracy value associated with each sample as a weight and computes a weighted average of the fixes accordingly. It also provides an estimate of the accuracy of the averaged location.
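
For concreteness, such an accuracy-weighted average might look like the minimal Python sketch below (NumPy assumed). The inverse-variance weighting (1/accuracy^2) is an assumption, since the app does not document its exact formula:

    import numpy as np

    def weighted_average_fix(lats, lons, accuracies):
        """Accuracy-weighted average of GPS fixes.

        lats, lons : samples in decimal degrees
        accuracies : reported accuracy of each fix, in meters

        Weighting by 1/accuracy**2 (inverse variance) is an assumption;
        the GPS Averaging app does not document its exact formula.
        """
        lats = np.asarray(lats, dtype=float)
        lons = np.asarray(lons, dtype=float)
        acc = np.asarray(accuracies, dtype=float)

        w = 1.0 / acc**2                    # inverse-variance weights
        lat = np.average(lats, weights=w)
        lon = np.average(lons, weights=w)

        # Naive accuracy of the combined fix, valid only if the samples
        # were independent (they are not, because of drift):
        combined_acc = 1.0 / np.sqrt(w.sum())
        return lat, lon, combined_acc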

Questions:

1) While common sense suggests that averaging should lead to increased accuracy, how much sense does it make for handheld units like phones (i.e. simple devices that do not use differential GPS)?

2) Would you recommend a method other than GPS Averaging's to compute the average location?

3) How can one compute an estimate of the accuracy of the averaged location?

4) Is there a way other than averaging to get better 2D positioning by acquiring multiple (lat, lon) samples of a given location?

UPDATE 1: my preliminary study with 2 handheld GPS units (Sony phone models ST15i and ST17i), acquiring 3 m accuracy fixes at the same position over 4.5 hours, led to the following observations:

=> It is quite interesting to note that even though the supposed accuracy of the fixes was 3 meters, the ST17i model had many points farther than 3 meters from the median/average.

=> Also remarkable is the monotonic drift of the longitude on the ST15i model.

(Note that the ST15i seems to have a more sensitive antenna than the ST17i: from what I could analyse, it used on average 3 more satellites for its fixes than the ST17i!)

UPDATE 2: some further statistics and numbers, still from the same datasets:

=> The data is definitely not normally distributed

=> I also computed the distance between the median location of the ST15i and the median location of the ST17i: it is 3 meters, as if the study were toying with us, since all the fixes used had a reported accuracy of 3 meters or better. This definitely validates the suggestion below of using a known reference point in order to draw meaningful conclusions about the accuracy of each GPS unit!
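
For reference, meter-scale distances between two (lat, lon) points, such as the 3 meters quoted above, can be computed with the haversine formula. A minimal Python sketch, assuming a spherical Earth of radius 6371 km (the names in the usage comment are hypothetical):

    import math

    def haversine_m(lat1, lon1, lat2, lon2, r=6_371_000.0):
        """Great-circle distance in meters between two (lat, lon) points,
        assuming a spherical Earth of radius r (fine at meter scales)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = p2 - p1
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # e.g. the distance between the two per-unit median locations:
    # d = haversine_m(lat_med_st15i, lon_med_st15i, lat_med_st17i, lon_med_st17i)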

Would you happen to be near a CORS or some other location with known accurate coordinates that you can use for calibration? Without a calibration location, I guess you can only get better precision, but not better accuracy. I think your charts are great! If you have more results, I think just adding them here would be fine.
– Kirk Kuykendall, May 26 '12 at 16:12

The updates are interesting and valuable. Note, though, that of course the distance from the median will not be normally distributed! Distances can't even be negative. If the drift is bivariate normal, then theory shows the distance (to the mean location) will have a scaled chi distribution. Over short times (during which patterns like those shown here are apparent) you will see artifacts of the high positive temporal correlation. Thus, the histograms and probability plots aren't telling us anything new.
– whuber♦, May 29 '12 at 20:09
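
whuber's point is easy to check by simulation: if the 2D error is bivariate normal with independent, equal-spread components, the distance to the mean location follows a Rayleigh distribution, i.e. a scaled chi distribution with 2 degrees of freedom. A minimal NumPy sketch:

    import numpy as np

    rng = np.random.default_rng(42)
    sigma = 3.0                                     # per-axis error, meters
    xy = rng.normal(0.0, sigma, size=(100_000, 2))  # bivariate normal errors
    dist = np.hypot(xy[:, 0], xy[:, 1])             # distance to the mean

    # A Rayleigh(sigma) distribution has mean sigma * sqrt(pi / 2) (~3.76 m
    # here) and is strictly non-negative, so it cannot look normal:
    print(dist.mean(), sigma * np.sqrt(np.pi / 2))  # both ~3.76
    print(dist.min() >= 0.0)                        # True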

All in all, I am starting to understand the intricacies of GPS location accuracy: it is far more complex than I first thought. It makes me wonder about the following: setting the true position aside, and using a reference point to which we could return regularly during a terrain survey, would it make sense to correct, i.e. increase the accuracy of, the surveyed locations and/or path (through linear approximation?) according to the drift of the reference point's location? I should maybe open a new question for that one, unless the answer is quick and easy and someone posts it here!
– John Doisneau, May 30 '12 at 21:02
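
One way John's idea could be implemented (a sketch of the general technique, not of any particular tool): measure the apparent offset at the reference point on each revisit, linearly interpolate that offset in time, and subtract it from the surveyed fixes. Assuming everything has already been converted to local east/north offsets in meters:

    import numpy as np

    def drift_correct(t_fix, e_fix, n_fix, t_ref, e_ref, n_ref, e_true, n_true):
        """Correct surveyed fixes using repeated visits to a reference point.

        t_ref, e_ref, n_ref : times and measured positions at the reference
        e_true, n_true      : the reference point's known (adopted) position
        The drift between revisits is linearly interpolated in time, i.e.
        the 'linear approximation' suggested above.
        """
        de = np.interp(t_fix, t_ref, np.asarray(e_ref) - e_true)  # east drift
        dn = np.interp(t_fix, t_ref, np.asarray(n_ref) - n_true)  # north drift
        return np.asarray(e_fix) - de, np.asarray(n_fix) - dn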

(2) Due to the strong temporal correlation I would expect non-normality over relatively short periods, John, but over long periods the histograms should become symmetric and probably fairly close to normal (with the usual attendant outliers, no doubt). Difficult locations for receiving the signals might present exceptions to this general rule, depending on how the signals are compromised. (1) (Re an earlier comment) It sounds like you have re-invented differential correction :-).
– whuber♦, Jun 1 '12 at 19:11

2 Answers

Averaging only makes sense if you assume that the "noise" in your location measurements is roughly symmetrical, i.e. evenly distributed in every direction. That is, any one measurement is equally likely to be wrong in any particular direction.

It is possible that you could get a noise distribution that isn't symmetrical. For example, if your GPS device systematically underestimates the distance to all satellites, and is using more satellites from a given direction (perhaps you're standing at the bottom of a cliff), then all measurements are more likely to be biased in that direction. In this instance, averaging will improve precision, but it won't fix your bias problem.

I don't know whether such over/underestimation is common, but I doubt it would be significant enough in most devices to reduce the utility of averaging. Perhaps it might introduce a little bias, but the increase in precision would still improve reliability (e.g. for geocaching).

Regarding your 4 questions:

1) It depends on how much you value reliability versus time spent standing in one spot waiting for extra measurements.

2) That app doesn't mention its method, but it probably uses plain averaging. Taking the median may be more reliable, but without knowing the noise distribution it's hard to say. I would assume Gaussian noise, in which case, given enough measurements, the two will be about the same (see the sketch after point 3). A better method might be to use multiple devices, take many measurements with each device, and then average the entire set. This would remove device-specific biases, but it would obviously not be quick or easy to do (if your devices do averaging themselves, you could just average the averages; the result is the same).

3) You can only estimate the precision, not the bias. If you assume Gaussian noise, you can calculate a confidence interval around your estimate (the average) based on your standard error, as in the sketch below. Some units do this live (based on the number of satellites) and represent the confidence interval by a circle around your position.
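
A minimal Python sketch of points 2) and 3), assuming the fixes have already been converted to local east/north offsets in meters and assuming independent Gaussian noise (both are assumptions; the comments below explain why independence is optimistic):

    import numpy as np

    def summarize_fixes(east, north, confidence=0.95):
        """Mean, median, and a confidence radius for a set of 2D fixes.

        east, north : fix coordinates in meters, in a local tangent plane.
        Assumes independent, roughly Gaussian noise; optimistic if the
        fixes are temporally correlated (see whuber's comments).
        """
        east, north = np.asarray(east, float), np.asarray(north, float)
        n = east.size

        mean_pos = east.mean(), north.mean()
        median_pos = np.median(east), np.median(north)  # robust alternative

        # Average per-axis standard error of the mean, then a circular
        # confidence radius from the Rayleigh quantile:
        se = np.hypot(east.std(ddof=1), north.std(ddof=1)) / np.sqrt(2 * n)
        radius = se * np.sqrt(-2.0 * np.log(1.0 - confidence))
        return mean_pos, median_pos, radius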

+1, good analysis and advice. But note that asymmetry of the noise and lack of bias are different things: the noise distribution can, in principle, be strongly asymmetrical and still be accurate. Concerning (4), there are more approaches available once one appreciates that the "noise" has a component that is positively correlated over time (a slowly moving "drift"). This implies that waiting longer between obtaining fixes may improve the accuracy of the averages. It also implies that standard errors estimated from a short series of fixes will usually optimistically overestimate the precision.
– whuber♦, May 21 '12 at 15:13
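
One standard way to quantify whuber's caution (an illustration, not something stated in the thread): if the error behaves like an AR(1) process with lag-one autocorrelation rho, then n fixes are worth only about n(1 - rho)/(1 + rho) independent ones, so a naive standard error is too optimistic by roughly a factor of sqrt((1 + rho)/(1 - rho)). A sketch:

    import numpy as np

    def effective_sample_size(x):
        """Approximate effective number of independent fixes in a series x,
        using the lag-1 autocorrelation and an AR(1) assumption."""
        x = np.asarray(x, float) - np.mean(x)
        rho = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation
        rho = min(max(rho, 0.0), 0.99)              # clamp for stability
        n = x.size
        return n * (1.0 - rho) / (1.0 + rho)

    # e.g. 1000 fixes with rho = 0.95 behave like ~26 independent ones,
    # so the naive standard error is about 6x too small.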

Thank you naught101, this was the sort of answer I was expecting, and it confirms my thinking, especially after having found and read some nice articles about GPS precision, available here. I understand that everything is, in fact, tied to the characteristics of my own GPS, and things can change with other GPS chips and manufacturers. I guess I'll try to gather a large dataset of fixes, if possible over several days, in order to confirm my assumptions.
– John Doisneau, May 21 '12 at 22:19

@whuber interesting point. I assume you're talking about GPS drift? If so, is that something that happens monotonically, or does it right itself somewhat when new satellites come into view? I mean, if it's monotonic, then the longer you stand in one place, the more your average will drift too. How do you account for that?
– naught101, May 22 '12 at 0:18

@JohnDoisneau: an experiment sounds like a great idea. My understanding is that because all the data points are drawn from the same distribution (if you account for whuber's point about drift), the uncertainty in the individual measurements will be similar to the uncertainty between measurements, so you can more or less ignore the confidence radius of each individual measurement and just calculate a new one for the whole data set.
– naught101, May 22 '12 at 0:27

@naught101, those are great questions in your latest comment. Briefly, we can view the error as a random process, but we don't have to assume it's continuous in time: it can have jumps, as you suggest. The GPS is designed so that over long periods of time, the error at an uncluttered location will average out to zero. (This is the rationale for taking long-term readings at fixed stations to measure the rate of continental drift.) The "drift" is a positively autocorrelated component of the error process. The autocorrelation means errors won't average out immediately, but they should eventually.
– whuber♦, May 22 '12 at 14:09
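
A quick simulation of such a positively autocorrelated error process (an AR(1) model, used purely as an illustration) shows both effects: the running average does converge to zero, but far more slowly than it would for independent errors:

    import numpy as np

    rng = np.random.default_rng(0)
    n, rho, sigma = 10_000, 0.95, 3.0   # fixes, autocorrelation, meters

    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)
    for i in range(1, n):               # AR(1) error: drifts, but mean zero
        e[i] = rho * e[i - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))

    running_mean = np.cumsum(e) / np.arange(1, n + 1)
    print(abs(running_mean[99]), abs(running_mean[-1]))  # slowly -> 0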

Is this a comment or is it a new question? If so, please consult our help center for guidance on creating posts here. If it's intended as an answer, would you mind augmenting it to provide a fuller explanation?
– whuber♦, Feb 3 '15 at 21:53