Invariance Under Monotonic Transformations

Upon investigation, we were stunned to find that the formula we use to calculate how many bars of signal strength to display is totally wrong. Our formula, in many instances, mistakenly displays 2 more bars than it should for a given signal strength. For example, we sometimes display 4 bars when we should be displaying as few as 2 bars. Users observing a drop of several bars when they grip their iPhone in a certain way are most likely in an area with very weak signal strength, but they don’t know it because we are erroneously displaying 4 or 5 bars. Their big drop in bars is because their high bars were never real in the first place.

Apple will soon be releasing a software update that will fix the problem by lowering the number of bars displayed on your phone. In related news, in response to my students’ grade groveling I have re-examined the midterm and noticed that everyone’s score was 5 points higher than it should have been. The curve has been re-calculated.

I don’t think that your point is quite right. Apple probably intended the bars to correspond to some measurement. The key point is that users are already accustomed to this correspondence and expect it, and it would take them time, some effort, and a bit of confusion to readjust their scale.

Assuming that the erroneously high bar counts were displayed randomly, invariance doesn’t hold at all. If I give you the correct signal 75% of the time but random noise the other 25% of the time, your behavior will differ from a situation where you have accurate readings 100% of the time.
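A minimal sketch of that 75/25 scenario (the percentages and the uniform-noise assumption are mine, just for illustration): when a quarter of the readings are noise, the displayed bars disagree with the true signal roughly 20% of the time, so the display is no longer any monotonic function of true strength and invariance fails.

```python
import random

random.seed(0)
N = 100_000
TRUTHFUL = 0.75  # assumed fraction of readings that reflect the true signal

mismatches = 0
for _ in range(N):
    true_bars = random.randint(1, 5)
    if random.random() < TRUTHFUL:
        shown = true_bars
    else:
        shown = random.randint(1, 5)  # random noise instead of the truth
    if shown != true_bars:
        mismatches += 1

rate = mismatches / N
# With 25% noise, about 0.25 * (4/5) = 20% of displays disagree with the truth.
print(f"mismatch rate: {rate:.3f}")
```

The point of the simulation is only that a noisy display and an accurate one produce observably different data, so a user acting on the bars would behave differently in the two worlds.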

That said, you’re right that calling the bar total “too high” (as opposed to simply inaccurate) is silly, unless there is some universally accepted definition of what a certain number of bars is supposed to mean.

Re: “unless there is some universally accepted definition of what a certain number of bars is supposed to mean”:

It’s not a precise definition, but I’d argue that most people do have a mental construct of what X bars means — something on the order of:
5 — Great reception (my 3G/4G should be blazing fast, and calls clear and non-dropped)
4 — Really good (still fast, still clear, definitely non-dropped)
3 — Good (3G/4G good, calls clear, still non-dropped)
2 — Fair, don’t move around much (3G/4G noticeably slower, calls might have a bit of static, nothing dropped – but, again, don’t move around much, ’cause you’re on the edge)
1 — Poor to middling (3G/4G may be unavailable, calls might cut in/out, some calls dropped, moving around might be *good* to find an extra bar)

The problem was that, intermittently, the phone was showing “Really Good” or “Good”, with no calls dropped, when it was, in fact, well into the “your call may die” zone. You don’t have to have a precise definition, or even what would normally be considered a universally accepted definition, to know that there’s quite the qualitative difference between “calls never dropped” and “calls often dropped.”
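This mental scale is exactly the kind of monotonic transformation the title refers to: any step function from measured strength (say, dBm) up to 1–5 bars preserves ordering, which is why users can learn the correspondence at all. A sketch with hypothetical thresholds (these numbers are invented for illustration, not Apple’s actual formula):

```python
# Hypothetical RSSI thresholds in dBm, weakest to strongest -- illustrative only.
THRESHOLDS = [-107, -103, -98, -91]

def bars(dbm: float) -> int:
    """Map signal strength in dBm to a 1..5 bar count (a monotonic step function)."""
    return 1 + sum(dbm > t for t in THRESHOLDS)

# Monotonic: a stronger signal never shows fewer bars.
readings = [-115, -105, -100, -95, -85]
assert all(bars(a) <= bars(b) for a, b in zip(readings, readings[1:]))
print([bars(r) for r in readings])  # → [1, 2, 3, 4, 5]
```

Changing the thresholds changes which dBm values map to which bar, but as long as the mapping stays monotonic, the ordering users rely on survives; the complaint in the thread is about recalibrating the thresholds after people have already learned one set.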

But the number of bars is not a continuous variable, so you lose information when you map a wider scale onto a narrower one. Compared with a three-bar scale, a five-bar scale makes more distinctions and thus conveys more information about signal strength. If they deterministically add bars so that the lowest two bar levels are never used, then they’re essentially shrinking their five-bar scale into a three-bar scale. And they aren’t doing so efficiently – it sounds like they’re squeezing most of the scale into the top two levels – which would make it even less informative than a scale that was designed to have only three levels.
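To make the information-loss point concrete, here is a sketch using Shannon entropy. The distributions are made up: a five-bar scale used evenly carries log2(5) ≈ 2.32 bits per reading, an honest three-bar scale carries log2(3) ≈ 1.58 bits, and a five-bar scale whose mass is squeezed into the top levels carries even less.

```python
from math import log2

def entropy(ps):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in ps if p > 0)

uniform5 = [1/5] * 5                     # five-bar scale, all levels used evenly
uniform3 = [1/3] * 3                     # a well-designed three-bar scale
squeezed = [0.0, 0.0, 0.10, 0.45, 0.45]  # assumed: most mass in the top two bars

print(entropy(uniform5))  # log2(5) ≈ 2.32 bits
print(entropy(uniform3))  # log2(3) ≈ 1.58 bits
print(entropy(squeezed))  # ≈ 1.37 bits: worse than an honest three-bar scale
```

The exact numbers depend on the assumed distribution, but the ordering is the argument above in miniature: collapsing levels loses information, and collapsing them unevenly loses more.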