As you know from my other posts, my digital scale has considerable variability, and I’m trying to figure out how to deal with it.

Need some help here, Nate

(you seem to be ignoring my previous pleas for help, but maybe you’ll respond to this one)

Early in my weight recording I merely took a single reading every Sunday at approximately the same time, and for a few weeks those data looked nice and uniform. Then I got an anomalous point and realized the scale has some (unknown) variability, so I began to take multiple weights and use my own formula to get a “consensus” value, which I now plot. Then I noticed another source of volatility: my weight actually varies daily, and by a lot — in fact, by more than the sustained weekly trendline.
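(My actual “consensus” formula isn’t shown in this post, so here is just an illustrative stand-in, with made-up numbers: a robust summary like the median damps a single anomalous reading far better than the plain mean does.)

```python
import statistics

def consensus(readings):
    """A robust 'consensus' weight from repeated scale readings.

    The median shrugs off one wild value; the mean gets dragged by it.
    (This is an illustrative stand-in, not the author's actual formula.)
    """
    return statistics.median(readings)

readings = [182.4, 182.6, 182.5, 185.1, 182.5, 182.7]  # one anomalous value
print(consensus(readings))         # median stays near 182.5
print(statistics.mean(readings))   # mean is pulled upward by 185.1
```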

In short, the signal is really buried in the noise, and since you’ve written about exactly this, I expect a little consulting on what to do here. In this post I’ll deal only with the scale variability.

Now here are the sources of variability I can identify:

1. the actual accuracy/variability of the scale itself

2. the orientation of the scale on the floor (there is just enough unevenness and “sponginess” to cause errors)

3. the placement of my feet and the distribution of my body mass when I climb on the scale

Once I tried to isolate source #2, but I found that the scale has “memory” and simply repeats its last reading, so you either have to move the scale (undoing the isolation of #2) or wait a while. Waiting is a bother, and meanwhile my body keeps changing (drinking coffee, going to the bathroom, getting older, etc.), so getting many weights is time-consuming and boring. Here’s the best I’ve done on that.

This data includes all three sources of variability, since I moved the scale for each reading.

Now I realize that in a controlled experiment I could probably isolate #2 and #3 and get the variability of the scale itself. I could suspend a 200 lb deadweight with a base about the same area as my feet and lower it onto the scale (I’d need pulleys) without ever moving the scale, and collect plenty of data — the deadweight, unlike me, won’t be changing its true weight. Then I could just do the conventional statistical analysis (I’m assuming the scale’s error is roughly Gaussian).
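Though I can’t actually run that experiment, it’s easy to simulate, which at least shows what the conventional analysis would recover. Everything here is assumed for illustration — the true weight, the Gaussian error model, and its spread:

```python
import random
import statistics

random.seed(1)

TRUE_WEIGHT = 200.0   # the deadweight never changes its true weight
SCALE_SD = 0.5        # assumed (unknown in reality) Gaussian scale error

# 500 simulated readings of the same deadweight, rounded to one
# decimal place the way a digital scale displays them.
readings = [round(random.gauss(TRUE_WEIGHT, SCALE_SD), 1) for _ in range(500)]

# Conventional analysis: the sample mean and sample SD recover
# the true weight and the scale's error, respectively.
print(statistics.mean(readings), statistics.stdev(readings))
```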

But I can’t do that experiment. And even if I did, factors #2 and #3 are part of my “real world” issue and need to be characterized as well.

Nor do I have the patience to collect enough data with myself as the object being weighed. BUT, I do get a lot of daily samples now — usually six values per day, 272 samples so far — probably enough for a respectable analysis. I can’t just take all 272 values, generate a distribution, and extract statistics, though, since my “true weight” is also changing across all those measurements.

So how do I use this data to try to get a good value for the weigh-in variability?

My first approach was to calculate the standard deviation within each set of weights (taken under relatively constant conditions). But those standard deviations themselves show a great deal of variability, as you can see below:
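In code terms, the per-set calculation looks like this (my real data lives in Excel; the sets below are made-up stand-ins):

```python
import statistics

# Hypothetical daily sets of ~6 readings each (illustrative numbers only).
daily_sets = [
    [182.4, 182.6, 181.9, 182.8, 182.3, 183.0],
    [181.8, 182.9, 182.2, 182.5, 181.6, 182.4],
    [183.1, 182.0, 182.6, 182.9, 182.2, 182.7],
]

# Sample standard deviation (n-1 denominator) within each set.
# Each set's own mean absorbs that day's "true weight", so what's
# left is the weigh-in scatter.
set_sds = [statistics.stdev(s) for s in daily_sets]
print(set_sds)
```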

But just eyeballing this data, it certainly seems to follow a Gaussian, so I did a histogram and got this result:

Not quite enough data to smooth out, and possibly a little skewed to the right (toward higher values), but close enough that I’ll call it Gaussian. So then I get the following statistics for my 43 standard deviations:

mean=0.60690

median=0.59281

stdev=0.22140

skew=0.37243

So, Nate, does it make any sense to use either the mean or median to say what the weigh-in variability is?
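As an aside (my addition here, not part of the original analysis): the conventional statistical answer is neither the mean nor the median of the SDs but the *pooled* SD — average the variances, then take the square root (for equal set sizes this is the root-mean-square of the set SDs). It always comes out at least as large as the plain mean of the SDs. A sketch with made-up numbers:

```python
import math
import statistics

# Hypothetical per-set standard deviations (stand-ins for the 43 real ones).
set_sds = [0.45, 0.61, 0.72, 0.38, 0.66, 0.55]

# Plain average of the SDs (slightly biased low as an estimate of sigma).
mean_of_sds = statistics.mean(set_sds)

# Pooled SD for equal set sizes: average the variances, then square-root.
pooled_sd = math.sqrt(statistics.mean([sd ** 2 for sd in set_sds]))

print(mean_of_sds, pooled_sd)
```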

Now let’s look at another approach. For each set of data I could “normalize” the raw values around either the set’s mean or median and then use all 272 points for a single distribution. So let’s take a shot at that using the mean (the median seems iffy with only six points, usually, per set).

Here’s the first look at every raw measurement normalized around the mean of the set of measurements on the same day:

Now notice the data to the left and to the right of the gap (around horizontal-axis value 10). To the left is the once-a-week set, usually taken over several hours; to the right are the daily sets, usually taken within a few minutes. To my eyeball the daily sets appear to have more volatility, and since the two groups were measured differently, I think the weekly points should be excluded, leaving just the more numerous daily data. So now we’re ready for the histogram. In Excel, generating a histogram involves some “fiddling,” since the choice of bins is arbitrary, but I think my choices are reasonable.

The central (x = 0) value is surprisingly low, but I think this is an artifact of the actual numbers: with the average taken over a small set (usually 6) and weights recorded to a single decimal place, the smallest deviations from the average work out to about 0.17, and thus tend to fall just outside that central bin. BUT, the key point is that it looks like a normal distribution to me. So I’m declaring it to be one, and thus the standard deviation of all the pooled data may represent the standard deviation of the way I get measurements.
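The pooling just described can be sketched as follows. One caveat (my addition, not part of the analysis above): centering each set of n points on its own mean removes one degree of freedom per set, so the naive SD of the pooled residuals runs low by roughly a factor of sqrt((n−1)/n); dividing the sum of squares by N − k (total points minus number of sets) corrects this. Numbers are made up:

```python
import math
import statistics

# Hypothetical daily sets (stand-ins for the real 272 measurements).
daily_sets = [
    [182.4, 182.6, 181.9, 182.8, 182.3, 183.0],
    [181.8, 182.9, 182.2, 182.5, 181.6, 182.4],
    [183.1, 182.0, 182.6, 182.9, 182.2, 182.7],
]

# Center each set on its own mean, then pool the residuals.
residuals = []
for s in daily_sets:
    m = statistics.mean(s)
    residuals.extend(x - m for x in s)

n_total = len(residuals)
n_sets = len(daily_sets)

# Naive SD treats the residuals as raw data and runs slightly low,
# because each set's mean was estimated from the data itself.
naive_sd = math.sqrt(sum(r ** 2 for r in residuals) / (n_total - 1))

# Degrees-of-freedom-corrected pooled SD: divide by N - k instead.
pooled_sd = math.sqrt(sum(r ** 2 for r in residuals) / (n_total - n_sets))

print(naive_sd, pooled_sd)
```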

So I’m declaring the standard deviation is

0.607264

Now, Nate, are either one of these approaches valid?



One Response to Applied Nate Silver – another statistics issue

I’m starting over with a new set of data. In this case I am dividing the scale samples by their average, thus generating a distribution centered on 1.0. I don’t have enough data yet for a conclusion, but I think this approach might get one. The histogram tool in Excel is hard to use to get what I really want graphically.
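(My sketch of the commenter’s idea, not their actual code, with made-up readings: dividing each reading by its set’s average yields dimensionless ratios that are centered on 1.0 by construction, so different days can be pooled directly.)

```python
import statistics

# One hypothetical day's readings (illustrative numbers only).
readings = [182.4, 182.6, 181.9, 182.8, 182.3, 183.0]

avg = statistics.mean(readings)
ratios = [x / avg for x in readings]  # distribution centered on 1.0

# The mean of the ratios is exactly 1.0 by construction;
# their spread is the relative weigh-in variability.
print(statistics.mean(ratios), statistics.stdev(ratios))
```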