
Homoscedasticity and heteroscedasticity - two of the scariest-sounding terms in all of statistics! So what do they mean?

When one calculates the variance or standard deviation of a dataset and reports it as a single number, one implicitly assumes that the variance is constant across the entire population. This assumption is homoscedasticity. The opposite of this assumption is heteroscedasticity.

In other words, a collection of random variables is heteroscedastic if there are sub-populations within the dataset whose variances differ from one another (source: https://en.wikipedia.org/wiki/Heteroscedasticity). Put more simply, homoscedasticity means constant variance, while heteroscedasticity means variable variance.
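To see the distinction in action, here is a minimal sketch (using NumPy, with invented parameters) that draws three sub-populations under each regime and compares their variances:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Homoscedastic: every sub-population shares the same spread (scale=1.0).
homo = [rng.normal(loc=0.0, scale=1.0, size=1_000) for _ in range(3)]

# Heteroscedastic: sub-populations with clearly different spreads.
hetero = [rng.normal(loc=0.0, scale=s, size=1_000) for s in (0.5, 1.0, 3.0)]

print("homoscedastic variances: ", [round(g.var(), 2) for g in homo])
print("heteroscedastic variances:", [round(g.var(), 2) for g in hetero])
```

The homoscedastic group variances all land near 1, while the heteroscedastic ones land near 0.25, 1, and 9.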

Jeremy J Taylor in his blog provides a great example of a distribution that is heteroscedastic. In his example, the predictor (independent) variable is "age" and the response (dependent) variable is "income". The example discusses how incomes start to vary more as age increases.
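The numbers below are invented for illustration (this is not Taylor's data), but a small simulation of that fan-shaped pattern might look like this: income noise whose scale grows with age, so the variance within older age bands keeps increasing.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical data: 5,000 people aged 20-64.
ages = rng.integers(20, 65, size=5_000)

# Mean income rises with age, and so does the scale of the noise;
# that growing noise scale is what makes the data heteroscedastic.
incomes = 20_000 + 1_000 * ages + rng.normal(0.0, 500.0 * ages)

# The spread of income within each age band grows steadily.
for lo, hi in [(20, 35), (35, 50), (50, 65)]:
    band = incomes[(ages >= lo) & (ages < hi)]
    print(f"ages {lo}-{hi}: income std dev ~ {band.std():,.0f}")
```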

Among the first things that any student of statistics learns are two popular descriptive measures: the mean and the standard deviation.

Has the way the standard deviation is calculated ever made you wonder why we square the distances from the mean to remove negatives, instead of simply averaging the absolute values? Well, you are certainly not alone.

As it turns out, squaring the distances from the mean, averaging them, and then taking the square root to arrive at the standard deviation of a distribution is more a result of convention than anything else. In fact, there is a measure called the Mean Absolute Deviation that does not square the distances from the mean to eliminate negative values. Instead, it takes the absolute values of the differences from the mean and averages them to measure deviation from the mean.
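A quick sketch (using NumPy and a toy dataset) makes the contrast concrete: the standard deviation is the square root of the averaged squared deviations, while the mean absolute deviation simply averages the absolute deviations.

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mean = data.mean()

# Standard deviation: square the deviations, average, then square-root.
std_dev = np.sqrt(np.mean((data - mean) ** 2))

# Mean absolute deviation: just average the absolute deviations.
mean_abs_dev = np.mean(np.abs(data - mean))

print(f"mean = {mean}, std dev = {std_dev}, mean abs dev = {mean_abs_dev}")
# mean = 5.0, std dev = 2.0, mean abs dev = 1.5
```

Notice that the two measures disagree (2.0 vs 1.5) even on the same data: squaring gives extra weight to the points farthest from the mean.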