When you compute the average of the squared deviations from the sample mean (i.e. the sample variance), the mathematical expectation of the sample variance equals the theoretical variance only when you divide by n-1, not n.
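
To put that claim in symbols, writing $\bar{X}$ for the sample mean and $\sigma^2$ for the theoretical (population) variance:

$$\mathbb{E}\!\left[\frac{1}{n-1}\sum_{i=1}^{n}\bigl(X_i-\bar{X}\bigr)^2\right]=\sigma^2, \qquad\text{whereas}\qquad \mathbb{E}\!\left[\frac{1}{n}\sum_{i=1}^{n}\bigl(X_i-\bar{X}\bigr)^2\right]=\frac{n-1}{n}\,\sigma^2,$$

so the version that divides by $n$ systematically underestimates $\sigma^2$.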

You want to estimate the variance of the whole population. If you have the whole population you can just calculate it, but when you have only a small portion (a sample) of the population, you know that the variance in that sample is probably going to be a bit lower than the variance in the whole population. So, to make your estimate of the population variance more accurate, you divide by one less than the number of data points in your sample, which gives you a somewhat higher number for your estimated population variance. If the sample gets large enough there is hardly any difference (it does not matter very much whether you divide by 1000 or by 999).
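
If you want to see that bias numerically, here is a minimal simulation sketch; the distribution, sample size, and trial count below are arbitrary illustrative choices, not anything implied by the argument above:

```python
import random

random.seed(42)

TRUE_VAR = 4.0     # population variance of Normal(mean=0, sd=2)
N = 5              # small sample size, where the bias is most visible
TRIALS = 100_000   # number of repeated samples

total_div_n = 0.0   # running total of the biased estimate (divide by n)
total_div_n1 = 0.0  # running total of the unbiased estimate (divide by n-1)

for _ in range(TRIALS):
    sample = [random.gauss(0.0, 2.0) for _ in range(N)]
    mean = sum(sample) / N
    ss = sum((x - mean) ** 2 for x in sample)  # sum of squared deviations
    total_div_n += ss / N
    total_div_n1 += ss / (N - 1)

print(f"true variance:       {TRUE_VAR}")
print(f"average of ss/n:     {total_div_n / TRIALS:.3f}")   # near (4/5) * 4.0 = 3.2
print(f"average of ss/(n-1): {total_div_n1 / TRIALS:.3f}")  # near 4.0
```

Dividing by n settles in the long run near $\frac{n-1}{n}\sigma^2$; the n-1 divisor is there precisely to undo that shortfall.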

Let me just make an addition. When you compute the variance, you have to know the mean. When you computed the mean, the number of degrees of freedom you had to account for in your finite sample was just the number of members in it, say n. But once you use that mean in your variance calculation, you have used up one of your degrees of freedom by bundling it into the value of the mean. So when you divide by the number of degrees of freedom, you only have n-1 of them left.
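
One way to see that "used up" degree of freedom is that the deviations from the sample mean always satisfy one linear constraint:

$$\sum_{i=1}^{n}\bigl(x_i-\bar{x}\bigr)=\sum_{i=1}^{n}x_i-n\bar{x}=0,$$

so once any n-1 of the deviations are known, the last one is forced. Only n-1 of them carry independent information, and that is the count you divide by.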