Monday, January 16, 2012

Different Types of "Asymptotics"

When econometricians talk about the "asymptotic" properties of their estimators or tests, they're usually referring to their properties when the sample size becomes infinitely large. However, there are other types of "asymptotics" that are also interesting and important. It's worth being aware of this, and of the way they arise in econometric analysis.

Large-sample asymptotics focus on the "limiting distribution" of a suitably scaled statistic (estimator or test statistic) of interest, as the sample size (n) becomes infinitely large. We're all familiar with the estimator properties of (weak) consistency, asymptotic efficiency, etc. Most of the time, Maximum Likelihood estimators enjoy these properties, for instance. When it comes to testing, we often have to rely on the asymptotic distribution of the test statistic to get critical values, because the finite-sample distribution is intractable.
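As a quick numerical sketch of weak consistency (my own illustration, not from the post; numpy assumed), the spread of the sample mean across repeated samples shrinks as n grows:

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = 5.0

# For each sample size n, compute the sample mean in 2000 replicated
# samples; its standard deviation should shrink as n grows - weak
# consistency at work.
spread = {}
for n in (10, 100, 1000):
    means = rng.exponential(scale=true_mean, size=(2000, n)).mean(axis=1)
    spread[n] = means.std()
    print(n, round(spread[n], 4))
```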

Of course, we can use the bootstrap to learn about the finite-sample distribution of a statistic.
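As a hedged sketch of the nonparametric bootstrap (again my own example, with the sample median as the statistic of interest): resample the observed data with replacement many times, recompute the statistic each time, and use the resulting values to approximate its finite-sample distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.lognormal(size=50)  # one observed sample, n = 50

# Resample with replacement B times and recompute the statistic each
# time; the B values approximate its finite-sample distribution.
B = 5000
boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(B)
])

# e.g., a percentile-based 95% interval for the population median
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"bootstrap 95% interval for the median: [{lo:.3f}, {hi:.3f}]")
```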

Large-sample asymptotic results can provide very helpful, but approximate, results in many cases. In others, they can be quite misleading when the sample size is modest. Estimators may be substantially biased and imprecise, and the "size" of our tests can be distorted badly.

I like to think of "weak consistency" as being a particularly well-named estimator property! If an estimator is inconsistent, then it's probably best to avoid it altogether. That's because if an estimator is inconsistent then it will give the wrong answer, with probability one, even when we have the entire population of data in front of us, and we actually know the true answer!

Who would want to use such an estimator? Not me!

A second type of "asymptotics" that we use when evaluating estimators and tests is the so-called "Small-Sigma Asymptotics", suggested originally by Kadane (1971). You may not have encountered this, but it's been used to good effect by various econometricians.

The motivation for large-sample asymptotics is that we'd like our inferences to eventually become very good indeed if the sample size grows without limit. Small-sigma asymptotics are motivated by the idea that we'd like these inferences to become extremely good when the variability in the population (and hence in the sample) becomes less and less.

Think of the extreme case. If the population has zero variance, then all of the population items will take the same value, and so will all of the values in any sample. In that case, any "sensible" statistic that we construct from the sample should be able to give us an exact result when we use it to draw inferences about the population. For instance, in this extreme case, the sample mean will equal the population mean exactly, and it will provide a "perfect" estimator of the latter parameter.

So, with "Small-Sigma" asymptotics, we're asking what happens to the sampling distribution of a statistic when the population variance goes to zero. This provides us with an alternative set of asymptotic results - a different type of "consistency" than we're perhaps used to.

Let's see how this works out in the case of the OLS estimator for the usual k-regressor linear regression model:

y = Xβ + ε ; ε ~ [0, σ²I].

We can re-write this equivalently as:

y = Xβ + σv ; v ~ [0, I].

Now, the OLS estimator of β is:

b = (X'X)⁻¹X'y = (X'X)⁻¹X'(Xβ + σv),

and we can write the "estimation error" as:

(b - β) = σ(X'X)⁻¹X'v.

Clearly, as σ → 0, b → β.

Not only is the OLS estimator of β (large-n) consistent under certain conditions, but it is also small-σ consistent. Notice that the latter result holds provided that:

1. The OLS estimator is defined - that is, X has full column rank; and

2. X does not depend on σ.
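A short simulation (my own sketch, using numpy) makes the result concrete. Since (b - β) = σ(X'X)⁻¹X'v, holding X and v fixed while shrinking σ scales the estimation error down linearly:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3
X = rng.normal(size=(n, k))   # fixed regressors, full column rank
beta = np.array([1.0, -2.0, 0.5])
v = rng.normal(size=n)        # same standardized errors throughout

# (b - beta) = sigma * (X'X)^{-1} X'v, so cutting sigma by a factor
# of ten cuts the estimation error by the same factor.
errors = {}
for sigma in (1.0, 0.1, 0.01):
    y = X @ beta + sigma * v
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    errors[sigma] = np.max(np.abs(b - beta))
    print(sigma, errors[sigma])
```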

Just as random regressors render the OLS estimator (large-n) inconsistent unless they are uncorrelated with the errors in the limit, so too this estimator may be (small-σ) inconsistent in the unlikely event that the regressors are random and their variation is a function of σ.

As a second, related, example consider the usual unbiased estimator of σ² in this model, namely:

s² = (y - Xb)'(y - Xb) / (n - k).

It's easy to prove, using Khintchine's Theorem, that this estimator is (large-n) consistent for σ². So, by Slutsky's Theorem, s is (large-n) consistent for σ.

Notice that we can write:

s² = (Xβ + σv - Xb)'(Xβ + σv - Xb) / (n - k),

or, s² = σ²(v'Mv) / (n - k), where M = I - X(X'X)⁻¹X' is the usual symmetric, idempotent "residual-maker" matrix.

Then, as σ → 0, so does s², and we see that in a rather trivial sense, s² is also (small-σ) consistent for σ². Equally, s itself is (small-σ) consistent for σ.
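This too can be checked numerically (my own sketch, with numpy). Writing M = I - X(X'X)⁻¹X' for the residual-maker matrix, we have s² = σ²(v'Mv)/(n - k), so with the same v the ratio s²/σ² is constant and s² vanishes exactly as fast as σ²:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([0.5, 1.5])
v = rng.normal(size=n)

# s^2 scales with sigma^2 when X and v are held fixed, so the ratio
# s^2 / sigma^2 should be identical across the three values of sigma.
ratios = {}
for sigma in (1.0, 0.1, 0.01):
    y = X @ beta + sigma * v
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    s2 = resid @ resid / (n - k)
    ratios[sigma] = s2 / sigma**2
    print(sigma, s2, ratios[sigma])
```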

Small-σ asymptotics have been used to analyze the properties of various econometric estimators and test statistics. Some interesting examples are provided by Kadane (1971), Inder (1986), Ullah et al. (1995), Srivastava and Ullah (1995), and Ullah (2004, pp. 36-45).

Finally, there's an interesting interview with the man who started all of this - "Jay" Kadane - here.

Note: The links to the following references will be helpful only if your computer's IP address gives you access to the electronic versions of the publications in question. That's why a written References section is provided.