NumXL Support Desk

Autoregressive (AR) Model

We originally composed these technical notes after sitting in on a time series analysis class. Over the years, we have maintained them, adding new insights, empirical observations, and intuitions acquired along the way. We often go back to these notes to resolve development issues and/or to properly address product support matters.

In this paper, we’ll go over another simple, yet fundamental, econometric model: the auto-regressive model. Make sure you have looked over our prior paper on the moving average model, as we build on many of the concepts presented in that paper.

This model serves as a cornerstone for any serious application of ARMA/ARIMA models.

Background

The auto-regressive model of order p (i.e. AR(p)) is defined as follows:
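Using standard notation, where $\phi_1,\phi_2,\cdots,\phi_p$ are the autoregressive coefficients, $\phi_0$ is a constant, and $\{a_t\}$ is a zero-mean white-noise (innovation) process with variance $\sigma^2$:

$$x_t=\phi_0+\phi_1 x_{t-1}+\phi_2 x_{t-2}+\cdots+\phi_p x_{t-p}+a_t$$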

Essentially, the AR(p) model is merely a multiple linear regression model in which the independent (explanatory) variables are the lagged values of the output (i.e. $x_{t-1},x_{t-2},\cdots,x_{t-p}$). Keep in mind that $x_{t-1},x_{t-2},\cdots,x_{t-p}$ may be highly correlated with one another.
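To make the regression analogy concrete, here is a minimal sketch in Python/NumPy (the function name and the simulated AR(2) example are purely illustrative) that estimates AR(p) coefficients by ordinary least squares on the lagged values:

```python
import numpy as np

def fit_ar_ols(x, p):
    """Estimate AR(p) coefficients by regressing x_t on its p lagged values.

    Returns (intercept, [phi_1, ..., phi_p]) from an ordinary least-squares fit.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Design matrix: a constant column plus the lag columns x_{t-1}, ..., x_{t-p}
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - k:n - k] for k in range(1, p + 1)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0], beta[1:]

# Illustration: simulate an AR(2) series and recover its coefficients
rng = np.random.default_rng(0)
n_obs = 5000
a = rng.normal(size=n_obs)
x = np.zeros(n_obs)
for t in range(2, n_obs):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + a[t]

intercept, phi = fit_ar_ols(x, 2)
print(intercept, phi)   # phi should be close to [0.5, -0.3]
```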

Why do we need another model?

First, we can think of an AR model as a special (i.e. restricted) representation of a $MA(\infty)$ process. Let's consider a stationary AR(1) process.
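In lag-operator notation, writing $\phi$ for the autoregressive coefficient (with $\left | \phi \right | < 1$) and $L$ for the lag operator, such a process takes the form:

$$(1-\phi L)x_t = a_t$$

Inverting the lag polynomial and expanding it as a geometric series gives:

$$x_t=\frac{a_t}{1-\phi L}=\sum_{i=0}^{\infty}\phi^i L^i a_t=\sum_{i=0}^{\infty}\phi^i a_{t-i}$$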

More generally, provided $\left | \lambda_i \right | <1$ for all $i\in \{1,2,\cdots,p\}$, we can use partial-fraction decomposition and the geometric series representation to construct the algebraic equivalent of the $MA(\infty)$ representation.
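As a sketch of the algebra (assuming the $\lambda_i$ are distinct, and writing $c_i$ for the partial-fraction coefficients that reappear in the impulse response section below):

$$x_t=\frac{a_t}{\prod_{i=1}^{p}(1-\lambda_i L)}=\sum_{i=1}^{p}\frac{c_i}{1-\lambda_i L}\,a_t=\sum_{i=1}^{p}c_i\sum_{j=0}^{\infty}\lambda_i^j L^j a_t$$

Collecting terms by powers of $L$ yields exactly an $MA(\infty)$ representation with absolutely summable coefficients.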

Hint: By now, this formulation should look familiar; it mirrors what we did earlier in the MA technical note, where we inverted a finite-order MA process into an equivalent $AR(\infty)$ representation.

The key point is being able to convert a stationary, finite-order AR process into an algebraically equivalent $MA(\infty)$ representation. This property is referred to as causality.

Causality

Definition: A linear process $\{X_t\}$ is causal (strictly, a causal function of $\{a_t\}$) if there is an equivalent $MA(\infty)$ representation.

$$x_t=\Psi(L)a_t=\sum_{i=0}^\infty \psi_i L^i a_t$$

Where

$$\sum_{i=0}^\infty \left | \psi_i \right | < \infty $$

Causality is a property of both $\{X_t\}$ and $\{a_t\}$.

In plain words, the value of $X_t$ depends only on the current and past values of $\{a_t\}$ (i.e. $a_t, a_{t-1}, a_{t-2},\cdots$), not on future ones.

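Consider, for example, the standard case of a stationary AR(1) process $(1-\phi L)x_t=a_t$ with $\left | \phi \right | > 1$; solving the recursion forward (rather than backward) in time yields:

$$x_t=-\sum_{i=1}^{\infty}\phi^{-i}a_{t+i}$$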
This process is non-causal, as its values depend on future values of the $\{a_t\}$ observations. However, it is also stationary.

Going forward, for an AR (and ARMA) process, stationarity by itself is not sufficient; the process must be causal as well. In all our future discussions and applications, we shall only consider stationary, causal processes.

Stability

Similar to what we did in the moving average model paper, we will now examine the long-run marginal (unconditional) mean and variance.

Example: AR(1)

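As a quick sketch (using the notation $x_t=\phi_0+\phi_1 x_{t-1}+a_t$, with $a_t$ an i.i.d. innovation of variance $\sigma^2$ and $\left | \phi_1 \right | <1$), taking the expectation and variance of both sides under stationarity gives the long-run (unconditional) moments:

$$\mu=\mathrm{E}[x_t]=\frac{\phi_0}{1-\phi_1},\qquad \mathrm{Var}(x_t)=\frac{\sigma^2}{1-\phi_1^2}$$

Both quantities are finite precisely because $\left | \phi_1 \right | < 1$.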
Assuming all the characteristic roots (i.e. $1/\lambda_i$) fall outside the unit circle, the AR(p) process can be viewed as a weighted sum of $p$ stable MA processes, so a finite long-run variance must exist.

Impulse Response Function

Earlier, we used the AR(p) characteristic roots and partial-fraction decomposition to derive the equivalent infinite-order moving average representation. Alternatively, we can compute the impulse response function (IRF) and read off the MA coefficients' values.

The impulse response function describes the model output triggered by a single (unit) shock at time $t$.
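A quick way to see this numerically is to feed a single unit shock through the AR recursion. Here is a minimal sketch in Python (the function name and coefficient values are purely illustrative) that computes the first few IRF values, which coincide with the $\psi_i$ weights of the $MA(\infty)$ representation:

```python
import numpy as np

def ar_impulse_response(phi, horizon):
    """Impulse response of an AR(p) process to a unit shock at time 0.

    phi     : sequence of AR coefficients [phi_1, ..., phi_p]
    horizon : number of IRF values to return (psi_0, ..., psi_{horizon-1})
    """
    p = len(phi)
    psi = np.zeros(horizon)
    psi[0] = 1.0  # the shock itself: psi_0 = 1
    for t in range(1, horizon):
        # psi_t = phi_1 * psi_{t-1} + ... + phi_p * psi_{t-p}
        psi[t] = sum(phi[i] * psi[t - 1 - i] for i in range(min(p, t)))
    return psi

# Illustration: AR(2) with phi_1 = 0.5, phi_2 = -0.3
print(ar_impulse_response([0.5, -0.3], 6))
```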

We derive the values of the MA coefficients as follows: $$x_t=\left [ (c_1+c_2+\cdots+c_p) + (c_1\lambda_1+c_2\lambda_2+\cdots+c_p\lambda_p)L+ (c_1\lambda_1^2+c_2\lambda_2^2+\cdots+c_p\lambda_p^2)L^2+\cdots \right ]a_t$$

In principle, the IRF values must match the MA coefficients' values, so we can conclude:

The sum of the partial-fraction coefficients (i.e. the $c_i$) equals one (i.e. $\sum_{i=1}^p c_i = 1$).
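In other words, matching the lag-zero coefficient of the causal $MA(\infty)$ representation ($\psi_0=1$, since $x_t$ responds one-to-one to a contemporaneous shock) with the lag-zero term of the expression above gives:

$$\psi_0=c_1+c_2+\cdots+c_p=1$$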