Modeling Cycles: MA, AR, and ARMA Models

Describe the properties of the first-order moving average (MA(1)) process, and distinguish between autoregressive representation and moving average representation.

Describe the properties of a general finite-order moving average process of order \(q\) (MA(\(q\))).

Describe the properties of the first-order autoregressive (AR(1)) process, and define and explain the Yule-Walker equation.

Describe the properties of a general \(p\)th order autoregressive (AR(\(p\))) process.

Define and describe the properties of the autoregressive moving average (ARMA) process.

Moving Averages (MA) Models

A finite-order moving average process can be viewed as an approximation to the Wold representation, which is itself a moving average process of infinite order. In this framework, shocks of various sorts drive all variation in a time series.

The First-Order Moving Average (MA(1)) Process

In the general MA process, and particularly the MA(1) process, the current value of the observed series is expressed as a function of current and lagged unobservable shocks. This is the defining feature of the MA process.

The autocorrelation function of the process is the autocovariance function scaled by the variance.

A crucial feature of the MA(1) process is the sharp cutoff in its autocorrelation function: all autocorrelations are zero beyond displacement 1. Regardless of the values of the MA parameters, the conditions for covariance stationarity are always met for any MA(1) process.
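These properties can be checked numerically. The sketch below simulates an MA(1) path with NumPy and computes sample autocorrelations; the value \(\theta = 0.5\), the sample size, and the seed are illustrative choices, not values from the text:

```python
import numpy as np

# Simulate an MA(1) process y_t = eps_t + theta * eps_{t-1} and check the
# autocorrelation cutoff.  theta = 0.5, the sample size, and the seed are
# illustrative choices.
rng = np.random.default_rng(0)
theta, n = 0.5, 100_000
eps = rng.standard_normal(n + 1)
y = eps[1:] + theta * eps[:-1]

def sample_autocorr(x, tau):
    """Sample autocorrelation of x at displacement tau."""
    xc = x - x.mean()
    return float((xc[tau:] * xc[:-tau]).sum() / (xc * xc).sum())

rho1 = sample_autocorr(y, 1)   # theory: theta / (1 + theta**2) = 0.4
rho2 = sample_autocorr(y, 2)   # theory: 0 -- the sharp cutoff
print(rho1, rho2)
```

The sample autocorrelation at displacement 1 lands near the theoretical value \(\theta /(1+{\theta}^{2})=0.4\), while the displacement-2 value hovers near zero.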

The MA(1) process is considered invertible if:

$$ |\theta |<1 $$

Therefore, the MA(1) process can be inverted so that the current value of the series is expressed in terms of a current shock and lagged values of the series, instead of a current and a lagged shock. This is referred to as the autoregressive representation, and it can be derived as follows:

The process has been defined as:

$$ { y }_{ t }={ \epsilon }_{ t }+\theta { \epsilon }_{ t-1 } $$

$$ { \epsilon }_{ t }\sim WN\left( 0,{ \sigma }^{ 2 } \right) $$

We then solve for the innovation:

$$ { \epsilon }_{ t }={ y }_{ t }-\theta { \epsilon }_{ t-1 } $$

Lagging by successively more periods gives the expressions for the innovations at earlier dates:

$$ { \epsilon }_{ t-1 }={ y }_{ t-1 }-\theta { \epsilon }_{ t-2 } $$

$$ { \epsilon }_{ t-2 }={ y }_{ t-2 }-\theta { \epsilon }_{ t-3 } $$

Substituting these back repeatedly then yields

$$ { \epsilon }_{ t }={ y }_{ t }-\theta { y }_{ t-1 }+{ \theta }^{ 2 }{ y }_{ t-2 }-{ \theta }^{ 3 }{ y }_{ t-3 }+\dots $$

Using lag operator notation, this convergent infinite autoregressive representation can be expressed compactly as:

$$ \frac { 1 }{ 1+\theta L } { y }_{ t }={ \epsilon }_{ t } $$

Since \(\theta\) is raised to progressively higher powers in the back substitution, a convergent autoregressive representation exists only if \(|\theta |<1\).
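The role of \(|\theta |<1\) can also be seen numerically: running the recursion \({\epsilon}_{t}={y}_{t}-\theta {\epsilon}_{t-1}\) from the (incorrect) starting guess of zero for the pre-sample shock, the initial error is damped by a factor of \(\theta\) at every step. A minimal sketch, with \(\theta = 0.5\), the sample size, and the seed as illustrative choices:

```python
import numpy as np

# Recover MA(1) innovations via eps_t = y_t - theta * eps_{t-1}, starting
# from the (wrong) guess of zero for the pre-sample shock.  The initial
# error is multiplied by -theta at each step, so the recursion converges
# iff |theta| < 1.  theta = 0.5, n, and the seed are illustrative.
rng = np.random.default_rng(1)
theta, n = 0.5, 200
eps = rng.standard_normal(n + 1)     # eps[0] plays the role of eps_{-1}
y = eps[1:] + theta * eps[:-1]       # y_t = eps_t + theta * eps_{t-1}

eps_hat = np.empty(n)
prev = 0.0                           # guessed pre-sample shock
for t in range(n):
    eps_hat[t] = y[t] - theta * prev
    prev = eps_hat[t]

early_err = abs(eps_hat[0] - eps[1])   # equals |theta * eps_{-1}|
late_err = abs(eps_hat[-1] - eps[-1])  # damped by theta^n, essentially zero
print(early_err, late_err)
```

With \(|\theta |>1\), the same recursion would amplify the initial error instead of damping it, which is why invertibility is required.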

The only root of the MA(1) lag operator polynomial is the solution to:

$$ 1+{\theta L}=0 $$

Which is:

$$ L=-\frac { 1 }{ \theta } $$

This implies that if \(|\theta |<1\), the root is greater than 1 in absolute value, so its inverse, \(-\theta\), lies inside the unit circle. Invertibility thus requires the inverse of the root to be less than 1 in absolute value.

The next step is to evaluate the partial autocorrelation function of the MA(1) process, which decays gradually to zero. In a sequence of progressively higher-order autoregressive approximations, the partial autocorrelations are the coefficients on the last included lag.
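One way to compute these partial autocorrelations from a known autocorrelation function is the Durbin-Levinson recursion, which delivers exactly the coefficient on the last included lag of each successive autoregressive approximation. A sketch for an MA(1) with \(\theta = 0.5\) (an illustrative value):

```python
# Partial autocorrelations from a known autocorrelation function via the
# Durbin-Levinson recursion: pacf[k] is the coefficient on the last included
# lag of the k-th order autoregressive approximation.  theta = 0.5 below is
# an illustrative value.

def pacf_from_acf(rho, kmax):
    """rho[tau] is the autocorrelation at displacement tau; rho[0] = 1."""
    pacf = [rho[1]]
    phi = [rho[1]]                   # coefficients of the AR(1) approximation
    for k in range(2, kmax + 1):
        num = rho[k] - sum(phi[j] * rho[k - 1 - j] for j in range(k - 1))
        den = 1.0 - sum(phi[j] * rho[j + 1] for j in range(k - 1))
        phi_kk = num / den
        phi = [phi[j] - phi_kk * phi[k - 2 - j] for j in range(k - 1)] + [phi_kk]
        pacf.append(phi_kk)
    return pacf

theta = 0.5
rho = [1.0, theta / (1 + theta**2)] + [0.0] * 8   # MA(1): cutoff after lag 1
pacf = pacf_from_acf(rho, 4)
print(pacf)   # ~[0.4000, -0.1905, 0.0941, -0.0469]: gradual, alternating decay
```

Unlike the autocorrelation function, which cuts off sharply after displacement 1, these partial autocorrelations alternate in sign and decay only gradually toward zero.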

The MA(\(q\)) process is a generalized representation of the MA(1) process. This means that the MA(1) process is a special case of the MA(\(q\)) process, with \(q\) being equal to 1.

Therefore, the MA(\(q\)) and MA(1) processes have similar properties in all respects. When \(q>1\), the MA(\(q\)) lag operator polynomial has \(q\) roots, some of which may be complex.

For the MA(\(q\)) process to be invertible, all the roots must have inverses that lie inside the unit circle. This yields the convergent autoregressive representation:

$$ \frac { 1 }{ \Theta \left( L \right) } { y }_{ t }={ \epsilon }_{ t } $$

The conditional mean of the MA(\(q\)) process changes with the information set, while the unconditional moments are fixed. The conditional mean depends on \(q\) lags of the innovation, so the MA(\(q\)) process has a potentially longer memory. This is clearly observed in its autocorrelation function: all autocorrelations are zero beyond the \(q\)th displacement.

The defining property of the moving average process is this autocorrelation cutoff.
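The cutoff can be made concrete with the standard MA(\(q\)) autocovariance formula, \(\gamma \left( \tau \right) ={ \sigma }^{ 2 }\sum _{ i }{ { \theta }_{ i }{ \theta }_{ i+\tau } }\) for \(\tau \le q\) (with \({\theta}_{0}=1\)) and zero beyond. The MA(2) coefficients in the sketch below are illustrative choices:

```python
# Autocorrelation function of an MA(q) process.  The standard result is
# gamma(tau) = sigma^2 * sum_i theta_i * theta_{i+tau} for tau <= q (with
# theta_0 = 1), and gamma(tau) = 0 beyond displacement q -- the defining
# cutoff.  The MA(2) coefficients below are illustrative choices.

def ma_autocorr(thetas, tau):
    b = [1.0] + list(thetas)        # theta_0 = 1
    q = len(thetas)
    if tau > q:
        return 0.0                  # exact zero beyond the q-th displacement
    gamma_tau = sum(b[i] * b[i + tau] for i in range(q - tau + 1))
    gamma_0 = sum(c * c for c in b)
    return gamma_tau / gamma_0

acf = [ma_autocorr((0.6, -0.3), tau) for tau in range(5)]
print(acf)   # nonzero at tau = 1, 2; exactly zero for tau > 2
```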

According to Wold’s representation:

$$ { y }_{ t }=B\left( L \right) { \epsilon }_{ t } $$

With the order of \(B\left( L \right)\) being infinite, fitting the MA(1) model amounts to approximating the infinite-order polynomial \(B\left( L \right)\) by the first-order polynomial \(1+{ \theta }L\).

Even better approximations to the Wold representation can be provided by the MA(\(q\)) processes. The infinite moving average is approximated by the MA(\(q\)) process, with a moving average of finite order,

$$ { y }_{ t }=\Theta \left( L \right) { \epsilon }_{ t }. $$

Autoregressive (AR) Models

This is another approximation to the Wold representation. The autoregressive process is a simple stochastic difference equation, and stochastic difference equations are the natural vehicle for discrete-time stochastic dynamic modeling.

The First-Order Autoregressive (AR(1)) Process

The first-order autoregressive process, AR(1) for short, is written as:

$$ { y }_{ t }={ \epsilon }_{ t }+\varphi { y }_{ t-1 } $$

$$ { \epsilon }_{ t }\sim WN\left( 0,{ \sigma }^{ 2 } \right) $$

It can also be expressed in the lag operator form as follows:

$$ \left( 1-\varphi L \right) { y }_{ t }={ \epsilon }_{ t } $$

It is also important to note the duality between the two classes of models: a finite-order moving average process is always covariance stationary, but invertibility requires certain conditions to be met. Conversely, an autoregressive process is always invertible, but covariance stationarity requires certain conditions to be satisfied.

For the AR(1) process:

$$ { y }_{ t }=\varphi { y }_{ t-1 }+{ \epsilon }_{ t } $$

Backward substitution for the lagged \(y\)'s on the right-hand side then gives the convergent infinite moving average representation:

$$ { y }_{ t }={ \epsilon }_{ t }+\varphi { \epsilon }_{ t-1 }+{ \varphi }^{ 2 }{ \epsilon }_{ t-2 }+\dots $$

which converges provided that \(|\varphi |<1\), the condition for covariance stationarity of the AR(1) process.

Multiplying both sides of the AR(1) equation by \({ y }_{ t-\tau }\) and taking expectations yields the Yule-Walker equation:

$$ \gamma \left( \tau \right) =\varphi \gamma \left( \tau -1 \right) ,\quad \tau =1,2,\dots $$

Given \(\gamma \left( \tau -1 \right)\), \(\gamma \left( \tau \right)\) can be obtained for any \(\tau\). Let \(\gamma \left( 0 \right)\) be the variance of the process, such that:

$$ \gamma \left( 0 \right) =\frac { { \sigma }^{ 2 } }{ 1-{ \varphi }^{ 2 } } $$

$$ \gamma \left( \tau \right) ={ \varphi }^{ \tau }\frac { { \sigma }^{ 2 } }{ 1-{ \varphi }^{ 2 } } $$
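As a numerical check, the autocovariances implied by the Yule-Walker recursion \(\gamma \left( \tau \right) =\varphi \gamma \left( \tau -1 \right)\), started from \(\gamma \left( 0 \right) ={ \sigma }^{ 2 }/\left( 1-{ \varphi }^{ 2 } \right)\), can be compared against sample autocovariances from a long simulated AR(1) path. The values \(\varphi = 0.7\), \(\sigma = 1\), the sample size, and the seed are illustrative choices:

```python
import numpy as np

# Check the Yule-Walker recursion gamma(tau) = phi * gamma(tau - 1), started
# from gamma(0) = sigma^2 / (1 - phi^2), against sample autocovariances of a
# long simulated AR(1) path.  phi = 0.7, sigma = 1, the sample size, and the
# seed are illustrative choices.
rng = np.random.default_rng(2)
phi, sigma, n = 0.7, 1.0, 200_000

y = np.empty(n)
y[0] = rng.standard_normal() * sigma / np.sqrt(1 - phi**2)  # stationary start
for t in range(1, n):
    y[t] = phi * y[t - 1] + sigma * rng.standard_normal()

gammas = [sigma**2 / (1 - phi**2)]          # gamma(0) ~ 1.961
for tau in range(1, 4):
    gammas.append(phi * gammas[-1])         # Yule-Walker recursion

yc = y - y.mean()
sample = [float(np.mean(yc[tau:] * yc[: n - tau])) for tau in range(4)]
print(gammas)
print(sample)
```

The recursion and the simulated autocovariances agree closely at every displacement.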

Covariance stationarity in the AR(\(p\)) process occurs if and only if all the roots of the autoregressive lag operator polynomial \(\Phi \left( L \right)\) have inverses that fall inside the unit circle. In that case, the process can be written as a convergent infinite moving average:

$$ { y }_{ t }=\frac { 1 }{ \Phi \left( L \right) } { \epsilon }_{ t } $$

The partial autocorrelation function of the AR(\(p\)) process has a sharp cutoff at displacement \(p\).
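This cutoff can be verified directly by fitting successively higher-order autoregressions by least squares and keeping the coefficient on the last included lag. In the sketch below, the AR(2) coefficients, sample size, and seed are all illustrative choices:

```python
import numpy as np

# Empirical check of the sharp PACF cutoff for an AR(p): fit successively
# higher-order autoregressions by least squares and keep the coefficient on
# the last included lag.  For the AR(2) below it should be close to phi_2 at
# order 2 and close to zero beyond.  Coefficients, sample size, and seed are
# illustrative choices.
rng = np.random.default_rng(3)
phi1, phi2, n = 0.5, -0.3, 50_000

y = np.zeros(n)
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + rng.standard_normal()

def last_lag_coef(y, k):
    """OLS coefficient on lag k in a regression of y on its first k lags."""
    n = len(y)
    X = np.column_stack([y[k - j: n - j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return float(beta[-1])

pacf = [last_lag_coef(y, k) for k in (1, 2, 3, 4)]
print(pacf)   # roughly [rho(1), phi_2, ~0, ~0]
```

Beyond displacement 2, the coefficient on the last included lag is statistically indistinguishable from zero, which is the cutoff described above.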

Autoregressive Moving Average (ARMA) Models

The two classes of models are combined with a view to obtaining a better approximation to the Wold representation. The result is the autoregressive moving average, ARMA(\(p,q\)), process. The ARMA(1,1) is the simplest ARMA process that is neither a pure autoregression nor a pure moving average. That is:

$$ { y }_{ t }=\varphi { y }_{ t-1 }+{ \epsilon }_{ t }+\theta { \epsilon }_{ t-1 } $$

$$ { \epsilon }_{ t }\sim WN\left( 0,{ \sigma }^{ 2 } \right) $$

The process is considered covariance stationary with a convergent infinite moving average representation, in the event that all the inverses of the roots of \(\Phi \left( L \right)\) fall inside the unit circle.

The process is considered invertible with a convergent infinite autoregressive representation, in the event that all the inverses of the roots of \(\Theta \left( L \right)\) fall inside the unit circle.

This implies that an invertible moving average can be approximated by an autoregression of finite order \(m\), with better approximations obtained as \(m\) increases. The residuals can therefore be expressed approximately in terms of the observed data, and the parameters can then be solved for by minimizing the sum of squared residuals (using a computer).
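The procedure just described can be sketched numerically: for an ARMA(1,1), residuals are built recursively from \({ e }_{ t }={ y }_{ t }-\varphi { y }_{ t-1 }-\theta { e }_{ t-1 }\), and \(\left( \varphi ,\theta \right)\) are chosen to minimize the sum of squared residuals. A coarse grid search stands in for the numerical optimizer a real package would use; the true parameter values, sample size, grid, and seed below are all illustrative choices:

```python
import numpy as np

# ARMA(1,1) estimation by minimizing the sum of squared residuals: residuals
# are built recursively via e_t = y_t - phi * y_{t-1} - theta * e_{t-1}, and
# (phi, theta) are chosen to minimize sum(e_t^2).  A coarse grid search stands
# in for a numerical optimizer.  The true values phi = 0.6, theta = 0.3, the
# sample size, grid, and seed are illustrative choices.
rng = np.random.default_rng(4)
phi_true, theta_true, n = 0.6, 0.3, 5_000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + eps[t] + theta_true * eps[t - 1]

def ssr(phi, theta):
    """Sum of squared residuals, with the pre-sample residual set to zero."""
    e_prev, total = 0.0, 0.0
    for t in range(1, n):
        e = y[t] - phi * y[t - 1] - theta * e_prev
        total += e * e
        e_prev = e
    return total

grid = np.arange(-0.8, 0.81, 0.1)
phi_hat, theta_hat = min(
    ((p, q) for p in grid for q in grid), key=lambda pq: ssr(*pq)
)
print(phi_hat, theta_hat)   # should land at or next to (0.6, 0.3)
```

In practice a gradient-based optimizer replaces the grid, but the objective, the recursively computed residuals, and the least squares criterion, is the same.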