where $\varphi_1, \ldots, \varphi_p$ are the parameters of the model, $c$ is a constant, and $\varepsilon_t$ is white noise. This can be equivalently written using the backshift operator $B$ as

$X_t = c + \sum_{i=1}^{p} \varphi_i B^i X_t + \varepsilon_t$

so that, moving the summation term to the left side and using polynomial notation, we have

$\phi(B) X_t = c + \varepsilon_t .$

Some parameter constraints are necessary for the model to remain wide-sense stationary. For example, processes in the AR(1) model with $|\varphi_1| \geq 1$ are not stationary. More generally, for an AR(p) model to be wide-sense stationary, the roots of the polynomial $z^p - \sum_{i=1}^{p} \varphi_i z^{p-i}$ must lie within the unit circle, i.e., each root $z_i$ must satisfy $|z_i| < 1$.
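
As a concrete check of this root condition, the following minimal Python sketch (the helper name is ours) places the coefficients into the polynomial $z^p - \varphi_1 z^{p-1} - \cdots - \varphi_p$ and tests whether every root lies inside the unit circle:

```python
import numpy as np

def is_stationary(phi):
    """Check wide-sense stationarity of an AR(p) model by locating the
    roots of z^p - phi_1 z^(p-1) - ... - phi_p (a minimal sketch)."""
    phi = np.asarray(phi, dtype=float)
    # Coefficients of the characteristic polynomial, highest power first.
    poly = np.concatenate(([1.0], -phi))
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) < 1))

print(is_stationary([0.5]))        # AR(1), |phi| < 1  -> True
print(is_stationary([1.1]))        # AR(1), |phi| >= 1 -> False
print(is_stationary([0.5, -0.3]))  # AR(2) example     -> True
```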

In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model $X_t = c + \varphi_1 X_{t-1} + \varepsilon_t$. A non-zero value for $\varepsilon_t$ at, say, time $t = 1$ affects $X_1$ by the amount $\varepsilon_1$. Then by the AR equation for $X_2$ in terms of $X_1$, this affects $X_2$ by the amount $\varphi_1 \varepsilon_1$. Then by the AR equation for $X_3$ in terms of $X_2$, this affects $X_3$ by the amount $\varphi_1^2 \varepsilon_1$. Continuing this process shows that the effect of $\varepsilon_1$ never ends, although if the process is stationary then the effect diminishes toward zero in the limit.
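
This geometric decay of a shock is easy to verify numerically; the sketch below (with $c = 0$ for simplicity, parameter values ours) pushes a single unit shock through the AR(1) recursion and compares it with the closed form $\varphi_1^{t-1}\varepsilon_1$:

```python
import numpy as np

# Propagate a single unit shock at t = 1 through an AR(1) recursion
# (c = 0 for simplicity) and compare with the closed form phi^(t-1).
phi = 0.8
T = 20
x = np.zeros(T + 1)
eps = np.zeros(T + 1)
eps[1] = 1.0                        # one-time shock at t = 1
for t in range(1, T + 1):
    x[t] = phi * x[t - 1] + eps[t]

for t in (1, 2, 3, 10, 20):
    print(t, x[t], phi ** (t - 1))  # the two columns agree
```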

Because each shock affects X values infinitely far into the future from when it occurs, any given value $X_t$ is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression

$\phi(B) X_t = \varepsilon_t$

(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as

$X_t = \frac{1}{\phi(B)} \varepsilon_t ,$

where $B$ is the backshift operator, $\phi(\cdot)$ is the function defining the autoregression, $\phi(B) = 1 - \varphi_1 B - \cdots - \varphi_p B^p$, and $\varphi_k$ are the coefficients in the autoregression. When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to $\varepsilon_t$ has an infinite order; that is, an infinite number of lagged values of $\varepsilon_t$ appear on the right side of the equation.

The autocorrelation function of an AR(p) process is a sum of decaying exponentials.

Each real root of the characteristic polynomial contributes a component to the autocorrelation function that decays exponentially, while each pair of complex conjugate roots contributes an exponentially damped oscillation.

The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so AR(0) corresponds to white noise.

For an AR(1) process with a positive $\varphi$, only the previous term in the process and the noise term contribute to the output. If $\varphi$ is close to 0, then the process still looks like white noise, but as $\varphi$ approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output, similar to a low-pass filter.

For an AR(2) process, the previous two terms and the noise term contribute to the output. If both $\varphi_1$ and $\varphi_2$ are positive, the output will resemble that of a low-pass filter applied to white noise, with the high-frequency part of the noise decreased. If $\varphi_1$ is positive while $\varphi_2$ is negative, then the process favors changes in sign between terms of the process. The output oscillates. This can be likened to edge detection or detection of change in direction.
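
The qualitative behaviour described above can be reproduced with a short simulation; the following sketch (helper name and parameter values are ours) generates sample paths for AR(0), a near-unit-root AR(1), and an oscillatory AR(2):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar(phi, n, burn=200):
    """Simulate n points of an AR(p) process with unit-variance noise,
    discarding a burn-in segment (a minimal sketch)."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    x = np.zeros(n + burn)
    eps = rng.standard_normal(n + burn)
    for t in range(p, n + burn):
        # phi @ [x[t-1], ..., x[t-p]]
        x[t] = phi @ x[t - p:t][::-1] + eps[t]
    return x[burn:]

white = simulate_ar([], 500)            # AR(0): pure white noise
smooth = simulate_ar([0.95], 500)       # AR(1), phi near 1: low-pass look
wiggle = simulate_ar([0.5, -0.7], 500)  # AR(2), phi2 < 0: oscillatory
```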

An AR(1) process is given by $X_t = c + \varphi X_{t-1} + \varepsilon_t ,$ where $\varepsilon_t$ is a white noise process with zero mean and constant variance $\sigma_\varepsilon^2$. (Note: The subscript on $\varphi_1$ has been dropped.) The process is wide-sense stationary if $|\varphi| < 1$, since it is obtained as the output of a stable filter whose input is white noise. (If $\varphi = 1$ then $X_t$ has infinite variance, and is therefore not wide-sense stationary.) Assuming $|\varphi| < 1$, the mean $\operatorname{E}(X_t)$ is identical for all values of $t$ by the very definition of wide-sense stationarity. If the mean is denoted by $\mu$, it follows from $\operatorname{E}(X_t) = c + \varphi \operatorname{E}(X_{t-1})$ that $\mu = c + \varphi\mu$, and hence $\mu = \frac{c}{1-\varphi} .$

It can be seen that the autocovariance function decays with a decay time (also called time constant) of $\tau = -1/\ln(\varphi)$. [To see this, write $B_n = K\varphi^{|n|}$, where $K$ is independent of $n$. Then note that $\varphi^{|n|} = e^{|n|\ln\varphi}$ and match this to the exponential decay law $e^{-n/\tau}$.]
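
A quick numerical check of this decay law, using the ratio of sample autocovariances at successive lags as an estimate of $\varphi$ (parameter values are ours):

```python
import numpy as np

# Empirically check the exponential decay of the AR(1) autocovariance:
# B_n falls off as phi^|n|, i.e. with time constant tau = -1/ln(phi).
rng = np.random.default_rng(1)
phi, n = 0.9, 100_000
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def autocov(x, lag):
    """Sample autocovariance at a positive lag."""
    xc = x - x.mean()
    return np.dot(xc[:-lag], xc[lag:]) / len(xc)

tau_theory = -1.0 / np.log(phi)         # about 9.49 for phi = 0.9
ratio = autocov(x, 5) / autocov(x, 4)   # should be close to phi
print(tau_theory, -1.0 / np.log(ratio))
```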

The spectral density function of the AR(1) process is $\Phi(\omega) = \frac{1}{\sqrt{2\pi}}\,\frac{\sigma_\varepsilon^2}{1 + \varphi^2 - 2\varphi\cos(\omega)} .$ This expression is periodic due to the discrete nature of the $X_j$, which is manifested as the cosine term in the denominator. If we assume that the sampling time ($\Delta t = 1$) is much smaller than the decay time ($\tau$), then we can use a continuum approximation to $B_n$, $B(t) \approx \frac{\sigma_\varepsilon^2}{1-\varphi^2}\,e^{-|t|/\tau} ,$ which yields a Lorentzian profile for the spectral density: $\Phi(\omega) \approx \frac{1}{\sqrt{2\pi}}\,\frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\frac{\gamma}{\pi(\gamma^2 + \omega^2)} ,$

where $\gamma = 1/\tau$ is the angular frequency associated with the decay time $\tau$.
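
The quality of the Lorentzian approximation at low frequency can be checked numerically; the sketch below compares the exact spectral shape $1/(1 + \varphi^2 - 2\varphi\cos\omega)$ with its small-$\omega$ Lorentzian limit, constant factors aside (parameter values are ours):

```python
import numpy as np

# Compare the exact AR(1) spectral density (up to a constant factor)
# with its small-frequency Lorentzian approximation for phi near 1.
phi = 0.95
gamma = -np.log(phi)                 # decay rate 1/tau
omega = np.linspace(0.001, 0.5, 5)

exact = 1.0 / (1.0 + phi**2 - 2.0 * phi * np.cos(omega))
lorentz = (1.0 / phi) / (gamma**2 + omega**2)

print(exact / lorentz)  # ratios close to 1 while omega*tau stays small
```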

An alternative expression for $X_t$ can be derived by first substituting $c + \varphi X_{t-2} + \varepsilon_{t-1}$ for $X_{t-1}$ in the defining equation. Continuing this process $N$ times yields $X_t = c\sum_{k=0}^{N-1}\varphi^k + \varphi^N X_{t-N} + \sum_{k=0}^{N-1}\varphi^k \varepsilon_{t-k} .$ For $N$ approaching infinity, $\varphi^N$ approaches zero and $X_t = \frac{c}{1-\varphi} + \sum_{k=0}^{\infty}\varphi^k \varepsilon_{t-k} .$

It is seen that $X_t$ is white noise convolved with the $\varphi^k$ kernel plus the constant mean. If the white noise $\varepsilon_t$ is a Gaussian process then $X_t$ is also a Gaussian process. In other cases, the central limit theorem indicates that $X_t$ will be approximately normally distributed when $\varphi$ is close to one.
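
This convolution representation can be verified directly: the sketch below simulates an AR(1) path recursively and rebuilds it by convolving the same noise with a truncated $\varphi^k$ kernel (truncation length and parameter values are ours):

```python
import numpy as np

# Verify that a stationary AR(1) path equals white noise convolved with
# the kernel phi^k (truncated), plus the constant mean c/(1 - phi).
rng = np.random.default_rng(2)
phi, c, n = 0.7, 1.0, 1000
eps = rng.standard_normal(n)

# Recursive simulation, started at the process mean.
x = np.empty(n)
x[0] = c / (1 - phi) + eps[0]
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + eps[t]

# Truncated convolution: sum_k phi^k * eps_{t-k}, plus the mean.
K = 60                                    # phi^60 is negligible here
kernel = phi ** np.arange(K)
conv = np.convolve(eps, kernel)[:n] + c / (1 - phi)

print(np.max(np.abs(x[K:] - conv[K:])))   # tiny once the kernel 'fills in'
```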

The AR(1) model is the discrete-time analogue of the continuous Ornstein–Uhlenbeck process. It is therefore sometimes useful to understand the properties of the AR(1) model cast in an equivalent form. In this form, the AR(1) model is given by:

The AR(p) model is based on parameters $\varphi_i$ where $i = 1, \ldots, p$. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations.

The Yule–Walker equations are $\gamma_m = \sum_{k=1}^{p} \varphi_k \gamma_{m-k} + \sigma_\varepsilon^2 \delta_{m,0} , \qquad m = 0, 1, \ldots, p ,$ where $\gamma_m$ is the autocovariance function of $X_t$, $\sigma_\varepsilon$ is the standard deviation of the input noise process, and $\delta_{m,0}$ is the Kronecker delta. The equations for $m = 1, \ldots, p$ can be written in matrix form and solved for all $\{\varphi_m; m=1,2,\cdots,p\}$. The remaining equation for $m = 0$ is, using $\gamma_{-k} = \gamma_k$, $\gamma_0 = \sum_{k=1}^{p} \varphi_k \gamma_k + \sigma_\varepsilon^2 ,$ which, once $\{\varphi_m; m=1,2,\cdots,p\}$ are known, can be solved for $\sigma_\varepsilon^2 .$

An alternative formulation is in terms of the autocorrelation function. The AR parameters are determined by the first $p+1$ elements $\rho(\tau)$ of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating [4] $\rho(\tau) = \sum_{k=1}^{p} \varphi_k \rho(\tau - k) .$
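
Put together, the two formulations above give a practical estimator: compute sample autocovariances, solve the Toeplitz system for the $\varphi_k$, and recover $\sigma_\varepsilon^2$ from the $m = 0$ equation. A minimal sketch using scipy's Toeplitz solver (variable names are ours):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Yule-Walker estimation of AR(p) parameters from sample autocovariances.
rng = np.random.default_rng(3)
true_phi = np.array([0.6, -0.3])
p, n = 2, 100_000

# Simulate an AR(2) series with unit-variance noise.
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(p, n):
    x[t] = true_phi @ x[t - p:t][::-1] + eps[t]

xc = x - x.mean()
gamma = np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(p + 1)])

# Solve the Toeplitz system  R phi = r  with R[i, j] = gamma[|i - j|].
phi_hat = solve_toeplitz(gamma[:p], gamma[1:p + 1])
sigma2_hat = gamma[0] - phi_hat @ gamma[1:p + 1]  # the m = 0 equation
print(phi_hat, sigma2_hat)                        # ~[0.6, -0.3] and ~1.0
```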

The above equations (the Yule–Walker equations) provide several routes to estimating the parameters of an AR(p) model, by replacing the theoretical covariances with estimated values. Some of these variants can be described as follows:

Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices.

Formulation as a least squares regression problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of $X_t$ on the $p$ previous values of the same series. This can be thought of as a forward-prediction scheme. The normal equations for this problem can be seen to correspond to an approximation of the matrix form of the Yule–Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate.
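
A minimal sketch of this forward-prediction least squares scheme (the function name and the intercept handling are ours):

```python
import numpy as np

def fit_ar_ols(x, p):
    """Least-squares AR(p) fit: regress X_t on its p previous values
    (forward-prediction scheme; a sketch, intercept included)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Design matrix: row for time t holds [1, X_{t-1}, ..., X_{t-p}].
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - k:n - k] for k in range(1, p + 1)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    c, phi = beta[0], beta[1:]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    return c, phi, sigma2
```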

Formulation as an extended form of ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward-prediction equations, relating to the backward representation of the AR model: $X_t = c + \sum_{i=1}^{p} \varphi_i X_{t+i} + \varepsilon_t^{*} .$

Here prediction of values of $X_t$ would be based on the $p$ future values of the same series. This way of estimating the AR parameters is due to Burg,[5] and is called the Burg method.[6] Burg and later authors called these particular estimates "maximum entropy estimates",[7] but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with maximum entropy spectral estimation.[8]
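
Burg's method itself is a short lattice recursion: at each order it chooses the reflection coefficient that minimizes the summed forward plus backward prediction error power. The following sketch follows the classic textbook recursion (variable names are ours, and this is one of several equivalent formulations):

```python
import numpy as np

def burg_ar(x, p):
    """Burg's method for AR(p): a minimal sketch of the lattice recursion.
    Returns the AR coefficients phi_k and the noise-variance estimate."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()               # work with deviations from the mean
    N = len(x)
    a = np.array([1.0])            # prediction-error filter, a[0] = 1
    E = np.dot(x, x) / N           # error power, updated at each order
    f = x.copy()                   # forward prediction errors
    b = x.copy()                   # backward prediction errors
    for m in range(p):
        ef = f[m + 1:]             # f_m[n]   for n = m+1 .. N-1
        eb = b[m:-1]               # b_m[n-1] for n = m+1 .. N-1
        k = -2.0 * np.dot(eb, ef) / (np.dot(ef, ef) + np.dot(eb, eb))
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]        # Levinson-style order update
        f_new = ef + k * eb        # lattice recursions (computed before
        b_new = eb + k * ef        # overwriting the shared buffers)
        f[m + 1:] = f_new
        b[m + 1:] = b_new
        E *= (1.0 - k * k)
    return -a[1:], E

rng = np.random.default_rng(8)
n = 5000
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + eps[t]
print(burg_ar(x, 2))               # roughly ([0.6, -0.3], 1.0)
```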

Other possible approaches to estimation include maximum likelihood estimation. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial p values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.
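
Both likelihood variants are available in standard software; assuming the statsmodels package, a sketch of the comparison on a deliberately short series might look like this (AutoReg conditions on the initial values, while ARIMA maximizes the exact, unconditional likelihood):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima.model import ARIMA

# Conditional vs. exact maximum likelihood on the same short AR(1) series;
# the two sets of estimates differ most when the series is short.
rng = np.random.default_rng(4)
x = np.zeros(60)
eps = rng.standard_normal(60)
for t in range(1, 60):
    x[t] = 0.9 * x[t - 1] + eps[t]

cond = AutoReg(x, lags=1).fit()          # conditions on the first value
exact = ARIMA(x, order=(1, 0, 0)).fit()  # exact (unconditional) likelihood
print(cond.params, exact.params)
```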

If $\varphi_1 > 0$ there is a single spectral peak at $f = 0$, often referred to as red noise. As $\varphi_1$ becomes nearer 1, there is stronger power at low frequencies, i.e. larger time lags. This is then a low-pass filter; when applied to full-spectrum light, everything except the red light is filtered out.

If $\varphi_1 < 0$ there is a minimum at $f = 0$, often referred to as blue noise. This similarly acts as a high-pass filter; everything except the blue light is filtered out.

When $\varphi_1 > 0$ it acts as a low-pass filter on the white noise, with a spectral peak at $f = 0$.

When $\varphi_1 < 0$ it acts as a high-pass filter on the white noise, with a spectral peak at $f = 1/2$.
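
These two filtering behaviours show up directly in the periodogram of simulated paths; a sketch (parameter values are ours):

```python
import numpy as np

# Periodograms of simulated AR(1) paths: phi > 0 piles power near f = 0
# (red noise); phi < 0 piles power near f = 1/2 (blue noise).
rng = np.random.default_rng(7)
n = 4096
for phi in (0.9, -0.9):
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    pgram = np.abs(np.fft.rfft(x)) ** 2 / n
    f = np.fft.rfftfreq(n)           # frequencies in cycles per sample
    print(phi, f[np.argmax(pgram)])  # ~0.0 for phi > 0, ~0.5 for phi < 0
```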

The process is non-stationary when the roots lie outside the unit circle. The process is stable when the roots lie within the unit circle, or equivalently when the coefficients lie in the triangle $-1 \leq \varphi_2 \leq 1 - |\varphi_1|$.
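
The equivalence between the root condition and the coefficient triangle can be spot-checked numerically (a sketch; the sampling range is ours):

```python
import numpy as np

# Cross-check: for AR(2), the roots of z^2 - phi1 z - phi2 lie inside the
# unit circle exactly when (phi1, phi2) lies in the stationarity triangle.
rng = np.random.default_rng(5)
for _ in range(10_000):
    phi1, phi2 = rng.uniform(-2, 2, size=2)
    by_roots = np.all(np.abs(np.roots([1.0, -phi1, -phi2])) < 1)
    by_triangle = (phi2 > -1) and (phi2 < 1 - abs(phi1))
    assert by_roots == by_triangle
print("root condition and triangle agree on 10,000 random draws")
```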

The impulse response of a system is the change in an evolving variable in response to a change in the value of a shock term $k$ periods earlier, as a function of $k$. Since the AR model is a special case of the vector autoregressive model, the computation of the impulse response described for vector autoregressions (see Vector autoregression § Impulse response) applies here.
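
For a pure AR model the impulse response can also be computed directly from the recursion $\psi_0 = 1$, $\psi_k = \sum_{i=1}^{\min(k,p)} \varphi_i \psi_{k-i}$, which reproduces the MA(∞) weights discussed earlier. A sketch (the function name is ours):

```python
import numpy as np

def impulse_response(phi, K):
    """Response of X_{t+k} to a unit shock eps_t, for k = 0..K (sketch):
    psi_0 = 1 and psi_k = sum_i phi_i * psi_{k-i}."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    psi = np.zeros(K + 1)
    psi[0] = 1.0
    for k in range(1, K + 1):
        m = min(k, p)
        # phi[:m] @ [psi[k-1], ..., psi[k-m]]
        psi[k] = phi[:m] @ psi[k - 1::-1][:m]
    return psi

print(impulse_response([0.8], 5))        # 0.8^k: 1, 0.8, 0.64, ...
print(impulse_response([0.5, -0.7], 5))  # damped oscillation
```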

Once the parameters of the autoregression have been estimated, the autoregression can be used to forecast an arbitrary number of periods into the future. First use $t$ to refer to the first period for which data is not yet available; substitute the known prior values $X_{t-i}$ for $i = 1, \ldots, p$ into the autoregressive equation while setting the error term $\varepsilon_t$ equal to zero (because we forecast $X_t$ to equal its expected value, and the expected value of the unobserved error term is zero). The output of the autoregressive equation is the forecast for the first unobserved period. Next, use $t$ to refer to the next period for which data is not yet available; again the autoregressive equation is used to make the forecast, with one difference: the value of $X$ one period prior to the one now being forecast is not known, so its expected value (the predicted value arising from the previous forecasting step) is used instead. Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after $p$ predictions, all $p$ right-side values are predicted values from prior steps.
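
A sketch of this iterated forecasting procedure (the function name is ours; the numeric example is arbitrary):

```python
import numpy as np

def forecast(x, c, phi, steps):
    """Iterated forecasts from an AR(p): future error terms are set to
    zero, and earlier forecasts are fed back in as lagged values."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    hist = list(x[-p:])                  # last p observed values
    out = []
    for _ in range(steps):
        # phi @ [X_{t-1}, ..., X_{t-p}], with forecasts standing in
        # for any lagged values that are not yet observed.
        next_val = c + phi @ np.asarray(hist[-p:][::-1])
        out.append(next_val)
        hist.append(next_val)
    return np.array(out)

# AR(2) forecasts converge toward the mean c / (1 - phi1 - phi2).
print(forecast(np.array([0.3, 1.2]), 0.5, [0.6, -0.3], 8))
```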

There are four sources of uncertainty regarding predictions obtained in this manner: (1) uncertainty as to whether the autoregressive model is the correct model; (2) uncertainty about the accuracy of the forecasted values that are used as lagged values in the right side of the autoregressive equation; (3) uncertainty about the true values of the autoregressive coefficients; and (4) uncertainty about the value of the error term $\varepsilon_t$ for the period being predicted. Each of the last three can be quantified and combined to give a confidence interval for the $n$-step-ahead predictions; the confidence interval will become wider as $n$ increases because of the use of an increasing number of estimated values for the right-side variables.
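
For a known AR(1), sources (2) and (4) combine into a forecast-error variance of $\sigma_\varepsilon^2 (1-\varphi^{2n})/(1-\varphi^2)$, since the $n$-step error is $\sum_{j=0}^{n-1}\varphi^j \varepsilon_{t+n-j}$. The sketch below (ignoring sources (1) and (3); parameter values are ours) shows the resulting interval widening with $n$:

```python
import numpy as np

# Widening of the n-step-ahead interval for a known AR(1), counting only
# the uncertainty from future shocks (sources (1) and (3) are ignored).
phi, sigma2 = 0.8, 1.0
n = np.arange(1, 11)
var_n = sigma2 * (1 - phi ** (2 * n)) / (1 - phi ** 2)  # sum of phi^(2j)
half_width = 1.96 * np.sqrt(var_n)   # ~95% interval half-width
print(half_width)                    # grows with n, then levels off
```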

The predictive performance of the autoregressive model can be assessed as soon as estimation has been done if cross-validation is used. In this approach, some of the initially available data is used for parameter estimation purposes, and some (from available observations later in the data set) is held back for out-of-sample testing. Alternatively, after some time has passed since the parameter estimation was conducted, more data will have become available and predictive performance can be evaluated using the new data.

In either case, there are two aspects of predictive performance that can be evaluated: one-step-ahead and n-step-ahead performance. For one-step-ahead performance, the estimated parameters are used in the autoregressive equation along with observed values of X for all periods prior to the one being predicted, and the output of the equation is the one-step-ahead forecast; this procedure is used to obtain forecasts for each of the out-of-sample observations. To evaluate the quality of n-step-ahead forecasts, the forecasting procedure in the previous section is employed to obtain the predictions.

The question of how to interpret the measured forecasting accuracy arises—for example, what is a "high" (bad) or a "low" (good) value for the mean squared prediction error? There are two possible points of comparison. First, the forecasting accuracy of an alternative model, estimated under different modeling assumptions or different estimation techniques, can be used for comparison purposes. Second, the out-of-sample accuracy measure can be compared to the same measure computed for the in-sample data points (that were used for parameter estimation) for which enough prior data values are available (that is, dropping the first p data points, for which p prior data points are not available). Since the model was estimated specifically to fit the in-sample points as well as possible, it will usually be the case that the out-of-sample predictive performance will be poorer than the in-sample predictive performance. But if the predictive quality deteriorates out-of-sample by "not very much" (which is not precisely definable), then the forecaster may be satisfied with the performance.
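
A minimal holdout comparison of in-sample and out-of-sample one-step-ahead mean squared error for an AR(1) (the parameter values and the no-intercept fit are ours):

```python
import numpy as np

# Holdout check of one-step-ahead accuracy: fit on the first part of the
# series, then compare in-sample and out-of-sample mean squared errors.
rng = np.random.default_rng(6)
n, phi = 2000, 0.7
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

split = 1500
train, test = x[:split], x[split:]

# OLS fit of an AR(1) on the training segment (no intercept for brevity).
phi_hat = np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1])

mse_in = np.mean((train[1:] - phi_hat * train[:-1]) ** 2)
mse_out = np.mean((test[1:] - phi_hat * test[:-1]) ** 2)
print(phi_hat, mse_in, mse_out)  # out-of-sample MSE is typically larger
```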