3 Answers

First of all, it makes sense to talk about the correlation of the error terms $e_t$, not of the returns $r_t$ themselves.

Second, in the OLS model the consequences are mild: the estimate of $\beta$ is still going to be fine. The problem lies with the estimate of its uncertainty, i.e. the standard errors and t-stats of this $\beta$, not its value.
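To illustrate this claim, here is a minimal numpy simulation sketch (all parameter values are arbitrary illustrations, not taken from the question): it repeatedly generates data with AR(1) errors, fits the OLS slope, and shows that the slope estimates still centre on the true $\beta$ even though the errors are autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, rho = 500, 1.5, 0.8   # illustrative sample size, slope, error autocorrelation
n_sims = 200

betas = []
for _ in range(n_sims):
    x = rng.normal(size=n)
    # AR(1) errors: e_t = rho * e_{t-1} + u_t
    u = rng.normal(size=n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + u[t]
    y = 2.0 + beta_true * x + e
    # OLS slope estimate: Cov(x, y) / Var(x)
    betas.append(np.cov(x, y, bias=True)[0, 1] / np.var(x))

betas = np.array(betas)
# the average slope estimate sits close to beta_true despite the autocorrelated errors
print(betas.mean())
```

The naive OLS standard errors from any single fit would still be misleading here; only the point estimate of $\beta$ is unaffected.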

Third, the autoregressive (dynamic) model will be in trouble. Any time you have the lagged dependent variable as a regressor (on the right-hand side), autocorrelation in the residuals $e_t$ causes a serious issue: your coefficient estimate of $\phi$ will be biased.

If you are a financial analyst, check out the regression chapter in the CFA Level II reading materials; it covers the answers to your questions in detail, written for non-statisticians.

The OLS coefficient of an AR(1) model is biased regardless of residual autocorrelation, isn't it? What exactly happens when the residuals are autocorrelated; does the bias increase?
– Richard Hardy, Feb 9 '18 at 19:47

I understand that every time we want to draw a conclusion studying the sample we should have the iid property.

To be exact, classical linear regression relies upon the assumption of i.i.d. residuals. In your notation, the assumption of the model would be written as $e_t \overset{i.i.d.}{\sim} \mathcal{N}(0,\sigma^2)$.

Practically speaking, though, time dependence in the covariates often leads to time dependence in these residuals when they come from fitting a classical linear regression model.
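To see this concretely, here is a hedged numpy sketch (sample size and coefficients are illustrative): fit OLS to data whose errors were generated with lag-1 dependence, then check the lag-1 autocorrelation of the fitted residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)

# construct errors with strong lag-1 dependence: e_t = 0.7 * e_{t-1} + u_t
u = rng.normal(size=n)
e = np.empty(n)
e[0] = u[0]
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + u[t]
y = 1.0 + 2.0 * x + e

# fit classical OLS and inspect the residuals
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# lag-1 autocorrelation of the fitted residuals: close to the 0.7 used above
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(r1)
```

A residual diagnostic like this (or a formal test such as Durbin–Watson) is the usual way to detect that the i.i.d. assumption has been violated.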

What are the consequences of dependence in the response variable
observations if we use a static model like a simple index model?

$$r_t = \alpha + \beta x_t + e_t$$

Can we still make use of the model as explicative one?

I'm not sure what an "index model" is. Leaving aside the terminology of economics & finance, you are right to be suspicious of such a model in general. If the $r_t$ terms exhibit time dependency (also known as "autocorrelation" or "serial correlation") and the $x_t$ terms do not follow a similar dependency structure, then it is very possible to have time dependency in your fitted residuals $e_t$.

If that happens, it would invalidate your model. Practically, this means it would make poor predictions and capture an inaccurate relationship. And this is often the case in economic & financial domains, which explains why econometrics places a lot of emphasis on time series forecasting models.

On the other hand, it is easy to come up with counterexamples too: if $x_t=2r_t$, then the residuals of this trivial model should all be $e_t=0$ and so it would be valid in this case. So it does depend on your data.
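The counterexample can be checked directly. A small numpy sketch (the AR coefficient 0.9 is just an illustrative way to make $r_t$ autocorrelated):

```python
import numpy as np

rng = np.random.default_rng(2)

# an autocorrelated series r_t (AR(1) with coefficient 0.9)
r = np.empty(200)
r[0] = rng.normal()
for t in range(1, 200):
    r[t] = 0.9 * r[t - 1] + rng.normal()

x = 2.0 * r  # the trivial regressor from the counterexample

# regress r on x: the fit is exact, so the residuals vanish
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
resid = r - X @ coef
print(np.allclose(resid, 0.0))  # True: no autocorrelation left in the residuals
```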

I have a different framework in mind that is confusing me. I use to
fit a ARMA models or ARMA GARCH models to a dependent series like
returns to get rid of the autocorrelation and dependence.

An $ARMA(p,q)$ model is an example of a time series model. It is suitable for modelling weakly-stationary processes that exhibit dependence between time points. It accomplishes this by modelling the response using $p$ autoregressive terms and $q$ moving-average terms. From Wikipedia:

$$ X_t = c + \epsilon_t + \sum_{i=1}^p \phi_i X_{t-i} + \sum_{i=1}^q \theta_i \epsilon_{t-i} $$

So models like this are very capable of forecasting time dependent data, as long as the underlying process is assumed to be weakly stationary. For more information on these models, including how to specify them, and even more flexible generalisations, I strongly recommend Chatfield.

What are instead the consequences of autocorrelation (or dependence in
general) in residuals if we use a predictive model like $$r_t = \alpha + \phi r_{t-1} + e_t$$

Compare your model to the form of the $ARMA(p,q)$ model above. It looks very similar to this part:

$$ X_t = c + \epsilon_t + \sum_{i=1}^p \phi_i X_{t-i} $$

In fact, this is called an $AR(p)$ model. It is also a time series model for a weakly-stationary stochastic process, without any moving-average terms.

And so your model is just an $AR(1)$ model: by specifying $p=1$, it is in the form

$$ X_t = c + \epsilon_t + \phi_1 X_{t-1} $$

Therefore, your suggestion is a first-order autoregressive model, which may be suitable enough for your task.
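For illustration, an AR(1) like this can be fitted by simple conditional least squares, i.e. regressing $r_t$ on $r_{t-1}$. A minimal numpy sketch (the true parameter values are arbitrary choices for the simulation):

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha_true, phi_true = 2000, 0.5, 0.6

# simulate the AR(1): r_t = alpha + phi * r_{t-1} + e_t, e_t ~ N(0, 1)
r = np.empty(n)
r[0] = alpha_true / (1 - phi_true)  # start at the unconditional mean
for t in range(1, n):
    r[t] = alpha_true + phi_true * r[t - 1] + rng.normal()

# conditional least squares: regress r_t on a constant and r_{t-1}
X = np.column_stack([np.ones(n - 1), r[:-1]])
coef, *_ = np.linalg.lstsq(X, r[1:], rcond=None)
alpha_hat, phi_hat = coef
print(alpha_hat, phi_hat)  # close to the true (0.5, 0.6) when e_t is i.i.d.
```

Note this works well precisely because the simulated $e_t$ are i.i.d.; the next answer shows what goes wrong when they are not.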

To decide what type of time series model you need, and what sort of parameter values (like $p$ or $q$) you need to specify, you must analyse your data using ACF and PACF plots and follow the Box-Jenkins method. Chatfield explains it very clearly, and there are many books on the market about time series forecasting.

Nice answer, but I did not get the last part, namely, what are the consequences of autocorrelation in the given model.
– Richard Hardy, Feb 9 '18 at 19:53

The AR(1) model that the OP suggested is certainly capable of modelling autocorrelation: specifically that between a variable and itself 1 time step ("lag") in the past. It should be able to deal nicely with this specific dependency structure, but might struggle to model more complex ones.
– A. G., Feb 9 '18 at 19:57

I think this is an answer to a different question. I am not objecting to the information you provide, though.
– Richard Hardy, Feb 9 '18 at 20:21

Consequences of autocorrelation in residuals in AR(1)

Answer: Estimates will be biased

In the model
$$
r_t = \alpha + \phi r_{t-1} + e_t,
$$

$r_{t-1}$ is correlated with $e_{t-1}$, which in turn is correlated with $e_t$. So one of the assumptions of OLS, that the error term is uncorrelated with the regressors, is violated, and the estimates will be biased.

ARCH-type models, by contrast, specify serial correlation in the variance of the error term rather than in the error term itself, so the problem above does not arise there.
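This bias is easy to demonstrate by simulation. A minimal numpy sketch (the values $\phi = \rho = 0.5$ are illustrative): when the errors follow an AR(1) with coefficient $\rho$, the process is really an AR(2), and the OLS estimate of $\phi$ converges to the lag-1 autocorrelation $(\phi + \rho)/(1 + \phi\rho)$ rather than to $\phi$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi, rho = 5000, 0.5, 0.5  # true AR coefficient and error autocorrelation

u = rng.normal(size=n)
e = np.empty(n)
r = np.empty(n)
e[0], r[0] = u[0], 0.0
for t in range(1, n):
    e[t] = rho * e[t - 1] + u[t]   # autocorrelated errors
    r[t] = phi * r[t - 1] + e[t]   # the AR(1) regression of interest

# OLS of r_t on r_{t-1}
X = np.column_stack([np.ones(n - 1), r[:-1]])
coef, *_ = np.linalg.lstsq(X, r[1:], rcond=None)
phi_hat = coef[1]

# probability limit is (phi + rho) / (1 + phi * rho) = 0.8, far from the true 0.5
print(phi_hat)
```

So here the estimate is not just noisy but inconsistent: more data only concentrates it around the wrong value.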

The OLS coefficient of an AR(1) model is biased regardless of residual autocorrelation, isn't it? What exactly happens when the residuals are autocorrelated; does the bias increase?
– Richard Hardy, Feb 13 '18 at 19:16