In this way one can target medium-term trading applications based on fundamental (macro-)economic data.

The filter explicitly accounts for the 'freshness' of the low-frequency (macro) data by reestimating optimal coefficients each day: moving away from the release time point automatically downweights the importance of the macro series.

And of course you can combine this new feature with customization and regularization…

And cointegration…

I'm playing with the first prototypical code right now and I can tell ya: exciting stuff! And of course the 'exotic' optimization criteria: all new, completely fresh. 2015 will be an exciting vintage!

I have finished writing about replication and customization of univariate model-based approaches (ARIMA and unobserved-components state-space models). I have not treated the classic HP or CF filters yet, nor have I written anything about multivariate models.

Due to a calendar filled with ongoing and prospective new research projects, I have changed the schedule of my MDFA-Legacy project. The next topic (chapter) will be about replicating classic model-based approaches (ARIMA and state-space) in the generic DFA framework. Once replicated, nothing stands in the way of customization, obviously. For the unobserved-components (state-space) models the empirical framework emphasizes quarterly (log, real) US GDP. Various time spans ending before and after the Great Recession are analyzed, as well as different models with various integration orders and/or cycle lengths (freely determined or imposed). I'll introduce new packages, notably a state-space package (dlm) and Quandl (there is also a nice graphical feature with NBER recessions). Quandl is used because the data is downloaded directly from the corresponding site: the book works with fresh data…

I’m currently working on the customization chapter in the Legacy-project and here’s a short teaser:

I show that customized designs outperform the best theoretical mean-square error (MSE) filter, which assumes knowledge of the true data-generating process (DGP), in terms of speed (smaller time shift) AND noise suppression (smoother output), both in-sample and out-of-sample. To be fair, this has already been shown by McElroy and Wildi (the ATS-trilemma paper); the main added value of the book is that I reconciled different code sources, i.e. the results are cross-checked.
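To make the trade-off concrete, here is a hedged, self-contained Python sketch of the customization idea: the mean-square criterion splits exactly into an amplitude term (accuracy/smoothness) and a time-shift term (timeliness), and two tuning parameters (called `lam` and `eta` below, loosely after the usual DFA customization parameters) re-weight them. The AR(1) pseudo-spectrum, the ideal lowpass target and all numbers are my own toy choices, not the book's setup.

```python
import numpy as np
from scipy.optimize import minimize

K = 120
omega = np.pi * np.arange(K + 1) / K       # frequency grid on [0, pi]
cutoff = np.pi / 6
gamma = (omega <= cutoff).astype(float)    # target: ideal lowpass (trend extraction)
# pseudo-spectrum of an AR(1): x_t = 0.9 x_{t-1} + eps_t (toy choice)
spec = 1.0 / np.abs(1 - 0.9 * np.exp(-1j * omega)) ** 2
L = 24                                     # length of the causal filter

def trffkt(b):
    """Transfer function of the causal filter sum_k b[k] x_{t-k} on the grid."""
    return np.exp(-1j * np.outer(omega, np.arange(L))) @ b

def criterion(b, lam, eta):
    G = trffkt(b)
    A, Phi = np.abs(G), np.angle(G)
    W = (1 + omega / np.pi) ** eta                  # eta > 0: emphasize the stopband
    amp_err = (gamma - A) ** 2                      # accuracy / smoothness term
    shift_err = 2 * gamma * A * (1 - np.cos(Phi))   # timeliness term (passband only)
    # lam = eta = 0 recovers the plain (frequency-domain) MSE criterion
    return np.sum(W * spec * (amp_err + (1 + lam) * shift_err))

b0 = np.full(L, 1.0 / L)
b_mse = minimize(criterion, b0, args=(0.0, 0.0), method="L-BFGS-B").x    # MSE design
b_cust = minimize(criterion, b0, args=(30.0, 1.0), method="L-BFGS-B").x  # customized

def low_freq_shift(b):
    # approximate time shift of the filter near frequency zero
    return -np.angle(trffkt(b)[1]) / omega[1]

print("time shift, MSE vs customized:", low_freq_shift(b_mse), low_freq_shift(b_cust))
```

The split `amp_err + shift_err` is an exact identity for |Γ − Γ̂|², so setting `lam = eta = 0` really is the MSE benchmark; positive `lam` buys speed and positive `eta` buys noise suppression, at the cost of passband accuracy — the trilemma in miniature.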

Going further, I show that a customized univariate filter also outperforms a bivariate MSE design relying on an anticipative leading indicator (leading by one time unit) in terms of speed and noise suppression. This is of course a stronger claim because the multivariate (MSE) design is 'cheating'.

PS: I forgot to link the GDP data in my previous entry, so here it is: GDP1 and GDP2. These data are loaded by the R code of the MDFA-Legacy project.

The R code is ready but I'm unable to upload the files for security reasons. I'll find a solution… In the meantime, here's an updated version of the book MDFA-Legacy. You may have a look at the new sections in chapters 2–4. I finally managed to tackle the tedious i1=F, i2=T case in my code.
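For readers puzzling over the i1/i2 flags: as I read the MDFA conventions, they impose constraints on the filter at frequency zero — i1 fixes the level (the sum of the coefficients) and i2 fixes the time shift (the first-moment sum of the coefficients). A minimal Python sketch of the i1=F, i2=T case, where only the time-shift constraint is imposed by eliminating one coefficient; both the interpretation and the elimination scheme are my assumptions, not the book's code.

```python
import numpy as np

L = 12  # filter length (toy choice)

def constrain_i2(free, shift0=0.0):
    """Map L-1 free coefficients to L coefficients satisfying the
    time-shift constraint sum_k k*b[k] = shift0 (i2=T), while leaving
    the level sum_k b[k] unconstrained (i1=F). b[1] is eliminated."""
    b = np.empty(L)
    b[0] = free[0]
    b[2:] = free[1:]
    b[1] = shift0 - np.dot(np.arange(2, L), b[2:])  # solve the constraint for b[1]
    return b

b = constrain_i2(np.random.default_rng(1).standard_normal(L - 1))
print("first-moment sum:", np.dot(np.arange(L), b))  # ~0: vanishing time shift at omega=0
```

With this parametrization the optimizer works on the L−1 free coefficients and the constraint holds by construction — presumably one reason the mixed i1=F, i2=T bookkeeping is tedious in the full code.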