Long Memory

In 1906, Harold Edwin Hurst, a young English civil servant, came to Cairo, Egypt, then under British rule. Working as a hydrological consultant, Hurst faced the problem of predicting how much the Nile flooded from year to year. He developed a test for long-range dependence, found significant long-term correlations among the fluctuations in the Nile's outflows, and described these correlations in terms of power laws. The statistic he devised is known as the rescaled range, range over standard deviation, or R/S statistic. From 1951 to 1956, Hurst, by then in his seventies, published a series of papers describing his findings (Hurst, 1951).
Hurst’s rescaled range (R/S) statistic is the range of the partial sums of a time series’ deviations from its mean, rescaled by the series’ standard deviation.
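The definition above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from any of the papers cited below: the function names, the dyadic subdivision scheme, and the log-log regression are my own choices, and the resulting Hurst estimate is the classical, uncorrected one (known to be biased upward in small samples).

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic: range of the mean-adjusted
    partial sums, divided by the sample standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())   # partial sums of deviations from the mean
    r = z.max() - z.min()         # the range R
    s = x.std(ddof=0)             # the standard deviation S
    return r / s

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent H as the slope of
    log(R/S) versus log(n) over dyadic sub-series lengths."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = n
    while size >= min_chunk:
        chunks = [x[i:i + size] for i in range(0, n - size + 1, size)]
        rs_vals.append(np.mean([rescaled_range(c) for c in chunks]))
        sizes.append(size)
        size //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope
```

For uncorrelated noise the slope should come out near 0.5, while persistent long-memory series give values above 0.5; in practice the estimate on short white-noise samples tends to land somewhat above 0.5 because of the small-sample bias noted above.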

Publications

CAMPBELL, John Y., Andrew W. LO and A. Craig MacKINLAY, 1997, The Econometrics of Financial Markets. Princeton University Press. [Cited by 135] (300/year)
2.6 Tests For Long-Range Dependence [5 pages]
"In particular, what the earlier literature had assumed was evidence of long-range dependence in US stock returns may well be the result of quickly decaying short-range dependence instead."

BAILLIE, Richard T., 1996. Long memory processes and fractional integration in econometrics. Journal of Econometrics, Volume 73, Issue 1 , July 1996, Pages 5-59. [Cited by 401] (38.83/year)
Abstract: "This paper provides a survey and review of the major econometric work on long memory processes, fractional integration, and their applications in economics and finance. Some of the definitions of long memory are reviewed, together with previous work in other disciplines. Section 3 describes the population characteristics of various long memory processes in the mean, including ARFIMA. Section 4 is concerned with estimation and examines semiparametric procedures in both the frequency and time domain, and also the properties of various regression based and maximum likelihood techniques. Long memory volatility processes are discussed in Section 5, while Section 6 discusses applications in economics and finance. The paper also has a concluding section."

TAQQU, Murad S., Vadim TEVEROVSKY and Walter WILLINGER, 1995. Estimators for long-range dependence: An empirical study, Fractals, Vol. 3, No. 4 (1995) 785-798. [Cited by 316] (27.93/year)
Abstract: "Various methods for estimating the self-similarity parameter and/or the intensity of long-range dependence in a time series are available. Some are more reliable than others. To discover the ones that work best, we apply the different methods to simulated sequences of fractional Gaussian noise and fractional ARIMA (0, d, 0). We also provide here a theoretical justification for the method of residuals of regression."

SIMONSEN, Ingve, Alex HANSEN and Olav Magnar NES, 1998. Determination of the Hurst exponent by use of wavelet transforms, Physical Review E, Volume 58, Number 3, Pages 2779-2787, September 1998. [Cited by 50] (6.01/year)
Abstract: "We propose a method for (global) Hurst exponent determination based on wavelets. Using this method, we analyze synthetic data with predefined Hurst exponents, fracture surfaces, and data from economy. The results are compared to those obtained with Fourier spectral analysis. When many samples are available, the wavelet and Fourier methods are comparable in accuracy. However, when one or only a few samples are available, the wavelet method outperforms the Fourier method by a large margin."

GIRAITIS, Liudas, Piotr KOKOSZKA and Remigijus LEIPUS, 2001. Testing for Long Memory in the Presence of a General Trend, Journal of Applied Probability, Vol. 38, No. 4. (Dec., 2001), pp. 1033-1054. [Cited by 29] (5.46/year)
Abstract: "The paper studies the impact of a broadly understood trend, which includes a change point in mean and monotonic trends studied by Bhattacharya et al. (1983), on the asymptotic behaviour of a class of tests designed to detect long memory in a stationary sequence. Our results pertain to a family of tests which are similar to Lo’s (1991) modified R/S test. We show that both long memory and nonstationarity (presence of trend or change points) can lead to rejection of the null hypothesis of short memory, so that further testing is needed to discriminate between long memory and some forms of nonstationarity. We provide quantitative description of trends which do or do not fool the R/S-type long memory tests. We show, in particular, that a shift in mean of a magnitude larger than N -1/2, where N is the sample size, affects the asymptotic size of the tests, whereas smaller shifts do not do so."

GIRAITIS, L., et al., 2000. Semiparametric estimation of the intensity of long memory in conditional heteroskedasticity, Statistical Inference for Stochastic Processes, Volume 3, Numbers 1-2 / January, 2000, Pages 113-128. [Cited by 17] (2.69/year)
Abstract: "The paper is concerned with the estimation of the long memory parameter in a conditionally heteroskedastic model proposed by Giraitis et al. (1999b). We consider estimation methods based on the partial sums of the squared observations, which are similar in spirit to the classical R/S analysis, as well as spectral domain approximate maximum likelihood estimators. We review relevant theoretical results and present an empirical simulation study."

ROSE, O., 1996. Estimation of the Hurst Parameter of Long-Range Dependent Time Series. Research Report. [Cited by 14] (1.36/year)
Abstract: "This paper is a condensed introduction to self-similarity, self-similar processes, and the estimation of the Hurst parameter in the context of time series analysis. It gives an overview of the literature on this subject and provides some assistance in implementing Hurst parameter estimators and carrying out experiments with empirical time series."

GIORDANO, S., et al., 1997. A Wavelet-based approach to the estimation of the Hurst Parameter for self-similar data. Digital Signal Processing Proceedings, 1997. DSP 97., 1997 13th International Conference on [Cited by 10] (1.07/year)
Abstract: "In this paper we analyse a wavelet based method for the estimation of the Hurst parameter of synthetically-generated self-similar traces, widely used in a great variety of applications, ranging from computer graphics to parsimonious traffic modelling in broadband networks. The aim of this work is to point out the efficiency of multiresolution schemes in the analysis of fractal processes, characterized by similar statistical features over different time scales. To this end we generated a huge amount of data using the random midpoint displacement (RMD) algorithm, a well-known fast technique for the generation of fractional Gaussian noise (fGn) traces. We then evaluated the Hurst parameter of such sequences in the wavelet domain and compared the results with those obtained with more traditional methods, based on the estimation of the fractal dimensional (Higuchi method) and the moments of the aggregated series."

HORVÁTH, L., 2001. Change-Point Detection in Long-Memory Processes. Journal of Multivariate Analysis. [Cited by 2] (0.38/year)
Abstract: "We discuss some methods to test for possible changes in the parameters of a long-memory sequence. We obtain the limit distributions of the test statistics under the no-change null hypothesis. The consistency of the tests is also investigated."

BARKOULAS, John T. and Christopher F. BAUM, 1997. Long Memory and Forecasting in Euroyen Deposit Rates, Asia-Pacific Financial Markets, Volume 4, Number 3 / January, 1997, pages 189-201. Also: Financial Engineering and the Japanese Markets, 1997, 4:189-201. [Cited by 2] (0.21/year)
Abstract: "We test for long memory in 3- and 6-month daily returns series on Eurocurrency deposits denominated in Japanese yen (Euroyen). The fractional differencing parameter is estimated using the spectral regression method. The conflicting evidence obtained from the application of tests against a unit root as well as tests against stationarity provides the motivation for testing for fractional roots. Significant evidence of positive long-range dependence is found in the Euroyen returns series. The estimated fractional models result in dramatic out-of-sample forecasting improvements over longer horizons compared to benchmark linear models, thus providing strong evidence against the martingale model."

TEVEROVSKY, Vadim, Murad S. TAQQU and Walter WILLINGER, 1999. A critical look at Lo’s modified R/S statistic. Journal of Statistical Planning and Inference, Volume 80, Issues 1-2, 1 August 1999, Pages 211-227. [Cited by 1] (0.14/year)
Abstract: "We report on an empirical investigation of the modified rescaled adjusted range or R/S statistic that was proposed by Lo, 1991. Econometrica 59, 1279–1313, as a test for long-range dependence with good robustness properties under ‘extra’ short-range dependence. In contrast to the classical R/S statistic that uses the standard deviation S to normalize the rescaled range R, Lo’s modified R/S-statistic Vq is normalized by a modified standard deviation Sq which takes into account the covariances of the first q lags, so as to discount the influence of the short-range dependence structure that might be present in the data. Depending on the value of the resulting test-statistic Vq, the null hypothesis of no long-range dependence is either rejected or accepted. By performing Monte-Carlo simulations with ‘truly’ long-range- and short-range dependent time series, we study the behavior of Vq, as a function of q, and uncover a number of serious drawbacks to using Lo’s method in practice. For example, we show that as the truncation lag q increases, the test statistic Vq has a strong bias toward accepting the null hypothesis (i.e., no long-range dependence), even in ideal situations of ‘purely’ long-range dependent data."
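The statistic Vq described in the abstract above is straightforward to compute. The sketch below follows the standard textbook form of Lo's modified R/S statistic (my own implementation, with Bartlett weights for the first q autocovariances); it is the object whose q-dependence Teverovsky, Taqqu and Willinger criticize.

```python
import numpy as np

def lo_modified_rs(x, q):
    """Lo's modified R/S statistic V_q: the range R of the
    mean-adjusted partial sums, normalized by sqrt(n) times a
    modified standard deviation S_q whose square adds
    Bartlett-weighted autocovariances of the first q lags
    to the ordinary sample variance."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    z = np.cumsum(d)
    r = z.max() - z.min()                     # the range R
    var = np.mean(d ** 2)                     # lag-0 term (sample variance)
    for j in range(1, q + 1):
        w = 1.0 - j / (q + 1.0)               # Bartlett weight, keeps S_q^2 >= 0
        gamma_j = np.sum(d[j:] * d[:-j]) / n  # lag-j sample autocovariance
        var += 2.0 * w * gamma_j
    s_q = np.sqrt(var)
    return r / (s_q * np.sqrt(n))
```

With q = 0 this reduces to the classical R/S statistic divided by sqrt(n). Under the short-memory null, Vq has the distribution of the range of a Brownian bridge, and the drawback the paper documents is visible numerically: raising q shrinks Vq toward the acceptance region even for genuinely long-range dependent data.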

GRAU-CARLES, Pilar, 2005. Tests of Long Memory: A Bootstrap Approach. Computational Economics, Volume 25, Numbers 1-2 / February, 2005, Pages 103-113. [not cited] (0/year)
Abstract: "Many time series in diverse fields have been found to exhibit long memory. This paper analyzes the behaviour of some of the most used tests of long memory: the R/S analysis, the modified R/S, the Geweke and Porter-Hudak (GPH) test and the detrended fluctuation analysis (DFA). Some of these tests exhibit size distortions in small samples. It is well known that the bootstrap procedure may correct this fact. Here I examine the size and power of those tests for finite samples and different distributions, such as the normal, uniform, and lognormal. In the short-memory processes such as AR, MA and ARCH and long memory ones such as ARFIMA, p-values are calculated using the post-blackening moving-block bootstrap. The Monte Carlo study suggests that the bootstrap critical values perform better. The results are applied to financial return time series."
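Of the tests compared in the abstract above, detrended fluctuation analysis (DFA) is the easiest to state compactly. The sketch below is my own minimal version (integrate the series into a "profile", remove a linear fit in each window, and regress the log fluctuation on the log window size), not Grau-Carles's code; for stationary fGn-like series the slope alpha coincides with the Hurst exponent H.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the scaling
    exponent alpha from the slope of log F(s) vs log s, where
    F(s) is the RMS deviation of the integrated series from a
    per-window linear trend at window size s."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())               # the integrated "profile"
    flucts = []
    for s in scales:
        n_win = len(y) // s
        f2 = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)      # linear detrending in the window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

White noise should give alpha near 0.5, a random walk near 1.5; the size distortions the paper studies concern how far such estimates (and the associated test statistics) stray from their nominal behaviour in small samples.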