Abstract
There is an extensive historical dataset on real GDP per capita prepared by Angus Maddison. This dataset covers the period since 1870 with continuous annual estimates for developed countries. All time series for individual economies have a clear structural break between 1940 and 1950. The behavior before 1940 and after 1950 can be accurately (R2 from 0.7 to 0.99) approximated by linear time trends. The corresponding slopes of the regression lines before and after the break differ by a factor of 4 (Switzerland) to 19 (Spain). We have extrapolated the early trends into the second interval and obtained much lower estimates of real GDP per capita in 2011: between 2.4 (Switzerland) and 5.0 (Japan) times smaller than the current levels. When the current linear trends are extrapolated into the past, they intersect the zero line between 1908 (Switzerland) and 1944 (Japan). There is likely an internal inconsistency between the estimation procedures used before 1940 and after 1950. A reasonable explanation of the discrepancy is that the GDP deflator in developed countries has been substantially underestimated since 1950. In the USA, the GDP deflator is underestimated by a factor of 1.4. This is exactly the ratio of the interest rate controlled by the Federal Reserve to the rate of inflation. Hence, the Federal Reserve actually retains its interest rate at the level of true price inflation when the latter is corrected for the bias in the GDP deflator.
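The trend-break procedure described above can be sketched on synthetic data. The Maddison series itself is not reproduced here; the break year, slopes, and noise level below are invented purely to illustrate the three steps: fit linear trends on each side of the break, extrapolate the early trend forward, and extrapolate the late trend back to its zero intercept.

```python
# Sketch of the trend-break analysis on a fabricated GDP-per-capita series
# with a structural break around 1945 (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1870, 2012)
gdp = np.where(years < 1945,
               20.0 * (years - 1870) + 1000,     # slow pre-war trend
               90.0 * (years - 1945) + 2500)     # steeper post-war trend
gdp = gdp + rng.normal(0, 50, years.size)

def fit_trend(x, y):
    """Ordinary least squares line y = a*x + b; returns (slope, intercept)."""
    a, b = np.polyfit(x, y, 1)
    return a, b

a1, b1 = fit_trend(years[years < 1940], gdp[years < 1940])
a2, b2 = fit_trend(years[years > 1950], gdp[years > 1950])

print("slope ratio (late/early):", a2 / a1)
# extrapolate the early trend to 2011 and compare with the observed level
print("2011 observed / early-trend extrapolation:",
      gdp[years == 2011][0] / (a1 * 2011 + b1))
# year at which the late trend intersects the zero line
print("zero intercept of late trend:", -b2 / a2)
```

With the invented slopes above, the slope ratio comes out near 4.5 and the backward extrapolation crosses zero in the early twentieth century, mimicking the qualitative pattern reported for the actual Maddison series.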

5/25/12

The growth rate of real GDP per capita in the
biggest OECD countries is represented as a sum of two components – a steadily decreasing
trend and fluctuations related to the change in a specific age population.
The long term trend in the growth rate is modelled by an inverse function of
real GDP per capita with a constant numerator. This numerator is equivalent to
a constant annual increment of real GDP per capita. For the most advanced
economies, the GDP estimates between 1950 and 2007 have shown very weak and
statistically insignificant linear trends (both positive and negative) in the
annual increment. The fluctuations around relevant mean increments are
characterized by a practically normal distribution. For many countries, there
exist historical estimates of real GDP since 1870. These estimates, together
with a few new estimates from 2008 to 2011, extend the time span of our analysis. There are severe structural breaks in the corresponding
time series between 1940 and 1950, with the slope of linear regression
increasing by a factor of 4.0 (Switzerland) to 22.1 (Spain). Therefore, the GDP
estimates before 1940 and after 1950 have been analysed separately. All
findings of the original study are validated by the newly available data. The
most important is that all slopes (except that for Australia after 1950) of the regression lines obtained for the
annual increments of real GDP per capita are small and statistically insignificant, i.e. one cannot reject the null hypothesis of a
zero slope and thus of a constant increment. Hence the growth in real GDP per capita
has been linear since 1870, with a break in slope between 1940 and 1950.
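The zero-slope test on the annual increments can be sketched as follows. The increment series below is made up (a constant mean plus noise, standing in for dG(t) = G(t) - G(t-1)); the point is only to show the mechanics of regressing increments on time and comparing the slope with its standard error.

```python
# Illustration of the zero-slope test on annual increments (fabricated data:
# a constant mean increment of 600 with Gaussian fluctuations).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2008)
increments = 600.0 + rng.normal(0.0, 120.0, years.size)

# OLS slope and its standard error, computed by hand (no SciPy needed)
t = years - years.mean()
slope = (t @ (increments - increments.mean())) / (t @ t)
resid = increments - increments.mean() - slope * t
se = np.sqrt((resid @ resid) / (t.size - 2) / (t @ t))

print(f"slope = {slope:.3f} +/- {se:.3f}, t = {slope / se:.2f}")
# a |t| well below ~2 means the null hypothesis of a zero slope cannot be
# rejected, i.e. the annual increment is statistically constant
```

By construction the true slope here is zero, so the t-statistic stays small; for the real series the same test is what justifies treating the increment as constant.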

5/23/12

A month ago, we predicted
a drop in the S&P 500 to the level of 1300 by the end of May. We also
suggested buying the index when it reaches 1300. Both have happened by now. We are waiting for the level of 1500 in October 2013 to
sell and lock in the profit. The explanation from April is fully repeated below. The
red segment in Figure 2 is now black since the prediction has been realized.

This repeats
our previous post. Several days ago we
predicted the current fall in the S&P 500 index. For this reason, we did
not enter the stock market and instead invested in a defensive portfolio. We
are waiting for the level of 1350. The
reason is explained below.

Figure 1 shows the evolution
of the S&P 500 index since 1980. After 1995, the index behavior reveals
a sawtooth pattern with peaks in 2000 and 2007. The current growth resembles those
between 1997 and 2000 and from 2003 to 2007. There are two deep troughs, in 2002 and 2009, which are marked by red and
green lines, respectively. For the
current analysis we assume that the repeated shape of the teeth is likely
induced by a degree of similarity in the evolution of macroeconomic variables.
The intuition behind this assumption is obvious – in the long run the market
depends on the overall economic growth.

Having two peaks and two
troughs between 1995 and 2009, what can we say about the current growth in the
S&P 500? Before making any statistical estimates, in Figure 2 we have
shifted the original curve in Figure 1 forward in order to match the 2009
trough (blue line). When the 2002 and
2009 troughs are matched, one can see that the current growth path closely
repeats that after 2002. The first big deviation from the blue curve in Figure
2 started in 2011 and had an amplitude of 150 units (from 1210 to 1360). The black curve returned to the blue one in
August/September 2011. A month ago, we observed a middle-size deviation of
about 100 units and predicted that the
index would have a negative correction down to the level of 1300 any time soon. If the index repeats the path of the
previous rally one-to-one, one may expect a peak level of 1500 at the end of
2013. In two to four weeks (but not more than two months), when the negative correction is over, it might be a
good time to invest for a 15% return by October 2013.
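The shift-and-compare construction behind Figure 2 can be illustrated on a toy series. The sawtooth below is invented (it is not S&P 500 data); it only demonstrates the mechanics: locate the two troughs, shift the older path forward by their time offset, and measure the deviation between the matched paths.

```python
# Toy version of the trough-matching shift used in Figure 2.
import numpy as np

# fabricated "index": two sawtooth cycles with troughs at t=20 and t=70
index = np.concatenate([
    1500 - 40 * np.arange(20),   # fall into the first trough
    700 + 30 * np.arange(30),    # rally after the first trough
    1600 - 45 * np.arange(20),   # fall into the second trough
    700 + 30 * np.arange(10),    # current rally
]).astype(float)

trough1 = int(np.argmin(index[:40]))        # first trough
trough2 = 40 + int(np.argmin(index[40:]))   # second trough
lag = trough2 - trough1

# shift the post-trough1 path forward so that the two troughs coincide,
# then compare the current rally against the shifted template
template = index[trough1:trough1 + (len(index) - trough2)]
current = index[trough2:]
deviation = current - template
print("lag:", lag, "max deviation:", np.abs(deviation).max())
```

In this toy case the two rallies are identical by construction, so the deviation is zero; for the real index the interesting quantity is exactly this deviation series, whose excursions (the 150-unit and 100-unit episodes mentioned above) drive the predictions.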

With the S&P 500 falling
to 1350, the prediction does not seem unreasonable. The next several weeks
should decide the new level. In Figure 2, we have drawn the fall we expect
by the end of May 2012. We would wait until the end of April to decide on the
following move in the S&P 500. If the current fall reaches 1300, it is
likely a good time to buy. Otherwise, the end of May is the horizon to wait for the
bottom.

Figure 1. The evolution of the S&P 500 market
index between 1980 and 2012.

Figure 2. The curve in Figure 1 is shifted
forward to match the 2009 trough (blue line). Red line – the expected fall in the
S&P 500: from 1400 in March to 1300 in May.

5/11/12

The name of this blog suggests that I am a physicist trying to introduce some principal rules of mechanics into economics. Economics is a hobby rather than an everyday activity. At some point, it is worth presenting my professional work. Here is our poster from the 2012 General Assembly of the European Geosciences Union.

We have introduced cross correlation between seismic waveforms as a technique for signal detection and automatic event building at the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization. The intuition behind signal detection is simple – small and mid-sized seismic events close in space should produce similar signals at the same seismic stations. Equivalently, these signals have to be characterized by a high cross correlation coefficient. For array stations with many individual sensors distributed over a large area, signals from events at distances beyond, say, 50 km are subject to destructive interference when cross correlated, due to changing time delays between the various channels. Thus, any cross correlation coefficient above some predefined threshold can be considered a signature of a valid signal. With a dense grid of master events (a spacing between adjacent masters of 20 km to 50 km corresponds to the statistically estimated correlation distance) with high quality (signal-to-noise ratio above 10) template waveforms at primary array stations of the International Monitoring System, one can detect signals from, and then build, natural and manmade seismic events close to the master ones. The use of cross correlation allows detecting smaller signals (sometimes below the noise level) than the current IDC detection techniques provide. As a result, it is possible to automatically build from 50% to 100% more valid seismic events than are included in the Reviewed Event Bulletin (REB).

We have developed a tentative pipeline for automatic processing at the IDC. It includes three major stages. Firstly, we calculate the cross correlation coefficient between a given master and continuous waveforms at the same stations and carry out signal detection based on the statistical behavior of the signal-to-noise ratio of the cross correlation coefficient.
Secondly, a thorough screening is performed for all obtained signals using f-k analysis and F-statistics applied to the cross-correlation traces at individual channels of all included array stations. Thirdly, local (i.e. confined to the correlation distance around the master event) association of the origin times of all qualified signals is fulfilled. These origin times are calculated from the arrival times of these signals, reduced by the travel times from the master event.

An aftershock sequence of a mid-size earthquake is an ideal case to test cross correlation techniques for automatic event building. All events should be close to the mainshock and occur within several days. Here we analyse the aftershock sequence of an earthquake in the North Atlantic Ocean with mb(IDC)=4.79. The REB includes 38 events at distances of less than 150 km from the mainshock. Our ultimate goal is to exercise the complete iterative procedure to find all possible aftershocks. We start with the mainshock and recover the ten aftershocks with the largest number of stations to produce an initial set of master events with the highest quality templates. Then we find all aftershocks in the REB and many additional events, which were not originally found by the IDC. Using all events found after the first iteration as master events, we find new events, which are also used in the next iteration. The iterative process stops when no new events can be found. In that sense, the final set of aftershocks obtained with cross correlation is a comprehensive one.
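As a minimal illustration of the first stage, here is a schematic sliding-window correlator on synthetic data. This is not the IDC implementation: the template waveform, noise level, signal amplitude, and threshold are all invented, and station geometry, f-k screening, and association are omitted. It only shows the core idea – a normalized cross correlation coefficient above a threshold flags a candidate signal, even when the signal is comparable to the noise.

```python
# Schematic cross-correlation detector: slide a master-event template along
# a continuous trace and flag windows where the normalized correlation
# coefficient exceeds a threshold (all parameters are illustrative).
import numpy as np

def detect(trace, template, threshold=0.8):
    """Return sample offsets where the normalized CC exceeds the threshold."""
    n = len(template)
    tpl = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(trace) - n + 1):
        win = trace[i:i + n]
        if win.std() == 0:
            continue  # flat window: correlation undefined, skip it
        cc = np.dot(tpl, (win - win.mean()) / win.std()) / n
        if cc > threshold:
            hits.append(i)
    return hits

rng = np.random.default_rng(2)
master = np.sin(2 * np.pi * np.arange(100) / 10)   # toy template waveform
trace = rng.normal(0, 0.2, 1000)                   # background noise
trace[400:500] += 0.5 * master                     # buried copy of the master
print(detect(trace, master))
```

The buried copy has an amplitude comparable to the noise, yet the correlator picks it out near offset 400, while pure-noise windows stay far below the threshold – the same property that lets cross correlation recover events missed by energy-based detectors.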


DISCLAIMER

This is a personal site reflecting only my personal opinion. Statements on this site do not represent the views or policies of my employer, past or present, or any other organisation with which I may be affiliated.