Tuesday, 28 July 2009

Results for the CFA level 1 exam came out today. According to the CFA Institute, there was a 46% pass rate - the highest in years. I don't know if it was due to the move to 3-answer multiple choice questions or to a better-than-average candidate pool (there were probably a number of people out of work with extra time to study in there). Either way, congratulations to those who passed - you can now register for the Level 2 exam. If you didn't make it this time, you can take Level 1 again in December.

I have 3 former students (that I know of) who took the exam, and a couple more that were considering it. I expect the three I know of did well (hopefully, they'll let me know sometime today).

update: So far, two have reported in, and one passed - still waiting on the third.

Thanks for stopping by. If you want to receive updates, either sign up for email updates on the right sidebar, or add our feed to your RSS reader.

Sunday, 26 July 2009

I try not to get too excited about short-term market movements. At the same time, I have to keep up since I'm the faculty advisor for Unknown University's student-managed fund. Even so, it's been a pretty good week (and month, and year) so far - almost every equity index I can think of is in the green for the last month (and even year to date). As an aside, our fund is up 11.4% YTD (but I'm sure that'll change).

Monday, 20 July 2009

As many of you know, I'm in the midst of taking the CFA (Chartered Financial Analyst) exams. I had been registered for the Level 3 exam this last June, but dropped out because of the Unknown Son's health issues (and I'm glad I did, because the time was much better spent with him in his last days).

Well, I just re-upped and re-registered for the 2010 exam. So, I'm once again a Level 3 Candidate. At least this time I'm already familiar with well over half the material, so it shouldn't be nearly as stressful.

I might even start early and keep with it this time around. With luck, I might be done with my first pass through the material by January 1. While that seems early, if I shoot for January 1, I'll probably actually finish by March or so, and then I can focus on actually locking this stuff down. After all, the material is interesting, but I don't want to take any more exams for a while after this.

I remember watching/listening to the Apollo 11 moon landing as it happened on the television. Hard to believe, but it's been 40 years since then.

And yet, some conspiracy theory whack jobs still doubt that it happened. One moonbat (sarcasm intended) named Bart Sibrel systematically harassed the Apollo crewmembers to see if they'd admit that the landing was a hoax. He made the mistake of calling Buzz Aldrin a liar. Click below to see what happened.

Man - I could easily keep clicking this all day like one of those experiments where they gave mice crack.

Sunday, 19 July 2009

Instead of the CCAPM (Consumption CAPM), we now have the GCAPM (Garbage CAPM). Alexi Savov (a graduate student at the University of Chicago) finds that he can explain much more of the equity risk premium using aggregate garbage production than he can using National Income and Product Account (NIPA) data. Here's the logic behind his research (from Friday's Wall Street Journal article titled "Using Garbage to Measure Consumption"):

In theory, one way to explain the premium would be to look at consumption, a broad measure of wealth. People should demand a premium from an investment that goes down when consumption goes down. That’s because the alternative — bonds — hold on to their value when consumption declines. Another way to put it: When you are making lots of garbage, you are rich. When you stop making garbage, you are poor. Unlike bonds, which continue to pay out whether you produce lots of garbage (and are rich) or not, stocks are likely to lose their value during bad times. Therefore, investors should want a large reward for putting their money in something whose value decreases at the same time as their overall wealth decreases.

Unfortunately, the data typically used to measure consumption (the US government's figures for personal expenditures on nondurable goods and services in the National Income and Product Accounts) don't show a lot of variation, so they don't work very well as an explanatory variable. Savov finds that when he uses EPA records on aggregate garbage production, they exhibit a correlation with equity returns twice as high as the NIPA/equity-returns correlation. Here's the abstract of his paper (downloadable from SSRN):

A new measure of consumption -- garbage -- is more volatile and more correlated with stocks than the standard measure, NIPA consumption expenditure. A garbage-based CCAPM matches the U.S. equity premium with relative risk aversion of 17 versus 81 and evades the joint equity premium-risk-free rate puzzle. These results carry through to European data. In a cross section of size, value, and industry portfolios, garbage growth is priced and drives out NIPA expenditure growth.
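The abstract's logic can be illustrated with a toy simulation. All numbers below are made up purely for illustration (they are not from Savov's data): one "NIPA-like" consumption proxy is smooth and only loosely tied to equity returns, while a "garbage-like" proxy is noisier and more tightly tied to returns, mimicking the pattern the paper describes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # simulated annual observations

# Made-up data: equity returns plus two candidate consumption proxies.
# The "garbage-like" series is built to be more volatile and more tightly
# tied to returns than the smooth "NIPA-like" series.
equity = rng.normal(0.07, 0.18, n)
nipa_like = 0.02 * equity + rng.normal(0.02, 0.015, n)
garbage_like = 0.15 * equity + rng.normal(0.02, 0.02, n)

corr_nipa = np.corrcoef(equity, nipa_like)[0, 1]
corr_garbage = np.corrcoef(equity, garbage_like)[0, 1]
print(f"NIPA-like: {corr_nipa:.2f}, garbage-like: {corr_garbage:.2f}")
```

A more volatile, more correlated consumption measure is exactly what lets the CCAPM match the equity premium with a lower risk-aversion number - the pricing kernel has more to work with.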

Monday, 13 July 2009

It's a pretty well-known fact that correlations between asset classes increase in really bad markets. To get a sense of how much this effect matters in terms of portfolio diversification, read this Wall Street Journal piece (published Friday, 7/10) titled "Failure of a Fail-Safe Strategy Sends Investors Scrambling." Here's a snippet:

Correlation is a statistical measure of the degree to which investment returns move together. Between 1991 and 1994, the correlation between the S&P 500 index and high-yield bonds was low, at 0.2 or 0.3, according to Pimco statistics. (A correlation of 1 means returns move in perfect sync.) International stocks had a correlation with the S&P 500 of 0.3 or 0.4, and real-estate investment trusts had a correlation of 0.3, according to Pimco data. Commodities showed little correlation to U.S. stocks. By early 2008, investment categories of just about every stripe were moving significantly more in sync with the S&P 500. The correlation on international stocks and high-yield bonds rose to 0.7 or 0.8, and real-estate investment trusts to 0.6 or 0.7, according to Pimco's data for the previous three years.

The problem with portfolio diversification is that it is typically implemented using historical correlations (actually, covariances, but the two are essentially the same for this purpose). To provide optimal diversification, portfolio allocations should be based on forward-looking correlations. In practice, some managers adjust historical correlation estimates to reflect their views of future relationships, but that becomes far more complicated than simply using historical estimates and assuming they'll persist into the future.
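To see how much diversification benefit evaporates when correlation rises, here's the standard two-asset portfolio-volatility formula in Python. The 0.3 and 0.8 correlations echo the Pimco figures quoted above; the 60/40 weights and the 18% and 10% volatilities are made-up placeholders for illustration.

```python
import math

def portfolio_vol(w1, sigma1, sigma2, rho):
    """Volatility of a two-asset portfolio with weights w1 and (1 - w1)."""
    w2 = 1.0 - w1
    variance = ((w1 * sigma1) ** 2 + (w2 * sigma2) ** 2
                + 2.0 * w1 * w2 * rho * sigma1 * sigma2)
    return math.sqrt(variance)

# Same assets, same weights -- only the correlation changes.
low_rho = portfolio_vol(0.6, 0.18, 0.10, 0.3)   # 1991-94-style correlation
high_rho = portfolio_vol(0.6, 0.18, 0.10, 0.8)  # early-2008-style correlation
print(f"rho=0.3: {low_rho:.3f}, rho=0.8: {high_rho:.3f}")
# -> rho=0.3: 0.126, rho=0.8: 0.142
```

With correlation at 0.3 the mix runs at about 12.6% volatility; at 0.8 it jumps to about 14.2%, even though nothing else about the assets changed. That gap is the diversification that disappeared in 2008.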

Note: if you don't have an online subscription to the Journal, try searching for the article using Google News - if you click on the link there, it works around the WSJ subscription filter (however, not all WSJ articles can be accessed this way).

Friday, 10 July 2009

I've made progress on the paper I'm working on. Unfortunately, this week has been a good illustration of a quote from McCloskey: I believe it went something like "90% of writing is getting your thoughts straight, and 90% of empirical work is getting your data straight."

Unfortunately, my data wasn't straight - I realized that I had used the wrong data code (a certain type of dividend distribution) from CRSP. So, my previous analysis was basically crap (that's a technical term for the uninitiated) and had to be redone using the proper data set.

Luckily, it looks like my primary results hold up after using the proper code, with only a few minor changes. For now, I'm still doing the preliminary descriptive stuff. Since I did the initial version of the paper in a hurry (hey - it was a conference deadline), I took a few shortcuts. This time, I'm going back to step 1 and going over every line of code, and (just as important) making sure I know how the sample changes at each point. As a result, I'm much more confident in my data this time around.

But doing the descriptive statistics is still (to me) about the most boring part of the paper. Still, it's gotta be done.

Thursday, 9 July 2009

Here's an excellent piece on the Psi-Fi Blog, titled "Quibbles With Quants." Here's a choice part:

What the models failed to capture was that humans don’t behave in simple, predictable and uncorrelated ways. It’s impossible to overstate the importance of the way these models cope with correlation of peoples’ psychology. To sum it up: they don’t. Let me know if that’s too complex an analysis for the mathematical masters of the universe.

Anyone who’s ever been to a nightclub, a football game or even a very loud party will know that there are situations where we don’t act as individuals, buzzing about doing our own thing. These are occasions when we all suddenly stop being individuals and start doing the same thing – usually involving large quantities of drugs and some very bad singing. Although these sorts of events are specifically designed to trigger this behaviour – which is probably a deep evolutionary adaptation to sponsor group behaviour, useful when it comes to running down tasty antelope and dealing with giant, carnivorous sabre toothed beavers – it can also happen in other situations. Most stockmarket booms and busts are generated by similar group effects.

In general, people behave in an uncorrelated fashion right up until the point they don’t.

The momentum profits are realized through price adjustments reflecting shocks to firm fundamentals after portfolio formation. In particular, there is a consistent cross-sectional trend, from short-term momentum to long-term reversal, that happens to earnings shocks, to revisions to expected future cash flows at all horizons, and to prices. The evidence suggests that investors myopically extrapolate current earnings shocks as if they were long lasting, which are then incorporated into prices and cash flow forecasts. Accordingly, the realized momentum profits can be completely explained by the cross-sectional variation of contemporaneous earnings shocks or revisions to future cash flows. Importantly, these cash flow variables dominate the lagged returns in explaining the realized momentum profits. As a result, the realized momentum profits represent cash flow news that has little to do with the ex ante expected returns. In fact, the ex ante expected momentum profits are significantly negative.

On an unrelated note, the Unknown Family will be traveling the next few days for a family reunion in West Virginia (the Unknown Wife's father grew up there, and that fork in the family tree has a get-together every year). So, unless I schedule a few pieces to post automatically, posting will likely be slim for the next few days.

Saturday, 4 July 2009

We just got back from a trip to Wal-Mart (what could be more American?) to buy some clothes for the Unknown Daughter (she's grown enough that her old bathing suit no longer fits). The Unknown Wife and Unknown Daughter tried on clothes, while I wheeled the Unknown Baby Boy around the store until he went to sleep. Not surprisingly, the large-screen flat-panel TVs did the trick (based on initial indications, he's definitely a boy-child to the core).

Now we're getting ready to grill some critters, followed by fireworks. In the meantime, here are some links. They're from a previous year's post, but they're worth repeating (after all, at Financial Rounds, we're all about the efficiency thing):

Newsweek recently ran a profile of Paul Wilmott, one of the biggest names in the "quant" world. Here are the opening paragraphs:

Imagine an aeronautics engineer designing a state-of-the-art jumbo jet. In order for it to fly, the engineer has to rely on the same aerodynamics equation devised by physicists 150 years ago, which is based on Newton's second law of motion: force equals mass times acceleration. Problem is, the engineer can't reconcile his elegant design with the equation. The plane has too much mass and not enough force. But rather than tweak the design to fit the equation, imagine if the engineer does the opposite, and tweaks the equation to fit the design. The plane still looks awesome, and on paper, it flies. The engineer gets paid, the plane gets built, and soon thousands just like it are packed full of people and sent out onto runways. They fly for a while, but eventually, because of that fatal tweak, they all end up crashing.

In a way, this is what's happened in quantitative finance. The planes are the complex derivatives—like collateralized debt obligations—that now lie smoldering on the balance sheets of banks. The engineers are the "quants": those math and science Ph.D.s who flocked to Wall Street over the past decade and used mathematical models to build these new investment products. These are the people Warren Buffett was talking about when he said, "Beware of geeks bearing formulas" in his letter to shareholders this year. The quants aren't entirely to blame for the financial meltdown; there's plenty of guilt to be shared by regulators, top executives and the investors who bought the instruments the quants created. Yet while aeronautical engineers who willfully designed a faulty plane might be on trial for criminal negligence, Wall Street's math gurus are, for the most part, still employed. Strangely, the banks need quants more than ever right now. If anyone's going to figure out how to price these toxic assets, it's them. Quantitative finance isn't going away, but it is in desperate need of reform. And one man—a math geek himself—thinks he knows where to start.

Paul Wilmott is a 49-year-old Oxford-trained mathematician and arguably the most influential quant today, the brightest star in their insular, nerdy universe. The Financial Times calls him a "cult derivatives lecturer."

Wednesday, 1 July 2009

Just another day (or two) of torturing data. Like I mentioned a couple of days ago, a week back I decided to update a data set to include the last year or so of data (the data sources I use were recently updated). Like most "simple" jobs, it's turned out to be much more of a hairball than I expected. Although the program I used was fairly simple to rewrite, I realized that I had to update not one, not two, but THREE datasets in order to bring everything up to the present.

Caution: SAS Geekspeak ahead

One of the data sets is pretty large (it was about 70 gigabytes, but with the updates and indexing I've done, it's almost 100 gig). So, adding the new data and checking it took quite a while (no matter how efficiently you code things, SAS simply takes a long time to read a 70 gigabyte file). I thought I had everything done except for the final step. Unfortunately, the program kept crashing due to "insufficient resources."

For the uninitiated: when manipulating data (sorting, intermediate steps in SQL select statements, etc.), SAS sets up temporary ("scratch") files. They're supposed to be released when SAS terminates, but unfortunately, my system wasn't doing that. So, I had over 180 gigabytes of temporary files clogging up my hard drive, which meant there wasn't enough disk space on my 250 gigabyte drive for SAS to manipulate the large files I'm using.

Of course, I only realized this when my program crashed AFTER EIGHT HOURS OF RUNNING! TWICE!

I've now manually deleted all the temporary files, and I'm running the program overnight to see if this fixes the problem.
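Hunting down the stragglers by hand gets old fast, so a small script to find stale files is handy. Here's a minimal Python sketch; note that `/tmp/saswork` is just a placeholder (the actual SASWORK scratch location varies by installation - check your SAS config), and this version only reports, so you can eyeball the list before deleting anything.

```python
import os
import time

def stale_temp_files(work_dir, min_age_hours=24):
    """Return (path, size_in_bytes) pairs for files older than min_age_hours."""
    cutoff = time.time() - min_age_hours * 3600
    stale = []
    for root, _dirs, files in os.walk(work_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_mtime < cutoff:
                stale.append((path, st.st_size))
    return stale

# "/tmp/saswork" is a placeholder -- point this at wherever your SAS
# config actually puts its scratch space.
for path, size in stale_temp_files("/tmp/saswork"):
    print(f"{size / 1e9:8.2f} GB  {path}")
```

Anything still sitting in the scratch directory a day after your last SAS session ended is almost certainly an orphan from a crashed or killed run.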

Ah well - if it was easy, anyone could do it.

update (next morning): Phew! It ran - it seems the unreleased temporary files were the issue. On to the next problem.