Wednesday, February 29, 2012

In today's Financial Times, there's a front page article that provides a ranking of the top hedge fund managers, by the dollars in profit they've produced. It points out that Ray Dalio (Bridgewater Pure Alpha) has surpassed the legendary George Soros (Quantum Endowment Fund) for the number one slot.

LCH Investments, part of the Edmond de Rothschild group, did the rankings. LCH "assesses how much hedge funds have made over their lifetimes for investors in dollars. It argues that percentage returns distort performance as fund managers frequently find it hard to maintain big returns as they take in more money." Possibly LCH knows something the rest of us don't, but then again, perhaps not.

As reported in the article, Dalio's fund is the world's biggest with $72 billion. Does that not give him an unfair "leg up" on his competition, right from the start? Don't get me wrong, the numbers are impressive. But, to discard the use of percentages as a ranking tool seems to be a mistake to me.

Individual investors do want to know how they've done, in both percentage (hopefully delivered using a money-weighted method) and dollar (or Euro, Pound, Yen, etc.) terms. But dollars themselves bias the results towards the larger funds.

There is no question that small managers can achieve significant results that cannot be replicated as their AUM (assets under management) grows, in both the hedge fund and long-only spaces. It does seem a bit odd when we see annual rankings (by percent) where an extremely small mutual fund, for example, gets top honors while managing a minimal amount, invested primarily in but a few exceptional stocks (probably a case of luck rather than skill).
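To put numbers on the size bias, here's a minimal sketch. Only the $72 billion AUM figure comes from the article; both percentage returns are invented for the sake of the arithmetic:

```python
# Hypothetical illustration of the size bias in dollar-based rankings.
# Only the $72 billion AUM comes from the article; the returns are assumed.
large_aum, large_ret = 72e9, 0.10    # huge fund, modest 10% year
small_aum, small_ret = 100e6, 0.50   # tiny fund, spectacular 50% year

large_profit = large_aum * large_ret  # dollars of profit produced
small_profit = small_aum * small_ret

print(f"Large fund: {large_ret:.0%} return -> ${large_profit / 1e9:.2f} billion")
print(f"Small fund: {small_ret:.0%} return -> ${small_profit / 1e9:.2f} billion")
```

By dollars the large fund wins handily; by percent it isn't close. Neither view is wrong; they simply answer different questions.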

To discount the value of percentage returns, at least alongside dollars, is unfortunate. But, if the intent of this ranking is solely to say "who has generated the highest lifetime profits," then this is probably okay. Performance, though, should always be percent driven. Thoughts?

I have speculated that SS&C might be one of the firms to acquire them, as they have a history of buying others' products (for example, acquiring Financial Models). The irony is that Thomson, too, has operated in this mode for the past couple of decades.

I believe it was shortly after 9/11 when PORTIA had a major layoff, and began rethinking the appropriateness of this software in their long-term strategy. After a few years, they seemed to conclude that it was better to move forward, and began to pump some money into their products, including performance measurement. Then last year we learned that Thomson was once again "shopping" PORTIA. We look forward to seeing how this progresses. Both SS&C and Thomson have been clients of The Spaulding Group for years, and we wish our colleagues the best of luck.

Tuesday, February 28, 2012

Last November's WSJ had a Jason Zweig article in which he reminded us of Charles Mackay's 1841 book on market bubbles, whose title serves as this post's title. He pointed out how the author himself was fooled, shortly after his book was published, into believing the "absurdly unrealistic projections of future growth" for railway stocks. Others who have cautioned against bubbles often fall victim, too.

Investing is often equivalent to the "prisoner's dilemma," from game theory. Just recall for a moment how housing took off. Didn't you see at least one property whose value appeared completely absurd to you? But, the reality was that the price continued to rise! And so, were you the fool for not buying at the "absurdly high" price, which was still below what it ultimately sold for (before tumbling)? The academic literature has much to say on this topic, but as I recall, knowing you're in a bubble can be very difficult (impossible, perhaps?), and there is even debate as to whether you can tell, after the fact, that a bubble occurred!

No one wants to miss out on a rising market; to watch your friends accumulate great wealth (on their investments in growth stocks, real estate, or even tulips) can be quite depressing. Thus the dilemma.

What one has to be wary of are those who make fantastic claims about the future. Unfortunately, "outlandish claims" only appear outlandish after the fact. We often experience what seems outlandish, only later to find out that it wasn't; likewise, what seems to make sense often does not. Surely many would have thought it outlandish that the esteemed Wall Street veteran Bernie Madoff would turn out to be a crook, or OJ Simpson a murderer. And who would have thought that the Jets would win the Super Bowl in 1969, or, for that matter, the Giants this year (after their less than stellar regular season)?

More fuel, perhaps, for a challenge to placing too much stock in anyone's predictions.

Monday, February 27, 2012

I had a call on Friday from a longstanding friend and colleague who wanted to know whether she should use the beginning (VB) or ending (VE) values to asset weight portfolio returns. Let's put this into context.

This manager has a client whose history includes returns from a variety of managers, and they want to provide the client with a consolidated return. Of course, you won't be surprised to learn that I suggested that they use money-weighting, not time-weighting, since that would have greater value. But, putting that aside for now, if we are treating the collection as a "composite," do we use VE or VB?

Well, if you're already familiar with GIPS(R) (Global Investment Performance Standards), you probably know that they require the use of beginning values, and even though this client isn't creating performance that needs to comply with GIPS, the rationale behind this rule makes sense. I stumbled upon this scenario, which I think demonstrates why quite well:

If we use VE, we're using the values after the returns have been applied, which is arguably double-counting. In addition, what will we find? Well, let's see:

In spite of the overall market value not changing, by using the ending values we get a nonsensical 25% return.

However, if we use the beginning value:

Doesn't this result make a lot more sense? There was no change in the overall market value, and the 0.0% return reflects this. Using the VE method "double counts" the underlying returns, and this is the reason it's not what we do. Bottom line: VE means an erroneous result. VB is the rule in GIPS and should be the rule in any situation where you're looking to consolidate returns to provide a "big picture." Now, if I can just convince them to use money-weighting!
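The two calculations can be sketched in a few lines, with hypothetical values chosen so that one portfolio's gain exactly offsets the other's loss, leaving the overall market value unchanged:

```python
# Two hypothetical portfolios: one gains 50%, the other loses 50%,
# so the combined market value (200) is unchanged over the period.
portfolios = [
    {"vb": 100.0, "ve": 150.0},  # +50% return
    {"vb": 100.0, "ve": 50.0},   # -50% return
]

def asset_weighted_return(portfolios, key):
    """Weight each portfolio's return by its share of the total value
    under the chosen key: 'vb' (beginning) or 've' (ending)."""
    total = sum(p[key] for p in portfolios)
    return sum(p[key] / total * (p["ve"] / p["vb"] - 1.0) for p in portfolios)

print(f"VB-weighted: {asset_weighted_return(portfolios, 'vb'):.1%}")  # 0.0%
print(f"VE-weighted: {asset_weighted_return(portfolios, 've'):.1%}")  # 25.0%
```

The winner's inflated ending value gets the bigger weight, which is exactly the double counting at work.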

Saturday, February 25, 2012

The Journal of Performance Measurement(R) is beginning a series on performance measurement professionals, and we need your help to identify the folks we should include. We plan to focus on one or two people in each issue, but want the list to be driven by input from other PMPs.

Friday, February 24, 2012

Yesterday, The Spaulding Group held a luncheon in NYC, where Jed Schneider, CIPM, FRM reviewed some of the findings from our recent Performance Attribution survey. As with all of these research projects, some very interesting insights can be drawn. And since this is the fourth time we've surveyed asset managers on this topic, we can compare results from one period to another, to identify trends, changes, etc. Our cosponsors for this survey were:

As with all of our surveys, we invited our cosponsors to come to the luncheon and make brief presentations, and four did. One, Steve Shefras from BiSam, mentioned how it is often said that risk and return are two sides of the same coin. He went on to say that he believes they're on the same side. I immediately thought of a totally different analogy: a Möbius strip!

You may be familiar with them. They come from the mathematical field of topology, which deals with "mapping." A Möbius strip is a one-sided object. You can build your own by taking a long strip of paper, giving it a half-twist, and then connecting the ends. If you traverse either "side," you'll cover both "sides," meaning there is only one side.

And so, rather than saying that risk and return are on the same side of a coin (which encourages one to ask, "what's on the other side?"), we could say that they're both on a Möbius strip, and therefore on the same (and only) side.

Wednesday, February 22, 2012

In his autobiography, Bad as I Wanna Be, Dennis Rodman utilized some font magic, the likes of which I'd never seen before or since. Variable font sizing, bolding, etc., were employed throughout the book. His dramatic use of these tools was, I guess, in line with his own personal style, which, through the inking of tattoos around his body, hair color changes, various body piercings, and clothing choices, makes him a person who cannot be easily missed.

Well, might it not also be worthwhile to consider introducing a little "Dennis" into your performance reporting?

Why not accentuate certain items by bolding or enlarging the font size a bit? Why not introduce color, since a single color is not just monochromatic, it can also be monotonous!

Add some underlining or italics to set apart certain text.

Words alone convey information, no doubt. But, by taking advantage of the ability to alter font sizes and appearances, you can add drama and emphasis; highlight what needs attention; direct readers to items you really feel they need to notice.

As you may already know, I am not a fan of the idea of performance reporting standards. Firms often take pride in the custom materials they provide their clients, and don't need to conform to anyone's idea of "best practices." I doubt if we'll see anything regarding what I'm suggesting today in the ultimate standards document that's produced. But consider adding some of these techniques to your reporting. You're sure to get some attention!

BTW, I recommend Dennis' book. I read it when it first appeared, and found it quite enjoyable. Dennis is a unique character, no doubt.

Tuesday, February 21, 2012

In this past weekend's WSJ, an article titled "Religion for Everyone" appeared, taken from a book by Alain de Botton, which touched on the subject of "community." It begins "One of the losses that modern society feels most keenly is the loss of a sense of community. We tend to imagine that there once existed a degree of neighborliness that has been replaced by ruthless anonymity, by the pursuit of contact with one another primarily for individualistic ends: for financial gain, social advancement or romantic love."

The very word "community" carries a special meaning. Several years ago the pastor of our Church included the word in a sign that sits out front of our church building: instead of just "St. Matthias," it reads "Community of St. Matthias." Community means a sense of belonging, sharing, having something in common, friendship, interaction, having a common bond, and much more.

The article goes on to speak about the importance of discovering what someone does for a living, when we first meet them. And so it's clear that our professional roles count a great deal. Putting aside the religious aspect of community, let's just think about community and the profession of investment performance. Do we need one? Do we seek it out? Does it exist?

The reality is that community exists and is available for anyone in the investment performance field who wishes to take advantage of it. From The Spaulding Group's Performance Measurement Forum (a membership group that meets twice a year in the States and twice a year in Europe); to the various conferences held each year, including the CFA Institute's annual GIPS(R) (Global Investment Performance Standards) conference and TSG's annual PMAR (Performance Measurement, Attribution & Risk) conferences; to the CIPM (Certificate in Investment Performance Measurement) program, which confers an increasingly important designation upon those who successfully complete the examinations and meet its requirements; to the various LinkedIn groups; to the various industry committees that exist. Even subscribing to, sitting on the board of, or writing articles for The Journal of Performance Measurement(R) is a form of community for the performance measurement professional. The degree to which PMPs (Performance Measurement Professionals) avail themselves of these opportunities can determine the extent of knowledge, awareness, acknowledgement, and success they have. And arguably the degree of pride they have in working in our field. And, no doubt, their sense of community and sense of belonging.

When I was 14 I joined a youth group called the Order of DeMolay. I attended the first few meetings and didn't feel as if I really belonged. But, when I was appointed to a position in the chapter, that turned it all around for me, and I went on to be very active. How much we participate in any group or profession (whether holding positions on committees, writing articles, speaking at conferences, attending events, etc.) no doubt factors into our sense of belonging; our sense of community.

When I chair our conferences I mention how attendees can benefit not only from the speakers and the vendors who are present, but also from the fellow PMPs among them, who are there to gain knowledge and insight. Not only can relationships be formed, which can be profitable from multiple perspectives, but additional knowledge can be gained in learning how others deal with issues, use software, struggle with compliance, etc.

Our firm's "tag line" is "Performance Measurement is Our Passion." And so, it's not surprising that we have formed several groups, and regularly seek to provide a "sense of community" for PMPs. We strongly believe that such communities provide opportunities for individuals not only to learn, but to grow professionally, establish relationships with others, discover opportunities, and enhance their firms' operations. Communities allow folks to feel part of something. And it's clear that there are plenty of opportunities for just that, for the performance measurement professional. We hope you're taking advantage of some of them.

Friday, February 17, 2012

In the Fall 2011 issue of The Journal of Performance Measurement(R) we mentioned that this year we're starting a new section that will highlight the "Who's Who" of performance measurement. We invited readers to submit names to our editor, Doug Spaulding. While we've already received several names, we're looking to hear from more folks.

The position of chair of the verifier/practitioner committee for the GIPS(R) (Global Investment Performance Standards) Executive Committee (currently held by Carl Bacon, CIPM) is up, and nominations are being sought. And so, if you think you'd like to serve, let them know!

Who serves on the EC is very important, as it influences the direction of the Standards.

I was fortunate to be a member of the EC's predecessor group, the Investment Performance Council. I recall being approached by another American member the evening before a critical meeting, who told me flat out that "we lost": that mandatory verification was going to go through.

Just a bit of background: at that time we were working on the 2005 edition of the Standards, and the IPC was addressing whether or not mandatory verification should become the rule. I, as well as virtually all American members of the IPC, along with a few others, opposed mandatory verification. But here I was being informed that it was going to become a reality.

Later that evening I was approached by two other members of the IPC who asked me "what would it take for me to support mandatory verification?" My reply? Nothing; it wouldn't happen! I knew how important it was for verification to remain a recommendation, and I fully and strongly opposed this change. I was (to borrow a favorite George W. Bush term) resolute in my position. Let me be clear: it wasn't just my view, as that has minimal importance; it was the view of most of the people I represented on the IPC. I knew this was a critically important matter, and couldn't agree with the change.

The following day, this topic was brought up, and one of the gentlemen who approached me the prior evening suggested to the members that we alter direction regarding this matter, and after a long but cordial discussion, the IPC agreed to drop the idea entirely! And when the question of the shift from ten to five years of history was brought up, I was asked if the Americans would support this, and I said "of course," given that the members were willing to agree to drop mandatory verification.

And so, it matters a great deal who is on the EC. Differing opinions should be sought, desired, respected, and honored. Yes, members must be willing to compromise and seek common ground. But the failure to hold strong to important matters benefits no one.

Consider the change to the carve-out rules (i.e., eliminating the ability for compliant firms to allocate cash to the carved out sectors and requiring firms to manage the cash separately). The IPC had planned to introduce this change in 2005; however, in the crafting of the 2005 edition, given the overwhelming opposition to the change (that is, to eliminate the ability to allocate cash), it was pushed back. Unfortunately, we weren't given another chance to comment. Why not? This was, and still is, an important topic. Since I wasn't in any of these meetings, I do not know how this was addressed. But I do know that a large number of firms opposed this change.

And so it matters who serves on the EC. Hope you agree.

p.s., If you're wondering if I will be applying, I will. Wish me luck!

Thursday, February 16, 2012

I haven't given up on my arguments against the use of the aggregate method to derive the extremely important composite returns (which are the bedrock of the Standards).

I was conducting a GIPS(R) verification earlier this week, and stumbled upon the following on page 6 of the 2010 edition of the Global Investment Performance Standards:

"The composite return

is the asset-weighted average

of the performance

of all portfolios in the composite."

[emphasis added]

But, as I have pointed out repeatedly, this does not hold for the aggregate method, which calculates the return of the composite itself, and which can yield a hugely different result. And so, IF this IS the definition, why allow the use of a formula that violates it? We have two measures which DO satisfy this definition, and they should be the ONLY ones permitted.

Can I get an "amen" on this?

Now, in reality, I favor equal-weighting, but asset-weighting won't go away. But we can at least adhere to the intended definition and calculate it properly, can't we?
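To see how large the gap can be, here is a sketch with invented numbers: two portfolios, one of which receives a big contribution just before a steep loss. Each portfolio is revalued at the flow date, so the individual time-weighted returns are exact:

```python
def link(subperiod_returns):
    """Geometrically link sub-period returns into a period return."""
    growth = 1.0
    for r in subperiod_returns:
        growth *= 1.0 + r
    return growth - 1.0

# Portfolio A: 100 -> 100 (0%), then a +900 contribution, then 1000 -> 500 (-50%).
# Portfolio B: 100 -> 100 -> 100 (0% in both halves). All numbers are hypothetical.
r_a = link([0.0, -0.50])  # -50%
r_b = link([0.0, 0.0])    #   0%

# Asset-weighted average of the portfolio returns (equal beginning values):
asset_weighted = 0.5 * r_a + 0.5 * r_b  # -25%

# Aggregate method: treat the composite itself as one portfolio.
# First half: 200 -> 200; then the +900 flow; second half: 1100 -> 600.
aggregate = link([200 / 200 - 1.0, 600 / 1100 - 1.0])  # about -45.5%

print(f"asset-weighted: {asset_weighted:.1%}  aggregate: {aggregate:.1%}")
```

Same portfolios, same period, yet roughly twenty percentage points apart; the aggregate figure is a legitimate return of the composite's assets, but it is not "the asset-weighted average of the performance of all portfolios in the composite."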

p.s., I learned this form of writing from reading NBA Hall of Famer Dennis Rodman's autobiography (yes, I read it).

Wednesday, February 15, 2012

On Monday, Chris Spaulding, a company SVP who heads sales and client relations, and Patrick Fowler, our firm's COO, were interviewed on WCTC Radio. They discussed sales and marketing techniques, and a venture they began that provides services to New Jersey businesses.

And last night I participated in a panel discussion titled "How to Win the Hearts and Minds of Consultants," which will be aired on Asset TV at the beginning of March, and emailed to their 30,000 plan sponsors and consultant viewers in the US, and added to Bloomberg's 140,000 terminal network in North America. Lillian Jones (Managing Director, Artemis Global Partners) moderated the panel, along with Anna Gilligan of Asset.TV. Joining me on the panel were Russell Kamp (Kamp Consulting Solutions) and Oscar Gil Vollmer (Appomattox Advisory). The session centered on hedge fund transparency and compliance, and as you might expect, my focus was on hedge funds and GIPS(R) (Global Investment Performance Standards).

We hope to provide links to both programs, so you can listen and/or watch!

Tuesday, February 14, 2012

Monday, February 13, 2012

In a recent blog post I referenced the notion of taking the "spirit" of the GIPS(R) standards (Global Investment Performance Standards) into consideration when deciding what would be permitted. Being ever mindful of the underlying "spirit" of the Standards, though it is often difficult to define, is critically important.

Some 35 years ago I worked for a software consulting firm that was looking to hire new staff. And so, to encourage us to identify prospects, they rewarded anyone who provided management with names. Furthermore, if these folks got hired, the staff member would get an additional "bounty." One rather creative fellow decided to post a "help wanted" advertisement in the local paper, which resulted in quite a large number of resumes. Well, he was paid for each, but management quickly clarified that the idea was to refer people you knew, who you felt were (a) qualified and (b) would fit in with our culture. This fellow, while industrious, perhaps, clearly went against the spirit of the initial request; something anyone should have understood. Okay, so the rules weren't as clearly defined as they might have been, but give me a break!

Twisting and turning rules can justify almost any improper or inappropriate behavior. At dinner last night, with my wife, younger son, and his friend, we briefly discussed the idea of someone qualifying why their actions might be acceptable; I'm sure that most thieves and rule breakers have somehow found justification for their actions. While on a recent flight, a fellow sitting next to me continued to have his cell phone on after the plane began to taxi. I suggested that perhaps he should turn it off. He was a bit offended by my request, but my belief is that if the airline bothers to tell us to turn these devices off, there must be a reason for it. He justified his action by saying that he was emailing his son, who has diabetes; I guess he wasn't able to do that until we were taxiing. Oh, well.

And so, some things are clearly black and white, while there are other times when there may be gray areas, in which case being mindful of the spirit of the rules should help direct our choices.

Thursday, February 9, 2012

The GIPS(R) Standards (Global Investment Performance Standards) require compliant firms to make every reasonable effort to provide prospects with a compliant presentation (¶ I.0.A.9); actually, not just any presentation, but the one(s) that align with the prospective client's objectives. The Standards are largely silent, however, as to when the presentation needs to be provided.

The GIPS Guidance Statement on Supplemental Information states that "[it] does not prohibit firms from preparing and presenting information according to specific requests from prospective clients. However, firms are required to provide a compliant presentation prior to or accompanying any supplemental information." [emphasis added]

The GS defines "supplemental information" as "any performance-related information included as part of a compliant presentation that supplements or enhances the required and/or recommended provisions of the GIPS standards." [emphasis added]

This is almost circular logic, is it not? By definition, supplemental information accompanies a presentation, so how could it be sent in advance? But does it therefore also mean that if a firm provides performance information, separate and apart from the GIPS presentation, along with the requested details from the prospect, by definition it cannot be "supplemental information," and is therefore permitted to be sent?

While I recognize that someone may suggest that I am merely being difficult, I do see a hole in this logic. And I realize that one must always be mindful "of the spirit" of the Standards. If a manager isn't quite sure which presentation to send, or if it's quarter- or year-end, and the most recent materials haven't yet been finalized, but the firm wishes to quickly respond to the RFP, and include some preliminary information, with the expectation that presentations will be sent shortly thereafter, is there a problem? I don't see one.

Bottom line, I see ambiguities in what exists today. I also believe that "the spirit" of the Standards is always something to be mindful of. I realize that many like the Standards to be "black and white" on all matters, but there is often some gray, meaning room for interpretation. At a minimum, the presentation(s) needs to be provided in advance of the prospect becoming a client. And, I would say should be provided once you know which presentation(s) are appropriate. The wording regarding it being sent in advance of or along with supplemental information is a bit confusing, however, and I think can be open to interpretation. What do you think?

Wednesday, February 8, 2012

Leave it to my friend, Philip Lawton, PhD, CFA, CIPM, to find a way to link a philosopher with client reporting. In a recent blog post, he did just that, commenting on the initiative spearheaded by Stefan Illmer, to develop client performance reporting standards for the CFA Institute.

I want to preface my remarks by saying that I love Stefan; he is truly a gift to our industry. He has served us all quite well, most notably in his work on the GIPS(R) (Global Investment Performance Standards) Executive Committee. He is a great leader, who is skilled at the art of compromise. Client reporting has been a passion of Stefan's for some time (he was involved in the development of guidance for the European Investment Performance Committee, several years back). You should also know that Stefan and I agree on many more things than we disagree on (for example, we are both passionate champions of money weighting). Stefan is apparently being assisted by Dmitri Senik, along with others from the industry. I also hold Dmitri in high regard, as well as those members of the committee who have been identified to me. Volunteers should always be honored, for their contributions to our industry are great.

In his post, Philip appears to favor the introduction of standards; I do not. My issues include:

What problem are these standards to solve? You might ask, why must there be a problem, and you'd have a valid point; and so, why do we need them, then?

Has there been any evidence that the industry wants them? To the contrary, I've found that the industry clearly does not.

Many firms have custom reporting, and have little interest in adopting standards.

What impact will the standards have on asset managers? Plenty! Not only additional time and effort to comply, but also the cost of getting verified (as I understand it, Stefan's committee plans to include this as a recommendation). I often find myself having to remind folks that our firm, The Spaulding Group, actually is a "for profit" company. This isn't always so obvious, when I come out against GIPS performance examinations, and now reporting standards, which would surely bring additional revenue to our firm. But I do not want our verification clients to spend money they don't need to. I am doing a GIPS examination this week, so am clearly supportive of clients who find benefits in having them done; and, we will no doubt be ready and willing to verify clients' compliance with reporting standards. We just find the notion of such standards difficult to appreciate.

Let's face it: the CFA Institute has a great presence in our industry. Their contributions are exceptional. We fully support the CIPM program, recognize the value of the CFA, and obviously support GIPS. However, moving forward with this initiative, without first validating the need and desire for standards, is a problem, I believe. Will the reporting standards be implemented and introduced, regardless of what the majority of firms feel? If yes, it is likely that they will become de facto standards, requiring compliance.

We've taken this topic up at the Performance Measurement, Attribution & Risk (PMAR) conferences, meetings of the Performance Measurement Forum, and in conversations with clients, and find an overwhelming opposition to such standards. There is, however, interest in "guidance." Will Stefan and Dmitri's committee be willing to adjust what they're doing to introduce this softer item, or are they (and the CFA Institute) committed to standards? The CFA Institute has enough clout that the line from Field of Dreams will read "build it and they must come." Hopefully flexibility will be present, but only time will tell.

p.s., The Battle Royale at this year's PMAR conferences will deal with this topic. In London you'll be able to witness Stefan battle my colleague, John D. Simpson, CIPM. To avoid bias, Patrick Fowler will, as he always does, serve as moderator.

Tuesday, February 7, 2012

In conducting research studies, we often want to introduce a degree of randomness, to avoid the potential bias that might creep in if we select our cases directly. There are random number generators available to assist us, although many have been challenged for their "true randomness." I know that in research I'm doing on transaction-based attribution, I sometimes question whether the random number generator I've chosen is truly producing random results.
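As an aside, one way to make a "random" selection auditable is to publish the seed; the sketch below (the case names and seed are, of course, invented) lets anyone re-run the draw and confirm nothing was hand-picked:

```python
import random

# 100 hypothetical cases for a study; a fixed seed makes the draw reproducible.
cases = [f"case_{i:03d}" for i in range(1, 101)]

rng = random.Random(2012)       # fixed, published seed
sample = rng.sample(cases, 10)  # 10 cases drawn without replacement

print(sample)
```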

While sitting at the boarding area in Oslo, Norway this past Saturday, after conducting a GIPS(R) (Global Investment Performance Standards) verification for a client, one of the officials mentioned that there would be security checks done on a random basis. Well, I soon learned what random meant.

A young man, probably in his mid-to-late 20s, was the security person charged with "randomly" selecting passengers. I observed as he walked up and down the rows of passengers who were seated, awaiting word to board. A couple rows across from me was a very attractive young lady; and sure enough, she was his first "random" selection. While at first I thought this was humorous, when I observed that he was the one who carried out the full body search (and I want to emphasize the word "full"), I became a bit disturbed. But I became even more upset when I saw that his second "random" passenger was another young, attractive woman. This time the girl's father came rushing over, when he learned that she was to be screened. But this didn't deter the security man from once again carrying out the full body search.

Shortly thereafter we boarded, and this episode continued to bother me, to the point that I mentioned it to the attendant in charge, who suggested I contact United when I got home. Well, I decided to go up to some of the passengers who were forced to undergo what I saw as an embarrassing ordeal. I went first to the father of the young girl, who expressed his upset at what had occurred, and how he was reluctant to say much, fearing the result (thinking that he might even be arrested); i.e., he was intimidated by security. I ended up speaking with several of the young ladies (and there were several; all attractive) who were randomly picked. I wanted to see if I could get their contact information, in the event United wanted to speak with them.

Well, the complaint has been filed, and we'll see what becomes of it. But so much for randomness, right? What good is this extra level of security when most of those selected are young, attractive women (the heck with the older guy with explosives strapped to his body!)?

Randomness: a great concept that offers much value, but only if it really is random.

p.s., In the unlikely event you're wondering what my issues were: first, it wasn't random, which defeats the purpose of the exercise; second, a woman should have done the body checks of women; third, when checking "private" areas, the back of the hand should be used, not the front. Need I say more?

The (original) Sharpe ratio in effect compares two alternative combinations of treasury bills and portfolios (funds). The one with the higher (ex post) ratio provided a better (or less bad) average return per unit of risk. Thus if portfolio A had a higher ratio than B, a combination of bills and A with the same risk as a combination of bills and B had better (or less bad) performance.

Unfortunately such comparisons are all too common these days, so keep up your campaign.

I sometimes think that they don't make sense, but deep down they do. I confess to having fallen into the belief that negative Sharpe ratios were a problem, but I've come to believe that they are correct. The same issue often arises with time-weighted returns, which sometimes don't make sense at first glance but in reality are perfectly correct.

Friday, February 3, 2012

In The Spaulding Group's January newsletter, I expanded upon a recent blog post where I introduced a couple of graphics in an attempt to "make sense out of" negative Sharpe ratios. Two pillars of the investment performance community, Carl Bacon and Steve Campisi, chimed in with comments, which will appear in the February newsletter. In the meantime, I will take this topic a bit further, with inspiration from both gentlemen.

Would it not be useful to see how the Modigliani-Modigliani (M-squared) risk-adjusted measure responds to negative Sharpe ratios? I believe so. In full disclosure, I will admit to being kept awake last night thinking about this (graphing it in my head), until I got up to put the materials together.

On the positive side: let's begin by recalling how M-squared looks when we're dealing with positive returns.

Recall that we first plot the benchmark (in the risk/return graph) and draw a line from the risk-free rate through it; this is the "market line." We next plot the portfolio and draw a similar line. Here I show a case where the portfolio and benchmark have identical returns, but the portfolio has taken on added risk. Note that its line falls below the benchmark's, meaning it will end up with a lower M-squared value.

The fundamental step in this method is to equalize the risks, and this is done graphically here, where we shift the portfolio's point to the left, so that it aligns with the benchmark's risk; and, as predicted, we have a lower return.
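The graphical risk-equalization step corresponds to a simple formula: M² = rf + Sharpe × benchmark risk. Here is a small sketch with made-up numbers (not taken from the charts) reproducing the positive-side case, where a portfolio matching the benchmark's return while taking on more risk ends up with a lower M-squared.

```python
# M-squared sketch, positive-return case; all numbers are illustrative.

def m_squared(r_p, rf, sigma_p, sigma_bench):
    """Portfolio return rescaled to the benchmark's risk level."""
    sharpe_p = (r_p - rf) / sigma_p
    return rf + sharpe_p * sigma_bench

rf = 0.02
r_bench, sigma_bench = 0.10, 0.10   # benchmark: 10% return at 10% risk
r_port, sigma_port = 0.10, 0.15     # same return, but more risk

m2 = m_squared(r_port, rf, sigma_port, sigma_bench)
# Sharpe = 0.08 / 0.15 ≈ 0.533; M2 = 0.02 + 0.533 * 0.10 ≈ 0.073, below 0.10
```

Shifting the portfolio's point left to the benchmark's risk, as described above, lands it at roughly 7.3% versus the benchmark's 10%.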

What happens on the negative side?

I again chose a case where the portfolio has the same return as the benchmark, and where it also has taken on greater risk. But notice that it plots above the line. We again adjust the portfolio's risk, so that it aligns with the benchmark's, and we see that it has a higher return.

This is what people find confusing: more risk, same negative return, why not a lower Sharpe ratio (risk-adjusted return)? Do the graphics help? Perhaps in some cases, but surely not all.

That's why I hold to the notion that, by taking on more risk, we would expect the portfolio to have a much lower return; it doesn't, however, and thus it gets rewarded. Perhaps it helps to invert the thinking a bit: the benchmark took on less risk but did equally badly (i.e., it managed to do as badly as a portfolio that took on more risk), so it somehow captured even more negativity than one would have anticipated.
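The same arithmetic, run with made-up negative returns, shows why the riskier portfolio comes out ahead: with an identical negative excess return, dividing by a larger risk yields a less negative Sharpe ratio, and hence a higher M-squared.

```python
# M-squared sketch, negative-return case; all numbers are illustrative.

def m_squared(r_p, rf, sigma_p, sigma_bench):
    return rf + (r_p - rf) / sigma_p * sigma_bench

rf = 0.02
r_bench, sigma_bench = -0.10, 0.10   # benchmark loses 10% at 10% risk
r_port, sigma_port = -0.10, 0.15     # same loss, but more risk

sharpe_bench = (r_bench - rf) / sigma_bench   # -0.12 / 0.10 = -1.2
sharpe_port = (r_port - rf) / sigma_port      # -0.12 / 0.15 = -0.8 (higher)
m2 = m_squared(r_port, rf, sigma_port, sigma_bench)
# M2 = 0.02 + (-0.8) * 0.10 = -0.06, ahead of the benchmark's -0.10
```

Dividing the same negative excess return by a larger risk moves the Sharpe ratio toward zero, so the riskier portfolio scores better on both measures.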


About David Spaulding

is an internationally recognized authority on investment performance measurement. He's the founder and Chief Executive Officer of The Spaulding Group, Inc. (www.SpauldingGrp.com), and founder and publisher of The Journal of Performance Measurement. He's the author, contributing author, and co-editor of several investment books. He's actively involved in the investment performance industry, serving on numerous committees and working groups.
Dave earned his BA in Mathematics from Temple University, his MS in Systems Management from the University of Southern California, an MBA in Finance from the University of Baltimore, and a doctorate in Finance and International Economics from Pace University.
For more information please visit www.spauldinggrp.com/the-company/david-spaulding.html
