The risk corridor algorithm itself will tend to result in higher insurer receivables, compared to payables, due to an asymmetry in calculating the “target amount” (or expected cost) for each insurer.

The problem here is that the target amount really isn’t the “expected cost”. Actuaries of all stripes have struggled with how to characterize the target. In their seminal “3R” paper in 2013, the American Academy of Actuaries made virtually the same point in their otherwise informative Risk Corridor Chart.

Below the fold, I show that the “Target Claims” are best thought of as a function of *actual claims*, with two adjustments. The first adjustment applies when the plan’s profit is lower than the amount provided for in the regulation; the second applies when administrative expenses exceed the regulation’s cap. If the plan’s profit falls below 3% (or 5% in 2015), the target ends up below actual claims, giving the plan a chance to receive money through the risk corridor program. If the plan’s administrative expense, inclusive of profit, exceeds 20% (or 22% in 2015), the target ends up above actual claims, raising the probability that the plan will have to pay money into the program.

Given the nature of competitive markets, it is clearly much less likely that a plan will have administrative expenses (plus profit) greater than 20%. Conversely, it is quite easy to have profits less than 3%; in fact, many plans may have priced for lower profit margins than that. And this is the crux of Katterman’s “asymmetry”; it isn’t so much a direct characteristic of the formula as it is a characteristic of how the formula was calibrated.

It is admittedly mind-blowing to think of a target as being predominantly a function of the actual amount with those two odd adjustments, but the algebra is unforgiving. This is a significant distinction from the Part D risk corridor program, where the target is a function of a plan’s “bid,” which is a reasonable proxy for “expected”. In the ACA risk corridor, in contrast, if you price for something lower than 3% profit, then you would *expect* claims to be higher than target.
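That algebra can be sketched in a few lines. The function name, variable names, and dollar figures below are mine, and the 3%-floor and 20%-cap mechanics are applied as I read the regulation, so treat this as an illustration rather than the regulatory formula.

```python
def target_amount(premium, claims, admin):
    """Illustrative ACA risk-corridor target (hypothetical, simplified).

    `admin` is non-claims cost excluding profit; profit is the residual.
    The deduction from premium counts profit at no less than 3% of
    premium, and the whole deduction is capped at 20% of premium.
    """
    profit = premium - claims - admin
    deduction = min(admin + max(profit, 0.03 * premium), 0.20 * premium)
    return premium - deduction

# A plan priced for a 1% margin: profit sits below the 3% floor, so the
# target lands below actual claims and the plan leans toward receiving.
print(target_amount(100.0, 84.0, 15.0))   # 82.0, versus claims of 84.0

# A plan with 25% admin-plus-profit: the 20% cap pushes the target above
# actual claims, so the plan leans toward paying in.
print(target_amount(100.0, 75.0, 22.0))   # 80.0, versus claims of 75.0
```

Both numeric cases show the target tracking actual claims, shifted by whichever adjustment binds.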

The first AV Calculator was published prior to the 2014 benefit year. Deep in the bowels of the calculator were a series of continuance tables. In these tables, you could add up all the parts that make up health insurance claims — in-patient claims, ER claims, drug claims — and compare that to the total expected cost per member. If you do that comparison, you will find that the pieces are $297.06 short of the total, for all “combined” continuance tables.

That’s true for the platinum tables that have higher expected costs. It’s true for the bronze tables that have lower expected costs. That’s true of the silver and gold tables that are somewhere in between. To be clear, it’s not just true that all have a gap. All have the exact same size of gap, $297.06.
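The check is trivial to script. The component sums and totals below are invented placeholders; only the pattern, an identical $297.06 shortfall in every tier, mirrors what the calculator shows.

```python
GAP = 297.06   # the unexplained constant in the AV Calculator tables

# Hypothetical per-member-per-year dollars; the real continuance-table
# values differ, but every tier exhibits this same fixed shortfall.
tables = {
    "platinum": {"components_sum": 5702.94, "total": 6000.00},
    "gold":     {"components_sum": 5202.94, "total": 5500.00},
    "silver":   {"components_sum": 4702.94, "total": 5000.00},
    "bronze":   {"components_sum": 4202.94, "total": 4500.00},
}

gaps = {tier: round(t["total"] - t["components_sum"], 2)
        for tier, t in tables.items()}
print(gaps)   # the same 297.06 gap in every metallic tier
```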

Then along came 2015, when CMS introduced something called an “effective coinsurance rate” calculation in its draft calculator. All of the continuance tables were identical to the 2014 calculator, so the magic $297.06 appeared again, repeatedly. More interestingly, somewhere around line 3200 of the code inside Excel, you will find the following:

Suddenly, the magic $297.06 that was fixed for every benefit tier in 2014 becomes a function of the underlying coinsurance for the benefit plan (the variable “coins”). The draft 2015 calculator was never approved for use, so no harm, no foul, but it sowed the seeds for …

… the 2016 Calculator, where the odd code that appeared in the draft 2015 calculator shows up again, this time around line 3300 (search for “+ 297.06”). And this time, the calculator went into production.

Even worse, for 2016, ER costs in the continuance tables were increased by more than 13% (6.5% per year for two years). IP costs went up similarly. Physician costs went up similarly. In fact, every component of health spending for every benefit tier went up by about that same amount … except for the poor, lonely, uncategorized $297.06. This is why, when you measure annualized trend by category from the continuance tables, you will see roughly a 6.5% rate applied to each category … but the total increases by something lower than that, 6.2% and change, depending on the tier (see below).
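The dilution is simple arithmetic. The 6.5% trend and the $297.06 constant come from the calculator; the component total below is invented for illustration.

```python
TREND = 1.065    # annual trend applied to every cost category
GAP = 297.06     # the uncategorized constant, which never trends

components_2014 = 8000.00   # hypothetical sum of all trended categories
total_2014 = components_2014 + GAP

# Two years later: every category grows 6.5% per year; the gap does not.
total_2016 = components_2014 * TREND**2 + GAP

blended_annual_trend = (total_2016 / total_2014) ** 0.5 - 1
print(round(blended_annual_trend, 4))   # roughly 0.063, below the 6.5% input
```

The larger the untrended constant is relative to the component total, the further the blended trend falls below 6.5%, which is why the observed shortfall varies by tier.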

And now, at long last, we have a draft 2017 Calculator. Again, each component of the continuance tables went up 6.5%. So, naturally, the first thing I check in the new calculator is the magic $297.06. Would it survive another wave of trend? Would it remain constant across benefit plans? Would they continue to apply coinsurance to it in the macro, despite not allowing the underlying richness of the benefit tier to impact it?

Yes, YES, and YESSS!

This, my friends, is why you don’t set your premiums based upon the AV calculator. Because outside of Washington DC bureaucracy, there is nothing magic about $297.06 that makes it work for every benefit, every year, where you can simultaneously *apply* coinsurance (within the effective coinsurance rate calculation in the macro) and *not apply* coinsurance (the same gap exists in each metallic tier).
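To make that contradiction concrete (the coinsurance rates here are hypothetical): the continuance tables treat the gap as the same fixed dollars in every tier, while the macro multiplies those same dollars by plan coinsurance.

```python
GAP = 297.06
coinsurance = {"bronze": 0.60, "silver": 0.70, "gold": 0.80, "platinum": 0.90}

# Continuance-table view: every tier's total exceeds its component sum
# by the same fixed dollar amount, untouched by member cost sharing.
table_view = {tier: GAP for tier in coinsurance}

# Macro view (the "+ 297.06" code path): the same dollars are scaled by
# each plan's coinsurance before entering the calculation.
macro_view = {tier: round(c * GAP, 2) for tier, c in coinsurance.items()}

print(len(set(table_view.values())))   # 1: one constant for all tiers
print(len(set(macro_view.values())))   # 4: a different value per tier
```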

And in case anyone is interested, there’s a parallel issue within the separate medical/rx portion of the calculator. Perhaps that will be fun in a post I’ll do for 2018.

A summary of all of the relevant continuance tables, trend values, and the calculation of this magic $297.06 is below the fold.

P.S. Despite the sarcastic tone in this post, in all seriousness if there’s a rational explanation for what this $297.06 represents, I would like to hear about it. If there is such a rationale, it should be a prominent part of the documentation.

All forecasts are properly viewed as historical studies. Forecasts are, by definition, historical. The art is to discern which portions of the past are most relevant to understand the future. We want to choose objects in the mirror that are not just closer than they appear, but that are also predictive of where the objects will be. To be a good forecaster, it helps to first be a great historian.

A great example of a historical forecasting methodology is PECOTA. PECOTA forecasts future performance for a player by first going back in time and finding historical players that are “similar”. It then uses information on how those historical players evolved over their careers to project how the current player will evolve. So far, this approach is about as good as it gets for forecasting player performance.
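The mechanics can be sketched in a few lines; the player records and the similarity metric below are invented stand-ins for PECOTA’s far richer versions.

```python
# Toy comparables-based forecast in the spirit of PECOTA. All records
# and the similarity metric are invented for illustration.
history = [
    # (age, batting_avg_this_year, batting_avg_next_year)
    (27, 0.280, 0.285),
    (27, 0.275, 0.270),
    (28, 0.300, 0.290),
    (27, 0.290, 0.288),
    (31, 0.260, 0.240),
]

def forecast(age, avg, k=3):
    """Average next-year outcome of the k most similar historical players."""
    def similarity(rec):
        hist_age, hist_avg, _ = rec
        return abs(hist_age - age) + 100 * abs(hist_avg - avg)
    comparables = sorted(history, key=similarity)[:k]
    return sum(nxt for _, _, nxt in comparables) / k

# Project a 27-year-old hitting .285 from his three closest comparables.
print(round(forecast(27, 0.285), 3))   # 0.281
```

The forecast is literally a statement about history: pick the most relevant past, then let it speak about the future.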

Not all forecasts are this obviously historical, but all forecasts really are about intelligent selection of historical comparators.

This key relationship indicates why forecasts will always need both quantitative and qualitative components. Quantitative components — from numerical data that describes the past — are key to anchoring estimates of magnitude in an objective way.[1]

Qualitative components are necessary to adjust for limitations in the data and to accommodate the possibility that this time “really is different”. Frequently, historical data exists because of convenience or some other business purpose; rarely is the historical data directly applicable to the current problem. A significant degree of wisdom is required to judge when “this time really is different”, as is perhaps obvious.

This post may appear to be a truism, but I've found that model interpretation and forecasting errors frequently stem from a lack of appreciation regarding the relationship between history and prediction. Curiosity, energy, and time are all required to investigate the past in a comprehensive way. It is difficult — even in retrospect — to identify key causes for historical events. It is exponentially more difficult to select and measure which of those causal relationships will be the key drivers in the future.

Companies would do well to keep in mind that forecasts are as much about the past as they are about the future. The better you know where you've been, and why, the better you will be able to navigate where you will be.

[1] Even quantitative measures are susceptible to subjective interpretation and biases that influence the selection of the data. Nevertheless, quantitative evaluation helps provide a degree of dispassion, if used wisely.

[Technical Post] On Friday, a Vice President in our company asked me what our definition of Economic Capital was. I responded that we defined “economic capital” as the amount of capital necessary to cover unexpected losses at the 99% confidence level. That is total and complete gibberish. I have no idea what it means, I just mirror the sentence structure used by others. For examples, see Investopedia and other sources.[1]

Investopedia also provides a standard graphical representation, reproduced below:

Below, I will describe why my definition is gibberish, contrast it with what we are really trying to say, and close by noting that this is more than a semantics problem.
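For what it’s worth, here is the computation that sentence is usually taken to describe, run on simulated loss data (the distribution is invented): capital sized to cover the gap between the expected loss and the 99th-percentile loss.

```python
import random

random.seed(42)
# Simulated one-year loss distribution (purely illustrative).
losses = sorted(random.lognormvariate(0, 1) for _ in range(100_000))

expected_loss = sum(losses) / len(losses)
var_99 = losses[int(0.99 * len(losses)) - 1]   # 99th-percentile loss

# "Unexpected loss at the 99% confidence level": the buffer between the
# mean loss and the 99th-percentile loss.
economic_capital = var_99 - expected_loss
print(economic_capital > 0)   # True: the tail sits well above the mean
```

This is essentially the Investopedia chart in code form, which is exactly the formulation the rest of this post takes issue with.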

“The Machine Knows” is a classic Office episode that teaches us about actuarial modeling in the following clip:

There are many lessons embedded in these two minutes.

1) Be careful with model interpretation. Michael wanted to interpret the result literally; be warned, if you do this, you may get very wet.

2) Models are only guides. The Map is Not the Territory; the model is not reality. From the clip, it seems possible that the GPS system (aka the Map, aka the Model) was wrong and Michael was following it into disaster (like these individuals did in real life). Even if the GPS was correct, the clip illustrates that reality itself needs to be paid attention to, regardless of what you believe any model says.

2-alternate) Michael fell prey to the Reification Fallacy, one of the most prevalent and powerful modeling fallacies.

3) If you can’t understand a model, then be warned that disastrous results may follow. Sometimes, models erroneously embolden you. Always be humble when interpreting model results, and be open to contrary evidence. All models are wrong; some are useful.

4) Don’t be a passenger in a car driven by someone who takes his models literally. Unless you make your living in disaster recovery.

Tomorrow I’ll post on the analysis of risk, and how that could be applied to this video.

Russ Roberts recently interviewed Sam Altman, of YCombinator. In this EconTalk episode, Sam offered the following insights into what level of planning he expects to see from those who apply to become part of YCombinator:

Sam Altman: I’ve never written [a business plan] in my life. At the stage that we are operating at, it’s irrelevant. Like financial projections also we never look at. … We would rather them spend the time working on their product, talking to users. What we care about is: Have you built a product? Have you spoken to users? Can we see that? Can we talk about where it may evolve?

I think this is exactly right. As actuaries, our first instinct is to measure, quantify, and plan. However, you don’t have to have a detailed financial plan before engaging in an activity. What you have to have is a rational basis for believing that the activity has substantial merit. The level of modeling and projections must be related to (a) the ability of the forecaster to model accurately, and (b) the relative cost of producing the forecast. There is art in knowing when to model and when not to.

For young start-ups, the ability to forecast accurately is low, and the cost of forecasting is high, especially the opportunity cost. Further, if the business case has merit, the value proposition has to be easy to explain or it won’t take off. Business plans and pro formas should properly be viewed as a means of communication, not an end in themselves. And frequently the idea and business prospects can be best communicated in words, with examples, or with simple math that demonstrates scalability. And the simplest communication vehicle is frequently the most persuasive.

Most Catholic institutions affected by the recent contraceptive ruling fund their own health benefit plans. This means that there is no “insurer” available to pass the cost of that coverage on to, even in a shell-game sort of way (see prior posts on large and small employers that purchase insurance). When the Administration announced its compromise for the relatively insignificant fully-insured market, it didn’t offer any compromise for the much larger set of religious self-funded plans. Instead, they announced an intention to figure out how to compromise:

The Departments intend to develop policies to achieve the same goals for self-insured group health plans sponsored by non-exempted, non-profit religious organizations with religious objections to contraceptive coverage.

For such religious organizations that sponsor self-insured plans, the Departments intend to propose that a third-party administrator of the group health plan or some other independent entity assume this responsibility. The Departments suggest multiple options for how contraceptive coverage in this circumstance could be arranged and financed in recognition of the variation in how such self-insured plans are structured and different religious organizations’ perspectives on what constitutes objectionable cooperation with the provision of contraceptive coverage.

These options (beginning on page 16,507) can be summarized as follows:

1) Use drug rebates;
2) Use fees paid by the religious organization nominally designated for another purpose, such as disease management fees;
3) Use funds from a private, non-profit entity to be specified later;
4) Receive a “reinsurance contribution” fund rebate or tax credit (this only “works” for 2014-2016);
5) Have the federal Office of Personnel Management designate a national, private insurer that would offer this stand-alone coverage;
6) Give the national plan a “credit” so it wouldn’t have to pay its entire Exchange fee bill.

This is an incredibly weak set of ideas. These boil down to the following “pass the hot potato” funding sources: