Megaproject management

Megaproject management is a newish subfield of project management. Originally considered to be the special case of project management where the budgets were enormous (billions of dollars), it is developing into a separate specialization because of the high complexity of such projects and their tradition of failure. The driving force behind treating it as a separate field appears to be Bent Flyvbjerg, previously known around here as the first person to develop an applied procedure for Reference Class Forecasting. That procedure was motivated by megaprojects.

Megaprojects got their name from the association of mega with big, so think mega-city rather than mega-joule. It did match the unit prefix in the beginning, however: in the early 20th century such projects were mostly dams, bridges, or very large buildings.

The next shift upward took place with the Manhattan Project and then the Apollo program, which are also frequently drawn on as positive examples. The term ‘megaproject’ picked up steam in the 1970s, at the same time project costs crossed over into the billions.

Currently project costs of $50–100 billion are common, with even larger projects less common but not rare. If you were to view certain things which need dedicated management as projects, like the stimulus packages from 2008 or US defense procurement, then we have crossed over into the trillions and are entering a ‘tera era’ of megaprojects.

Ignoring these special cases, but counting infrastructure and industries where billion-dollar projects are common, megaprojects account for ~8% of global GDP.

Four Sublimes

These are four reasons which drive the popularity of megaprojects; each is roughly a bias shared by one type of stakeholder. They are:

The technological sublime: the excitement engineers and technologists get from building the largest, tallest, or fastest thing of its kind.

The political sublime: the rapture politicians get from building monuments to themselves and their causes.

The economic sublime: the delight business people and trade unions get from the money flowing from such large budgets.

The aesthetic sublime: the pleasure designers and design appreciators get from building something iconically beautiful.

The following characteristics of megaprojects are typically overlooked or glossed over when the four sublimes are at play and the megaproject format is chosen for delivery of large-scale ventures:

1. Megaprojects are inherently risky due to long planning horizons and complex interfaces (Flyvbjerg, 2006).

2. Often projects are led by planners and managers without deep domain experience, who keep changing throughout the long project cycles that apply to megaprojects, leaving leadership weak.

3. Technology and designs are often non-standard, leading to “uniqueness bias” among planners and managers, who tend to see their projects as singular, which impedes learning from other projects.

4. Frequently there is overcommitment to a certain project concept at an early stage, resulting in “lock-in” or “capture,” leaving alternatives analysis weak or absent, and leading to escalated commitment in later stages. “Fail fast” does not apply; “fail slow” does (Cantarelli et al., 2010; Ross and Staw, 1993; Drummond, 1998).

5. Due to the large sums of money involved, principal-agent problems and rent-seeking behavior are common, as is optimism bias (Eisenhardt, 1989; Stiglitz, 1989; Flyvbjerg et al., 2009).

6. The project scope or ambition level will typically change significantly over time.

7. Delivery is a high-risk, stochastic activity, with overexposure to so-called “black swans,” i.e., extreme events with massively negative outcomes (Taleb, 2010). Managers tend to ignore this, treating projects as if they exist largely in a deterministic Newtonian world of cause, effect, and control.

8. Statistical evidence shows that such complexity and unplanned events are often unaccounted for, leaving budget and time contingencies inadequate.

9. As a consequence, misinformation about costs, schedules, benefits, and risks is the norm throughout project development and decision-making. The result is cost overruns, delays, and benefit shortfalls that undermine project viability during project implementation and operations.

The Iron Law of Megaprojects

Over time.

Over budget.

Under-utilized.

These aren’t little, either: cost overruns of 1.5x are common, in bad cases they can run more than 10x, and 90% of projects have them; it is also common for projects to see 0.5x or less utilization once complete. This holds for the public and private sectors, and also across countries, so things like excessive regulation or corruption aren’t good explanations.
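The compounding here is worth making explicit. A toy calculation, using only the rough figures above (all numbers illustrative, not from any particular project):

```python
# Toy illustration of how overrun and under-utilization compound.
# The 1.5x and 0.5x figures are the rough "common case" numbers above.

budget = 1.0           # planned cost, normalized
planned_benefit = 1.0  # planned utilization/benefit, normalized

cost_overrun = 1.5     # "cost overruns of 1.5x are common"
utilization = 0.5      # "0.5x or less utilization once complete"

actual_cost = budget * cost_overrun
actual_benefit = planned_benefit * utilization

# Cost per unit of delivered benefit, relative to plan:
effective_multiplier = actual_cost / actual_benefit
print(effective_multiplier)  # 3.0: each unit of benefit costs 3x the plan
```

In the common case, then, the project quietly delivers a third of the value per dollar that it promised.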

They start off badly, but they do still manage to get completed, which is due to...

Break-Fix Model

Since the managers of megaprojects don’t know what they are doing, or don’t have the incentives to care, inevitably something breaks. Then additional time and money are spent to fix what broke, or the conditions of the project are renegotiated, and it limps along to the next break. This process continues until the project is finished.

If it is so terrible, and we know it is terrible, why do we do it this way?

Hirschman’s Hiding Hand

Because a lot of important stakeholders don’t know how terrible it is. From Willie Brown, former mayor of San Francisco:

“News that the Transbay Terminal is something like $300 million over budget should not come as a shock to anyone. We always knew the initial estimate was way under the real cost. Just like we never had a real cost for the [San Francisco] Central Subway or the [San Francisco-Oakland] Bay Bridge or any other massive construction project. So get off it. In the world of civic projects, the first budget is really just a down payment. If people knew the real cost from the start, nothing would ever be approved. The idea is to get going. Start digging a hole and make it so big, there’s no alternative to coming up with the money to fill it in.”

Nor is this without justification, for arguments have been made in its support. The first argument is exactly as Willie made it: if we knew how difficult large projects were, we would never build them.

For the second, note that the title of this section is hiding, not hidden. This argument was made by Albert O. Hirschman on the basis of earlier work by J. E. Sawyer, and it says that there is an error both in the estimation of costs and in the estimation of benefits, and these errors should roughly cancel out. The problem is that Sawyer’s work only pointed out that this was possible, based on a hand-picked sample of five or so projects. Hirschman then generalized it into a “Law of the Hiding Hand” and thereby legitimated lying to ourselves.

Alas, it is bunk. Aside from being falsified by the actual data, Flyvbjerg points out the non-monetary opportunity costs through the example of the Sydney Opera House. Its architect, the Dane Jørn Utzon, won the Pritzker Prize (the Nobel of architecture) in 2003 for the Sydney Opera House. It is his only major work: the catastrophic delays and cost overruns destroyed his career. Contrast with Frank Gehry, another inspired architect, and it looks like management’s bungling of the Opera House probably cost us half a dozen gorgeous landmarks.

Survival of the Unfittest

The prevailing attitude that it is perfectly acceptable to lie about and then badly manage megaprojects leads to a perverse scenario where worse projects are more likely to be chosen. Consider two competing projects, one with honest and competent management, and one with dishonest and incompetent management. The costs look lower for the latter project, the benefits look higher, and the choosers between them probably expect both to be over budget and behind schedule by about the same amount. Therefore we systematically make worse decisions about which projects to build.
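The selection effect is mechanical enough to sketch. In this toy model (all numbers invented), the chooser ranks projects by their stated benefit/cost ratio, so the project with distorted estimates wins despite being worse in reality:

```python
# Toy model of "survival of the unfittest": the chooser compares *stated*
# benefit/cost ratios, so distorted estimates win. All numbers invented.

projects = {
    # name: (stated_cost, stated_benefit, true_cost, true_benefit)
    "honest":    (100, 120, 100, 120),  # accurate estimates
    "dishonest": ( 80, 150, 160, 100),  # understated cost, overstated benefit
}

def stated_ratio(name):
    cost, benefit, _, _ = projects[name]
    return benefit / cost

chosen = max(projects, key=stated_ratio)
print(chosen)  # "dishonest": it looks better on paper (1.875 vs 1.2)

_, _, true_cost, true_benefit = projects[chosen]
print(true_benefit / true_cost)  # 0.625: the real ratio is the worse one
```

The honest project would actually have delivered 1.2 units of benefit per unit of cost; the chosen one delivers about half a unit.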

Light at the End of the Tunnel

Fortunately there are bright spots. During the Obama administration these failings were identified as an important policy area for the US government. It is now much more common for a megaproject failure to result in consequences for leadership, as for the CEOs of BP after Deepwater Horizon and of Airbus after the A380 superjumbo. There are megaprojects that go well and serve as examples of how to do it right, like the Guggenheim Museum in Bilbao. Lastly, there is scattered adoption of varying levels of good practices, like reference class forecasting and independent forecasting.
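Reference class forecasting is concrete enough to sketch here. The core move: ignore the project’s inside-view estimate of its own overrun, take the empirical distribution of overrun ratios from similar past projects, and uplift the budget to a chosen confidence level. A minimal sketch, with a made-up reference class (real ones are built from audited outturn data):

```python
# Minimal sketch of reference class forecasting for cost.
# The overrun ratios (actual cost / estimated cost) below are invented
# for illustration; a real reference class uses audited past projects.
reference_class = [1.1, 1.2, 1.3, 1.4, 1.5, 1.5, 1.7, 2.0, 2.6, 4.0]

def uplift(estimate, overrun_ratios, confidence=0.8):
    """Budget needed so that, judging by the reference class, the true
    cost stays within budget in roughly `confidence` of cases."""
    ratios = sorted(overrun_ratios)
    # Simple empirical quantile: the ratio exceeded only (1 - confidence)
    # of the time in the reference class.
    idx = min(int(confidence * len(ratios)), len(ratios) - 1)
    return estimate * ratios[idx]

print(uplift(1_000, reference_class))  # budget covering 80% of reference outcomes
```

The interesting property is that the forecast improves without requiring the project’s own planners to stop being optimists; the correction comes entirely from the outside view.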

Curated. This was the first post in a while that caused me to expand my thinking meaningfully. I had a vague sense that large projects had a bunch of dysfunction. It hadn’t occurred to me that really large projects might have different and/or worse systematic dysfunction, and that this might be an important lens through which to view global inadequacy.

I wonder about the suitability of this field as a target for EA careers. An unacceptably high percentage of that ~8% of GDP is wasted, and the picture gets worse when we entertain opportunity costs. Insofar as economic growth in general is good for alleviating suffering, the ability to prevent hundreds of millions of dollars in waste per project seems like a good deal.

The same mechanism occurs in developing countries, which are the traditional place to look for high-impact interventions. It seems to me that in places without a lot of other infrastructure built already, and not a lot of capital to invest, the utilization and opportunity-cost factors are bigger than they would be otherwise.

The newness of the field strongly suggests it is neglected, although I don’t have any sense of how people are chosen to manage projects of this size; so even if the expertise is neglected, it might still be very difficult to apply because of network effects or the like.

I wonder about the suitability of this field as a target for EA careers. An unacceptably high percentage of that ~8% of GDP is wasted, and the picture gets worse when we entertain opportunity costs. Insofar as economic growth in general is good for alleviating suffering, the ability to prevent hundreds of millions of dollars in waste per project seems like a good deal.

I suspect there would be a high replacement effect, i.e. if we managed to spend less on these big projects, we’d probably just spend the excess on more big projects or be more ambitious on the same projects. Many megaprojects are not obviously contributing to increasing welfare on the margin, although perhaps if megaprojects were cheaper we’d be more willing to invest in ones that are more about increasing welfare than status (I suspect status is a major player in megaprojects, since many of the examples that come to mind are unnecessarily ambitious when simpler, cheaper solutions would have worked but would have been less prestigious).

My intuition is that there wouldn’t be much of a replacement effect, unless you count different groups becoming more likely to attempt megaprojects, because the projects are more successful, as a replacement effect.

I expect this for a few reasons. First, megaprojects are usually organized according to a specific need, and I would be surprised if a given stakeholder (like a city or a corporation) had a meaningful backlog. Second, the current amount of spending is an accident; I think this is a different case from one where they spent much less than they originally planned. Lastly, most of this is debt spending, and I feel like organizations don’t go looking for ways to absorb all of their available credit.

It does occur to me that the debt point probably weighs against the EA value, because it effectively means the savings are amortized over the length of the financing, and because the same amount won’t necessarily be spent elsewhere it isn’t a direct benefit to anyone.

Interesting. One thing I would like to see mentioned more here (though perhaps I will have to dig for it myself) is the structure of the project management. It seems one clear characteristic is the complexity of the whole. Cost, and overall “size,” are clearly well correlated with that complexity, but I don’t think cost is the critical feature. I would perhaps pose it as a separability issue: can the overall whole be chunked out into bite-sized bits without too much coordination-type work, or not?

As more of a side thought, I wonder if anyone has done much work on spill-over effects from these megaprojects, and whether any categorization or characteristics are identifiable. We know there have been spillovers from both the space program and military programs. I am not sure about more commercial or government megaprojects, but you would think all infrastructure-type projects should benefit from some positive network-externality effects.

I am not trying to make any “if you build it they will come” argument here. If anything, it would be a “people are pretty good at figuring out how to make lemonade from a lemon” type of argument. This goes along with the too-complex-to-manage-well cases as well. We often will not know what the end benefits will be for many things: if the USA hadn’t done the electrification project to get power to rural communities, would we have the same type of communications networks we currently have? Worse? Better?

Of course this is not really about how to better manage such projects, and it’s likely that better management would allow such aspects greater potential and lessen any such effects.

I would perhaps pose it as a separability issue: can the overall whole be chunked out into bite-sized bits without too much coordination-type work, or not?

My understanding is no, it cannot. What you describe is the basic approach to project management, and the failure of that approach motivates the field. I can think of two specific reasons why:

The first is scale, and I think an intuition similar to Dissolving the Fermi Paradox applies: the question is not the likelihood of each part failing, but rather the likelihood of at least one bottleneck part failing. As the project grows large enough, we should expect to be perpetually choking on one bottleneck or another.
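That intuition is easy to make quantitative. If each of n critical parts independently has a small failure probability p, the chance that at least one is failing is 1 - (1 - p)^n, which goes to certainty as n grows. A quick sketch (the 1% per-part figure is an assumption for illustration):

```python
def p_any_bottleneck(p_part: float, n_parts: int) -> float:
    """Probability that at least one of n independent parts is failing."""
    return 1 - (1 - p_part) ** n_parts

# Each part is individually reliable, but a project with a thousand
# potential bottlenecks is almost guaranteed to be choking on one.
for n in (10, 100, 1000):
    print(n, round(p_any_bottleneck(0.01, n), 3))
```

At p = 1%, ten parts give about a 10% chance of a live bottleneck, a hundred parts about 63%, and a thousand parts make it a near-certainty.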

The second is magnitude, which is really the focus of the above paper. Once projects reach a large enough absolute size, more and different stakeholders enter the picture. Each new stakeholder is a stupendous increase in the political complexity of the project, so much so that even at the smaller scale of projects where we know the right answers, applying those answers is often impossible because of the different interests at play. This is why so much effort goes into keeping the number of stakeholders in decision-making as small as possible.

But you would think all infrastructure-type projects should benefit from some positive network-externality effects.

This is a component of the economic sublime, as I understand it. One example of the kind of stakeholder who enters the picture would be a restaurant owner a block away from the construction site, who expects to benefit from the redirected foot traffic due to construction, or the business of the construction workers, or the increased foot traffic after the project is completed, or all of the above.

As the project grows large enough, we should expect to be perpetually choking on one bottleneck or another.

I did understand that, but was suggesting that the criterion for a megaproject is really not the cost, though I fully expect high costs to be associated with such efforts. As you say, they cannot easily be separated into more manageable sub-projects. Perhaps I can rephrase my thought: is the position that any and every project that costs $X or more necessarily has this type of complexity and non-separability?

If not, then the ability to classify high-cost projects should be useful, and should point to alternative management requirements if all projects greater than $X still suffer from many of the same inefficiencies.

Each new stakeholder is a stupendous increase in the political complexity of the project, so much so that even at the smaller scale of projects where we know the right answers, applying those answers is often impossible because of the different interests at play.

Sure, and you run into the whole problem of what exactly the right answer is, as the different stakeholders are maximizing slightly different (and likely equally legitimate) criteria. That alone is not a bad or wrong thing. But the approach of limiting participation seems, in a way, exactly the same as chunking the project into manageable bites. And it’s not clear that can be done much better than disassembling the project into smaller, simpler, and more manageable sub-projects.

If the stakeholders are limited, then the assessment of the project will always be one of partial failure. That would also drive various types of cost overruns and time delays, as the excluded stakeholders seek to influence the project from outside the management process.

It’s not clear to me that this would be the optimal solution for all megaprojects.

Is the position that any and every project that costs $X or more necessarily has this type of complexity and non-separability?

That is a reasonable approximation of Flyvbjerg’s position. As you say, it is not really about cost per se; the cost is a heuristic for the things that drive complexity and non-separability, while also being the primary metric for success.