Neo-Liberalism as Feudalism

by Henry on October 15, 2013

There’s a lot of good stuff in Colin Crouch’s new book, Making Capitalism Fit for Society (Powells, Amazon), but one point seems particularly relevant today. As umpteen people have pointed out, the rollout of the federal enrollment system for Obamacare has been a disaster. The polymathic David Auerbach has been particularly excellent on this.

The number of players is considerably larger than just front-end architects Development Seed and back-end developers CGI Federal, although the government is saying very little about who’s responsible. The Department of Health and Human Services’ Centers for Medicare and Medicaid Services (CMS), which issued the contracts, is keeping mum, referring reporters to the labyrinthine USASpending.gov for information about contractors. … By digging through GAO reports, however, I’ve picked out a handful of key players. One is Booz Allen … Despite getting $6 million for “Exchange IT integration support,” they now claim that they “did no IT work themselves.” Then there’s CGI Federal, of course, who got the largest set of contracts, worth $88 million, for “FFE information technology and healthcare.gov,” as well as doing nine state exchanges. Their spokesperson’s statement is a model of buck-passing … Quality Software Solutions Inc …[have] been doing health care IT since 1997, and got $55 million for healthcare.gov’s data hub in contracts finalized in January 2012. But then UnitedHealth Group purchased QSSI in September 2012, raising eyebrows about conflicts of interest.

… Development Seed President Eric Gundersen oversaw the part of healthcare.gov that did survive last week: the static front-end Web pages that had nothing to do with the hub. Development Seed was only able to do the work after being hired by contractor Aquilent, who navigated the bureaucracy of government procurement. “If I were to bid on the whole project,” Gundersen told me, “I would need more lawyers and more proposal writers than actual engineers to build the project. Why would I make a company like that?” These convolutions are exactly what prevented the brilliant techies of Obama’s re-election campaign from being involved with the development of healthcare.gov. To get the opportunity to work on arguably the most pivotal website launch in American history, a smart young programmer would have to work for a company mired in bureaucracy and procurement regulations, with a website that looks like it’s from 10 years ago. So much for the efficiency of privatization.

Otherwise put, it’s a good example of Crouch’s critique of neo-liberal efforts to ‘shrink’ government – that in practice it is less about free markets than the handing over of government functions to well connected businesses.

Outsourcing is … justified on the grounds that private firms bring new expertise, but an examination of the expertise base of the main private contractors shows that the same firms keep appearing in different sectors … The expertise of these corporations, their core business, lies in knowing how to win government contracts, not in the substantive knowledge of the services they provide. … This explains how and why they extend across such a sprawl of activities, the only link among which is the government contract-winning process. Typically, these firms will have former politicians and senior civil servants on their boards of directors, and will often be generous funders of political parties. This, too, is part of their core business. It is very difficult to see how ultimate service users gain anything from this kind of managed competition.

As Crouch suggests in an aside, we’ve been here before. The cosy relationship between corporations like CGI Federal and Booz Allen and the government bears a strong resemblance to feudalism (which, stripped of the pageantry, was a complex web of relations and privileges between a small and privileged elite of nobles and the state). It bears an even stronger resemblance to Old Corruption, the strangling web of sinecures and emoluments that radicals like William Cobbett inveighed against in the early nineteenth century. Government – even at the best of times – has many clunky and inefficient features (the American version particularly so – many of the worst inflexibilities of the US government have their origins in people’s distrust of it). Yet the replacement of large swathes of government with a plethora of impenetrable subcontracting relationships is arguably even worse – it has neither the efficiencies (sometimes) achieved by markets, nor the accountability (sometimes) achieved by democratic oversight.

I think this is somewhat of a misinterpretation of the events in the story. Most of the problems experienced here are common to large institutional IT projects. Think about the IT systems at any university you’ve dealt with, or any airline. These problems happen for a lot of reasons – intrinsic difficulty, time pressure, principal agent issues, etc.

This is compounded by several unique factors that made most of the usual remedies unavailable. For starters, delay was off the table, as was just not shipping the product and trying again another time. This is the most common outcome for things like this, and was out of the question here.

Additionally, government procurement is inherently more problematic than private-sector procurement, especially in a context like the US where trust in government is low, which leads to more complexity.

Some of these problems are fixable, but some are not, and all in all I’m actually surprised it worked at all in the time frame.

The absolute, hands down, not-even-a-smidgen-of-doubt-about-it way to have built the site cheaply and well would have been just to hire a large team of smart developers and pay them what they’re worth on the open market. The fact that the system has essentially been fundamentally rewired to keep that from happening is depressing on any number of levels.

It is astonishing how committed the Clinton-Obama strand of Democrats (and I guess New Labour in the UK?) are to the principle that activity carried out directly by the public sector –and especially by the federal government — must be kept to an absolute minimum. I admit I don’t really understand how someone can support a big expansion of subsidies and regulation in health care but be adamantly opposed to any new services related to health care being delivered by public employees. But it’s clear that’s the worldview of the people running the show. I’m glad to see that you guys are not too distracted by the current clown show to talk about this — for my money it’s much more interesting (and more destructive).

My only quibble: is “neoliberalism” (with or without the hyphen) really the best term? It seems like a new mutation.

A small example. One weird tic in a lot of the liberal defenses of Obamacare has been how much time they spend talking about the way state officials can bring their special local knowledge to designing their exchanges — which in practice turns out to mean something like co-branding them with a local sports team. I’m sure in part this is just trying to put a good face on the Rube Goldberg complexity of the thing, but I also get the sense that they genuinely feel that “no federal employees involved” should be a big selling point.

I’m not sure this is neoliberalism or anything particularly sinister. It’s just that some people now believe that software development (and IT in general) is a standardized capitalist industry, like, say, the automotive industry. You are not suggesting that the government should produce its own cars, are you? So, the bosses now hope and believe that producing software is the same as producing Buicks. That’s all.

If one really wanted to dive into the reasons why federal procurement is complex, you could do it, but trust me, it gets boring really fast. In short, the awarding civil servants are keen to paper the files with generically defensible and “rational” reasons for contract awards (price, the ability to regurgitate an “understanding” of the government’s requirements using the preferred jargon, performance ratings under past government contracts, ability to staff up quickly, key-employee qualifications), most especially major awards, because they fear legal protests by one or more losing bidders arguing that the awardee didn’t “deserve” the contract and the prescribed rules weren’t applied with scrupulous fairness. Protests are cheap to file at the GAO (where you don’t even need a lawyer) and are not especially expensive, as lawsuits go, at the U.S. Court of Federal Claims. Not that there’s anything wrong with that.

Observers have complained for decades that federal acquisitions degenerate into “proposal-writing contests” that favor the Beltway Bandits like Booz Allen and CGI. But the idea of federal procurement as a targeted patronage system is just not plausible from the inside. Graft and bribery happen, as most things do, but they are generally not worth the candle. For one thing, people can go, and have gone, to prison. For another, the people in secure federal procurement jobs typically don’t hear about political ramifications and couldn’t care less. They want to get on with their jobs – get the contract awarded, avoid or prevail against the loser’s protest, and get the appropriated money flowing.

Makes a lot of sense to stop talking about public and private, and instead split things up three ways: public service, free market, and contracting, corresponding to different priorities among the three common business motives of ‘do the job’, ‘make a profit’ and ‘don’t get caught breaking the law’.

Of those, it’s obvious to everyone except libertarians that the third should be a last resort, only applicable if for some reason you can’t get either employees who would do the job for a fair wage, or a high enough volume of transactions that the word _market_ is a fair description of the situation.

What do you mean “for direct personal consumption”? They buy cars for the FBI (or whatever), and I’m sure they are somewhat customized.

It’s just a matter of how well-standardized the industry is:
– I’ll take one healthcare application with an ERP and a CRM, and customer service on the side.
– Yes, Sir. Be ready next week. Where do you want it delivered?

I thought a big part of the “contracting out” phenomenon was about cost, and about avoiding stuff like this. Of course, the result is that you’re ultimately selecting for the companies that are the best and most connected in the contracting process, not the ones that are actually the best at doing the tasks needed.

The absolute, hands down, not-even-a-smidgen-of-doubt-about-it way to have built the site cheaply and well would have been just to hire a large team of smart developers and pay them what they’re worth on the open market.

I’m just utterly unconvinced of this. (I work in an IT-heavy field in insurance.) The number of projects that just take smart developers (versus experts on random weird interfaces, user functionality, and so on) is small. The number of projects that simply don’t work right after years and millions of dollars, when they have to integrate correctly with multiple other systems, is significant; the number of systems that have multiple off-system interfaces that actually work correctly at launch is in the “I once heard a rumor of one” category.

I’m just utterly unconvinced of this. (I work in an IT-heavy field in insurance.) The number of projects that just take smart developers (versus experts on random weird interfaces, user functionality, and so on) is small. The number of projects that simply don’t work right after years and millions of dollars, when they have to integrate correctly with multiple other systems, is significant; the number of systems that have multiple off-system interfaces that actually work correctly at launch is in the “I once heard a rumor of one” category.

I didn’t say they “just” take smart developers, nor did I say it would be flawless. It would simply be done better and cheaper than this idiotic morass of contracts and subcontracts. Doesn’t mean it would “just work” on launch, because nothing ever does, but it would still have been done better.

Whatever the model (I don’t have time to read the comments closely), software has always been contracted out, the government has almost no programmers on the payroll and never has, except for very old and boring applications like accounting. “Outsourcing,” to the extent it refers to something new, isn’t an applicable term.

The relevant problem is that there are basically three software models: off the shelf or commercial (which works pretty well), “outsourced” one-of-a-kind custom software (which often has coordination problems), and government outsourced (which has additional problems due to what might be compared to micromanaging on the part of the government procurement agencies).

I once worked for a large contributor to government contracts (US and otherwise), and one contribution the primary contractor apparently made was being the only party permitted to know what was actually on the site: once, a problem was tracked down to “this is impossible; the only thing I can think of is if you’ve got some encryption hardware on the system you’re not telling us about that’s malfunctioning, and in that case it can’t be our problem,” and no answer was ever received.

Part of the problem is thinking that there is “a” thing here. This is part of what’s really good about Auerbach’s article.

For a small bank (an industry I don’t know much about) I’d guess there are a few pieces:

1. The internal software that actually manages the money. This may be purchased, or may have been developed in-house long ago. I’ll bet on purchased these days.
2. The software that runs the mobile banking part of the website.
3. The software that runs the rest of the website.

All of these are almost certainly written by different people. Fortunately 3 doesn’t touch much of the rest of the system. 3 is probably written by a web development company using some content management system (WordPress, which powers this blog, is a CMS that’s probably much better than what your bank uses) that they bought from yet another company.

2 is written by some company that the bank bought the software from, and it was customized for the bank’s purposes by a consultant. The consultant might be the same as in 3, or the company that made the software, or someone else entirely.

1, which you never see, is written by yet other people, and is probably old.

In the healthcare.gov site, 3 was done by a company called Development Seed, which did a good job (and is a small startup contracted to by another contractor). They used much more modern tools, and are the reason the site looks nice.

The equivalent of 2 was done by a lot of contractors, and that’s where things all broke down — see Auerbach’s article.

1 is, in this setting, composed partly of a lot of existing, complex, and hard-to-interface-with government and insurance industry systems, and partly of new systems also written by contractors like CGI Federal.

It would simply be done better and cheaper than this idiotic morass of contracts and subcontracts.

That’s not my observation. In any big software build and implementation at which I’ve had a ring-side seat, that same kind of multiple-contractors-and-sub-contractors system is standard. The reason is the sheer specificity of some of the work–there aren’t that many experts on “interfacing with system such-and-such (written in COBOL, and improved with random accretions)” out there, so the normal thing is to hire one of the 10 such people as a sub-contractor. And unless you’ve done similar projects before, you won’t even know which experts you need.

The major challenge in web engineering is knowing how much you need to scale. You don’t know if you’re going to have ten thousand users or ten million users. Solutions that work great for small sites don’t work at all for medium-sized sites, and solutions that work for medium-sized sites don’t work for large sites. All the major sites (Amazon, Twitter, Facebook, etc.) have repeatedly re-engineered their systems to handle scaling problems, at huge expense and risk. Friendster failed because they couldn’t scale (nobody remembers it, but the first social networking site, PlanetAll, also failed to scale). But Obamacare is different. You know how many users you’re going to have. All you have to do is google “number of uninsured americans.” It shouldn’t have been this bad. If they’re this boned right now I doubt they’ll be doing any better two months from now.
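That googling exercise can be turned into a back-of-envelope sizing calculation. A minimal sketch, where every figure is an illustrative assumption rather than an official number:

```python
# Back-of-envelope load estimate for an enrollment site.
# All constants below are assumptions for illustration only.
UNINSURED = 48_000_000        # rough "number of uninsured americans" circa 2013
ENROLLMENT_DAYS = 180         # assumed length of the open-enrollment window
PEAK_FACTOR = 20              # assume peak arrival rate is 20x the daily average
SESSION_SECONDS = 15 * 60     # assume a 15-minute enrollment session

daily_users = UNINSURED / ENROLLMENT_DAYS
peak_users_per_second = daily_users * PEAK_FACTOR / 86_400
concurrent_sessions = peak_users_per_second * SESSION_SECONDS

print(f"avg daily users:       {daily_users:,.0f}")
print(f"peak arrivals/second:  {peak_users_per_second:,.1f}")
print(f"peak concurrent users: {concurrent_sessions:,.0f}")
```

Even with crude assumptions like these, the point stands: the load is bounded and knowable in advance, which is exactly the situation startups never enjoy.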

This is basically the argument made by libertarian advocates of single payer (e.g. Mike Munger). There aren’t many of these folks, but there are some. The critique, at least, is as common in public choice circles as it is on the left.

I’m not sure I buy the frame that this is neoliberalism, however, under any definition of that term that is useful. This is a partial nationalization of perhaps the most tightly regulated sector in the economy. Hence the need for all the lawyers and proposal writers complained about in the pullquote. The fact that administration of the system still relies on corporations is not a sufficient condition for neoliberalism. The neoliberal position was — and remains — to deregulate the sector, not subsidize it. So this is stretching things a bit thin:

“Otherwise put, it’s a good example of Crouch’s critique of neo-liberal efforts to ‘shrink’ government – that in practice it is less about free markets than the handing over of government functions to well connected businesses.”

Even if one accepts that the ACA is neoliberal, AFAIK there is not a single person arguing that the ACA is an effort to shrink the government.

Look, people, the concept of data sharing is not new and it’s not that difficult. The fact that people exist out there who handle this kind of work is proof that it can be done. If you’re prepared to do it correctly, then you go out there and you get those people. Offer them double if you have to, because otherwise, you’re just funneling money into administrative black holes that produce nothing of value. The whole idea that there’s this one person out there who knows the ancient COBOL system and the only way to get that person is to go through multiple subcontracting stages is beyond preposterous; no one is that valuable.

All of this is a classic example of piecemeal solutions that fail to do the right thing out of the gate. Oh, we’ll just hire this company, and then we’ll get these other people to fix the mess that the first people have made, and blah blah blah. Hell, I’m dealing with something like this right this very moment because some of my work involves dealing with people who work for a contractor and that contractor’s arcane security and data-sharing policies are driving me up the wall. And that’s a team with like 10 people on it! But this is a classic example of a situation where you pretty much know what the right thing to do is when you start out, because software engineering wasn’t invented yesterday, and from a technical standpoint most of these are solved problems. You just have to have the resolve to, you know, go ahead and actually implement the right solution. Incidentally, I was told that this is what the CFPB did when it was created; they just burned the old system to the ground and built a new one based on proper engineering principles.

The whole idea that there’s this one person out there who knows the ancient COBOL system and the only way to get that person is to go through multiple subcontracting stages is beyond preposterous; no one is that valuable.

One stupidly run government organization I was with (now sadly but mercifully destroyed) would get employees to start up some ridiculous precedent-free programming job that was nevertheless deeply tied into the existing infrastructure. Those employees would quit and get hired back as contractors with a ridiculous rise in pay because the managers knew no better.

It doesn’t really seem like the federal exchange has a swarm of old COBOL systems it needs to accommodate. The federales are the big fish: they just need to set their own standard for health insurance communication, make a new from-scratch system to implement it, and publicize the standard. Then the burden falls on the insurance companies to build an interface layer between their internal systems and the federal system. That’s part of their cost of doing business, and they can be left to sink or swim individually. Making a new system is relatively easy; you set your own requirements and you can accomplish them any way you find convenient. You can certainly do a bad job of it–your new system can be badly designed, inefficient, hard to maintain, confusing, insecure–but even if it has all those problems the system will still basically work. As long as the federal end works to a bare minimum extent, the insurance companies can take up the burden of accommodating the exchange’s peculiarities; if a few fail you just shut them out of the system until they’re ready. If the federal system doesn’t work to even that minimum extent, you can just hack away at it and add supplementary systems until it does. I think the fact that the fed system will set the standard is even more reason for it to be designed and maintained in house.

The federales do have to worry about (50+ varieties of) Medicaid, but I don’t think the exchanges even handle Medicaid enrollment, do they? They just provide information and links (easy) and people sign up for Medicaid through existing state bureaucracies.
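The “publish one federal standard and let each insurer build an interface layer to it” idea above can be sketched in miniature. Everything here is invented for illustration – the field names, the internal insurer format, and the validation rules are all hypothetical, not anything from the actual exchange:

```python
# Toy sketch: the exchange defines one fixed record format; each insurer's
# interface layer must emit records that validate against it.
# All field names and rules are hypothetical.
REQUIRED_FIELDS = {"enrollee_id": str, "plan_id": str, "state": str, "income": int}

def validate_enrollment(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"bad type for {field}: expected {ftype.__name__}")
    return problems

# One insurer's (hypothetical) interface layer: translate its internal
# record format into the published standard.
def insurer_to_standard(internal: dict) -> dict:
    return {
        "enrollee_id": internal["member_no"],
        "plan_id": internal["product_code"],
        "state": internal["st"],
        "income": int(internal["annual_income"]),
    }

ok = insurer_to_standard(
    {"member_no": "A123", "product_code": "SILVER-2", "st": "VT", "annual_income": "31000"}
)
print(validate_enrollment(ok))               # []
print(validate_enrollment({"state": "VT"}))  # three missing-field messages
```

The design point matches the comment: the federal side only has to publish the schema and reject non-conforming records; the translation burden sits with each insurer.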

Henry in the OP: As umpteen people have pointed out, the rollout of the federal enrollment system for Obamacare has been a disaster.

I realize this doesn’t speak to the points the OP wants to make, or to the rest of the thread, but honestly, just for the record, the rollout of the federal enrollment system really hasn’t been a “disaster” unless you think that anything short of flawless rollout constitutes a disaster. This is needless hyperbole.

Having occasionally benefited by being an academic sub-contractor to a federal (defense) contractor, I actually quite admire the competence and efficacy of the industrial proposal-writing process. These people are way more organized than we could ever imagine: hard deadlines for drafts, merciless reviews, professional copy-editing and visuals production. I only wish my university had such people on the payroll… but I’m sure they couldn’t afford it (in that penny-wise sort of way).

I don’t know about the marketing end of it, but on the technical end big-league contractors hire reputable first-rate practitioners who deeply understand their content area and are not afraid to let you know it. It really ups your game.

Of course what I’m familiar with is the research work end of things, not the marketing stuff (which I suppose is what healthcare.gov is really about). I’m hoping somebody has the energy to write a book about it, maybe something along the lines of Fred Brooks’ “The mythical man-month”.

Having had some experience with the implementation of hugely expensive turnkey systems, although as a data conversion guy, rather than a systems administrator or programmer, I used to ask myself how come the promised functionality was invariably either missing or terminally bollixed when it came to actual on-site deployment. How come Microsoft or Oracle or Sun stuff worked, and XYZ’s million-dollar custom-tailored application was crap? The only partially satisfactory answer I ever came up with is that high volume, low-margin stuff a) gets looked at by a lot more eyes, and b) puts the vendor out of business PDQ if it doesn’t work. It can’t hold the customer hostage like the big custom integrations can.

IBM seems to be able to pull such large-scale magic off — it’s been their core business for a while now, and as folklore has it, was in fact their salvation after mainframes went the way of the dinosaur — but the smaller, more specialized outfits I worked with always seemed to be undercapitalized messes which habitually promised more than they could deliver. Cost-plus may not be what the contract you signed specified, but when the lights go out on the back end, who you gonna call?

jeez, it has been, what, two weeks?
with a hostile GOP?
and this minor shutdown/debt ceiling going on?

and anecdotal data?

In any event, the idea that the private sector does better is survivorship-bias bollocks: as Chris@23 pointed out, lots of IT companies fail, and we only hear of the successful ones.

and, unlike the public sector, the private sector can bury failures; they spend 100 million on software that doesn’t work, and no one hears about it – or maybe, if you do a Graham and read the footnotes in the annual report, you might.

I know of several really, really bad websites at billion-dollar companies, where the website cost 50 million and barely works…and that is a lot less complex than health care.

I don’t know what other federal systems the exchange has to be integrated with. Maybe I’m just ignorant.

If the other federal IT systems are a notorious mess that’s a killer argument for bringing them in-house. Have in-house teams working on each system with a continuous improvement approach rather than treating it as one-time procurement. Have a fixed size team/budget and have them prioritize work at as low a level as possible rather than giving out contracts with bidding, negotiation, inflexible requirements, and the rest.

But why was it all done as a “big bang” on a single date? Surely the way to do this was to progressively roll out the exchanges state-by-state, starting with a couple of small states as guinea pigs.

Anybody who knows anything at all about IT projects (or indeed any other large complex project) knows that wherever at all possible you start small and grow so you can learn from your (absolutely inevitable) mistakes. It’s as though NASA’s first launch after Kennedy’s speech was a moon landing attempt.

Basically, very large software projects are *hard*, and interfacing with many other systems and databases is *nightmarishly hard*. I don’t know any way to *reliably* get a project as huge and gnarly as healthcare.gov built, tested, and functional if you can’t start small and build, which the law didn’t really permit.
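The “start small and grow” approach the comments argue for can be sketched as a staged, state-by-state rollout schedule. The dates and state groupings below are entirely hypothetical, chosen only to illustrate the pattern:

```python
from datetime import date

# Hypothetical staged rollout: small guinea-pig states first,
# larger waves only after the earlier ones have shaken out the bugs.
ROLLOUT = [
    (date(2013, 4, 1), ["VT", "RI"]),         # small pilot states
    (date(2013, 7, 1), ["OR", "KY", "CT"]),   # medium wave after fixes
    (date(2013, 10, 1), ["TX", "FL", "CA"]),  # large states once stable
]

def enabled_states(today: date) -> set[str]:
    """Return the states whose exchanges are live on a given date."""
    live = set()
    for start, states in ROLLOUT:
        if today >= start:
            live.update(states)
    return live

print(enabled_states(date(2013, 5, 15)))   # only the pilot states
print(enabled_states(date(2013, 11, 1)))   # everyone in the schedule
```

The point of the structure is that each wave is a full end-to-end test of the same system at a smaller scale, which is exactly what a single-date national launch forgoes.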

18 or more years ago, I was a sub-contractor on a large project related to the US Postal Service’s efforts to do something with bulk (aka “junk”) mail (not to get rid of it, but mostly to make it cheaper for the senders and the USPS).

More precisely, I was the $35/hr independent programmer hired by the small consulting company who was hired by the Big 3 consulting company that was hired by the USPS. At a meeting in DC, I managed to figure out that my time was billed to the USPS via the Big 3 company at about $350/hr.

It is small wonder that large scale IT projects are so expensive with this level of graft. And just how bad is the graft? After I gave an impromptu talk on “Internet Security” (a somewhat novel idea back then), the Technical Lead (his official title) from the Big 3 firm came up to me and asked to meet in his office. His question: “Could you clarify what you mean by a protocol?”

(for laymen, this is like a senior car mechanic asking the kid around the corner, “Could you clarify for me what you mean by the word ‘tire’?”)

I don’t have immediate answers, but it is clear that whatever systems of procurement and bidding could give rise to this kind of abject waste and stupidity really need to be completely rejected and reinvented anew.

What do you want, it’s a booming industry, a gold rush that has been going on for decades. The real miracle is that something actually gets done somehow, when your dumb specs, after passing through 7 middle-men, end up with a 20-year-old in Bangalore, the proud graduate of a 3-month class in JavaScript. But don’t worry, we’ll get you on support and maintenance.

“How come Microsoft or Oracle or Sun stuff worked, and XYZ’s million-dollar custom-tailored application was crap? … gets looked at by a lot more eyes ….”

Right. In the end, as they say, there can be only one. One bank app, one insurance app, one hotel application, one post office app. You buy it, install it, and it works. You adjust your business processes to comply with the app, not vice versa.

This is compounded by several unique factors that made most of the usual remedies unavailable. For starters, delay was off the table, as was just not shipping the product and trying again another time.

The fact that “delay” and “not shipping the product” are considered not only to be “remedies” but “the usual remedies” fills me with a certain amount of dread.

I kind of agree, and I would have liked to read all the comments first, but I feel obligated to point out, as an IT worker of some decades already, that, frankly, the IT services industry’s performance on the fundamental metric of “not having your project cost 100x more and end up being a total failure anyway” is abysmal, public or private sector alike.

Not to say all those other factors don’t play a role, and hell, it’s not like the “feudalist” network of ties and bonds doesn’t apply when big corporation X wants new system Y and seeks a provider. But really, systems that are overpriced, awful, and barely working when not actively sabotaging the productivity of your company are something close to the norm in IT projects.

Kindred Winecoff:
“I’m not sure I buy the frame that this is neoliberalism … This is a partial nationalization of perhaps the most tightly regulated sector in the economy … The neoliberal position was — and remains — to deregulate the sector, not subsidize it.”

We can distinguish neoliberal principles, and the commentariat asserting them, from neoliberal practices, i.e. actual outcomes from attempts to move toward implementing those principles in a real world with deep disagreements and some people hungry for hierarchy and power.

I’ve worked through a few large IT projects on the government side. Some went well, some badly. A glance through a few issues of the trade mags and some scrutiny of the IT pages suggested this was pretty normal. For instance, in a period of two years all of Australia’s major banks each wrote off IT projects in the 300-400 million dollar range. Which attracted a couple of paragraphs in most cases (whereas the government ones got full pages plus scrutiny by parliamentary committee). Major IT seems to be very hard, and is not made easier by the common insistence that the best remedy for failure is more layers of formal project management.

#42: “delay” and “not shipping” are the usual remedies, yes. Imagine it was an airplane – would you insist on having it fly even if the builders can see that it is in no way flight-worthy?

The question of why it happens so much, so often in IT is the one that is interesting. (Quick explanation: mainly communication, and the fact that half the client wants an airplane and half a tank; management reads that and thinks the answer is a submarine; the engineers are dreaming of a spaceship…) But between the failure of being late and the failure of launching a disaster, if you can’t have the first one you are really screwed.

not having your project cost 100x more and end up being a total failure anyway

For a profit-driven third party, how does ‘having your project cost 100x more’ count as anything but a glorious success beyond the dreams of avarice?

Realistically, any example of delivering a one-off product on time and budget means someone screwed up big time; that is just leaving the contingency budget on the table. Of course, sometimes they screw up the other way and exceed the contingency budget too; measures added to justify a budget-matching cost can take on a life of their own…

No reason you can’t hire on a temporary or indirect basis; seasonal fruit pickers are still labour, still work for the farmer. And if someone has a shrink-wrapped product that you can commit to using as-is, you can buy that.

Anything else is applied computational demonology: you are trying to use law to bind a hostile entity to your will. And not only is it more powerful, it’s _smarter than you_: it employs more, and better-paid, engineers, lawyers, marketers, etc. Not many examples of that ending well, fictional or otherwise.

The problem is that it seems to be a universal constant, not a case of greedy exploitation. That is, it’s not that crafty and unscrupulous providers are conspiring to bleed you dry; those same providers fail EXACTLY THE SAME when doing their own projects.

Simply put, nobody has a clue about how to deliver IT projects in a reliable way.

those same providers fail EXACTLY THE SAME when doing their own projects

You seem to have the perspective that going massively over budget is somehow a loss, not a win. Does Walmart say ‘damn, we had to build another X mega-stores to fully serve the population’, or do they say ‘now we have Y% increased market-share’?

That’s how capitalism _works_; greater growth means a bigger pool from which bigger profits can be extracted. A contracted project coming in on budget is like there being one town where Walmart just never bothered opening up.

Obviously it’s pathological if you are the one both issuing and fulfilling the contract, but what organisations are so self-aware they can switch ideologies and working practices on a dime?

@Sam Tobin-Hochstadt #1, @SamChevre #13, @Doctor Science #38. Yes. Though I’m not an IT guy, some of my best friends have spent their adult lives in the biz and, for various reasons (e.g. Sam Tobin-Hochstadt #20, SamChevre #21), large software projects are a mess. Whether programs for the Federal Gov’t are worse than private sector projects, I don’t know, but I’d guess that that’s secondary to the sheer size and complexity of the project.

For a now-classic statement of at least some aspects of the problem, see Frederick Brooks, The Mythical Man-Month (1975) (Sam Tobin-Hochstadt #32). You can find the Wikipedia entry HERE, which has a useful breakdown of problems, with links, and a free download HERE.

Jerry Vinokurov #25 mentions a problem that wouldn’t have been nearly so bad in Brooks’s day, that of legacy code. There’s lots of ancient COBOL code in the world, much of it ported over from machines that died two or three decades ago.

#52 It IS a loss. To society, to the companies, on both sides, to the state of the profession.

That’s not how capitalism works. I’m not talking about any of those things you are saying. We are talking about a much simpler thing that lies at the heart of every complex endeavour.

If you have to plan for something to be done, and you need piece X to do it, you want to be able to say how much it will cost and when you will have it. Almost all the other parts of our industrial civilization have developed to the point where this is possible with a good degree of confidence, not 100%, but good enough.

IT is in the amateur stage. And it hurts a lot. Wasting more time, money and resources on projects that have as much chance of being useful as a coin toss coming up heads is not how capitalism works; it’s a fucking disgrace, and a big worry when IT is becoming as necessary to keeping the infrastructure of civilization going as it is.

If clients were more sure of what the hell they would be getting when starting a project, they would request more projects, more often. That, apart from being a boon to the maturity of our profession, would be in line with capitalism. As it is, you know there is a ton of untapped potential, because clients approach the request for a new project with the same expectations as if they were going to be personally exploring a remote jungle.

(Also: the first ones to get to a model where they consistently deliver, say, 75% of projects with good satisfaction and not a lot of money over budget would EAT the market alive. Clients rotate between companies a lot in search of one that does not suck as badly as the others; with the averages of the industry that’s a pipe dream, but they keep searching…)

For example… one thing that has been mentioned on professional sites about this fiasco is the fact that government requirements on IT are of a kind that ends up with Windows Server 2003 still being considered a good platform to use.

It is better, but not by much, in the private sector. It is much better in small companies and startups.

Guess what is the reason why clients are so conservative in this regard?

And if you’re really interested in neoliberalism as feudalism, check out the work of my old buddy, Abbe Mowshowitz, who wrote on how the emergence of (IT mediated) virtual organizations was sending the world toward a virtual feudalism in which a small international elite lords it over the proles and nation-states are badly weakened by large transnational corporations:

Absent a sense of loyalty to persons or places, virtual organizations distance themselves—both geographically and psychologically—from the regions and countries in which they operate. This process is undermining the nation-state, which cannot continue indefinitely to control virtual organizations. A new feudal system is in the making, in which power and authority are vested in private hands but which is based on globally distributed resources rather than on possession of land. The evolution of this new political economy will determine how we do business in the future.

1) An inherently complex job
2) With interfaces to multiple other systems
3) Systems that are maintained by other organisations
4) The data in it is highly privacy- and security-sensitive
5) It must scale to tens of millions of users and billions of pageviews
6) All of them will arrive on day one within seconds of opening and will keep hammering until served
7) It must serve the general public as citizens, so it can’t assume user competence, literacy, vision, English-speaking, or even just that they won’t try to destroy it, and it cannot turn anyone away
8) It’s transactional – something meaningful happens or doesn’t when you press GO
9) It’s stateful – if interrupted you want to be able to come back to it
10) Money is attached to it
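Points 8 and 9 in the list above are easy to underestimate. As a minimal sketch (all class and field names here are hypothetical, not anything from the actual healthcare.gov codebase), “stateful and transactional” means roughly: a user can abandon a half-finished application and come back to it later, and pressing GO twice must not enroll anyone twice:

```python
# Minimal sketch of a stateful, transactional signup flow.
# All names are illustrative; a real system would persist to a
# database and coordinate across services, which is where the pain is.
import uuid

class EnrollmentStore:
    def __init__(self):
        self.drafts = {}      # application_id -> partially filled form data
        self.submitted = {}   # application_id -> final enrollment record

    def save_draft(self, app_id, form_data):
        """Persist partial progress so an interrupted user can resume."""
        self.drafts[app_id] = dict(form_data)

    def resume(self, app_id):
        """Return whatever the user had entered before the interruption."""
        return self.drafts.get(app_id, {})

    def submit(self, app_id):
        """Idempotent: pressing GO twice must not enroll anyone twice."""
        if app_id in self.submitted:
            return self.submitted[app_id]   # already done; a no-op
        record = {"id": app_id, **self.drafts.pop(app_id)}
        self.submitted[app_id] = record
        return record

store = EnrollmentStore()
app = str(uuid.uuid4())
store.save_draft(app, {"name": "Jane Q. Public"})
# ... browser crashes; the user comes back later ...
data = store.resume(app)
data["income"] = 30000
store.save_draft(app, data)
first = store.submit(app)
second = store.submit(app)   # an impatient double-click must be harmless
assert first is second
```

In memory this is trivial; spread the drafts, the submissions, and the eligibility checks across several contractors’ systems and every one of those guarantees has to survive a network partition.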

I’m on a board of a small community health center. Our director fell in love with iPads … sure enough, we found an App for that. Spent some money and got our medical employees to enter data, and use it (somewhat) during visits. Then our main IT system got upgraded to comply with changes in HIPAA and EMR reporting … and BINGO! Now our exam rooms needed a desktop computer for a number of important functions and our fancy iPads for another set of important functions.

Faced with 2 choices – dump the iPads and admit a mistake, or spend MILLIONS we don’t have to port the latest (but not the last) set of HIPAA/EMR/Insurance company requirements on to the iPad … we have chosen to use both systems for the time being. When the executive director retires, she can take the iPads with her…

If you have to plan for something to be done, and you need piece X to do it, you want to be able to say how much it will cost and when you will have it. Almost all the other parts of our industrial civilization have developed to the point where this is possible with a good degree of confidence, not 100%, but good enough.

Plenty of software is that way: games, productivity applications, development tools, infrastructure. Which are, between them, more complex and technically challenging, by any measure you can choose, than virtually any one-off contracted system.

Some hardware is the other way: fighter jets and warships and opera houses. And, in the US, health care. Any time the charged cost of an operation is below the delivered value to the patient, that’s a market failure, probably caused by government regulations or the presence of non-profits in the marketplace. Unrestricted profit-seeking without effective price competition could probably push the market share of health services up to 50% or so of GDP.

Only security has bigger growth potential; under feudalism, 90% was typical. Hence the saying ‘as rich as a lord’.

It really is the economic model, not the domain. Though perhaps software shows the dynamics more clearly, due to the lack of large fixed costs to lower the overall variance.

“nobody has a clue about how to deliver IT projects in a reliable way”

I think this is agnotology; there is plenty of knowledge about how to deliver reliable IT projects, or at least how not to do it. It’s just not followed because people don’t want to hear it.

Unfortunately government projects tend to violate one principle (incrementalism) straight away. There is always a single, national rollout, and it’s always a minor fiasco.

The underlying problem is that this kind of software is automating business processes, and by doing so crystallises and solidifies them. Software won’t do the fudging that all of your human employees and form-interpreters have been doing in order to get the job done. The automated solution is rigid and brittle. Unless you can adapt the human organisation around the thing you’ve built, it will break.

The reason it’s hard to find out how the job is actually done, rather than the official process, is that traditional management structures generate pressure to avoid unfavourable information reaching management. This is the thing: not feudalism but managerialism, the belief that rather than investigating the work and the workers one must instead talk only to and among members of the management class.

Sheer complexity is also a big problem, not just in software but in legislation. This may be an unavoidable cost of having a complex, dense society.

(I should also have another go at introducing the phrase “mangalooting”, or looting by management, which is where a business is run to maximise executive pay at the expense of everything else, and is endemic in the large contracting and public sector contracting world.)

Ok, now I need my asthma inhaler… You really don’t know the videogame industry, right? Tell them they get games out on time and on budget. They may nail the schedule by just not caring and dumping it on the public as it is.

= = = Jesús Couto Fandiño @ 9:00 am: #42 “delay” and “not shipping” are the usual remedies, yes. Imagine it was an airplane: would you insist on having it fly even if the builders could see that it is in no way flight-worthy? = = =

You might want to do some research on the 787 project. Which (along with the Airbus A400) tends to falsify the belief that non-IT design worlds have reached boring perfection (although admittedly one year of the A400’s 10-year overrun was due to a massive software oops).

Really, non-IT projects fail quite often. Not as often, probably, nor as visibly, but they do fail. Private entities are just much better at hiding the failures, redefining them as successes, or just promoting the executives responsible and firing some more salarymen and line workers. Harder to do that with non-secret government projects.

Over on my FB page my friend Rich Fritzson highlights this portion of the quote from Crouch:

The expertise of these corporations, their core business, lies in knowing how to win government contracts, not in the substantive knowledge of the services they provide. … This explains how and why they extend across such a sprawl of activities, the only link among which is the government contract-winning process.

He goes on:

The players are not particularly good at doing big software – it’s not their core strength. What they are really good at is working the system to get contracts. This is not easy, but it’s much more about cronyism than anything else.

Thanks to all for the many insightful views on “what went wrong” and why the creation of these exchanges has been so fraught with problems. As a complete non-tech head my familiarity is with mass market websites like Amazon or Expedia, so I look at the Federal exchanges and think: why should this be any more complicated?

I see why now. Thanks for lending your expertise.

My sense would be there aren’t a lot of “quick fixes” and it could be the President would be wise to delay his signature program – simply to give the IT guys and various involved agencies a “time out” to address known problems based on the short experience they’ve had. Politically this might be impractical, but if it were the real world, it would seem like the more logical solution.

@66: if you are not over budget 50% of the time, you have a systematic bias in your estimates. Which is bad, at least under an economic model where you are paid proportionally to the value in the delivered product, not proportionally to the estimated cost. Let alone the _difference_ between real and estimated costs (talk about pathological incentives…).

For games, or anything else sold in an actual market, both lower costs and higher quality mean higher profitability. So investment in tools, techniques and training that lower costs or raise quality makes sense. As a consequence, you can go to, say, the iOS store and see thousands upon thousands of games, almost all of which are basically functional, and many of which amaze. There are a few scams or genuine technical failures, but none will be pulling in $100 million and getting the managers involved bonuses.

There are a few horror stories where corporate shenanigans led to a game being developed on the contract model. If the cause of contract-model problems was mainly technical, then surely switching to a domain without those issues would lead to them racing ahead of the competition like a Ferrari in the Derby?

#60: CREST was also, of course, a second go at the problem and was able to learn from the failure of Taurus. If the world had just shown up at the Bank of England’s door, asking them to design a system, things might have gone differently.

Yeah, let’s not pretend this isn’t a problem in other industries doing bespoke work. Construction projects are also plagued with overruns.

The only industry with really predictable cost and delivery is mass manufacturing once you’ve already got it in production. This leads to a lot of people trying to adapt manufacturing process (focused on repeatability) to software (where generally you’re trying to build something that hasn’t been built before). This works badly.

The linked article wouldn’t AFAICS fix this problem, which seems to have to do with procurement requirements: the contract specifies WS 2003 and that’s what they’ve got, and that’s what they’ll have until another bid gets put out. And WS 2003 apparently doesn’t meet regulations, which sounds pretty bad, because although a lot of the time you can get modified secure versions of commercial software, occasionally IIRC the bid goes out for something that can only be gotten bespoke (as it were) but that doesn’t work as well as (say) Microsoft, so they get a dispensation to get the big-name commercial product in spite of it not meeting requirements. (The problem there is assuming that a government procurement requirement can conjure quality software out of thin air when there’s no widespread market for it, an argument in favor of making security software more available to everybody.)

You can tell people till you’re blue in the face, “You can have it good, soon and expensive; good, late and cheap; or bad, soon and cheap; but you can’t have it good, soon and cheap.” But they’ll never believe you.

You adjust your business processes to comply with the app, not vice versa.

Boy, do you ever! And when, cursed, bloodied and broke, you DO stumble across something that works, you cling to it like Linus does his blanket, which probably explains why beehive terminals, Windows NT and XP, etc., persisted as long past their obsolescence as they did, particularly in government offices.

In reading the later comments this morning, I conclude that an awful lot of people know what’s wrong from bitter experience, and yet the beat goes on, which means that there even more people out there who profit somehow from the miserable status quo than are driven mad by it. It’s a good argument for insisting that the government cultivate its own IT expertise, and if they can’t afford to pay the going rate, which they clearly can’t, they should maybe think more seriously about how privatization and outsourcing have undermined the tradition of public service. As should we all, since IT isn’t the only home of corporate rapists. Ask Diane Ravitch about education, or our wholly-owned state legislatures about ALEC.

Re 71, being willing to put off the launch day and being willing to accept failure are substitutes. You can insist that it launches on D-day, but accept a risk that it might break; you can insist that it works first time, but this is going to cost you in terms of time. And some of the best approaches to delivering good software are all about putting up with a minimal solution and then improving it incrementally.*

Basically, another of the “fast, cheap, good, pick any two” rules – you can have assurance of quality, or assurance of delivery on time, but if you want both you’re going to need huge amounts of money.

*In this case, interestingly, the policy is affected by the principle in the opposite sense to how it affects the implementation. A minimalist system like Obamacare is hugely more complicated from a practical point of view than a more fundamental reform like the NHS. Compare the website where you sign up for the NHS… oh wait, there is no such thing, because everyone in the UK is covered, period.

@76 …this would be made even worse if they were mixing platforms, such as running Java or Oracle on top of the Windows kernel. A veritable smörgåsbord of incompatibility.

Generally this is the best thread I’ve read about the human failures behind not just the ACA websites, but so many IT systems integration projects in the private sector also. Nothing vexes me more than to continually find nepotism behind the curtain of the ranks of executive management: brothers-in-law ignorantly speaking self-contradictory jargon and fumbling to clip on a lav mic. But at this point corruption and failure in all forms should be expected and anticipated when we deal with humans. And when aren’t we?

The failure of the IT system designs I see is that they aren’t tolerant of a corrupt and failing human system of production and management. Sure, there are new-sounding “anti-fragile” and “system tolerance” practices for technology that seek to make the rigid, deterministic technology parameters “fit better” into reality. Seeking an answer to the question: “Is the software compatible with human reality?”

Still, it would be my recommendation to have the fallible, corruptible and inefficient humans in control of complex tech systems for the foreseeable future. Lest we spawn the seeds of Skynet or fuel the likes of Singularity University.

You can tell people till you’re blue in the face, “You can have it good, soon and expensive; good, late and cheap; or bad, soon and cheap; but you can’t have it good, soon and cheap.” But they’ll never believe you.

See, I don’t think that’s actually true. Or rather, a modified version of this holds: you can have it be expensive now, or you can have it be cumulatively even more expensive except that expense will be spread out over a longer time. This is a phenomenon I’ve encountered again and again, most notably as a graduate student tasked with maintaining some legacy code. I took a long look at it, realized that my predecessor didn’t know what he was doing, called him, confirmed that he didn’t know what he was doing, and then relayed the information to the project PI, explaining that the codebase needed to be thrown away and rewritten properly from the ground up. The PI nixed that idea on the grounds that “we already have something that works,” and instead of spending two months on a rewrite, I spent two years and more maintaining someone else’s broken code. Finally, after I’d graduated, another student managed to convince enough people to let him rewrite the system.

And that’s how it always goes. Doing it right the first time is hard, and initial expenses are hard to justify. But the money just leaks out year after year because you’re spending untold time and effort fixing problems that you shouldn’t have allowed to occur in the first place. And the companies that profit from this know that; they’ll just continue sucking money through layers and layers of administration. This is all a consequence of the widely accepted “truth” that government can’t do anything productive and should outsource everything possible to the private sector; when the private sector fucks up, oh well, that’s just part of doing business, but if somewhere some pothole takes $5 more to fix, you can be sure some asshole is going to use that as an opportunity to grandstand about government wasting your hard-earned money. Outsourcing projects like this doesn’t produce anything worthwhile except salaries for managers whose job it is to apply for these kinds of projects. It should be understood that just as governments have departments of public works and so on, they should have IT departments as well; in this day and age, IT is critical infrastructure, not just consumer widgets.

After the asterisk, you come to the heart of the matter. You shouldn’t have to belabor the obvious, of course, but in the U.S., at least, the whole country has stumbled blindly away from the only point that really matters. Is there nothing we won’t do to protect our rentiers? For that matter, is there no limit at all to our creativity in creating new classes of them? Robbing people with a fountain pen is our only growth industry these days — indeed, it seems to be our only industry, period. To people in other countries who shake their heads at our folly, I can only say: Please remember that this is first and foremost an EXPORT industry.

“the accountability (sometimes) achieved by democratic oversight.”
I don’t see how that has anything to do with whether a government agency implemented some IT project themselves, or contracted it out. There might possibly be an additional layer of agency problems, but it’s not like the CTO is going to be democratically elected in either case. We’ve got the same set of elected officials to be held accountable for the results, regardless of how they got the people under them to go about it.

When I worked in a mid-size firm’s IT dept, practically none of the projects we outsourced were completed on budget. Luckily, we had a very effective executive who often got the other party to swallow the extra costs.

In this case, you compound those issues by having a number of contractors working on the problem. I can only imagine the number of meetings it must have taken to coordinate among all the moving parts.

Jerry Vinokurov. Not exactly, because what you are saving on in the good, cheap and late option is resources. Throwing money at the project while insisting on your timetable should mean enabling massively parallel development, with many development environments and many project managers with a very talented, and therefore expensive, programme manager holding it together. This can’t always be achieved in practice, as the design phase becomes horrendously complicated, but your IT contractor knows how to do it in principle. The outcome you describe is bad, late and expensive, which is common enough in all conscience, but a different problem, usually attributable to the sunk cost fallacy.

It may be the case that there is a theoretical limit to the complexity of a feasible IT project, due to a maximum number of controllable dependencies in both the development and implementation phases, but that’s just speculation as far as I know.

The reason I don’t think “good, cheap, and late” is an actually achievable combination is because “cheap” and “late” are actually opposites of each other. Time is money, in the most straightforward sense imaginable, and being late costs you money. Maybe it’ll be good and maybe not, but it might not matter if your great product is 5 years late to the party.

Another way to look at it is that most undertakings have a specified degree of complexity which is determined by what they’re trying to achieve. Obviously there are both known unknowns and unknown unknowns, but the overall vision needs to be clear. So while you anticipate that you’re going to have some overruns or miscalculations and form contingency plans for it, it’s not like you’re going to find out that your product that helps people sign up for health care exchanges also needs to fly the space shuttle. Now you have a decision to make about how you’re going to attack the problem; you can either start with a design process that tries to encapsulate the essential aspects of what your product does (the correct way) or you can start just slapping things together and hope they work (the incorrect way). Slapping things together is what I do when I write code for my own use, but sadly, it’s also what a lot of academics do when working on collaborative software projects. Undergoing an extensive design phase is obviously difficult and expensive, and what’s more, you pay that cost up front over a relatively short timescale, whereas you pay maintenance costs for years, but later. Because of this, doing it wrong is often a very attractive option, because you can just kick the can down the road a few years.

The presence of outside contractors compounds these problems, because when the contractors screw up, they are the only ones who know the system. And each level of the contract pyramid is skimming off the stream of money that goes through them. It’s almost certainly cheaper to just hire competent developers at prevailing market rates and put together a competent managerial team than it is to endlessly outsource these functions to third parties, because at the end of the day, all your experts will be in-house and available to make changes. The up-front costs will be higher, but over the long run I don’t for a second believe that the contracting scheme produces either cheaper or better results. This is the correct way of doing things, but it’s not the politically popular way because it doesn’t enable the direct graft of subcontracting and because the notion of “public works” has been poisoned by conservatives.

“Undergoing an extensive design phase is obviously difficult and expensive, and what’s more, you pay that cost up front over a relatively short timescale, whereas you pay maintenance costs for years, but later.”

This is, of course, common wisdom, but I’m very skeptical. As 87 says, Amazon’s probably spent billions; Google developed its own hardware platform, OS… well, its own everything. And it takes a decade.

Chances are, your extensive design will hit the trash bin as soon as your business users start working with your system, as they, inevitably, realize that they want something completely different.

I suspect, slapping things together, and then keeping slapping more things on top of that is often a better strategy. And more user-friendly. But of course for this strategy you need a large and stable IT/development force, and they don’t want it anymore.

@87 and @89
That’s true, but Amazon “worked” as a website to buy things and sign up for them as far back as 1997 and has worked continuously to date. While it’s certainly true they’ve spent billions over that time, it’s also true that they had a viable working product throughout that spend. Maybe healthcare buyers should have just gone to Amazon to price their coverage – after all, most buyers probably already have an account.

As I said in my original post @69, I have no idea if comparing Amazon in 1997 to the Federal exchange in 2013 is a fair comparison or not – many commenters have mentioned the importance of incremental rollouts and the differences between starting from scratch and trying to meld legacy systems – these comments help me see the differences.

JV seems, at various posts, to suggest that planning and design choices, including insource vs. outsource decisions – i.e. “Time zero” choices effectively sent the project on the trajectory it is now on. Others seem to suggest that all large scale tech projects are doomed to different versions of ‘growing pains.’

I can’t adjudicate technically between these viewpoints. I can say as a user that websites like Amazon and Expedia (+oodles of others) seemed to have operated flawlessly from day 1. Whether that was a planning choice, a design choice or the way they spent money I couldn’t say, but it suggests to me it’s possible to “get it right” despite the challenges.

Maybe healthcare buyers should have just gone to Amazon to price their coverage – after all, most buyers probably already have an account.

What makes you think Amazon knows anything at all about how to price *insurance*?

I can say as a user that websites like Amazon and Expedia (+oodles of others) seemed to have operated flawlessly from day 1

With few notable exceptions, like that time Amazon Web Services’s US-EAST region shat the bed and killed literally thousands of other businesses’ web sites, even ones who followed the instructions, paid the money, and hosted in multiple availability zones to prevent exactly that failure mode. Or the time they did it again, or the other one. Amazon engineering is pretty damn impressive but even they cock up sometimes.

And let’s recall the actual complaint about healthcare.gov – it’s occasionally been down. Nothing in the world can convince me Amazon.com didn’t have some downtime in 1997 – I remember the web in 1997.

More seriously, note the vast gulf in expectations and standards here. Amazon broken? Everyone will have forgotten by next week. Healthcare.gov down? CRISIS CRISIS CRISIS.

@91
Amazon doesn’t need to know anything about pricing healthcare – they just sell it for someone who does know how to price it at whatever they want to charge. My comment was admittedly both facetious and naive – undoubtedly there are a few more things to consider in selling insurance than in selling books – but the basics of creating a user account and showing people prices for something would seem like a pretty settled technology.

Rackspace’s JungleDisk, for example, is down semipermanently. Cheap flights websites were notorious, for years, for shitting out and leaving people on opposite sides of the sale unclear as to whether a booking had happened.

I have no idea what their uptime metrics are, but I have a very clear one about the exaggeration/bullshit/troll factor to apply.

From what I’ve been told about Amazon’s internal IT systems, it’s basically held together by sealing wax and tape. But I find these types of comparisons annoying. To wit, I have read:

1) “The only system which stayed up was built by a nimble startup.”
Big fucking deal. It’s a home page. It doesn’t do anything.

2) “They need to rewrite all the older systems”. Just no. Anyone who suggests this needs to be quietly removed and taken to a place where there are no sharp objects…

3) “Amazon, blah, blah, Google.” – Legacy is at worst 12 years. They don’t have to integrate with third party systems. This incidentally is a nightmare. You don’t have control, you don’t know how clean their data is, what they do with errors. External APIs are rarely documented. In addition, dealing with legacy systems is also difficult, and anyone who suggests rewriting them as part of the project needs to be taken out and taken to a place where there are no sharp objects.

Also survivorship bias. Plenty of online sites over the years have had terrible IT.

4) “What do you expect of the gubmint” – Go to any investment bank and you will see a string of failed projects, developed by the very best of the IT industry. Some government sites are actually okay. Online tax assessment in the UK is pretty painless, for example.

5) “Why can’t IT be more like Engineering/Building where everything is perfect”. Are you fucking kidding me? Am I the only one who has been forced to inhabit/use/work in failed buildings their whole life? Complex projects fail. Fact of life.

6) “This was a simple project” – Hardly. I’d say the problem with this project was that Obamacare is too damn complicated. Complicated requirements result in complicated, buggy code. Have you seen the crap they have to calculate to work out eligibility/pricing? They’re integrating with credit check agencies? At the beginning you have no idea how many states you’re developing for? I mean, the project looked like a nightmare from the outset.

7) “Outsourcing bad” – As a consultant I’m a little biased, but it really does depend. Yeah the CSCs, Accentures, etc are pretty terrible. But the inhouse IT teams at most corporations also aren’t great. And you know, I do great work. You should totally hire me…

8) “Neoliberalism, blah, blah” – Much as I’d like to argue this is true – government isn’t doing anything that most large companies don’t also do. And prior to the craze for outsourcing, in-house development was also pretty unsuccessful.

9) “They just need to use proper engineering” – Yeah because that worked… Software engineering is a thing you study at university, and then never use because it doesn’t work.

Don’t get me wrong, this was clearly a doomed project a year ago, and everyone involved (including Congress) is partly to blame. And IT as an industry/profession has serious problems. But the received wisdom is seriously off. I’m less sanguine than Alex, partly because I’ve seen this in operation (it’s bad), but sadly it’s nothing out of the ordinary.
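To put a concrete (if entirely invented) face on point 3 above: when you consume a feed you don’t control, about the only defensible move is to validate everything at the boundary and quarantine whatever doesn’t parse, rather than silently coercing it. A minimal Python sketch – field names and rules are made up for illustration:

```python
# Hypothetical sketch: validating one record from a third-party feed whose
# schema and error behaviour are undocumented. All names/fields are invented.

def parse_enrollee(raw: dict) -> dict:
    """Normalize one record, rejecting anything we can't trust."""
    errors = []

    ssn = str(raw.get("ssn", "")).replace("-", "")
    if not (ssn.isdigit() and len(ssn) == 9):
        errors.append(f"bad ssn: {raw.get('ssn')!r}")

    income = None
    try:
        income = float(raw.get("income", ""))
        if income < 0:
            errors.append("negative income")
    except (TypeError, ValueError):
        errors.append(f"unparseable income: {raw.get('income')!r}")

    if errors:
        # Quarantine rather than guess: silent coercion is exactly how bad
        # data ends up in, say, an insurer's enrollment file downstream.
        raise ValueError("; ".join(errors))
    return {"ssn": ssn, "income": income}
```

The point isn’t the particular checks; it’s that every one of these decisions has to be made per integration, because nobody will tell you in advance what the other side actually sends.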

people will generally understand how to open the doors and poop in the right places.

Sure, by recycling old interfaces that have been well established for decades or even centuries. There’s nothing really new about most new buildings; at most they’re a new way of arranging some mature technologies. The number of times something major goes wrong *anyway* is pretty shocking, or it’s an indication that someone involved has already come around to the idea that cost overruns are a good thing.

Occasionally someone installs doors that really don’t work the same as the last couple centuries of door technology, and it usually results in dozens of people having to learn how to get the damn things open.

@94
Dude, get a grip.
My comment was nowhere close to a troll. I have no doubt they’ll eventually get their sites to work and that millions and millions will be served by their choice of mc-happy-meal health plans. I have no way of guessing how long ‘eventually’ will turn out to be – that’s why I’m in praise of the commentary that seems to offer some windows into that.

If you’ve read any of those posts, folks who seem to have some credibility on the topic of software/website development have identified dozens of potential issues and have been pretty reluctant to suggest there are any easy solutions.

The rollout was clearly an embarrassment. You’re an ostrich if you can’t recognize that, no matter how much you love the ACA. But what implications it has for the program and its long-run success is impossible to judge – nor did I. We’re on the first batter of the first inning of a multi-game series.

I recently read about a tall apartment building (I believe it was) in Spain that had a bunch of floors added once construction had started. Alas, no one thought to consider that there wasn’t enough space for elevators to serve the newly added upper floors.

@Cian 95 – I think bill benzon @68 points to the real issue here (and all the discussion of this particular project implementation is somewhat beside the point.)

To wit: yes, IT is hard and things fail a lot outside of government too. However, general government procurement of IT has become an awful lot like military procurement. There are 3-5 vendors who actually have the non-technical staff (lawyers, proposal writers, lobbyists, etc.) to win a bid. And they are the “CSCs, Accentures” (as you note) and I’d add the Booz vampire squid and a few others.

The problem with all this is boring and banal – oligopolistic markets are just monopolistic ones with a bit of rotation and a few quirks. So there is nothing in the bidding process that is getting you any “market competition efficiency.”

A further problem is that “contract law” is largely not up to the job of policing this kind of project. Resolution is so slow that the Accentures of this world have the upper hand. You can pay them more to fix the mess they made, or you can take it to court – but all the while you’re bleeding because your service is down… so what are you going to do?

Well, at the beginning they do. And then they don’t anymore. At which point we stop noticing that these projects are complex. It’s like that libertarian story about the pencil: to make a pencil (let alone a car) is a very complex project. But it’s become routine, and we don’t notice the complexity. The problem with IT is that you still have to figure out how to glue the pieces together every time, and where to stick the eraser.

& I have listened to any number of stories about the horrors of crappy data in legacy systems in private sector financial firms.

And then there’s the code running in some nuclear plants. I’ve even heard stories of cinderblock walls full of cables. Seems they didn’t keep track of where the cabling went. So when it comes time to fix something or install something, you just take a hammer and start pounding on the wall to see what’s back there.

No, this kind of SNAFU is not a government-only problem. The private sector is pretty bad too.

There was a time when I believed that we just didn’t know how to build quality software beyond a certain, relatively small scale. Now my friends are telling me that, yes, we CAN do it. But no one wants to pay for it.

Eh, when I was an undergraduate, software engineering was supposed to be the practical approach because computer science was a thing you studied at the university which never worked. (To be totally accurate, when I was an undergraduate, software engineering didn’t exist and then was a half course that taught about source control and make and how to find your way around Unix without crashing the new VAX. But that’s not the point.)

Trader Joe@69 – ” My sense would be there aren’t a lot of “quick fixes” and it could be the President would be wise to delay his signature program – simply to give the IT guys and various involved agencies a “time out” to address known problems based on the short experience they’ve had. Politically this might be impractical, but if it were the real world, it would seem like the more logical solution.”

That’s not the real world at all. In the real world, CEOs run on reality distortion and strength of will etc., so when a CEO’s signature project – one which he has talked up in the Wall Street Journal – runs into weather, then one or many of the following happen, not necessarily in the same order –

1. Someone or several people get fired.
2. Vacation is cancelled – good luck selling the Super Bowl tickets on eBay. “Tiger teams” are convened, and everyone works 24/7 to bring it in on time or as close to schedule as possible.

Incidentally, this idea that Amazon and Expedia (which IIRC is just the old SABRE system) ran smoothly on first unwrapping suggests an excessive amount of kool-aid in the bloodstream.

Yeah, I have heard some hair-raising stories about the early days of Expedia. Kludge doesn’t even begin to describe it.

That said, what I’ve seen of the healthcare exchanges suggests that fixing them is going to be very difficult, if not impossible. The architecture appears to be broken in ways that are usually very difficult to fix. The data integrity problems and the error handling issues are both bad signs. What they’ll probably do is scrap the worst parts and rebuild them.

Demand for healthcare being what it is, I doubt any of these things will stop people using it.

What do you think of contractual requirements that developers have mature processes, as measured by, for example, the CMMI? Or Certification and Accreditation requirements for the product? Given health and credit data, there are probably some for this system.

Mainly because it doesn’t matter how high in the CMMI we (in this case, an outsourcing, offshoring center) are, the moment we have to interface with a new client we get all the shit of that client’s “immaturity” + the communication and interface problems.

I can be as mature as you want, but if we win a contract with a client that doesn’t even know how many services they are actually running…

2) “They need to rewrite all the older systems”. Just no. Anyone who suggests this needs to be quietly removed and taken to a place where there are no sharp objects…

After university my girlfriend went to work for a fairly small financial institution, which had the distinction of running all its customer transactions online and in real time. Thirty years later, my wife works for a very, very large financial institution, which runs all its customer transactions online and in real time. Same system. (There was that bit where they took out all the Assembler and rewrote it in C – that was a bit ticklish. System went on running, though. Got “Proud Mary” in my head now…)

The problem with all this is boring and banal – oligopolistic markets are just monopolistic ones with a bit of rotation and a few quirks. So there is nothing in the bidding process that is getting you any “market competition efficiency.”

A further problem is that “contract law” is largely not up to the job of policing this kind of project. Resolution is so slow that the Accentures of this world have the upper hand. You can pay them more to fix the mess they made, or you can take it to court – but all the while you’re bleeding because your service is down… so what are you going to do?

This is really the key thing that’s going on. Your latter paragraph is basically Coase’s theory of the firm; contract law isn’t adequate for all types of relations, and a loose swarm of contracting entities is not as efficient as a firm.

But the outsourcing of core services from government is driven by all sorts of unacknowledged political motivators. The desire to squash unions. The desire to route complaints about service out of the political arena. The desire to receive political donations from outsourcing companies. The ideological determination that “government can’t do anything right”: if government is doing something right, it must be sabotaged or privatized.

What do you think of contractual requirements that developers have mature processes, as measured by, for example, the CMMI? Or Certification and Accreditation requirements for the product? Given health and credit data, there are probably some for this system.

At best it’s a worthless piece of paper; at worst it’s another layer of bureaucracy. Some of the worst projects I’ve witnessed were very paperwork compliant.

I would say that the bulk of the blame lies with the Obama administration/the politicians who created the monster that is Obamacare. Eligibility for this thing is very complicated. There are subsidies and income checks. There are credit checks. Estimates vary from person to person, depending upon a range of factors. They probably also vary across the various providers.

Complexity is one of the main causes of project failure. A well run project is where those in charge are doing everything they can to remove it. Project complexity increases costs/time/bugs at what feels like (and probably is) an exponential rate. For this project, each additional check (oh so easy to promise to grease its path, oh so difficult to implement) means a new system to integrate with.

And that’s the second cause of project failure: unpredictability. Each new system is an unknown. Does the system work as expected? How clean is the data? How reliable is it under strain/load? What’s the error handling like (does it even have error handling)? It’s impossible to provide accurate estimates under these conditions. Each additional system brings a further problem, as each one will be different. Consistent code is now impossible; much of what you’re doing is black-boxing.

But not content with this, they added new problems. The companies bidding on the project didn’t even have firm requirements from the Obama administration. There was lots of secrecy all the way through, because the Obama administration didn’t want info to leak out and embarrass them. Well, a surefire way to make a project fail is to have undefined requirements and poor communication. Add to that the inevitable bureaucracy that comes with large projects (doubly so with government, where everything needs to be accountable, which means yet further layers of pointless bureaucracy), and you have a nightmare in the making.

Currently the signs are that the thing may not be salvageable. The insurance companies are receiving data that is either wrong or inconsistent, so they’re holding off on signing people up. Delays and poor UX – that’s recoverable. Inconsistent, unreliable data is a disaster, and is the kind of thing that suggests deeper architectural problems.
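For what the “black boxing” a few paragraphs up tends to look like in practice: each external system gets hidden behind a narrow adapter, and a deterministic fake stands in for it so the rest of the build isn’t blocked on the integration. A hypothetical Python sketch, with all names invented:

```python
# Hypothetical sketch (all names invented) of hiding an external system
# behind a narrow adapter, so its quirks stay contained in one place.

from abc import ABC, abstractmethod

class EligibilityCheck(ABC):
    @abstractmethod
    def is_eligible(self, applicant_id: str) -> bool: ...

class ExternalAgencyAdapter(EligibilityCheck):
    """Wraps a remote service: retries, and re-exposes one failure mode."""
    def __init__(self, client, retries: int = 3):
        self.client, self.retries = client, retries

    def is_eligible(self, applicant_id: str) -> bool:
        last = None
        for _ in range(self.retries):
            try:
                return self.client.check(applicant_id) == "ELIGIBLE"
            except IOError as exc:  # the only failure the rest of the code sees
                last = exc
        raise IOError(f"agency unreachable after {self.retries} tries") from last

class FakeAgency(EligibilityCheck):
    """Deterministic stand-in so everything else can be built and tested
    before the real integration is sorted out."""
    def __init__(self, eligible_ids):
        self.eligible_ids = set(eligible_ids)

    def is_eligible(self, applicant_id: str) -> bool:
        return applicant_id in self.eligible_ids
```

None of this makes the external system reliable, of course; it just means that when its data turns out to be dirty or its uptime turns out to be fictional, the damage is confined to one adapter instead of smeared across the codebase.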

#100 Metatone, I don’t disagree with the assessment of the bidding culture, I just don’t think it did anything other than bid up the costs (which admittedly were extraordinary), and maybe increase the incompetence slightly. But under those conditions I don’t see how anyone could have delivered a working site.

Maybe bringing it in-house could work, but I think the real problems with delivering IT systems are social. Bureaucracy tends to impede IT projects, and the government has a lot of bureaucracy – I’m not sure how you solve that in-house.

It doesn’t make sense to compare launching healthcare.gov to amazon.com in the old days. Before its IPO, Amazon ran on two computers (one for the web site and one for the database). For the IPO they added an extra web server. It just wasn’t that big. You’d have to compare it to something with millions of users at launch, like Google Wave or any of Apple’s cloud services (MobileMe, iCloud), or a rapidly growing site like Pinterest in 2011 or Twitter in 2008. All of those took months to sort out, if they ever did. It’s possible that healthcare.gov has just a couple of bottlenecks and it’ll all be ok in a week, but I think it’s more likely it’ll take another six months to make it work at all. The deadline for signing up for 2014 coverage is in just two months.

Implementing Medicare buy-in would have been a lot easier. If only Anthony Weiner had kept his mouth shut.

Imagine it was an airplane – would you insist on flying it even if the builders can see that it is in no way flightworthy?

Yes fair enough, but if I’d asked for a plane and someone’s “remedy” for the problem of not having built a working plane was to not deliver a plane, I would say “thanks for making sure lads, better safe than sorry, but I cannot in good faith and respect for the English language allow you to call this no-plane strategy a solution”.

I cannot in good faith and respect for the English language allow you to call this no-plane strategy a solution

Well, the lack of a plane comes along with an offer to finish the plane in exchange for more money. Or to start over making a new plane incorporating lessons from this one in exchange for more money.

They aren’t very compelling solutions, but they look better if the alternative is false assurances that the airplane is flightworthy, or the sad news that the special-purpose venture created to design the airplane has no more assets and will be declaring bankruptcy later this afternoon, and we apologize for the inconvenience.

Software development does seem to suffer from a really high variance in the ratio between how long it was expected to take and how long it actually took. After reading this post, I started worrying about why the law of large numbers doesn’t help you if you have a detailed project plan — suppose your project plan is divided into N pieces, and each piece has a certain variance, then (assuming statistical independence) you add together the individual random variables… I think part of the problem is statistical independence may not hold here: for example, if lots of the pieces of your project depend on interfacing to an external system, and that external system turns out to be in a worse state than you expected, then lots of steps in your plan are going over-budget in a statistically correlated way.

And many people who work in software development have an unfortunate tendency to be optimists. This is not what you want when project planning…
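That correlation worry is easy to see in a toy Monte Carlo: give every task a small independent overrun, then add a shared shock (the flaky external system everyone depends on) and watch the schedule’s spread blow up. The numbers below are purely illustrative:

```python
# Toy Monte Carlo: 50 tasks of ~10 units each, with and without a shared
# "common cause" shock (one flaky external system). Numbers are illustrative.

import random

def schedule_spread(n_tasks=50, trials=2000, correlated=False):
    totals = []
    for _ in range(trials):
        shared = random.gauss(0, 5) if correlated else 0.0  # common-cause shock
        totals.append(sum(max(0.0, 10 + shared + random.gauss(0, 5))
                          for _ in range(n_tasks)))
    mean = sum(totals) / trials
    # standard deviation of total project duration across simulated runs
    return (sum((t - mean) ** 2 for t in totals) / trials) ** 0.5

random.seed(1)
sd_indep = schedule_spread(correlated=False)
sd_corr = schedule_spread(correlated=True)
# Independent per-task noise largely averages out across 50 tasks; the shared
# shock scales with the task count, so sd_corr comes out several times sd_indep.
```

Which is exactly the law-of-large-numbers failure described above: the averaging only helps when the overruns are independent, and a common external dependency makes them anything but.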

= = = Yes fair enough, but if I’d asked for a plane and someone’s “remedy” for the problem of not having built a working plane was to not deliver a plane, I would say “thanks for making sure lads, better safe than sorry, but I cannot in good faith and respect for the English language allow you to call this no-plane strategy a solution”. = = =

Again I would urge everyone to review the history of the Boeing 787 project, particularly the events of the scheduled public rollout on 8 July 2007 [ 7/8/7 in US date notation – get it? The marketing department certainly did], the schedule for first flight on 1 September 2007, the actual first flight on 15 Dec 2009, and the subsequent four year delay in delivery. Not to mention the fire (excuse me, “thermal event”) in the main electrical distribution panel toward the end of the flight test period. It isn’t just software, and promises get broken.

= = = After reading this post, I started worrying about why the law of large numbers doesn’t help you if you have a detailed project plan — suppose your project plan is divided into N pieces, and each piece has a certain variance, then (assuming statistical independence) you add together the individual random variables… I think part of the problem is statistical independence may not hold here: for example, if lots of the pieces of your project depend on interfacing to an external system, and that external system turns out to be in a worse state than you expected, then lots of steps in your plan are going over-budget in a statistically correlated way.

And many people who work in software development have an unfortunate tendency to be optimists. This is not what you want when project planning…= = =

Big Customer Dude: we need to move that button from the left side of the screen to the right
Project Executive: that’s simple
Project Manager: probably doable, but we’ll have to defer another change request to get it on the schedule
Project Executive: not for something this simple; squeeze it in
Project Estimator: moving a button is usually 20 hours, so I’ll estimate 40
Programmer-of-Buttons: The screen design, as signed off, is based on a 3×3 grid. If we move this button from grid position 4 to position 6 we’ll need a small database change
Database Designer: The 3×3 grid was signed off two years ago from the requirements document and baked into the design since it enabled full flexibility for the 437 critical functional requirements at the price of some inflexibility in other areas. I can change it if you want, no problem…
Database Designer to 437 functional programmers: Today I implemented a small change to the database design…
Project Executive: What do you MEAN we are 43,700 hours behind schedule?!?
Big Customer Dude: What DO YOU MEAN we are TWO YEARS LATE?!?!?

Obviously made up on the fly and exaggerated for effect, but that’s the way it goes in real life. Things that look very easy to the untrained eye often aren’t, even when we have enough knowledge and understanding to do a PMI-style task decomposition (which we usually don’t).

And no one from the Project Executive level up is ever willing to sit down and listen, really listen, to the people who know what is going on give accurate status reports; all they want is corpospeak happytalk. If the team members give them anything else, they just translate it into happytalk before passing it up the chain anyway, so the team members protect themselves by generating the corpospeak themselves. Then they start believing their own happytalk, just as the executives do, and there you have it: two to three years late and possibly a failed project.

After reading this post, I started worrying about why the law of large numbers doesn’t help you if you have a detailed project plan

One reason (stats hat on) is that the Law of Large Numbers works very slowly with non-finite variances–and “how long will X take” tends to be best modeled (IMO) by a Power Law distribution (which has non-finite variance).

Also, the basic code rule is “it will take X hours if nothing goes wrong; if something goes wrong, it will take an unknown amount of time to fix.” Detailed project plans often are created by adding up the X’s; Murphy’s Law holds, so something ALWAYS goes wrong.
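The power-law point can be illustrated the same way: draw task durations from a Pareto distribution with alpha = 1.5 (finite mean, infinite variance) and look at how much of the total the single worst task eats. Parameters are invented for illustration:

```python
# Heavy-tail sketch: task durations ~ Pareto(alpha=1.5), which has a finite
# mean but infinite variance. How much of the schedule is the worst task?
# Parameters are made up for illustration.

import random

random.seed(42)

def worst_task_share(n_tasks=30, alpha=1.5):
    tasks = [random.paretovariate(alpha) for _ in range(n_tasks)]
    return max(tasks) / sum(tasks)

shares = sorted(worst_task_share() for _ in range(1000))
typical_share = shares[len(shares) // 2]  # median across simulated projects
# With 30 thin-tailed tasks, each would be ~1/30 of the total; here the single
# worst task typically accounts for several times that on its own.
```

That is the “unknown amount of time to fix” rule in distributional form: the schedule isn’t the sum of lots of comparable pieces, it’s hostage to whichever piece goes wrong worst.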

124: and of course the STS, which was the first space vehicle to be deliberately designed to be impossible to test without a live human crew on board. Think about that – every other capsule, every other booster, they sent up for a few flights first empty (or maybe with a monkey or something) just to check that it would hold together and not leak or catch fire or explode. But the STS simply could not fly – by design – without risking the lives of a human crew. This was a requirement that was added to the design. It would have been easier to build an STS that could fly unmanned test flights.

The Space Shuttle had a whole host of wacky requirements imposed on it by the Air Force and then never used, including the ability to snatch a Soviet satellite and return to base 3,000 miles cross-range on a single orbit, which drove the selection of large delta wings. Ars Technica had a good article about how the Soviets built Buran because they believed NASA and the USAF wouldn’t build such a foolish design and must know something they didn’t. (Buran appears to have been a better design, but the fundamental problems were there.) I think we have to consider the Shuttle sui generis in bad decision making.