Time: Two Questions, Too Few Answers (A Treatise on Acquisition)

Defense acquisition takes too long. Good luck finding anyone willing to publicly disagree with that statement. But despite the near uniformity of opinion on this topic, the defense acquisition complex consistently fails to answer two related questions:

How long should acquisition programs take?

How long do acquisition programs take?

The first question is admittedly difficult to answer in the abstract. An IT program and a jet fighter probably don’t belong on the same timeline.

Or maybe they do.

No doubt we could see some pretty interesting things in both categories in 9 months. But I doubt we’ll get a lot of people to sign up for that sort of timeline, particularly among those who build big projects like aircraft, tanks or ships. Given the wide range of technical genres encompassed by the term “defense acquisitions,” it probably makes sense to spend more time building some things than others.

Even though there is not necessarily a single answer to question #1, we do have a few parts of an inkling of an answer. For starters, according to a 2006 Government Accountability Office (GAO) report, the DoD itself says we shouldn’t spend more than 5 years building anything. So there’s that, but it’s not much to go on. Five years is w-a-a-a-y shorter than we’ve spent on many high-profile systems, but it’s also a pretty long time for other types of technology.

My personal answer to question #1 is “Half the time.” That’s admittedly still on the abstract side, but hang with me for a second.

First, let me explain that by the phrase “my personal answer” I mean an answer based on the considered, studied opinion provided in 1986 by the Packard Commission, which said it “is possible to cut this cycle in half.” (And that’s back when the cycle time was a lot shorter than it is now.)

That finding was corroborated 12 years later by a survey Ross McNutt performed as part of his 1998 PhD research at MIT, which also concluded a 50% reduction was both possible and desirable. The consensus seems to be that long timelines are not inevitable attributes of defense acquisition programs. Doing it faster - a lot faster - is entirely possible.

It sounds crazy and counter-intuitive, I know, but the answer is pretty consistent: acquisition programs should take half the time. If I thought it would help, I’d include a couple more references of subsequent studies that made similar observations and recommendations (there are several), but those two are probably sufficient to make the point. Anything more would be overkill.

I have to wonder what would happen if we tried to cut timelines in half on any meaningful scale (we haven’t). It’d be interesting to see how that would work out, don’t you think?

On that note, you can draw your own conclusions from the fact that we haven’t ever (e-v-e-r) tried to cut acquisition cycle times in half across the board, even though several sources have said such a move is feasible.

Interestingly, we’ve drastically cut development time on many individual projects, which supports the aforementioned assertion that long timelines aren’t inevitable. What we haven’t done is make the effort on a strategic, acquisition-wide level.

Maybe there’s a better answer out there, a more nuanced and comprehensive answer than the admittedly blunt instrument of “half the time,” but the combination of clarity and credentials inherent in this answer is pretty hard to beat.

OK, time to move on to question #2, the “does” question. How long does a typical program actually take?

Nobody knows. Seriously, we have no idea how much time we spend, because no one is collecting the data.

We used to keep track of this sort of thing, as shown in the following graph from a 12 June 98 report by the Defense Science Board, attributed to Daniel Czelusniak:

Feel free to ask a search engine of your choice for an updated version of that chart - but I’m pretty sure you’re gonna find nothing.

Now, I know better than to try to prove a negative, so let me acknowledge an updated chart may be out there somewhere. If so, it’s exceedingly well hidden. I read the internet until my face fell off and couldn’t find it anywhere.

Turns out I’m not the only one without access to this information. In a speech on 6 Feb 2012, the top DoD acquisition guy, Undersecretary of Defense for Acquisition, Technology and Logistics Frank Kendall, acknowledged he doesn’t have an answer to the question “Are we doing better or worse than we were 10 years ago?” If anyone should be able to immediately put eyeballs on that data, it’s Mr. Kendall.

The absence of readily available up-to-date information is an even bigger problem than the troublesome upward trend depicted in the chart, although that trend is indeed a major bummer.

It’s bad enough the average cycle time was so darn long and kept getting longer, despite multiple assertions that it should be cut in half... but now we don’t even know if we’re doing any better or worse. That makes it sort of difficult to substantiate this article’s opening claim, however obviously valid it may be.

Look, this data should be hard to miss, not hard to find. It should be plastered everywhere, not hypothetically hidden in some esoteric metrics management office. A key attribute of this kind of data is its findability, which directly determines its utility and value. If it isn’t front and center, immediately locatable by anyone who wants to see it, it might as well not exist in the first place for all the good it’s doing us.

But I’m pretty sure it’s not hidden. I think it flat out hasn’t been collected. This data should exist. It doesn’t. I’d love for someone to prove me wrong on that. Honestly, I would. On the other hand, I’d hate to think someone put all this data together and didn’t share it with Mr. Kendall. That would be embarrassing.

OK, in the rare instances where we can get someone to even consider measuring today’s acquisition timelines, we end up getting wrapped around the axle debating when to start the clock and when to stop, arguing about what constitutes “a program” and a jillion other bits of important irrelevance (ask me how I know this). And that’s a primary reason we don’t know how long today’s acquisition programs take: there is no broad consensus on how to define such a thing, let alone a desire to measure it.

You’d think we would have figured that stuff out by now. We apparently used to know, then we decided to un-figure it out, preferring instead to spend time arguing endlessly about stuff that doesn’t matter and doesn’t help.

When should we start the clock - at milestone A, B, I, II or some other point? When should we stop - at the Initial Operational Capability, Full Operational Capability, or some other point? How should we define “a program?”

Doesn’t matter. Not even a little.

Yes, these questions need answers, but almost any answer will do. Just pick one, be consistent with the measurements, then watch the trend. It really is that simple. Are some answers better and more useful than others? Of course. But any answer is better than none, and none is what we have today.
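This pick-one-and-track-it approach is simple enough to sketch in a few lines of code. Everything below is hypothetical: the program names, the dates, and the choice of Milestone B to IOC as the measurement window. The point is only that a single consistent definition makes the trend computable.

```python
from datetime import date

# Hypothetical programs, measured with ONE consistent definition:
# start = Milestone B decision, stop = Initial Operational Capability.
# Which definition we pick matters far less than applying it uniformly.
programs = [
    ("Program A", date(1990, 1, 1), date(1999, 1, 1)),
    ("Program B", date(1995, 6, 1), date(2006, 6, 1)),
    ("Program C", date(2001, 3, 1), date(2014, 3, 1)),
]

def cycle_time_months(start, stop):
    """Months elapsed between the chosen start and stop events."""
    return (stop.year - start.year) * 12 + (stop.month - start.month)

times = [cycle_time_months(start, stop) for _, start, stop in programs]
average = sum(times) / len(times)
print(f"average cycle time: {average:.1f} months")  # 132.0 months
```

Re-run the same calculation every year against the same definition and the trend appears on its own. Arguing over whether Milestone B was the “right” start point changes the absolute numbers but not the direction of the line.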

This agnostic approach makes sense to me, but based on a horde of actual debates I’ve witnessed, heard about and tried not to get engaged in, a lot of supposed experts fiercely disagree with me (and each other, of course). They’d rather fight to the death against anyone who thinks milestone B is (or isn’t) the right place to calculate the start of a program than see consistent data collected using a definition which varies slightly from their preferred perspective.

The report explains “the typical acquisition effort of the 1960’s required 7 years for completion. A review of MDAPs [Major Defense Acquisition Programs], using the 1996 SARs [Selected Acquisition Reports], found that major system[s] required 11 years (132 months) to progress from program start to initial operational capability. In 1998... DoD established the goal of delivering new MDAPs to the field in 25 percent less time...”

So far, so good. Someone made measurements and set an improvement goal. Why they didn’t aim for a full 50% reduction is unclear (chickens!), but even a 25% reduction is a step in the right direction. I’ll take it.

How’d things go? Well, the report explains that the “USD (AT&L) computed an average cycle time of 96.9 months for 48 MDAPs...” which means the DoD beat the goal by more than two months.
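For what it’s worth, the arithmetic behind “beat the goal by more than two months” is easy to check. The 132-month baseline and 96.9-month figure come from the report; the rest is just multiplication:

```python
baseline_months = 132.0        # 1996 SAR average cited by the report
goal = baseline_months * 0.75  # the 1998 "25 percent less time" target
reported = 96.9                # USD(AT&L) average for 48 MDAPs

print(f"goal:   {goal:.1f} months")             # 99.0 months
print(f"margin: {goal - reported:.1f} months")  # 2.1 months under goal
```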

Hooray! We set a goal and beat the goal! Good news, right? Well, don’t pop the champagne just yet.

The report goes on to say “We could not verify whether DoD met the... goal because the database... omitted programs and contained discrepancies. As a result, the average cycle time goal stated in the FY 2000 Annual Report of the Secretary of Defense may not be accurate.” (emphasis added)

Aw, crap.

Let’s summarize the situation: they collected some data, but it was not reliable or accurate data. It was a try, but if we judge the attempt on its results, it’s fair to say it was not a very good try. At this point, the obvious question is: Where’s the follow-up?

Surely, someone said: “Alright people, time to have another go. Please clean up the data, deal with the discrepancies and omissions, then re-crunch the numbers so we can find out whether we met the goal.” I mean, they made the initial effort, why stop there? Why not take another pass and this time do it right?

If there was a second attempt, it didn’t leave much of an evidence trail. Unfortunately, that’s typical of these things. There’s a big push to do something, an ineffective outcome, a final report that says “Um, we didn’t quite get there...” And then, silence.

Even if they pulled off a Top Secret do-over, there’s no evidence of actual improvements, because more than ten years down the road, we are still reading reports from the Defense Business Board with lines like this: “Major new programs take too long to bring to the field and are too expensive.”

OK, let’s change directions a bit. The whole question of average acquisition cycle times is interesting, but well outside most people’s circle of influence. For military technologists and acquisition practitioners, the average cycle times across the DoD matter less - far less - than whether our specific project is on schedule, right?

Here’s the thing - although we need to avoid optimizing a part at the expense of the whole, if we do a lot of individual projects faster, then the aggregate average just might take care of itself. So let’s talk about time as it applies to a particular, individual project. Are there things we could do to help make sure we deliver on time?

What if we decided it was important to deliver a project on time, then aligned our metrics and actions in such a way that they supported that goal? That’s one of those things that sounds obvious but is seldom done in actual practice. Instead, at the first sight of a problem, we tend to tack on a schedule extension (and ask for more money).

What if we fought tooth and nail to prevent delays, instead of treating schedule extensions like a best practice for problem solving?

And by “fought tooth and nail” I mean insisting on a focused simplicity in our requirements, avoiding over-reaching and over-engineering, and steering clear of the dreaded “rebaseline” approach (which tends to be a euphemism for “add time and money”).

What if we incentivized and rewarded early delivery... without simultaneously insisting that unrealistic and unnecessary breakthroughs occur on a predictable schedule? Again, providing such incentives isn’t terribly hard to do. Finding programs that do it, on the other hand, is much more difficult.

There are a million ways to encourage pursuit of a desirable outcome. It’s just a matter of deciding which outcomes we really desire. We’re actually quite good at meeting goals, particularly when they are appropriate goals unencumbered with other, mutually-contradictory goals. The trick is to set the right goals in the first place.

Look, if our goal is to build a four-sided triangle, I promise we’re going to end up with a rectangle every dang time, no matter how often we rebaseline and process-improve. That extra side doesn’t make the triangle better. It makes it something other than a triangle.

And lest there be any ambiguity, let me also say the goal isn’t to do a better job of tracking how much time we spend on acquisition programs, those earlier comments about measurements and metrics notwithstanding. The goal is to actually spend less time - maybe even 50% less time - on our acquisition programs. Collecting and examining real data might help us achieve that goal, but don’t confuse the measurement with the achievement.

In practical terms, one way to bring acquisition times down is to use the schedule to constrain the design - which is what the GAO frequently recommends, for what it’s worth. Those GAO ninjas just might be onto something.

Here’s how that would work: instead of the government telling the contractor “Here’s a huge list of everything I want the system to do. How long will it take and how much will it cost me?” (accompanied by not-so-subtle elbow nudges, winks, and whispers of “take as much time as you need”), we would reverse it and say “Here’s how much money and time I’ve got. How close can we get to the capability objectives without exceeding those amounts?” (accompanied by a steely-eyed, square-jawed countenance that says “Not one day or one dime more.”).
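As a toy illustration (not any real program’s trade study), the reversed question can be expressed as a constrained-selection problem. Every capability name and number below is invented:

```python
# capability: (value to the warfighter, cost in $M, integration months)
# All names and numbers are hypothetical.
wish_list = {
    "core sensor":     (10, 40, 12),
    "basic datalink":  (8, 20, 6),
    "stealth coating": (6, 90, 30),
    "advanced jammer": (5, 60, 24),
}

def design_to_budget(wants, max_cost, max_months):
    """Greedy pass: take the highest-value items that still fit the
    remaining cost and schedule budgets. A real trade study would be
    far more rigorous; this just puts the reversed question in code."""
    chosen, cost, months = [], 0, 0
    for name, (value, c, m) in sorted(
            wants.items(), key=lambda kv: -kv[1][0]):
        if cost + c <= max_cost and months + m <= max_months:
            chosen.append(name)
            cost, months = cost + c, months + m
    return chosen, cost, months

chosen, cost, months = design_to_budget(wish_list, max_cost=100, max_months=24)
print(chosen, f"${cost}M", f"{months} months")
```

The stealth coating and the jammer don’t fit the box, so they wait for a future block, a future system, or they never get built at all.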

As threats change, technologies mature, and additional capabilities become both necessary and available, we would then integrate them into future blocks and upgrades... or into some future system... or maybe not build them at all. How can I justify that final suggestion? Well, it turns out a lot of our supposed requirements aren’t truly required. The more restraint we can exercise over extraneous desirements, the better, both in terms of timeliness and operational performance.

The good news is most defense acquisition programs can be done in half the time. That’s also the bad news, because it means we’re falling short of the ideal. At least, we seem to be falling short. Until we start collecting the data and making it available to people like Mr. Kendall, we don’t know how far off target we really are. If we decide to collect this data, we’ve got to actually collect it and not be satisfied with a big stack of discrepancies and omissions.

If we’re serious about wasting less time on acquisition systems, we really should put a little effort into asking two key questions: how long should acquisition programs take and how long do they take? The answers just might help us push forward with principles and practices designed to reduce how long warfighters have to wait for new gear.

About the Author

Lt. Col. Dan Ward is an active duty acquisitions officer in the U.S. Air Force, currently deployed to Kabul, Afghanistan. The views expressed in this article are solely those of the author and do not reflect the official policy or position of the U.S. Air Force or Department of Defense.

Comments

@Forward - Glad to hear you read the Comanche article (it's at Time's Battleland blog for those who haven't seen it) and sorry it sounded like I was singling out Army acquisition. In a previous piece at Battleland I chided my own service about the JSF and the C-27J, and my other articles in several other outlets have looked at a pretty wide spectrum of acquisition programs, both historical and current. So yes, that piece was just about Comanche - but really, the issue was whether Comanche constituted a failed program or a good one... and our answer has implications for current programs (e.g. the JSF).

@Frankfurman - You're completely correct that the acquisition environment provides all sorts of perverse incentives. My point with this article was that while we could/should incentivize and reward speed, we're not even tracking the data on program timelines. It's pretty hard to reward outcomes we're not measuring. It seems to me that one of the first steps to correcting our incentive structure is to figure out (for example) how long acquisitions do take and how long they should take...

@TJ - Integrity is indeed a critical element of this whole puzzle. Thus my suggestion to set cost and schedule limits and stick to them. That's only part of the integrity concept, but it's a start. I would include design restraint in the integrity column as well - from an engineering and program management perspective, integrity means saying "Here's what we really need... and there's the stuff we don't really need."

Lt. Col. Ward, I believe you would agree that the complexity of a multi-service F-4 Phantom in no way approaches that of today's F-35. That alone adds time to the procurement cycle. Second, as history and future budgets indicate, whatever we buy today will be around for multiple decades. It makes sense to get it right even if it takes 5 years longer to ensure relevance 50 years from now. Failure to seek advanced technology now assures less relevance in even two decades, let alone five.

Second: pot, meet kettle. After reading your Battleland article about Comanche and noting that you are an acquisition officer, I'm kind of shocked that you would single out Army Aviation for criticism, given its successes with the multi-service H-60s, CH-47F, Raven/Hunter/RQ-7 Shadow/Warrior-A/MQ-1C Gray Eagle, and AH-64D Block III, to include the cutting-edge Longbow radar and UAS control from manned aircraft. Consider the problems experienced by the F-22, F-35, KC-X, CSAR-X, early Global Hawk, LAAR, and the UH-1 replacement for nuke personnel transport... did I leave any out?

Now let me reverse myself by noting that the KC-X was not entirely the USAF acquisition community's fault, because union and other politics got involved. Then again, it was ironic that after Boeing underbid Airbus the second time around, we saw Spirit and Midwest American jobs go away anyway. We also end up with a tanker less capable of handling longer Pacific distances, because the USAF changed requirements in a manner that clearly favored Boeing: Airbus advantages were not even considered unless they fell within a certain cost delta.

The CSAR-X competition illustrated that other than appearances, there really is no similarity between the MEDEVAC and CSAR mission, despite what Michael Yon believes. Perhaps there is some question whether any non-stealthy helicopter could be an effective CSAR platform against capable foes. Which brings us to Comanche...

It is amusing that your Battleland article chastised Army Aviation for trying to build a stealthy Comanche, which resulted in losses of $6.9 billion after the program ended. If it had continued, it might have spent $43 billion to procure over 1,200 aircraft at a cost of $35.3 million each. Shall we compare stealthy Comanche costs to the $80+ billion the USAF will have spent on 187 stealth F-22s by the time they are brought up to planned increment levels?

On Memorial Day, we honor the lives of all servicemembers lost since our nation's founding. Since WWII, however, Soldier and Marine lives have been lost at a far higher rate than those of Sailors and Airmen. Shouldn't we spend as much on ground troops to ensure the same level of unfair fight that Sailors and Airmen already enjoy? More was spent over the past decade on ground troops to keep their loss levels at unprecedented lows relative to past wars. What is wrong with that?

As for Comanche: because Army Aviation had to struggle to fight for dollars during the Clinton procurement holiday, the low $6.9 billion expended over that decade-plus explains many program delays. Your article touched briefly on the fact that the mission to get bin Laden may have involved Comanche tech. Who knows? We do know that the T800 engine has been used on the Sikorsky X-2 demonstrator, powering the prototype to 250 knots. We see the Brits using a variant of the engine on their proposed Super Lynx 300, and Turkey's T-129, which is projected to weigh over 11,000 lbs. Considering that a more powerful T800 variant was to power the 12,000-lb Comanche, I guess that partially answers your question about whether the aircraft could get off the ground.

Sensor enhancements, artificial intelligence, and manned-unmanned teaming were also proposed for Comanche, if I recall correctly. Such programs continue today. Other redundancy enhancements, use of composites, and aircraft survivability equipment techniques have also found their way into current rotorcraft requirements. The result has been far fewer aircraft lost than in Vietnam, or than the Soviets experienced in their decade in Afghanistan.

Perhaps the failed deep attacks of early OIF illustrated that the concept of Comanche deep strikes was no longer as viable. However, the radar air defense threat remains, which calls into question the viability of systems like the MV- and CV-22 that gain much of their speed and range from high-altitude flight that would not be viable against radar air defenses. An X-2-like rotorcraft could match tilt-rotor speeds while flying lower, under most air defense radar threats. In any event, with the number of cell towers and wires found at lower flight levels, some speed must be sacrificed at night to safely employ night vision devices.

So while I commend you and yours for your service in Iraq and Afghanistan, and for the lives your manned aircraft, RPA operators, airlifters, and TACP/JTACs have saved in current wars, please continue to figure out how the air service can spend less on its aircraft, buying more unmanned aircraft for instance. That might free monies for other services whose lives and unfair advantage matter just as much. Please, no comments about the need to first maintain air superiority. We have over 200 stealth aircraft operational, while potential enemies have just prototypes that themselves will be costly and fewer in number than our thousands of F-35s.

Selective reading of history, such as your chart comparing low-tech to high-tech procurement months, and past air-to-air challenges vs. current air defense threats, is not helpful to a future with different trends. We have spent so much on airpower that future foes will not be able to afford our quality in any substantial quantity based on defense budgets alone. They will be able to afford air defense and surface-to-surface missiles, however. Joint unmanned aircraft can help address those threats, and manned/unmanned teaming between F-35s and USAF RPAs could be a reality... if the white-scarf attitude is replaced with one more likely to ensure joint force commander success... and fewer lives lost to remember on future Memorial Days.

"The CSAR-X competition illustrated that other than appearances, there really is no similarity between the MEDEVAC and CSAR mission, despite what Michael Yon believes."

Yon knows that there are fundamental differences between MEDEVAC and CSAR as technically defined. His argument is that Afghanistan is not Vietnam. Actual CSAR missions have been comparatively non-existent, so Army MEDEVAC and Air Force Pedro helicopters are both performing medical evacuations as their primary missions. Pedros have the advantage of being able to launch immediately and defend themselves in some circumstances that would leave the MEDEVAC helicopters on the tarmac waiting for an armed UH-60, a Kiowa Warrior, or an Apache as an escort. Pedros also fly with 2-3 paramedic-trained crew members, where the regular Army MEDEVAC crews are typically EMT-Basic trained.

Recent acquisition of HH-60M MEDEVAC helicopters has been a mixed bag of moving forward and going backward. A recent internal report from Afghanistan by the MEDEVAC Proponent detailed how the entire medical package has to be stripped out of HH-60Ms deployed to mountainous regions in order for them to reach FOBs previously serviced by UH-60A/L MEDEVAC models. It turns out that adding 2,000 lbs of weight affects performance - who knew? They also have a bubble window on each door that causes airsickness in crew members because of imperfections in the Plexiglas and limits visibility for monitoring hover and landing hazards - especially when NVGs are used. These defects were first noted in an Army report in 1994, yet helicopters shipping today still have them.

The Army also tried using rapid acquisition for the LUH72 MEDEVAC helicopter for non-combat areas. It is the militarized version of a European civilian ambulance helicopter. After beating out Sikorsky, the contracts were signed and shipments began. Then the DoD determined that the helicopter didn't meet the requirements for functioning as a MEDEVAC helicopter for a variety of reasons, including:
- While you can put two litters in the rear compartment, there is less than 6" separating the litters, which prevents any in-flight care from being rendered
- The inadequate lighting in the rear compartment wouldn't allow any aid to be given if there was only one litter or an ambulatory patient
- There were no handholds in the cabin for crew members, nor rails or hooks for suspending IV bags, etc.
- The Army specified that the cabin air conditioning unit be removed, only to discover later that the cabin could rapidly reach temperatures 20-40 degrees higher than ambient, leaving pilots in protective flight suits enduring temperatures over 110 degrees
- In ambient temperatures above 80 degrees, the flight avionics would shut down after only 30 minutes due to component overheating.

I agree completely with the overall thrust of the article, but one concept is ignored. With the way we structure contracts, there's every incentive for programs to find themselves over-budget and late. Take cost-plus contracts for example. Though it's a crime that we ever write contracts this way, we continue to do so. Essentially, if a contractor builds us Gadget X, for a predicted cost of $10 Billion, they can later say that it costs $20 Billion, with a certain percentage of net profit guaranteed by the contract (the cost plus a percentage). If we cancel, large cancellation fees (often totaling hundreds of millions of dollars) are levied. Therefore, all risk is underwritten by DoD.

The point is that we structure contracts in such a way that cost and time overruns aren't just unpunished, they're incentivized. When such behavior is financially incentivized, it is inevitable.

We deserve to get fleeced; we write the contracts that way.

Imagine, for one second, an alternate method. We structure the contract for Gadget X such that part of the payment is held in escrow, and payout is maximized by how quickly the project is completed (for submission for our approval, so companies aren't hamstrung by us). There's a hard deadline written in, after which cancellation fees are levied on the contractor. Risk is shared by both DoD and the contractor. All we would need is the fortitude to say that if the contractor couldn't meet the deadline, we could live with cancelling the project outright. Could we?
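The two payment structures described above can be sketched in code. Every rate and parameter here is a made-up illustration, not a real contract term:

```python
def cost_plus_payout(actual_cost, fee_rate=0.10):
    # Cost-plus-a-percentage-of-cost: the government reimburses whatever
    # was actually spent, plus a fee that GROWS with the overrun.
    return actual_cost * (1 + fee_rate)

def escrow_payout(price, months_taken, deadline_months,
                  escrow_share=0.20, penalty_per_month=0.02,
                  cancel_fee=0.10):
    # The alternative sketched above: part of the fixed price sits in
    # escrow and shrinks as delivery slips past a grace window; beyond
    # the hard deadline the contractor owes a cancellation fee instead.
    if months_taken > deadline_months:
        return -cancel_fee * price
    late = max(0, months_taken - (deadline_months - 12))
    earned = max(0.0, escrow_share - penalty_per_month * late)
    return price * (1 - escrow_share + earned)

# A $10B program whose actual cost doubles to $20B: under cost-plus,
# the overrun raises the contractor's fee from $1B to $2B.
print(f"cost-plus: {cost_plus_payout(20e9) / 1e9:.1f}B")       # 22.0B
print(f"on time:   {escrow_payout(10e9, 36, 48) / 1e9:.1f}B")  # 10.0B
print(f"slipping:  {escrow_payout(10e9, 44, 48) / 1e9:.1f}B")  # 8.4B
print(f"blown:     {escrow_payout(10e9, 50, 48) / 1e9:.1f}B")  # -1.0B
```

Under the first structure the contractor's payout rises with cost and is indifferent to time; under the second it is capped and falls with time. Whatever the exact rates, that sign flip is the whole argument.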

These are great questions, but we are missing a fundamental predecessor in this discussion: the need for the Pentagon Establishment to have a great deal more integrity than it displays when it comes to program management in DoD. They are willing to throw a man in jail or reassign him to a remote base on specious charges in order to hold onto power and keep their preferred conclusions intact. A General can waste a billion taxpayer dollars or (almost) lose a war, but perish the thought of actually listening to anyone advocating reform.

DoDIG should be taken with a grain of salt. When they criticize the Pentagon, they are more likely to be correct, but such cases can be muted. In one case I observed, they artificially narrowed the scope of the audit to the point of reducing the severity of the finding. They are sensitive to the egos of the Establishment; internal DoD politics matters more than truth when spending taxpayer money on military toolsets.

No matter how much integrity and independence these auditors are supposed to have, they favor the home-town team. Michael Jordan could sometimes travel or push off in the United Center, and the Pentagon Establishment sometimes gets softer evaluations than it should from DoDIG.

The game, then, is that the Pentagon Establishment sometimes ignores the severity of the critique when it suits them, saying they concur with the audit but acting as if the audit never occurred. A softer eval is easier to then ignore, and the Military-industrial complex goes on its merry way, with everyone scratching everyone else's back.

We seem to believe past systems never had problems. Heck, the Predator flunked its DoD Initial Testing. I'm sure lots of IG types and former fighter jocks would have flooded the airwaves denouncing those new-fangled efforts, had they had today's internet forums.

At the Joint Warfighters Conference, I seem to recall a Vice Admiral saying something about excessive oversight. Too many bureaucratic bailiwicks to go through, and too many check-off staffing and approval benchmarks. Did I just read about 138 different variants of the F-16? Is that what you get when you move too fast on a system that will be around 50 years?