
Hot Topics at the National Defense Industrial Association’s Integrated Program Management Division (NDIA-IPMD)

For those of you who did not attend, or who have only a passing interest in what is happening in the public sphere of DoD acquisition, the NDIA IPMD meeting held last week was of great importance. Here are the highlights.

Electronic Submission of Program Management Information under the New Proposed DoD Schema

Those who have attended meetings in the past, and who read this blog, know where I stand on this issue, which is to capture all of the necessary information that provides a full picture of program and project performance among all of its systems and subsystems, but to do so in an economically feasible manner that reduces redundancy, reduces data streams, and improves timeliness of submission. Basic information economics states that a terabyte of data is only incrementally more expensive than a megabyte, the difference being essentially the cost of the electricity consumed. Basic experience in IT management demonstrates that automating a process to eliminate touch labor in data production and validation improves productivity and speed.

Furthermore, if a supplier in complex program and project management is properly managing–and has sufficient systems in place–then providing the data necessary for the DoD to establish accountability and good stewardship, to ensure that adequate progress is being made under the terms of the contract, to ensure that contractually required systems that establish competency are reliable and accurate, and to utilize in future defense acquisition planning should not be a problem. We live in a world of 0s and 1s. What we expect of our information systems is to do the grunt work of handling ever larger volumes of data and turning them into information. In this scenario the machine is the dumb one, and the person assessing the significance and context of what is processed into intelligence is the smart one.

The most recent discussions and controversies centered on the old canard regarding submission at the Control Account as opposed to the Work Package level of the WBS. Yes, let’s party like it’s 1997. The other issue was whether cumulative or current data should be submitted. I have issues with both of these items, which continue to arise like bad zombie ideas. You put a stake in them, but they just won’t die.

To frame the first issue: some organizations and project teams link budget to the control account, and others to the work package, so practice is not the determinant; but the question speaks directly to earned value management (EVM). The receiving organization is going to want the lowest level of reporting at which there is foot-and-tie not only to budget, but to other systems. This is the rub.

I participated in a still-unpublished study for DoD which indicated that if one uses earned value management (EVM) exclusively to manage, then it doesn’t matter. You get a bit more fidelity and early warning at the work package level, but not much.

But note my conditional.

No one exclusively uses EVM to manage projects and programs. That would be foolish and seems to be the basis of the specious attack on the methodology when I come upon it, especially by baby PMs. The discriminator is the schedule, and the early warning is found there. The place where you foot-and-tie schedule to the WBS is at the work package level. If you are restricted to the control account for reporting you have a guessing game–and gaming of the system–given that there will be many schedule activities to one control account.
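The point about fidelity can be made with a toy sketch. In this minimal Python example the figures are hypothetical; the mechanics show how rolling several work packages up to one control account lets healthy packages mask a slipping one.

```python
# Toy illustration of work-package vs. control-account reporting fidelity.
# Figures are hypothetical. SPI = earned value (BCWP) / planned value (BCWS).

work_packages = {
    "WP-101": {"bcws": 100.0, "bcwp": 105.0},  # slightly ahead of plan
    "WP-102": {"bcws": 100.0, "bcwp": 70.0},   # slipping badly
    "WP-103": {"bcws": 100.0, "bcwp": 110.0},  # ahead of plan
}

# At the work package level the slip in WP-102 is visible immediately.
for name, wp in work_packages.items():
    print(name, "SPI =", round(wp["bcwp"] / wp["bcws"], 2))

# Rolled up to the control account, the same data looks nearly on plan.
bcws = sum(wp["bcws"] for wp in work_packages.values())
bcwp = sum(wp["bcwp"] for wp in work_packages.values())
print("Control account SPI =", round(bcwp / bcws, 2))  # 0.95
```

This is also why the guessing game arises at the control account level: several schedule activities map to one account, and the rolled-up figure cannot tell you which of them is the problem.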

Furthermore, the individual reviewing EVM and schedule will want to ensure that the Performance Measurement Baseline (PMB) and the Integrated Master Schedule (IMS) were not constructed in isolation from one another. There needs to be evidence that the work planned under the cost plan matches the work in time.

Regarding cumulative versus current dollar submission, the issue is one of accuracy. First, consecutive cumulative submissions require that the prior figure be subtracted from the latest, which causes rounding errors, errors that are exacerbated if reporting is restricted to the control account level. NDIA IPMD had a long discussion on the intrinsic cumulative-to-cumulative error at a meeting last year, which was raised by Gary Humphreys of Humphreys & Associates. Second, cumulative submissions often hide retroactive changes. Third, to catch the items in my second point, one must execute cross-checks across different types of data, rather than getting a dump from the system of record and rolling up. The more operations and manipulations applied to data, the harder it becomes to ensure fidelity and to get everyone to agree on one trusted source; that is, to have everyone reading off of the same page.
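A minimal Python sketch of the first two failure modes, using hypothetical figures: differencing consecutive rounded cumulative submissions yields current-period values that never quite match reality, and a restated prior period is invisible in the cumulative stream alone.

```python
# Hypothetical cumulative submissions, rounded at each reporting period
# as a rolled-up report might present them.

true_current = [100.4, 100.4, 100.4]   # actual current-period values ($K)

cumulative = []
running = 0.0
for value in true_current:
    running += value
    cumulative.append(round(running))  # rounded at submission: 100, 201, 301

# Reconstructing current-period values by differencing the cumulatives:
derived = [cumulative[0]] + [cumulative[i] - cumulative[i - 1]
                             for i in range(1, len(cumulative))]
print(derived)  # [100, 101, 100], none of which matches the true 100.4

# A retroactive change to an earlier period merely shifts the later
# cumulative figures; without a current-period stream, the restatement
# cannot be distinguished from ordinary period-over-period change.
```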

When I was asked about my opinion on these issues, my response was twofold. First, as the head of a technology company it doesn’t matter to me. I can handle data in accordance with the standard DoD schema in any way specified. Second, as a former program management type and as an IT professional with an abiding dislike of inefficient systems, the restrictions proposed are based on the limitations of proprietary systems in use by suppliers that, in my opinion, need to be retired. The DoD and A&D market is somewhat isolated from other market pressures, by design. So the DoD must artificially construct incentives and an ecosystem that pushes businesses (and its own organizations) toward greater efficiency and innovation. We don’t fly F-4s anymore, so why continue to use IT business systems designed in 1997 that are solely supported by sunk-cost arguments and rent-seeking behavior?

Thus, my recommendation was that it was up to the DoD to determine the information required to achieve their statutory and management responsibilities, and it is up to the software solution providers to provide the, you know, solutions that meet them.

I was also asked if I agreed with another solution provider that the software companies should have another go at the schema prior to publication. My position was consistent in that regard: we don’t work the refs. My recommendation to OSD, given that I was in a similar position regarding an earlier initiative along the same lines back when I wore a uniform, is to explore the art of the possible with suppliers. The goals are to reduce data streams, eliminate redundancy, and improve speed. Let the commercial software guys figure out how to make it work.

Current projection is three to four weeks before a final schema is published. We will see if the corresponding documentation will also be provided simultaneously.

DCMA EVAS – Data-driven Assessment and Surveillance

This is a topic for which I cannot write without a conflict of interest since the company that is my day job is the solution provider, so I will make this short and sweet.

First, it was refreshing to see three Hub leads at the NDIA IPMD meeting. These are the individuals in the field who understand the important connection between government acquisition needs and private industry capabilities in the logistics supply chain.

Second, despite a great deal of behind-the-scenes speculation and drama among competitors in the solution provider market, DCMA affirmed that it had selected its COTS solution and that it was working with that provider to work out any minor issues now that Milestone B has been certified and the program is in full implementation.

Third, DCMA announced that the Hubs would be collecting information and that the plan for a central database for EVAS that would combine other DoD data has been put on hold until management can determine the best course for that solution.

Fourth, the Agency announced that the first round of using the automated metrics would begin later this month and that the effort would continue into October.

Fifth, the Agency tamped down some of the fear related to this new process, noting that tripping metrics may simply indicate that additional attention was needed in that area, including those cases where it simply needed to be documented that the supplier’s System Description deviated from the standard indicator. I think this will be a process of familiarization as the Hubs move out with implementation.

DCMA EVAS, in my opinion, is a significant reform of the way the agency does business. It not only drives process and organizational improvement within the agency by eliminating uneven and arbitrary determinations of contract non-compliance (as well as improvements in data management), but opens a dialogue regarding systems improvement, driving similar changes to the supplier base.

NDAA Section 804

There were a couple of public discussions on NDAA Section 804; if you are not certain what it is, you should go to this link. Having kept track of developments in the NDAA for this coming fiscal year, what I can say is that the final language of Section 804 doesn’t say what many thought it said when it was in draft.

What it doesn’t authorize is a broad authority to overrule other statutory requirements for government accountability, oversight, and reporting, including the requirement for earned value management on large programs. This statement is supported by both OSD speakers that addressed the issue in the meeting.

The purpose of Section 804 was to provide the ability to quickly prototype and field new technologies in the wake of 9/11, particularly as it related to identifying, tracking, and preventing terrorist acts. But the rhetoric behind this section, which was widely touted by elected representatives long before the final version of the current NDAA had been approved, implied a broader mandate for more prosaic acquisitions. My opinion, having seen programs like this before (think of the Navy A-12 program), is that if people use this authority too broadly, we will be discussing more significant issues than the minor DCMA program that ends this blog post.

Thus, the message coming from OSD is that there is no carte blanche get-out-of-jail card for covering yourself under Section 804 and deciding that lack of management is a substitute for management, and that failure to obtain timely and necessary program performance information does not mean that it cannot be forensically traced in an audit or investigation, especially if things go south. A word to the wise, while birds of a feather catch cold.

a. Day-to-day program management will be pushed to the military services. No one really seems to understand what this means. The services already have PMOs in place that do day-to-day management. The policy part of the old AT&L will be going intact to A&S, as will program analysis. The personnel cuts that were earmarked for some DoD departments were largely avoided in the reorganization, except at the SES level, which I will address below.

b. Other Transaction Authority (OTA) and Section 804 procurements are getting a lot of attention, but they seem ripe for abuse. I was actually a member of a panel on Acquisition Reform at the NDIA Training and Simulation Industry Symposium held this past June in Orlando. I thought the focus would be on the recommendations from the 809 panel but the discussion, instead, turned out to be on OTA and Section 804 acquisitions. What impressed me the most was that even companies that had participated in these types of contracting actions felt that they were unnecessarily loosely composed, which would eventually impede progress upon review and audit of the programs. The consensus in discussions with the audience and other panel members was that the FAR and DFARS already possess sufficient flexibility if Contracting Officers are properly trained to know how to construct such a requirement and still stay between the lines, absent a serious operational need that cannot be met through normal acquisition methods. Furthermore, OTA SME knowledge is virtually non-existent. Needless to say, things like Nunn-McCurdy and the new Congressional reporting requirements in the latest NDAA still need to be met.

c. The emphasis in the department, it was announced, would also shift to a focus on portfolio analysis, but–again–no one could speak to exactly what that means. PARCA and the program analysis personnel on the OSD staffs provide SecDef with information on the entire portfolio of major programs. That is why there is a DoD Central Repository for submission of program data. If the Department is looking to apply principles that provide flexibility in identifying risks and tradeoffs across the portfolio, then that would be most useful and a powerful tool in managing resources. We’ve seen efforts like Cost as an Independent Variable (CAIV) and other tradeoff methods come and go; it would be nice if the department would reward the identification of programmatic risk early and often in program go/no-go/tradeoff/early production decisions.

d. To manage over $7 trillion of programs, PARCA’s expense is $4.5M. The OSD personnel made this point, I think, to emphasize the return on investment of their role regarding oversight, risk identification, and root cause analysis, with an eye to efficiency in the management of DoD programs. This is like an insurance policy and a built-in DoD change agent. But from my outside reading, there was a move by Representative Mac Thornberry, who is Chairman of House Armed Services, to hollow out OSD by eliminating PARCA and much of the AT&L staffs. I had discussions with the staffs of other Congressional members of the Armed Services Committee when this was going on, and the cause seemed to be a lack of understanding of the extent to which DoD has streamlined its acquisition business systems, of how key PARCA, DCMA, and the analysis and assessment staffs are to the acquisition ecosystem, and of how they foot and tie to the service PEOs and PMOs. Luckily for the taxpayer, it seems that Senate Armed Services members were aware of this and took the language out during markup.

Other OSD Business — Reconciling FAR/DFARS, and Agile

a. DoD is reconciling differences between overlapping FAR and DFARS clauses. Given that the DFARS is more detailed and specific in identifying reporting requirements and specifying oversight of contracts by dollar threshold, complexity, risk, and contract type, it will be interesting to see how this plays out over time. The example given by Mr. John McGregor of OSD was the difference between the FAR and DFARS clauses regarding the application of earned value management (EVM). The FAR clause is more expansive and cut-and-dried. The DFARS clause distinguishes the level of EVM reporting and oversight (and surveillance) that should apply based on more specific criteria regarding the nature of the program and the contract characteristics.

b. The issue of Agile and how it supposedly excuses dispensing with estimating, earned value management, risk management, and other proven program management controls was addressed. This contention is, of course, poppycock, and Glen Alleman on his blog has written extensively about this zombie idea. The 809 Panel seemed to have been bitten by it, though: its members were convinced that Agile is a program or project management method, and that there is a dichotomy between Agile and the use of EVM. The salient point in critiquing this assertion was effectively made by the OSD speakers. They noted that they attend many forums and speak to various groups about Agile, and that there is virtually no consensus about what exactly it is and what characteristics define it, but everyone pretty much recognizes it as an approach to software development. Furthermore, EVM is used today, and used very effectively, on programs that at least partially employ Agile software development methodology. It’s not like crossing the streams.

Gary Bliss, PARCA – Fair Winds and Following Seas

The blockbuster announcement at the meeting was the planned retirement of Gary Bliss, the Director of PARCA, on 30 September 2018. This was due to the cut in billets at the Senior Executive Service (SES) level. He will be missed.

Mr. Bliss has transformed the way that DoD does business, and he has done so by building bridges. I have been attending NDIA IPMD meetings (including under its old PMSC name) for more than 20 years. Over that time, from when I was near the end of my uniformed career attending the government/joint session, and, later, when I attended full sessions after joining private industry, I have witnessed a change for the better. Mr. Bliss leaves behind an industry that has established collaboration with DoD and federal program management personnel as its legacy for now and into the future.

Before the formation of PARCA all too often there were two camps in the organization, which translated to a similar condition in the field in relation to PMOs and oversight agencies, despite the fact that everyone was on the same team in terms of serving the national defense. The issue, of course, as it always is, was money.

These two camps would sometimes break out in open disagreement and expressed disparagement of the other. Mr. Bliss brought in a gentleman by the name of Gordon Kranz and together they opened a dialogue in meeting PARCA’s mission. This dialogue has continued with Mr. Kranz’s replacement, John McGregor.

The dialogue has revolved around finding root causes for the long delays between development and production in program management; recommending ways of streamlining processes and eliminating impediments; rooting out redundancy, inefficiency, and waste throughout the program and project management supply chain; communicating with industry so that it understands the reasons for particular DoD policies and procedures; obtaining feedback on the effects of those decisions and how they can be implemented to avoid arbitrariness; and providing certainty to those who would seek to provide supplies and services to the national defense–especially innovative ones–in defining the rules of engagement. The focus was on collaborative process improvement–and it has worked. Petty disputes occasionally still arise, but they are the exception to the rule.

Under his watch Mr. Bliss established a common trusted data stream for program management data, and forged policies that drove process improvement from the industrial base through the DoD program office. This was not an easy job. His background as an economist and his long distinguished career in the public service armed him well in this regard. We owe him a debt of gratitude.

We can all hope that the next OSD leadership that assumes that role will be as effective and forward leaning.

Final Thoughts on DCMA report revelations

The interest I received in my last post on the DCMA internal report regarding the IWMS project was very broad, but the comments that I received expressed some confusion about what I took away as the lessons learned in my closing paragraphs. The reason for this was the leaked nature of the reports, which alleged breaches of federal statute and other administrative and professional breaches, some of a reputational nature. They are not the final word, and for anyone to draw final conclusions from leaked material of that sort would be premature. But here are some initial lessons learned:

Lesson #1: Do not split requirements and game the system to fall below financial thresholds to avoid oversight and management approval. This is a Contracts 101 issue and everyone should be aware of it.

Lesson #2: Ensure that checks and balances in the procurement process are established and maintained. Under the moniker of acquisition reform and “flexibility”, CIOs and PMs have been given too much power to make, on their own, decisions that require collaboration, checks, and internal oversight. In normative public sector acquisition environments the end-user does not get to select the contractor, the contract type, the funding sources, or the acquisition method involving fair and open competition–or a deviation from it. Nor, having directed the procurement, should the same individual(s) certify receipt and acceptance. Establishing checks and balances without undermining operational effectiveness requires a subtle hand, in which different specialists working within a matrix organization, with differing chains of command and responsibility, ensure that there is integrity in the process. All members of this team can participate in planning and collaboration for the organization’s needs. It appears, though it is not completely proven, that some of these checks and balances did not exist. We do know from the inspections that Contracting Officer’s Representatives (CORs) and Contracting Officer’s Technical Representatives (COTRs) were not appointed for long-term contracts in many cases.

Lesson #3: Don’t pre-select a solution from a particular supplier. The way to avoid this is to understand the organization’s current and future needs and to capture that expression in a set of salient characteristics, a performance work statement, or a statement of work. This document is then shared with the marketplace through a formalized and documented process of discovery, such as a request for information (RFI).

Lesson #4: I am not certain whether the reports indicate that a legal finding on the appropriate color of money is a sufficient defense, but they seem to. This can be a controversial topic within an organization and oftentimes yields differing opinions. Sometimes the situation can be corrected by the substitution of the proper money for that fiscal year by higher authority. Some other examples of Anti-Deficiency Act (ADA) violations can be found via this link, published by the Defense Comptroller. I’ve indicated from my own experience how, going from one activity to another as a uniformed Navy Officer, I ran into Comptrollers with different opinions of the appropriate color of money for particular types of supplies and services at the same financial thresholds. They can’t all have been correct. I guess I am fortunate that over 23 years–18 of them as a commissioned Supply Corps Officer* and five before that as an enlisted man–I never ran into an ADA violation in any transaction in which I was involved. The organizations I was assigned to had checks and balances to ensure there was not a statutory violation which, I may add, is a federal crime. Thus, no one should be cavalierly making this assertion as if it were simply an administrative issue. But not everyone in the chain is responsible, unless misconduct or criminal behavior across that chain contributed to the violation. I don’t see that in these reports. Systemic causes require systemic solutions and education.

Note that all of these lessons learned are taught as basic required knowledge in acquisition classes and in regulation. I also note that, in the reports, there are facts of mitigation. It will be interesting to see what eventually comes out of this.

Dave Gordon at AITS.org takes me to task on my post recommending the use of common schemas for certain project management data. Dave’s alternative is to specify common APIs instead. I am not one to dismiss alternative methods of reconciling disparate and, in their natural state, non-normalized data in order to find the most elegant solution. My initial impression, though, is: been there, done that.

Regardless of the method used to derive significance from disparate sources of data of a common type, one still must obtain the cooperation of the players involved. The ANSI X12 standard has been in use in the transportation industry for quite some time and has worked quite well, leaving the preference for a proprietary solution up to the individual shippers. The rule has been, however, that if you are going to write solutions for that industry, you need to ensure that the shipping information needed by any receiver conforms to a particular format, so that it can be read regardless of the software involved.

Recently the U.S. Department of Defense, which had used certain ANSI X12 formats for particular data for quite some time, has published and required a new set of schemas for a broader set of data under the rubric of UN/CEFACT XML. Thus, it has established the same approach as the transportation industry: taking an agnostic stance regarding software preferences while specifying that submitted data must conform to a common schema, so that one proprietary file type is not given preference over another.

A little background is useful. In developing major systems, contractors are required to provide project performance data in order to ensure that public funds are being expended properly for the contracted effort. This is the oversight responsibility portion of the equation. The other side concerns project and program management. Given the cost-plus contract types most often used, the government program management office, in cooperation with its commercial counterpart, looks to identify the manifestation of cost, schedule, and/or technical risk early enough to allow that risk to be handled as necessary. Also, at the end of this process, which is only now being explored, is the usefulness of years of historical data across contract types, technologies, and suppliers that can be used to benefit the public interest by demonstrating which contractors perform better, by showing the inherent risk associated with particular technologies through parametric methods, and by yielding a host of insights derived through econometric project management trending and modeling.

So let’s assume that we can specify APIs for requesting the data in lieu of specifying that the customer receive an application-agnostic file that can be read by any application that conforms to the data standard. What is the difference? My immediate observation is that it reverses the relationship of who owns the data. In the case of the API, the proprietary application becomes the gatekeeper. In the case of an agnostic file structure, the data is open to everyone and the consumer owns it.
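The difference can be shown with a minimal sketch. The element names below are invented for illustration and are not the actual UN/CEFACT schema; the point is that with a schema-conformant file, any consumer can process the submission with standard tooling and no vendor in the loop.

```python
# With an application-agnostic, schema-conformant file, the consumer owns
# the data: anything that reads XML can process the submission. Element
# names here are hypothetical, NOT the real UN/CEFACT schema.
import xml.etree.ElementTree as ET

submission = """
<ProgramReport>
  <ControlAccount id="CA-001">
    <BCWS>1200.0</BCWS>
    <BCWP>1100.0</BCWP>
  </ControlAccount>
</ProgramReport>
"""

root = ET.fromstring(submission)
for ca in root.iter("ControlAccount"):
    bcws = float(ca.findtext("BCWS"))
    bcwp = float(ca.findtext("BCWP"))
    print(ca.get("id"), "SPI =", round(bcwp / bcws, 3))

# Under an API-only regime, the same read would require a call into the
# vendor's application, putting the proprietary system in the gatekeeper
# role for data the government paid to have produced.
```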

In the API scenario, large players can do what they want to limit competition and extensions to their functionality. Since they can black-box the manner in which data is structured, it also becomes increasingly difficult to make qualitative selections from the data. The very example that Dave uses–the plethora of one-off mobile apps–usually must exist only within their own ecosystems.

So it seems to me that the real issue isn’t that Big Brother wants to control data structure. What it comes down to is that specifying an open data structure prevents one solution provider, or a group of them, from controlling the market through restrictions on access to the data. This encourages maximum competition and innovation in the marketplace–Data Neutrality.

I look forward to additional information from Dave on this issue. Each of the methods of achieving the end of Data Neutrality isn’t an end in itself. Any method that is less structured and provides more flexibility is welcome. I’m just not sure that we’re there yet with APIs.

When we wake up in the morning we enter the day with a set of assumptions about ourselves, our environment, and the world around us. So too when we undertake projects. I’ve just returned from the latest NDIA IPMD meeting in Washington, D.C., and the most intriguing presentation at the meeting was given by Irv Blickstein regarding a RAND root cause analysis of major program breaches. In short, a major cost breach is defined by the Nunn-McCurdy amendment, first passed in 1982, under which a breach occurs when a major defense program exceeds its projected baseline cost by more than 15%.
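As a simplified sketch of that trigger (the actual statute is more nuanced, distinguishing significant from critical breaches and measuring against both the original and the current baseline estimate, so treat this only as an illustration of the 15% test):

```python
def nunn_mccurdy_breach(baseline_cost: float, current_estimate: float,
                        threshold: float = 0.15) -> bool:
    """Simplified breach test: cost growth over baseline exceeds the threshold.

    The real statute distinguishes significant vs. critical breaches and
    original vs. current baselines; this only illustrates the 15% trigger.
    """
    growth = (current_estimate - baseline_cost) / baseline_cost
    return growth > threshold

print(nunn_mccurdy_breach(10_000, 11_000))  # 10% growth -> False
print(nunn_mccurdy_breach(10_000, 12_000))  # 20% growth -> True
```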

The issue of what constitutes programmatic success and failure has generated a fair amount of discussion among the readers of this blog. The report, which is linked above, is full of useful information regarding Major Defense Acquisition Program (also known as MDAP) breaches under Nunn-McCurdy, but for purposes of this post readers should turn to page 83. In setting up a project (or program), project/program managers must make a set of assumptions regarding the “uncertain elements of program execution” centered around cost, technical performance, and schedule. These assumptions are what are referred to as “framing assumptions.”

A framing assumption is one for which there are signposts along the way to determine if an assumption regarding the project/program has changed over time. Thus, according to the authors, the precise definition of a framing assumption is “any explicit or implicit assumption that is central in shaping cost, schedule, or performance expectations.” An interesting aspect of their perspective and study is that the three-legged stool of program performance relegates risk to serving as a method that informs the three key elements of program execution, not as one of the three elements. I have engaged in several conversations over the last two weeks regarding this issue. Oftentimes the question goes: can’t we incorporate technical performance as an element of risk? Short answer: No, you can’t (or shouldn’t). Long answer: risk is a set of methods for overcoming the implicit invalidity of the single-point estimates found in too many systems in use (like estimates-at-complete, estimates-to-complete, and the various indices found in earned value management), as well as a means of incorporating qualitative environmental factors not otherwise categorizable; it is not an element essential to defining the end-item application being developed and produced. Looked at another way, if you are writing a performance specification, then performance is a key determinant of program success.

Additional criteria for a framing assumption are also provided in the RAND study. The assumption must be determinative; that is, the consequences of the assumption being wrong significantly affect the program in an essential way. It must also be unmitigable; that is, the consequences of the assumption being wrong are unavoidable. It must be uncertain; that is, whether it will prove right or wrong cannot be determined in advance. It must be independent, not dependent on another event or series of events. Finally, it must be distinctive, setting the program apart from other efforts.
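One way to read these criteria is as a screening filter: a candidate assumption qualifies as a framing assumption only if all five tests hold. The sketch below is my own illustration, not a RAND artifact; the field names simply mirror the five criteria, and the example candidate is hypothetical.

```python
# Screening candidate assumptions against the five criteria from the study.
# Field names mirror the criteria in the text; the structure is illustrative.

CRITERIA = ("determinative", "unmitigable", "uncertain",
            "independent", "distinctive")

def is_framing_assumption(candidate: dict) -> bool:
    """A candidate qualifies only if every one of the five criteria holds."""
    return all(candidate.get(criterion, False) for criterion in CRITERIA)

candidate = {
    "statement": "The threat environment will not force a mid-program redesign",
    "determinative": True, "unmitigable": True, "uncertain": True,
    "independent": True, "distinctive": True,
}
print(is_framing_assumption(candidate))  # True: all five criteria hold
```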

RAND then applied the framing assumption methodology to a number of programs. The latest NDIA meeting was an opportunity to provide an update of conclusions based on the work first done in 2013. What the researchers found was that framing assumptions should be kept at a high level, be developed early in a program’s life cycle, and be reviewed on a regular basis to determine their continued validity. They also found that programs breached the threshold when a framing assumption became invalid. Project and program managers, and requirements personnel, have at least intuitively known this for quite some time. Over the years, this is the reason given for the requirements changes and contract modifications over the course of development that result in cost, performance, and schedule impacts.

What is different about the RAND study is that they have outlined a practical process for making these determinations early enough for a project/program to be adjusted to changing circumstances. For example, the framing assumptions of each MDAP in the study could be boiled down to four or five, which are easily tested against reality during the milestone and other reviews held over the course of a program. This is particularly important given the lengthened time-frames of major acquisitions from development to production.

Looking at these results, my own observation is that this is a useful tool for identifying course corrections that are needed before they manifest into cost and schedule impacts, particularly given that leadership at PARCA has been stressing agile acquisition strategies. The goal here, it seems, is to allow for course corrections before the inertia of the effort leads to failure or–more likely–the development and deployment of an end item that does not entirely meet the needs of the Defense Department. (That such “disappointments” often far outstrip the capabilities of our adversaries is a topic for a different post).

I think the jury is still out on whether course corrections, given the inertia of work and effort already expended at the point that a framing assumption would be tested as invalid, can ever truly be offsetting to the point of avoiding a breach, unless we then rebrand the existing effort as a new program once it has modified its structure to account for new framing assumptions. Study after study has shown that project performance is pretty well baked in at the 20% mark. For MDAPs, much of the front-loaded effort in technology selection and application has been made by that point. After all, systems require inputs, and to change a system requires more inputs, not less, to overcome the inertia of all of the previous effort, not to mention work in progress. This is basic physics whether we are dealing with physical systems or complex adaptive (economic) systems.

Certainly, more efficient technology that affects the units of measurement within program performance can result in cost savings or avoidance, but that is usually not the case. There is a bit of magical thinking here: that commercial technologies will provide a breakthrough to allow for such a positive effect. This is an ideological idea not borne out by reality. The fact is that most of the significant technological breakthroughs we have seen over the last 70 years–from the microchip to the internet and now to drones–have resulted from public investments, sometimes in public-private ventures, sometimes in seeded technologies that are then released into the public domain. The purpose of most developmental programs is to invest in R&D to organically develop technologies (utilizing the talents of the quasi-private A&D industry) or provide economic incentives to incorporate technologies that do not currently exist.

Regardless, the RAND study has identified an important concept in determining the root causes of overruns. It seems to me that a formalized process of identifying framing assumptions should be applied at the inception of the program. The majority of the assessments to test the framing assumptions should then be made prior to the 20% mark as measured by program schedule and effort. It is easier and more realistic to overcome the bow-wave of effort at that point than further down the line.

Note: I have modified the post to clarify my analysis of the “three-legged stool” of program performance in regard to where risk resides.

Much has been said about the achievement of schedule and cost integration (or lack thereof) in the project management community. Much of it consists of hand waving and magic asterisks that hide the significant reconciliation that goes on behind the scenes. An intellectually honest approach that does not use the topic as a means of promoting a proprietary solution is the paper authored by Rasdorf and Abudayyeh back in 1991 entitled “Cost and Schedule Control Integration: Issues and Needs.”

It is worthwhile revisiting this paper, I think, because it was authored in a world not yet fully automated, and so is immune to the software tool-specific promotion that oftentimes dominates the discussion. In their paper they outlined several approaches to breaking down cost and work in project management in order to provide control and track performance. One of the most promising methods that they identified at the time was the unified approach that had originated in aerospace, in which a work breakdown structure (WBS) is constructed based on discrete work packages in which budget and schedule are unified at a particular level of detail to allow for full control and traceability.

The concept of the WBS and its interrelationship to the organizational breakdown structure (OBS) has become much more sophisticated over the years, but there has been a barrier that has prevented this ideal from being fully achieved. Ironically, it is the introduction of technology that is the culprit.

During the first phase of digitization that occurred in the project management industry, not too long after Rasdorf and Abudayyeh published their paper, there was a boom in dot-coms. For businesses and organizations the practice was to find a specialty or niche and fill it with an automated solution to take over the laborious tasks of calculation previously achieved by human intervention. (I still have both my slide rule and first scientific calculator hidden away somewhere, though I have thankfully wiped square root tables from my memory).

For those of us who worked in project and acquisition management, our lives were built around the 20th century concept of division of labor. In PM this meant we had cost analysts, schedule analysts, risk analysts, financial analysts and specialists, systems analysts, engineers broken down by subspecialties (electrical, mechanical, systems, aviation) and sub-subspecialties (Naval engineers, aviation, electronics and avionics, specific airframes, software, etc.). As a result, the first phase of digitization followed the pathway of the existing specialties, finding niches in which to inhabit, which provided a good steady and secure living to software companies and developers.

For project controls, much of this infrastructure remains in place. There are entire organizations today that will construct a schedule for a project using one set of specialists and the performance management baseline (PMB) using another, and then reconcile the two, not just in the initial phase of the project, but across its entire life. From the standard of the integrated structure that brings together cost and schedule, this makes no practical sense. From a business efficiency perspective it is an unnecessary cost.

As much as it is cited by many authors and speakers, the Coopers & Lybrand with TASC, Inc. paper entitled “The DoD Regulatory Cost Premium” is impossible to find on-line. Despite its widespread citation, the study demonstrated that by the time one got down to the third “cost” driver due to regulatory requirements, the projected “savings” was a fraction of 1% of the total contract cost. The interesting issue not faced by the study is, were the tables turned, how much would such contracts be reduced if all management controls in the company were reduced or eliminated, since they contribute as elements to overhead and G&A? More to the point here, if the processes applied by industry were optimized, what would be the cost savings involved?

A study conducted by RAND Corporation in 2006 accurately points out that a number of studies had been conducted since 1986, all of which promised significant impacts in terms of cost savings by focusing on what were perceived as drivers for unnecessary costs. The Department of Defense and the military services in particular took the Coopers & Lybrand study very seriously because of its methodology, but achieved minimal savings against those promised. Of course, the various studies do not clearly articulate the cost risk associated with removing the marginal cost of oversight and regulation. Given our renewed experience with lack of regulation in the mortgage and financial management sectors of the economy that brought about the worst economic and financial collapse since 1929, one may look at these various studies in a new light.

The RAND study outlines the difficulties in the methodologies and conclusions of the studies undertaken, especially the acquisition reforms initiated by DoD and the military services as a result of the Coopers & Lybrand study. But how, you may ask, does this relate to cost and schedule integration?

The present approach that industry takes in many places is a sub-optimized one, particularly as it applies to cost and schedule integration, which in practice consists of physical cost and schedule reconciliation. A system that is clearly one entity is split into two separate entities, constructed separately, and then adjusted using manual intervention, which defeats the purpose of automation. This may be common practice but it is not best practice.

Government policy, which has pushed compliance to the contractor, oftentimes rewards this sub-optimization and provides little incentive to change the status quo. Software industry manufacturers who are embedded with old technologies are all too willing to promote the status quo–appropriating the term “integration” while, in reality, offering interfaces and workarounds after the fact. Those personnel residing in line and staff positions defined by the mid-20th century approach of division of labor are all too happy to continue operating using outmoded methods and tools. Paradoxically these are personnel in industry that would never advocate using outmoded airframes, jet engines, avionics, or ship types.

So it is time to stop rewarding sub-optimization. The first step in doing this is through the normalization of data from these niche proprietary applications and “rewiring” them at the proper level of integration so that the systemic faults can be viewed by all stakeholders in the oversight and regulatory chain. Nothing seems to be more effective in correcting a hidden defect than some sunshine and a fresh set of eyes.

If industry and government are truly serious about reforming acquisition and project management in order to achieve significant cost savings in the face of tight budgets and increasing commitments due to geopolitical instability, then systemic reforms from the bottom up are the means to achieve the goal; not the elimination of controls. As John Kennedy once said in paraphrasing Chesterton, “Don’t take down a fence unless you know why it was put up.” The key is not to undermine the strength and integrity of the WBS-based approach to project control and performance measurement (or to eliminate it), but to streamline it so that it achieves its ideal as closely as our inherently faulty tools and methods will allow.

Despite the best of intentions, web blogging this week has been sparse, my time filled with contract negotiations and responses to solicitations. Most recently on my radar is the latest proposed DFARS rule to allow contractors to self-certify their business systems. Paul Cederwall at the Pacific Northwest Government Contracting Update blog has a lot to say about the rule that is interesting, but he gets some important things wrong.

To provide a little background, a DFARS requirement that has been in place since May 18, 2011 established six business systems that must demonstrate accountability and traceability in their internal systems to ensure that there is a high degree of confidence in the integrity of the underlying systems of the contractor receiving award of a government contract. You can find the language here. Given that this is the taxpayer’s money, while there was a lot of fear and loathing on how the rule would be applied since it included some teeth–the threat of a withhold on payments–most individuals involved in acquisition reform welcomed it as a means of handling risk given that one of the elements of making an award is “responsibility.” (This is one leg of the “three-legged stool test” that must be passed prior to a contracting officer making an award, the others being responsiveness, and price and price-related factors. This last could include value determinations.)

The concept of responsibility is a loaded one, calling on the contracting officer to apply judgment, business knowledge and acumen, and analytical knowledge. The Corporate FindLaw site has a very good summary of the elements, as follows:

“the FAR requires a prospective contractor to (1) have adequate financial resources to perform the contract; (2) be able to comply with the required or proposed delivery or performance schedule; (3) have a satisfactory performance record; (4) have a satisfactory record of integrity and business ethics; (5) have the necessary organization, experience, accounting and operational controls, and technical skills; (6) have the necessary production, construction, and technical equipment and facilities; and (7) be otherwise qualified and eligible to receive an award under applicable laws and regulations.”

Our acquisition systems, especially in regard to extremely large contracts that will turn into the complex projects that I write about here, tend to be pulled in many directions. The customer, for example, wants what they need and to reduce the procurement lead time as much as possible. Those who are given oversight responsibility and concern themselves with financial accountability focus on the need for compliance and integrity in the system, and to ensure that funds are being expended for the purpose contracted and in a manner that will lead to the contractually mandated outcome. The contractors within the competitive range not only bid to win but their proposals are calibrated to take into account considerations of risk, market share and exposure, strategic positioning, and margin.

Thus, the Six Business Systems rule is a way of meeting the legal requirement of determining responsibility, which is part of the contracting officer’s charter, particularly under the real-world conditions imposed by governmental austerity. But here is the rub. When I was an active duty Navy contracting officer we had a great deal of resources at our disposal to ensure that we had done our due diligence prior to award. The military services and the Department of Defense provided auditing resources to ensure the integrity of financial systems, expose rates during the negotiating process to meet the standard of “fair and reasonable,” and to ensure contract compliance and establish reliable reporting of progress based on those audits.

But things have changed, and not always for the better. During the 1980s and after, technology was the first agent of change. As a matter of fact, I was the second project manager of the Navy Procurement System project in San Diego during that time and so was there at the beginning. The people around me were prescient–despite the remonstrations to the contrary–that such digitization of procurement processes would result not only in improvements in the quality of information and productivity, but also in reductions in workforce. The result was that the federal government lost a great deal of corporate knowledge and wisdom while attempting to weed out suspected Luddites. Hand-in-hand with this technological development came the rise of government austerity, which has become more, not less, severe over the last thirty years. Thus the public lost more corporate knowledge and wisdom in the areas most sensitive to such losses.

Over this time, criticism of the procurement system has seemed like the easiest horse of convenience to beat, especially in the environment of Washington, D.C. The contracting officer pool is largely inexperienced. The most experienced, if they last, are largely overworked, which diminishes effectiveness. New hires are few and far between, especially given hiring and pay freezes. Internships and mentoring programs that used to compete with the best of private industry have largely disappeared, and most training budgets are either non-existent or bare-boned. The expected procurement “scandals” resulted, the overwhelming majority of which can be directly traced to the conditions described above as opposed to corruption, fraud, waste, or abuse.

Because of these conditions, the reaction, in terms of ensuring integrity within the systems in lieu of finding scapegoats, was first to establish the Business Systems rule, which is in the best tradition of management. But, given that things became unexpectedly more austere with government shutdowns and sequestration, the agency tasked with enforcing the rule–the Defense Contract Audit Agency (DCAA)–does not have the resources to complete a full review of the systems of the significant number of contractors that provide supplies and services to the U.S. Department of Defense. Thus, the latest solution was to propose self-certification–one which was also sought by a good many companies in the industry.

There are criticisms coming from two different perspectives on the rule. The first is that self-certification is charging the fox with watching the hen house. The 2006-07 housing bubble and resulting banking crisis is an object lesson of insufficient oversight.

The other criticism comes from many in the industry that sought the change. The rub here is that teeth were imposed in the process, requiring an annual independent CPA audit. DCAA will review the results of the audit and the methodology used to make the determination of the certification. This is where I part with PNWC. The knee-jerk reaction is to question DCAA’s ability to judge whether the audit was completed properly because, after all, they were not “competent” to complete the audits to begin with. This is a tautology and not a very good one.

As a leader and manager, if I delegate a task (given that I am usually busy on more pressing issues) and put checks and balances in place in the performance of that task, there will still come the time when I want that individual (or individuals) to present me with an accounting of what they did in the performance of that task. This is called leadership and management.

The legal responsibility of DCAA in this case in their oversight role is to ensure the integrity of the contractor’s systems so that contracting officers can make awards with confidence to responsible firms. DCAA is also accountable for the judgment and process in providing that certification. One can delegate responsibility in the completion of a task but one cannot delegate accountability.

Note: Some formatting errors came out in the initial posting. Many apologies.

a. Additional tools are needed to achieve the intended functionality apart from the core application;

b. Technical support is poor or nonexistent;

c. Personnel in the organization still rely on spreadsheets to extend the functionality of the application;

d. Training on the tool takes more time than training the job;

e. The software tool adds work instead of augmenting or facilitating the achievement of work.

I have seen situations where all of these conditions are at work but the response, in too many cases, has been “well we put so much money into XYZ tool with workarounds and ‘bolt-ons’ that it will be too expensive/disruptive to change.” As we have advanced past the first phases of digitization of data, it seems that we are experiencing a period where older systems do not quite match up with current needs, but that software manufacturers are very good at making their products “sticky,” even when their upgrades and enhancements are window dressing at best.

In addition, the project management community, particularly that focused on large projects in excess of $20M, is facing the challenge of an increasingly older workforce. Larger economic forces at play lately have exacerbated this condition. Weak aggregate demand and, on the public side, austerity ideology combined with sequestration, have created a situation where highly qualified people face a job market characterized by relatively high unemployment, flat wages and salaries, depleted private retirement funds, and constant attacks on social insurance related to retirement. Thus, people are hanging around longer, which limits opportunities for newer workers to grow into the discipline. Given these conditions, we find that it is very risky to one’s employment prospects to suddenly forge a new path. People in the industry that I have known for many years–and who were always the first to engage with new technologies and capabilities–are now very hesitant to do so. Some of this is well founded through experience and consists of healthy skepticism: we all have come across snake oil salesmen in our dealings at one time or another, and even the best products do not always make it due to external forces or the fact that brilliant technical people oftentimes are just not very good at business.

But these conditions also tend to hold back the ability of the enterprise to implement efficiencies and optimization measures that otherwise would be augmented and supported by appropriate technology. Thus, in addition to those listed by Ms. Symonds, I would include the following criteria to use in making the decision to move to a better technology:

a. Sunk and prospective costs. Understand and apply the concepts of sunk cost and prospective cost. The first is the cost that has been expended in the past, while the latter focuses on the investment necessary for future growth, efficiencies, productivity, and optimization. Having made investments to improve a product in the past is not an argument for continuing to invest in the product in the future that trumps other factors. Obviously, if the cash flow is not there an organization is going to be limited in the capital and other improvements it can make but, absent those considerations, sunk cost arguments are invalid. It is important to invest in those future products that will facilitate the organization achieving its goals in the next five or ten years.
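The sunk-cost principle can be made concrete with a small illustration. The figures below are entirely hypothetical; the point is simply that money already spent never appears in the forward-looking comparison.

```python
# Hypothetical numbers illustrating the sunk-cost principle: only
# prospective costs and benefits should drive the keep-vs-replace decision.
def prospective_value(future_benefit: float, future_cost: float) -> float:
    """Net value looking forward; money already spent never appears here."""
    return future_benefit - future_cost

sunk = 2_000_000.0  # already spent on the legacy tool; deliberately unused below
keep_legacy = prospective_value(future_benefit=1_500_000, future_cost=900_000)
replace     = prospective_value(future_benefit=2_400_000, future_cost=1_200_000)

best = "replace" if replace > keep_legacy else "keep_legacy"
```

However large `sunk` is made, `best` does not change, which is exactly the argument above: past investment is not a reason to keep investing, absent cash-flow constraints.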

b. Sustainability. The effective life of the product must be understood, particularly as it applies to an organization’s needs. Some of this overlaps the points made by Ms. Symonds in her article but is meant to apply in a more strategic way. Every product, even software, has a limited productive life, but my concept here goes to what Glen Alleman pointed out in his blog as “bounded applicability.” Will the product require more effort in any form where the additional effort provides a diminishing return? For example, I have seen cases where software manufacturers, in order to defend market share, make trivial enhancements such as adding a chart or graph in order to placate customer demands. The reason for this should be, but is not always, obvious. Oftentimes more substantive changes cannot be made because the product was built on an earlier-generation operating environment or structure. Thus, in order to replicate the additional functionality found in newer products, the application requires a complete rewrite. All of us operating in this industry have seen this: a product that has been a mainstay for many years begins to lose market share. The decision, when it is finally made, is to totally reengineer the solution, but not as an upgrade to the original product, arguing that it is a “new” product. This is true in terms of the effort necessary to keep the solution viable, but it then also completely undermines justifications based on sunk costs.

c. Flexibility. As stated previously in this blog, the first generation of digitization mimicked those functions that were previously performed manually. The applications were also segmented and specialized based on traditional line and staff organizations, and specialties. Thus, for project management, we have scheduling applications for the scheduling discipline (such as it is), earned value engines for the EV discipline, risk and technical performance applications for risk specialists and systems engineers, analytical software for project and program analysts, and financial management applications that subsumed project and program management financial management professionals. This led to the deployment of so-called best-of-breed configurations, where a smorgasbord of applications or modules were acquired to meet the requirements of the organization. Most often these applications had and have no direct compatibility, requiring entire staffs to reconcile data after the fact once that data was imported into a proprietary format in which it could be handled. Even within so-called ERP environments under one company, direct compatibility at the appropriate level of the data being handled escaped the ability of the software manufacturers, requiring “bolt-ons” and other workarounds and third party solutions. This condition undermines sustainability, adds a level of complexity that is hard to overcome, and adds a layer of cost to the life-cycle of the solutions being deployed.

The second wave to address some of these limitations focused on data flexibility using cubes, hard-coding of relational data and mapping, and data mining solutions: so-called Project Portfolio Management (PPM) and Business Intelligence (BI). The problem is that, in the first instance, PPM is simply another layer to address management concerns, while early BI systems froze single points of failure into hard-coded deployed solutions.

A flexible system is one that leverages the new advances in software operating environments to solve more than one problem. This, of course, undermines the financial returns in software, where the pattern has been to build one solution to address one problem based on a specialty. Such a system provides internal flexibility, that is, allows for the application of objects and conditional formatting without hardcoding, pushing what previously had to be accomplished by coders to the customer’s administrator or user level; and external flexibility, where the same application can address, say, EVM, schedule, risk, financial management, KPIs, technical performance, stakeholder reporting, all in the same or in multiple deployed environments without the need for hardcoding. In this case the operating environment and any augmented code provides a flexible environment to the customer that allows one solution to displace multiple “best-of-breed” applications.

This flexibility should apply not only vertically but also horizontally, where data can be hierarchically organized to allow not only for drill-down, but also for roll-up. Data in this environment is exposed discretely, providing to any particular user that data, aggregated as appropriate, based on their role, responsibility, or need to know.
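The drill-down/roll-up idea can be sketched with a toy WBS. The numbering and amounts below are invented for illustration; the point is that leaf-level work package data aggregates into every ancestor element without hardcoding.

```python
# A minimal sketch of hierarchical roll-up over WBS-coded cost records.
# The WBS numbering and amounts are invented for illustration.
from collections import defaultdict

records = [
    ("1.1.1", 120.0),  # leaf-level work package costs
    ("1.1.2",  80.0),
    ("1.2.1", 200.0),
]

def roll_up(records):
    """Aggregate leaf costs into every ancestor WBS element."""
    totals = defaultdict(float)
    for wbs, cost in records:
        parts = wbs.split(".")
        for depth in range(1, len(parts) + 1):
            totals[".".join(parts[:depth])] += cost
    return dict(totals)

totals = roll_up(records)
# Drill-down works in reverse: totals["1.1"] covers 1.1.1 and 1.1.2,
# while totals["1"] covers the whole structure.
```

Role-based exposure then becomes a filter over which keys a given user may see, aggregated at the appropriate level.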

d. Interoperability and open compatibility. A condition of the “best-of-breed” deployment environment is that it allows for sub-optimization to trump organizational goals. The most recent example that I have seen of this is one where the Integrated Master Schedule (IMS) and Performance Management Baseline (PMB) were obviously authored by different teams in different locations and, most likely, were at war with one another when they published these essential interdependent project management artifacts.

But in terms of sustainability, the absence of interoperability and open compatibility has created untenable situations. In the example of PMB and IMS information above, in many cases a team of personnel must be engaged every month to reconcile the obvious disconnectedness of schedule activities to control accounts in order to ensure traceability in project management and performance. Surely, not only should there be no economic rewards for such behavior; I believe that no business would perform in that manner without them.

Thus, interoperability in this case is to be able to deal with data in its native format without proprietary barriers that prevent its full use and exploitation to the needs and demands of the customer organization. Software that places its customers in a corner and ties their hands in using their own business information has, indeed, worn out its welcome.

The reaction of customer organizations to the software industry’s attempts to bind them to proprietary solutions has been most marked in the public sector, and most prominently in the U.S. Department of Defense. In the late 1990s the first wave was to ensure that performance management data centered around earned value was submitted in a non-proprietary format known as the ANSI X12 839 transaction set. Since that time DoD has specified the use of the UN/CEFACT XML D09B standard for cost and schedule information, and it appears that other, previously stove-piped data will be included in that standard in the future. This solution requires data transfer, but it is one that ensures that the underlying data can be normalized regardless of the underlying source application. It is especially useful for stakeholder reporting situations or data sharing in prime and sub-contractor relationships.
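Normalization of this kind can be sketched in a few lines. The element names below are a simplified stand-in of my own devising, not the actual UN/CEFACT XML D09B schema; the point is that once data is in a neutral interchange format, flattening it into common records is trivial regardless of the source application.

```python
# Normalizing cost/schedule data from a neutral XML interchange format.
# The element names are a simplified stand-in for illustration only,
# not the actual UN/CEFACT XML D09B schema.
import xml.etree.ElementTree as ET

doc = """
<report>
  <controlAccount id="CA-100">
    <bcws>500.0</bcws><bcwp>450.0</bcwp><acwp>480.0</acwp>
  </controlAccount>
</report>
"""

def normalize(xml_text):
    """Flatten each control account into a plain record, source-tool agnostic."""
    rows = []
    for ca in ET.fromstring(xml_text).iter("controlAccount"):
        rows.append({
            "id": ca.get("id"),
            "bcws": float(ca.findtext("bcws")),
            "bcwp": float(ca.findtext("bcwp")),
            "acwp": float(ca.findtext("acwp")),
        })
    return rows

rows = normalize(doc)
```

Because every submitting tool emits the same schema, the receiving organization writes this normalization once rather than maintaining an interface per proprietary format.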

It is also useful for pushing for improvement in the disciplines themselves, driving professionalism. For example, in today’s project management environment, while the underlying architecture of earned value management and risk data is fairly standard, reflecting a cohesiveness of practice among its practitioners, schedule data tends to be disorganized, with much variability in how common elements are kept and reported. This also reflects much of the state of the scheduling discipline, where an almost “anything goes” mentality seems to be in play, reflecting not so much the realities of scheduling practice–which are pretty well established and uniform–as the lack of knowledge and professionalism on the part of schedulers, who are tied to the limitations and vagaries of their scheduling application of choice.

But, more directly, interoperability also includes the ability to access data (as opposed to application interfacing, data mining, hard-coded Cubes, and data transfer) regardless of the underlying database, application, and structured data source. Early attempts to achieve interoperability and open compatibility utilized ODBC but newer operating environments now leverage improved OLE DB and other enhanced methods. This ability, properly designed, also allows for the deployment of transactional environments, in which two-way communication is possible.

A new reality. Given these new capabilities, I think that we are entering a new phase in software design and deployment, where the role of the coder in controlling the UI is reduced. In addition, given that the large software companies have continued to support a system that ties customers to proprietary solutions, I do not believe that the future of software is in open source, as so many prognosticators stated just a few short years ago. Instead, I propose that applications that behave like open source, and whose makers innovate to provide maximum value, sustainability, flexibility, and interoperability to the customer, are those that will be rewarded for their efforts.

Note: This post was edited for clarity and grammatical errors from the original.

The question in the title refers to agile in the “traditional” sense and not the big “A” appropriated sense. But I’ll talk about big “A” Agile also.

It also refers to a number of discussions I have been engaged in recently among some of the leading practitioners in the program and project management community. Here are a few data points:

a. GAO and other oversight agencies have been critical of changing requirements over the life cycle of a project, particularly in DoD and other federal agencies, that contribute to cost growth. The defense of these changes has been that many of them were necessary in order to meet new circumstances. Okay, sounds fair enough.

But to my way of thinking, if the change(s) were necessary to keep the project from being obsolete upon deployment of the system, or were to correct an emergent threat that would have undermined project success and its rationale, then by all means we need to course correct. But if the changes were not to address either of those scenarios, but simply to improve the system at more than marginal cost, then it was unnecessary.

How can I make such a broad statement, and what is the alternative? you may ask. My rationale is that the change or changes, if representing a new development involving significant funding, should stand on their own merits, since they essentially constitute a new project.

All of us who have been involved in complex projects have seen cases where, as a result of development (and quite often failure), we discover new methods and technologies within the present scope that garner an advantage not previously anticipated. This doesn’t happen as often as we’d like, but it does happen. In my own survey and project in development of a methodology for incorporating technical performance into project cost, schedule, and risk assessments, it was found that failing a test, for example, had value since it allowed engineers to determine pathways for not only achieving the technical objective but, oftentimes, exceeding the parameter. We find that for x% more in investment as a result of the development, test, milestone review, etc., we can improve the performance of some aspect of the system. In that case, if the cost or effort is marginal, then the improvement is part of the core development process within the original scope. Limited internal replanning may be necessary to incorporate the change, but the remainder of the project can largely go along as planned.

Alternatively, however, inserting new effort in the form of changes to major subsystems involves major restructuring of the project. This disrupts the business rhythm of the project, forcing a cultural shift within the project team to socialize the change and to incorporate the new work. Change of this type not only causes what is essentially a reboot of the project, but also tends to add risk to the project and program. This new risk will manifest itself as cost risk initially but, given risk handling, will also manifest itself as technical and schedule risk.

The result of this decision, driven solely by what may seem to be urgent operational considerations, is to undermine project and program timeliness since there is a financial impact to these decisions. Thus, when you increase risk to a program the reaction of the budget holder is to provide an incentive to the program manager to manage risk more closely. This oftentimes will invite what, in D.C. parlance, is called a budget mark, but to the rest of us is called a budget cut. When socialized within the project, such cuts usually are then taken out of management reserve or non-mandatory activities that were put in place as contingencies to handle overall program risk at inception. The mark is usually equal to the amount of internal risk caused by the requirements change. Thus, adding risk is punished, not rewarded, because money is finite and must be applied to projects and programs that demonstrate that they can execute the scope against the plan and expend the funds provided to them. So the total scope (and thus cost) of the project will increase, but the flexibility within the budget base will decrease since all of that money is now committed to handle risk. Unanticipated risk, therefore, may not be effectively handled in the future.
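The budget-mark dynamic described above can be sketched numerically. This is a hypothetical illustration only: the dollar figures and the rule that the mark equals the internal risk added are assumptions drawn from the narrative, not DoD policy.

```python
# Hypothetical sketch of the budget-mark dynamic: a requirements change
# adds scope and internal risk; the budget holder applies a mark equal
# to the added risk, which is absorbed by management reserve (MR),
# shrinking the flexibility left for unanticipated risk.

def apply_requirements_change(budget, mgmt_reserve, added_scope, added_risk):
    """Return (new_budget, new_mgmt_reserve) after a change and its mark."""
    new_budget = budget + added_scope          # total scope (and cost) grows
    mark = added_risk                          # assumed: mark ~ internal risk added
    new_mr = max(0.0, mgmt_reserve - mark)     # the mark comes out of MR
    return new_budget, new_mr

budget, mr = 100.0, 10.0                       # notional $M values
budget, mr = apply_requirements_change(budget, mr, added_scope=15.0, added_risk=6.0)
print(budget, mr)                              # scope is up, flexibility is down
```

The point of the sketch is the asymmetry: the top-line number grows while the reserve that protects against unanticipated risk shrinks.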

At first the application of a budget mark in this case may seem counterintuitive, and when I first went through the budget hearing process it certainly did to me. That is until one realizes that at each level the budget holder must demonstrate that the funds are being used for their intended purpose. There can be no “banking” of money since each project and program must compete for the dollars available at any one time–it’s not the PM’s money, he or she has use of that money to provide the intended system. Unfortunately, piggybacking significant changes (and constructive changes) onto the original scope is common in project management. Customers want what they want and business wants that business. (More on this below). As a result, the quid pro quo is: you want this new thing? okay, but you will now have to manage risk based on the introduction of new requirements. Risk handling, then, will most often lead to increased duration. This can and often does result in a non-virtuous spiral in which requirements changes lead to cost growth and project risk, which lead to budget marks that restrict overall project flexibility, which tend to lead to additional duration. A project under these circumstances finds itself either pushed to the point of not being deployed, or being deployed many years after the system needed to be in place, at much greater overall cost than originally anticipated.

As an alternative, by making improvements stand on their own merits a proper cost-benefit analysis can be completed to determine if the improvement is timely and how it measures up against the latest alternative technologies available. It becomes its own project and not a parasite feeding off of the main effort. This is known as the iterative approach and those in software development know it very well: you determine the problem that needs to be solved, figure out the features and approach that provide the 80% solution, and work to get it done. Improvements can come after version 1.0–coding is not a welfare program for developers as the Agile Cult would have it. The ramifications for project and program managers are apparent: they must not only be aware of the operational and technical aspects of their efforts, but also know the financial impacts of their decisions and take those into account. Failure to do so is a recipe for self-inflicted disaster.
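The decision rule argued above can be sketched as a simple classifier: marginal improvements stay inside the existing scope, while significant ones must stand on their own merits as a separate project. The 10% “marginal” threshold is an assumed illustration, not a standard.

```python
# Sketch of the decision rule: marginal improvements fold into the
# existing scope; significant ones become a separate project that must
# justify itself with its own cost-benefit analysis. The threshold is
# an assumption for illustration only.

MARGINAL_FRACTION = 0.10   # assumed cutoff for a "marginal" cost

def classify_improvement(project_budget, improvement_cost):
    """Classify a proposed improvement relative to the current project."""
    if improvement_cost <= MARGINAL_FRACTION * project_budget:
        return "fold into existing scope (limited internal replanning)"
    return "separate project (requires its own cost-benefit analysis)"

print(classify_improvement(200.0, 12.0))   # marginal relative to budget
print(classify_improvement(200.0, 45.0))   # significant new development
```

In practice the cutoff would be a judgment call by the budget holder, but the structure of the decision is the same.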

This leads us to the next point.

b. In the last 20+ years major projects have found that the time from initial development to production has grown several times over. For example, the poster child for this phenomenon in the military services is the F-35 Lightning II jet fighter, also known as the Joint Strike Fighter (JSF), which will continue to be in development at least through 2019 and perhaps into 2021. From program inception in 2001 to Initial Operational Capability (IOC) it will be 15 years, at least, before the program is ready to deploy and go to production. This scenario is being played out across the board in both government and industry for large projects of all types with few exceptions. In particular, software projects tend either to fail or to fall short of their operational goals in the overwhelming majority of cases. This would suggest that, aside from the typical issues of configuration control, project stability, and rubber baselining (and aside from the self-reinforcing cost growth culture of the Agile Cult), there are larger underlying causes involved than simply contracting systems, though those are probably a contributing factor.

From a hardware perspective in terms of military strategy there may be a very good reason why it doesn’t matter that certain systems are not deployed immediately. That reason is that, once deployed, they are expensive to maintain logistically. Logistics of deployed systems will compete for dollars that could be better spent in developing–but not deploying–new technologies. The answer, of course, is somewhere in between. You can’t use that notional jet fighter when you needed it half a world away yesterday.

c. Where we can see the effects on behavior from an acquisition systems perspective is in the comparison of independent estimates to what is eventually negotiated. For example, one military service recently gave the example of a program in which the confidential independent estimate was $2.1 billion. The successful commercial contractor team, let’s call them Team A, whose proposal was deemed technically acceptable, made an offer at $1.2 billion while the unsuccessful contractor team, Team B, offered near the independent estimate. Months later, thanks to constructive changes, the eventual cost of the contract will be at or slightly above the independent estimate based on an apples-to-apples comparison of the scope. Thus it is apparent that Team A bought into the contract. Apparently, honesty in proposal pricing isn’t always the best policy.
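The arithmetic in the example above suggests a simple screen: a proposal far enough below the confidential independent estimate is a probable buy-in. The 25% screening band and Team B’s exact figure are assumptions for illustration; the $2.1 billion estimate and $1.2 billion bid come from the example.

```python
# Sketch of the comparison in the example: a bid far below the
# independent estimate signals a probable buy-in. The 25% band is an
# assumed illustration, not FAR guidance.

def probable_buy_in(bid, independent_estimate, band=0.25):
    """True if the bid falls more than `band` below the estimate."""
    return bid < independent_estimate * (1.0 - band)

estimate = 2.1            # $B, from the example
team_a = 1.2              # $B, winning bid from the example
team_b = 2.0              # $B, assumed: "near the independent estimate"
print(probable_buy_in(team_a, estimate))   # ~43% below the estimate
print(probable_buy_in(team_b, estimate))   # within the band
```

A screen like this is exactly what the months of constructive changes vindicated after the fact: the eventual cost converged back to the independent estimate.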

I have often been asked what the rationale could be for a contractor to “buy-in,” particularly for such large programs involving so much money. The answer, of course, is “it depends.” Team A could have the technological lead in the systems being procured and was defending its territory; thus buying in, even without constructive changes, was deemed to be worth the tradeoff. Perhaps Team A was behind in the technologies involved and would use the contract as a means of financing their gap. Team A could have an excess of personnel with technical skills that are complementary to those needed for the effort but who are otherwise not employed within their core competency, so rather than lose them it was worth bidding at or near cost for the perceived effort. These are, of course, the most charitable assumed rationales, though the ones that I have most often encountered.

The real question in this case is how, even given the judgment of the technical assessment team, the contracting officer could have kept a proposal so far below the independent estimate within the competitive range. If the government’s requirements are so vague that two experienced contracting teams can fall so far apart, it should be apparent that either the solicitation is defective or the scope is not completely understood.

I think it is this question that leads us to the more interesting aspects of acquisition, program, and project management. For one, I am certain that a large acquisition like the one described is highly visible and of import to the political system and elected officials. In the face of such scrutiny it would have to be a procuring contracting officer (PCO) of great experience and internal fortitude, confident in their judgment, to reset the process after proposals had been received.

There is also pressure in contracting from influencers within the requiring organizations that are under pressure to deploy systems to meet their needs as expeditiously as possible–especially after a fairly lengthy set of activities that must occur prior to the issuance of a solicitation. The development of a good set of requirements involves multiple stakeholders on highly technical issues, and it requires a great deal of coordination and development by a centralized authority. Absent such guidance the method of approaching requirements can be defective from the start. For example, does the requiring organization write a Statement of Work, a Performance Work Statement, or a Statement of Objectives? Which is the most appropriate contract type for the work being performed and the risk involved? Should there be one overriding approach or a combination of approaches based on the subsystems that make up the entire system?

But even given all of these internal factors there are others that are unique to our own time. I think it would be interesting to see how these factors have affected the conditions that everyone in our discipline deems to be problematic. This includes the reduced diversity of the industrial and information verticals upon which the acquisition and logistics systems rely; the erosion of domestic sources of expertise, manufactured materials, and commodities; the underinvestment in training and personnel development and retention within government that undermines necessary expertise; specialization within the contracting profession that separates the stages of acquisition into stovepipes that undermines continuity and cohesiveness; the issuance of patent monopolies that stifle and restrict competition and innovation; and unproductive rent seeking behavior on the part of economic elites that undermine the effectiveness of R&D and production-centric companies. Finally, this also includes those government policies instituted since the early 1980s that support these developments.

The importance of any of these cannot be overstated, but let’s take the issue of rent seeking that has caused the “financialization” of almost all aspects of economic life as it relates to what a contracting officer must face when acquiring systems. Private sector R&D, which in the past fell mostly in response to economic dislocations–though it has trended downward since the late 1960s overall, and especially since the mid-1980s–has fallen precipitously since the bursting of the housing bubble and the resultant financial crisis in 2007, with no signs of recovery. Sequestration and other austerity measures in FY 2015 will at the same time also negatively impact public R&D, continuing the overall trend with no offset. This fall in R&D has a direct impact on productivity and undercuts the effectiveness of using all of the tools at hand to find existing technologies to offset the ones that require full R&D. This appears to have caused a rise in intrinsic risk in the economy as a whole for efforts of this type, and it is this underlying risk that we see at the micro and project management level.