First-order business cases, which are based on the automation of some task or business process, and

Second-order business cases, which are based on the integration of first-order solutions, and

Third-order business cases, which are based on the improved management of second-order solutions.

Most IT projects are now based on second- and, increasingly, third-order business cases: commodity technologies are still finding their way into first-order solutions, but mostly in late-adopter sectors, such as governmental services. In almost all businesses, the low-hanging automation business cases have already been picked and consumed, or thrown out onto the technology compost heap.

Yet our education and training methods for preparing our IT workforce are still largely based on first-order automation and its constituent technologies: we still teach the design of operating systems and compilers, train on low-level programming languages, and so on. But as a percentage of industry labor, such base technology work is rare and, outside of the open source community, limited to a select few companies.

Successful second- and third-order business case projects require modeling, analytic, design, management and financial skills that today are only acquired through many years of hard knocks. Our Cambridge dons suggest that:

"Service Science is emerging as a distinct field. Its vision is to discover the underlying logic of complex service systems and to establish a common language and shared frameworks for service innovation. To this end, an interdisciplinary approach should be adopted for research and education on service systems…Industry refers to these people as T-shaped professionals, who are deep problem solvers in their home discipline but also capable of interacting with and understanding specialists from a wide range of disciplines and functional areas."

Bowen & Spohrer at IBM have suggested that a hybrid degree program, half business and half computer science, is best suited to the new age.

My own education was almost entirely interdisciplinary and liberal arts; my technology training came later and on the job. Back in the eighties, that was novel; but in the future, perhaps not. I also think the trend bodes well for integrating more women into the IT workforce, women being generally more communicative and group-engaged than the "deep problem solvers" who are 90% male.

02/11/2010

I am trying to regain some perspective after splitting my days between startups charging along at a break-neck pace and corporate IT chugging along well-worn rails.

The startup IT world is compelling in its creative fecundity – new features and functions pouring forth from the latest open-source tools, operated by young, intelligent, energetic workers who slough off disappointments and set-backs with a shot of Red Bull.

But so much of the startup world’s data today is, frankly, trivial and disposable: children’s games, reservoirs of transient messages, amusements and eye candy; “nice to have” information of all sorts, but very little of primary importance. In part, this is because very few enterprise software firms remain after a decade of consolidation, and those firms are so large that little enterprise software gets built at startups anymore. Even when a startup’s data is important (such as personal finances), web data is usually redundant to a “real” system of record in an enterprise somewhere. So much web data is therefore, ultimately, disposable. No wonder startups view testing as a troubling annoyance, and really love the idea that customers can and will test for them.

The corporate IT world is also compelling -- in its devotion to robust solutions. New feature requests are pored over as carefully as the welds that hold a fighter plane together. More often than not, the workforce consists of graying baby-boomers devoted to finding a solution that will hold fast even in a hundred-year storm, deeply fearful of failure -- for the security of their jobs, if not for their customers.

The essence of corporate IT today is integration, the endless gluing of one application to another extant application. And all this plumbing is ceaselessly stressed by business process changes that test the limits of human adaptability. There is usually no practical alternative to endless cross-functional meetings where a dozen people grope for a common language and understanding, and too often end up with a fairly trivial application no matter how important the data, perhaps because they can find so few shared words to agree upon.

All this could mean nothing more than:

Delivered function points = Feature Requirements / Data significance.

If the data’s not important, then features flow into the code like water over Niagara Falls. If the data is important, then the code experiences a much more painful birthing.

11/06/2009

Probably not. But it sure is changing rapidly, within both web content businesses and software-as-a-service firms. No doubt, corporations will also follow this trend line as experience with faster, more efficient and less costly testing techniques spreads.

Following up on my last post, here is some direct feedback from Robert Johnson at Facebook, who spoke recently about their process for software development and testing.

"Facebook developers are encouraged to push code often and quickly. Pushes are never delayed and are applied directly to parts of the infrastructure. The idea is to quickly find issues and their impacts on the rest of the system, and to quickly fix any bugs that result from these frequent small changes."

"Second, there are limited QA (quality assurance) teams at Facebook but lots of peer review of code. Since the Facebook engineering team is relatively small, all team members are in frequent communication. The team uses various staging and deployment tools as well as strategies such as A/B testing and gradual, targeted geographic launches. This has resulted in a site that has experienced, according to Robert, less than 3 hours of down time in the past three years."

Johnson certainly confirms some of Beck's observations about how testing changes when the business need for agility starts to dominate, specifically:

"Immunize" code by extensive code reviews, and by building in testability, assertions, or other self-validation techniques;

Perform rolling deployments across subsets of the user base;

Perform "release experiments" in lieu of testing.
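Rolling deployments of this sort are commonly implemented by deterministically bucketing users and ramping a percentage gate upward. Here is a minimal sketch, purely illustrative: the function and feature names are invented, and this is not Facebook's actual tooling.

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_pct: int) -> bool:
    """Deterministically place a user in a bucket 0-99 and expose the
    feature only if the bucket falls below the current rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Ramping rollout_pct from 1 toward 100 gradually exposes the feature;
# a given user always gets the same answer at a given percentage.
assert not in_rollout("alice", "new_feed", 0)   # 0% rollout: nobody
assert in_rollout("alice", "new_feed", 100)     # 100% rollout: everyone
```

Geographic targeting works the same way with a region code as the bucket key, and a "release experiment" is then just a comparison of metrics between the exposed and unexposed buckets.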

The argument against agile QA is usually that such techniques have no place in truly mission-critical corporate applications, and that such methods sacrifice quality by shifting the burden to the consumer.

Well, if you haven't noticed, the quality burden has been shifting to the consumer for a couple decades now. For example, permanent beta testing is now the norm at most large Internet content and ecommerce firms.

Traditional QA engineering won't change dramatically where the risks due to poor quality are too great: medical devices or aeronautics; regulated markets, such as securities exchanges; on-line banking or card payment systems; core corporate accounting; etc. But in a typical corporate application portfolio, most applications pose no such risks, and these new QA techniques offer opportunities for more agility and faster time to market, with significant cost savings.

The logic of capitalism is such that quality is never an absolute, but just one among many factors that influence consumer demand. When properly marketed, a shift in quality techniques that results in lower costs to the consumer without significant loss of features will almost always find new customers and more demand from existing customers. I argue this will be as true for internal corporate IT as with external consumers, and once again, Internet development techniques will lead the way.

09/06/2009

Or is it the other way around: Manage reality and perception will follow? Please follow our author as he plays tennis with himself, and arrives at a dialectical conclusion up in the umpire's chair.

Thesis: If indeed, following Peter Drucker, "the purpose of a business is to create a customer", then the primary goal of management is marketing. Managers need to control the perception of a business because "the aim of marketing is to know and understand the customer so well the product or service fits him and sells itself."

"Reality", from this marketing perspective, is the realization of sales, and has little to do with whether or not a product or service "really" works. As long as messages defining fitness to purpose can be created and transmitted at a reasonable cost, and as long as these messages are believed by customers and exploited by sales agents, then it's all good, right?

Marketing, outside of selective technical sectors, doesn't need to operate in the realm of logic, only in the emotional and cultural soup of a market economy, and Maslow's "hierarchy of needs". For example, marketing could be used to bring us into an informed, adult discussion of healthcare policy, but it is far easier (and clearly for some marketers, more fun) to generate tirades about "death panels" via innuendo and exploitation of ignorance.

And as anyone who lives in a corporate hierarchy can attest, one can also "solve" any management problem by careful marketing. Within most businesses, and especially politics, it often doesn't matter whether a manager is "really" correcting a problem, as long as the perception of the manager's activities is controlled. Unfortunately, manipulating human perceptions is oft times simpler and less costly than changing a product, improving a service, replacing an underperforming resource, telling the truth, following lawful conduct, or even making any kind of logical argument at all.

Antithesis: Product developers, and their close relations in manufacturing, have long known that their parts of a business can only operate effectively and efficiently if problems are "really" solved, and not just considered another problem of perception. Customers are only, over the long term, enthusiastic about products or services that not only appear to work for them, but "really" work well for them, and over the entire product's lifespan, too. Otherwise, both the customer and the product manufacturer will incur additional costs that reduce competitiveness over time.

"Reality", from the product perspective, is usually objective, often physically so, but always measurable. Isn't marketing and sales easier if the product "really" works, not just that we say it does? And in the long run, isn't that a better outcome for both producer and consumer?

Synthesis: The "manage reality" camp, technocratic, and politically elitist though it is, has a strong argument, because over the long run, it really does cost less, and generate more revenue, if you manage reality and not just perception. The American automobile industry is surely one glaring example, where marketing always came first, and product second. (Ironically, they often employed Peter Drucker, too.) Spending 17% of GNP on healthcare services is another example where realization of revenue is not consistent with long term survival, hence our mediocre public health statistics and 15% of the population uninsured.

But the "manage perception" camp, manipulative, short-sighted and politically populist though it may be, is grounded in reality also: Consumers and markets are decidedly not careful evaluators of costs and benefits, except maybe (and it's a big maybe) in the aggregate. Humans evolved innumerable heuristics to help us survive on the savannahs of Africa, and these make us biologically directed, emotionally buffeted, and drawn to magical explanations that don't require much thought. Given the hundreds of generations it took to enable our species to survive and eventually thrive, we are not likely to shed these heuristics easily. We will remain deeply "conservative", except when threatened or forced to behave differently.

The best products or services in the world will go un-consumed if purchasers don't believe they "fit". That belief must be achieved, whether by logic, an appeal to prejudice and bigotry, or marching around in a gorilla suit.

But let's hope we have the wisdom to manage both reality and perception, each in its proper proportion. Marketing and the management of perception are surely necessary for the realization of sales, but they are not an end in themselves, much less an assurance of long-term viability.

08/03/2009

Fine tuning IT project risk management by phase has the potential to improve the quality of project outcomes and reduce failures. But as all experienced project managers know, there is no one life cycle model that can capture all the variations we encounter in the real world.

As I reported previously, researchers measured business success factors by the phase of a new technology's deployment in its life cycle. The life cycle phases used in the study were as follows:

Initiation (Research & Requirements)

Adoption (Evaluation & Budgeting)

Adaptation (Procurement & Installation)

Acceptance (Initial deployment)

Routinization (Optimized usage)

Infusion (Extension throughout the enterprise)

The success factors were categorized at a high level as:

Commitment to the new technology

Knowledge about the new technology

Communications to the user community

Planning for the new technology implementation

Infrastructure to support and extend the new technology

The success factor weightings by phase are summarized in the following table:

Phase           Commitment   Knowledge   Communications   Planning   Infrastructure
Initiation          15           60            20             30           15
Adoption            25           20            35             15           25
Adaptation          20            5            20             35           20
Routinization       40           15            25             25           40
Infusion            40           15            25             25           40

Let's see how we could adapt these findings to a commonly used project risk management technique. The following table includes a variety of risk factors to be evaluated periodically with mitigations assigned accordingly. A project's total risk profile should, of course, diminish towards zero as the project nears completion:

#    Project Risk Description       Risk Factor (1=low, 5=high)    Mitigations
1    Application Complexity
2    Baselines
3    Contract or SOW
4    Customer Expectations
5    Customer Involvement
6    Customer Acceptance
7    Design level of detail
8    External Dependencies
9    Hardware (new)
10   Software (new)
11   Interfaces or Integrations
12   Experience of team
13   Productivity of team
14   Project Management
15   Project planning/scheduling
16   Project Resources
17   Requirements Stability
18   Requirements Definition
19   Subcontractor involvement
20   System Performance
21   Network Performance
22   Workload on team

Totals by Month:   Oct.   Nov   Dec   Jan, etc.

Obviously, certain categories of risk are not as significant during Initiation as during Adaptation, for example. Furthermore, during periodic risk factor reviews, irrelevant factors are distracting and a waste of time. A simple improvement would be to add a column of phase-weighted scalars, ignoring any factors when the scalar is zero:

Risk Description               Initiation   Adoption   Adaptation   Acceptance   Routinization & Infusion
(0=n/a, 1=low, 2=med, 3=hi)

Application Complexity              3            3           3            3                3
Baselines                           0            1           2            2                2
Contract or SOW                     1            3           3            3                0
Customer Expectations               2            2           2            3                2
Customer Involvement                1            2           2            3                3
Customer Acceptance                 1            2           2            3                2
Etc.

The resulting project risk matrix, easily implemented in a spreadsheet, looks like this when filled in. A metrics team could track such totals and then assign certain thresholds for the risk totals to help decide which projects are at low, medium or high risk:

#   Adaptation Phase Risk Description   Project-Specific Risk Factor (1=low, 5=high)   Phase Weighting   Factor * Weight   Mitigations
1   Application Complexity                    2                                              3                  6           Educate customer organizations beyond senior managers.
2   Baselines                                 1                                              1                  1           Have vendor commit to v3.5 before evaluation.
3   Contract or SOW                           2                                              3                  6           Sourcing to review ASAP.
4   Customer Expectations                     3                                              2                  6           Underpromise and overdeliver.
5   Customer Involvement                      3                                              2                  6           Do not start Adaptation until customer employee budget is approved.
6   Customer Acceptance                       3                                              2                  6           Do not start Adaptation until customer selects test vendor.
7   Design level of detail                    1                                              2                  2           In good shape for now.
Etc.

Total for Oct: 120 (within medium risk range)
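The spreadsheet arithmetic reduces to a few lines of code. The sketch below is illustrative only: the factor scores and phase weightings are copied from the example rows above, while the risk-band thresholds are invented placeholders that a metrics team would calibrate from historical project data.

```python
# Phase weightings for the Adaptation phase (0 = n/a .. 3 = high),
# transcribed from the example table above.
ADAPTATION_WEIGHTS = {
    "Application Complexity": 3,
    "Baselines": 1,
    "Contract or SOW": 3,
    "Customer Expectations": 2,
    "Customer Involvement": 2,
    "Customer Acceptance": 2,
    "Design level of detail": 2,
}

# Project-specific risk factors for October (1 = low .. 5 = high).
october_factors = {
    "Application Complexity": 2,
    "Baselines": 1,
    "Contract or SOW": 2,
    "Customer Expectations": 3,
    "Customer Involvement": 3,
    "Customer Acceptance": 3,
    "Design level of detail": 1,
}

def weighted_total(factors, weights):
    """Sum factor * weight, skipping any factor whose phase weight is zero."""
    return sum(factors[name] * w for name, w in weights.items() if w > 0)

def risk_band(total, low=50, high=150):
    """Map a weighted total to a band; these thresholds are invented."""
    if total < low:
        return "low"
    return "medium" if total <= high else "high"

total = weighted_total(october_factors, ADAPTATION_WEIGHTS)
# 6 + 1 + 6 + 6 + 6 + 6 + 2 = 33 for these seven rows alone;
# the full 22-factor matrix in the example sums to 120.
```

Skipping zero-weighted factors also keeps them out of the periodic review, which is the whole point of the phase-weighted column.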

One of the dilemmas of our business is that no one life cycle model can ever capture the wide variation of the IT projects we perform. Yet creating an overly complex life cycle model quickly makes project management too complex, and in itself a risk to project success. IT managers must use judgment based on experience to assure their life cycle model accurately reflects an individual project, without introducing too much overhead and complexity. Hopefully, the above example strikes a happy medium: an improvement based on quantitative research, yet easy to implement.

07/28/2009

A recent study (Brown et al., CACM, Volume 50, No. 9) provides a handy matrix of which IT business success factors are critical during each phase of a new technology's life cycle, suggesting potential improvements to our risk management techniques.

What the researchers discovered was that business success factors varied considerably depending on the phase of new technology deployment within an organization. The life cycle phases used in the study were as follows:

Initiation (Research & Requirements)

Adoption (Evaluation & Budgeting)

Adaptation (Procurement & Installation)

Acceptance (Initial deployment)

Routinization (Optimized usage)

Infusion (Extension throughout the enterprise)

The success factors were categorized at a high level as:

Commitment to the new technology

Knowledge about the new technology

Communications to the user community

Planning for the new technology implementation

Infrastructure to support and extend the new technology

The success factor weightings by phase are summarized in the following table:

Phase           Commitment   Knowledge   Communications   Planning   Infrastructure
Initiation          15           60            20             30           15
Adoption            25           20            35             15           25
Adaptation          20            5            20             35           20
Routinization       40           15            25             25           40
Infusion            40           15            25             25           40

This makes intuitive sense to me:

Knowledge about the new technology, acquired as early as possible, is critical at the Initiation of the technology's life cycle, or nobody will know how to use it to achieve business goals.

During Adoption, communicating the plans and goals for the new technology takes precedence, though it stays relatively in balance with the other success factors.
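One way to sanity-check these readings is to ask, for each phase, which success factor carries the largest weight. A minimal sketch with the weightings transcribed from the study's table (the function name is mine, not the study's):

```python
# Success factor weightings per phase, transcribed from the table above.
WEIGHTS = {
    "Initiation": {"Commitment": 15, "Knowledge": 60,
                   "Communications": 20, "Planning": 30, "Infrastructure": 15},
    "Adoption": {"Commitment": 25, "Knowledge": 20,
                 "Communications": 35, "Planning": 15, "Infrastructure": 25},
    "Adaptation": {"Commitment": 20, "Knowledge": 5,
                   "Communications": 20, "Planning": 35, "Infrastructure": 20},
}

def dominant_factor(phase: str) -> str:
    """Return the single highest-weighted success factor for a phase."""
    factors = WEIGHTS[phase]
    return max(factors, key=factors.get)

# Knowledge dominates Initiation (60); Communications leads Adoption (35)
# only narrowly, consistent with the "relatively in balance" reading.
```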

05/18/2009

Virtual teams and business relationships without any physical presence can work, once the on-line techniques and tools are learned. But when misunderstandings, lapses, or failures occur, on-line tools and those who use them often lack the means to recover.

Remote, non-physical relationships are not new. Initially, however, they were conducted by human intermediaries such as diplomats -- ambassadors, emissaries, and the like – who were selected for their sensitivity to language, culture, etiquette, character, as well as mission. Diplomatic skills raise the probability that politicians or businessmen can be successful together virtually, even if they never see or meet in person. Diplomatic skills work not only because they raise the probability that messages are accurately delivered, understood, and mutual interests advanced, but also because they lay the groundwork for recovery when a relationship goes south and disputes must be resolved.

Yet for diplomacy to work, the participants must have both the means and the time to conduct a social relationship. Today's digital communications, combined with globalization, and the push for ever greater productivity via virtual teams and business relationships, makes establishing, stabilizing and maintaining a relationship difficult. We are pushed to conduct social relationships in decidedly non-social time frames. It is an experiment in speed-dating on a massive scale.

Not only are we expected to speed-date our way into new business relationships, whether with colleagues or customers, but to do so with an array of real-time and near real-time technologies that are often brand new, poorly understood and difficult to control. Each new communications channel is like a shiny new toy, with some new intrinsic property we may find appealing – immediacy, asynchronicity, brevity, etc. But in the mix with established technologies, without established social norms developed over longer durations, communications may only worsen.

Diplomacy and politesse have to be relearned, often painfully, as every new communications technology takes hold. This was true of writing, telegraphy, and telephony, and is now true of e-mail, IM, texting, twittering and the entire spectrum of on-line social software. Every new communications technology in history has eventually adopted and evolved core diplomatic skills, but the process proceeds by trial and error, and moves forward fitfully. We do eventually evolve, adapt and adopt, but our physical, biological, cultural and genetic nature doesn't go virtual just because our social relationships do (cf. "Blown to Bits" by Evans and Wurster, "Being Digital" by Negroponte, "The Social Life of Information" by Brown, and many others).

Immediacy combined with anonymity can create rapid, negative feedback loops that quickly destroy working relationships, sometimes irreparably. The effect was first noticed with email "flame wars" and newsgroup postings, but the same behavior can be seen in blog comment threads, texting, twittering, etc. Such negative feedback loops are inherent to the technology, and only human judgment can mitigate them.

Here are some of my recommendations for diplomatic communications in an inhumane age:

Just because a new communications technology exists doesn't mean you have to use it. There are many ways to convey and receive information, each with its pluses and minuses, but if you can, stick to the ones you have mastered. If you're better on the phone, use your voice; if you're a better writer, stick to email. There are usually no prizes for being an early adopter, despite the hype.

Even before the recent digital flood, some consultancies had specialized as facilitators, combining a core set of techniques for getting people to communicate and work together effectively toward a common goal. Yet we don't appear to use facilitation as much anymore, even when the plethora of communications channels would suggest we should use them more. Hire a Webex facilitator to help you host an on-line meeting; it is well worth the one hundred bucks.

Encourage your company to consider using unified communications technologies, because they introduce a control-console that can put you on top of your communications instead of being victimized by them. Everyone now has both corporate/enterprise options and nearly free Internet alternatives. Make the effort to unify your personal information in one virtual place and don't spread it around like so many leather-bound address books collected over the years.

Don't neglect human nature: We are not becoming non-physical beings, and you are not your on-line avatar. Allocate the time to develop and maintain relationships and don't expect them to become robust instantaneously. Prepare for your on-line meetings, don't just show up and expect some speed-dating miracle. Judiciously use your travel budget to meet people in person, even if only for a few moments – it is money well spent.

Don't let on-line disputes escalate; when necessary, intervene and become the diplomat. Be prepared to terminate communications, or switch to another communications mode, before irreparable negative feedback loops are created.

Encouragingly, software is becoming better at helping individuals manage multiple channels – this is an interesting article about an IBM application that uses simple data consolidation techniques to help people within virtual meetings.

05/13/2009

Many IT organizations compare themselves to sports teams, yet where are the coaching staff, pre-season, training camp, game-plans and regularly scheduled practices? If practice in a safe environment under the supervision of coaches is how you get better at a skill, and essential for team success, how come our IT team never practices anything at all?

To answer these questions, I am going to compare and contrast the two worlds. Part 1 performs a quick compare & contrast, while Part 2 will focus on a root-cause analysis and make some (im)modest suggestions.

Front Office: There are, of course, executives and money guys above both professional sports teams and corporations with significant IT organizations. Each has a general manager or CEO who is (usually) an expert in their respective business, and who makes key financial decisions while trying to create & execute a winning strategy.

In professional sports, though, the general manager's decisions are largely (in consultation with the coaching staff and owners) about whom to hire, fire, trade, draft, and pay to build a great team. Note that these activities are largely operational and inward-facing. In larger corporations, most CEOs have a primarily outward-facing role that leaves daily operations to the other C-level executives.

Head Coach: Sports teams have a head coach, who determines strategy, and who hires a coaching staff with expertise in particular skills or aspects of the game. Sports strategy is an attempt to optimize your wins based on the skills & strengths of your team compared with your opponents (see for example the excellent Moneyball). Coaches develop game plans consistent with the strategy to optimize their chances of winning against particular opponents; unless, of course, you are the Oakland Raiders (sorry, had to get that out of my system).

IT doesn't have a head coach; IT has a CIO, who is largely concerned with how much money to invest in on-going operations vs. new projects within parameters set by the CFO. IT strategy is usually based on recommendations from a CTO, and the game plan for winning in a particular market is usually developed by the CMO. A CIO implements a strategy and participates in a competitive game-plan, but the CIO is usually only directly responsible for the methodology used to build and operate products or services. Aside from referring to "my team", CIOs spend precious little time, much less investment, on human capital and organizational development issues.

A CIO, therefore, is more like a member of the coaching staff, a coach whose specialty is IT. But that should not explain or excuse the lack of teamwork. A complex software development or systems project requires sophisticated business processes to keep its many participants in sync. For example, a full-blown Rational Unified Process (RUP) project uses a business process that includes dozens of roles, capabilities, and standard activities, and that's just the generic, top-level complexity of RUP. When we incorporate the technical specifics, organizational peculiarities and iteration goals (prototype, pilot, production, etc.) that are part of any project's unique characteristics, task plans with thousands of entries are not unusual.

Yet you rarely hear of IT organizations practicing their methodology to achieve optimal performance. IT seems to be the only "sport" where we expect coaches, the newly drafted and the veteran alike to run out onto the field of play and somehow know the entire strategy, playbook, competition, and assignments, and then perform with optimal productivity, all without ever having so much as a scrimmage together.

Players: Athletes at the professional level have great natural physical ability, and are also highly skilled in their discipline (quarterback, linebacker, etc.), having honed those skills over many thousands of hours of practice and hundreds of games. But athletes are not necessarily effective at using those skills to implement a strategy or game plan, or to coordinate or communicate their tasks in real-time. For all these reasons, coaching is essential to team sports.

IT professionals are usually highly intelligent, which is their version of physical ability. The IT labor market is also diverse, and its players do not always work for the same company, or reside in the same locality or country, even if they are playing on the same "team". As in team sports, maybe even more so, IT professionals are not necessarily effective at using their skills to implement a game/project plan or strategy, so coordination and communication are still essential.

Recruitment & Draft: Team sport skills are highly measurable via standards such as speed of acceleration, muscle strength, endurance, ability to memorize plays, etc. A player's discipline skills are also visually apparent in game play, whether live or on video. Recruitment is largely an activity for college coaches, who identify players with natural abilities that they can subsequently develop through coaching, turning them into running backs, linebackers, etc. At the professional level, players in particular disciplines are readily scored by the teams doing the hiring, evaluated against a team's needs, and then selected in the draft.

Within IT, college recruitment is mostly on the basis of standardized test scores, not demonstrable skills. There is a seemingly eternal dialog in academic computer science about what to actually teach students, but the study of theory usually completely submerges actual practice, and what practice there is focuses on only one discipline (e.g., coding). Undergraduate test taking, not game-time, is what determines academic ranking.

The equivalent of the draft in IT is hiring, and, in comparison to sports, evaluating IT skills is actually quite difficult. The skills within IT disciplines (coding, testing, documenting, designing, etc.) are surprisingly and alarmingly unpracticed and unmeasurable except on the job during actual projects. Because job experience is the only practice we get in IT, it is not even obvious that recruiting computer science majors is the best strategy. And IT job experience tends to be highly unique to the company where it occurred.

Training: Becoming a professional athlete is largely about the training. Some skills can be developed through individual study and training, but the direction, measurement and evaluation of an experienced coach is usually essential to achieve the highest individual level within a team context. All team sports have developed extensive & focused regimes of training & practices appropriate for an entire season and particular game.

In comparison, becoming an IT professional is largely about convincing someone to hire you in the first place, since practice is only to be had during projects, and for most practitioners, the only games being played are the ones you are paid to play in.

We all know that training in IT departments is grudgingly provided, and then only for the basic skills necessary to use some new tool. "Team training" usually means "the whole team was trained on a tool", not that anyone learned how to work together effectively.

Projects (i.e., IT games), at least, do have project managers, and IT project plans are comparable to a game plan in sports. But we never actually scrimmage or practice the game plan before we hit the field.

Coaching Staff & Game Management: Sports are generally fast, real-time events, and the coaches are in constant communication, selecting the plays, evaluating the results, providing feedback to the players, before, during and after the game.

IT projects, in comparison, rarely have someone in the role of head coach with a coaching staff. Instead, coaching responsibilities are usually distributed over project managers, development managers, test managers, operations managers, product managers, etc. Worse, within the typical corporate matrix, each of these coaches reports to a different boss with no common technique or methodology. Rather than provide feedback before, during and after a game, IT employees usually get feedback only once a year during annual performance reviews.

IT projects are not typically real-time events (outside of some operational situations), but most IT projects have only the most rudimentary tools for evaluating results. Typically, project managers only know if particular "plays" (i.e., tasks on the project plan) were completed, not whether they were successful. My guess is that IT lives with failure so often that post-project evaluations came to be called "post-mortems".

Summary: As we can see, IT does in fact share many of the traits of a sports team, and would logically benefit from coaching and practice. But today it gets neither.

In part 2, I'll explore why I think the IT industry behaves as it does, and also why I think we can learn a thing or two by behaving more like a professional sports team.

05/04/2009

IT development processes and governance can only minimize the risk of doing something really stupid and damaging to your business. But even the best development process is never a game-changer, and almost always results in lower net productivity, and can even contribute to greater alienation and misalignment of IT from business operations.

After a project train-wreck, after all the blame has been assigned, miscreants fired, and bills paid, executive management will naturally ask of the survivors: "How will you assure me this never happens again?" And the answer almost certainly will be: "We will improve our processes and take fewer risks." Re-engineering development processes is a natural, human response to some act of profound stupidity.

To explore this topic a bit further, let's compare with current events in global finance. (There are many good analogies between the worlds of finance and IT, in part, I would hazard, because both deal in abstractions -- money and data, respectively -- that are the drivers of a modern economy. Like finance, IT deals in portfolios of projects and operations -- analogous to investments and accounting -- with varying risk profiles, based on a symbolic model of the world. To my thinking, comparisons between software and more physical disciplines, such as civil engineering, usually result in false analogies and misleading conclusions.)

Many, many people have behaved stupidly over the past eight years, taking on excessive leverage in operations and paying dearly for risky investments. These behaviors culminated in an epic catastrophe in September of 2008, triggered by the collapse of Lehman and AIG. During the ensuing months, there has been a steady and growing apportionment of blame among CEOs, their supposed regulators, boards of directors and political supporters.

Soon, the "fix", in the form of improved market regulations, will be proposed by the Obama administration.

Free-market enthusiasts of the finance business worry (with some reason, yet too much hysteria) that the new regulatory cure may be worse than the disease, reducing wealth-creation by inhibiting innovation in financial instruments. These observers also assert that there will be unanticipated side-effects (and here they are almost certainly correct).

At the same time, critics on the political left rail that the champions of unfettered markets will be the same folks re-engineering the new regulatory regime, endangering prosperity for the masses and continuing to benefit the wealthy few who can afford to take risks. The political left, however, includes fewer finance practitioners, and therefore tends to criticize from a position of domain-knowledge ignorance.

President Obama has characterized the emerging regulation as follows: "The choice we face is not between some oppressive government-run economy or a chaotic and unforgiving capitalism," Obama said. "Rather, strong financial markets require clear rules of the road, not to hinder financial institutions, but to protect consumers and investors, and ultimately to keep those financial institutions strong."

When I assess an IT project disaster, I am firmly of the President Obama persuasion:

Regulation, in the form of a defined development governance process that properly assesses & manages risks, is now a necessity. IT is now too essential to business, and, like a good insurance policy, is a necessary cost. The free-market approach of no-defined process which just leaves the practitioners "free to choose" is no longer a responsible decision.

But like any business cost, process and governance need to be lightweight and not impose excess burdens on practitioners. I would never recommend the programs pitched by the larger consultancies to create a "culture of risk awareness", which will more likely shut down all innovation, unless, like the political left wing, you desire a total cessation of risk taking. Very few applications have the dependability requirements of a nuclear power plant or weapons system.

Deeply complex processes can make it even more difficult for IT practitioners to relate to business operations, because they force IT to adopt language and behaviors alien to the business per se.

Lastly, like the global finance system, corrections in governance have to be performed by the experienced practitioners that understand the complexities of modern IT. There is no turning back the clock to a simpler time, and IT problems cannot be fixed by outsiders, only by the people within.

Almost from the dawn of computing, the challenges of IT abstractions, complexity, and growth have given birth to waves of engineering process enthusiasms and literature. I am of Fred Brooks's "no silver bullet" camp (c.f. http://www.lips.utexas.edu/ee382c-15005/Readings/Readings1/05-Broo87.pdf). My summary conclusion is: There can never be one right development process, standard model or methodology for any and all businesses. Managers can only progressively improve the alignment of IT with a business through iterative projects, continuous learning, lightweight processes and, above all, better hiring.

One last thought: When dealing with the aftermath of a project disaster, managers should remind themselves that the key determinant of technical outperformance is, and always has been, the quality and talent of your employees. The relative productivity differentials between individuals in the technology industry remain as wide today as when they were first measured (see http://blogs.construx.com/blogs/stevemcc/default.aspx, and many others). Better hires, properly guided, will develop better processes to protect your business, resulting in more productive, less expensive and more responsive IT.

04/28/2009

When interviewing candidate managers who will manage people, not just a process or a market, always ask them about the last person they fired.

It is surprising how many managerial candidates have never fired anybody at all, and this one question is far more revealing about a manager's skills than softballs about finding and managing top talent.

Most managers don't find top people candidates themselves and don't usually manage the hiring process. We generally use recruiters to find candidates, who are largely dependent on job boards and proprietary personal networks. Once top candidates are found, the business process of recruitment is usually managed by HR in order to protect the corporation from legal risk. And while there may be some challenge in finding the best person for the money (i.e., who fits your budget), or in identifying the best future performers among entry level graduates, the Best candidates usually stand out pretty clearly.

Next, getting top candidates to accept an offer does show an ability to sell yourself and your company, but don't flatter yourself too much: Even the most charismatic executive cannot overcome a bum business plan. Successful hiring has more to do with the underlying health of your corporation and its ability to offer professional growth and increasing compensation than a winning smile and good Irish story.

Finally, the Best are not usually that difficult to manage – not surprisingly, that's part of the reason they are the Best. One of the great pleasures of managing in the technology business is the opportunity to manage the brightest talents on the planet, which allows you to focus on accomplishing business goals, not monitoring work hours, inappropriate behavior, ethical lapses or poor hygiene.

First off, admitting you had to fire someone is a reality check on honesty: We all make hiring mistakes, and if a managerial candidate cannot admit to a hiring mistake, then he or she is probably hiding a lot more as well, or really hasn't managed very much.

If hiring mistakes are not addressed head-on, the entire organization suffers. In the human body, the immune system constantly seeks out and eliminates threats to health. In the technology business, where people are the core asset, performance management has a similar function, identifying and eliminating hiring mistakes. A manager that does not, or cannot, fire someone -- for example, one who dishonestly shuffles an underperformer away via transfer or misrepresentation -- puts the entire corporation's productivity at risk. Yes, rehiring is expensive, but tolerating underperformance is even more expensive.

Second, the ability to fire tells you much more about a managerial candidate's day to day skills than hiring. Firing is much, much harder than hiring, because to avoid subsequent litigation risks, you have to prove that you were managing the fired person responsibly all along. Litigation over hiring is much less common than litigation over wrongful termination.

There are many reasons for firing someone, but all these reasons -- we'll call them firing-factors -- reflect on a manager's core skill sets. The set of firing-factors does not begin or end with technical skills, productivity or mental power. For example, because so much of technology is team-based, sometimes personal style and team fit is the key firing-factor: If a team under a manager's direction is underperforming, and one person is the identifiable cause, that manager must act.

What are the managerial skills and knowledge that firing someone reveals?

Setting clear goals and expectations: To fire someone, you have to prove that reasonable performance expectations were established.

Securing understanding of those goals and expectations: If the underperformer can argue that the goals and expectations were never clear, HR can force you back into mitigation-mode, where you are forced to keep the poor performer for another period of time.

Measuring people's performance: If you weren't regularly measuring performance, you can't prove their shortcomings, and leave yourself open to accusations of bias.

Holding people accountable for under-performance: If you didn't reveal the underperformance in a timely fashion, HR can force you back into mitigation.

Coaching to improve performance: Once underperformance is revealed, depending on the actual factors, you have to work out a plan to address these firing-factors. For example, ethical lapses might be cause for immediate termination, but most other firing-factors are in theory open to improvement.

Listening and emotional intelligence skills: Getting to the real cause may take more than reading a report or evaluating a number; it may take reading the person.

Process management skills: Once the firing process is underway, it must be rigorously managed, and you have to be careful about what you say and communicate all along, else you can put your company at even greater risk.

Understanding of legal and other business risks: If a candidate simply throws the firing process over to HR, it can indicate a narrow view of their job as a manager, and a lack of understanding of the huge costs a company can incur if a firing is not managed well.

In my own experience, when asked about actions I regret as a manager, the top of my list is always "I should have fired so-and-so sooner." By delaying, I caused myself ongoing trouble, reduced my own productivity, and allowed continuing organizational underperformance. I also did the person I eventually fired a disservice, since he or she was not forced to address their firing-factors quickly, or to find the right company faster. The delay in firing hurt their career as well.

So fire fast, fire well, and always ask: "Who did you fire last and why?"