Pages

Sunday, December 30, 2018

How many project managers are still laboring with the aftermath of Frederick Winslow Taylor, more popularly known as F.W. Taylor? You might ask: Who was Taylor? F.W. Taylor was one of the first to study business systematically. He brought "Taylorism" into the business culture in the years leading up to World War I. By 1915, his ideas were considered quite advanced, and they had significant impact well into the mid-20th century.

Taylor was a mechanical engineer who worked early on in a metal products factory. Appalled at the seemingly disorganized and informal management of the time, and equally distressed by the costly throughput of poorly motivated workers laboring at inefficient processes, Taylor set about to invent "scientific management", a revolutionary movement that proposed the reduction of waste through the careful study of work.

Taylor came up with the original 'time-and-motion' studies, perhaps one of the first attacks on non-value work. Peter Drucker, a management guru par excellence who coined the term 'knowledge worker', has ranked Taylor, along with Darwin and Freud, as one of the seminal thinkers of modern times. ["Frederick Taylor, Early Century Management Consultant", The Wall Street Journal Bookshelf, June 13, 1997, pg. A1]

The essence of Taylorism is the antithesis of agile principles, but it is nonetheless instructive. Counter to what we know today, Taylor believed that workers are not capable of understanding the underlying principles and science of their work; they need to be instructed step-by-step in what to do and how to do it; and nothing is left to chance or decision. Rigid enforcement is required.

However, Taylor was close to the mark with his doctrine about value-adding work. According to Taylor, managers must accept that they have a responsibility to design efficient and effective processes and procedures. Waste must be eliminated! Every action requires definition and a means to measure results.

Taylor was not well liked by workers, and it's not hard to see why. But Taylor's ideas and practices brought great efficiencies and profitability while providing customers with products of predictable quality. Taylor's most important legacy is perhaps his ideas of scientific management and the importance of process definition and process management as a means to control product and productivity.

I like what Steve McConnell says about quality and the software relationship. Building off Taylor's ideas of 'do it once right', though he does not mention Mr. Taylor, McConnell, author of the respected book "Code Complete", states: the "general principle of software quality is ... that improving quality reduces development costs ... the best way to improve productivity is to reduce the time reworking ..."

Kent Beck, writing in his book "Extreme Programming Explained - Second Edition", has a pretty strong idea about the legacy of Taylorism and its lingering effects on the knowledge industry. He says of Taylor that he brought a social structure we continue to unconsciously apply, and warns against the message that Taylorism implies: workers are interchangeable; workers only work hard enough to not be noticed; quality is an external responsibility.

A project management tip
Frederick Taylor was the first to study and quantify non-value work and put emphasis on eliminating wasteful and time-consuming processes, procedures, and environmental impediments.

In a fiduciary relationship, one person, in a position of vulnerability, justifiably vests confidence, good faith, reliance, and trust in another whose aid, advice or protection is sought in some matter.

In such a relation, good conscience requires the fiduciary to act at all times for the sole benefit and interest of the one who trusts.

So, what are we to make of that?
Certainly, the project manager is, or should be, vested with confidence, good faith, reliance, and trust. So, that makes the PM a fiduciary watching out for the vulnerable.

And, in a project situation, who is vulnerable?

The client or customer?

The sponsor?

Other project staff?

And, the PM is to hold all their interests in hand and find the best solution that optimizes interests for each of them? Good luck with that!

At some point, some ox is going to get gored. And then who blames the fiduciary? And to what risk is the fiduciary held?

The answer is: it's different in every project, depending on whether the client or sponsor is most supreme. And, of course, how does the PM get measured?

Thursday, December 27, 2018

John le Carré, himself a former intelligence professional, is one of my favorite authors, to say nothing of the dry British wit and sparkling prose that support some quite challenging plots. Nonetheless, I didn't expect to find this wisdom on the pages of "Our Kind of Traitor":

In operational planning there are two opportunities only for flexibility: One, when you've drawn up your plan. Two, when the plan goes belly up. Until it does, stick like glue to what you've decided, or you're ....

Monday, December 24, 2018

Have you thought much about this? Two of the conceptual conundrums of the hybrid methodology project are:

How do you verify that which is incomplete and

How do you validate the efficacy of that which is yet to be conceived?

Verification and validation (V-and-V) are traditionally held to be very important project practices that are difficult to map directly into the Agile domain. Traditionally, V-and-V has these practices:

Validation: Each requirement is validated for its business usefulness, in effect its efficacy toward project objectives. Validation usually occurs no later than the last step in gathering and organizing requirements

Verification: When development is complete, and when integration of all requirements is complete, the roll is called to ensure that every validated requirement is present and accounted for.

Validation
Placed into an Agile context, validation is applied both to the project backlog and to the iteration backlog, since changes are anticipated to occur.

Validation is typically first applied at the story or use case level, validating with conversation among the interested and sponsoring parties that the functionality proposed is valid for the purpose.

One can imagine validating against external rules and regulations, perhaps internal standards, and of course validating against the business case.

Verification
Verification is generally a practice at the iteration level, verifying that the iteration backlog matches the iteration outcomes, and logging any differences.

Depending on the project paradigm, V-and-V can be carried into integration tests and customer acceptance tests, again testing against various benchmarks and standards for validity, and verifying that everything delivered at the iteration level got integrated at the deliverable product level.

Friday, December 21, 2018

You don't have to bother with gathering requirements; requirements just emerge

You don't have to have any documentation; it's all in the code

You can do away with V&V: verification and validation, because that's like QA tacked onto the end

You don't really have to have an architect, because (somehow) the best architecture emerges

Taking responsibility for business critical performance
In my view, and what I tell my students: Nonsense, all of it! "They" have never tried to build something with OPM (other people's money) and been personally accountable for how the money is spent, what value is produced, and how the value/cost ratio was managed to the advantage of the business. But even more important, "They" have never had to be responsible for business-critical performance.

Regulators -- helpful?
But to that add external regulators. Regulators don't give a flip about what "They" think. There had better be outcomes that can be audited back to the base level; there had better be documentation that supports claims; there had better be a way to do V&V before the "what did you know when and why didn't you know sooner" questions arrive via your local lawsuit.

In any regulated product market, like medical devices for instance that are built with a lot of software, the focus has to be on the joint satisfaction of the buyer/user and the regulator. Fortunately, both of these groups are on the "output" side of the project, which fits Agile quite well.

Where agile has a vulnerability is in the compliance part... unless compliance is built into the backlog, either as a framework or as explicit "stories". To not do so is to take a really unrealistic path to only temporary success... temporary until the regulators tear it apart.

Same comments apply for any number of regulated businesses, like banking, by the way, and back office areas like cash management and receivables where these things have to sustain audits, to say nothing of safety systems like certain critical avionics, ship controls, and industrial controls.

Oh, big data!
And, in this day and time: "big data". Ever tried to validate a data warehouse with tens of millions of records? The issue is simple; the solution is not. Reporting from a data warehouse is almost like "lying with statistics": you can find some data that fits almost any scenario, but is the context accurate? The marriage of data with context is where the complexity (and information) lies. Doing data reports in Agile could be a fool's errand if the "stories" are not carefully crafted.
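For warehouse-scale validation, one practical (if partial) tactic is rule-checking a random sample of records rather than every row. This sketch is purely illustrative: the record layout and business rules are made up.

```python
# Hedged sketch: validate a random sample of warehouse records
# against business rules, instead of scanning all N rows.
import random

random.seed(7)

# Stand-in for a warehouse extract (hypothetical layout)
records = [{"id": n, "amount": random.uniform(-10, 1000)} for n in range(100_000)]

# Hypothetical business rules: name + predicate per rule
rules = [
    ("non-negative amount", lambda r: r["amount"] >= 0),
    ("id present",          lambda r: r["id"] is not None),
]

sample = random.sample(records, 5_000)
for name, rule in rules:
    failures = sum(not rule(r) for r in sample)
    print(f"{name}: {failures} failures in {len(sample)} sampled records")
```

Sampling bounds the cost, but it only estimates the failure rate; context errors of the "lying with statistics" kind still require the carefully crafted stories the paragraph argues for.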

Tuesday, December 18, 2018

When you say "risk management" to most PMs, what jumps to mind is the quite orthodox conception of risk as the duality of an uncertain future event and the probability of that event happening.

Around these two ideas -- impact and frequency -- we've discussed in this blog and elsewhere the conventional management approaches. This conception is commonly called the "frequentist" view/definition of risk, depending as it does on the frequency of occurrence of a risk event. This is the conception presented in Chapter 11 of the PMBOK.

The big criticism of the frequentist approach -- particularly in project management -- is that too often there is no quantitative back-up or calibration for the probabilities -- and sometimes not for the impact either. This means the PM is just guessing. Sponsors push back and the risk register's credibility is put asunder. If you're going to guess at probabilities, skip down to Bayes!
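The frequentist calculus the sponsors are pushing back on is simple enough to sketch: each register entry is a probability-impact pair, and exposure is the probability-weighted sum. The risks and numbers below are hypothetical.

```python
# Minimal frequentist risk register: (probability, impact) per risk,
# exposure = sum of probability-weighted impacts.
risk_register = {
    "vendor slips delivery": (0.30, 50_000),   # 30% chance, $50K impact
    "key engineer leaves":   (0.10, 80_000),
    "scope growth":          (0.50, 20_000),
}

exposure = sum(p * impact for p, impact in risk_register.values())
print(f"Expected exposure: ${exposure:,.0f}")  # -> Expected exposure: $33,000
```

The arithmetic is trivial; the credibility problem is entirely in where the 0.30, 0.10, and 0.50 come from.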

However.. (there's always a however it seems), there are three other conceptions of risk that are not frequentist in their foundation. Here are a few thoughts on each:

2. Failure Mode and Effects Analysis (FMEA): Common in many large-scale and complex system projects and used widely in NASA and the US DoD. FMEA focuses on how things fail, and seeks to thwart such failures, thus designing risk out of the environment. Failures are selected for their impact with essentially no regard for frequency. This is because most of the important failures occur so infrequently that statistics are meaningless. Example: run-flat tires. Another example: WMD countermeasures.

3. Bayes/Bayes theorem/Bayesians: Bayesians define risk as the gap between a present (or more properly 'a priori') estimate of an event and an observed outcome/value of the actual event (called more properly the posterior value).

There is no hint of frequentist in Bayes; it's simply about gaps -- what we think we know and what it turns out that we should have known. The big criticism -- by frequentists -- is about the 'a priori' estimate: it's often a guess, a 50/50 estimate to get things rolling.

Bayes analysis can be quite powerful... it was first conceived in the 18th century by an English mathematician and preacher named Thomas Bayes. However, in WWII it came into its own; it became the basis for much of the theory behind antisubmarine warfare.

But, it can be a flop also: our 'a priori' may be so far off base that there is never a reasonable convergence of the gap no matter how long we observe, or how many observations we take.
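A toy illustration of the prior-to-posterior gap, assuming the 50/50 starting guess mentioned above. The observations are made up; the update is a standard Beta-Bernoulli one.

```python
# Start from a 50/50 'a priori' guess and update it as pass/fail
# observations arrive; the "risk" is the prior-vs-posterior gap.
prior = 0.5                               # the 50/50 guess that gets things rolling
observations = [1, 1, 0, 1, 1, 1, 0, 1]  # hypothetical pass/fail outcomes

# Beta-Bernoulli update: Beta(1, 1) is the uniform prior with mean 0.5
successes, failures = 1, 1
for obs in observations:
    successes += obs
    failures += 1 - obs
posterior = successes / (successes + failures)

print(f"a priori: {prior:.2f}, posterior: {posterior:.2f}, "
      f"gap: {abs(posterior - prior):.2f}")
```

If the prior is badly off base, as the next paragraph warns, the gap closes slowly or not at all within the observations you can afford to take.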

4. Insufficient controllability, aka autonomous operations: the degree to which we have command of events. Software, particularly, and all autonomous systems generally are considered a "risk" because we lack absolute control. See also: control freak managers. See also the movie: 2001: A Space Odyssey. Again, no conception of frequency.

Now, to be fair, Mike Cohn more or less supports the thesis we present here when he quotes Philip Anderson, who writes in "Biology of Business":

Self-organization does not mean that workers instead of managers engineer an organization design. It does not mean letting people do whatever they want to do. It means that management commits to guiding the evolution of behaviors that emerge from the interaction of independent agents instead of specifying in advance what effective behavior is. (1999, 120)

But, back to the headline: What did Mr. Highsmith tell us? (Of course, he said more than these bullets, but these are the highlights)

There is just too much experience and management literature that shows that good leaders make a big difference

There is a contingent within the agile community that is fundamentally anarchist at heart and it has latched onto the term self-organizing because it sounds better than anarchy. However, putting a duck suit on a chicken doesn’t make a chicken a duck.

Delegating decisions in an organization isn’t a simple task; it requires tremendous thought and some experimentation

Leading is hard. If it was easy, every company would be “great,” to use Jim Collins’ term (Good to Great).

What did he not tell us?

Dominance is a human trait not easily set aside; thus the natural leaders will come to the fore and the natural followers will fall in, thankfully. There's no need and no practical way to rotate the leadership once dominance is established

Like it or not, positional authority counts for something in all but the smallest enterprises. Thus, senior managers are senior for a reason. It's hard to establish credibility with the stakeholders that hold the key to resources if the team is being led from the bottom of the pecking order.

Self-organization may deny biases and bully the nemesis off the team. Group think, anyone?

Delegation is a tricky matter: do only those things that only you can do

And the answer is: according to Highsmith, something called "light touch", but in reality it means leading and managing from a position of trusting the team, but mentoring the "self-organization" towards a better day.

Wednesday, December 12, 2018

You have to give Jurgen Appelo high marks for imaginative illustrations that catch the eye and convey the thought. He says this is one of his best illustrations ever; he may be right. He calls it his "celebration grid". I imagine Jurgen will be telling us a lot more about this if it catches on.

Sunday, December 9, 2018

Does software fail, or does it just have faults, or neither?
Silly questions? Not really. I've heard them for years.

Here's the argument for "software doesn't fail":

Software always works the way it is designed to work, even if designed incorrectly. It doesn't wear out, break (unless you count corrupted files), or otherwise not perform exactly as designed. To wit: it never fails

Here's the argument for "it never fails, but has faults":

Faults refer to functionality or performance incorrectly specified such that the software is not "fit for use". Thus in the quality sense of "fit for use" it has faults.

What’s odder about the views of my correspondent is that, while believing “software cannot fail“, he claims software can have faults.

To those of us used to the standard engineering conception of a fault as the cause of a failure, this seems completely uninterpretable: if software can’t fail, then ipso facto it can’t have faults.

Furthermore, if you think software can be faulty, but that it can’t fail, then when you want to talk about software reliability, that is, the ability of software to execute conformant to its intended purpose, you somehow have to connect “fault” with that notion of reliability.

Monday, November 26, 2018

Of note, however, is this bit from L. Rafael Reif, MIT President, as quoted in the press

The goal of the college is to "educate bilinguals of the future"
And, to be clear, bilinguals are people whose 'other interests' are -- among others -- biology, chemistry, politics, history, and linguistics who are skilled in the techniques of modern computing that can be applied to them

Who said IT guys are one-dimensional? No longer; they can be bilingual
And, by extension, so can the project managers of the world be something other than PM-nerds
We can learn other languages as well!

Better yet: we can be multi-dimensional in both knowledge and wisdom -- what a concept!

Tuesday, November 20, 2018

From Viscount Nelson, victorious British commander in chief at the naval battle of Trafalgar, we get this insight for initiative and independent action, as described by Admiral James Stavridis in his book "Sea Power":

Nelson knew he would not have clear and instantaneous communication ... [making] precise command and control impossible.
As [Nelson] said in his [planning] memorandum: "Something must be left to chance; nothing is sure ... ""In case signals can neither be seen or perfectly understood, no captain can do very wrong if he places his ship along side the enemy ..."

So, there's some good stuff there for project managers:

Don't lean heavily on the idea you will always be in touch when it matters

Accept the idea that command and control systems have their limits; other processes work also

Do think it through and commit to a plan -- even though the plan itself may not survive first contact with project reality [some would call this making an estimate .... gasp!]

Set expectations and then unambiguously delegate authority to meet those expectations

Saturday, November 17, 2018

So, I'm just catching up with the buzz about blitz-scaling, the business model that says:

Get to scale fast! Actually, get to even larger scale even faster.
Blitz your way there!
Only the fastest to scale wins; there's hardly a spot for number two

One might ask: What's the debt and debris accumulated in blitzing scale?

Reid Hoffman has an answer in his book, titled no less than: "Blitzscaling: The lightning-fast way to massively valuable companies"

Conventional process-oriented decision making, supported by "facts" and analysis of risk, discounted cash flow, and the like, is out

What's in is speed, decisions based on instinct and partial data, and a willingness to pay the downside if risks don't work out

Ok, almost anyone could imagine that deregulating is going to allow speed with some broken glass along the way.
In the past, Reid argues, business put a high value on not breaking the glass.
Efficient and predictable processes with predictable outcomes were king.

Remember the "Theory of Constraints" developed in the early '80s: Efficiency in resource utilization was the answer to better business

Saturday, November 10, 2018

I've heard it many times that this little ditty is the essence of why Agile is problematic with its dearth of plans, estimates, etc:

"Would you tell me, please which way I ought to go from here?
'That depends a good deal on where you want to get to,' said the Cat.
'I don't much care,' said Alice.
'Then it doesn't matter which way you go,' said the Cat.
'So long as I get SOMEWHERE,' Alice added as an explanation.
'Oh, you're sure to do that,' said the Cat"

- Lewis Carroll, from "Alice's Adventures in Wonderland"

Not so fast!

No Agile project is sans a Narrative or Vision or epic story

No Agile project is without some tie to the business, and thus a business outcome or influence,

No Agile project is without some commitment of resources from the business -- folks I know don't work for free

On the other hand

Some Agile projects have trouble getting off the stage; to wit: Are we done yet?

Some Agile projects spend the money and get little done, certainly little for the business

All Agile projects benefit from some degree of planning and estimating, if only to frame the project onto the right first step.

Wednesday, November 7, 2018

I thought this posting on the "Agile Canon" was worthy of passing along in its entirety. So, there's the link for a pretty good read on the most important elements of a canon that all should be interested in adopting:

Sunday, November 4, 2018

One of my students offered this strategy for establishing, maintaining, and leveraging relationships with the customer. I thought it was pretty good, so here's the idea:

1. Customer Account Responsible (ACR) -- who ... is the Account Manager for the domain, market, or dedicated to the customer (big accounts), responsible for:

Account relationships,

Opportunity identification,

Commercial management,

Communication management.

Normally the SPOC for a business development effort.

2. Customer Solution Responsible (CSR) – This role is held by various people depending on the company type – Solution Architects, Solution Consultants, Solution Managers, etc. – and they have the responsibility of:

a. End-to-end solution integrity
b. Collection and documentation of requirements from the customer – Executive, Business, Technical, and User requirements – securing the sign-off on the requirements scope with the ACR and CFR to the customer.
c. Prioritization of the requirements with the respective customer responsibilities, in order of increasing importance, and determining the "fit to need" alignment of the requirements
d. Mapping solution requirements to the vendor solution/product portfolio, determining the deltas, and deciding how to fill those deltas.
e. Hand-off of the complete solution design documentation to the project execution team, and providing input to the executing team (CFR) for project execution planning.
f. Including and managing SMEs, Product Owners, etc., and their deliverables as needed in the various verticals required to address the solution design and product lifecycle

3. Customer Fulfillment Responsible (CFR) – this is normally the PMO organization that turns the solution into reality inside the customer organization/premises/site:

Thursday, November 1, 2018

Does software fail, or does it just have faults, or neither?
Silly questions? Not really. I've heard them for years.

Here's the argument for "software doesn't fail": Software always works the way it is designed to work, even if designed incorrectly. It doesn't wear out, break (unless you count corrupted files), or otherwise not perform exactly as designed. To wit: it never fails

Here's the argument for "it never fails, but has faults": Never fails is as above; faults refer to functionality or performance incorrectly specified such that the software is not "fit for use". Thus in the quality sense of "fit for use" it has faults.

What’s odder about the views of my correspondent is that, while believing “software cannot fail“, he claims software can have faults. To those of us used to the standard engineering conception of a fault as the cause of a failure, this seems completely uninterpretable: if software can’t fail, then ipso facto it can’t have faults.

Furthermore, if you think software can be faulty, but that it can’t fail, then when you want to talk about software reliability, that is, the ability of software to execute conformant to its intended purpose, you somehow have to connect “fault” with that notion of reliability. And that can’t be done. Here’s an example to show it.

Consider deterministic software S with the specification that, on input i, where i is a natural number between 1 and 20 inclusive, it outputs i. And on any other input whatsoever, it outputs X. What software S actually does is, on input i, where i is a natural number between 1 and 19 inclusive, it outputs i. When input 20, it outputs 3. And on any other input whatsoever, it outputs X. So S is reliable – it does what is wanted – on all inputs except 20. And, executing on input 20, pardon me for saying so, it fails.

That failure has a cause, and that cause or causes lie somehow in the logic of the software, which is why IEC 61508 calls software failures “systematic”. And that cause or causes is invariant with S: if you are executing S, they are present, and just the same as they are during any other execution of S.

But the reliability of S, namely how often, or how many times in so many demands, S fails, depends obviously on how many times, how often, you give it "20" as input. If you always give it "20", S's reliability is 0%. If you never give it "20", S's reliability is 100%. And you can, by feeding it "20" proportionately, make that any percentage you like between 0% and 100%. The reliability of S is obviously dependent on the distribution of inputs. And it is equally obviously not functionally dependent on the fault(s) = the internal causes of the failure behavior, because that/those remain constant.
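The example of S is easy to make runnable. This sketch follows the specification in the text: the fault is constant, but the measured reliability swings with the input distribution.

```python
# The deterministic software S described above, as code.
def S(i):
    if isinstance(i, int) and 1 <= i <= 19:
        return i
    if i == 20:
        return 3          # the fault: the spec says S(20) should return 20
    return "X"

def spec(i):
    """What S is *supposed* to do."""
    if isinstance(i, int) and 1 <= i <= 20:
        return i
    return "X"

# Reliability depends entirely on how often "20" appears in the demands,
# not on the (constant) internal fault.
for demands in ([20] * 10, list(range(1, 11)), [20, 20, 1, 2, 3]):
    ok = sum(S(i) == spec(i) for i in demands)
    print(f"reliability over {len(demands)} demands: {ok / len(demands):.0%}")
    # -> 0%, 100%, 60% respectively
```

The fault sits unchanged in the code across all three runs; only the demand profile moves the reliability number, which is exactly the point of the paragraph above.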

Monday, October 29, 2018

Validation and Verification: traditionalists know these ideas well. Do they still have relevance in the Agile space?

My opinion: Yes!

Traditional V-and-V: the way it is

Traditional projects rely on validation and verification (V-and-V) for end-to-end auditing of requirements:

Validation: the requirements ‘deck’ is validated for completeness and accuracy.

If there are priorities expressed within the deck, these priorities are also validated since priorities affect resource utilization, sequencing, and schedule.

Verification: After integration testing, the deck is verified to ensure that every validated requirement was developed and integrated into the deliverable baseline; or that changed/deleted requirements were handled as intended.
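The verification "roll call" above amounts to a set reconciliation between the validated deck and the delivered baseline. A minimal sketch, with hypothetical requirement IDs:

```python
# Verification as set reconciliation: every validated requirement must be
# delivered or accounted for as an approved change/deletion.
validated = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
delivered = {"REQ-1", "REQ-2", "REQ-4"}
approved_deletions = {"REQ-3"}     # handled as intended via change control

unaccounted = validated - delivered - approved_deletions
extras = delivered - validated     # delivered but never validated

print("unaccounted:", sorted(unaccounted))       # should be empty
print("unvalidated extras:", sorted(extras))     # should be empty
```

If either set is non-empty, the baseline and the deck disagree, and that difference is the audit finding.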

Agile: what's to verify; what's to validate?

The BIG QUESTION: Is the strategic intent of the narrative answered? Is the business case on a path to success?

After all, the grand bargain in Agile is that flexibility for tactical implementation is allowed insofar as there is faithfulness to the strategic intent. Tactics are fluid; strategy is not.

Agile V-and-V: the way to do it

Certainly, Agile projects are less amenable to the conventional V-and-V processes because of the dynamic and less stationary nature of requirements.

Validation: After the business case is set, the top-level narrative is in place, and the overall strategy of the project is framed, some structured analysis can occur on the top level requirements.

If there are priorities expressed within these business case requirements, these priorities are also validated

Conversational-style requirements -- aka, stories -- are also validated, typically after the project backlog or iteration backlog is updated.

Verification: After integration testing, the deliverable functionality is verified to ensure that every validated conversation was developed and integrated into the deliverable baseline; or that changed/deleted conversations were handled as intended.

During development, expect some consolidation of stories, and expect some use (or reuse) of common functionality.

Thus, recognize that Agile may not maintain a fully traceable identity from the time a conversation is moved into the design and development queue to the time integration testing is completed. However, the spirit of the conversation should be there in some form. It’s to those conversational forms that verification is directed.

The last thing to do is circle back to the narrative: Is the big question verified? If so: victory!
If not, back to the sponsor for guidance and direction

Friday, October 26, 2018

Have you ever been asked: "What time is the 3 pm meeting?"
You're thinking: "This guy is on something; or he's texting while talking!"

We here in the backyard of the seemingly larger-than-life Walt Disney World* pay some attention to the management paradigms coming out of our corporate neighbor.

And, so the Disney response to that question is instructive, as given in this blog post from the Disney Institute, which I sum up as:

'Any interaction provides an opportunity to add value and improve quality of communications'

"What time is the 3 o'clock parade?" On any given day in the Magic Kingdom at Walt Disney World Resort, you might hear Guests asking our Cast Members this seemingly peculiar question. And, while the question appears to have an obvious answer, we also know that frequently the true question lies beyond the obvious.

As our Guests are often excited and distracted ..... So, Cast Members will ask some additional questions to uncover what it is that the Guest really wants to know ... such as, "What time will the parade get to me?" "When should I start waiting to get a good viewing spot?" and "Where is the best place to stand?"

Instead of simply repeating the obvious answer—the actual parade start time—back to the Guest, our Cast Members take this opportunity to .... share with the Guest what time the parade will pass by certain locations in the park, offer possible vantage points to view the parade or advise when to leave another area and still arrive at the parade on time.

This is important, because rather than dismissing the "3 o'clock parade?" question as something trivial and offering a blunt response, Cast Members understand that it offers the opportunity to exceed the Guests' expectations ......
.... the "3 o'clock parade" question is commonly used to help Cast Members understand that their answer can either end the conversation, or it can begin a quest for richer discovery .....

*Did I mention: 7 parks, 29 hotels on property, 40,000 acres, and tens of thousands of "cast members"?
And, I am an un-paid volunteer for Disney Sports Attractions

Tuesday, October 23, 2018

Looking for a project dashboard that really provides insight at a glance?

This one from John Higbee might be the answer

If you're not a Higbee person, maybe you've not seen it. Take a look at John Higbee's presentation about "Program Success Probability". (*)

Take notice of the neat arrangement of program success divided left and right by internal and external factors.

On page 5 of Higbee's slides, you'll find this image:

Dynamic colors
This presentation is intended as a dashboard. The colors are dynamic on a Red-Green-Yellow-Gray (not evaluated) scale. The scale has to be defined (calibrated) for each program in order for management to be able to get a proper take-away.

Trendy
Trends are shown in each block with arrows. Again, trends must be defined for each program, i.e. what is the meaning of an up-pointing arrow?

Of course, Higbee goes on in the presentation with more detail and more examples of dashboard presentations, for example the more-or-less standard presentation of sliding bars to show progress vs plan

For the Gov in all of us
Since this presentation is for a government audience, it includes dashboards for contractor performance and even contractor business success

Bottom line: some interesting suggestions for dashboards are in this presentation, along with at least one gov'y's idea of what's important.

-------------------
(*) Search this site for other Higbee presentations; you'll find others you might be interested in.

Saturday, October 20, 2018

"So Einstein was wrong when he said, 'God does not play dice.' Consideration of black holes suggests, not only that God does play dice, but that he sometimes confuses us by throwing them where they can't be seen."
― Stephen Hawking

Science and engineering projects
If you line up with Hawking, and are looking for a start in the quantum world, read this:

GAITHERSBURG, Md.—The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has signed a cooperative research and development agreement (CRADA) with SRI International to lead a consortium focused on quantum science and engineering. SRI International is a nonprofit, independent R&D center headquartered in Menlo Park, California.

Wednesday, October 17, 2018

The bad haircut
What do you say when your colleague comes in with a bad haircut? (*)
Jump on it? Criticize it?
Threaten abuse?
Probably none of the above; probably you ignore it or make some civil remark

The bad idea
What if the same person comes in with a bad idea? Now what?
Probably you can't ignore it, but your commentary can be civil, inquiring, benefit of the doubt and all that (Speak softly and carry a big stick ... our guy Roosevelt; and look at what he accomplished)

Sunday, October 14, 2018

"The New York Herald pointed out [that] the telegraph appeared to make it possible for the whole nation to have the same idea at the same moment. .... Henry David Thoreau raised an eyebrow: "We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate"

The New York Times

Nothing important to communicate? Then why is everyone staring at their screens all the time? Could it be simple addiction to having the same idea at the same moment as everyone else?

Thursday, October 11, 2018

"Scholars ... have situated resilience, the ability to sustain ambition in the face of frustration, at the heart of ... leadership growth. Why some people are able to extract wisdom from experience, others not, remains a critical question"

Doris Kearns Goodwin, Historian

"Leadership in Turbulent Times"

In another venue, we might say some people are naturally street smart, while others have seen it all -- but can't make anything of it.

Tuesday, September 25, 2018

Poster child for the evil ratio:
Wouldn't it be nice if we could ban % Complete from the lexicon of project management!

% Complete is a ratio, numerator/denominator. The big issue is with the denominator. The denominator, which is supposed to represent the effort required, is really dynamic and not static, and thus requires update when you replan or re-estimate -- something that almost never happens, thus consigning the denominator to irrelevance.

Why update?
Because you are always discovering that stuff isn't as easy as it first looked. Thus, we tend to get "paralyzed" at 90% (no progress in the numerator, and an obsolete denominator)

Doesn't changing the denominator mean you're changing the plan along the way? Yes, but the alternative is to remain frozen on a metric/plan you are not tracking (or tracking to)
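The frozen-denominator problem is easy to show with numbers; the effort figures here are hypothetical.

```python
# Why a static denominator "paralyzes" % complete at 90%: hours of effort.
done = 90
original_estimate = 100            # the denominator nobody updates
print(f"reported: {done / original_estimate:.0%}")   # -> reported: 90%

# Discovery: the remaining work is harder than it first looked
re_estimate = 140                  # re-planned total effort
print(f"honest:   {done / re_estimate:.0%}")         # -> honest:   64%
```

Against the stale denominator the project sits at "90% complete" indefinitely; against the re-estimate it is honestly about two-thirds done, with real work remaining.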

What's the fix?

Personally, I prefer these metrics, none of which are ratios. And why do I like this set of non-ratio metrics? Because there is a good mix of "input", which is always of concern to the PM and the sponsors, and "output", which is always of concern to users and customers, and is the value generator for the business. Thus, this set keeps an eye on both the input and the output.