Pages

Sunday, December 30, 2012

Men are dumber around women. Thijs Verwijmeren, Vera Rommeswinkel, and Johan C. Karremans gave men cognitive tests after they had interacted with a woman via computer. In the study, published in the Journal of Experimental Social Psychology, male cognitive performance declined after the interaction, and even after the men merely anticipated an interaction with a woman.

Friday, December 28, 2012

It's said that Winston Churchill was a bull that carried his own china shop! Always in action, never at rest, courageous, and a meddler-manager.

Meddler-manager is not the same as micro-manager. Churchill had numerous pet projects -- most of them either in technology or in off-the-books operations -- about which he generated uncountable ideas, all passed to the staff with "action this day!" tags.

Examples abound: floating tanks used at Normandy; the floating harbors, called Mulberry, that allowed for large ship logistics just off the beach; and the SOE (Special Operations Executive) that was a covert effort to assist resistance fighters in Europe.

He didn't micro-manage implementations, but he meddled incessantly, inserting top-down ideas of what should be done. Consequently, a lot of management energy that could have been directed toward project implementation went instead toward fending off the meddler.

Fortunately, meddlers are more often focused on results -- outcomes -- than process (the internals), so they are often process agnostic.
But, meddlers are impatient, seeking immediate satisfaction. So any delay or extraordinary schedule only brings more meddling, and brings new ideas before the old 'new' ideas are fully baked.

The best defense is offense: a constant flow of incremental outcomes that partially satisfy and provide short feedback loops about what's good and bad (right and wrong) about what's been developed so far. In other words, deliberate volatility generates the information and evaluation needed for validation and continuous improvement. Over-managing change management to the point of smooth sailing may close off just the kind of disorder that drives improvement.

Unfortunately for the Admiral-program manager, good and useful and timely program information does not necessarily beget good and timely program performance. This dashboard is for the F-35 fighter aircraft, and by all accounts this program is in big doo-doo. Of course, that's what you'd learn by examining the data; so there's no mystery that the earned value for this project is way off the mark.

But, back to the dashboard. You can see from the panel on the left that there are many charts to show trend lines for measurements for several different metrics. Project flows are in the center panel, and then subsequent panels on the right document other project activities.

One unique aspect of this dashboard is that some of the data are for the design and development program for the aircraft, and some are for the production program. The acquisition strategy for the F-35 was to begin production as soon as there were design models that verified the likely performance. Simply put, that's a risky strategy, and if the assumptions break down, as they did in this program, all the dashboards in the war room can't put Humpty Dumpty back on the original track.

One metric that is on this dashboard is the sunk cost -- the actual cost to date. Some are saying that the program must go forward so that the sunk cost is not a waste. Others say you shouldn't make a decision on the basis of sunk cost -- only the future is relevant. So far, it's been decided that the future is relevant and the program continues.

In the new book, Taleb says he wants it to be the definitive explanation of the spectrum of fragile (read: Black Swan, incalculable risk), robust (read: survivable, calculable risk), and antifragile (read: risk-driven improvement).

In a few words, his points are these:

Fragile systems cannot absorb shock; they break, they fail, and they do great harm when they collapse

Robust systems can absorb shock, but robust systems are no better off after the shock than before; they don't learn or improve from the experience

Antifragile systems not only can absorb shock, but the disorder/disruption is actually fuel for innovation and improvement. The next shock will have less effect; any new system will build upon the disorder of the past and be better.

Multiple systems
And, we're not talking physical systems necessarily; all manner of human factors and biological systems are included, along with political systems, etc.

One example he gave in a television interview compared a taxi driver and an office worker. The former deals with the uncertainty of business every day and constantly adapts to stay afloat economically; the latter, if laid off suddenly, is devastated and adrift. The former is antifragile (because of constant learning and adaptation); the latter is fragile (because the shock is unsustainable).

In business, he says the restaurant and aviation sectors are antifragile, constantly learning from mistakes, and the financial industry is fragile -- vulnerable to black swan events.

Domain sensitivity
Taleb makes the point that the qualities of the antifragile are at the same time domain dependent -- that is, context sensitive -- and domain independent -- that is, it is valid to represent the phenomenon in one domain with similar characteristics in another domain, though often we miss this cross-domain connection.

Taleb writes: "We are all, in a way, ... handicapped, unable to recognize the same idea when it is presented in a different context. It is as if we are doomed to be deceived by the most superficial part of things, the packaging...."

Redundancy and risk management
We learn this about risk management: "Layers of redundancy are the central risk management property of natural systems. ... Redundancy ... seems like a waste if nothing unusual happens. Except that something unusual happens -- usually."

Project context
Almost every project I know of embraces continuous improvement. But to make it effective, CI should be paired with reflection, investigation of root cause, and actionable strategies. These get at the learning component of being antifragile.

Actionable strategies begin with a dollop of systems engineering: decouple where possible to trap problems before they propagate. Decoupling is most often accomplished by modularity and defined interfaces.

And then we add redundancy -- equivalent capabilities implemented in different ways, decoupled for independence -- and cohesion.

Cohesion is the property of absorbing shock without failure. We get at this with buffers, redundancy, and elastic physical properties.
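Taken together, decoupling and redundancy can be sketched in code. This is a minimal, hypothetical illustration (the DataStore interface and the store classes are my inventions, not from the post): two independent implementations of one capability sit behind a defined interface, so the failure of one is trapped at the boundary rather than propagating to the caller.

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Defined interface: callers depend only on this contract."""
    @abstractmethod
    def read(self, key: str) -> str: ...

class PrimaryStore(DataStore):
    def __init__(self, healthy: bool = True):
        self.healthy = healthy
    def read(self, key: str) -> str:
        if not self.healthy:
            raise IOError("primary store is down")
        return f"primary:{key}"

class BackupStore(DataStore):
    """Redundancy: an equivalent capability, implemented independently."""
    def read(self, key: str) -> str:
        return f"backup:{key}"

def resilient_read(key: str, stores: list) -> str:
    """A shock (one store failing) is absorbed at the interface
    boundary instead of propagating to the caller."""
    for store in stores:
        try:
            return store.read(key)
        except IOError:
            continue  # absorb the shock; try the redundant path
    raise RuntimeError("all stores failed")

print(resilient_read("x", [PrimaryStore(healthy=False), BackupStore()]))
# prints "backup:x" -- the failure never reaches the caller
```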

The final test
Taleb gives us this as the final test: "[We] ... detect antifragility (and fragility) using a simple test of asymmetry: anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile."
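The asymmetry test can be illustrated with a toy simulation (my sketch, not from the book): subject a convex payoff and a concave payoff to the same symmetric random shocks, and see which one gains on average.

```python
import random

random.seed(42)
shocks = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # symmetric disorder

def convex(x):    # more upside than downside: antifragile
    return x * x

def concave(x):   # more downside than upside: fragile
    return -(x * x)

def avg_payoff(f):
    return sum(f(s) for s in shocks) / len(shocks)

gain_convex, gain_concave = avg_payoff(convex), avg_payoff(concave)
print(gain_convex > 0, gain_concave < 0)
# prints "True True": the convex payoff profits from disorder on average,
# while the concave payoff is harmed by the very same shocks
```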

Thursday, December 20, 2012

It's a takeoff on the more familiar object and task burn-down chart applied universally in agile. Mike's chart is pretty straightforward: estimate the risk impact (as modified by likelihood) and call this 'risk exposure'. When the risk is mitigated, or passes by without impact, the exposure is burned off.
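A minimal version of that burn-down bookkeeping might look like this (the risk names, impacts, and likelihoods are invented for illustration):

```python
risks = [
    {"name": "vendor slip",    "impact_days": 20, "likelihood": 0.5, "open": True},
    {"name": "key-staff loss", "impact_days": 30, "likelihood": 0.2, "open": True},
    {"name": "test-lab delay", "impact_days": 10, "likelihood": 0.4, "open": True},
]

def total_exposure(risks):
    """Risk exposure = impact x likelihood, summed over open risks."""
    return sum(r["impact_days"] * r["likelihood"] for r in risks if r["open"])

print(round(total_exposure(risks), 1))   # 20.0 schedule-days at risk

risks[0]["open"] = False                 # vendor risk mitigated: burn it off
print(round(total_exposure(risks), 1))   # 10.0 -- the exposure burned down
```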

Nice. But is this old wine in a new bottle?

Exposure
Some years ago -- actually, many years ago -- I took over a department of about 85 engineers, mostly system engineers, with a budget (then) of about $10M annually. One of the first things my new boss asked me to do was to figure out what my (our) exposure was.

Exposure? I understood this to be a question about risk, but what's the metric here, and how do you measure for data? For instance, the metric could be scope exposure, schedule exposure (as in Mike's chart), quality, or budget. Or, it could be some other "measure of effectiveness" -- MoE -- that was applied in the various programs.

In the moment, what my boss was asking for was an estimate of the budget risk (impact and likelihood) lurking in the work package assignments to which my department engineers had committed themselves.

That $10M department budget was all signed out to various projects for work that had to be done, but what if the work could not be done for $10M? How much risk was there for me as department director and for the various program managers who needed the work done?

So, I had my metric: monetized budget; and I had a way to measure it.

Management and Monte Carlo
What I did, of course, was build a version of a risk register, much like Mike's but with some differences that account for risk management: for each work package, I asked my WP managers to give me a 3-point estimate (no single points allowed!) of the remaining effort required to get 'done' (cost-to-complete according to a completion standard more or less understood by all -- a similar idea exists in agile, of course).

A Monte Carlo simulation of the WP estimates gave me -- actually, my business analyst -- a single-point measurement (an expected value statistic) to compare with the single-point budget. Each statistic is a deterministic number, so I can do arithmetic on it. Of course, the optimistic, most likely, and pessimistic numbers are random numbers -- values in a distribution -- so I can't do arithmetic with them except by simulation; their columns are not simply added. The difference between these two points was my (our) exposure, measured as the risk-weighted expected value of the portfolio.
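A sketch of the approach, under assumed numbers (the work-package figures and budget below are invented): sample each work package's cost-to-complete from a triangular distribution built on its 3-point estimate, sum across the portfolio, and compare the expected value to the fixed budget.

```python
import random

random.seed(7)

# (optimistic, most likely, pessimistic) cost-to-complete per work package, $K
work_packages = [(80, 100, 160), (40, 60, 110), (150, 200, 320)]
budget = 360  # the signed-out budget, $K

def one_trial(wps):
    # random.triangular(low, high, mode) draws one possible cost per WP
    return sum(random.triangular(lo, hi, ml) for lo, ml, hi in wps)

trials = [one_trial(work_packages) for _ in range(20_000)]
expected = sum(trials) / len(trials)

exposure = expected - budget  # risk-weighted expected value vs. the budget
print(f"expected ~{expected:.0f}K, exposure ~{exposure:.0f}K over budget")
```

Note that the columns of optimistic/most-likely/pessimistic values are never added directly; only the simulated totals are averaged, which is the point made above.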

Tuesday, December 18, 2012

You might say supremely confident, even a touch of arrogance that he'll find the way out, no matter the hole he's in (First rule of holes: stop digging!)

In any event, if you've not been exposed to the rapid-fire delivery of Mr Andreessen, there's an interview with him on Charlie Rose that you might like. And, as always, Andreessen specializes in the 'next big thing'.

The authors posit that decision tables should be treated like any other relational table in a relational paradigm, thus requiring adherence to certain rules for building information by rows and columns. For instance, decision tables should follow the normal rules of normalization, which, in a few words, removes redundancy and generally provides for the maxim about data: "enter once, use many."
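As an illustration of the 'atomic rules, normalized table' idea (the table below is my invented example, not from the essays): each row carries single conditions and one conclusion, with no ORs or ELSEs, so the table can be stored and queried like any relational table.

```python
# Columns: customer_type | order_total_at_least | discount_pct
# One atomic rule per row: single conditions, one conclusion fact.
rules = [
    ("gold",     0, 10),
    ("gold",   500, 15),
    ("silver",   0,  5),
    ("silver", 500,  8),
]

def discount(customer_type, order_total):
    """Return the best discount among rows whose conditions match."""
    matches = [pct for ctype, threshold, pct in rules
               if ctype == customer_type and order_total >= threshold]
    return max(matches, default=0)

print(discount("gold", 600))    # 15: both gold rows match; the best applies
print(discount("silver", 100))  # 5
print(discount("bronze", 100))  # 0: no matching row
```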

From the first essay, which presents a six-step process for building decision tables, we learn that we can expect "... a deliverable that is more valuable than its pieces." The resulting model is:

Comprised of the most atomic pieces of business logic (no Ors, no ELSEs, no OTHERWISE, one conclusion fact type)

Based on disciplined fact types

Normalized to minimize redundancies

Predictable in structure and

Aligned with business performance and directions.

From the second essay we learn that the benefits of decision tables are explained this way:

A decision table is an intuitive visual representation. This circumvents the need for other less friendly representations – such as formal language, strict grammar rules or fill-in-the-blank sentence templates.

Both business people and technical professionals understand a decision table if it is devoid of technical artifacts.

Some forms of incompleteness, inconsistency, and redundancy become visible in a decision table.

Certain technologies lend themselves to easy automation of decision tables. In fact, most Business Rule Engines (or Business Rule Management Systems, BRMS) accept decision tables as a format for creating and making some changes. That’s because automation of decision tables into such engines is fairly straightforward.

These essays have a good exposition of the author's ideas, but there is a supporting 20-page "primer" (in pdf format) about decision models that is worth a read. You can find it -- after free registration -- at this location.

Friday, December 14, 2012

Merriam-Webster:

Metric: a standard of measurement -- Example: no metric exists that can be applied directly to happiness — Scientific Monthly

Measure: the dimensions, capacity, or amount of something ascertained by measuring

At Leading Strategic Initiatives, Greg Githens has a worthy posting on what makes a good metric. Githens offers six qualities for metrics, abridged and repeated here:

It measures something important. ... metrics reflect the imperatives of the individual or the organization.

It has relevance to the audience. Since ... initiatives have different stakeholders, one of the biggest challenges is to prioritize the audience and tailor [metrics for] ... them.

It measures something that is directly controllable by individuals or small groups. This suggests that metrics are local, and connected to action.

It is resistant to gaming. .... the metric is difficult for self-centered actors to manipulate.

It is a member of a very small, lean set of measurements. Since people have a limited span of attention, we want to keep the metrics to a handful.

[It is a member of a] ... set of metrics [that] includes both leading and lagging indicators. No one drives their car by focusing [exclusively] on the rear view mirror, they [also] look down the road to see the turns and respond to the threats.

If you're thinking about applying these ideas and are looking for some specific project metrics, do a read of John M. Green's essay at dtic.mil entitled "System Measures of Effectiveness". In the abstract, Green writes:

Proper selection of performance measurement attributes is essential to [the performance analysis] process. These measurement attributes, commonly called measures of effectiveness, or MOEs, provide quantifiable benchmarks against which the system concept and implementation can be compared.

In this paper, you'll find this table of metric qualities (characteristics), very similar to what Githens posits:

Monday, December 10, 2012

I'd never thought in terms of a gender attitude/bias/approach to risk management until I read the posting "How men and women manage risk differently," which is an interview with Angela Minzoni Alessio, an industrial and business anthropologist from the Ecole Centrale in Paris.

Apparently, there is this to know:

The evolution and taking into account of qualitative policies and evaluation instead of only or predominantly quantitative policies. In other words, being open to risk management from a wider perspective.

The focus on end-to-end prevention and care systems from design to implementation and evaluation. We should be looking at risk management from start to finish.

An increasing capacity and focus on training to explicitly deal with subjects like morale, taboo, anger, hope or fear. Project management training should include all these when it comes to dealing with risk.

Today’s mainly masculine way to deal with risk and danger remains attached to objectivity and purity, with the risk analysis profession favouring the paradigm of rational choices, thinking probabilistically and using universalising terminology.

... we observe women will tend to be less impulsive and more willing to listen and explicitly acknowledge feelings such as danger and fear. This same attitude is also favourable to the disclosing of errors, an essential step in risk management.

Saturday, December 8, 2012

Defining a non-linearity:
Before we get into his main points, you might ask: what is a nonlinearity and why should I care?
A good answer is provided in Donella Meadows' fine book on systems, "Thinking in Systems: A Primer". Meadows actually quotes from James Gleick's "Chaos: Making a New Science":

A nonlinear relationship is one in which the cause does not produce a proportional effect. The relationship between cause and effect can only be drawn with curves or wiggles, not with a straight line.

Brooks is the author of "The Mythical Man-Month", the theme of which is that time and persons are not interchangeable. Why? You guessed it: non-linearities!

Project example
And non-linearities are also what's behind Brooks' Law. He posits that the communication overhead that goes with additional staff -- to say nothing of the inefficiency to divert energy and time toward integrating new team members -- is the gift that keeps on giving. This communication overhead is a constant drag on productivity, affecting throughput and thus schedule.

There's actually a formula that explains this non-linearity to some extent. That is: the number of communication paths between project team members expands almost as the square of the number, N, of team members:

The number of bidirectional communication paths between N persons is equal to: N * (N - 1), or N*N - N

Now, anytime we see a variable, like N, modifying itself, as in the factor N*N, we have a nonlinearity.

To see how this formula works, consider you and me communicating with each other. N = 2, and the formula forecasts that there are 4 - 2 = 2 paths: I talk to you, and you talk to me.

Now, add one more person and the number of paths increases by 4! Good grief. We jump from 2 paths for two people to 6 paths for three people: 3*3 - 3 = 6. Talk about your nonlinearity!

Test it for yourself: 1 and 2: I talk to you and you talk to me; 3 and 4: I talk to the third person and she talks to me; 5 and 6: you and third person talk back and forth.
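The formula is simple enough to code directly, which also makes the marginal cost of each new team member easy to see:

```python
def comm_paths(n):
    """Bidirectional communication paths among n people: n * (n - 1)."""
    return n * (n - 1)

for n in (2, 3, 4, 10, 11):
    print(n, comm_paths(n))
# 2 people -> 2 paths, 3 -> 6, 4 -> 12, 10 -> 90, 11 -> 110.
# Note the marginal cost: going from 10 to 11 team members adds
# comm_paths(11) - comm_paths(10) = 20 new paths -- the nonlinearity
# behind Brooks' Law
```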

Of course, this example is only one of a myriad of non-linearities faced by project managers, so that makes Pavel's posting all the more important.

Pavel makes these three main points:

Nonlinear relationships between project parameters ... arise as a consequence of the balance betweencomplexity of work, objectives of work, and productivity of work performers.

Nonlinearities .... arise as a consequence of the limited capabilities of work performers, and limitations that are connected with technological feasibility of work

Nonlinear relationships ... characterize communication and contacts between people, and, as a consequence, team productivity

We've already discussed an example of point #3; the big issue in point #2 is the non-linearity experienced when we reach our limits of capability and feasibility. At the limit, no matter how much more we put into it, we just don't get a proportional response.

Point #1 goes to the issue of complexity, an outgrowth of the complicated. The latter does not necessarily beget the former. Complexity is the emergence of behavior and characteristics not observable or discernible just by examining the parts; complicated is merely the presence of a lot of parts.

To see this by example, consider your project team in a large open-plan work space. If there's plenty of room to move about, that situation may be complicated by all the people and tools spread about, but not complex.

Now, compress the work space so that people bump into one another moving about, tools interfere with each other, and one conversation interferes with another.

This situation is complex: the behaviors and performance are different, emerging beyond what was observable when the situation was only complicated.

Thursday, December 6, 2012

Everyone familiar with TRIZ?
Just a reminder: it's an acronym from Russian, so no need to expand it precisely, but in English it is generally referred to as "the theory of inventive problem solving"
(sometimes, TIPS)

The TRIZ process is sort of a mind mapping into distinct categories, the so-called TRIZ-40, originally envisioned as a way to actually structure the process of innovation.

The way I got onto this was from a blog by Matthew Squair positing a 'reverse TRIZ' process.

Squair does it this way:

The basic technique is as follows.

1. Start with the objective e.g. “Minimise the risk of a plant gas leak and explosion”.

2. Reverse this objective e.g. “I want to increase the risk of a plant gas leak and explosion”

3. Then exaggerate/amplify the objective (hazard), e.g., “I want lots of gas leaks and when they occur big explosions!”

4. Finally ask the participants, “What resources do I need to achieve this objective”, some answers in this case might be:

Tuesday, December 4, 2012

1. Collaboration
This is openness in the sense that boundaries of firms are becoming more porous, fluid and open. Tapscott tells the story of Rob McEwen, a man he knows not because he scoured the world for case studies but because the two men are neighbors. McEwen headed up Goldcorp, where he did a radical thing: publishing his geological data to see if anyone in the world could find gold in his lands. Submissions came in from all over the world, and for $500,000 he found $3.4 billion worth of gold. The company’s market cap went from $90 million to $10 billion. “As my neighbor, I can tell you he’s a happy camper,” adds Tapscott. Yet here’s the real moral of the story: some of the best submissions didn’t even come from geologists. The winning submission came from a computer graphics company. This marks a huge change in the way we can think about how to innovate to create goods and services, and public value.

2. Transparency
“Here we’re talking about the communication of pertinent information to stakeholders,” he says. People might be bent out of shape by Wikileaks, “but that’s just the tip of the iceberg.” After all, it’s not only Julian Assange who has information on our institutions. Companies have to be naked and transparent, and frankly, if you’re going to be naked, “fitness is no longer optional. You better get buff.” Companies better have value — and they’d better have values. And, he adds, this is good, not bad! “Sunlight is the best disinfectant. We need a lot of sunlight in this troubled world.”

3. Sharing
Sharing is about giving up assets or intellectual property. Conventional wisdom said that you developed your own IP, and if someone infringed it, you sued them. But that doesn’t seem to have worked so well for the record industry, he adds. The industry that brought us Elvis and the Beatles is now suing children. The pharmaceutical industry, too, is in trouble, about to fall off the so-called patent cliff. Pharma needs to start sharing pre-competitive research, to share all sorts of clinical data, and provide the rising tide that will lift all boats.

4. Empowerment
In the Tunisian revolution, the new media didn’t cause the revolution, social media didn’t create the revolution — it was created by the young generation that wanted hope and jobs. But just as the Internet drops collaboration costs in business, it drops the cost of rebellion and insurrection in ways people didn’t initially understand. In the Tunisian revolution, snipers were killing unarmed kids. In return, kids were taking pictures and sending them to friendly soldiers who’d then come and take out the snipers. “You think social media is about hooking up online? It’s a military tool for self-defense,” says Tapscott. Looking at the ongoing unrest in Syria, he says: “Three months ago, you’d be injured, go to hospital with a broken leg and come out with a bullet in the head.” Now young people have used social media to improvise and create an alternative healthcare system.

Sunday, December 2, 2012

It's a conversation now
User stories are a real shift in the way customer/users express themselves.

Stories are a move away from the world of "shall and will" structured requirements and into the world of "want and need" conversation.

Thus, agile is a domain of conversational requirements. As such, it's much less dogmatic about requirement structure. The downside is that verification of design and validation at delivery requires that the customer/user be in touch along the way, or they may lose touch with the conversational thread.

Customers in shock
Of course, this may come as a big shock to customers: they may not be accustomed to, or expecting to be embedded, always at the ready, and empowered to rule in near real time. It takes a savvy dude to exercise this power wisely, effectively, and knowledgeably.

Some training (of the customer) may be required! (Don't try this at home)

Keeping track for V&V
And, it's much more difficult to record a conversation, structure it to remove ambiguity, vagueness, and redundancy, and relate it to other requirements. These difficulties beget test-driven development, both tactically and strategically. In the TDD paradigm, the test script/scenario/procedure captures and documents feature, function, and performance.
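A small sketch of that paradigm (the user story and the late_fee function are invented for illustration): the test script becomes the durable, unambiguous record of what the conversation agreed to.

```python
# Story (invented): "As a librarian, I want a late fee of $0.25 per day,
# capped at $10, so that patrons are nudged but not punished."

def late_fee(days_late):
    return min(days_late * 0.25, 10.0)

# The tests document feature, function, and performance in executable form:
assert late_fee(0) == 0.0      # returned on time: no fee
assert late_fee(4) == 1.0      # 4 days late -> $1.00
assert late_fee(100) == 10.0   # the cap applies
print("all story tests pass")
```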

Quality
And, what's more, if done to the requisite standard, quality in the small sense (conformance to standard) and quality in the large sense (conformance to need and want) come along also.

Friday, November 30, 2012

Authority and power: often misunderstood; often abused; sometimes used effectively

I suggest the following about authority and power :

Managers have authority by virtue of institutional/constitutional position, and power by virtue of their exercise of authority

Authority is the ability and the institutional right to authorize (to say YES)

With or without authority, you can always say NO (and gum it up; staffers do this all the time)

Power comes from the fear/threat/utility of the application of (authorized) resources

Really effective power comes from the ability to communicate, where communication means to be able to instruct/educate/persuade/motivate.

However, it's also true that a leader (less so a manager) without authority can still have power -- perhaps very effective power -- by virtue of their communication skill.

Someone with authority but no real power can say "do this" but it won't get done or it won't stick. Stickiness is the real mark of power... say "do this" and it gets done, and it sticks! (No back channel work arounds, no problems of subordination; it just sticks!)

Of course, authority can segue into authoritarian.
We've posted on that before:

Wednesday, November 28, 2012

We often hear that managers want "permission to fail"; or that more progressive organizations convey "permission to fail"

Really?

I suggest some caution: "permission to fail" in most instances means "permission to take a risk" that might have good payoff, but which also might not work out. If it doesn't, there may be harm, but at least no foul.

It never means "permission to be incompetent", or not to be up to the job.

Monday, November 26, 2012

Although project management is a thinking person's profession, you wonder sometimes what's going on when silly things happen. But, of course, that's why we blog: to raise issues for thinking people.

Saturday, November 24, 2012

Mike Griffiths writes the "Leading Answers" blog and contributes considerably to the efforts of PMI to support agile.
On his blog site you'll find things like mapping the PMBOK to agile process steps using an interactive matrix that you can use to click about and find interesting agile tips for working with the PMBOK.

Now, in a recent posting, he provides (in a downloadable pdf of a ppt presentation) an explanation of the ACP exam, entitled "Inside the PMI-ACP Exam." We learn this bit of news: the PMI-ACP certification population has now passed the risk management certification population to become the most popular certification after the PMP and CAPM.

For practitioners looking not only for sample questions but also for the structure and organization of the exam, Mike's presentation is a really nice reference to have.

Thursday, November 22, 2012

One of my Agile Project Management students asked me about stage gates and agile.
My first response was this:

Agile is not a gated methodology, primarily because scope is viewed as emergent, and thus the idea of pre-determined gate criteria is inconsistent with progressive elaboration and emergence.

Agile does embrace structured releases; you could put criteria around a release and use them as a gate for the scope to be delivered.

Re budget: agile is effectively 'zero base' at every release, if not at every iteration. You can call the question at these points of demarcation.

Agile is a "best value" methodology, meaning: deliver the 'most' and 'best' that the budget will allow, wherein 'most' and 'best' is a value judgement of the customer/user.

Of course, every agile project should begin with a business case which morphs into a project charter. Thus, the epic narrative (the vision narrative) is told first in the business case, and retold in more project jargon in the charter. Thence, there are planning sessions to get the general scope and subordinate narratives so that an idea of best value can be formed.

But, DSDM is one agile method, among others, that is more oriented to a gated process than say, SCRUM. To see how this could work, take a look at this presentation:

Tuesday, November 20, 2012

I am reading (on a free Kindle reader app) a great book on the power (and frustration) of Bayes' Theorem: "The Theory That Would Not Die" by Sharon Bertsch McGrayne.

Bayes is the guy -- from the 18th century -- who told us that, given some data (actual observations), we can reverse-engineer the underlying probabilities, or at least parameters like the mean and deviation. One catch is that we are required to guess a starting point. Oops! Guessing is not what we do in project management, or mathematics for that matter.

This idea (guessing to start, then improving the guess with real data) is anathema to the 'frequentists', who go at it the other way 'round: given parameters, we can predict data. Oops! If the event has never happened, or happens infrequently, or has never been observed, where do the parameters come from? How can we use situations like these to drive decision making? If we can't make a decision with it, then does anybody care? Spending time observing such stuff is not what we get paid to do in project management.
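The Bayesian move (start with a guess, improve it with data) can be sketched with the standard textbook Beta-Binomial update; the numbers below are invented for illustration.

```python
# Prior guess: success rate near 0.5, held loosely -> Beta(2, 2)
alpha, beta_ = 2.0, 2.0
prior_mean = alpha / (alpha + beta_)

# Then the data arrive: 70 successes observed in 100 trials
successes, trials = 70, 100
alpha += successes
beta_ += trials - successes
posterior_mean = alpha / (alpha + beta_)

print(round(prior_mean, 3), round(posterior_mean, 3))
# prints "0.5 0.692": the data drag the initial guess toward the
# observed rate of 0.7 -- and more data would drag it further still
```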