Tools can help teams work faster if they are set up correctly and contain good data. Here is a list of some of the tools my clients are using. It is by no means a complete list, but it might prompt you to investigate tools you have not heard of before.

In March 2018, a new version of the Capability Maturity Model Integration (CMMI) and its appraisal method was released; both can be used starting in 2019. This article summarizes the changes and upcoming milestones.

The overall focus of V2.0 is to drive home the point that the model is about improving organizational capability in order to improve performance for any type of project (Agile, Waterfall, or a hybrid).

Figure 1: CMMI V2.0 — click on image to make larger

Goals and Motivation for Version 2.0

The CMMI Institute has focused on simplifying both the model and appraisal method. Here are some of their original goals for the changes:

Model

A simplified model that reduces redundancy.

One model that includes Development, Services, Security, Supply Chain and People practices. Appropriate topics are selected for different types of organizations.

Improve Business Performance: Business goals are tied to operations to drive measurable performance improvement.

Leverage Current Best Practices: CMMI V2.0 is a trusted source of proven best practices.

Build Agile Resiliency and Scale: The model contains direct guidance on how to strengthen Agile with Scrum.

Benchmark Capability and Performance: The new performance-oriented appraisal method improves the reliability and consistency of benchmarking while reducing preparation time and lifecycle costs.

Accelerate Adoption: Online platform and adoption guidance make the benefits of CMMI more rapidly achievable.

Changed “Process Area” to “Practice Area”: This emphasizes that the model is not a collection of (rote) processes to be implemented, but a collection of practices to be used to run and manage projects.

Organized practices within each Practice Area: Practices are organized by levels instead of Specific Goals. Levels provide a clear, methodical path for building capability and improving performance within each Practice Area.

Replaced the Generic Practices with two new Practice Areas: Governance (GOV) & Implementation Infrastructure (II) to reduce the redundancy and complexity of the Generic Practices. They foster the persistence and habit of an organization’s processes and their business value rather than compliance to the model.

Major Model Changes

The new layout includes levels (similar to the old Capability Levels) that provide a simple process maturity pathway. For example, to be Maturity Level 3 (ML3), Practice groups at Levels 1, 2 and 3 (the “Level” columns in Figure 2) have to be satisfied.

Maturity Levels

Figure 2 shows the new structure of the model. The Practice Areas for each Maturity Level are:

Supplier Agreement Management (SAM) can be declared Not Applicable if there are no suppliers (same as CMMI V1.3).

Figure 2: CMMI V2.0 Practice Areas

A free summary of the model (the CMMI V2.0 Quick Reference Guide) is available at cmmiinstitute.com/resources/; you will have to create an account. A free sample of the Estimation Practice Area will be posted there as well.

New and Changed Practices

Here is our (unofficial) summary comparison of the two models:

Below we summarize the significant practice changes in V2.0. Note that the wording of many practices has changed, and some have moved to different areas of the model. For example, if your organization is targeting Maturity Level 2, some of the old ML3 practices have now moved to ML2. We have left out Level 1 practices since they are assumed to be performed if Level 2 practices are performed.

CMMI V2.0 Additions to ML2

There are seven new practices and one changed practice.

3.2: Select and monitor supplier processes and deliverables based on criteria in the supplier agreement.

Process Asset Development (PAD)

2.2: Develop, buy, or reuse process assets.

3.2: Develop, record, and keep updated a process architecture that describes the structure of the organization’s processes and process assets.

3.7: Develop, keep updated, and make organizational measurement and analysis standards available for use.

Decision Analysis and Resolution (DAR)

3.1: Develop, keep updated, and use a description of role-based decision authority.

Causal Analysis and Resolution (CAR)

2.1: Select outcomes for analysis.

2.2: Analyze and address causes of outcomes.

3.1: Determine root causes of selected outcomes by following an organizational process.

3.2: Propose actions to address identified root causes.

3.3: Implement selected action proposals.

3.4: Record root cause analysis and resolution data.

3.5: Submit improvement proposals for changes proven to be effective.

CMMI V2.0 Deletions from ML3

OPD SP 1.2: Establish and maintain descriptions of lifecycle models approved for use in the organization.

OPD SP 1.7: Establish and maintain organizational rules and guidelines for the structure, formation, and operation of teams.

The new Value statement in the model text, which is listed below each practice, is now a required part of the model. The appraisal team has to make sure that the value statement (intent) of each practice is being met. To quote the appraisal method, “The intent of each practice is collectively: the practice statement, the value statement, and the additional required information.”

CMMI V2.0 Levels 4 and 5

The practices in Maturity Levels 4 and 5 are similar to CMMI V1.3, but they have all been reorganized. For now, assume that your Lead Appraiser will explain the details.

Generic Practice Changes

The old Generic Practices have been replaced by two new Practice Areas, Implementation Infrastructure (II) and Governance (GOV). They are now applied and appraised against your processes, rather than against each Process Area as in V1.3.

We will write more about these in a later article.

Implementation Infrastructure (II)

II makes sure processes are performed, similar to the previous Generic Practices.

Governance (GOV)

3.1 Senior management ensures that measures supporting objectives throughout the organization are collected, analyzed, and used.

3.2 Senior management ensures that competencies and processes are aligned with the objectives of the organization.

4.1 Senior management ensures that selected decisions are driven by statistical and quantitative analysis related to performance and achievement of quality and process performance objectives.

There is an official mapping of practice changes between V1.3 and V2.0. Since it is very long and hard to read, we have created a more concise mapping that we will provide to clients when we deliver upgrade or introductory CMMI training.

Obtaining the Model Document

The full model text is available for purchase in PDF or Viewer form at cmmiinstitute.com/model-viewer/. The annual Viewer license includes updates as they are published. The one-time PDF purchase does not.

One-time copy of the full model (price does not cover future updates): $150.

V1.3 to V2.0 mapping: Free.

Online web Viewer: $250 per year. If you purchased a copy at the old $450 price, your license will be extended by 7 months.

Trial Viewer access: $50 for 30 days.

Three-day CMMI class student Viewer access: The Viewer price is included in the class fee and lasts 30 days.

Enterprise Viewer license: Enterprise licenses are available in bundles of 10, 25, 50, 100, 150, 200, 250 or unlimited. Seat licenses can be reassigned when people leave the organization. Contact partners@cmmiinstitute.com for quotes.

The original licensing policy announced in March 2018 about sharing model content changed in July 2018. Now a Lead Appraiser can share a not-for-profit tool copy (e.g., a spreadsheet) containing the model practices with a client without the client needing to purchase a copy of the model. However, the client must purchase a license before the Lead Appraiser can give the client a copy of the model's descriptive text.

Sustainment: This is a new appraisal type that can be used to extend an organization’s rating for six years (two years at a time based on eligibility criteria).

Three Sustainment appraisals can be performed (once every two years) before a new Benchmark appraisal is required. A Sustainment appraisal is one third the scope of a Benchmark appraisal. There are criteria for eligibility, such as:

The relevant sampling factors from the previous appraisal have not significantly changed (e.g., type of work, size of organization).

Note: The eligibility criteria of “In-scope projects from the prior Benchmark or Sustainment Appraisal are still active” has been deleted in the latest version of the appraisal method.

Organizational Sampling

The minimum organizational sample will be a random sample generated by the CMMI Institute Appraisal System (CAS). The Lead Appraiser will enter information about the organization’s projects and CAS will respond with a random sample of projects and practices. This random sample is generated a maximum of 60 days in advance of the appraisal “conduct” phase. The Lead Appraiser can (and in our opinion should) sample more than the minimum.

Performance Report

A new performance report must be delivered to the sponsor and the CMMI Institute for Benchmark and Sustainment appraisals. (It is optional for Evaluation appraisals.) The performance report summarizes the metrics that have improved throughout the CMMI implementation (e.g., improved schedule prediction or reduced rework). The performance report does not impact the rating.

The task of generating the performance report should be started very early in the improvement cycle. The CMMI Institute has an example spreadsheet that can be used. With some clever thinking, the performance report could be the artifact used to demonstrate the Managing Performance and Measurement (MPM) Practice Area.

Training for Appraisal Teams

Appraisal team members previously trained under CMMI V1.3 may upgrade their training to CMMI V2.0 through one of the following course formats:

Either course will upgrade the students’ training to CMMI V2.0 for all content areas, including Development, as well as content areas not yet released (e.g., Services, Supplier Management, Security, etc.)

High Maturity Appraisal Team Members (Levels 4 and 5) will also be required to complete the 1-day “High Maturity Concepts” course from a qualified HM Lead Appraiser at $100/person plus the LA’s labor fee. The CMMI Institute expects to develop a self-paced, e-learning format of the “High Maturity Concepts” course, but no delivery date has been established.

Foreign language interpreters

CMMI V2.0 appraisals will require interpreters to be registered with the CMMI Institute. They have to:

Pass an English test

Pass the CMMI associate exam

Pass a verbal exam conducted by CMMI Institute

Prove that they have an English qualification (e.g., degree)

Transition Dates for V2.0

Below are the main dates when the new model and appraisal method will come into effect:

CMMI V1.3 appraisals can be conducted until 30 September 2020 (a change from the original date of March 2020). The appraisal result will still be valid for three years.

CMMI V2.0 Development appraisals can be conducted starting January 2019.

The CMMI V2.0 Services and Supplier Management Practice Areas will be available 4Q2018. Service appraisals can be conducted mid-2019.

If, under your previous CMMI V1.3 implementation:

all projects were equally process-mature (that is, you didn’t just have one stellar project for the appraisal),

the CMMI practices were used to run the business (that is, to achieve business goals, address chronic problems, maintain gains and mitigate risks), and

you had a practical and thoughtful Lead Appraiser you loved,

…then there are only a few implementation items that have changed that you will have to adopt, and many appraisal rules and regulations your LA will have to worry about.

However, if your organization was implementing CMMI with no depth (:<), just barely surviving each appraisal with rote practice implementation (:<), and you had a robotic Lead Appraiser you hated (why?), then many things have changed in V2.0 to ensure your company is focused on results (:>).

Please feel free to contact us if you need help improving or navigating CMMI.

Overcommitment (promising more than one can do) is common in every industry. Overcommitment can be positive and cause people to stretch and grow. It can also lead to embarrassment, disaster and financial loss. Given that there is a range of outcomes, great project teams are very careful when making commitments.

There are many reasons that people overcommit. These include:

Single focus: No other options are considered. Just say “Yes,” because we have to, always.

Fact free: No data are used to evaluate the amount of overcommitment or communicate the risk of overcommitting.

Conscious: Both parties agree to the risk of overcommitting and develop mitigation actions to reduce the impact.

Psychological: One or both sides have a desire to punish, control, feel needed, or appease.

One common trait of individuals and teams that routinely overcommit is that no other options are considered. If the customer or big boss requests A, then A is committed to, whether it can be done or not. No further exploration or discussion is pursued. There is one request and one answer.

Just saying “Yes” is probably the most common reason teams get into trouble. It is deemed easier and safer to lead the requester astray in the moment than to engage in discussion. The concern is that a discussion might lead to conflict, and some people will avoid even potential conflict at any cost.

Individuals and teams that successfully avoid chronic overcommitment explore far more options before committing. They use data to understand their capacity and know what they can and cannot commit to. There is one request with numerous achievable options considered, each one with known risk.
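As a toy illustration of using capacity data before committing, consider the sketch below. The function name, hour figures, and team sizes are all hypothetical, not part of any formal method described in this article:

```python
def commitment_options(request_hours, weekly_capacity_hours, weeks_available):
    """Compare a work request against known team capacity.

    All names and numbers are hypothetical, for illustration only.
    """
    capacity = weekly_capacity_hours * weeks_available
    if request_hours <= capacity:
        return "achievable as requested"
    # Overcommitted: quantify the gap so it can be discussed with data,
    # not argued from opinion.
    gap = request_hours - capacity
    extra_weeks = gap / weekly_capacity_hours
    return f"overcommitted by {gap} hours (~{extra_weeks:.1f} extra weeks needed)"

# A five-person team at 30 productive hours per person per week, for 4 weeks
# (600 hours of capacity):
print(commitment_options(500, 5 * 30, 4))  # → achievable as requested
print(commitment_options(700, 5 * 30, 4))  # → overcommitted by 100 hours (~0.7 extra weeks needed)
```

Even a back-of-the-envelope check like this turns “just say Yes” into a conversation about options and risk.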

Below are some questions to help you come up with other approaches to commitment and negotiation. No single approach will work every time, and practice will be essential to make them work. The goal is not to eliminate overcommitment; that may be impossible. The goal is to improve your batting average and reduce the degree of overcommitment.

Committing and overcommitting – individuals and teams

1. When you or your team receive a work request, do you:

just say “Yes” to appease the requester in the moment?

ask questions to understand the larger context and details of the request?

ask the requester for the latest date they need a reply so that you know how much time you have to evaluate the request?

ask about potential future requests so that you have more time to react next time?

2. When you respond to a work request, do you:

just say “Yes” and hope it works out — or assume you can ask for forgiveness later?

provide achievable options for them to consider? For example:

offer a cheaper and quicker solution to get them started.

offer a solution with an achievable schedule, not necessarily the date they wanted.

suggest a solution provided in increments based on achievable dates.

suggest a deluxe version, which costs more, but has additional features or scope they would like.*

provide data and details (e.g., labor, materials, availability, and risks) to substantiate your options so that they know you are credible?

suggest who would be a better choice for the work if you are not a good fit?

*Consider this option if you sell stuff to customers for money.

3. When you overcommit to a work request, are you overcommitting because you:

do not have good estimate and capacity data before agreeing?

forget to use the data you have?

do not assess the risks to you and the requester of overcommitting?

are afraid to mention any of the data you have to the requester? (Ask yourself, “What is the worst that would happen if you shared the information?”)

like overcommitting? (It does something positive for you.)

4. When there is disagreement between you and the requester (e.g., you can deliver by June, but they want April), do you:

just say “Yes,” and hope it works out?

understand the larger need they are trying to satisfy with the request (see Negotiation Example below)?

assess, communicate, and get agreement on the risks?

communicate data and options to the requester with your whole team with you (in-person or virtually), so that this position is not seen as just one person’s opinion?

say “No” to protect your integrity and their success?

The last option (saying “No”) is obviously risky to your employment. If you believe you are capable of the request, but not in the timeframe desired, then use the questions above to explore other options. If you have exhausted absolutely all options, then hunker down, communicate the risks, and refine your approach next time.

Presenting options

There is a skill to presenting options. Consider these steps:

Present 3-5 achievable options. With each option, state the effort, duration, risk and assumptions.

STOP TALKING and push the ball into their court.

Answer questions but don’t offer better and better deals! Have the other side absorb the reality. If they push back, assess and communicate risk.

Manage all changes. For each change, communicate the impact to estimates and risks.

Negotiation Example

One useful negotiating approach comes from the book “Getting to Yes” [1]. The basic premise is to assume that the position taken by each side (e.g., a June deadline or October deadline) is just one position of many that could have been selected to serve each side’s interest (e.g., time to do a quality job and the need to generate cash flow).

This negotiation approach includes conducting a joint brainstorming session with the other side on possible alternative positions to improve the final solution and each side’s ownership of it.

Below is one example. Practice will be essential to make this work for you.

Step 1. State the positions of each side.

For example, before the negotiation, Person A wants a June 1 deadline for project scope A+B+C, and Person B wants October 1. These are obviously not compatible.

Step 2. Understand the interests of both sides (the primary benefit of the position taken).

Both sides state the benefit they are seeking from their position. Person A chose his or her position (e.g., June) to earn revenue, and Person B chose their position (e.g., October) to avoid killing the project team from overwork and to be able to deliver a higher quality product.

Step 3: Brainstorm options to achieve interests.

Brainstorm five to 15 options (new candidate positions) to achieve the interests of earning revenue, not killing the team, and delivering a quality solution. For example:

Deliver scope A in June for some revenue

Deliver A in June, B in July, C in October

Deliver A+B in June with one extra resource

Deliver A+B+C in June with two extra resources

Add C to the existing system for June and deliver A+B in July

Delay project D+E that is pulling existing resources from A+B+C and deliver June 15

Use the previous option with external resources to test A early while B+C are being built

Step 4: Select a few achievable options to communicate.

The point is, the original positions were just one way to communicate the interest of each side. The negotiation process puts those aside and develops new options with the involvement of both sides.

Summary

There are more options than “Yes.” Researching options and practice are essential.

—————

Please feel free to contact us if you need help with planning, estimation, and resolving commitment issues.

In a previous blog, I summarized the CMMI (Capability Maturity Model Integration) and explained why people use it.

Over the past few years, I have appraised many hardware organizations that have also used the practices to improve their mechanical, electrical, and system engineering activities.

Below is a diagram showing the Process Areas that have different implementations for hardware engineering compared to software engineering (see red arrows in Figure 1). The other Process Areas (e.g., Project Planning and Risk Management) can be applied to all of the work of the project.

Since the model does not dictate any particular lifecycle, each organization can use the practices to improve the lifecycle that suits them the best. This might be an iterative lifecycle for small software teams and a sequential lifecycle for large hardware organizations.

Configuration Management: Establish and maintain the integrity of work products using configuration identification (labeling), configuration control (e.g., permission to modify), configuration status accounting (final status of work products), and configuration audits (checks to verify changes and integrity).

Measurement and Analysis: Develop and sustain a measurement capability (defined goals and measures) used to support management information needs.

Project Monitoring and Control: Understand each project’s progress so that appropriate corrective actions can be taken when performance deviates significantly from the plan.

Process and Product Quality Assurance: Provide staff and management with objective insight into process execution and associated work products to find mistakes early.

Requirements Management: Define requirement baselines for a project. Manage changes so that technical and resource impacts are assessed. Trace requirements to related downstream work products so that test coverage of requirements can be performed and the impact of requirements changes is assessed with more accuracy.

Supplier Agreement Management: Manage the acquisition of products and services from suppliers. This Process Area can be declared Not Applicable (after discussion with the appraiser) if there are no custom, risky, or integrated suppliers.

Product Integration: Plan and execute integration testing of components as they are completed, or when all components are complete. Check that interfaces are correct before spending time in system testing. Communicate interface changes to impacted areas.

Organizational Process Focus: Coordinate all improvements. Take what is learned at the team level and organize and deploy this information across the organization. The result is that all teams improve faster from the positive and negative lessons of others.

Organizational Process Definition: Organize best practices and historical data into a usable library.

Organizational Training: Assess, prioritize and deploy training across the organization, including domain-specific, technology and process skills needed to reduce errors and improve team efficiency.

Integrated Project Management: Perform project planning using company-defined best practices and tailoring guidelines. Use organizational historical data for estimation. Identify dependencies and stakeholders for coordination, and incorporate this information into a master schedule or overall project plan. As project work progresses, coordinate with all key stakeholders. Use thresholds (such as schedule and effort deviation metrics) to trigger corrective action.

Risk Management: Assess and prioritize project risks and develop mitigation actions for the highest priority ones. Start by considering a predefined list of common risks and use a method for setting priorities.

Decision Analysis and Resolution: For key decisions, systematically select from alternative options using criteria, prioritization and an evaluation method.

*CMMI and the CMMI logo are registered marks of CMMI Institute LLC

Why Do People Use CMMI and What Do They Get From It?
http://processgroup.com/why-do-people-use-cmmi-and-what-do-they-get-from-it/
Thu, 05 Oct 2017
Introduction

CMMI* (Capability Maturity Model Integration) has been around for 30 years and is a proven collection of engineering, management and improvement practices.

There are five primary models: Development (CMMI-DEV), Services (CMMI-SVC), Data management (DMM), People (PCMM) and Acquisition (CMMI-ACQ). Here, I will mention the Development and Services models (see Figure 1).

Figure 1 — DEV and SVC models (click on image to expand)

The DEV model is used for the development of systems, products, IT solutions, and software. It has worldwide adoption covering numerous types of organizations, including:

CMMI does not state which lifecycle should be used. Organizations can use the practices in many different types of workflows, including Agile/Scrum, Kanban, iterative, sequential, and hybrid versions (see Figure 2).

Figure 2 — Practices packaged in different lifecycles

The SVC model is also used globally and has a wider variety of business types. Our own experience includes:

Scientists performing experiments under contract

IT resource pool management

IT data center management

Accounting services

What do people get out of CMMI?

When CMMI is implemented at Level 2, projects work like clockwork. When organizations move to Level 3, the organization works like clockwork. Work is planned, schedules are achievable, communication occurs, risks are mitigated, defects are found early, and solutions work. Any process that is broken is fixed, and best practices are shared.

The primary reasons for using CMMI are to:

Have a one-stop place for a complete set of practices that can be repackaged into the work flow they want (e.g., Agile, sequential, or a hybrid).

Provide a roadmap so that practices can be adopted incrementally to mature the organization over time.

Find errors and assess risks early so that less time is spent on surprises and rework later. An organization at Maturity Level 3 typically has a rework rate of between 5 and 10 percent of the project effort (compared to 40-90 percent at Level 1).

Use the practices to enable teams and work to stay organized so that more work can be achieved with less stress. At Level 3, projects typically meet deadlines within 5-15% of the original budget without chronic overtime.

Obtain an appraisal that leads to a recognized public rating (which is optional) to demonstrate capability. (See example published ratings over the last three years.)

Doing CMMI correctly

CMMI, like all frameworks, models and methods, is whatever you make of it. It can be messed up royally, and it can be done brilliantly. The best organizations:

Keep their processes to one or two pages so they can be read and implemented in real time, with many practices grouped into one process (e.g., Agile planning and tracking, or requirements elicitation and management).

Make their documentation concise and useful. They embed it in their workflow tools so that it can be easily found, shared, and edited (e.g., plans, requirements, design, and test cases).

It doesn’t really matter what source of practices you use, whether CMMI, Wikipedia, or a book. What matters is that new practices are used intelligently to run the business: to address a problem, mitigate a risk, or maintain a gain.

There is a new version of CMMI coming, called CMMI v2.0 (also called Nextgen). It is a lighter read, and some practices have moved around to change their emphasis and ease of implementation. It comes with a new appraisal method. Both will be available March 2018, and organizations will be able to appraise with them in January 2019. However, don’t wait, just start with the current version and improve your performance.

Kanban — What It Is and Using It with Agile or CMMI
http://processgroup.com/kanban-what-it-is-and-using-it-with-agile-or-cmmi/
Tue, 22 Aug 2017

Introduction

Kanban (“signboard” or “billboard” in Japanese) is an inventory-control system for managing the supply chain. It was made popular by Toyota in 1953*. In Kanban, a signal is sent to produce and deliver a new shipment of material as it is consumed. These signals are tracked through the replenishment cycle and bring visibility to the workflow.

One of the main benefits of Kanban comes from establishing an upper limit on work in progress (WIP) to avoid overloading the development or manufacturing system. Kanban aligns inventory levels and work in progress with actual consumption.

Kanban consists of the following steps:

Visualize the workflow

Limit WIP

Manage flow

Since Kanban is an approach to monitor the state of work activities, it can be applied to any type of work. Kanban measures the flow of work so that bottlenecks can be identified and addressed. One example is software development. Work items that are tracked in software development can include requirements (user stories), tasks, defects, enhancements, or action items.

Figure 1 shows an example work flow for software development. Each column, or work flow state, is usually well-defined and the columns represent a summary view.

Figure 1

The diagram is the same as a typical Scrum Board or Task Board used in Agile. The difference here is that a Work in Progress (WIP) limit is set for each of the columns.

WIP (Work in Progress)

To maximize the items that reach the “done” state, a limit can be put on the previous columns to highlight where work is building up. A WIP limit of three would mean that three or fewer items should be active in that state at any time. If the WIP limit is exceeded, the team needs to address issues that prevent the flow of work, rather than ignore chronic issues and move to other work just to stay busy.
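A WIP-limit check like the one described can be sketched in a few lines. The column names, work items, and limits below are hypothetical examples:

```python
# A Kanban board as a mapping of columns to active work items.
board = {
    "Ready for Work": ["story-7", "story-8"],
    "Working": ["story-1", "story-2", "story-3", "story-4"],
    "Verification": ["story-5"],
    "Done": ["story-6"],
}
wip_limits = {"Working": 3, "Verification": 3}

def exceeded_limits(board, wip_limits):
    """Return the columns whose item count exceeds their WIP limit."""
    return {
        column: len(items)
        for column, items in board.items()
        if column in wip_limits and len(items) > wip_limits[column]
    }

print(exceeded_limits(board, wip_limits))  # → {'Working': 4}
```

When the check fires (here, four items active against a limit of three), the team addresses the blockage rather than pulling in new work.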

Common causes of exceeding WIP limits include:

Test tools and resources not available or working.

Requirements not defined or clear.

Test cases not defined or updated.

Task and story dependencies are not identified or satisfied.

Distractions prevent work being done.

Poorly performing vendors.

When a WIP limit is approaching, or has been reached, the team stops and addresses the issue. Corrective actions could include helping a team member finish his or her work, improving a process that is causing the delay, or changing the WIP limit.

WIP limits are usually based on capacity. For example, a WIP limit of one item per person means that a team of five people can handle five items in that column. One item has to be finished before one more can be added. Setting a WIP limit of 10 would imply that multitasking is being done. If each team member switches between two primary tasks, then exceeding a WIP of 10 will likely indicate there is an issue to address. The goal is to move work to the “done” state, not just stay busy. Try several different WIP limits and determine a good one for your team.
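The capacity arithmetic above can be written down directly. A trivial sketch (the helper name is ours, not standard Kanban terminology):

```python
def wip_limit(team_size, items_per_person=1):
    """Capacity-based WIP limit: active items allowed in a column."""
    return team_size * items_per_person

print(wip_limit(5))     # → 5   (single-tasking team of five)
print(wip_limit(5, 2))  # → 10  (each person switching between two tasks)
```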

An example

The following diagrams illustrate a client using Kanban for software development.

Figure 2

The “Working” chart in Figure 2 shows development tasks that are in progress. The seven-person team set a WIP limit of 14 (based on two tasks per person). This means that up to 14 tasks can be active at any one time for the team. The chart shows that the team isn’t suffering from multitasking in the work column, so the WIP limit could be reduced, perhaps even by half.

This group focuses on achieving smooth velocity to maintain a predictable cycle time rather than a smooth WIP. WIP charts provide an instantaneous picture and serve as a leading indicator for future velocity problems.

The same team also set a WIP limit of three for “Verification.” The chart shows that they exceeded the WIP limit at the beginning of the time period and the problem continued to worsen. While the team should consider raising the WIP limit to something more reasonable for its workflow, the real problem highlighted by this graph is a bottleneck in the testing phase.

Test teams that commonly exceed their WIP limit can have underlying causes, such as poorly defined requirements, excessively buggy code, unstable software and hardware platforms, or an understaffed testing team. Exceeding WIP limits can be an indicator that a bottleneck exists or that large cycle time variance is pending. The key Kanban indicators are cycle time (the time spent working on an item) and the Cumulative Flow Diagram described below.

Cumulative Flow Diagram

The Cumulative Flow Diagram in Figure 3 shows the overall work flow. The horizontal distance between the two lines shows the average time for work to be finished (cycle time); the vertical distance shows the average WIP.

Figure 3

Each line represents the counts of the Backlog and Done columns from the Kanban board. The other columns can be plotted to provide more visibility about where the WIP is building up. The lines should be mostly parallel for a team to predict when work can be released.

When the lines start to separate on either axis, corrective action should be investigated. Figure 4 shows one example of the organization described above. The graph shows a fairly consistent flow of work from “Ready for Work” to “Done.”

Figure 4
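The CFD readings described above can be computed directly from daily cumulative counts. This sketch uses made-up numbers; the vertical gap between the two lines on each day is that day's WIP:

```python
# Cumulative items entering work vs. reaching "Done" (hypothetical data).
started = [2, 5, 8, 11, 14, 17]
done    = [0, 1, 4, 7, 10, 13]

# Vertical distance between the lines = WIP on that day.
wip_per_day = [s - d for s, d in zip(started, done)]
avg_wip = sum(wip_per_day) / len(wip_per_day)

print(wip_per_day)        # [2, 4, 4, 4, 4, 4]
print(round(avg_wip, 2))  # 3.67
```

A steady gap (here, 4 after the first day) is the numeric form of the "mostly parallel lines" that make release dates predictable.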

How does Kanban relate to Scrum?

Scrum limits how much work you should commit to in a Sprint and expects sprints to be between one and four weeks. Kanban limits how much work you should have in any one process step (each column of the Kanban board).
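The contrast above, a per-column limit rather than a per-sprint commitment, can be sketched as a toy board. The class and column names are illustrative, not from any particular tool:

```python
# Toy Kanban board: a move is refused if it would break the destination
# column's WIP limit (None means the column is unlimited).
class KanbanBoard:
    def __init__(self, limits):
        self.limits = limits
        self.columns = {name: [] for name in limits}

    def add(self, column, item):
        self.columns[column].append(item)

    def move(self, item, src, dst):
        limit = self.limits[dst]
        if limit is not None and len(self.columns[dst]) >= limit:
            return False          # over WIP: finish something first
        self.columns[src].remove(item)
        self.columns[dst].append(item)
        return True

board = KanbanBoard({"To Do": None, "Doing": 2, "Done": None})
for story in ("a", "b", "c"):
    board.add("To Do", story)
board.move("a", "To Do", "Doing")          # True
board.move("b", "To Do", "Doing")          # True
print(board.move("c", "To Do", "Doing"))   # False: "Doing" is at its limit
```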

Kanban does not prescribe fixed time boxes, project management practices or engineering practices. Therefore, the typical Scrum workflow (“Backlog Review,” “Sprint Planning,” “Development” and “Done”) can be used to define the initial Kanban states. Here are some suggestions for when to use Scrum and Kanban.

Use Scrum if:

Your team prefers to group work into two-, three- or four-week chunks.

Your team prefers daily and weekly team events to monitor progress, perform demos for feedback, and collect and act on lessons learned.

Your team benefits from a periodic forcing function to look at what is actually built every sprint and take action.

Your team members or managers tend to wander, change their minds frequently when goals stretch longer than a month, or ignore real-time data regarding progress. Use the periodic Scrum events to review this information.

Use Kanban if:

Your team has a continuous flow of work where any one item can be released when completed, and does not have to wait for other items. Examples could include bug fixes or help desk requests.

Your team has an irregular release cycle.

Your development work does not easily fit into standard duration sprints, or you prefer to set longer-term release goals and track progress to that goal.

Your workflow processes are well defined and the team and management have a history of paying attention to project progress data and don’t need a forcing function every two weeks to stay focused.

Use Kanban and Scrum if:

Your team wants the standard milestones of Scrum to chunk and manage work.

Your team wants the additional visibility from Kanban on how well work is flowing through the process.

Kanban and CMMI (Capability Maturity Model Integration)

CMMI is a collection of project management, engineering and improvement practices organized into a roadmap to improve capability and performance. The practices can be put into a work flow and summarized on the Kanban board based on an Agile, Waterfall or hybrid project lifecycle. Kanban expects a work flow to be defined and CMMI provides one example of a complete set of practices that can be adopted incrementally into the workflow.

Two of the practices in CMMI refer to tracking work over time and establishing thresholds to trigger investigation and corrective action (PMC sp 1.1 and IPM sp 1.5, defined at the end of this article). The Kanban board is an example of how to track actual work complete. The WIP metric can be used as a threshold to trigger investigation regarding current performance. Therefore, Kanban can be used to implement these two practices.

The Risks of using Kanban

Here are some risks to be aware of if you adopt Kanban. (Don’t panic, they are all fixable.)

If the work being tracked on a Kanban board is not clearly defined and monitored with “done” being crystal clear, then the charts won’t mean anything.

If team members and senior managers typically shy away from discussing and correcting chronic problems because status quo is comfortable, then Kanban will provide little value and the team will be less willing to try the next approach.

If team members are chided for exceeding WIP limits or not having predictable Cumulative Flow diagrams, then they will stop reporting accurate data.

If your culture thrives on extreme multitasking, then Kanban might fail because Kanban focuses on getting work done, not on getting more work started.

If your organization gave up on Scrum or CMMI because it didn’t like to plan based on capacity, commit and be accountable, then Kanban will just provide one more view of the world that you don’t like. It would be like throwing away a square mirror and hoping for better results from a round mirror.

Summary

Using Scrum to chunk and manage work, along with CMMI Maturity Level 3 engineering, project management and improvement practices, leads to a world-class performing organization. Adding Kanban provides visibility into the work flow. Organizations can go one step further by adding Lean principles to identify and reduce waste.

[Thanks to Jim Congdon of Logos Technologies, LLC for real data and input for the article. Jim is a PMI-certified manager of an Agile software development group that has been appraised at CMMI Maturity Level 3.]

Project Monitoring and Control (PMC) Specific Practice (sp) 1.1: Monitor actual values of project planning parameters against the project plan. (This practice is partially implemented by tracking work in the Kanban board, and can be fully implemented by additionally tracking the actual work effort required.)

Integrated Project Management (IPM) sp 1.5: Manage the project using the project plan, other plans that affect the project, and the project’s defined process.

What is the Difference When People Change?

Introduction

Since 1989 Mary and I have been performing the role of change agent. This involves teaching or coaching new skills, such as estimation, risk management, defect identification, agile, CMMI practices, or fixing organizational problems so that work can be done quicker with fewer problems.

There is one common difference between the teams and organizations that adopt a change and the ones that don’t. The ones that change realize that the change is for them, not for the person requesting the change. If the change is seen as only benefiting someone else’s life, then it is either ignored, or, at best, adopted superficially.

For example, teams usually reject peer reviews (a.k.a. inspections) because they take one or two hours of time. What makes the situation worse is that someone else is recommending inspections to the team, implying that the team needs to clean up its work. What could be worse: extra work performed for someone else to keep them happy!

A desired change for the team would be one that does at least one of three things:

Addresses current risks or challenges

Helps the team achieve its goals

Helps the team maintain a previous gain

Mapping the change to a need takes effort, and this is usually the step that is skipped. This step is essential, whether one change is being deployed or numerous changes are being adopted from frameworks such as Agile, CMMI or PMBOK.

In the case of peer reviews, the typical challenges addressed are:

The team works late hours to fix bugs

The team’s reputation for quality is not good

The pace of work is slow because the amount of technical debt is high

Requirements are ignored because they contain more errors than information

The code base cannot be touched because it is toxic or fragile

Changes stick when they address the needs of the team and the team expends effort to make the practice work for them. This is why some organizations using Agile, CMMI or PMBOK (for example) love them and perform extremely well and why other organizations put up with the very same practices grudgingly.

How can you use this concept?

Before recommending a new practice or a collection of practices, stop and ask what the goals and problems are

Identify or explain the tie between the problems and the recommendation

Demonstrate, pilot, and try the change in earnest

Retire practices that don’t maintain a gain, fix a problem, mitigate a risk, or help a goal

The software, systems and IT development communities are constantly looking for approaches to organize and manage their work. Over the years there have been many solutions to choose from. These include the Project Management Body of Knowledge (PMBOK) from the Project Management Institute (PMI), Capability Maturity Model Integration (CMMI), Integrated Product Development, Concurrent Engineering, and Integrated Product Teams. All of these work when used in earnest.

Scaled Agile Framework (SAFe) is a recent software and systems development framework that implements Agile at the enterprise level. It also incorporates practices from Lean Product Development and Extreme Programming (XP). Similar to other frameworks and methodologies, it covers many of the practices you need to define and coordinate work activities among teams dependent on each other.

This article will briefly compare and contrast SAFe, Scrum and CMMI.

What is SAFe?

SAFe is a defined set of practices based on Lean and Agile principles to synchronize and align Agile teams in large-scale software and systems development.

SAFe 4.0 is designed for large systems and organizations. Essential SAFe is a subset of the practices for Team and Program Management (several teams working together). See references below.

Comparison of SAFe, Scrum and CMMI

SAFe and Scrum are prescriptive ways to define and manage work. After every iteration, feedback is collected on the work via a demo, and improvements are collected on the process via a retrospective.

CMMI is a toolbox of practices organized into process maturity levels and categories. A maturity level guides the reader to implement basic practices first, before tackling more advanced practices. The practices can be reshuffled into any shape (life cycle) desired, such as an iterative life cycle (Scrum) or a phased life cycle (incremental). Practices can be scaled down to a one-person project or up to a 1,000-person organization.

CMMI practices can be implemented in the way you want, or by using the examples provided. They can be repeated every sprint, phase or time-box, touched on lightly, or implemented rigorously based on the needs of the project or organization. You can call work a “to-do list,” “WBS,” “sprint backlog,” or any name that makes you feel good!

If a group of practices is used (e.g., a maturity level), then the organization can optionally be appraised and recognized for that fact, based on the defined appraisal rules.

SAFe, Scrum and CMMI all have the same underlying premise: make work visible. The table below is the top part of a summary comparison.

How to proceed

1. Know that no framework, scheme or methodology works unless the organization invests effort, thought and diligence. Similarly, the toolbox in your garage does not make your house beautiful by itself!

2. Focus on what you want to achieve by enumerating your delivery goals and current challenges.

3. Go through the frameworks and pick one or two items that help you address the problems and achieve the goals.

4. Go to step 2.

Conclusion

If your organization is using some variant of Agile and wants to scale those practices for larger systems that cross the enterprise, then look at SAFe. If you are using Agile or SAFe and want to improve your engineering, risk, decision, process assurance, and supplier management practices, look at CMMI.

Want more information?

Neil has teamed up with SAFe instructor Charles Maddox of The i4 Group, and will be providing detailed CMMI/SAFe mapping sessions in upcoming SAFe classes. Contact us for more information or immediate help.

New projects often start with some of the following challenges:

A deadline and expectations with no reliable estimates or task breakdown

Few or no requirements

No repeatable life cycle to manage work now and in the future

Bugs and rework from previous projects consuming resources

If some of these resonate, here are five steps that can be implemented immediately to get your project up and running. These steps have proven effective for our clients in the same situation over the past 28 years.

Define or learn a simple, repeatable life cycle to keep organized

Elicit and write some requirements to set goals

Derive achievable estimates to stay sane

Assess and mitigate risks to save time

Aggressively and efficiently find defects to save more time

If you make a mistake in one of these steps, it’s OK. After one iteration (for example, two weeks) you can systematically improve performance.

1. Define or learn a simple, repeatable life cycle to keep organized

Kids, doctors, athletes and musicians are taught to develop routines. Routines enable key skills to be implemented, practiced and become, well, routine. Success is then the result of how good the skills are within each routine and how well the routines are implemented.

A great project has routines defined to give the team the results it wants. One simple routine is Scrum (or any of the Agile methods). It doesn’t come with great skills, but those can be readily added.

2. Elicit and write requirements to set goals

Eliciting and writing requirements enables a team to set goals and become clear on what they need to investigate. A project with no requirements is either a valid research project, a hobby, or a hope using someone else’s money until the money runs out.

Scrum provides a very basic focus on one type of requirement, the user story. When done correctly, this is a good starting point. However, other types of requirements exist, such as quality attributes, exceptions, constraints, and functional, interface and system requirements. A few days spent on requirements up front saves a lot more than a few days later.

3. Derive achievable estimates to stay sane

Teams that struggle usually have deadlines, not estimates.

Estimates are used to set deadlines, or to validate and assess the risk of existing deadlines. They are also used to set priorities and have meaningful discussions with stakeholders. Teams that have no estimates, or at best “some numbers,” might have little or no basis to know whether they are way ahead or way behind — every day is interesting.
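The article does not prescribe an estimation method, but as one common illustration, a three-point (PERT-style) estimate turns optimistic, likely and pessimistic guesses into an expected value with a spread:

```python
# Three-point estimate: weights the likely case heavily, and the spread
# hints at how much risk a deadline carries. (Illustrative technique only.)
def three_point(optimistic, likely, pessimistic):
    expected = (optimistic + 4 * likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6
    return expected, spread

expected, spread = three_point(3, 5, 13)   # hypothetical task, in days
print(expected)   # 6.0
```

Even a rough number like this gives a basis for discussing whether a deadline is achievable, which a bare deadline never does.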

4. Assess and mitigate risks to save time

There are many problems that can be foreseen and mitigated and many that cannot. Great teams assess and mitigate the ones that they can so there is more time to address the ones they can’t. Really great teams learn from their risks over time and refine their routines to avoid them altogether.

5. Aggressively and efficiently find defects to save more time

Defects consume team resources, and this directly impacts the current deadline. Like risks, not all defects can be predicted or found early, but many can. Great teams perform thorough peer reviews (a.k.a. inspections) on selected project artifacts and code to clean them up before they are used.

For example, the backlog of user stories is not just groomed, it is peer-reviewed thoroughly and quickly for defects. This prevents the team from wasting two-week blocks of time guessing on what to code and provides QA with clarity on what to test.

Inspections are performed on selected code and test cases. Defects are found by the team, usually at a rate of one defect per minute. When this is not done, the team’s time is consumed fixing old stuff and responding to customer complaints. A team that has 10 percent rework compared to one that has 60 percent has a lot more time to do real work.
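The 10-percent-versus-60-percent comparison above is easy to make concrete (the 40-hour week is illustrative):

```python
# Hours of a work week left for new work at a given rework percentage.
def new_work_hours(week_hours, rework_percent):
    return week_hours * (100 - rework_percent) / 100

print(new_work_hours(40, 10))   # 36.0 hours for real work
print(new_work_hours(40, 60))   # 16.0 hours for real work
```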

After the work has been cleaned up, test the system in the real environment to avoid disaster. There is nothing worse than the system working great on the lab machine, but not in the customer’s environment. Inspections are used to clean up the defects that can be found prior to system testing. System testing finds most of the remaining defects.

Conclusion

New projects are by nature messy, usually overwhelming, and full of unknowns. Applying some straightforward practices enables a team to spend their focus on the hard stuff, not managing chaos.

A common definition for work being “done” is, “I am sure this will be OK,” or “The deadline is up.”

If a team applies one of these definitions to requirements, test plans or code, then defects slip downstream, causing rework, extra test cycles, costs and upset customers.

Here is an alternative definition for “done”:

The work has been inspected for defects (errors and omissions)

Defects have been repaired and verified

Code has passed its test cases

The final document or code is under configuration management (versioned and backed up)
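The checklist above can be treated as a gate rather than a judgment call. This sketch uses criterion names of my own choosing:

```python
# Work is "done" only when every criterion from the checklist is satisfied.
DONE_CRITERIA = (
    "inspected_for_defects",
    "defects_repaired_and_verified",
    "tests_passed",
    "under_configuration_management",
)

def is_done(status):
    # Any missing or False criterion means the work is not done.
    return all(status.get(c, False) for c in DONE_CRITERIA)

print(is_done({"inspected_for_defects": True, "tests_passed": True}))  # False
```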

A solid definition of “done” helps a team avoid chaos and minimize rework. It also enables schedules to be meaningful and reliable. “Done” is a fundamental characteristic of a professional environment that is fun to work in.

When I work with a client on quality issues (or “done”), we often perform a sample inspection (team peer review) on work that has been declared “done.” Ninety percent of the time we can still find:

Between 1 and 4 critical or major defects per page for documents (e.g., requirements, backlogs, and test plans)

Between 37 and 44 critical or major defects per Thousand Lines of Non-commented Source Code (KLOC), depending on the age of the code.

Critical code defects include memory leaks, incorrect variable names, logic errors, and wrong path names. These defects are difficult to find in test and drain the team of resources after release if not caught.
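The code figures above are simple densities; with hypothetical sample numbers:

```python
# Defect density: major/critical defects per thousand lines of
# non-commented source code (KLOC). Sample numbers are hypothetical.
def defects_per_kloc(defects_found, loc):
    return defects_found / (loc / 1000)

print(defects_per_kloc(120, 3000))   # 40.0, within the 37-44 range cited
```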

Who is involved, and how fast are these inspections (team peer reviews)?

Inspections are conducted by the team and focus on the work product, not on evaluating the author. When inspections are conducted efficiently, a team of three to five people logs between one and two defects per meeting-minute. So in 30 minutes, 30 to 60 defects are logged. Compare these numbers to the reviews you conduct now.

Conclusion

Defining “done” enables schedules to be meaningful and work to be reliable. It only takes a fraction of the project’s budget to find the majority of defects upstream, and the time invested saves numerous expensive test and rework cycles later.

What is your team’s definition of “done,” how should it change, and what would be the impact?

If you have comments or questions about this article, or would like to get some helpful complimentary feedback regarding your “done” and quality challenges, contact us.