We're in a quite complex software engineering project with many unknowns: it involves new technologies that the Team has never applied and has no comparable experience with.

Due to these risks the Product Owner (PO) requested a "confidence check" of whether the Team will reach the minimum viable product (MVP) by a certain deadline this year. Since "deadline" and doing Scrum clash quite a bit, the Team was really hesitant to give the PO an answer.

The PO needed this indication for a report to the CEO; thus, everyone had to rate it, and there was no way around it. So I took the rating as part of a Retrospective. We're in Sprint 2 of 8, and the real velocity is not yet clear to the Team.

He asked the Team to vote 1-5 on these statements:

5 - Yes, I'm confident. There is a clear and transparent path forward to achieve the MVP.

4 - Yes, I'm confident but the path forward is not fully transparent to me.

3 - Yes, I'm fairly confident that we can make it happen but I see some risks.

2 - No, I'm not confident anymore. There are too many obstacles.

1 - No, I'm already convinced that there will be a delay.

Here's the result I'd like to share. The Dev Team is quite large, since we have multiple streams; some votes are also from people on an extended Team.

I'd like to get your opinions on this.

How do you see this situation?

Is this a valid way to get feedback from the Team, as a means to report to the CEO, given that we face risks that could jeopardize the feasibility of the MVP?

Is there a better and more agile-friendly way to get feedback from the Team? How would you facilitate such a decision?

One problem is the scale: three "yes" options and two "no" options. If there are more "yes" options, answers will tend to be pushed towards yes.
– Bent, Sep 7 '18 at 14:39

When doing some further research on my own question I found a method by Atlassian called "demo trust" from their playbook - you might want to take a look; it asks for a confidence vote with 1-4 options: atlassian.com/team-playbook/plays/demo-trust
– Andre Meier, Sep 7 '18 at 15:07

5 Answers

While such a confidence check is nothing you'd read about in the Scrum Guide, I don't see it as violating Scrum in any way. It is, in fact, answering a different question: one about people's confidence, not about the forecast end date.

The native method to answer a question about the expected end date in Scrum would be based on Velocity and backlog size. I would argue that simply counting throughput (the number of features completed) is simpler and yields similar outcomes. Whichever method you use, it can inform people about their confidence. For example: the throughput/velocity measured so far suggests that we need 9 more iterations to finish the backlog, but we are in Sprint 2 of 8 and thus have only 6 iterations left; therefore my confidence in making it on time is low.
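As a minimal sketch of that throughput-based forecast (the backlog size and throughput below are assumed numbers, chosen to match the 9-vs-6 example in the text):

```python
import math

def iterations_needed(remaining_items: int, throughput_per_sprint: float) -> int:
    """Sprints needed to finish the backlog at the observed throughput."""
    return math.ceil(remaining_items / throughput_per_sprint)

remaining = 27        # features left in the MVP backlog (assumed)
throughput = 3.0      # features completed per sprint so far (assumed)
sprints_left = 8 - 2  # we are in Sprint 2 of 8

# With these numbers: need 9 sprints, have 6 left -> confidence should be low.
print(f"Need {iterations_needed(remaining, throughput)} sprints, "
      f"have {sprints_left} left")
```

The point is not precision but direction: a two-line calculation like this grounds the "how confident are you?" conversation in observed data.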

Another signal that can inform confidence is scope creep: how many features are being added to the MVP backlog over time. The fact that you have, e.g., 25 features in the backlog for the remaining time means little if you can expect each iteration to add 3-4 new features to the backlog. If that were the case, you would need to account for the real backlog size being more like 43-49 features rather than the current 25. This, too, would inform people's confidence.
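The scope-creep adjustment above can be sketched the same way (again with the illustrative numbers from the text: 25 features, 3-4 added per sprint, 6 sprints remaining):

```python
def adjusted_backlog(current_size, creep_range, sprints_remaining):
    """Project the 'real' backlog size if scope creep continues.

    creep_range is a (low, high) pair of new features per sprint.
    Returns the projected (low, high) final backlog size.
    """
    low, high = creep_range
    return (current_size + low * sprints_remaining,
            current_size + high * sprints_remaining)

# 25 features today, 3-4 new ones per sprint, 6 sprints remaining
print(adjusted_backlog(25, (3, 4), 6))  # -> (43, 49)
```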

On the validity of the method: I don't see it as much of a problem, as long as there is a common understanding between the PO and the CEO of what the measurement actually means, along with all the caveats about its quality.

A few observations one could draw from looking at the picture:

The majority of votes sit in the "sphere of uncertainty", the middle of the scale. If I have no opinion, or I have one but am uncertain about it, I would likely put my vote in the middle, regardless of what the label for "3" actually says. One improvement for this activity would be to add an explicit "I don't know" option. That would separate the uncertain voices from those who are genuinely "fairly confident" about making the deadline.

People are overly optimistic in estimation, regardless of their expertise. Even having been exposed to their own overly optimistic past estimates doesn't change this. The conclusion is that, in reality, the chances of finishing the project on time are lower than this confidence exercise suggests.

As long as the people who took part in the exercise can be confident that its outcome won't be used against them in any way, you can count on reliable data. The picture suggests it was an anonymous vote, which is good. However, I can still imagine the CEO taking action after the MVP misses its deadline; after all, the team was "fairly confident" it would make it. In any case it is important for the team to know that the sole goal of the exercise is to inform decision-makers, and that there is no downside for the team in taking part.

The above remarks also suggest how the activity could be improved: make 100% sure the team feels safe about the gathered measurement, make sure the CEO understands the caveats and the team knows that too, give space to "have no idea" about one's own confidence, make sure it's anonymous, etc.

The scale here ("on a scale of 1 to n, how strongly do you agree with the statement [...]") is called a Likert scale. Likert scales can be "forced" (an even number of options, with no neutral choice offered) or "unforced" (an odd number of options, with the middle one being neutral). Surveyors tend to prefer forced Likert scales, because research has shown that humans will almost universally skew their answers toward extremes or toward true neutrals (which artificially reduces variation in the data). Removing the neutral option eliminates a source of skew.
– Woodrow Barlow, Sep 7 '18 at 16:28


My point is that there's a difference between "I'm neutral" and "I have no idea". Implementing an explicit "I have no idea" option, completely off the scale, makes a clear distinction between the two. It also adds safety for people who don't want to be forced into picking a value on the scale.
– Pawel Brodzinski, Sep 7 '18 at 21:28

Reduce the number of "levels". You don't need 5; it makes the result less actionable. Three levels is probably better.

Don't ask for likelihood: your team's answer may depend heavily on how optimistic they are. I believe it is far better to construct quantifiable, objective questions that reveal the likelihood of the project succeeding. For example, identify the specific skills and requirements needed and have people evaluate themselves on those; that would give you a much clearer picture than one simple question alone. To elaborate: if you need everyone to be comfortable using tool X that no one knows yet, then two good questions are: how comfortable are you with learning tool X in this amount of time? And: how long did it take you to learn a similar tool before?

I don't think it's a massive problem to ask for a confidence vote. I often do this with a show of hands for yes/no, or with fingers instead of a scale like yours.

That is, as long as the team doesn't think a confidence vote is taken as a commitment, or as a signed agreement to deliver on a fixed date.

Most organisations impose fixed dates, and this is often a sign that the people who define them need coaching to understand that on Agile projects we use landing zones, velocity and scope as our levers.

A better way of doing this is to get the team to estimate two sprints' worth of ready-for-dev work, and then estimate the rest of the backlog at epic level. Plug all this data into a burn-up chart using their average velocity of the last two sprints, and then update it continually as velocity figures come in every sprint (Jira does this automatically). The burn-up chart will give you a landing zone, i.e. a date range. If it's completely off, take a hard look at your MVP and remove distractions and other work the team might still be responsible for. If it's still completely off, discuss any architectural bells and whistles that could be removed. The worst-case scenario is that you need to have a difficult discussion with the CEO and PO, coaching them and showing them these charts.
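The landing-zone idea can be sketched with a few lines: project the remaining scope against the fastest and slowest recent velocity, which yields a sprint range rather than a single date. All numbers below are hypothetical.

```python
import math

def landing_zone(total_points, done_points, recent_velocities):
    """Return (best, worst) remaining-sprint counts as a landing zone,
    based on the fastest and slowest recent sprint velocities."""
    remaining = total_points - done_points
    best = math.ceil(remaining / max(recent_velocities))
    worst = math.ceil(remaining / min(recent_velocities))
    return best, worst

# Hypothetical: 120 points of total scope, 30 done,
# last two sprints came in at 18 and 12 points.
print(landing_zone(120, 30, [18, 12]))  # -> (5, 8)
```

Presenting a range like "5 to 8 more sprints" is exactly the lever-based conversation the answer recommends, as opposed to a single committed date.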

Coming in a bit late, but I've done this a few times, though not specific to Scrum:

Getting an honest evaluation of your team's confidence in stuff like this (launch dates, complete requirements) is really valuable, but challenging:

Honesty: Self-reporting on the spot is tough -- you want people to answer somewhat thoughtfully, but also without influence from other team members. AND without feeling like they're committing to something.

Ideally, figure out a stress-free way to do this anonymously. Takes away the pressure of being "right", but also gives people time to be thoughtful in their choice.

Trajectory: If you're only asked once, it can feel like your answer will be taken as a commitment ("We're way behind, but Jane said three months ago she was really confident in the date -- what gives???").

Ideally, poll regularly, and track the trend over time. Seeing the line trend down or up is more valuable than any individual number.

I've done this manually on a few projects, and it's been super useful for surfacing things like: launch-date confidence was low, so we probed a bit more and found out the team wasn't confident we'd captured all the requirements. So even though all the boxes were being checked, no one was sure how it would all come together in the end.

So useful, but a bit of a pain logistically (extra busywork to send it out, collect results, calculate stuff, etc.).

(Pain enough that I actually just wrote an app (ProjectPoll.co) that does this for you; if you're interested, feel free to make an account and ping me, I'd be happy to set you up with a project to try it on.)

Estimating confidence on a regular basis (at Retrospectives) will be useful for gauging team health and professional growth as everyone works in this new territory. The Scrum Master should direct the process:

The Scrum Master surveys the team with an anonymous vote (1-5), inserts the data into a histogram, and discusses the results with the team immediately.

Discuss each week's "score" and then the trend.

Ideally the confidence numbers shift toward the high end (4's & 5's get bigger) as User Stories are worked through/revised and knowledge is gained.

Keep in mind:

Good results start with well defined User Stories (not hopes).

Agile development yields product and knowledge (both are valuable).

The Dev Team & Stakeholders should talk directly and regularly at Sprint Reviews.

Stakeholders' needs can change as information/situations evolve. Be transparent and communicate fearlessly. Good luck & go get 'em!!