I have a client who recently asked me about this problem. I was wondering how you'd handle it?

They are new to UX and to Agile methods, so they are struggling in interesting ways.

As part of a recent project, they added some usability testing to their sprints. They'd never done a usability test before and, as a result, discovered many usability issues with the design.

However, many of the problems they found were outside the original scope of the project. The UX team members desperately wanted to add many of the problems to the project's backlog, but immediately met resistance from the product owners, who didn't want to delay the delivery. The result was a sense among the UX folks that the product owners didn't care about a good user experience.

My question is how might you have avoided this problem? I had initially suggested that the team needed more upfront definition of what is and isn't within the project's scope and a way to record outside-of-scope usability issues for future projects.

Isn't a product backlog just that? A backlog of things that, when completed, will make the product better? What's on the product backlog and what is required for deliverable X don't equate in my mind. What's wrong with adding them as PBIs without necessarily adding them to the sprints leading up to the deliverable? That way, if you don't have time to resolve the issues now, at least they will still be on the radar for future releases.
– Matt Lavoie, Aug 12 '11 at 15:35

I want to echo Matt's comment - and to add that the issue seems to be more that the UX team don't understand what a product backlog is, as implied by Jared's second-last paragraph.
– gef05, Aug 12 '11 at 17:35

7 Answers

I have come across this exact problem on a project. Everybody on the team is already stretched to the limit to get the product shipped, and it's important to get it out before a particular international event (for example). Then a UX review that happens too late in the development life cycle throws up serious issues which no-one had foreseen. What do you do, and how could it have been avoided, so you can learn for the next project?

In order to prevent scope creep there has to be a process of continual scope re-evaluation and prioritization. Assessments of changes, time, and risk need to be made, along with each feature's impact and value for most users' experience. And everyone needs to be a part of this, precisely so that no one feels they are being ignored.

Furthermore, it's not just about managing what goes into the release, but about managing the next step after that. It feels better to say that we'll defer a feature to the next version than that we'll drop it. Once everyone gets over the hump of 'OK, maybe you're not getting everything you initially had on your feature list, but you are in fact getting a better product at the end of the day', the mindset of the stakeholders, and consequently of everyone on the team, can improve greatly.

So for that reason, it helps to get stakeholder buy-in from day 1 that this is what agile development is about. Adapt, iterate, improve. The goal is to get the best product you can - not to tick off a bunch of checkboxes on a list of features - you don't get paid per feature!

When problems occur so late in the day, it's really not time to panic. Quite the opposite, in fact. It's time to 'tweak, not redesign': go for the biggest impact from the least risky and quickest changes.

The UX report just happens to be a good kicker for another round of scope re-evaluation. The problem in your scenario is that it comes as a shock, like rabbits caught in the headlights. It shouldn't be a shock. Everything that goes into making a product constitutes part of the user experience. Why then leave testing of that to the end, when all sorts of other testing was probably going on elsewhere?

The chances are that there are a bunch of changes which are actually quite easy, and these are really something of a no-brainer. The chances are also that there are some partly finished or risky areas of functionality that actually add less value to the user's experience than fixing that 'first impression' or the sign-up form or whatever has been thrown up. There is no doubt that compromises will have to be made, but that is just par for the course: show me one project that hasn't had to compromise.

One of the biggest issues I see today is that companies still need educating about the benefits of user experience and how it affects them and their products, and that UX consideration should start at day 0. It's not changing fast enough. Free UX goggles for everyone, that's what I think. Sorry for the long winded answer :-)

It is interesting that this is a common problem now. I think it's attributable to UX considerations being a more mainstream school of thought than before, and those that are midstream just get the worst of it.
– Nic, Aug 12 '11 at 16:09

@Roger Attrill A corollary to your answer: why do testing if you aren't going to use what you learned?
– peteorpeter, Aug 12 '11 at 18:17

Actually, it sounds like the UX team wasn't made part of the development team from "day 0", as Roger states. You should be one team, creating functionality together (all required disciplines). This would reduce, though probably not eliminate, the usability problems encountered during testing. UX should also have been brought into strategic design.

If it were me, I would have inserted Usability Testing tasks in the sprint, and also a Usability Fix task for the things that fall out of the testing. This would reduce your overall velocity, but it's reality.

Another way is to just take the usability-fallout stories and put them in the Backlog, which is the Product Owner's responsibility. Don't stop someone from putting these things in the Backlog. Many Agile teams allow anybody to add to the Backlog; it's the PO's job to prioritize the items so the most important ones come first. But it's the PO's decision, as they are responsible for the product's ROI.

I too don't understand why adding the issues that came up to the backlog should delay the delivery. There's certainly nothing that says that absolutely everything in the backlog needs to be added to some release.

I would, however, take an honest assessment of the user value that would be added by addressing the discovered issues. Spending more time on a product [should] always add value, but that added value also needs to be weighed against the cost of dropping other features or pushing delivery dates. The product owners should be given enough information to weigh the relative value of these new stories versus the existing stories and adjust the project plan accordingly. Of course, weighing those relative values is much easier if all the issues are in the backlog.
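To make that weighing concrete, here's a minimal sketch (in Python, with invented item names and made-up value/cost estimates, not anything from the original project) of a backlog where usability findings sit alongside planned features and are ordered by rough value per unit of effort:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    user_value: int  # rough relative value to users, e.g. 1-10
    cost: int        # rough effort estimate, e.g. story points

def prioritize(backlog):
    """Order items by estimated value per unit of effort, highest first."""
    return sorted(backlog, key=lambda i: i.user_value / i.cost, reverse=True)

# Usability findings go in the same list as planned features,
# so the PO weighs them all with the same yardstick.
backlog = [
    BacklogItem("New export feature", user_value=5, cost=8),
    BacklogItem("Fix confusing sign-up form", user_value=8, cost=2),
    BacklogItem("Rework settings layout", user_value=4, cost=5),
]

for item in prioritize(backlog):
    print(item.title)
```

The numbers are deliberately crude; the point is only that once everything is in one list, a cheap high-value usability fix can visibly outrank a planned feature.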

[Minor caveat that I'm a dev and not a UX person so I might have a bit of a naive understanding of what is done/needs to be done from a UX perspective].

I have worked on Agile teams where UX folks were an integral part of the team and a few things that seemed to work well:

Early UX involvement

Often our interaction designer would "peek" up the backlog one or more sprints (based on current velocity) and start thinking about what the UX should be like

Early UX/Dev/Product Owner conversations about upcoming "stories"

This was done (we used Scrum) during backlog grooming sessions. The entire team would look into the backlog at least a sprint ahead; PO and/or UX would help describe the story; UX would maybe have some wireframes of the UI and walk the team through the interactions

Early Usability Testing

If at all possible, start testing as early as possible, even ahead of the current sprint. Yes, this means you won't be testing on a functioning application. Try to create low-fi clickable wireframes or mockups. I've worked with interaction designers who have used HTML to wire up a stripped-down UI with enough of the key interactions that they can at least get an initial sense of usability. Maybe try a tool like Balsamiq. I've even seen people do testing on paper ("where would you expect to x?").

Estimate alternatives

As part of this "grooming" process, the dev team is responsible for estimating or sizing what is being described. Often UX would propose multiple alternatives and dev would estimate each option. There is nothing that will make a PO less likely to negotiate than the feeling that they have no options.

Estimate cost of deferral

This can be a useful approach when trying to convince a PO to move a deferred UX-related story up in priority. Engage the dev team to estimate the cost of deferral: things are often more expensive to re-do later down the road than they are to face up front. Try to estimate the cost of doing something now vs. doing it later. This is the card you play if the PO says, "We'll do it later; we need to focus on features now." Tell the PO how much more it will cost later, and then the ball is in his or her court.
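As a rough sketch of that argument (the numbers and the "premium" framing are invented for illustration; real estimates would come from the dev team):

```python
def deferral_premium(cost_now, cost_later):
    """Extra effort (in points) you pay by deferring a fix."""
    return cost_later - cost_now

# Hypothetical estimates for one deferred usability fix:
# 3 points to fix in the current sprint, 8 points after the
# surrounding code has shipped and users have formed habits.
now, later = 3, 8
print(f"Fix now: {now} pts; fix after release: {later} pts; "
      f"premium for deferring: {deferral_premium(now, later)} pts")
```

Presenting the gap as a concrete number ("deferring costs us 5 extra points") tends to land better with a PO than "it'll be harder later".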

This is generally my strategy when working with POs: do your homework, present the options, and then trust that they will make the right choice.

I should add that facilitating collaboration in the team, if you're a Scrum team, is the job of the Scrum master - they should be actively looking for ways to help the team work better as a unit (including the PO).

One of the main purposes of using agile methods is to make a situation like this a non-issue – anything that doesn't make the first release can be added to the backlog, to be (possibly) developed and pushed during a later release. As Ben Rice also noted, adding to the backlog should not jeopardize a release date.

As Roger Attrill and others also noted, UX testing should be part of the process from the beginning, and should go hand-in-hand with functional testing. With all of the many different techniques and methods that can be leveraged to do UX testing (many of which can be handled in-house), there's really no good excuse for not doing it.

If the product owners are unwilling to add the issues uncovered by UX testing (as opposed to functional testing) to the backlog, then it would seem they do not value the results of the testing. In that case it is likely more of a cultural/political issue than a procedural one, which raises the question: why did they bother to UX test to begin with?

At the company where I work, the UX team is responsible for creating the PRDs, the detailed requirements documents handed to the engineering team. This helps us avoid the problem by making sure the UX team has a chance, at the beginning, to get things designed well.

Once an issue is found, I have found it very difficult to get it on the roadmap (as you noticed in your question). I literally have been waiting 4 years to fix a particular problem that makes me (and users) crazy in the UX. The problem hasn't been slipping the release, the problem has been getting the fixes at all. Usually fixes wait until we redo the component.

This is a case of "an ounce of prevention is worth a pound of cure".

One last thought: at one point, I said usability testing is like quality assurance, so let's create bug reports. The engineers had a cow: "Our code wasn't wrong, your design was wrong!" Ugh.

My reply to Jared there included below for those who might be interested:

Not a problem unique to agile. I'm amazed at the number of folk who budget in
time for usability testing, but expect the findings can magically be integrated
with no additional work :-)

I think your initial suggestion is a good one. Before I start doing user testing
with a team, we try to figure out what is and is not possible implementation-wise.
This is good for two reasons:

1) It helps set expectations on both sides about what is going to be done with
the results. So I'm not left being annoyed that my sage advice is being ignored,
and the team isn't given a set of high-level issues that cannot be fixed within
the scope of the current release plan.

2) It allows better targeting of the UX resources. If we know that we won't
realistically have time to completely re-implement feature Foo, then we can
spend more time looking at feature Bar instead - or more time
teaching/facilitating good UX work elsewhere.

In getting teams to value UX I find it much more effective to provide advice
that can be acted on immediately so the value of that advice can be quickly
seen. Once you start showing value it's much easier to get agreement on
addressing larger issues.

Having that conversation about how the usability testing can affect the end
product can lead to some interesting places.

For example it may highlight situations where the product development team needs
to become more agile.

How fixed is the release plan? Is it fixed for a good reason? Is new feature Foo
really more important than fixing feature Bar? Is agreeing a release plan N
months in advance helping or hindering building a better product?

The conversation about how usability testing results fit in can rapidly become a
much larger discussion about the process as a whole, the definition of done, and
how features are prioritised.

This larger conversation is useful and good - but it can sometimes get in the
way of addressing the short-term issues that can feasibly be tackled in the
context of the next couple of iterations. Especially when, like your client,
you're facing a huge usability debt.

Address the small issues first. Show that they help. Use that evidence as the
fuel to drive addressing larger-scale issues.