To deliver agile outcomes, you have to do more than implement an agile process: you have to create focus around what matters to your user and rigorously test how well what you’re doing is delivering on that focus. Driving to testable ideas (hypotheses) and maximizing the results of your experimentation is at the heart of a high-functioning practice of agile. This course shows you how to facilitate alignment and create a culture of experimentation across your product pipeline.
You’ll understand how to answer these four big questions:
1. How do we drive our agility with analytics?
2. How do we create compelling propositions for our user?
3. How do we achieve excellent usability?
4. How do we release fast without creating disasters?
As a Project Management Institute (PMI®) Registered Education Provider, the University of Virginia Darden School of Business has been approved by PMI to issue 20 professional development units (PDUs) for this course, which focuses on core competencies recognized by PMI. (Provider #2122)
This course is supported by the Batten Institute at UVA’s Darden School of Business. The Batten Institute’s mission is to improve the world through entrepreneurship and innovation: www.batteninstitute.org.

RD

The course contains complete, detailed information on agile testing. Users should make the best use of this course knowledge, as all companies are now moving to agile.

CK

Sep 25, 2018

5 stars

This course brings to light all the knowledge taught in Courses 1-3. All the videos, especially the interviews, are the essence of this course.

From the lesson

Should we build it? Did it matter?

Nothing will help a team deliver better outcomes like making sure they’re building something the user values. This might sound simple or obvious, but I think after this module it’s likely you’ll find opportunities to help improve your team’s focus by testing ideas more definitively before you invest in developing software. In this module you’ll learn how to make concept testing an integral part of your culture of experimentation. We’ll continue to apply the Lean Startup methods you learned in ‘Running Product Design Sprints’ (and/or you can review the tutorials in the Resources section). We’ll look at how high-functioning teams design and run situation-appropriate experiments to test ideas, and how that works before the fact (when you’re testing an idea) and after the fact (when you’re testing the value of software you’ve released).

Taught By

Alex Cowan

Faculty & Batten Fellow

Transcript

Let's look at cohort analysis. Here, we're looking at how some kind of variation in the way users encounter the product changes our success rates, and usually at some change in the usability of our product or interface. Because we've already figured out motivation, we're now looking at how we bring users through the process better.

Here's an example; you won't be able to see the details, but you can do this kind of cohort analysis in Google Analytics. This is a screen from their cohort analysis report. What's on the y-axis, the rows, is cohorts of sessions beginning on day one, day two, and so forth, and the columns are days. So for the people who presumably encountered the site for the first time on day zero, day one, day two, or day three, how was our retention of those users? That's what's filled into the colored boxes. This is a generalized example that Google presented on their help page, and presumably the person looking at it would be asking how the changes they were making over time affected the onboarding process, and hence the users' propensity to keep using the system.

So that's one example of cohort analysis. The idea is that we're dividing users into meaningful chunks, cohorts, with some sort of variation between them. What you're seeing in the Google example is very typical: as we make changes to the site, we may be making changes to areas that only matter to new users, in which case we don't want to look at how that change affected retention for everybody, but just for the people who onboarded at the time we made the change. But you can divide people into cohorts by anything.

An even simpler example might be Enable Quiz testing video onboarding. When a user signs up for a free trial, they're going to email them an offer to set them up on the platform and help them create their first quiz, and they want to figure out whether that's something they should invest in. The assumption is: if we provide HR managers with a virtual onboarding session, we will materially increase retention and revenue. They're going to test this by automating an email that offers the onboarding session to 50% of the trial users who sign up once the experiment begins, and they'll schedule the sessions with some kind of online scheduling tool to make the whole thing a little easier.

Now, why offer it to only 50% of the trial users rather than offering it to everybody and then measuring the people who sign up and take the video onboarding against the people who don't? That might be a bad way to get results, because there's probably some kind of bias between the people who decide to sign up for the session and the people who don't. This is a little beyond our scope here, but from a sampling standpoint we want to make sure there's no bias in the sets of users we're comparing, if at all possible.

Let's say, and again these are just example thresholds, that they'd like to see at least a 20% improvement in the number of users who actually go through and create a working quiz; otherwise they'll consider it a fail and not worthwhile. They'd also like to see a 15% improvement in the number of people who create more than three quizzes, at least a 10% improvement in conversion from trials, and a 10% improvement in 60-day usage. These are all failure thresholds: if any of them comes in lower, they're going to consider the experiment a fail and stop investing in these onboarding sessions. If it's true, if the sessions improve things by at least the amounts you see here, then they'll continue the program and probably roll it out to all users. If it's false, they'll kill the program and look back at the fundamental drivers of onboarding and retention: once you get a user, what is it that causes them to keep using the product versus not? They probably also have some notes about how long this will take and how to make sure adequate time for it is built into their cycles.

So that's a view of how cohort analysis might work and how you might instrument it to get the right answers about variations you're going to test in your product or your processes.
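To make the experiment logic above concrete, here is a minimal sketch in Python (using pandas) of how the two cohorts might be scored against those example thresholds. The sample data, column names, and metric definitions are illustrative assumptions rather than anything from the course; only the threshold percentages come from the lecture.

```python
# A minimal sketch (pandas assumed) of evaluating the hypothetical Enable Quiz
# onboarding experiment described above. Sample data and column names are
# illustrative assumptions; the thresholds are the example figures from the lecture.
import pandas as pd

# One row per trial user: which cohort they were randomly assigned to and
# what they went on to do during the trial.
trials = pd.DataFrame({
    "user_id":       [1, 2, 3, 4, 5, 6, 7, 8],
    "cohort":        ["onboarding", "control"] * 4,   # 50/50 random split
    "created_quiz":  [1, 0, 1, 1, 1, 0, 1, 1],        # built at least one working quiz
    "quizzes":       [4, 0, 2, 5, 1, 0, 6, 3],        # total quizzes created
    "converted":     [1, 0, 0, 1, 1, 0, 1, 0],        # converted from trial to paid
    "active_day_60": [1, 0, 0, 1, 1, 0, 1, 1],        # still using the product at 60 days
})

# Example "fail below this" thresholds: each is the minimum relative improvement
# of the onboarding cohort over the control cohort.
thresholds = {
    "created_quiz":  0.20,   # >= 20% more users create a working quiz
    "many_quizzes":  0.15,   # >= 15% more users create more than three quizzes
    "converted":     0.10,   # >= 10% improvement in trial conversion
    "active_day_60": 0.10,   # >= 10% improvement in 60-day usage
}

trials["many_quizzes"] = (trials["quizzes"] > 3).astype(int)
control = trials[trials["cohort"] == "control"]
treated = trials[trials["cohort"] == "onboarding"]

for metric, minimum in thresholds.items():
    base = control[metric].mean()                      # share of control cohort hitting the milestone
    lift = treated[metric].mean()                      # share of onboarding cohort hitting it
    improvement = (lift - base) / base if base else float("inf")
    verdict = "pass" if improvement >= minimum else "fail"
    print(f"{metric}: control={base:.0%} onboarding={lift:.0%} "
          f"improvement={improvement:+.0%} -> {verdict}")
```

The key design choice mirrors the lecture: users are randomly assigned to the two cohorts up front, so the comparison isn't biased by who chooses to accept the onboarding offer.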
