
UXPA 2016: Mixed Methods Research in the Age of Big Data

UX professionals have a long history of blending quantitative and qualitative research to better understand the customer experience. As Data Science has emerged as a discipline (with an increasing amount of hype), it's all too easy to engage only during results time, sharing information but working independently. At UXPA 2016, I made the case for deeper collaboration between UX professionals and Data Scientists during research and analysis time, for the sake of better Design outcomes for all.

How many of you have engaged in Mixed Methods Research before – either individually or more broadly, on your UX team? How many of you have a Data Science group within your organization that is starting to look at customer outcomes? Keep your hand up if you’re completely satisfied with your working relationship with those Data Scientists; otherwise put your hand down.

I’ve gotten a handle on three groups in the audience. For some of you, Data Science hasn’t entered into the space of customer understanding in your organizations. I’ll speak to that briefly – but as a forecast, that day is coming up fast.

A bunch of you do have Data Scientists in your organization working on customer outcomes, but you aren’t satisfied with your working relationship. That’s what the bulk of this presentation will be about. The future of Mixed Methods Research requires the best of what both disciplines bring to the table, so we need to start building stronger relationships with each other.

Before I dive in, I’ll give a nod to the folks who are happy with their connection with Data Scientists. This is a new space, and what may seem like second nature to you is challenging for many. It’s challenging for me, frankly. The way forward is to have multiple points of view about how we collaborate, but I’m consistently struck by the scarcity of Data Science topics at UX conferences and vice versa. So today I’m going to add my perspective to the discussion, and I hope you’ll all consider doing the same at future events.

Introduction – UX Data Scientist. I won’t go in depth into my title – I’m showing this viz to communicate that I consider myself a qualitative researcher at heart. I made the switch not because I was tired of qual research, or because I think Data Science is better. I’m just passionate about how these two disciplines collaborate, and formalizing a hybrid role seemed like the best position from which to start bringing that passion to life.

I’ve been talking about “Data Science” for a couple of minutes now, but I want to spend a bit more time unpacking what’s happening in the Data Science space. To tee up this discussion, I want to reference a comment that Susan made during the past, present, future panel on Wednesday: there’s a lot of buzz around UX, and we need to make sure that UX work is being represented well, and that people who lack the rigor of the discipline aren’t defining what it means for everyone. That point really resonated with me – though in terms of buzz right now, my perception is that Data Science has a lot more of it.

Coursera: 302 matches for User Experience, first page has developer courses (presumably with a UX lesson). Compare that to 400+ Data Science matches – and they are all Data Science for the first many pages – and UIUC is even hosting a master’s degree completely on Coursera.

There’s an interesting dynamic with these programs, because the draw for them – the promise for someone who is already quantitatively minded – is that there’s so much data, you’ll be able to answer any question you have. The appropriate nuance shows up later, in the details of the course, if you’re paying attention. So these graduates will be exploding onto the scene, ready to tackle every question in their way, and quite frankly most of them may not think that we have a role to play in the questions that they’re asking.

So what do we do about that?

kv: Sam, that all sounds great, but sometimes it literally feels like we’re speaking different languages.

Remember my note that Data Scientists may be coming onto the scene without an understanding of what we do. That’s not entirely true, unfortunately – in more than a few cases, they may have a negative impression of what we do. While I don’t think it’s meant to be antagonistic, many Data Scientists frame qualitative research as dealing with “small data,” “anecdotal data,” or not even calling it data at all. (When I told someone about my background and what I’m doing now, his response was, “Oh cool – I’m excited to see someone in the UX space starting to work with the ‘real data.’”)

The good news is that this role of educating stakeholders about what we do is very familiar to us – and I know it has already been discussed at length. My goal here isn’t to prescribe THE education approach, but to discuss this process from the Data Science perspective, and to share how I addressed this education challenge while I was a qualitative user researcher.

Jen’s Wednesday keynote

It’s fine if your story is different from mine, but fundamentally, we need to be comfortable – and proactive – in sharing this story.

Reinforce that this won’t necessarily be an easy conversation, and you may meet with some resistance. There’s no silver bullet to take someone from believing they have all the answers to accepting that there are different ways of thinking and knowing. The good news is that there are folks like me on the Data Science side – we’re chipping away at this communication problem from both ends. But even if the conversation starts with a negative outlook on what UXers do, the act of having that conversation moves the collaboration potential forward for everyone.

“The statement process was lengthier and more controversial than anticipated.”

The principles are super approachable. The first clearly articulates what a p-value can be used for, and the other five address common misuses of the measure. Check out Principle #6 – what is it saying? If your Data Scientist is using the p-value by itself as a mic-drop moment, that’s an opportunity for you to hook in. There’s a lot of context missing if all you’re working from is the p-value, so don’t be afraid to jump in there.

Let’s tie some thoughts together. Remember what I said up front about the recent hype around citizen data science – the idea that anyone can adapt a model and make it work for their context. These principles were a reaction to seasoned practitioners abusing some of the fundamental concepts of the discipline. What do you think will happen when folks engage who don’t even have a rigorous background in the space? The bottom line is that statistics is hard. Most statistics classes have a unit up front that emphasizes how bad our brains are at thinking in probabilistic and statistical terms. And to reinforce the point – it took the foremost experts in statistics over a year to release a statement about this fundamental concept in their discipline.

What can we do about this? I really appreciate Principle #4 – proper inference requires full reporting and transparency.

That’s partially the long way around to say that qualitative research excels at answering the Why. But there’s more to it than that.

We also need to be aware of the context of the data – what the data represents and what it doesn’t. The best machine learning model can only perform on the basis of the data it’s given, and making sure the right data is included is a tough problem that we need to be a part of. There’s a growing body of research on how machine learning models can be biased, and it comes back to this point – the people building a model have intrinsic biases that can keep them from pursuing the data needed to speak appropriately about a particular space.

If collaboration is happening today, I tend to see that it’s centered on results sharing. This is great – but I’ve found more than a few times that my ability to influence strong research outcomes is pretty limited if I’m only engaging on the results end.

A/B testing is a common example, so I’ll close out with some ideas around how to start a conversation around that.
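One way to anchor that conversation is the arithmetic behind a two-variant conversion test. The sketch below – a two-proportion z-test via the normal approximation, plus the absolute lift – is my own illustrative example, not from the talk; the function name, numbers, and stdlib-only approach are all assumptions made for the sketch. It returns both a p-value and a lift, so statistical and practical significance can be discussed side by side.

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test (normal approximation).
    Returns the p-value and the absolute lift of B over A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_value, p_b - p_a

# With huge samples, even a 0.2-point lift clears p < 0.05.
# Whether that lift matters is a product question, not a statistical one.
p, lift = ab_test(conv_a=10_000, n_a=200_000, conv_b=10_400, n_b=200_000)
```

The p-value answers only the statistical question; the lift is where the practical-significance conversation – the one we should be in the room for – starts.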

1.
Mixed Methods Research
in the Age of Big Data
A Primer for UX Professionals

18.
Models + Key Aspects of Analysis
Descriptive Model
Descriptive Statistics
Statistical Significance
What is the probability of obtaining this
result given the null hypothesis is true?
Practical Significance
Is the effect on the outcome large
enough to be considered relevant?

19.
http://fivethirtyeight.com/features/statisticians-found-one-thing-they-can-agree-on-its-time-to-stop-misusing-p-values/
The statement process
was lengthier and more
controversial than
anticipated.

20.
6 Principles for p-values from ASA’s Statement
1. P-values can indicate how incompatible the data are with a specified statistical model.
2. P-values do not measure the probability that the studied hypothesis is true, or the probability that
the data were produced by random chance alone.
3. Scientific conclusions and business or policy decisions should not be based only on whether a
p-value crosses a specific threshold.
4. Proper inference requires full reporting and transparency.
5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a
result.
6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108
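Principle #6 lends itself to a quick simulation: even when the null hypothesis is true (a fair coin), p-values under 0.05 turn up roughly 5% of the time, so a lone small p-value is weak evidence by itself. This stdlib-only sketch, with numbers chosen purely for illustration, is mine and not part of the ASA statement or the talk.

```python
import math
import random

def p_value_coin(heads, flips):
    """Two-sided p-value for 'this coin is fair', via the normal approximation."""
    z = (heads - flips / 2) / math.sqrt(flips / 4)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)  # fixed seed so the sketch is reproducible
trials, flips = 1000, 500
false_alarms = sum(
    p_value_coin(sum(random.random() < 0.5 for _ in range(flips)), flips) < 0.05
    for _ in range(trials)
)
rate = false_alarms / trials  # hovers near 0.05 even though every coin is fair
```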

21.
Models + Key Aspects of Analysis
Descriptive Model
Descriptive Statistics
Statistical Significance
What is the probability of obtaining this
result given the null hypothesis is true?
Practical Significance
Is the effect on the outcome large
enough to be considered relevant?
Predictive Model
Supervised Machine Learning
Accuracy
How well does the model predict the
outcome for new data cases?
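To make “how well does the model predict the outcome for new data cases?” concrete, here is a minimal holdout sketch. The toy 1-nearest-neighbor classifier and the made-up customer points are illustrative assumptions, not anything shown in the talk.

```python
def nearest_label(train, point):
    """Return the label of the training point closest to `point` (1-NN)."""
    _, label = min(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], point)),
    )
    return label

# (feature vector, label) pairs -- e.g. usage signals vs. churn outcome
train = [((0, 0), "churn"), ((0, 1), "churn"), ((5, 5), "stay"), ((5, 4), "stay")]
holdout = [((0.5, 0.5), "churn"), ((4.5, 4.8), "stay"), ((5, 0), "churn")]

# Accuracy: the share of held-out (new) cases the model labels correctly.
correct = sum(nearest_label(train, point) == truth for point, truth in holdout)
accuracy = correct / len(holdout)
```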

23.
Models + Key Aspects of Analysis
Descriptive Model
Descriptive Statistics
Statistical Significance
What is the probability of obtaining this
result given the null hypothesis is true?
Practical Significance
Is the effect on the outcome large
enough to be considered relevant?
Predictive Model
Supervised Machine Learning
Accuracy
How well does the model predict the
outcome for new data cases?
Representation Model
Unsupervised Machine Learning
Optimization Criteria
How will we determine that we’ve built a
reasonable and appropriate representation
model for our data?
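An optimization criterion can be as simple as within-cluster sum of squares – the quantity k-means minimizes – where lower means the chosen centers summarize the data more tightly. This stdlib-only sketch, with made-up one-dimensional data and hand-picked centers, is purely for illustration and not from the talk.

```python
def inertia(points, centers):
    """Within-cluster sum of squares: each point charged to its nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
one_center = [sum(points) / len(points)]   # represent everything with k = 1
two_centers = [1.0, 8.0]                   # hand-picked k = 2 for the sketch

# The criterion makes "reasonable and appropriate" measurable:
# two centers represent this data far better than one.
better = inertia(points, two_centers) < inertia(points, one_center)
```

Agreeing up front on the criterion – and on what counts as “reasonable and appropriate” – is exactly the kind of conversation UXers and Data Scientists should have before the model is built, not after.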

27.
… And a Framework for Attributes
UX
Business
Tech
Experience Attributes
Customer attributes that can
explain how that customer
will experience a product.
Technology Attributes
Customer attributes that can
explain whether customers
will have technical issues with
a product.
Business Attributes
Customer attributes that can
explain the extent to which the
customer will contribute to
business outcomes.