Research & Design Process

The truth is that sometimes a small team may not have taken the time to document the decision-making process for new teammates to reference later, and the demands on their time might mean that they aren't available for a hands-on briefing either. It's definitely not an ideal situation, but I've always been someone who is willing to make lemonade when I need to.

The following process is the one I follow when I need to build knowledge about a product, its industry, and its users from the ground up. I do my best to keep my cumulative research organized and available for onboarding new members of the team, and I make documents or presentations about my current findings for other departments as well as the development team, so that everyone can better understand the research supporting the decisions we are making with the product.

1. Start with Existing Online User Resources

With Brivity, I began by attending webinar demonstrations like a potential customer would, reading existing self-help and support articles online, and reviewing any marketing materials I could find. Though it was a slower process than a direct briefing from coworkers, I gained a lot of insight into the experience of a new user.

2. Meet and listen to the customers

My second week at Brivity, I was offered a chance to attend a trade show & conference as part of the sales team. I made notes of which features attendees were most excited about, which things they didn't express interest in, and any inquiries I wasn't yet familiar with. Between selling, I also attended educational sessions.

3. Look into user behavior

Back in the office, I managed to get my hands on some data about what sorts of objects users were creating in the app and what they looked at on the marketing site. We examined which features were most popular among users, which were underutilized, and how users might be using existing features in ways we weren't expecting. We also analyzed exit survey results from trial users and longer-term clients, customer feedback on feature usability, bug reports, and feature requests.
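To make that data-digging step concrete, here's a minimal sketch of how object-creation events might be tallied to separate popular features from widely adopted ones. The event log and feature names below are invented for illustration; a real analysis would pull from the product's analytics export.

```python
from collections import Counter

# Hypothetical event log: each record is (user_id, feature) for one
# object-creation action in the app.
events = [
    ("u1", "tasks"), ("u2", "tasks"), ("u1", "listings"),
    ("u3", "tasks"), ("u2", "emails"), ("u3", "listings"),
    ("u4", "tasks"),
]

# Raw creation counts per feature rank overall popularity.
feature_counts = Counter(feature for _, feature in events)

# Distinct users per feature distinguish a feature used heavily by a
# few accounts from one that is broadly adopted.
users_per_feature = {
    f: len({u for u, feat in events if feat == f})
    for f in feature_counts
}

print(feature_counts.most_common())
print(users_per_feature)
```

Comparing the two tallies is what surfaces underutilized features: a high creation count with few distinct users is a very different signal than a modest count spread across the whole customer base.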

4. Get familiar with the perceived (from the UI) and actual app architecture

I got everything up on a wall and tagged the heck out of it with post-it flags to create broad user flows based on what I had learned from my earlier research. Most users had very strong opinions about how data related in the app, which gave me ideas of how they thought it was structured and how it should change. I wanted to understand how closely the users' perspective of the information matched the app's actual architecture. The answer to that turned out to be complicated.

5. Learn the industry standards and expectations

When I came across a term or phrase I wasn’t familiar with, I googled it and confirmed my understanding with an actual real estate agent later.

I researched the laws and restrictions agents have to abide by, as well as what they need to know to pass the exam to receive and maintain their license, and how that differs across states and countries.

Understanding what your competitors do and how they are different is very important from a product development and marketing perspective.

I compiled a list of competitors—based on the content of our sales videos—and signed up for their products to cross-reference user expectations about industry terminology and the structure of transaction management tools, CRMs, and task management systems: i.e., how our product compared to theirs structurally. Following the marketing of their feature releases also got me started on market analysis, as we could see what their users were most excited about and compare it to what we were working on.

6. Observe users using the product

Upwards of 70% of an agent's time can be spent outside of the office, so most communication with admins and other users within the team needs to be online. It was easy to understand why communicating within a large team could be difficult.

We started with Generative Research—specifically contextual inquiries—with users of the product from a large real estate team in the same building, plus a couple of smaller local teams in our research pool. This provided participants ranging from users with two years of familiarity with Brivity & real estate, to newbies starting in real estate, and with Brivity, the day I observed them. Generative research is important because it helps us make sure that we're actually building something people want.

I made notes of any questions trainees asked and of the explanations given. I also asked experienced users about other products they’d used and what they had liked and disliked about those in comparison to Brivity. This was followed up with Evaluative research from what analytics we had (later setting up tracking for a few more) and analyzing past feedback survey results received by sales and support.

With a better understanding of the existing product and the needs of users, I'm then able to move into the design process, which itself is sprinkled with additional research and feedback along the way.

The Design, Feedback, Design, Development feedback... Loop

The design/development process moves quickly from one step to the next, with changes based on feedback expected between each phase. I am usually at different stages of this process for 3-10 features across multiple products at a time.

Low Fidelity first

Get everything up on a wall to get an overview of inconsistencies in naming conventions and layouts between pages, while viewing the existing architecture / site map of the product.

User flows taken from interviews get turned into Wireframes and run by support and the developers for feasibility and proof of concept. Changes are made when necessary.

Once those are agreed upon, rough static mockups are made and shown to stakeholders and sales for feedback on market fit and to confirm that they support business goals.

Add more details

After adding enough data for users to understand what they are looking at, I build interactive prototypes with variations on color and text hierarchy to be A/B tested by both new and experienced users, establishing a usability benchmark and confirming that their expectations are met. These are often tweaked and tested again on new users.
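One way such an A/B benchmark might be compared is sketched below. The task-completion counts are invented, and the two-proportion z-test shown is a common general-purpose check for whether the difference between variants is likely meaningful rather than noise—it is not a claim about the specific method used here.

```python
import math

# Hypothetical usability benchmark: task-completion counts for two
# prototype variants, each tested with a separate group of users.
completed_a, total_a = 42, 60   # variant A
completed_b, total_b = 51, 60   # variant B

p_a = completed_a / total_a     # completion rate, variant A
p_b = completed_b / total_b     # completion rate, variant B

# Pooled two-proportion z-test for the difference in rates.
p_pool = (completed_a + completed_b) / (total_a + total_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
z = (p_b - p_a) / se

print(round(p_a, 2), round(p_b, 2), round(z, 2))
```

A |z| above roughly 1.96 corresponds to the conventional 95% confidence threshold; with small usability samples, though, the qualitative observations from the sessions usually matter more than the statistic.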

I find that the best way to build a digital product is to design a system with reusable, modular parts instead of pixel perfect layouts. This results in a style guide that doesn't require defining each page specifically—instead I like to create elements or components that can be combined and rearranged to fit various layouts as needed.

High Fidelity enables testing real-world examples & aids development

For the places in the app where we were making the biggest changes to interaction, we turned those style guides into pixel-perfect mockups and let users test them in InVision. Later they could be attached to stories for development and used for writing both internal & external release notes.

Writing stories for development turned out to be a crucial part of the design process for me as it forced me to confirm that we hadn't missed any important features or interactions. The combinations of stories & prototypes helped clarify development tasks and made acceptance easier.

At first, accepted features were deployed directly to production (scary, I know), but later we built a staging environment where releases could be tested internally first, and then by a small group of beta users from the team next door, before being deployed to production.

Collect Feedback and Iterate

As product owner, I set up daily check-ins with the support team to talk about what customers were talking and asking about most, as well as to give them information about where we were in the process with specific feature requests so that they could update users. We also used these check-ins to remind them of release times and dates and to let them know if those would move outside our normal schedule. Instituting these meetings made a huge difference in how Support saw themselves as part of the research and development team; it improved communication and made the feedback loop a lot more efficient.

The best part of these meetings, as a product owner and UX designer, is that they gave us good primary feedback on newly released work. If a new feature didn't have any comments or questions on how to use it within the first week, we'd check analytics to make sure it was being used, but then we could rest assured that the feature was easy to understand and met expectations. Sometimes, though, it wasn't used at all—which became a marketing question as well as a "did we just build something useless" question—and we knew we'd missed the mark a bit. Then support would give us the contact information of particularly opinionated users to follow up with. This was great, because we could ask them what they thought the feature would be, why it wasn't meeting their needs, and then get some ideas on how to iterate in the future.