Follow Me

Customer-driven development: 3 ways to make customers part of R&D

You invested tons of money to develop a new capability in your software. It passed all QA tests, from functionality to performance to security. But customer calls are streaming into your support team, each complaining about how the new release has negatively affected the user experience. Defects surface when customers use your product in ways that R&D never specifically tested, such as in different environments or on alternative configurations.

You can prevent scenarios such as this through customer-driven development. If your software R&D team isn’t getting ongoing, direct feedback from your customers in the earliest phases of software development, you’re missing out on an opportunity to drive product direction and improve quality. It’s not enough to just satisfy your customers; you need to bring their voices into R&D so that they can guide product development from the earliest stages. Customer-driven development is the best way to ensure that your customers will actually use the product you’re building.

In a large software R&D organization such as mine, it’s just not feasible for all of our developers to talk to customers directly. Instead, we built an internal team called Customer-Driven Testing and Development (CDTD) that’s responsible for generating customer and user influence in our product development lifecycle. Here’s how we constructed the team, what we learned, and how you can reap the benefits of a customer-centric R&D strategy.

One team, two hats

Our team wears two hats. It dons one when working with customers on products that have already been released, and the other when focusing on future releases, where we want to understand how the customer would use a new feature. For this reason, our work is not limited to QA; it also involves aspects that influence development, functional architecture, and product management.

The CDTD team’s activities are not part of a product’s QA cycle but nonetheless exert great influence over how we develop and test the product. Because the team is positioned between customers and R&D, we included engineers with highly technical backgrounds. These engineers understand each product and are passionate about working directly with users.

One of the CDTD team’s main responsibilities is to help focus our QA team. Applying the 80/20 rule, we recognize that roughly 20 percent of a product’s capabilities account for 80 percent of customer use cases, and we identify use cases that can’t be reproduced in synthetic lab conditions, such as those specific to customer environments. Our team also works closely with product managers and functional architects, providing them with customer feedback that can aid in creating strategic product plans.
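If you collect per-feature usage telemetry, the 80/20 analysis can be automated with a simple cumulative-coverage pass. Here’s a minimal sketch; the feature names, counts, and the `pareto_features` helper are illustrative assumptions, not our actual tooling:

```python
from collections import Counter

def pareto_features(usage_events, threshold=0.8):
    """Return the smallest set of features that together account for
    at least `threshold` of all recorded usage events."""
    counts = Counter(usage_events)
    total = sum(counts.values())
    covered = 0
    core = []
    for feature, count in counts.most_common():
        core.append(feature)
        covered += count
        if covered / total >= threshold:
            break
    return core

# Hypothetical telemetry: one entry per feature invocation.
events = (["search"] * 50 + ["export"] * 30 + ["reports"] * 12 +
          ["admin"] * 5 + ["plugins"] * 3)
print(pareto_features(events))  # ['search', 'export']
```

In this toy data set, two features cover 80 percent of all usage, so those two would get the lion’s share of QA attention.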

Here are three key best practices and tips that our CDTD team implements. By following them, you’ll bring your team and your users closer together, creating a continuous stream of feedback and drawing users into product discussions.

1. Simulate the customer environment

If your customers are willing to share their production environment data with you, you should replicate their environment in your QA labs using actual data sets from their business.

Even enterprises with major data restrictions, such as large banks, are willing to share data—assuming all legal requirements are met—when they understand the benefits.

For example, your customers can minimize risks once the time comes for an upgrade because the new version of the product will have already been tested with their data. Already this year, we’ve received plenty of customer data sets, and we’ve set up the necessary automation procedures to streamline customer environment simulations.
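Once customer data sets are in hand, environment simulation is largely a matrix problem: run the same checks against each customer-like configuration. Here’s a minimal sketch of that idea; the environment profiles and checks are hypothetical placeholders (real checks would install the product with the customer’s data set and exercise its main workflows):

```python
# Hypothetical environment profiles derived from customer-shared data;
# the names, OSes, and databases here are illustrative, not real customers.
CUSTOMER_ENVS = [
    {"name": "bank-a", "os": "rhel8", "db": "oracle", "locale": "de_DE"},
    {"name": "retail-b", "os": "windows2019", "db": "mssql", "locale": "en_US"},
]

def run_suite(env, checks):
    """Run each (name, check_fn) pair against one environment profile."""
    return {name: check(env) for name, check in checks}

# Illustrative checks only.
def db_supported(env):
    return env["db"] in {"oracle", "mssql", "postgres"}

def locale_wellformed(env):
    return "_" in env["locale"]

checks = [("db", db_supported), ("locale", locale_wellformed)]
results = {env["name"]: run_suite(env, checks) for env in CUSTOMER_ENVS}
print(results)
```

The same pattern scales up in a real QA lab: each new customer data set becomes one more row in the matrix, and the whole suite reruns automatically before every release.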

It’s not just the customer that benefits, however. R&D organizations need to maintain strong relationships with customers in order to continuously receive feedback that influences product development, as well as to learn about current bugs that disrupt the user experience.

2. Research customer requirements

There are several elements to researching your customer’s requirements. These include the following:

The customer survey

The customer survey enables our team to make decisions by giving us a clear understanding of how customers use our products. Our survey reports include quantitative information about our most commonly used features, the most common environments, and more. The surveys also help us decide which questions to ask next to shape the direction of future versions. For example, if we learn that most of our customers use a specific operating system, we can focus our testing accordingly.
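Turning survey answers into a testing focus can be as simple as tallying responses and applying a threshold. A minimal sketch, assuming hypothetical responses and an arbitrary one-third cutoff:

```python
from collections import Counter

# Hypothetical survey responses: each entry is one customer's reported OS.
responses = ["windows", "windows", "linux", "windows", "macos",
             "linux", "windows", "windows"]

counts = Counter(responses)
total = len(responses)

# Flag any environment reported by at least a third of respondents
# as a priority target for test coverage.
priority = [env for env, n in counts.most_common() if n / total >= 1 / 3]
print(priority)  # ['windows']
```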

Surveys are also a great way to help R&D organizations receive customer feedback. In order to approach our customers with well-defined and well-structured questions, we gather questions from developers, solution architects, test engineers, and product managers. We include small, specific queries about product features, as well as ideas for future features.

Although we communicate these questions to customers using a typical survey format, we also strive to maintain direct communication with each customer that replies. One of the most important aspects of these surveys is to encourage customer feedback.

Change requests

A change request (CR) is typically a production issue that has been escalated to R&D. Our role is to gather all CRs and analyze them to learn what needs to be improved in our own testing environment, with the main goal of minimizing the chance that a specific CR recurs.

For example, when a CR relates to a function that is outside of a product’s main path (the most used workflows and functions), we might learn that the frequency at which that particular function was being tested was too low. By then triggering QA to enhance or add specific tests, our team can help stop a given CR from recurring—or at least reduce the possibility that it might recur.

We determine whether each CR stems from a customer application under test (AUT), an environment-specific condition, or a test coverage gap, and then act accordingly. Actions include pre-release validation for specific customers, testing workflows outside the main path more frequently, or creating new manual or automated tests for issues in the main path.
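The triage step above amounts to a classify-and-dispatch table: map each CR to one of the three categories, then to its follow-up action. The categories mirror the ones in the text, but the keyword rules and action strings in this sketch are illustrative placeholders, not our actual tooling:

```python
from enum import Enum

class CRCategory(Enum):
    AUT = "customer application under test"
    ENVIRONMENT = "environment-specific"
    COVERAGE = "test coverage gap"

# Map each category to a follow-up action (phrasing is illustrative).
ACTIONS = {
    CRCategory.AUT: "run pre-release validation for the affected customer",
    CRCategory.ENVIRONMENT: "replicate the environment; test off-main-path workflows more often",
    CRCategory.COVERAGE: "create new manual or automated tests on the main path",
}

def triage(cr_summary):
    """Classify a change request and return the recommended action.
    The keyword rules here are simplistic placeholders."""
    text = cr_summary.lower()
    if "environment" in text or "config" in text:
        category = CRCategory.ENVIRONMENT
    elif "workflow" in text or "integration" in text:
        category = CRCategory.AUT
    else:
        category = CRCategory.COVERAGE
    return category, ACTIONS[category]

category, action = triage("Upgrade fails on customer config with proxy enabled")
print(category.name, "->", action)
```

In practice the classification comes from an engineer reading the CR, not keyword matching; the value of the table is that every category has a predefined action, so no CR analysis ends without a concrete follow-up.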

3. Turn to customers as design partners, beta testers, and early adopters

We give customers the opportunity to participate in programs where we share plans and ideas for products and features that are in the incubation stage, and we enhance our QA efforts so that customers can provide feedback and potentially influence the direction and quality of our products. They become our design partners, beta testers, and early adopters.

Design partners

We start by identifying a group of users that match our design-partner qualification criteria. Then we share our plans, including feature mockups, either early in the development process or beforehand, when the team is still conceptualizing a product or feature. We collect feedback from these customers through online conference calls or on-site visits, during which we spark discussion among our design partners while we listen. We also strive to match design partners to specific upcoming features in order to obtain the most relevant feedback. Dozens of design partners give our team input about features in the making, and during biweekly update sessions we share everything we’ve created to date with them, including initial mockups, UI options, workflow descriptions, and so on.

Beta testers

Prior to our feature-freeze phase, we give our beta customers full access to new features. This helps us better align those features with customers’ needs. Customers get a chance to install the beta build in their testing environment before the version is released, so they can verify that their main path works as expected and try out the new features. They can share with us any issues they find and directly influence the quality of the release.

Early adopters

Finally, we have an early-adopters program. Many users prefer to wait until a new feature is stable, rather than trying it when it’s first released. So, to accelerate feedback, we reach out to our early-adopter customers right away, asking them to try out the product as soon as it’s released.

Our early adopters understand that they have an opportunity to influence how the software they want to use gets developed. Users in our program say they love to proactively share insights with us, because they understand that their needs are being seriously considered and eventually implemented. The more feedback they give, the better the services they use become.

R&D should take the lead

Even in small organizations with limited resources, R&D leaders should dedicate time to proactively obtaining user feedback. Because your team members are usually not the ultimate users of your product, involve your customers in product brainstorming sessions by engaging with them as early as possible in a structured way. Make sure you record all of the ideas and requests you hear, and infuse them into your product’s road map and release cycles. Not only will your customers appreciate that you listen to and implement their ideas, but you will also set yourself apart from your competitors.

That, in a nutshell, is how I infuse customer feedback into our R&D process. What’s your approach?