Heuristic Approach to Compatibility Testing

With the multitude of environments where software could run, how can you select a good set to run your tests against? Test designers are typically challenged to come up with a strategy for evaluating product quality across different browsers and platforms. Combine that with the myth of perfect software running in the heads of your stakeholders, and you are well on your way to a suicide mission unless you hammer some sense into the plan.

BEGIN WITH YOUR MARKET IN MIND

When the economics of software development kicks in, everything is about ruthless prioritization: finding that optimal, good enough combination of browsers and platforms to test against within certain constraints.

Here’s a sample model you can use to assess:

What Tests Should You Run?

Acceptance checks, user interface checks, and exploratory tests are done on the environments affecting N% of users (e.g. 80%) based on browsers, platforms, resolutions, localizations, and other relevant variables. These will be done on actual devices, not emulators.

User interface checks are done on all other browsers, platforms, and resolutions that we have available, using desktop and mobile devices, virtual machines, and cross-browser testing tools.

Note: this is a sample model; it could differ depending on each project’s context.

–

What User Environment/s Do You Test On?

One way to identify this is pairwise analysis of the top browsers, operating systems, resolutions (desktop), device platforms (mobile, desktop), screen densities (mobile), and other important variables based on current usage statistics (analytics) over a relevant period of time. Then decide which top environments are feasible to focus on based on the data.
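To make the pairwise idea concrete, here is a minimal sketch of a greedy all-pairs generator in Python. The parameter names and values are hypothetical, not taken from any real analytics; the point is only to show that every pair of values across any two variables gets covered by far fewer environments than the full combination of everything.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily pick environments so that every pair of values across
    any two parameters appears in at least one chosen environment."""
    names = list(params)
    # All (param_index, value, param_index, value) pairs that must be covered.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))

    all_envs = list(product(*(params[n] for n in names)))
    suite = []
    while uncovered:
        # Pick the environment that covers the most still-uncovered pairs.
        def gain(env):
            return sum(1 for (i, va, j, vb) in uncovered
                       if env[i] == va and env[j] == vb)
        best = max(all_envs, key=gain)
        if gain(best) == 0:
            break
        suite.append(dict(zip(names, best)))
        uncovered = {(i, va, j, vb) for (i, va, j, vb) in uncovered
                     if not (best[i] == va and best[j] == vb)}
    return suite

# Hypothetical variables; in practice these come from your analytics.
envs = {
    "browser":    ["Chrome", "Firefox", "Safari"],
    "os":         ["Windows", "macOS", "Android"],
    "resolution": ["1920x1080", "1366x768"],
}
suite = pairwise_suite(envs)
```

Testing every full combination above would mean 3 × 3 × 2 = 18 environments; the pairwise suite covers every two-variable pairing in roughly half that, which is the economic argument for the technique.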

This method is a corollary of the 80/20 rule: given limited resources, effort is focused on building good enough quality into the roughly 20% of environments that affect the top 80% of customers.

If analytics data for the product is not yet available, you can default to your own market research data (specific to your customers). If everything fails, there are generic browser usage statistics available on the web (see Browsers to Test).
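The 80% cutoff described above can be sketched as a simple greedy selection: sort environments by usage share and keep taking the biggest until the target coverage is reached. The usage figures below are invented for illustration, not real statistics.

```python
def top_environments(usage, target=0.80):
    """Pick the smallest prefix of environments (by descending usage
    share) whose combined share reaches the target coverage."""
    chosen, covered = [], 0.0
    for env, share in sorted(usage.items(), key=lambda kv: -kv[1]):
        chosen.append(env)
        covered += share
        if covered >= target:
            break
    return chosen, covered

# Hypothetical analytics shares (sum to 1.0), not real data.
usage = {
    "Chrome/Windows":  0.42,
    "Safari/iOS":      0.23,
    "Chrome/Android":  0.18,
    "Firefox/Windows": 0.07,
    "Edge/Windows":    0.06,
    "Safari/macOS":    0.04,
}
focus, coverage = top_environments(usage)
```

With these made-up numbers, three environments already cover 83% of users, so the remaining three would get only the lighter user interface checks described earlier.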

–

What Constraints?

The testing strategy also needs to hinge on the project’s economic limitations and risk profile, such as but not limited to:

Time-to-market: the cost of delay in getting the product into the hands of the customer

Budget: the money and resources we can spend

Quality risk profile: the level of risk the team is willing to take to meet other priorities such as time-to-market or staying within budget

–

THE BOTTOM LINE

Seems complicated? Sure.

The nudge, if you haven’t noticed yet, is to have the product owner, development team, and other stakeholders discuss goals, options, and risks based on user data within the project’s constraints, and to create shared understanding of, and accountability for, the decision.

It is not the model above that matters; it’s the discourse. With people from different disciplines collaborating, options at the technology, quality, and business levels can be surfaced and used for prioritization.