How we do exploratory tests before taking on third-party apps: Oursky’s Software QA-as-a-Service

This piece on exploratory tests is Part 2 of what Oursky calls “code diagnostics”, a standalone service that takes a snapshot of a product’s code base and existing issues. You can also read our code diagnostics Part 1, which focuses on the code base review part of the service.

Software QA is not a function, but a baked-in approach to software development with best practices. We just wanted to put that out there because today, we’re focusing on a function: exploratory tests.

Where are exploratory tests used in software development?

All the components that contribute to quality code at Oursky.
Exploratory tests are usually better suited to consumer apps, where a tester can freely explore the product without a rigid script. These tests take place after a new build is completed and before it is shipped. Exploratory tests focus on product usability, complementing the automated tests run with continuous integration that check for errors. Exploratory tests usually come in for two scenarios:

Scheduled by a PM after a new build: built into a project management workflow for an existing client project

Performed for a client with an existing product: as part of our standalone “code diagnostics” service, when clients have a product built by a third-party developer and want us to review the existing code base or make changes to the existing product.

Why “code diagnostics” helps before working on apps developed by other teams

We track bugs for our own projects as well, including our open-source serverless backend Skygear.
Oursky began offering “code diagnostics” as a standalone service because it is good practice to examine an app created by a third party and understand the scope of work before providing estimates or timelines to a client. The goal of code diagnostics is to give a client enough information to make informed decisions about how to proceed with their product. It also clearly documents pre-existing issues before our team takes over a project. The code diagnostics deliverable is a report: a snapshot of the quality of the existing code base plus a list of existing bug reports and issues. Where appropriate, we note suggestions about the existing software architecture that could cause problems down the line when the product scales. The report has two main sections:

Code base review: to assess the quality of the code base and whether it is easy enough to understand and solid enough to maintain or build on

Exploratory test: to document bugs in the existing product (major, minor, trivial) so we know what needs to be fixed before proceeding with development, and so there is no confusion later about which bugs pre-date our work

Depending on the size of the product and the quality of its documentation, the review can take anywhere from a few days to two weeks for very large projects.

How we do exploratory tests

Exploratory tests divide issues into major, minor, and trivial bugs.
The objective of an exploratory test (as with software QA in general) is to find as many bugs and usability issues as possible within a specified period of time. Depending on the timeline agreed upon with a client, the scope of work may or may not cover the entire app. The main goal is to provide a clear picture of the number and severity of defects in different parts of the app.

For example, in one code diagnostics case, our Skytesters (remote testers that Oursky’s QA team manages) reported 100+ issues overnight. Oursky’s QA team then had to retest and edit all reported bugs within 48 hours; 60–70 valid issues were accepted.

The Process:

Kick-off: QA meets with Sales and the PM (to give the testing team clear briefing instructions)

Quick demo of the product features

Questions about testing setup, e.g.

How should testing accounts be created? Any special workflow at registration?

Any testing credit cards or billing accounts?

Questions about expected exception handling, e.g.

Any loading state between pages / before data is loaded?

What is the expected behavior if a user clicks a button that is supposed to be available only to logged-in users?

What happens if a user kills and restores the session, for example by closing the browser or killing and reopening the app?

Does the website support mobile view / responsive design?

Confirm the schedule and scope:

How many platforms need to be tested?

Around how many testers should be invited to join the test?

When will the testing environment be set up and ready for testing?

When is the deadline for submitting the test report to the client?

Invite Testers: Invite and confirm remote testers suitable for this test

Note: the number of testers invited depends on the client’s budget and urgency

Planning: Draft and send testing guidelines to the testers, which include

Test run information (i.e. date, time, platform, app version, etc.)

Instructions on how to report issues

Checklist of app features

Any special points to note

Any known / existing issues

Test Run: Within a fixed time period, testers report any issues they find and retest each other’s issues to improve accuracy

Review: Testers stop reporting, and the test manager does the following:

Review and edit the issues reported

Filter out issues that are duplicated, too trivial, or caused by improper use

Make sure that the issues are reproducible according to the given steps

Double-check that each issue’s type and severity are properly labelled

Report: Generate a test report in PDF format from the list of issues

Create an index page listing the issues by section, sorted by severity

For clarity, print each issue on a separate page
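
The index-building step above can be sketched in a few lines. This is a minimal illustration, not Oursky’s actual tooling; the `Issue` record, its fields, and the severity labels are assumptions based on the report structure described here.

```python
from dataclasses import dataclass
from collections import defaultdict

# Assumed severity ranking: the report lists major issues first.
SEVERITY_ORDER = {"major": 0, "minor": 1, "trivial": 2}

@dataclass
class Issue:
    name: str
    section: str   # app module the issue belongs to, e.g. "Checkout"
    severity: str  # "major", "minor", or "trivial"

def build_index(issues):
    """Group accepted issues by app section, each section sorted major-first."""
    by_section = defaultdict(list)
    for issue in issues:
        by_section[issue.section].append(issue)
    return {
        section: sorted(items, key=lambda i: SEVERITY_ORDER[i.severity])
        for section, items in by_section.items()
    }

# Hypothetical accepted issues after the review step:
issues = [
    Issue("Typo on signup button", "Signup", "trivial"),
    Issue("Crash on empty cart", "Checkout", "major"),
    Issue("Price label misaligned", "Checkout", "minor"),
]
index = build_index(issues)
# The Checkout section now lists the crash before the misaligned label.
```

From an index like this, each issue can then be rendered on its own page when generating the PDF.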

Since the severity of defects in each section is clearly listed in the report, the PM can easily scan the document and discuss with the client about fixing the major issues before proceeding to new features. This meeting seems like a small step, but it is crucial: it prevents the development schedule from being derailed by unknown major issues after all the planning is done. Of course, the client also receives the full report and can take it to any app vendor they choose.

How issues are reported

All bug reports by testers include screenshots and detailed instructions.
Oursky manages a team of regular testers who are pre-screened and audited for performance. Testers are paid a base payment for each test run, plus a bonus for each new issue accepted by the QA team; duplicates are not accepted, so rather than re-reporting known issues, testers can retest already-reported issues to increase the accuracy of reports.

Issues have a standardized structure and must be clear to help the development team reproduce the issue and fix the problem. The items included in the report are:

Descriptive Issue Name
– Device: iPhone X (iOS 11.2) / Chrome on MacBook (macOS 10.12.5)
– Build: (if given in email)
– Description: Clear description of the issue and the steps to reproduce the issue for others.

Screenshots (Essential!)
The final deliverable is a report broken down into modules, with issues ordered by severity. We also emphasize that reports are meant to be used by development teams (Oursky or third parties) to reproduce each issue and ultimately solve the problem. For that reason, all of the information above is essential for each issue.
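
As a rough sketch of the structure above, here is how an issue entry might be rendered programmatically. The function and example values are hypothetical, chosen only to mirror the template fields; this is not Oursky’s actual reporting tool.

```python
def format_issue(name, device, description, build=None):
    """Render one issue entry in the report's standard structure.

    Fields mirror the template above: name, device, optional build,
    and a description with reproduction steps.
    """
    lines = [name, f"- Device: {device}"]
    if build:  # the build number is included only if given in the test-run email
        lines.append(f"- Build: {build}")
    lines.append(f"- Description: {description}")
    return "\n".join(lines)

entry = format_issue(
    "App crashes when tapping Pay with an empty cart",
    "iPhone X (iOS 11.2)",
    "1. Open the app with a fresh account. 2. Go to Checkout without "
    "adding items. 3. Tap Pay. The app crashes.",
    build="1.4.2 (87)",
)
```

Screenshots would be attached alongside each rendered entry, since text alone is rarely enough to reproduce a visual defect.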

We hope this helps you and your team understand a bit more about one component of software QA, and why taking the time to do code diagnostics gives you a meaningful snapshot for planning further product improvements!

If you have a third-party app that you would like to improve, please get in touch!