QA:Release validation test plan

From FedoraProject

Introduction

Before an official Fedora release comes out, Alpha and Beta pre-releases are made. Alpha, Beta and Final (GA) are the milestones for a Fedora release. At each milestone, several "test compose" (TC) and "release candidate" (RC) builds are composed and tested to ensure that the build ultimately released as the Alpha, Beta or Final release meets certain requirements.

Prior to the first Alpha TC, preliminary validation testing is also conducted against nightly composes from the Rawhide and then Branched trees.

This document describes how this testing is carried out. You can contribute by downloading the nightly composes and candidate builds and helping to test them.

Testing involves executing test cases to verify installation and basic functionality on different hardware platforms for the various Fedora products. Everyone is encouraged to test and to share their ideas, tests, and results.

Contact

For further information, help with getting involved, or to send comments about installation testing, please contact the QA group.

Goal

The goal of release validation testing is to ensure the release candidate compose which is ultimately released at each milestone (following the Go No Go Meeting) meets the Fedora Release Criteria, which define the minimum requirements for Fedora releases.

Responsibilities

The QA team has overall responsibility for maintaining this process and for setting up each test event (see #Organizing_validation_test_events below).

The QA team and the Product working groups - Server, Workstation and Cloud - share responsibility for conducting testing. Working groups are particularly expected to contribute to the execution of tests that are significant to their products.

Scope and Approach

Testing will include:

Manually executed test cases in bare metal, virtual and cloud environments on the primary Architectures, using the various Fedora release deliverables (installer images, live images, disk images, and the package trees used for network installation and upgrades)

In the future, some automatically executed test cases run via the Taskotron system are expected to be included in release validation testing, but these tests are not yet ready. For more information about automated testing, please see the Taskotron sub-pages, especially the install automation plan.

The release validation tests, taken together, should provide coverage for the full set of Fedora_Release_Criteria, which define the actual requirements that Fedora releases must meet.

Validation test events are expected to result in the identification of behaviour that does not meet the relevant release criteria. Each individual issue of this kind is considered a "release blocker bug". As they are identified, these issues should be reported and marked as proposed release blocker bugs according to the QA:SOP_blocker_bug_process. A single iteration of the process is expected to end when a release candidate build is fully tested and no release blocker bugs are discovered. That build is then expected to be approved for release.

Other bugs discovered during testing should be reported as usual, and may be proposed as "freeze exception bugs" according to the QA:SOP_freeze_exception_bug_process, where more information on the nature and purpose of the "freeze exception" concept can be found.

Timing of validation test events

Pre-Alpha nightly validation

In the first part of the release cycle, before the branch point and the first Alpha test compose, preliminary validation testing is conducted against the nightly composes automatically produced each night by ReleaseEngineering. A bot running relval will 'nominate' a particular nightly compose for testing every few days, creating the Wikitcms test result pages and sending an announcement email to the test-announce mailing list.

Organizing validation test events

The procedure for running a validation testing event is documented as the release validation testing procedure. It includes instructions for updating the wiki with the new result pages and other changes, and announcing the event on the mailing list.

Test organization, execution and result tracking

Manual test results are managed using the Wikitcms 'system' of wiki pages with specific names, content, and categorization. Each validation testing event (whether a nightly or TC/RC compose) will have a set of result pages.

The basic workflow of validation testing is: download one or more images from a given nightly compose or candidate build, open one or more of the result pages for that compose, and run some of the test cases. Give priority to earlier release levels (Alpha tests before Beta tests before Final tests) and to tests that have not already been run by anyone else. Report any bugs you encounter, then enter the results of your tests into the result page, either with the relval tool or by editing the page directly (help on doing this is included in the 'source code' of the page; don't worry if you make a mistake, as it can easily be reverted or fixed).
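If you edit a result page directly, results are entered with the Wikitcms result template in the wiki source. The following is a hedged sketch rather than a copy of any real page: the test case name and the bug number are placeholders, and the exact table layout varies between test types, so check the comments in the page source for the authoritative syntax:

```wikitext
<!-- One table row per test case; one {{result}} per environment column. -->
| [[QA:Testcase_Some_Test|Some test]]   <!-- placeholder test case name -->
| {{result|pass|yourusername}}          <!-- test passed -->
| {{result|fail|yourusername|1234567}}  <!-- failed; 1234567 is a placeholder Bugzilla number -->
| {{result|warn|yourusername}} <ref>Short note explaining the warning</ref>
```

The relval tool generates this kind of markup on your behalf, which is why it is usually the more convenient option.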

Enhancements to the testing process

Results Summary page

For each compose, there is a results Summary page: see the current Summary page for an example. This page uses the MediaWiki partial transclusion feature to display the results from all of the individual result pages together on one page. The volume of results can be overwhelming, but it is a handy way to see the results for every test type together. In most cases you can also enter results via the Summary page; MediaWiki will apply the edit to the correct underlying result page.

Test coverage

Some useful information is available on test coverage; this is the page for the release currently under development. The pages provide a quick overview of the coverage for each validation test across all composes (nightly and TC/RC). This can be useful in various ways, but its main use for a tester is to see which tests have not been run recently or at all; please give such tests priority over tests which have already been run many times, to improve overall coverage. This information is produced by relval's testcase-stats module.

Reporting results with relval

The relval tool, which generates the test coverage data and helps create the result pages, can also report results by editing the result pages on your behalf. You may find this more convenient than editing the page source directly. To report results for the current nightly, TC or RC compose, install relval according to the instructions on its page, and then run relval report-results.

Test priority

Tests are associated with a milestone (Alpha, Beta or Final) or listed as Optional. All Alpha tests must be completed without encountering release blocker bugs before the Alpha release, all Beta tests before the Beta release, and all Final tests before the Final release. Optional tests never have to be completed with any particular result, or at all, but are listed because it is useful to run them (and file any bugs discovered) if time is available. Ideally, all tests would be run for all builds; this is rarely possible, but it is good to run more than the minimum when you can. The mandatory tests are grouped into test types, with one result page per test type for each test event.

Deliverables

A full set of results pages for each candidate build and nominated nightly compose

Full test coverage for the tests associated with each milestone, ideally for the final release candidate build, but at least combined across all release candidate builds

Detailed bug reports for all issues encountered during testing, nominated as release blocker or freeze exception bugs where appropriate

Contingencies

Test results can be carried over from one test event to a later one if it is reasonably certain that the changes between the two candidate builds do not affect the code paths exercised by the test case. If any change may affect the test case, it should be re-run. Detailed instructions for carrying test results forward are provided as comments in the source of the test result pages.