If you’re a developer, one of the most valuable things you can do is look at the GitHub
issues list and help fix bugs. We almost always prioritize bug fixing over feature
development.

Even for non-developers, helping to test pull requests for bug fixes and features is still
immensely valuable. Ansible users who understand how to write playbooks and roles should be
able to add integration tests, so GitHub pull requests with integration tests that show
bugs in action are also a great way to help.

When Pull Requests (PRs) are created, they are tested using Shippable, a Continuous Integration (CI) tool. Results are shown at the end of every PR.

When Shippable detects an error that can be linked back to a file modified in the PR, the relevant lines will be added as a GitHub comment. For example:

The test `ansible-test sanity --test pep8` failed with the following errors:

lib/ansible/modules/network/foo/bar.py:509:17: E265 block comment should start with '# '

The test `ansible-test sanity --test validate-modules` failed with the following errors:

lib/ansible/modules/network/foo/bar.py:0:0: E307 version_added should be 2.4. Currently 2.3
lib/ansible/modules/network/foo/bar.py:0:0: E316 ANSIBLE_METADATA.metadata_version: required key not provided @ data['metadata_version']. Got None

From the above example we can see that `--test pep8` and `--test validate-modules` have identified issues. The commands given allow you to run the same tests locally, to ensure you've fixed the issues without having to push your changes to GitHub and wait for Shippable, for example:

ansible-test sanity --test pep8
ansible-test sanity --test validate-modules

If you haven't already got Ansible available, use the local checkout by running:

Ideally, code contributions should include tests that prove the code works. That's not always possible, and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.

Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.

Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
other flavors, since some features (apt vs. yum, for example) are specific to those OS versions.

git checkout -b testing_PRXXXX devel
git pull https://github.com/someuser/ansible.git some_feature_branch

The first command creates and switches to a new branch named testing_PRXXXX, where XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the devel branch. The second command pulls the new code from the user's feature branch into the newly created branch.
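The branching-and-pulling workflow described above can be sketched end-to-end using throwaway local repositories. The PR number (1234), the contributor name (someuser), and the branch name `some_feature_branch` are hypothetical stand-ins; in real use you would run only the final checkout and pull inside your existing ansible clone, pulling over HTTPS from the contributor's GitHub fork:

```shell
set -e
# Self-contained sketch using throwaway repos; all names are hypothetical.
work=$(mktemp -d)

# Stand-in for the upstream repo's devel branch
git init -q "$work/upstream"
git -C "$work/upstream" symbolic-ref HEAD refs/heads/devel
git -C "$work/upstream" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "devel tip"

# Stand-in for someuser's fork carrying a feature branch
git clone -q "$work/upstream" "$work/fork"
git -C "$work/fork" checkout -q -b some_feature_branch
git -C "$work/fork" -c user.email=someuser@example.com -c user.name=someuser \
    commit -q --allow-empty -m "fix the bug"

# Your checkout: create the review branch from devel, then pull in the PR
git clone -q "$work/upstream" "$work/ansible"
cd "$work/ansible"
git checkout -q -b testing_PR1234 devel
git pull -q "$work/fork" some_feature_branch
```

After the pull, the review branch contains both the devel tip and the contributor's commit, which is exactly the state you want to test.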

Note

If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding unless you are somewhat familiar with git and coding, as you will have to resolve a merge conflict. Resolving merge conflicts is the responsibility of the original pull request contributor.

Note

Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of devel. If the source looks like someuser:devel, make sure there is only one commit listed on the pull request.
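The single-commit check from the note above can be sketched with a throwaway repository (the repository layout and the commit are hypothetical); `git rev-list --count` reports how many commits the proposed branch adds on top of devel:

```shell
set -e
# Throwaway demo: a checkout whose local devel has one commit that
# upstream's devel (origin/devel) does not -- mimicking someuser:devel.
work=$(mktemp -d)
git init -q "$work/upstream"
git -C "$work/upstream" symbolic-ref HEAD refs/heads/devel
git -C "$work/upstream" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "devel tip"
git clone -q "$work/upstream" "$work/checkout"
git -C "$work/checkout" -c user.email=someuser@example.com -c user.name=someuser \
    commit -q --allow-empty -m "single fix"

# How many commits would this "PR" bring in? Expect exactly one.
git -C "$work/checkout" rev-list --count origin/devel..HEAD   # prints 1
```

A count greater than one would suggest the contributor's devel carries unrelated commits.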

The Ansible source includes a script, frequently used by Ansible developers, that allows you to
use Ansible directly from source without requiring a full installation.

Simply source it (to use the Linux/Unix terminology) to begin using it immediately:

source ./hacking/env-setup

This script modifies the PYTHONPATH environment variable (along with a few other things); those
changes last only as long as your shell session is open.
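As an illustrative approximation (not the actual script, which handles more cases), the core effect of sourcing env-setup looks like this, assuming you are at the root of the source checkout:

```shell
# Illustrative approximation of hacking/env-setup (assumption: run from the
# root of an Ansible source checkout; the real script does more than this).
ANSIBLE_HOME="$PWD"
# Make the in-tree Python packages importable for this shell session only
export PYTHONPATH="$ANSIBLE_HOME/lib:$PYTHONPATH"
# Put the in-tree command-line tools first on PATH
export PATH="$ANSIBLE_HOME/bin:$PATH"
```

Because the variables are exported rather than written to a profile, opening a new shell returns you to your normal environment.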

The online code coverage reports are a good way
to identify areas where testing in Ansible can be improved. By following the red highlights you can
drill down through the reports to find files which have no tests at all. Adding
integration and unit tests that clearly show how code should work, verify important
Ansible functions, and increase coverage in areas where there is none is a valuable
way to help improve Ansible.

The code coverage reports only cover the devel branch of Ansible, where new feature
development takes place. Pull requests and new code will be missing from the codecov.io
coverage reports, so local reporting is needed. Most ansible-test commands allow you
to collect code coverage; this is particularly useful for indicating where to extend
testing. See Testing Ansible for more information.