As of Astropy 3.0, the dependencies used by the Astropy test runner are
provided by a separate package called pytest-astropy. This package provides
the pytest dependency itself, in addition to several pytest plugins
that are used by Astropy, and will also be of general use to other packages.

Since the testing dependencies are not actually required to install or use
Astropy, they are not included in install_requires in setup.py.
However, for technical reasons it is not currently possible to express these
dependencies in tests_require either. Therefore, pytest-astropy is
listed as an extra dependency using extras_require in setup.py.
Developers who want to run the test suite will need to install the testing
package using pip:

pip install pytest-astropy

A detailed description of the plugins can be found in the Pytest Plugins
section.

There are currently three different ways to invoke Astropy tests. Each
method invokes pytest to run the tests but offers different options when
calling. To run the tests, you will need to make sure you have the pytest
package (version 3.1 or later) installed.

In addition to running the Astropy tests, these methods can also be called
so that they check Python source code for PEP8 compliance. All of the PEP8 testing
options require the pytest-pep8 plugin, which must be installed
separately.

The astropy core package and the Astropy package template provide a test
setup command, invoked by running python setup.py test while in the
package root directory. Run python setup.py test --help to see the
options to the test command.

Since python setup.py test wraps the widely-used pytest framework, you may
from time to time want to pass options to the pytest command itself. For
example, the -x option to stop after the first failure can be passed
through with the --args argument:
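
A sketch of such an invocation (exact shell quoting may vary by platform):

python setup.py test --args "-x"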

Turn on PEP8 checking by passing --pep8 to the test command. This will
turn off regular testing and enable PEP8 testing.

Note also that this test runner actually installs astropy into a temporary
directory and uses that for running the tests. This means that tests of things
like entry points or data file paths should act just like they would once
astropy is installed. The other two approaches described below do not do
this, and hence may give different results when run from the astropy source
code. Hence if you’re running the tests because you’ve modified code that might
be impacted by this, the setup.py test approach is the recommended method.

package : str, optional

The name of a specific package to test, e.g. ‘io.fits’ or
‘utils’. Accepts comma separated string to specify multiple
packages. If nothing is specified all default tests are run.

args : str, optional

Additional arguments to be passed to pytest.main in the args
keyword argument.

docs_path : str, optional

The path to the documentation .rst files.

open_files : bool, optional

Fail when any tests leave files open. Off by default, because
this adds extra run time to the test suite. Requires the
psutil package.

parallel : int or ‘auto’, optional

When provided, run the tests in parallel on the specified
number of CPUs. If parallel is 'auto', it will use all
the cores on the machine. Requires the pytest-xdist plugin.

pastebin : (‘failed’, ‘all’, None), optional

Convenience option for turning on py.test pastebin output. Set to
‘failed’ to upload info for failed tests, or ‘all’ to upload info
for all tests.

pdb : bool, optional

Turn on PDB post-mortem analysis for failing tests. Same as
specifying --pdb in args.

pep8 : bool, optional

Turn on PEP8 checking via the pytest-pep8 plugin and disable normal
tests. Same as specifying --pep8 -k pep8 in args.

plugins : list, optional

Plugins to be passed to pytest.main in the plugins keyword
argument.

remote_data : {‘none’, ‘astropy’, ‘any’}, optional

Controls whether to run tests marked with @pytest.mark.remote_data. This can be
set to run no tests with remote data (none), only ones that use
data from http://data.astropy.org (astropy), or all tests that
use remote data (any). The default is none.

The test suite can be run directly from the native pytest command. In this
case, it is important for developers to be aware that they must manually
rebuild any extensions by running python setup.py build_ext before testing.

In contrast to the case of running from setup.py, the --doctest-plus
and --doctest-rst options are not enabled by default when running the
pytest command directly. These flags should be explicitly given if they are
needed.

It is possible to run only the tests for a particular subpackage or set of
subpackages. For example, to run only the wcs tests from the
commandline:

python setup.py test -P wcs

Or, to run only the wcs and utils tests:

python setup.py test -P wcs,utils

Or from Python:

>>> import astropy
>>> astropy.test(package="wcs,utils")

You can also specify a single file to test from the commandline:

python setup.py test -t astropy/wcs/tests/test_wcs.py

When the -t option is given a relative path, it is relative to the
installed root of astropy. When -t is given a relative path to a
documentation .rst file to test, it is relative to the root of the
documentation, i.e. the docs directory in the source tree. For
example:
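
For instance (the documentation path below is illustrative):

python setup.py test -t docs/units/index.rst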

Astropy can use coverage.py to
generate test coverage reports. To generate a test coverage report, use:

python setup.py test --coverage

There is a coveragerc file that
defines files to omit as well as lines to exclude. It is installed
along with astropy so that the astropy testing framework can use
it. In the source tree, it is at astropy/tests/coveragerc.

Any time a bug is fixed, and wherever possible, one or more regression tests
should be added to ensure that the bug is not reintroduced in the future. Regression
tests should include the ticket URL where the bug was reported.

Tests that need to make use of a data file should use the
get_pkg_data_fileobj or
get_pkg_data_filename functions. These functions
search locally first, and then on the astropy data server or an arbitrary
URL, and return a file-like object or a local filename, respectively. They
automatically cache the data locally if remote data is obtained, and from
then on the local copy will be used transparently. See the next section for
notes specific to dealing with the cache in tests.

They also support the use of an MD5 hash to get a specific version of a data
file. This hash can be obtained prior to submitting a file to the astropy
data server by using the compute_hash function on a
local copy of the file.
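
A sketch of computing such a hash (the filename is illustrative; compute_hash
lives in astropy.utils.data):

>>> from astropy.utils.data import compute_hash
>>> compute_hash('local_file.fits')  # doctest: +SKIP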

Tests that may retrieve remote data should be marked with the
@pytest.mark.remote_data decorator, or, if a doctest, flagged with the
REMOTE_DATA flag. Tests marked in this way will be skipped by default by
astropy.test() to prevent test runs from taking too long. These tests can
be run by astropy.test() by adding the remote_data='any' flag. Turn on
the remote data tests at the command line with python setup.py test --remote-data=any.

It is possible to mark tests using
@pytest.mark.remote_data(source='astropy'), which can be used to indicate
that the only required data is from the http://data.astropy.org server. To
enable just these tests, you can run the
tests with python setup.py test --remote-data=astropy.

from ...config import get_data_filename

def test_1():
    """Test version using a local file."""
    # if filename.fits is a local file in the source distribution
    datafile = get_data_filename('filename.fits')
    # do the test

@pytest.mark.remote_data
def test_2():
    """Test version using a remote file."""
    # this is the hash for a particular version of a file stored on the
    # astropy data server.
    datafile = get_data_filename('hash/94935ac31d585f68041c08f87d1a19d4')
    # do the test

def doctest_example():
    """
    >>> datafile = get_data_filename('hash/94935')  # doctest: +REMOTE_DATA
    """
    pass

The get_remote_test_data function will place the files in a temporary directory
indicated by the tempfile module, so that the test files will eventually
get removed by the system. In the long term, once test data files become too
large, we will need to design a mechanism for removing test data immediately.

By default, the Astropy test runner sets up a clean file cache in a temporary
directory that is used only for that test run and then destroyed. This is to
ensure consistency between test runs, as well as to not clutter users’ caches
(i.e. the cache directory returned by get_cache_dir) with
test files.

However, some test authors (especially for affiliated packages) may find it
desirable to cache files downloaded during a test run in a more permanent
location (e.g. for large data sets). To this end the
set_temp_cache helper may be used. It can be used either as
a context manager within a test to temporarily set the cache to a custom
location, or as a decorator that takes effect for an entire test function
(not including setup or teardown, which would have to be decorated separately).
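
A minimal sketch of both usages, assuming set_temp_cache is importable from
astropy.config and using a hypothetical cache path:

from astropy.config import set_temp_cache

def test_download_with_context_manager():
    # downloads performed inside the block use the custom cache location
    with set_temp_cache('/path/to/custom/cache'):
        pass  # perform downloads and tests here

@set_temp_cache('/path/to/custom/cache')
def test_download_with_decorator():
    pass  # the custom cache applies for the entire test function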

Furthermore, it is possible to set an option astropy_cache_dir in the
pytest config file which sets the cache location for the entire test run. A
--astropy-cache-dir command-line option is also supported (which overrides
all other settings). Currently it is not directly supported by the
./setup.py test command, so it is necessary to use it with the -a
argument like:
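
python setup.py test -a "--astropy-cache-dir=/path/to/custom/cache"

(the cache path above is illustrative)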

Tests may often be run from directories where users do not have write
permissions so tests which create files should always do so in
temporary directories. This can be done with the pytest tmpdir
function argument or with
Python’s built-in tempfile module.

If the setup_module and teardown_module functions are specified in a
file, they are called before and after all the tests in the file respectively.
These functions take one argument, which is the module itself, which makes it
very easy to set module-wide variables:

def setup_module(module):
    """Initialize the value of NUM."""
    module.NUM = 11

def add_num(x):
    """Add pre-defined NUM to the argument."""
    return x + NUM

def test_42():
    """Ensure that add_num() adds the correct NUM to its argument."""
    added = add_num(42)
    assert added == 53

We can use this for example to download a remote test data file and have all
the functions in the file access it:

import os

def setup_module(module):
    """Store a copy of the remote test file."""
    module.DATAFILE = get_remote_test_data('94935ac31d585f68041c08f87d1a19d4')

def test():
    """Perform test using cached remote input file."""
    f = open(DATAFILE, 'rb')
    # do the test

def teardown_module(module):
    """Clean up remote test file copy."""
    os.remove(DATAFILE)

Tests can be organized into classes that have their own setup/teardown
functions. In the following example:

def add_nums(x, y):
    """Add two numbers."""
    return x + y

class TestAdd42(object):
    """Test for add_nums with y=42."""

    def setup_class(self):
        self.NUM = 42

    def test_1(self):
        """Test behaviour for a specific input value."""
        added = add_nums(11, self.NUM)
        assert added == 53

    def test_2(self):
        """Test behaviour for another input value."""
        added = add_nums(13, self.NUM)
        assert added == 55

    def teardown_class(self):
        pass

In the above example, the setup_class method is called first, then all the
tests in the class, and finally the teardown_class is called.

There are cases where one might want setup and teardown methods to be run
before and after each test. For this, use the setup_method and
teardown_method methods:

def add_nums(x, y):
    """Add two numbers."""
    return x + y

class TestAdd42(object):
    """Test for add_nums with y=42."""

    def setup_method(self, method):
        self.NUM = 42

    def test_1(self):
        """Test behaviour for a specific input value."""
        added = add_nums(11, self.NUM)
        assert added == 53

    def test_2(self):
        """Test behaviour for another input value."""
        added = add_nums(13, self.NUM)
        assert added == 55

    def teardown_method(self, method):
        pass

Finally, one can use setup_function and teardown_function to define a
setup/teardown mechanism to be run before and after each function in a module.
These take one argument, which is the function being tested:
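
A minimal sketch (the test body is illustrative):

def setup_function(function):
    pass  # runs before each test function in the module

def test_example():
    assert 1 + 1 == 2

def teardown_function(function):
    pass  # runs after each test function in the module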

For tests that test functions or methods that require optional
dependencies (e.g. Scipy), pytest should be instructed to skip the
test if the dependencies are not present. The following example shows
how this should be done:
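
A sketch of the usual pattern (the test body is illustrative):

import pytest

try:
    import scipy  # noqa
    HAS_SCIPY = True
except ImportError:
    HAS_SCIPY = False

@pytest.mark.skipif('not HAS_SCIPY')
def test_that_uses_scipy():
    from scipy import integrate
    # ... perform the test using scipy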

In order to test that warnings are triggered as expected in certain
situations, you can use the astropy.tests.helper.catch_warnings
context manager. Unlike the warnings.catch_warnings context manager
in the standard library, this one will reset all warning state
beforehand, so one is assured to get the warnings reported, regardless of
what errors may have been emitted by other tests previously. Here is
a real-world example:
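
In place of the original real-world example, here is a simplified sketch
(the warning message and assertions are illustrative):

import warnings
from astropy.tests.helper import catch_warnings

def test_deprecation_warning_is_emitted():
    with catch_warnings(DeprecationWarning) as warning_list:
        warnings.warn("this feature is deprecated", DeprecationWarning)
    assert len(warning_list) == 1
    assert str(warning_list[0].message) == "this feature is deprecated"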

In order to ensure reproducibility of tests, all configuration items
are reset to their default values when the test runner starts up.

Sometimes you’ll want to test the behavior of code when a certain
configuration item is set to a particular value. In that case, you
can use the astropy.config.ConfigItem.set_temp context manager to
temporarily set a configuration item to that value, test within that
context, and have it automatically return to its original value.
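
A sketch, assuming a configuration item named max_lines on astropy's
top-level conf namespace (the item name and value are illustrative):

from astropy import conf

def test_behavior_with_custom_config():
    with conf.set_temp('max_lines', 6):
        pass  # code under test sees max_lines == 6
    # the original value is restored on exit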

We make use of the pytest-mpl
plugin to write tests where we can compare the output of plotting commands
with reference files on a pixel-by-pixel basis (this is used for instance in
astropy.visualization.wcsaxes).

To run the Astropy tests with the image comparison, use:

python setup.py test -a "--mpl" --remote-data

However, note that the output can be very sensitive to the version of Matplotlib
as well as all its dependencies (e.g. freetype), so we recommend running the
image tests inside a Docker container which has a
frozen set of package versions (Docker containers can be thought of as mini
virtual machines). We have made a set of Docker container images that can be used for this. Once you have
installed Docker, to run the Astropy tests with the image comparison inside a
Docker container, make sure you are inside the Astropy repository (or the
repository of the package you are testing) then do:
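
A sketch of the kind of command used (the container image name, tag, and
mount point below are illustrative; use one of the astropy-provided
image-test containers):

docker run -it -v ${PWD}:/repo astropy/image-tests-py36-mpl300 /bin/bash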

Because the reference image files would contribute significantly
to the repository size, we instead store them on the http://data.astropy.org
site. The downside is that it is a little more complicated to create or
re-generate reference files, but we describe the process here.
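
Inside the container, the reference images can be regenerated with a command
along these lines (the subpackage selection is illustrative; the
--mpl-generate-path option is provided by pytest-mpl):

python setup.py test -P visualization --remote-data -a "--mpl-generate-path=reference_tmp"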

This will create a reference_tmp folder and put the generated reference
images inside it - the folder will be available in the repository outside of
the Docker container. Type exit to exit the container.

Make sure you generate images for the different supported Matplotlib versions
using the available containers.

Next, we need to add these images to the http://data.astropy.org server. To do
this, open a pull request to this
repository. The reference images for Astropy tests should go inside the
testing/astropy
directory. In that directory are folders named as timestamps. If you are simply
adding new tests, add the reference files to the most recent directory.

If you are re-generating baseline images due to changes in Astropy, make a new
timestamp directory by copying the most recent one, then replace any
baseline images that have changed. Note that due to changes between Matplotlib
versions, we need to add the whole set of reference images for each major
Matplotlib version. Therefore, in each timestamp folder, there are folders named
e.g. 1.4.x and 1.5.x.

Once the reference images are merged in and available on
http://data.astropy.org, update the timestamp in the IMAGE_REFERENCE_DIR
variable in the astropy.tests.image_tests sub-module. Because the timestamp
is hard-coded, adding a new timestamp directory will not mess with testing for
released versions of Astropy, so you can easily add and tweak a new timestamp
directory while still working on a pull request to Astropy.

A doctest in Python is a special kind of test that is embedded in a
function, class, or module’s docstring, or in the narrative Sphinx
documentation, and is formatted to look like a Python interactive
session; that is, it shows lines of Python code entered at a >>>
prompt followed by the output that would be expected (if any) when
running that code in an interactive session.

The idea is to write usage examples in docstrings that users can enter
verbatim and check their output against the expected output to confirm that
they are using the interface properly.

Furthermore, Python includes a doctest module that can detect these
doctests and execute them as part of a project’s automated test suite. This
way we can automatically ensure that all doctest-like examples in our
docstrings are correct.

The Astropy test suite automatically detects and runs any doctests in the
astropy source code or documentation, or in packages using the Astropy test
running framework. For example doctests and detailed documentation on how to
write them, see the full doctest documentation.

Note

Since the narrative Sphinx documentation is not installed alongside
the astropy source code, it can only be tested by running
python setup.py test, not by import astropy; astropy.test().

For more information on the pytest-doctestplus plugin used by Astropy, see
pytest-doctestplus.

Sometimes it is necessary to write examples that look like doctests but that
are not actually executable verbatim. An example may depend on some external
conditions being fulfilled, for example. In these cases there are a few ways to
skip a doctest:

Next to the example add a comment like: # doctest: +SKIP. For example:

>>> import os
>>> os.listdir('.') # doctest: +SKIP

In the above example we want to direct the user to run os.listdir('.')
but we don’t want that line to be executed as part of the doctest.

To skip tests that require fetching remote data, use the REMOTE_DATA
flag instead. This way they can be turned on using the
--remote-data flag when running the tests:
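
For instance, reusing the hash-based example from earlier in this section:

>>> datafile = get_data_filename('hash/94935ac31d585f68041c08f87d1a19d4')  # doctest: +REMOTE_DATA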

Astropy’s test framework adds support for a special __doctest_skip__
variable that can be placed at the module level of any module to list
functions, classes, and methods in that module whose doctests should not
be run. That is, if it doesn’t make sense to run a function’s example
usage as a doctest, the entire function can be skipped in the doctest
collection phase.

The value of __doctest_skip__ should be a list of wildcard patterns
for all functions/classes whose doctests should be skipped. For example:

__doctest_skip__ = ['myfunction', 'MyClass', 'MyClass.*']

skips the doctests in a function called myfunction, the doctest for a
class called MyClass, and all methods of MyClass.

Module docstrings may contain doctests as well. To skip the module-level
doctests include the string '.' in __doctest_skip__.

To skip all doctests in a module:

__doctest_skip__ = ['*']

In the Sphinx documentation, a doctest section can be skipped by
making it part of a doctest-skip directive:

.. doctest-skip::

    >>> # This is a doctest that will appear in the documentation,
    >>> # but will not be executed by the testing framework.
    >>> 1 / 0  # Divide by zero, ouch!

It is also possible to skip all doctests below a certain line using
a doctest-skip-all comment. Note the lack of :: at the end
of the line here:

.. doctest-skip-all

   All doctests below here are skipped...

__doctest_requires__ is a way to list dependencies for specific
doctests. It should be a dictionary mapping wildcard patterns (in the same
format as __doctest_skip__) to a list of one or more modules that should
be importable in order for the tests to run. For example, if some tests
require the scipy module to work they will be skipped unless import scipy is possible. It is also possible to use a tuple of wildcard
patterns as a key in this dict:

__doctest_requires__ = {('func1', 'func2'): ['scipy']}

Having this module-level variable will require scipy to be importable
in order to run the doctests for functions func1 and func2 in that
module.

In the Sphinx documentation, a doctest requirement can be notated with the
doctest-requires directive:
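
A sketch of the directive (here requiring scipy, as above):

.. doctest-requires:: scipy

    >>> import scipy
    >>> # doctests in this block run only when scipy is importable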

One of the important aspects of writing doctests is that the example output
can be accurately compared to the actual output produced when running the
test.

The doctest system compares the actual output to the example output verbatim
by default, but this is not always feasible. For example, the example output may
contain the __repr__ of an object which displays its id (which will change
on each run), or a test that expects an exception may output a traceback.

The simplest way to generalize the example output is to use the ellipsis
(...). For example:
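
A sketch (the exact exception message depends on the Python version):

>>> 1 / 0
Traceback (most recent call last):
...
ZeroDivisionError: division by zero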

This doctest expects an exception with a traceback, but the text of the
traceback is skipped in the example output–only the first and last lines
of the output are checked. See the doctest documentation for
more examples of skipping output.

Another possibility for ignoring output is to use the
# doctest: +IGNORE_OUTPUT flag. This allows a doctest to execute (and
check that the code executes without errors), but allows the entire output
to be ignored in cases where we don’t care what the output is. This differs
from using ellipses in that we can still provide complete example output, just
without the test checking that it is exactly right. For example:

>>> print('Hello world')  # doctest: +IGNORE_OUTPUT
We don't really care what the output is as long as there were no errors...

Some doctests may produce output that contains string representations of
floating point values. Floating point representations are often not exact and
contain roundoffs in their least significant digits. Depending on the platform
the tests are being run on (different Python versions, different OS, etc.) the
exact number of digits shown can differ. Because doctests work by comparing
strings this can cause such tests to fail.

To address this issue, the pytest-doctestplus plugin provides support for a
FLOAT_CMP flag that can be used with doctests. For example:

>>> 1.0 / 3.0 # doctest: +FLOAT_CMP
0.333333333333333311

When this flag is used, the expected and actual outputs are both parsed to find
any floating point values in the strings. Those are then converted to actual
Python float objects and compared numerically. This means that small
differences in representation of roundoff digits will be ignored by the
doctest. The values are otherwise compared exactly, so more significant
(albeit possibly small) differences will still be caught by these tests.

Continuous integration (CI) services continuously test the package for each
commit and pull request that is pushed to GitHub, to notice when something breaks.

Astropy and many affiliated packages use an external package called
ci-helpers to provide
support for the generic parts of the CI systems. ci-helpers consists of
a set of scripts that are used by the .travis.yml and appveyor.yml
files to set up the conda environment, and install dependencies.

Dependencies can be customized for different packages using the appropriate
environment variables in .travis.yml and appveyor.yml. For more
details on how to set up this machinery, see the package-template and ci-helpers.

The 32-bit tests on CircleCI use a pre-defined Docker image defined here which includes a 32-bit
Python environment. If you want to run tests for packages in the same way,
you can use the same set-up on CircleCI as the core package, but just be
sure to install Astropy first using:

easy_install pip
pip install astropy

For convenience, you can also use the astropy/affiliated-32bit-test-env
Docker image instead of astropy/astropy-32bit-test-env - the former includes
the latest stable version of Astropy pre-installed.

In some cases, you may see failures on continuous integration services that
you do not see locally, for example because the operating system is different,
or because the failure happens with only 32-bit Python. The following sections
explain how you can reproduce specific builds locally.

If you want to run your tests in the same 32-bit Python environment that
CircleCI uses, start off by installing Docker if you
don’t already have it installed. Docker can be installed on a variety of
different operating systems.

Then, make sure you have a version of the git repository (either the main
Astropy repository or your fork) for which you want to run the tests. Go to that
directory, then run Docker with:
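
A sketch of the invocation, using the 32-bit image mentioned above (the
mount point and any image tag are illustrative):

docker run -it -v ${PWD}:/astropy_src astropy/astropy-32bit-test-env /bin/bash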

The following pytest plugins are maintained and used by Astropy. They are
included in the pytest-astropy package, which is now required for testing
Astropy. More information on all of the plugins provided by the
pytest-astropy package (including dependencies not maintained by Astropy)
can be found here.

The pytest-remotedata plugin allows developers to control whether to run
tests that access data from the internet. The plugin provides two decorators
that can be used to mark individual test functions or entire test classes:

@pytest.mark.remote_data for tests that require data from the internet

@pytest.mark.internet_off for tests that should run only when there is no
internet access. This is useful for testing local data caches or fallbacks
for when no network access is available.
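
A minimal sketch of marking tests with these decorators (the test bodies are
illustrative):

import pytest

@pytest.mark.remote_data
def test_requires_internet():
    pass  # downloads data from the internet

@pytest.mark.internet_off
def test_local_fallback():
    pass  # must pass without any network access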

The plugin also adds the --remote-data option to the pytest command
(which is also made available through the Astropy test runner).

If the --remote-data option is not provided when running the test suite, or
if --remote-data=none is provided, all tests that are marked with
remote_data will be skipped. All tests that are marked with
internet_off will be executed. Any test that attempts to access the
internet but is not marked with remote_data will result in a failure.

Providing either the --remote-data option, or --remote-data=any, will
cause all tests marked with remote_data to be executed. Any tests that are
marked with internet_off will be skipped.

Running the tests with --remote-data=astropy will cause only tests that
receive remote data from Astropy data sources to be run. Tests with any other
data sources will be skipped. This is indicated in the test code by marking
test functions with @pytest.mark.remote_data(source='astropy'). Tests
marked with internet_off will also be skipped in this case.

This plugin provides two command line options: --doctest-plus for enabling
the advanced features mentioned above, and --doctest-rst for including
*.rst files in doctest collection.

The Astropy test runner enables both of these options by default. When running
the test suite directly from pytest (instead of through the Astropy test
runner), it is necessary to explicitly provide these options when they are
needed.
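
For example, when invoking pytest directly:

pytest --doctest-plus --doctest-rst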

The pytest-openfiles plugin allows for the detection of open I/O resources
at the end of unit tests. This plugin adds the --open-files option to the
pytest command (which is also exposed through the Astropy test runner).

When running tests with --open-files, if a file is opened during the course
of a unit test but is not closed before the test finishes, the test
will fail. This is particularly useful for testing code that manipulates file
handles or other I/O resources. It allows developers to ensure that this kind
of code properly cleans up I/O resources when they are no longer needed.
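
For example:

pytest --open-files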