Run this variant of the recipe. Why does it fail? Wasn't cart declared in the earlier docstring?

How it works...

The doctest module looks for every docstring. For each docstring it finds, it creates a shallow copy of the module's global variables, then runs the code and checks the results. Within that copy, every variable created is locally scoped and cleaned up when the test completes. This means that our second docstring, added later, cannot see the cart that was created in our first docstring. That is why the second run failed.
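
Here is a minimal sketch that demonstrates this isolation (the ShoppingCart implementation is a stand-in for the one from the earlier recipe). The second docstring deliberately expects a NameError, confirming that cart from the first docstring is invisible to it:

    class ShoppingCart:
        """
        >>> cart = ShoppingCart()
        >>> cart.add("tuna sandwich", 15.00)
        >>> len(cart.items)
        1
        """
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

    def total(cart):
        """
        Each docstring runs against its own shallow copy of the module
        globals, so 'cart' from the docstring above does not exist here:

        >>> total(cart)
        Traceback (most recent call last):
          ...
        NameError: name 'cart' is not defined
        """
        return sum(price for _, price in cart.items)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()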

doctest has no equivalent of the setUp method we used in some of the unittest recipes. So if doctest offers no setUp option, what is the value of this recipe? It highlights a key limitation of doctest that every developer must understand before using it.

There's more...

The doctest module provides an incredibly convenient way to add testability to our documentation. But it is not a substitute for a full-fledged testing framework, such as unittest. As noted earlier, there is no equivalent of setUp. There is also no syntax checking of the Python code embedded in the docstrings.

Mixing the right level of doctests with unittest (or whatever testing framework we pick) is a matter of judgment.

Filtering out test noise

Various options help doctest ignore noise, such as whitespace, in test cases. This can be useful because it allows us to structure the expected output more readably for our users.

We can also flag some tests to be skipped. This can be used where we want to document known issues but haven't yet patched the system.

Both of these situations can easily be construed as noise when we are trying to run comprehensive testing but are focused on other parts of the system. In this recipe, we will dig into easing the strict checking done by doctest. We will also look at how to ignore entire tests, whether on a temporary or permanent basis.

How to do it...

With the following steps, we will experiment with filtering out test results and easing certain restrictions of doctest.

Create a new file called recipe20.py to contain the code from this recipe.

Create a recursive function that converts base10 numbers into other bases.
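
A sketch of such a function might look like the following (the function name convert_to_basen is an assumption; the book's actual listing may differ):

    import math

    def convert_to_basen(value, base):
        """Convert a positive base-10 integer into another base.

        >>> [convert_to_basen(i, 2) for i in range(1, 5)]
        ['1', '10', '11', '100']
        """
        def _convert(remaining, exponent):
            if exponent < 0:
                return ""
            # Peel off the digit for this power of the base.
            digit, remaining = divmod(remaining, base ** exponent)
            return str(digit) + _convert(remaining, exponent - 1)

        # math.log fails for 0 -- an edge case we return to later.
        return _convert(value, int(math.log(value, base)))

    if __name__ == "__main__":
        import doctest
        doctest.testmod()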

How it works...

In this recipe, we revisit the function for converting base-10 numbers into any other base. The first test shows it being run over a range. Normally, Python would fit this array of results on one line. To make it more readable, we spread the output across two lines. We also put some arbitrary spaces between the values to make the columns line up better.

This is something that doctest definitely would not support, due to its strict pattern matching nature. By using #doctest: +NORMALIZE_WHITESPACE, we are able to ask doctest to ease this restriction. There are still constraints. For example, the first value in the expected array cannot have any whitespace in front of it. But wrapping the array to the next line no longer breaks the test.
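
For example, inside the convert_to_basen docstring from the earlier sketch, the directive is attached to the individual test line:

    >>> [convert_to_basen(i, 2) for i in range(1, 8)] #doctest: +NORMALIZE_WHITESPACE
    ['1',   '10',  '11',
     '100', '101', '110', '111']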

We also have a test case that is really meant as documentation only. It indicates a future requirement that shows how our function would handle negative binary values. By adding #doctest: +SKIP, we are able to command doctest to skip this particular instance.
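
A sketch of such a documentation-only test, again as it might appear in the docstring (negative-value support is the imagined future requirement):

    Negative values are a future requirement, documented now and
    skipped until implemented:

    >>> convert_to_basen(-10, 2) #doctest: +SKIP
    '-1010'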

Finally, we see the scenario where we discover that our code doesn't handle 0. As the algorithm gets the highest exponent by taking a logarithm, there is a math problem. We capture this edge case with a test. We then confirm that the code fails, in classic test-driven development (TDD) fashion. The final step would be to fix the code to handle this edge case. But we decide, in a somewhat contrived fashion, that we don't have enough time in the current sprint to fix the code. To avoid breaking our continuous integration (CI) server, we mark the test with a TO-DO statement and add #doctest: +SKIP.
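
In docstring form, that might look like the following sketch; the second example shows why the current code fails, since the highest exponent comes from a logarithm and log(0) is undefined:

    TO-DO: fix convert_to_basen so it handles 0 for any base.

    >>> convert_to_basen(0, 2) #doctest: +SKIP
    '0'

    >>> import math
    >>> math.log(0, 2)
    Traceback (most recent call last):
      ...
    ValueError: math domain error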

There's more...

Both of the situations we have marked up with #doctest: +SKIP are cases where we will eventually want to remove the SKIP tag and have them run. There may be other situations where we will never remove SKIP. Demonstrations of code whose output fluctuates may not be readily testable without making them unreadable. For example, functions that return dictionaries are harder to test because the order of the results can vary. We can bend the example until it passes, but we may lose its value as readable documentation.
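
For instance, where dictionary repr order isn't guaranteed (Python before 3.7), a hypothetical function like the one below cannot be doctested verbatim; one workaround is to document a sorted view instead, at some cost to readability:

    def tally_votes():
        """Count votes per candidate.

        Printing the dict directly would make a fragile doctest, so
        we document a sorted view of it instead:

        >>> sorted(tally_votes().items())
        [('alice', 3), ('bob', 2)]
        """
        return {"bob": 2, "alice": 3}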

How it works...

This version is limited to handling base 2 through base 36.

For base 36, it uses the letters a through z, compared to base 16, which only goes up to f. For example, 35 in base 10 is represented as z in base 36.

We include several tests, including one for base 2 and base 36. We also test the maximum value before rolling over, and the next value, to show the rollover. For base 2, this is 1 and 2. For base 36, this is 35 and 36.

We have also included tests for 0 to show that our function doesn't handle it for any base, and for base 37, which is outside the supported range and therefore invalid as well.
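
A sketch of such a version, with the tests just described folded into the docstring (the names and exact error messages are assumptions; the book's listing may differ):

    import math

    def convert_to_basen(value, base):
        """Convert a positive base-10 integer to a string in bases 2-36.

        Digits beyond 9 use the letters a through z, so base 36's
        highest digit is z:

        >>> convert_to_basen(35, 36)
        'z'
        >>> convert_to_basen(36, 36)
        '10'

        The same rollover happens at 1 and 2 in base 2:

        >>> convert_to_basen(1, 2)
        '1'
        >>> convert_to_basen(2, 2)
        '10'

        0 is unhandled for any base, and bases above 36 are rejected:

        >>> convert_to_basen(0, 2)
        Traceback (most recent call last):
          ...
        ValueError: math domain error
        >>> convert_to_basen(1, 37)
        Traceback (most recent call last):
          ...
        Exception: Only support bases 2-36
        """
        if base < 2 or base > 36:
            raise Exception("Only support bases 2-36")
        digits = "0123456789abcdefghijklmnopqrstuvwxyz"

        def _convert(remaining, exponent):
            if exponent < 0:
                return ""
            digit, remaining = divmod(remaining, base ** exponent)
            return digits[digit] + _convert(remaining, exponent - 1)

        return _convert(value, int(math.log(value, base)))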

There's more...

It's important that our software works for valid inputs. It's just as important that it behaves as expected for invalid inputs. We now have user-visible documentation that records these edge cases, and thanks to Python's doctest module, we can test it and make sure our software performs correctly.

Testing corner cases by iteration

Corner cases will appear as we continue to develop our code. By capturing corner cases in an iterable list, we need less code to capture each new test scenario. This increases our efficiency at testing new scenarios.

How to do it...

Create a new file called recipe23.py and use it to store all our code for this recipe.

In the previous output, the key information is on this line: AssertionError: expected: 11/2 actual: 10/2. Is this test failure a bit contrived? Sure it is. But seeing that a test case produces useful output is not. It's important to verify that our tests give us enough information to fix either the tests or the code.

How it works...

We created an array with each entry containing both the input data as well as the expected output. This provides us with an easy way to glance at a set of test cases.

Then, we iterate over each test case, calculate the actual value, and run it through a Python assert. A crucial piece is the custom message, 'expected: %s actual: %s'. Without it, we would never get the information telling us which test case failed.
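
A minimal sketch of the pattern, reusing the base-conversion function from the earlier recipes (the exact cases in recipe23.py may differ):

    # Each entry pairs an input value with its expected base-2 string.
    test_cases = [
        (1, "1"),
        (2, "10"),
        (3, "11"),
        (4, "100"),
    ]

    for value, expected in test_cases:
        actual = convert_to_basen(value, 2)
        # The custom message tells us exactly which case failed.
        assert actual == expected, "expected: %s actual: %s" % (expected, actual)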

What if one test case fails? If one of the tests in the array fails, the assert raises an error and the code block exits, skipping the rest of the tests. This is the trade-off for having a more succinct set of tests.

Does this type of test fit better into doctest or unittest?

Here are some criteria that are worth considering when deciding whether to put these tests in doctest:

Is the code easy to comprehend at a glance?

Is this clear, succinct, useful information when users view the docstrings?

If there is little value in having this in the documentation, and if it clutters the code, then that is a strong hint that this test block belongs to a separate test module.

Getting nosy with doctest

Up to this point, we have been either appending modules with a test runner, or we have typed python -m doctest <module> on the command line to exercise our tests.

For a quick recap, nose:

Provides us with the convenient test discovering tool nosetests

Is pluggable, with a huge ecosystem of available plugins

Includes a built-in plugin targeted at finding doctests and running them

Getting ready

We need to activate our virtual environment (virtualenv) and then install nose for this recipe.

Create a virtual environment, activate it, and verify the tools are working.

Using pip, install nose.
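
A typical sequence might look like this (prompts and paths vary by platform):

    $ virtualenv env
    $ source env/bin/activate
    (env)$ pip install nose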

How to do it...

Run nosetests --with-doctest against all the modules in this folder. Notice that it prints a terse .....F.F...F, indicating that three tests failed.

Run nosetests --with-doctest -v to get more verbose output. In the output, notice which tests failed. It is also valuable to see each test reported in <module>.<method> format with either ok or FAIL.

Run nosetests --with-doctest against both the recipe19.py file and the recipe19 module, in different combinations.
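
For example, some combinations to try (assuming recipe19.py sits in the current folder):

    $ nosetests --with-doctest recipe19.py
    $ nosetests --with-doctest recipe19
    $ nosetests --with-doctest -v recipe19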

How it works...

nosetests is targeted at discovering test cases and then running them. With this plugin, when it finds a docstring, it uses the doctest library to programmatically test it.

The doctest plugin is built around the assumption that doctests do not live in the same packages as other tests, such as unittest test cases. This means it will only run doctests found in non-test packages.

There isn't a whole lot of complexity in the nosetests tool, and... that's the idea! In this recipe, we have seen how to use nosetests to run all of our doctests.

Summary

In this article, we saw ways to perform testing in Python using doctest.
