Tag: python

We have many test suites that use test layers (e.g. the ones from plone.testing), but we want to use py.test and all its fancy features as a modern test runner. Until now there was no way to convert such tests partially: either you had to port the whole project or you were stuck with zope.testrunner.

For each layer, the plug-in creates two fixtures: one for the layer's setUp/tearDown and one for its testSetUp/testTearDown. The layer fixture is configured for class scope, but the plug-in orders the tests and knows which test comes next, so a layer is only torn down when the next test needs a different fixture.

Usage

You only have to add a new section to your package's buildout; running the tests via

bin/py.test -x

then detects the layers and displays the needed setup code. See the package's PyPI page for details.
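
For reference, such a buildout section might look like this (a sketch only: we assume the plug-in egg is named gocept.pytestlayer and use zc.recipe.egg; adjust the names to your project):

[pytest]
recipe = zc.recipe.egg
eggs =
    pytest
    gocept.pytestlayer
    your.package [test]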

Future

Maybe it will be possible to get rid of the fixture setup code entirely, so that running tests which use layers becomes even easier.

We’ve recently started experimenting with the excellent scales library to collect in-process metrics (see Coda Hale’s CodeConf talk “Metrics everywhere” among many others for reasons why one definitely wants to do that).

Scales comes with a Flask-based HTTP server that allows viewing the collected measurements and dumping them as JSON. But if you are already running a web application, there's no real need to spin up yet another thread, open another port etc. to do this. In our case, we're using Pyramid, so here's a quick recipe to get the same view that greplin.scales.flaskhandler provides:
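
A minimal sketch (Python 2, as current at the time; the route name and URL are our choice, and we assume the scales.getStats() and greplin.scales.formats.jsonFormat helpers that the Flask handler itself builds on):

from StringIO import StringIO

from greplin import scales
from greplin.scales import formats
from pyramid.response import Response


def scales_view(request):
    # Collect all in-process stats and dump them as JSON,
    # mirroring what greplin.scales.flaskhandler serves.
    body = StringIO()
    formats.jsonFormat(body, scales.getStats(), pretty=True)
    return Response(body.getvalue(), content_type='application/json')


def includeme(config):
    # Hypothetical mount point; choose whatever fits your URL space.
    config.add_route('scales', '/scales')
    config.add_view(scales_view, route_name='scales')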

For continuous integration during development, we use Jenkins to automatically run tests for all projects we maintain. Some time ago we wanted to increase visibility of the results, so we set up a Raspberry Pi driving a few meters of LPD8806-based LED strip on which we can address single LEDs to represent the status of individual or aggregated builds.

Automating deployments is a good idea…

After an SD card failure, we were painfully reminded how hard it can be to restore a service whose parts were all deployed manually. Fortunately, we had written at least some minimal documentation on how to set everything up, so after a few days the display was back and presented us with many broken builds. Of course, nobody had cared about the build status while all the LEDs were dark. 😦

Let’s automate!

Today we wondered whether we could use our deployment tool batou to make reproducible deployments to a Raspberry Pi, and did some tests on a vanilla Raspbian image (2013-07-26, “Wheezy”).

Preparing your Raspberry Pi

Of course, you cannot deploy to it without some simple preparation. First, batou needs to be able to log on to the target host with a public SSH key, so we copied our public key to the Raspberry Pi, which has the address 192.168.0.5 in this example:
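
local> ssh-copy-id pi@192.168.0.5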

(If you don’t have ssh-copy-id, you have to manually append your SSH public key to /home/pi/.ssh/authorized_keys, which you will need to create on a plain installation.)

Manually install minimal requirements

Batou also has a few requirements that are needed to bootstrap the environment:

mercurial – to pull the buildout which sets up batou

python-virtualenv – to create a clean python environment for the buildout

python-dev – to compile libcrypto against

Note: We are currently working on batou 1.0 which most likely will no longer need any of these.

You can install all the requirements at once with the following command on your raspi:

pi> sudo aptitude install mercurial python-virtualenv python-dev

Prepare batou

Now you are ready to do your first batou deployment to a Raspberry Pi. For our experiments we created a small hello-world batou deployment containing a test component, which deploys a file /tmp/test with the content “foo” to a Raspberry Pi specified by its IP address.
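
The component might look roughly like this (a sketch against the batou component API of the time; the class name and file layout are illustrative):

from batou.component import Component
from batou.lib.file import File


class Test(Component):

    def configure(self):
        # Deploy /tmp/test with the literal content "foo".
        self += File('/tmp/test', content='foo')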

To create the necessary deployment scripts, run buildout; it creates a sandbox containing all of batou's dependencies and the scripts you can use to deploy:

local> python bootstrap.py
…
local> bin/buildout

Deploy!

After some minutes, your batou deployment sandbox will be ready for use. You most likely modified environments/pi.cfg, so you need to check in that change first, because batou refuses to deploy from a dirty working copy.

local> hg ci -m 'change ip of my raspi'

To run the deployment, call batou-remote with the name of the environment (“pi”, which corresponds to environments/pi.cfg). Because the SSH user that connects to the target host differs from your local user, you have to specify it with --ssh-user.

local> bin/batou-remote pi --ssh-user=pi

Batou will now set itself up on the remote side and deploy all components specified in pi.cfg. To verify that it worked, check whether the deployed file contains the correct content:
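
pi> cat /tmp/test
foo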

Programs need to update files. Although most programmers know that unexpected things can happen while performing I/O, I often see code that has been written in a surprisingly naïve way. In this article, I would like to share some insights on how to improve I/O reliability in Python code.

Consider the following Python snippet. Some operation is performed on data coming from and going back into a file:
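
# do_something() is a placeholder for arbitrary processing of the data.
with open(filename) as f:
    input = f.read()
output = do_something(input)
with open(filename, 'w') as f:
    f.write(output)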

Pretty simple? Probably not as simple as it looks at first glance. I often debug applications that show strange behaviour on production servers. Here are examples of failure modes I have seen:

A runaway server process spills out huge amounts of logs and the disk fills up. write() raises an exception right after truncating the file, leaving the file empty.

Several instances of our application happen to run in parallel. After they have finished, the file's contents are garbage because they intermingle the output of multiple instances.

The application triggers some follow-up action after completing the write. Seconds later, the power goes off. After we have restarted the server, we see the old file contents again. The data already passed to other applications does not correspond to what we see in the file anymore.

Nothing of what follows is really new. My goal is to present common approaches and techniques to Python developers who are less experienced in system programming. I will provide code examples to make it easy for developers to incorporate these approaches into their own code.

What does “reliability” mean anyway?

In the broadest sense, reliability means that an operation is performing its required function under all stated conditions. With regard to file updates, the function in question is to create, replace or extend the contents of a file. It might be rewarding to seek inspiration from database theory here. The ACID properties of the classic transaction model will serve as guidelines to improve reliability.

To get started, let’s see how the initial example can be rated against the four ACID properties:

Atomicity requires that a transaction either succeeds or fails completely. In the example shown above, a full disk will likely result in a partially written file. Additionally, if other programs read the file while it is being written, they get a half-finished version even in the absence of write errors.

Consistency denotes that updates must bring the system from one valid state to another. Consistency can be subdivided into internal and external consistency: internal consistency means that the file's data structures are consistent; external consistency means that the file's contents are aligned with other data related to it. In this example, it is hard to reason about consistency since we don't know enough about the application. But since consistency requires atomicity, we can at least say that internal consistency is not guaranteed.

Isolation is violated if running transactions concurrently yields different results from running the same transactions sequentially. It is clear that the code above has no protection against lost updates or other isolation failures.

Durability means that changes need to be permanent. Before we signal success to the user, we must be sure that our data hits non-volatile storage and not just a write cache. Perhaps the code above has been written with the assumption in mind that disk I/O takes place immediately when we call write(). This assumption is not warranted by POSIX semantics.

Use a database system if you can

If we were able to attain all four ACID properties, we would have come a long way towards increased reliability. But this requires significant coding effort. Why reinvent the wheel? Most database systems already have ACID transactions.

Reliable data storage is a solved problem. If you need reliable storage, use a database. Chances are slim that you will do it yourself as well as those who have been working on it for years, if not decades. If you do not want to set up a “big” database server, you can use SQLite, for example. It has ACID transactions, it's small, it's free, and it's included in Python's standard library.

The article could finish here. But there are valid reasons not to use a database. They are often tied to file format or file location constraints, neither of which is easily controllable with database systems. Reasons include:

we must process files generated by other applications, which are in a fixed format or at a fixed location

we must write files for consumption by other applications (and the same restrictions apply)

our files must be human-readable or human-editable

…and so on. You get the point.

If we set out to implement reliable file updates on our own, there are some programming techniques to consider. In the following, I will present four common patterns of performing file updates. After that, I will discuss what steps can be taken to establish ACID properties with each file update pattern.

File update patterns

Files can be updated in a multitude of ways, but I see at least four common patterns. These will serve as a basis for the rest of this article.

Truncate-Write

This is probably the most basic pattern. In the following example, hypothetical domain model code reads data, performs some computation, and re-opens the existing file in write mode:

with open(filename, 'r') as f:
    model.read(f)
model.process()
with open(filename, 'w') as f:
    model.write(f)

A variant of this pattern opens the file in read-write mode (the “plus” modes in Python), seeks to the start, issues an explicit truncate() call and rewrites the contents:
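
# The same hypothetical model code as above, but re-using a single
# file object opened in read-write mode.
with open(filename, 'r+') as f:
    model.read(f)
    model.process()
    f.seek(0)
    f.truncate()
    model.write(f)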

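Write-Replace

Another widely used pattern is to write the new contents into a temporary file and replace the original file afterwards. A sketch (the temporary file is created in the same directory so the final os.rename() stays on one file system):

import os
import tempfile

with tempfile.NamedTemporaryFile(
        'w', dir=os.path.dirname(filename), delete=False) as tf:
    model.write(tf)
    tempname = tf.name
os.rename(tempname, filename)
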
This method is more robust against errors than the truncate-write method. See below for a discussion of atomicity and consistency properties. It is used by many applications.

These first two patterns are so common that the ext4 filesystem in the Linux kernel even detects them and fixes some reliability shortcomings automatically. But don’t depend on it: you are not always using ext4, and the administrator might have disabled this feature.

Append

The third pattern is to append new data to an existing file:

with open(filename, 'a') as f:
    f.write(model.output())

This pattern is used for writing log files and other cumulative data processing tasks. Technically, its outstanding feature is its extreme simplicity. An interesting extension is to perform append-only updates during regular operation and to reorganize the file into a more compact form periodically.

Spooldir

Here we treat a directory as a logical data store and create a new, uniquely named file for each record:

with open(unique_filename(), 'w') as f:
    f.write(model.output())

This pattern shares its cumulative nature with the append pattern. A big advantage is that we can put a small amount of metadata into the file name. This can be used, for example, to convey information about the processing status. A particularly clever implementation of the spooldir pattern is the maildir format. Maildirs use a naming scheme with additional subdirectories to perform update operations in a reliable and lock-free way. The md and gocept.filestore libraries provide convenient wrappers for maildir operations.

If your file name generation is not guaranteed to give unique results, you can even demand that the file be actually new. Use the low-level os.open() call with the proper flags:
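
import os

# O_EXCL makes os.open() fail if the file already exists.
fd = os.open(filename, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
with os.fdopen(fd, 'w') as f:
    f.write(model.output())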

After opening the file with O_EXCL, we use os.fdopen to convert the raw file descriptor into a regular Python file object.

Applying ACID properties to file updates

In the following, I will try to enhance the file update patterns. Let’s see what we can do to meet each ACID property in turn. I will keep this as simple as possible, since we are not planning to write a complete database system. Please note that the material presented in this section is not exhaustive, but it may give you a good starting point for your own experimentation.

Atomicity

The write-replace pattern gives you atomicity for free, since the underlying os.rename() function is atomic. This means that at any given point in time, any process sees either the old or the new file. This pattern has a natural robustness against write errors: if the write operation triggers an exception, the rename operation is never performed, and thus we are not in danger of overwriting a good old file with a damaged new one.

The append pattern is not atomic by itself, because we risk appending incomplete records. But there is a trick to make updates appear atomic: annotate each written record with a checksum. When reading the log later on, discard all records that do not have a valid checksum. This way, only complete records will be processed. In the following example, an application makes periodic measurements and appends a one-line JSON record to a log each time. We compute a CRC32 checksum of the record's byte representation and append it to the same line:
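
A sketch (the fields in the record are made up for the example):

import json
import time
import zlib

def append_measurement(logfile, value):
    # One JSON document per line, followed by its CRC32 checksum.
    record = json.dumps({'timestamp': time.time(), 'value': value})
    checksum = '{:08x}'.format(zlib.crc32(record.encode('utf-8')) & 0xffffffff)
    with open(logfile, 'a') as f:
        f.write('{} {}\n'.format(record, checksum))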

The checksummed log record approach is used by a large number of applications including many database systems.

The files in a spooldir can likewise carry a checksum each. Another, probably easier, approach is to borrow from the write-replace pattern: first write the file aside and move it to its final location afterwards. Devise a naming scheme that protects work-in-progress files from being processed by consumers. In the following example, all file names ending with .tmp are ignored by readers and are thus safe to use during write operations:
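
import os

# Write aside under a .tmp suffix, then move the file into place.
newfile = unique_filename()
with open(newfile + '.tmp', 'w') as f:
    f.write(model.output())
os.rename(newfile + '.tmp', newfile)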

Finally, truncate-write is non-atomic, and I am sorry that I cannot offer you an atomic variant. Right after the truncate operation, the file is nulled and no new content has been written yet. If a concurrent program reads the file now or, worse yet, an exception occurs and our program gets aborted, we see neither the old nor the new version.

Consistency

Most things I have said about atomicity can be applied to consistency as well. In fact, atomic updates are a prerequisite for internal consistency. External consistency means to update several files in sync. As this cannot easily be done, lock files can be used to ensure that read and write access do not interfere. Consider a directory where files need to be consistent with each other. A common pattern is to designate a lock file, which controls access for the whole directory.
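
A sketch (the lock file name and the model methods are illustrative):

import fcntl
import os

# Writer: take the exclusive lock before touching any file.
with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_EX)
    model.update(dirname)

# Readers: take a shared lock, which blocks writers but not other readers.
with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_SH)
    model.readall(dirname)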

This method only works if we have control over all readers. Since there may be only one writer active at a time (the exclusive lock blocks all shared locks), the scalability of this method is limited.

To take it one step further, we can apply the write-replace pattern to whole directories. This involves creating a new directory for each update generation and changing a symlink once the update is complete. For example, a mirroring application maintains a directory of tarballs together with an index file, which lists file name, file size, and a checksum. When the upstream mirror gets updated, it is not enough to implement an atomic file update for every tarball and the index file in isolation. Instead, we need to flip both the tarballs and the index file at the same time to avoid checksum mismatches. To solve this problem, we maintain a subdirectory for each generation and symlink the active generation:
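
mirror
|-- 483
|   |-- a.tgz
|   |-- b.tgz
|   `-- index.json
|-- 484
|   |-- a.tgz
|   `-- index.json
`-- current -> 483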

Here, the new generation 484 is in the process of being updated. When all tarballs are present and the index file is up to date, we can switch the current symlink atomically: create the new symlink under a temporary name with os.symlink() and move it over current with a single, atomic os.rename() call. Other applications always see either the complete old or the complete new generation. It is important that readers os.chdir() into the current directory and refer to files without their full path names. Otherwise, there is a race condition when a reader first opens current/index.json and then opens current/a.tgz, while in the meantime the symlink target has been changed.

Isolation

Isolation means that concurrent updates to the same file are serializable: there exists a serial schedule that gives the same results as the parallel schedule actually performed. “Real” database systems use advanced techniques like MVCC to maintain serializability while allowing for a great degree of parallelism. Left to our own devices, we had better use locks to serialize file updates.

Locking truncate-write updates is easy. Just acquire an exclusive lock prior to all file operations. The following example code reads an integer from a file, increments it, and updates the file:
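
import fcntl

def update(filename):
    # Hold the exclusive lock for the whole read-increment-write cycle.
    with open(filename, 'r+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        n = int(f.read())
        f.seek(0)
        f.truncate()
        f.write('{}\n'.format(n + 1))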

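Locking a write-replace update looks just as straightforward at first. A naive attempt could lock the original file around the whole operation (a sketch, reusing the temporary-file idiom from above):

import fcntl
import os
import tempfile

def update(filename):
    with open(filename) as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        n = int(f.read())
        with tempfile.NamedTemporaryFile(
                'w', dir=os.path.dirname(filename), delete=False) as tf:
            tf.write('{}\n'.format(n + 1))
            tempname = tf.name
        os.rename(tempname, filename)
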
What is wrong with this code? Imagine two processes compete to update a file. The first process just goes ahead, but the second process is blocked in the fcntl.flock() call. When the first process replaces the file and releases the lock, the already open file descriptor in the second process now points to a “ghost” file (not reachable by any path name) with old contents. To avoid this conflict, we must check that our open file is still the same after returning from fcntl.flock(). So I have written a new LockedOpen context manager to replace the built-in open context. It ensures that we actually open the right file:
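
A sketch of how such a context manager can look:

import fcntl
import os

class LockedOpen(object):
    """Open and lock a file, re-opening it if it was replaced meanwhile."""

    def __init__(self, filename, *args, **kwargs):
        self.filename = filename
        self.open_args = args
        self.open_kwargs = kwargs
        self.fileobj = None

    def __enter__(self):
        f = open(self.filename, *self.open_args, **self.open_kwargs)
        while True:
            fcntl.flock(f, fcntl.LOCK_EX)
            # Did another process replace the file while we were blocked?
            fnew = open(self.filename, *self.open_args, **self.open_kwargs)
            if os.path.sameopenfile(f.fileno(), fnew.fileno()):
                fnew.close()
                break
            f.close()
            f = fnew
        self.fileobj = f
        return f

    def __exit__(self, exc_type, exc_value, traceback):
        self.fileobj.close()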

The spooldir pattern has the elegant property that it does not require any locking. Again, it depends on a clever naming scheme and robust unique file name generation. The maildir specification is a good example of a spooldir design, and it can easily be adapted to cases that have nothing to do with mail.

Durability

Durability is a bit special because it depends not only on the application, but also on the OS and hardware configuration. In theory, we can assume that os.fsync() or os.fdatasync() calls do not return until data has reached permanent storage. In practice, we may run into several problems: we may be facing incomplete fsync implementations or awkward disk controller configurations that never give any persistence guarantee. A talk by a MySQL developer goes into great detail about what can go wrong. Some database systems like PostgreSQL even offer a choice of persistence mechanisms, so that the administrator can select the best-suited one at runtime. The poor man's option, though, is to just use os.fsync() and hope that it has been implemented correctly.

With the truncate-write pattern, we have to issue an fsync after finishing write operations but before closing the file. Note that there is usually another level of write caching involved. The glibc buffer holds back writes inside the process even before they are passed to the kernel. To get the glibc buffer empty as well, we have to flush() it before fsync’ing:
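
import os

with open(filename, 'w') as f:
    model.write(f)
    # Flush the userspace buffers into the kernel, then ask the
    # kernel to push the data to non-volatile storage.
    f.flush()
    os.fsync(f.fileno())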

Alternatively, you can invoke Python with the -u flag to get unbuffered standard streams; for other files, you can pass 0 as the buffering argument to open().

I prefer os.fdatasync() over os.fsync() most of the time to avoid synchronous metadata updates (ownership, size, mtime, …). Metadata updates can result in seeky disk I/O, which slows things down quite a bit.

Applying the same trick to write-replace style updates is only half of the story. We make sure that the newly written file has been pushed to non-volatile storage before replacing the old file, but what about the replace operation itself? We have no guarantee that the directory update is persisted right away. There are lengthy discussions on the net about how to sync a directory update, but in our case (old and new file are in the same directory) we can get away with this rather simple solution:
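
import os

# Python's built-in open() cannot open directories, so use the
# low-level interface and fsync the directory's file descriptor.
dirfd = os.open(os.path.dirname(filename), os.O_DIRECTORY)
try:
    os.fsync(dirfd)
finally:
    os.close(dirfd)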

We open the directory with the low-level os.open() call (Python's built-in open() does not support opening directories) and perform an os.fsync() on the directory's file descriptor.

Persisting append updates is again quite similar to what I have said about truncate-write.

The spooldir pattern has the same directory sync problems as the write-replace pattern. Fortunately, the same solution applies here as well: first sync the file, then sync the directory.

Conclusion

It is possible to update files reliably. I have shown that all four ACID properties can be met. The code examples presented above may serve as a toolbox. Pick the programming techniques that match your needs best. At times, you don’t need all four ACID properties but only one or two. I hope that this article helps you to make an informed decision about what to implement and what to leave out.

With Zope-the-Framework having reached the end of its life cycle over the last few years, we have done a bunch of new projects with Pyramid, a nice web framework primarily authored by long-term Zope developer Chris McDonough.

We think it's about time to give something back to the community and become more involved in Pyramid development. We are therefore happy to announce that we will host a large Pyramid sprint, organised in cooperation with the pysprints.de team. You can find more information and the sprint topics on GitHub.

The sprint starts on Thu, August 15th 10:00h CEST, and ends on Sat, August 17th with a garden party in the evening! Expect BBQ, beer and (most likely) live music!

Travis-CI is a free hosted continuous integration platform for the open-source community. It integrates well with GitHub, so each push to a project runs that project's tests.

gocept.selenium is a Python package our company has developed; it provides a test-friendly Python API for Selenium, which allows running tests in a real browser.

Travis-CI uses YAML files to configure the test run. I found only little documentation on how to run Selenium tests on Travis-CI, but it is straightforward. I took the following YAML file from a personal project of mine (simplified a bit for this blog post):
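
A reconstructed sketch (the install and test commands assume a buildout-based project and are illustrative; the essential part is starting the virtual framebuffer X server, xvfb, so the browser has a display to run in):

language: python
python:
  - "2.7"
before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
install:
  - "python bootstrap.py"
  - "bin/buildout"
script:
  - "bin/test"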

For a couple of years, we at gocept have been developing a Python library, gocept.selenium, whose goal is to integrate testing of web sites in real browsers with the Python unittest framework. A number of approaches to doing this exist; when first starting real-browser tests, we opted for selenium. Back then, it had not yet been integrated with webdriver (more on webdriver below).

There turned out to be multiple aspects to selenium integration: setting up the web server under test, starting a browser to run selenium and pointing it at the server, but also designing a wrapper around the selenium testing API to bring it in line with unittest’s way of defining specialised assertions.

We came up with the gocept.selenium package which includes both a selenese module defining such an API wrapper and a bunch of modules for integration with those web-server frameworks that we happen to use in our work, among them generic WSGI and a number of Zope-related servers. The integration mechanism is implemented in terms of test layers, so all of this requires the Zope test runner to be used. We released a 1.0 version of gocept.selenium in November 2012, marking the selenese API as stable.

The description of the package given so far already indicates two aspects that have yet to be addressed: Firstly, the selenium project is based on webdriver nowadays, with the old selenium implementation being kept for backwards compatibility at the moment. Secondly, collecting all those server integration modules in the same package that implements the actual selenium integration makes for rather complex (albeit optional) package dependencies and poses a maintainability problem.

We dealt with the latter in December 2012, extracting all those integration modules from gocept.selenium into a new package, gocept.httpserverlayer. From the package's documentation:
»This package provides an HTTP server for testing your application with normal HTTP clients (e.g. a real browser). This is done using test layers, which are a feature of zope.testrunner. gocept.httpserverlayer uses plone.testing for the test layer implementation, and exposes the following resources (accessible in your test case as self.layer[RESOURCE_NAME]):

http_host: The hostname of the HTTP server (Default: localhost)

http_port: The port of the HTTP server (Default: 0, which means a port chosen automatically by the operating system)«

In addition to generic WSGI and static-file serving, the server frameworks supported at this point (as of gocept.httpserverlayer 1.0.1) include Zope3/ZTK (both using zope.app.testing and zope.app.wsgi, with the latter supporting Grok) as well as Zope2 and Plone (using ZopeTestCase, WSGI or plone.testing.z2).

After the creation of gocept.httpserverlayer, we released the 1.1 series of gocept.selenium which no longer brings its own integration code. For the sake of backwards compatibility, though, it still implements separate TestCase classes for each of the integration flavours.

This leaves webdriver support to be dealt with. Originally, we had hoped to simply sneak it in, having to change very little client code, if any at all. Our plan was to implement the old API (both for test setup and selenese) in terms of webdriver which should allow us to benefit from webdriver immediately, as some issues with the old selenium were causing trouble in our daily work (including the behaviour of type and typeKeys as well as drag-and-drop). We started a branch of gocept.selenium where we switched from integrating legacy selenium to talking to webdriver and changed the selenese implementation to use webdriver commands.

However, it turned out that a number of details couldn’t be completely hidden, and webdriver brought its own share of problems (including, sadly, new issues with drag-and-drop). We tried out our branch in a real project to the point that all tests would pass again, and ended up with a long list of upgrade notes describing incompatibilities, either temporary or not, both causing semantic differences of behaviour and necessitating changes to the test code. We identified a number of pieces of the old selenese API that we wouldn’t bother implementing, and we still had a few large projects that would help discover more things to watch out for.

It became clear that sneaking webdriver into an existing selenium test suite wasn't the way to get to use it soon. So, instead of continuing to develop the branch and replacing the selenium-based implementation in gocept.selenium 2, we have now merged the branch in such a way that we have two different selenium integrations available at the same time, usable simultaneously in the same project. That way, new browser tests can be added using the webdriver integration layer, and existing tests can be migrated to webdriver test case by test case, as needed.

We have made alpha releases of gocept.selenium 2 so people may experiment with the webdriver integration. Note that while the current implementation of the test layer (gocept.selenium.webdriver.Layer) contains some code to deal with Firefox, we have successfully run it against Chrome as well. While the integration layer exposes a raw webdriver object as the seleniumrc resource, there is also the WebdriverSeleneseLayer which offers a resource named selenium, which is the old selenese API implemented in terms of webdriver and can be used together with the base layer.

We are currently working towards a stable gocept.selenium 2 release that includes webdriver support at the level described, but at the same time also thinking about how our ideal testing API might be structured in order to integrate with the unittest API concepts but make better use of the object-oriented raw webdriver API than the current selenese does. If you have an interest in using webdriver in conjunction with the Python unittest framework you are very welcome to try out the current state of gocept.selenium 2 and get back to us with ideas and suggestions.