Given how tough it can be to reliably benchmark any cloud, an open source, cloud-agnostic toolkit for the job is a welcome boon.

Google devised PerfKit as a way to benchmark a variety of cloud resources. It doesn't just clock network speed or CPU performance; it also measures real-world applications that are often part of cloud deployments. As such, MongoDB, Cassandra, and Hadoop were included in the original PerfKit package.

PerfKit emphasizes programmability and extensibility, since it controls every phase of the testing -- config, provisioning of resources, execution, teardown, and publishing of the results -- with Python scripts. The tester creates YAML files that describe how the tests are to be performed, with abstractions for needed resources like disk space, networking, firewalls, and VMs.
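A benchmark run is driven by one of those YAML files. The snippet below is a hypothetical sketch of what such a config might look like, not taken verbatim from PerfKit's documentation; the benchmark name, machine type, and zone are illustrative assumptions:

```yaml
# Hypothetical PerfKit-style benchmark config (illustrative sketch only).
# Describes a network test between two VMs on Google Cloud Platform.
iperf:
  vm_groups:
    vm_1:
      cloud: GCP                      # provider abstraction
      vm_spec:
        GCP:
          machine_type: n1-standard-2 # VM size (assumed value)
          zone: us-central1-a         # region/zone (assumed value)
    vm_2:
      cloud: GCP
      vm_spec:
        GCP:
          machine_type: n1-standard-2
          zone: us-central1-a
```

The point of the abstraction is that swapping the `cloud` key and the provider-specific `vm_spec` lets the same benchmark definition target a different cloud without rewriting the test itself.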

The target environment can be a standalone system, a VM in a private cloud, or a VM on one of nine popular cloud providers: AliCloud, Amazon Web Services, CloudStack, DigitalOcean, Google Cloud Platform, Kubernetes, Microsoft Azure, OpenStack, and Rackspace.

The 1.0 label is only now being applied because Google needed to find "the right abstractions making it easy to extend and maintain," and "the right balance between variance and runtime," according to Google's blog post.

A few new benchmarks have also been added to the mix, namely EPFL EcoCloud's Web Search and Web Serving. The former sets up an instance of the Nutch search engine (based on Lucene) and tests the system in question against simulated client traffic; the latter configures the Nginx Web server and benchmarks traffic to a synthetic Web application.