March 30th, 2012. The OpenNebula project is proud to announce the availability of the beta release of OpenNebula 3.4 (Wild Duck). The software brings countless valuable contributions by many members of our community, and especially from Research in Motion, Logica, Terradue 2.0, CloudWeavers, Clemson University, and Vilnius University.

This release is focused on extending the storage capabilities of OpenNebula, including support for multiple Datastores. The new functionality overcomes the single Image Repository limitations in previous versions. The use of multiple datastores provides extreme flexibility in planning the storage backend and important performance benefits, such as balancing I/O operations between storage servers, defining different SLA policies (e.g. backup) and features for different VM types or users, or easily scaling the cloud storage.

OpenNebula 3.4 also features improvements in other systems, especially in the core with the support of logical resource pools, the EC2 API with the support of elastic IPs, the Sunstone and Self-Service portals with cool new features, and the EC2 hybrid cloud driver that now supports EC2 features like tags, security groups or VPCs. See below for highlights in these components.

With this beta release, Wild Duck enters feature freeze and we'll concentrate on fixing bugs and smoothing some rough edges. This release is aimed at testers and developers who want to try the new features or migrate existing drivers (especially TM drivers) to be compatible with the new version.

As usual OpenNebula releases are named after a Nebula. The Wild Duck Cluster (also known as Messier 11, or NGC 6705) is an open cluster in the constellation Scutum.

OpenNebula Core

Datastores, OpenNebula 3.4 supports multiple datastores. OpenNebula ships with four basic datastores: system, to hold images for running VMs; filesystem, to store disk images as files; iSCSI/LVM, to store disk images as block devices; and VMware, a datastore specialized for the VMware hypervisor that handles the vmdk format.
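A datastore is registered by passing a plain-text template of KEY = "VALUE" attributes to the core. The sketch below builds such a template in Python; the attribute names (NAME, DS_MAD, TM_MAD) follow the 3.4 datastore documentation, but the helper itself and the values shown are illustrative, not part of any OpenNebula API.

```python
def make_datastore_template(name, ds_mad, tm_mad):
    """Build a minimal OpenNebula-style KEY = "VALUE" template string.

    DS_MAD selects the datastore driver (e.g. fs, iscsi, vmware) and
    TM_MAD the transfer driver used by the hosts. Illustrative helper.
    """
    attrs = {"NAME": name, "DS_MAD": ds_mad, "TM_MAD": tm_mad}
    return "\n".join(f'{k} = "{v}"' for k, v in attrs.items())

# A filesystem datastore using the shared transfer driver:
template = make_datastore_template("production", "fs", "shared")
print(template)
```

Saved to a file, a template like this could then be passed to the `onedatastore create` command.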

New Transfer Drivers, hosts are no longer tied to a single transfer mechanism (transfer driver) and can now access images from different datastores in different ways. Note that a VM can have its disks in different datastores. The transfers associated with persistent or save_as images have also been simplified. The TM protocol and scripts have been preserved, so only minor modifications are needed to port existing TM drivers. There are also new drivers to use in combination with the Datastores: qcow2, iSCSI and an improved version of vmware that uses vmkfstools.

Clusters, by popular request we have brought back the cluster concept. The new cluster is a logical pool of resources that includes physical hosts, datastores and networks. When a VM uses resources of a cluster, its REQUIREMENTS are automatically adapted to only use hosts from that cluster.
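The automatic adaptation amounts to the core appending a scheduler REQUIREMENTS clause that pins the VM to hosts of the cluster. The helper below mimics that behavior for illustration only; it is not part of the OpenNebula API, and the template attributes shown are just an example.

```python
def add_cluster_requirement(vm_template, cluster_id):
    """Append a REQUIREMENTS clause pinning the VM to one cluster,
    mimicking what the OpenNebula 3.4 core does automatically when a
    VM uses a cluster's datastore or network. Illustrative helper."""
    clause = f'REQUIREMENTS = "CLUSTER_ID = {cluster_id}"'
    return vm_template.rstrip("\n") + "\n" + clause + "\n"

vm = 'NAME = "web01"\nCPU = 1\nMEMORY = 512\n'
pinned = add_cluster_requirement(vm, 100)
print(pinned)
```

With this clause in place, the scheduler only considers hosts whose CLUSTER_ID matches, so the VM's disks and NICs stay reachable.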

Restricted attributes, since OpenNebula 3.2 administrators can restrict the use of potentially insecure attributes. These attributes can now be customized in the OpenNebula configuration file.
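In the configuration file, each restricted attribute is listed on its own line. A sketch of what such entries could look like in oned.conf is shown below; the exact attribute paths you restrict depend on your deployment, and the values here are only examples.

```
# Attributes that regular users may not set in their VM templates
# (illustrative values; adjust to your installation)
VM_RESTRICTED_ATTR = "CONTEXT/FILES"
VM_RESTRICTED_ATTR = "DISK/SOURCE"
VM_RESTRICTED_ATTR = "NIC/MAC"
```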

XML Templates, any XML-RPC method now accepts XML templates (in addition to the traditional ones). You can now create networks, VMs, and images using XML documents, where the elements are the template attributes (see the -x output of any one* CLI command for an example).
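The XML form is a direct mapping of the plain KEY = VALUE attributes into elements. The sketch below builds such a document with Python's standard library; the root element name and the attribute names shown are assumptions modeled on the -x CLI output, so check them against your installation's templates.

```python
import xml.etree.ElementTree as ET

# Each child element corresponds to one template attribute; the
# TEMPLATE root and the virtual-network attributes below are
# illustrative, mirroring the plain NAME = "..." form.
root = ET.Element("TEMPLATE")
ET.SubElement(root, "NAME").text = "private-net"
ET.SubElement(root, "TYPE").text = "RANGED"
ET.SubElement(root, "BRIDGE").text = "br0"

xml_doc = ET.tostring(root, encoding="unicode")
print(xml_doc)
```

The resulting string can be sent to the corresponding XML-RPC allocate call exactly as a plain-text template would be.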

Quotas, the image quota functionality has been extended to support URLs.

OpenNebula 3.4 also includes minor bug fixes and enhancements.

Sunstone & Self-Service Portal

There are several new features in the GUI applications:

New Navigation Menu: Sunstone includes a new tree-like menu for resource navigation, making it easy to access the different parts and resources of an OpenNebula installation.

Support for Datastores and Clusters, Sunstone has been updated to expose the new functionality of the OpenNebula core.

More translations, we have a new translation… Italian!

Support for VNC in Self-Service Portal, same functionality as in Sunstone. This can be disabled in the configuration file.

Secure Web Sockets for the VNC proxies.

Upload of Images in Sunstone, and several performance improvements in this area.

Improved Virtual Network Dialog, that now includes VLAN options.

Parametric VM instantiation

Cloud Servers

There have been some improvements on the Cloud APIs:

Elastic IPs for EC2 Query API, the implementation uses association/disassociation plugins and so it can be easily adapted to the datacenter network architecture.

The EC2 Query server includes improved support for SSL proxies as well as custom paths.

The OCCI server has been improved to include user/group information in resources and extended information of resources.

Cluster Partitioning, cloud requests can be routed to a specific cluster with its own storage and network resources to better isolate public cloud users.

Improved logging, a new framework has been included to add logging information to the servers.

Auth drivers, a new CloudAuth driver delegates authentication to the OpenNebula core, so any OpenNebula auth driver can be used to authenticate cloud users.

Hybrid Cloud Computing

OpenNebula 3.4 includes an improved EC2 hybrid driver that supports most EC2 features, such as tags, security groups and VPCs.

Migrating from OpenNebula 3.2

OpenNebula 3.4 is API compatible with OpenNebula 3.x, so you should expect that applications and drivers developed for 3.x work with this release, with the exception of custom authentication drivers.

There have also been changes in the Image Repository (now Datastore) and Transfer drivers. The protocol and command structure have been preserved, although some minor adaptations are needed to port existing TM or Image Repository drivers.

Documentation

The documentation for OpenNebula 3.4 can be found here. The documents are still under development, so do not hesitate to ask on the mailing list.

Acknowledgements

The OpenNebula project would like to thank the community members and users who have contributed to this software release by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation, and especially to:

Research in Motion cloud team for its feedback on the new Datastore and Zones components, and its contributions to the qcow drivers.

Terradue 2.0 for its valuable contributions to the VMware storage drivers.