New features & improvements

MAAS 2.5 changes how machines and controllers communicate. Machines no longer need direct access to region controllers; they only require access to rack controllers. This complete separation allows MAAS to isolate machines from the region for security purposes.

To provide an overview: by default, machines (in any working state, such as commissioning, deploying or deployed) are configured to use MAAS for DNS, proxying (for package management with APT or RPM), HTTP (for the cloud-init datasource), NTP, DHCP, PXE, images (over HTTP) and syslog.

In MAAS 2.4, the communication between machines and controllers was mixed. Machines would use the region controller for DNS, syslog, HTTP (for the cloud-init datasource), proxy and sometimes NTP, while the rack controller was only used for DHCP, PXE, images (over HTTP) and NTP.

As of 2.5, all communication is, by default, proxied through the rack controllers. This means that the rack controllers now also handle DNS, proxy and HTTP requests from cloud-init. Details below:

For DNS, the rack controller now installs and configures BIND as a forwarder, allowing machines to query the rack controller directly. Zone management and maintenance remain in the region controller.

For HTTP, the rack controller now installs nginx, which serves both as a proxy and as an HTTP server listening on port 5248. Machines no longer contact the metadata server directly on the region controller; instead they contact the rack controller they are PXE-booting from.

MAAS now creates an internal DNS domain (not manageable by the user) and a special DNS resource for each subnet that is managed by MAAS. Each subnet will include all rack controllers that have an IP address on that subnet. Booting machines will use the subnet DNS resource to resolve the rack controller available for communication. In the case that multiple rack controllers belong to the same subnet, MAAS will use a round-robin algorithm to balance the load across multiple rack controllers.

Proxying (squid) now also goes through the rack controller. Newly deployed machines using the MAAS built-in proxy are configured to use the internal DNS resource for the subnet they were deployed to. This ensures machines always reach a controller, balancing across multiple controllers via round-robin DNS when available.
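The balancing behavior can be sketched as a simple rotation over the rack controller addresses on a subnet. This is an illustrative simulation only (the IPs are hypothetical), not MAAS's DNS implementation:

```python
from itertools import cycle

def round_robin_resolver(rack_ips):
    """Return a resolver that hands out rack controller IPs in
    rotation, mimicking the round-robin DNS record MAAS publishes
    for the rack controllers sharing a subnet."""
    rotation = cycle(rack_ips)
    return lambda: next(rotation)

# Hypothetical rack controller addresses on one MAAS-managed subnet.
resolve = round_robin_resolver(["192.168.0.2", "192.168.0.3"])
print([resolve() for _ in range(4)])
# → ['192.168.0.2', '192.168.0.3', '192.168.0.2', '192.168.0.3']
```

Because machines resolve the subnet's DNS resource rather than a fixed IP, adding a second rack controller to a subnet spreads load without reconfiguring deployed machines.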

Improvements to syslog

As part of the changes to proxy communication through the rack, further enhancements have been made to syslog:

MAAS uses syslog to gather logs from the enlistment, commissioning and deployment processes that server admins can use for debugging purposes. This communication is now also proxied through the rack controller and sent to all region controllers. (This information is still available in /var/log/maas/rsyslog.)

MAAS rack controllers now communicate syslog information to the region. Previously, the rack controllers would only log syslog information locally (to maas.log). In environments with multiple rack controllers, admins had to ssh into those machines to be able to view logs. Users can now see the log information of all region and rack controllers on any region controller.

Users can now configure a remote syslog server in case they don’t want their machines to send syslog information to the MAAS controllers. (Note that this is the same information noted above, which includes enlistment, commissioning and deployment process logs.) This won’t forward MAAS controllers’ syslog to the external server, only machine syslog information.

High availability improvements

Rack controller based HA

With the changes introduced in MAAS 2.5 to proxy the control plane communication via the rack controllers, the expectations for high availability have changed.

High availability now only concerns services that previously required access to the region controller. The changes ensure that HA for MAAS services is provided closer to the machine, at the rack controller.

This means that, to effectively provide HA of MAAS services to machines, administrators need to deploy multiple rack controllers. For example:

Machines will be deployed to use the IPs of the rack controllers they can reach for DNS resolution (instead of the region controllers, as in previous versions).

Machines will be deployed to use a DNS based URL to access the Proxy (squid) and cloud-init datasource.

Virtual IPs no longer needed for rack-to-region communication

In previous versions of MAAS HA, the rack controller was always configured against a single region controller endpoint. While the rack controller would automatically discover all other region controller endpoints, if the configured endpoint became unavailable, the rack could no longer communicate with the region.

This typically required users to configure (at least) a Virtual IP (VIP) and some type of failover/load-balancing mechanism so that the rack could always reach a region endpoint (via the VIP). In turn, that required maintaining technologies such as corosync/pacemaker or keepalived (and/or HAProxy) on all region controllers.

Starting from 2.5, however, this behavior has changed. The rack controller now allows users to specify multiple region controller endpoints for a single rack controller, removing the need for these complex dependencies.

Besides letting administrators specify multiple endpoints, MAAS will also attempt to automatically discover and track all region endpoints in a cluster, and to connect to them automatically if the configured endpoint becomes inaccessible.
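The fallback logic amounts to trying each known endpoint in turn. The sketch below models that behavior with hypothetical endpoint URLs; `try_connect` stands in for the real rack-to-region RPC handshake and is not a MAAS API:

```python
def connect_to_region(endpoints, try_connect):
    """Try each configured region endpoint in order and return the
    first one that accepts a connection. A sketch of the fallback
    behavior only, not MAAS's implementation."""
    for url in endpoints:
        if try_connect(url):
            return url
    raise ConnectionError("no region controller endpoint reachable")

# Hypothetical endpoints; pretend the first region controller is down.
endpoints = ["http://10.0.0.1:5240/MAAS", "http://10.0.0.2:5240/MAAS"]
reachable = {"http://10.0.0.2:5240/MAAS"}
print(connect_to_region(endpoints, lambda url: url in reachable))
# → http://10.0.0.2:5240/MAAS
```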

KVM

MAAS 2.5 has greatly expanded its support for KVM hosts (formerly known as Virsh Pods or KVM Pods) and brings exciting new features and improvements:

New architecture support

MAAS has expanded KVM management support to all supported Ubuntu architectures. More specifically, MAAS now supports ARM64, PPC64 and s390x.

KVM host deployment

MAAS 2.5 can now deploy a machine and automatically configure it as a KVM host. The current functionality allows administrators to easily convert their hardware into a KVM micro-cloud and maximize the use of resources with MAAS.

Storage

KVM storage support has been expanded. MAAS 2.4 introduced the ability to use libvirt storage pools as target storage devices for virtual machines created with MAAS, but it provided no visibility into utilization. As of 2.5, this has now been improved and storage utilization is tracked. This also comes with several UI improvements.

Networking

The most exciting KVM feature in MAAS 2.5 is that MAAS can now attach the virtual machines it creates to different networks (at machine creation). MAAS is also more intelligent about how it attaches to the default networks defined on the hypervisor. Previously, MAAS would look for a maas network and attach to it, otherwise attaching to the default network. MAAS now checks whether DHCP on a network is managed by libvirt before attaching: if it is, MAAS will not be able to PXE boot the machines in order to manage them, so MAAS falls back to a network on the hypervisor known to be DHCP-enabled in MAAS, even if that network is not associated with a network in libvirt.
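That selection logic can be modeled roughly as follows. This is an assumption-based sketch of the decision described above, not MAAS's code, and the network-attribute names (`libvirt_dhcp`, `maas_dhcp`) are invented for illustration:

```python
def choose_network(networks):
    """Prefer a 'maas' network, then 'default', skipping any network
    whose DHCP is managed by libvirt (libvirt-managed DHCP prevents
    PXE booting); otherwise fall back to a network known to be
    DHCP-enabled in MAAS."""
    usable = [n for n in networks if not n.get("libvirt_dhcp")]
    for preferred in ("maas", "default"):
        for net in usable:
            if net["name"] == preferred:
                return net["name"]
    for net in usable:          # fallback: any MAAS-DHCP-enabled network
        if net.get("maas_dhcp"):
            return net["name"]
    return None

# The 'maas' network is skipped because libvirt manages its DHCP.
nets = [{"name": "maas", "libvirt_dhcp": True},
        {"name": "br-ext", "maas_dhcp": True}]
print(choose_network(nets))  # → br-ext
```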

Furthermore, MAAS now introduces the ability to compose KVM virtual machines with interfaces. Users of the API, using either the machines allocate endpoint or the pod compose endpoint, are now able to include an interfaces constraint, allowing the selection of KVM pod NICs.

If the interfaces constraint is left unspecified, MAAS 2.5.0 will maintain backward compatibility with earlier releases by first checking for a maas network, then a default network for attachment to the KVM pod.

If the interfaces constraint is specified, MAAS will create a bridge or macvlan attachment to the networks that match the constraints. MAAS will prefer bridge interface attachments when possible, since this typically results in successful communication. For example, consider the following constraint:

interfaces=eth0:space=maas;eth1:space=storage

In this case, assuming the KVM pod is deployed on a machine or controller with access to the maas and storage spaces, MAAS will create an eth0 interface bound to the maas space, and an eth1 interface bound to the storage space.

A specific (unallocated) IP address can also be requested with a constraint such as:

interfaces=eth0:ip=192.168.0.42

In this case, MAAS will automatically convert the ip constraint to a VLAN constraint (for the VLAN where its subnet can be found), and assign the IP address to the newly-composed machine upon allocation.
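The constraint syntax shown above (semicolon-separated interface specs, each carrying key=value requirements) can be parsed as follows. This is an illustrative parser for the examples in this section, not MAAS's own implementation:

```python
def parse_interfaces_constraint(constraint):
    """Parse the value of an interfaces constraint, e.g.
    'eth0:space=maas;eth1:space=storage', into a dict mapping each
    interface label to its key=value requirements."""
    parsed = {}
    for spec in constraint.split(";"):
        label, _, reqs = spec.partition(":")
        parsed[label] = dict(pair.split("=", 1) for pair in reqs.split(","))
    return parsed

print(parse_interfaces_constraint("eth0:space=maas;eth1:space=storage"))
# → {'eth0': {'space': 'maas'}, 'eth1': {'space': 'storage'}}
print(parse_interfaces_constraint("eth0:ip=192.168.0.42"))
# → {'eth0': {'ip': '192.168.0.42'}}
```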

Users can now also create virtual machines over the API and benefit from these changes.

Storage Support for CentOS (and RHEL)

MAAS now supports configuring storage for CentOS and RHEL deployments. The support includes:

Custom partitions with different filesystems (with the exception of ZFS and Bcache)

LVM

RAID

Note that Bcache and ZFS are not included as part of this support because, at the time of enablement, these features are not supported on CentOS 7.

Thanks to the Curtin team for adding the ability to configure storage in CentOS/RHEL for us to use in MAAS.

ESXi

MAAS can now deploy VMware's ESXi. The first supported release is ESXi 6.7. Since ESXi is a special case, deployment support is limited compared to Linux-based operating systems, but it is similar to our Windows support. ESXi support in MAAS includes:

Storage

VMware ESXi uses a specific partitioning layout which cannot be modified. MAAS will install the image to the boot disk and expand the VMFS partition into the remaining available space on the boot disk.

Networking

Network configuration is also supported for ESXi. MAAS will:

Name each interface with the same name it has in MAAS.

Configure each interface based on its IP assignment mode in MAAS.

Post-installation customization

Post-installation customization is not available via preseeds (e.g. curtin_userdata), but it is available via user_data. Administrators can deploy a machine with ESXi, providing a shell, Python or Perl script as 'user_data'; the image will then execute these scripts on first boot.
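A minimal user_data script of this kind typically guards itself with a marker file so the customization runs only once. The sketch below is a hypothetical example (the marker path and setup steps are placeholders), not a MAAS-supplied template:

```python
import os
import tempfile
import time

def first_boot(marker):
    """Record a marker file the first time the script runs and
    return True; return False on subsequent boots so the
    customization only happens once."""
    if os.path.exists(marker):
        return False
    with open(marker, "w") as fh:
        fh.write("customized at %s\n" % time.ctime())
    # site-specific setup (licensing, vSwitch config, ...) goes here
    return True

marker = os.path.join(tempfile.mkdtemp(), "first-boot.done")
print(first_boot(marker))  # → True  (first boot: customization runs)
print(first_boot(marker))  # → False (already customized)
```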

Adding machines with IPMI credentials

In previous versions of MAAS, users were required to provide the MAC address of the PXE interface when adding new machines to MAAS. This allowed MAAS to correctly identify the booting machine to provide the correct configuration.

Starting from 2.5, this behavior has changed slightly. Users now only need to specify IPMI credentials, or a non-PXE MAC address for non-IPMI machines. MAAS then automatically discovers the machine and runs the enlistment configuration, matching either the BMC address (for IPMI machines) or the non-PXE MAC address.
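The two cases can be illustrated as the parameters one would pass when adding a machine. The field names below follow the MAAS CLI's power_parameters convention, but treat the exact names and values as assumptions to be checked against your MAAS version's documentation:

```python
# IPMI machine: only BMC credentials are needed; MAAS discovers the
# machine by matching the BMC address during enlistment.
ipmi_machine = {
    "power_type": "ipmi",
    "power_parameters_power_address": "10.0.1.15",  # hypothetical BMC IP
    "power_parameters_power_user": "maas",
    "power_parameters_power_pass": "secret",
}

# Non-IPMI machine: a non-PXE MAC address is supplied instead, and
# MAAS matches the machine on that MAC.
manual_machine = {
    "power_type": "manual",
    "mac_addresses": "52:54:00:12:34:56",  # hypothetical non-PXE MAC
}
```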

Commissioning scripts during enlistment

In older versions, the MAAS enlistment process only gathered very basic information about a machine (more precisely, its architecture and network interface information). As of 2.5, this behavior has changed. MAAS now runs all built-in commissioning scripts and gathers the minimum required information about the machine during the enlistment process. Custom commissioning scripts are not run during enlistment.

Note that after the enlistment process, the machine will be placed in the ‘New’ state, as before. Administrators may transition the machine into a ‘Ready’ state by running ‘Commissioning’ or ‘Testing’.

Resource Pools

MAAS 2.5 officially introduces resource pools. Administrators can organize resources (machines) into pools. In the near future, administrators will be able to restrict access to pools to specific users.

Note that this feature is now backported to 2.4 and will be available in 2.4.1.

Web UI

A new machine listing

The filter tool is now collapsible. It is collapsed by default to allow the machine list to use the full width of the window.

The machine list grid now has double rows to show more information. Each row displays the primary IP, power type and status, tags and owner, and pool and zone. The presentation of RAM, storage, host and domain has also been improved.

The machine list is now more responsive to varying screen widths. Columns appear dynamically to show more information, if possible.

An interfaces tab for machines view

This table now uses a double-row style to show information in pairs:

MAC address and hostname

VLAN and fabric

Subnet and optional name

IP address and mode

Improvements to the header-action menu

The actions menu has been redesigned. It now groups actions logically together with the number of machines upon which each action can be performed. Actions that are unavailable are now visible but disabled.

A new KVM pod details view

A new KVM pod details view has been introduced that displays the cores, RAM and storage details of the KVM. Each storage pool within a KVM is now displayed individually, with path and type information visible at a glance.

The overcommit ratio of a KVM machine is now displayed clearly in the heading of the section, with calculated charts showing the current state of each machine.

The virsh address is now shown without needing to visit the configuration tab.

Introduction of Vanilla framework 1.8

Updated Vanilla framework with tighter controls over spacing and padding. Lists and forms are now more tightly padded for a denser display of information.

KVM compose window reworked

The new compose UI allows multiple requests across multiple storage pools. Storage pool type and path information is now available to make selection clearer.

The introduction of meter charts makes the data easier to understand at a glance. This includes warnings before composing to prevent requests that exceed available space.

Custom KVM networking is now possible. Multiple interfaces can be configured on a VM.

Networking can be determined by space or fabric.

Subnets are filtered by space or fabric.

Configuring a manual IP is now possible directly in the GUI.

A new subnets menu describes VLANs and fabrics and allows selection in a single action.

Installation

MAAS 2.5.0 is available for Ubuntu Cosmic & Bionic and can be installed from our PPA as follows: