A couple of my favorite Solaris 10 features provide the ability to create virtual machines with ease and run them on a very efficient, full-featured file system.

You can learn more about the Zettabyte File System (ZFS) here. In summary, if you are not already using ZFS then it’s time to read up and start.

Zones are virtual machines running within a Solaris system. If you don’t know what a virtual machine is just picture a single physical server running multiple independent and isolated copies of the operating system. Solaris 10 zones (or containers) allow you to run a number of virtual machines on one physical machine. Although there are plenty of benefits to this approach the example below mainly focuses on the ease of deployment.

In the example there is a one to one relation between physical machine and zone. The same process could be used to create multiple zones on one physical machine if there was a need and the hardware resources were sufficient.

If you stumble upon this page keep in mind that with newer releases of software some of the options and methods may have changed. For obvious reasons I had to make some changes to hostnames, etc. If you find typos please let me know.

I don’t take any credit for inventing any of this. This info is available in many places on the web already. I’ve left a copy here mainly so I can refer back to when needed.

Background:

This is a simple process used to clone a bunch of web slingers. It assumes that each physical machine has the same base O/S, packages and patch level. Trying to use this process on systems with different packages or patch levels guarantees a headache. There are plenty of ways to install the base OS. While outside the scope of this blurb, I’m fond of using Flash Archives (flar). Since many of the systems I work with are highly customized (and, ugh.. constantly changing) having a handful of these systems available as flars is helpful.

The environment in the example is basic: each server has one child zone installed, with its own instance of Apache/PHP/MySQL installed and running from a mounted NFS volume. (Note: I think that web farms with multiple boxes in front of some load balancing hardware provide more redundancy than getting one monster server and dropping multiple zones on it. Just my opinion though.)

Process:

Part One – Create some disk pools and ZFS space to run the virtual machines in.

1. What disks do we have? (I already knew disk c3t2d0 was in use; you should check df -h first if you are not sure.)

We don’t really want to do anything here, so Ctrl-C is issued to exit without screwing things up horribly.
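The listing comes from format; a session looks roughly like this (the output is illustrative, and the disk names and geometry here are from my boxes, so yours will differ):

```shell
# List the disks the system can see. 'format' enumerates the
# available disks and then prompts for a selection; since we only
# wanted the listing, exit at the prompt with Ctrl-C.
format
# Searching for disks...done
#
# AVAILABLE DISK SELECTIONS:
#        0. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
#        1. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
# Specify disk (enter its number): ^C
```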

2. Create a pool called zones from physical disk c3t3d0. Notice that there is no mucking with mkfs or editing vfstab. The device is mounted and available instantly. Also note that if we had multiple slices or disks that we would use a similar command to create a mirror or raidz of said slices and/or disks (example: zpool create zones mirror c0t0d0s5 c0t1d0s5). The name of the pool can be foo or fred or whatever. I picked the name ‘zones’.
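A sketch of the commands (the pool name and disk come from the text above; the verification steps are just how I like to double-check):

```shell
# Create a pool named 'zones' from the whole disk c3t3d0.
# No mkfs, no vfstab edits: the filesystem is created and
# mounted at /zones immediately.
zpool create zones c3t3d0

# Confirm the pool and filesystem exist and are mounted.
zpool list
zfs list

# With multiple disks or slices, a mirror is the same one-liner:
#   zpool create zones mirror c0t0d0s5 c0t1d0s5
```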

Okay, in four easy steps we setup some zfs space. Why would we do this? Well, for one thing because we can and more importantly because we can do things like this:
For example: exporting the pool to another box, making a snapshot, and (not so cool) deleting the whole thing with one simple command.
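Sketches of those tricks, with hypothetical snapshot names:

```shell
# Snapshot the pool's filesystem before doing something risky...
zfs snapshot zones@pre-patch

# ...and roll back if the something risky goes badly.
zfs rollback zones@pre-patch

# Move the whole pool to another box: export it here, move the
# disks (or use shared storage), then import it there.
zpool export zones          # on the old server
zpool import zones          # on the new server

# The not-so-cool one: the entire pool, gone, in one command.
zpool destroy zones
```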

Part Two – Create a zone to run application $foo in with manageable resources.

Think of a zone as a virtual machine or a chrooted environment.

We want a virtual machine to run a web application, but we only want to allow nn RAM and nn CPU for this new machine. This way, when we run our forkbomb – or our standard Poorly Optimized Software – the damage is limited to the virtual machine’s isolated environment. In theory this should work well; in practice I’ve seen a virtual machine take quite some time to reboot after the hosing I gave it. The actual server, called the global zone, was running fine though, which is what we want. Still, the slow response to kill and restart the virtual machine was/is a concern.

1. Here we discover that capped-cpu did not make it into our release of Solaris (10 8/07). Argh! This was the latest release at the time this was written.
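What that discovery looked like, roughly (zonecfg reads subcommands from stdin, so a here-document works; the exact rejection text varies by release):

```shell
# Try to add a capped-cpu resource. On Solaris 10 8/07 zonecfg does
# not know this resource type yet and rejects it; capped-cpu arrived
# in a later update.
zonecfg -z webserver1 <<'EOF'
add capped-cpu
EOF
```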

2. Since we cannot use capped-cpu, which would give us more granularity, we will have to stick with dedicated-cpu. There are other methods that can be used, but I prefer the simple approach. (You could use pooladm and create processor sets, which I may do again depending on how well ‘dedicated-cpu’ works.)
Here we give the zone a single CPU and 4 gigs of RAM.
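The zonecfg session, assuming the zonepath from Part One; the NIC name (e1000g0) and IP address are placeholders for whatever your hardware and network actually use:

```shell
# Configure the zone: zonepath on the ZFS pool, one dedicated CPU,
# and a 4 GB physical memory cap.
zonecfg -z webserver1 <<'EOF'
create
set zonepath=/zones/webserver1
set autoboot=true
add dedicated-cpu
set ncpus=1
end
add capped-memory
set physical=4g
end
add net
set physical=e1000g0
set address=10.0.0.10
end
verify
commit
EOF
```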

[root@someserver:~/bin]# zoneadm -z webserver1 install
/zones/webserver1 must not be group readable.
/zones/webserver1 must not be group executable.
/zones/webserver1 must not be world readable.
/zones/webserver1 must not be world executable.
could not verify zonepath /zones/webserver1 because of the above errors.
zoneadm: zone webserver1 failed to verify
[root@someserver:~/bin]# chmod go-rwx /zones/webserver1/
[root@someserver:~/bin]# zoneadm -z webserver1 install
Preparing to install zone <webserver1>.
Creating list of files to copy from the global zone.
Copying <831> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <700> packages on the zone.
Initialized <700> packages on zone.
Zone <webserver1> is initialized.
The file </zones/webserver1/root/var/sadm/system/logs/install_log> contains a log of the zone installation.
[root@someserver:~/bin]#

3. Okay, let’s login. (I did not paste the initial screens; when you first login there are a couple of questions to answer about hostname, timezone, root password, DNS, Kerberos and such.) You only have to do this for the first zone you create. It takes about 30 seconds to answer those and then you have a prompt.
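The boot-and-console steps, for completeness (zlogin -C attaches to the zone console, which is where the sysid questions appear):

```shell
# Boot the freshly installed zone.
zoneadm -z webserver1 boot

# Attach to its console to answer the one-time sysid questions
# (hostname, timezone, root password, name services, and so on).
zlogin -C webserver1

# When finished, detach from the console with the ~. escape.
```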

After a certain amount of customization within the running zone called webserver1 it is ready to deploy to some other servers. Some examples of customization would be adding some users or groups, editing vfstab, compiling some software or toggling services to meet your need. The key here is that once you’ve created your ‘golden’ zone (which has a base install taken from your ‘golden’ flar) you can then clone this zone with minimal effort. Yeah, I’m sure there are lots of other ways to do this. This works well for me.

Part Three – Clone / Migrate an installed zone to other machines.

Note: In order to pull this off, all of the physical servers need to be running the same versions of software and the same patches/packages must be installed.

Here we shut down the original zone, detach it and transfer it to another physical server. When we detach the zone, a configuration file is dropped in the zone’s home. This is an important step.
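The shutdown/detach/transfer sequence, sketched. I’m assuming GNU tar (gtar) for the z flag, since the stock Solaris tar lacks it:

```shell
# Cleanly halt the zone, then detach it. Detaching writes the zone's
# configuration (SUNWdetached.xml) into the zonepath -- that file is
# what makes the later 'create -a' on the new box possible.
zoneadm -z webserver1 halt
zoneadm -z webserver1 detach

# Bundle up the zonepath and ship it to the other physical server.
cd /zones
gtar czf webserver1.tgz webserver1
scp webserver1.tgz anotherserver:/zones/

# On anotherserver: extract, then rename to the new zone's path.
#   cd /zones && gtar xzf webserver1.tgz && mv webserver1 webserver2
```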

After you extract the webserver1.tgz archive you should edit nodename and hosts. You can do so after booting the zone but if you forget you’re sure to get a headache. These files live at:
/zones/webserver2/root/etc/{nodename,hosts}

4. Now that the trivial stuff is done it’s time to do the hard part.
Pay attention here: the important line below is “create -a /zones/webserver2”. This is the path where you extracted the webserver1.tgz archive. Don’t forget the -a.

The only zonecfg changes needed are what to call the new zone and the IP address of the zone. (You will need to change the net resource’s physical property if the network hardware is different.)
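The zonecfg session on the new box; the IP addresses are the same placeholders as earlier, and the trailing comment shows where you’d change the NIC if the hardware differs:

```shell
# Register the transferred zone. 'create -a' reads the detached
# configuration (SUNWdetached.xml) out of the extracted zonepath;
# then fix up the identity bits.
zonecfg -z webserver2 <<'EOF'
create -a /zones/webserver2
select net address=10.0.0.10
set address=10.0.0.11
end
commit
EOF

# If the NIC differs on this box, also run 'set physical=<nic>'
# inside the selected net scope before the 'end'.
```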

You should now have all of the files and configuration needed to attach and boot the new zone.

5. Permissions must be set properly; the same thing happens when you first set up a zone. Perms :).

[root@anotherserver:~]# zoneadm -z webserver2 attach
/zones/webserver2 must not be group readable.
/zones/webserver2 must not be group executable.
/zones/webserver2 must not be world readable.
/zones/webserver2 must not be world executable.
could not verify zonepath /zones/webserver2 because of the above errors.
zoneadm: zone webserver2 failed to verify
[root@anotherserver:~]# chmod go-rwx /zones/webserver2/
[root@anotherserver:~]# zoneadm -z webserver2 attach

6. Okay, boot the new virtual machine. Note that all of your customizations made on webserver1 are there.
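The boot, plus a couple of sanity checks from the global zone:

```shell
# Boot the cloned zone and make sure it is up and running.
zoneadm -z webserver2 boot
zoneadm list -cv

# Quick check: run a command inside the zone without logging in.
zlogin webserver2 uname -a
```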

After speaking with the Sun Data Center Architect who visited last week, I found out that some of the Sun documents may have been misinterpreted regarding CPUs/cores in zones. Previously I was under the impression that setting ncpus=1 in the zone would provide one dedicated CPU. This is not the case in the version of software we have; ncpus=1 is only one core. This applies to x86 hardware; see the comments section for more info.

Here is how we add more CPU:

The virtual server shows 1 CPU, which can be viewed with mpstat; note that it is 90% idle. This is actually one core.
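Here is the change, reusing the dedicated-cpu resource from Part Two; bump ncpus and reboot the zone so it takes effect:

```shell
# Give the zone a second 'CPU' (on this x86 box, a second core).
zonecfg -z webserver1 <<'EOF'
select dedicated-cpu
set ncpus=2
end
commit
EOF

# Reboot the zone to pick up the new processor allocation.
zoneadm -z webserver1 reboot

# From inside the zone, mpstat should now report two CPUs.
```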

Thanks. This ‘article’ was more of a cut-n-paste of emails sent to co-workers documenting some basic stuff. I added some explanation which I hope makes it easy to follow. Trade Rags? Well, you and I may still read them…anyone else?

You may want to add an addendum:
“Previously I was under the impression that setting ncpus=1 in the zone would provide 1 dedicated CPU. This is not the case in the version of software we have, ncpus=1 is only one core. ”

Not necessarily true, as with a Sun CMT server (Sun T5240, for example) ncpus=1 would equate to one thread.

The following doc on the Sun site states that the ncpus setting is for the number of CPUs (or just another core on the chip die, as I view it). I may not have mentioned it above, but these are all x86 boxes; I have not been able to play with any of the cool threaded / Niagara boxes like the one you mention.

Ah yes, I am sure it would make a huge difference on an x86 platform. Yeah, the threaded tech treats it entirely differently. And if you want a nice little headache, try staying within license compliance with Oracle and their DB products on one. It’s a mathematical circus, since it’s (CMT servers) all threaded, heh.