Archives For Linux LVM

The first thing you must learn about RAID technologies in Linux is that they have nothing in common with HP-UX, and I mean nothing! Yes, there is LVM, but that's all: mirroring a volume group, for example, is not done through LVM commands. In fact, you are not going to mirror the volume group itself but the block device(s) where the volume group resides.

There are two tools to manage RAID in Linux.

dmraid

mdadm

Dmraid is used to discover and activate software (ATA) RAID arrays, commonly known as fakeRAID, while mdadm is used to manage Linux software RAID devices.

dmraid

Dmraid uses libdevmapper and the device-mapper kernel driver to perform all of its tasks.

The device-mapper is a component of the Linux kernel; it is the way the kernel performs all of its block device management. It maps one block device onto another and forms the base of volume management (LVM2 and EVMS) and software RAID. Multipathing support is also provided through the device-mapper. Device-mapper support is present in 2.6 kernels, and there are patches for the most recent 2.4 kernel versions.

mdadm

mdadm, is a tool to manage the Linux software RAID arrays. This tool has nothing to do with the device-mapper, in fact the device-mapper is not aware of the RAID arrays created with mdadm.

To illustrate this, I created a RAID1 device, /dev/md0, and showed its configuration with mdadm --detail. Then, with dmsetup ls, I listed all the block devices seen by the device-mapper: there was no reference to /dev/md0.
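The session went roughly like this (a sketch, since the original screenshot is gone; /dev/sdb1 and /dev/sdc1 are example member devices):

```shell
# Create a RAID1 array from two partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Show the configuration of the new array
mdadm --detail /dev/md0

# List the block devices known to the device-mapper;
# /dev/md0 does not appear here because it is handled by
# the MD driver, not by the device-mapper
dmsetup ls
```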

Instead, mdadm uses the MD (Multiple Devices) device driver, which provides virtual devices built from other independent devices. Currently the MD driver supports the following RAID levels and configurations:

RAID1

RAID4

RAID5

RAID6

RAID0

LINEAR (a concatenated array)

MULTIPATH

FAULTY (a special failed array type for testing purposes)

The configuration of the MD devices is contained in the /etc/mdadm.conf file.
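A minimal mdadm.conf could look like the fragment below; the device names and the UUID are placeholders, and the real ARRAY line is best generated with mdadm --detail --scan:

```shell
# /etc/mdadm.conf -- illustrative example only

# Devices mdadm should scan for array members
DEVICE /dev/sdb1 /dev/sdc1

# Array definition; replace the UUID with the one reported by:
#   mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```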

As a final thought, my recommendation is: if there is a hardware RAID controller available, like the HP Smart Array P400 for example, go hardware RAID five by five; if not, always use mdadm, even if there is an onboard RAID controller.


Now that my daily work is more focused on Linux, I find myself performing the same basic administration tasks in Linux that I'm used to doing in HP-UX. Because of that, I thought a post explaining how the same basic file system and volume management operations are done in both operating systems was necessary. Hope you like it :-)

This is going to be a very basic post, intended only as a reference for myself and for any other sysadmin coming from either Linux or HP-UX who wants to know how things are done on the other side. Of course, this post is no substitute for the official documentation and the corresponding man pages.

I’ve used Red Hat Enterprise Linux 5.5 as the Linux version and 11iv3 as the HP-UX version.

The following topics will be covered:

Volume group creation.

Logical volume operations.

File system operations.

Volume group creation

Physical volume and volume group creation are the most basic tasks in LVM, both in Linux and HP-UX, but although the command syntax is quite similar in both operating systems, the whole process differs in many ways.

– HP-UX:

The example used is valid for both the 11iv2 and 11iv3 HP-UX versions, with the exception of the persistent DSFs: on 11iv2 you will have to substitute the corresponding legacy device files for them.

Go into the VG subdirectory and create the group device special file. For the Linux guys: in HP-UX each volume group must have a group device special file under its own subdirectory in /dev. This group DSF is created with the mknod command and, like any other DSF, the group file must have a major and a minor number.

For LVM 1.0 volume groups the major number must be 64, and for LVM 2.0 it must be 128. Regarding the minor number, the first two digits uniquely identify the volume group and the remaining digits must be 0000. In the example below we're creating a 1.0 volume group.
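The commands look roughly like this (vg_new and the 01 in the minor number are example values):

```shell
# Create the volume group directory and its group DSF;
# major 64 marks an LVM 1.0 VG, minor 0x010000 makes this VG number 01
mkdir /dev/vg_new
mknod /dev/vg_new/group c 64 0x010000
```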

Then create the volume group with the vgcreate command; the arguments passed are the two physical volumes previously created and the size in megabytes of the physical extent. The latter is optional, and if it is not provided the default of 4MB will be used.
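A sketch of the whole HP-UX sequence, using example 11iv3 persistent DSFs (disk4 and disk5 are placeholders):

```shell
# Initialize the disks as physical volumes (pvcreate uses the raw DSFs)
pvcreate /dev/rdisk/disk4
pvcreate /dev/rdisk/disk5

# Create the volume group from the block DSFs;
# -s sets the extent size in MB and can be omitted to take the 4MB default
vgcreate -s 8 /dev/vg_new /dev/disk/disk4 /dev/disk/disk5
```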

Create the physical volumes. Here is where the first difference appears: in HP-UX a physical volume is a whole disk, with the exception of boot disks on Itanium systems, but in Linux a physical volume can be either a whole disk or a partition.

To explain the session: first a new partition is created with the command n and its size is set (in this particular case we are using the whole disk); then we change the partition type, which by default is set to Linux, to Linux LVM. To do that we use the command t and enter 8e as the corresponding hexadecimal code; the available partition types can be listed by typing L.
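The Linux side would look something like this (the original screenshot is lost, so /dev/sdb is an example disk and the fdisk keystrokes are shown as comments):

```shell
# Partition the disk and tag the partition as Linux LVM
fdisk /dev/sdb
#   n  -> create a new partition spanning the whole disk
#   t  -> change the partition type
#   8e -> hexadecimal code for Linux LVM (L lists all types)
#   w  -> write the partition table and exit

# Initialize the new partition as a physical volume and build the VG;
# note that no mknod step is needed in Linux
pvcreate /dev/sdb1
vgcreate vg_new /dev/sdb1
```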

root@hp-ux:/# lvremove /dev/vg_new/lvol_test
The logical volume "/dev/vg_new/lvol_test" is not empty;
do you really want to delete the logical volume (y/n) : y
Logical volume "/dev/vg_new/lvol_test" has been successfully removed.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:/#

– Linux:

Create the logical volume with the lvcreate command; the most basic options (-L, -l, -n) are the same as in HP-UX.
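For example (names and size are placeholder values):

```shell
# Create a 1GB logical volume named lvol_test inside vg_new;
# -L takes a size, -l would take a number of extents instead
lvcreate -L 1G -n lvol_test vg_new
```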

Unlike the volume group section, the basic logical volume operations are performed in almost the same way in both operating systems. Of course, if you want to perform mirroring the differences are bigger, but I will leave that for a future post.

File system operations

The final section of the post is about basic file system operations: we are going to create a file system on the logical volume from the previous section and later extend it, this time including the volume group extension.
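On RHEL 5.5 that would be ext3; a sketch, with example names and mount point:

```shell
# Create an ext3 file system on the logical volume and mount it
mkfs.ext3 /dev/vg_new/lvol_test
mkdir -p /mnt/test
mount /dev/vg_new/lvol_test /mnt/test
```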

Finally, resize the file system with the resize2fs tool. Unlike fsadm in HP-UX, which needs the new size as an argument in order to extend the file system, if you simply pass the logical volume as the argument, resize2fs will extend the file system to the maximum size available in the LV.
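The whole extension could run like this (the new PV /dev/sdc1 and the sizes are example values):

```shell
# Grow the volume group with a new physical volume
pvcreate /dev/sdc1
vgextend vg_new /dev/sdc1

# Grow the logical volume by 1GB
lvextend -L +1G /dev/vg_new/lvol_test

# Grow the file system; with no size argument resize2fs
# fills the whole logical volume
resize2fs /dev/vg_new/lvol_test
```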


Some of the features I always liked about the Linux LVM2 implementation are the lvs, vgs and pvs commands. With these simple commands a short list of the LVs, VGs and PVs active on the system can be obtained.

Because of this I decided to write three scripts to emulate the behavior of vgs, lvs and pvs on my HP-UX servers. These scripts take advantage of the LVM "-F" switch mentioned before, so they will not work on HP-UX 11.23 or any previous version. If anyone recognizes the "scripting" style, it's because I grabbed some parts of the code from Olivier's ioscan_fc2.sh and adapted them to my needs, so credit goes to him as well :-)

VGS: Lists the volume groups in the /etc/lvmtab file; if the server is part of a cluster, the volume groups active on other nodes will be shown as deactivated. With the -v switch individual VGs can be queried.
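The core idea behind such a script can be sketched like this; it is only an outline under my assumptions (lvmtab is binary, so the VG paths are pulled out with strings, and the field names produced by vgdisplay -F should be checked against the 11.31 man page):

```shell
#!/bin/sh
# Rough vgs-like listing for HP-UX 11.31.
# /etc/lvmtab is binary: extract the /dev/vgXX paths with strings,
# then query each VG with the machine-readable -F output,
# which prints colon-separated name=value fields.
for vg in $(strings /etc/lvmtab | grep '^/dev/vg'); do
    vgdisplay -F "$vg" 2>/dev/null | tr ':' '\n' |
        awk -F= '$1 == "vg_name" || $1 == "vg_status" \
                 { printf "%s ", $2 } END { print "" }'
done
```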