VG creation

Dear Admins,

Our requirement is to move the unused disk (PV) from VG vguat1 and add it to VG vgoradata, since vgoradata is short of space. I can see that /dev/dsk/c3t4d0 is not used. Enclosed is the file for your reference. Please revert for any more details.

Server: HP-UX B.11.23 U ia64

Could anyone help me accomplish this task? Thanks in advance.

Regards,
V.P

Re: VG creation

Hi:

Since '/dev/dsk/c3t4d0' is unused, 'vgreduce' it from the volume group, 'vguat1'. To add it to the 'vgoradata' volume group, you would use 'vgextend'.
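Together, the two commands might look like this; a sketch assuming the PV really carries no data and the VG names are as stated (verify with pvdisplay before touching anything):

```shell
# Confirm the PV is truly unused (no logical volumes allocated on it)
pvdisplay -v /dev/dsk/c3t4d0

# Remove the unused PV from vguat1
vgreduce vguat1 /dev/dsk/c3t4d0

# Add it to vgoradata (see the Max PE per PV caveat below before doing this)
vgextend vgoradata /dev/dsk/c3t4d0
```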

The problem that you have, however, is that the maximum number of physical extents for any physical volume in 'vgoradata' is 1618 (Max PE per PV). The physical volume you want to add can hold 4374 extents.

Thus, if you merely 'vgextend' c3t4d0 into the 'vgoradata' volume group, you are only going to gain 1618 extents of usable space.

You should 'vgmodify' the 'vgoradata' volume group to increase the maximum number of physical extents that can be allocated from any of the physical volumes in the volume group.

See the manpages for 'vgreduce', 'vgextend' and 'vgmodify' for more information.

Re: VG creation

It's just as James said: the number of available extents on the disk is greater than the number of extents per PV that the vgoradata volume group is set up to use. The disk has 4374 extents, but vgoradata will use only 1618 of them, wasting 2756 extents of space.

You will have to modify the Max PE per PV setting on vgoradata. You can use vgmodify; the man pages will help. You are using 11.23, so you may have to download and install a patch to get 'vgmodify': PHCO_35524. Here's a link to a PDF on vgmodify.

Re: VG creation

If you simply follow smatador's advice, you'll find you won't get to use the full capacity of /dev/dsk/c3t4d0.

Currently in vguat1, /dev/dsk/c3t4d0 has 4374 extents of 32 MB each, so its total size is 4374 * 32 MB = 139968 MB.

But the vgoradata volume group has a PE size of 16 MB and a "Max PE per PV" value of 1618. Without modifying these parameters, you can get only 1618 * 16 MB = 25888 MB, or just about 1/5 of the total capacity of /dev/dsk/c3t4d0.

You have HP-UX 11.23, so the vgmodify command should be available if the appropriate patch (PHCO_35524 or a superseding patch) has been installed.

The extent size of vgoradata was chosen at the time it was created, and the vgmodify command cannot change it. Instead, you can use vgmodify to increase the "Max PE per PV" value.

To cover the full size of c3t4d0 using 16 MB extents, you need to increase the "Max PE per PV" on vgoradata to 8748 (139968 MB / 16 MB). This may or may not be possible.
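The arithmetic above can be checked quickly in the shell:

```shell
# Total capacity of c3t4d0 at its current 32 MB extent size
echo $(( 4374 * 32 ))     # 139968 MB

# Usable space in vgoradata without raising Max PE per PV
echo $(( 1618 * 16 ))     # 25888 MB

# Max PE per PV needed to use the whole disk with 16 MB extents
echo $(( 139968 / 16 ))   # 8748 extents
```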

To see if it's possible to make the change, run:

vgmodify -p 10 -n -e 8748 -r vgoradata

If vgmodify says:

[...] VGRA for the disk is too big for the specified parameters. Decrease max_PVs and/or max_PEs.

then vgmodify cannot help you.

If vgmodify says:

Review complete. Volume group not modified

this means "OK, vgmodify can do it".

If it says:

vgmodify: New configuration does not require PE renumbering. Re-run without -n.

This is even better: you don't need to use the -n option. Leaving it out makes the operation simpler and safer.

The procedure for modifying the VG:

First, run "man vgmodify" and read it to understand what you're going to do, and to learn how to recover if the vgmodify process gets interrupted or fails.

You must first stop Oracle, unmount the vgoradata LVs and deactivate the VG to make the change:
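The post stops short of the actual commands; a rough sketch of the usual sequence follows (the mount point /u01 is a made-up placeholder, and the vgmodify parameters are the ones discussed above; adapt both to your layout):

```shell
# Stop Oracle first (via dbshut or your normal shutdown procedure)

# Unmount every filesystem that lives in vgoradata, e.g.:
umount /u01                        # hypothetical mount point

# Deactivate the volume group
vgchange -a n vgoradata

# Apply the new Max PE per PV value; drop -n if vgmodify said
# renumbering is not required (run with -r first to review)
vgmodify -p 10 -n -e 8748 vgoradata

# Re-activate the VG and remount the filesystems
vgchange -a y vgoradata
mount /u01
```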

Re: VG creation

A basic ServiceGuard cluster, or a RAC active/active cluster?

Anyway, the basic procedure is mostly the same.

Do the VG modification on the primary node.

Just replace "stop oracle, unmount the vgoradata LVs and deactivate the VG" in the basic procedure with "halt the package" (on all nodes, if a RAC cluster), and "re-activate the VG and mount the filesystems" with "restart the package".

There is one extra procedure in the end.

Whenever you add or remove PVs to/from cluster VGs, you must create a new VG map file and re-import the VG to all other nodes, to make the other nodes aware of the change.

-------------

1.) Begin by creating a new map file on the node you used to change the VG:

vgexport -v -s -p -m vgoradata.map vgoradata

With the '-p' option, this command does not actually export the VG: it only creates the map file.

Also check the minor device number of the VG group file:

ll /dev/vgoradata/group

The response will be something like:

crw-r--r-- 1 root sys 64 0x020000 Jul 22 2008 /dev/vgoradata/group

In this example, the minor device number is 0x020000. Find the corresponding number on your system and note it: it must be unique to each VG on the cluster.

Copy the new vgoradata.map file from the primary node to all failover nodes, and on each of them, export and re-import the VG so that the nodes will become aware of the changes done on the primary node:

2.) Export the vgoradata VG:

vgexport vgoradata

3.) Re-create the group file:

mkdir /dev/vgoradata
mknod /dev/vgoradata/group c 64 0xNN0000

(Replace the 0xNN0000 with the correct minor device number: it must be the same as on the primary node.)

4.) Re-import the VG.

vgimport -v -s -m vgoradata.map vgoradata

5.) If you have more than two nodes, repeat steps 2-4 on each failover node as necessary.

-----------

If vguat1 is a cluster volume group too, you must do the same procedure (steps 1-5) with vguat1 too.