First, I am more of a Windows guy and mostly a copy-paste Linux admin, with a dozen Google search results open for each Linux command.
Now as I understand, I have 2 options:
a.) Add some 200 GB to the existing disk via the VMware client, then extend the LVM partition to use the additional space.
b.) Add a 200 GB virtual disk to this virtual machine and attach it to Zimbra as additional storage, one partition for the index and one for the store.

Which option is best for reliability, performance, and stability?

04-12-2012, 12:40 AM

christo

Hi all,

My first post :) I'm also sitting with this dilemma; does anyone know the best route forward?

04-17-2012, 03:35 PM

Labsy

I don't wanna spam the forum, but I would appreciate an experts' opinion on the subject. Any help would be much appreciated.
Thanx!

04-17-2012, 07:02 PM

ccelis5215

Labsy,

For me, you are the "experts" you wanna find.

Don't mind the lack of responses.

As always, you have to try.

ccelis

04-18-2012, 12:40 AM

christo

answer coming

hey guys,

We are doing some tests with this and it's going quite well, about 80% done, and we'll post the outcome if it's successful :) Hold thumbs!

06-03-2012, 03:06 PM

Labsy

Quote:

Originally Posted by christo

hey guys,
we are doing some test with this and its going quite well, about 80% done and we'll post the outcome if its successful :) hold thumbs

Hi Christo,

Any results yet? I need to decide which way to go, either extend the existing volume or add a new one, and I would appreciate your test results, of course if you have some :)

06-04-2012, 01:43 AM

christo

Hey Labsy,

We were so close. Our tests on copying the data off the VM guest, inserting the bigger drives, and extending the Linux volume all succeeded without any problems. When we moved over to the production server, however, we weren't able to copy the Zimbra VMDK file back to the new VM guest. It was 600 GB and failed at about 53% every time, and I have no idea why. We did document how to do the Linux extension, though, and I will share that.
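As an aside, for copies that large it helps to checksum both ends, so silent truncation or corruption (like a copy that dies at 53%) is caught before the source is discarded. A minimal sketch; the file names are made up for illustration, and `cp` stands in for whatever transport you actually use:

```shell
# Create a small sample file standing in for the VMDK (name is illustrative).
dd if=/dev/zero of=zimbra-flat.vmdk bs=1024 count=64 2>/dev/null

# Checksum the source before the copy...
src_sum=$(sha256sum zimbra-flat.vmdk | awk '{print $1}')

# ...copy it (scp, NFS, whatever transport applies in your setup)...
cp zimbra-flat.vmdk zimbra-flat-copy.vmdk

# ...and verify the copy matches before deleting the original.
dst_sum=$(sha256sum zimbra-flat-copy.vmdk | awk '{print $1}')
if [ "$src_sum" = "$dst_sum" ]; then
    echo "copy verified"
else
    echo "copy CORRUPT - do not delete the source" >&2
fi
```

The same check would have flagged the truncated 600 GB copy immediately instead of at Zimbra start-up.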

06-06-2012, 03:01 PM

Labsy

Hi Christo,

sorry for the late reply - I was busy with other, unrelated stuff.

As of now, I have found that my ZCS 7.14 is using LVM on the existing partitions. But do I need to use LVM on the newly added disk/partition?

Or do I need LVM only if I go with extending the existing partition?

Also, I'll need to free up some space on the existing setup, because it is 97% full, which is bad for the future. Huh... lots of work tonite :)
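As a side note, that 97% figure can be watched from a script instead of by eye. A small sketch using GNU df; the mount point "/" here is just a stand-in for wherever /opt/zimbra lives on your system:

```shell
# Print the use% of the filesystem holding a given path ("/" as a
# placeholder for the mount point that holds /opt/zimbra).
pct=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
echo "disk use: ${pct}%"

# Warn when usage crosses a threshold.
if [ "$pct" -ge 90 ]; then
    echo "WARNING: over 90% full - time to extend or clean up" >&2
fi
```

Dropped into cron, this gives an early warning well before Zimbra starts refusing mail for lack of space.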

06-06-2012, 06:43 PM

Labsy

Hi,

I had to make a quick decision about the disk being 97% full, and I decided to EXTEND the existing partition. Here's how I did it:

SYSTEM: Ubuntu 8.04 LTS
Default install with LVM.

All this was done as root

Code:

# sudo su root

1.) Powered down the ZIMBRA virtual machine on my ESXi host and made a FULL BACKUP.

2.)-5.) Grew the virtual disk by 200 GB in the VMware client, powered the VM back on, and created the new partition with fdisk:

Code:

# fdisk /dev/sda

- pressed p to print the partition table and identify the existing partitions. There were sda1, sda2 and sda5
- pressed n to create a new primary partition
- pressed p for primary
- pressed 3 for the partition number (remember, sda1 and sda2 were already there, so sda3 is next in my case)
- pressed ENTER two times to accept the suggested Start and End positions
- pressed w to write the changes to the partition table
NOTE: Don't panic when you get "WARNING: Re-reading the partition table failed with error 16: Device or resource busy."
This is normal; you just need to reboot the machine.

6.) Restarted the virtual machine

Code:

# reboot

7.) Verified that the changes were written to the partition table and that the new partition is of type 83 (Linux):

Code:

# fdisk -l

8.) Then I converted the new partition to a physical volume:

Code:

# pvcreate /dev/sda3

8.1.) Checked the name of my volume group:

Code:

# vgdisplay | grep "Name"
VG Name zimbra

Remember this name!

9.) Extended the Volume Group with the new physical volume:

Code:

# vgextend zimbra /dev/sda3

Here "zimbra" is the VG name you grepped in the previous step.

10.) Verified how many physical extents were available in the Volume Group:

Code:

# vgdisplay zimbra | grep "Free"
Free PE / Size 51224 / 200.11 GiB

OK, now we have a bit more than 200 GB available to extend with, but we'll stick to 200 GB to be on the safe side.
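For the curious, that figure checks out: LVM's default physical extent size is 4 MiB, so the free-PE count from vgdisplay converts to GiB with a bit of shell arithmetic (integer math, so the fraction is dropped):

```shell
# Free PE count taken from the vgdisplay output above;
# the PE size defaults to 4 MiB (check yours with: vgdisplay | grep "PE Size").
free_pe=51224
pe_mib=4
echo "$(( free_pe * pe_mib / 1024 )) GiB free"   # prints: 200 GiB free
```

Which is why rounding down to a flat 200 GB for the lvextend below is the safe choice.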

The Volume Group holds 2 Logical Volumes, root and swap_1 (run lvdisplay to list them). We want to extend the root volume and leave swap_1 unchanged.

11.) Extended the root Logical Volume:

Code:

# lvextend -L+200G /dev/zimbra/root

The 200G is the size we determined to be free in step 10.

12.) Expanded the ext3 filesystem online to fill the Logical Volume:

Code:

# resize2fs /dev/zimbra/root

13.) Checked the new space available:

Code:

# df -h

14.) Finally, you need to REBOOT once again. But be aware that this might take HOURS to complete!
Upon reboot, an fsck will be forced, with the message:

Code:

/dev/mapper/zimbra-root primary superblock features different from backup, check forced.

This is normal. E2fsck forces a check when it notices that the backup superblocks differ from the primary superblock, to avoid corrupting a valid backup by copying the primary superblock over the backup superblocks.
It is still running; in my case it looks like it will take 3-4 hours. Hope no "bad" words show up in the report.

**EDIT** fsck finished within 1 hour, reported correcting some minor errors (probably updating the superblocks after the resize), forced one more REBOOT, and after that Zimbra started normally: no issues, performance OK. The disk space graphs in the Admin console also updated accordingly. Kewl.
Seems we're back in business, with a smile on my face :)