Shrink EBS Root

This post was published 6 years, 3 months ago. Due to the rapidly evolving world of technology, some concepts may no longer be applicable.

My EC2 instances are set up to have only the operating system and program files on the root volume, with all other data (logs, mail, etc.) on a second EBS volume. This leads to a very stable root volume, which sees a minimum of changes. Fully configured, my root volume (using Amazon’s Linux) is 1.2GB. The default size of the root volume is 8GB. Given the above, it serves little purpose for me to have so much space allocated to my root volume and unused. I opted to shrink my root volume to 4GB, and may in future reduce this even more.

Before proceeding, it is worth noting that Amazon’s Linux uses ext4 as its root file system. Ext2 and ext3 root file systems can be resized in the same way; however, other file systems require a different procedure.

Snapshot root volume
This step is done either as a backup or to create a temporary EBS volume containing the data we will copy to the new, smaller volume.

Create a new (empty) EBS volume of the target size
This will become our new root volume – so, in my case, I created a 4GB EBS volume (it should be in the same availability zone as the instance you want to attach it to).

Prepare your original root volume
Either:

Stop (not terminate) the instance it is attached to, and detach the volume OR

Create a new EBS volume using the snapshot created earlier

Attach the volumes from the previous 2 steps to an instance
While you can attach them to the original instance, these volumes should not be mounted (only attached)
In the examples below, /dev/xvda1 refers to the original root volume, and /dev/xvdg refers to the new volume.

Run a file system check on the original volume (or volume derived from snapshot)

e2fsck -f /dev/xvda1

Copy the data to the new volume

Option 1: Use rsync
Format the new volume: mkfs.ext4 /dev/xvdg
Mount the two volumes, and use rsync -aHAXxSP /source /target
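Put together, the rsync option might look like the following sketch – the mount points are arbitrary choices of mine, and it assumes the commands are run as root on the instance both volumes are attached to:

```shell
# Sketch of the rsync option; /dev/xvda1 and /dev/xvdg are the volumes
# attached earlier, and the mount points are arbitrary choices.
mkfs.ext4 /dev/xvdg                 # format the new (empty) volume
mkdir -p /mnt/old /mnt/new
mount /dev/xvda1 /mnt/old           # original root volume
mount /dev/xvdg /mnt/new            # new, smaller volume
rsync -aHAXxSP /mnt/old/ /mnt/new/  # archive mode; keep hard links, ACLs, xattrs, sparse files
umount /mnt/old /mnt/new
```

Note the trailing slashes – rsync copies the contents of /mnt/old into /mnt/new, rather than creating /mnt/new/old.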

Resize the file system of the original volume to its minimum size
Since this is an ext4 file system, we use resize2fs

The ‘-M’ option shrinks the file system to its minimum size

The ‘-p’ option displays progress

resize2fs -M -p /dev/xvda1

The above command will output the new file system size. For instance:

Resizing the filesystem on /dev/xvda1 to 319011 (4k) blocks.

Calculate the number of chunks
The file system sits at the start of the partition and is contiguous – its size corresponds to the output of resize2fs above. We want to copy everything from the start to that point.

Since EBS charges for I/O, we want to use a somewhat large chunk size – I used 16MB.

blocks*4/(chunk_size_in_mb*1024) – round up a bit for safety (I ended up with 78 chunks, which I rounded to 80)
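As a sanity check, the arithmetic can be reproduced in the shell, using the block count that resize2fs reported above:

```shell
# blocks * 4 gives the file system size in KB (4k blocks);
# chunk_size_in_mb * 1024 gives the chunk size in KB.
blocks=319011
chunk_mb=16
chunks=$(( (blocks * 4 + chunk_mb * 1024 - 1) / (chunk_mb * 1024) ))  # ceiling division
echo "$chunks"   # 78 chunks, rounded up to 80 for safety
```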

Perform the actual copy of data

dd bs=16M if=/dev/xvda1 of=/dev/xvdg count=80

Note: dd uses ‘M’ as 1048576B and ‘MB’ as 1000000B
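To illustrate, the two suffixes work out as follows:

```shell
# dd's size suffixes: 'M' is binary (1024*1024), 'MB' is decimal (1000*1000)
echo "16M  = $((16 * 1024 * 1024)) bytes"   # 16777216
echo "16MB = $((16 * 1000 * 1000)) bytes"   # 16000000
```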

Resize the file system on the new volume to its maximum size

resize2fs -p /dev/xvdg

Check the new file system for consistency

e2fsck -f /dev/xvdg

Now that the data has been copied over and everything checked, we can replace our root volume on the target instance.
If the target instance is running, stop (not terminate) it
If you haven’t already, detach the root volume from the target instance.
Attach the new EBS volume to the target instance as /dev/sda1
You can determine the root device by running:
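As a hypothetical example (the instance ID is a placeholder, and this assumes the AWS CLI is installed and configured), the root device name can be queried like this:

```shell
# Hypothetical: query an instance's root device name via the AWS CLI
# (instance ID is a placeholder; requires configured credentials)
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].RootDeviceName' \
  --output text
```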

After running the command, I stopped the instance. Then I detached my original root volume and attached the new volume. After this I restarted the instance. When I tried to SSH into the system using PuTTY, it said authentication failure.

dd method:

When I used resize2fs -M -p /dev/xvda1 it said “Online shrinking is not possible”

After you detach the root volume, you have to attach it to a new instance as a non-root volume. You perform the resize on this new instance. (The device should not be /dev/xvda1 – that is likely the root volume; it should be something like /dev/xvdf). As the error you got suggests, you cannot do a resize on a volume that is in use – which is why it is attached to another instance as a secondary volume (e.g. the other instance is running off its own root volume – different from the one you are resizing). Hope that helps, good luck.

I was just curious if you ever assigned two EIPs to an Ubuntu instance in EC2. I tried doing that, but after assigning two EIPs I was only able to access the instance using one EIP. With the other, I was getting a connection timeout. The reason I need two EIPs on a single instance is that I want to run two websites from the same instance.

I followed this procedure for associating two EIPs with a single instance:
I created a VPC, attached two ENIs to the instance, and associated two EIPs with the ENIs.

Are there some additional steps which I need to perform to get both EIPs working?

Two websites usually don’t need two IPs (the exception being some SSL setups) – you can set up virtual hosts and have many websites running under the same IP.
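For example, name-based virtual hosts in Apache let two sites share one IP – a minimal sketch, with the hostnames and document roots being assumptions:

```apache
# Two name-based virtual hosts on one IP (names and paths are illustrative)
<VirtualHost *:80>
    ServerName site-one.example.com
    DocumentRoot /var/www/site-one
</VirtualHost>
<VirtualHost *:80>
    ServerName site-two.example.com
    DocumentRoot /var/www/site-two
</VirtualHost>
```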

You can’t attach multiple EIPs to a non-VPC instance. In a VPC, you only need one ENI – you add a secondary private IP address to that ENI. When you associate the EIP with the instance, you choose which private IP it will be mapped to. Finally, you need to modify /etc/network/interfaces to include the new addresses. AWS provides a good overview of the procedure in their documentation. If you have trouble getting it working, I would recommend asking on ServerFault.
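On Debian/Ubuntu, the /etc/network/interfaces addition might look like the following sketch – the interface alias and addresses are assumptions and must match the secondary private IP assigned in the VPC:

```
# Secondary private IP as an alias interface (addresses are illustrative)
auto eth0:1
iface eth0:1 inet static
    address 10.0.0.25
    netmask 255.255.255.0
```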

I believe it is possible, but I would advise against it, as it has many unnecessary complexities (the problem comes from GRUB and the kernel not being able to read the array prior to it being initialized). Instead, I would suggest creating a separate RAID array and binding mount points to the relevant locations. Essentially, don’t store anything other than the operating system and core packages on your root volume – databases, code, uploads, logs, etc. can all go on your RAID array (and when you use mount with bind, you are able to make the RAID appear transparent to the system – e.g. you can bind /var/log to /mnt/raid/logs). If you care about the contents of your root volume (which, really, you shouldn’t), then take snapshots of it.
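As a sketch of the bind-mount approach (the RAID paths are assumptions matching the example above):

```shell
# One-off bind mount: /var/log now transparently lives on the RAID array
mount --bind /mnt/raid/logs /var/log
# To make it persistent across reboots, add the equivalent line to /etc/fstab:
# /mnt/raid/logs  /var/log  none  bind  0 0
```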

Since your files are all on a separate volume, it is just a matter of launching a new instance. You can do one of two things – make an image and launch an instance from that (not ideal, since the image will get outdated quite quickly) or take snapshots and create a new EBS volume from the snapshots. I would suggest daily snapshots of something like a root volume, and hourly snapshots of important data (remember, they are differential, so you only store the difference). If you are running only one instance, you will have a bit of downtime – but it should only take about 5 minutes to launch a new instance referencing your latest snapshot as the root volume. Frankly, if the downtime is not acceptable, then you need to look into setting up a load-balanced, stateless, high-availability cluster – where any single node can go down, and the other nodes just pick up the slack until new nodes are brought online.

The new instance can pick up the RAID array (since the root volume stores the configuration information for the array) – you just need to ensure you mount the relevant EBS volumes with the same device names. As for the daily snapshot, you are right that it is not strictly needed – however, things do change: you tweak your configurations, update packages, install new packages. If nothing has changed, your snapshot will take up no space; if something has changed, only the difference will be stored in the snapshot. Since the snapshots are differential, it is worthwhile to take them frequently, regardless of whether or not things change.

a) I would suggest your question will get more attention on ServerFault instead of StackOverflow as it is not a programming question.
b) Instead of creating an AMI, I would suggest just starting an instance, stopping it, detaching the root volume, and attaching your new root volume to the same location (/dev/sda1) [Although, realistically, this isn’t going to make a difference, it is an easier process if you need to do it multiple times]
c) Launch the AWS console and view the console log for the instance. It will hopefully give you some pointers on what is not working so that you can narrow your issue down.

It sounds like the image you recovered on might have had a more recent kernel, and that you inadvertently set up an ext4 file system instead of ext3. If this is the case, you will need to install the appropriate packages on the root volume before it will mount. I would recommend posting a question to ServerFault if you continue to experience errors.