When shrinking the filesystem, blocks allocated too far toward the end of
the BlockFile (if any) must be relocated to the beginning using the
--pack option of the fsckddumbfs command.

1. Create a new empty filesystem having the required size.

2. Migrate the meta-data to this new filesystem using the migrateddumbfs command.

3. Connect the BlockFile device to this new filesystem.

The advantage of migrating data to a new filesystem is that no data can be lost
if the migration process is interrupted. The only sensitive operation is the packing
of the filesystem, but a simple normal repair using fsckddumbfs can fix any problem.

Block devices are less flexible to handle than regular files.
Even if you are using LVM, the resize operation is a lot more complex.
Be sure to have a good understanding of how ddumbfs works before you
start resizing.

The size of the block addresses used in the index and in the DataFiles is optimized.
For example, if the maximum number of blocks is under 16777216 (16M), each
address is stored in 3 bytes to save space. If the new size crosses such a boundary,
the DataFiles must be migrated too. The index is always migrated anyway.
This is what migrateddumbfs does.
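A sketch of how such an address width could be chosen (the exact rule ddumbfs uses may differ):

```python
def address_size(max_blocks: int) -> int:
    """Smallest number of bytes able to address max_blocks distinct blocks."""
    size = 1
    while 256 ** size < max_blocks:
        size += 1
    return size

# Below 16777216 (16M = 2**24) blocks, 3 bytes per address are enough;
# one block more and every stored address grows to 4 bytes.
print(address_size(16777215))  # 3
print(address_size(16777217))  # 4
```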

The size of the IndexFile is defined at creation time and never changes. The
position of every entry in the index depends on the total capacity of the filesystem.
Any change of the capacity requires moving the entries inside the index.
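To see why, consider a simplified placement model (not the exact ddumbfs layout): if an entry's slot is derived from its block hash modulo the index capacity, changing the capacity changes the modulus, so the entry lands in a different slot.

```python
def slot(block_hash: int, n_slots: int) -> int:
    # Hypothetical placement rule: an entry's slot is derived from its
    # block hash and the index capacity.
    return block_hash % n_slots

# The same hash lands in a different slot once the capacity changes,
# which is why every entry must be moved when resizing.
h = 123456789
old_pos = slot(h, 1 << 20)   # slot in the old index
new_pos = slot(h, 1 << 21)   # slot after doubling the capacity
```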

Increasing the filesystem size doesn't require doing anything to the BlockFile.
You don't need to pack it.

To shrink the BlockFile, all blocks allocated beyond the new limit must be
moved to the beginning. Any block move requires updating its address in the index and in
each DataFile. See How does packing work.

Pack swaps the free blocks at the beginning with used blocks
located at the end, so that all the free space ends up at the end of
the BlockFile. After that the BlockFile can be truncated without
losing data.

First, the filesystem must be checked to make sure it is free of corruption.

The new size is calculated from the used space.

Using the bit list of free blocks, the first allocated block beyond
the limit is moved to the first free slot, and so on. This is
probably the longest operation. To reduce the number of non-sequential
IOPS, reads and writes are grouped in 1 MiB chunks: blocks
to move are read until a 1 MiB buffer is filled, then written to the
appropriate places.
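The grouping could be sketched like this (illustrative only, using a toy 4-byte block and a 16-byte buffer instead of the real block size and 1 MiB chunk):

```python
BLOCK = 4   # toy block size; real ddumbfs blocks are far larger
CHUNK = 16  # stands in for the real 1 MiB read/write buffer

def pack_blockfile(blockfile: bytearray, used: list, limit: int) -> None:
    """Move used blocks located at or beyond `limit` into free slots
    below it, grouping reads into CHUNK-sized batches."""
    sources = [i for i, u in enumerate(used) if u and i >= limit]
    targets = [i for i, u in enumerate(used) if not u and i < limit]
    per_batch = CHUNK // BLOCK
    for start in range(0, len(sources), per_batch):
        batch = sources[start:start + per_batch]
        # Fill the buffer with up to CHUNK bytes of blocks to move...
        buf = b"".join(blockfile[s * BLOCK:(s + 1) * BLOCK] for s in batch)
        # ...then write them out to their destinations.
        for k, src in enumerate(batch):
            dst = targets[start + k]
            blockfile[dst * BLOCK:(dst + 1) * BLOCK] = buf[k * BLOCK:(k + 1) * BLOCK]
            used[dst], used[src] = True, False

# Toy demonstration: 8 blocks, shrink to 6; blocks 6 and 7 must move
# into the free slots 1 and 3.
bf = bytearray(b"".join(bytes([i]) * BLOCK for i in range(8)))
used = [True, False, True, False, True, True, True, True]
pack_blockfile(bf, used, limit=6)
```

After the call, all the used blocks sit below the limit and the tail of the file can be truncated.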

Then all DataFiles are opened, and all addresses above the new limit
are updated. There is no translation table: such a table would take
too much RAM and could not be stored on disk without slowing down the process
too much. Instead, the bit list is used as the table: the Nth used block
above the limit goes to the Nth free block starting at zero. To speed
this up a bit, two indexes of free and used blocks are created.

Finally the index is updated using the same logic as the DataFiles,
and the bit list is set to 1 up to the limit and reset to zero beyond.

If the BlockFile is a regular file (not a block device), it is truncated
as much as possible.

Here is a sample with 10 used blocks (a . marks a free block):

          1111111111
01234567890123456789
AB.C..DE..FG..HIJ...

Blocks F, G, H, I and J will be moved to free space at the beginning; blocks
A, B, C, D and E stay at the same place.
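Assuming a layout where blocks A through E sit below the new limit of 10 and F through J beyond it (an assumed layout for illustration, consistent with the sample), the bit-list rule described earlier reproduces the moves:

```python
layout = "AB.C..DE..FG..HIJ..."  # '.' marks a free block (assumed layout)
limit = 10                       # new size in blocks

used = [c != "." for c in layout]
# The two small indexes described above: free slots below the limit
# and used blocks above it.
free_below = [i for i, u in enumerate(used) if not u and i < limit]
used_above = [i for i, u in enumerate(used) if u and i >= limit]

# The Nth used block above the limit goes to the Nth free block.
moves = dict(zip(used_above, free_below))
for src, dst in moves.items():
    print(f"block {layout[src]}: {src} -> {dst}")
```

Blocks F through J each land in one of the five free slots below the limit, while A through E never move.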

As you can see in the Blocks usage line, the first 80% of the
BlockFile is full. A 0 means that between 0 and 10% of that
part is filled; a . (dot) means the part is completely empty.
You can also see that 76800 blocks can be reclaimed.

Now I reclaim the free space and pack the BlockFile.
Any non read-only fsckddumbfs operation reclaims the free blocks;
here I choose -n for a normal repair. The -k is for pack: