Background

The largest disks supplied by Buffalo in PPC based TeraStations are 500GB, giving a maximum of 2TB (4x500GB) on a system. It is now possible to get IDE disks up to 750GB in size, and SATA disks up to 2.0TB in size. Larger disks may become available in the future. Many users would like to be able to use such disks to upgrade their TeraStations' capacity.

The systems that are currently covered by this article include all the PPC based TeraStation models:

Original TeraStation

TeraStation Home Server

TeraStation Pro v1

These all come with a Linux 2.4 kernel. This version of Linux has a limitation whereby a single file system cannot exceed 2TB. As Buffalo have assumed that normal use is a RAID array treating all 4 disks as a single file system, the largest disk supported by the standard Buffalo firmware is 500GB. This article covers techniques for using disks larger than this (albeit with some restrictions).

The following ARM based TeraStation models are not covered:

TeraStation Live

TeraStation Pro v2

These systems come with a Linux 2.6 kernel. This does not suffer from the limit of 2TB in a single file system. Although there is no reason to suspect that the instructions described in this article would not work, they should not be required, as the standard Buffalo firmware should handle larger disks without issues.

SATA Disks on IDE Based Systems

The original TeraStation and the TeraStation Home Server are IDE based. It is possible to use disks larger than 750GB on an IDE based system via a SATA/IDE converter, although you have to make sure you get one small enough to fit into the available space. The ones tested were obtained via eBay, and consisted of a small PCB the same size as the back of a 3.5" drive, with a SATA connector on one side and an IDE socket on the other.

The board plugged into the back of the drive and simply converted the connectors from SATA to IDE; there was just enough clearance inside the case to fit the power and IDE cables. For large disks a SATA drive plus a converter is normally cheaper than an IDE drive of the equivalent size, so this is attractive from a price point of view.

Approach

This wiki article discusses an approach that can be used if the use of RAID5 arrays is critical to you. If RAID5 is not critical then you might find the approach discussed in the wiki article called TeraStation Larger Disks to be more suitable.

The approach discussed here is based on using telnet enabled firmware and setting up the RAID5 arrays used to store data via manual commands issued during a telnet session.

Advantages

You can use drives larger than the 500GB maximum supported by the standard Buffalo firmware.

A single drive failure does not lose any data.

Recovering the system back to a fully working state after a drive failure is relatively simple.

The Buffalo provided firmware continues to be used as the basis of day-to-day operation of the system.

The Buffalo browser based GUI can still be used to manage nearly all aspects of the system such as users and groups.

The software upgrades available from the itimpi website can still be used with the system.

Buffalo firmware upgrades can still be used (although at this late date it is unlikely that they will provide new ones for these models as they have been superseded by the ARM based models).

Disadvantages

You have to use a telnet enabled firmware release rather than a standard Buffalo supplied one. Many might consider this to be an advantage rather than a disadvantage!

Manual steps are required to set up the RAID5 data arrays - the Buffalo Web GUI facilities cannot be used for this purpose.

Manual steps are required to recover from a disk failure - you cannot use the Buffalo GUI to achieve this. However, as these are very similar to the steps required to set the system up in the first place, this will probably not be an issue.

Telnet Enabled Firmware

TeraStations internally run the Linux operating system. Buffalo hide this from the average user, providing the system "packaged" with a browser based GUI to control and configure it. Telnet enabled firmware allows users to log in using a telnet client to control and manipulate the system at the Linux command line level.

This allows users to do things like:

Configure the system at a more detailed level than allowed for by the Buffalo GUI.

Install new applications to extend the functionality of the TeraStation.

In the event of problems being encountered, gain a level of access that gives a better chance of recovering user data without loss.

The changes described in this article require the use of a telnet enabled release. Hopefully the instructions provided are detailed enough that users can carry out the steps without needing much Linux knowledge.

The standard software supplied with Buffalo PPC based TeraStations does not provide for telnet access. Telnet enabled releases of firmware corresponding to virtually all Buffalo firmware releases can be found at itimpi's website. These are identical in functionality to the corresponding Buffalo firmware releases - the modification to add telnet functionality being trivial. This means they will have exactly the same bugs (if any) as are found in the Buffalo releases.

The itimpi firmware releases are the ones that have been used while preparing this article. Firmware from other sources should work fine as long as it is telnet enabled.

To use telnet you need a telnet client. A rudimentary one is included with Windows, which you can invoke by typing a command of the following form into the Windows Run box:

telnet TeraStation_address

A freeware one called PuTTY is recommended as a much better alternative. In addition to the standard telnet protocol PuTTY also supports the more secure SSH variant (although additional components need installing at the TeraStation end to support this).

TIP: If you already have a telnet enabled version of the firmware installed and you want to continue to use that version then you can run the Firmware updater in Debug mode and elect to not rewrite the Linux kernel or boot image in flash, but merely update the hard disk. This is slightly safer as the flash chips have been known to fail.

TeraStation use of Partitions and RAID arrays

This section provides some simple background information on the way that the TeraStation partitions the disks and the way it makes use of RAID arrays. Although it is probably not critical that you understand this section, it does help to make sense of the commands that are used later when setting up the partitions and RAID arrays.

Partition layout

Partition 1 (System Partition)

Partition 2 (Swap Partition)

Partition 3 (Data Partition 1)

Partition 4 (Data Partition 2)
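As a rough illustration, the partitions above appear under Linux as numbered device nodes on each drive (drive 1 shown here; the same layout is repeated on sdb, sdc and sdd):

```shell
# Sketch only: maps each partition role to its expected device node for
# drive 1. No disk access is performed - this just prints the names.
i=1
for role in "system" "swap" "data 1" "data 2"; do
    echo "/dev/sda$i  ($role partition)"
    i=$((i + 1))
done
```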

Setting up the RAID5 Arrays

This section covers how to set up a single RAID5 array if you are using drives larger than 500GB. A single RAID5 array is limited to 2TB of useable space due to the 2TB limit on a single file system imposed by the 2.4 kernel. This means that the maximum amount of space that can be used on each drive in the first RAID5 array is a little under 750GB. If you are using larger drives, you can, however, set up a second RAID5 array as described later in this article.
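The arithmetic behind that per-drive figure can be sketched as follows. This is a rough calculation that ignores filesystem overhead, which is why the exact block limit quoted in the repartitioning step is slightly lower:

```shell
# 2.4 kernel single-filesystem limit: 2TB = 2^31 1K blocks.
LIMIT_KB=2147483648
# In a 4-drive RAID5 array one drive's worth of space holds parity,
# so the 2TB of usable data is spread across 3 drives' worth of space.
DATA_DRIVES=3
PER_DRIVE_KB=$((LIMIT_KB / DATA_DRIVES))
echo "$PER_DRIVE_KB"   # 715827882 1K blocks, i.e. a little under 750GB
```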

The steps involved in setting up the first RAID5 array are:

You need to use a telnet enabled version of the firmware.

Start with 4 unpartitioned drives and flash the firmware to get the basic setup. Any RAID5 array set up at this stage will be limited to about 1.6TB in size, as this is the maximum that the Buffalo provided firmware knows how to set up. Do not worry, as we are going to change this in the subsequent steps described here.

Change the partition table of each disk (sda, sdb, sdc, sdd) to give partition 3 the space to be used for the first RAID5 array, and partition 4 any remaining space. First delete the existing partitions 3 and 4, and then recreate them with their new sizes. The size of partition 3 must not exceed 715,816,238 1K blocks - you may have to experiment a bit to work out what the start and end tracks need to be. Then set the type for partitions 3 and 4 to be 'fd'.
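The repartitioning can be sketched as a sequence of mfdisk dialogue commands. Everything below is illustrative: the end cylinder is a hypothetical placeholder that you must work out for your own drive, so the sequence is built as a string and printed for review rather than executed:

```shell
# Hypothetical mfdisk dialogue for one disk: delete partitions 3 and 4,
# recreate them (blank lines accept the default start/end), then set both
# to type 'fd' (Linux raid autodetect) and write the table.
DISK=/dev/sda    # repeat for /dev/sdb, /dev/sdc, /dev/sdd
END3=60000       # placeholder end cylinder for partition 3 - work this
                 # out for your own drive geometry and the block limit
MFDISK_INPUT="d
3
d
4
n
p
3

$END3
n
p
4


t
3
fd
t
4
fd
w"
echo "mfdisk -c $DISK <<EOF"
printf '%s\n' "$MFDISK_INPUT"
echo "EOF"
```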


Use the approach described above for changing the partition sizes on each of the four disks.

Repeat the repartitioning and formatting of the new partitions for each of the four disks.

Use the Web GUI to create the RAID5 partition and any wanted shares.
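The second RAID5 array (built from partition 4 of each disk) cannot be created through the Web GUI, so it has to be assembled manually during a telnet session. The sketch below assumes mdadm is available (some 2.4-era installs only ship the older raidtools, where mkraid with an /etc/raidtab entry is the equivalent); the md device number and filesystem are assumptions, so the commands are printed for review rather than executed:

```shell
# Sketch only - do not run blindly. Check /proc/mdstat for the md devices
# the firmware already uses and pick a free one; use whatever filesystem
# your firmware uses on the existing data array (ext3 assumed here).
CMDS='mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mke2fs -j /dev/md2
mkdir -p /mnt/array2
mount /dev/md2 /mnt/array2'
printf '%s\n' "$CMDS"
```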

Using Symbolic Links to give Single View of Arrays

If you follow the normal process of setting up each array with its own share, then when working at the client level you will see each array independently. An alternative is to create a symbolic link inside the first array's share so that the second array appears as a folder within it. For example:

cd /mnt/array1/share
ln -s /mnt/array2 _array2

This would make the contents of array2 appear under the '_array2' folder with the first array.

Recovering After a Drive Failure

One of the big advantages of a RAID5 approach is that if a single drive fails, then your data is still intact. This section covers what needs to be done after such a failure to replace the failed drive and get the RAID5 array fully functional with 4 drives.

The standard Buffalo firmware will detect that an array has failed, but it will not be able to recover that array, as the RAID arrays are not set up exactly as the Buffalo firmware expects. Instead, manual intervention is required along the same lines as originally used to create the RAID5 arrays.

In the following commands replace the '?' by a, b, c or d to correspond to drive 1, 2, 3 or 4 depending on what drive you are trying to replace.

Re-partition the drive as described earlier. If you are not sure of the sizes of the partitions, then you can use the command

mfdisk -c /dev/sda

and use the 'p' command to see the partition details, and then use the 'q' command to quit. If it is drive 1 you are trying to replace then use /dev/sdb instead to look at the settings on drive 2.
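Once the replacement drive has been repartitioned, its partitions have to be added back into the degraded arrays so that they rebuild. A sketch of this, assuming mdadm is available (older raidtools installs would use raidhotadd instead) and that the md device numbers match those shown as degraded in /proc/mdstat - both assumptions, so the commands are printed for review rather than executed:

```shell
# Replace 'a' with the letter of the drive you replaced, and md0/md2 with
# the md devices actually shown as degraded in /proc/mdstat.
D=a
CMDS="mdadm /dev/md0 --add /dev/sd${D}3
mdadm /dev/md2 --add /dev/sd${D}4"
printf '%s\n' "$CMDS"
```

Rebuild progress can then be watched with 'cat /proc/mdstat'.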

Flashing with new Firmware versions

The tests done show that it should be possible to flash a TeraStation set up as described in this article without any issues. However, caution should be taken, as this is a non-standard setup and this cannot be guaranteed to be true in all cases.

If you attempt a flash upgrade on a system that has been set up as described in this article and the firmware updater program gives any warnings about invalid partition structure or wants to format any of the disks, you should abandon the firmware update, as otherwise you will almost certainly lose data.

Scripts for Automating Process

The steps involved are a little error prone, so the following scripts can be used to automate this process. They can also serve as further examples of the steps that are required to get everything working.

After each of the scripts has been created, then you need to ensure that they are set to be executable by issuing a command of the form:

chmod +x scriptname

These scripts are not yet finished and are still under development. In the meantime you should be able to carry out the requisite process using the manual steps described in the earlier sections.

/usr/sbin/prepare_disk

This script is used to prepare a disk ready for it to be added to the RAID5 arrays.
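Until the finished script is published, a minimal sketch of what prepare_disk might look like is shown below. Everything here is an assumption about the eventual script: the drive-number-to-device mapping is the working core, while the actual partitioning and formatting is left as a commented placeholder:

```shell
#!/bin/sh
# Hypothetical sketch of prepare_disk: takes a drive number 1-4, works out
# the matching device name, then would repartition and format that drive.
drive_letter() {
    case "$1" in
        1) echo a ;;
        2) echo b ;;
        3) echo c ;;
        4) echo d ;;
        *) echo "usage: prepare_disk 1|2|3|4" >&2; return 1 ;;
    esac
}
LETTER=$(drive_letter "${1:-2}") || exit 1   # defaults to drive 2 for demo
DEV=/dev/sd$LETTER
echo "would repartition and format $DEV"
# (real script: drive mfdisk to recreate partitions 3 and 4, set type 'fd',
#  then format - see the manual steps in the earlier sections)
```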

/usr/sbin/create_arrays

This script is used to create the initial RAID5 arrays after the disk has been partitioned and formatted.
It makes use of the prepare_disk script to handle the partitioning and formatting of each disk.

/usr/sbin/recover_arrays

This script is used to recover the RAID5 arrays after a single disk has failed.
It makes use of the prepare_disk script to handle the partitioning and formatting of each disk.
Since I will not be the end-user of the NAS, I wanted to create an easier way of modifying the partitions and formatting them, in case a drive fails and a new one needs to be configured to match the above modifications.

I created the following script (add_disk) and placed it in /bin/
A user would be able to telnet into the NAS when a drive fails and just run it against the replacement drive. Upon reboot the drive should be seen and rebuilt by the TSP.

I tested it by removing the drive from the array, deleting all partitions, and then running the script.

As always it's attached below for anyone who needs it, but please let me know of your personal experience.
DISCLAIMER: USE AT YOUR OWN RISK - I'm a newbie who needed this functionality and I did my best to implement it.