Hardware failure

Hardware failure is a scary thing.
It is not possible to predict
when it will occur or the effect it will have. One thing
that is certain is that hardware failure causes data loss.
We cannot prevent failure, but we can guard against its
effects. The most effective way of preventing loss is to
back up our data regularly.

Span.com
specialises in providing you with the equipment necessary
to prevent data loss. Whether you are a domestic user backing
up family photos or a video editor with hundreds of gigabytes
of raw video, we can supply you with an appropriate solution
to store your data quickly and safely.

"Prevention rather than cure" is a phrase
often used in the medical world. It applies
equally to data backup: it is always cheaper
to back up than it is to retrieve lost data
from failed hardware.

If you are still not convinced about the benefits of backing
up, have a look at the information below.

The effect of data loss on companies.

The diagram above shows that data loss has a huge financial impact on businesses. These figures include considerations such as man-hours and revenue lost through data loss. The statistics are courtesy of Ontrack Ltd., a specialist data recovery company.

When you consider that most businesses experience, on average, two hours of downtime per week, those are incredible figures. Below are a few facts associated with data loss.

1. Most companies value 100 megabytes of data at more than $1 million.

2. 43 percent of lost or stolen data is valued at $5 million.

3. 43 percent of companies experiencing disasters never reopen, and 29 percent close within two years. (McGladrey and Pullen).

4. It is estimated that 1 out of 500 data centers will have a severe disaster each year. (McGladrey and Pullen)

5. 40 percent of respondents to a computer security survey had detected and verified incidents of computer crime during the previous year. (NCSA Annual Worry Report)

6. Computer crime costs firms that detect and verify incidents between $145 million and $730 million each year. (NCSA Annual Worry Report)

7. A company that experiences a computer outage lasting more than 10 days will never fully recover financially. 50 percent will be out of business within five years. ("Disaster Recovery Planning: Managing Risk & Catastrophe in Information Systems" by Jon Toigo)

If you are unfortunate enough to have lost data that was not backed up, don't worry too much! It can probably be retrieved, but unfortunately it will not be cheap.

span.com recommends RetroData, who can recover data from practically any storage medium, from memory cards to workstation drives, right through to mammoth RAID arrays and mass storage devices.
They offer Standard, Priority and Priority+ services, and they guarantee successful data recovery or there is no charge at all.
To contact them please call: 01590 673808
RetroData website : www.retrodata.co.uk.

span.com is a partner with Ontrack Data Recovery Services, a specialist in data recovery from most sorts of hardware and media.
They offer a full and world renowned data recovery service and are pleasant and efficient.
To contact them please call: 00800 1012 1314
Ontrack website : www.ontrack.co.uk.
Quote the reference "WOR100" for a free upgrade to priority service.

Maxtor is promoting a five-step "best practices" program for basic protection.

1) Develop a backup schedule--back up data daily, or weekly at a minimum.

2) Back up everything--today users can easily back up all of their computer hard drive data. There is no need to spend time sorting through every file or folder. Invest in a storage solution that's twice the size of your internal hard drive to give your system room to grow.

3) Do it automatically--set it and forget it. Use a solution that's easy to set up and provides automatic backups.

4) Rotate backups--gain added protection in case of an earthquake, fire, flood, or theft. Use two drives and rotate one offsite.

5) Don't procrastinate--unfortunately, the need to back up data is often a lesson learned from an unfortunate experience. Don't let it happen to you. Have you done it today?

There are three kinds of backup as explained below:

Full Backup:
A full backup simply copies all files on the system.

Incremental Backup:
An incremental backup copies only the files modified since the last backup of any kind (full or otherwise).

Differential Backup:
A differential backup is a cumulative backup of all changes made since the last full backup.
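As a sketch, the three types differ only in which reference time they compare modification times against; the file names and timestamps below are purely illustrative:

```python
def select_files(files, backup_type, last_backup=None, last_full=None):
    """Return the file names a given backup type would copy.

    `files` maps file name -> last-modified time; `last_backup` and
    `last_full` are the times of the previous backup of any kind and
    of the previous full backup, respectively.
    """
    if backup_type == "full":
        return sorted(files)  # everything, unconditionally
    if backup_type == "incremental":
        return sorted(f for f, mtime in files.items() if mtime > last_backup)
    if backup_type == "differential":
        return sorted(f for f, mtime in files.items() if mtime > last_full)
    raise ValueError(f"unknown backup type: {backup_type}")

files = {"photos.zip": 10, "report.doc": 25, "notes.txt": 40}
# Full copies all three; an incremental since t=30 copies only
# notes.txt; a differential since the last full at t=20 copies
# report.doc and notes.txt.
```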

There are many backup strategies out there, and you have to pick the one that suits you best. Here are examples of the three most widely used schemes:

Grandfather-Father-Son (GFS)
The GFS scheme begins with the daily backups. Typically, four backup media are labeled for the day of the week each backs up; for example, Monday through Thursday. Each tape is recalled for use on its labeled day. If only a one-week version history of files is maintained, then each tape is overwritten each week. In order to maintain a three-week version history of files (recommended), more tapes are required. For example, this week's Monday tape will not be overwritten for three weeks.

Weekly backups follow a similar scenario. A set of up to five weekly backup media is labeled "Week 1," "Week 2," and so on. Full backups are recorded weekly, on the day that a "Son" medium is not used. Following the example above, these would be "Friday" tapes. This "Father" media set is re-used monthly. Five weekly tapes are required in order to maintain a one-month history of files, as some months have five weeks.

The final set of three media is labeled "Month1," "Month2," and so on, according to which month of the quarter they will be used. This "Grandfather" media records full backups on the last business day of each month. If your backup plan follows a corporate fiscal calendar, then your monthly tape will take the place of the week 4 or week 5 weekly/Father tape, depending on the month. If your backup schedule follows calendar months, then your monthly backup will vary throughout the year, replacing a daily or weekly tape. Typically, monthly tapes are overwritten quarterly or yearly (recommended), depending on version history requirements.

Each of these "media" may be a single tape or a set of tapes, depending on the amount of data to back up and the type of backup used (incremental vs. full). Weekly and/or monthly tapes are generally pulled as archive tapes.
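The daily/weekly/monthly tape choice described above can be sketched as a small function. The label formats, and the use of the last business day of the month for the "Grandfather" tape, are illustrative assumptions:

```python
import calendar
import datetime

def gfs_tape(date):
    """Which tape a simple GFS rotation would use on `date`.

    Mon-Thu use "Son" (daily) tapes, Fridays use "Father" (weekly)
    tapes, and the last business day of the month uses the
    "Grandfather" (monthly) tape, as described above.
    """
    # Find the last weekday (Mon-Fri) of the month: the monthly slot.
    last = datetime.date(date.year, date.month,
                         calendar.monthrange(date.year, date.month)[1])
    while last.weekday() > 4:          # step back over Sat/Sun
        last -= datetime.timedelta(days=1)
    if date == last:
        return f"Month {date.month}"
    if date.weekday() == 4:            # Friday: weekly full backup
        return f"Week {(date.day - 1) // 7 + 1}"
    if date.weekday() < 4:             # Mon-Thu: daily tape
        return calendar.day_name[date.weekday()]
    return None                        # weekend: no backup in this sketch
```

Note how the monthly tape displaces whichever daily or weekly tape would otherwise have run that day, just as the text describes.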

Tower of Hanoi
In the Tower of Hanoi backup rotation schedule, every disc in the puzzle corresponds to a backup media set and every move to a day's backup. The earlier a media set enters the rotation, the more often it is used throughout the backup process. Each additional media set added to the schedule is used only when the previous ones are not, and each one doubles the backup history by retaining an older version of the data.

The Tower of Hanoi rotation schedule allows a longer backup history than the Grandfather-Father-Son rotation schedule for the same number of media sets.

In this schedule, one media set "A" is used every other backup session (daily sessions in this example). Start Day 1 with "A" and repeat every other backup (every other day).

The next media set "B" starts on the first non-"A" backup day and repeats every fourth backup session.

Media set "C" starts on the first non-"A" or non-"B" backup day and repeats every eighth session.

Media set "D" starts on the first non-"A," non-"B," or non-"C" backup day and repeats every sixteenth session. Media set "E" alternates with media set "D."

The advantage to the Tower of Hanoi scheme is that with each new media set added to the rotation, the backup history doubles. The frequently used media sets have the most recent copies of a file, while less frequently used media retain older versions.

This backup scheme can be difficult to keep track of manually and therefore is generally done with the help of rotation schemes provided in backup software packages.

As with the Grandfather-Father-Son rotation scheme, tapes should be periodically removed from the rotation for archive purposes.
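The A-E pattern described above follows a one-line rule: the media set for a session is determined by the largest power of two that divides the session number, capped at the last set so that "D" and "E" alternate. A sketch:

```python
def hanoi_set(session, n_sets=5):
    """Media set used on 1-based backup session number `session`.

    The set index is the number of trailing zero bits in the session
    number, capped at the last set, which reproduces the schedule
    described above.
    """
    trailing_zeros = (session & -session).bit_length() - 1
    return chr(ord("A") + min(trailing_zeros, n_sets - 1))

schedule = [hanoi_set(n) for n in range(1, 17)]
# First 16 sessions: A B A C A B A D A B A C A B A E
```

This is why the scheme is normally left to backup software: the pattern is easy to compute but tedious to track by hand.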

Incremental Tape Method
This method has a few names and is fairly simple to implement. It involves deciding how long you wish to retain a copy of your data and how many tapes you wish to use. Each tape is labeled with a number, and the set in use shifts by one tape each week: one tape is removed and the next one added. It can be configured for either 5- or 7-day backup schemes. An incremental tape rotation is set up as follows.

The first week you use 1-2-3-4-5-6-7
The second week you use 2-3-4-5-6-7-8
The third week you use 3-4-5-6-7-8-9
The fourth week you use 4-5-6-7-8-9-10
The fifth week you use 5-6-7-8-9-10-11
In the sixth week, tape 1 is inserted again: 6-7-8-9-10-11-1

You continue this for as long as you have tapes, keeping one tape from every week that you perform a backup in storage for a set period of time. The scheme puts even wear on each tape and ensures that a file gets copied to multiple tapes. The disadvantage is that the backup can take a while if you are doing a full backup of multiple servers. It can be varied to do a full backup on the first day of every week and then incremental or differential backups every day after that.

An advantage of this system is that tapes can be removed or added to the system at any time if an archive tape or longer file histories are needed. The key is to keep a log of the tape sequence and what date it was last used. This can be calculated months at a time or even for an entire year if necessary.
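The week-by-week tape windows above follow a simple modular rule, sketched here (tape and week numbers are 1-based, as in the example):

```python
def week_tapes(week, n_tapes=11, per_week=7):
    """Tape numbers used in a given week of an incremental rotation.

    The seven-tape window advances by one tape each week and wraps
    around after the last tape, matching the sequence shown above.
    """
    return [(week - 1 + i) % n_tapes + 1 for i in range(per_week)]
```

A log of the sequence can therefore be generated months or even a year ahead, as the text suggests.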

Once you've established a backup routine, it would be wise to keep these few safe-backup tips in mind.

Test your backups!
Every now and then, try restoring a few important files from your backup, just to make sure that your file selections and your backup media are performing as expected.

Check your backup logs.
Most backup software provides a log file after each backup. Log files can be somewhat complicated to read, but you can quickly scan them for problems. If you see words like "Error", "Failed" or "Unable to...", you should take a closer look.
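As an illustration, a quick scan for those keywords might look like this (the keyword list is only a starting point; real backup log formats vary):

```python
import re

# Keywords mentioned above; extend the pattern for your backup tool.
WARNING_SIGNS = re.compile(r"error|failed|unable to", re.IGNORECASE)

def suspicious_lines(log_text):
    """Return the log lines that deserve a closer look."""
    return [line for line in log_text.splitlines()
            if WARNING_SIGNS.search(line)]

log = "Backed up 1,204 files\nError: tape full\nVerify pass complete\n"
# suspicious_lines(log) flags only the "Error: tape full" line.
```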

Keep a backup off-site!
We all hate to think of it, but theft, fire and other disasters can destroy your entire work area. Taking your backup media off-site is a good idea. Storing your backups in a safe deposit box is great, but an off-site dresser drawer would suffice.

Look after the storage media.
Looking after the storage media is nearly as important as doing the backups themselves.
- Store media in a cool, dry place, in a clean storage case
- Do not leave media sitting around
- Avoid flexing, bending or twisting the media
- Do not touch exposed parts of the media which contain data
- Do not expose media to magnetic fields
- Do not continually use the same media - introduce fresh media in a timely manner

All devices on a bus, including the initiator, must have a unique ID number. It is advisable to make the boot device ID 0.

SETTING SCSI IDs

External SCSI devices usually have a device number switch on the rear of the case, equipped with small "+" and "-" buttons. Using a pin, press the buttons to increase or decrease the device number. To connect the device number switch to the device, attach the wires to the jumpers mentioned below. The black wire MUST be connected to the SCSI ID 0 jumper, the other wires must be connected so that the ID 1 and 2 (and 3 in the case of wide devices) jumpers are covered by the coloured wires. There is no standard to which row of jumpers (top or bottom) the wires go on, and no standard as to which way around the coloured wires go. The only way is to try it until it works.

Internal SCSI devices require that you manipulate jumpers or DIP switches to set the SCSI ID#. If you're lucky, a small chart will appear on the case of your internally mounted SCSI device, illustrating how to set the jumpers or DIP switch for each SCSI ID#.

If your internal SCSI device doesn't have such a chart, you can still figure out how to set the SCSI ID#. Internally mounted SCSI devices usually have three DIP switches or pairs of jumper pins in a row (four pairs of jumper pins for "Wide" SCSI devices). Figuring out how to set the SCSI ID# for such devices is a matter of simple addition.

Each switch (or pair of pins) has a numeric value associated with it. From left to right, these values are 1, 2, and 4 (and 8 for SCSI-3 devices only). If the switch is on (or the pair of pins has a jumper attached), its value is added to the total. If a switch is off or there is no jumper on a pair of pins, its value is not added (it counts as zero). Add these numbers together, and the result is the SCSI ID.

Most SCSI controllers support eight devices (really seven, as the controller itself is the eighth), each with a corresponding SCSI ID# of 0-6 (SCSI ID# 0 is device number one, and the controller itself is SCSI ID# 7, so 1+2+4=7).
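The jumper arithmetic above amounts to reading a small binary number; a sketch:

```python
def scsi_id(jumpers):
    """SCSI ID from jumper states, least-significant position first.

    `jumpers` holds True/False for the 1, 2, 4 (and, on wide/SCSI-3
    devices, 8) positions; a fitted jumper adds its value to the ID.
    """
    return sum(int(fitted) << i for i, fitted in enumerate(jumpers))

# Jumpers on the 1 and 4 positions, off on 2: ID is 1 + 4 = 5.
```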

Note: Each SCSI device must be assigned its own unique ID. For example, it would not be possible to set two SCSI devices on the chain to ID 2. These devices would conflict and not function. The only way for this to work is if each device is attached to its own SCSI controller card.

Both ends of a SCSI bus (cable) must be terminated using active terminators.

In practice this can be achieved using either discrete terminators or the internal terminators often provided within the controller and attachable devices.

NO termination is allowed at points along the cable. So any device attached (that is not at the end) must have internal termination disabled.

In the situation where the controller has multiple SCSI ports with differing performance ALL ports must be correctly terminated.

In a situation where the controller has a common port with internal and external connections and both are in use, DO NOT enable termination within the controller, because in this configuration the controller is actually in the middle of the SCSI Bus.

Narrow (50pin) devices SHOULD be connected part way along a wide (68pin) SCSI bus. If you must connect such a device at the end of a wide (68pin) SCSI cable then suitable termination must be added to achieve correct termination of the "unused" data lines. A terminator within the Narrow Device cannot provide this function.

Try replacing the cable and terminators if possible, make use of the drives on board termination.

Try low level formatting the drive and verifying the data area.

Check that the cable lengths are within the ANSI specification for the interface transfer rate used and the number of devices installed on the bus.

If attached through a removable carrier, try removing the drive from the carrier and attaching direct to the bus.

If running a wide (68 pin) device from a wide controller through a narrow bus, try disabling wide negotiation in the controller.

On multi-channel SCSI controllers, use no more than two channels of each controller.

If the drive is the only device on the bus:
- Is it at the end of the cable?
- Is it configured as Master?

If the drive is one of two devices on the bus:
- Is one configured as Master and the other as Slave?
- Is it at the end of the cable?

Check all connectors are seated correctly:
- Pin 1 (coloured conductor) is located nearest to the power connector.
- When connecting to the motherboard, look for the small printed 1 next to the pin to ensure the cable is the correct way round.
- Check that there are no bent pins.
- Try replacing the cable.
- Total cable length should be kept to a minimum and not exceed 18 inches.

- Try running the drive with all other devices removed.

- If attached through a removable carrier, try removing the drive from the carrier and attaching direct to the bus.

- If experiencing problems during Fdisk / partitioning, it may be that your MBR (Master Boot Record) has become corrupt. Try running ZAP.EXE or CLEANDISK.EXE.

- Microsoft Windows NT, 95A and DOS are limited in that they must be installed within the first 1024 cylinders.

Remember that with IDE the speed of the master governs the speed of the slave. If you buy a new fast hard disk and slave it to your old slow disk, you will have 2 slow disks. For this reason CD and DVD devices should always be slaved wherever possible, and hard disks should be masters.

1.) If you have one hard disk, it should be installed as the primary master. Any other devices can be added in any order.

2.) If you are adding a second hard disk, your primary master should be left as your boot drive, with the second disk as the secondary master. Other devices should be slaved.

3.) If you have three hard disks, again make sure the primary master is the boot drive, with a CD or similar device as the primary slave. The two remaining hard disks should be added as the secondary master and slave, with the quicker drive set to master.

4.) If you need to add more disks or IDE devices internally, you will need to buy an Ultra100 IDE expansion card, which will give you two more IDE channels: two masters and two slaves.

Remember to enable DMA and read/write caching on all drives. To do this, go into Windows and follow either of these routes:

1.) Control Panel -> System -> Devices -> Disk Drives -> Generic IDE -> Properties -> Settings

2.) Right-click My Computer, select Properties, select Device Manager, double-click "Disk Drives", select the disk drive, look through the properties until you see the DMA and caching check boxes and tick them, then click OK and restart.

If you are working with video and are having problems with dropped frames, try turning off all auto-save, MS Office Fast-Find and Anti-Virus background applications.

You can adjust the [vcache] setting in the system.ini file to increase the amount of memory used for disk caching, or use a program like "Cacheman" to do it. Only attempt editing a .ini file if you know what you are doing!

Installing too much junk will slow your system down; installing too many demos and the like will clog up the registry and degrade system performance. Get into the habit of uninstalling programs you no longer use.

Defragmenting the drives regularly will prevent file fragmentation from building up, which can slow the system.

Recently Mac customers have been experiencing problems connecting FireWire and USB devices. The Mac will stop recognizing the device for no apparent reason. The problem affects new and old devices, with mainly hard disks and CD units being affected, although this is probably because they are the two most common devices.

Both 3rd party cards and Mac internal buses are similarly afflicted. The problem seems to appear on some machines running OS 9.0.4 and 9.1, with a few people having difficulties with 8.6. It is sometimes related to extensions, but not always. There is no fix from Apple for this yet. We recommend working through the suggestions listed below and then searching the forums of the websites listed at the bottom.

ALL of the FireWire and USB products returned to us (bar one) have turned out not to be faulty. You WILL save yourself time and energy by running through the points below before calling us.

- Test the device on another Mac. If it works, how is that Mac different?
- Do other USB/FireWire products work?
- Do you have the latest version of FireWire?
- Do you have the correct USB drivers installed?
- Check Apple's website for software upgrades.
- Are extensions conflicting? Conflict between the Toast and iTunes extensions is a known problem.
- Check your system folder: are the correct extensions installed, or are they in the program folder?
- Disable any recently installed extensions.
- If you have up- or down-graded your OS, try changing back.
- Try uninstalling any recently installed software.

Problems when using Pro Tools and Digital Performer.

High performance PCI cards have to be able to share the PCI bus with very low latency if they are going to co-exist with Digidesign hardware. Digidesign tech support tells people to slow the SCSI bus down to 10MHz to accomplish this. It has been found that using the options on the card which directly control the PCI bus latency also does the job (surprise, surprise). Adaptec calls this something cryptic like "A/V options", but the documentation explains what it actually does (it limits the amount of data the controller will pump across the PCI bus without letting other cards onto the bus). ATTO calls it what it is. You actually want to set this to "cache line" (Adaptec) or 32 bytes (ATTO). If you have a 64-bit card and PCI bus, you can probably set it to 64 bytes on the ATTO without having any problems.

Now, for the interesting part. The option to slow the SCSI bus down was there because the cabling for Ultra is very twitchy, and some people would need to slow the bus down because their cabling wasn't able to do 20MHz. The controller has no way of testing for that.

Ultra2 and Ultra160 have much looser cabling requirements, AND the hardware does a standardized speed-check handshake when first getting connected, so it can slow the clock down on its own if the cabling is bad. This means that no option to manually slow the clock down is needed if all the devices on the bus are at least Ultra2. ATTO explicitly states in their driver manual that the SCSI bus speed options are ignored if the controller and all the drives are at least Ultra2. Since Digi (or at least Digi customer service) doesn't believe in using the correct option to reduce latency on the PCI bus, Digi doesn't support Ultra2 or Ultra160 SCSI cards. Digi's storage expert has said that they should work...

Whilst we endeavour to provide free technical support for all of our products, sometimes this just is not possible. We at Worldspan are PC rather than Mac orientated and do not have any Macs on site for testing purposes. If you are experiencing difficulties with your new Mac purchase, by all means call us and we will try our best to help you, but sometimes we find this difficult as we have no way of replicating any problems you may have. We would recommend visiting the following website and checking their lists and archives to see if they can be of help.

RAID

A redundant array of independent disks (more commonly known as a RAID) is a system of using multiple hard drives for sharing or replicating data among the drives. Depending on the version chosen, the benefit of RAID is one or more of increased data integrity, fault tolerance, throughput or capacity compared to single drives. In its original implementations (in which it was an abbreviation for "Redundant Array of Inexpensive Disks"), its key advantage was the ability to combine multiple low-cost devices using older technology into an array that together offered greater capacity, reliability and/or speed than was affordably available in single devices using the newest technology.

At the very simplest level, RAID is one of many ways to combine multiple hard drives into one single logical unit. Thus, instead of seeing several different hard drives, the operating system sees only one. RAID is typically used on server computers, and is usually implemented with identically-sized disk drives. With decreases in hard drive prices and wider availability of RAID options built into motherboard chipsets, RAID is also being found and offered as an option in higher-end end user computers, especially computers dedicated to storage-intensive tasks, such as video and audio editing.

The most popular RAID levels are RAID 0, RAID 1 and RAID 5, with RAID 6 becoming more widespread.

Below are explanations of the RAID levels and the advantages and disadvantages of implementing each.

RAID Levels

JBOD (Just a Bunch of Disks) - This is just the disks with nothing done to them.

It is a popular method for combining multiple physical disk drives into a single virtual one. As the name implies, disks are merely concatenated together, end to beginning, so they appear to be a single large disk.

In this sense, concatenation is akin to the reverse of partitioning. Whereas partitioning takes one physical drive and creates two or more logical drives, JBOD uses two or more physical drives to create one logical drive.

In that it consists of an Array of Inexpensive Disks with no redundancy, it can be thought of as a distant relation to RAID. JBOD is sometimes used to turn several odd-sized drives into one useful drive. For example, JBOD could combine a 3 GB, 15 GB, 5.5 GB and 12 GB drive into a single 35.5 GB logical drive, arguably more useful than the individual drives separately.
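Concatenation makes locating data simple: an offset into the logical drive falls on whichever disk's range contains it. A sketch using the example sizes above (units are GB and purely illustrative):

```python
from bisect import bisect_right
from itertools import accumulate

def jbod_locate(sizes_gb, offset_gb):
    """Map an offset on a concatenated (JBOD) volume to
    (disk index, offset within that disk).

    Disks are glued end to beginning, so the volume's address space
    is just the disks' address spaces laid out one after another.
    """
    starts = [0] + list(accumulate(sizes_gb))[:-1]  # where each disk begins
    disk = bisect_right(starts, offset_gb) - 1
    return disk, offset_gb - starts[disk]

# With the 3, 15, 5.5 and 12 GB drives above, offset 20 GB falls
# 2 GB into the third drive (index 2).
```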

A RAID 0 (also known as a striped set) splits data evenly across two or more disks with no parity information for redundancy. It is important to note that RAID 0 was not one of the original RAID levels, and is not redundant. RAID 0 is normally used to increase performance, although it is also a useful way to create a small number of large virtual disks out of a large number of small physical ones. Although RAID 0 was not specified in the original RAID paper, an idealized implementation of RAID 0 would split I/O operations into equal-sized blocks and spread them evenly across two disks. RAID 0 implementations with more than two disks are also possible; however, the reliability of a given RAID 0 set is equal to the average reliability of each disk divided by the number of disks in the set. That is, reliability (as measured by mean time between failures (MTBF)) is inversely proportional to the number of members, so a set of two disks is half as reliable as a single disk. The reason for this is that the file system is distributed across all disks. When a drive fails, the file system cannot cope with such a large loss of data and coherency, since the data is "striped" across all drives. Data can be recovered using special tools, but it will be incomplete and most likely corrupt.

While the block size can technically be as small as a byte it is almost always a multiple of the hard disk sector size of 512 bytes. This lets each drive seek independently when randomly reading or writing data on the disk. If all the accessed sectors are entirely on one disk then the apparent seek time would be the same as a single disk. If the accessed sectors are spread evenly among the disks then the apparent seek time would be reduced by half for two disks, by two-thirds for three disks, etc. assuming identical disks. For normal data access patterns the apparent seek time of the array would be between these two extremes. The transfer speed of the array will be the transfer speed of all the disks added together.
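An idealized striping layout like the one described can be sketched as a block-address mapping (the block numbers and stripe size here are illustrative):

```python
def raid0_locate(block, n_disks, stripe_blocks=1):
    """Map a logical block number to (disk, block on that disk) for a
    striped set: equal-sized blocks are spread round-robin across the
    member disks, `stripe_blocks` at a time.
    """
    stripe, within = divmod(block, stripe_blocks)
    return stripe % n_disks, stripe // n_disks * stripe_blocks + within

# Two disks, one block per stripe: logical blocks 0,1,2,3 land on
# disk 0, disk 1, disk 0, disk 1 respectively.
```

Because consecutive blocks land on different disks, the drives can seek and transfer in parallel, which is where the performance gain comes from.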

RAID 1 - A RAID 1 creates an exact copy (or mirror) of all of the data on two or more disks.


This is useful for setups where redundancy is more important than using all the disks' maximum storage capacity. The array can only be as big as its smallest member disk, however. An ideal RAID 1 set contains two disks, which increases reliability by a factor of two over a single disk, but it is possible to have many more than two copies. Since each member can be addressed independently if the other fails, reliability is a linear multiple of the number of members. To truly get the full redundancy benefits of RAID 1, independent disk controllers are recommended, one for each disk; some refer to this practice as splitting or duplexing.

When reading, both disks can be accessed independently. As with RAID 0, the average seek time is reduced by half when randomly reading, but because each disk has the exact same data, the requested sectors can always be split evenly between the disks and the seek time remains low. The transfer rate is also doubled. For three disks the seek time would be a third and the transfer rate would be tripled; the only limit is how many disks can be connected to the controller and its maximum transfer speed. Most IDE RAID 1 cards use a broken implementation and only read from one disk, so their read performance is that of a single disk. Some older RAID 1 implementations would read both disks simultaneously and compare the data to catch errors; the error detection and correction on modern disks makes this no longer necessary. When writing, the array acts like a single disk, as all writes must be written to all disks.

RAID1 has many administrative advantages. For instance, in some 365*24 environments, it is possible to "Split the Mirror": declare one disk as active, do a backup of the inactive disk, and then "rebuild" the mirror. This procedure is less critical in the presence of the "snapshot" feature of some filesystems, in which some space is reserved for changes, presenting a static point-in-time view of the filesystem.

Also, one common practice is to create an extra mirror of a volume (also known as a Business Continuance Volume or BCV) which is meant to be split from the source RAID set and used independently. In some implementations, these extra mirrors can be split and then incrementally re-established, instead of requiring a complete RAID set rebuild.

RAID 2 - This stripes data at the bit (rather than block) level. It is not currently used.

A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the side effects of RAID 3 is that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will by definition be spread across all members of the set and will reside in the same location, so any I/O operation requires activity on every disk.

For example, a request for block "A1" would require all three data disks to seek to the beginning and reply with their contents. A simultaneous request for block B1 would have to wait.

A RAID 4 uses block-level striping with a dedicated parity disk. RAID 4 looks similar to RAID 3 except that it stripes at the block, rather than the byte level. This allows each member of the set to act independently when only a single block is requested. If the disk controller allows it, a RAID 4 set can service multiple read requests simultaneously. Network Appliance uses RAID 4 on their Filer line of network storage servers.

A RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 is one of the most popular RAID levels, and is frequently used in both hardware and software implementations. Virtually all storage arrays offer RAID 5.

In our example, below, a request for block "A1" would be serviced by disk 1. A simultaneous request for block B1 would have to wait, but a request for B2 could be serviced concurrently.

RAID 5 layout with four disks (Ap, Bp, Cp and Dp are the parity blocks):

Disk 1 | Disk 2 | Disk 3 | Disk 4
A1     | A2     | A3     | Ap
B1     | B2     | Bp     | B3
C1     | Cp     | C2     | C3
Dp     | D1     | D2     | D3

Every time a data "block" (sometimes called a "chunk") is written on a disk in an array, a parity block is generated within the same stripe. (A block or chunk is often composed of many consecutive sectors on a disk, sometimes as many as 256 sectors. A series of chunks [a chunk from each of the disks in an array] is collectively called a "stripe".) If another block, or some portion of a block is written on that same stripe, the parity block (or some portion of the parity block) is recalculated and rewritten. The disk used for the parity block is staggered from one stripe to the next, hence the term "distributed parity blocks". This means, of course, that the controller software becomes more complex.

Interestingly, the parity blocks are not read on data reads, since this would be unnecessary overhead and would diminish performance. The parity blocks are read, however, when a read of a data sector results in a cyclic redundancy check (CRC) error. In this case, the sector in the same relative position within each of the remaining data blocks in the stripe and within the parity block in the stripe are used to reconstruct the errant sector. The CRC error is thus hidden from the main computer. Likewise, should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive "on-the-fly".
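The "combined mathematically" step above is bytewise XOR, which is its own inverse; that is why the same operation both generates the parity block and reconstructs a lost block. A minimal sketch with tiny two-byte blocks:

```python
def xor_blocks(blocks):
    """Bytewise XOR of the given equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# The parity block is the XOR of the data blocks in the stripe...
stripe = [b"\x0f\x0f", b"\xf0\xf0", b"\x55\xaa"]
parity = xor_blocks(stripe)

# ...so a block lost with a failed drive is the XOR of the parity
# block with the surviving data blocks.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```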

This is sometimes called Interim Data Recovery Mode. The computer knows that a disk drive has failed, but only so that the operating system can notify the administrator that the drive needs replacement; applications running on the computer are unaware of the failure. Reading and writing to the drive array continue seamlessly, though with some performance degradation. The difference between RAID 4 and RAID 5 in this mode is that RAID 5 can be slightly faster: when the failed disk is the one holding a given stripe's parity block, no reconstruction calculation is needed for that stripe, and because parity is distributed this is true for some fraction of the stripes. With RAID 4, if a data disk fails, the reconstruction calculation has to be performed every time.

In RAID 5 arrays, which have only one parity block per stripe, the failure of a second drive results in total data loss.

The maximum number of drives is theoretically unlimited, but it is common practice to keep it to 14 or fewer for RAID 5 implementations that have only one parity block per stripe. The reason for this restriction is that there is a greater likelihood of two drives in an array failing in rapid succession when the array contains more drives. As the number of disks in a RAID 5 increases, the MTBF for the array as a whole can even become lower than that of a single disk. This happens when the likelihood of a second disk failing out of the remaining N-1 disks, within the time it takes to detect, replace, and rebuild the first failed disk, becomes larger than the likelihood of a single disk failing.

One should be aware that many disks operating together increase heat, which lowers the real-world MTBF of each disk. Additionally, a group of disks bought at the same time may reach the end of their bathtub curve together, noticeably lowering their effective MTBF during that period.
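The risk described above can be put into rough numbers. The sketch below assumes independent, exponentially distributed failures; the MTBF and rebuild figures are illustrative, not vendor data:

```python
# Rough sketch: probability that at least one of the remaining N-1 disks
# fails during the rebuild window of the first failed disk. Assumes
# independent, exponentially distributed failures; the MTBF and rebuild
# figures are illustrative, not vendor data.
import math

def second_failure_probability(num_disks, mtbf_hours, rebuild_hours):
    rate = 1.0 / mtbf_hours              # per-disk failure rate
    survivors = num_disks - 1
    # P(no survivor fails during the rebuild) = exp(-rate * t) ** survivors
    return 1.0 - math.exp(-rate * rebuild_hours * survivors)

# The exposure grows with the number of disks in the array:
for n in (4, 8, 14):
    p = second_failure_probability(n, mtbf_hours=500_000, rebuild_hours=24)
    print(f"{n} disks: {p:.6f}")
```

The absolute numbers depend entirely on the assumed figures; the point is the trend, which is why large single-parity arrays are discouraged.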

In implementations with more than 14 drives, or in situations where extreme redundancy is needed, RAID 5 with dual parity (also known as RAID 6) is sometimes used, since it can survive the failure of two disks.

A RAID 6 uses block-level striping with parity data distributed twice across all member disks. It was not one of the original RAID levels.

In RAID 6, parity is generated and written to two distributed parity stripes, on two separate drives, with each parity stripe computed along a different "direction" of the two-dimensional block layout.

RAID 6

Disk 1   Disk 2   Disk 3   Disk 4   Disk 5
A1       A2       A3       p4       Dp
B1       B2       p3       Cp       B3
C1       p2       Bp       C2       C3
p1       Ap       D1       D2       D3

RAID 6 is very inefficient when used with a small number of drives. But as drives grow bigger, arrays hold more drives, and rebuild times skyrocket, the extra redundancy of RAID 6 over RAID 5 becomes more and more attractive, and also makes more sense than keeping a "hot spare" disk. See also Double parity below for another, more redundant implementation.
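The inefficiency at small drive counts is easy to quantify: RAID 6 always gives up two drives' worth of capacity to parity, where RAID 5 gives up one. A quick sketch, assuming equal-size drives:

```python
# Sketch: usable fraction of total capacity for RAID 5 (one drive's
# worth of parity) versus RAID 6 (two), assuming equal-size drives.

def efficiency(drives, parity_drives):
    return (drives - parity_drives) / drives

for n in (4, 8, 16):
    print(f"{n} drives: RAID 5 {efficiency(n, 1):.0%}, RAID 6 {efficiency(n, 2):.0%}")
```

With 4 drives RAID 6 leaves only half the capacity usable; by 16 drives the gap between the two levels has narrowed considerably.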

Nested RAID Levels

Many storage controllers allow RAID levels to be nested. That is, one RAID can use another as its basic element, instead of using physical disks. You can think of the RAID arrays as layered on top of each other, with physical disks at the bottom.

RAID 0+1. - This is a RAID used for both replicating and sharing data among disks.

A RAID 0+1 (also called RAID 01, although it shouldn't be confused with RAID 1) is a RAID used for both replicating and sharing data among disks. The difference between RAID 0+1 and RAID 10 is the order in which the two levels are nested: RAID 0+1 is a mirror of stripes.

With six 120GB drives, for example, the maximum storage space is 360GB, spread across two striped arrays. The advantage is that when a hard drive fails in one of the RAID 0 arrays, the missing data can be served from the other array. However, adding capacity requires adding two hard drives at a time, to keep storage balanced between the arrays.

It is not as robust as RAID 10: it cannot tolerate two simultaneous disk failures unless they are in the same stripe. That is to say, once a single disk fails, each of the disks in the other stripe becomes an individual single point of failure. Also, once the failed disk is replaced, all the disks in the array must participate in rebuilding its data.

To add to the confusion, some controllers that run in RAID 0+1 mode combine the striping and mirroring into a single operation. The layout of the blocks for RAID 0+1 and RAID 10 is identical except that the disks are in a different order. To such a controller this does not matter, and it gains all the benefits of RAID 10 while still being labelled as only supporting RAID 0+1 in its documentation.
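The point that the two layouts contain the same blocks in a different disk order can be made concrete. A sketch for four disks (disk numbering and block labels are illustrative):

```python
# Sketch: block placement on four disks under RAID 0+1 (a mirror of
# stripes) versus RAID 10 (a stripe of mirrors). Logical blocks L0..L3;
# disk numbering and block labels are illustrative.

def raid01(blocks):
    disks = [[], [], [], []]
    for i, b in enumerate(blocks):
        disks[i % 2].append(b)        # stripe across disks 0-1...
        disks[i % 2 + 2].append(b)    # ...then mirror the stripe on disks 2-3
    return disks

def raid10(blocks):
    disks = [[], [], [], []]
    for i, b in enumerate(blocks):
        pair = (i % 2) * 2            # stripe across mirror pairs (0,1), (2,3)
        disks[pair].append(b)
        disks[pair + 1].append(b)     # each pair holds identical copies
    return disks

blocks = ["L0", "L1", "L2", "L3"]
print(raid01(blocks))   # disks 0-1 form one stripe, disks 2-3 its mirror
print(raid10(blocks))   # pairs (0,1) and (2,3) each hold identical contents
# Both layouts contain the same set of disk contents, just in a
# different disk order, which is what a smart controller exploits.
```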

RAID 10. - This is similar to a RAID 0+1 but reversed.


A RAID 10, sometimes called RAID 1+0, is similar to a RAID 0+1 with the exception that the RAID levels used are reversed: RAID 10 is a stripe of mirrors.

One drive from each RAID 1 set could fail without damaging the data. However, if the failed drive is not replaced, the single working hard drive in the set then becomes a single point of failure for the entire array. If that single hard drive then fails, all data stored in the entire array is lost.

Extra 120GB hard drives could be added to any one of the RAID 1 sets to provide extra redundancy. Unlike RAID 0+1, the "sub-arrays" do not all have to be upgraded at once.

RAID 10 is often the primary choice for high-load databases because of its faster write speeds: there is no parity to calculate.

A RAID 50 combines the block-level striping with distributed parity of RAID 5, with the straight block-level striping of RAID 0. This is a RAID 0 array striped across RAID 5 elements.

One drive from each of the RAID sets could fail without damaging the data. However, if the failed drive is not replaced, the remaining working drives in that set become a single point of failure for the entire array. If one of those drives then fails, all data stored in the entire array is lost. The time spent in recovery (detecting and responding to a drive failure, and rebuilding onto the newly inserted drive) represents a period of vulnerability for the RAID set.

Datasets may be striped across both RAID sets: a dataset with 5 blocks would have 3 blocks written to the first RAID set and the next 2 blocks written to the second.

The configuration of the RAID sets affects overall fault tolerance. A construction of three seven-drive RAID 5 sets has higher capacity and storage efficiency, but can tolerate at most three drive failures (one per set). A construction of seven three-drive RAID 5 sets can survive as many as seven drive failures, but has lower capacity and storage efficiency.
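These trade-offs can be tallied directly. A sketch assuming equal-size drives, where each RAID 5 set gives up one drive of capacity and survives exactly one failure (the 120GB drive size is illustrative):

```python
# Sketch: capacity and worst-case fault tolerance of a RAID 50 built
# from several RAID 5 sets. Assumes equal-size drives; each RAID 5 set
# gives up one drive of capacity and survives exactly one failure.
# The 120GB drive size is illustrative.

def raid50_summary(sets, drives_per_set, drive_gb):
    total = sets * drives_per_set
    usable = sets * (drives_per_set - 1) * drive_gb
    return {
        "total_drives": total,
        "usable_gb": usable,
        "efficiency": usable / (total * drive_gb),
        # One failure per set is survivable; a second in the same set is not.
        "max_tolerated_failures": sets,
    }

print(raid50_summary(sets=3, drives_per_set=7, drive_gb=120))
print(raid50_summary(sets=7, drives_per_set=3, drive_gb=120))
```

With 21 drives either way, the three-set build yields more usable space, while the seven-set build tolerates more (favourably placed) failures.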

RAID 50 improves upon the performance of RAID 5 particularly during writes, and provides better fault tolerance than a single RAID level does. This level is recommended for applications that require high fault tolerance, capacity and random positioning performance.

As the number of drives in a RAID set increases, and as the drives themselves grow in capacity, the time needed to rebuild the set increases correspondingly, lengthening the fault-recovery window.

Proprietary RAID levels

Although all implementations of RAID differ from the idealized specification to some extent, some companies have developed entirely proprietary RAID implementations that differ substantially from the rest of the crowd.

One common addition to the existing RAID levels is double parity, sometimes implemented and known as diagonal parity. As in RAID 6, there are two sets of parity check information. Unlike RAID 6, however, the second set is not a mere "extra copy" of the first. Rather, most implementations of double parity calculate the extra parity against a different group of blocks. While traditional RAID 5 and RAID 6 calculate parity against one group of blocks (A1, A2, A3, Ap), double parity calculates parity against different groups: in the diagrams above, both RAID 5 and RAID 6 calculate against all A-lettered blocks to produce one or more parity blocks. It is fairly easy, however, to calculate parity against multiple groups of blocks: instead of just the A-lettered blocks, one can calculate parity over all A-lettered blocks and also over all 1-numbered blocks.
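That "two groups" idea can be sketched with a small grid of blocks: one XOR parity per letter-row and one per number-column. Because each block then belongs to two independent parity groups, it can be rebuilt from either one (block contents are illustrative):

```python
# Sketch of double parity over a grid of blocks: one XOR parity per
# row (A, B, C) and one per column (1, 2, 3). Each block sits in two
# independent parity groups, so it can be rebuilt from either group.
# Block contents are illustrative.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

grid = {
    ("A", 1): b"\x01", ("A", 2): b"\x02", ("A", 3): b"\x03",
    ("B", 1): b"\x04", ("B", 2): b"\x05", ("B", 3): b"\x06",
    ("C", 1): b"\x07", ("C", 2): b"\x08", ("C", 3): b"\x09",
}
row_parity = {r: xor_blocks([grid[(r, c)] for c in (1, 2, 3)]) for r in "ABC"}
col_parity = {c: xor_blocks([grid[(r, c)] for r in "ABC"]) for c in (1, 2, 3)}

# Rebuild block B2 from its row group...
from_row = xor_blocks([grid[("B", 1)], grid[("B", 3)], row_parity["B"]])
# ...or, if the row parity is also unavailable, from its column group.
from_col = xor_blocks([grid[("A", 2)], grid[("C", 2)], col_parity[2]])
assert from_row == from_col == grid[("B", 2)]
```

Having two independent groups is what lets such schemes survive failure combinations that a single parity group cannot.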

RAID 1.5 - This is just a correct implementation of RAID 1: when reading, data is read from both disks simultaneously, and most of the work is done in hardware instead of the driver.

Quick compare chart

RAID Level | Data Availability | Read Performance | Write Performance | Rebuild Performance | Min Disks Required | Suggested Uses
(picture) | No gain | No gain | No gain | N/A | – | Non-critical data
(picture) | – | Very good | Very good | N/A | – | Non-critical data
(picture) | Excellent | Very good | Good | Good | – | Small databases, database logs, critical information
(picture) | Good | Sequential reads: good. Transactional reads: very good | Fair | Poor | At least 2 | Databases and other read-intensive transactional uses
(picture) | Excellent | Very good | Fair | Good | – | Data-intensive environments (large records)
(picture) | Excellent | Very good | Fair | Fair | At least 4 | Medium-sized transactional or data-intensive uses

N = the amount of GB disks you need. X = number of RAID sets.


Don't know which connections are on your Mac? This site has lots of information about the different types of Mac.

Are you having a problem with a product?

Check the connections and whether any drivers need to be installed.

If it is a hard drive, make sure it has been formatted correctly.

Check the manufacturer's website for support, as other people may have had this problem before.

Check the manufacturer's website for any firmware or software upgrades.

If you are sure there is a fault with the product, check whether the manufacturer has a direct replacement service. This is normally the quickest method.

Alternatively, get in touch with us via the "contact us" page, by email, phone, or in person.

We can advise if there is a solution, or if a replacement will be needed.

We will then give you an RMA number, so you can send the product back to us for testing and replacement.