It's saying that the Virtual Drive is in Background Initialization Progress. I don't understand why; it's been a day since I set it up and it's still at 0%, so something is not right.

I suspect it'll take at least a week, if not longer, to fully initialize. Once that does finish, your write speeds should be at least double what you are getting now. My 2TB RAID10 array took about 2 days to initialize on my LSI 9211 (which is basically the same card you have).

So right now you have 19TB. If 1 of your 8 disks fails, your performance is going to drop back to what you're seeing right now (maybe even worse) until you replace that drive and the whole array re-initializes itself. This is largely because the RAID card you have doesn't have a dedicated CPU and RAM to calculate the parity bits. It's relying on your CPU, which is far slower at performing those calculations.
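
To make the parity point concrete, here's a minimal sketch of the XOR math a RAID5 controller (or, on your card, your CPU) does on every write and rebuild. The block contents are made-up example data; real controllers work on full stripes, but the arithmetic is the same:

```python
from functools import reduce

# Hypothetical RAID5 stripe: 3 data blocks + 1 parity block.
# Parity is the bytewise XOR of all the data blocks.
data = [b"\x10\x20", b"\x0f\x0f", b"\xa0\x0b"]

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor_blocks, data)

# If one data block is lost (a failed drive), XOR the surviving
# blocks with the parity block to rebuild it. This recomputation
# over the entire array is what makes rebuilds so slow.
lost = data[1]
survivors = [data[0], data[2]]
rebuilt = reduce(xor_blocks, survivors + [parity])
assert rebuilt == lost
```

A card with a dedicated RAID-on-Chip processor does this XOR work in hardware; a host-based card like yours pushes it all onto the system CPU.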

With RAID10, you'd have around 11TB, but you could lose up to 4 drives (one from each mirrored pair) without any major performance loss. Likewise, read/write speeds should be much higher: there are no parity bits to calculate, it just writes data to mirrored pairs of drives.
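
Here's where those capacity numbers come from, assuming 8 × 3TB drives (inferred from the 19TB/11TB figures; actual usable space is a bit lower after formatting overhead):

```python
# Rough usable-capacity math for the two layouts.
drives = 8
size_tb = 3  # assumed per-drive size

raid5_usable = (drives - 1) * size_tb    # one drive's worth lost to parity
raid10_usable = (drives // 2) * size_tb  # every drive is mirrored

print(raid5_usable)   # 21 TB raw, roughly the 19 TB you're seeing formatted
print(raid10_usable)  # 12 TB raw, roughly 11 TB formatted
```

So RAID10 costs you about a third of your usable space compared to RAID5, in exchange for faster writes and much faster, safer rebuilds.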

And unless you have a decent UPS attached to your PC, I really wouldn't recommend enabling caching. Should your PC lose power unexpectedly, you'll risk corrupting 19TB of data.

Thx, JD!

I have a Thermaltake TRX-1000M and a UPS plugged into the server only.

And about the initialization: does it only run in the RAID BIOS console, or does it keep processing while I'm in Windows too?

Damn, that's slow. I didn't even think an HBA could initialize that slowly. I have an HBA in my desktop for RAID 0, but use ROCs (RAID-on-Chip controllers) for all my servers. It does take a while to initialize, and I break my arrays up into 4-disk RAID 5s when I can, as a matter of preference. A week to initialize is another reason in my mind to keep arrays a more manageable size.

I don't get it. Why does it take so much time to initialize a RAID5 array? And if I just stay idle in Windows 7, does the initialization process keep running automatically?

Nope, nothing you can do but wait it out. It'll go "faster" if you stop trying to access it.

As Bluebyte and I are saying though, you should really reconsider RAID5 at this size in a non-enterprise environment. You're using consumer hard drives that are prone to failure, and losing one drive means you're going to have to suffer through this whole initialization process all over again.