
At some point in time, hopefully by February, I will get another terabyte Clariion array and will be able to move everything off of this array and reconfigure, but until then I'm stuck w/ this arrangement.

Can anyone make suggestions as to how you might spread the data files out to minimize IO problems and include the reasoning behind it?

Given the chance to reconfigure these disks totally, what would you suggest? I am going to be getting a new array in February and will have a chance to redo everything.

I'm working on a solution and will post it for comments when I am finished trying to figure it all out. Basically this is a Data Warehouse. The fact tables obviously take up the majority of space (214GB or so). Everything else is pretty small in comparison. I could probably go totally Raid 0+1.

Raid 0+1 would be a good idea. The System, Users and Tools tablespaces probably have low I/O to begin with. I would probably move them to a RAID 1 disk (to /u03 maybe!!), leaving the Data, Index and Fact tablespaces strictly on RAID 5 disks, since those are the most critical tablespaces.

There are currently 2 storage processors in the system. There are 2 fiber channel controllers per block for a total of four controllers, but only 2 are used (the others are for failover). Each SP has 4 LUNs assigned to it in the following configuration.

That means 28 usable 32GB drives plus 2 additional hot spares, for 30 drives total.

The disk was originally set up w/ a 64K stripe but supports 4, 16, 64, 128, and 256K stripes.

You can have between 3 and 16 drives in either of the Raid 0 or Raid 5 configurations.

There is one fiber channel arbitrated loop per storage processor, w/ each loop having 100MB/s of throughput for 200MB/s total.

The rotational speed of the drives is, of course, 10000RPM. There is a 1MB data buffer. The buffer-to-media transfer rates fall between 21.1 and 36.8 MB/s. Access times are 5.7ms read and 6.5ms write. Rotational latency is 2.99ms.
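For what it's worth, the quoted rotational latency lines up with the spindle speed: average rotational latency is half a revolution. A quick sanity check (nothing here comes from the array itself, just arithmetic on the numbers above):

```python
rpm = 10_000
ms_per_rev = 60_000 / rpm         # 6 ms per full revolution at 10K RPM
avg_rot_latency = ms_per_rev / 2  # average latency = half a revolution
print(avg_rot_latency)            # 3.0 ms, matching the quoted 2.99 ms
```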

Let me know if you need any further information. I know that it wouldn't be totally Raid 10 - the temp tablespace doesn't need redundancy and such. I'm just wondering what combination of RAID levels to stick out there to optimize.

I don't even know that much information about myself, let alone my hard drive configuration... you know your stuff... Aside from the serial numbers on the drives, what other information would you possibly be able to supply?

The following configuration may help you get the best performance from the available disks:

Assumptions carefully considered:
1. Degree of Striping and Mirroring
2. Number of Available disks = 30
3. Data Warehouse application with many users that mostly do unique scans on their tables or indexes
4. The 2 spare disks are also used for striping
5. 2 controllers, each with 100MB/s bandwidth
6. A 128K or 256K stripe would be better than 64K.
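On the stripe-size assumption, the usual reasoning is that one multiblock read should be served by a single stripe unit. A rough sketch, assuming a typical 8K Oracle block size and a db_file_multiblock_read_count of 16 (both assumptions, not figures from the original post):

```python
db_block_size = 8 * 1024    # assumed 8K Oracle block size
multiblock_read_count = 16  # assumed db_file_multiblock_read_count
io_size = db_block_size * multiblock_read_count  # 128K per full-scan read

for stripe_kb in (4, 16, 64, 128, 256):
    # number of stripe units (disks) one multiblock read touches
    units = -(-io_size // (stripe_kb * 1024))    # ceiling division
    print(f"{stripe_kb}K stripe -> {units} unit(s) per read")
```

With a 64K stripe, every 128K read spans two disks; at 128K or 256K it stays on one, which is one reason the larger stripes tend to win for warehouse-style scans.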

Note 1: Reason for using 9 disks in u05:
As you said, the minimum buffer-to-media transfer rate is around 21 MB/s, so you can very well go up to 10 disks in one stripe set (10 disks * 21 MB/s = 210 MB/s). The more you distribute the data, the more parallel I/O can be engaged for reading.
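A back-of-the-envelope check of that disk count against the loop bandwidth (the 21.1 MB/s and 2x100 MB/s figures come from the specs posted earlier in the thread; the rest is just arithmetic):

```python
min_disk_rate = 21.1      # MB/s, minimum buffer-to-media rate per drive
loop_bandwidth = 2 * 100  # MB/s, two FC-AL loops at 100 MB/s each
# disks that can stream at full rate before the loops saturate
max_streaming_disks = int(loop_bandwidth / min_disk_rate)
print(max_streaming_disks)  # 9 - consistent with the 9 disks in u05
```

A 10th disk would push the aggregate past the 200 MB/s the two loops can carry, so the 9-disk stripe set is about where sequential-read bandwidth tops out.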

Note 2:
If your application is a Data Warehouse with few users that require many range/full table scans, then RAID 3 is a better option than RAID 5.

Note 3:
If you need more disk space for your DATA, you can use the U07 volume. Create the Rollback tablespace in U06.

Note 4:
If your database is in NOARCHIVELOG mode, then you do not need U03; use those 2 disks as HOT SPARES. However, if the database is in ARCHIVELOG mode, then you must have 2nd members of the REDO LOG groups.

Note 5:
Why do you need a lower degree of striping for INDEX than for DATA?
Indexes require a lower degree of striping because indexes are almost always smaller than the tables they index.

Note 6:
Whatever the RAID level, never create the INDEX tablespace in the DATA volume. This will definitely hurt performance.