If one were to set up, say, two drives with RAID 1 and RAID 0 in a matrix configuration and then separate the two, the RAID 0 would obviously be trashed, but would the RAID 1 be readable by a normal system, since its partition table should sit at the start of the disk?

Also, what would be the expected result of configuring RAID 1 with, say, a 120 GB SSD and either a normal HDD or an SSHD (smart caching not supported), with write-through enabled? Would the weakest-link problem still apply, or would the SSD absorb most of the reads and the HDD most of the writes until the cache filled up?

You are right that the RAID 0 will be trashed if it is removed. A RAID 1 can be removed from either the IRST interface or the Option ROM (Ctrl-I) and it will keep all of its data and OS installation files.

If you mean using the SSD as a cache drive for the mechanical drive, you will notice a change in performance. If you decide to use the SSD as cache, the operating system needs to be installed on the normal HDD.

I appreciate the reply. What I was concerned with was whether, if I had initially set up a RAID 1 on two drives and used the remaining space for either a RAID 0 or another RAID 1, one of the volumes would be detected in another system lacking Intel RST (following a catastrophic system failure, for instance). From what I have read, the RAID metadata is stored at the end of the drives, so I assumed the partition table for the initial array would appear as it would on any other normal drive (at the beginning of the drive itself).
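To make that assumption concrete: if the RAID metadata really does live at the tail of the disk, then the first sectors still look like an ordinary MBR or GPT layout, which is exactly what a foreign system would probe. A minimal sketch of that signature check (standard MBR/GPT offsets only, nothing RST-specific; the function name is mine):

```python
# A RAID 1 member whose metadata lives at the *end* of the disk still carries
# a normal partition table at the *start*, so a signature check on the first
# sectors is indistinguishable from a plain drive.

MBR_SIGNATURE = b"\x55\xaa"   # boot signature at bytes 510-511 of sector 0
GPT_SIGNATURE = b"EFI PART"   # GPT header magic at the start of LBA 1

def looks_like_plain_disk(sector0: bytes, sector1: bytes = b"") -> bool:
    """Return True if the first sectors carry a normal MBR or GPT layout."""
    has_mbr = len(sector0) >= 512 and sector0[510:512] == MBR_SIGNATURE
    has_gpt = sector1.startswith(GPT_SIGNATURE)
    return has_mbr or has_gpt
```

Whether Windows then mounts the volume cleanly is a separate question from whether the partition table is visible, which is why I'd still like to test it.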

As far as the SSD is concerned, since I am on an Intel 5-series chipset, I don't believe we are able to use an SSD as a cache drive in that capacity. That would require Intel Smart Response, correct? Based on the docs, that feature appeared in what I believe is the 7-series architecture and is not available to those of us running earlier ones (regardless of OROM or driver version; at least it is not shown anywhere in the options). I was curious whether anyone knew how an array would behave should I mirror an SSD and an SSHD (or HDD) together as ordinary drives. I was hoping write-back would allow the mirror to avoid a performance decrease (mainly from the SSD's slow writes) until the cache became saturated and slowed down as a result. I wasn't expecting full SSD-like performance, just something more or less in between. Had I a backup system in place with RST, I would have gone straight for a RAID 10 array as the most practical solution.

I imagine this would likely come down to whether, when a read operation is performed on a RAID 1 mirror, RST immediately returns the data from whichever drive is fastest, and similarly whether a write operation returns immediately (cache permitting), with overall performance ending up somewhere around the average of the two drives individually.
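As a rough back-of-the-envelope model of the write-back part (a deliberately simplified throughput model with made-up numbers, not RST's actual behavior): a burst write is absorbed at cache speed until the cache fills, and only the spill-over runs at the backing drive's speed.

```python
def writeback_write_time_s(write_mb: float, cache_free_mb: float,
                           cache_mb_s: float, disk_mb_s: float) -> float:
    """Time for a burst write under a naive write-back model:
    data is absorbed at cache speed until the cache fills,
    then the remainder drains at the backing drive's speed."""
    absorbed = min(write_mb, cache_free_mb)
    spill = write_mb - absorbed
    return absorbed / cache_mb_s + spill / disk_mb_s
```

So a burst that fits in cache completes at cache speed, and only a sustained write degrades toward the slow drive's rate, which is the behavior I was hoping the mirror would show.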

Sorry for the long reply; had I been able to find documentation instead, I would have made use of it myself rather than take up your time. If I am missing something or off base, please feel free to chime in with any suggestions you might have. Unfortunately, at the moment this is the hardware I have to work with.

You can move an HDD that was part of a RAID array to another system, but when you connect that HDD to a different computer it is possible that you will not see the RAID volume unless the board is the same model.

It is also important to mention that you can mirror an SSD and an HDD in the same RAID, but the maximum storage capacity will always be that of the smallest drive, and the same applies to read/write operations. That is, if you have the SSD and the HDD in the same RAID, the read/write speed will be the speed of the normal HDD, because it is the slowest one. The same goes for capacity: the SSD will probably be the smaller of the two.
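The capacity rule, at least, is easy to state precisely: a mirror's usable size is simply its smallest member, and the rest of the larger drive goes unused (or is left for a second matrix volume). A one-liner, purely for illustration:

```python
def raid1_usable_gb(member_sizes_gb):
    """Usable capacity of a RAID 1 mirror: the smallest member.
    The remainder of the larger drive is simply unused (or available
    for a second matrix volume)."""
    return min(member_sizes_gb)
```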

The best way to get good performance is to use the SSD as a cache and use the HDD as the storage or bootable drive.

The Z68, I believe, is a 6-series chipset, while I'm currently on a late X58. My hope was that with write-back caching enabled, the RAID controller would write directly to cache and return as if the I/O had already completed (this is how it's described in the manual), and that, being a mirror set, the I/O would be serviced by the fastest drive that could complete the request. Even with a 1 TB mirror across two normal HDDs, performance doubled for both reads and writes in my testing, unless there's a specific reason the SSD would wait on the HDD to complete a read request instead of serving it directly, and vice versa for writes. As for capacity, I was planning to put the mirror on the outer tracks, somewhat like short-stroking, to maintain the best speed.

The write performance, I imagine, would degrade to the SSD's speed once the cache had been saturated, but I'm afraid I don't see why read performance would drop along with it. If mirroring allows read performance gains by being able to access multiple drives simultaneously, then why would the chipset request the same data from both disks and wait for both of them to retrieve it before completing the I/O request? If it did, then I imagine my RAID 1 testing wouldn't have doubled in read speed. I would expect the performance to land somewhere between the two drives. Generally I would just rely on backups, although I am trying to gain more space on my system disk. Shame Win 8.1 Enterprise didn't ship with ReFS, or I would just use its hierarchical storage.
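To make the read/write distinction concrete, here's a toy model (hypothetical function names, idealized sequential throughput, ignoring seeks and caching): a write must land on every member, so it completes at the slowest drive's pace, while a controller that load-balances reads can approach the sum of the members' read speeds.

```python
def mirror_write_time_s(size_mb: float, member_mb_s: list) -> float:
    # A write completes only once the slowest member has committed it.
    return max(size_mb / s for s in member_mb_s)

def mirror_read_time_s(size_mb: float, member_mb_s: list,
                       load_balanced: bool = True) -> float:
    if load_balanced:
        # Requests split across members: aggregate throughput ~ sum of speeds.
        return size_mb / sum(member_mb_s)
    # A controller that issues the same read to all members and waits on all:
    # the slowest member gates completion.
    return max(size_mb / s for s in member_mb_s)
```

Under this model, whether a mixed SSD/HDD mirror reads fast comes down entirely to whether the controller load-balances reads or waits on both members, which is exactly the open question about RST's behavior.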

Oh well. I do intend to try breaking up one of the RAID 1 mirrors and see what the result is, at the very least. I'd like to find out now, while the drives have no data on them; I just lost a 2 TB archive of 15 years of history, and that is something I am trying to avoid repeating at all costs.