Yes I would. Also, is there an internal drive for the software, or just the flash card on the back? I'd like to see if I can install Windows or Linux and use a single tray as an NFS or NAS box. Is there a hardware RAID controller? Thanks

I took some photos today. Will upload them with explanatory text in the next few hours.

The Gen II and Gen III both boot from a CF card (1GB for the Gen II, 4GB for the Gen III). The CF card sits in a simple adapter that occupies a PCIe slot, but there's no connection to the PCIe bus; the adapter simply connects to an integrated SATA port on the motherboard.

There's no RAID controller in the XIV. The drives are standard SATA parts that connect directly to SATA ports on the motherboard. All the magic is provided by the XIV software.

However, the Gen III is a little bit different in that each node also has its own 512GB 1.8" Micron SSD that sits in a separate PCIe adapter. But the SSD is just used for read caching.

FWIW the node is just a standard x86 server from Xyratex (and the HBAs and NICs are just generic LSI and Intel PCIe cards). You can install any OS that you want onto the box if you want a NAS.
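
If you do put your own OS on a node, a quick sanity check (a rough sketch in Python, assuming a Linux install with the usual /sys/block layout; device names and drive counts will vary) is to list the block devices and confirm the drives show up as plain individual SATA disks rather than behind a RAID LUN:

```python
#!/usr/bin/env python3
# Rough sketch: list block devices with their reported models and sizes on a
# Linux install, to confirm the drives appear as individual SATA disks.
import os

SYS_BLOCK = "/sys/block"

def read(path, default="?"):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default

for dev in sorted(os.listdir(SYS_BLOCK)):
    if not dev.startswith("sd"):              # skip loop/ram/etc.
        continue
    base = os.path.join(SYS_BLOCK, dev)
    model = read(os.path.join(base, "device", "model"))
    sectors = int(read(os.path.join(base, "size"), "0"))
    size_gb = sectors * 512 / 1e9             # 'size' is in 512-byte sectors
    print(f"{dev}: {model} {size_gb:.0f} GB")
```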

A couple of interesting notes: all Gen III nodes have the PCIe carrier for the SSD. The SSD itself is an option that can be added after the fact to any Gen III if it wasn't ordered that way up front.

The SSD is exposed to the array as 400GB of read cache. The extra 112GB could be there for a couple of reasons: either as spare capacity as cells fail, or for future functionality. My money is on the latter, due to the way they write to the SSDs - they shouldn't be wearing them out excessively, and I imagine there is additional hidden spare capacity for the flash controller to deal with failed cells on its own. On the other hand, it is MLC flash, so it may be more failure prone. Given that it's currently just read cache, it doesn't matter if it fails anyway; the data is expendable. FWIW I've never had an SSD fail in an XIV, and we've been running them since day 1 of GA.
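
For what it's worth, the hold-back works out to a bit over 20% of the raw capacity (my own back-of-the-envelope arithmetic, using the numbers above):

```python
# Back-of-the-envelope: how much of the 512GB SSD is held back from the
# 400GB of read cache the array actually exposes. Numbers from this thread.
raw_gb     = 512
exposed_gb = 400
held_back  = raw_gb - exposed_gb                      # 112 GB
print(f"Held back: {held_back} GB "
      f"({held_back / raw_gb:.0%} of raw capacity)")  # ~22%
```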

Yes - this is all true.

Assuming that you install the SSD option, the SSD in the Gen III is a 512GB 1.8" Micron RealSSD P400e (OEM part no. MTFDDAA512MAR-1K1AB). The datasheet says that it contains 25nm MLC NAND, with a micro-SATA 6Gb/s interface, a Marvell controller, and 175TB of lifetime write capacity.
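
Those figures fit the read-cache-only role. As a rough back-of-the-envelope (my own arithmetic; the 5-year service life is my assumption, not anything from the datasheet), 175TB of lifetime writes on a 512GB drive is a fairly modest endurance rating:

```python
# Rough endurance math for the 512GB P400e, using the 175TB lifetime-write
# figure quoted above. The 5-year service life is an assumption.
capacity_tb       = 0.512
lifetime_write_tb = 175
service_days      = 5 * 365

full_drive_writes = lifetime_write_tb / capacity_tb   # ~342 total
dwpd              = full_drive_writes / service_days  # ~0.19 DWPD
print(f"~{full_drive_writes:.0f} full drive writes, ~{dwpd:.2f} DWPD over 5 years")
```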

It's a surprisingly low-end drive for an enterprise array. But it does work as promised (each node has its own SSD cache, so you get several TB of cache presented across the XIV frame).
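
To put a rough number on "several TB": a fully populated frame is 15 modules, and each module exposes 400GB of SSD cache, so a quick calculation (my arithmetic; partial configurations scale down proportionally):

```python
# Total exposed SSD read cache across a Gen III frame, at 400GB per module.
exposed_per_module_tb = 0.4
for modules in (9, 15):                  # partial and fully populated frames
    total = modules * exposed_per_module_tb
    print(f"{modules:2d} modules: {total:.1f} TB of SSD read cache")
```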

I realize this is an old thread but there is very little information posted about the IBM XIV.

We acquired a 9-module Gen2 XIV. The system was powered off via the UPS instead of being properly shut down. It was then transported to our site, and now the GUI shows it's in maintenance mode with all 108 disks failed. All but 20 disks pass the component_test, but the phase-in never completes. A state_change target_state=on results in "The required operation is not allowed in the current system state".

There are still volumes defined, but I don't care about the data. Does anyone know the recovery procedure for an XIV that was powered off incorrectly? I've scoured the file system looking for interesting scripts but didn't find anything.