These are starting to show up here and there "bare", with no drives, and I could see buying one and supplying my own disks.

But I've never been inside of one, and further, I have some questions about the specifications ... if anyone can help:

- Are both of them SATA-only on the inside? As in, you cannot put a SAS disk into one of the 48 slots?

Most important, however, is that I do not plan on running ZFS ... which means I need an actual hardware RAID controller (not interested in vinum, etc., thank you). But how does this work, physically?

Does each of the 48 drive bays have a SATA port on the backplane underneath, so I can plug my RAID card's leads into each of the 48 slots? Or are there 4 or 6 or 8 larger SATA-aggregator ports somewhere on the backplane, so I can go from the RAID card to the backplane with a few of those?

Basically, how do I connect an add-in RAID card to the internal drives of a Thumper/Thor?

And finally, are there any cards that would even work? I ask because the Thumper has only 2 low-profile PCI-X slots, and the Thor has only 3 half-height PCIe slots ... and all of the 16- or 24-port RAID cards I have ever seen are full height ...

Looks like both 3ware and Areca have only full-height options for anything more than 8 ports ... which is why I am curious about the ability to use internal SAS drives, since I wouldn't need a physically large card to do that ...

Not sure on the specs of these machines (check docs.sun.com or sunsolve.sun.com). What I can tell you is that SAS and SATA share a common connector. SAS controllers are super fancy and work with both SATA and SAS drives, but SATA controllers do not work with SAS drives.
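For what it's worth, the compatibility rule is simple enough to encode; here's a toy sketch (plain Python, nothing Sun-specific, just the rule of thumb from above):

```python
# SAS/SATA compatibility rule of thumb:
# a SAS controller accepts both drive types, while a SATA
# controller accepts SATA only (SAS drives are even keyed so
# they won't physically seat in a SATA-only connector).

def drive_fits(controller: str, drive: str) -> bool:
    """Return True if a drive of the given type works on the controller."""
    if controller == "SAS":
        return drive in ("SAS", "SATA")
    if controller == "SATA":
        return drive == "SATA"
    raise ValueError(f"unknown controller type: {controller}")
```

Which is the asymmetry that matters for this thread: a SATA-only backplane rules SAS drives out entirely.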

As for the RAID controller part, why wouldn't you use ZFS? In terms of administration (believe me on this), you are making your life EASY with ZFS.
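To put a number on "easy": bringing storage up on a box like this is only a few commands (the device names below are made up for illustration; a real Thumper enumerates its own c*t*d* names):

```shell
# Create a pool of mirrored pairs (device names are illustrative)
zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0

# Carve out a filesystem with compression; no newfs, no fstab edits
zfs create -o compression=on tank/data

# Health and redundancy status in one command
zpool status tank
```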

yeah, and there are some issues in our organization with tools and work flows, etc., that are all going to break with ZFS.

This isn't knocking ZFS - we're excited about it - but for this application we really do need physical raid controllers. I'm just not sure if it's possible to address 48 drives with physical controllers in a thumper, and am trying to find out how...

jsloan wrote:yeah, and there are some issues in our organization with tools and work flows, etc., that are all going to break with ZFS.

This isn't knocking ZFS - we're excited about it - but for this application we really do need physical raid controllers. I'm just not sure if it's possible to address 48 drives with physical controllers in a thumper, and am trying to find out how...

This, however, does make sense. I have seen many things break because of zfs.

I have built my own Solaris/OpenSolaris server machines and did some research on the x4500 before it went EOL. Browsing some of the documentation on oracle.com makes me think the drives mate with the I/O board and the controllers are part of the I/O board. I've read in Sun documentation that the controllers are LSI, but I've also seen Marvell driver download links, and the above article has a diagram showing the Marvell controllers.

Here's the thing: that machine was intended to use ZFS, with lots of cheap SATA drives. I don't think you'll be able to use SAS drives or dedicated RAID controllers; that's just not what the machine was designed to do. You might be able to use Linux if you don't like Solaris, but I'd say ZFS is a better solution than LVM and software RAID on Linux.

Not sure what Bri3d means about reliability with ZFS. I've read a lot of success stories and haven't had an issue with my own couple of TBs. It's not fair to compare RAID-6 with RAIDZ; RAIDZ2 would be the comparable layout. Speed? Well, if you want speed you don't use RAID-5/RAID-6 or anything similar.

Stripes across mirrored pairs would be better in this application. It's expandable, fast and offers better rebuild times.
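Rough numbers for a 48-bay box make the trade-off concrete (back-of-envelope only; the per-drive size and vdev widths are my assumptions, not anything from the Sun docs):

```python
DRIVES = 48
SIZE_TB = 1.0  # per-drive capacity, purely illustrative

def usable_raidz2(vdevs: int) -> float:
    """48 drives split into equal RAIDZ2 vdevs; 2 parity drives per vdev."""
    width = DRIVES // vdevs
    return vdevs * (width - 2) * SIZE_TB

def usable_mirrors() -> float:
    """Stripe across 24 two-way mirrors: half the raw capacity."""
    return (DRIVES // 2) * SIZE_TB

print(usable_raidz2(8))   # 8 x 6-wide raidz2 -> 32.0 TB usable
print(usable_mirrors())   # 24 mirror pairs   -> 24.0 TB usable
```

So mirrors give up capacity, but a rebuild only has to copy one drive's worth of data instead of reading the whole vdev, which is where the better rebuild times come from.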

I am not a fan of dedicated controllers. The Opterons in the X4500 are faster than the little CPU on a RAID card. If you look at how most Filers and other storage appliances work, they don't use RAID cards either. A NetApp for example is really just a custom PC with custom software, FC ports and a bunch of JBODs.

My own RAIDZ arrays have been stable and fast enough running on slow Core 2-derived Pentium dual-cores.

Of course the Thumper docs don't say anything about hardware RAID, because the machine was designed and built to run only RAID-Z. I don't think you could get enough ports off of any commercially available low-profile SATA RAID controller to run hardware RAID on one (the disks are SATA, by the way, not SAS).
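The arithmetic is stark: with the biggest low-profile cards anyone seems to ship topping out at 8 ports (per the 3ware/Areca search above), the slot counts don't come close. A trivial check, assuming every slot holds the largest such card:

```python
def max_addressable(slots: int, ports_per_card: int = 8) -> int:
    """Most drives reachable if every slot holds the biggest low-profile card."""
    return slots * ports_per_card

print(max_addressable(2))  # Thumper (X4500): 2 LP PCI-X slots -> 16 drives
print(max_addressable(3))  # Thor (X4540): 3 half-height PCIe slots -> 24 drives
# Both fall well short of the 48 bays.
```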

Sun have been switching between LSI and Marvell parts for a while (leaning towards Marvell lately). The X4500 is Marvell while the X4540 is LSI, if I recall correctly.

Well, the thought was that since the X4540 (the "Thor") has 3 PCIe slots, there _might_ be some half-height cards out there with 16 ports each, but this does not seem to be the case - at least not from 3ware or Areca.

And the other thought was that there was some way to interface with the backplane, and those pics show that there _is_ - but those connectors don't look like standard interconnects that I could run off a RAID card.

So it looks to be a bust from two different angles - at least if I want UFS2 on hardware RAID.

Drifting a bit off topic ... isn't it surprising that in the 3+ years the Thumper has been out, nobody has cloned it? When they were first introduced, I thought for sure we were only 6 months away from a 4U, 48-disk clone from Supermicro ... and now, all these years later, nobody has brought one out. Isn't that odd?

Of course the Thumper docs don't say anything about hardware RAID, because the machine was designed and built to run only RAID-Z. I don't think you could get enough ports off of any commercially available low-profile SATA RAID controller to run hardware RAID on one (the disks are SATA, by the way, not SAS).

Sun have been switching between LSI and Marvell parts for a while (leaning towards Marvell lately). The X4500 is Marvell while the X4540 is LSI, if I recall correctly.

The link is in reference to a bug in a beta build from 2009, which even that article says will be fixed in the next build. ZFS has its quirks, that can't be helped, but there are no major causes for concern in the stable releases. As far as most industry specialists are concerned, ZFS is production-ready.

The whole "of course it's raidz" was the point. You shouldn't buy a Thumper unless that's the direction you want to take.

@OP: A better solution would be something like a 3511 storage array (not that exact model, it's old; maybe the newer 6140 is worth looking at) attached to an x41x0-style server over fibre. That also opens up the option of shared storage in the future.

To add another dimension to Japes' argument in favor of software RAID on commodity-esque hardware:

If you have a dedicated proprietary RAID controller subsystem (I would have said "hardware RAID", except that I've already been chewed out once by someone who pointed out that since it isn't implemented in dedicated logic but rather a microprocessor running a custom program, it is no more "hardware RAID" than any other software setup), you have the additional lurking time bomb that other controllers very likely cannot read the array. If your controller dies, you need to acquire the exact same model or else restore from dumps. Software RAID is much more forgiving of being moved to different equipment.
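A concrete illustration with Linux md, since it's the software RAID most people have at hand (the mdadm commands are real; the device names and the dead-controller scenario are hypothetical):

```shell
# Disks moved from a dead machine to a new one; the array
# metadata travels on the disks themselves, not on a controller.
mdadm --examine /dev/sdb1   # shows the superblock written by the old host
mdadm --assemble --scan     # reassembles any arrays found on attached disks
cat /proc/mdstat            # the array comes back (e.g. as /dev/md0),
                            # no matching controller hardware required
```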