Take virtually any modern-day SSD and measure how long it takes to launch a single application. You'll usually notice a big advantage over a hard drive, but you'll rarely find a difference between two different SSDs. Present-day desktop usage models simply can't stress the performance high-end SSDs are able to deliver. What differentiates one drive from another is really performance in heavy multitasking scenarios or short bursts of heavy IO. Eventually this will change as the SSD install base increases and developers can use the additional IO performance to enable new applications.

In the enterprise market, however, the workload is already there. The faster the SSD, the more users you can throw at a single server or SAN. There is effectively no limit to the IO performance needed in the high-end workstation and server markets.

These markets are used to throwing tens if not hundreds of physical disks at a problem. Even our upcoming server upgrade uses no fewer than fifty-two SSDs across our entire network, and we're small beans in the grand scheme of things.

The appetite for performance is so great that many enterprise customers find the limits of SATA unacceptable. While we're transitioning to 6Gbps SATA/SAS, for many enterprise workloads even that isn't enough. Answering the call, many manufacturers have designed PCIe-based SSDs that do away with SATA as the final drive interface. The designs range from a bunch of SATA devices paired with a PCIe RAID controller on a single card to native PCIe controllers.

The OCZ RevoDrive: two SF-1200 controllers in RAID on a PCIe card

OCZ has been toying in this market for a while. The Z-Drive took four Indilinx controllers and put them behind a RAID controller on a PCIe card. The more recent RevoDrive did the same with two SandForce controllers, and the RevoDrive X2 doubles the controller count to four.

Earlier this year OCZ announced its intention to bring a new high-speed SSD interface to market. Frustrated with the slow progress of SATA interface speeds, OCZ wanted to introduce an interface that would allow greater performance scaling today. Dubbed the High Speed Data Link (HSDL), OCZ's new interface delivers 2-4GB/s (that's right, gigabytes) of aggregate bandwidth to a single SSD. It's an absolutely absurd amount of bandwidth, definitely more than a single controller can feed today - which is why the first SSD to support it will be a multi-controller device with internal RAID.
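
As a quick sanity check on that figure (my own arithmetic from PCIe lane rates, not OCZ's spec sheet): HSDL carries four PCIe lanes, so the 2-4GB/s range falls straight out of per-lane PCIe throughput, depending on whether the link runs first or second generation signaling. A minimal sketch:

    # Rough HSDL bandwidth check (assumed PCIe lane rates, not OCZ specs).
    # PCIe 1.x moves ~250MB/s per lane per direction; PCIe 2.0 doubles that.
    PCIE_LANE_MBPS = {"gen1": 250, "gen2": 500}

    def hsdl_aggregate_gbps(gen, lanes=4):
        """Aggregate bandwidth in GB/s, counting both directions of the link."""
        per_direction = PCIE_LANE_MBPS[gen] * lanes  # MB/s one way
        return per_direction * 2 / 1000.0            # both directions, in GB/s

    print(hsdl_aggregate_gbps("gen1"))  # 2.0 -> the low end of the claim
    print(hsdl_aggregate_gbps("gen2"))  # 4.0 -> the high end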

Instead of relying on a SATA controller on your motherboard, HSDL SSDs put the SATA controller on the drive itself and attach it via four PCIe lanes. HSDL is essentially a cabled PCIe standard: it uses a standard SAS cable to carry four PCIe lanes between an SSD and your motherboard. On the system side you just need a dumb card with enough logic to grab the cable and fan the signals out to a PCIe slot.

The first SSD to use HSDL is the OCZ IBIS. As the spiritual successor to the Colossus, the IBIS incorporates four SandForce SF-1200 controllers in a single 3.5” chassis. The four controllers sit behind an internal Silicon Image 3124 RAID controller - the same controller used in the RevoDrive. It's natively a PCI-X part, picked to save cost. The roughly 1GB/s of bandwidth you get from the PCI-X controller is routed to a Pericom PCIe x4 switch, and the four PCIe lanes stemming from the switch are sent over the HSDL cable to the receiving card on the motherboard. The signal is then grabbed by a chip on the card and passed through to the PCIe bus. Minus the cable, this is basically a RevoDrive inside an aluminum housing. It's a not-very-elegant solution that works, but the real appeal would be controller manufacturers and vendors designing native PCIe-to-HSDL controllers.
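
To see where the ceiling in that chain sits, here's a minimal data-path sketch (the throughput figures are ballpark assumptions on my part, not measured numbers):

    # Minimal IBIS data-path model: four SF-1200s -> SiI3124 (PCI-X) ->
    # Pericom switch -> PCIe x4 -> host. All figures are ballpark assumptions.
    SF1200_SEQ_READ_MBPS = 280   # approximate per-controller sequential read
    PCI_X_MBPS = 1000            # 64-bit/133MHz PCI-X tops out around 1GB/s
    PCIE_X4_MBPS = 1000          # four gen1 lanes at ~250MB/s usable each

    def ibis_read_ceiling_mbps(controllers=4):
        """Sequential throughput is bounded by the slowest hop in the chain."""
        return min(controllers * SF1200_SEQ_READ_MBPS, PCI_X_MBPS, PCIE_X4_MBPS)

    print(ibis_read_ceiling_mbps())  # 1000 - the PCI-X hop caps it, not the NAND

Under these assumptions the Silicon Image part, not the four SandForce controllers, is the first bottleneck - consistent with it being the cost-saving compromise in the design.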

OCZ is also bringing to market a 4-port HSDL card with a RAID controller on board ($69 MSRP). You'll be able to RAID four IBIS drives together on a PCIe x16 card for an absolutely ridiculous amount of bandwidth. The attainable bandwidth ultimately boils down to the controller and design used on the 4-port card, however. I'm still trying to get my hands on one to find out for myself.

74 Comments

I suspect that many companies are working on SSDs that do away with SATA as a final drive interface. Just as we saw companies like OCZ enter the SSD market before Intel, I suspect we'll see the same thing happen with PCIe SSD controllers. When the market is new/small it's easy for a smaller player to quickly try something before the bigger players get involved. The real question is whether or not a company like OCZ or perhaps even SandForce can do that and make it successful. I agree with you that in all likelihood it'll be a company like Intel to do it and gain mainstream adoption, but we've seen funny things happen in the past with the adoption of standards proposed by smaller players.

The difference in time to market is not so much company size as willingness to take risk. Small companies have to take more risks to carve out their market. I think you'll find that Intel was working on SSDs long before OCZ even thought about selling them. The difference is that Intel spends a lot of time getting the product right before it is released; OCZ simply bypasses proper product development and does it in the field with paying customers.

The enthusiast market might be happy trading reliability for performance, but that is not going to happen in the enterprise market. (This is probably a moot point, as I actually doubt that enterprise is the true target market for this product.)

It would be great to see a lot more focus on aspects outside of performance, which as you have alluded to is no longer a relevant issue in terms of tangible benefit for the majority of users.

So if the OS sees the SiI3124 then there's actually no RAID inside at all - the 3124 is a simple 4-port SATA host controller, and the RAID there is software RAID.

It would be interesting to see what Linux sees about the IBIS. My guess is it will see a plain 3124 and four SSDs behind it with no RAID at all, and you can use dmraid etc. to RAID them together - so you're actually seeing what's happening, with no magic left.

Correct. I just installed the 240GB IBIS into a Karmic machine and the kernel only sees four separate 55GB "drives" - effectively JBOD mode. The Silicon Image 240GB RAID0 set (which is reported OK in the BIOS during POST) is not visible to Linux. I think a driver will need to be published for this. Will explore dmraid options next...

OK, so the official comment from OCZ is "The drive is not compatible with Linux". Seriously, OCZ must be kidding. And no one at AnandTech seemed to think this little tidbit was newsworthy in 8 pages of gushing praise? I would think a statement to the effect that non-Windows OS support is non-existent would be mandatory in a review.

I went back and checked the OCZ datasheet, and sure enough, it only mentions Windows. So the blame rests with me for ASSUMING anyone introducing such a device would support contemporary OSes. But I've just wasted $700, so I feel like a complete loser. And that'll CERTAINLY be the last product I ever buy from OCZ.

While it may be of some use to speed freaks and number crunchers who run PCs only to get a few more random numbers from benchmarks than others before them, I don't see the point in this. Yes, of course, bandwidth and stuff, all nice. But:

- You can't RAID a few of them into a proper RAID level (10, 5, 6, 50, 60) because every drive is already "RAID-ed" internally.
- You need a special add-on card that isn't any kind of standard - not to mention it offers nothing but ports to connect drives.
- There's a high probability that such drives won't run properly with standard RAID cards (Areca, Adaptec, LSI, Intel - take your pick).

Instead of creating some new "standard", OCZ should focus on lowering the cost of SSDs. 2GB/s+ in RAID0 is easily achievable right now: you need 16 SSDs (which is exactly like four 4-controller IBIS drives), a 16-port card (like the new Areca 1880), and off you go. The only advantage for the OCZ IBIS here is that it occupies less space - 4 drives instead of 16 - but even then, 16 2.5" SSDs take only three 5.25" bays with two 6-drive and one 4-drive backplanes.
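
To put rough numbers on that claim (a sketch with assumed per-drive figures, not measurements):

    # Rough RAID0 scaling estimate; drive speed and efficiency are assumptions.
    drives = 16
    seq_read_mbps = 150   # a conservative 2010-era SATA SSD sequential read
    efficiency = 0.85     # striping rarely scales perfectly across 16 drives

    print(drives * seq_read_mbps * efficiency / 1000.0, "GB/s")  # ~2.04 GB/s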

And for heavy-duty jobs there are always better solutions, like the GM-PowerDrive-LSI for example. It delivers 1500MB/s reads and 1400MB/s writes straight out of the box, supports all RAID levels from 0 to 60, has 512MB of onboard cache, and needs no special new card. It just works.

For some reason AnandTech seems to have lost objectivity when it comes to OCZ. This along with the RevoDrive = epic fail. Consumers want products that are fit for market as opposed to underdeveloped and overpriced products that are full of bugs. OCZ's RMA policy is a substitute for quality control.

But the aforementioned RevoDrive is quite interesting, as that setup should give you access to TRIM on the FreeBSD operating system. Since the Silicon Image chip works as a normal SATA controller under non-Windows OSes, passing TRIM through should also work.

I cannot confirm this, but in theory you should have TRIM when using the RevoDrive under FreeBSD, and likely also Linux (if you disable the pseudo-RAID).

An affordable native PCI Express solution has yet to present itself. It would be very cool if they made an LSI HBA connected to 4 or 8 SandForce controllers for a >1GB/s solution that also supports TRIM (though not under Windows). That would be very sleek!

SSDs in traditional drive packaging have one big advantage: they can be used in large disk arrays. This new gadget is not usable in anything other than a single computer or workstation, i.e. it's a DAS solution.

The whole industry, from mid-level to high-end, is moving to SAN storage (be it FC, iSCSI, or InfiniBand). The IBIS has no future...