Storage in the computer market currently revolves around two product types: the HDD and the SSD. The SSD is faster and requires less power to operate, leading to better battery life in portable computers. The HDD offers lower cost and more storage capacity than current SSDs.

A company called Fusion-io is offering a new product called the ioDrive Duo, which it claims is the world's fastest and most innovative SSD. The company says the product doubles the slot capacity of its PCI Express ioDrive storage solution.

The new ioDrive Duo offers what the company claims are previously unheard-of levels of performance, capacity, and protection for a single server. Fusion-io says performance scales to 6GB/sec of read bandwidth and over 500,000 read IOPS when using four ioDrive Duos.

David Flynn from Fusion-io said in a statement, "Many database and system administrators are finding that SANs are too expensive and don’t meet performance, protection and capacity utilization expectations. This is why more and more application vendors are moving toward application-centric solid-state storage. The ioDrive Duo offers the enterprise the advantages of application-centric storage without application-specific programming."

The ioDrive Duo fits into PCI Express x8 or x16 slots and can sustain up to 20Gb/sec of raw throughput. The company also says it can easily sustain 1.5GB/sec of read bandwidth and nearly 200,000 read IOPS. Sustained read bandwidth is 1500 MB/sec, sustained write bandwidth is 1400 MB/sec, read IOPS are 186,000, and write IOPS are 167,000.
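The four-card scaling claim earlier in the article can be sanity-checked against these per-card spec numbers. The sketch below assumes ideal linear scaling across cards, which real deployments rarely achieve:

```python
# Sanity check: do the four-card figures follow from the single-card
# spec numbers? Assumes perfectly linear scaling, which is optimistic.

cards = 4
read_bw_per_card_mb = 1500      # sustained read bandwidth, MB/sec
read_iops_per_card = 186_000    # sustained read IOPS

aggregate_bw_gb = cards * read_bw_per_card_mb / 1000
aggregate_iops = cards * read_iops_per_card

print(aggregate_bw_gb)   # 6.0 GB/sec across four cards
print(aggregate_iops)    # 744,000, comfortably "over 500,000"
```

Under that assumption the per-card specs imply 6GB/sec and 744,000 read IOPS in aggregate, consistent with the company's four-card claim.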

The ioDrive Duo offers multi-bit error detection and correction, plus Flashback protection, which provides chip-level N+1 redundancy and on-board self-healing. The product can also be configured for RAID-1 mirroring between the two ioMemory modules on the same ioDrive Duo PCIe card.

The new cards will be available in April 2009 in 160GB, 320GB, and 640GB capacities; a 1.28TB version isn't coming until the second half of 2009. Typical SSDs, like Intel's offerings, are sized like normal hard drives and connect via SATA and other enterprise connection standards.

Comments


I love Woz. But nothing he's touched after Apple has turned out well.

FusionIO has a few major disadvantages:

1. Hotswappability. High-end servers do allow this, but normal 2S servers don't. When the SSD breaks and you have to take down the server to swap a PCI-Express card, you will realize how stupid PCI-Express-based storage really is.

2. Price. $5,000 for 160GB? Why not get 12 X25-Es for the same price and much better aggregate IOPS? The advertised 167K random write IOPS are cache-backed numbers at a 1K IO size. Normal file system IOPS are 4K standard, which means FusionIO won't deliver the advertised random write IOPS in database-type applications.

3. Bootable volume. This is another thing FusionIO lacks, simply because the storage card needs a PCI-Express bridge driver in the OS before it can be accessed.

Anything FusionIO does, you can do with an array of cheaper SLC SSDs. Even Intel's X25-E carries a 100% price premium over DRAMeXchange.com's spot price (currently about $5.50 per 16Gbit SLC chip). From that you can see the FusionIO's flash is hardly worth $500 (5.50 × 80 16Gbit chips) against the $5K price they are asking. Given that Intel will launch 30nm SSD production by the end of the year, and given the current global economic conditions, SSD IC pricing will continue to slide, and FusionIO won't be able to catch the volume-economics train.
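The raw-NAND arithmetic in this comment can be reproduced. The sketch below uses the commenter's own $5.50-per-chip spot figure and assumes 160GB is built from 16Gbit (2GB) SLC parts; controller, DRAM, PCB, and firmware costs are ignored, so this is a cost floor, not a fair retail comparison:

```python
# Back-of-envelope raw NAND cost for a 160GB SLC device, using the
# commenter's DRAMeXchange spot figure. Only the flash is costed.

capacity_gb = 160
chip_gb = 16 / 8                 # a 16 Gbit chip holds 2 GB
price_per_chip = 5.50            # commenter's spot-price assumption

chips_needed = capacity_gb / chip_gb        # 80 chips
raw_nand_cost = chips_needed * price_per_chip

print(chips_needed)       # 80 chips
print(raw_nand_cost)      # $440, roughly the "$500" the comment cites
```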

This is one of the things that sounds good in theory, but in practice, sucks balls.

Just to clarify, are you saying PCI-e based storage as in right on the card, or would that include any RAID platform that uses PCI-e as well?

My understanding is that current RAID systems supporting 12 drives would be forced to go through PCI-e anyway because, again to my understanding, no motherboards out there natively support 12-drive RAID, let alone have a controller good enough for what you are talking about.

Hotswappability - Can you hot-swap the PCI-express RAID card in your normal 2S server? If not, then this product has the exact same disadvantages as any RAID card.

Price - Think $/IOPS, not $/GB... this thing is an absolute beast. Also, after you pay for your 12 X25-E SSDs, what are you going to hook them up to? That costs money too. Are you going to use a 12-port RAID card, SAS HBA, external JBOD? Show me a device I can hook 12 SATA SSDs up to and actually obtain the aggregate throughput potential of all those drives. 1500 MBps (with the big B)!? That's insane, and I don't know of any storage controller that can match it at this price point.

Bootable volume - I agree somewhat, for $5k they should have thrown some boot firmware on the card just to give us the option... but I think that even if it were there most people wouldn't be booting off of this. It's for high workload servers, not gaming rigs.

As to the final comment about your array of SSDs... I still want to know what you're going to hook them up to and get close to the performance of the FusionIO.

somedude, you sound like FusionIO's salesman, with logical errors all over the place.

"Hotswappability - Can you hot-swap the PCI-express RAID card in your normal 2S server? If not, then this product has the exact same disadvantages as any RAID card."

True, you can't hot-swap RAID cards. But RAID cards have a much smaller chance of failure (better MTBF) than disks. In fact, RAID cards typically only fail when the temperature in a case is too high, a problem easily mitigated with better airflow or better air conditioning. We are talking about the failure of SSDs, which are guaranteed to wear out: SLC cells are rated for about 100,000 erase cycles. If the SSDs reside in 2.5-inch hot-swappable bays, replacing them is a one-minute job. If the FusionIO really does 167,000 writes a second, you can be sure the card won't last long. When it fails (and it will fail; all drives fail), it will be a nightmare.

Price - I am thinking in $/IOPS. In fact, I am thinking in IOPS×GB/Watt/Dollar. It only takes 6 Intel X25-Es to get the 180K read IOPS the FusionIO provides. My 12-drive X25-E example simply shows that FusionIO is twice as expensive as an X25-E-based RAID array that does the same thing.

As for the RAID controller: cards based on Intel's dual-core 1.2GHz IOP348, from Adaptec, Supermicro, and Areca, can all do about 1.2-1.3GB/sec sustained, which is close to the FusionIO spec. You can hook 6-8 Intel X25-Es to each RAID card to get to where FusionIO is, at half the cost, while providing hot-swappability.

JBODs are all over the place: Dell MD1120s, HP MSA70, Supermicro chassis, etc. What I am trying to say is that the 2.5-inch hot-swap infrastructure is already in place in most datacenters. All the IT people have to do is replace the SAS drives with X25-Es, and the upgrade is in place, cheap and fast. Would people crack open servers to install a pair of $5,000 PCI-Express cards? (It would have to be a pair, since you want mirroring at the bare minimum.)

What it all comes down to is that FusionIO is a pipe dream. All PCI-based storage so far has failed (Gigabyte i-RAM, Sandisk, etc.), and so will FusionIO, the OCZ Z-Drive, and Micron's PCI-E card. Hot-swappability is a bitch of a requirement, and volume economics demand at least price parity, both of which FusionIO lacks.

quote: It only takes 6 Intel X25-Es to get the 180K read IOPS the fusionIO provides

And it would take about 50 X25-Es to match the claimed 167,000 4K random write IOPS (3,300 for the X25-E).
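The drive counts cited on both sides of this exchange follow from the vendors' spec sheets, assuming perfectly linear RAID scaling (an optimistic assumption). The 35,000 read IOPS figure below is Intel's published spec for the X25-E; 3,300 is its 4K random write spec as cited in the thread:

```python
import math

# How many X25-E drives match the ioDrive Duo's spec-sheet IOPS,
# read side vs. 4K random write side, assuming ideal scaling.

duo_read_iops = 186_000
duo_write_iops = 167_000
x25e_read_iops = 35_000      # Intel spec for X25-E reads
x25e_write_iops = 3_300      # Intel spec for X25-E 4K random writes

read_drives = math.ceil(duo_read_iops / x25e_read_iops)
write_drives = math.ceil(duo_write_iops / x25e_write_iops)

print(read_drives)    # 6 drives cover the read-IOPS figure
print(write_drives)   # 51 drives needed on the 4K write side
```

Both numbers in the thread check out: reads need only a handful of drives, while matching the claimed 4K write figure takes roughly 50.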

If you want to hotswap SSDs you'll need even more drives as the performance of some will be lost to redundancy.

What RAID level are you planning on using? Do you expect to get perfect linear scaling for each additional SSD out of RAID5/6 in 4K random workloads? If you go RAID10, then your required number of drives will double. With RAID0 hotswap is pointless.

If you use a SAS RAID card, STP overhead will eat up some of the performance of the SATA SSDs.

Even if you use two of the new cards in RAID1, the $/IOp makes these worth considering. Assuming, of course, that your application actually NEEDS 167,000 random write IOps.

First of all, Intel's published 3,300 random write IOPS is conservative; depending on IO size, it can actually bench at about 5,000-8,000 IOs per second. The 167K random write IOPS from FusionIO is suspicious at best. It has to be cached in write-back RAM, which means the controller is actually lying about data persistence. I don't see a supercapacitor or a battery on the FusionIO device, so when the power goes out you might lose transactions. (Correct me if I am wrong here, but I don't see a battery on the FusionIO PCB.)

If you really want to get technical, FusionIO is simply pushing the same RAID idea masqueraded under a new brand name. Under the two yellow heatsinks are probably the same Intel IOP348 chips the RAID cards use; it even has the same heatsink as the Adaptec 5-series RAID cards, so I am expecting nothing less than an Intel IOP348 underneath. Under the other heatsink on the bottom left is probably a PCI-Express lane splitter, letting the two RAID ICs share the same PCI-Express bus. The driver then has to hide the card's internal dual-RAID implementation and do software RAID0 on top of it.

I counted the NAND ICs on that PCB. There is no freaking way FusionIO can do 167K write IOPS without caching them in RAM. Plus, how long do you think the device will last if it actually does 167K writes a second (SLC breaks down at 100K erase cycles)? If it does 167K 4K IOs per second, that is 668MB of data per second, and your FusionIO will be out of space in as little as 5 minutes. So much for high write IOPS if it is full :)
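The fill-time figure in this comment, plus an idealized wear-out estimate under the same 100K-erase-cycle assumption, can be checked quickly. This ignores write amplification, over-provisioning, and wear leveling, all of which change real-world numbers substantially:

```python
# Fill-time and idealized wear-out arithmetic for a 160GB card
# sustaining 167,000 x 4K random writes per second, using the
# comment's decimal units. Worst-case, continuous-write scenario.

capacity_bytes = 160e9
write_iops = 167_000
io_size = 4_000                 # 4K IO, decimal, as in the comment
erase_cycles = 100_000          # SLC endurance assumption from the thread

bytes_per_sec = write_iops * io_size           # 668 MB/sec
fill_seconds = capacity_bytes / bytes_per_sec

total_write_budget = capacity_bytes * erase_cycles    # ~16 PB, idealized
wearout_days = total_write_budget / bytes_per_sec / 86_400

print(fill_seconds / 60)   # ~4 minutes to write the full capacity once
print(wearout_days)        # ~277 days of nonstop worst-case writing
```

So the "5 minutes" figure is in the right ballpark, and even under these crude assumptions a card hammered at full write speed around the clock would exhaust its idealized erase budget in under a year.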

It would take a completely incompetent IT manager to consider FusionIO. For the $10,000+ a pair of FusionIOs costs, a better bet is to fill a 24-bay SAS case with 24 Intel X25-Es at the same price and get over 2.5 times the space and twice the IO throughput. Beyond that, overpriced SAN systems will try to incorporate FusionIO to further fudge IOPS numbers, but those SAN vendors will die along with FusionIO if they decide to establish dependence on it.

I've been looking for an actual benchmark report of purely 4K random writes from the previously released fusion-io cards. I too am suspicious that they can sustain 167K random write IOPS at 4K. For now, I hope their spec sheets are not complete BS; time will tell.

Your write-cache concerns apply equally to the fusion-io and the Intel X25-E. From what I could find, the X25-E only obtains its 3,300+ write IOPS with write caching enabled, and I don't think the Intel SSD has a battery or capacitor to protect its cache.

The fusion-io spec sheet claims 48 years if you're write-erasing 5TB/day. Even if you cut that by a factor of 10, you're still looking at nearly 5 years. How long do you think one of the Intel SSDs would last if you were writing to it at the maximum possible speed 24x7?

Also, both the fusion-io cards and the X25-Es come with 3-year warranties. They are based on the same flash technology and should have similar lifespans.

The 32GB X25-Es are going for around $400 each right now. You're at nearly $10K before paying for the 24-drive SAS case, SAS RAID card, and cables. Even if you assume 100% perfect linear performance scaling for all 24 drives on a RAID card, you only get to 80K IOPS, or about halfway there (based on the spec sheets from both vendors). Also, as I stated before, you'll lose a portion of your SATA drive performance to STP overhead if you put them behind a SAS controller. And you'll lose both capacity and performance to redundancy if you plan on using anything but RAID0.
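The cost and aggregate-IOPS math in this rebuttal follows from its stated assumptions ($400 per 32GB X25-E, Intel's 3,300 4K write IOPS spec, and unrealistically perfect linear scaling across a RAID card):

```python
# Reproducing the 24-drive cost and aggregate write-IOPS figures
# from the comment. Scaling is assumed perfectly linear, which
# favors the drive array; real RAID overhead would lower it.

drives = 24
price_each = 400           # street price per 32GB X25-E, per the comment
write_iops_each = 3_300    # Intel's 4K random write spec
duo_write_iops = 167_000   # ioDrive Duo's claimed figure

drive_cost = drives * price_each
aggregate_iops = drives * write_iops_each

print(drive_cost)        # $9,600 before the chassis, RAID card, and cables
print(aggregate_iops)    # 79,200, roughly half the ioDrive Duo claim
```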

Devices such as this (if the manufacturer's claims hold true) can provide a huge boost to certain applications at a price point significantly lower than what's been available in the past. People who maintain applications that are extremely sensitive to IOPS performance (both read and write) should consider all available options, including RAM-based, PCI-e flash, SAS/SATA flash, and anything else that is available.