Comparing a single Vertex 3 240GB to a 1.6TB R4 doesn't seem quite valid. Someone considering the R4 would be asking how it performs against 4 to 16 Vertex 3s in RAID 0, especially considering the massive cost savings per GB of the Vertex 3s.

I have trouble hearing and use subs for all my movies, yet Anand speaks quite clearly, and I have no trouble understanding him.

Also, he posts the video review in conjunction with the written review, not as a standalone feature, so you don't really need it subtitled. You would still get to enjoy the reviews as you always have.

I just can't take OCZ seriously from a reliability standpoint. I would love to know what the failure rate is like on OCZ's desktop offerings. I personally am in the process of my 3rd RMA of an OCZ SSD during the past 2 years.

I think Intel, Crucial or (judging by the last review) Samsung will make my next SSD. I can only rebuild Windows and piece together backups so many times before I say enough is enough.

And why is that? Because of the supposed high failure rates? Can you supply any real information about this?

OCZ has less than a 1% failure rate. There may be more than 1% of customers who have "issues," but those aren't related to the drive. User error plays a pretty big role, but of course it MUST be OCZ's fault, right?

Enterprise customers are professionals who know how to install serious hardware like this. And if they don't? OCZ will help install it for them on site. That's what enterprise companies do!

I don't believe that 1% number for a second. First of all, I read some return stats from a store that listed the RETURN rate at just below 3%. Secondly, I know of 5 very different systems with Vertex 3 drives in them. All 5 have recurring lockups/BSODs. The people who built and run these systems write their own filesystems. They are extremely knowledgeable. If they can't make them run properly, they are not fit to run outside of a lab environment.

That said, I suspect SandForce is as much the problem as OCZ is.

From all the data I've been seeing, it seems to be a SATA III issue, with motherboards not being ready for such high volumes of data flow. Mechanical drives get nowhere near SSD speeds, and I don't think manufacturers were really expecting how fast SSDs would go on SATA III (almost pegging it out at times, and it's brand new!).

With all the issues that currently exist, SSDs appear to be an on-the-job learning program for their manufacturers.

I do not however believe they are selling SSDs at low margins.

Enterprise won't use SSDs yet for the same reason informed consumers won't: they have serious reliability and compatibility issues. Unless you can afford lost data and a hosed PC, SSDs are not even an option at this point in time. Maybe in a couple more years they will sort out the problems that should have been resolved long ago?

I really wonder how much a consumer SSD costs to produce. Saying that slim margins will force companies out of business isn't credible if there's a big markup on a 128GB drive. These same drives were hundreds of dollars last year and probably still aren't good value today. Unless you're saying consumers are waiting for the $0.50/GB drive.

Anand wrote: "I've often heard that in the enterprise world SSDs just aren't used unless the data is on a live mechanical disk backup somewhere. Players in the enterprise space just don't seem to have the confidence in SSDs yet."

I use an SSD in an enterprise environment, a first gen Sandforce model from OWC. I do trust it with my main workload - database and web server in this case, but of course it is still backed up to mirrored hard drives nightly, just in case.

I'd have no qualms deploying a Z-Drive R4 in one of our HPC clusters, but it'd be an RM88 model with capacitors, and I'd still run the nightly rsync to a large RAID unit. Now if someone would finally signal they want to spend another $100k on a cluster, I'll spec a nice SSD solution for primary storage.

These offer extreme performance, but probably only an enterprise server can ever benefit from this much of it. Enthusiast users of single-user machines should probably stick with the RevoDrive X2 at around $2/GB.

Correct. I've edited the text slightly, though even a single order of magnitude is huge, and we're looking at over 30x faster with the R4 CM88 (and over two orders of magnitude faster on the service times for the weekly stats update).

Have you tried asking HP for an "IO Accelerator"? (It's a Fusion-io card.)

I worked with a customer near me a few weeks ago, and they were testing 10 x 1.28TB Fusion-io cards in 2 different DB server upgrade projects: 8 in a DL980 for one project and 2 in a DL580 G7 for a separate project.

I see all these posts dishing out all kinds of punishment. Please try and remember that ANY company that uses SandForce has the SAME issues, but since OCZ is the largest, they catch all the flak. If anything, SF needs to beef up its validation testing first and foremost.

I have to wonder at the utility of these drives. They're not really PCIe drives, they're four or eight RAID-0 SAS drives and a SAS controller on a single PCB. They're still going to be bound by the limitations of RAID-0 and SAS. There are proper PCIe SSDs on the market (Fusion-io makes some), but considering the price-per-gig, these Z-Drives seem to offer little benefit other than saving space.

Why should I spend $11,200 on a 1600GB Z-Drive when I can spend about the same on eight OCZ Talos SAS drives and a SAS RAID controller, and get 3840GB of capacity? Or spend half as much on eight OCZ Vertex 3 drives and a SATA RAID controller, and get 1920GB of capacity?

I'm just trying to see the value proposition here. Even with enterprise-grade SSDs (like the Talos) and RAID controllers, the Z-Drive seems to cost twice as much per gig as OCZ's own products.
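The per-gigabyte comparison above can be checked with a quick sketch. All prices and capacities below are the rough figures quoted in this comment, not official list prices:

```python
# Rough cost-per-GB comparison using the figures quoted above.
# (price in USD, capacity in GB) -- forum-quoted ballpark numbers.
options = {
    "Z-Drive R4 1600GB":            (11200, 1600),
    "8x Talos SAS + RAID card":     (11200, 3840),
    "8x Vertex 3 + SATA RAID card": (5600, 1920),
}

for name, (price_usd, capacity_gb) in options.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
```

At those numbers the Z-Drive lands at $7.00/GB versus roughly $2.92/GB for either DIY array, which is where the "twice as much per gig" complaint comes from (and then some).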

What happens if a controller toasts itself? Where's your data then? I would rather have smaller hot-swap units sitting behind a RAID controller. It is a shame OCZ couldn't supply such a setup for you to compare performance against; perhaps they know it would be comparable.

Yes, it is a great bit of kit, but if I can't RAID it then it is of no more use to me than as a cache, and RAM is better at that, and a lot cheaper: $11,000 buys some big quantities of DDR3.

In the enterprise space, security of data is king; speed is secondary. Losing data means a new job; slow data just gets you moaned at. That is why SANs are so widely used. Having all your storage in one basket that could easily fail is a big no-no, and has been for many years.

To be fair, you can RAID it in software if required. You could RAID a bunch of USB sticks if you really wanted to. There are more than a few enterprise-grade SAN solutions out there that ultimately rely on Linux's software RAID, after all.
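For anyone unfamiliar with what software RAID 0 actually does: it is just block-level striping, a round-robin mapping of logical blocks onto member devices. A toy sketch of the address mapping (this illustrates the concept only; it is not how mdadm is implemented):

```python
# Toy RAID-0 address mapping: a logical block number maps to
# (device index, block offset on that device), round-robin.
def raid0_map(logical_block: int, num_devices: int) -> tuple[int, int]:
    device = logical_block % num_devices   # which member drive holds the block
    offset = logical_block // num_devices  # block position on that drive
    return device, offset

# Logical blocks 0..7 across 4 drives land round-robin:
for lb in range(8):
    print(lb, raid0_map(lb, 4))
```

Because consecutive blocks land on different devices, sequential transfers engage all members at once, which is where the speedup (and the total lack of redundancy) comes from.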

You can't RAID it in software, but you could RAID several of them if you have deep pockets. The point is: why buy a 1.6 or 3.2TB SSD when you can buy 10 x 320GB SSDs and (possibly) get better performance for less cost?

I think I've mentioned this before, but can you load up a Windows 7 installation with 30 or so startup programs and compare the startup time difference between this and a hard drive? A video of this would be even more impressive.

I've been going through some issues with a 2281 drive with Toggle NAND. I'm basically writing 11TB a day to it, and under these conditions I can't get too many hours in between crashes. I'm of the opinion that the latest FW has helped most people out, and my experience shows that the 2281, when perfected, will be unstoppable in certain workloads, but for now all SF users are going to have some problems. If the problems are predictable you can compensate, but if they're random, well, SF controllers aren't the only things that have problems with randomness.

I knew it was a possibility, and normal users won't abuse their drives as much, but I have to wonder: if OCZ can make an enterprise drive problem-free, why can't they make consumer SF drives better? The SF problem is the OCZ problem... OWC doesn't have the same perception issues but is using the same hardware (as are Mushkin, Patriot, etc.).

As much as I like OCZ, they've done some questionable things in the past, and not just swapping cheap flash into SF1200 drives. Hopefully they can overcome the problems they're having with SandForce and their Arrowana stuff, release a problem-free next-gen Indilinx controller, and then call it a day. Oh yeah, and quit using those stupid plastic chassis.

I'll admit, I'm now too lazy to even read....it's getting bad. I just want to push the "play" button while I sit back eating Cheetos and rubbing my tummy. Get into my tummy little Cheeto, get into my brain little ssd review,... same line of thinking really, whatever is easiest.

If you want to really test it and validate its long-term reliability, you pretty much need to do what enterprise customers do: run the SSD, but always keep a backup of it somewhere, like you said.

That being said though, if you've got TWO backup copies of it, you can actually run a parity check on it (pseudo-checksum) and determine its error rate.

Also, you didn't run HDTach on it. Given that it's tied together with a Marvell SAS controller, not being able to run TRIM on it will, I would presume, give it performance issues in the long run.

To do the error checking, you'll probably have to put this thing in a Solaris system running ZFS so you can mimic the CERN test. And if you actually read/write continuously to it, at the same level in terms of sheer volume of data, other SSD/NAND-specific issues might start to pop up, like wear levelling. I would probably run the read/write cycle for an entire month, where it periodically deletes some data, rewrites new data, etc. At the end of the month, make the two mirror backups of it, and then run it again. Hopefully you'd end up at some identical endpoint after PBs of read/write ops, on which you can run both the block-level and volume-level checksums.
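The parity-check idea above boils down to a block-by-block checksum comparison of two backup copies. A minimal sketch of that comparison (a simplified stand-in for a ZFS scrub; the 4K block size and the tiny in-memory "copies" are illustrative only):

```python
import hashlib

BLOCK_SIZE = 4096  # compare in 4K blocks, matching a typical sector size

def block_mismatches(copy_a: bytes, copy_b: bytes) -> int:
    """Count 4K blocks whose checksums differ between two backup copies."""
    mismatches = 0
    for off in range(0, max(len(copy_a), len(copy_b)), BLOCK_SIZE):
        a = hashlib.sha256(copy_a[off:off + BLOCK_SIZE]).digest()
        b = hashlib.sha256(copy_b[off:off + BLOCK_SIZE]).digest()
        if a != b:
            mismatches += 1
    return mismatches

# Two tiny "backups" that differ in exactly one block:
good = bytes(range(256)) * 64              # 16 KiB -> 4 blocks
bad = bytearray(good)
bad[5000] ^= 0xFF                          # flip one byte inside block 1
print(block_mismatches(good, bytes(bad)))  # -> 1
```

Dividing the mismatch count by the total number of blocks compared gives the rough block error rate the comment is after; with only two copies you can detect a disagreement but need the third reference (or ZFS's own checksums) to know which copy is wrong.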

I am sure he is referring to the previous versions of the z-drive, which is all you can use as an indicator.

I am an enterprise customer: Dell R710s and two Z-Drive R2 M84 512GB models, one in each. I have had to RMA one of them once, and the other is on its second RMA. They are super fast when they work, but three failures across two devices in less than a year is not production ready. We are using them in benchmarking servers running Red Hat Enterprise 5 as database stores, mostly read-only, to break other pieces of software talking to them. Very low writes.

But here is the thing. When they power on, one or more of the four RAID pieces is "gone". This is just the on board software on the SSD board itself, no OS, no I/O on it at all besides the power up RAID confidence check. Power on the server, works one day, next day the controller on the card says a piece is missing. That's not acceptable when you are trying to get things done.

In a perfect world, you have redundant and distributed everything with spare capacity, and this is not a factor. But then you start looking at dealing with these failures, and you start to ask yourself whether your time is better spent screwing around with an RMA process and rebuilds, or optimizing your environment.

Nobody in the right frame of mind is using SSDs in the enterprise segment (I'm not even interested in them as consumer drives, but that is not the issue here). SSDs are just as unreliable as normal HDDs, at a ridiculous price point. You can lose all of your data much quicker than from a normal HDD. RAID arrays built from standard HDDs are just as fast as one or two "uber" SSDs and cost a fraction of an SSD setup (often even including the cost of the RAID controller itself). Also, nobody runs large arrays in RAID 0 (except maybe video processing); RAID 0 is pretty much non-existent in serious storage applications. As a backup, I much prefer another HDD array over an unreliable, impossible-to-test, super-duper-expensive SSD.

You can't test NAND reliability. That is the biggest problem with SSDs in a business-class environment. Because of that, SSDs will wither and die in the next 5-10 years. SSDs are not good enough for industry, and if you can't hold on to the big storage market, then no matter how good something is, it will die. Huge corporate customers are the key to staying alive in the storage market.

SSDs are the best thing since sliced bread if you run a database server.

For one thing, the minimum latency of a PCIe SSD 4K read is almost 1,000 times less than a 4K read off a 15K SAS drive. The drive arrays don't even start to close the performance gap until well over 100 drives, and even then the drive array cannot match the minimum latency. It can only match the performance in parallel operations.

If you have a lot of operations that work at queue depth of 1, the SSD will win every time, no matter how large the disk array.
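The queue-depth-1 point follows directly from the latency figures: at QD1 each request waits for the previous one to finish, so throughput is simply the inverse of latency, and adding spindles doesn't help. A back-of-the-envelope sketch (the latency values below are ballpark assumptions for illustration, not measurements from the review):

```python
# At queue depth 1, IOPS = 1 / latency: each request must complete
# before the next is issued, so parallelism across spindles is useless.
# Latencies are ballpark assumptions, not measured values.
ssd_latency_s = 10e-6  # ~10 us for a PCIe SSD 4K read
hdd_latency_s = 7e-3   # ~7 ms seek + rotate for a 15K SAS drive

ssd_qd1_iops = 1 / ssd_latency_s   # ~100,000 IOPS
hdd_qd1_iops = 1 / hdd_latency_s   # ~143 IOPS

# The QD1 gap is just the latency ratio, here roughly 700x; no number
# of drives in the array changes it, since each request still pays
# the full mechanical latency.
print(ssd_qd1_iops, hdd_qd1_iops, hdd_latency_s / ssd_latency_s)
```

A 100-drive array can match an SSD's parallel throughput by spreading queued requests across spindles, but at QD1 there is never more than one request in flight, so the array runs at single-drive speed.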

Bear in mind, though, that enterprises (the real heavyweights) probably prefer something like Fusion-io ioDrives, which, by the way, are the only SSDs running in IBM blade servers. With speeds up to 3 GB/s and over 320k IOPS, it's not surprising they cost ca. $20k per unit :D So it's not true that SSDs in general are not good for the enterprise segment. Also, and this is hot: these SSDs use SLC NAND...