I still don't get how OWC managed to beat OCZ to market last year with the Mercury Extreme SSD. The Vertex LE was supposed to be the first SF-1500 based SSD on the market, but as I mentioned in our review of OWC's offering, readers had drives in hand days before the Vertex LE even started shipping.

The same wasn't true this time around. The Vertex 3 was the first SF-2200 based SSD available for purchase online, though OWC was still a close second. Despite multiple SandForce partners announcing drives based on the controller, only OCZ and OWC are currently shipping SSDs with SandForce's SF-2200 inside.

The new drive from OWC is its answer to the Vertex 3 and it's called the Mercury Extreme Pro 6G. Internally it's virtually identical to OCZ's Vertex 3, although the PCB design is a bit different and it's currently shipping with a slightly different firmware:

OWC's Mercury Extreme Pro 6G 120GB

OCZ's Vertex 3 120GB

Both drives use the same SF-2281 controller; however, OCZ handles its own PCB layout. It seems whoever designed OWC's PCB made an error in the design, as the 120GB sample I received had a rework on the board:

Reworks aren't uncommon for samples but I'm usually uneasy when I see them in retail products. Here's a closer shot of the rework on the PCB:

Eventually the rework will be committed to a PCB design change, but early adopters may be stuck with this. The drive's warranty should be unaffected and the impact on reliability really depends on the nature of the rework and quality of the soldering job.

Like OCZ, OWC is shipping SandForce's RC (Release Candidate) firmware on the Mercury Extreme Pro 6G. Unlike OCZ however, OWC's version of the RC firmware has a lower cap on 4KB random writes. In our 4KB random write tests OWC's drive manages 27K IOPS, while the Vertex 3 can push as high as 52K with a highly compressible dataset (39K with incompressible data). OCZ is still SandForce's favorite partner and thus it gets preferential treatment when it comes to firmware.
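SandForce's spread between compressible and incompressible results comes from the controller compressing (and deduplicating) data inline before it ever reaches NAND. As a rough illustration of why dataset compressibility matters so much, here's a small Python sketch using zlib as a stand-in for SandForce's proprietary DuraWrite engine (the actual algorithm is undisclosed, and the buffers here are purely illustrative):

```python
import os
import zlib

BLOCK = 4096  # the controller processes host data inline, before it reaches NAND

# Highly compressible payload: repeated text, like much OS and application data
compressible = b"AnandTech 4KB random write test " * (BLOCK // 32)
# Incompressible payload: random bytes, like encrypted or already-compressed files
incompressible = os.urandom(BLOCK)

for name, buf in (("compressible", compressible), ("incompressible", incompressible)):
    out = zlib.compress(buf)
    print(f"{name:>14}: {len(buf)} B in -> {len(out)} B actually stored "
          f"({len(out) / len(buf):.0%} of original)")
```

The less data the controller actually has to commit to flash per host write, the more host writes it can sustain per second, which is why the compressible and incompressible IOPS figures diverge.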

OWC has informed me that around Friday or Monday it will have mass production firmware from SandForce, which should boost 4KB random write performance on its drive to a level equal to that of the Vertex 3. If that ends up being the case I'll of course post an update to this review. Note that as a result of the cap that's currently in place, OWC's specs for the Mercury Extreme Pro 6G aren't accurate. I don't put much faith in manufacturer specs to begin with, but it's worth pointing out.

OWC Mercury Extreme Pro 6G Lineup

| Specs (6Gbps)    | 120GB          | 240GB          | 480GB          |
|------------------|----------------|----------------|----------------|
| Sustained Reads  | 559MB/s        | 559MB/s        | 559MB/s        |
| Sustained Writes | 527MB/s        | 527MB/s        | 527MB/s        |
| 4KB Random Read  | Up to 60K IOPS | Up to 60K IOPS | Up to 60K IOPS |
| 4KB Random Write | Up to 60K IOPS | Up to 60K IOPS | Up to 60K IOPS |
| MSRP             | $319.99        | $579.99        | $1759.99       |

OWC is currently only shipping the 120GB Mercury Extreme Pro 6G SSD. Given our recent experience with variable NAND configurations I asked OWC to disclose all shipping configurations of its SF-2200 drive. According to OWC the only version that will ship for the foreseeable future is what I have here today:

There are sixteen 64Gbit Micron 25nm NAND devices on the PCB. Each NAND package contains only a single 64Gbit die, which results in lower performance for the 120GB drive than for 240GB configurations (the controller has fewer dies to interleave requests across). My review sample of OCZ's 120GB Vertex 3 had a similar configuration but used Intel 25nm NAND instead. In my testing I didn't notice a significant performance difference between the two configurations (4KB random write limits aside).
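As a back-of-envelope check on that configuration, sixteen 64Gbit dies work out to 128GiB of raw NAND, of which the drive exposes 120GB (decimal) to the user; the remainder is held back as spare area for garbage collection and, on SandForce drives, RAISE redundancy. A quick sketch of the arithmetic (the ~13% figure is derived here, not an OWC spec):

```python
# Back-of-envelope capacity math for the 120GB Mercury Extreme Pro 6G
dies = 16                     # sixteen NAND packages, one die per package
gbit_per_die = 64
raw_bytes = dies * gbit_per_die * 1024**3 // 8   # 64 Gbit = 8 GiB per die
raw_gib = raw_bytes / 1024**3                    # 128 GiB of raw flash

user_bytes = 120 * 1000**3    # the marketed 120GB is decimal gigabytes
spare_fraction = 1 - user_bytes / raw_bytes      # roughly 13% held back

print(f"raw NAND: {raw_gib:.0f} GiB, user capacity: 120 GB, "
      f"spare area: {spare_fraction:.1%}")
```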

OWC prices its 120GB drive at $319.99, which today puts it at $20 more than a 120GB Vertex 3. The Mercury Extreme Pro 6G comes with a 3 year warranty from OWC, identical in length to OCZ's.

Other than the capped firmware, performance shouldn't be any different between OWC's Mercury Extreme Pro 6G and the Vertex 3. Interestingly enough, the 4KB random write cap isn't enough to impact any of our real world tests.

45 Comments

"The two come with comparable warranties which brings the decision down to pricing, where OCZ currently has a $20 advantage."

I don't think that's the complete picture. OWC is more of a Mac shop, so it provides ways to easily update your firmware from a Mac; OCZ relies on you burning an ISO file. This might have changed for the Vertex 3, but I doubt it. OWC sells directly through macsales.com, and I hear their service is pretty good, so if you're a Mac owner you'd naturally gravitate toward the OWC solution, even with the price markup.

I for one hope Anand will figure out what's up with the OCZ 'max iops' Vertex 3. Seems to me that he was promised that his test sample's performance would be the same as the shipping Vertex 3, and now we have a Vertex 3 with and without a firmware cap, and the firmware capped one was shipped first. Perhaps that's not the case and the 'max iops' is some new tweaks above and beyond, but it's been out a few weeks and so far I've yet to read anything about it anywhere.

Sure, the AnandTech Storage Bench tests are *based on* real world workloads, but since you're playing them back at a faster than normal rate, they stop being real world tests. Just take the case of playing back a movie for example. You have a lot of reads there, but you don't get any increase in performance from being able to perform those reads faster than what is necessary to keep your buffers filled (non-empty really). In addition to this, many of the writes that are performed during the tests should be non-blocking, so increasing the write performance would in many cases not lead to any actual real world performance increases (you'd "free up" the drive faster, but that's mostly a benefit when there's other stuff that needs to be done).

There is presumably a lot of stuff in there that *is* limited by your IO speed, but it's all mixed up with stuff that *isn't*, so you can't tell which speed increases give a real world improvement, and which do not. You simply can't tell how much real world performance an increase in results represents; the relationship almost certainly won't be linear (e.g. a 10 point difference could mean different things depending on how high the values are), and you can't even conclusively tell that a drive that gets a higher score actually performs better in any real world sense.

The tests do provide some information, but it's not something that tells you how much you would benefit from upgrading your drive.

I personally think that it would be interesting if you would provide some *actual* real world tests too, so that people could tell if they would actually see some real world differences between different SSDs. Maybe some program/game loading/zoning tests?

At least people would then be able to judge if the difference they see would be significant enough (for them) to warrant an upgrade. You know how to value a 10s difference, but how much is a 10 MB/s difference in the storage bench worth?

Note that neither the 2011 nor the 2010 Storage Bench plays everything back at maximum speed. The 2011 benches in particular preserve idle times properly, so all that's sped up are the actual I/O requests. You are correct that this will speed up things like decoding video, however many video players already do a lot of read ahead and pre-decode frames in order to avoid stuttering. Very few of the IOs in these tests are for things like video decode, so I don't believe they're biasing the results too much.

You are correct that we're focusing exclusively on the I/O aspect of performance, and that a 10% increase in I/O performance won't result in a 10% increase in overall system performance (except in I/O bound tasks).

I've toyed with doing timing based tests, the issue is that modern SSDs don't show any difference when measuring the launch time of a single application. It's really under heavy multitasking and behavior over time that they differ from one another. Both of these types of tests are very difficult to time in a repeatable fashion, which is why we turn to our trace based performance tools.
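The core idea behind those trace based tools can be sketched as a replay loop: sleep through each recorded idle gap so the drive sees the workload's original pacing, then issue the next I/O as fast as the drive allows. This toy Python version is purely illustrative (the trace values and the issue_io hook are made up; the real tools capture and replay against raw block devices):

```python
import time

# A made-up I/O trace: (idle seconds before the op, operation, offset, size in bytes)
trace = [
    (0.00, "read",  0x1000, 4096),
    (0.05, "write", 0x8000, 4096),
    (0.02, "read",  0x2000, 16384),
]

def replay(trace, issue_io):
    """Replay a captured trace, sleeping through each recorded idle gap so the
    drive sees the original pacing while the I/Os themselves run at full speed."""
    for idle, op, offset, size in trace:
        time.sleep(idle)            # preserve the workload's think time
        issue_io(op, offset, size)  # in a real tool: submit to the block device

# Stand-in for a real block-device submission path: just record what was issued
log = []
replay(trace, lambda op, off, sz: log.append((op, off, sz)))
print(log)
```

Because the idle gaps are replayed verbatim, a faster drive finishes the same trace with more accumulated idle (lower disk busy time) rather than a shorter wall-clock run, which is what the disk busy time metric in the 2011 results captures.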

I do agree that there's a need for some perspective on these performance improvements, which is what I tried to do by including disk busy time in our 2011 results. There you can see that over the course of a 3 hour use period (for example in the case of the heavy workload) that one drive may shave off x number of seconds vs another drive. How annoying those seconds are is really up to the end user.

Personally I find that there are three categories that drives fall into these days. There are those that offer performance around that of an X25-M G2, the next group is around the Vertex 2 and the final group is the 6Gbps 240GB Vertex 3. I feel like there's a noticeable difference going from any one of those groups to the other, but going from an X25-M G2 to something that's slower than the Vertex 2 makes less sense.

Considering that it's now a given that someone will ask for a graph of loading an application on the comments of any new SSD article, I think you should just pick 5 or 10 current drives and show everyone a graph that spans from 5.2 seconds to 5.5 seconds (or whatever it is) with a tag line of "There, are you happy? Now stop asking about this!"

Real world tests shouldn't be a "There, are you happy?" afterthought. Real world is all that matters.

Let me refer back to this classic example: http://tinyurl.com/yamfwmg . RAID0 was 20-38% faster in IOPS, and in the time-based comparison it was equal or slightly slower. Anand concluded "RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance."

The chipset drivers are listed as being Intel 9.1.1.1015. Aren't those drivers from December 2009? Do they work well for H67?

I can understand if you want to keep the same drivers for consistency in the tests, but have you checked if there are any significant performance benefits from using more recent drivers? If there are any changes, it could be interesting to see how it affects some of the more recent drives.