Update: Since the publication of this review, OWC appears to have switched controllers for the Mercury Extreme SSD. The current specs look similar to those of SandForce’s SF-1200 controller, not the SF-1500 used in the earlier drives. Performance and long-term reliability (in an enterprise environment) are both impacted. For more information, read this.

I must admit, I owe OWC an apology. In my Vertex LE review I assumed that because my review sample had an older version of SandForce’s firmware on it that the company was a step behind OCZ in bringing SandForce drives to market. I was very wrong.

For those of you who aren’t Mac users, Other World Computing (OWC) probably isn’t on your radar. The only reason I’ve heard of them is because of my Mac experience. That’s all about to change, as they are technically the first company to sell SandForce based SSDs. That’s right, OWC even beat OCZ to the punch. The first customers actually got drives the day my Vertex LE review went live, multiple days before the LE actually went on sale at Newegg.

I mentioned it briefly in my Vertex LE review: the OWC Mercury Extreme SSD is based on the same SandForce controller as the Vertex LE. There was some confusion as to exactly what this controller is. As of today there is only a single SandForce MLC SSD controller shipping. It falls somewhere in between an SF-1200 and an SF-1500 in performance. Ultimately we’ll see the SF-1500 move to high end enterprise drives only, with the SF-1200 used in consumer drives like the OCZ Vertex 2 and Agility 2. The accompanying firmware is also somewhere in between the SF-1200 and SF-1500 in terms of performance (more on SandForce’s controllers here). But as I just mentioned, it’s the equivalent of what OCZ is shipping in the Vertex LE.

OWC has assured me that all drives being sold have the latest RC1 firmware from SandForce, just like the Vertex LE. The firmware revision number alone should let you know that, like the Vertex LE, these are wholly unproven drives. OWC is only sending out drives on 30-day evaluation periods, so I don’t expect many long term reliability tests to be done on those drives in particular. Thankfully we do still have the Vertex LEs to hammer on.

I previewed the Mercury Extreme in my last article, stating that it performs identically to the Vertex LE. Not only does it perform the same, but it's also a little cheaper:

OWC is the first company to offer a 50GB drive based on the SandForce controller. I’d long heard rumors that performance was significantly lower on the 50GB drive, but I had no way of testing it. OCZ still doesn’t have any 50GB drives. OWC gave me the opportunity to answer that question.

OWC got upset with me when I took their drive apart last time, so I can't provide you guys with internal shots of this drive. The concern was that opening the drive left it in an unsellable condition. I would hope that no company is reselling review samples, but you never know.

The 50GB Mercury Extreme carries a $229 price tag, which is comparable to other small-capacity SSDs on the market:

Unfortunately it does give you the worst cost per GB of NAND, and it looks even worse when you consider how much of that NAND is actually accessible. Remember that these SF-1500 controllers are derivatives of SandForce’s enterprise SSD efforts, meaning they are designed to use a lot of spare area.

Despite having 64GB of MLC NAND on board, the 50GB drive has a formatted capacity of 46.4GB. Nearly all of the extra flash is used for bad block allocation and spare area to keep performance high.
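As a rough sanity check (a sketch of the accounting, not OWC's actual firmware arithmetic), the gap between raw NAND and reported capacity comes from decimal-vs-binary units plus the spare area the controller reserves:

```python
# Rough sketch of where the 50GB drive's capacity goes.
# NAND is manufactured in binary units: 64GiB of raw flash.
RAW_NAND_BYTES = 64 * 2**30

# The drive is sold as 50GB decimal; the OS reports that in binary GiB,
# which is where the ~46GB formatted figure comes from.
ADVERTISED_BYTES = 50 * 10**9
formatted_gib = ADVERTISED_BYTES / 2**30
print(f"Formatted capacity: {formatted_gib:.1f} GiB")  # ~46.6 GiB

# Everything above the advertised capacity is bad-block pool + spare area.
spare_fraction = (RAW_NAND_BYTES - ADVERTISED_BYTES) / RAW_NAND_BYTES
print(f"Spare area: {spare_fraction:.0%} of raw NAND")  # ~27%
```

That roughly 27% reserve is in line with the enterprise-class spare area mentioned above, and explains why cost per usable GB is so much worse than cost per GB of NAND.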

I installed Windows 7, drivers and PCMark Vantage on my 50GB drive which left me with 30.8GB of free space. That’s actually not too bad if you aren’t going to put a whole lot more on the drive. There’s more than enough room for a few applications, but think twice before using it for media storage.

Preview Today, More Tests Coming

It’s sheer excitement that made me push this review out today. I was really curious to see how well one of these 50GB SandForce drives performed.

I have seen some of you request more non-I/O specific, real world tests in our suite. I’ve done this in previous articles but stopped simply because the data didn’t seem to provide much value. These drives are so fast that measuring application launches, game level loads or boot time simply shows no difference between them. Instead, by focusing on pure I/O performance I’ve at least been able to show which drives are technically the fastest and then base my recommendation on a good balance of raw performance and price.

Then there’s the stuff that’s more difficult to benchmark: long term reliability and consistency of performance. Most of these drives end up in one of my work machines for several months on end. I use that experience in helping formulate my recommendations. In short, I’m still looking to expand the test suite and add meaningful tests; it’s just going to take some time. This is a lengthy process, as each new controller poses new challenges from a benchmarking perspective.

Well, this is pretty good news for people who would rather RAID low capacity SSDs for higher performance at similar cost to bigger drives.
RAIDing 2-3 of these 50GB SSDs off ICH10R would give over 600MB/s of bandwidth for both read and write, and still be at an acceptable price.
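The scaling the commenter describes can be sketched as a toy model: ideal RAID 0 is linear in drive count until the host controller tops out. The ~260MB/s per-drive rate and the ~660MB/s ICH10R ceiling below are illustrative assumptions, not measured figures:

```python
def raid0_bandwidth(n_drives: int, per_drive_mbps: float,
                    controller_cap_mbps: float = 660.0) -> float:
    """Ideal RAID 0 scaling: linear in drive count until the host
    controller (an assumed ~660MB/s ceiling for ICH10R here)
    becomes the bottleneck."""
    return min(n_drives * per_drive_mbps, controller_cap_mbps)

# Hypothetical per-drive sequential rate for a 50GB SandForce drive.
for n in (1, 2, 3):
    print(n, raid0_bandwidth(n, 260.0))  # 260.0 / 520.0 / 660.0
```

Under these assumptions a third drive buys little extra sequential bandwidth, which is why 2-3 drives is about where the comment's "600MB/s off ICH10R" estimate lands.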

I still can't get over wishing for a test with RAID 0 of the lowest capacity drives compared to the high-capacity drives always reviewed here, to see how much more performance you can get for the same price.
At $230, this 50GB SF-1500 should be able to defend itself against 2x X25-V in RAID 0, which would cost about the same; but from benchmark numbers I've seen, the X25-V RAID would beat it thoroughly at everything except tasks heavily focused on sequential writing.
3x X25-Vs, coming in at about $300, would beat all drives listed in this lineup at almost all benchmarks, and at performance per $ you can't beat it.

BTW, it would be nice to include a screenshot of the AS SSD benchmark, since it includes high-QD testing of 4KB random IOPS, so you can see what the drive is capable of at max load. AHCI/RAID specifies up to 32 outstanding IOs, and most SSDs with many channels support it fully and will scale up to QD 32. The Intel X25-V/M/E, which are listed at about 60MB/s 4KB random read (QD 3), scale up to 120-160MB/s from QD 10-12, and the same is true for the C300 and SF-1500. When testing 4KB random read at QD 3 on an NCQ-enabled 10+ channel SSD, you are really saturating only 3 of the channels.
Indilinx Barefoot drives with 4 channels can do 4KB random reads at about 60MB/s at QD 5, but scale no further.
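The channel-saturation argument above can be put into a simple model: throughput grows with queue depth until every flash channel has a request in flight. The channel counts and the ~15MB/s per-channel rate below are illustrative assumptions chosen to match the comment's ballpark figures, not vendor specs:

```python
def random_read_mbps(queue_depth: int, channels: int,
                     per_channel_mbps: float = 15.0) -> float:
    """Toy model: each outstanding 4KB read keeps one flash channel
    busy, so throughput scales with QD until all channels saturate."""
    return min(queue_depth, channels) * per_channel_mbps

# A 10-channel drive at QD 3 only exercises 3 channels...
print(random_read_mbps(3, 10))   # 45.0
# ...but keeps scaling up toward QD 10+ (cf. the 60 -> 120-160MB/s jump).
print(random_read_mbps(12, 10))  # 150.0
# A 4-channel Indilinx-style drive tops out around QD 4-5.
print(random_read_mbps(5, 4))    # 60.0
```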

As for TRIM with RAID: if you use the RAID as you would a normal SSD, and don't have any usage patterns that do huge amounts of random writes, you don't NEED the TRIM command, though it would be nice of course. For a normal usage model, an SSD won't get its write performance degraded to less than about 80% of fresh performance (+-10%), and since RAID gives linear scaling, two SSDs in RAID without TRIM will outperform a single SSD with TRIM quite nicely. By this reasoning, TRIM not being available in RAID mode shouldn't stop you from buying and RAIDing 3-4 SSDs of 32/40GB rather than going for a single 80-256GB drive, since the RAID will outperform the single SSD by a good margin at about the same price. The question is the number of free ports on the motherboard SATA controller (for up to 4 SSDs). If you want to RAID more than 4 SSDs, you need an HBA or hardware RAID card to get further scaling.
In such a scenario, an LSI 9211-8i + 8x X25-V will come in at $1000-1100 and deliver 1500MB/s read, 320-350MB/s write, and enough IOPS for any enthusiast.
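The reasoning above, that linear RAID scaling outweighs the degraded steady state without TRIM, can be put into numbers. The ~80% degradation factor is the comment's own rough estimate, used here as an assumption:

```python
def effective_write_mbps(n_drives: int, fresh_mbps: float,
                         has_trim: bool,
                         degraded_factor: float = 0.8) -> float:
    """Assumed model: RAID 0 scales writes linearly across drives,
    but without TRIM the steady-state rate settles at ~80% of fresh
    performance (the comment's +-10% estimate)."""
    per_drive = fresh_mbps if has_trim else fresh_mbps * degraded_factor
    return n_drives * per_drive

# With a hypothetical 100MB/s fresh write rate per drive:
single_with_trim = effective_write_mbps(1, 100.0, has_trim=True)
raid_without_trim = effective_write_mbps(2, 100.0, has_trim=False)
print(single_with_trim, raid_without_trim)  # 100.0 160.0
```

Under these assumptions, two degraded drives in RAID 0 still comfortably beat one fresh drive, which is the crux of the argument.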

Buy the SSD you need. If it's 128GB or 256GB, buy one SSD, enable AHCI, and install Windows 7; don't install the chipset drivers or Intel Matrix drivers or update the AHCI drivers (keep a lookout on Windows Update), and the PC will be fast.

SSD RAID is pointless for home users or even gamers. Only pure video editing or working with big files needs 400-600MB/s, and even then you'd need two RAID 0 SSD setups to actually use that bandwidth when moving data between them.

It's still pointless if you're only looking at sustained read and write speeds; two SSDs will max out most RAID setups until a drive degrades (or one SSD fails). But that's not the point: the random access speed and random throughput of an SSD are already high, so it's better to buy one SSD that fits your needs (128GB being the minimum size).

TRIM keeps write latency down, as a write can mostly be done right away, whereas without TRIM the drive has to perform an erase before the write.
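That erase-before-write penalty can be illustrated with a toy latency model. The timings below are illustrative assumptions (typical MLC-era orders of magnitude), not measurements of any particular drive:

```python
# Illustrative (assumed) NAND timings, not measured values.
PROGRAM_MS = 0.25  # programming a page into a pre-erased block
ERASE_MS = 2.0     # erasing a block before it can be reprogrammed

def write_latency_ms(pre_erased_block_available: bool) -> float:
    """With TRIM the controller knows which blocks are truly free and
    can keep a pool of pre-erased blocks, so a write is just a program.
    Without TRIM, a full drive may have to erase before writing."""
    if pre_erased_block_available:
        return PROGRAM_MS
    return ERASE_MS + PROGRAM_MS

print(write_latency_ms(True))   # 0.25
print(write_latency_ms(False))  # 2.25
```

The roughly order-of-magnitude gap between the two cases is why a TRIM-less drive feels fine until it runs out of clean blocks, then gets noticeably slower on writes.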

With SSDs, IOPS is normally 1000+ compared to an HDD; random IOPS and random access speeds are high in a way an HDD can't match. An SSD doesn't warrant a RAID setup.

As a gamer and owner of a first-gen SSD (Corsair S128), high transfer rates don't mean it's any faster (my SSD is slower than an HDD at sustained reads/writes, but far faster at random reads and IOPS). I've gone from RAID 0 with 2x WD Black HDDs, and before that 4x 80GB HDDs. If I'd had a bit more money at the time I would have gotten a second-gen SSD, but the S128 was very cheap and seems to work very well. It does lack a little in write access times and write speed, but that only comes up when installing games. I can reboot my PC in under a minute (that's desktop to desktop) and run programs as soon as my desktop loads; the PC loads so fast I had to set my IP address to static, as DHCP was taking too long to get an IP (chat programs would give up waiting and then retry).

(Yes, a faster SSD would be better than what I've got; the system would most likely respond even faster, as my SSD lacks NCQ and is a SATA150 drive, which is why my write access times are not so good. But it's no JMicron drive; it doesn't stall under write loads.)

The point of RAIDing SSDs is not just bandwidth; it's increasing parallelism and the available bandwidth for mixed read/write. Only power users or enthusiasts should really consider RAID, since, like you say, the benefits will not be very noticeable in everyday singletasking usage. It's only when you start multitasking (or video editing, like you mentioned) that you really feel the difference.

If you have a quad-core running at 3-4GHz and a good amount of fast RAM (say 8-12GB DDR3-1600), even a good SSD can be a substantial bottleneck when multiple apps access storage at once, especially if you have a sequential write or read (or both) going on at the same time.

On my main rig I have 2x Mtron Pro 7025 32GB (SLC) SSDs in RAID 0 that I bought in 2008 (they cost me about $1400 at the time), and these don't support NCQ either. What's great about them is that they don't suffer degraded performance at all, they have great access times (I get 90MB/s 4KB random read), and I've never experienced any freezing or hiccups. The 5-year warranty (even for use in enterprise servers) doesn't hurt either :P
When I upgrade next time, they will likely be replaced by Intel's 3rd-gen SSDs at the lowest capacity point (or a similar highest-performing low-capacity SSD) in RAID off an LSI 9211 (or similar controller). If the ioXtreme has dropped in price by then, maybe I'll buy that instead (or some similar PCIe SSD).

If you're not doing video editing or working with big files, more than one SSD is pointless.

If you're a gamer or just want a fast PC, only one SSD is needed. It's been tested quite a number of times: RAID + SSD isn't needed anymore, especially now that drives have TRIM support (which you lose when RAID is used).

Multitasking really depends on what you call multitasking. I'd normally call that 10-15 programs open at one time, and one SSD can handle that fine. Again, video editing may gain the most from RAID, but the files would have to be very big, since an SSD's random access is fast anyway (10-50MB/s) and so is the sustained transfer rate you see on most SSDs (200MB/s+).

Steam is probably the most stressful thing on my PC or anyone's; it makes my SSD crawl when it decodes a preloaded game (it has to decrypt all the files, which means lots of reads/writes), and the lack of NCQ on my Corsair S128 very much doesn't help. Second-gen SSDs have SATA300 and normally support NCQ by default, so those slowdowns normally don't happen. Second- and third-gen drives have TRIM support (third-gen normally have it from when you buy them, like the JMicron 612).

When using RAID with write-back caching turned on, you do benefit from buffered writes, so (ICH8+) RAID gives more priority to reads (which is why you get insane buffered burst rates on HDDs or SSDs when write-back is on). But there is a chance you can lose data if the system is powered down before the data is written (unlikely for the most part with SSDs, given their write speeds, and RAID normally writes in blocks, not bit by bit like Windows or P2P programs do).

In the end, if you're a gamer or a user who just wants a fast and responsive PC, one SSD is far more than is needed; you will not notice the speed improvement unless you're running benchmarks (and only the disk test part of them at that). It's also cheaper to buy one big SSD with TRIM than two smaller SSDs in RAID and lose TRIM, which would make the two SSDs slower than the one once the drives run out of free blocks due to the lack of TRIM support.

I know some people are using the 40-80GB versions of SSDs for ONLY their operating system, but I still feel this product isn't worth purchasing until you can get 120GB+ at the same prices as the 40-80GB range.

Basically, you should be able to have your OS/Office Suite/Games/Photo Editing software on the SSD. 40-80GB at this price is just not worth it.

Anand,
I would also like to see new RAID-0 tests, if possible. I don't know if you are waiting on Intel Matrix Storage, or if all the problems have been fixed with RAID/TRIM, but we're approaching a point where we can almost buy a bunch of the scrap 30/40GB SSDs. Because of the higher reliability, RAID-0 is looking very attractive.

I say I want to see some RAID and 6Gbps testing and it gets ignored. An hour later, someone basically says the same thing and gets a full response with follow-up. This has happened before; what's the deal?