Crucial/Micron RealSSD C300 - The Closest Competitor

While OCZ rushes to be the first to ship these superfast SSDs, Crucial and Micron will soon be shipping their RealSSD C300s. Based on a Marvell controller, these drives (Crucial for the channel, Micron for OEMs) are far more traditional in their architecture.

Instead, the innovation comes from the use of ONFi 2.0 MLC NAND flash and a 6Gbps SATA interface. The combination of the two results in some extremely high sequential speeds. A seemingly well-architected firmware (and a boatload of DRAM) work together to deliver good random access performance as well.

In testing, the C300 performed very much like a faster X25-M, but there was one anomaly that bothered me: maximum write latency.

Like Intel’s X25-M, whenever the C300 goes to write data it also does a bit of cleaning/reorganization of its internal data. The more cleaning the drive has to do, the longer the write will take. Micron did its best to minimize this overhead, but eventually you’ll have to pay the piper. Below you’ll see the average IOPS, average MB/s, and average and max write latencies for the C300, X25-M G2 and Vertex LE during my 4KB random write test:
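The pattern described here - a low average write latency punctuated by the occasional very long pause while the drive cleans house - is easy to surface if you log per-write latency yourself. Below is a minimal sketch of the idea, not the actual benchmark used for this review; the file path, span, and write count are arbitrary, and a real test would bypass the filesystem cache (O_DIRECT or a raw device) rather than lean on fsync:

```python
import os
import random
import statistics
import tempfile
import time

def random_write_latencies(path, span_mb=16, writes=200):
    """Issue 4KB random writes across a file span; return (avg_ms, max_ms)."""
    block = os.urandom(4096)                # 4KB payload
    slots = span_mb * 1024 * 1024 // 4096   # number of 4KB-aligned offsets
    latencies = []
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        os.ftruncate(fd, span_mb * 1024 * 1024)
        for _ in range(writes):
            offset = random.randrange(slots) * 4096
            start = time.perf_counter()
            os.pwrite(fd, block, offset)    # random 4KB write
            os.fsync(fd)                    # crude stand-in for direct I/O
            latencies.append((time.perf_counter() - start) * 1000)  # ms
    finally:
        os.close(fd)
    return statistics.mean(latencies), max(latencies)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
avg_ms, max_ms = random_write_latencies(path)
os.unlink(path)
print(f"avg {avg_ms:.3f} ms, max {max_ms:.3f} ms")
```

Tracking the maximum alongside the average is the whole point: a drive can post a stellar average while still hiccuping for over a second, which is exactly the behavior in the table below.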

4KB Random Write Performance

Drive                | Average IOPS | Average MB/s | Average Latency | Max Latency
Crucial RealSSD C300 | 36159 IOPS   | 141.3 MB/s   | 0.0827 ms       | 1277.9 ms
Intel X25-M G2       | 11773 IOPS   | 46.0 MB/s    | 0.255 ms        | 282.9 ms
OCZ Vertex LE        | 41523 IOPS   | 162.2 MB/s   | 0.072 ms        | 109 ms
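As a sanity check, the columns above are internally consistent to within rounding: MB/s is just IOPS times the 4KB transfer size, and average latency is queue depth divided by IOPS. A queue depth of 3 reproduces the latency column, though the excerpt doesn't state the queue depth outright, so treat that as an inferred assumption:

```python
# Recompute the table's MB/s and average-latency columns from IOPS alone.
# QD = 3 is an assumption; it happens to reproduce the latency figures.
drives = {
    "Crucial RealSSD C300": 36159,
    "Intel X25-M G2":       11773,
    "OCZ Vertex LE":        41523,
}
QD = 3  # assumed queue depth of the random write test
results = {}
for name, iops in drives.items():
    mb_s = iops * 4 / 1024        # 4KB transfers -> MB/s
    lat_ms = QD / iops * 1000     # average service latency in ms
    results[name] = (mb_s, lat_ms)
    print(f"{name}: {mb_s:.1f} MB/s, {lat_ms:.4f} ms avg latency")
```

Note that none of this arithmetic says anything about the max-latency column - that's exactly the number you can't derive from averages, which is why it deserves its own measurement.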

While both Crucial and OCZ/SandForce offer incredible average write latencies, Crucial’s max latency is over a second! I haven’t actually seen max write latencies this bad since the JMicron days. But if you look at the average write latency, you’ll see that this max latency scenario basically never happens. I only worry about what happens when it does.

Crucial also warned me that despite the controller’s desire to keep performance as high as possible, if I keep bombarding it with random writes and never let up it may reach a point where it can no longer restore performance to an acceptable level. This sounds a lot like what Intel encountered with the original X25-M bug, although it’s not something I was able to bring about in normal usage thus far. Given the early nature of many of these drives, it’s going to take a lot of consistent use to figure out all of their quirks.

Overall performance of the C300 is excellent. Just like the Vertex LE, it performed admirably in all of our tests. Paired with a 6Gbps controller there’s actually a noticeable improvement in real world performance, although it’s limited to those scenarios where you’re doing a lot of sequential reads from the drive.

6Gbps SATA controller on a PCIe x1 card

The drive’s performance does come at a price. The RealSSD C300 will be available later this month in 128GB and 256GB configurations, priced at $499 and $799 respectively.

Comments

Further research shows me that no one knows what manufacturing process size these actually are, which I find strange. No reviews include the information, there are no manufacturing spec sheets, etc. Only that they are lead-free and have 48 pins, haha.

OCZ says they can't get any more SandForce controllers at a low price point. But how does OWC get the same controller for their Mercury Extreme Enterprise SSD? OWC's 100GB SSD is priced the same as OCZ's Vertex LE 100GB!

I'm curious to know what SSD is most suitable for a mac. Since OSX does not support TRIM, some sort of garbage collection has to be done within the drive (firmware) or a software tool has to be available for OSX.
Is this something you'll look into in forthcoming reviews?

How can you benchmark in so many different ways, and yet end up with hardly any relevant information? All those PCMark graphs don't tell me squat. Your AnandTech Storage Bench is flawed, since (as your last article found) the SandForce uses compression and IOPS don't equal bandwidth! Why does the user care about IOPS?? Do they care about IOPS of their graphics cards? Or CPU?

With CPUs, you measure things like encoding time and game framerates. Things that matter!

It's a waste of time to do what you're suggesting. The point of SSDs is to improve the user response time.

Encoding time? It would likely be virtually identical due to the modern pre-fetching algorithms in place.

Game framerates won't really be affected since the average of 3 runs is taken. After the first run, most everything will be cached, either in hardware or in software by the OS in memory.
In the real world, you would expect to see fewer dips in fps (min fps will be higher), assuming it is a fresh first run.

The point is germane. New SSD benchmarks are required to measure real world performance. Some of the current benchmarks end up limited to measuring nothing more than cache speed. IOPS is impractical and demonstrates nothing indicative regarding real world performance.

Ordinarily, I'd agree with you. However, the point of the article was that "older" benchmarks that simply look at the IOPS of a drive, in a vacuum, were inconsistent at best and misleading at worst. In the case of Anand's testing methodologies, you see that the IOPS numbers he comes up with are, in fact, the "worst case scenario" listed for SSDs in the article you linked to.

Anand has since changed his benchmark methodology for all SSDs to use a "polluted" SSD - he does not simply wipe the drive clean, then benchmark. He first fills the drive with data, then does a format (which does NOT wipe the drive clean - you still have the write-erase cycle to contend with), then runs benchmarks.

The other thing to look at is that the benchmarks Anand uses are, in fact, consistent. Saying that one drive attains 600 IOPS on "Anand's light StorageBench" while another attains 500 IOPS _ON THE SAME BENCHMARK_ does, in fact, give you a reasonably accurate comparison. The trouble you'll get into is if you state "Drive X gets 5000 IOPS, but Drive Y gets 9000 IOPS" without mentioning the actual benchmark used, or even worse, cherry-pick the benchmarks to favor a particular drive. Then you have to dig down and figure out whether the benchmark that gave you "5000 IOPS" was, in fact, properly executed - is that really indicative of the performance of the drive, or only of a very tightly controlled environment designed to maximize performance numbers? However, that's a question you always have to ask regardless of what you're testing, be it video cards (3DMark Vantage doesn't give you an accurate picture of how well the card will perform in some particular scenario), CPUs (SPECint or SPECfp give you minimal information about how a CPU performs in a large database environment) or other devices.

So really, I think the point the storagesearch article is hammering at is that you should be wary of reading more into generic IOPS as a benchmark for these SSDs than is simply stated.

So, in conclusion, I disagree about 80% with what you have written.

(minor "edit"):
So I've re-read the GGP post - while it is true that IOPS as a number means nothing to me, posting a bandwidth number would be just as worthless to me - what is important is the general ranking of these devices in the same benchmark. The benchmark is measuring the _relative_ performance of each of the drives in the same sequence of tests. Drawing conclusions like "this drive gets 600 in a benchmark and that one gets 400 in another benchmark" ultimately fails.
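That relative-ranking point can be made concrete: within a single benchmark, the ratio between two drives' scores is meaningful even when the absolute number isn't. (The scores and drive names below are made up purely for illustration.)

```python
# Hypothetical scores from ONE benchmark run on two drives; the absolute
# numbers mean little, but the ratio between them is a valid comparison.
same_bench = {"Drive X": 600, "Drive Y": 500}
fastest = max(same_bench.values())
ranking = {name: score / fastest for name, score in same_bench.items()}
for name in sorted(ranking, key=ranking.get, reverse=True):
    print(f"{name}: {ranking[name]:.0%} of the fastest")
```

Comparing a score from one benchmark against a score from a different benchmark is the cross-benchmark mistake the comment warns about - the two numbers share no common scale.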

(BTW, Anandtech staff, please fix the fact that I can't use any "rich" text in these posts)