Crucial/Micron RealSSD C300 - The Closest Competitor

While OCZ rushes to be the first to ship these superfast SSDs, Crucial and Micron will soon be shipping their RealSSD C300s. Based on a Marvell controller, these drives (Crucial for the channel, Micron for OEMs) are far more traditional in their architecture.

Instead, the innovation comes from the use of ONFi 2.0 MLC NAND flash and a 6Gbps SATA interface. The combination of the two results in some extremely high sequential speeds. Seemingly well-architected firmware (and a boatload of DRAM) work together to deliver good random access performance as well.

In testing, the C300 performed very much like a faster X25-M, but there was one anomaly that bothered me: maximum write latency.

Like Intel’s X25-M, whenever the C300 goes to write data it also does a bit of cleaning/reorganization of its internal data. The more cleaning the drive has to do, the longer this write process will take. Micron did its best to minimize this overhead but eventually you’ll have to pay the piper. Below you’ll see the average IOPS, average MB/s, average and max write latencies for the C300, X25-M G2 and Vertex LE during my 4KB random write test:

4KB Random Write Performance

Drive                | Average IOPS | Average MB/s | Average Latency | Max Latency
---------------------|--------------|--------------|-----------------|------------
Crucial RealSSD C300 | 36159 IOPS   | 141.3 MB/s   | 0.0827 ms       | 1277.9 ms
Intel X25-M G2       | 11773 IOPS   | 46.0 MB/s    | 0.255 ms        | 282.9 ms
OCZ Vertex LE        | 41523 IOPS   | 162.2 MB/s   | 0.072 ms        | 109 ms
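To make the latency figures above concrete, here is a minimal Python sketch of a 4KB random-write latency test. It is an illustration only, not the methodology behind the table: a real SSD test would bypass the page cache (O_DIRECT) and span a much larger area to defeat caching; here an fsync() after each write serves as a rough stand-in.

```python
import os
import random
import tempfile
import time


def random_write_latency(path, file_size=8 << 20, block=4096, iters=200):
    """Issue 4KB random writes and return (avg, max) latency in ms.

    Simplified sketch: real tests use O_DIRECT and a much larger span.
    """
    buf = os.urandom(block)
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        os.ftruncate(fd, file_size)
        slots = file_size // block
        latencies = []
        for _ in range(iters):
            off = random.randrange(slots) * block
            t0 = time.perf_counter()
            os.pwrite(fd, buf, off)
            os.fsync(fd)  # force the write out, so we time the device, not the cache
            latencies.append((time.perf_counter() - t0) * 1000)
    finally:
        os.close(fd)
    return sum(latencies) / len(latencies), max(latencies)
```

The gap the article highlights is exactly the one this returns: a drive can post an excellent average while an occasional cleanup cycle produces a max latency orders of magnitude worse.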

While both Crucial and OCZ/SandForce offer incredible average write latencies, Crucial’s max latency is over a second! I haven’t actually seen max write latencies this bad since the JMicron days. But if you look at the average write latency, you’ll see that this max latency scenario basically never happens. I only worry about what happens when it does.

Crucial also warned me that despite the controller’s desire to keep performance as high as possible, if I keep bombarding it with random writes and never let up it may reach a point where it can no longer restore performance to an acceptable level. This sounds a lot like what Intel encountered with the original X25-M bug, although it’s not something I was able to bring about in normal usage thus far. Given the early nature of many of these drives, it’s going to take a lot of consistent use to figure out all of their quirks.

Overall performance of the C300 is excellent. Just like the Vertex LE, it performed admirably in all of our tests. Paired with a 6Gbps controller there's actually a noticeable improvement in real world performance, although it's limited to those scenarios where you're doing a lot of sequential reads from the drive.

6Gbps SATA controller on a PCIe x1 card

The drive’s performance does come at a price. The RealSSD C300 will be available later this month in 128GB and 256GB configurations, priced at $499 and $799 respectively.

Sorry, not true. Like I said, SandForce's compression makes IOPS not equal to bandwidth. See http://tinyurl.com/yden7kc . And allow me to restate my comments from the last article: in article http://tinyurl.com/yamfwmg , in IOPS, RAID0 was 20-38% faster! Then the loading *time* comparison had RAID0 giving equal and slightly worse performance! Anand concluded, "Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance."

So there you have it. Why measure IOPS?
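For context on the IOPS-vs-bandwidth relationship this comment raises: with a fixed 4KB transfer size and incompressible data, host-side bandwidth follows directly from IOPS. A quick check against the article's table (assuming, as the figures imply, binary megabytes and 4KiB transfers):

```python
def iops_to_mbps(iops, block_bytes=4096):
    """Convert fixed-size-transfer IOPS to MB/s (binary MB, as the table implies)."""
    return iops * block_bytes / (1 << 20)


for name, iops in [("C300", 36159), ("X25-M G2", 11773), ("Vertex LE", 41523)]:
    print(f"{name}: {iops_to_mbps(iops):.1f} MB/s")
# → 141.2, 46.0, 162.2 MB/s, matching the table to within rounding
```

Compression complicates this only behind the controller: SandForce can write less to NAND than the host sends, but at the host interface the IOPS × transfer-size arithmetic still holds.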

> erple2: what is important is the general ranking
> of these devices in the same benchmark. The
> benchmark is measuring the _relative_ performance
> of each of the drives in the same sequence of
> tests.

What "general ranking" lacks is the issue of significance. I apologize, but I will again restate what I posted on the last article: is the performance difference between drives significant or insignificant? Does the SandForce cost twice as much as the others and launch applications just 0.2s faster? Let's say I currently don't own an SSD: I would sure like to know that an HDD takes 15s at some task, whereas the Vertex takes 7.1s, the Intel takes 7.0s, and the SF takes 6.9s! Then my purchase decision would be entirely based on price! The current benchmarks leave me in the dark regarding this.

The performance/free space dropoff is a significant issue, especially with otherwise-fast SSDs (e.g. Intel's). For example, the 80GB X25-M should really be relabeled as a 60GB drive due to progressively worsening performance as the amount of free space decreases (beyond 70GB used, it starts getting REALLY bad). Do these drives show any improvement in the performance to free space degradation curve?

I'm building a new PC soon and was going to buy another Agility 60 and RAID-0 it with my existing 60GB Agility. But since the Intel X25-M 80GB is almost the same price, and blows away the Agility in random reads (which is more relevant in OS/App usage than sequential speed, correct?), would it be better just to buy and run the single Intel drive instead?

I'm not too fussed about losing out on 120GB of capacity in RAID-0, and besides, I can install the games to the Agility instead, and use the Intel for the OS/Apps.

The one thing missing is the one that's really relevant to me: workstation performance.
It's probably close to the "heavy load" scenario, but... For me, it's a mix of compiles, compute-intensive modeling, visualization, and GIS use. Of these, the compiles, the visualization, and the GIS are the really-interactive items, so are probably most important.
There are lots of compile benchmarks out there; it would be relatively easy to generate a GIS benchmark, using some of the GRASS GIS logs I have from what I've been doing lately.

I completely agree that there should be a developers benchmark, and keep mentioning this when these articles appear.

Compiling a large software project seems to me to be a good general purpose test. There'll be random and sequential reads and writes, from a few bytes to many megabytes, in some hard-to-predict ratio, as the build process reads sources/headers, uses temporary files and writes output. It isn't obvious to me whether the Intel or the Indilinx/Micron characteristics would be favored.

But afaik no-one's studied this from an SSD angle, and I wish Anand would at least add a benchmark which could, say, build a Linux distro while grepping it repeatedly for some random text.
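As a rough sketch of what such a benchmark harness could look like (the build and grep commands below are hypothetical placeholders, not a standardized test):

```python
import subprocess
import threading
import time


def timed_build_with_greps(build_cmd, grep_cmd, src_dir):
    """Time build_cmd while repeatedly running grep_cmd over src_dir in the
    background -- a rough mixed read/write I/O workload, not a standard benchmark."""
    stop = threading.Event()

    def grind():
        # Hammer the drive with reads until the build finishes.
        while not stop.is_set():
            subprocess.run(grep_cmd + [src_dir],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)

    t = threading.Thread(target=grind, daemon=True)
    t.start()
    t0 = time.perf_counter()
    subprocess.run(build_cmd, check=True)  # the timed foreground workload
    elapsed = time.perf_counter() - t0
    stop.set()
    t.join()
    return elapsed
```

Usage might look like `timed_build_with_greps(["make", "-j8"], ["grep", "-r", "static"], "linux-2.6/")`; the exact commands and source tree are stand-ins for whatever project you build.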

Anand,
As always, another great article. I just wanted to say that it is looking really bright for SSDs. The performance benefits of SSDs are just too great to ignore (unlike the switch from DDR2 to DDR3). But I am going to hold off until Q4 as by then, the market will have a lot more competition (hence lower prices), bugs will be sorted out, and the thought of dead drives (such as the one you experienced) just gives me the creeps even if they do replace it with a new one.

Anand, it is getting hard to keep track of different SSDs, which controller they use, how many flash chips, etc. It would be wonderful if you could start an 'SSD decoder ring' chart that lists the relevant information, maybe even linked to performance numbers like you've done with CPUs.