51 Comments

Good stuff, as usual. But at what point do SSD performance numbers cease to matter because all the drives are so fast that the differences are imperceptible?

Back when there were awful JMicron SSDs that struggled along at 2 IOPS in some cases, the difference was extremely important. More recently, your performance consistency numbers offered a finer-grained way to show that some SSDs were flawed.

But are we heading toward a future in which any test you can come up with shows all of the SSDs performing well? Does the difference between 10,000 IOPS and 20,000 really matter for any consumer use? How about the difference between 300 MB/s and 400 MB/s in sequential transfers? If not, do we declare victory and stop caring about SSD reviews?

If so, then you could claim some part in creating that future, at least if you believe that vendors react to flaws that reviews point out, even if only because they want to avoid negative reviews of their own products.

Or maybe it will be like power supply reviews, where mostly only good units get sent in for review, while bad ones just show up on Newegg hoping some sucker will buy one, or occasionally get reviewed when a tech site buys one rather than receiving a review sample from the manufacturer?

Storage is still the bottleneck for performance in most cases. Bandwidth between the CPU and DDR3-1600 is 12.8 GB/s. The fastest consumer SSDs are still ~25 times slower than that in a best-case scenario. You also have to take into account all the different latencies associated with any given process (i.e. fetch this from the disk, fetch that from RAM, do an operation on them, etc.). The reduced latency is really what makes an SSD so much faster than an HDD.
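The bandwidth gap described above is easy to sanity-check with back-of-the-envelope math (the ~500 MB/s SSD figure is an assumption for a fast SATA drive, not a number from the review):

```python
# Rough check of the DRAM-vs-SSD bandwidth gap mentioned above.
# Assumes single-channel DDR3-1600 (64-bit bus) and a ~500 MB/s SATA SSD.
ddr3_1600_gbps = 1600e6 * 8 / 1e9   # 1600 MT/s * 8 bytes per transfer = 12.8 GB/s
ssd_gbps = 0.5                      # ~500 MB/s best-case sequential read
ratio = ddr3_1600_gbps / ssd_gbps
print(f"DRAM: {ddr3_1600_gbps:.1f} GB/s, SSD: {ssd_gbps:.1f} GB/s, "
      f"gap: ~{ratio:.0f}x")
```

Dual-channel memory doubles the DRAM figure, so the real-world gap is often closer to 50x.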

As for the tests - I think the new 2013 test looks good in that it will show you real-world heavy-usage data. At this point it looks like the differentiator really is worst-case performance - i.e. the drive not getting bogged down under a heavy load.

I came in to post that same thing, talldude2. Remember why RAM is around in the first place: Storage is too slow. Even with SSDs, the latency is too high, and the performance isn't fast enough.

Hell, I'm not a programmer, but perhaps more and more things could be coded differently if they knew for certain that 90-95% of customers have a high performance SSD. That changes a lot of the ways that things can be accessed, and perhaps frees up RAM for more important things. I don't know this for a fact, but if the possibility is there you never know.

Either way, back to my original point: until RAM becomes redundant, we're not fast enough, IMO.

-- Hell, I'm not a programmer, but perhaps more and more things could be coded differently if they knew for certain that 90-95% of customers have a high performance SSD.

It's called an organic normal form relational schema. Lots fewer bytes, lots more performance. But the coder types hate it because it requires so much less coding and so much more thinking (to build it, not to use it).

When I was an undergraduate (a freshman, actually), whenever a professor (English, -ology, and such) would assign us a paper, we'd all cry out, "How long does it have to be????" One such professor replied, "Organic length: as long as it has to be." Not very satisfying, but absolutely correct.

When I was in grad school, a professor mentioned that he'd known one guy whose Ph.D. dissertation (economics, of the mathy variety) was one page long: an equation and its derivation. Not sure I believe that one, but it makes the point.

But in a sense Tukano is right: the SATA 3 standard can already be saturated by the fastest SSDs, so the connections between components are indeed the bottleneck. Most SSDs are still getting there, but the standard was saturated by the best almost as soon as it became widespread. They need a much bigger jump next time to leave some headroom.

The first round of SATA Express will give 16 Gbps for standard drives and up to 32 Gbps for mPCIe-style cards (formerly known as NGFF). I think we'll see a cool round of enthusiast drives once NGFF is finalized.

Storage is almost always the bottleneck. Faster storage = faster data moving around your PC's various subsystems. It's always better. You certainly aren't likely to notice the incremental improvement from one drive to the next, but it's important that these improvements are made, because you sure as hell WILL notice upgrading across 5-6 generations.

What causes your PC to boot in 30 seconds is a combination of a lot of things, but seeing as mine boots in much closer to 5 seconds, I suspect you must be running Windows 7 without a really fast SSD (I'm running 8 with a 240 GB Intel 520 series drive).

5 seconds would be very fast; I get to the Windows desktop in Windows 8 in 11 seconds, measured from pressing the power button on my laptop to reaching the real desktop (not Metro). I have an older Samsung 830, a first-generation i7 CPU, and 16 GB of memory.

Regarding PC boot time, for me it was easily my motherboard's POST time.

My old Asus took a minimum of 20 seconds to POST! When I bought my new system I researched POST times and ended up with an ASRock that POSTs in about 5 seconds. Boom, now I can barely sit down before I'm ready to log in. :)

I wish there were more latency measurements. The only ones were during the Destroyer benchmark. Latency under a lower load would be a useful metric. We are using NFS on top of ZFS, and latency is the biggest driver of performance.

There is still a lot of headroom left; storage is still the bottleneck of any computer; even with 24 SSDs in RAID 0 you still don't get lightning speed. Try a RAM drive, which allows for 7,700 MB/s write speed and 1,000+ MB/s at 4K random write: http://www.madshrimps.be/vbulletin/f22/12-software...

Put some data on there, and you can now start stressing your CPU again :)

In terms of consumer usage, 99% of us will probably need much faster sequential read/write speeds. We are near the end of random write improvement, whereas random read could do with a lot more of an increase.

Once we move to SATA Express at 16 Gbps, we could be moving the bottleneck back to the CPU. And since we are not going to get many more IPC and GHz improvements, we are going to need software written with multiple cores in mind to see further gains. So it's quite possible the next generation of chipsets and CPUs will be the last of this generation before software moves to a multi-core paradigm. Which, looking at it now, is going to take a long time.

For most users, it stopped mattering a long time ago. In a machine used for Word/Excel/PowerPoint, Internet, email, and movies, I stopped being able to perceive a day-to-day difference after the Intel X25-M/320. I tried Samsung 470s and 830s and got rid of both for cheaper Intel 320s.

Honestly, for the average person and most enthusiasts, SSDs are plenty fast and a difference isn't noticeable unless you are benchmarking (unless the drive is flat-out horrible; the difference between an m400 and an 840 Pro is unnoticeable unless you are looking for it). The most important aspects of an SSD then become performance consistency (though really few people apply a workload where that is a problem), power use (mainly for mobile), and RELIABILITY.

I agree 100%. I can't tell the difference between the fast SSDs from the last generation and those of the current generation in day-to-day usage. The fact that Anand had to work so hard to create a testing environment that would show significant differences between modern SSDs is very telling. Given that reality, I choose drives that are likely to be highly reliable (Crucial, Intel, Samsung) over those that have better benchmark scores.

I think Anand's penchant for on-drive encryption ignores an important aspect of firmware: it's software like everything else. Correctness trumps speed in encryption, and I would rather trust kernel hackers to encrypt my data than the OEM software team responsible for an SSD's closed-source firmware.

I'm not trying to malign OEM programmers, but encryption is notoriously difficult to get right, and I think it would be foolish to assume that an SSD's onboard encryption is as safe as the mature and widely used dm-crypt and BitLocker implementations in Linux and Windows.

In my mind the lack of firmware encryption is a plus: the team at SanDisk either had the wisdom not to home-brew an encryption routine from scratch, or they had more time to concentrate on the actual operation of the drive.

Amazing. I am using an OCZ Vertex 4 256GB drive. I bought it last November for about $224 and am very happy with it. This SanDisk drive is the same price ($229), the same capacity (240GB), and the same format. However, it performs a full 5% to almost 100% better, depending on block size, random/sequential, and read/write activity. Amazing what 7 to 12 months have brought to the SSD market!

You wrote: "In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time"

In fact this is not what your test does. Your test records IOPS in one-second intervals, but does not measure the latency of individual IOs. It would in fact be interesting to see the latency distribution for these drives.
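The distinction the commenter is drawing can be made concrete: instead of counting completed IOs per second, you time each IO individually and look at the distribution. A minimal sketch (this is not AnandTech's methodology; the file path and counts are placeholders, and without direct I/O the page cache will absorb repeat reads):

```python
# Sketch: measure per-IO read latency rather than IOPS per second.
# Caveat: os.pread is POSIX-only, and reads may be served from the
# page cache unless the file is opened with O_DIRECT or the cache is dropped.
import os
import time
import random

def sample_read_latencies(path, n=1000, block=4096):
    """Return a sorted list of individual 4 KB read latencies in microseconds."""
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    latencies = []
    try:
        for _ in range(n):
            # Pick a block-aligned random offset within the file.
            off = random.randrange(0, max(1, size - block)) // block * block
            t0 = time.perf_counter()
            os.pread(fd, block, off)
            latencies.append((time.perf_counter() - t0) * 1e6)
    finally:
        os.close(fd)
    return sorted(latencies)

# Example usage (hypothetical path):
# lats = sample_read_latencies("/path/to/testfile")
# p99 = lats[int(0.99 * len(lats))]   # tail latency, invisible in an IOPS average
```

The sorted list makes percentiles trivial to read off, which is exactly the view a one-second IOPS counter hides.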

The benchmarks on this drive are good... not great, and I don't think the opening bias is necessary. Who runs any disk at capacity 24/7? Perhaps some people temporarily... but a drive kept full 24/7???

Only a fool.

Kudos to SanDisk for making a competitive offering, but please, AnandTech, keep the bias out of the reviews... especially when it's not warranted.

Storage bench is great, but it's not the only metric.

Haswell is good, not great. But if you're rocking a 2600K from 2 years ago? Meh.

Where are the legendary power savings? Why don't we have 4 GHz+ SKUs? 8 cores? 64 GB RAM support? Quick Sync degraded, lol!! Good job on Iris Pro. Why can't I buy it and slap it into an enthusiast board?

Yet you read this review and the Haswell review and come away feeling positive.

Real life:

Intel: a mild upgrade in IPC, higher in-use TDP, and 2-year-old CPUs are still competitive.

Why do you keep ignoring the Samsung 840 Pro with its spare area increased when it comes to consistency? It seems to me to be the best drive around. And if you value and know about consistency, it seems pretty straightforward to increase the spare area, and you should have the ability to do so as well.

Agreed, it looks like a Samsung 840 Pro that's not completely full would be the performance king in every aspect - most consistent (check the 25% spare area graphs!), fastest in every test, good reliability history, and the best all around power consumption numbers, especially in the idle state which is presumably the most important.

Yet this drive is virtually ignored in the review, other than the ancillary mentions in all the performance benchmarks it still wins: "The SanDisk did great here! Only a little behind all the Samsung drives... and as long as the Samsung drives are completely full, the SanDisk gets better consistency, too! The SanDisk is my FAVORITE!"

The prevailing theme of this review should probably be "The SanDisk gives you performance nearly as good as a Samsung at a lower price." Not, "OMG I HAVE A NEW FAV0RIT3 DRIVE! Look at the contrived benchmark I came up with to punish all the other drives being used in ways that nobody would actually use them in..."

Seriously, anybody doing all that junk with their SSD would know to partition off 25% of spare area, which then makes the Samsung Pro the clear winner, albeit at a higher cost per usable GB.
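The cost-per-usable-GB trade-off in that last sentence is easy to quantify; here is a quick sketch (the drive size and price are assumptions for illustration, not figures from the review):

```python
# Back-of-the-envelope cost per usable GB when over-provisioning a drive.
# Figures below are assumptions for illustration only.
raw_gb = 256          # e.g. a 256 GB-class drive
price_usd = 250.0     # assumed street price
op_fraction = 0.25    # leave 25% of the drive unpartitioned as spare area

usable_gb = raw_gb * (1 - op_fraction)
print(f"Usable capacity: {usable_gb:.0f} GB")
print(f"Cost: ${price_usd / usable_gb:.2f}/GB with 25% spare area "
      f"vs ${price_usd / raw_gb:.2f}/GB with none")
```

So the consistency gain from extra spare area comes at roughly a one-third premium per usable gigabyte under these assumptions.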

To the extent that the "cloud" (re-)creates server-dense/client-thin computing, how well an SSD behaves in today's "client" doesn't matter much. Server workloads, with lots of random operations, will be where storage happens. Anand is correct to test SSDs under more server-like loads. As many have figured out, enterprise HDDs are little different from consumer parts. "Cloud" vendors, in order to make money, will segue to "consumer" SSDs. Thus, we do need to know how well they behave under "server" loads; they will see them in any case. Clients will come with some amount of flash (not necessarily even on current file system protocols).

Any word on whether this drive will be offered in a 960 GB capacity for a reasonable price in the near future?

This looks like the best-performing drive yet reviewed, but I doubt I will see that big of a difference from my 120 GB Crucial M4 in day-to-day usage. I really don't think most of us will see a large difference until we move to a faster interface.

So unless things change drastically in the next few months, I think my next drive will be the Crucial M500 960GB. Yes, it will not be as consistent or quite as fast as the SanDisk Extreme II, but I won't have to worry about splitting my files, or about moving Steam games from my 7200 RPM drive to the SSD when they have long load times.

Question for those more knowledgeable: I'm building a new DAW (4770K, Windows 8) which will also be used for development (Eclipse on Linux). Based on earlier AnandTech reviews I ordered a 128GB 840 Pro for use as the OS drive, the Eclipse workspace directory, and the like. Reading this article, I'm not sure if I should return the 840 Pro for the SanDisk... the 840 Pro leads it in almost all the metrics except the one that is the most "real-world" and which seems to mimic what I'll be using it for (i.e. Eclipse).

I gave up on SanDisk after they totally botched TRIM on their previous generation drive. They did such a poor job admitting it and finally fixing it that it left a bad taste in my mouth. They'd have to *give* me a drive for me to try their products again.

Quick question. You mentioned a method to create an unused block of storage that could be used by the controller: creating a new partition (I assume fully formatting it) and then deleting it. This assumes TRIM marks the whole set of LBAs that covered the partition as available. What is the comparable procedure on a Mac, particularly since you don't get TRIM by default? And if you do turn it on, would it work in this case? Is there a way to guarantee you are allocating a block of LBAs to non-use on the Mac?

Great review! You made me read all of it and learn everything about SSDs. Really great work!!

Do you know if the SanDisk Extreme II is compatible with the 2009 white MacBook (the last version of the white model)?

I read that the drive has 512 MB of 1600 MHz DRAM, and my MacBook has 8 GB of DDR3 1067 MHz RAM. It is not compatible with higher-clocked memory; it crashes and doesn't work properly.

So... I don't know if this is a problem. I play live keyboards with Logic Pro and don't want any crashes that wouldn't matter on other occasions, but I also need the speed of an SSD when running through my projects.

I have friends with older MacBooks (2006, 2007) and older SSDs who get serious monitor freezes of 10 to 20 seconds.