Last week I migrated both of my primary work computers, my desktop and my notebook, to SandForce based SSDs. My desktop now uses an OCZ Vertex 2 based on the SandForce SF-1200 with OCZ’s special sauce firmware. My notebook uses Corsair’s Force F100, also based on the SF-1200 and offering performance comparable to the Vertex 2.

Clearly 100GB isn’t enough space for everything I have, so on my desktop I have a pair of 1TB drives in RAID-1. This is where I store all of my pictures, music and some of my movies. Automatic backups happen to a separate 2TB networked drive.

I’ve got a separate file server with a 3TB RAID-5 array that feeds the rest of my home and office. That last part mostly exists to feed my HTPC and to hold all of my benchmarking applications, images and lab files; it isn’t necessary otherwise.

This is exactly the sort of usage model SandForce was planning on when it designed its DuraWrite technology. If the majority of the data you store can somehow be represented by fewer bits you can solve a lot of the inherent problems with building a high performance SSD.

The SF-1200 and SF-1500 controllers do just that. The controllers and their associated firmware do whatever it takes to simply write less. In systems like my desktop or notebook, the benefit is straightforward: writing less means the NAND lasts longer and performance remains high for longer, and with TRIM you can actually maintain that very high level of performance almost indefinitely.
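SandForce hasn’t published the details of DuraWrite, so purely as an illustration of the underlying idea, here’s a short Python sketch using zlib as a stand-in compressor. It shows how much smaller a redundant "logical" write can become before anything has to be programmed into NAND:

```python
import zlib

# Stand-in for the kind of redundant data an OS writes constantly
# (logs, registry updates, config files). SandForce's real algorithm
# is proprietary; zlib is used here purely for illustration.
logical_write = b"user=anand action=save status=ok\n" * 128

physical_write = zlib.compress(logical_write)

print(f"logical write:  {len(logical_write)} bytes")
print(f"physical write: {len(physical_write)} bytes")
# The drive only has to program the smaller payload, which is why
# compressible workloads wear the NAND less and stay fast longer.
```

The exact ratio depends entirely on the data, which is the whole point: the win only exists when the incoming bits are redundant.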

SandForce’s technology is entirely transparent to the end user. You don’t get any extra capacity; all you get is better performance.

We’ve been looking at SandForce drives from multiple vendors for a while now. If you want the history on the technology look here, and if you want to know how SSDs work in general click here.

As I just mentioned, OCZ’s Vertex 2 ended up in my desktop. That’s the drive we’re looking at today. I moved to SandForce SSDs not because I wanted more performance, but because I wanted to begin long term testing of the mass production firmware on these drives. If I’m going to recommend them, I’m going to use them.

44 Comments

In the article, it was suggested that random data is not representative of what users will see. While current drive sizes may make that true today, performance under random-looking data will only grow more important as drives get larger. Why? Users will start to store a larger percentage of already-compressed files, like mp3s and movies, on their SSDs instead of buying additional bulk drives, and compressed files are, by nature, effectively identical to random data.
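The claim that already-compressed files behave like random data is easy to check. Assuming Python with zlib as the compressor (and `os.urandom` as a stand-in for mp3/movie payloads, since real media files aren't available here), a second compression pass on high-entropy bytes gains essentially nothing, while redundant data collapses:

```python
import os
import zlib

# os.urandom stands in for already-compressed media (mp3, XviD):
# high-entropy bytes that a compressor cannot shrink further.
media_like = os.urandom(4096)
redundant = b"A" * 4096

print(len(zlib.compress(media_like)))   # roughly 4096, often slightly more
print(len(zlib.compress(redundant)))    # a few dozen bytes
```

This is exactly the workload where a compressing controller falls back to its worst-case write path.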

Anand made two important distinctions: *small* and *writes*. He wasn't referring to all random data regardless of size, and he wasn't referring to reads. Those are important qualifications. Most mp3 files, and certainly movies, fall under the *large* file size. Additionally, the vast majority of activity with those types of files is going to be reads, not writes.

Not to mention that compressed files such as XviD/etc films won't see anywhere near as much benefit from an SSD in the first place. Sure, your read/write speeds might double or triple, but when it comes to sequential activity, mechanical drives are fast. SSDs excel at random reads/writes, which large files don't have much of, since they're typically stored in big chunks, meaning sequential activity, not random. And that's only a copying scenario. The point at which an SSD becomes useful for such large files lies further away than the point at which it becomes viable for a wider range of files. I would transfer my games to a large SSD long before I would start moving compressed media files over. They can stay on a mechanical drive for a loooooong time.

When we do finally have large enough drives that they might be the primary storage drives, I doubt Sandforce will necessarily be applying the same ideas, and if they are, they will have progressed significantly from their current state.

Talking about applying a realistic future problem to drives which exist today is meaningless when you consider how much progress has already been made up to this point, and how much progress will be made between now and when the future issue actually becomes a reality (when SSDs are affordable for mass storage purposes).

I'm not arguing about whether a particular disclaimer in the article was completely accurate. My primary point was that it's not unheard of for people to use only one drive and still have a significant portion of their data compressed, and the number of people affected by this will likely increase over time. Further, I missed a key scenario that other posters have mentioned: file/partition encryption would likely be affected as well, and it's reasonable to assume that scenario will become more common as disk-level encryption becomes easier and is even required by some companies.

Further, I should point out that the problem (insofar as it is one) by no means affects only small writes. Large reads, according to the tables, take a 20% performance hit, and large writes a 43% hit.

I don't intend to denigrate SandForce's methods here, by any means. If it helps, it helps, and I'll probably get at least one of these myself. I'm just pointing out that the 'YMMV' caveat applies to more people and situations than some realize.

Re: encryption and the point that encrypted data doesn't compress. This is true, but the SF controller self-encrypts. There is little to no point in doing software full-disk encryption before sending the data to the SF controller to be encrypted again. The SF-1200 uses 128-bit AES, which NIST counts as strong encryption.

Unfortunately, the only way to use that encryption is to use the device (ATA) password. Management and real-world usage of that is a pain and not nearly as flexible as the options available with, say, BitLocker. I'd wager that most companies with OS-level encryption policies in place will want to continue with their current policies rather than make exceptions for using ATA passwords with drives like this, even if the quality of the encryption itself is just as good.

Most laptops, including mine, allow for only one hard drive. I bought a Corsair F200 (before Anand reported the compressed-file issue) and planned to keep my jpg photos, mp3s and some movies on the laptop, attached to a TV (or a receiver). I think this is a reasonably common scenario. Users are already dealing heavily with large compressed files, which are slower sequentially on SandForce than on Indilinx.

I wish Crucial had fixed their firmware sooner, because the 256GB C300 is about $50 cheaper than the F200 or Vertex 2 and faster in every dimension.

Get a cheap small external HDD if you're concerned about this. But generally I would not consider mp3s and jpgs large files. Yes, compressed, but by no means large.

Yeah, many small laptops only have one HDD/SSD bay. It's IMHO stupid; it would be much better to have space for two and just ship an external DVD drive. I mean, I need my optical drive maybe once every couple of months, so it's completely useless if you have to carry it around all the time. (That's one great thing about the HP Envy 15.) ;)