Hey, There’s an Elephant in the Room

When the first X25-M reviews went live a few people discovered something very important, something many of us (myself included) missed and should’ve addressed: the drive got slower the more you filled it up. It’s no great mystery why this happened, but it seemed odd at the time because it went against conventional thinking.

It’s worth mentioning that hard drives suffer from the same problem, just for a different reason.

Hard drives store data on platters; the platters rotate while an arm with read/write heads on it hovers over the surface and reads data as the platter spins. The circumference of a track grows the further out on the platter you go; that’s just how circles work. The side effect is that for the same amount of rotation, the heads cover more area on the outer tracks than on the inner ones.

The result is that transfer speeds are greater on the outer tracks of the platter than on the inner ones. OSes thus try to write as much data to the outer tracks as possible, but like beachfront property, there’s only a limited amount of space. Eventually you have to write to the slower parts of the drive, so the fuller your drive is, the slower your transfer rates will be for data stored on the innermost tracks.
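To put a rough number on this, here’s a back-of-the-envelope sketch. The radii and spindle speed below are assumed, illustrative figures (not from any specific drive): at a fixed RPM, the surface under the head moves faster on outer tracks, and with uniform bit density, throughput scales with that linear speed.

```python
# Illustrative sketch with assumed numbers: at a fixed RPM, linear speed
# under the head scales with track radius, so outer tracks move more data
# past the head per second than inner tracks do.
import math

RPM = 7200                 # assumed spindle speed
inner_radius_mm = 15.0     # assumed innermost track radius
outer_radius_mm = 45.0     # assumed outermost track radius

def linear_speed_mm_per_s(radius_mm, rpm):
    """Distance of track surface passing under the head per second."""
    return 2 * math.pi * radius_mm * (rpm / 60.0)

inner = linear_speed_mm_per_s(inner_radius_mm, RPM)
outer = linear_speed_mm_per_s(outer_radius_mm, RPM)

# With uniform bit density, sequential throughput is proportional to
# linear speed, so the outer tracks win by the ratio of the radii:
print(f"outer/inner speed ratio: {outer / inner:.1f}x")  # prints 3.0x here
```

With these made-up radii the outermost track is three times faster than the innermost one, which is why filling a hard drive pushes your data onto progressively slower real estate.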

Fragmentation also hurts hard drive performance. While modern-day hard drives have gotten pretty quick at transferring large amounts of data stored sequentially, spread the data out all around the platter and things get real slow, real fast.

Randomness is the enemy of rotational storage.

Solid state drives aren’t supposed to have these issues. Data is stored in flash, so it doesn’t matter where it’s located; you get to it at the same speed. SSDs have +5 armor immunity to random access latency (that’s got to be the single most geeky-sounding thing I’ve ever written, and I use words like latency a lot).

So why is it that when you fill up an SSD like Intel’s X25-M, its performance goes down? Even more worrisome, why doesn’t its performance go back up when you delete data from the drive?

While SSDs really are immune to the problems that plague HDDs, they still get slower over time. How can both be true? It’s time for another lesson in flash.

Outstanding article that really helped me understand SSD drives. I wonder how much of an impact the new SATA III standard will have on SSD drives? I believe we are still at the beginning stage for SSD drives and your article shows that much more work needs to be done. My respect for OCZ and how they responded in a positive and productive way should be a model for the rest of the SSD makers. Thank you again for such a concise article.
Respectfully,
Sinned

araczynski: The 4/512 split isn't done by accident; it's done to lower prices. The flash technology used in SSDs is meant to replace platter HDDs in the future, and there's no way of doing that without cost reductions like these. Even so, SSDs still cost several times more per unit of storage.

I understand that, but I don't remember the original hard drives being released slower than the floppy drives they were replacing.

This is part of the 'release beta products' mentality: make the consumer pay for further development.

The 5.25" floppy was better than the huge floppy in all respects when it was released. The 3.5" floppy was better than the 5.25" floppy when it was released. USB flash drives were better than 3.5" floppies when they were released.

I just hate the way this is being played out at the consumer's expense.

What a great article. I usually have to skip forward when things bog down, but I never did with this long but very informative piece. Your focus on what matters to users is why I always check AnandTech first thing every morning.

The method for generating "used" drives is flawed. To create a truly used drive, the spare blocks must be filled as well. Since this was not done, the results are biased towards the Intel drives, whose generous pool of spare blocks was *not* exhausted when producing the used state. An additional bias is introduced by reducing the IOmeter write test to only 8 GB; perhaps the Intel drives have enough spare blocks that these 8 GB can be written to "fresh" blocks without the need for (time-consuming) erase operations.
Apart from these concerns, I enjoyed reading the article.

I also just created an account to post, very nice article!
Lots of good, well-thought-out information. I'm so tired of synthetic benchmarks; glad someone goes to the trouble to bench these things right (and appears to have the education to really understand them). What's with the grammar police, though? Geez...