The OS and application data are continually becoming easier to fit on today's platters, so why not move them to NAND? (Image courtesy of Samsung)

Do you need a solid-state drive? Samsung says you do, and here's why

DailyTech recently had the opportunity to sit down with Don Barnetson, Samsung's director of flash marketing, to chat about the future of NAND devices. Specifically, we picked Barnetson's brain about solid-state drives and future NAND storage.

Over the past few months, we've seen dozens of announcements about solid-state hard drives. PQI has already announced a 64GB flash drive (which, coincidentally, is based on Samsung NAND), while ASUS, Fujitsu, Samsung and SanDisk have all announced products based on solid-state hard drives. Given that the hard drive has been the bottleneck in PC performance for years, the question has to be asked: is solid-state technology ready to take us out of the dark ages of storage?

In the '90s, the largest advocate of more storage was Microsoft. The company insisted we have larger hard drives for Windows 95, then Windows 98. After that, the biggest proponents of more storage became application designers, pleading with users to buy larger hard drives for image manipulation or games. But today I can fit Vista, Outlook (and all of those 2GB PST files) and even a few games in less than a tenth of my 250GB hard drive. The rest of the drive is mostly MP3s and a few DVD rips. I am a prime candidate for a solid-state hard drive.

Most business users occupy only a fraction of the hard drive space provided for them, especially considering that most unique data gets written to a network anyway. The operating system and applications can all fit in less than 10GB of space, which is well within the capacities of solid-state hard drives today. Barnetson's group has calculated that during an 8-hour day the average hard drive:

Has about a 1% chance of failure per year

Consumes 9W

Loses about 7 to 15 minutes per day in productivity

The fact that we lose so much time to hard drive spin-ups and seeks is appalling on its own, but decreased power consumption is what is driving solid-state adoption today. A NAND device uses less than 200 milliwatts during reads and writes, and essentially zero watts when not being accessed. On the desktop this is relatively unimportant, but in a notebook the hard drive accounts for about 10% of the total power draw. Cutting this number down to less than 1% means an extra 12 minutes of usage on my 2-hour battery.

When asked about the reliability of NAND-based hard drives, Barnetson had no problem shrugging off fears of write corruption or failure. "Samsung's solid-state devices have a MTBF of approximately 1 to 2 million hours." Typical disk-based hard drives have a mean time between failures of approximately 100,000 to 200,000 hours. Since there are no moving parts, the only real points of failure are a component coming unsoldered or a problem with the physical bits during a write.

Obviously, write errors are a huge concern for those who have used flash products in the past. Only a few years ago the highest-end flash media was usable for only 1,000 or so writes; at that point the physical bits would "burn out" and could no longer be flipped. Today's single-level cell flash (SLC, memory that stores one bit per cell) is rated in excess of 100,000 writes before burnout. Multi-level cell (MLC) flash, memory that stores multiple bits per cell, is significantly cheaper but is still rated at over 10,000 writes before burnout.

Is 10,000 writes enough? Absolutely, assures Barnetson. Samsung memory uses a technique called "wear leveling" to distribute writes across as many groups of cells as possible. The idea behind wear leveling is that all of the cells receive approximately the same number of writes, maximizing the life of the device. Consider a typical computer that writes 120 megabytes per hour to the hard drive. On a 32GB solid-state NAND drive, wear leveling would distribute this data over the entire drive; it would take about 267 hours to fill the device once. Even on a multi-level cell device, at this rate it would take no less than 150 years to burn out all the bits on the SSD. Single-level cell drives are rated for ten times as many writes.
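
The arithmetic behind those lifetime figures can be sketched as follows. This is a back-of-the-envelope estimate assuming perfectly even wear leveling and no write amplification; note that the naive continuous-writing result comes out well above the 150-year figure quoted above, which presumably builds in real-world overhead:

```python
# Rough SSD lifetime estimate under ideal wear leveling, using the
# figures quoted in the article (120 MB/hour write load, 32GB drive).

DRIVE_MB = 32 * 1000           # 32GB drive, in megabytes
WRITE_MB_PER_HOUR = 120        # typical write load quoted above

hours_per_fill = DRIVE_MB / WRITE_MB_PER_HOUR      # ~267 hours per full pass
mlc_years = hours_per_fill * 10_000 / (24 * 365)   # MLC rated at 10,000 cycles
slc_years = hours_per_fill * 100_000 / (24 * 365)  # SLC rated at 100,000 cycles

print(f"{hours_per_fill:.0f} h per fill; MLC ~{mlc_years:.0f} years; "
      f"SLC ~{slc_years:.0f} years of continuous writing")
```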

Even so, Samsung's initial solid-state drives are all single-level cell designs. This first generation of SSDs is prohibitively expensive for most, but Samsung's SSD roadmap already includes plans for multi-level cell drives as early as next year, which should bring costs down considerably. Additionally, Samsung anticipates announcing drives in capacities of up to 128GB in early 2008.

Solid-state memory will not entirely replace disk drives. The fact is, media is more prevalent every day. Five years ago, a fringe enthusiast may have had as much as 1GB of MP3s on his hard drive; today even an average user may have 100GB of nothing but Lost episodes. As an intermediate step, hybrid hard drives (hard drives with multi-gigabyte NAND caches) will provide the 2007 stopgap before really big SSDs get cheap. These drives can load the entire operating system, some applications and even a little user data (like Outlook PST files) into the NAND.

Our insatiable appetite for media cannot be remotely matched by NAND production right now, but for games and operating systems, solid-state devices are here and ready to go.

Comments


quote: Is 10,000 writes enough? Absolutely, assures Barnetson. Samsung memory uses a technique called "wear leveling" to distribute the writes on a media through as many groups of cells as possible. Consider a typical computer that writes 120 megabytes per hour to the hard drive. On a 32GB solid-state NAND drive, wear leveling would distribute this data over the entire drive -- it would take 267 hours to fill the device once. Even on a multi-cell flash device, at this rate it would take no less than 150 years to burnout all the bits on the SSD. Single-cell drives are capable of ten times as many writes.

I don't care how long it takes for ALL the bits to burn out; I want to know how long it takes for one bit to burn out.
One bad bit = corruption = possible lost data. So it may be 150 years of use before I lose ALL my data, but that's not something I care about, since I may lose the most important stuff first.

What a pointless use of mathematics to inflate numbers in a positive manner.

I'm sure they'll have error correction algorithms to ensure data isn't lost; you won't have to worry about that. Most likely flash hard drives would just gradually decrease in capacity over time, similar to floppy disks (when you ran ScanDisk on them). Believe it or not, CDs devote a sizable share of their physical bits to error correction!
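
The kind of error correction the poster is describing can be illustrated with a toy Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword. Real flash controllers use much stronger codes over much larger pages; this is only an illustrative sketch of the principle:

```python
# Toy single-error-correcting Hamming(7,4) code: the same idea, at a
# much larger scale, lets a flash controller repair an occasional
# flipped cell before it becomes visible data corruption.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (p1, p2, d1, p3, d2, d3, d4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit; 0 = clean
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                   # simulate one worn-out cell flipping
assert decode(code) == word    # corrected transparently
```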

How effective is the wear leveling, and what (if any) are its limitations?

Some data gets updated a lot. For example, consider data in a database table: how will wear leveling work for it? If I need to update a bit in a table, does the drive write the whole memory block or page to another location? That is the only way (I think) to achieve 100% wear leveling, but then what about performance?

What about the FAT? Will wear leveling apply to the FAT as well? If it does not, then the location where the FAT is written will wear out much faster. If it does, it will be interesting to see how.

These and many other issues have solutions already, but I don't know of any that makes flash memory suitable for everyday use.

Generally it is not easy to update data in flash memory; you can see that in its everyday usage. We can copy new data and the wear-leveling mechanism will work nicely and spread the data throughout the medium, but updates?

Note: Flash memories CANNOT update a single bit or page in place. To update something you need to erase it first. This may not sound like a problem until you consider that you can only erase a complete block at a time. So to update one bit you need to write a whole page (e.g. 512 bytes), and to do that you need to erase a whole block (e.g. 16KB). Current computer applications are not designed with these limitations in mind. Does Samsung claim that their hard drive will be OK no matter the application?
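
The erase-before-write constraint described above can be sketched as a toy model, using the page and block sizes from the comment (512-byte pages, 16KB blocks). The class and function names are made up for illustration; real controllers avoid most of this cost by remapping updated pages elsewhere rather than rewriting blocks in place:

```python
# Simplified model of NAND's erase-before-write constraint:
# changing one byte in place forces a read-modify-erase-rewrite
# of the page's entire block.

PAGE = 512             # page size, as in the comment above
PAGES_PER_BLOCK = 32   # 32 x 512 bytes = one 16KB erase block

class Block:
    def __init__(self):
        self.pages = [bytearray([0xFF] * PAGE) for _ in range(PAGES_PER_BLOCK)]
        self.erase_count = 0

    def erase(self):
        """Erase resets every cell in the block to 1 (0xFF bytes)."""
        self.pages = [bytearray([0xFF] * PAGE) for _ in range(PAGES_PER_BLOCK)]
        self.erase_count += 1

    def program_page(self, i, data):
        # Programming can only clear bits (1 -> 0); setting a bit back
        # to 1 requires erasing the whole block first.
        assert all(old & new == new for old, new in zip(self.pages[i], data)), \
            "cannot set a 0 bit back to 1 without a block erase"
        self.pages[i][:] = data

def update_byte(block, page_idx, offset, value):
    """Change one byte in place: read the block, erase it, rewrite every page."""
    snapshot = [bytes(p) for p in block.pages]
    block.erase()
    for i, page in enumerate(snapshot):
        data = bytearray(page)
        if i == page_idx:
            data[offset] = value
        block.program_page(i, data)

blk = Block()
blk.program_page(0, bytearray(b"A" * PAGE))
update_byte(blk, 0, 0, ord("B"))   # a one-byte change...
print(blk.erase_count)             # ...cost one full block erase
```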

I doubt the location of files matters. Wear leveling should be transparent to the user: just because the flash memory has reassigned cells to different addresses doesn't mean the address used to access them has changed.

Additionally, think of all the defragmenting that won't need to be done!

Let's say you are saving an existing file and have made significant changes. To spread out the wear, the drive writes the file to a different physical location. But to prevent keeping a duplicate copy of the same file, wouldn't it have to erase (or at least flag) the original, thus dramatically lowering the 'spread ratio'?

It doesn't write and rewrite the same bits when an address is used more; it spreads the writing around. Theoretically, no single bit will burn out before any of the others, since they are all written to an equal number of times.

No, in theory and in practice some bits will burn out first. It is not a hypothetical model, unless that model concedes the issues of being a physical device prone to imperfections.

It would be silly to think that right at 1 million cycles (or whichever applies) every bit just fails in succession.

Further, wear leveling cannot come close to what everyone thinks it can do. It would have to keep track of all those millions of writes to do so; instead it will inevitably wear out the non-static areas (files) far faster.

quote: On the desktop this is relatively unimportant, but on a notebook the hard drive accounts for 10% of the total power draw. Cutting this number down to less than 1% means an extra 12 minutes of usage on my 2 hour battery.

Your math is correct, but not how you applied it. To do a proper calculation, you'd have to know how many watts were being saved and how many watt-hours the battery held. Then you could do a proper comparison.
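
The proper calculation described here might look like this. The drive's 10% share of power draw and the SSD's 200mW access draw echo the article, but the battery capacity and total system draw are illustrative assumptions, chosen to reproduce the article's 2-hour runtime:

```python
# Battery-life comparison the way the poster describes: start from
# watt-hours of capacity and watts of draw, then compare runtimes.
# battery_wh and system_w are assumed figures, not from the article.

battery_wh = 50.0   # assumed battery capacity (watt-hours)
system_w = 25.0     # assumed total system draw -> the article's 2-hour runtime
hdd_w = 2.5         # hard drive at 10% of total draw, per the article
ssd_w = 0.2         # NAND read/write draw quoted in the article

baseline_min = battery_wh / system_w * 60
with_ssd_min = battery_wh / (system_w - hdd_w + ssd_w) * 60
print(f"baseline: {baseline_min:.0f} min, with SSD: {with_ssd_min:.0f} min, "
      f"gain: {with_ssd_min - baseline_min:.0f} min")
```

With these assumed numbers the gain works out to roughly 12 minutes, in line with the article's figure.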

Well, what I was really saying was that if you have a laptop that only gives you 2 hours of battery life, you should leave it at home. If you need a real laptop meant to be used portably, a ThinkPad X60 with up to 8 hours of battery life is what frequent travelers should carry.

You are a bit of a loser, aren't you? Going on the internet making empty posts in which you throw some technical terms around without saying anything, then knocking someone else's post just so you can get your kicks feeling superior to others.
Takes all kinds, I guess.

It's not my fault the internet is full of uninformed people saying stupid stuff. It's not my fault I have 15 years of experience in the tech business. It's not my obligation to explain those terms; if you're curious, as anyone with the need to learn would be, use Google or some other search engine to find out what they mean. Educate yourself, fair enough?

Oh dear. I was hoping that my post, although annoying to some because it veered even further off topic, would point out that your behavior was transparent to us and might help you change before you got too entrenched and became one of those types, but if you indeed have "15 years experience" then I guess I'm too late.

If you admit you are not willing to contribute to the conversation with anything meaningful or explanatory, then it seems I was correct in concluding the post was as childish as it seemed.

You need not specify what you are and are not, incidentally; it's all too obvious.

LOL. Now talking as "we". Well, the other "we" thinks you're just being a prick.

What's transparent to me is your poor judgment. What's your problem anyway? "Anything meaningful" was pointing out other factors missing from an uninformed previous analysis. By your standard, I either just shut up and contribute nothing, or make an "explanatory" post. Well, I pointed out that you can do a simple search and find out what it means. Why didn't you do that? That's beyond my comprehension.

Again, I'm not anyone's private teacher, and even if I was, I would tell you to find out its definition first, which means... yeah, you got it at last: use a search engine. But to the point, I guess I'm used to RTFM before I open my mouth. Whether you're able to find one or not, that's your problem. A hint: it's easy, and believe me, it doesn't hurt...

Bit dense too?
Excuse the superfluous question mark.
I have no trouble understanding anything, and no need to look up your hinted-at astounding and incredible technical knowledge; I have trouble with sad people posting empty posts and treating other posters like crap to feel superior.
I think YOU have trouble understanding justified criticism; perhaps you need to look some things up on Google.

You say to me "I'm not your piñata clown" right after trying to bash the candy out of a previous poster. Guess who's being a 'prick' in that scenario?

But enough of this nonsense; this is a tech comment area, not a place for personal fights.
Let's stop this (after your unavoidable last post, which will follow this one, in which you try to maintain your feeling of superiority and innocence).

quote: Oh dear, I was hoping that my post, although annoying to some because it veered off topic even more, would point out that your behavior was transparent to us and it might help you change before you got too entrenched and became one of those types, but if you indeed have "15 years experience" then I guess I'm too late.

Dude, what the hell are you talking about? I don't get your beef. Are you going off the deep end because the other poster made a perfectly fine post bringing up some interesting points that were on topic for the article at hand?

Who gives a shit if he didn't define them? I read and re-read his post... not a whiff of him trying to offer anything but some insight on the matter.

All his post was a bashing of the previous poster; he just threw in some technical terms to be a wiseass, but did not so much as hint at what he was talking about in relation to them, because the only purpose of the technical terms was to be a vehicle to bash the poster, not to inform.
The only purpose of his post was to be unpleasant to the poster he replied to.
I fail to see how that is not abundantly clear. Normally I would let the thing pass, because I see types like him making posts like that all the time, but this time I thought it would do some good to point out a few things for a change.

And I'm not off the deep end at all, I think; I'm calm and relaxed.

But thanks for your honest post, for the attempt to understand, and for re-reading his post for that purpose and giving me that respect; I appreciate it.

Thanks. But anyway, I don't give a flying F about ratings. They don't feed me and besides, I learn nothing from them, just the fact that the internet is sometimes full of mindless kids who don't have the proper attitude and, most of the time, lack knowledge about what they're talking about. Yet they demand to be taught? Someone must be kidding here...

This is a lie. I worked for another flash memory company, and the write/erase cycle life of SLC is nowhere close to 100,000 cycles. What I know is around 10,000 erase cycles, and you have to program one page at a time. For the cheaper MLC, which most likely is what the cheap, large drives will be made of, the figure is closer to 3,000 cycles. Samsung in particular has horrible quality in their MLC, and I don't think they had this under control as of mid-March this year.

The 100,000-cycle figure is the read-disturb rating, meaning that reading 100,000 times will corrupt the data. Over-programming caused by write/erase cycling is non-recoverable damage, like a bad sector on a hard drive, except it is now a bad page/block instead.

Wear leveling, however, does work very well, and I wouldn't worry about that. Basically, the LBA we read/write to is not directly mapped to one physical block, so the blocks do get worn out evenly (i.e., in a linked-list fashion).
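
The indirection the poster describes, where logical addresses are remapped rather than tied to fixed physical blocks, can be sketched as a toy flash translation layer. This is a simplification (real controllers also handle garbage collection and bad blocks), and the class name is made up for illustration:

```python
import heapq

# Toy flash translation layer: each logical write is steered to the
# least-worn free physical block, so erases spread evenly even if the
# host hammers a single logical address.

class WearLeveler:
    def __init__(self, n_blocks):
        self.mapping = {}                              # logical -> physical block
        self.wear = [0] * n_blocks                     # erase count per physical block
        self.free = [(0, p) for p in range(n_blocks)]  # (wear, block) min-heap
        heapq.heapify(self.free)

    def write(self, logical):
        old = self.mapping.get(logical)
        if old is not None:                    # retire the old copy:
            self.wear[old] += 1                # erasing it costs one cycle
            heapq.heappush(self.free, (self.wear[old], old))
        _, phys = heapq.heappop(self.free)     # pick the least-worn free block
        self.mapping[logical] = phys
        return phys

ftl = WearLeveler(8)
for _ in range(100):
    ftl.write(0)          # hammer one logical address 100 times
print(ftl.wear)           # erases end up spread across all 8 physical blocks
```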

What a pointless post! Once a bit is close to burning out, it would be marked as bad and not used. Since the bits wear out evenly, it would take over 100 years before the first bit wore out. Read the article first next time!

The chances of using the same drive for even 10 years are pretty slim. Something better will come along way before even one bit fails.

Plus, if there were an unexpected failure, the error correction capabilities would move the data to a good location and mark the bad bit as unusable.