Storage enthusiasts sitting on the edge of your seats for revolutionary SSD announcements out of this year's CES can rest easy: nothing mind-blowing is on the way that you need to worry about. Ars sat down today with both LSI/Sandforce and Samsung, and while both had plenty of neat stuff to talk about regarding their current product lines, neither had anything earthshaking to share. Like the headline says, this isn't necessarily a bad thing: now is an excellent time to buy an SSD if you don't already have one, and the ever-present enthusiast fear of buying something that will soon be obsolete doesn't really apply to solid state disks.

Kent Smith, the senior director of product marketing for the Flash Components Division of LSI (which makes the enthusiast-friendly Sandforce SSD controllers featured in many consumer SSDs), noted that business has been quite brisk, with Sandforce controllers appearing in many, many different OEMs' drives. Smith compared the situation today to the hard disk drive market twenty years ago, with a plethora of manufacturers producing only moderately differentiated disks. But there are only two real HDD OEMs today: Seagate and Western Digital (or three, depending on how one counts Toshiba). Anyone can use Sandforce controllers in their disks, but the sheer number of OEMs making SSDs is unsustainable, and some collapse and consolidation is inevitable.

The reasons why tie in with NAND flash's much-discussed longevity issues. SSD prices are low and will get lower, but the vast majority of SSD makers aren't actually manufacturing their own NAND; rather, they source it from one of several manufacturers. NAND's increasing density and complexity brings integration issues: as NAND gets smaller and more cantankerous, it becomes more difficult for an OEM that sources both NAND and controllers to meld the two into a solid drive. The OEMs that can dedicate the most time to that integration will produce fast and power-efficient devices, while the rest will be squeezed out of the market by falling prices and shrinking margins.

The message from Stephen Weinger, Director of NAND flash marketing for Samsung, was similar. Samsung is in a different market position from LSI: as a vertically integrated manufacturer, Samsung makes "the whole widget," from controller to NAND, rather than just the controller. However, it sees the same outlook for the SSD manufacturer space as LSI does: the number of companies in the space is bound to become considerably smaller. Weinger noted that in 2012 OCZ missed its second-quarter earnings targets due in part to supply issues with sourced NAND, and he indicated that the anticipated growth in SSD business in 2013 will likely make these constraints more widespread among other SSD OEMs.

Samsung is one of the only SSD OEMs to sell a triple-level cell (TLC) SSD, the Samsung 840. As we discussed in our huge feature on how SSDs work, TLC SSDs store three bits of data per NAND transistor, requiring the ability to discretely read and write eight different voltage levels. The nature of NAND cells means that as they get smaller and denser, they become more susceptible to wear from repeated erasures and rewrites. A TLC NAND transistor, with its eight discrete voltage states, has a much shorter usable life than an SLC or MLC transistor, because reading from or writing to it requires much more precision, and residual charge damages it more quickly.

Triple-level cell NAND must store eight discrete voltage levels in order to represent three bits.
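The arithmetic behind the caption is simple, and a quick sketch (ours, not anything from Samsung or LSI) shows why the step up to TLC is so punishing: a cell holding n bits must distinguish 2^n charge states, so every extra bit doubles the number of voltage levels the controller has to resolve.

```python
# Bits per cell vs. the number of discrete charge states the controller
# must tell apart. Each extra bit per cell doubles the level count.
for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    levels = 2 ** bits_per_cell
    print(f"{name}: {bits_per_cell} bit(s)/cell -> {levels} voltage levels")
```

Squeezing eight levels into the same voltage window that SLC splits in two is what makes TLC reads and writes so much less tolerant of wear.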

The Samsung 840 gets somewhat of a bad rap in comments on Ars when it comes up, but Weinger noted that in Samsung's own internal tests, its TLC NAND came out with about thirteen years of usable life when tasked with the write equivalent of about 40GB per day. This is possible because of the advanced tricks that modern SSD controllers (like Samsung's and LSI/Sandforce's) use to overcome write amplification. At a high level, these usually include deduplication (writing repeated data only once) and compression, but both companies we talked to jealously guarded their controllers' "secret sauce." Samsung didn't have any post-TLC tech on display, and noted that the inevitable transistor-shrinking march of Moore's Law will likely continue relatively unabated in NAND flash, asserting that the company is capable of keeping up the pace.
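Weinger's thirteen-year figure is roughly what back-of-the-envelope endurance math predicts. Here's a sketch using illustrative numbers we've picked ourselves (a 250GB drive, ~1,000 program/erase cycles for TLC NAND, and a write amplification factor of 1.25), not Samsung's actual internals:

```python
# Rough SSD lifetime estimate. All figures below are illustrative
# assumptions for the sketch, not Samsung's published specs.
capacity_gb = 250                # drive capacity
pe_cycles = 1000                 # P/E cycles a TLC cell might survive
host_writes_per_day_gb = 40      # the workload from the article
write_amplification = 1.25       # NAND writes per host write, after the
                                 # controller's compression/dedup tricks

total_nand_writes_gb = capacity_gb * pe_cycles
nand_writes_per_day_gb = host_writes_per_day_gb * write_amplification
lifetime_years = total_nand_writes_gb / nand_writes_per_day_gb / 365

print(f"Estimated usable life: ~{lifetime_years:.1f} years")  # ~13.7 years
```

Lower the effective write amplification (or stretch the P/E budget) and the estimate grows accordingly; that lever is precisely where the controller vendors' "secret sauce" lives.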

A fast-moving market and hotly in-demand products mean that companies in the SSD space, at least for the next few months, will be focused on polishing and refining: reducing power consumption, handling write amplification, and stepping down to the next NAND process size. We'll see cheaper MLC and TLC SSDs from major OEMs, but it will be some time yet before anyone announces more exotic replacements like consumer-targeted memristor drives. If you've been holding off on buying an SSD because you were afraid of something newer coming out, now is as good a time as any to pull the trigger.

Lee Hutchinson
Lee is the Senior Technology Editor at Ars and oversees gadget, automotive, IT, and culture content. He also knows stuff about enterprise storage, security, and manned space flight. Lee is based in Houston, TX. Email: lee.hutchinson@arstechnica.com // Twitter: @Lee_Ars

My fear isn't obsolescence, my fear is outright failure. I don't want to buy a third SSD that will suffer catastrophic Sandforce controller failures like the first two. My anecdotal experience is a 100% SSD failure rate... why would I buy a third unless I'm damned sure I'm not rinsing and repeating?

Not really surprising. Current-gen SSDs are basically limited by the SATA 3 bus in a lot of cases. Really small random writes are really the only place where there could be significant improvements made, but in consumer workloads those types of writes aren't commonly seen at a really high level for an extended duration, so even that would probably just be a benchmark thing rather than anything noticeable in general usage.
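For reference, the SATA 3 ceiling the commenter mentions works out like this (a quick sketch using the nominal line rate and 8b/10b encoding overhead):

```python
# SATA 3's "6 Gb/s" is the raw line rate. With 8b/10b encoding, only
# 8 of every 10 bits on the wire are data, leaving roughly 600 MB/s
# usable -- which fast SSDs already approach in sequential transfers.
line_rate_bits_per_s = 6e9
encoding_efficiency = 8 / 10     # 8b/10b: 8 data bits per 10 line bits
usable_bytes_per_s = line_rate_bits_per_s * encoding_efficiency / 8

print(f"~{usable_bytes_per_s / 1e6:.0f} MB/s")  # ~600 MB/s
```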

Working in IT has made me not trust single HDs. I just don't use single HDs anywhere, including my home computers. RAID1 everywhere. We've had SSDs go bad at work. They fail just like mechanical HDs fail.

But laptops? And now tablets? There isn't room for two disks.

Can we have two SSDs (two NANDs two controllers) in the same enclosure? They can keep the same form factor, with a single SATA connector. (If the disk controller fails then you are screwed anyway unless you are running a server with redundant controllers. Besides a "controller failure" in consumer level systems pretty much means a motherboard failure.)

This could be the next cool thing. Although I've been around long enough to realize the manufacturers aren't in it to make cool stuff. They are in it to make money. Consumer-level solution which competes with enterprise grade redundant storage may not be in their best interest. Also, end users. They want jiggabytes, not some RAID1 redundancy thingie. But I want it.

Edit: hmmm brainstorming. An SSD enclosure in the current form factor. Inside it, two removable "cartridges" for the two redundant SSD modules. One fails, remove it and pop in another; no data loss. Still not sure how/where to place the RAID1 logic and not make that a single point of failure. Or if it is, would a MTBF be significantly less than a plain SSD? This idea would just be a stopgap measure SSD makers could produce until the OEMs begin to incorporate two SSDs by design into laptops and tablets. I think it's important enough to do. The disk IS the system. That's where everything IS. Anything else could fail and you would not lose data.

My fear isn't obsolescence, my fear is outright failure. I don't want to buy a third SSD that will suffer catastrophic Sandforce controller failures like the first two. My anecdotal experience is a 100% SSD failure rate... why would I buy a third unless I'm damned sure I'm not rinsing and repeating?

I'm a little with you. I'd love to spruce up my aging laptop with an SSD, but NAND makes me nervous. But, no new surprises is evidence of a stable marketplace. In a stable marketplace, competition pushes pricing down. In a stable marketplace with low prices, more people buy units. When more people buy units, there's more data available on which drives to buy. So, this equates to good news to review whores like me.

Working in IT has made me not trust single HDs. I just don't use single HDs anywhere, including my home computers. RAID1 everywhere. We've had SSDs go bad at work. They fail just as a mechanical HD fail.

But laptops? And now tablets? There isn't room for two disks.

Can we have two SSDs (two NANDs two controllers) in the same enclosure? They can keep the same form factor, with a single SATA connector. (If the disk controller fails then you are screwed anyway unless you are running a server with redundant controllers. Besides a "controller failure" in consumer level systems pretty much means a motherboard failure.)

This could be the next cool thing. Although I've been around long enough to realize the manufacturers aren't in it to make cool stuff. They are in it to make money. Consumer-level solution which competes with enterprise grade redundant storage may not be in their best interest. Also, end users. They want jiggabytes, not some RAID1 redundancy thingie. But I want it.

I have heard a wise man say RAID should not be a replacement for regular backups.

My fear isn't obsolescence, my fear is outright failure. I don't want to buy a third SSD that will suffer catastrophic Sandforce controller failures like the first two. My anecdotal experience is a 100% SSD failure rate... why would I buy a third unless I'm damned sure I'm not rinsing and repeating?

So far I've had no problems, with maybe 5 SSDs. The problems I have had have been with AHCI, and my netbook and several AMD systems not supporting it. The netbook was no faster with the SSD, so I put the 160GB Hitachi drive back in. So I ended up with 2 unused SSDs. I plan to get a USB3 enclosure and see how fast it is as external storage. With my luck that won't be any faster either, but I plan to try.

FYI, I've got OCZ (yeah, I know) and a nice 480GB Sandisk (which seems to be at the same price today).

On the 2 computers where they do work, they are much faster, but I still have some hangs with the drive light on after bootup that I cannot explain.

Working in IT has made me not trust single HDs. I just don't use single HDs anywhere, including my home computers. RAID1 everywhere. We've had SSDs go bad at work. They fail just as a mechanical HD fail.

But laptops? And now tablets? There isn't room for two disks.

Can we have two SSDs (two NANDs two controllers) in the same enclosure? They can keep the same form factor, with a single SATA connector. (If the disk controller fails then you are screwed anyway unless you are running a server with redundant controllers. Besides a "controller failure" in consumer level systems pretty much means a motherboard failure.)

This could be the next cool thing. Although I've been around long enough to realize the manufacturers aren't in it to make cool stuff. They are in it to make money. Consumer-level solution which competes with enterprise grade redundant storage may not be in their best interest. Also, end users. They want jiggabytes, not some RAID1 redundancy thingie. But I want it.

I have heard a wise man say RAID should not be a replacement for regular backups.

HAH! No argument from me. You are talking to a guy who's got his personal files and e-mail replicating in real time between two different systems (my home server and the laptop), and then the server is backed up to a USB disk that is then kept in my office. For a while I ran two servers and used DFS-R and a DFS root, but that was a bit of overkill.

My fear isn't obsolescence, my fear is outright failure. I don't want to buy a third SSD that will suffer catastrophic Sandforce controller failures like the first two. My anecdotal experience is a 100% SSD failure rate... why would I buy a third unless I'm damned sure I'm not rinsing and repeating?

So far I've had no problems, with maybe 5 SSDs. The problems I have had have been with AHCI, and my netbook and several AMD systems not supporting it. The netbook was no faster with the SSD, so I put the 160GB Hitachi drive back in. So I ended up with 2 unused SSDs. I plan to get a USB3 enclosure and see how fast it is as external storage. With my luck that won't be any faster either, but I plan to try.

FYI, I've got OCZ (yeah, I know) and a nice 480GB Sandisk (which seems to be at the same price today).

On the 2 computers where they do work, they are much faster, but I still have some hangs with the drive light on after bootup that I cannot explain.

So, basically, you have no problems beyond them simply not working in several systems, performing like crap in one system, and causing random hangs on boot (do you use suspend/hibernate?) on the systems they function in... What?

Personally, I've had no trouble with the Samsung SSD in my system. Only been using it for about 18 months at this point, so there's still lots of time for it to start acting up, though... But, so far, no hangs, no mass-data-loss, no random slowdowns, no corruption, etc.

For those who are paranoid about data loss, can it happen, even on an SSD? Absolutely! Anything can fail. I've had a couple of older Sandisk drives fail, but that's all, out of ~60+ drives. Compare that to mechanical drives: out of 60, I would have had ~10 failures over the same two-year period (based upon experience, trust me).

But my experience? SSDs are WAY WAY better than mechanical drives. They're just unlikely to fail in the -same ways-.

Think about it, what have you had fail more often, a stick of RAM or a mechanical HDD?
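Putting the (admittedly anecdotal) fleet numbers above into annualized-failure-rate terms makes the gap concrete:

```python
# Annualized failure rate (AFR) from the anecdotal figures in the
# comment above: ~2 SSD failures and ~10 HDD failures out of 60
# drives over 2 years. Illustrative only -- not vendor data.
drives, years = 60, 2
ssd_failures, hdd_failures = 2, 10

ssd_afr = ssd_failures / (drives * years)
hdd_afr = hdd_failures / (drives * years)
print(f"SSD AFR ~{ssd_afr:.1%}, HDD AFR ~{hdd_afr:.1%}")  # ~1.7% vs ~8.3%
```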

Speaking of RAID, does anyone know the state of TRIM+RAID support for linux? The best I've been able to find is this, which isn't exactly conclusive. I've heard arguments from both directions (trim is necessary, trim is useless). Consequently, I'm currently running user partitions on raid1 platters and the OS on a single SSD.

My fear isn't obsolescence, my fear is outright failure. I don't want to buy a third SSD that will suffer catastrophic Sandforce controller failures like the first two. My anecdotal experience is a 100% SSD failure rate... why would I buy a third unless I'm damned sure I'm not rinsing and repeating?

If you had Sandforce failures, then why not buy a Samsung or one of the other alternate controllers? Are you sure you don't have a motherboard problem?

Speaking of RAID, does anyone know the state of TRIM+RAID support for linux? The best I've been able to find is this, which isn't exactly conclusive. I've heard arguments from both directions (trim is necessary, trim is useless). Consequently, I'm currently running user partitions on raid1 platters and the OS on a single SSD.

Ask Shaohua Li - I believe he already submitted patches LONG ago supporting RAID 0/1/10 and that 4/5/6 would come later - this was March 2012, so most likely it's already accepted, though you'd need to use a current kernel.

LVM and ext4 have supported discard for a while, so the only real issue was the md driver in the kernel - looks like it's most likely working now. I don't personally use RAID for my SSD systems, and my servers are managed by infrastructure people and I believe still have 15K spinning disks.

"but Weinger noted that in Samsung's own internal, their TLC NAND came out with about thirteen years of usable life when tasked with the write equivalent of about 40GB per day."

OK, there seems to be a word missing here (after "internal").

But I'm not complaining. Lee, anytime you hear concrete numbers like this, please please please share them! There is a lot of SSD distrust out there, and this kind of hard, quantified data goes a long way toward helping people understand the real issues rather than the invented fears which crop up in an information vacuum.

Unfortunately, compression/deduplication etc. are only applicable when the data isn't encrypted. When it is encrypted, there will be no compression available, nor any dupes. The worst-performing SSD I ever had was an mSATA stick with a Sandforce controller in my laptop - it performed worse even than the HDD in the same device! The write latencies were large because the controller tried to compress every write and failed.
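It's easy to demonstrate why a compressing controller gets no traction on encrypted data: well-encrypted bytes are statistically indistinguishable from random bytes, and random bytes don't compress. A sketch, with zlib standing in for the controller's compressor:

```python
import os
import zlib

repetitive = b"A" * 4096        # highly compressible, like many real files
random_like = os.urandom(4096)  # stand-in for well-encrypted ciphertext

print(len(zlib.compress(repetitive)))   # tiny: compresses extremely well
print(len(zlib.compress(random_like)))  # slightly LARGER than 4096
```

The second result is the commenter's latency problem in miniature: the controller spends effort compressing every write and gets nothing back for it.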

Are there any affordable bootable PCIe SSDs yet? I've done some digging but found only OCZ, and I think I'll be sticking with Samsung, SanDisk or Intel who are apparently only offering enterprise solutions at enterprise prices.

Are there any affordable bootable PCIe SSDs yet? I've done some digging but found only OCZ, and I think I'll be sticking with Samsung, SanDisk or Intel who are apparently only offering enterprise solutions at enterprise prices.

I think Anandtech has done some reviews of Intel-based PCIe SSD cards. And yes, enterprise solutions at enterprise prices.

Are there any affordable bootable PCIe SSDs yet? I've done some digging but found only OCZ, and I think I'll be sticking with Samsung, SanDisk or Intel who are apparently only offering enterprise solutions at enterprise prices.

OWC, who has some of the best SSDs around, makes one that even boots Mac Pros (many don't). Works in PCs too. Pretty affordable.

mSATA says otherwise... same form factor as the tiny WiFi/WLAN cards you find in notebooks. Current capacity around 250GB with prices under $1/GB (e.g., the Crucial m4 which is the boot drive in the 15" notebook on which I type... which also has room for 2x 2.5" drives besides). Mushkin has announced a 480GB for around $500 (still using MLC).

What I would like to see is a stronger separation between the flash memory chips and the controllers (have the flash memory chips on daughter cards, like RAM), so that you can upgrade your drive without having to replace the costly controller, and if the controller fails you can still recover your data by attaching the flash memory to a different controller.

I've been toying with the idea of getting a SSD for my laptop. Even though it's an older model with only SATA I I reckon I'd still get a nice speed bump when compared to my current 5400rpm HDD. Still waiting for the prices to come down a bit more though (want a 512GB drive).

I'm a huge fan of SSD drives. First one I got was a Crucial RealSSD 80GB that went in my old C2D MacMini. It's still trucking along with the machine's new owner, not a single problem despite 24/7 use.

I've had a few OCZ drives, a heap of Crucial ones, a few Samsungs, and a Kingston.

The OCZ drive in general were junk (Agility3, lots of failures on those), but the Crucials (M4 128GB, 64GB, mSATA M4 64GB and 32GB) have all been excellent, though needing a few too many firmware updates - that said the updates are simple, unlike OCZ. The Kingston SSDNow V+200 120GB is a good drive, though it's only been used for booting my server off, so I can't really give an objective opinion of it, other than Server 2008R2 boots from cold in less than 20 seconds on a Celeron G530.

Finally, I think my favourite is the Samsung 830. I have a Dell OEM one in my Vostro V131, and the OH has a 256GB 830 in her Lenovo. Totally transformed otherwise mediocre machines.

TL;DR - I hope OCZ is one of the manufacturers who die, because out of all the SSDs I've used, theirs have been junk.

Working in IT has made me not trust single HDs. I just don't use single HDs anywhere, including my home computers. RAID1 everywhere. We've had SSDs go bad at work. They fail just as a mechanical HD fail.

Personally, I would only use RAID1 for convenience. I'm sure you know you should never use RAID1 as a (single) backup, and if you have your important data backed up, the only impact of HD failure is a time investment. Just a little tip to ease your mind and save you some money.

I have, by the way, never had a single HD failure in my entire life. Threads like these help remind me to check my backups, because my luck has to run out some time.

Also, end users. They want jiggabytes, not some RAID1 redundancy thingie. But I want it.

Edit: hmmm brainstorming. An SSD enclosure in the current form factor. Inside it, two removable "cartridges" for the two redundant SSD modules. One fails, remove it and pop in another; no data loss. Still not sure how/where to place the RAID1 logic and not make that a single point of failure. Or if it is, would a MTBF be significantly less than a plain SSD? This idea would just be a stopgap measure SSD makers could produce until the OEMs begin to incorporate two SSDs by design into laptops and tablets. I think it's important enough to do. The disk IS the system. That's where everything IS. Anything else could fail and you would not lose data.

Raid1 isn't for backup, it's for uptime and 24/7 uninterrupted operation. A backup is having multiple copies of your data to recover from when the HDDs inevitably fail.

For my laptop, I have an SSD in the 2.5-inch internal bay, and an optical bay adapter that houses a traditional HDD for backups. I use Microsoft SyncToy to back up the local laptop backup to a server share, and the contents of the server share are backed up to an external drive that's rotated with another external drive kept in a secure offsite location. That, sir, is a real backup. I would also argue my RAID0 array with SSD cache on my desktop is more reliable against data loss than your RAID1 array is, simply because it's backed up in the same manner as the laptop I mentioned.

Dilbert wrote:

Working in IT has made me not trust single HDs. I just don't use single HDs anywhere, including my home computers. RAID1 everywhere. We've had SSDs go bad at work. They fail just as a mechanical HD fail.

But laptops? And now tablets? There isn't room for two disks.

Can we have two SSDs (two NANDs two controllers) in the same enclosure? They can keep the same form factor, with a single SATA connector. (If the disk controller fails then you are screwed anyway unless you are running a server with redundant controllers. Besides a "controller failure" in consumer level systems pretty much means a motherboard failure.)

This could be the next cool thing. Although I've been around long enough to realize the manufacturers aren't in it to make cool stuff. They are in it to make money. Consumer-level solution which competes with enterprise grade redundant storage may not be in their best interest. Also, end users. They want jiggabytes, not some RAID1 redundancy thingie. But I want it.

Edit: hmmm brainstorming. An SSD enclosure in the current form factor. Inside it, two removable "cartridges" for the two redundant SSD modules. One fails, remove it and pop in another; no data loss. Still not sure how/where to place the RAID1 logic and not make that a single point of failure. Or if it is, would a MTBF be significantly less than a plain SSD? This idea would just be a stopgap measure SSD makers could produce until the OEMs begin to incorporate two SSDs by design into laptops and tablets. I think it's important enough to do. The disk IS the system. That's where everything IS. Anything else could fail and you would not lose data.

Since you're posting upstream, I'll follow suit. Nowhere in Dilbert's post did he say anything about using RAID1 as a backup. SSD failure = complete data loss and interrupted operation. RAID provides protection against both long enough to replace the failed component. It's not a backup, it's resilience and prevention of the inconvenience of needing to restore from said backups.

Golgatha wrote:

Raid1 isn't for backup, it's for uptime and 24/7 uninterrupted operation. A backup is having multiple copies of your data to recover from when the HDDs inevitably fail.

For my laptop, I have a SSD in the 2.5in internal bay, and an optical bay adapter that houses a traditional HDD for backups. I use Microsoft SyncToy to backup the local laptop backup to a server share, and the contents of the server share are backed up to an external drive that's rotated with another external drive kept in a secure offsite location. That sir is a real backup. I would also argue my RAID0 array with SSD cache on my desktop is more reliable against data loss than your RAID1 array is, simply because it's backed up in the same manner as the laptop I mentioned.

Dilbert wrote:

Working in IT has made me not trust single HDs. I just don't use single HDs anywhere, including my home computers. RAID1 everywhere. We've had SSDs go bad at work. They fail just as a mechanical HD fail.

But laptops? And now tablets? There isn't room for two disks.

Can we have two SSDs (two NANDs two controllers) in the same enclosure? They can keep the same form factor, with a single SATA connector. (If the disk controller fails then you are screwed anyway unless you are running a server with redundant controllers. Besides a "controller failure" in consumer level systems pretty much means a motherboard failure.)

This could be the next cool thing. Although I've been around long enough to realize the manufacturers aren't in it to make cool stuff. They are in it to make money. Consumer-level solution which competes with enterprise grade redundant storage may not be in their best interest. Also, end users. They want jiggabytes, not some RAID1 redundancy thingie. But I want it.

Edit: hmmm brainstorming. An SSD enclosure in the current form factor. Inside it, two removable "cartridges" for the two redundant SSD modules. One fails, remove it and pop in another; no data loss. Still not sure how/where to place the RAID1 logic and not make that a single point of failure. Or if it is, would a MTBF be significantly less than a plain SSD? This idea would just be a stopgap measure SSD makers could produce until the OEMs begin to incorporate two SSDs by design into laptops and tablets. I think it's important enough to do. The disk IS the system. That's where everything IS. Anything else could fail and you would not lose data.

What I would like to see is a stronger separation between the flash memory chips and the controllers(Have the flash memory chips on daughter cards like ram). So that you can upgrade your drive without having to replace the costly controller, and if the controller fails you can still recover your data by attaching the flash memory to a different controller.

You'd like that, would you? Did you read the fine article?

One of my take-aways was that integration between controller and flash was going to get more complicated to the point that OEMs were going to start dropping out because of the challenges of getting things right from a performance, reliability and power consumption perspective.

The chances of end-users doing their own mix-and match seems to be declining, not improving. SSDs are less like RAM DIMMs and more like spinning disks in this regard. Integrating platters, heads and controllers isn't something done by end-users. Some people may do component level work for recovery, but they are specialists.

My question is: will we see SSDs in the next generation of game consoles? It would be a huge improvement in both performance and heat management. Microsoft is almost there with the 360, but it only has 4GB of flash memory in the latest version; Sony still uses laptop spinny drives in the PS3.

Another idea: TVs with a built-in DVR in the form of an mSATA SSD. Even the skinniest TVs have room for this inside, and even smartphone-class processors have enough power to handle the recording and tuning.