Heavy peta-ing; fondling SSDs in a bad way

The Tech Report's attempt to test SSDs to destruction has hit the 500TB mark, with three two-bit MLC NAND drives and one three-bit TLC model all trying to survive. They are using raw SMART data to keep track of sectors reallocated from the spare area to replace flash that has died from repeated use. So far the Samsung 840, with its three-bit TLC, has suffered the most sector loss, but like the other drives it has not shown much performance degradation. There have been a few other bumps in the road during the tests; check out the full story here.
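Tracking wear this way boils down to watching the raw value of SMART attribute 5 (Reallocated_Sector_Ct) climb over time. A minimal sketch of pulling that value out of `smartctl -A` output (the sample text and its values here are hypothetical, not from the article):

```python
# Sketch: extract SMART attribute 5 (Reallocated_Sector_Ct) from the
# attribute table printed by `smartctl -A /dev/sdX`. In practice you would
# capture that command's output; a hypothetical sample is inlined here.
SAMPLE_SMART_OUTPUT = """\
  5 Reallocated_Sector_Ct   0x0033   099   099   010    Pre-fail  Always       -       42
177 Wear_Leveling_Count     0x0013   095   095   000    Pre-fail  Always       -       163
"""

def reallocated_sectors(smart_text):
    """Return the raw value of attribute 5, or None if it is not listed."""
    for line in smart_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":
            return int(fields[-1])  # RAW_VALUE is the last column
    return None

print(reallocated_sectors(SAMPLE_SMART_OUTPUT))  # 42 for the sample above
```

Logging this number on a schedule is enough to see the spare area being consumed, which is exactly the trend the endurance test is charting.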

"Our SSD Endurance Experiment has reached the half-petabyte mark, so it's time for another checkup."

I'm unsure if I would call them shit. But the implementation in the 840 showed some issues, where the drive failed checksums on files that had been stored on it earlier. (Damn, this article is good.)

For a regular user who writes 5GB per day (if that), TLC drives are more than enough and will last for years, so calling them shit doesn't fit. (At 5GB per day, even reaching 100TB of writes would take about 54 years, more than enough for a normal desktop user.)

MLC is more than sufficient for hardcore users writing 50+GB per day. The article even mentions 140+GB per day for nearly 10 years (just over 500TB of writes).
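The back-of-the-envelope math in these two comments is easy to check:

```python
# Years of writing at a given daily rate before hitting a total-writes figure.
# Uses decimal units (1TB = 1000GB), as drive vendors count.
def years_to_reach(total_writes_tb, gb_per_day):
    return total_writes_tb * 1000 / gb_per_day / 365

# Light desktop use: 5GB/day until 100TB of total writes
print(round(years_to_reach(100, 5), 1))    # ~54.8 years

# Heavy use: 140GB/day until 500TB of total writes
print(round(years_to_reach(500, 140), 1))  # ~9.8 years
```

Both figures line up with the numbers quoted above.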

TLC flash is horrible, and if the controllers are unable to manage the high failure rate without corrupting data, then it is bad for any home user.

I have seen SSDs that had reallocated sectors after only a few weeks of use (some of the earlier SSDs), but they did not really have any corruption issues; the controller was simply able to swap the sectors out before they became completely unreliable. (Then again, back in those days you could kill an SSD by running SpinRite on it at level 4.)

TLC flash needs to go, it is not worth the loss of reliability. Total lifetime writes means nothing if the storage becomes unreliable early on.

If an SSD or any drive is corrupting data, then you can never trust that drive again.

In an enterprise setting, a bad sector = replace the drive, as corruption is more detrimental than the cost of replacing the drive.

Same for home users: the data on the drive is worth more than the drive itself. Imagine typing up a 30-page report, then finding the file corrupted when you go to open it again. (I personally would value the paper more than the cost of the SSD in that situation.)

Personally, I don't place much value on my local data. Anything like a paper is stored in my Google Drive, and my local data is mostly stuff that would be fairly easy to replace; the stuff that isn't is copied across multiple machines. Anything you can't afford to lose should be backed up, period.

The issue is that most users cannot back up non-stop; usually you have update intervals, e.g., daily. But even then, what if your data is written to a bad flash cell? You end up with corrupt data that is copied to the cloud backup in its corrupt form. The overall issue still stands: if the local storage is unreliable, then the backups are unreliable, as the data can be corrupted before it is ever backed up.

While no storage is 100% reliable, it is not worth using something as unreliable as TLC flash for anything important.