Of course it is lifelike; Intel recommends running the Intel SSD Toolbox once every week, which works out to roughly every 10-140GB depending on your usage pattern.
TRIM will not clean all "deleted" data even if you are running an OS that supports TRIM (this is why running the Toolbox is recommended).

Not sure how many GB are written between each run of "wiper", but it is surely much more than the recommended 10-140GB, a lot more: an average of 40MiB/s equals a week's worth of writes in just 1 hour.
So if he cleans the drive once per day, that would equal (a minimum of) 24 weeks' worth of writes, so he can in fact run "wiper" once every hour and still be within the norm.
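
For reference, a quick sanity check of those numbers (the 40 MiB/s rate and the 10-140GB weekly figure are taken from above; the rest is plain arithmetic):

```python
# Back-of-the-envelope check: how many "weeks" of normal writes happen per hour/day
# at a sustained 40 MiB/s, measured against Intel's 10-140GB-per-week guideline.
avg_rate_mib_s = 40                         # sustained write rate quoted above
weekly_recommendation_gb = 140              # upper end of Intel's 10-140GB range

gib_per_hour = avg_rate_mib_s * 3600 / 1024            # ~140.6 GiB written per hour
gb_per_hour = gib_per_hour * 1.073741824                # ~151 GB per hour

weeks_per_day = gb_per_hour * 24 / weekly_recommendation_gb   # ~26, i.e. at least the "24 weeks" above

print(f"{gib_per_hour:.0f} GiB/hour ≈ {weeks_per_day:.0f} weeks of 'normal' writes per day")
```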

I've been running it once or twice a day for the past 2 days, so about once every ~3.5 TiB.

I like your thought about having the App run it every "n" hours.
I think every 4 hours would be perfect; that would also fall into the 70-75 loop area and around the 1 TiB area.

Average speed reported by Anvil's app has been steady at about 113MB/s.

The other unknown SMART attribute 235 is still at 99/99/2, just as it was when the SSD was fresh out of the box.

64GB Samsung 470

sa178 raw increase of 4 this time -- still have not seen an increase by 1 (or any other odd number). It looks like the normalized value decreases by 1 for about a 10 increase in the raw value. So the raw value should be roughly 1000 when the normalized value reaches 1. If that is 1000 erase blocks of 512KiB each, then we are looking at roughly 512MiB of reallocated flash, or roughly 0.8% of 64GiB on board. Seems plausible to reserve a little less than 1% of flash for reallocated blocks.
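
To make that extrapolation explicit (the 10-raw-per-normalized-point slope and the 512KiB erase-block size are the assumptions already stated above; the interpretation of sa178 itself is still a guess):

```python
# Extrapolating Samsung 470 SMART attribute 178 under the assumptions above.
raw_per_normalized_point = 10          # normalized drops by 1 for roughly every 10 raw
normalized_start, normalized_floor = 100, 1
erase_block_kib = 512                  # assumed erase-block size
drive_gib = 64

raw_at_floor = (normalized_start - normalized_floor) * raw_per_normalized_point   # ~990, i.e. "roughly 1000"
reallocated_mib = raw_at_floor * erase_block_kib / 1024                            # ~495 MiB
fraction_of_drive = reallocated_mib / (drive_gib * 1024)                           # ~0.8%

print(f"raw ≈ {raw_at_floor}: ≈ {reallocated_mib:.0f} MiB reallocated, "
      f"{fraction_of_drive:.1%} of the {drive_gib} GiB on board")
```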

I might be able to add the wiper process to the loop, maybe an option to let it run every n loops.

You should let it run for a day or two just to see how bad it gets.

Originally Posted by Anvil

Of course it is lifelike; Intel recommends running the Intel SSD Toolbox once every week, which works out to roughly every 10-140GB depending on your usage pattern.
TRIM will not clean all "deleted" data even if you are running an OS that supports TRIM (this is why running the Toolbox is recommended).

Not sure how many GB are written between each run of "wiper", but it is surely much more than the recommended 10-140GB, a lot more: an average of 40MiB/s equals a week's worth of writes in just 1 hour.
So if he cleans the drive once per day, that would equal (a minimum of) 24 weeks' worth of writes, so he can in fact run "wiper" once every hour and still be within the norm.

Anything new on this feature/special build?
If I could run it every 15 loops (around 1 hour), that would be every ~220 GB.
It would also probably keep my average around ~60-65 MB/s and ~5 TiB/day, I think.
That would be around a 50% increase per day.

All bar charts are sorted by their respective equivalent of Writes So Far. The SF-1200 MWI Exhaustion expectation is overly optimistic because I am still running compression tests (and 0-fill was redone after it went below 100 MWI)... it will probably always be optimistic until the MWI actually depletes. The SF-1200 observed WA is also optimistic, for the same reasons.
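
For context, "observed WA" here is just total NAND writes divided by total host writes; which SMART counters expose those totals differs per controller, so the numbers below are made-up placeholders rather than readings from this SF-1200:

```python
# Generic write-amplification estimate: flash (NAND) writes / host writes.
# Placeholder values only; on a SandForce drive fed compressible data the ratio
# can drop below 1.0, which is why the observed WA above looks optimistic.
host_writes_gib = 10_000      # hypothetical total writes issued by the host
nand_writes_gib = 6_500       # hypothetical total writes actually committed to flash

write_amplification = nand_writes_gib / host_writes_gib
print(f"observed WA ≈ {write_amplification:.2f}")
```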

Not really, but I have found a wiper that does not require any user input/interaction; could you try downloading and running it just to see if it works? Link
For this to work flawlessly, wiper would have to exit once the operation is done; could you check that as well?

Yep, it works without any input and exits when done.

Also, just had my system crash a few minutes ago while the app was running (sent you a PM).
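
If that holds up, folding it into the loop could look roughly like the sketch below; the executable name, the interval, and the workload function are placeholders, not Anvil's actual code:

```python
import subprocess

WIPER_EXE = r"wiper.exe"       # placeholder path to the no-interaction wiper build
RUN_EVERY_N_LOOPS = 15         # placeholder interval; would be a user option

def run_one_loop():
    """Placeholder for one pass of the endurance-test workload."""
    ...

loop = 0
while True:                    # endurance test runs until stopped
    run_one_loop()
    loop += 1
    if loop % RUN_EVERY_N_LOOPS == 0:
        # The wiper takes no input and exits on its own once it is done,
        # so a blocking call is enough; the test loop resumes when it returns.
        subprocess.run([WIPER_EXE], check=True)
```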

Most probably at least half of them will hit the 1PB limit if there is no controller failure. The problem is finding out how usable they are after that mark. That would mean keeping them in storage for 3-6 months and checking whether the data is still there. But in doing so, we would only learn that the drives are capable of at least 1PB of writes, without knowing any upper limit.
And if I were to bet, my money in the 1PB race would be on the Samsung model... I only need to wait another 78-80 days (less time, less chance of total failure).

I would not bet on it. I am becoming increasingly convinced that the write amplification on the Samsung is indeed about 5. And I guess (not convinced, but I suspect) that SMART attribute 178 normalized is the percentage of blocks left to be used for reallocation. When the pool of blocks available for reallocation is exhausted, the SSD should start having problems fast. If my guess for sa178 is correct, then the Samsung is on a countdown to write death: 72, 71, 70, 69....

I noticed the death countdown... but I also suspect this is just the pool of blocks for reallocation, not the total spare ones. Once this pool is empty, it will probably start using blocks from over-provisioning, which are far more numerous, probably at the cost of slower write speed. So if the bad block count does not start to increase exponentially, then we will have a winner.

That would be great if it uses OP blocks. Of course, I could be wrong about what sa178 is counting, and maybe when it gets to one, all of the OP blocks will be gone, too.