
Fred gave a very sensible answer to the question of how much open space does a drive really need. That’s the Fred Langa I remember – very sensible, and spot on.

@woody, it would be great if you could join forces with Fred. He may prefer to be part of an already-established blog, rather than having to run his own blog. The combination of Susan Bradley, Fred Langa, and you, all at AskWoody, would be really awesome.

There is another “type” of space to allocate specifically for SSDs if you want maximum long-term health and consistent high performance from them. Referred to as “spare area,” it’s similar to an older technique we called short stroking.

The idea is that by creating your partitions at a maximum of 90% (some prefer only 75%) of the usable space on the SSD, the onboard SSD controller will use the unformatted, unallocated space to load-balance cell write usage. Over time, this greatly reduces the number of write cycles within the partitioned segments of the solid state drive.
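As a rough sketch of that arithmetic (the helper name and the 500 GB figure are just illustrative, not from any vendor tool), the 90%/75% split works out like this:

```python
# Illustrative helper: how much of an SSD to partition when reserving
# spare area for wear leveling, per the rule of thumb above.

def spare_area_plan(usable_bytes: int, partition_fraction: float = 0.90) -> dict:
    """Return partitioned vs. reserved byte counts for a given fraction."""
    if not 0 < partition_fraction <= 1:
        raise ValueError("partition_fraction must be in (0, 1]")
    partitioned = int(usable_bytes * partition_fraction)
    reserved = usable_bytes - partitioned
    return {"partitioned": partitioned, "reserved": reserved}

# Example: a 500 GB (5 x 10^11 byte) SSD partitioned at 90%
plan = spare_area_plan(500_000_000_000, 0.90)
print(plan["partitioned"])  # 450000000000
print(plan["reserved"])     # 50000000000
```

The reserved bytes are simply never partitioned; the controller is free to use them for wear leveling.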

I’ve personally been using this method for years now, and my SSDs do indeed last longer and perform better after a few years of use than identical drives with all their space allocated to partitions.

They do work very differently. There are some key things to remember when you use SSDs long term. One is: do not defragment them. Windows 10 takes care of this by offering a TRIM optimize cycle instead of a defrag cycle in the Defragment and Optimize Drives tool, but older Windows versions will happily defrag on manual demand (which is bad).

Write amplification is the other big issue, and it relates to TRIM (which Windows 10 takes care of, as do 8.1 and a fully patched Windows 7). See the article I linked above for much more technical info.
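For readers new to the term: write amplification is just the ratio of bytes physically written to flash versus bytes the host actually asked to write. A minimal illustration (the numbers are invented):

```python
def write_amplification(nand_bytes_written: int, host_bytes_written: int) -> float:
    """Write amplification factor: physical NAND writes per host write.
    A factor of 1.0 is ideal; higher means extra wear on the cells."""
    return nand_bytes_written / host_bytes_written

# e.g. the controller wrote 150 GB of flash to service 100 GB of host writes
waf = write_amplification(150_000_000_000, 100_000_000_000)
print(waf)  # 1.5
```

TRIM helps keep this factor low by telling the controller which blocks no longer hold live data, so it can erase them ahead of time instead of shuffling stale data around.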


Great blog post. As long as your backup storage can complete a backup without failure due to lack of space or lack of time, you’re good to nearly fill your backup drives.

Regarding SSDs… There’s a difference between necessary and optimal.

Ideally with flash storage, you want to overprovision – the term often used to describe buying more storage than you need (or will need) and leaving it unused.

It actually is possible to overprovision enough space that you can keep your volumes less than full long-term, even though it’s some kind of law of computing that data accumulates to fill all available space. You have to bring yourself to spend money for something you don’t need today. That’s okay; planning for the future is often a Good Thing.

In my own case, with my primary engineering workstation, back in 2012 I bought more high quality SSD storage than I needed – nearly 2 TB for drive C: – all I could have with MBR partitioning. Who could EVER need that much, right? SSDs were SERIOUSLY expensive back then, but I figured – rightly so – that the array would serve my needs in an ongoing way and I could really focus on my work. Starting out with more than 1.5 TB free back then, in all the time since I have had no worry about adding drives, no worry about repartitioning or reinstalling, no worry about something failing because the storage filled up…

Best decision I ever made.

I have saved the $$$ in time and worry MANY times over, and I have been productive continuously without glitch or interruption, able to meet the needs of my customers without having to take time out to deal with an underprovisioned system. To this day drive C: is still going strong and performance is nominal (which is to say, still fantastic). And I still won’t need to worry about expanding the volume and/or restoring a backup and/or reinstalling Windows in the foreseeable future.

Moral of the story: Buy more flash storage than you need now, and consider getting more than what you’ll need even in the near future. Your system will pay you back in savings of time and worry and will maintain performance when you stay away from using up your space.

I allocated all the available space. A RAID partition made from 4 x 480 GB drives shows up as 1.74 TB in Windows.

Two factors made this a reasonable approach:

1. I had no intention to fill the drives to anywhere near capacity, though I wanted the option temporarily in a pinch.

2. Internally, the controller for these drives compresses data. Even if it couldn’t compress any data at all, it would still be able to hold the advertised 480 GB. In reality, since much of the data actually stored is compressible to fit into fewer flash blocks than the file system thinks it’s using, the drive itself will maintain extra flash storage to use for its internal operations. By my reckoning this will maintain somewhere north of 20% free flash blocks internally, even without externally leaving extra space.

It must have been a good strategy – I’ve measured no slowdown in all the 6 years I’ve been using them heavily; they still easily deliver 1500+ megabytes/second throughput for sequential operations, which is what I saw when I first brought them online.

-Noel


I personally would always allocate all the available space. I can choose not to use it and get the same benefit as if I had not allocated it, but I also have the option of using it if I need to. If I am doing some kind of task that requires a lot of drive space, I am more likely to be able to finish it if I have all the drive space possible. I can worry about moving files and cleaning things up to get my scratch space back afterwards.

In addition to the compression Noel described (I hadn’t actually been aware of it being used as a means of assisting wear leveling, but it makes perfect sense), SSDs also come with some level of factory overprovisioning. It’s typically 7%, which takes advantage of the difference in the way capacities are specified for hard drives vs. memory.

A 500 GB SSD has 500,000,000,000 bytes of storage available to the user, the same as a 500 GB traditional hard disk. Both are rated in decimal gigabytes, where giga- means a billion (1000 x 1000 x 1000) of something. Memory, though, is rated in binary gigabytes (each one being 1024 x 1024 x 1024 bytes), which (because of the predictable confusion that arose from having one word mean two different things) are now known as gibibytes (giga binary bytes). That includes the NAND cells the SSD maker used to build the drive.

Aside: Windows continues to use the terms “megabyte,” “gigabyte,” and “terabyte” to describe hard disk capacities that are actually shown in mebibytes, gibibytes, and tebibytes. RAM is still sold by the “gigabyte,” even though it’s really gibibytes they’re talking about. The confusion continues!
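The unit gap is easy to check for yourself. A quick sketch of the conversion, using figures from this thread:

```python
GIB = 1024 ** 3   # one gibibyte, in bytes
TIB = 1024 ** 4   # one tebibyte, in bytes

# A "500 GB" drive: 500 x 10^9 bytes expressed in binary gibibytes
print(round(500e9 / GIB, 1))        # 465.7 -- what Windows labels "465 GB"

# The 4 x 480 GB RAID above: 1.92 x 10^12 bytes in binary tebibytes
print(round(4 * 480e9 / TIB, 3))    # 1.746 -- displayed as "1.74 TB"

# The GiB/GB gap is also where the typical ~7% factory OP comes from
print(round((GIB / 1e9 - 1) * 100, 2))  # 7.37
```

The last line shows why a drive built from a round number of binary gigabytes of NAND, but sold by the decimal gigabyte, has roughly 7% of its cells left over for the controller.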

Sometimes it is possible to increase the overprovisioning (with my Samsung SSDs, for example, I could do so using the Samsung Magician software), but I prefer to have the storage available if I need it, and to get the needed free space by managing what I keep on the drive.


@noel-carboni This compression at the controller level is a concept I see mentioned for the first time in your post. I was not aware of this implementation, but even so, the manufacturers’ utilities for the SSD drives I use all tend to recommend empty raw space of 10% at the end of the disk for wear-levelling purposes. This technique of leaving empty non-partitioned space is what is called over-provisioning (OP). The utilities in which I have seen this recommendation are from Samsung, Intel, and Micron.


This is a common misconception, and it applies only to 512-byte-sector disks.
Current large disks use 4 kB sectors, which are fully usable by all supported Windows versions (Windows 7 requires a patch to enable this functionality) and allow much larger MBR partitions – by my calculations, up to 16 TB.

I own 2 Western Digital backup disks, one of 3 TB and another of 4 TB, each formatted as a single partition with an MBR-style partition table.
Unfortunately, Windows’ own Disk Management does not allow this formatting style, but Western Digital’s tool does, and Windows is more than happy to use such larger partitions.


So is compression a good idea on SSDs? I’d been avoiding it because I’d heard it can cause higher load. Is it only good on a RAID setup, or would it be good to use on my SSD? (I already keep mine only 45% full by keeping stuff not essential to fast running on a regular hard drive.)


The compression Noel was referring to takes place within the drive itself and is done in real time as data is written to or read from the drive. The OS isn’t aware of it, and it doesn’t require any CPU time or incur any performance penalty, as the drive electronics are optimized to handle on-the-fly compression (and/or encryption, depending on the drive) as fast as the drive is capable of transferring data.

The kind of compression you’re talking about, if I read you correctly, is OS-level compression, which does increase CPU load and generally slows down the system.


Depending on the specific drive controller, your SSD may compress data internally. As Ascaris mentioned, that’s what I was referring to. You may have to do some research on your particular brand and model (mine are OCZ Vertex 3 models circa 2012; controllers have evolved a lot since then).

If your SSD’s internal controller DOES do compression itself – and it’s a good guess that it does – you want to avoid feeding it compressed data from the OS. Not only does the OS take longer to compress it, but because of the way compression algorithms work, the worst thing you can do is give the SSD controller already-compressed data. It makes for more work, slower operation, and results that can potentially be larger than the source, leading to some very strange failures.

TL;DR – Don’t enable file system compression if you’re using an SSD.

-Noel


There is no need to keep 45% free IMO, that’s just a waste of a good drive. All you need is sufficient space for TRIM and internal optimization to run, plus some spare to share the writes. (SSD life is much greater than the FUD would have you believe.)
My SSD is at least 70% full and is still a rocket.

Indeed! I’ve flogged my Samsung 840 Pro (128GB) over the years I’ve had it, including a lot of heavy browsing sessions with Firefox at the peak of its memory hogginess, with hundreds of tabs open and only 8GB in the PC. It was slamming the page file, to the point that when I used Sysinternals’ RamMap, it showed the system repurposing all but the top two priority levels. If it had not been a SSD, the paging would have been so painfully slow that I would have given up, but it was tolerable, and restarting FF would have meant it marked all my restored tabs as unread, which would have messed up my place. (More recently, Mozilla was kind enough to remove the unread tab status completely, which just destroys the way I have been browsing for many years. FF just keeps getting worse and worse… it’s why I use Waterfox.)

I did add another 8GB to the PC, which reduced the paging significantly, but it does still page frequently (which is good; it delays or prevents the emergency-level memory pressure that causes thrashing. Paging is not the same as thrashing!). Not to mention, of course, all of the writes that came along with the drive being a frequently used host device for Windows 7, then Windows 10, Windows 7 again, Windows 7+Linux, just Windows 7 again, and finally Windows 8.1, plus a lot of full-partition restorations from various backup programs (no, I don’t have that many disasters… I am just pretty quick to go to my backups, even sometimes just after uninstalling a program I am not sure got fully removed).

All of this write activity was done on a small drive, which means fewer NAND cells to spread the load with wear leveling. By now, I have racked up 25,656 power-on hours for the drive. That’s 26 days shy of three years of power-on time, and the little bugger has never missed a beat. Meanwhile, the Seagate 500GB 2.5″ HDD that was in my laptop died unceremoniously (and without any warning… it went from SMART healthy to dead instantly) at right around that number of hours (and the same age in calendar years… just short of 5 years).

The Samsung 840 Pro series will reach 0 in SMART wear-leveling count (the rated life of the NAND cells) at 2,000 drive writes, or 256 TB in the case of my 128GB drive. In the 4 years, 10 months I have had it, it’s used 31% of that number of drive writes, or ~79 TB.

At this rate, I only have ~11 years left before the drive reaches its SMART (rated) end of life, for a total life of ~15.5 years. That should certainly satisfy nearly anyone that they’d gotten their money’s worth from the drive. But is that the end, really? When TechReport tested the 840 Pro 256GB until it expired, it managed to rack up 2.4 petabytes before it gave up and died. That would imply 1.2 petabytes for my drive with half the NAND cells, or just a bit shy of five times its SMART rated life of 256 TB.

If I kept using it at the rate I have since the start, that would represent a lifespan of 73 years before it hit 1.2 petabytes.Â I will be over 100 years old then, if I am still around!

Of course, not all drives are 840 Pros, and there is no guarantee mine will last nearly five times its SMART rated life before it dies like the TR drive did. On the other hand, newer drives are coming with longer and longer rated lives in drive writes per device (the lower-tier 860 Evo series handily beats the 840 Pro of the same size in rated life), and prices are falling, so if I spent what I spent on the 128GB drive now, I’d get a considerably larger (and thus again longer-lasting) drive than I did almost 5 years ago. The 1TB 860 Evo in my Swift laptop would probably have a write-based service life in the hundreds of years on this basis, and that’s so far into the realm of absurdity that the drive can basically be considered to have unlimited service life in terms of drive writes on a consumer machine. All bets are off if you put it in a data center!
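The endurance arithmetic in the last few paragraphs can be sketched in a few lines (the function name is illustrative; the inputs are the figures quoted above for the 128GB 840 Pro):

```python
def endurance_outlook(capacity_gb: float, rated_drive_writes: float,
                      fraction_used: float, years_in_service: float):
    """Estimate rated write capacity, data written so far, the annual
    write rate, and years remaining until the SMART rated endurance."""
    rated_tb = capacity_gb * rated_drive_writes / 1000   # total rated writes, TB
    written_tb = rated_tb * fraction_used                # consumed so far, TB
    tb_per_year = written_tb / years_in_service          # observed write rate
    years_to_rated = (rated_tb - written_tb) / tb_per_year
    return rated_tb, written_tb, tb_per_year, years_to_rated

# 128 GB drive, 2000 rated drive writes, 31% used in 4 years 10 months
rated, written, rate, remaining = endurance_outlook(128, 2000, 0.31, 4 + 10/12)
print(round(rated))         # 256 (TB rated)
print(round(written, 1))    # 79.4 (TB written so far)
print(round(remaining, 1))  # 10.8 (years to rated end of life)
print(round(1200 / rate))   # 73 (years to the 1.2 PB TechReport-implied limit)
```

The outputs line up with the ~79 TB, ~11 years, and ~73 years quoted in the post.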


There are also a few other things that affect the need for “spare” capacity.

A read-only drive doesn’t need any, of course.

However, anything where a file may be edited and saved back by some application – especially one you don’t know (down to the exact version) beforehand – should have at least the file’s size free. A great number of applications don’t resave in place even when you’d expect them to, and may stop doing so after a bugfix update even if they do right now. Instead, they write the data as a new file with a temporary name, rename the old one to something else, rename the new one to the original name, and then delete the old version. That makes saving a lot more robust against errors (power loss halfway through the write being the well-known case), at a cost of storage space and even fragmentation.

And if you need to do that repeatedly and for multiple files possibly simultaneously… or even worse, files may change size…

Then if you do anything more complicated with the disk, well… some of the “advanced” file system types like ZFS (on Solaris, or nowadays Linux and BSD too) start to degrade in performance and even capability much earlier than the traditional kinds when space becomes tight; no idea yet what happens with ReFS.
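The safe-save dance described above can be sketched in Python. This is a simplified variant (it renames the new file directly over the old one rather than keeping the old copy around; the function name is illustrative):

```python
import os
import tempfile

def atomic_save(path: str, data: bytes) -> None:
    """Write to a temp file in the same directory, then rename it over
    the target, so a crash mid-write never leaves a half-written file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # force the new bytes out to the disk
        os.replace(tmp, path)      # atomic rename on POSIX and modern Windows
    except BaseException:
        os.unlink(tmp)             # clean up the temp file on any failure
        raise

atomic_save("demo.txt", b"saved safely")
```

Note that while the temp file exists alongside the original, the disk briefly holds two full copies of the data – which is exactly why this pattern needs at least the file’s size in free space.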

After following the links to the posts from @netdef (thanks a lot 🙂) and reading another one from Seagate, which comes up in any Google search for OP or write amplification, these are my conclusions in relation to over-provisioning:
– According to Seagate, there is an inherent minimum 7% OP in any SSD from the manufacturer’s design, taking advantage of a technicality and exploiting the difference between GB and GiB (or MB and MiB). This is manufacturer “cheating” for a good purpose.
– According to Anand, most testing shows that for consumer SSDs, a figure around 25% OP, created either with the manufacturer’s utility or by not partitioning the last part of the disk, gives reasonable 4k random-write performance, which in fact is all that matters for write performance in most OSes. A number close to 50% OP gives even better 4k random-write performance, but the benefit is not worth the extra reserved storage.
– According to Anand, the professional enterprise-grade SSD tested has 29% OP reserved from manufacturing, and as such further OP does not significantly enhance random-write performance. This roughly matches the numbers observed for home-grade SSDs, where 25% user-reserved plus 7% manufacturer-reserved gives 32% reserved space for optimal random-write performance.
– According to my observations, the manufacturers consider a user-reserved space of 10% optimal for home users, which in addition to the hidden 7% (per Seagate) results in 17% total OP. This might be OK as a compromise for the price-conscious user, but it is not necessarily the best OP value overall. It is an understandable manufacturer policy though, as it would be difficult to sell any user on the requirement to leave 30% of the space they paid for unused for data.
– According to Seagate, there is dynamic OP, which takes advantage of TRIM and uses the space freed by TRIM as virtual OP. As I understand it, even if this is an excellent technical solution, it is still not as effective as raw space reserved exclusively for OP.

My take from all above:

Set aside 25% of the disk space and do not partition it.

An acceptable solution with no negative impact after installing the OS is to use the Shrink option in Windows Disk Management to create the 25% empty space (raw, non-partitioned) at the end of the SSD. However, in that case it may take some time, a couple of weeks maybe, until the newly created raw space is fully utilised for OP if it was previously written with data.

Any value of OP less than 25% is still useful, even if not optimal. A good value to start with may be the one generally recommended by the SSD manufacturers, 10%, but that should be considered an absolute minimum and a baseline for tuning.
Avoid reserving no space and relying exclusively on TRIM. This is especially true for Windows 7, which is older technology and not always consistent in its TRIM implementation, though it can be made to work with extra care.
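The arithmetic behind those percentages can be sketched as follows (the function name and the 500 GB example are illustrative; this follows the additive back-of-envelope framing used above, not any vendor’s exact math):

```python
GIB = 1024 ** 3  # one gibibyte, in bytes

def op_plan(advertised_gb: float, user_reserve_frac: float = 0.25):
    """Estimate total OP: the factory OP hidden in the GB/GiB gap,
    plus a user-reserved raw (unpartitioned) slice of the disk."""
    factory_pct = (GIB / 1e9 - 1) * 100                       # the hidden ~7.37%
    reserve_bytes = int(advertised_gb * 1e9 * user_reserve_frac)
    total_pct = factory_pct + user_reserve_frac * 100
    return factory_pct, reserve_bytes, total_pct

factory, reserve, total = op_plan(500, 0.25)
print(round(factory, 2))  # 7.37
print(reserve)            # 125000000000 bytes to leave unpartitioned
print(round(total, 1))    # 32.4 -- close to the "32%" figure above
```

For a 500 GB drive, leaving 25% raw means shrinking the partition by 125 GB, and the combined OP lands right around the optimum discussed in the thread.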



I have enabled “over provisioning” via Samsung Magician on my 1TB Samsung SSD (M.2 NVMe). This resulted in 93GB of reserved disk space. Magician also manages the TRIM function. In addition, I use SSD Keeper (by Condusiv Technologies). SSD Keeper “prevents small, fragmented writes and reads that rob the performance and longevity of SSD drives, and ensures minimal input/output (I/O) operations for every file and boosts performance further by caching reads from available, unused DRAM that is otherwise sitting idle and going to waste.” (I hope this quote from the manufacturer’s web site is not regarded as “advertising.” If it is, Admin please delete.)


Samsung Magician and equivalent utilities from other drive manufacturers do the same in relation to using a RAM Disk for caching. Not sure if the utility presented here does anything special in addition to the manufacturers’ own software.

Additional software that intercepts/modifies writes is an additional point of failure that I would not trust – Windows has its moments, and machines crash as it is. The small (potential) gain in SSD life is outweighed by the risk to my data, IMO.