Hi, I've started using script files instead of DriveSpace 6 because of a few issues that you might be able to address. (I use it to save space on my SSD, and with Windows 10 we have LZX now!)

DriveSpace 6 should mark directories as compressed, so any new files get normal NTFS compression (since LZX doesn't seem to be applied automatically).

Because LZX isn't automatic, I have to re-run my script any time I install a new program, etc. DriveSpace 6 is not feasible because it re-reads and re-compresses files that are already in LZX! This is not ideal on an SSD, where you want to limit writes, and it also adds a lot of time to compression.

How come it doesn't skip over files that are already LZX compressed? In my script I can check this by running compact and parsing its output (notice the x, X, C and l flags!).

Even with Windows 10 compression methods, folders are still marked for automatic LZNT1 (traditional NTFS) compression.

And files that were compressed before are not re-compressed if the compression method matches.

You can also move in any direction with DriveSpace 6 - from a lower Windows 10 compression level to a higher one (ex: XPRESS 4KB to XPRESS 16KB), or back (ex: LZX to XPRESS 8KB) - the latter of which is not supported by compact.exe.

Additionally, DriveSpace 6 can transition your disk files in and out of LZNT1 (traditional NTFS) and any of the new Windows 10 methods. compact.exe has issues downgrading compression, though it mostly works when upgrading.

While the security issue does not affect compact.exe when it uses the Windows 10 compression methods (curiously, LZNT1/traditional NTFS is still affected), DriveSpace 6 still manages to substantially outperform compact.exe, as I have documented here:

I just did a short run on my small HDD temp partition: first I copied the files over (an ~8 GB game), then ran compact /c /s /i /f /a /exe:lzx *. It went down to about 5 GB - LZX is great!

I updated to the latest ZipMagic, then tried DriveSpace 6. I used a program to watch the reads and writes on the disk. It did about 5 GB of reads (which is expected) and 5 GB of writes, for only 810 KB of gained space (maybe a file or two had been skipped by compact).

Here's what I use in a batch file to compress all files (not folders, since LZX removes NTFS compression from folders, which would leave new files uncompressed) that are either NTFS compressed, XPRESS, or uncompressed. It is a bit slow because of findstr, which oddly doesn't use much CPU but is still sluggish. It also doesn't work on filenames with spaces, but those are uncommon among compressible DLL/EXE/game files. Linux has much better scripting, lol! If you want to use it directly on the command line, change each %% into a single % (e.g. %%G -> %G).
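In case it helps others, here is the skip-or-compress decision from my batch file sketched in Python instead. The flag letters and their meanings ('-', 'C', 'x'/'X', 'l') are my own shorthand for the states compact reports, and compress_to_lzx just shells out to compact.exe - treat this as a sketch of the idea, not the exact script:

```python
import subprocess

# Shorthand for per-file compression state (my own mapping, for illustration):
#   '-' = uncompressed, 'C' = LZNT1 (classic NTFS),
#   'x'/'X' = an XPRESS variant, 'l' = LZX
def needs_lzx(flag):
    """Skip files that are already LZX; everything else gets (re)compressed."""
    return flag != "l"

def compress_to_lzx(path):
    # Windows only: /c = compress, /exe:lzx = use the LZX algorithm.
    subprocess.run(["compact", "/c", "/exe:lzx", path], check=True)

# Example: states as you would collect them from compact's per-file listing
states = {"game.exe": "C", "engine.dll": "x", "level1.pak": "l", "notes.txt": "-"}
to_compress = sorted(f for f, s in states.items() if needs_lzx(s))
# level1.pak is skipped - it is already LZX, so no extra SSD writes for it
```

The whole point is that already-LZX files never get touched, so a second run causes almost no writes.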

Something is probably wrong with the program you use to check disk reads and writes. Here's a test you can try:

Set DriveSpace 6 to maximum threads - for example 8 threads on a 4-core (2 without hyper-threading) CPU - on an already compressed disk. Notice how quickly DriveSpace 6 finishes, compressing only the new files added/unlocked since your last run.

Normally this would crush performance with LZX compression, because LZX is very CPU intensive; on a system with 4 cores (including HT), DriveSpace 6 defaults to only 2 CPU cores and still reaches about 100% CPU utilization with LZX. So definitely don't do this when you're compressing your disk for the first time, but it does demonstrate that DriveSpace 6 isn't re-compressing anything.

compact.exe lost an additional 17 GB, corresponding to 13% worse compression than DriveSpace 6, for reasons unknown, as previously discussed in this thread.

In addition to being single-threaded - which will at least halve performance compared to DriveSpace 6, and worse on CPUs with more than 2 real cores - it loses significant space on a disk already compressed with DriveSpace 6, or fails to free up that additional space if the disk was not previously compressed with DriveSpace 6. Again, this could be due to any number of the optimizations below:

From wasting gigabytes of space, to being very slow (single-threaded), to disabling automatic folder NTFS (LZNT1) compression: by your own stated priorities, it doesn't make sense to keep using compact.exe over DriveSpace 6.

It can be hard for customers to hear that their research is wrong, but again, I remind you that:

1) Yes, DriveSpace 6 does set folders to use NTFS (LZNT1) compression, even when you are compressing your disk with the new Windows 10 compression methods.

2) No, DriveSpace 6 does NOT re-compress and re-write to disk all pre-compressed data when that data was compressed with the exact same compression algorithm as before (proof as described above in this thread).

I'll also repeat what I posted earlier:

3) If you need to absolutely maximize your compression, DoubleSpace 2 substantially outperforms DriveSpace 6, even when DriveSpace 6 is using the new Windows 10 compression methods, as highlighted here:

I used HD Sentinel, which logs bytes written and read. I ran DS6 again and it showed the same thing: writes and reads close to the compressed size.

Now, on my laptop, which doesn't have HD Sentinel, I used HWiNFO64, which shows read and write rates. I ran DS6 a first time, then again. Both times the read and write rates were roughly equal (approx. 6 MB/s). How does DS6 detect whether a file is already LZX compressed or not? Maybe I have something blocking its logging?

Of course, on an HDD it would actually be slower if you enable more cores, because the HDD cannot handle the simultaneous workload.

However, as your own results on the SSD illustrate, your measurement software is flawed. LZX is very CPU intensive, so enabling more cores could not possibly make it faster if real LZX compression were happening. The fact that it does get faster with LZX shows that no real compression/writing is going on.

I would contact the makers of the measurement software you are using and work with them, if there is any interest, to get their disk I/O reporting bugs fixed.

I disabled System Protection on the drive and made sure no other program was open besides the monitoring tools and DS6. Clean boot, nothing else running except what's listed below.

I just ran it again on the system drive C: (got about 1 GB back, as new programs have been installed since the last compression run). Sometimes CPU was at 100%, but mostly around 25%.

OS: Win10 x64 Pro. PNY 480 GB SSD as the system drive.

Idle, before starting DS6, Win10 Task Manager shows approx. avg. read 100 KB/s, write 50 KB/s. Now I run DriveSpace 6 on the same drive, same 4 threads, LZX. So far no extra space freed (as I ran it just 5 minutes earlier). Low: read 2.8 MB/s, write 2.5 MB/s; high: read 25.7 MB/s, write 25.1 MB/s; and so on, up and down. CPU load averages around 25%, with rare tiny spikes to 70-80% (quad-core i5-2500K), so it's not compressing LZX. Space saved on the 2nd run: 2.54 MB. So it should have written at most what, 10 MB????

SMART as reported by the drive (both gsmartcontrol and HD Sentinel agree): before the second run the SSD showed 5983 writes and 8603 reads. After running through approx. 40 GB of C:, it shows 6028 writes and 8649 reads. Subtract them and you get 45 GB of writes and 46 GB of reads. This matches the other program's totals. So, are you saying that my SSD is lying too? And just like my last test with the small HDD temp partition, it's showing the compressed size, not the actual size of the files. So, what is going on???
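Spelled out, the counter subtraction (assuming the SMART totals are in GB, as both tools report them):

```python
# SMART lifetime totals before and after the second DriveSpace 6 run (GB)
writes_before, writes_after = 5983, 6028
reads_before, reads_after = 8603, 8649

gb_written = writes_after - writes_before  # 45 GB written by the "no-op" run
gb_read = reads_after - reads_before       # 46 GB read
```

That is roughly the whole compressed size of the data re-written, not the ~10 MB a skip-aware second pass should cost.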

Are you really seeing no writes at all when you run it on a drive already DS6-compressed with LZX??? Because even Process Hacker is showing I/O reads and writes that confirm what the program says.

So, can you run it and tell me that it's not consistently showing reads and writes on your system? I'm curious what causes this, and whether you have tested a 2nd run over the same drive yourself. I know I'm not crazy, but WOW, all those writes on my SSD? The slow script I made doesn't induce as many writes, but it is slower because Windows has horrible string handling, LOL.