That link is an incredibly thorough analysis of the exploration for and discovery of a workable solution.

Note also:

The article says:

As you can see, I used -c2 -n7 options to ionice, which seem sane.

which is true, but user TafT notes that if you want no disruption, then -c3 ('idle') is a better choice than -c2 ('best-effort'). He has used -c3 to build in the background and has found it to work well without making the build wait forever. If you really do have 100% I/O usage, then -c3 will never let the delete complete, but based on the worked test he doesn't expect that is what you have.
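A minimal sketch of that suggestion, assuming a Linux system with the util-linux ionice available (the directory name is made up for the demo, which creates its own files to remove):

```shell
#!/bin/sh
set -eu

# Demo setup (hypothetical): a directory full of files to remove.
dir="big_dir_to_delete"
mkdir -p "$dir"
for i in 1 2 3 4 5; do : > "$dir/file$i"; done

# Delete with idle I/O priority so other processes are not starved.
# -c3 selects the "idle" scheduling class: the rm only gets disk
# time when no other process wants it. Fall back to a plain rm if
# ionice is not installed.
if command -v ionice >/dev/null 2>&1; then
    ionice -c3 rm -rf "$dir"
else
    rm -rf "$dir"
fi
```

With -c3 the delete can take arbitrarily long under sustained I/O load, which is exactly the trade-off TafT describes.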

In terms of efficiency, using one rm per file is not optimal, as it requires a
fork and exec for each rm.

Assuming you have a list.txt containing the files you want to remove, this would be more efficient, but it's still going to be slow. Note that the -i (or -I) option forces xargs to run one rm per input line, which defeats the purpose; letting xargs batch many filenames into each rm invocation is what saves the forks and execs:

xargs -d '\n' rm < list.txt

Another approach would be to run it at the highest CPU priority (requires root):
nice -n -20 xargs -d '\n' rm < list.txt
(this will take less time but will affect your system greatly :)

or

I don't know how fast this would be but:

mv <file-name> /dev/null

or

Create a special mount point with a fast filesystem (using a loop device?), and use that to store and delete your huge files.
(Maybe move the files there before you delete them, in case that's faster, or maybe just unmount it when you want the files gone.)

or

cat /dev/null > /file/to/be/deleted (so it's zero-sized now), and if you want it to disappear, just rm <file> now (no -rf needed for a single regular file)
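A quick sketch of that truncate-then-delete idea (the file name is hypothetical): truncating frees the file's data blocks first, so the final rm only has to drop the directory entry.

```shell
#!/bin/sh
set -eu

# Hypothetical large file, stand-in for the real thing.
f="huge_file_to_delete"
printf 'some large payload\n' > "$f"

# Truncate first: redirecting /dev/null into the file frees its
# data blocks and leaves it zero-sized...
cat /dev/null > "$f"

# ...so the remaining rm only removes the directory entry.
rm "$f"
```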

I had problems getting the directory to delete at a reasonable pace; it turned out the process was locking the disk and creating a pileup of processes trying to access it. ionice didn't work: the delete just continued to use 99% of the disk I/O and locked all the other processes out.

Here's the Python code that worked for me. It deletes 500 files at a time, then takes a 2 second break to let the other processes do their work, then continues. Works great.
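The snippet itself didn't survive here; what follows is a minimal sketch of the approach described above, using the batch size and pause the poster mentions (500 files, 2 seconds). The function name and signature are ours:

```python
import os
import time

def delete_in_batches(paths, batch_size=500, pause=2.0):
    """Remove files in groups of batch_size, sleeping pause seconds
    between groups so other processes get a turn at the disk.
    The defaults (500 files, 2 seconds) are the poster's values."""
    for i in range(0, len(paths), batch_size):
        for path in paths[i:i + batch_size]:
            try:
                os.remove(path)
            except FileNotFoundError:
                pass  # someone else already removed it
        if i + batch_size < len(paths):
            time.sleep(pause)  # let the queued-up I/O through
```

Call it with the list of paths to delete, e.g. `delete_in_batches(file_list)`; smaller batches or longer pauses shift more disk time to the other processes.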

I've already had this issue: a sequential script that has to run fast, in which the process removes a lot of files. The rm calls drag the script's speed down towards the I/O wait/exec time.

So to make things quicker, I added another process (a bash script) launched by cron: like a garbage collector, it removes all files in a particular directory.

Then I updated the original script by replacing the "rm" with a mv to a "garbage folder" (renaming the file by appending a counter to its name to avoid collisions).

This works for me; the script runs at least 3 times faster. But it only works well if the garbage folder and the original file are under the same mount point (same device), so no actual file copy takes place.
(A mv on the same device consumes far less I/O than an rm.)
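A sketch of that two-part setup. The directory names and the counter scheme are illustrative, and the cron line is an assumption about how the cleanup might be scheduled; the demo creates a few files, "deletes" them via the rename, then runs the collector inline:

```shell
#!/bin/sh
set -eu

# Part 1: in the hot path, rename instead of deleting. A rename on
# the same filesystem is a cheap metadata operation; a counter is
# appended to avoid name collisions in the garbage folder.
garbage="garbage"        # must be on the same mount point as the files!
mkdir -p "$garbage"
counter=0

trash() {
    counter=$((counter + 1))
    mv "$1" "$garbage/$(basename "$1").$counter"
}

# Demo: create and "delete" a few files.
for i in 1 2 3; do
    : > "tmpfile$i"
    trash "tmpfile$i"
done

# Part 2: the garbage collector, normally run out-of-band, e.g. a
# cron entry such as:  */5 * * * * rm -rf /path/to/garbage
# Here it is invoked inline so the demo is self-contained:
rm -rf "$garbage"
```

The hot path only pays for renames; the slow unlinks happen later, outside the script's critical path.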

/dev/null is a file, not a directory. You can't move a file to a file, or you risk overwriting it.

Actually it's a device, and all data written to it gets discarded, so mv <file> /dev/null makes sense.

From Wikipedia, the free encyclopedia
In Unix-like operating systems, /dev/null or the null device is a special file that discards all data written to it (but reports that the write operation succeeded), and provides no data to any process that reads from it (yielding EOF immediately).[1]

That is wrong and INCREDIBLY dangerous. /dev/null is a device, which is a special file-like object. If you're root, "mv /some/file /dev/null" will DELETE the special /dev/null device and move your file there! So the next time someone tries to use /dev/null they'll be using a real file instead of the device, and disaster ensues. (When Wikipedia says that it "discards all data written to it", that means that "cat /some/file > /dev/null" will read /some/file and discard the data read, but that won't affect the original file.)
– user9876, Oct 23 '14 at 14:01


Create a special mount point with a fast filesystem (using a loop
device?), use that to store and delete your huge files. (maybe move
the files there before you delete them, maybe it's faster or maybe
just unmount it when you want files gone)

I don't think this is practical. It would add unnecessary extra I/O, which is exactly what the OP wants to avoid.