You could use the sfill command, which is also provided by the secure-delete package that installs srm. The sfill command works by creating one big file that fills all the free space, then overwrites that file in several passes to ensure all the previously free areas of the disk have had their original contents erased. Once that is complete, the utility removes the big file, releasing the free disk space.

If you think your swap space contains some of your wife's data too, you could use the sswap command — also available if you install the secure-delete package — for secure deletion of the swap space, but you would need to disable the swap space first. I have 4 GB of RAM and my swap partition is virtually never used, so I don’t bother putting on my tinfoil hat in the case of swap.
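If you did want to wipe swap, the usual sequence looks like the sketch below. The device name /dev/sda2 is an assumption, not taken from any real system; find yours with swapon --show or in /etc/fstab. The guard means nothing runs unless the device actually exists and you are root.

Code:

```shell
# Example only: /dev/sda2 is an assumed swap device name.
# Find the real one with: swapon --show   (or look in /etc/fstab)
SWAPDEV=/dev/sda2

# Guard: only act if the device exists and we have root privileges.
if [ -b "$SWAPDEV" ] && [ "$(id -u)" -eq 0 ]; then
    swapoff "$SWAPDEV"    # stop the kernel using the swap area
    sswap -v "$SWAPDEV"   # overwrite it (from the secure-delete package)
    swapon "$SWAPDEV"     # re-enable swap when the wipe has finished
fi
```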

The syntax for the sfill command is:

Code:

sfill [OPTIONS] directory/mountpoint

I have my /home directory on its own partition. In my case the command I would use to securely delete all files and free space in /home would be:

Code:

srm -rv /home/* && sfill -v /home

Please do have a look through the man pages if you haven't done so already:

Code:

man srm
man sfill
man sswap

I take no responsibility for the use of these commands! Caveat Utilitor.
_________________
Clevo W230SS: amd64, OpenRC, Optimus.
Compal NBLB2: ~amd64, OpenRC, FGLRX, dual booting with Windows 7 Professional 64-bit.
KDE on both laptops.
Fitzcarraldo's blog

Hmm... It's a long time since I installed and used app-misc/secure-delete. I've just had a look at the ebuild of app-misc/srm on-line at http://packages.gentoo.org/ and it contains:

Code:

DEPEND="!app-misc/secure-delete
sys-kernel/linux-headers
"

so I assume app-misc/secure-delete has been removed from the tree, then. Does the latest app-misc/srm package pull in the other utilities too? Have you tried sfill --help?

Doesn't seem like it was a popular choice to remove it. Looks like you could un-merge app-misc/srm and merge app-misc/secure-delete from one of the overlays: http://gpo.zugaina.org/app-misc/secure-delete (secure-delete-3.1-r3.ebuild is the fixed one, according to a comment in the aforementioned bug report).

You do not need to overwrite files multiple times on modern hard drives, meaning drives larger than about 20 GB. All you need is to use dd to pump random bytes from /dev/urandom into a dump file until all the free space is filled, then remove the dump file. Or pump them over the file to be deleted once and you are done.
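A minimal sketch of that approach, assuming the filesystem to clean is mounted at /mnt/target (an example path; substitute your own mount point):

Code:

```shell
# Example mount point; substitute the filesystem you want to clean.
TARGET=/mnt/target
FILL="$TARGET/wipe.fill"

# dd exits with an error once the filesystem is full; that is expected,
# so the failure is tolerated with '|| true'.
dd if=/dev/urandom of="$FILL" bs=1M 2>/dev/null || true
sync             # make sure the random data actually reaches the disk
rm -f "$FILL"    # delete the fill file, releasing the free space again
```

Note that filling the filesystem will temporarily make it report no free space, so other programs writing to it may fail until the fill file is removed.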

The myth that, to delete data really securely from a hard disk, you have to overwrite it many times using different patterns has persisted for decades, despite the fact that even firms specialising in data recovery openly admit that if a hard disk is overwritten with zeros just once, all of its data is irretrievably lost.
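To illustrate the single-pass point on an ordinary file (a demonstration only: the file name is made up, and on journaling or copy-on-write filesystems the old blocks can survive elsewhere on disk, which is exactly why tools like sfill target the free space instead):

Code:

```shell
# Create an example file, overwrite it once with zeros in place,
# then remove it. 'secret.dat' is just a demonstration file.
printf 'sensitive contents' > secret.dat
size=$(stat -c %s secret.dat)     # remember the original size
dd if=/dev/zero of=secret.dat bs=1 count="$size" conv=notrunc 2>/dev/null
sync                              # flush the zeroed blocks to disk
rm secret.dat
```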

Craig Wright, a forensics expert, claims to have finally put this legend to rest. He and his colleagues ran a scientific study, taking a close look at hard disks of various makes and ages, overwriting their data under controlled conditions and then examining the magnetic surfaces with a magnetic-force microscope. They presented their paper at ICISS 2008 and it has been published by Springer in its Lecture Notes in Computer Science series (Craig Wright, Dave Kleiman, Shyaam Sundhar R. S.: Overwriting Hard Drive Data: The Great Wiping Controversy).

Note: I can't seem to find the original article about it, so the quotes are from a forum discussion that quoted the article and from the paper mentioned above. Here is the paper's abstract:

Often we hear controversial opinions in digital forensics on the required or desired number of passes to utilize for properly overwriting, sometimes referred to as wiping or erasing, a modern hard drive. The controversy has caused much misconception, with persons commonly quoting that data can be recovered if it has only been overwritten once or twice. Moreover, referencing that it actually takes up to ten, and even as many as 35 (referred to as the Gutmann scheme because of the 1996 Secure Deletion of Data from Magnetic and Solid-State Memory published paper by Peter Gutmann) passes to securely overwrite the previous data. One of the chief controversies is that if a head positioning system is not exact enough, new data written to a drive may not be written back to the precise location of the original data. We demonstrate that the controversy surrounding this topic is unfounded.

The optimal bitwise recovery from a PRML drive that is no longer available and was never used more than once is less than 92% per bit (given foreknowledge of the write pattern). ePRML is as low as 49% per bit using electron microscopy. Even at 92% per bit, the recovered data is useless and random. This is detailed in the paper mentioned before.

At 49% - this is a modern drive - the toss of a coin is more accurate.
Think about that for a minute.