Linux - GeneralThis Linux forum is for general Linux questions and discussion.

So, I see a lot of people recommending that you wipe a disk using /dev/urandom instead of /dev/zero for "maximum security".

What difference would it make? All the data is overwritten in both cases, so why would /dev/urandom be any better than /dev/zero?

Assuming a worst-case scenario: I've heard there may be a way to somehow see what a bit was previously set to, especially if the bit was NOT flipped. Is that possible?
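For concreteness, the two wipes being compared look like this with dd. A small scratch file stands in for the disk here; for a real wipe you'd point of= at the block device (/dev/sdX below is a hypothetical name - double-check yours first, dd is destructive).

```shell
# Scratch file standing in for a disk; on a real drive you'd use
# of=/dev/sdX instead (hypothetical name - verify it before running).

# Wipe with zeros: every bit ends up 0, so the final contents are predictable.
dd if=/dev/zero of=wipe-demo.img bs=1M count=4 status=none

# Wipe with pseudorandom data: every bit is unpredictable afterwards.
dd if=/dev/urandom of=wipe-demo.img bs=1M count=4 conv=notrunc status=none
```

Either way the old data is gone as far as the drive's own interface is concerned; the question in this thread is whether anything survives at the physical layer.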

Quote:

The general concept behind an overwriting scheme is to flip each magnetic domain on the disk back and forth as much as possible (this is the basic idea behind degaussing) without writing the same pattern twice in a row. If the data was encoded directly, we could simply choose the desired overwrite pattern of ones and zeroes and write it repeatedly. However, disks generally use some form of run-length limited (RLL) encoding, so that the adjacent ones won't be written. This encoding is used to ensure that transitions aren't placed too closely together, or too far apart, which would mean the drive would lose track of where it was in the data.

I'm not sure how this would work, but what kind of transition would you be able to see? 0 to 0 and 1 to 1, or 0 to 1 and 1 to 0, or both? In either case, wouldn't the best thing be one wipe with /dev/one (which doesn't exist) and one wipe with /dev/zero? Optimal, right?

Does anyone know more about this, or have some more links? I don't fully understand everything in the paper above, but I do understand the highlighted bit.

It also says:

Quote:

In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data. In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now.

Looking at this from the other point of view, with the ever-increasing data density on disk platters and a corresponding reduction in feature size and use of exotic techniques to record data on the medium, it's unlikely that anything can be recovered from any recent drive except perhaps a single level via basic error-cancelling techniques. In particular the drives in use at the time that this paper was originally written have mostly fallen out of use, so the methods that applied specifically to the older, lower-density technology don't apply any more. Conversely, with modern high-density drives, even if you've got 10KB of sensitive data on a drive and can't erase it with 100% certainty, the chances of an adversary being able to find the erased traces of that 10KB in 80GB of other erased traces are close to zero.

Another point that a number of readers seem to have missed is that this paper doesn't present a data-recovery solution but a data-deletion solution. In other words it points out in its problem statement that there is a potential risk, and then the body of the paper explores the means of mitigating that risk.

I think the info you have already pulled up does a good job of debunking the need for multiple overwrites.

Head-offset reads, residual magnetic images, magnetic resonance imaging and other physical attacks may have been possible in the early days, but with the high densities and encoding techniques that modern storage devices use, I believe they're far less likely to succeed.

Though it's still worth using urandom to fill a disk you intend to use encryption on (so an attacker can't tell used space from empty space), when all you want to do is erase the disk, overwriting with any old junk is probably good enough: x'00', x'FF', x'AA', x'55', it doesn't really matter.

Reading from /dev/urandom is very, very slow; it can take days to overwrite a typical modern disk. Doing multiple passes from urandom would require you to be very paranoid indeed.
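If /dev/urandom itself is the bottleneck, one common workaround (my suggestion, not something from the paper) is to expand a short random seed into a fast AES-256-CTR keystream with openssl: the output is still unpredictable to an attacker, but it runs at close to disk speed. Sketched here against a scratch file; fast-random.img is just a stand-in for the target device.

```shell
# Derive a throwaway key from /dev/urandom, then let AES-256-CTR expand it
# into a fast pseudorandom stream. Writing 4 MiB to a scratch file here;
# for a real wipe, redirect to the device instead (e.g. /dev/sdX).
key=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 -w0)
openssl enc -aes-256-ctr -nosalt -pass "pass:$key" < /dev/zero 2>/dev/null \
  | head -c 4194304 > fast-random.img
```

The pipeline ends when head has taken as many bytes as you asked for, so for a whole disk you'd drop the head and let the write fail at end-of-device.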

The theory is that when you go from 1 to 0 the result isn't a clean 0, but a weak signal. I think the long format in Windows (format c:) sets every bit directly to zero, so the old data could be recovered with analysis tools. Not necessarily in software: sometimes you'd have to hand the drive over to a laboratory, and they may be able to recover a lot of data. Not everything, but often enough to reconstruct important documents etc.

If you overwrite with ones (a friend of mine made a kernel patch for /dev/one, but I don't know if he published it on the Internet :P) and then back to zero, there might still be some background information. In practice this is (today) almost impossible to recover; at best you'd get some words here and there.
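You don't actually need a kernel patch for this: tr can turn the zero stream into an all-ones (0xFF) stream. A sketch against a scratch file, with the usual caveat that a real wipe would target the device instead:

```shell
# Translate every 0x00 byte from /dev/zero into 0xFF, giving a stream of
# all-one bits; write 4 MiB of it to a scratch file (use the device,
# e.g. /dev/sdX, for a real wipe).
tr '\0' '\377' < /dev/zero | head -c 4194304 > ones.img
```

The same trick works for any single-byte fill pattern (x'AA', x'55', ...) by changing the second argument to tr.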

But if you use /dev/urandom, you can't know for sure whether a bit was a one or a zero before, which makes recovery hopeless.

I read this some days ago. It was easy to understand and isn't too long. It explains in a little more detail how this works and why /dev/urandom is better than /dev/one, which in turn is better than /dev/zero.

Quote:

Reading from /dev/urandom is very, very slow; it can take days to overwrite a typical modern disk. Doing multiple passes from urandom would require you to be very paranoid indeed.

Yeah I know that's why I don't like using it.

Quote:

Originally Posted by Dinithion

If you overwrite with ones (a friend of mine made a kernel patch for /dev/one, but I don't know if he published it on the Internet :P) and then back to zero, there might still be some background information. In practice this is (today) almost impossible to recover; at best you'd get some words here and there.

I will probably do the same. Perhaps I'll wipe certain files with DBAN. Overwriting one time with '1's seems to be good enough for three-letter agencies, so using /dev/urandom is a complete waste of time IMHO.
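For a single random pass, shred from coreutils does the whole job in one command; adding -z finishes with a pass of zeros so the drive just looks blank. This is a sketch against a scratch file, not an endorsement of any particular pass count.

```shell
# Create a scratch file standing in for the disk.
dd if=/dev/zero of=scratch.img bs=1M count=4 status=none

# One pass of random data (-n 1), then a final pass of zeros (-z).
# For a real drive this would be: shred -n 1 -z /dev/sdX
shred -n 1 -z scratch.img
```

Note that shred's random source is faster than /dev/urandom on most systems, which sidesteps the speed complaint above.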