
HDD Cleaning server

Hi

I work a lot with servers (hardware) and I need to erase used/old HDDs regularly. So far I have used USB-bootable tools like magicpart and similar.
I was wondering whether it's possible to configure a Linux system to automatically erase disks and overwrite them with zeros a number of times.

The server disks are hot-swappable, so I wondered whether it could work by just inserting (hot-swapping) disks and having the server erase all data automatically, with the monitor simply showing which bays are ready.

I hope you understand what I'm asking for; sorry if I didn't express myself clearly, English is not my first language.

As a project, I'm sure you could get something like that going, assuming you can detect, via udev or some such, that a new drive is available.

However, why not just get an HDD cloning device, for example a Maiwo docking station or similar, and wipe drives with that? You load in a template drive that has been blanked and filled with random data, and the docking station then clones it to all the other inserted drives. There are versions that support multiple drives.
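The detect-and-wipe idea could be sketched as a udev rule that fires when a disk appears, plus a small handler script. The rule file path, script name, and wipe sequence below are illustrative assumptions, not a tested setup (and note that udev kills long-running RUN+= handlers after a timeout, so a real version would hand the job off to a service):

```shell
# /etc/udev/rules.d/99-autowipe.rules (hypothetical file):
#   ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", \
#     RUN+="/usr/local/bin/autowipe.sh /dev/%k"
#
# /usr/local/bin/autowipe.sh -- the handler the rule would call.
autowipe() {
  dev="$1"
  # Size in bytes: blockdev for real block devices, stat as a
  # plain-file fallback so the function can be exercised safely.
  size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c%s "$dev")
  head -c "$size" /dev/urandom > "$dev"   # pass 1: random data
  head -c "$size" /dev/zero    > "$dev"   # pass 2: zeros
  sync
  echo "bay ready: $dev"                  # report which bay finished
}
```

Wiring the echo up to a console or LED is left open; the point is that udev gives you the "disk inserted" event for free.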

Respectfully... Sarlac II
~~
The moving clock K' appears to K to run slow by the factor (1-v^2/c^2)^(1/2).
This is the phenomenon of time dilation.
The faster you run, the younger you look, to everyone but yourself.

Right now I have one server plus two MSAs attached to it to clean the drives; a docking station wouldn't do the trick (disk-count wise), as I regularly have 10-20 disks in cleaning. I was hoping to automate the process so I don't have to launch the program and select the disks and the wipe patterns every time.
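Running 10-20 disks at once is mostly a matter of backgrounding one wipe job per device and collecting them with `wait`. A rough sketch, where the device list you would pass in (e.g. /dev/sdb /dev/sdc ...) is hypothetical:

```shell
wipe_all() {
  # $@ = device nodes to wipe in parallel.
  for dev in "$@"; do
    (
      # blockdev sizes real disks; the stat fallback lets the
      # function be exercised on ordinary files.
      size=$(blockdev --getsize64 "$dev" 2>/dev/null || stat -c%s "$dev")
      head -c "$size" /dev/zero > "$dev" && sync
      echo "done: $dev"
    ) &
  done
  wait   # block until every background wipe has finished
}
```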

You could use /dev/random instead. Quite frankly though, it's slow as molasses ...

I usually follow the advice:

Code:

"The kernel random-number generator is designed to produce
a small amount of high-quality seed material to seed a
cryptographic pseudo-random number generator (CPRNG). It
is designed for security, not speed, and is poorly suited
to generating large amounts of random data. Users should be
very economical in the amount of seed material that they
read from /dev/urandom (and /dev/random); unnecessarily
reading large quantities of data from this device will have
a negative impact on other users of the device."
-- excerpt from man urandom (linux), Solaris has a similar
entry.
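In practice that means pointing dd at /dev/urandom, which never blocks, rather than /dev/random. A minimal sketch, with an optional MiB count so a run can be size-limited (the /dev/sdX in the usage note is a hypothetical device node):

```shell
random_pass() {
  # $1 = target device, $2 = optional MiB count.
  # Without a count, dd simply runs until the device is full.
  # bs=1M keeps syscall overhead low; /dev/urandom never blocks,
  # unlike /dev/random on older kernels.
  dd if=/dev/urandom of="$1" bs=1M ${2:+count="$2"} conv=fsync status=none
}
# usage: random_pass /dev/sdX        (fill the whole device)
#        random_pass /dev/sdX 1024   (first 1 GiB only)
```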


Yeah, security "experts" will warn about random-number-generator reseeding and a minimum number of overwrites. It's been my experience, though, playing with flux analyzers, that it's virtually impossible to recover the data after just two passes (random and zero). I guess if you're protecting nuclear secrets from North Korea you might need more than that, but otherwise...
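A two-pass run like that (one random, one zero) is what GNU shred's -n and -z flags give you directly; wrapped as a function here, with /dev/sdX as a stand-in device:

```shell
two_pass_wipe() {
  # -n 1: one pseudo-random pass; -z: final zero pass; -v: progress
  shred -v -n 1 -z "$1"
}
# e.g. two_pass_wipe /dev/sdX   (hypothetical device node)
```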

Also, if you pick a block size that doesn't evenly divide the device, you'll get an out-of-space error from dd at the end of the run. Since dd's default block size matches the abstraction layer's 512-byte logical block size (not the physical block size, which is usually 4096 or more), and every device's capacity is a multiple of 512, simply omitting the bs parameter sidesteps that mismatch.
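Another way around the end-of-device error is to size the run explicitly: query the logical sector size and total capacity with blockdev and hand dd an exact count. A sketch, with a stat fallback (an assumption added here purely so the function can be tested on an ordinary file):

```shell
exact_wipe() {
  dev="$1"
  if [ -b "$dev" ]; then
    bs=$(blockdev --getss "$dev")        # logical sector size, usually 512
    size=$(blockdev --getsize64 "$dev")  # total capacity in bytes
  else
    bs=512; size=$(stat -c%s "$dev")     # plain-file fallback for testing
  fi
  # An exact count makes dd stop before end-of-device: no ENOSPC error.
  dd if=/dev/zero of="$dev" bs="$bs" count=$(( size / bs )) \
     conv=notrunc,fsync status=none
}
```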