e2extract

e2extract was originally created by the late Steven Fountain. It seems he died in September 2005, and somebody replaced his whole site with an obituary. Unfortunately, his e2extract toolkit was also removed; it can still be accessed via the Internet Archive, though.

This page is currently a mirror of his tools with a few patches of mine.

This is it! e2extract - extracts lost files and recreates the directory
structure from information obtained from the directory inodes, such as the
original file name and location. It recurses through type 1 files
(normal files) and type 2 files (directories), recreating them in a directory somewhere else.
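That recursion is simple in outline. As an illustration only (the original e2extract is a perl tool; this is a Python sketch, and `dirents` and `read_file` are hypothetical stand-ins for routines that read directory entries and file contents off the raw device):

```python
import os

def recover(dirents, read_file, ino, dest):
    """Recreate a directory tree under dest, starting from directory inode ino.

    dirents(ino) yields (inode, file_type, name) tuples for a directory;
    read_file(ino) returns a regular file's bytes. In the real tool these
    would come from the raw ext2 device. file_type 1 = normal file,
    2 = directory, matching the types e2extract recurses through.
    """
    os.makedirs(dest, exist_ok=True)
    for child, ftype, name in dirents(ino):
        if name in (".", ".."):          # don't recurse into self/parent
            continue
        path = os.path.join(dest, name)
        if ftype == 2:                   # directory: recurse
            recover(dirents, read_file, child, path)
        elif ftype == 1:                 # normal file: copy the bytes out
            with open(path, "wb") as f:
                f.write(read_file(child))
```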

I have written a script called parse_directory_inode in perl to analyze
directory inodes!! Significant progress towards my miracle ext2fs recovery!

This tool analyzes the raw ext2 directory inode and plucks out the
names of the files contained within the directory, along with the inode
number associated with each file name. LANDMARK progress.
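For the curious, the on-disk layout being parsed here is small: each ext2 directory entry is a 32-bit inode number, a 16-bit record length, an 8-bit name length, an 8-bit file type, and then the name itself. The original parse_directory_inode is a perl script; the same idea can be sketched in Python:

```python
import struct

def parse_dir_block(block):
    """Walk one ext2 directory data block, yielding (inode, file_type, name).

    Each entry is: inode (u32 LE), rec_len (u16), name_len (u8),
    file_type (u8), then name_len bytes of name; rec_len gives the
    padded distance to the next entry.
    """
    off = 0
    while off + 8 <= len(block):
        inode, rec_len, name_len, ftype = struct.unpack_from("<IHBB", block, off)
        if rec_len < 8:          # corrupt entry; stop rather than loop forever
            break
        if inode != 0:           # inode 0 marks an unused/deleted slot
            name = block[off + 8: off + 8 + name_len].decode("latin-1")
            yield inode, ftype, name
        off += rec_len
```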

I can recover the directory structure complete with filenames! DSC
is going back online in 7 days.

What the fsck?! I have to manually do the job of the ext2fs driver.

I got my mailfile back!!!!! My stat_inodes tool is the ground floor of
beyond-hope data recovery. All I had to do was count backwards from files
that were 20 digits, 19 digits, 18 digits, etc., until I got to files that
were in the 10-99 meg ballpark. I found it. I will write a mass-recovery
tool to assist in reaping all of the remaining data into by-type
directories.

"Losing 8 years of history is very motivating."

Sometimes 'I hope you have a backup *chuckle*' is the wrong approach to the
situation. This is the right way to deal with this type of
problem.

I wrote a tool called stat_inodes in perl - footsteps of unix
necromancy - that attempts to triage what's still on an
ext2fs drive. It uses istat and icat from the new coroners toolkit (TASK)
-- it creates a gargantuan map showing what you've still got. It
calculates the block numbers associated with inodes, whether the inode is
allocated, the size of the information referenced in the inode, the number
of blocks allocated to the inode (each block is 1x your
blocksize), the number of links (whatever that's for), and finally what the
program guesses the block to contain, using "file magic".
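The "file magic" step just means looking at leading bytes. The real tool pipes icat output through file(1); as a rough stand-in, a few of the signatures it keys on can be sketched like this:

```python
def guess_type(data):
    """Crude stand-in for `file -`: guess content from magic bytes."""
    if data.startswith(b"\x1f\x8b"):
        return "gzip compressed data"
    if data.startswith(b"PK\x03\x04"):
        return "Zip archive data"
    if data.startswith(b"BZh"):
        return "bzip2 compressed data"
    if len(data) >= 262 and data[257:262] == b"ustar":
        return "GNU tar archive"        # tar magic sits at offset 257
    if data.startswith(b"#!"):
        return "script text executable"
    try:
        data.decode("ascii")
        return "ASCII text"
    except UnicodeDecodeError:
        return "data"
```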

* the old root drive is completely fucked up,
see for yourself.
* fsck reports that there are 255000 files on
the filesystem, but when you mount the drive,
there are 9841 files reported via find . | wc -l,
meaning that 240000 files (or trees) are
sitting in limbo, inaccessible.
* this basically means that the drive requires
unix necromancy
* skeleton traffic has been redirected..
* that means i'm holding onto the mail
and you'd have to bribe me to see it
* all the content is 'gone' but still there,
somewhere.
* nobody gives a damn
* everything's gone - even the pictures from
99 to 2001
* starting from scratch is the only remaining
option..? or is it?
* so much for ghosts
* e2salvage might make a difference..
i got it from the BBC mini-distro.
* there is definitely a will
..slf@dreamscape.org
(925-895-1500)

tools attempted thus far: debugfs, dumpe2fs, e2fsck,
e2salvage, the old coroners toolkit (TCT),
the new coroners toolkit (TASK)... the coroners toolkit
is the only thing that's been of any use! I am now
technically superior to e2fsck, dumpe2fs, and debugfs. :P
e2salvage just corrupted it more.
I've sort of built my own critical file analysis toolkit in
the process of combining these.
the idea is that I'm going to cat every inode on the disk
and attempt to identify what the hell it is via "file":
icat device [inode] | file -
It shows output like this in my map:
INODE : TYPE
49758 : ASCII English text
49759 : Bourne shell script text executable
49763 : ASCII C program text
49795 : a /usr/local/bin/perl script text executable
49803 : ASCII Java program text
49873 : ASCII English text, with very long lines
1880978 : GNU tar archive
1881071 : Zip archive data, at least v1.0 to extract
1733359 : gzip compressed data, was "spam.tar", from Unix
so basically I can guess what's in each cubbyhole.
If I find something interesting, I can get it off the disk..
icat device [inode] > newfile
icat device [inode] | less
I can look inside files without taking them off the disk, if
they are tar files by trying..
icat device [inode] | tar xvf - if it was just a tar
icat device [inode] | tar xvzf - if it was gzip'd
icat device [inode] | bzip2 -dc - | tar xvf - if it was a tbz2
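The same peek-without-extracting idea works in-process too. Assuming you have already pulled an inode's bytes with icat, a Python sketch:

```python
import io, tarfile, gzip

def list_tar_members(data):
    """List member names of a (possibly gzip'd) tar held in memory,
    without writing anything to disk -- same idea as icat | tar tvf -."""
    if data[:2] == b"\x1f\x8b":              # gzip magic: decompress first
        data = gzip.decompress(data)
    with tarfile.open(fileobj=io.BytesIO(data)) as tf:
        return tf.getnames()
```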
I can also 'stat' an inode...
istat device [inode]
milk:/mnt/bin# istat img 1880977
inode: 1880977
Allocated
Group: 115
uid / gid: 0 / 0
mode: -rw-------
size: 10989
num of links: 1
Inode Times:
Accessed: Mon May 14 17:43:42 2001
File Modified: Tue Jul 28 17:16:58 1998
Inode Modified: Mon Oct 23 23:50:35 2000
Direct Blocks:
3770649 3770650 3770651
the cool thing is that if all the data is still there (in the
direct blocks table), you don't have to do this for each associated
inode; istat will show the related blocks for you.