Traversing the directory tree

It is often necessary to traverse all the files in a directory tree recursively in Perl - similar
to what the Unix "find" command does. It is possible to do so the "hard way",
using opendir, readdir and their friends. But in Perl, naturally, TMTOWTDI. I want to present
not only an "other way to do it" but, IMHO, a "better way to do it", especially
for beginners who only need to perform simple tasks.

File::Find basics

Just remember - if you have to traverse files recursively and do some processing on
them, File::Find is your friend:
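The original code listing is missing here; based on the description that follows, it presumably looked something like this (the name process_file is my own):

```perl
use strict;
use warnings;
use File::Find;

# the starting directory - the root of the search
my $dir = '.';

# find() traverses the tree under $dir and calls the supplied
# subroutine once for every file and directory it encounters
find(\&process_file, $dir);

sub process_file {
    # $_ holds the name of the file currently being visited
    print "$_\n";
}
```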

First, a starting directory is initialized in $dir. If you imagine the directory structure
as a tree, this is the root, from which the search starts.
Then, find (a function from the File::Find module) is called. It is given a reference to
a subroutine and the starting directory. find will traverse the directory
tree and call the supplied subroutine on each file (be it a plain file, a directory, a symbolic link, etc.).
Then we see the definition of the processing function. It receives the name of the
file currently seen by find in the $_ variable. Consider the following simple example (it prints the names of
all directories, starting with "." - the current directory):
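The listing is missing from the text; given that the next paragraph names the subroutine print_name_if_dir, it likely looked roughly like this:

```perl
use strict;
use warnings;
use File::Find;

find(\&print_name_if_dir, '.');

sub print_name_if_dir {
    # $_ is the current file's name; -d tests whether it is a directory
    print "$_\n" if -d $_;
}
```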

Here, the subroutine print_name_if_dir is given as an argument to find.
It simply prints the name of the file if it's a directory. Note the peculiar notation...
It's customary in Perl not to mention $_, so:
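The second listing is also missing; judging from the following paragraph (which says this version prints the full path), it presumably used $File::Find::name and left $_ implicit, something like:

```perl
use strict;
use warnings;
use File::Find;

find(\&print_name_if_dir, '.');

sub print_name_if_dir {
    # -d with no argument tests $_ implicitly;
    # $File::Find::name holds the full path to the current file
    print "$File::Find::name\n" if -d;
}
```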

Try running it and compare the results to the previous version. You will notice that it
prints the full path to the directory. What happens is the following - File::Find
chdirs into each directory it finds in its search, and $_ gets only the short
name (without the path) of the file, while $File::Find::name gets the full path. If, for some reason,
you don't want it to chdir, you may specify no_chdir as a parameter. Parameters
to find are passed as a hash reference:
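The listing is missing; a sketch of the hash-reference calling style described here, reusing the print_name_if_dir name from above:

```perl
use strict;
use warnings;
use File::Find;

# "wanted" is the key for the processing routine;
# no_chdir => 1 stops find() from chdiring into each directory
find({ wanted => \&print_name_if_dir, no_chdir => 1 }, '.');

sub print_name_if_dir {
    # with no_chdir, $_ holds the full path, same as $File::Find::name
    print "$File::Find::name\n" if -d;
}
```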

Note that "wanted" is the key for the file processing routine in this hash.
The results won't differ from the previous version. Here, however, $_ will also be
the full path to a file, because find doesn't "dive into" the directories.
Other parameters may be specified (like 'bydepth' if you want a depth-first search), but
these are advanced topics. If you're curious, you can look these issues up in the
documentation of the module.

Bonus - a useful utility based on File::Find

Ever felt that your quota is suffocating you, but couldn't find the unnecessarily large
files to remove? Do you find "du" too tedious to use in these cases? File::Find
comes to the rescue. Consider the following script... It takes a starting directory,
and prints the 20 largest files found in the tree under this directory - specifying
full paths, so you can just cut-n-paste them into "rm":
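The script itself is missing from the text; here is a sketch reconstructed from the explanation that follows (collect sizes into %size with -s if -f, then sort and print the top 20):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# starting directory from the command line (default: current dir)
my $dir = shift @ARGV || '.';

# record each plain file's size, keyed by its full path
my %size;
find(sub { $size{$File::Find::name} = -s if -f }, $dir);

# sort by size, descending, and print the 20 largest files
my @sorted = sort { $size{$b} <=> $size{$a} } keys %size;
foreach my $file (@sorted[0 .. 19]) {
    last unless defined $file;   # fewer than 20 files in the tree
    printf "%10d %s\n", $size{$file}, $file;
}
```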

What goes on here? find traverses the given directory recursively, recording
each file's size in the %size hash table (-s if -f means: get the size
if this is a plain file). Then it sorts the hash table by size and prints the 20
largest files. That's it... I use this utility quite a lot to clean up space; I hope
you find it useful too (and also understand exactly how it works!)

Update:

Thanks to rinceWind for this:
File::Find is cross-platform. It's one of the really handy ways of iterating over directory trees on Windows - something Microsoft doesn't encourage you to do, with their 'hidden files' (File::Find X-rays through the Windows hidden-files mechanism nicely :-).

With this in mind, though, you must be careful when working with Windows paths, because the path separators there go in the other direction. There is a nice tutorial - Paths in Perl - that explains this.

Update 2:

Conclusion

File::Find can turn tasks dealing with recursive file traversal from torture
into pleasure, if you know how to use it. Modules like this make Perl the wonderful
language it is - you can perform useful tasks without pain. Enjoy!

Even more advanced uses

The preprocess and postprocess predicates of File::Find let you do some really wild stuff. To make use of them, you have to use the extended syntax of calling find(). To specify extra options, you have to pass a hash as the first parameter, rather than just a subroutine reference. The simplest case is exactly equivalent to using the subref shorthand:
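The equivalence mentioned here can be shown with a short sketch (the wanted body is my own filler):

```perl
use strict;
use warnings;
use File::Find;

# the hash form with only a wanted key is identical in effect
# to the plain subref form: find(\&wanted, '.')
find({ wanted => \&wanted }, '.');

sub wanted { print "$File::Find::name\n" }
```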

Both of the new extra directives, preprocess and postprocess, take a subroutine reference, just like the standard wanted one in the above examples. With that out of the way, let's get to the juicy stuff:

preprocess

find() passes this routine an array with the entire contents of a directory immediately upon entering the directory and expects it to return the list of interesting files. Any omitted files will not be passed to the wanted function and omitted directories will not even be descended into by find(). This predicate makes File::Find the most powerful tool for all your directory traversal needs. To warm up, here's a silly example that does the same as the previous examples, that is, print only the names of directories:
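The example listing is missing; based on the description in the next paragraph (a grep over @_ testing -d), it presumably looked like this:

```perl
use strict;
use warnings;
use File::Find;

find({
    wanted     => sub { print "$File::Find::name\n" },
    # keep only the directory entries; everything else never
    # reaches wanted() and is never descended into
    preprocess => sub { grep { -d } @_ },
}, '.');
```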

As you (should) know, Perl stores the parameters passed to a subroutine in the special array @_. grep tests all elements of a list passed to it (here: the list of parameters, and thus filenames) against the expression and then returns a new list containing only the elements for which that expression is true. Here, the expression tests whether the entry is a directory, so the result is a list which does not contain any files, symlinks or anything else besides directories. We return this new list, causing find() to forget all the files, symlinks and everything else. It will not pass them to our wanted function, and so we can just print everything we get passed into there. Obviously, this is a contrived example.

So, what really interesting stuff can we do with the preprocess directive? Let's try to implement the -mindepth and -maxdepth options offered by GNU find. Of course, you don't need preprocess to do that. The naive way would be to check the depth of the current location in the directory tree within the wanted function and bail if we're too deep or not deep enough. However, this is wasteful: what if you are traversing a very deep tree with thousands of directories and several hundred thousand files? The wanted function will likely spend most of its time saying "no, not deep enough", "no, too deep", "no, no, too deep", "too deep, next one", throwing away files over and over. The biggest problem here is that even if you only want the files at depth 2-3, find() will happily descend down to level 15, giving wanted all the directories and files it encounters en route, oblivious to the fact that we are only throwing them all away, waiting for the directory traversal to back out up to level 3 again. The solution is to use a preprocess routine to cull all directories from the list once we reach the maxdepth, preventing find() from descending any further and getting lost in areas of the tree we aren't interested in anyway. So without further ado:
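The listing is missing here; this sketch is reconstructed from the surrounding explanation (depth = slash count in $File::Find::dir; keep all below maxdepth, cull directories at maxdepth, return nothing beyond it, and re-check the minimum depth in wanted). The depth values and helper layout are my own:

```perl
use strict;
use warnings;
use File::Find;

# hypothetical equivalents of GNU find's -mindepth/-maxdepth
my $min_depth = 2;
my $max_depth = 2;

sub preprocess {
    # depth = number of slashes in the current directory's full path
    my $depth = $File::Find::dir =~ tr!/!!;
    return @_ if $depth < $max_depth;               # not deep enough: keep all
    return grep { ! -d } @_ if $depth == $max_depth; # at the limit: cull dirs
    return;                                          # too deep: back out at once
}

sub wanted {
    my $depth = $File::Find::dir =~ tr!/!!;
    return if $depth < $min_depth;                   # still above the minimum
    print "$File::Find::name\n";
}

find({ preprocess => \&preprocess, wanted => \&wanted }, '.');
```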

Let's see what happens here. We find out how deep we currently are by counting the forward slashes in the full pathname of wherever we are, $File::Find::dir. If we are below the maximum depth, then we want to look at all files. If we are at the maximum depth, we ditch all directories, so find() will not descend any further. If we somehow got too deep, we return nothing, causing find() to back out of the directory immediately. Finally, in wanted we examine the depth again, in order to avoid processing files below the minimum depth. Because find() needs to descend into these directories we cannot avoid it passing names for directories that are too far up the tree to our wanted function.

postprocess

This one is a lot less involved; mainly because it neither takes nor returns anything. It is simply called before find() backs out of a directory, which means the entire subtree below it has been processed. In other words, it is safe to mess with the directory without unintentionally confusing find().

The following utility script makes use of this to remove empty directories. It doesn't try to check whether they're empty, because that's relatively complicated (we have to pay attention to the special . and .. entries) and rmdir will not remove a non-empty directory anyway. So we just let it fail harmlessly.
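The script itself is missing from the text; here is a minimal sketch of the idea. Wrapping it in a remove_empty_dirs helper and using no_chdir (so rmdir always sees a stable path) are my own choices:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

sub remove_empty_dirs {
    my $start = shift;
    find({
        wanted      => sub {},   # all the work happens in postprocess
        # called just before leaving each directory, i.e. after the
        # whole subtree below it has been processed; rmdir fails
        # harmlessly on directories that are not empty
        postprocess => sub { rmdir $File::Find::dir },
        no_chdir    => 1,
    }, $start);
}

remove_empty_dirs(shift @ARGV) if @ARGV;
```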

Conclusion

As if File::Find was not already good enough, these two extra predicates give you the power to do literally anything. preprocess lets you control find()'s behaviour in any way conceivable, and postprocess makes it easy to do all manner of cleanup tasks without requiring a second directory traversal. Combining these powers makes it very easy to write astonishingly powerful file handling scripts with very little effort.

Update: fixed a couple typos in the text, rearranged a few sentences for clarity. No changes to actual content.

I'm not sure that anyone ever told you that the Nooks and crannies code does not work (2 curly braces out of place), but I am sure that you, an obvious guru, don't want erroneous code out here associated with your name. I may be wrong and if so let me know... I may be doing something wrong <blush>
Dale Clarke dalec@delta1.net

File::Find wasn't my friend in the past, so I appreciate your interesting article. A few times I have looked for a way to work with File::Find the way one can with GNU find's maxdepth feature.

The no_chdir parameter seemed to me a possible starting point for this.

But when I tested the relevant code snippet in your article, I couldn't see any difference between a false and a true no_chdir.

You don't see any difference when changing the no_chdir parameter because you are just printing $File::Find::name which is the full path of the file, and no_chdir does not change that. If you print $_ instead, you can see the difference: with no_chdir set to 1, the filename will be relative to the directory you specified for find() to begin the search (since it will not change dirs as it traverses the filesystem tree). If no_chdir is set to 0, then find() will chdir into dirs as it traverses the tree, giving you the filename (relative to that file's directory) in $_.

The no_chdir option isn't what you want to implement GNU find's maxdepth, check Aristotle's reply to this thread instead.

tos,
As far as I can tell - there is no problem. I just spent about 20 minutes looking at the find2perl code as well as the docs on File::Find. It doesn't appear that there is support for GNU find's maxdepth option.

But don't get mad, get even by checking out File::Find::Rule which does have a maxdepth option and is argued by some to be easier to use than File::Find.

If I have a need to process all of the files in a directory tree, I use find2perl to generate a template. If you are familiar with the *ix find command, there are several options you can use to your advantage, but I usually stick to find2perl / -type f -print > template.pl . You can substitute any starting directory instead of / to suit, and the -type f will limit the code to files.