Where $MAXHITS is set to the maximum number of results I need. (Other than that extra arg, this is the exact find() call I'm using.)

The story is this... I'm using this function to find all the files necessary to recreate an index file at bianca.com, and then to actually create that file.

My current script (on my scratchpad) works quite well for this (I'll gladly provide a tarball of a test file structure to work with, just ask), without fail.

What I'm concerned about is that my test file structure wasn't very big, while some of the 'live' directories this may run against could have literally thousands (tens of thousands, maybe 100,000) of files, in a very complicated directory tree. (I know, I know... but when we wrote the CGI that runs the BBS (in C, even... heretics, I know!) the filesystem-as-ersatz-database seemed OK.)

This introduces a huge time lag as the find function walks the tree. Since the files are invariably 'found' in most-recently-created order (I'm not sure why, but it's so...), it's quite safe for me to say "stop finding once you've found n hits", since those n files are sure to be the most recent n files, which are the ones I'm interested in.

Is there a way to do this? Does it involve hacking the module? I'm willing to give that a shot... but only if there's no better, non-wheel-reinventing solution...

And, if this *does* involve modifying the module, can/should/how do I post the changes so that others can use them?

Well, I don't know if anyone else has noticed this (maybe my mileage is way different), but when I tried timing a simple File::Find approach (as provided by "find2perl") against the equivalent backtick `find ...` command on my Linux box, I got a 5-to-1 wall-clock ratio in favor of the shell command:
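The comparison I ran was roughly along these lines (this is a reconstruction; the target path is an arbitrary choice, not the tree I actually measured):

use File::Find;
use Time::HiRes qw(time);

my $dir = '/usr/share';    # arbitrary large-ish tree, just for illustration

my $t0 = time;
my $count = 0;
find( sub { $count++ if -f }, $dir );
printf "File::Find:    %6.2fs  (%d files)\n", time() - $t0, $count;

$t0 = time;
my @out = `find $dir -type f`;
printf "backtick find: %6.2fs  (%d files)\n", time() - $t0, scalar @out;

(Run it a couple of times so both passes hit a warm filesystem cache, otherwise the first pass pays an unfair penalty.)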

So here's a two-step strategy:

1. Run a separate process to create and maintain a set of file name listings for each path. The first time you run this it'll take a long time, but thereafter it only needs to find the directories that were created/modified since the last run; then, for just those paths, diff the current file inventory against the previous file name list and write the set of new files to a separate log file. (One approach is given below.)

2. Adapt your re-indexing job so that it works from the log of new files, so it doesn't use find at all.

Of course, you can put the two steps together into a single script.

#!/usr/bin/perl
# Program: find-new-files.perl
# Purpose: initialize and maintain a record of files in a
#   directory tree
# Written by: dave graff
# If a file called "paths.logged" does not exist in the cwd, we create
# one, and treat all contents under cwd as "new". If "paths.logged"
# already exists, we find directories with modification dates more
# recent than this file, and treat only these as "new".
# For each "new" directory, assume a file.manifest is there (create an
# empty one if there isn't one), and diff that file against the current
# inventory of data files, storing all new files to an array.
# Of course, this will fail in all paths where the current user does
# not have write permission, but such paths can be avoided by adding
# a suitable condition to the first "find" command.
use strict;

my $path_log = "paths.logged";
my ($list_name,$new_list) = ("file.manifest","new.manifest");

my $new_flag = ( -e $path_log ) ? "-newer $path_log" : "";
my @new_dirs = `find . -type d $new_flag`;
# add "-user uname" and/or "-group gname" to avoid directories where
# the current user might not have write permission

my $diff_cmd =
    "cd 'THISPATH' && touch $list_name && ".
    "find . -maxdepth 1 -type f | tee $new_list | diff - $list_name | grep '<'";
# the shell functions in $diff_cmd will:
# - chdir to a given path,
# - create file.manifest there if it does not yet exist,
# - find data files in that path (not subdirs, not files in subdirs),
# - create a "new.manifest" file containing this current file list,
# - diff the new list of files against the existing file.manifest,
# - return only current files not found in the existing manifest.
# since it's a sub-shell, the chdir is forgotten when the sub-shell is done.

open( OUT, ">new-file-path.list" ) or die "can't write new-file-path.list: $!";
foreach my $path ( @new_dirs ) {
    chomp $path;
    my $cmd = $diff_cmd;
    $cmd =~ s{THISPATH}{$path}g;
    # the output of the shell command needs to be conditioned to have the
    # path string prepended to each file name (we can leave the new-line
    # in place at the end of the name):
    print OUT join "", map { s{^< \.(.*)}{$path$1}; $_ } ( `$cmd` );
    # replace the old manifest:
    rename "$path/$new_list", "$path/$list_name" or
        warn "failed to update $path/$list_name\n";
}
close OUT;
`touch $path_log`;
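And a sketch of the consuming side for step 2 (the reindex() routine here is a hypothetical stand-in for your actual index-update code):

#!/usr/bin/perl
use strict;

sub reindex {   # hypothetical stand-in for the real indexer
    my ($file) = @_;
    print "would add $file to the index\n";
}

open( my $log, "<", "new-file-path.list" ) or die "can't read log: $!";
while ( my $file = <$log> ) {
    chomp $file;
    reindex($file);
}
close $log;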

Wow! graff, that's some serious help if you just 'whipped up' that script in response to my problem...

Happily, it'll be here at PM for everyone else to find too... I'm not sure it's applicable to this exact problem (though it'll be useful to me in another area... thanks! I hadn't even asked that question yet!), as I need the code to NOT depend on the local find command. I'd really prefer this script to be portable across Solaris and Linux systems, which have differing find commands.
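For what it's worth, the first backtick `find` above has a pure File::Find equivalent that would sidestep the portability problem; a rough, untested sketch:

use File::Find;

my $path_log = "paths.logged";
my $stamp = ( -e $path_log ) ? ( stat $path_log )[9] : 0;

my @new_dirs;   # directories modified since the last run
find( sub {
        push @new_dirs, $File::Find::name
            if -d and ( stat _ )[9] > $stamp;   # reuse the stat from -d
      }, '.' );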

Update: please note that the preprocess option is not available in older versions of File::Find (best to upgrade).

I also feel that there ought to exist a File::Find::breakout, which would basically stop find from executing any further (return? ;)

UPDATE:
After hastily posting this, which I still think is a good idea, a couple of smartasses pointed out goto. Yeah, well, I'm not 0 for 2 so far ;) (dammit! sometimes solutions you know but don't love escape you)
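For completeness, the goto escape they meant looks something like this (a sketch with an assumed limit; note that, like die, it can leave you chdir'ed into a subdirectory, since find() never gets to clean up):

use File::Find;

my $max_hits = 10;    # assumed limit
my @hits;

find( sub {
        push @hits, $File::Find::name if -f;
        goto ENOUGH if @hits >= $max_hits;   # jump out of find() entirely
      }, '.' );
ENOUGH:
print "$_\n" for @hits;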

The preprocess mention is great, because now I can propose an addition to fix what had bugged me about 914's current implementation of the min/maxdepth behaviour: the fact that the files have to be looked at, if for no other reason than to discard them. preprocess lets one avoid that:
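Something along these lines (a minimal sketch; the depth convention and the example limits are my own assumptions):

use File::Find;

my ( $min_depth, $max_depth ) = ( 2, 4 );    # assumed limits

sub preprocess {
    # depth = number of slashes in the path of the directory being read
    my $depth = ( $File::Find::dir =~ tr[/][] );
    return if $depth >= $max_depth;                 # too deep: recurse no further
    return grep { -d } @_ if $depth < $min_depth;   # only dirs matter up here
    return @_;                                      # in range: look at everything
}

sub wanted {
    my $depth = ( $File::Find::dir =~ tr[/][] );
    return if $depth < $min_depth;    # still needed; a maxdepth test is not
    print "$File::Find::name\n";
}

find( { preprocess => \&preprocess, wanted => \&wanted }, '.' );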

This way, in directories below the mindepth, nothing other than directories is even looked at. Also, further recursion down the current branch of the directory tree is aborted immediately upon entering a directory deeper than maxdepth.

The mindepth test in wanted() is still necessary because the directories below mindepth will have to be processed; the maxdepth test there is superfluous.

I'm also quite confident that this will cut the runtime down far enough that the maxhits kludges are unnecessary.

crazyinsomniac:
That's exactly what I meant... a 'breakout' option would seem to be a very useful thing to have.

Though it seems (from that list traffic you linked) that it's not typical for some duffer (read: me) to just modify a module as storied and widespread as File::Find.

Anyhow, it looks to me as though the $File::Find::prune approach mentioned by grinder might be the way to go...
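For reference, $File::Find::prune stops find() from descending into the current directory; set it from within wanted(). A small sketch (the directory name 'attic' is just an example):

use File::Find;

find( sub {
        if ( -d and $_ eq 'attic' ) {
            $File::Find::prune = 1;   # don't recurse below this directory
            return;
        }
        print "$File::Find::name\n" if -f;
      }, '.' );

Note that prune skips subtrees rather than stopping the whole walk, so it helps most when you can tell from the directory itself that nothing below it is interesting.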

I'm pretty sure that the files I want will always be found first, since they're the most recent ones. Every time I've run this, the output array fills up in most-recent-first order, on Linux and Solaris both. Clearly it's not alphabetical (as someone suggested), and in fact I'm depending on this characteristic in another part of the script (there's a way to make it more robust; I'll do that later).
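The more robust version I have in mind would stop trusting readdir order and sort by mtime explicitly; a sketch, with the limit assumed:

use File::Find;

my $max_hits = 20;    # assumed limit
my @hits;
find( sub { push @hits, $File::Find::name if -f }, '.' );

# sort by mtime, newest first, instead of trusting readdir order
my @by_age = sort { $b->[1] <=> $a->[1] }
             map  { [ $_, ( stat $_ )[9] ] } @hits;
splice @by_age, $max_hits if @by_age > $max_hits;
my @recent = map { $_->[0] } @by_age;
print "$_\n" for @recent;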

I'm not sure I totally understand your code, but I'll study it when I get home and can play... thanks!

The eval/die approach works fine, but you should note that it will mess up your current working directory unless you specify no_chdir => 1 in your call to find(), and then you have to use $File::Find::name to access the file. This works for me (it really should be improved to die with a particular text, as in this post: Re: File::Find redux: how to limit hits?):
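A sketch of that combination (the limit and the die text are my own choices):

use File::Find;

my $max_hits = 20;    # assumed limit
my @hits;

eval {
    find( { no_chdir => 1,
            wanted   => sub {
                return unless -f $File::Find::name;
                push @hits, $File::Find::name;
                # dying with a distinctive text lets us tell our own
                # early exit apart from a genuine error:
                die "max hits reached\n" if @hits >= $max_hits;
            } }, '.' );
};
die $@ if $@ and $@ ne "max hits reached\n";   # re-throw unexpected errors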

You could die once $max_hits-- reaches zero in your wanted(); you only have to wrap the find() in an eval to catch the exception so that it doesn't actually abort the script. I'm not sure, though, about your assumption that the files you want will already have been found by the time you die; that may be dangerous.
