perlquestion
sowais
<p>Hello Monks! I am a novice at Perl and need help with a code I have written. I am trying to grab all files with a certain extension using File::Find::Rule. The problem I have is that I am looking two specific directories using a 'while' loop with a counter for the files but for some reason the $directory path does not set to the 'Archive' option when $count = 0. After looking for files, I write them to a log for output. Please see my code below. Any help would be greatly appreciated. Thanks!
</p>
<code>
use strict;
use warnings;
use File::Find::Rule;
my $directory = 'C:\Test\WMA_calls';
my $output_file = 'C:\Test\Results\output.txt';
my $count=0;
my @files;
while($count < 2) {
print $count;
$directory = 'C:\Test\Calls\Archive' if $count eq 0;
$directory = 'C:\Test\Calls\History' if $count eq 1;
@files = File::Find::Rule->file()
->name( "*.wma", "*.wmv" )
->in( $directory );
$count++;
}
if(@files) {
open FILE, ">", $output_file or die "Can't open file, $output_file: $!";
foreach(@files) {
print FILE "$_\n";
}
close FILE;
}
else {
print "\n\nError!! No files found";
}
exit 0;
</code>
<p>Update: I was able to fix the above code and get it to work but the issue now is that I have more than 4000+ files in the two folders and its taking a really long time for the script to complete. Any advice on a more efficient/less memory hog way? Thanks again for the responses!
</p>
<p>Update: Thank you for all the responses! I believe I was not clear on the original description of what I am trying to accomplish. I am trying to read all files of certain type from a given directory and all its subdirectories and eventually log all these filenames. From the previous responses I have been able to get a working code but when I did a dry run in Production, where I have over a million files I will be logging, its taking over 2hrs for just one directory read. Is there a more efficient/less memory consumption way I can do this? the dry run has taken over 100 MB of memory and counting...
</p>