If you're on a unix/linux system, running a "find" command in a subshell is a fine option, but perl's readdir will almost always match it in performance, and I've seen one or two cases where perl does better. The nice thing about readdir is that you don't have to worry about characters in file names that break the text output from "find" (it's possible, for example, to have line-feeds and carriage-returns embedded in file names).
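
For example, here's a bare-bones sketch (the directory path is made up) of listing a directory with readdir; each entry comes back as one exact string, embedded newlines and all, so there's nothing to mis-parse:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $dir = '/some/dir';    # hypothetical path
    opendir(my $dh, $dir) or die "can't opendir $dir: $!";
    while (defined(my $name = readdir($dh))) {
        next if $name eq '.' || $name eq '..';
        # $name is the exact entry name, even if it contains a newline
        print "entry: [$name]\n";
    }
    closedir($dh);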

Whenever I've tried to benchmark File::Find against unix "find" and simple (recursive) readdir, File::Find took noticeably longer to finish on relatively large directory structures. If you aren't dealing with nested directories, you don't need recursion, and readdir is definitely the easiest/best way to go.
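
For the nested case, the kind of simple recursive readdir I mean looks roughly like this sketch (the starting path and the callback are just placeholders), which visits each path depth-first:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Depth-first traversal with plain readdir; $callback is a code ref
    # that gets called with each full path.
    sub walk {
        my ($dir, $callback) = @_;
        opendir(my $dh, $dir) or do { warn "can't opendir $dir: $!"; return };
        my @entries = grep { $_ ne '.' && $_ ne '..' } readdir($dh);
        closedir($dh);
        for my $name (@entries) {
            my $path = "$dir/$name";
            $callback->($path);
            # recurse into real directories; skip symlinks to avoid loops
            walk($path, $callback) if -d $path && !-l $path;
        }
    }

    walk('/some/dir', sub { print "$_[0]\n" });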

BTW, the time needed to scan all the file names in a directory (or traverse a directory tree) is not affected by the quantity of data stored in the files; it's purely a matter of how many files per directory, and how many directories.

(The one case where a unix "find" command did worse than perl's "readdir" was on a ridiculously large directory - something like a million files, all with fairly long names. Apparently, "find" (on a BSD system) was trying to hold all the file names in memory, and at a certain point it had to start using swap space, which caused a drastic, worse-than-linear slow-down. Meanwhile, the run time for a simple perl script with while($f=readdir(DIR)){...} stayed linear with the number of files, no matter how large the directory got.)
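
The perl script in that test was essentially just the loop below (the path is hypothetical); since readdir hands back one entry per call, memory use stays flat no matter how many files are in the directory:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $dir = '/huge/dir';    # hypothetical directory with ~a million entries
    opendir(my $dh, $dir) or die "can't opendir $dir: $!";
    my $count = 0;
    while (defined(my $name = readdir($dh))) {
        next if $name eq '.' || $name eq '..';
        $count++;             # do whatever per-file work is needed here
    }
    closedir($dh);
    print "$count entries\n";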