If you slurp the entire file into an array or hash, the program will start slowing down pretty quickly, depending on your server's resources. Storing the contents in a hash will generally slow the program down more than an array will.

The best way to handle large files is to not slurp them anywhere at all -- process each line one at a time. For example:
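Here is a minimal sketch of the line-by-line pattern. The filename `large_file.txt` is just a placeholder, and the script creates a small demo file first so it runs as-is; with a real large file you would skip that setup and only keep the read loop.

```perl
use strict;
use warnings;

# For demonstration only: create a small file to read. In practice you
# would already have a large file on disk ('large_file.txt' is a placeholder).
my $path = 'large_file.txt';
open my $out, '>', $path or die "Cannot create $path: $!";
print {$out} "line $_\n" for 1 .. 5;
close $out;

# Read the file one line at a time; only the current line is held in memory.
open my $fh, '<', $path or die "Cannot open $path: $!";
my $count = 0;
while ( my $line = <$fh> ) {
    chomp $line;    # strip the trailing newline
    $count++;       # ...process $line here...
}
close $fh;
unlink $path;       # remove the demo file

print "processed $count lines\n";
```

The three-argument `open` with a lexical filehandle (`my $fh`) is the idiomatic modern form; the `while ( my $line = <$fh> )` loop reads exactly one line per iteration, so memory use stays flat no matter how big the file is.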

Essentially, you read the file line by line. Because Perl cleans up after itself, the previous line goes "out of scope" when the next line is read, and the memory that the previous line used is freed.