mavili has asked for the
wisdom of the Perl Monks concerning the following question:

Hi monks, I've got the code below, and it takes ages (over 40 seconds) to run. I'm looking for ways to make it more efficient if possible.

The array @files has around 450 elements, and LOG is a large file containing tens of thousands of lines, which is read into the string $log (this is faster than iterating through the lines of LOG):

EDIT: Problem solved. I was doing it completely wrong by reading the whole log file into a string. The solution was to go through the log in a loop, but in a different way. The solution is more or less outlined below:
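Since the original code was not posted, here is a hedged sketch of the single-pass approach the edit describes: instead of scanning the whole log string once per file (450 files times two regexes means roughly 900 full scans), read the log line by line once and tally counts per filename in hashes. The `file=` capture pattern, the `ERROR`/`WARNING` stand-ins for regexp1/regexp2, and the hash names are all assumptions for illustration:

```perl
use strict;
use warnings;

# Hypothetical filenames standing in for the ~450-element @files array.
my @files = ('foo.txt', 'bar.txt');
my %is_wanted = map { $_ => 1 } @files;

# Stand-in log text; the real code would open the LOG file instead.
my $log_text = <<'END';
file=foo.txt ERROR disk full
file=foo.txt WARNING low space
file=bar.txt ERROR timeout
file=baz.txt ERROR not in @files, ignored
END

my (%count1, %count2);
open my $log_fh, '<', \$log_text or die "Can't open log: $!";
while (my $line = <$log_fh>) {
    # Assumption: each log line names the file it refers to.
    next unless $line =~ /file=(\S+)/;
    my $file = $1;
    next unless $is_wanted{$file};
    $count1{$file}++ if $line =~ /ERROR/;     # stand-in for regexp1
    $count2{$file}++ if $line =~ /WARNING/;   # stand-in for regexp2
}
close $log_fh;

print "$_: errors=", ($count1{$_} // 0),
      " warnings=", ($count2{$_} // 0), "\n" for @files;
```

The key change is that the log is traversed exactly once, and each line does cheap hash lookups rather than triggering a fresh scan of a multi-megabyte string.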

Could you disclose what regexp1 and regexp2 look like, and how they are derived from $file? Presumably they are not constants -- otherwise you would be calculating the same two counts 450 times each -- and with that information it may be possible to see some way of speeding up the processing.
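The point about constants can be made concrete with a small sketch: if the two regexes did not depend on $file, each count could be computed once, before the per-file loop, rather than 450 times. The pattern and log text here are stand-ins, since the real regexp1/regexp2 were not posted:

```perl
use strict;
use warnings;

# Stand-in for the real $log string.
my $log = "ERROR a\nWARNING b\nERROR c\n";

# Precompile the pattern once (stand-in for regexp1); the
# "countless list assignment" idiom counts all matches in one pass.
my $re1    = qr/ERROR/;
my $count1 = () = $log =~ /$re1/g;

print "$count1\n";   # prints 2 for the sample log above
```

If the counts really are the same for every $file, hoisting them out of the loop turns 900 scans into 2.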
