=head1 NAME

perlperf - Perl Performance and Optimization Techniques

=head1 DESCRIPTION

This is an introduction to the use of performance and optimization techniques
which can be used with particular reference to perl programs. While many perl
developers have come from other languages, and can use their prior knowledge
where appropriate, there are many other people who might benefit from a few
perl specific pointers. If you want the condensed version, perhaps the best
advice comes from the renowned Japanese Samurai, Miyamoto Musashi, who said:

    "Do Not Engage in Useless Activity"

in 1645.

=head1 OVERVIEW

Perhaps the most common mistake programmers make is to attempt to optimize
their code before a program actually does anything useful - this is a bad idea.
There's no point in having an extremely fast program that doesn't work. The
first job is to get a program to I<correctly> do something B<useful>, (not to
mention ensuring the test suite is fully functional), and only then to consider
optimizing it. Having decided to optimize existing working code, there are
several simple but essential steps to consider which are intrinsic to any
optimization process.

=head2 ONE STEP SIDEWAYS

Firstly, you need to establish a baseline time for the existing code; this
timing needs to be reliable and repeatable. You'll probably want to use the
C<Benchmark> or C<Devel::DProf> modules, or something similar, for this step,
or perhaps the Unix system C<time> utility, whichever is appropriate. See the
base of this document for a longer list of benchmarking and profiling modules,
and recommended further reading.
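A minimal baseline sketch using the core C<Benchmark> module might look like
the following; the summing loop is a hypothetical stand-in for whatever
routine you actually intend to optimize:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use Benchmark qw(timethis);

# Hypothetical workload standing in for the code being optimized.
my @data = ( 1 .. 1_000 );

# Run it enough times for the timing to be repeatable; timethis() prints
# a report and returns a Benchmark object we can compare against later.
my $baseline = timethis( 10_000, sub {
    my $total = 0;
    $total += $_ for @data;
} );
```

Record the report from this run; after each change, rerun the identical
command so the before and after numbers are directly comparable.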

=head2 ONE STEP FORWARD

Next, having examined the program for I<hot spots>, (places where the code
seems to run slowly), change the code with the intention of making it run
faster. Using version control software, like C<subversion>, will ensure no
changes are irreversible. It's too easy to fiddle here and fiddle there -
don't change too much at any one time or you might not discover which piece of
code B<really> was the slow bit.

=head2 ANOTHER STEP SIDEWAYS

It's not enough to say: "that will make it run faster", you have to check it.
Rerun the code under control of the benchmarking or profiling modules, from the
first step above, and check that the new code executed the B<same task> in
I<less time>. Save your work and repeat...

=head1 GENERAL GUIDELINES

The critical thing when considering performance is to remember there is no such
thing as a C<Golden Rule>, which is why there are no rules, only guidelines.
It is clear that inline code is going to be faster than subroutine or method
calls, because there is less overhead, but this approach has the disadvantage
of being less maintainable and comes at the cost of greater memory usage -
there is no such thing as a free lunch. If you are searching for an element in
a list, it can be more efficient to store the data in a hash structure, and
then simply look to see whether the key is defined, rather than to loop through
the entire array using grep() for instance. substr() may be (a lot) faster
than grep() but not as flexible, so you have another trade-off to assess. Your
code may contain a line which takes 0.01 of a second to execute; if you
call it 1,000 times, quite likely in a program parsing even medium sized files
for instance, you already have a 10 second delay, in just one single code
location, and if you call that line 100,000 times, your entire program will
slow down to an unbearable crawl.
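As a sketch of the hash-versus-grep() trade-off described above (the list
contents here are invented purely for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @animals = qw(cat dog fish newt);    # hypothetical data

# Linear scan: grep() examines every element on every lookup.
my $found_grep = grep { $_ eq 'newt' } @animals;

# Hash lookup: pay the cost of building the hash once, then each
# membership test is a single key probe.
my %is_animal  = map { $_ => 1 } @animals;
my $found_hash = exists $is_animal{newt} ? 1 : 0;

print "grep: $found_grep, hash: $found_hash\n";    # grep: 1, hash: 1
```

The hash only pays off when you query it more than once; for a single lookup
the cost of building it can outweigh a lone grep() pass.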
Using a subroutine as part of your sort is a powerful way to get exactly what
you want, but will usually be slower than the built-in I<alphabetic> C<cmp> and
I<numeric> C<E<lt>=E<gt>> sort operators. It is possible to make multiple
passes over your data, building indices to make the upcoming sort more
efficient, and to use what is known as the C<OM> (Orcish Maneuver) to cache the
sort keys in advance. The cache lookup, while a good idea, can itself be a
source of slowdown by enforcing a double pass over the data - once to set up
the cache, and once to sort the data. Using C<pack()> to extract the required
sort key into a consistent string can be an efficient way to build a single
string to compare, instead of using multiple sort keys, which makes it possible
to use the standard, written in C and fast, perl C<sort()> function on the
output, and is the basis of the C<GRT> (Guttman Rossler Transform). Some string
combinations can slow the C<GRT> down, by just being too plain complex for its
own good.
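For instance, the Orcish Maneuver can be sketched like this, caching a (here
deliberately cheap) lc() sort key with C<||=> so it is computed once per
element rather than once per comparison; the filenames are made up:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @files = qw(Delta.txt alpha.txt Charlie.txt bravo.txt);    # made-up names

# Orcish Maneuver: ||= fills the cache on first use, so the key
# extraction runs at most once per element instead of O(n log n) times.
my %key_for;
my @sorted = sort {
    ( $key_for{$a} ||= lc $a ) cmp ( $key_for{$b} ||= lc $b )
} @files;

print "@sorted\n";    # alpha.txt bravo.txt Charlie.txt Delta.txt
```

In real code the cached key would be something genuinely expensive to compute,
such as a date parsed out of a log line; for a key as cheap as lc() the cache
bookkeeping can cost more than it saves.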
For applications using database backends, the standard C<DBIx> namespace
tries to help with keeping things nippy, not least because it tries to I<not>
query the database until the latest possible moment, but always read the docs
which come with your choice of libraries. Among the many issues developers
dealing with databases should remain aware of: always use C<SQL> placeholders,
and consider pre-fetching data sets when this might prove advantageous.
Splitting up a large file by assigning multiple processes to parsing a single
file, using say C<POE>, C<threads> or C<fork> can also be a useful way of
optimizing your usage of the available C<CPU> resources, though this technique
is fraught with concurrency issues and demands high attention to detail.
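A bare-bones C<fork> sketch of the multiple-process idea follows; the line
ranges are invented, and real code would need to handle partial records at the
boundaries and any shared state with great care:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical split: each child would parse its own slice of the file.
my @ranges = ( [ 0, 499_999 ], [ 500_000, 999_999 ] );

my @pids;
for my $range (@ranges) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        my ( $start, $end ) = @$range;
        # ... open the file and parse lines $start .. $end here ...
        exit 0;
    }
    push @pids, $pid;    # parent keeps track of its children
}

# Reap every child before declaring the parse complete.
waitpid( $_, 0 ) for @pids;
print "all workers done\n";
```

Whether this wins at all depends on the workload: for I/O-bound parsing the
disk, not the C<CPU>, is often the bottleneck, and extra processes just add
contention.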
Every case has a specific application and one or more exceptions, and there is
no replacement for running a few tests and finding out which method works best
for your particular environment; this is why writing optimal code is not an
exact science, and why we love using Perl so much - TMTOWTDI.

=head1 BENCHMARKS

Here are a few examples to demonstrate usage of Perl's benchmarking tools.

=head2 Assigning and Dereferencing Variables.

I'm sure most of us have seen code which looks like, (or worse than), this:

    if ( $obj->{_ref}->{_myscore} >= $obj->{_ref}->{_yourscore} ) {
        ...

This sort of code can be a real eyesore to read, as well as being very
sensitive to typos, and it's much clearer to dereference the variable
explicitly. We're side-stepping the issue of working with object-oriented
programming techniques to encapsulate variable access via methods, only
accessible through an object. Here we're just discussing the technical
implementation of choice, and whether this has an effect on performance. We
can see whether this dereferencing operation has any overhead by putting
comparative code in a file and running a C<Benchmark> test.

    # dereference

    #!/usr/bin/perl

    use strict;
    use warnings;

    use Benchmark;

    my $ref = {
        'ref' => {
            _myscore   => '100 + 1',
            _yourscore => '102 - 1',
        },
    };

    timethese(1000000, {
        'direct' => sub {
            my $x = $ref->{ref}->{_myscore} . $ref->{ref}->{_yourscore};
        },
        'dereference' => sub {
            my $ref       = $ref->{ref};
            my $myscore   = $ref->{_myscore};
            my $yourscore = $ref->{_yourscore};
            my $x = $myscore . $yourscore;
        },
    });

It's essential to run any timing measurements a sufficient number of times so
the numbers settle on a numerical average, otherwise each run will naturally
fluctuate due to variations in the environment, to reduce the effect of
contention for C<CPU> resources and network bandwidth for instance. Running
the above code for one million iterations, we can take a look at the report
output by the C<Benchmark> module, to see which approach is the most effective.

    $> perl dereference

    Benchmark: timing 1000000 iterations of dereference, direct...
    dereference:  2 wallclock secs ( 1.59 usr +  0.00 sys =  1.59 CPU) @ 628930.82/s (n=1000000)
        direct:  1 wallclock secs ( 1.20 usr +  0.00 sys =  1.20 CPU) @ 833333.33/s (n=1000000)

The difference is clear to see and the dereferencing approach is slower. While
it managed to execute an average of 628,930 times a second during our test, the
direct approach managed to run an additional 204,403 times, unfortunately.
Unfortunately, because there are many examples of code written using the
multiple layer direct variable access, and it's usually horrible. It is,
however, fractionally faster. The question remains whether the minute gain is
actually worth the eyestrain, or the loss of maintainability.

=head2 Search and replace or tr

If we have a string which needs to be modified, while a regex will almost
always be much more flexible, C<tr>, an oft underused tool, can still be
useful. One scenario might be to replace all vowels with another character.
The regex solution might look like this:

    $str =~ s/[aeiou]/x/g

The C<tr>