As I emphasize in Mastering Perl, faster code comes from better algorithms, not different syntax. The difference between tr/// and y///, for instance, is so insignificant that the real problem is the time you waste thinking about it.

But the big problem with these benchmarks, and the people who tout their results, is that they don't run them more than once. They get an answer and stop thinking about it.

Benchmarking is dangerous because there isn't one answer. There are many answers and many things to consider. After I run a benchmark, I run it again to see if I get the same answer. Typically the percentages change a bit, but in this case the two snippets trade places too:
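The comparison looks something like this sketch using the core Benchmark module; the test string and the two-second run time are my assumptions, not the original code:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $string = 'The quick brown fox jumps over the lazy dog' x 10;

# A negative count runs each snippet for at least that many CPU
# seconds, then cmpthese reports the rates and relative percentages.
cmpthese( -2, {
    'tr' => sub { my $copy = $string; $copy =~ tr/a-z/A-Z/ },
    'y'  => sub { my $copy = $string; $copy =~  y/a-z/A-Z/ },
    } );
```

Each sub copies the string first so every iteration transliterates fresh input rather than an already-uppercased one.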

If I thought that these two operators were different, I might be confused by these results. Instead of stopping here, I want to try something else. I'll benchmark y/// against itself. I should get the same times for each snippet because it's the same thing:
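That control is the same setup with identical code under two different labels; again a sketch with my assumed input string, not the original code:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $string = 'The quick brown fox jumps over the lazy dog' x 10;

# The two subs are identical, so any difference cmpthese reports
# between 'y1' and 'y2' is measurement noise, not the operator.
cmpthese( -2, {
    'y1' => sub { my $copy = $string; $copy =~ y/a-z/A-Z/ },
    'y2' => sub { my $copy = $string; $copy =~ y/a-z/A-Z/ },
    } );
```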

Comparing y/// to itself shows that there's some inherent uncertainty in the measurement. And there always will be. Ignore that to your own embarrassment.

Notice that across the runs the rate ranges from 4,612,502/s to 5,324,464/s. That's a difference of about 700,000 iterations per second, or 15% of 4,612,502 and 13% of 5,324,464. Those differences are much greater than the relative percentages within the reports. That's a problem: running another test right after a test gives different results even if the percentages within each test are the same. Damn you, multi-tasking computers!
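The arithmetic on that spread is straightforward; the two rates are the extremes from the runs above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my( $low, $high ) = ( 4_612_502, 5_324_464 );
my $spread = $high - $low;   # about 700,000 iterations/s

# The spread between runs dwarfs the few-percent differences
# that cmpthese reports within a single run.
printf "spread: %d/s (%.0f%% of the low rate, %.0f%% of the high rate)\n",
    $spread, 100 * $spread / $low, 100 * $spread / $high;
```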