All of them, I think. I suppose you could argue that (1) doesn't apply if you're copying and pasting the code from Perl Monks, but you still have to understand the code and adapt it to your precise requirements, both of which are more difficult when the code is more complicated.

I'm bewildered by your objection, to be honest. Do you seriously think it's a good idea to optimise code that's already fast enough? There's an interesting debate to be had about "premature optimisation", but I've never heard anyone advocate needless optimisation.

In the specific example you benchmarked, putting the two versions side by side, stripped of the benchmark code, with some unnecessary punctuation removed and a few extra spaces added for clarity (this is not a criticism of your preferences!), you get this:
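(The benchmarked subs themselves aren't reproduced here, so take this purely as an illustrative sketch of the shape involved: I'm assuming the familiar defaults-merge idiom on one side and a triple-reverse spelling of the same merge on the other, and the names parse_plain and parse_reverse are hypothetical, not the original code.)

    use strict;
    use warnings;

    # Plain version: hash assignment is last-key-wins, so the caller's
    # arguments (@_) override the defaults that precede them.
    sub parse_plain {
        my %args = ( one => 1, two => 2, @_ );
        return \%args;
    }

    # Same structure, plus three uses of reverse and a longer variable
    # name. The outer reverse restores key => value order, and the
    # caller's arguments still override the defaults.
    sub parse_reverse {
        my %reversedArgs = reverse( reverse( @_ ),
                                    reverse( one => 1, two => 2 ) );
        return \%reversedArgs;
    }

The triple-reverse spelling produces the same merge as the plain one; whether it corresponds to the faster of your two versions I can't claim from this sketch, but it shows how little the shape of the code changes.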

The extra complexity amounts to 3 uses of the keyword reverse, and a longer variable name. The structure of the code is otherwise identical.

My question, really, is: is that "extra complexity" so onerous as to amount to a maintenance problem?

Conversely, whilst the original application's use of the subroutine in question may not benefit greatly from the optimisation, the next application that uses it could: it might call that sub in an inner loop, where a 4.5x to 5x performance advantage (on my machine) could be significant.
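For that inner-loop case, a minimal Benchmark sketch (reusing the hypothetical parse_plain/parse_reverse subs from above; the -3 means "run each candidate for at least three CPU seconds"):

    use Benchmark qw( cmpthese );

    # cmpthese prints each candidate's rate plus a table of relative
    # percentage differences, which is where a 4.5x gap would show up.
    cmpthese( -3, {
        plain    => sub { my $r = parse_plain(   two => 22 ) },
        reversed => sub { my $r = parse_reverse( two => 22 ) },
    } );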

More importantly, if the efficient version becomes the 'standard' way of parsing named parameters and is used universally, rather than alternating between the optimal and non-optimal versions on a case-by-case basis, then the optimal version becomes 'normal', and whatever potential for a maintenance problem might exist as a result of the "extra complexity" disappears because of familiarity.

I guess I am looking for the balance between consistency, in the face of future need, and simplicity.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.