I think a 10% speed gain is seriously faster. And I did start by saying that there is indeed still a long way to go.

When I started with CSV parsing in perl6 in October 2014, my first working copy took a whopping 256 seconds. Going from there to 3.5 is quite a thing, and it makes me hope it will eventually go down to 0.35, which would be comparable to pure-perl implementations of CSV parsers.

The fastest perl5 parser has no options whatsoever. No options means fewer if statements and more efficient loops. The perl6 parser has all the options Text::CSV_XS supports, and thus includes all the code paths, if statements and other constructs needed to support those features. If perl6 will be able to optimize next and last to end a loop other than through an exception, I expect this process to see a speed-up by factors instead of by percentages.
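The cost of carrying options can be seen in miniature in any language. Here is an illustrative Python sketch (not either parser's actual code; the function names and options are invented) comparing a loop that re-checks feature flags on every row against a specialized, option-free loop:

```python
import timeit

ROWS = [["a", " b ", "c"]] * 10_000

def parse_generic(rows, strip=False, upper=False, skip_empty=False):
    """Generic parser: every option is re-checked on every single row."""
    out = []
    for row in rows:
        if skip_empty and not any(row):
            continue
        if strip:
            row = [f.strip() for f in row]
        if upper:
            row = [f.upper() for f in row]
        out.append(row)
    return out

def parse_fast(rows):
    """Specialized parser: no options, so the loop body is branch-free."""
    return [row for row in rows]

# With no options enabled, both produce the same result...
assert parse_generic(ROWS) == parse_fast(ROWS)
# ...but the generic loop still pays for the option checks on every row.
t_generic = timeit.timeit(lambda: parse_generic(ROWS), number=50)
t_fast = timeit.timeit(lambda: parse_fast(ROWS), number=50)
print(f"generic: {t_generic:.3f}s  specialized: {t_fast:.3f}s")
```

The same effect, multiplied across every option Text::CSV_XS supports, is why an option-free parser has an inherent head start.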

I understand your stance that perl6 is not ready for serious production when looking at speed, but if you also look at what the language offers, you might conclude that once you master the basic constructs, development-time in perl6 weighs up to solving problems in other languages. Scripts for complex tasks that do not depend on speed, but require serious thought in other languages, just write themselves in perl6. To me at least, it brought new insights into how I could solve problems.

Sorry, but a 10% improvement, when by your own admission an order of magnitude is required, doesn't come close to "serious" IMO.

If perl6 will be able to optimize next and last to end a loop other than through an exception,

I think you've hit the nail on the head.

if you also look at what the language offers, you might conclude that once you master the basic constructs, development-time in perl6 weighs up to solving problems in other languages.... scripts for complex tasks that ... but require serious thought in other languages, just write themselves in perl6.

P6 has a seriously powerful and interesting syntax; but unless that syntax can be converted into runtime code that runs efficiently, without resorting to complex XS code called via the P5 runtime, whatever development-time gains might be available are swamped by the need to also learn and use XS to achieve performance.

There are several features of the P6 language that are simply too complex to ever allow the language to be interpreted efficiently. Any one of generics, OO exception handling, the MOP, incremental regex, hypotheticals, junctions, lazy evaluation, macros, user-defined operators, active metadata, or introspection precludes conversion to an efficient bytecode format.

Just as the possibilities of overloading, tying, and magic impose constant runtime performance penalties on every Perl5 opcode, each of those P6 features adds the requirement for a runtime decision point and branch, or an indirection. Any two or three combined would make the tasks of compile-time and runtime (JIT) optimisation very, very tough.

With all of them, the task is simply impossible in an interpreted language. With that number of permutations of decision points and indirections, the time taken to select and generate the optimised code paths will be greater than the time required to run the unoptimised code.

The fact that even calling P5/XS code from P6 to do the bulk of the benchmarked task imposes a 2-orders-of-magnitude performance penalty, and that this is lauded as a breakthrough...

The only way P6 will ever run efficiently is if it is compiled; and that would require a multi-pass, iteratively refining compiler of the complexity of GHC, which, two years from now, will have been under constant development for thirty years by a well-funded succession of seriously clever professors and a nearly unlimited supply of cheap and enthusiastic undergrads and grads!

It's taken the last 15 years (my time with perl) for P6 to get to the point where being only 200x slower is a newsworthy event; I simply don't have the lifespan nor the enthusiasm to wait another 15. I've got too much doddering and pottering and bitching loudly about the youth of today to do before I kick it to even consider it.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.

In case anyone is interested, the semi-coherent run-on sentence that is the second-to-last paragraph of this rant references GHC, the Glasgow Haskell Compiler. It was started in 1989, so I don't know what "thirty years in two years time" could possibly mean. Perl6 borrows quite a bit from Haskell, so it's worth reading up on if you're into that kind of thing.

Even setting aside XS, the fastest P6 vs P5 is 3.009/0.121=24.9, which is still pathetic. But there must be something wrong with their methodology, because they have Rust taking 0.000 seconds to C's 0.002, which is absurd.
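One possible explanation for the 0.000 entry, rather than a broken timer, is display rounding: if the benchmark table prints seconds with three decimals, any run under half a millisecond is shown as 0.000. A quick Python sketch using the figures quoted above:

```python
# If a benchmark table prints seconds with three decimals, any run
# faster than 0.0005 s is displayed as 0.000 -- indistinguishable
# from "took no time at all".
for elapsed in (0.0004, 0.0016, 0.121, 3.009):
    print(f"{elapsed:.3f}")
# -> 0.000, 0.002, 0.121, 3.009

# A ratio computed from the displayed figures (the P6/P5 comparison
# quoted above) is only as precise as that rounding allows:
print(round(3.009 / 0.121, 1))  # -> 24.9
```

So a Rust run of, say, 0.4 ms would legitimately appear as 0.000 next to a C run of 1.6 ms shown as 0.002.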

For C and Rust, and to a lesser extent the next few, the testing set is too small to be significant: process management and system load have far more effect on these than on the others. If, however, I make the test set bigger, the slower processes take too much time.

I already tried to subtract the startup time, but as the startup time is about the same as the runtime, and so low that noise has more impact than the actual runtime, the result is more of an indication that these processes are significantly faster than the rest.
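The subtraction described above can be sketched generically. This Python stand-in (my own toy harness, not the actual benchmark scripts) times an empty program to estimate interpreter startup, times a full run, and subtracts, using the median of several runs to damp scheduler and load noise:

```python
import statistics
import subprocess
import sys
import time

def wall_time(cmd, repeats=20):
    """Median wall-clock time of a command over several runs;
    the median damps scheduler and system-load noise."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# Startup cost alone: run the interpreter with an empty program.
startup = wall_time([sys.executable, "-c", "pass"])
# Full run: startup plus the actual work (a trivial stand-in here).
total = wall_time([sys.executable, "-c", "sum(range(100_000))"])

work = total - startup
print(f"startup {startup:.4f}s  total {total:.4f}s  work {work:.4f}s")
# When `work` is of the same order as the run-to-run noise, the
# subtraction tells you little more than "this one is fast".
```

As the post says, once the work term drops to the level of the noise term, the subtracted figure is at best an indication, not a measurement.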

Also note that the goal of these graphs is to show the relative speed of the perl methods available. I added the other languages later out of curiosity.