I've been working on a restructured perlvar and I think I've mostly got it right, but at the moment I'm almost wishing that I never have to see it again. Have a look for yourself. It's in the perl git repo in the briandfoy/perlvar branch (if you're looking at the github mirror, realize it's several hours behind).

The new version notes when each variable appeared in the Perl 5 series of releases if it wasn't there at the start.

I still have to ensure that nothing breaks the perldoc -v stuff. I've tried it on several variables without problems but I don't know if some of the restructuring affected the odd variable.

I expect to merge this for the next development release, so I have a couple of weeks to sort out whatever is left.

The trick, however, is whether this sort of smart match is faster than a hash lookup. Yes it is, and no it isn't. I've updated my Stack Overflow answer with additional benchmarks and a new plot.

Smart matches are faster if you have to create the hash, but slower if you already have the hash.

There's a middle ground that I don't care to find: for some number of searches of the hash, the cost of creating the hash amortizes enough that it's faster than a smart match.
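Here's the general shape of that benchmark, though not the actual code behind the plots in my answer. grep stands in for the linear scan a smart match does over an array, and the sizes and names are my own choices for illustration:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @array  = ( 'aa' .. 'zz' );    # 676 strings to search
my $target = 'zz';                # worst case: the last element

# the hash that already exists before the searching starts
my %lookup = map { $_ => 1 } @array;

cmpthese( -1, {
    # the linear scan a smart match does over an array
    linear_scan => sub { my $found = grep { $_ eq $target } @array },

    # pay for building the hash on every search
    build_then_lookup => sub {
        my %h = map { $_ => 1 } @array;
        my $found = $h{$target};
    },

    # the hash already exists, so a single lookup does it
    lookup_only => sub { my $found = $lookup{$target} },
} );
```

When you run something like this, lookup_only should win easily while build_then_lookup should trail the linear scan, which is the whole point: the answer flips depending on whether the hash already exists.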

It depends on what you are doing, but that's the rub with every benchmark. The numbers aren't the answers to your real question, in this case "Which technique should I use?". They only support a decision once you add context.

Karel Bílek on Stack Overflow wondered if the smart match operator was smartly searching. We know it's smart about what it should do, but is it also smart in how it does it? In this case, is it smart about finding scalars in an array?

I benchmarked three important cases: the match is at the beginning of the array, at the end of the array, and in the middle of the array. Before you look at my answer on Stack Overflow, though, write down what you think the answer should be. Done? Okay, now you can peek.
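A sketch of that kind of position benchmark, using List::Util's first, which stops at the first match the way a short-circuiting search would; the array size and labels are my own:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use List::Util qw(first);

my @array = ( 1 .. 1000 );

# if the search short-circuits, where the match sits should matter
cmpthese( -1, {
    at_front  => sub { my $f = first { $_ == 1    } @array },
    in_middle => sub { my $f = first { $_ == 500  } @array },
    at_back   => sub { my $f = first { $_ == 1000 } @array },
} );
```

The way to read the results: if the three rows come out about the same, the search isn't short-circuiting; if at_front crushes at_back, it is.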

It's royalty report time, so the mailbox was full of checks this week. I keep close tabs on how well my books do so I can figure out if the time to write the books would have been better spent making fancy coffees at Starbucks. I think I'm slightly ahead, but only slightly.

However, I noticed that the total revenue for Learning Perl, 4th Edition is very close to $1,000,000 for the 20 quarters it has been available. And, curiously, it's getting closer to that number even though the latest edition has been out for 9 quarters. Who's still buying the old edition?

Even better, though, the Fifth Edition is already over $500,000 in total revenue. Now, only a small slice of that gets to the authors, especially on a title with multiple authors, and that only comes four times a year over several years. My cut of Learning Perl, 4th Edition is only 1%, so I'm not buying any Bentleys.

My co-author (and original author) Randal Schwartz probably isn't as excited about that as I am. I think he was doing that level of revenue every quarter back in the First and Second editions, when you could write "Perl" or "Java" on a cardboard box and sell it for $75 with no trouble at all. That was the time to be a writer looking for enough money to eat.

I'm a bit more upbeat, though, because eBook sales have really taken off. Learning Perl is no slouch of a seller, and a fourth of its sales for Q2, measured in units sold, were eBooks. My other O'Reilly books have similar ratios. I think that's directly related to O'Reilly Media's commitment to eBooks and figuring out how to sell them. Instead of trying to make them the same price as the print book, they give you a break (normally about 25%), and you get access to the book for life, in a variety of formats, and with no tricky DRM. I still think eBook prices should be lower, but we're working on it :)

And, with Perl 5.14 due out sometime next year, expect some updates to some major books. I've been using the /r flag on the substitution operator quite a bit, and it deserves to be in print. :)
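For the curious, /r makes the substitution return a modified copy instead of changing the original in place:

```perl
use v5.14;   # the /r flag needs Perl 5.14

my $original = 'Just another Perl hacker,';

# /r returns the changed string and leaves $original alone
my $changed = $original =~ s/Perl/Unix/r;

say $original;   # Just another Perl hacker,
say $changed;    # Just another Unix hacker,
```

Before /r, you had to copy the string first and then modify the copy in a separate statement.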

I knocked off another big chunk of the distributions that MyCPAN didn't like. I worked on the 700 or so dists that it couldn't unpack, and that number is now down to about 30. The changes to my method weren't that dramatic, but they cleared up a bunch of the problem dists.

First, I was stopping too soon. Many archives unpack just fine even if they give warnings. Now I'll just record the warnings and wait to see if I get a directory with some files in it.
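The tolerant version looks something like this sketch. I'm using Archive::Tar here rather than whatever MyCPAN actually uses, and the names are mine: record the warnings, try the extraction anyway, then judge success by whether files showed up.

```perl
use strict;
use warnings;
use Archive::Tar;
use Cwd qw(getcwd);
use File::Find qw(find);
use File::Spec;

sub try_unpack {
    my ( $archive, $dest ) = @_;
    $archive = File::Spec->rel2abs( $archive );

    # record warnings instead of treating them as failure
    my @warnings;
    local $SIG{__WARN__} = sub { push @warnings, @_ };

    mkdir $dest unless -d $dest;
    my $start_dir = getcwd();
    chdir $dest or die "Could not chdir to $dest: $!";
    eval { Archive::Tar->new( $archive )->extract };   # noisy is fine
    chdir $start_dir;

    # the real success test: a directory with some files in it
    my $file_count = 0;
    find( sub { $file_count++ if -f }, $dest );

    return { files => $file_count, warnings => \@warnings };
}
```

The design point is that the return value separates "it complained" from "it failed": a dist that warned but left files behind still counts as unpacked.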

Second, I was using an HFS case-insensitive file system (stupid, but it's the default). Many distributions did not like that. Moving everything to a case-sensitive file system solved many of those problems.
Once I find out why Foo-Bar-0.01.tar.gz doesn't unpack, I usually solve the problem for the whole Foo-Bar-* series, which probably had the same problem all along.

The remaining 30 or so dists won't unpack with any of the tars that I tried. That's not a huge number out of the 140,000 total; maybe they unpack with different tar implementations. I'm not that worried about it.

Here's the short script I used to go through all the distros once I had them on a new filesystem: