
2015/06/29

It does its job. Give it a request, a number of accessions, and the names you want them to go by, and it changes them in the database.

Except...

Accessions are defined as zero-padded six-digit numbers, so instead of 99999, you'd have 099999. If you're strict, everything's fine.

But users are not always strict. Sometimes they just put in 99999, expecting it to work anyway.

Oh, if only it were that easy.

I have requests here to ensure that for request 09999, you can only change accessions associated with that request. That's what lines 27-29 are for: they take the intersection of the accessions the user entered and the accessions belonging to the given request.

Yes, requests are defined as zero-padded five-digit numbers.

If I don't zero-pad the accessions, I get nothing in @accessions.

But if I do zero pad, I get no library name from $param->{ $acc }.

There is a fix for it. I could go back to the source and ensure that this code never sees un-padded numbers, or I could run through the $param hashref again and normalize the keys. But clearly, this is something I should've built in from the start.
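A sketch of that second fix, normalizing the hashref keys up front. The keys and library names here are invented for illustration:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical input: user-entered accessions, padded or not,
# mapped to the library names they should go by.
my $param = { '99999' => 'LibA', '000123' => 'LibB' };

# Normalize every key to the canonical zero-padded six-digit form.
my %normalized;
$normalized{ sprintf '%06d', $_ } = $param->{$_} for keys %$param;

# Now a padded lookup works regardless of how the user typed it.
print $normalized{'099999'}, "\n";    # LibA
print $normalized{'000123'}, "\n";    # LibB
```

Requests would get the same treatment with '%05d', since they're five digits.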

2015/06/22

The query has different endings depending on what it's used for (one specific entry or the whole lot), but the main body is 90 lines. There are 20 left joins in it.

It is an ugly thing.

So ugly, in fact, that I am loath to include it here.

So ugly that I felt it necessary to comment and explain my use of joins.

This is where the trouble started.

I noticed it when I was running tests and hit an error about bind variables.

Clearly, it needed a bind variable, but something along the line blocked it.
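A minimal sketch of the shape of the failing call. The connection details, table names, and comment text are all invented; the point is a commented query with a single placeholder:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# Connection details are placeholders for illustration.
my $dbh = DBI->connect( 'dbi:mysql:database=testdb', 'user', 'pass',
    { RaiseError => 1 } );

# On the old install, the "--" comment kept the "?" below from being
# recognized as a bind variable, so execute() complained about the
# number of bind variables.
my $sql = <<'SQL';
SELECT r.request_id, a.accession
FROM   request r
-- join the given request's accessions (note the apostrophe)
LEFT JOIN accession a ON a.request_id = r.request_id
WHERE  r.request_id = ?
SQL

my $sth = $dbh->prepare($sql);
$sth->execute('09999');
```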

I had this problem on Friday morning on our newer server, then it stopped. Honestly, it was such a fire-fighting day that I lost track of what was happening with it.

Then the module was put on the old server and the problem rose up again.

Whether that makes me Shatner or Lithgow remains an exercise for the reader.

I said "my code has gremlins" and went home at 5pm.

When I got back to the lab this morning, I made several test scripts, each identical except for the hashbang: one for system Perl, which is 5.10; one for the Perl we hardcode into most of our web and cron jobs, which is 5.16; and one for our env perl, currently 5.20.

The cooler solution would've been to install several versions of Perl with Perlbrew and run perlbrew exec perl myprogram.pl instead, but I don't have Perlbrew installed on that system.

The error occurs with 5.10. It does not with 5.16 or 5.20.

And when I run it against a version without the comments in the query, it works everywhere.

I don't have any clue if the issue is with Perl 5.10 or with the version of DBI currently installed with 5.10, and I don't expect to. The old system is a Sun machine that was off support before I was hired in, and the admin for it reminds us each time we talk to him that it's only a matter of time before it falls and can no longer get up. I haven't worked off that machine for around two years, and this query's move to the old server is part of the move of certain services to the new machine.

And, as everything is fine with Perls 5.16 or higher, I must regard this as a solved problem except with legacy installs.

I know that MySQL accepts # as the comment character, but Sublime Text prefers to make -- mean SQL comments, so when I commented the query, I used the double-dash, and our solution is to remove the comments when deploying to the old server. It's a temporary solution, to be sure, but deploying to the old server is only temporary, too.

It's a sad and strange situation where the solution is to uncomment code, but here, that seems to be it.

Update: Matt S. Trout pushed me to check the DBD::mysql versions, to see which corresponded to the error. The offending 5.10 perl used DBD::mysql 4.013, and looking at the DBD::mysql change log, I see bug #30033: "Fixed handling of comments to allow comments that contain characters that might otherwise cause placeholder detection to not work properly." Matt suggests adding "use DBD::mysql 4.014;", which is more than reasonable.
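The suggested guard is a one-liner: use MODULE VERSION makes the load die at compile time if the installed module is older than the stated minimum.

```perl
# Refuses to load any DBD::mysql that predates the
# comment-handling fix for bug #30033.
use DBD::mysql 4.014;
```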

2015/06/15

Last week, Perl devs from all over North America, and some from other continents, met in Salt Lake City, Utah, for YAPC::NA 2015.

Last week, I took vacation. But I spent the time with family, going to Ohio.

So, of course, I wanted to get a list of all the talks that they were able to record and put on YouTube, to list for my local Mongers group, Purdue.pm, and to allow me to go back and watch at my leisure.

I could've parsed the HTML with regular expressions, but that isn't protocol, so I used this as an excuse to work with Mojo::DOM. I generally prefer finding code examples to reading documentation, so here's my code.

To get the HTML, I opened https://www.youtube.com/user/yapcna/videos in Chrome, clicked load more to get all of this year's videos (and some of last year's), then grabbed the HTML from Chrome Dev Tools and pasted it into the __DATA__ section of the program. Grabbing the HTML with LWP or the like wouldn't have grabbed it all.
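A minimal sketch of the Mojo::DOM approach. The a.yt-uix-tile-link selector is an assumption about YouTube's 2015 markup, and the __DATA__ line stands in for the pasted page source:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Mojo::DOM;

# Slurp the pasted HTML from the __DATA__ section.
my $html = do { local $/; <DATA> };
my $dom  = Mojo::DOM->new($html);

# Assumed selector: each video title was an
# <a class="yt-uix-tile-link" href="/watch?v=...">.
for my $link ( $dom->find('a.yt-uix-tile-link')->each ) {
    printf "%s\n    https://www.youtube.com%s\n",
        $link->text, $link->attr('href');
}

__DATA__
<a class="yt-uix-tile-link" href="/watch?v=XXXXXXXXXXX">Example Talk</a>
```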

2015/06/03

In my lab, we have an AJAX-laden web tool which loads a certain JSON API on page load. It was judged that what we had was too slow, so I created a program that wrote that JSON to a static file at regular intervals. The problem with that, of course, is that changes to the data would not show up in the static file until the next scheduled update.

So, we created a third version, which checks the database for a checksum; if it has changed, it regenerates the file and sends the data. Otherwise, it opens the file and sends the data.
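A sketch of that decision logic. The helper subs, file paths, and checksum source are all hypothetical stand-ins for the real ones:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical helpers: the real queries and paths differ.
sub db_checksum   {...}    # e.g. CHECKSUM TABLE, or a MAX(updated_at)
sub generate_json {...}    # the expensive query-and-serialize step

my $cache_file    = '/var/cache/mytool/data.json';
my $checksum_file = "$cache_file.sum";

my $current = db_checksum();
my $cached  = '';
if ( open my $fh, '<', $checksum_file ) {
    local $/;
    $cached = <$fh>;
}

if ( $current ne $cached ) {
    # Data changed: regenerate the static file, remember the checksum.
    open my $out, '>', $cache_file or die $!;
    print $out generate_json();
    close $out;
    open my $sum, '>', $checksum_file or die $!;
    print $sum $current;
    close $sum;
}

# Either way, serve the file's contents.
open my $in, '<', $cache_file or die $!;
print <$in>;
```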

I tested with Chrome Dev Tools, which told a bit of the story, but at the scale where it's closer to anecdotes than data. I wanted to go into the hundreds of hits, not just one. I pulled out Benchmark, which told a story, but wasn't quite what I wanted. It started the clock, ran it n times, then stopped the clock, while I wanted to get clock data on each GET.

I also realized I needed to test to be sure that the data I was getting was the same, so I used Test::Most to compare the object I pulled out of the JSON. That was useful, but most useful was the program I wrote using Time::HiRes to more accurately grab the times, then used Statistics::Basic and List::Util to take the collected arrays of sub-second response times and show me how much faster it is to cache.
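A sketch of that timing harness; the URL and hit count are placeholders:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);
use LWP::UserAgent;
use List::Util qw(min max);
use Statistics::Basic qw(mean median);

# Placeholder endpoint and iteration count.
my $url = 'http://example.com/api/data.json';
my $n   = 200;

my $ua = LWP::UserAgent->new;
my @times;
for ( 1 .. $n ) {
    my $t0 = [gettimeofday];
    $ua->get($url);
    push @times, tv_interval($t0);    # elapsed seconds for this GET
}

printf "best: %.4fs  worst: %.4fs  mean: %ss  median: %ss\n",
    min(@times), max(@times), mean(@times), median(@times);
```

Unlike Benchmark's one-clock-around-n-runs approach, this keeps a per-GET timing, so best, worst, and average cases can be compared.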

And it is fairly significant. The best and worst performance were comparable, but the average case has the cached version being about twice as fast, and using the static file being about 7 times faster. With, of course, the same problems.

If I weren't about to take time out of the office, I'd start looking into other methods to get things faster. Good to know, though, that I have the means to test and benchmark it once I get back next week.