
Luckily, my Dell XPS 15 has a touchscreen, and Windows has an on-screen keyboard, or I wouldn’t be typing this much. I haven’t decided yet whether to buy a new laptop or just a new (external) keyboard, since I already have new machines coming within 1-2 months, but luckily nothing apart from the keyboard and trackpad was damaged.

Slightly more fun than the touch screen is the speech recognition. After the training, it’s certainly faster than typing on the screen. It does add two spaces after every period, which is not my usual style, but since I don’t have a keyboard any more, I don’t really have a choice. Unfortunately, typing passwords is both insecure and close to impossible, since you cannot read what is actually being typed. Windows 8 improves this by allowing you to display the contents of password fields, but since I have not yet upgraded my personal laptop, this is not much help.

Once you’re past the password field though, the rest is not too bad. Each sentence is typically quite quick, but corrections take a while. As my last keyboard post mentioned, I type quite quickly on a good keyboard, and speech recognition is not going to replace that anytime soon for me. It does seem to be improving as I use it, which is promising, but it may take more patience than I have available. Maybe I will suffer this for a while, just to see how well it can work. I’ll probably break down and buy a new laptop…

Breaking my ‘schedule’ already with a second consecutive meta-post, but I think it’s worth it to promote a great plugin I came across: Wordfence is a security plugin for WordPress that does an amazing amount of work to prevent and detect hacking.

Now, being hacked is not something I’ve been concerned about, since this is an extremely low-traffic site, but even so I am not immune to spiders that search for vulnerabilities: I came across Wordfence while looking at my list of 404 errors, many of which were trying to find a non-existent file called timthumb.php.

It turns out that there was a serious vulnerability in TimThumb, which is used by many WordPress installations to do image resizing. That link led me to Wordfence, which I installed immediately.

I wanted to call out the one or two best features of Wordfence, but I genuinely can’t decide which are best. First, it will scan the WordPress source (including plugins and templates) against their own repository taken from official sources to detect modifications. Second, there is a page showing live traffic (yes, I tested it – remote connections show up within seconds), which is great for spotting unusual traffic patterns. Third is the firewall, which will throttle or block IPs that are connecting too quickly or causing too many 404s. Finally, it sends email notifications for any suspicious (or, optionally, normal) activity.

For those wanting per-country blocking or premium support there is a paid version, but the basics are all free. If only to provide early detection of maliciously modified PHP files, it is worth it.

For the last week or two, an intense discussion has been occurring on the python-ideas mailing list about asynchronous APIs, to which I’ve been contributing/supporting an approach similar to async/await. Since I wrote a 5000 word essay this week on the topic, I’m going to call that my post.

Getting this post out to make my schedule look like a plan (“Achieve Ultimate Blog Success In One Easy Step”) rather than an accident of laziness. I don’t have a list of topics yet, and even if I did then I wouldn’t be sharing it, but I will be doing posts on a regular basis.

My aim is to do a detailed Python Tools blog every two weeks, and a meta or personal blog every other week (such as this week).

This may not be as often as many other bloggers, though I think that I’m going in-depth enough with the Python Tools to get away with it. Every two weeks gives me a good chance to research, write and review before publishing, and my (presumed) lack of readership means probably nobody is expecting frequent updates anyway.

Now, with that out of the way, it’s time to go and choose a bug to write about for next week…

Once I started work, one of the first things I asked for was a new keyboard. (Something about typing a million-word thesis makes you fussy about your keys.) Slightly surprisingly, despite the keyboard I asked for not being on the list of approved hardware, I got it.

Possibly this is because it is, in the manufacturer’s words, “bad ass.”

Despite getting the “silent” model, it’s still quite a noisy keyboard, though the noise is plastic-on-plastic rather than the distinctive “click” of a mechanical keyboard. There is no difference in feel between the silent and the clicky, but there’s a huge difference from non-mechanical keyboards. Typing this on my laptop keyboard feels hard and stilted. Every key stops short, and after a while it begins to rattle up into my finger joints. Not pleasant.

But that part is not much of a surprise, since I previously owned a Das Keyboard and knew how good the mechanism was. However, my old one had labels, and so going without was a new experience. So far there have been two benefits.

The first is that “bad ass” is not just the manufacturer’s words, but the reaction of basically everyone who sees it. The box had been opened before I got hold of it, because the person it was delivered to was showing it off to people (which may also explain the late delivery…). I’ve also had a number of people who come into my office simply stop and stare as I type. The “wow” factor really exists.

On the downside, not having labels requires really getting to know the keyboard layout. In my case I already knew the layout, because I’ve been using the two-handed Dvorak layout for the last year. Without turning this into a rave about Dvorak, I think it’s great. However, one thing I’ve noticed myself doing is thinking in Qwerty and typing in Dvorak.

For example, Ctrl+Shift+B is used in Visual Studio to build the current solution. On a Qwerty keyboard, you would press the key labelled “N” to produce a “B”, and hence the keys labelled “Ctrl”, “Shift” and “N” to build. I found myself thinking of it as “control, shift, N” rather than “B”, which became problematic when my fingers started going to the actual key for “N” (labelled “L” on a Qwerty keyboard) instead of the key for “B”.

In short, I was still referring to the labels on my keyboards, despite them being incorrect. The keys for “A” and “M” are in the same place, and using them even after a year of Dvorak still felt more comfortable than other keys. I guess it’s just harder to trust yourself when there’s a sign telling you that you’re wrong.

Switching to a keyboard without labels has revealed just how often I would look at the keys. Cutting that out completely has significantly increased my confidence and reduced my typing errors. That alone, even without the “bad ass” effect, is worth it (blah-blah-YMMV-blah). Considering the amount of typing I do for work, it’s been a good investment, and if they hadn’t bought one then I would have bought my own and not regretted it at all.

WCCI 2012 has come and gone, so now it’s up to the proceedings to make an actual change in the world. Hopefully some of the attendees found some new perspective or idea to influence their future work, though the whole experience is pretty full-on, so I thought I’d use this post as an excuse to write down my two biggest “lessons learnt.” (Whether I apply or use these in the future is a different matter…)

Gene Expression

EAs typically use one of a few common representations for the individuals they are evolving, such as integer or real-valued vectors. (There are others that aren’t relevant here.) In general, these vectors are of a fixed length and can change either (semi-)randomly or through exchanges with other individuals. Gene Expression is a concept where a vector has more elements than required, but a secondary Boolean vector indicates whether each “gene” (element) is “expressed” (active) or ignored.

For example, the vector “12345” with expression vector “10110” implies the solution should be (based on) “134”. However, because the expression vector can change independently, the un-expressed values are retained. Dan Ashlock’s presentations showed that even allowing a small amount of slack space (adding 4 elements to a 24 element vector) had a significant impact.
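A minimal Python sketch of the idea (the `express` helper is my own naming, not from any particular EA library): the mask decides which genes contribute to the solution, while the full vector survives underneath.

```python
def express(genes, mask):
    """Return only the genes whose mask bit is set (the 'expressed' genes)."""
    return [g for g, m in zip(genes, mask) if m]

# The example from the text: "12345" expressed through "10110" yields "134".
genes = [1, 2, 3, 4, 5]
mask = [1, 0, 1, 1, 0]
print(express(genes, mask))  # [1, 3, 4]

# Flipping a mask bit re-activates a dormant gene without ever having lost it.
mask[1] = 1
print(express(genes, mask))  # [1, 2, 3, 4]
```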

What’s potentially more interesting is the application of this concept on GPUs. Because of their architecture, variable-length vectors are particularly inefficient to use. Slack space up to the maximum possible length is a possibility, but then you need a way of indicating which elements should be ignored. My own intent was to include a length attribute, but an alternative would be to include an expression vector.
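To illustrate the contrast (with made-up values, in plain Python rather than GPU code): both approaches pad every individual to the same maximum width, but a length attribute can only mark a trailing run as padding, while an expression vector can switch interior elements on and off independently.

```python
MAX_LEN = 5

def valid_from_length(length, max_len=MAX_LEN):
    """Length attribute: elements before `length` count, the rest are padding."""
    return [i < length for i in range(max_len)]

individual = [7, 2, 9, 0, 0]      # padded to the fixed width a GPU kernel wants
by_length = valid_from_length(3)  # [True, True, True, False, False]
by_mask = [1, 1, 0, 1, 0]         # expression vector: note the interior zero

print([g for g, v in zip(individual, by_length) if v])  # [7, 2, 9]
print([g for g, v in zip(individual, by_mask) if v])    # [7, 2, 0]
```

Either way, every individual stays the same shape, so the per-element "is this valid?" test can be evaluated uniformly across the whole population without branching on a per-individual length.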

Real-world Problems

All the attendees at Zbigniew Michalewicz’s lecture will know what I’m referring to here, since it had quite an impact. One of the main messages was that a lot of research is in “silos,” trying to find the fastest/best solution to a very narrow problem, and does not consider that (a) fast-enough is good-enough and (b) there are real-world applications that are completely ignored. Other presenters made similar points, perhaps better identifying the underlying reasons (lack of industry-academia interaction, time/effort/money required to create better than proof-of-concept software).

Probably the best outcome from this presentation was the defensive reactions it provoked: the number of people who said “hang on, look at what I’ve been working on” was significant, and those conversations certainly reached a wider audience than the written publications would have. Normally I’m not a fan of purely “awareness raising” presentations, but in this case the awareness of actual work was raised, even if it was not work done by the presenter.

It will be interesting to start my new job in the “industry” that seems so mythical to academia. I’m quite happy about the move, mainly because I’d prefer my main output to be production-quality software rather than settling for prototypes and publications. Presumably I’ll get to go to tech conferences at some point, rather than academic ones, but for me WCCI 2012 was made great by having a pretty significant social group there. Even though I’m leaving academia, at least I’ve done it and it wasn’t all bad.

The work in this paper is an application of my thesis work, rather than being central to it. ESDL provides a structural decomposition of an algorithm that allows a software implementation to reconstruct the algorithm through operator composition. So, by implementing a set of operators for a GPU, rather than a CPU, a wide range of EAs can be easily constructed by any developer, whether they have taken the time to learn how to program for a GPU or not.
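This isn’t ESDL syntax (I’m not reproducing it here), but a hedged sketch of the general idea in Python: if the algorithm is expressed purely as a composition of named operators, a backend can substitute GPU kernels for the same-named operators without the algorithm description changing. All operator names and parameters below are invented for illustration.

```python
import random

# Hypothetical CPU implementations of an operator set; a GPU backend would
# provide operators with the same interface, and `evolve` would not change.
def tournament_select(pop, fitness, k=2):
    """For each slot, keep the fitter (lower-valued) of k random individuals."""
    return [min(random.sample(pop, k), key=fitness) for _ in pop]

def mutate(pop, rate=0.1, step=0.5):
    """Perturb each gene with probability `rate`."""
    return [[g + random.uniform(-step, step) if random.random() < rate else g
             for g in ind] for ind in pop]

def evolve(pop, fitness, generations=30):
    # The algorithm itself is just a composition of operators.
    for _ in range(generations):
        pop = mutate(tournament_select(pop, fitness))
    return pop

random.seed(1)
sphere = lambda ind: sum(g * g for g in ind)  # toy minimisation target
population = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
result = evolve(population, sphere)
print(min(sphere(ind) for ind in result))
```

The point of the structure, not the numbers: a developer who has never written a GPU kernel could still assemble this algorithm from a GPU operator library.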

My presentation will be to an audience with a specific interest in implementing for GPU (at least, that’s what I’m assuming), which means I’ll be able to concentrate more on how ESDL’s robustness and portability can help make CIGPU accessible to more users without compromising on flexibility.

With regard to the performance results I’ll talk about, it’s hardly a spoiler to point out that there is no silver bullet here. Composing GPU-based operators produces faster code than operators written in a scripting language, comparable to CPU-based native operators and not as fast as a monolithic GPU-based algorithm. However, once a suitable set of operators is available, compiling ESDL with GPU-based operators produces code that is almost as fast as writing it from scratch, but without the (very significant) effort required.

If you are attending WCCI this year, feel free to come and have a chat to me either before or after I present (try and avoid doing it during my presentation…). I’m happy to talk about EAs, Python, C#, C++ or C++ AMP, all of which I’ve been using a lot of recently. See you there.