Memoirs of a Roadie

On Thursday, 19th March 2015 I uploaded my 366th consecutive release to CPAN. To most that may well be "meh, whatever!", but for me it has been an exhausting yet fulfilling exercise. The last 60 days, though, were undoubtedly the hardest to achieve.

When I started this escapade, I did it without realising it. It was several days before I noticed that I had been committing changes every day, just after the QA Hackathon in Lyon. What made it worse was that I then discovered I had missed a day, and could have had a 3-day head-start beyond the 9 days I already had in hand. Just one day behind me was Neil Bowers, and the pair of us set about trying to reach 100 consecutive days. It took a while for us to get into the flow, but once we did, we were happily committing each day.

Both of us created our own automated upload scripts, to help us maintain the daily uploads. This was partly to ensure we didn't forget, but also allowed us to be away for a day or two and still know that we would be able to upload something. In my case I had worried I would miss out when I went on holiday to Cornwall, but thankfully the apartment had wifi installed, and I was able to manage my releases and commits every morning before we left to explore for the day.

I mostly worked at weekends and stocked up on releases, sometimes with around 10 days prepared in advance. Most of the changes centred around bug fixes, documentation updates and test suite updates, but after a short while we both started looking at our CPANTS ratings and other metrics around what makes a good packaged release. We both created quests on QuestHub, and ticked off the achievements as we went. There were plenty of new features along the way too, as well as some new modules and distributions, as we both wanted to avoid making only minor tweaks just for the sake of releasing something. I even adopted around 10 distributions from others, who had either moved on to other things or sadly passed away, and brought them all up to date.

Sadly, Neil wasn't able to sustain the momentum, and had to bail out after 111 consecutive uploads. Thankfully, I still had plenty of fixes and updates to work through, so I was hopeful I could keep going for a little while longer at least.

One major change that happened during 2014 was to the CPANTS analysis code. Kenichi Ishigaki updated the META file evaluations to employ a stricter rendition of the META Specification, which meant the license field in most of my distributions on CPAN now failed. As a consequence this gave me around 80 distributions that needed a release. On top of this, I committed myself to releasing 12 new distributions, one each month, for a year, beginning March 2014. Although I've now completed the release of the 12 distributions, I have yet to complete all the blog posts, so that quest is still incomplete.

I made a lot of changes to Labyrinth (my website management framework) and the various ISBN scrapers I had written, so these formed the bedrock of my releases. Without these I probably wouldn't have been able to make 100 consecutive releases, and definitely not a full year. But here I am 366+ days later and still have releases yet to do. Most of my future releases will centre around Labyrinth and CPAN Testers, but as both require quite in-depth work, it's unlikely you'll see such a frequent release schedule. I expect I'll be able to get at least one release out a week, to maintain and extend my current 157-week stretch, but sustaining a daily release is going to be a struggle.

Having set the bar, Mohammad S Anwar (MANWAR) and Michal Špaček (SKIM) have now entered the race, and Mohammad has said he wants to beat my record. Both are just over 200 days behind, and judging from my experience, they are going to find it tricky once they hit around 250, unless they have plenty of plans for releases by then. After 100, I had high hopes of reaching 200, however I wasn't so sure I would make 300. After 300, it really was much tougher to think of what to release. Occasionally, I would be working on a test suite and bug fixes would suggest themselves, but mostly it was about working through the CPAN Testers reports. Although, I do have to thank the various book sites too, for updating their sites, which in turn meant I had several updates I could make to the scrapers.

I note that Mohammad and Michal are both sharing releases against the Map-Tube variants, which may keep them going for a while, but eventually they do need to think about other distributions. Both have plenty of other distributions in their repertoire, so it's entirely possible for them both to overtake me, but I suspect it will be a good while before anyone else attempts to tackle this particular escapade. I wish them both well on their respective journeys, but at least I am safe in the knowledge I was the first to break 1 year of daily consecutive CPAN uploads. Don't think I'll be trying it again though :)

100 #1

11 years ago I was eager to be a CPAN author, except I had nothing to release. I tried thinking of modules that I could write, but nothing seemed worth posting. Then I saw a post on a technical forum, and came up with a script to give the result the poster was looking for. Looking at the script I suddenly realised I had my first module. That script was then released as Calendar::List, and I'm pleased to say I still use it today. Although perhaps more importantly, I know of others who use it too.

Since then, I have slowly increased my distributions to CPAN. However, it wasn't until I got involved with CPAN Testers that my contributions increased noticeably. Another jump was when I wrote some WWW::Scraper::ISBN driver plugins for the Birmingham Perl Mongers website to help me manage the book reviews. I later worked for a book publishing company, during which time I added even more. My next big jump was the release of Labyrinth.

In between all of those big groups of releases, there have been several odds and ends to help me climb the CPAN Leaderboard. Earlier this year, with the idea of the Monthly New Distribution Challenge, I noticed I was tantalisingly close to having 100 distributions on CPAN. I remember when Simon Cozens was the first author to achieve that goal, and it was noted as quite an achievement. Since then Adam Kennedy, Ricardo Signes and Steven Haryanto have pushed those limits even further, with Steven having over 300 distributions on CPAN!

100 #2

My 100th distribution came a few days before I managed to complete my target of 100 consecutive days of CPAN uploads, a run I started accidentally. After the 2014 QA Hackathon, I had several distribution releases planned. However, had I realised what I could be doing, I might have been a bit more vigilant and not missed the day between what now seems to be my false start and the real run. After 9 consecutive days, I figured I might as well try to reach at least a month's worth of releases, and take the top position from ZOFFIX (who had previously uploaded for 27 consecutive days) for the once-a-day CPAN regular releasers.

As it happened, Neil Bowers was on a run that was 1 day behind me, but inspired by my new quest, decided he would continue as my wingman. As I passed the 100 consecutive day mark, Neil announced that he was to end his run soon, and finally bowed out after 111 days of releases. My thanks to Neil for sticking with me, and additionally for giving me several ideas for releases, both as suggestions for package updates and a few ideas for new modules.

I have another quest to make 200 releases to CPAN this year, and with another 20 releases currently planned, I'm still continuing on. We'll see if I can make 200, or even 365, consecutive days, but reaching 100 was quite a milestone that I didn't expect to achieve.

100 #3

As part of my 100 consecutive days of CPAN uploads challenge, I also managed to achieve 100 consecutive days of commits to git. I had been monitoring GitHub for this, and was gutted to realise that just after 101 days, I forgot to commit some changes over that particular weekend. However, I'm still quite pleased to have made 101 days. I have a holiday coming up soon, so I may not have been able to keep that statistic up for much longer anyway.

100 #4

As part of updates to the CPAN Testers Statistics site, I looked at some additional statistics regarding CPAN uploads. In particular looking at the number of distributions authors have submitted to CPAN, both over the life of CPAN (aka BackPAN) and currently on CPAN. The result was two new distributions, Acme-CPANAuthors-CPAN-OneHundred and Acme-CPANAuthors-BACKPAN-OneHundred.

When I first released the distributions, I only featured in the second. For my 100th consecutive day, I released the latest Acme-CPANAuthors-CPAN-OneHundred up to that day, and with my newly achieved 100th distribution, was delighted to feature in the lists for both distributions.

When I relaunched the CPAN Testers sites back in 2008, I found myself responsible for 3 servers: the CPAN Testers server, the Birmingham Perl Mongers server, and my own server. While managing them wasn't too bad, I did think it would be useful to have some sort of monitoring system that could help me keep an eye on them. After talking to a few people, the two systems most keenly suggested were Nagios and Munin. Most seemed to favour Munin, so I gave it a go. Sure enough it was pretty easy to set up, and I was able to monitor all three servers from my home server. However, there was one area of monitoring that wasn't covered: the performance of the websites.

At the time I had around 10-20 sites up and running, and the default plugins didn't provide the sort of monitoring I was looking for. After some searching I found a script written by Nicolas Mendoza. The script not only got me started, but made clear how easy it was to write a Munin plugin. However, the script as it was didn't suit my needs exactly, so I had to make several tweaks. I then found myself copying the file around for each website, which seemed a bit unnecessary. So I wrote what was to become Munin::Plugin::ApacheRequest. Following the Hubris and DRY principles, copying the script around just didn't make sense, and being able to upgrade via a Perl module on each server was far easier than updating the 30+ scripts for the sites I now manage.

Although the module preserves the original intent of the script, how it achieves it has changed. The magic still starts in the script itself.

To start with an example, this is the current script to monitor the CPAN Testers Reports site:
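The script itself didn't survive in this copy of the post; a minimal reconstruction, based on the module's synopsis, would look something like this:

```perl
#!/usr/bin/perl -w
# File: /etc/munin/plugins/apache_request_reports
use strict;
use Munin::Plugin::ApacheRequest;

# The virtual host name is taken from the last part of the script's
# own filename ('reports' here); 1000 is the number of log lines to
# average over.
my ($VHOST) = ($0 =~ /_([^_]+)$/);
Munin::Plugin::ApacheRequest::Run($VHOST, 1000);
```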

Part of the magic is in the name of the script. This one is 'apache_request_reports'. The script extracts the last section of the name, in this case 'reports', and passes that to Run() as the name of the virtual host. If you wish to name the scripts slightly differently, you only need to amend this line to extract the name of your virtual host as appropriate. If you only have one website you may wish to name the host explicitly, but then if you create more it does mean you will need to edit each file, which is what I wanted to avoid. All I do now is copy an existing file to one representing the new virtual host when I create a new website, and Munin automatically adds it to the list.

Munin::Plugin::ApacheRequest does make some assumptions, one of which is where you locate the log files, and how you name them for each virtual host. On my servers '/var/www/' contains all the virtual hosts (/var/www/reports, in this example), and '/var/www/logs/' contains the logs. I also use a conventional naming system for the logs, so '/var/www/logs/reports-access.log' is the Access Log for the CPAN Testers Reports site. Should you have a different path or naming format for your logs, you can alter the internal variable $ACCESS_LOG_PATTERN to the format you wish. Note that this is a sprintf format, and the first '%s' in the format string is replaced by the virtual host name. If you only have one website, you can change the format string to the specific path and file of the log, and no string interpolation is done.
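As a sketch of that override (the directory layout here is hypothetical, and I'm assuming $ACCESS_LOG_PATTERN is a package variable you can set from the plugin script):

```perl
#!/usr/bin/perl -w
use strict;
use Munin::Plugin::ApacheRequest;

# Hypothetical layout: logs under /var/log/apache2/ named <vhost>_access.log.
# The '%s' is replaced by the virtual host name via sprintf.
$Munin::Plugin::ApacheRequest::ACCESS_LOG_PATTERN = '/var/log/apache2/%s_access.log';

my ($VHOST) = ($0 =~ /_([^_]+)$/);
Munin::Plugin::ApacheRequest::Run($VHOST, 1000);
```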

The log format used is quite significant, and when you describe the LogFormat for your Access Log in the Apache config file, you will need to use an extended format type. The field to show the time taken to execute a request is needed, which is normally set using the %T (seconds) or %D (microseconds) format option (see also Apache Log Formats). For example my logs use the following:
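The LogFormat line itself is missing from this copy of the post; a plausible equivalent, with %T (request time in seconds) as the second-to-last field, would be something like the following (the exact choice of the other fields is my assumption):

```apache
# Extended log format: %T is the second-to-last field, %v (virtual host) last
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %T %v" extended
CustomLog /var/www/logs/reports-access.log extended
```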

The second to last field is our time field. In Munin::Plugin::ApacheRequest, this is stored in the $TIME_FIELD_INDEX variable. By default this is -2, assuming a similar log format as above. If you have a different format, where the execution time is in another position, like $ACCESS_LOG_PATTERN, you can change this in your script before calling Run(). A positive number assumes a column left to right, while a negative number assumes a column right to left.
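For example, if your LogFormat instead put %D at the very end of the line, the override (set before calling Run(), as described above) might read:

```perl
use Munin::Plugin::ApacheRequest;

# Execution time is the final field in this hypothetical log format
$Munin::Plugin::ApacheRequest::TIME_FIELD_INDEX = -1;

Munin::Plugin::ApacheRequest::Run('reports', 1000);
```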

The last number passed to the Run() method determines the number of lines read from the access log to calculate the average execution time. For high hit rate sites, you may wish this to be a higher number, but as most of my sites are not that frequently visited, 1000 seems to be a reasonable number.

The config statements that are generated for the Munin master monitor are currently hardcoded with values. This will change in a future version. For the example above the config produced reads as:

graph_title reports ave msecs last 1000 requests
graph_args --base 1000
graph_scale no
graph_vlabel Average request time (msec)
graph_category Apache
graph_info This graph shows average request times for the last 1000 requests
images.warning 30000000
images.critical 60000000
total.warning 10000000
total.critical 60000000

The highlighted values are interpolated from the arguments passed to Run(). In a future version I want to be able to allow you to reconfigure the warning and critical values and the graph base value, should you wish to.

I have now been using Munin::Plugin::ApacheRequest and the associated scripts for 6 years, and it has proved very successful. I had thought about releasing the module to CPAN previously, and have made several attempts to contact Nicolas over the years, but have never had a reply. I know he was working for Opera when he released his script, but have no idea of his whereabouts now. As the script contained no licensing information, I was also unsure what license he intended for the code. I hope he doesn't mind my having adapted his original script, and that I'm now releasing the code under the Artistic License v2.

Although I haven't been able to contact Nicolas, I would like to thank him for releasing his original script. If I hadn't found it, it is unlikely I would have found a way to write a Munin plugin myself to do Apache website monitoring. With his headstart, I discovered how to write Munin plugins, and can now set up monitoring of a new website within a few seconds. Thanks Nicolas.

So my most prolific year was in 2012. I'll have to see if I can change that this year. However, it does give a nice yearly snapshot of my releases.

As it turns out, for CPAN Testers I don't need the BACKPAN index, as I already generate and maintain an 'uploads' table within the 'cpanstats' database. I do need to write the code to add this metric to the author pages. Neil's script, though, has given me a starting point. Being able to see the releases for yourself (or a particular author) is quite cool, so I may adapt that to make any such matrix more dynamic. It might also be worth adding a more generic metric for all of CPAN to the CPAN Testers Statistics website. Either way, I now have two more things to add to my list of projects for the QA Hackathon next month. Neil will be there too, so I hope he can give me even more ideas while I'm there ;)

Over the last year I've made several releases for Labyrinth and its various plugins. Some have been minor improvements, while others have been major improvements as I've reviewed the code for various projects. I originally wrote Labyrinth after being made redundant back in December 2002, and after realising all the mistakes I had made with the design of its predecessor, Mephisto. In the last 11 years it has helped me secure jobs, enabled me to implement numerous Open Source projects (CPAN Testers and YAPC Conference Surveys to name just two) and provided the foundation to create several websites for friends and family. It has been a great project to work on, as I've learnt a lot about Perl, AJAX/JSON, payment APIs, security, Selenium and many other aspects of web development.

I did a talk about Labyrinth in Frankfurt for YAPC::Europe 2011, and one question I was asked was about comparing Labyrinth to Catalyst. When I created Labyrinth, Catalyst and its predecessor Maypole were 2 years (and 1 year) away from release. Back then I had no idea about MVC, but I was pleased in later years, when I was introduced to the design concept, that it had seemed an obvious and natural way to design a web framework. Aside from this and both being written in Perl, Labyrinth and Catalyst are very different beasts. If you're looking for a web framework to design a major system for your company, then Catalyst is perhaps the better choice. Catalyst also has a much bigger community, whereas Labyrinth is essentially just me. I'd love for Labyrinth to get more usage and exposure, but for the time being, I'm quite comfortable with it being the quiet machine behind CPAN Testers, YAPC Surveys, and all the other commercial and non-commercial sites I've worked on over the years.

This year I finally released the code to enable Labyrinth to run under PSGI and Plack. It was much easier than I thought, and enabled me to better understand the concepts behind the PSGI protocol. There are several other concepts in web development that are emerging, and I'm hoping to allow Labyrinth to teach me some of them. However, I suspect most of my major work with Labyrinth in 2014 is going to be centred on some of the projects I'm currently involved with.

The first is the CPAN Testers Admin site. This has been a long time coming, and is very close to release. There are some backend fixes that are still needed to join the different sites together, but the site itself is mostly done. It still needs testing, but it'll be another Labyrinth site to join the other 4 in the CPAN Testers family. The site has taken a long time to develop, not least because of various other changes to CPAN Testers that have happened over the past few years, and the focus on getting the reports online sooner rather than later.

The next major Labyrinth project I plan to work on during 2014 is the YAPC Conference Surveys. Firstly to release the current code base and language packs, to enable others to develop their own survey sites, as that has been long overdue. Secondly, I want to integrate the YAPC surveys into the Act software tool, so that promoting surveys for YAPCs and Perl Workshops will be much easier, and we won't have to rely on people remembering their keycode login. Many people have told me after various events that they never received the email to log in to the surveys. Some have later been found in spam folders, but some have changed their email address and the one stored in Act is no longer valid. Allowing Act to request survey links will enable attendees to simply log into the conference site and click a link. Further to this, if the conference has surveys enabled, then I'd like the Act site to be able to provide links next to each talk, so that talk evaluations can be done much more easily.

Lastly, I finally want to get as much of the raw data online as possible. I still have the archives of all the surveys that have been undertaken, and some time ago I wrote a script to create a data file, combining both the survey questions and the responses, appropriately anonymised, with related questions linked, so that others can evaluate the results and provide even more statistical analysis than I currently provide.

In the meantime the next notable release from Labyrinth will be a redesign of the permissions system. From the very beginning Labyrinth had a permissions system, which for many of the websites was adequate. However, the original Mephisto project encompassed a permissions system for the tools it used, which for Labyrinth were redesigned as plugins. Currently a user has a level of permission: Reader, Editor, Publisher, Admin or Master. Each level grants more access than the previous one, as you might expect. Users can also be assigned to groups, which also have permissions. It is quite simplistic, but as most of the sites I've developed only have a few users, granting these permissions across the whole site has been perfectly acceptable.

However, with a project I'm currently working on this isn't enough. Each plugin, and its levels of functionality (View, Edit, Delete), need different permissions for different users and/or groups. The permissions system employed by Mephisto came close, but it wasn't suitable for the current project. A brainwave over Christmas saw a better way to do this, not just for the current project, but to improve and simplify the current permission system, and enable plugins to set their permissions in data or configuration rather than code, which is a key part of the design of Labyrinth.

This ability to control via data is a key element of how Labyrinth was designed, and it isn't just about your data model. In Catalyst and other web frameworks, the dispatch table is hardcoded. At the time we designed Mephisto, CGI::Application was the most prominent web framework, and this hardcoding was something that just seemed wrong. If you need to change the route through your request at short notice, you shouldn't have to recode your application and make another release. With Labyrinth, switching templates, actions and code paths is done via configuration files. Changing can be done in seconds. Admittedly it isn't something I've needed to do very often, but it has been necessary from time to time, such as disabling functionality due to broken 3rd party APIs, or switching templates for different promotions.

The permission system needs to be exactly the same. A set of permissions for one site may be entirely different for another. Taking this further, the brainwave encompassed the idea of profiles. Similar to groups, a profile can establish a set of generic permissions. Specific permissions can then be adjusted as required, and reset via a profile on a per-user or per-group basis. This allows the site permissions to be tailored for a specific user: UserA and UserB can both have generic Reader access, but UserA can have Editor access to TaskA while UserB is granted Editor access to TaskB. Previously the permission system would have meant both users being granted Editor access for the whole site. Now, or at least when the system is finished, a user's permissions can be restricted to only the tasks they need access to.

Over Christmas there have been a few other fixes and enhancements to various Labyrinth sites, so expect to see those to also find their way back into the core code and plugins. I expect several Labyrinth related releases this year, and hopefully a few more talks at YAPCs, Workshops and technical events in the coming year about them all. Labyrinth has been a fun project to work on, and long may it continue.

Although percentage-wise the submissions are up, the actual number of respondents is just slightly lower than in previous years. Though it has to be said I'm still pleased to get roughly a third of attendees submitting survey responses. It might not give a completely accurate picture of the event, but hopefully we still get a decent flavour of it.

Two questions I plan to pay closer attention to in future surveys are 'How do you rate your Perl knowledge?' and 'How long have you been programming in Perl?'. Originally the age question usually gave some indication of how long someone had been using Perl, but from experience, I now know that doesn't work. As such, these two questions hopefully give us a better idea of the level of knowledge and experience of attendees. Perhaps unsurprisingly, London.pm had a lot of attendees who have been around the Perl community for many years, particularly as it was the first non-US Perl Monger group. However, we do still see a notable number of people who are relatively new to Perl. It will be interesting to see whether these numbers change over the years, as although the community doesn't appear to be growing radically, it is still attracting first-time attendees.

Looking at the list of suggested topics, I was intrigued to see "Testing" in there. Apart from my own talk and Daniel Perrett's, there wasn't anything specifically about testing. I don't know if it's because the older hands are more wary of giving test talks, or whether everyone thinks everything has been said, but I do think it's a topic that's worth repeating. We regularly have new attendees who have never seen these talks before, so hopefully we'll see some more submitted at future workshops and YAPCs. There was also a lot of interest in practical uses of web frameworks. Although Andrew Solomon held a Dancer tutorial, seeing how to solve specific problems with web applications would be valuable to many. Having said that, the diverse range of subjects on offer at the workshop was equally interesting. I just hope Mark and Ian are so inundated with talks next year that we have an even greater choice from the schedule.

Thank you to Mark and Ian for organising another great Perl event, and thanks to all the speakers for making it worth attending. Also to all the attendees, especially those who took the time to respond to the survey, and for all the talk evaluations. I know the speakers appreciate the evaluations, as I've had a few thank yous already :)

The following is part of an occasional series of highlighting CPAN modules/distributions and why I use them. This article looks at Data::FlexSerializer.

Many years ago the most popular module for persistent data storage in Perl was Storable. While still used, its limitations have often caused problems. Its most significant problem was that each version was incompatible with another, so upgrading had to be done carefully. The data store was often unportable, which made making backups problematic. In more recent years JSON has grown to be more acceptable as a data storage format. It benefits from being a compact and human-readable data structure format, and was specifically a reaction to XML, which requires lots of boilerplate and data tags to form simple data elements. It's one reason why most modern websites use JSON for AJAX calls rather than XML.

Booking.com had a desire to move away from Storable and initially looked to moving to JSON. However, since then they have designed their own data format, Sereal. But more of that later. Firstly they needed some formatting code to read their old Storable data and translate it into JSON. The next stage was to compress the JSON. Although JSON is already a compact data format, it is still plain text. Compressing a single data structure can reduce the storage by as much as half the original data size, which when you're dealing with millions of data items can be considerable. In Booking.com's case they needed to do this with zero downtime, running the conversion on live data as it was being used. The resulting code was to later become the basis for Data::FlexSerializer.

However, Booking.com found JSON to be unsuitable for their needs, as they were unable to store Perl data structures the way they wanted to. As such they created a new storage format, which they called Sereal. You can read more about the thoughts behind the creation of Sereal on the Booking.com blog. That blog post also looks at the performance and sizes of the different formats, and if you're looking for a suitable serialisation format, Sereal is very definitely worth investigating.

Moving back to my needs, I had become interested in the work Booking.com had done, as within the world of CPAN Testers, we store the reports in JSON format. With over 32 million reports at the time (now over 37 million), the database table had grown to over 500GB. The old server was fast running out of disk space, and before exploring options for increasing storage capacity, I wanted to try and see whether there was an option to reduce the size of the JSON data structures. Data::FlexSerializer was an obvious choice. It could read uncompressed JSON and return compressed JSON in milliseconds.

So how easy was it to convert all 32 million reports? Below is essentially the code that did the work:
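The code block is missing from this copy of the post, but the shape of it can be sketched as follows. The table and column names here are illustrative, not the real CPAN Testers schema, and the connection details are placeholders:

```perl
#!/usr/bin/perl
use strict;
use warnings;

use DBI;
use Data::FlexSerializer;

# Reads plain JSON and writes gzip-compressed JSON; rows already
# compressed are detected and passed through unchanged.
my $serializer = Data::FlexSerializer->new(
    detect_compression => 1,
    detect_json        => 1,
    output_format      => 'json',
    compress_output    => 1,
);

my $dbh = DBI->connect( 'dbi:mysql:cpanstats', 'user', 'password' );
my $sel = $dbh->prepare('SELECT id, report FROM reports');
my $upd = $dbh->prepare('UPDATE reports SET report = ? WHERE id = ?');

$sel->execute;
while ( my ($id, $json) = $sel->fetchrow_array ) {
    my $zipped = $serializer->serialize( $serializer->deserialize($json) );
    $upd->execute( $zipped, $id );
}
```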

Simple, straightforward and got the job done very efficiently. The only downside was the database calls. As the old server was maxed out on I/O, I could only run the conversion script during quiet periods, as otherwise the CPAN Testers server would become unresponsive. This wasn't a fault of Data::FlexSerializer, but very much a problem with our old server.

Before the conversion script completed, the next step was to add functionality to permanently store reports in a compressed format. This only required 3 extra lines being added to CPAN::Testers::Data::Generator.
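I don't have the exact diff to hand here, but the addition amounts to something like the following sketch (the surrounding Generator code will differ):

```perl
use Data::FlexSerializer;

# Compress each report before it is stored, instead of saving raw JSON
my $serializer = Data::FlexSerializer->new(
    output_format   => 'json',
    compress_output => 1,
);
my $blob = $serializer->serialize($report);   # $blob is what gets inserted
```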

The difference has been well worth the move. The compressed version of the table has reclaimed around 250GB. Because MySQL doesn't automatically free the space back to the system, you need to run the OPTIMIZE TABLE command on the table. Unfortunately, for CPAN Testers this wouldn't be practical, as it would mean locking the database for far too long. Also, with the rapid growth of CPAN Testers (we now receive over 1 million reports a month) it is likely we'll be back up to 500GB in a couple of years anyway. Now that we've moved to a new server, our backend hard disk is 3TB, so we have plenty of storage capacity for several years to come.

But I've only scratched the surface of why I think Data::FlexSerializer is so good. Aside from its ability to compress and uncompress, as well as encode and decode, at speed, its ability to switch between formats is what makes it such a versatile tool to have around. Aside from Storable, JSON and Sereal, you can also create your own serialisation interface, using the add_format method. Below is an example, from the module's own documentation, which implements Data::Dumper as a serialisation format:
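The example itself is missing from this copy of the post; this is my reconstruction from memory of the module's POD, so check the documentation for the canonical version:

```perl
use Data::Dumper;
use Data::FlexSerializer;

# Register Data::Dumper output as a custom serialisation format
Data::FlexSerializer->add_format(
    data_dumper => {
        serialize   => sub { shift; goto \&Data::Dumper::Dumper },
        deserialize => sub { shift; my $VAR1; eval "$_[0]" },
        detect      => sub { $_[1] =~ /\$[\w]+\s*=/ },
    }
);

# A serializer that detects and emits the new format
my $flex_to_dd = Data::FlexSerializer->new(
    detect_data_dumper => 1,
    output_format      => 'data_dumper',
);
```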

It's unlikely CPAN Testers will move from JSON to Sereal (or any other format), but if we did, Data::FlexSerializer would be the only tool I would need to look to. My thanks to Booking.com for releasing the code, and thanks to the authors: Steffen Mueller, Ævar Arnfjörð Bjarmason, Burak Gürsoy, Elizabeth Matthijsen, Caio Romão Costa Nascimento and Jonas Galhordas Duarte Alves, for creating the code behind the module in the first place.

Several years ago, we frequently updated the Birmingham.pm website with book reviews. To begin with, updating all the book information was rather laborious. Thankfully, on CPAN there was a set of modules, written by Andrew Schamp, that provided the framework to search online resources. I then wrote drivers for Amazon, O'Reilly & Associates, Pearson Education and Yahoo!. As the books we were reviewing were technical books, these four sources were able to cover all the books we reviewed.

A few years ago, I started working for a book company. In one project, we needed to evaluate book data, particularly for books where we had no data or very little. Often these were imports or out of stock titles that we could still order, but we were lacking information about. As such I created a number of further drivers, particularly for non-UK online catalogues, to help retrieve this information. I managed to create a collection of 17 drivers, and 1 bundle, all available on CPAN.

Then Neil had the idea of looking at some of the quality aspects of all the CPAN distributions, and highlighting those that might need adoption. As part of his reviews of similar modules over the past few years, he has adopted several modules, and was looking at which others he could help with. The results included 2 of the modules written by Andrew Schamp, which formed part of the ISBN searching framework used by my ISBN distributions. Seeing as they hadn't been touched in eight years, I suspected that Andrew had moved on to other languages or work, so I contacted him to see whether he was interested in letting me take the modules on and update them.

It turns out that Andrew had written the modules for a college project, and since he had moved to C and his programming interests now had nothing to do with books, he was happy to hand over the keys. Over the past week, I have taken ownership of Andrew's 5 modules, added them and my own 18 ISBN distributions to my local git repository, pushed all 23 to GitHub, updated the Changes file and the License & Repository info in the 5 new modules, and released them all to CPAN. My next task is to update the Repository info in my 18 ISBN distributions and release those to CPAN.

Although I no longer work in the book industry, writing these search drivers has been fun. The distributions are perhaps my most frequently released to CPAN, as the various websites keep updating their layouts. Now that I have access to the core modules in the framework, I plan to move some of the code repeated across many of the drivers into the core modules. I also plan to merge the three main modules into one distribution. When Andrew originally wrote the modules, it wasn't uncommon to have 1 module per distribution. However, as all three are tightly bound together, it doesn't make much sense to keep them separate. The two drivers Andrew wrote have not worked for several years, as unsurprisingly the websites have changed in the last 8 years. I've already updated one, and will be working on the other soon.

It's nice to realise that a few of my CPAN Testers summary posts inspired Neil, who in turn has inspired me, and has ended up with me helping to keep a small corner of CPAN relevant and up to date again.

If you're a new Perl developer who wants to take a more active role in CPAN and the Perl community, a great way to start is to look at the stencils on QuestHub, and help to patch and update distributions via pull requests and RT tickets. If you feel adventurous, take a look at the possible adoption list, and see whether anything there is something you'd like to fix and bring up to date. You can also look at the failing distributions lists, and see whether the authors would like help with the test suites in their distributions. You can then create your tasks as quests in QuestHub and earn points for your endeavours. Be warned though, it can become addictive :)

There is one more ISBN distribution on the adoption list, and I have now emailed the author. Depending on the response, I may be going through the adoption process all over again :) [Late update, the author came back to me and he's happy for me to take on his distribution too]

I've just released new versions of my use.perl distributions, WWW-UsePerl-Journal and WWW-UsePerl-Journal-Thread. As use.perl was decommissioned at the end of 2010, the distributions had been getting a lot of failure reports, as they used screen-scraping to get the content. As such, I had planned to put them out to pasture on BackPAN. That was until I recently discovered that Léon Brocard had not only released WWW-UsePerl-Server, but also provided a complete SQL archive of the use.perl database (see the POD for a link). Combining the two, he then put up a read-only version of the website.

While at YAPC::Europe this last week, I started tinkering, fixing the URLs, regexes, logic and tests in my two distributions. Both distributions have had functionality removed, as the read-only site doesn't provide all the features of the old dynamic site. The most obvious change is that posting new journal entries is now disabled; other lesser features no longer available include searching for comments by thread id or for users by user id. The majority of the main features are still there, and for those that aren't, I've used alternative methods to retrieve the data where possible.

Although the distributions and modules are now working again, they're perhaps not as useful as they once were. As such, I will be looking to merge both distributions in a future release, and also to provide support for a local database of the full archive from Léon.

Seeing as no-one else seems to have stepped forward and written similar modules for blogs.perl.org, I'm now thinking it might also be useful to take my use.perl modules and adapt them for blogs.perl.org. It might be a while before I finish them, but it'll be nice to have many of the same features. I also note that blogs.perl.org now has paging. Yeah \o/ :) This is a feature I have wanted to see on the site since it started, so thanks to the guys for finding some tuits. There was a call at YAPC::Europe for people to help add even more functionality, so I look forward to seeing what delights we have in store next.

Did I mention I went to Paris to take part in the 2012 QA Hackathon? Did I remember to mention all the cool stuff I got done? Well if you've been hiding for the past few weeks, have a look at the last couple of posts :)

As per usual, I took my camera along. However, unlike many previous visits to Paris, I didn't do any sight-seeing. That includes failing to wander around the venue and discover the real submarine, among other things that others found while taking a breath of fresh air.

Instead I spent my time hacking away, only occasionally coming up for air for food, drink and some camera action.

With over 40 people in attendance, it was going to be difficult to capture everyone, but I think I managed it. If I did miss you, my apologies. It was great to meet so many friends old and new, and a real pleasure to finally put faces to names that I've known for a while, but not had the opportunity to meet in person.

So many great things happened in Paris, and I'm really looking forward to see what we can achieve in London for the 2013 QA Hackathon. See you there.

CPAN Testers Report Status

After asking several times, Andreas thought he had finally understood what the dates mean on the Status page for the CPAN Testers Reports. He started watching and making page requests to see whether his requests were actioned. On Day 3 he pointed out that the date went backwards! Once he'd shown me, I understood why the first date is confusing. And for anyone else who has been confused by it, you can blame Amazon. SimpleDB sucks. It's why the Metabase is moving to another NoSQL DB.

The first date references the update date of the report as it entered the Metabase. The last processed report is the last report extracted from the Metabase and entered into the cpanstats DB. Unfortunately, SimpleDB has a broken concept of searching: it will return results from before the date requested, and regularly returns results in an unsorted order even when a sort was requested. As such, the dates you see on the Status page may go backwards in time! I'm not going to try and fix this, as it will all work as intended with the new system.

Missing Reports

There have been several questions about missing reports over the past few years. Sometimes it just needs me to refresh the indices, but in other cases it may be because SimpleDB omits reports from a request. Did I mention SimpleDB sucks? In a request to the Metabase, I ask for all the reports from a given date. The results are limited to 2500, due to Amazon's own restriction. In the returned list it will often omit entries, due to its ignorance of sorting in the search request. I have gone through the Metabase code on several occasions and can verify it does the right thing. SimpleDB just chooses to ignore the complete search request and returns what it *thinks* you want to know.

Ribasushi asked me about one of his recently released modules, which still had no Cygwin reports listed, even though he had sent a few himself. Further investigation revealed that they are indeed missing from the cpanstats DB. Although they did enter the Metabase, they never came out again.

To resolve this I have been revisiting the Generator code to rework the reparse and regenerate code to enable search requests for missing periods, in the hope that this will retrieve most of the missing results. If it doesn't, then I will be asking David to produce a definitive list for me, and I will make specific requests for any missing reports. The Generator code has been updated in GitHub to include all the performance improvements that have been live for some time too.

Erroneously Parsed Reports

Every so often the parsing mechanism fails and stores the wrong data in the cpanstats DB. These days it seems to affect only the platform, OS version and OS name. I'm not quite sure what is happening, as reparsing the report locally produces the correct results. This uses the same routine to parse the report, so why they occasionally fail remains a mystery. However, to combat this, I now have a script that runs periodically to search for this erroneous data and attempt to reparse the results. It can then alert me when it can't fix a report, and I can investigate manually. There have been occasions where the report can't be parsed because the output was corrupted on the test machine, which unfortunately we can't always resolve. Sometimes there are enough clues within other parts of the report to point to a particular OS, but sometimes we just have to leave it blank.

It seems that in putting some of this code live before leaving the hackathon, I accidentally reintroduced a bug. Slaven was quick to spot it and tell me about it, but unfortunately it was too late for me to fix, as I needed to leave and catch my flight home. It should be fixed by the time you read this though, so all should be back to your regular viewing pleasure :) With the new script I've written, it should hopefully find and fix these errors in the future, as well as alerting me to fix the bug again!

Thanks Again

So that was the 2012 QA Hackathon. The show ended with a group photo; although a few people were missing due to early departures home, I think we got most of us in, including Miyagawa, who was taking the picture. The traditional thank yous and goodbyes ensued, and then Andreas and I headed off to begin our adventure getting to the airport! The next hackathon, the 2013 QA Hackathon, will be in London. We'll have the domain pointed to the right place just as soon as Andy gets the website up and running. I look forward to even more involvement next year, as we have been steadily growing in numbers each year. There has already been some significant output, but the event is much more than that. It's a chance to talk to people face to face, discuss ideas and plan for the future. Expect more news for CPAN Testers soon.

I'm currently at the 2012 QA Hackathon working on CPAN Testers servers, sites, databases and code. It has already been very productive: I have two new module releases.

CPAN::Testers::WWW::Reports::Query::AJAX

This module was originally written in response to a question from Leo Lapworth about how the summary information is produced. As a consequence he wrote CPAN::Testers::WWW::Reports::Query::JSON, which takes the data from the stored JSON file. In most cases this data is sufficient, but the module requires parsing the JSON file, which may be slow for distributions with a large number of reports. On the CPAN Testers Reports site, in the side panel on the distribution page, you will see the temperature graphs measuring the percentage of PASS, FAIL, NA and UNKNOWN reports a particular release has. This is gleaned from an AJAX call to the server.

But what if you don't want an HTML/JavaScript styled response? What if you want the results in plain text or XML? Enter CPAN::Testers::WWW::Reports::Query::AJAX. Now you can use this to query the live data for a particular distribution, and optionally a specific version, and get all the result values back as a simple hash to do with as you please.
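As a sketch of how I expect it to be used (the accessor names here follow the module's synopsis as best I recall, so double-check them against the POD):

```perl
use strict;
use warnings;
use CPAN::Testers::WWW::Reports::Query::AJAX;

# Query the live summary counts for one distribution, and
# optionally a specific version of it.
my $query = CPAN::Testers::WWW::Reports::Query::AJAX->new(
    dist => 'App-Maisha',
    # version => '0.15',   # uncomment to restrict to one release
);

# Accessors for the summary counts
printf "%d reports: %d PASS, %d FAIL, %d NA, %d UNKNOWN\n",
    $query->all, $query->pass, $query->fail, $query->na, $query->unknown;
```

From those raw counts you can render plain text, XML, or whatever presentation suits your site.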

I anticipate this might be most useful to project websites that wish to display their latest CPAN Testers results in some way. They can now get the data, and present it however they wish.

CPAN::Testers::WWW::Reports::Query::Reports

Now we get to perhaps the bigger module, even though it's smaller than the one above. This module is perhaps most useful to all those trying to maintain a version of the cpanstats metadata from the SQLite database. As mentioned previously, the SQLite database has been giving us grief over the past year, and we haven't got to the bottom of it. Andreas suspects there is some unusual textual data in some reports that causes SQLite problems when it tries to store it. I'm not quite convinced by this, but as I'm only inserting records, I'm at a loss as to what else could be the cause.

The SQLite file now clocks in at over 1GB compressed and over 8GB uncompressed, and is starting to take up a notable amount of disk space (though considerably less than the 250GB+ Metabase database ;) ). It is also a significant bandwidth consumer each day, which can slow processing and page displays, as disk access is our limiting factor now.

Enter CPAN::Testers::WWW::Reports::Query::Reports. This module uses the same principles as the AJAX module above, but accesses a new API on the CPAN Testers Reports site to enable consumers to get either a specific record or a whole range of report metadata records. Currently the maximum number of records that can be returned in a single request is 2500, but this may be increased once the system has proven to work well. Typically we have around 30,000 reports submitted each day, so to allow consumers to make best use of this API, I will look at increasing the limit to maybe 50,000 or 100,000. I want to impose a limit because I don't want accidental requests consuming the full database in one go, as again this would put a strain on disk access.
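A sketch of the sort of consumer loop I have in mind (the range and report method names, and the returned hash-of-rows shape, are assumptions based on the description above; check the released documentation before using them):

```perl
use strict;
use warnings;
use CPAN::Testers::WWW::Reports::Query::Reports;

my $query = CPAN::Testers::WWW::Reports::Query::Reports->new;

# Fetch a block of report metadata records, starting just past the
# last id stored locally (2500 is the current per-request limit).
my $rows = $query->range( '3000000-3002500' );

for my $id ( sort { $a <=> $b } keys %$rows ) {
    my $row = $rows->{$id};
    # ... insert $row into whatever local database you maintain ...
}

# Or fetch a single record by report id
my $report = $query->report( 3000000 );
```

Run regularly from cron, a loop like this keeps a local copy current without ever pulling the full 8GB database.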

The aim of the module is to allow those who currently consume the SQLite database to request smaller updates more regularly, and store the results in any database they choose, even a NoSQL-style one. It will ultimately reduce the bandwidth, the data stored, and the processing needed to gzip and bzip2 the files, which means we can reallocate effort to more useful tasks.

If you currently consume the SQLite database, please take a look at this module and see how you can use it. I plan to include some example scripts that could be drop-in replacements for your current processes, but if you get there first, please feel free to submit them to me too, and I will include them with full credit. If you spot any issues or improvements, please also let me know.

CPAN Testers Platform Metabase Facts

This morning we had a CPAN Testers presentation and discussion hosted by David Golden. As there is plenty of interest from a variety of parties about CPAN Testers, it was a good opportunity to highlight an area that needs work, but which David and I, as well as other key developers in the CPAN Testers community, just don't have time to do. Breno de Oliveira (garu on IRC) has very kindly stepped forward to look at one particular task, which we have been wanting to write since the QA Hackathon in Birmingham, back in 2009!

Breno has written a CPAN Testers client for cpanminus. At the moment it's a stand-alone application, but it may well be included within cpanminus in the future. As part of writing the application, Breno asked David and me how the clients for CPAN::Reporter and CPANPLUS::YACSmoke create the report. Due to the legacy system we came from (email and NNTP), we still use an email-style presentation for the reports. However, it has always been our intention to produce structured data. A CPAN Testers report currently has only two required facts: a Legacy Report and a Test Summary. However, there are other facts that we have already scoped; they are just not implemented.

Last year the Birmingham Perl Mongers produced the CPAN::Testers::Fact::PlatformInfo fact, which consumes the data from Devel::Platform::Info (which we'd written the previous year). The problem with the way test reports are currently created is that we don't always know the definite platform information for the platform the test suite was run on. Reports, particularly in the Perl Config section, can lie. Not big lies necessarily, but enough that they can disguise why a particular OS may have problems with a particular distribution.

Breno is now looking to produce a module that abstracts all the metadata creation parts from CPAN::Reporter, CPANPLUS::YACSmoke and Test::Reporter, as well as his own new application, and puts them into a single library that can create all the appropriate facts before submitting the report to the Metabase. Hopefully he can get this done during the hackathon, but even if he doesn't, we're hopeful that he will get enough done to make it easy to complete soon after. Once we patch the respective clients to use the new library, we can start to do interesting things with how we present reports.

The CPAN Testers Reports site only displays the legacy style report, which for most is sufficient, but it really would be nice to have some specially styled presentations for particular sections, or even allow user preferences to show/hide sections automatically when a user reads a report.

CPAN Testers Admin site

This is a site that I have been working on, on and off, for about 4 years, since before we even had a Metabase. As a consequence it has been promised at various points, and I've always failed to deliver. Now that I have released the modules above, and there have been several comments about wanting such functionality, I think I need to put some focus on it again. I have shown Breno the site running on my laptop, and he has given me some more ideas to make it even more useful. It'll still be a while before it's released, but this will likely be down to running with some beta testers first before a major launch, just so it doesn't break the eco-system too badly!

Essentially the site was written to help authors and testers highlight dubious reports and have them deleted from the system. Although the reports won't actually be deleted, they will be marked as ignored, so that they can be removed from JSON files and summary requests, as well as from the CPAN Testers Reports site. This will hopefully enable us to get more accurate data, as bogus reports about running out of memory or disk space can be disregarded.

However, following Breno's suggestions, I will look at making the site more public, so that authors can more easily see the reporting patterns without having to log in. The log-in will still be needed to flag reports, but browsing reports by tester will be much more accessible.

Thanks

I would like to thank a few people who have helped to get me here, and have enabled these QA projects, not just CPAN Testers, to advance further.

Firstly I would like to single out ShadowCat Systems, who have very kindly paid for my flight here. Thanks to BooK and Laurent for organising the event, and to all the sponsors and Perl community who have provided the funding for the venue, accommodation and food for the event. It has already been very much appreciated, and hopefully the significant submissions to GitHub and PAUSE are evidence of just how worthwhile this event is.

Thanks also to all those who are here, and are helping out in all shapes and forms to help Perl QA be even better than it already is.

For those that follow the conference surveys, you'll be pleased to hear that I have now put the results of both the Israeli Perl Workshop and the German Perl Workshop online. These are the first events this year to take advantage of the surveys, although several more are to come.

This marks the second survey for the German Perl Workshop and notes some small differences, while it was the first for the Israeli Perl Workshop. I hope the future organisers can make use of the results and that they allow me to continue the surveys with these workshops next year, and for the years to come.

Although the Israeli Perl Workshop was in English this year, Gabor and I are hoping to be able to provide the survey in Hebrew next year. The German Perl Workshop marked the first survey not in English last year, and it helped to start building up a language pack, which can be plugged into the survey software. I plan to formalise this during the year, so that other events, using languages other than English, can still take advantage of the surveys.

Thanks to all the organisers and the survey participants for taking the time to respond to the questions. It is very much appreciated.

German Perl Workshop 2011 - Speaker Evaluations

I have now sent out all the talk evaluations from this year's German Perl Workshop, or more correctly Der 13. Deutsche Perl-Workshop. If you were a speaker and haven't received an email, please check your spam folders first, and let me know (barbie at cpan . org) if you don't find it. The mail will have come from barbie at birmingham . pm . org.

My thanks to all the organisers of GPW2011 and everyone who took the time to respond to the evaluations. From previous experience the speakers have very much appreciated your feedback. I would also like to extend extra special thanks to Max Maischein aka "Corion", who took the time to translate all the questions, templates and emails into German for me.

This is the first survey that I have undertaken in a non-English language, and for the most part it has been very successful. While there have been some slight problems due to byte vs character lengths (I'll save my 'why-oh-why did we ever start with ASCII and not UTF-8' rant for another day), the work Max has done to provide all the translations has started me on a path to be able to accommodate other languages.

At the moment the plan is to create a GitHub repository of all the necessary files, with language branches containing the appropriate translations. Then, should anyone wish to request a survey instance in a non-English language in the future, their first step will be to provide the necessary translations. It currently takes roughly a day to set up an instance, so drop-in replacements for these files will ease the set-up process. It will also mean that as time goes on and questions get added, refined or deleted, we can replicate these changes across all languages.

I'd like to see the survey site get more use in the future, and although I'm happy to run the survey sites, with the support of Birmingham Perl Mongers, the longer term goal has always been to allow others to create their own instances. With the official release of Labyrinth this year, much of the tool set is now Open Source. I still need to release the Survey Plugin for Labyrinth and the additional command-line tools used, but getting the language translations moving will be a big step forward. Hopefully I'll have more news in the new year.

YAPC::Europe 2011 Survey Results

During August this year, in Riga, Latvia, YAPC::Europe brought together 285 people to learn, discover and discuss Perl. As previous attendees know, the YAPC conferences are a perfect opportunity to introduce yourself to the Perl community. YAPCs are now held all around the world, and each is very different from the others. Each has its own characteristics, and they all get better and better thanks to the feedback from attendees old and new, which is why the YAPC Conference Surveys are well placed to concentrate that feedback for future organisers.

Although the responses were down from previous YAPC::Europe events, we still had over 50%, so thank you to everyone who took the time to respond. Interestingly, of those who took the survey, none recorded themselves as coming from Latvia. I suspect this is in part due to the language barrier. As the surveys are in English, those who don't feel quite comfortable with the language might feel less inclined to feed back their thoughts and experiences. I'd like to have the surveys available in different languages, but accumulating some of the responses, particularly the free-text ones, may prove difficult. However, this is a goal for the future.

Unsurprisingly these days, we saw a large number of people attending who are regulars, either at the YAPCs and Workshops or in the Perl community generally. At the conference itself we did ask how many attendees were at their first YAPC, and the number was quite significant. However, we are still seeing roughly the same overall numbers, so we are not necessarily keeping those new attendees coming back as regulars. In this survey, however, no-one stated that they wouldn't attend another event in the future, so hopefully next year we should start seeing more familiar faces.

This year I plan to put the free-text feedback sections online, and may well provide these for previous years too. I normally only provide these to the organisers (both current and succeeding), but I think everyone could benefit from the thoughts and ideas, whether a YAPC organiser or an organiser of any other technical event.

Many thanks to all those who took the time to respond, both to the Conference Survey and all the Talk Evaluations. Your time is very much appreciated.

YAPC::NA 2011 Survey Results

During June this year, in Asheville, North Carolina, YAPC::NA assembled 251 people to learn about and discuss Perl and Perl projects, and to meet Perl people. The YAPC conferences are a perfect opportunity to tell the Perl community about your latest project, or to talk to other Perl developers face to face. YAPCs have now been running for 12 years, and each gains more focus and exposure than the last. In part this is thanks to all the previous organisers who have gone before, offering help and advice where they can. However, the YAPC Conference Surveys also help to provide valuable feedback to future organisers.

While only 34% of all attendees responded, the feedback has still proved very helpful and provided me with some additional questions for the future. I was recently asked how I thought the YAPCs had changed, and one of the changes I noted, as is hinted at in the feedback, is that many of the talks now focus more on Perl frameworks and applications, rather than specific modules or techniques. In a way it highlights how Perl has grown up. Perl is still a language and tool to get jobs done, but now there are more stable and constructive ways of getting those jobs done.

Many thanks to all those who took the time to respond, both to the Conference Survey and all the Talk Evaluations. Apologies for the delay in getting the results online, but events with CPAN Testers have taken most of my free time over the last 2 weeks :(

Earlier this week I attended YAPC::Europe 2011. Many thanks to Andrew, Alex and all the others involved with bringing the conference to life, it was well worth all the effort.

During the conference I gave two talks. The first was my main talk, Smoking The Onion - Tales of CPAN Testers, which looked at how authors can use the CPAN Testers websites to improve their distributions, as well as some further hints and tips for common mistakes spotted by testers over the years. It also looked at how some of the sites can be used by users to see whether a particular distribution might be suitable for their purposes or not. The talk seemed to go down well, and it seems a few were disappointed to have missed it, after discovering it wasn't my usual update of what has been happening with CPAN Testers. Thankfully, I did video the talk, and I think the organisers also have a copy, so expect to see it on YAPC TV and Presenting Perl at some point in the future.

Back in February I did a presentation for the Birmingham Perl Mongers, regarding a chunk of code I had been using to test websites. The code was originally based on simple XHTML validation, using the DTD headers found on each page. I then expanded the code to include pattern matching so I could verify key phrases existed in the pages being tested. After the presentation I received several hints and suggestions, which I've now implemented and have set up a GitHub repository.

Since the talk, I have started to add some WAI compliance testing. I got frustrated trying to find online sites that claimed to be able to validate full websites, but which either didn't work or charged for the service. There are some downloadable applications, but most require you to have Microsoft Windows installed, or again charge for the service. As I already had the bulk of the DTD validation code, it seemed a reasonable step to add the WAI compliance code. There is a considerable way to go before I get all the compliance tests that can be automated written into the distribution, but some of the more immediate tests are now there.

As mentioned in my presentation to Birmingham.pm, I still have not decided on a name. Part of the problem is that the front-end wrapper, Test::XHTML, is written using Test::Builder so you can use it within a standard Perl test suite, while the underlying package, Test::XHTML::Valid, uses a rather different approach and provides a wider API than just validating single pages against a DTD specification. Originally, I had considered these two packages should be two separate releases, but now that I've added the WAI test package, I plan to expose more of the functionality of Test::XHTML::Valid within Test::XHTML. If you have namespace suggestions, please let me know, as I'm not sure Test-XHTML is necessarily suitable.

Ultimately I'm hoping this distribution can provide a more complete validation utility for web developers, which will be free to use and will work cross-platform. For those familiar with the Perl test suite structure, they can use it as such, but as it already has a basic stand-alone script to perform the DTD validation checks, it should be usable from the command-line too.

If this sounds interesting to you, please feel free to fork the GitHub repo and try it out. If you have suggestions for fixes and more tests, you are very welcome to send me pull requests. I'd be most interested in anyone who has the time to add more WAI compliance tests and can provide a better reporting structure, particularly when testing complete websites.

Back last year I got a curious email from a fellow London.pm'er asking why I was releasing so many WWW-Scraper-ISBN distributions. The reason was quite simple, to make my life easier! Well okay, that's why I wrote the distributions, but I figured others might find them useful too.

In the UK the book trade is a bit odd, and I dare say the rest of the world suffers from this too. The publishers don't like to give too much information away about their books, and the central body for allocating ISBNs, Nielsen, don't always have all the necessary metadata available. The book trade uses MARC Records to transfer this metadata around, and unfortunately, while there is provision to include much of the metadata, it often isn't included. The obvious things such as the Author, Title and the ISBN itself are usually there, but some of the data relating to the physical attributes (pages, height, width and weight) rarely are.

Jumping forward several years, now needing this extra metadata, I first expanded the original four distributions. However, not all of these online bookstores provided this extra metadata. Picking a variety of books I searched to see what metadata I could retrieve, and came across several sites around the world that included this information to varying degrees. Much of the basic information regarding an ISBN shouldn't change from country to country, so metadata retrieved from Australia or New Zealand is as valid as that from America or the UK. There are aspects that can differ, such as the cover illustration, but the majority of metadata returned should be applicable regardless of location.

There were some interesting discrepancies in the units of weights and measures used across the sites too. While some stuck to a fixed set of units, others changed depending on how big the values were, particularly between grammes and kilogrammes. I settled on grammes for weight and millimetres for height and width, since metric units were the most commonly used across the various sites.
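A normalisation helper along these lines handles the mixed units; the unit spellings covered here are my assumption of what a scraper might encounter:

```perl
use strict;
use warnings;

# Conversion factors from the units seen on various sites down to
# grammes and millimetres.
my %weight_to_g  = ( g => 1, grammes => 1, kg => 1000, kilogrammes => 1000 );
my %length_to_mm = ( mm => 1, millimetres => 1, cm => 10, centimetres => 10 );

sub to_grammes {
    my ($value, $unit) = @_;
    my $factor = $weight_to_g{lc $unit} or die "unknown weight unit: $unit";
    return $value * $factor;
}

sub to_millimetres {
    my ($value, $unit) = @_;
    my $factor = $length_to_mm{lc $unit} or die "unknown length unit: $unit";
    return $value * $factor;
}

print to_grammes(1.2, 'kg'), "\n";       # 1200
print to_millimetres(23.4, 'cm'), "\n";  # 234
```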

It did cross my mind to include prices in the metadata returned, but as prices fluctuate frequently and are very location dependent, you are probably better off writing that side of things yourself for your specific purpose, such as a comparison website. I also left out depth, as only a few sites regularly provided a value for it. I can always save it for a future release anyway.

Hopefully those that work in the book trade, who have been wishing that MARC Records were populated a little more fully than they are currently, can make use of these distributions to help fill in the gaps.

I haven't been posting recently about the Perl projects I'm currently working on, so over the next few posts I hope to remedy that.

To begin with, one of the major projects I've been involved with for the past 8 years has been CPAN Testers, although you can find out more of my work there on the CPAN Testers Blog. This year I've been releasing the code that runs some of the websites, specifically those that are based on my other major project, Labyrinth. Spearheading these releases have been the CPAN Testers Wiki and CPAN Testers Blog, with further releases for the Reports, Preferences and Admin sites also planned. The releases have taken time to put together mostly because of the major dependency they all have, which is Labyrinth.

Labyrinth is the website management framework I started writing back in 2002. Since then it has grown and become a stable platform on which to build websites. With both the CPAN Testers Wiki and the CPAN Testers Blog, three key plugins for Labyrinth have also been released, which hopefully others can make use of.

The Wiki plugin was originally intended for the YAPC::Europe 2006 Wiki, but with the pressures of organising the conference and setting up the main conference site (which also used Labyrinth), I didn't get it finished in time. Once a CPAN Testers Wiki was mooted, I began finishing off the plugin and integrating it into Labyrinth. The plugin has been very stable for the last few years, and as a consequence was the first non-core plugin to be released. It's a fairly basic Wiki plugin, without too many bells and whistles; there are a couple of Perlish shortcuts, but for the most part you don't need them. The CPAN Testers Wiki codebase release was also the first complete working site for Labyrinth, which was quite a milestone for me.

Following that success, the next release was for the CPAN Testers Blog. Again the underlying plugin, the Blog plugin, has been stable for a few years, so was fairly quick to package and release. However, the secondary plugin, the Event plugin, has been evolving for quite some time and needed a little more work. As I use both these plugins for several other sites, it was a good opportunity to bring together any minor bug fixes and layout changes. Some of these have meant slight modifications to the core Labyrinth codebase and the core set of plugins. In addition it has prompted me to start working on the documentation. It is still a long way from complete, but at least the current documentation might provide some guidance to other users.

One of my major goals for Labyrinth was for it to be a 'website in a box'. Essentially this means that I wanted anyone to take a pre-packaged Labyrinth base (similar to the Demo site), drop it on a hosting service and be able to run a simple installation script to instantiate the database and configuration. The installation would then also be able to load requested plugins, and amend the database and configuration files appropriately. I haven't got to that stage yet, but it is still a goal.

With this goal in mind, I have read with interest the recent postings regarding the fact that DotCloud are now able to run Perl apps. This is definitely great news, and is exactly the kind of setup I had wanted to make best use of for the 'website in a box' idea. However, with several other frameworks now racing to have the coolest instance, it isn't something I'm going to concentrate on right now for Labyrinth. Plus there is the fact that Labyrinth isn't a PSGI framework, a capability others have eagerly added to their favourite frameworks. Labyrinth came from a very different mindset than other, now more well known, frameworks, and tries to solve some slightly different problems. With just me currently working on Labyrinth, as opposed to the teams of developers working on other frameworks, Labyrinth is never going to be the first choice for many reasons. I shall watch with interest the successes (and lessons learned from any hiccups) of the other frameworks, as it is something I would like to get working with Labyrinth. If anyone has the time, knows PSGI/Plack well enough, and would like to add those capabilities to Labyrinth, please get in touch.

The next notable plugins I'll be working on are the Survey, Music and Gallery plugins. The first of these has its own post coming shortly. The next notable CPAN Testers site release planned is the Reports site. Being considerably more involved, it might take a little longer to package and document, but it will likely be the most complex site release for Labyrinth, and will give anyone interested in the framework a good idea of how it can be used to drive several sites at once.

Many years ago I wrote a set of scripts and modules that together formed a way for me to access eBay internationally. I frequently bought records from the UK, US, Germany and Australia, so those were the plugins I focused on, but the intention was to add interfaces to other eBay sites too. I even gave a presentation at YAPC::Europe in 2004, called The Perl Auctioneer, which explained my progress.

As part of the currency calculations and conversion, I used the same site that eBay themselves were using, XE.com. As I became more involved in other projects, and my international eBay buying declined, my efforts to finish and release the Perl Auctioneer waned. However, I was still using the currency conversion module, so released it as a stand-alone package. In time this became Finance::Currency::Convert::XE.

Although I have occasionally updated the module, I no longer use it. However, others still do. XE.com themselves are, understandably, very protective of their data, and are very resistant to screen scrapers. Even though their own terms of use allow for personal use, and do not explicitly prohibit screen scrapers, they do make accessing the data from the command line very difficult. They have very recently upgraded their website with further measures to prevent automated tools scraping their data.

As I no longer use the module, I feel I have two choices: either pass the distribution on to someone who does want to invest time and effort in the module, or abandon it and remove the distribution from CPAN. As the module does not currently work with the latest XE.com site, unless someone comes forward I plan to remove the distribution from CPAN by the end of the month.

If you would like to take over the module, please email me (barbie@cpan.org) and let me know your PAUSE ID. I'll then put the wheels in motion to give you maintainer/author permissions.

Some time ago, a website I was working on needed the ability to view images on the current page from a thumbnail. Many websites now feature this functionality, but at the time only a few seemed to offer it, and the assumption was that the JavaScript required was rather complex. As such, I did a search of the viewer libraries available, either as Open Source or as a free download, that I could use for a commercial website.

The initial search returned rather more limited results than I expected, and seemed to imply that the complexity had put people off developing such a library. However, in retrospect it seems that a market leader has become so popular, stable and robust that others have chosen to provide different or limited presentations based on similar designs.

Back last year I began writing a review of some of the viewers, but never got around to finishing it. Having some time recently, I decided to both complete the review and revisit the viewers to see what improvements have been made since I first investigated them.

Before I begin the individual reviews, I should note the requirements I was looking for in a viewer. Firstly, the viewer needed to be self-contained, in both files and directory structure, so that the feature could be added or removed with minimal changes to other website files. The viewer needed to run completely on the client side; no AJAX or slow loading of large images would be acceptable. However, the most significant requirement was that all code needed to work in IE6. Unfortunately this latter requirement was non-negotiable.

I was quite surprised by the solutions I could find around the web, and although there are likely to be others now, the following is a brief review of each of the four immediate solutions I found, and my experiences with them.

Lightbox

Possibly the best known thumbnail viewer library available, and now a clear market leader. The original review was of v2.04, which had been the stable release since 2008. This month (March 2011) has seen a v2.05 release with added IE9 support. Lightbox is licensed under the Creative Commons Attribution 2.5 License, and is free to use for commercial projects, although a donation would be very much appreciated.

While this viewer works in most browsers, and the features of image sets and loading effects looked great, it proved unworkable in many of the IE6 browsers I tried across multiple platforms. Despite searching forums and howtos, there didn't seem to be an obvious fix to the problem. The viewer would either not load at all, load with a black layer over the whole web page, or begin to load and then crash the browser. I know there are many problems and faults with IE6 and its JavaScript rendering engine, but these were supposedly stable releases.

As Lightbox makes use of the Prototype Framework and Scriptaculous Effects Library, which were already being used within the website the viewer was for, the library initially seemed the best fit. Since it failed IE6 so dramatically and consistently, though, it disappointingly couldn't be pursued further.

Slimbox

Slimbox is a Lightbox clone written for the jQuery JavaScript Library. v2.04 is the last stable release, and the release originally reviewed. Slimbox is free software released under the MIT License.

Slimbox is based on Lightbox 2, but utilises more of the jQuery framework and is thus slightly less bulky. While it worked well in the browsers I tried, it flickered several times in IE6 when loading the image. Anyone with epilepsy viewing the effect might well have felt ill; even for someone unaffected, the strobing was extremely off-putting. I suspect this problem may well be an alternative side-effect to those seen with the original Lightbox, but again forums and howtos didn't provide a suitable fix to remedy the problem.

Dynamic Drive Thumbnail Viewer

This is the first of the two thumbnail viewers Dynamic Drive have available (the second is an inline viewer rather than an overlay, which is what I was after), and is the version made available on July 7th, 2008. Scripts by Dynamic Drive are made available under their Terms of Use, and are free to use for commercial projects.

This is a very basic viewer, relying on core functionality rather than flashy effects. As such, it is simple in design and presentation. Rather than create a full browser window overlay, as both Lightbox and Slimbox do, the Dynamic Drive viewer simply contains the viewing image within a simple DIV tag. There is the option to add visual effects, but these can be easily turned off.

This seemed to work in most of the browsers tried, except when clicking the image in IE6. The image appeared, but then a JavaScript error immediately popped up. After quickly reviewing the configuration and turning off the animation, the viewer opened and worked seamlessly across all the browsers tested.

Highslide JS

Highslide JS is a very feature-rich library, which provides much more than an image viewer. Highslide JS is licensed under a Creative Commons Attribution-NonCommercial 2.5 License, which means you are free to use the library for non-commercial projects. For commercial projects two payment options are available: $29 for a single website, and $179 for unlimited use.

The feature set for displaying images includes the style of animation used to open images, the positioning of text, and the linking of image sets. In addition, it also provides many features for regular content, which can be used for tooltip-style pop-ups, embedded HTML, IFrames and AJAX. Another standard feature is the ability to let the user move the pop-up around the screen, to wherever might be convenient.

However, there is a downside. While this works well in most browsers, even just loading the Highslide JS website in IE6 throws up several errors. Being so feature rich, it is a considerably larger codebase (although removing comments can bring this down to just over 8KB), and I suspect some older browsers may not be able to handle the complexity. Their compatibility table suggests it works all the way back to IE 5.5, but in the tests performed for IE6, when the site did open without crashing the browser, the viewer itself felt rather clunky when an image was opened, and several of the visibility settings just didn't work. You also frequently get an 'Unterminated string constant' error pop-up, which feels disconcerting considering they are asking you to pay for commercial usage.

If IE6 wasn't a factor, this may have been a contender, as the cost is very reasonable for a commercial project that would utilise all its features.

Conclusion

These are just the four viewers that were prominent in searches for a "thumbnail viewer". They all seem to have the same, or at least a similar, style of presenting images, which is likely due to the limited ways images can be displayed as an overlay. However, the basic functionality of displaying an image seems to have been overshadowed by how many cool shiny features some can fit into their library, with configuration seeming to be an afterthought.

With the ease of disabling the IE6 error through configuration, the basic functionality and the freedom to use it for commercial projects, the Dynamic Drive solution was ultimately chosen for the project I was working on. If IE6 wasn't a consideration, I would have gone with Lightbox, as we already use Prototype and Scriptaculous. With IE6 usage dwindling on the website in question (down from 38.8% in June 2010 to 13.2% in March 2011), it is quite possible that we may upgrade to a more feature- and effect-rich viewer in the future, and Lightbox does seem the prime candidate.

Consider this post a point of reference, rather than a definitive suggestion of which image viewer library to use. There may be other choices that suit your needs better than these, but these four are worth initial consideration at the very least.

Browsers & Operating Systems

For reference these were the browsers I tried, and the respective operating systems. And yes, I did test IE6 on Linux, where it occasionally stood up better than the version on Windows! Though this may be due to the lack of ActiveX support.

Paul Weller once sang of "a new direction. We want a reaction. Inflate creation." All three could be attributed to why two major events in the Perl event calendar started in 1999, and now happen all around the world today. The two events, the German Perl Workshop and YAPC::NA, were both a new direction for Perl events and specifically a reaction to more commercial events. They both also brought a new creativity to the Perl community.

In 2011 we now have YAPCs, Workshops and Hackathons happening on a monthly basis somewhere in the world. They are still very much organised by members of the Perl Community, and bring together a diverse group of people to each event. They often inspire some to create Perl events themselves. However, that initial enthusiasm is often quickly followed by panic, when the organisers start to figure out what they need to do to make a great event. Which is where a book might help.

I am planning to publish such a book, entitled 'Perl Jam - How to organise a conference ... and live to tell the tale'. The book is a guide for organisers planning to host a large technical event, with the aim of helping organisers think of everything, and prepare themselves for anything they might not have thought of, or forgotten. Organising a conference, workshop or hackathon can be a daunting prospect, but with the help of this book, it might make the experience much more enjoyable, and may even inspire you to do it all again!

'Perl Jam' is being made available for its first public draft via a GitHub repository. This is the third draft, and my thanks go specifically to Jon 'JJ' Allen and David Golden, for their extensive help and feedback so far. Also thanks to chromatic for allowing me to use the framework and scripts he used for his great book Modern Perl.

I welcome any and all comments and suggestions, so if you've ever organised a large event, please take the time to read the draft and see if there is anything not covered that you would have suggested. If you are a current organiser, please download and share the book with your team, and feel free to send me any additional notes you make as you go along. If you are thinking about organising a technical event in the future, are there any questions you would want answered that haven't been covered in the book?

Everything is up for discussion, including the cover (which is not the finished version), and I'm very interested to hear from anyone who has suitable photos that can be included in the book, as examples or to emphasise sections.

Last week I gave my first technical talk for several months. Despite my being a bit rusty, everyone seemed to find the talk interesting. The talk itself was about code I'd written to test XHTML completeness of web pages and further pattern matching of page content. I've been using and developing the testing code over the last few years, having written the initial basic script, xhtml-valid, back in 2008. Over the last 18 months I have revisited the code and rewritten it into a traditional Perl testing structure. The talk looked at the current state of the code and asked for advice on where to take it next.

The code has developed into two packages, Test::XHTML and Test::XHTML::Valid, and as such the talk naturally fell into two parts, looking at each package in more depth. I had originally planned a demo, but unfortunately my laptop wouldn't talk to the projector, so I had to rely on slides alone. This didn't seem to matter too much, as the slides conveyed enough of the API to give a decent flavour of what the packages were about.

The final questions I asked originally centred on where I was thinking of heading with the code base, but I also got asked a few questions regarding the technical aspects. My thanks to Colin Newell and Nick Morrott for giving me some ideas and pointers for further expansion of the code. As for my final questions, it was generally agreed that these should appear on CPAN in some form, and as two separate packages, but unfortunately nobody had a suitable name for either.

I plan to work further on the code, both to package them better and to include the suggestions from Colin and Nick, and then I'll see if anyone has some better suggestions for the names. In the meantime, the slides are now online [1] and the 2008 version 1.00 of the code base is also available [2]. I aim to have the current code base online soon, with a GitHub repo to provide ongoing developments for anyone who might be interested.

On the 1st January 2011, I released the first Open Source version of Labyrinth, both to CPAN and GitHub. In addition I also released several plugins and a demo site to highlight some of the basic functionality of the system.

Labyrinth has been in the making since December 2002, although its true beginnings are from about mid-2001. The codebase has evolved over the years as I've developed more and more websites, and gained a better understanding of exactly what I would want from a Website Management System. Labyrinth was intended to be a website in a box, and although it's not quite there yet, hopefully once I've released all the plugin code I can put a proper installation tool in place.

Labyrinth is now the backend to several Open Source websites, with CPAN Testers using it for the Reports, Blog, Wiki and Preferences sites, as well as some personal, commercial and community projects. As a consequence Labyrinth has become stable enough to focus on growing the plugins, rather than the core code. I'm sure there is plenty that could be done with the core code, but for the moment providing a good set of plugins, and some example sites, are my next aims.

As mentioned, I see Labyrinth as a Website Management System. While many similar applications and frameworks provide the scaffolding for a Content Management System, Labyrinth extends that by not only providing the ability to manage your content, but also providing a degree of structure around the functionality of the site: the management of users and groups, menu options and access, and notification mechanisms all enable you to exert more control dynamically.

When writing the forerunner to Labyrinth, one required aspect was the ability to turn functionality on and off instantly, which meant much of the logic flow was described in the data, not the code. Labyrinth has built on this idea, so that the dispatch tables and general functionality can be controlled by the user via administration screens, and not by uploading new code. When I started looking at this sort of application back in 2001, there was nothing available that could do that. Today there are several frameworks written in Perl that potentially could be tailored to process a website in this way, but all require the developer to design and code the functionality. Labyrinth aims to provide that pre-packaged.
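To illustrate the idea (with hypothetical action names, not the actual Labyrinth internals), a dispatch table held as data might look like this, where enabling or disabling an action is a data change rather than a code change:

```perl
use strict;
use warnings;

# A data-driven dispatch table: each row could equally live in a
# database, so the flow is controlled by editing data, not code.
# Action names and structure here are purely illustrative.
my %actions = (
    'articles-list' => { enabled => 1, code => sub { "listing articles" } },
    'articles-edit' => { enabled => 0, code => sub { "editing article"  } },
);

sub dispatch {
    my $name   = shift;
    my $action = $actions{$name};
    return "unknown action"  unless $action;
    return "action disabled" unless $action->{enabled};
    return $action->{code}->();
}

print dispatch('articles-list'), "\n";  # listing articles
print dispatch('articles-edit'), "\n";  # action disabled
```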

I'm primarily releasing Labyrinth so that I can release all the code that drives the CPAN Testers websites, giving others the ability to suggest improvements and contribute. The system allows me the freedom to build websites quickly and easily, with the hard work being put into the design and CSS layouts. With so many other frameworks available, all of which have bigger development teams and support mechanisms than I can offer, I'm not intending Labyrinth to be a competitor. It might interest some, which is great, but if you prefer to work on other frameworks that's great too. After all, it's still Perl ;)

On the face of it, OAuth seemed a bit confusing, and even the documentation is devoid of decent diagrams to explain it properly. Once I did get it, it was surprising to discover just how easy the concept and implementation is. For the most part Marc Mims has implemented all the necessary work within Net-Twitter, so Maisha only needed to add the code to provide the right URL for authorisation, and allow the user to enter the PIN# that then allows the application to use the Twitter API.

The big advantage to OAuth is that you don't need to save your password in plain text for an application. Once you enter the authorisation PIN#, the token is then saved, and reused each time you start up Maisha to access your Twitter feed.
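The persistence side is straightforward. As a rough sketch (the file format and helper names here are illustrative, not how Maisha actually stores its token), saving and reloading the access token so the user only enters the PIN once might look like this:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Save the OAuth access token and secret to a file - no password
# is stored, only the token pair granted after PIN authorisation.
sub save_token {
    my ($file, $token, $secret) = @_;
    open my $fh, '>', $file or die "cannot write $file: $!";
    print $fh "$token\n$secret\n";
    close $fh;
}

# Reload the token pair on start-up; returns an empty list if the
# user has not yet authorised the application.
sub load_token {
    my $file = shift;
    return unless -e $file;
    open my $fh, '<', $file or die "cannot read $file: $!";
    chomp( my ($token, $secret) = <$fh> );
    return ($token, $secret);
}

my (undef, $file) = tempfile();
save_token($file, 'abc123', 's3cret');
my ($token, $secret) = load_token($file);
print "$token $secret\n";  # abc123 s3cret
```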

As Identi.ca runs an Open Source implementation of the Twitter service, they have also implemented OAuth in their interface. However, a slight modification to Net::Twitter is needed, so I will wait for Marc to implement that before releasing the next version of Maisha.

So if you have been using Maisha and have been frustrated that you can no longer access Twitter, you now only need to upgrade to App-Maisha-0.14 and all should work again (once you've entered the PIN# of course).

If you are using Maisha, and have any feedback or wishlist suggestions please let me know.

The Optimum YAPC Attendance

In my recent post about promoting YAPCs, Gabor picked up on something regarding the optimum number of attendees. I think he makes a good point that for a conference like a YAPC, 300-400 attendees is a good number to aim for. Anything more and it can become a logistical nightmare for organisers. It also means that the conferences themselves can become a little more impersonal, when a major aim of YAPCs is to bring people together.

Bigger attendance creates problems for organisers, not only in accommodating the large numbers, but also in the cost. Universities have been ideal in the past, as they are usually quiet out of term time and can usually accommodate several hundred people for little outlay. However, finding venues that can accommodate thousands, which typically means professional conference venues, needs special effort to cover the costs. Events like FOSDEM are now so well established that large corporate sponsors are willing to donate without much persuasion, but a dedicated language conference would struggle to get the same kind of support.

YAPC::Asia can cope with 500 attendees, but now regularly sells out because they just cannot accommodate any more in the venue they use. In North America and Europe most of the venues can usually cope with around 400 attendees. In Europe we generally see lower attendances, as a larger number of attendees pay for themselves, and the travel and accommodation costs of personal attendance are too high for some. As a consequence it is unlikely we are going to see a dramatic increase in numbers unless Perl suddenly finds itself the language of choice for many businesses, especially corporates.

I have attended large conferences in the past, and while there is a wide choice of talks and more people to meet, it can be a bit overwhelming. You don't always get the chance to talk to all the people you wanted to, and many that you might have common interests with remain unknown to you. At the YAPCs it's a lot easier to talk to everyone, and you also have a better chance of someone pointing out someone else who you really should talk to. Although there are usually a few people I forget to find and say hello to, on the whole I do get to chat to some new attendees, and occasionally they'll come and introduce themselves to me, which is always a bonus. The smaller conferences just seem more sociable, which gives them more of a fun element and in turn makes them feel a bit more inclusive.

I think we still have plenty of room to manoeuvre, as I doubt we'll see many 400+ attended YAPCs in NA or Europe, so there is still lots of promotion worth doing. It all has the side effect of promoting YAPCs, Workshops, Hackathons, Perl and the community in general, not just in NA and Europe, but around the world. If people can't attend a YAPC, then we should be encouraging them to find a more local Perl Workshop. Both YAPCs and Perl Workshops are a great way to introduce yourself to the community, and for the community to bring out the best in you. Another 100 or so people attending YAPCs would be fantastic, and I'm sure the Perl Workshops around the world would love to see another 30-50 people attending too.

But as stated previously, promotion is the key. If you don't tell people how great you thought a YAPC or Perl Workshop was, how will others know that they should be attending the next one?

YAPC::Europe 2010 - Thoughts Pt 3/3 - Organising A YAPC

When considering whether to host a YAPC, potential organisers often have no idea what they are letting themselves in for. While it can be very rewarding, and a valuable experience, it is hard work. There are plenty of things to go wrong, and keeping on top of them can be quite daunting. However, when you first consider bidding you usually look to what's gone before, and over the past 10 years YAPC events have come on leaps and bounds. This year, YAPC::Europe in Pisa, Italy was no exception.

As mentioned in the previous post, the only real pitfall the Pisa organisers suffered was a lack of promotion. The actual event ran pretty smoothly. There were glitches with the WiFi network, but that seems to happen every year. Once again, it seems network companies just don't believe us when we tell them that potentially 300+ devices will want to connect to the network all at once. So although you could connect, the network was rather slow at times. Hopefully future organisers can point to past experiences and impress on service providers that when we say 300+ devices, we mean it! It's not just YAPC::Europe; YAPC::NA has suffered too. Thankfully these problems didn't detract from a great conference.

For many attendees, the primary motivation for attending a YAPC is still the talks. You get to see a wide range of subjects, hopefully covering areas of interest that suit all the attendees. However, this is extremely hard. In a few discussions during the event, I mentioned the feedback from the YAPC::NA Conference Survey, which featured several comments from attendees who felt that a beginner track would have been very useful. In retrospect, it might have been even better to have an Introduction To Perl tutorial before the conference, with the beginner track set aside for a variety of introductory topics covering aspects of the language, recommended modules, best practices or useful projects. The tutorial could then cover enough of the basics for beginners not to lose their way in the subject matter of some of the regular talks. Several people have commented that a beginner track, certainly for the first day, would be extremely useful. There have been several suggested approaches to this, but ultimately they are going to be a set of talks that are roughly the same each year.

At times speakers hear complaints that they are repeating talks, but with so many people attending for the first time every year, attendees often welcome the chance to hear them. So if you do have an introductory talk that you think would benefit from a repeat performance, take in the comments from the talk evaluations, see what you can improve, and submit it again the following year. I can see some speakers using this to improve their speaking talents and gain more confidence in themselves.

The scheduling this year, from my perspective, was great. I only had one minor clash, and one clash where I would have liked to have seen all four talks. It's unlikely you'll ever get away without any clashes, but if you can gauge the subject matter and level of talks well, and don't schedule potentially overlapping talks together, you can reduce many such conflicts. This year the list of talks was online for a while before a schedule was published. This allowed those already registered a chance to highlight talks they were interested in. I don't know if this helped to guide the schedule, but it did seem a good opportunity to see exactly which talks were going to be popular. Having said that, you can only rely on it for a short time, as getting the schedule published is really important, both for raising the profile of the conference and for persuading attendees to come to the event. Some conferences publish the schedule several months in advance, which can be hard to do, but does give potential attendees a chance to show their bosses why they should attend. Saying there might be some good and relevant talks rarely works.

This year the organisers made one of the best decisions ever regarding the schedule, and one that got appreciative comments from just about everyone. The talks started at 10am. In the past we have typically started around 9am, with some YAPCs starting as early as 8am. That early in the morning is a killer on the opening speaker. By starting at 10am, pretty much everyone was there every morning ready for talks. It made for a much more awake and attentive audience.

One aspect of the schedule that is down to the attendees to organise is the BOFs. This year, although several were suggested, I didn't see whether any of them happened. The one that looked likely, I would have attended had I been aware of it. To help these, there needs to be a BOF board by the registration table, on which attendees can write their own schedule. Having everything online is not very suitable for those who don't have laptops or cannot get internet connectivity. Plus a BOF board helps to promote the BOFs to those who haven't heard of them before. Sometimes you just have to fall back to low-tech ;)

Another potential hazard for organisers is not considering the breaks and lunches. If your venue is in the middle of a city or town, or very close to a variety of eating establishments, you can pretty much let your attendees fend for themselves during lunch. However, if they need to search for more than 15 minutes, that can leave very little time for eating before they have to return to the venue. With the venue being quite a walk away from any potential eating establishment, it was rather important that the organisers fed the attendees during lunch. As such, they laid on a fantastic spread. It certainly avoided any unnecessary wandering into town trying to find something, and also meant we all had an hour for lunch where we could mingle and chat. And that's pretty much exactly what we all did. The breaks and lunches were always full of discussion. They gave us a chance to carry on points from talks, catch up with friends and introduce ourselves to new people. If nothing else, this year's YAPC::Europe was extremely social.

As the saying goes, keep your attendees well fed, and you'll have a happy audience. That also means considering additional options, and it was good to see that lunch included a selection of vegetarian options too, as more and more attendees these days are vegetarian or vegan. For the breaks (and lunch if appropriate), try and include water, soft drinks, coffee and tea. Note that last one, tea. While much of Europe might prefer coffee, I can guarantee you'll get complaints if you don't provide at least English Breakfast Tea (we have a wider choice in the UK, but in the rest of the world, it always seems to be labelled as that). In Copenhagen they ran out every break time due to the caterers not anticipating the number of tea drinkers. Thankfully, for Pisa the drinks were very well stocked. A decent cup of tea goes a long way to keeping this attendee happy anyway ;)

The venue choice is always a difficult part of organising an event like YAPC, and largely depends on numbers. Over the last few years, several first choices have had to be abandoned because something hasn't worked out. The venue is never going to be perfect, but as long as there is plenty of room and everyone can get somewhere to sit then you've done well. You always need one room to hold everyone, but if you have some smaller rooms for the other tracks, try and avoid scheduling popular speakers or talks in them. Thankfully it doesn't happen often, and sometimes it can't be foreseen. This year Allison Randal did experience a bit of overcrowding in one of her talks, but no-one seemed to mind sitting on the floor or standing to hear.

The auction is always another trouble spot, and in recent years has rarely been necessary, as YAPCs usually make a profit these days. However, raising funds for the next year's organisers, TPF or YEF is never a bad thing, as it all ends up helping to fund and promote Perl events. This year the Pisa organisers tried to be a bit different, and had it worked as intended, I think it would have gone down well. We had 3 tag teams trying to auction off 4 items each. Had it been kept to that, and had the suggested time limit of 5 minutes been rigorously imposed, then the auction would have been short and a lot of fun too. Unfortunately the time limits got abandoned, and some of the items led to a few bemused looks on the faces of the audience. If you've never been to a YAPC, then the auction can be a bit intimidating. None of us are as flush as we once were, so we can't always afford to bump up the prices to the levels we saw in years gone by. Having said that, I do think we saw the highest price ever paid for a T-shirt, with Larry winning the PIMC shirt off Matt Trout's back, thanks to a lot of friends :)

One point that Dave Rolsky made in his observations of the event was regarding the price of attendance. We've now been running YAPCs for over 10 years and the prices have largely stayed the same in that time. There has been resistance to price increases, but 99 qr/(Dollars|Euros|Pounds)/ is *really* cheap compared to other similar events. I do think there need to be some alternative options, particularly for students, the low-waged (or unwaged) and businesses, but a small increase in the standard price would, as Dave highlights, generate a significant amount of revenue. One aspect of the pricing that we've rarely pitched in the right way has been for businesses wanting to send attendees, whether singly or en masse. Someone commented to José at YAPC::NA in 2008 that they had to pay for themselves, as their boss considered YAPC too cheap and therefore not a real conference. Having a business package that includes 1 or 2 tutorials in addition to the regular conference is one way to give value for money, but still charge a higher price. Lisbon tried this for 2009 and Riga are looking to use it for 2011. I hope it works, as it has the potential to encourage businesses to regard YAPCs as a credible training event for their employees.

Aside from the tower and the Cathedral there wasn't much to see in the town, which is probably a good thing, as it meant the town wasn't overly touristy or expensive. There were lots of choices for food in the evening, although mostly we all headed for the Piazza where we all met for the pre-conference meet-up. If you'd like your attendees to get a good flavour of your city, then it's worth investing time to point out evening social venues where attendees can meet-up. If you don't then the likelihood is they'll all head for the same place every night, as it's the only place they know how to get to.

If you have strong feelings (or even mild ones) about the conference, it would be great if you could take the time to enter them into the Conference Survey. All the responses help future organisers get a good idea of what attendees thought about the conference. In addition, please try and complete the talk evaluations, as I know the speakers do appreciate it. I spoke to a few speakers in Pisa who were very pleased to get the feedback, even if it wasn't always complimentary. Following some discussions, next year the talk evaluations will be simplified a little, so they will hopefully be quicker to complete.

As some may be aware, I started writing a book last year about how to organise a YAPC. After some feedback I had intended to make a second draft. Due to other commitments that hasn't happened as yet. Following further feedback from the YAPC::NA organisers and discussions with organisers and attendees of YAPC::Europe, as well as all the feedback from the surveys, I plan to pool those, together with the original feedback, and work on the next draft over the next month. Seeing the success of the git way of working, I'll be making the text available on GitHub, so anyone can supply patches. My eventual aim is then to publish an ebook, together with a print-on-demand version, that can be used by organisers of YAPCs and workshops to help them plan and improve Perl events for the future. If you're interested in such a book, keep an eye out for updates in the near future.

Overall I enjoyed YAPC::Europe this year, and came away with several ideas from talks and discussions in the hallway track. My thanks to the Pisa organisers, you did a grand job. Now have a well earned rest. Next year Riga will be our hosts. With Andrew and his crew now having so many workshops and YAPC::Russians behind them, next year should be every bit as successful as this year. Good luck guys.

A final thought from YAPC::Europe in Pisa this year. Josette Garcia noted that 4 people who attended the very first YAPC::Europe were in Pisa. I was one of them, and I think Dave Cross, Nick Clark and Léon Brocard were the others. Of the 4 of us I think Léon and myself are the only ones to have attended every single YAPC::Europe. I wonder who'll break first :)

YAPC::Europe 2010 - Thoughts Pt 2/3 - Promoting A YAPC

This year, YAPC::Europe was reasonably well attended, with roughly 240 people. However, a few weeks prior to the event, the number of officially registered attendees for YAPC::Europe 2010 was considerably lower. Although every year it seems that many register in the last 2 weeks, there is usually a higher number registered before then. So why did we have such low numbers registering until just before the conference this year? I'm sure there are several factors involved, but 2 strike me as significant.

The first is the current dates for the event. As mentioned in my previous post, the Perl community attending YAPCs is getting older, and many of us now have young families. August is notoriously bad for anyone with a family, as the school holidays govern a lot of what you're able to do. Those that can take time out to attend the conferences also have to juggle that with family holidays. Employers are often reluctant to have staff away during August, as they can too easily become short-staffed due to others taking holiday. Having said that, the attendances haven't fluctuated that much in recent times, regardless of whether early/mid-August is chosen or late-August/early-September. The exception, though, does seem to be Vienna in 2007, which attracted 340 attendees. As such, when deciding dates for a YAPC, bear in mind that some of your potential attendees may find it difficult to attend, or may only be able to decide almost at the last moment.

The second factor was a pitfall that this year's organisers fell into too: lack of communication. Immediately prior to the conference and during it, there was lots of news and promotion. However, 6 months ago there was largely nothing. Although we finally had about 240 attendees, it is possible that there could have been many more. Big splashes across the Perl community with significant updates (website launch, call for papers, opening registration and unveiling the schedule) are a great way to make people aware of what is happening, and can generate a buzz about the event long before it begins.

This year I noticed that a twitter search for 'yapc' in the weeks before YAPC::Europe featured mostly posts about YAPC::Brasil, and I'm currently seeing several posts for YAPC::Asia. Last year, José and Alberto kept up a constant feed of news, snippets and talk links on twitter and other social network micro-blogging services, which helped to generate posts from others attending or thinking of attending. This year the potential audience attracted via such marketing efforts seems to have been smaller than in previous years. The results of the Conference Surveys will hopefully give a better picture of this.

In recent times the Perl community has talked about marketing Perl in various ways. However, promoting our own events seems largely left to the organisers. While the organisers can certainly add fuel for the fire, it's the rest of the community that is needed to fan the flames. In the past YAPCs and Workshops have been promoted across various Perl sites, and in various Linux and OpenSource channels, which in turn generated a lot of interest from attendees and sponsors. The latter target audience is just as important as the former. While we want more people to attend the events, the sponsors are the people who fund them to make them happen. Not marketing the events for maximum exposure likely means there are potential sponsors who either never get to hear of our events, or are turned off by the lack of exposure the event is generating.

Although the events do manage to get sponsors, for the organisers it can often be a very traumatic process getting sponsors involved. Once you've made initial contact, you'll need to persuade them that sponsoring the event is a good way to market their company. If they're able to see photos online of the events (possibly including sponsor branding), or read blog posts that direct people to the conference website (with all the event sponsors listed), it gives potential sponsors a feeling that it may be a worthwhile investment. Some sponsors are strong supporters of OpenSource and want to give back, but a large number are looking to promote their own brand. They're looking to make maximum revenue for a minimum outlay. They want to see that funding events is going to generate further interest and brand recognition to their target audience. Exposure through blogs and other online sources all helps.

As I've implied, much of this exposure is down to the community. If you attended YAPC::Europe (or YAPC::NA or any other Perl event, including Workshops) have you written a blog post about it? Did you tweet about the event before you went, during or even after? Have you posted photos online and tagged them with the event, in a way that others can find them? YAPC::Brasil and YAPC::Asia attendees seem to be doing this rather well, and there is a lot we can learn from them. In the last week, there have been several posts by attendees of YAPC::Europe 2010, but of the 240 people attending, it really is a small percentage. And likewise I saw a similar kind of percentage posting about YAPC::NA this year too. Several years ago use.perl and personal blogs were full of reports of the event. What did you learn at the event, who did you meet, what aspects of Perl are you going to take away with you from the event? There is a lot you can talk about, even if it was to mention one specific talk that you felt deserved comment.

With aggregators, such as Iron Man, Planet Perl and Perlsphere, whether you post via use.perl, Perl Blogs or your own personal site, you can get the message out. Next year, anyone wondering whether attending a YAPC is worthwhile is likely to search for blog posts about it. Are they going to find enough reasons to attend, or persuade their manager that they should attend? I hope so. YAPCs and Workshops are a great way to promote what is happening in Perl, and by talking about them we can keep that interest going long after the event itself.

In Gabor's lightning talk, looking at the Perl::Staff and events group, he highlighted the differences in attendances between the conferences. Typically a YAPC::Europe has 200-300 attendees, YAPC::NA has 300-400 and YAPC::Asia has around 500 attendees. However, FOSDEM (5,000), LinuxTag (10,000) and CeBit (400,000) all attract much higher numbers. It's a fair point that we should try and provide a presence at these other OpenSource events, but a dedicated language interest event is unlikely to attain those attendances. The hope though is that we may have a knock-on effect: people seeing Perl talks and a good Perl presence at those other events might just take more of an interest in Perl, the community and the various Perl specific events.

I'd be very interested to see attendance figures for other dedicated language conferences, particularly for Europe, as I think Perl is probably about average. The EuroPython guys certainly attract similar numbers to Birmingham. In the past I've done a fair amount of pitching Perl at Linux, OpenSource and Security Conferences in Europe and to Linux User Groups around the UK. Birmingham Perl Mongers undertook 3 "world" tours in 2006, 2007 & 2008 doing exactly that. It was great fun, and we got to meet a lot of great people. If you have a local non-Perl group, such as a LUG, would they be interested in a Perl topic? Are you able to promote Perl, the Perl community or Perl events to them? Sometimes even just attending is enough, as you'll get to talk to plenty of other interesting people. The initial 2006 tour was primarily used to promote YAPC::Europe 2006, which Birmingham Perl Mongers were hosting that year, and it did help to raise the profile of the event, and eventually got sponsors interested too.

One thing that the Pisa organisers did, specifically osfameron, was to broadcast Radio YAPC podcasts (Episodes 0, 1, 2 & 3). Genius. I got to listen to them after each day, but I can imagine many weren't able to hear them until they returned home. It would have been great to have something before the conference too, even just the news updates and some of the highlights to look forward to. Interviews with the organisers and any registered attendees would have been great too. It was a nice touch to the event, and its promotion, to be able to feature interviews with speakers and attendees to get their experiences. I hope future organisers can try something similar too.

There are several people trying to raise the profile of Perl at the moment, but it takes the whole community to support their efforts by blogging, talking beyond our community and promoting events to those who might not have considered treating the conference as part of their training. We have a great community, and one that I'm pleased to be a part of. I want the community and the events to continue for many years to come, and talking about them can only help that. It's why Matt Trout shouted at many of us to blog about Perl and promoted the Iron Man aggregation competition.

The Perl community and events are very healthy at the moment, we just don't seem to be talking about them enough. As the business cards state, we do suck at marketing. If we want to avoid the mistakes of O'Reilly at OSCON last month, and the badly named tags, then promoting YAPCs and your experiences at them is a good way to show how it can be done right.

YAPC::Europe 2010 - Thoughts Pt 1/3 - Young Blood & The Old Guard

Last week I was in Pisa for YAPC::Europe 2010. Although I was doing a talk about CPAN Testers, my intention was to keep a low profile and observe more. Having run the conference surveys for the past few years, it has been noticeable that the attendance has been changing. While there are new people coming along to YAPCs, the general average age is getting older. Marketing Perl to companies to encourage its use is one thing, but attracting people in general to the language is also important. The fact that for a notable number of attendees this is their first YAPC, probably means we are getting something right.

There were several European Perl Mongers who were noticeably absent this year. While some had posted apologies (mostly due to imminent baby arrivals, it would seem!), others perhaps have moved on to other jobs, projects or languages, or their lives mean that they cannot commit to something like YAPC any more. While we miss them, it is a natural way for the community to evolve. It does give a chance for newcomers to become involved, and this year I wanted to see who we are potentially going to see more of.

It seems we have quite a few people who are giving us, the Perl community, a fresh look and I think that the Perl community is rather healthy at the moment thanks to them. At least from a European perspective. YAPCs are an ideal chance for people to meet and discuss projects, which otherwise can take days or weeks via email and even IRC. Those new to projects can better introduce themselves and forge better communication channels with other project members, both during the conference and at the evening social events. I think it was Dave Rolsky who observed that the Europeans seemed more accustomed to putting down laptops and talking, rather than sitting in silence hacking away. There certainly seemed to be lots of discussion in hallways this year at least.

With all the fresh faces around, it's crossed my mind on several occasions, as to who is the old guard these days. There are several I could name who kind of fit the bill, and many of us have been around working on projects for quite a few years. Not necessarily hacking on perl itself, but certainly helping to build the Perl community. We have quite a vibrant community, one that I think is quite inclusive, supportive and appreciative. We have disagreements at times, but it's a community that seems to easily span age and experience barriers and is willing to learn from each other.

Keeping a low profile initially seemed to be working for me, right up until the afternoon of the last day. During the day, José had asked if I would help with his lightning talk, but not wanting to be part of any more talks, I respectfully declined. Little did I realise it was just a ruse, so he could say thank you to me for organising and running the YAPC Surveys. So much for not drawing attention to myself! After the Lightning Talks, brian d foy took centre stage to present the White Camel Awards. I was very pleased to see both Paul Fenwick and José Castro receive awards, and in fact was laughing at José as he realised one of the awards was going to him. However, José was almost in hysterics when he saw my reaction as I realised I was also receiving an award.

As I mentioned in my acceptance speech, I've never wanted an award for what I do. I do it because I want to, and because I love being part of this community. I had been asked before whether I would accept a White Camel Award, and I'd said no. It's not that I think the awards themselves are a bad thing, it's just that I think others have been more deserving of them. I've been involved in many Perl projects over the years, and have largely hidden behind them, as I've always felt the projects themselves are far more important than me. The fact that several people felt I needed to be acknowledged this year, regardless of my reluctance to receive the award, I guess means that sometimes I just have to accept that people would like to say thank you for the work I do. If, like José, there was one person I should thank for introducing me to the Perl community, it would be Richard Clamp. It was Richard who gave me my first proper Perl job and persuaded me to go to a London Perl Mongers social.

Which sort of brings me to one of the projects I helped with last year, and which I'm very pleased to see continuing this year. Introducing people to the Perl community is one aspect of the Send-A-Newbie programme. Edmund instigated the programme last year, and we managed to bring 3 people to YAPC, giving them a chance to experience the conference and the community. The hope was that they would benefit from the experience, and hopefully feel more empowered to contribute to the community. Then maybe, in the future, they would be able to attend further YAPCs. I was delighted to see Alan Haggai Alavi at this year's YAPC, and surprised to see him so soon. I was then even more impressed to hear what he has been doing to promote Perl in India, as this is exactly the kind of enthusiasm the Send-A-Newbie programme can benefit from too. I spoke briefly with Leon Timmermans, who was this year's attendee via the Send-A-Newbie programme, and again it seems we've found another deserving recipient.

With programmes like Send-A-Newbie, the Perl marketing efforts and the community in general, I'm very hopeful that we'll be seeing more young blood in the community in the years to come. However, it still needs some effort from every one of us to ensure that happens. Which brings me to my next post in this short series, which I'll be posting soon.

I've now been in the community for over 10 years, with Birmingham Perl Mongers celebrating their 10th birthday in September. I guess that means I'm one of the old guard now, which isn't bad for a C programmer who had a lot to learn all those years ago. I feel I've come a long way in the last 10 years, and it's been a fantastic journey. Perl and the community have changed immensely in those years, and I'm looking forward to seeing how the young blood and fresh faces of today take us in new and interesting directions over the next 10 years and more.

Last year I went to 3 conferences: YAPC::NA, YAPC::Europe and LUGRadio Live. All very different in their own way, although all Open Source. Due to other projects, work and family commitments, it has taken quite a bit of time to review all the photos. After several months, I finally found some time to whittle them down to the selection I have uploaded here.

The first conference, YAPC::NA, took place in Pittsburgh, PA, USA. The team have been holding the Pittsburgh Perl Workshops for several years now, and by all accounts they had been very well received. With the YAPC series of conferences having started in Pittsburgh, at the Carnegie Mellon University where this conference also took place, the organisers were quite proud to promote a sort of homecoming for the event. It was a good conference, though my first talk was somewhat problematic as we couldn't get a laptop to work with the projector. Thankfully my second talk went without a hitch. My thanks to confound for introducing me to 'xrandr', which solved all the problems I had getting Ubuntu talking to the projectors.

The second conference, YAPC::Europe, was in Lisbon, Portugal. The conference itself was packed full of talks, though I think my lightning talk, which I'd been refining over the previous few months, generated the biggest reaction. Not surprising really, as it reminded people just how productive the Perl community was, particularly regarding CPAN.

I had originally thought about hiring a car and travelling along the Vasco da Gama Bridge (at 10.7 miles long, the longest road bridge in Europe), doing the circuit via the monument on the other side of the Tejo river, and returning to Lisbon via the 25 de Abril Bridge (Lisbon's other bridge). I didn't in the end, but maybe I can save that for another time. Instead, fellow Birmingham.pm'er Brian McCauley and I walked around the city and took in some of the sights. When we got to the castle we managed to bump into a few other attendees (Paul Johnson, Aaron Crane and R Geoffrey Avery), who had also taken the chance to do some sightseeing.

The last conference I attended was LUGRadio Live. For a number of reasons I didn't put forward a talk this year, but suggested JJ should give a talk instead. With the radio show no longer running, the conference had much more of a grassroots feel to it again. There were some good talks and a couple of famous names, but mostly it felt like one big Linux User Group meeting, which to a degree it was, just a bit more global than your regular user group meeting ;) The conference was dubbed 'Back To Basic', but that really only applied to the extravagance. The quality of the conference was first rate. Being in Wolverhampton, just round the corner for me, I didn't take the opportunity to do any sightseeing, not that Wolverhampton is exactly the kind of place to do any sightseeing. As it happens I had taken Dan to the event, and he loved it, especially building the lego models with all the other geeks. The following day was OggCamp, and although I would have liked to have attended, I had other commitments so had to pass. I think having the two events side by side was a great idea though, as it gives both events a chance to feed off each other.

This year I'm currently only planning one conference, YAPC::Europe in Pisa, Italy. All being well I may get to see the tower, but as I'll be flying in and out just for the conference, I don't expect to see much more. I'm still undecided whether to submit a talk, as I'm trying to think of a suitable subject. I don't like repeating myself, and my two highest-profile Perl projects (CPAN Testers and YAPC Surveys) I've now covered for a couple of years, so we'll see.

More photos to come, as I find time to get through the plethora of photos I've taken over the last year or so.

It has been quite a few months since I last posted here. Quite a few events and projects have happened and held my attention since I last wrote in my blog. And I still have a backlog of photos and videos from last year to get through too!

I did wonder whether anyone might think that, after talking about Why The Lucky Stiff in one of my last posts, I had done the same. Well, those who follow my CPAN Testers work will know that CPAN Testers 2.0 has been a rather major project that finally got properly underway in December 2009. It's nearing completion, and I'll cover some of the highlights in a future post. Although it's been my most consuming project over the last 6 months or so, it hasn't been my only one. As mentioned in another of my last posts, I'm writing a book about how to host a YAPC. Due to other projects taking a higher priority, this has taken somewhat of a backseat for the time being, but I do plan on getting a second draft together within the next few months. I have looked into self-publishing the book and I'm now planning to have it formally registered with an ISBN (an international book number) and supplied via print-on-demand print runs.

Another project that has been ongoing alongside my CPAN Testers work, has been my website management system, Labyrinth. This has been the website application I have been developing since 2002, and although several other Perl web frameworks have now been developed since, to lesser and greater degrees, Labyrinth has had the disadvantage of only having 1 core developer for the past 8 years. It's not an application that will revolutionise web development and deployment, but it has very successfully worked for a number of websites I have developed over the years. After having been relatively stable for the past year or two, I'm now cleaning up the code so I can properly release it as open source. This is mostly so that anyone wishing to contribute to CPAN Testers, or the YAPC Surveys, will then have all the code available to them. If anyone wants to use it and help develop it further, that would be a welcome bonus, but realistically other web frameworks have gained so much mindshare that I'm not expecting Labyrinth to make much of a dent any more. Not that that is a problem, as Labyrinth has made deploying websites so much easier for me, that I'll just be glad to let people help on CPAN Testers and the YAPC Surveys.

Speaking of the YAPC Surveys, YAPC::NA 2010 and YAPC::Europe 2010 are fast approaching. These will be the next projects to get up and running. Thankfully the code base just needs a few upgrades to the latest version of Labyrinth, and some work on skinning the CSS to match the respective YAPC sites. All being well this should only take a few days. Then I'll be looking to release this version of the code base for anyone wishing to run similar surveys themselves. I've already had one interested party contact me regarding a conference in October, so hopefully the code will be suitable, and only the questions will need adapting. We shall see.

My other major project this year, also began back in December 2009. As some readers are well aware, I am an ex-roadie. From 1989-1994 I was a drum tech, lighting engineer and driver for Ark, one of the best Black Country bands ever. Not that I'm biased or anything ;) Last year the band got together for some rehearsals and planned a few reunion gigs. With interest gaining, an album was also planned. So this year, the band began recording and booking gigs. As a consequence the Ark Appreciation Pages desperately needed a makeover. I'll write more about what happened next in another post. Ark are back, and Mikey and I are delighted to be able to be involved with the band once again.

That's just a few of the projects that have taken up my time over the last 6-8 months. There are several others that I hope to post about, family, time and work permitting. Expect to hear a little more from me than you have so far this year.

For those that might not be aware, I was made redundant on 31st March (the day after the QA Hackathon had finished). Thankfully, I start a new job next week, so I've managed to land on my feet. However, this has meant that I've had the whole of April off to do stuff. My plan was to work on some of the Open Source projects that I'm involved with, and move them further along to where I wanted them to be. As it turned out, two specific projects got my attention over the last 4 weeks, and I thought it worth giving a summary of what has been going on.

YAPC Conference Surveys

Since 2006, I've been running the conference surveys for YAPC::Europe. The results have been quite interesting and have hopefully helped organisers improve the conferences each year. For 2009 I had already planned to run the survey for YAPC::Europe in Lisbon, but this year will also see YAPC::NA in Pittsburgh have a survey of its own.

The survey site for Copenhagen in 2008 added the ability to give feedback on Master Classes and talks. The Master Classes feedback was a little more involved, as I was able to get the attendee list, but the talks feedback was quite brief. As such, I wanted to expand on this aspect and generally improve the process of running the surveys. Part of this involved contacting Eric and BooK to see if ACT had an API I could use to automate gathering some of the information. I was delighted to get an email back from Eric, who very quickly incorporated an API that I could use to retrieve the necessary data and keep the survey site for a particular conference up to date, even during the conference itself.

With the API and updates done, it was time to focus on expanding the surveys and skinning the websites to match the now live conference sites. The latter was relatively easy, and only required a few minor edits to the CSS to get them to work with the survey site. The survey site now has 3 types of survey available, though only 2 are visible to anyone not taking a Master Class. Those who have taken one of the YAPC::Europe surveys will be aware that I don't use logins, but rather a key code to access the survey. This has been extended so that it can now be used to access your portion of the survey website. Key codes can now be automatically emailed to attendees before the conference (or during it, if they pay on the door), allowing everyone to give feedback on talks during the conference. On the last day of the conference the main survey will go live, so you can then answer questions relating to your conference experience.

I'm hoping the slight change won't be too confusing, and that we'll see even greater returns for the main survey. Once it does go live, I'd be delighted to receive feedback on the survey site, so I can improve it for the future.

CPAN Testers Reports

Since taking over the CPAN Testers Reports site in June 2008, I have spent a great deal of time improving its usability. However, it's come at a price. By using more and more Javascript to dynamically change the contents of the core pages, I have received a number of complaints that the site doesn't work for those with Javascript disabled, or who use a browser that doesn't implement Javascript. For this reason I decided that I should create both a dynamic site and a static site. The problem with this is that the current system to create all the files takes several hours for each set of updates (currently about 16 hours per day). I needed a way to drive the site without worrying about how long everything was taking, but also to add some form of prioritisation, so that the more frequently requested pages would get updated more quickly than those rarely seen.

During April, JJ and I went along to the Milton Keynes Perl Mongers technical meeting. One of the talks was about memcached, and it got me thinking as to whether I could use it for the Reports site. Discussing this with JJ on the way home, we threw a few ideas around and settled on a queuing system to decide what needed updating, and on better managing the current databases, adding indexes to speed up some of the complex lookups. I was still planning to use caching, but as it turned out memcached wasn't really the right way forward.

The problem with caching is that when there is too much in the cache, the older items get dumped. But what if the oldest item to be dumped is extremely costly on the database, and although it might not get hit very often, it's hit frequently enough to be worth keeping in the cache permanently? It's possible this could be engineered with memcached if it only applied to a handful of pages, but for the Reports site it's true for quite a few pages. So I hit on a slightly different concept of caching. As the backend builder process creates all these static files, part of the process involves grabbing the necessary data to display the basic page, with the reports then being read in via the now static Javascript file for that page. Before dropping all the information and moving on to the next item in the list, the backend can simply write that data to the database. The dynamic site can then grab the data and display the page pretty quickly, saving a lot of database lookups. Added to the fact that the database tables have been made more accessible to each other, the connection overhead has also been reduced considerably.
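The idea above amounts to having the builder persist each page's precomputed data as a side effect, so the dynamic site can serve it with a single lookup. This is a minimal sketch of that pattern in Python with SQLite; the actual site is written in Perl against a different schema, and the table and function names here are purely illustrative.

```python
import json
import sqlite3

# Illustrative cache table: one row of precomputed page data per page.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_cache (page TEXT PRIMARY KEY, data TEXT)")

def builder_store(page, data):
    # Backend builder: having already gathered the data to write the
    # static files, write the same data to the cache before moving on.
    conn.execute("INSERT OR REPLACE INTO page_cache VALUES (?, ?)",
                 (page, json.dumps(data)))

def dynamic_fetch(page):
    # Dynamic site: one cheap lookup replaces the expensive report
    # queries; the item never gets evicted the way an LRU cache would.
    row = conn.execute("SELECT data FROM page_cache WHERE page = ?",
                       (page,)).fetchone()
    return json.loads(row[0]) if row else None

builder_store("dist/Foo-Bar", {"reports": 42, "latest": "0.03"})
print(dynamic_fetch("dist/Foo-Bar")["reports"])  # 42
```

Unlike memcached, nothing is ever aged out, which is the point: the costly-to-compute pages stay available permanently, refreshed only when the builder next processes them.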

The queuing system I've implemented is extremely simple. On grabbing the data from the cache, the dynamic site quickly checks to see whether a more recent report exists. If there is one, an entry is added to the queue with a high weighting, to indicate that a website user is actually interested in that data. Behind the scenes, the regular update system simply adds an entry to the queue to indicate that a new report is available, but with a low weighting. The backend builder process then builds the entries with the highest accumulated weightings first, generating all the static files for both the dynamic site and the static site, including all the RSS, YAML and JSON files. It seems to work well on the test system, but the live site will be where it really gets put through its paces.
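The weighting scheme described above can be sketched in a few lines. This is an illustrative Python toy, not the real Perl implementation; the weight values and names are invented for the example.

```python
from collections import defaultdict

# Hypothetical weights: a site visitor requesting a page outranks a
# routine background update for the same page.
USER_REQUEST, NEW_REPORT = 50, 1

queue = defaultdict(int)  # page -> accumulated weight

def enqueue(page, weight):
    # Repeated entries for the same page pile up weight, so popular
    # pages naturally rise to the front of the queue.
    queue[page] += weight

def next_to_build():
    # The builder takes the page with the highest accumulated weight,
    # then rebuilds all its static files (HTML, RSS, YAML, JSON).
    if not queue:
        return None
    page = max(queue, key=queue.get)
    del queue[page]
    return page

enqueue("dist/DBI", NEW_REPORT)     # background update
enqueue("dist/Moose", NEW_REPORT)   # background update
enqueue("dist/Moose", USER_REQUEST) # a visitor viewed this page
print(next_to_build())  # dist/Moose
print(next_to_build())  # dist/DBI
```

The effect is that rarely viewed pages still get rebuilt eventually, but anything a real visitor is waiting on jumps the queue.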

So you could be forgiven for thinking that's it, the new site is ready to go. Well, not quite. Another part of the plan had always been to redesign the website. Leon had designed the site based on the YUI layouts, and while it works for the most part, there are some pages which don't fit well in that style. It has also kept pretty much the same style since it was first launched, and I had been feeling for a while that it needed a lick of paint. Following Adam's recent blog post about the state of Perl websites, I decided that after the functional changes, the site would get a redesign. It's not perhaps as revolutionary as some would want, judging from some of the ideas for skins I've seen, but then the site just needs to look professional, not state of the art. I think I've managed that.

The work to fit all the pieces together and ensure all the templates are correct is still ongoing, but I'm hopeful that at some point during May, I'll be able to launch the new look websites on the world.

So that's what I've been up to. I had hoped to work on Maisha, my other CPAN distributions, the YAPC Conference Survey data, the videos from the QA Hackathon among several other things, but alas I've not been able to stop time. These two projects perhaps have the highest importance to the Perl community, so I'm glad I've been able to get on with them and get done what I have. It's unlikely I'll have this kind of time again to concentrate solely on Open Source/Perl for several years, which in some respects is a shame, as it would be so nice to be paid to do this as a day job :) So for now, sit tight, it's coming soon...

After the last few weeks of trying to access Twitter from the command line, I set about writing something that I could expand to micro-blog to any social networking site that supports many of the Twitter API type commands. At the moment it only works with Twitter and Identi.ca, but my plan is to look at creating plugins, or more likely to allow others to create plugins, that can enable the tool to interact with other micro-blogging sites.

After trying to think of a decent name, I finally settled on Maisha. It's a Swahili word meaning "life". You can grab the code from CPAN as App-Maisha.

Currently you'll need to use the standard Perl install toolset to install the application, but ultimately I'd like to have something that you can install just about anywhere without the headache of installing dependencies. I'll have a go at doing .rpm and .deb package releases, and will also try using PAR. It would be nice to have this as a standalone application that just about anyone can use, but for now CPAN will have to do.

My next immediate step is to look at writing something that interfaces with Facebook without requiring a developer key or any such nonsense. It will probably have to involve a bit of screen scraping, unless there is a more official API, but as yet I haven't found one. Everything regarding Facebook applications seems to centre around the developer API, which can do all sorts of dubious things, but mine is purely for the user to control from their desktop, not a 3rd party website/server. Shipping users an application tied to a developer API key assigned to me is therefore wholly inappropriate. It would be nice if they had a restricted User API, which allowed you to update your status and look at your friends' statuses, but I think I'll be in the minority wanting it.

In 2006 I, along with 3 others from Birmingham Perl Mongers, organised the 2006 YAPC::Europe Perl Conference. It was thankfully a great success, and invigorated several attendees with ideas of things they could do to join or create communities, whether that was forming a local Perl Mongers user group or starting a code project that would eventually be submitted to CPAN. One person, however, was inspired to go to another YAPC the following year, and then to submit a talk and speak at the 2008 YAPC::Europe Perl Conference. Had the 2006 conference not been in Birmingham, UK, Edmund would likely never have gone to a YAPC, and never realised how valuable they are. Not just in terms of the presentations and speakers, but in terms of the communities and projects that are discussed, which he might not otherwise be aware of. And perhaps most importantly, he realised just how easy it is to be included in the community, and how easy it is for everyone to make a difference.

At the conference dinner in 2008, Edmund was struck by the lack of younger members of the community in attendance, and started to think about why. For some time I have been trying to understand what we as a community can do to bring new people in, and although my perspective has focused on YAPCs, it applies equally to projects and local user groups. However, there is one aspect that I had neglected, and that was obvious to Edmund: funding. Most of those we are trying to encourage to come along to a YAPC are likely to be unwaged or on low wages, and cannot afford the costs of travel and accommodation for 4-6 days.

Last week Edmund launched the Send-A-Newbie website, with the support of the organisers of the 2009 YAPC::Europe Perl Conference, to be held in Lisbon, Portugal, together with several members of the Perl community who have voiced their approval. It is a great idea, and a great way to give students in particular a chance to attend the biggest Perl developer conference in Europe.

The initiative aims to send at least 6 people, although even if only 1 person is selected to attend this year, I would consider it a success. As it happens, some grant applications have already been received, so it is likely that at least 1 person will attend thanks to the programme. Hopefully more will be approved for grants, providing the funding can be obtained.

So how can you help? Well, if you have the ability to do so, please consider donating. Mention the programme to anyone who you think might be a worthy recipient of a grant, and get them to apply. Mention it at your local user group, and see whether anyone can help with a donation. In order to keep YAPCs and the Perl community healthy, we need to convince potential future stars that attending the conference is a worthwhile opportunity. If they could benefit from a grant to cover their travel and accommodation costs, then it really is in both your interest and theirs to do something about it. Applications will be accepted until 1 June 2009, so there is plenty of time yet to promote the programme and apply for grants.

Something that has bugged me recently is my lack of regular posts to my personal blog. I rarely write about the projects I work on, as most are featured in some form or another on the various Perl sites that I run. But I feel I ought to make a note of snippets of ideas and thoughts about some of them here too. Not that I want this to become a technical blog, but there are random thoughts that would fit better here than over there. So expect a few more project-related posts in the future.

I also have a considerable backlog of gig photos as well as the family type photos that I want to go through, and at least put a few photos online. So that might help make my postings a little more regular :)

Recently there has been a very strong reaction to a news story regarding a woman who bought a Dell laptop that came with Ubuntu preinstalled. Now until Jono's personal post, I hadn't heard about it, but after reading Jono's reaction, I decided to look into it further.

Unfortunately for the woman in question, her name is now so tightly tied to this news story that, should a future employer ever search for her name, it's not necessarily going to put her in a good light. However, the same is true of the many reactionary members of the Linux and Ubuntu communities who responded to the story, and to the later blog posts by the news reporter. There are reactionary people in every community, whether it involves computers or not. Even though many are acutely aware that these reactionaries are a small portion of a community, and rarely represent the true community, unfortunately by their very nature they are the first to react, and often shout the loudest.

In this particular news story, though, there are a couple of elements that don't quite ring true. Firstly, the woman claims that she accidentally ordered the laptop with Ubuntu pre-installed. Now, although Dell were very vocal about the fact they were going to offer Linux distributions on their laptops, unless you specifically search or ask, the default install is still Windows. It takes a conscious effort on the part of the buyer to choose Ubuntu on their site. That's not to say she didn't somehow accidentally select the wrong operating system, but it does seem rather odd that she wasn't aware she'd done it.

Secondly, the woman claimed that she dropped out of classes for two semesters because she couldn't install Microsoft Word (which was unfortunately implied as being a necessity for the course) or connect to her ISP. Take a moment to read that first part again. She dropped out of classes for 6 months because she couldn't get her laptop to work correctly. Personally, I can't believe that she never sought help or advice from the college, friends or classmates. Ignoring the fact that Ubuntu wasn't for her, why did it have to drag on so long before she went to a news reporter and stirred up a lot of bad feeling? And following on from that, why go to a news reporter at all, other than to make a name for yourself? Personally, I'm inclined to believe that she struggled for a couple of weeks trying to sort this out, then got frustrated and thought talking to the local news channel might resolve it quicker. I'm assuming, of course, but would you really wait 6 months before deciding to complain?

In this type of case the fault usually lies in one of two camps: either with Dell, for not exchanging the laptop for one with Windows installed, or with the woman, for not contacting Dell soon enough to try and resolve the problem. Reading the story, it would seem the woman did contact Dell and was told Ubuntu should work fine. Without knowing the exact details of the conversation, I'm inclined to say the fault lies with Dell for not replacing the laptop with a Windows install. In the UK, and I would assume the US has something similar, all online retailers must replace or refund, within a set time period, any product that does not meet the buyer's expectations, regardless of the reasons.

Had Dell replaced the laptop, without trying to convince her of the virtues of Ubuntu, this would have been a non-story. Instead it's created some very negative press for all concerned. The news reporter has since followed up the original story, and after initially seeming to generate some positive feedback, settled for generating more bad press. It really is sad that news stories such as this don't get reported more accurately, but hey, modern journalism is all about sensationalism, so it shouldn't be a surprise. What saddens me much more is the fact that so many first reactions have been to name-call, harass and belittle their perceived opponents.

Reading the pieces of the story that I have, and more specifically some of the replies, I agree with Jono. Community is about communication, and more specifically education, not rude and offensive comments. I cannot even comprehend how these people ever thought their replies were in any way helpful. Flamewars are a waste of time and effort on all sides, and usually only serve to let the most reactionary fall into carefully laid traps. The original story now appears to have been taken down, possibly due to the overwhelming number of hits it received from around the world. However, the reporting itself had all the hallmarks of a trap. There were inflammatory accusations and inaccuracies, so it wasn't a surprise to discover that it got the reaction it did. Thankfully some of the replies were from well-reasoned people, who did try to point out the inaccuracies, and to better inform the news reporter and readers of places to find out more about Ubuntu. But the overwhelming weight has been negative, and does Linux, Ubuntu and Open Source no favours.

Ubuntu is a great operating system, and has helped to advance the Linux desktop perhaps more than any other in recent years, but it isn't for everyone. In this story, the woman obviously isn't as familiar with a Linux desktop as she is with a Windows desktop. I have no doubt that she could use it, but change is difficult for most people, and having learnt how to use Windows, this woman just didn't want to learn something different. Did she deserve the derision for that point of view? Certainly not. And what about the perception of the Linux, Ubuntu and Open Source communities among those who are not part of them? I doubt any of them will be closer to giving any flavour of Linux a try.

In all likelihood, had this woman been able to get some reasoned advice early on, and maybe even some technical support to get her online and using Open Office to create her Word documents, she could quite easily have been converted. Instead the reactionaries have alienated her, and only served to reinforce the wrong impression that the Linux community still has a lot of growing up to do. I doubt Linux or any Open Source community is ever going to be rid of these reactionaries, but I do wish they would realise that they do themselves, and the communities they aspire to represent, a considerable disservice.

It will be interesting to see if Jono covers these unwanted elements of communities in his new book, Art of Community. While we have all wanted help and advice on building a community, it would also be useful to suggest ways to restrain those who might otherwise unintentionally put it in a bad light. "A chain is only as strong as its weakest link."

Earlier this month, a good friend of mine, Jono Bacon, announced that he was starting to write a book about building communities. It's a subject that has been discussed at length by many communities, many times over many years, and there is no one right answer. Some methods work in one context and not in another. You see, it all depends on the people, and specifically the personalities, who are part of the community and whom you want to encourage (or discourage, as the case may be) into joining, rather more than on the project or common interest itself.

Jono's book, titled Art of Community, will look at how to build communities from different perspectives. He's getting several notable Open Source community members to contribute their stories, and it looks like it will be a really useful book for anyone starting a project or user group who wants ideas on how to make it happen.

The hard part of starting any community is promotion. Jono himself is taking note of this for the book's promotion too. You see, the book itself has started a community of people who are early supporters and want to help make it a success. Part of making it a success is letting people know it exists. As Jono is already widely known in technical communities (I've known him for about 8 years, thanks to him starting WolvesLUG near me), he does have a head start. But it still needs people to talk about it, discuss it and eventually review it. I thought I'd write this blog post partly to help promote the website that the book now has, but also to make others aware that the book is being written.

I'm looking forward to reading the completed book, as apart from being a great read, I expect it to become a great source of reference for helping new communities promote themselves and flourish.

Having started Birmingham Perl Mongers back in 2000, been a Perl community member, a member of the YEF Venue Committee and a major contributor to the CPAN Testers project, I'm acutely aware of how hard it can be to build a community. Though it should be noted that the building part isn't just about getting a project or user group off the ground; it's also about keeping it going, and encouraging others to get involved and help the community thrive.

A good case in point is the CPAN Testers project. I first became a CPAN Tester back in 2004, and contributed several thousand reports for the Win32 platform. It was thanks to Leon presenting a BOF at YAPC::Europe 2003 in Paris that I first became interested enough to join the volunteer effort. Shortly afterwards I started contributing code to the smoke tools and the websites, creating the CPAN Testers Statistics website in the process. With the help of the Statistics site I was able to promote the project to other Perl programmers at YAPC events, by showing how valuable the service the project provides is. Over the last few years the number of testers has grown, and the number of test reports submitted has gone from about 100 per day to over 5,000 per day. In June 2008, Leon handed over the Reports website to me, as I was eager to improve the websites and make them more useful. Since then, I've had several developers help contribute patches and ideas to the project, and it has been very encouraging to see the community driving the site forward. CPAN Testers now has its own server, a whole family of websites and a great tester community. In our case the community has built itself, and mostly promoted itself, from being a useful set of websites for developers. It'll be interesting to see if Jono pinpoints anything that we actually did do to build the project community without ever realising we were doing it.

I'm also interested in reading the book, as it is likely to have some useful references for a book project I'm currently working on. Although I don't plan on making it a hard copy book, it will be available online, and I hope to encourage contributions and improvements. My book doesn't have a working title as yet, but the subject matter is 'organising Open Source conferences', and it will also have thoughts on workshops, hackathons and large technical meetings. The blueprint for the project is based largely on my own experiences of organising the 2006 YAPC::Europe Perl Conference, but will hopefully include other thoughts and comments from organisers of other Open Source events, such as the organisers of LUGRadio Live, which Jono himself was a significant instigator of. Like Art of Community, my project will also be available online under a Creative Commons license, and I'll be watching to see how the Art of Community community establishes itself, to see whether there are any good ideas I could use too.

I look forward to finally reading the book, but in the meantime I'll just have to keep an eye on the Art of Community website updates.

A couple of weeks ago I was in Copenhagen for YAPC::Europe, which was a blast. I did my Understanding Malware talk, which seems to have gone down well, and the posters even better!

Before leaving the UK I finally bought a new camera, a Canon EOS 40D. Unfortunately this was my first time using the camera, and I was a little disappointed that I wasn't able to get the same quality of photos as with my Fuji FinePix 5100. As such, don't expect too much from these photos. Hopefully over the coming months I'll get used to the camera and improve the picture quality.

For some personal observations of the conference, see my use.perl post about it. I may do a more detailed write-up about the talks I saw and the discussions I had too at some point, but that's it for now. Anyway, enjoy the photos.

So while several people I know have been telling everyone that they'll be at OSCON this week, I thought I'd mention that I've just been to LUGRadio Live, probably the best Open Source event ever :)

The event was originally billed as the last ever LUGRadio event, the reason being that the presenters were finding it harder to prepare for the radio show recording, and to find the time to edit and put it out, with work and family taking up more of their time. It was sad to hear that they were stopping the show, though understandable, but it was an even bigger disappointment when there was the prospect of no more LUGRadio Live. The event is more than just a conference; it's a great way for the UK community (although there are plenty of European and further afield attendees) to get together and catch up. As such, a few of us behind the scenes had already suggested that something should happen. I'd suggested that another UK LUG take up the challenge and hold the event somewhere else in the UK. However, Dave Morley and Ron "BigRon" Wellstead had ideas to just do it themselves, seeing as most of WolvesLUG were on the crew and had been working behind the scenes for the last few events. Either way, I would have been happy.

So it was with some relief, during the Live & Unleashed recording on Saturday, that Jono said that after the Friday night party he was so overwhelmed by the comments from people about how much they were going to miss the event, he was moved to discuss with Aq doing the event again. Thankfully, they were both in agreement that it was worth doing. So even though the podcast will be no more, the LUGRadio Live event will continue, which is great news.

This weekend was great fun, and I managed to take over 2,000 photos over the two days (and the Friday night party), which I now have the pleasure of whittling down to a more manageable number to post here. I hope to get through them all this week, so stay tuned for news of when they are uploaded. It was great to see the Bytemark gaming rig, which was a great success, and also to be able to say a personal thanks to Matt Bloch for helping Birmingham.pm sort out their server. I'm also very grateful to the guys for one of the 50 special LUGRadio tshirts that I got as a thank you for yet again being their unofficial official photographer for the event :) It was great to catch up with Josette and Sylvie from O'Reilly, as well as John Pinner from Linux Emporium (BTW thanks for the tshirt, John), who had some ideas for an interesting conference next year, and Andy Robinson from OpenStreetMap. Novell (Ethne loves Geeko the chameleon), RedHat, Efficient PC, Beagleboard and the Open Rights Group all had great stands too, and all helped to make it probably the best exhibition area LUGRadio Live has ever had.

Also in attendance in the exhibition area were the Linux Outlaws, another Linux podcast, who look like they could fill the void for all those LUGRadio fans. I've only heard them mentioned on recent episodes of LUGRadio, so haven't had a chance to listen to them yet, but having had a chat with Fabian, they seem like really sound guys, and I'm looking forward to hearing all the back issues. They were also hoping to record an episode of their show at LRL, but I don't know whether they managed that.

This year, thanks to Tony and Laura, this event is probably the most filmed LRL too. Having organised an AV crew well in advance this year, pretty much the whole event was filmed in some form or another. I'm sure it'll be a while before the videos appear, but judging from the effort they put into it, it's going to make fantastic viewing. Also thanks to all the crew, and especially Mez and Chris for helping me out during my talk. The crew have become an invaluable part of LRL, and without them it really wouldn't be the kind of event that it has become. Remember, these guys are doing it all for free, because they love being part of the whole experience and want to help put on the best show possible. It also helps that they are a great bunch of guys and gals.

But the biggest buzz about the whole event was Chinny. Thanks to Xalior, who had the outfit custom made, a lifesize Chinny Raccoon featured in many of the events over the two days. Big thanks to MrBen for being a great sport in the costume and generally putting on a great show. It's no surprise he is considered a lifetime LUGRadio Community Hero. Although after seeing the pictures from the Gong-A-Thong, his wife Heather is not so keen to let him out of the house for next year!

My photos will be online soon, so check back for them; in the meantime, enjoy the tasters I've added to this post. There are plenty more to come :)

Some other blog posts have started appearing around the web, so it'll be interesting to read what others make of the weekend. I plan to write a little more later too. However, the one post that really says more about LUGRadio Live than anything else is the one Laura posted about her and Tony filming the last ever studio recording of LUGRadio, which includes some of her highlights from past LUGRadio Live events. Sums it all up for me too.

Tomorrow will be the start of the last ever LUGRadio Live. Tonight, Open Source and Linux enthusiasts will descend on Wolverhampton to mark the beginning of a farewell party that is set to be remembered for a long time. The party starts at The Hogs Head in Wolverhampton city centre, with about 30 or so people already confirmed, and many more likely to turn up.

According to Chris, the Britannia is now full, and by all accounts pretty much everyone staying there is attending LUGRadio Live :) The final Live And Unleashed recording will be tomorrow night, with another party after it. The final day of the conference is likely to be a bit of a sad day. I'm doing my talk first thing on Sunday morning, so hopefully there won't be too many sad faces in the audience.

It's going to be sad to see the show finish, not least because I've met some great people because of LUGRadio, and been inspired on several occasions. The crew and community behind LUGRadio and the live event, are superb and deserve tons of credit for putting on one of the best Open Source events in the UK. I'm hoping that it becomes an inspiration for others, preferably LUG groups, to come up with an annual event to continue the community's desire to meet up in real life.

I shall be taking photos over the weekend, so expect to see a further post, hopefully next week, with all the best sights from the whole weekend. I'm looking forward to the weekend, but it'll also be a little sad to think that this is the end of an era.

My photos are finally online from the YAPC::NA Conference in Chicago. Although many of the outdoor photos have come out well, many of the indoor ones haven't. For the conference itself, the main room was too dark on stage to really catch the speakers well, and although the other two rooms were well lit, the speakers always seemed to move at the wrong moment. I think it might have helped if I'd used my tripod a bit more, but I really do need a good digital SLR.

I did want to add lots of tags and things to all the photos, but that's just going to have to wait until I have more time. In the meantime, enjoy.

For those that only want to see the conference related photos, these are they:

As an added bonus I'm piecing together some of the photos I took during the Speakers Party, where we were able to get a grand view of the city. At the moment I have only uploaded one, but hope to get the other two sorted soon.

The guys over at LUGRadio have just released the latest edition of the show. They also reveal a rather big announcement: LUGRadio Live Live & Unleashed will be the last ever show by the team. This also means that LUGRadio Live, in a few weeks' time, will be the last ever LRL. I'm gutted, as the show and event have become a staple part of my life for the past 5 years. As I knew the guys before they started the show, I was fortunate enough to be a fan from the very first show. And from such humble beginnings it's been amazing to see what the team have created. It is a credit to everyone who has been involved in LUGRadio, and the whole community that has built up around both the shows and the events, that they have played a notable part in promoting Linux and Open Source. The quality of guests, discussion and inspiration has been excellent. It has always been fun and entertaining, but it has also strived to educate and pass on the team's passion for the projects and communities they have introduced us to.

I'm glad I had the opportunity to play even a small part in the experience, and it has always been a joy to listen to the shows. I shall miss them. I'm fortunate in that I live not too far from the guys, so hopefully I will stay in touch and see them at Wolves LUG events in the future. But I will miss all the LUGRadio Live events, where I got to meet so many other Linux and Open Source enthusiasts from around the UK and the world. Thanks guys, it's been a blast.

A friend pointed this post out to the WolvesLUG a while ago, and it got me thinking. Firstly it annoyed me that this guy was taken to task for asking what is often a very basic question from new recruits to the Linux way of things. When told that there is a selection of varieties, potential new users are often overwhelmed trying to understand what they should choose, so asking what the differences are is not an unreasonable question. The answer isn't easy, and in this case the guy was asking for the pros and cons of each system to best analyse what would work for him. That's something most rational Linux users understand. However, the extremists do no-one any favours. Mark-Jason Dominus once posted an article at perl.com, entitled Why I Hate Advocacy, which extremists would do well to read.

After that first reaction, I started to think about why I chose the distributions I did. I tend to use Debian for my servers and Ubuntu when I need a desktop. I also use Windows XP, as that is the default install on my work laptop (I haven't been able to get Ubuntu running on it, but that's another story). But how did I come to settle on those two, Debian and Ubuntu, as my preferred platforms?

Over the last 10 years or so I've tried a variety of flavours of Linux distributions, and they all seem to have something going for them, but there is not really one that manages to be the panacea. Personally I consider that a good thing. My knowledge of Linux came from my long-standing experience of Unix System V. I began working with Unix in 1985 when I started at Coventry University (Lanchester Polytechnic as it was then), and carried on with it when I went to work for GEC Telecommunications. At the time it did the job of teaching me the command line, C and network programming, among other things. But it was all command line based. In one of the modules I studied at Lanchester Polytechnic, we specifically covered Operating Systems and looked at several different ones that were available back then. We were then tasked with writing our own OS. Being a big fan of curses at the time (as I was writing games such as battleships and othello with it), I persuaded my team to look at an interactive OS, rather than a command line based version. We got marked down because we couldn't print out our results on a line-printer, unlike everyone else's command line based systems. At the time it really pissed me off that the lecturer could be so ignorant of different ways of thinking. I didn't have enough knowledge to design or write a proper desktop OS, but I could see a benefit to having one. A year or so later, I got to see a copy of Windows 1.0. It planted a seed for a number of people that the interactive desktop did have a future.

Until Windows 3.11 was released, I was still working on command line based OSs, including Unix, VMS and the OS (whose name I've long forgotten) that ran on Pyramid workstations. I started to use Windows, but found it annoying. It hid away far too much from me at the command line, when I just wanted to get the job done. That has pretty much carried on throughout every Windows release. It has got better in many respects, but sometimes the command line can get right to the heart of the problem. I still use the Windows command line virtually every day.

The benefit of the Linux desktop is that I can have the desktop, but easily drop to the command line when I want to and have the full power of the OS at my disposal. My first experience of Linux was in 1998 using Debian, though not as a desktop, just as a server. I can't remember which desktop I actually tried first, but around 1999 I went through Red Hat, Slackware and Mandrake before coming back to Debian, possibly due to familiarity. Later I was given a works laptop with Red Hat on it, and stuck with that for quite some time. The actual desktop was originally KDE, but having tried Gnome I ended up sticking with that instead. I do remember trying Enlightenment at some point, but it didn't last very long. In September 2000 I installed the newly released Potato from Debian as a desktop. I have to say it was rather nice. It worked without too much hassle and looked good. I ended up sticking with it for quite some time.

The brick, a Toshiba Satellite, stuck with me until 2006, when work finally gave me a company laptop. Understandably they weren't too comfortable with me using a personal laptop on the company network. It did get a few comments in later years, but it travelled with me to all my early conferences. At home my 3 servers were all running Debian, 2 of them with Gnome desktops. At the end of last year Akira finally gave up after many years of service and has now been decommissioned. I now only run one headless Debian server, with another powered off to use in emergencies.

When Ubuntu surfaced I was toying with the idea of using Red Hat, or more accurately Fedora Core. I did try Fedora Core for a few weeks, but I think the Debian way had just got too comfortable, so I gave Ubuntu a try. For ease of install and use, I found it much better than Fedora Core at the time. A couple of years ago I installed SUSE 10 on my works desktop, and despite a few learning curves, it didn't seem too bad. However, as time progressed and security updates, as well as general software, were needed, the system seemed to become more and more unstable with each patch. It would occasionally lock or crash, so after a particularly annoying crash, I started afresh with a new install of Ubuntu.

The biggest win for me with Debian/Ubuntu is the deb packaging system. It occasionally had problems with dependencies, but for the past year or so I haven't had any issues either upgrading the basic version or doing a complete dist-upgrade. Ubuntu now has more and more restricted drivers to enable laptops to just work, and Synaptic is just one of the best repository search engines I've ever had the pleasure of using. Gnome has a nice desktop feel and the layout works for me. However, this is still all just personal preference. I can't remember anything, development-wise, that worked on one and not the other. Paths can sometimes be a bit confusing, as all the distros have their own conventions, but on the whole you get used to them.

Maybe if I'd started with Red Hat, SUSE or Mandrake, and really got into the mindset, I would still be using that distro today. I also think the fact that there are differences is a positive part of the Open Source movement, as each distro has a unique style and identity that fits some and not others. However, that does make it difficult to provide a new user with the right information to make the right choice for them, as in the end we all have a personal slant on our view. Anyone trying to make an informed choice is probably best to try all the major distros, and see how they fare at installing, configuring and using each. LUGRadio recently tried this, and although it wasn't the perfect test, it did go a long way towards understanding what worked for each member of the team. If you have the time to invest, I would recommend trying at least Ubuntu, OpenSUSE, Fedora and Mandriva as desktops, and including Debian if you want a server based OS. If you really want to go hardcore then Gentoo might be of interest, but it really isn't recommended for a new user.

One thing the LUGRadio boys spotted during the installations was how often the distros can ask some very confusing questions that even experienced users can have problems with. This is perhaps part of the nature of Linux: it isn't (at the moment) ready for a complete handover to the uninitiated. However, with more feedback and better refinement of the options and questions, I do think we will get there. Interest in Linux as a desktop is continuing to grow, and we're going to see more and more posts (like the one that started this post) by people wanting to discover what will work for them. I'm hoping the extremists will burn themselves out, more of the LUG members will get to provide a more reasoned view, and maybe even more articles will appear in the mainstream computing press to help give a balanced view of the differences.

So if anyone does ask you to give them an idea of the differences between the Linux distributions, please try and give them a flavour of why you chose what you did, but not at the expense of them experiencing the right distribution for them. This thread on PerlMonks is more in keeping with that idea, and gives several general hints and tips on why you might choose one platform over another.

After last week's post about the Asus EEE PC, I thought it worth mentioning a local company (to me) in Redditch who were featured in last Wednesday's Bromsgrove Advertiser. Elonex have released the first laptops for under £100 in the UK. The laptop, called The One, is, like the Asus EEE PC, aimed at the education market. However, I can also see it being a very attractive purchase for anyone wishing to buy a cheap laptop that they can use to browse the web, edit documents, manage their photo collection and play music. Particularly if they aren't too interested in the details and wouldn't classify themselves as a technical user.

It sounds an ideal purchase for kids to learn how to use a computer, as they are not power users and are unlikely to notice the slightly slower 300MHz processor. It's often annoyed me that PC and laptop manufacturers heavily promote the processor speed, how much RAM they have and how many gigabytes their hard drives hold. However, unless you're playing top end games or getting a million hits a day on your web server, you rarely need that much power. In fact the user is often the blocking point, as browsing the web and editing documents rarely leave the local computer maxed out on CPU, memory or file IO. DanDan's laptop is not much faster and he happily plays flash games, although admittedly he is using Firefox on Ubuntu, so is less encumbered with the bloatware that you find on every Windows machine now.

If the current trend of cheaper laptops for kids and the education market keeps going, I think a number of the major manufacturers may want to re-evaluate some of their offerings. While there will always be a demand for high spec machines from developers and businesses, I can imagine that the home market will start to see a shift in its desire to buy something more affordable and reliable. As such I see Linux and Open Source featuring more and more as a viable alternative to Windows. The Linux desktop may just get a notable share of the lucrative market that Microsoft have held onto for so long.

As mentioned in a previous post, I'm planning to attend YAPC::NA in June. I've now submitted my talk proposals for Understanding Malware and an updated version of How To Be A CPAN Tester. The deadline for proposals is next week, so hopefully I'll hear fairly soon whether they have accepted either talk. I'm also planning to host a CPAN Testers BOF for testers, authors and interested parties to meet and discuss issues and/or the future of CPAN testing.

What a breath of fresh air. A comedian, actor and presenter who actually has an interest in the computer world beyond a source of writing inspiration. I recently came across a post in Stephen Fry's blog (for American readers, Stephen is the other member of the comedic duo Fry and Laurie, with Laurie being Hugh Laurie, currently making a name for himself in House). The blog post that I picked up on is entitled "Deliver us from Microsoft". Reading back through other posts, it appears he is quite a strong supporter of Open Source software, and to my mind, for all the right reasons.

The article in question looks at the Asus EEE PC, which was also recently (December 2007) reviewed by the LUGRadio presenters in their "Inspirational Muppetational" episode (Season 5, Episode 7). Both Stephen and the LUGRadio guys came out praising the machine, and although they all found some form of criticism for it, their views were put into healthy perspective by the fact that the aim is to provide a cheap machine for educational purposes. It isn't aimed at power users, such as myself, but at those who want a laptop that can connect to the internet, enabling them to browse the web, chat to friends, and edit or write office documents.

However, the most significant thing about the laptop, which is hinted at in Stephen's blog post title, is the fact it runs Open Source software, from the Debian base (although tailored to the Asus EEE PC) through to the OpenOffice and Firefox applications. The machine is perhaps the first ever to be sold commercially, from the outset, with Linux as the only version available and no Microsoft product installed. Vendors are starting to realise that users are buying their machines and installing Linux on them, wiping any hint of Microsoft off, as has been apparent from the news reports of people contacting them for refunds. The choice isn't perhaps as widespread as some of us would like, but it is getting better.

Stephen thinks that the change will happen within 5 years, and I would certainly welcome a change in the balance, with many more people running Linux as their operating system. Linux on the desktop has long been a challenge, and Open Source developers have made many dramatic changes to improve it. DanDan and Nicole both use Ubuntu on their laptops, and I have heard of many people getting their parents, spouses, siblings and offspring to use some flavour of Linux with great results. There are still lots of gains to be made, particularly in the area of closed source drivers and getting many devices (especially wireless network devices) working out of the box, but credit where credit is due: we have a lot to thank the developers of all the Linux distributions and Open Source applications for. We have come a very long way in the last 5 years, and now perhaps more than ever Linux on the desktop has a real chance of challenging Microsoft's dominance in the market. I don't expect a complete takeover, as I think Stephen was hinting, but I would like to see consumers being given a better, more considered option to buy an operating system that works for them.

I do accept that Microsoft can be better in some areas, particularly with games, but I can see that advantage disappearing once games developers realise that a large portion of their current geek market will switch to non-Microsoft platforms. It might even challenge Microsoft to finally listen to many of the opponents and actually evaluate their security and product quality, enabling them to release more stable and reliable products. For myself, I choose Open Source partly because I find it more secure and reliable, but also because it gives me the freedom to investigate and hopefully fix problems, and potentially give back to the wider community. I already contribute to Open Source and I'd like to think that offsets all the benefits I've gained by using Open Source software.

I don't read the Guardian, but I think I'll be reading more of Stephen Fry's blog in the future. It's been an enlightening read.

In a recent BBC news article, Microsoft set to open up software, it is reported that Microsoft plan to release the technology to some of their software in order to provide better interoperability with other rival products. It also states that they promise "not to sue open source developers for making that software available for non-commercial use."

Now some may be extremely dubious, as that just doesn't seem to fit Microsoft's business model. There has to be something unusual here for them to feel they can release something to the world for free. It wouldn't surprise me if they released their back catalogue of software that is now 10+ years out of date. As this software is now end of life, it does make sense to remove restrictions on the old file formats, so that those who have to support Win95 and Win98 machines have a chance of getting some support from the Open Source community. It benefits Microsoft in that they will likely still require credit for any software that uses their file formats, but it also allows them to virtually forget about support for older formats in their newer products.

If the second statement holds true, then it will hopefully mean less of the table thumping and general smoke clouds of threats, which never amounted to anything anyway. It might also mean older Microsoft products might get their own special Open Source security release with all the holes repaired ;)

I'll be intrigued to hear what software/technology they are releasing, but I suspect that there will be an overwhelming wave of derision from some of the more outspoken Open Source protagonists. A pity really, as to my mind it may well add value to many Open Source projects. Open Source is no longer a hobby. Serious investment is made by the likes of Sun, Red Hat, Novell and many others. The future for Linux as a reliable alternative desktop is getting better and better. No doubt there will still be plenty of FUD about, but consumers are becoming more and more educated about the choices they have available to them, and Microsoft is slowly waking up to the fact that they can use the Open Source community to their advantage, and still keep their name on every desktop, just not necessarily in one of their own product releases.

Back last year I heard, through LUGRadio, about an animated film entitled Elephants Dream. I downloaded it, but for various reasons I never got around to watching it. That is, until this weekend. The reason I was reminded of it I'll come onto in a moment. However, Elephants Dream is a stunning piece of work. Six people created this film, and the results are a testament to their skill, dedication and motivation. The film was the first to make real use of the Open Source 3D animation tool, Blender. In keeping with Open Source ideals, the team also released the complete film, the making-of and all the DVD extras as Open Source, and indeed you can still download them for yourself and watch them on your own computer for free. Which is also how I'm able to include their images here, (c) copyright 2006, Blender Foundation / Netherlands Media Art Institute / www.elephantsdream.org. However, credit where credit is due: I have no desire to let this effort go unrewarded, so I plan to order the Blu-ray disc (sometimes it pays to wait a while ;)). Mind you, they seem to be out of stock at the moment :( I don't have a Blu-ray player yet, but I will eventually. If you've ever seen Tim Burton's The Nightmare Before Christmas or The Corpse Bride, you'll have a good idea of the animation style, but Elephants Dream has a bit more of a humorous storyline. It's not a Pixar-like film, and younger kids might get a bit scared, but it's certainly a real treat to watch. Personally, I would highly recommend buying a copy; it's well worth €15.

Big Buck Bunny

So what caused me to revisit Elephants Dream? Well, the Blender Institute, who helped to produce it, are helping to produce a second film using the Blender software. This time the Peach Open Movie team have been creating a film for the past six months, which is due to see the light of day at the end of next month. So how did I hear about this? Well, LUGRadio once again prove they have their finger on the pulse, and had Sacha "Sago" Goedegebure on the show for an interview in a recent episode. The interview itself is well worth a listen (although it does contain swearing), and prompted me to go and check out the website. Based purely on Elephants Dream, I've pre-ordered a DVD of "Big Buck Bunny", the original working title, "A Rabbit's Revenge", having been deemed not really suitable. Looking at the gallery and some of the videos, you can see this is a very professional, high-quality production. Like Elephants Dream, thanks to their Creative Commons licence, I've been able to include an image here too, (c) copyright Blender Foundation | peach.blender.org

Like Elephants Dream, Big Buck Bunny will also be released as Open Source, and everything that will be on the DVD will be available for download. This is really cool. But in order to help them out, and partly to save myself the hassle of downloading, I've pre-ordered a copy. Hopefully you'll think it's worth buying too, and help to contribute to the project, thus helping to fund future projects and films. I'll post a review of the film once it's released.

At the moment the guys are busy preparing for LUGRadio Live USA, so expect more details for the UK event after next month. The US event will be the first time the LUGRadio experience will have been seen on such a major scale outside of the UK. The guys seem suitably excited, and I'll be keen to discover if the American event has the same manic, mayhem-filled feel as the UK event. The UK event is very definitely about getting the Linux and Open Source communities together, to hopefully provide an opportunity to meet and greet fellow developers or just people you meet on IRC or the forums. It doesn't have that corporate feel and is much more laid back, giving it a much more social nature than many traditional conferences. Not to diminish the value of the talks and presentations, but the atmosphere is much more conducive to discussion, questions and feedback than at more formal events. For me that has perhaps more value, as I like to get feedback and ideas from others, and some more corporate events often don't encourage that atmosphere.

In the meantime, if you're in the US and can make it to the West Coast over the weekend of 12th/13th April, check out LUGRadio Live USA2008 and try and get along to The Metreon, San Francisco. As a tempter, watch the video trailer created by Tony Whitmore, AV coordinator for the UK event.

I shall be at LUGRadio Live UK, although whether that's as a speaker, attendee or member of the crew remains to be seen. I'm thinking of submitting my Understanding Malware talk, but seeing as it's about an hour long, and I definitely DON'T want to be on the main stage, I'm hoping the guys will agree to hiding me in a smaller room. The guys always manage to put me up against big names (Mark Shuttleworth and Chris DiBona for the last two years), so this might be my chance to steal some of the audience back for the little guy ;)

As I don't specifically talk about Linux stuff, but more general Open Source stuff, I've often felt a bit of an outsider as a speaker. The Malware talk is again not about Linux specifically, and some aspects are not Open Source (for justifiable reasons), but the content is ideal for anyone interested in understanding what malware is and eager to gain some very basic hints and tips to protect their inbox. Seeing as most of the attendees at LUGRadio are knowledgeable Linux people, I'm hoping the talk will be of interest to a wide variety of people. I've now done the talk twice, for Leicester LUG last week and Coventry LUG last night. Both presentations went down very well and generated lots of interesting discussion afterwards. Seeing as some of these guys are very clueful sysadmins and developers, as a benchmark I think the LUGRadio audience will love it. We'll see ;)

The UK event will be returning to Wolverhampton University Student's Union, the venue for the 2006 event. Personally I liked the Lighthouse, the venue for 2007, but I know the guys got heavily criticised over a variety of issues that meant they had to reconsider the venue for the 2008 event. The SU venue is smaller than the Lighthouse too, which might cause some problems, as I can see the event getting a bigger attendance this year. For the past 3 years the attendance appears to have been increasing anyway, but in the last year I am noticing more and more articles, blogs and posts about LUGRadio. I just hope there is enough space for everyone.

BTW if you're attending LUGRadio Live USA2008, please take a camera and post your photos publicly. My site always gets a lot of hits for LUGRadio, and I'm sure the thirst for photos of the US event will be just as strong.

On the Birmingham LUG mailing list recently there was an announcement about the Birmingham Mapping Party, which is being organised by some of the guys at OpenStreetMap. Previously Alex has been over to Birmingham.pm to give us a bit of background about GPS and mapping, and seeing as round where I live there is a distinct lack of mapping data, I thought it might be a good idea to find out how to get involved.

First off was to check whether I had the right equipment. I have a Nokia N95, and although it has GPS, I had no idea whether it could record data and allow me to upload to OpenStreetMap. Reading the notes, the N95 does indeed have the capability to record the mapping data, however it needs an additional (free) app to do it. I headed off to the Nokia Research Labs website and read up on Sportstracker, an app that allows joggers, etc. to monitor their progress. As a by-product it also records the route you take in the GPX format, which can then be uploaded to OpenStreetMap. Using my local wifi network, I logged onto the website and installed the software directly onto the phone. Having only had a quick look at the app, it does look quite cool.
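For anyone curious what a GPX file actually looks like, it's just XML: a list of timestamped track points. This is a hand-written illustrative fragment (the coordinates, timestamps and track name are invented, and the creator string is an assumption), not actual Sportstracker output:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="SportsTracker"
     xmlns="http://www.topografix.com/GPX/1/1">
  <trk>
    <name>Morning walk</name>
    <trkseg>
      <!-- one <trkpt> per GPS fix: latitude, longitude, elevation, time -->
      <trkpt lat="52.4862" lon="-1.8904">
        <ele>140.0</ele>
        <time>2008-03-01T10:15:00Z</time>
      </trkpt>
      <trkpt lat="52.4865" lon="-1.8910">
        <ele>141.5</ele>
        <time>2008-03-01T10:15:10Z</time>
      </trkpt>
    </trkseg>
  </trk>
</gpx>
```

A file in roughly this shape is what gets uploaded to OpenStreetMap, where the recorded points then serve as a guide for tracing roads and paths.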

So now I'm ready to record. However, I've previously mentioned to JJ about the GPS connection taking ages to triangulate my position when I switched on the GPS, and he mentioned A-GPS, which is also mentioned on the Nokia website, so I figured I ought to try and upgrade that too. Another download of the latest Software Updater, this time to the PC, and I'm ready to update. When I first tried a month ago, I had problems connecting to the Nokia website; this time around it connected without a problem. It also detected the phone and correctly identified that it has the 11.0.026 version of the firmware. The latest version listed on the website is 20.0.015, and for A-GPS support version 12.0.013 or newer is required, so I was expecting a download and upgrade. Unfortunately, it would seem the Software Updater doesn't agree, as it is claiming that the firmware is up to date with the latest version, 11.0.026. This is a bit annoying, and I have yet to find any way to update the phone to the latest firmware. I've now emailed Nokia customer support to see whether they can shed any light.

JJ did mention previously that I could go into any Vodafone shop and they would upgrade it for me, but I doubt they would do anything that different from what I've tried, unless they completely wipe the OS and reinstall with the latest version. Just in case I do have to take this back to the shop, I've also downloaded the Nseries PC Suite to back up my data. I don't have too much on there, but the phonebook and messages I would rather keep, and I should back them up every so often anyway.

Although I've had the phone for several months, this is the first time I've actually looked at it from a lower level. I'm starting to look at other possible applications, such as using it as a laptop input controller, to really get the most out of the phone. Seeing as it has all these gadgets installed, it would be a shame not to use them ;)

Incidentally, I'm not planning to be at the Mapping Party, but I do hope to contribute to the mapping effort once I've figured out how to use SportsTracker. And maybe I'll be able to do a bit of Gloucestershire too seeing as I work there.

Back last year I was invited to EuroFOO. Having never attended this type of event, I was a bit wary of what to expect. As it turned out, it was rather an interesting couple of days. For those who have never been, the event is a mini conference with the schedule more or less decided after the welcome session, on two large whiteboards, with attendees allocating themselves to the available timeslots. To a degree it is a free-for-all, but there were enough clever people there, including several who were well prepared, to fill pretty much all the sessions within a few minutes.

The sessions themselves were a complete mixture of ideas. Some were an opportunity to show off cool apps, some focused on "mashups", others were discussion forums, and several others were just whatever seemed like a good idea. Although there were a few sessions that stood out as worth attending for me, there were plenty of others that I could drop in or out of and either enter discussions or just play the part of observer. From a personal point of view I took a lot away with me, and I think if I'm ever invited again, there are a couple of presentations I could bring with me. I'd certainly feel more confident about suggesting a session next time. When it's your first experience of something like this, it's a bit daunting to stand up in front of so many talented people.

One aspect of the event I enjoyed was spending breakfast with Allison Randal and Gnat Torkington, and being introduced to Tim O'Reilly. Being quite a quiet person, I'm not the sort to stand out at something like this, but it was nice to realise that I did know quite a few people. On the last evening it was also great to meet Robert Lefkowitz, as it gave me the opportunity to say how much I enjoyed his talks, "The Semasiology of Open Source", which I heard via IT Conversations.

I also got time to chat to Damian Conway, Piers Cawley and Mark Fowler, which was great as I don't often get to see them these days, and when I do they're often busy preparing for talks or only standing still for a short amount of time. The weekend for me was a great success and if you're ever invited, I heartily recommend going along.

Back last year, I went to LUGRadio Live and was extremely impressed, as most people were, with the plasma screens around the building, particularly with the imagery they were displaying. It turned out that Aq had written it as a quick PHP/HTML hack. It certainly did the job, and impressed me so much that I asked if I could use it for the YAPC::Europe conference we were hosting in August. Aq was delighted.

The original code was written in PHP, but seeing as I don't do PHP, I rewrote the whole thing in Perl. I simplified some of the HTML and CSS, but essentially it was still the same concept. We launched the code for YAPC::Europe and again people were suitably impressed.

Since last August I've been meaning to package up the code and release it with a proper Open Source licence. I asked Aq whether he minded me using the Artistic License, as used with traditional Perl libraries, and he was happy to release it. So here it is ... The Plasma Application.

All being well the guys in Vienna might be using it for YAPC::Europe 2007, but we'll have to wait and see.

After promising a while ago to upload some of my code, I've created a new section on the site. Click the Code tab on the menu at the top and you'll see what I've done.

My first launch is the latest version of my dbdump.pl utility. I use it to backup my databases to remote servers. It supports MySQL and PostgreSQL at the moment, but potentially it could support others. At some point I'll get around to packaging other utilities too. If you find the code useful, please let me know.
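For anyone curious what a utility like dbdump.pl does, the general shape is simple: dump the database, compress it, and copy it to a remote machine. The sketch below is just an illustration of that idea in shell (the real dbdump.pl is Perl, and the database name, host and paths here are made up); the commands are echoed rather than executed:

```shell
#!/bin/sh
# Illustrative sketch of a dbdump-style backup. The database name,
# remote host and paths are invented for the example.
DB="mydb"
STAMP=$(date +%Y%m%d)
FILE="/tmp/${DB}-${STAMP}.sql.gz"

# Pick the dump tool for the database engine in use.
case "${ENGINE:-mysql}" in
    mysql)    DUMP="mysqldump $DB" ;;
    postgres) DUMP="pg_dump $DB" ;;
esac

# Echoed here for safety; a real script would run these commands.
echo "$DUMP | gzip > $FILE"
echo "scp $FILE backup@remote.example.com:/backups/"
```

The same pattern extends to other engines by adding a case branch for each dump tool.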

Last week I attended GUADEC. This year it was hosted in Birmingham, so it made it rather easy for me to get to. There were a lot of good talks, and it was nice to be able to put names to faces that I've heard mentioned for so long. I'm not a Gnome Developer, so this was very much a user experience, but having said that, there were several applications that looked interesting enough to make me wonder about seeing whether I could add Perl bindings. I plan to get a full write-up done soon, but first off here are all the photos:

During José's talk, 'The Acme Namespace - 20 minutes, 100 modules', at YAPC::NA in Houston, he mentioned one of the Acme modules that accesses the info for a Playboy Playmate, Acme::Playmate. After he mentioned it, Liz "zrusilla" Cortell noted that she used to work for Playboy and worked on the site that was screen scraped by the Acme module, informing us that she wrote the backend in Perl too, "so you see it was Perl at both ends". At this point the room erupted, Liz got rather red and I'm sure wished the ground would swallow her up :)

Despite the rather salacious connotation that can be drawn from that remark, it was a phrase that struck me later as being rather more descriptive of the state of Perl. I started to think about the community, business and the way Perl is perceived. Drawing a line with the individual at one end, moving into community through small businesses and onto corporations at the far end, we can see Perl is not only used at both ends, but all the way through. But people still ask isn't Perl dead?

Perl hasn't died; in fact it's probably more vibrant now than it has been for several years. The difference now though is that it isn't flavour of the month. I did a Perl BOF at LUGRadio at the weekend, and it was a subject that got brought up there. Is Perl still being used? It would seem that Perl publicity to the outside world is extremely lacking, as several non-Perl people I've spoken to over the past few months have been surprised to learn that Perl is used in pretty much every major financial institution, in email filtering and network applications, for the Human Genome project (and bioinformatics in general), and in pretty much every type of industry you can think of. It isn't dead, it just isn't sticking its head above the parapet to say "I'm still here".

Last year at YAPC::Europe, Dave Cross talked about speaking in a vacuum. Inside the Perl community we all know that perl is great and gets the job done, but what about the people who are struggling with other languages, or project managers and technical architects who are looking at what skill set they should be using to write their new applications? What about big business that is continually confronted with the marketing of Java from Sun or .Net from Microsoft?

I see Python gaining momentum simply because several in the Linux and Open Source communities started using it to see how good it was, and now with Ubuntu using it pretty much exclusively, it has gained a large foothold with the wider developer community. Ruby has been seen as great for creating flashy websites, but beyond 37signals, I've not heard of any big name sites that have been created with it. It gets featured at every Open Source conference and developers generally seem to think it's really cool, but I'm still waiting to hear of any big take-up outside of the cool, hip and trendy set. Maybe that's Perl's problem. It isn't cool, hip and trendy anymore; it's part of the establishment, part of the furniture. Does the job, does it well and without any fuss.

Perl has generated such a great community that we seem to have forgotten that there are other communities out there, and they've partly forgotten us too. YAPCs are great conferences, but they grew out of the desire to have more affordable conferences for the developers, students and self-employed. Their success has come at the cost of Perl people going to other Open Source events, such as OSCON, and keeping a Perl presence alive in the wider developer communities. As a consequence, Perl is almost seen as an add-on kept for legacy reasons at those conferences.

Looking back at that line I drew at the beginning, although I see Perl in our community, it doesn't feature very much in the wider communities, and as such small businesses don't notice it so much and look to other languages to develop their applications. The individual or hobbyist still uses it, and the corporations would struggle to remove it now, so to the outside world Perl is very much at both ends, but only at both ends. It's lost its focus in the middle ground.

At LUGRadio this year, I felt rather relieved that the people who came and spoke to me knew me for being part of the Perl community. Most of these people are hardcore Linux, C or Python developers, and although several know Perl, they don't often use it. I've spent a lot of time speaking at Linux User Groups this year, and plan to speak at more later in the year. I've also been invited to speak to the PHP West Midlands User Group, invited to attend PyCon, and will be attending GUADEC next week, but it's hard work to try and remind these other communities that Perl is still there. Although the personal touch certainly does help, I can't help but think there needs to be another way to promote Perl. This isn't about success stories (although they do help) or about talking at conferences and user groups (although they are just as important), but about reaching out to the other communities, and thus small businesses, to remind them that Perl is still a viable choice, and that rather than competing for market share, the different languages can work together.

Having spoken to some developers of other languages, I'm amazed that the FUD that all Perl is unreadable, obfuscated and too hard for the beginner to learn properly is still being peddled. Challenging that mentality is a bit of a battle, and I've had to state on several occasions that you can write unreadable, obfuscated and unmaintainable code in any language; in fact most of the respected Perl community, and much of CPAN, strives to write readable, clear and maintainable code. It seems the Perl code from over 10 years ago, and the dodgy scripts of certain archives, are still poisoning the well.

Part of the problem (possibly fueled by the above FUD) that we have in the UK is overcoming the fact that several new Open Source initiatives don't even feature Perl when they talk about Open Source languages. If the networks that work between the communities and small business aren't promoting us, then it's going to be a tough slog. I've already written emails to the National Open Centre and tried to get OpenAdvantage to be more inclusive, but there are other similar initiatives, both here in Europe and in the US that need reminding too. Once they're helping to promote Perl, then it might just be something that Universities and Colleges include in the curriculums again. From there small businesses will be able to see that there is a pool of Perl developers they can employ and Perl again becomes a viable choice.

I firmly believe Perl 5 will still be around in 10 years time. Whether it's running on Parrot, within Perl 6, or as it is now remains to be seen. I was asked to describe Perl 6 at the weekend and responded with a generalisation of "Perl 6 is to Perl 5 as C++ is to C". C++ took C into another realm, but C is still around. I just hope that the constant confusing information given out about Perl 6 to non-Perl people isn't the reason why some think Perl 5 is all but dead.

The theme for the 2005 YAPC::Europe in Braga was "Perl Everywhere". I don't think that's true, but I wish it was :)

As if I haven't mentioned it enough, this weekend I went along to LUGRadio Live in Wolverhampton. It was a fantastic event, as always, and I had a great time meeting people, seeing some interesting talks and taking lots of photos. I was a little disappointed to hear Ade has decided to leave LUGRadio as a regular presenter, but I'm sure Chris Procter will do an admirable job in his place. To read my more technical writeup of the event see my use.perl journal. To see my photos, click the links below :)

Today is the first day of LUGRadio Live. Well actually it could be considered the second day, as many of the attendees were assembled in Wolverhampton last night. I had to miss the festivities last night, so I'm hoping I can make up for it tonight :)

Several local user groups will be attending, so I'm hoping to see a lot of familiar faces. I'll be taking lots of photos, and this year I hope to have them online soon after the event, not nearly a year later!

Finally got the time to sort through my photos from last week. From over 2,000 photos, I've got them down to just over 700. There are still a few in there that aren't quite as good as I'd like, but then until I can freeze people in time before taking the shot, I'm going to struggle with the current camera. I'm looking at getting a DSLR at some point, so hopefully I won't get so many blurred pictures then. Still, I'm pleased I managed to get quite a selection that I did like.

For those who discover this entry by searching for YAPC::NA, here are all the photos I have online:

Last week saw me attending the 2007 YAPC::NA Perl Conference in Houston, Texas. Well not just attending, but speaking too. I did 3 regular talks, hosted one BOF and took part in another. You can read the full gory details over on my technical journal.

The conference is a grassroots affair, and is now traditionally hosted annually by a local Perl Monger user group. This year it was a joint effort by Houston.pm and BrazosValley.pm, and was an admirable effort considering that none of the organisers had been to a YAPC before. A number of people had said they weren't attending because it was Houston, but seeing as the town is famous for the Lyndon B. Johnson Space Center and ZZ Top, I couldn't believe it was that bad, and indeed it wasn't. Though I didn't get a chance to wander around the town, as the University is quite a distance from the town centre, and the local transport system consists of taxis.

I have lots of photos to get through, including a trip to the Space Center, which I'll be posting soon, and I'm getting better with my camera. I seem to have taken several good photos, but having said that, there was a fair share of blurred or out-of-focus ones too. At a conference like this, it gets frustrating when I think I've taken a good shot, then later view it on the laptop and discover it isn't as good as I thought. You never get a second chance. But I am getting better at holding the camera still and taking some nice closeups.

YAPC::NA 2007 This Way

Since I bought a 2GB xD memory card, I can now take 30 minutes worth of video. It meant I was able to video a few talks, but it also meant I discovered more of the limitations of the camera. The camera's main function is to take pictures, not video, so some aspects are understandably lacking when taking video, such as being able to zoom in/out. Although you can adjust zoom before videoing, once you press record it's fixed. I assume this is because of the problems with auto-focusing. Still, it did mean I got to watch the talks again later :)

While Houston was hot, it was pouring with torrential rain when I arrived, and did so during my stay there too. The humidity was high and occasionally felt like I was breathing in water, but for the most part we were inside in the air-conditioning, so it wasn't really that much of a problem. Apparently the cockroaches are much more of a problem, though I only saw a few on the pavements. Those staying in the dorms seemed to see them at every turn. We even joked that Jose was taking a family home with him. What I saw of Houston I liked, but had they had a decent local train or bus service I might well have visited more of the town.

The conference itself was good, and I got to speak to several people, both familiar faces and newcomers. It meant there was quite a difference in the expectations and the response to talks. I think most got something out of the event, but I can't help thinking that the beginner-type talks were a bit thin on the ground this year. I'm going to see whether I can change that, and plan to work on some new material to have a go at next year. If nothing else, it'll provide plenty of material for the 2008 Birmingham Perl Mongers World Tour :)

I spoke at the OpenAdvantage Open Source Showcase yesterday. It was intriguing to see how some other speakers took the brief of "introduce why you use open source" to mean "a free 10 minute marketing exposure". While I certainly have nothing against small businesses trying to promote themselves at these sorts of events, it would have been nice for them to better explain why they chose to use Open Source Software. Some did, albeit briefly, and some explained the benefits they've gained (Birmingham Friends of The Earth was certainly a good example), but most took the time to explain how big their client portfolio was. The people in the room were largely small businesses, looking to understand why they should consider Open Source.

One presentation failed to even mention Open Source or any Open Source product. It was only later I discovered that the hardware product worked with a Linux kernel. It was a sales pitch from start to finish. The presenter's wife was sat next to me, and kept adding commentary to those around her to follow up statements made by the presenter. It was a bit bizarre, and a bit out of place, I felt.

My talk, using Labyrinth to provide an example, was really about why I chose Open Source, and specifically Perl, to implement the website application. I started by explaining my background, not in any great detail, but enough so the audience could understand that I had a history of programming and IT long before Open Source and Free Software were considered the movement they are today. Whereas most other speakers were able to say they had been working in their particular field for 4-8 years, I was able to state that I have been a programmer for nearly 30 years. I also come from a very different perspective, that of someone who is a true developer. The only other developers were Kat and Dave, who did the presentation about PHP before me. Pretty much everyone else had a much more user-oriented perspective. With 13 presentations, it was an odd balance that only 2 were not user experiences.

If I were attending to represent my own company, then while user experiences would be very useful to prove that my business could benefit from using Open Source, I personally would like to understand what benefits the actual developers see, and the future for Open Source, which you're not likely to get from users. There was one presentation from a lawyer about licensing, which pretty much reaffirmed what most of us understand about licensing issues. It was well placed, as it is a subject that does worry some businesses. While some may be interested just in the cost aspect to begin with, ultimately the subjects of support and longevity do get thought about. Users often can't speak to those, so it would have been nice to have had a Linux distro developer or other Open Source software developer to give that sort of perspective.

There wasn't much Microsoft bashing, which was refreshing, but rather reasoned arguments why proprietary software didn't work for these particular businesses. One speaker gave a price list for seven basic development machines running Windows and another seven running Linux. The final cost compared £10,000 with £4,500. I did have to smile at the claim that they didn't need AV software on the Linux machines, but resisted the urge to note that Linux isn't virus-free. I originally did offer to speak about why MessageLabs use OSS, but Elliot from OpenAdvantage felt that the Perl talk would be more appropriate. Now having done the talk, I would have to agree.

The event was well attended, with about 50+ people in the audience, and generated a lot of discussion. I hope they get to invite me to another event in the future, and this time I might not over run :)

JJ made a point last night that I also agree with. When I got home, following a chain of blog links, I came across an article written by Martin Belam about his wife's feelings towards an aspect of DRM. She makes a very good point, one that JJ, Brian and I had coincidentally been discussing at length yesterday evening at the Birmingham Perl Mongers meeting. I hope Martin's wife doesn't mind me requoting it here:

"The thing I don't get is this core of people that want everything for free. Artists still have to eat. Why do these people think that they are entitled to get everything for free for ever?"

JJ's point was that the biggest failing of the Linux community was the expectation that everything they want on their desktop should be free. As a consequence the Linux community, to a large extent, has become a very closed one. The idea of Open, to me, is more about encompassing different forms of expression, being inclusive rather than exclusive. In terms of software that can also mean different forms of distribution. As corporates, the likes of Sun, Novell, etc. can afford to give away parts of their software portfolio, as they have gained enough credible market share for their brand that other large corporates will buy support contracts and services at very high rates. Ubuntu has been able to come into existence because Mark Shuttleworth was willing to put the money down to make it happen. Big players and very rich people can afford to do that, if they choose. But what about the little guy?

Certainly in the UK, and probably in the rest of the world, the people that take risks are the individuals and small businesses. They can because there often isn't the risk or outlay that would be required by a large business. As a consequence, when an idea does work, it has often taken a lot of research, time and effort to get it into a state worthy of release. That's research, time and effort that the designer, developer or company get nothing back for. Suppose, as an individual, I create a piece of software that manages websites. It takes 4 years to get that product stable and complete enough to release. Why should I be expected to just give it away?

The failing of the Open Source community is the expectation that everything should be free. While developers may choose to release their software as free, if they don't they are derided or sneered at. If my piece of software revolutionised the way websites could be created, and gives value for money, then why shouldn't I ask a nominal fee for it? The argument that the Open Source community seems to favour is that I should charge for a support contract. But that argument fundamentally fails to understand how business works. Support contracts work for big business because they need someone to blame when it all goes wrong. JJ gave the example of the supply chain for Vodafone, where one software supplier they use doesn't have a support contract with Vodafone directly, but via another supplier, because the software supplier is too small to guarantee a 24/7 support contract. Even though the other supplier can only provide a 24/7 telephone answering service, and still passes the details on to the software supplier when they turn up for work in the morning.

I, as an individual, wouldn't get any support contracts from businesses around the world for my product. And even if I did, the chance of me providing a realistic level of support is minimal. However, I could charge for my software and allow others to reap the benefit. While I wouldn't necessarily reap great rewards, at least I would be getting some reward for all that research, time and effort spent getting the product into a state that others can take advantage of.

I find I keep having to ask every so often, 'why is it such a crime to make money?'. I have a family, I have a house and I have a life. If I want to have my own business, am I expected to work for nothing for 4 years, then give the software away for free and expect the support contracts to come rushing in, while in the meantime my family starves, I lose my house and end up with no life? The biggest part of the UK's economic growth is the SMB (Small Medium Business) or SME (Small Medium Enterprise) market. They help to employ a large part of the working population, but also help feed many of the larger businesses and corporations, thus helping to employ the remaining part of the working population. When MG Rover collapsed down the road here in Longbridge, the knock-on effect on the smaller businesses who made parts for MG Rover was devastating. Several went out of business, while others had to cut their workforce. They can't work for free in the hope that the other manufacturers might use their products. And exactly the same is true of the software market. Individuals and small businesses create many products that are used by bigger companies. Sometimes those products might be suitable for release to the general public, but shouldn't it be their choice whether they make a living from it, and how?

Part of this closed mindset also means commercial developers are less likely to support Linux, which is a bad thing. While I personally like what Linux and the Open Source community have to offer, and dislike DRM, I'm also able to be realistic and understand that people want to protect something they have created. I dislike DRM not because I think the concept is bad, but because all the implementations of it are flawed and misunderstand the demands of both the retailer and the consumer. However, the problem that things like DRM have uncovered is that the Open Source community's resistance to anything commercial for "their" operating system has reduced the choice available, and has not allowed developers to work with the community to help make Linux a vibrant alternative for governments, emerging markets and the like. Currently Microsoft are able to offer great incentives to the decision makers, simply because many of the vendors of peripheral devices and software only support Microsoft products. That's not allowing freedom of choice. It's also not allowing decision makers to make informed decisions on the systems they wish to deploy.

An individual or small business wishing to make a commercial product available on Linux is currently met with derision and considered to be evil. Until this mindset opens up and accepts that we can all work together, Linux on the desktop is always going to be playing catchup, and even Linux on the server is occasionally going to have to accept that it cannot compete when a requirement is to run a piece of software that isn't available for it. Freedom is also about Freedom Of Choice. If there isn't a choice, then is it any wonder why so many restricted or flawed installations occur?

Although just to be clear, the website management tool I've written called Labyrinth, which has taken over 4 years of my free time in research and development, will be available as Open Source Software in the future. I don't believe I have a product that would warrant selling as a commercial product, as I don't feel I can devote the time and effort to making it into a marketable product. I will, however, be looking to encourage potential clients who want me to design and develop their website to come to me. The fact that I will use Labyrinth is incidental, but the fact that I created it and know it better than anybody else is my unique selling point.

There are other products out there that do website management. Some are free, some are not. Some do much, much more than Labyrinth, while others are very basic. I'm not interested in trying to compete with them, as Labyrinth was written to fulfill my requirements for administering websites that I created. The fact that I've been able to use it for other sites has been great. But had I not had that attitude, and decided to make it a commercial product, why should I expect the ridicule and scorn of the Open Source community because I decided to make money?

You may have noticed the addition of the image links on the side panel (unless you read this via a syndication feed). I am now officially scheduled as both a speaker and BOF leader at both this year's LUGRadio Live in Wolverhampton and at YAPC::NA in Houston. Click the links for more info.

At LUGRadio I'll be doing my Selenium talk that I've been presenting at several of the events on the Birmingham Perl Mongers World Tour. I wanted to speak again this year, but was struggling to think of something to speak about. Aq saw me do my Selenium talk and insisted I do that :) I'll also be organising a Perl Mongers BOF, which is primarily to encourage attendees to get involved with their local Perl Monger group, but will probably be a general Perl thing. If you're going to the event, please come and say hello.

However, before LUGRadio Live I have to prepare myself for the North American YAPC. Unwittingly I've managed to volunteer myself for 3 talks (lasting over 2½ hours), together with a 1 hour BOF. However, I'm also likely to be involved in 2 other BOFs, so I'm going to be extremely busy during the conference. Thankfully all the talks will be based on presentations I've given before, so I don't have to start from scratch, although there is a lot more material I'll be adding.

I'm quite surprised that the Houston guys have accepted me to talk so much. But seeing as both YAPC::NA and YAPC::Europe last year and this year have extended the event to fill 4 rooms, they can have a wider breadth of talk subjects and accept more talks. This will be my 10th YAPC, although only the 6th I've spoken at. I'm really looking forward to going, but I keep getting warned it'll be hot. Just so long as they serve Guinness I'll be happy ;)

I've been wanting to upload my photos from LUGRadio Live 2006 for some time, but just haven't had the time to sort through them. The event, organised by the presenters of LUGRadio, was great and I got to see several people I knew and even more that I didn't. I was asked to speak at the event, as I had been in 2005 (when I had already planned to speak in Toronto for YAPC::NA), and did a presentation about how MessageLabs use Open Source Software. The talk happened after I picked up Ade and took him to a WolvesLUG meeting, and we talked servers all the way there. He was quite taken aback by the idea that we manage over 3,500 Linux servers in our infrastructure.

Unfortunately no-one took any photos of me during my talk, but I did get to take several of everybody else. I'll be speaking again at this year's event, so I hope to be a bit more organised and get someone to take photos of me too. The event took place over 2 days with a "disco" on the Saturday night. It was a fun packed weekend with lots and lots and LOTS of Linux and Open Source related stuff to talk about. The guys put me up against Mark Shuttleworth, so I didn't get to see all of his talk, but I was quite pleased that I still got a decent audience. Obviously not everyone was that interested in what Mark had to say ;)

Anger, Bald, Beard & Ging

The second day of the event ended with recognition awards for various members of the community and the crew, leading up to the finale of the live recording of LUGRadio Live And Unleashed, which went down rather well. With that the event was over for another year. The Four Large Gents had specially commissioned T-shirts for the event, and seeing as it was a sunny day, Big Ron, Seth and myself grabbed the lads and took them outside for a fun photoshoot, with the idea that they could use the photos for promotional material in the future.

I've booked the hotel for this year's event and am looking forward to speaking again. This year I'll be talking about Selenium, which I've been presenting at various LUG groups on the Birmingham Perl Mongers World Tour. The benefit of doing it on the tour is that I've been able to see what works and what doesn't and improve the talk all the time. Plus as I've got more familiar with Selenium, I've been able to add more tests into my live demo. All being well it should be all shiny and slick by the time of LUGRadio Live 2007. Hope to see you there.

Privacy Policy

Unless otherwise expressly stated, all original material of whatever nature created by Barbie and included in the
Memoirs Of A Roadie website and any related pages, including the website's archives, is licensed under a
Creative Commons by Attribution Non-Commercial License.
If you wish to use material for commercial purposes, please contact me
for further assistance regarding commercial licensing.