A number of projects I’ve worked on at Canonical have involved using GObject-based libraries from C++. To make using these APIs easier and safer, we’ve developed a few helper utilities. One of the more useful ones is a class (originally written by Jussi Pakkanen) that presents an API similar to the C++11 smart pointers but specialised for GObject types, which we called gobj_ptr. It ensures that objects are unreferenced when the smart pointer goes out of scope, and increments the reference count when the pointer is copied.

However, even with this class I still end up with a bunch of type cast macros littering my code, so I wondered whether I could do anything about that. With the current C++ language the answer seems to be “no”: GObject’s convention of representing subclasses as structures whose first member is a value of the parent structure type sits below the level the C++ type system can see, so such casts cannot be checked at compile time. But then I found a paper describing a proposed extension for C++ static reflection. This looked like it could help, but seems to have missed the boat for C++17. There is, however, a sample implementation for Clang written by Matus Chochlik, so I downloaded and compiled that to have a play.

At its heart, there is a new reflexpr() operation that takes a type as an argument, and returns a type-like “metaobject” that describes the type. For example:

using MO = reflexpr(sometype);

These metaobjects can then be manipulated using various helpers in the std::meta namespace. For example, std::meta::reflects_same_v<MO1,MO2> will tell you whether two metaobjects represent the same thing. There were a few other useful operations:

std::meta::Class<MO> will return true if the metaobject represents a class or struct (which are effectively interchangeable in C++).

std::meta::get_data_members_m<MO> will return a metaobject sequence representing the data members of a class/struct.

From a sequence metaobject, we can determine its length with std::meta::get_size_v<MO>, and retrieve the metaobject elements in the sequence with std::meta::get_element_m<MO>.

We can get a metaobject representing the type for a data member metaobject with std::meta::get_type_m<MO>.

Put all this together, and we’ve got the building blocks to walk the GObject inheritance hierarchy at compile time. Now, rather than spreading the reflection magic throughout my code, I used it to declare a templated compile-time constant:

If we can verify that this is a correct up-cast, this function will compile down to a simple type cast. Otherwise, compilation will fail on the static_assert, printing a relatively short and readable error message.

The same primitive could be used for other things, such as allowing you to construct a gobj_ptr<T> from an instance of a subclass, or copying one gobj_ptr to another one representing a parent class.

It’d be nice to implement something like dynamic_cast for down-casting, but I don’t think even static reflection will help us map from a struct type to the corresponding helper function that returns the GType.

If you want to experiment with this, the code used to implement all of the above can be found in the following repository:

With my old ThinkPad, Lenovo provided BIOS updates in the form of Windows executables or ISO images for a bootable CD. Since I had wiped the Windows partition, the first option was out. The second didn’t work either, since it expected me to be using the optical drive in the base, which I hadn’t bought. Luckily I was able to just copy the needed files out of the ISO image to a USB stick that had been set up to boot DOS.

When I got my new ThinkPad, I had hoped to do the same thing but found that the update ISO images appeared to be empty when mounted. It seems that the update is handled entirely from an El Torito emulated hard disk image (as opposed to using the image only to bootstrap the drivers needed to access the CD).

So I needed some way to extract that boot image from the ISO. After a little reading of the spec, I put together the following Python script that does the trick:
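A minimal sketch of that approach, rather than the original script (the function name and command-line handling here are mine), looks like this: find the Boot Record volume descriptor at sector 17, follow its pointer to the boot catalog, read the initial/default entry, and, for hard-disk-emulation images, size the image from its own MBR partition table, since the catalog’s sector count is usually meaningless in that mode.

```python
import struct
import sys

SECTOR = 2048  # ISO 9660 logical sector size

def extract_boot_image(iso):
    """Extract the El Torito boot image from an ISO 9660 image (bytes)."""
    # The Boot Record volume descriptor lives at sector 17.
    br = iso[17 * SECTOR:18 * SECTOR]
    if br[0:1] != b'\x00' or br[1:6] != b'CD001':
        raise ValueError('no Boot Record volume descriptor at sector 17')
    if br[7:39].rstrip(b'\x00') != b'EL TORITO SPECIFICATION':
        raise ValueError('not an El Torito boot record')
    # Bytes 71-74 of the boot record point at the boot catalog (LBA).
    catalog_lba, = struct.unpack_from('<I', br, 71)
    catalog = iso[catalog_lba * SECTOR:(catalog_lba + 1) * SECTOR]
    # Skip the 32-byte validation entry; the initial/default entry follows.
    boot, media, _seg, _systype, _unused, sectors, image_lba = \
        struct.unpack_from('<BBHBBHI', catalog, 32)
    if boot != 0x88:
        raise ValueError('initial boot entry is marked non-bootable')
    start = image_lba * SECTOR
    if media == 4:
        # Hard disk emulation: the catalog's sector count is unreliable
        # (usually 1), so size the image from the first MBR partition
        # entry instead (start LBA at offset 454, sector count at 458).
        first_lba, num_sectors = struct.unpack_from('<II', iso, start + 446 + 8)
        size = (first_lba + num_sectors) * 512
    else:
        size = sectors * 512  # count of virtual 512-byte sectors
    return iso[start:start + size]

if __name__ == '__main__' and len(sys.argv) == 3:
    with open(sys.argv[1], 'rb') as f:
        image = extract_boot_image(f.read())
    with open(sys.argv[2], 'wb') as f:
        f.write(image)
```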

It isn’t particularly pretty, but it does the job, spitting out a 32 MB FAT disk image when run on the ThinkPad X230 update ISOs. It is then pretty easy to copy those files onto the USB stick and run the update as before. Hopefully owners of similar laptops find this useful.

There appears to be an EFI executable in there too, so it is possible that the firmware update could be run from the EFI system partition too. I haven’t had the courage to try that though.

One of the projects I’ve been working on has been to improve aspects of the Ubuntu One Developer Documentation web site. While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.

I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients). To help verify that the documentation was useful, I wrote a small program to exercise those APIs. The result is u1ftp: a program that exposes a user’s files via an FTP daemon running on localhost. In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.

To make it easy to run on as many systems as possible, I packaged it up as a runnable zip file so it can be run directly by the Python interpreter. As well as a Python interpreter, you will need the following installed to run it:

On Linux systems, either the gnomekeyring extension (if you are using a GNOME derived desktop), or PyKDE4 (if you have a KDE derived desktop).

These could not be included in the zip file because they are extension modules rather than pure Python.

Once you’ve downloaded the program, you can run it with the following command:

python u1ftp-0.1.zip

This will start the FTP server listening at ftp://localhost:2121/. Pointing a file manager at that URL should prompt you to log in, where you can use your standard Ubuntu One credentials and start browsing your files. It will verify the credentials against the Ubuntu SSO service and issue an OAuth token that it stores in the keyring. The OAuth token is then used to authenticate requests to the file storage REST API.

While I expect this program to be useful on its own, it was also intended to act as an example of how the Ubuntu One API can be used. One way to browse the source is to simply unzip the package and poke around. Alternatively, you can check out the source directly from Launchpad:

bzr branch lp:u1ftp

If you come up with an interesting extension to u1ftp, feel free to upload your changes as a branch on Launchpad.

One feature in recent versions of Python I hadn’t played around with until recently is the ability to package up a multi-module program into a ZIP file that can be run directly by the Python interpreter. I didn’t find much information about it, so I thought I’d describe what’s necessary here.

Python has had the ability to add ZIP files to the module search path since PEP 273 was implemented in Python 2.3. That can let you package up most of your program into a single file, but doesn’t help with the main entry point.
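As a quick illustration of that zipimport behaviour (the archive and module names here are invented for the example):

```python
import sys
import zipfile

# Build a small archive containing a single module.
with zipfile.ZipFile('lib.zip', 'w') as z:
    z.writestr('mylib.py', 'VALUE = 42\n')

# Adding the archive to the module search path makes its contents importable.
sys.path.insert(0, 'lib.zip')
import mylib

print(mylib.VALUE)  # 42
```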

Things improved a bit when PEP 338 was implemented in Python 2.4, which allows any module that can be located on the Python search path to be executed as a script. So if you have a ZIP file foo.zip containing a module foo.py, you could run it as:

PYTHONPATH=foo.zip python -m foo

Python 2.6 took this a step further, letting you point the interpreter directly at a ZIP file or directory. If you place a file called __main__.py inside your ZIP file (or directory), it will be treated as the entry point to your program. This gives us something that is as convenient to distribute and run as a single file script, but with the better maintainability of a multi-module program.
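Here is a minimal end-to-end sketch, with invented module names, that builds such an archive and runs it:

```python
import subprocess
import sys
import zipfile

# Pack a two-module program into a runnable archive.
with zipfile.ZipFile('app.zip', 'w') as z:
    # __main__.py is what the interpreter executes when handed the ZIP.
    z.writestr('__main__.py', 'import greet\ngreet.main()\n')
    z.writestr('greet.py', 'def main():\n    print("hello from the zip")\n')

# Equivalent to typing "python app.zip" at a shell prompt.
result = subprocess.run([sys.executable, 'app.zip'],
                        capture_output=True, text=True)
print(result.stdout.strip())  # hello from the zip
```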

If your program has dependencies that you don’t expect to be present on the target systems, you can easily include them in the zip file alongside your program. If you need to provide data files with your program, you could use the pkg_resources module from setuptools or distribute.

There are still a few warts with this set up though:

If your program fails, the traceback will not include lines of source code. This is a general problem for modules loaded from zip files.

You can’t package extension modules into a zip file. Of course, if you’re in a position where the target platforms are locked down tight enough that you could reliably provide compiled code that would run on them, you’d probably be better off using the platform’s package manager.

There is no way to tell whether a ZIP file can be executed directly with Python without inspecting its contents. Perhaps this could be addressed by defining a new file extension to identify such files.

Earlier in the week, I attended a PLUG discussion panel about the National Broadband Network. While I had been following some of the high level information about the project, it was interesting to hear some of the more technical information.

The evening started with a presentation by Chris Roberts from NBN Co, and was followed by a panel discussion with Gavin Tweedie from iiNet and Warrick Mitchel from AARNet.

One question I had was when they’ll get round to building out the network where I live. There is a rollout map on the NBN Co site, but it currently only shows plans for works that will commence within a year. Apparently they plan to release details on the three year plan by the end of this month, so hopefully my suburb will appear in that list.

The NBN is being built on top of three methods of connection: GPON fibre for built up areas, fixed LTE wireless (non roaming) for the smaller towns where it is not economical to provide fibre, and satellite broadband for the really remote areas. All three connection methods provide a common interface to service providers, so companies that provide services over the network are not required to treat the three methods differently. The wireless and satellite connections will initially run at 12Mb/s down and 1Mb/s up, while fibre connections can range from 25/5 to 100/40 (with the higher connection speeds incurring higher wholesale prices). It should be quite an improvement over the upload speed I’m currently getting on ADSL2.

Chris brought in some sample “User Network Interface” (UNI) boxes of the type that will be used on premises with a fibre connection. Each provides four gigabit Ethernet ports and two telephony ports.

The inside of a current generation NBN interface box

Rather than the 4 Ethernet ports being part of a single network as you’d expect for similar looking routers, each port represents a separate service. So the single box can support connections to 4 retail ISPs, or for any other services delivered over the network (e.g. a cable TV service). You would still need a router to provide firewall, NAT and wifi services, but since it only requires Ethernet for the WAN port there should be a bit more choice in routers than if you limit yourself to ones with ADSL modems built in. In particular, it should be easier to find a router capable of running an open firmware like OpenWRT or CeroWRT.

The box also acts as a SIP ATA (analogue telephone adapter), where each of the two telephony ports can be configured to talk to the servers of different service providers.

It is also possible for NBN Co to remotely monitor the UNI boxes in people’s houses, so they can tell when they drop off the network. This means that they have the ability to detect and respond to faults without relying on customer complaint calls like we do for the current Telstra copper network.

Since the NBN is supposed to provide a service equivalent to the current copper telephone network, the UNI box is paired with a battery pack to keep the telephony ports active during blackouts, similar to how a wired telephone draws power from the exchange. This battery pack is somewhat larger than the UNI box, holding a 7.2 Ah lead acid battery. At a 10 W draw that works out to around 8 hours of runtime (7.2 Ah at 12 V is roughly 86 Wh). The battery pack will automatically cut power before it is completely drained, but has an emergency switch to deliver the remaining energy at the expense of ruining the battery.

Next PLUG Event

If you’re in Perth, why not come down to the next PLUG event on March 26th? It is an open source themed pub quiz at the Moon & Sixpence. Last year’s quiz was a lot of fun, and I expect this one will be the same.

This week I put out a new release of pygpgme: a Python extension that lets you perform various tasks with OpenPGP keys via the GPGME library. The new release is available from both Launchpad and PyPI.

There aren’t any major new extensions to the API, but this is the first release to support Python 3 (Python 2.x is still supported too). The main hurdle was ensuring that the module correctly handled text vs. binary data. The split I ended up with was to treat most things as text (including textual representations of binary data such as key IDs and fingerprints), and to treat the data being passed into or returned from the encryption, decryption, signing and verification commands as binary data. I haven’t done a huge amount with the Python 3 version of the module yet, so I’d appreciate bug reports if you find issues.

So now you’ve got one less reason not to try Python 3 if you were previously using pygpgme in your project.

While at linux.conf.au earlier this year, I started hacking on a Mandelbrot Set fractal renderer implemented in JavaScript as a way to polish my JS skills. In particular, I wanted to get to know the HTML5 Canvas and Worker APIs.

The results turned out pretty well. Click on the image below to try it out:

Clicking anywhere on the fractal will zoom in. You’ll need to reload the page to zoom out. Zooming in while the fractal is still being rendered will interrupt the previous rendering job.

All the calculations are done via web workers, so should not block the UI. The algorithms used to calculate these types of fractals are easy to parallelise, so it was not particularly difficult to add more workers. One side effect of this is that the lines of the fractal don’t always get rendered in order.
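The per-pixel work being shared out between the workers is the standard escape-time iteration. As a rough sketch in Python rather than the renderer’s actual JavaScript (the function names and coordinate mapping are illustrative), one scan line is computed like this:

```python
def escape_time(cr, ci, max_iter=255):
    """Iterations before z escapes |z| > 2 under z -> z*z + c."""
    zr = zi = 0.0
    for i in range(max_iter):
        if zr * zr + zi * zi > 4.0:
            return i
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return max_iter

def render_row(y, width, height):
    """Compute one scan line; rows are independent, so each can be
    handed to a separate worker and drawn as soon as it comes back."""
    ci = 2.0 - 4.0 * y / height   # map the pixel row to the imaginary axis
    return [escape_time(-2.5 + 4.0 * x / width, ci) for x in range(width)]
```

Because each row depends only on its own coordinates, the scheduler can hand rows to whichever worker is free, which is also why finished lines can arrive out of order.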

With Chromium, this maxes out all six cores on my desktop system. In contrast, Firefox only keeps three cores busy. As workers are not directly tied to operating system threads, this may just mean that Firefox allocates fewer threads for running workers. I haven’t tested any other browsers.

Browser technology certainly has progressed quite a bit in the last few years.

Recently I’ve been working on a Firefox extension, and needed a way to test the code. While testing code is always important, it is particularly important for dynamic languages where code that hasn’t been run is more likely to be buggy.

I had no experience in how to do this for Firefox extensions, so Eric suggested I try out Mozmill, which has been quite helpful so far. There were no Ubuntu packages for it, so I’ve put some together in my PPA for anyone interested:

This will launch an instance of Firefox using a temporary scratch profile that loads your extension, and then run your tests. The tests run inside the Firefox instance, with the results fed back to the mozmill utility. When the tests complete, the Firefox instance exits and the scratch profile is deleted.

While many of the mozmill tests that Mozilla has written are relatively high level, essentially treating it as a user input automation system, you have full access to Mozilla’s component architecture, so the framework seems well suited to lower-level unit testing and functional tests.

Tests are structured as simple JavaScript modules, and use conventions similar (although not identical) to many other xUnit frameworks. Any function whose name starts with “test” is a test. If the module contains “setupTest” or “teardownTest” functions, they will be called before and after each test respectively. If the module contains “setupModule” or “teardownModule” functions, they will be called before and after all the tests in the module run, respectively.

There is a “jumlib” module that you can import into your tests that provides familiar helpers like assertEquals(), etc. One difference in their behaviour to what I am used to is that they don’t interrupt the test on failure. On the plus side, if you’ve got a bunch of unrelated assertions at the end of your test, you will see all the failures rather than just the first. On the down side, you don’t get a stack trace with the failure so it can be difficult to tell which assertion failed unless you’ve provided a comment to go with each assertion.

The framework seems to do the job pretty well, although the output is a little cluttered. It has the facility to publish its test results to a special dashboard web application, but I’d prefer something easier to manage on the command line.

I’ve just got through the first one and a half days of LCA2011 in Brisbane. The organisers have done a great job, especially considering the flooding they have had to deal with.

Due to the venue change the accommodation I booked is no longer within walking distance of the conference, but the public transport is pretty good. A bit more concerning was the following change to the wiki made between the time I left Perth and the time I checked in:

BYO Toilet Paper

I’ve been impressed with the conference talks I’ve been to so far. In particular, I liked Silvia Pfeiffer’s talk on audio/video processing with HTML5 – I’ll have to have a play with some of this. Today’s keynote was by Vint Cerf about the history of internet protocols and what the challenges will be in the future (e.g. InterPlaNet).

There was a talk today about Redis: it sounded like interesting technology, but the talk didn’t really give enough information to say when you’d choose it over other systems.

I made some bagels last night. It was my second time using the recipe, so things went pretty well. The boiling process gives the crust an interesting chewy texture I haven’t seen with other bread recipes I’ve tried.

I used this recipe (half wholemeal flour, half white), but made 12 slightly larger bagels rather than the 18 the recipe suggested. I increased the boiling and baking time a bit to compensate. They weren’t particularly difficult to make, but the boiling process was fairly time consuming, since I could only fit three at a time into the pot.