The concept that came out of the short brainstorming meeting was to have a button on an iPad that would trigger a video on our display board, leading to an image showing facts about the world at the moment of revelation.

This is the story of how we made it happen.

I start with a disclaimer—this is not an example of elegant coding or optimal use of Mathematica, but is a real-life story of implementing an under-specified idea (subject to feature creep) quickly and with no regard for future maintenance—in short, like a lot of software projects.

Jeremy Davis, Wolfram’s Design Director, quickly produced a mockup in Photoshop of what it might look like while I set about building the data extraction code.

I knew that pulling data from Wolfram|Alpha and visualizing it would be easy, but what occupied my thoughts were the potential network failures, Wolfram|Alpha failures, local CPU overloads, or other horrors that could conspire to embarrass me in front of the PM!

So I started with a function that would query Wolfram|Alpha with a failure mode that would return a previous query result if no valid answer came within 20 seconds:
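A sketch of the idea, not the original code: cache the last good result for each query and fall back to it when Wolfram|Alpha fails to answer within 20 seconds. The names safeWolframAlpha and lastGood are placeholders of mine.

```mathematica
lastGood[___] = "";  (* default: empty string if we have never succeeded *)

safeWolframAlpha[args___] := Module[{result},
  result = TimeConstrained[WolframAlpha[args], 20, $Failed];
  If[result === $Failed || MatchQ[result, _Missing],
   lastGood[args],              (* fall back to the previous answer *)
   lastGood[args] = result]     (* remember and return the new one *)
  ]
```

The memoizing definition `lastGood[args] = result` means each distinct query keeps its own most recent good answer.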

Likewise, here is a version of First, which won’t complain if there is no first element, but will instead return something invisible.
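One way to write such a function: behave like First when there is a first element, and otherwise return an invisible placeholder instead of raising an error.

```mathematica
(* Like First, but silent on empty input *)
safeFirst[expr_ /; Length[expr] > 0] := First[expr];
safeFirst[_] := Invisible[""]
```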

Anticipating that the design and color scheme would change repeatedly (though in the end it never did), I separated out the style choices:
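Something along these lines, though the actual values were Jeremy's design choices, not the placeholders below. Pulling the choices into named symbols means a palette change touches only one place.

```mathematica
(* Illustrative style constants; real values came from the design mockup *)
$plaqueBackground = GrayLevel[0.1];
$plaqueText       = White;
$plaqueFont       = "Helvetica";

plaqueStyle[s_, size_: 14] :=
 Style[s, FontFamily -> $plaqueFont, FontSize -> size, $plaqueText]
```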

To get the right arguments for the WolframAlpha function, the easiest way is to do a full linguistic query from the notebook by entering == followed by the query. If you then click in the pod corner and select either “Subpod content” or “Computable data” from the popup menu, you get the API code generated for you automatically.

For example, the parameters for our current local weather are "weather oxford",{{"InstantaneousWeather:WeatherData",1},"ComputableData"}. With some styling and text substitution, I end up with this code for generating the final image pod:
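A sketch of what such a pod generator might look like, using the parameters quoted above; the styling and the text substitution here are placeholders, not the ones we shipped.

```mathematica
weatherPod[] := Module[{raw},
  raw = WolframAlpha["weather oxford",
    {{"InstantaneousWeather:WeatherData", 1}, "ComputableData"}];
  Framed[
   Style[
    ToString[raw],   (* real code also rewrote some of the returned text *)
    FontFamily -> "Helvetica", FontSize -> 16, White],
   Background -> GrayLevel[0.1], FrameStyle -> None]
  ]
```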

Images are a little trickier, as Wolfram|Alpha returns the Computable Document Format (CDF) structure of the image rather than the semantic structure. So I first used ToExpression to turn it into a meaningful Graphics expression, and then used replacement rules to swap out colors and fonts from the Wolfram|Alpha defaults to Jeremy’s design. I made a function for this:
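A minimal version of such a function; the replacement rules below are illustrative, not Jeremy's actual palette.

```mathematica
(* Convert the returned CDF content to an expression, then restyle it *)
restylePod[cdf_] := ToExpression[cdf] /. {
    RGBColor[__] :> GrayLevel[0.9],                    (* swap out W|A default colors *)
    (FontFamily -> _) -> (FontFamily -> "Helvetica")   (* impose the design font *)
    }
```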

Here is the skyPod using that (as it appears today):

The other pods were variations on these and can be downloaded at the bottom of this post. Happy with this, I sent the code off to Jeremy with a rough Grid structure to put them all together and went home.

Jeremy works in a different time zone and so had final formatted versions ready for me when I came in the next day. But he obviously thinks differently from me, as he took my symbolic graphics expressions, turned them into bitmap images, and used Mathematica's image-processing commands to size them and assemble them onto a Photoshop-generated background image (this is where the ImageCrop I mentioned earlier came from). This is not how I would have done it, but there was no time to argue about programming style!

Now it was time to address the first real problem. All these web queries (over a UK ISP) meant that it sometimes took over 20 seconds to get all the components of the final image. Far too long to keep the Prime Minister waiting. So I split the assembly into three parts: pods that don't change much (such as the planet locations), pods that change sometimes (such as the star chart, which changes about once per minute), and pods that change all the time (such as share prices). I also created an error-resistant version of ImageCompose that wouldn't fail if one of the images turned out not to be an image.
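Roughly what that error-resistant composition looked like: if the overlay is not a valid image, keep the background unchanged rather than failing.

```mathematica
safeImageCompose[bg_, overlay_, pos_: Center] :=
 If[ImageQ[bg] && ImageQ[overlay],
  ImageCompose[bg, overlay, pos],
  bg]
```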

Here is the modified version of his code for the slow refresh (most of the code is position and size information):
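The actual positions, sizes, pods, and background were Jeremy's; a stripped-down sketch of the slow-refresh assembly, with stand-in pods, might look like this:

```mathematica
(* Stand-ins for the real pod generators and background *)
planetPod[] := Rasterize[Style["planets", White], Background -> Black];
agePod[]    := Rasterize[Style["age", White], Background -> Black];
background  = Image[ConstantArray[0.05, {600, 800}]];

(* Compose each slow-changing pod onto the background at its position *)
slowImage[] := Fold[
  ImageCompose[#1, First[#2], Last[#2]] &,
  background,
  {ImageResize[planetPod[], 150] -> {150, 450},
   ImageResize[agePod[], 150]    -> {450, 450}}]
```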

And then the fast refresh elements added in:

And finally, the parts that absolutely had to be real-time:
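A sketch of the layering: each faster tier imports the latest image the slower tier wrote to disk and composes its own pods on top. The file names and pod function here are placeholders.

```mathematica
sharePod[] := Rasterize[Style["share prices", White], Background -> Black];

fastImage[] := Module[{base = Import["medium.png"]},
  If[ImageQ[base],
   ImageCompose[base, ImageResize[sharePod[], 150], {300, 100}],
   base]   (* if the import failed, pass through whatever we got *)
  ]
```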

Having done all this, I formatted it as a package to be loaded with Needs["Plaque`"].

You may have noticed that we faked Cameron’s age. Wolfram|Alpha does compute that data, but that idea was added to the plaque at the very last moment, so there was no time to think about computing it.

The idea of splitting the code was to run the low-refresh parts only every 60 seconds and the medium-refresh parts every 10 seconds, writing the latest version to disk each time. I did that using CreateScheduledTask. And to avoid the risk of a scheduled task causing the kernel to run slowly when it was called on, I put that into a second compute kernel using ParallelEvaluate.
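A plausible shape for that scheduling (the task bodies below are placeholders): the refresh loops run on a second kernel via ParallelEvaluate, so a slow Wolfram|Alpha call never blocks the main kernel.

```mathematica
LaunchKernels[1];
ParallelEvaluate[
  slow   = CreateScheduledTask[Export["slow.png", Rasterize[DateString[]]], 60];
  medium = CreateScheduledTask[Export["medium.png", Rasterize[DateString[]]], 10];
  StartScheduledTask /@ {slow, medium};
  ]
```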

That left my control kernel free to generate the final version and write that to disk every two seconds. It also maintained an archive, so I could retrieve the actual displayed image later.

The use of RenameFile was to reduce the chance that we tried to access the latest image while it was being written. Renaming is much faster than writing.
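Putting the last two pieces together, the export-then-rename pattern with the archive copy might look like this; file and directory names are placeholders.

```mathematica
writePlaque[img_?ImageQ] := Module[{tmp = "plaque.tmp.png"},
  Export[tmp, img];                  (* slow write goes to a temp file *)
  Quiet[DeleteFile["plaque.png"]];   (* RenameFile will not overwrite *)
  RenameFile[tmp, "plaque.png"];     (* near-instant swap into place *)
  CopyFile["plaque.png",             (* keep a timestamped archive copy *)
   FileNameJoin[{"archive",
     DateString[{"Year", "Month", "Day", "-", "Hour", "Minute", "Second"}] <> ".png"}]]
  ]
```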

So now, thanks to this chain of web service calls to Wolfram|Alpha, symbolic transformation, image processing, and scheduled tasks, the plaque image is rewritten to disk every two seconds, with no part of it more than 60 seconds old.

The rest was done outside of Mathematica, as we had a Flash video that we wanted to play, and Mathematica can't embed Flash. In the end we dropped almost all of the video and could have used a CDF for the button, but the sequence was built by then. The iPad was linked by a horrible hack of remote desktoping to a PC that was, in turn, screen sharing another PC that was driving the display. An HTML button on the final machine called the Flash script, which played the movie and loaded the most recently generated Mathematica image to display when the movie finished.

And here was the moment of truth: I was standing behind the photographer and holding my breath, hoping I had considered and trapped all possible failures: