Random Etc.
Notes to self. Work, play, and the rest.

I've been experimenting with some javascript classes that mimic the structure of Mapnik's Layer/Style/Rule classes and render OSM data (via GeoJSON) to a <canvas> element. I've also finally taken a look at how GitHub works, so I've decided to share the initial code there in case people are interested. If you don't want to check the code out for yourself there's a demo page here (tested in Firefox and Safari only, so far).

Further to our updates earlier last month, we just released another round of improvements to Oakland Crimespotting. Mike and Eric have full details. I'm particularly pleased that we're able to open up our archive of around two years' worth of data, but also that we're able to try something new with an interface for filtering by time of day.

We jokingly started calling this a time pie, and now we're stuck with it... in a good way. I'm still not 100% sure it's intuitive, but I think that working the real sunrise and sunset times in there should help. The only comparable interface I could find was this:

If you know of any other similar ways of selecting/filtering 24 hours, let me know in the comments!

We've been working on some updates to Oakland Crimespotting recently and Mike released the first iteration today. The most significant change is a switch to base maps using OpenStreetMap data. We're using the Pale Dawn cartography that we (Stamen) designed for CloudMade exactly as it's intended: a subtle backdrop for data that still includes the richer local information that OpenStreetMap contributors (like Mike) cover best.

Other changes we've made include numerous small performance optimisations, new sliders in the marker info-bubbles, date labels on the timeline, and crime-type filters that now double as a full legend. The whole thing has had a design overhaul too, thanks to Geraldine.

We've got a few more features planned for release soon, and we've started a blog to keep track of new developments. Now is a great time to let us know if you have suggestions or feature requests! Feel free to leave a comment here or email info@crimespotting.org if you prefer.

I've heard it said that this would be the best phone in the world if it wasn't for the iPhone*. I can believe it. So this is the second best phone in the world, and you write software for it using Java and (optionally) Eclipse. I know Java and I use Eclipse all the time for writing Flash apps, so this is a tempting prospect: world class hardware, easy to use software. Let's go!

* We can debate what it means to be the best phone in the world at the moment - suffice to say that I know that hardware alone will never do it. Apple's retail experience, customer support, iTunes store, developer tools etc. all leave Android and others with a lot of work to do. But it is a nice phone, certainly. I also haven't been using the phone for voice, nor have I been syncing my emails and calendar with the phone... so this isn't a review by any stretch of the imagination.

There's a brand new update available for the Android OS, version 1.5 aka "Cupcake". My phone came with 1.1 and despite some prodding it wouldn't go ahead and upgrade itself. So I had to download the 1.5 updates and do it myself. That page is full of long and complicated explanations but basically you're just copying files, renaming them to update.zip and rebooting the phone, twice. (Mine got confused in the middle because it finally started to automatically update itself and I let it. If that happens just ignore it and continue with the manual process and everything should be fine.)

My main focus with tinkering with the phone has been to get the API demos running so I can get a sense for how easy it is to work with the Google MapView classes and also how much boilerplate code I need in order to load data over the network and draw pretty things with OpenGL. The Hello World tutorial worked straight away: if you have the phone plugged in it automatically installs your app and runs it on the device, if not then it fires up an emulator. Getting the API demos up and running was a little trickier because it involved importing the project from android-sdk-mac_x86-1.5_r1/platforms/android-1.5/samples/ApiDemos first, but it did work after I upgraded to Cupcake.

For the MapView to work you need to jump through some app signing hoops before you can get a Google Maps API key that will allow the device to load map tiles. The documentation is quite dense but if you're just playing around in Eclipse you can sign things with your debug key; in this case the API key signup page tells you what to do. Just be sure to log in with the same Google account you'll be using in the Android Marketplace, if you get that far.

Once I'd kicked the tyres with the demos I decided to jump straight in and try my hand at an app that loads data from a web service and displays it on a map. The learning curve was OK; here's a list of things I wish I'd known about before I started:

Like any good Swing programmer or web app developer, I have a head for asynchronous operations and I'm comfortable with callbacks and so on. Of course Java is a little more verbose with this, and the need to run UI code on the UI thread while keeping long-running tasks in a separate thread can quickly lead to spaghetti code. Thankfully Android has an AsyncTask class which really elegantly wraps up the pattern of bouncing between two threads and tracking the progress of long tasks. Completely recommended over lots of new Thread(new Runnable()).start(), not least because it lets you cleanly cancel things in onDestroy, too.
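AsyncTask itself only exists in the Android runtime, but the two-thread bounce it wraps up can be sketched in plain Java. Everything below (class names, method names, the "fake UI thread") is illustrative, not the Android API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-Java sketch of the pattern AsyncTask encapsulates: do the slow work
// on a background thread, then hop back to the UI thread with the result.
// In a real Android app the "uiThread" hop is done for you by onPostExecute.
class AsyncTaskSketch {
    static final ExecutorService uiThread = Executors.newSingleThreadExecutor();
    static final ExecutorService background = Executors.newSingleThreadExecutor();

    static Future<String> loadData(final String url) {
        return background.submit(() -> {
            String result = "fetched " + url;           // doInBackground equivalent
            return uiThread.submit(() -> result).get(); // onPostExecute equivalent
        });
    }
}
```

AsyncTask spares you the Future plumbing and gives you onProgressUpdate and cancel for free, which is exactly why it beats hand-rolled threads.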

The Android libraries include the Apache HTTP library, which is quite good (if a little verbose). This HTTP & JSON example is great, and as I discovered if you reuse the HttpClient object your app will load lots of data happily and with good performance.
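The same "construct once, reuse everywhere" principle applies outside Android too. Here's a sketch using the JDK's own java.net.http.HttpClient (not the Apache library Android bundles, but the reuse idea is identical: one shared client means pooled connections and better performance):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

// Sketch: one shared client for the whole app, not one per request.
class FetchSketch {
    static final HttpClient client = HttpClient.newHttpClient();

    static HttpRequest requestFor(String url) {
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }

    // To actually fetch (network required):
    // client.send(requestFor(url), java.net.http.HttpResponse.BodyHandlers.ofString());
}
```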

The android.util.Xml class will make you a SAX parser for XML parsing. This tutorial otherwise covers what you need to know about SAX parsing and Java if you haven't done it before. The Xml class's convenience function cuts out the AbstractThingerFactory boilerplate code that Java programmers are generally too tolerant of.
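For the curious, here's what the trade looks like in a runnable plain-Java sketch (names and sample XML are mine): the handler is the same either way, but on Android a single android.util.Xml.parse(xml, handler) call replaces the factory dance below.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Plain-Java SAX sketch: collect the text of every <item> element.
class SaxSketch {
    static List<String> itemNames(String xml) throws Exception {
        final List<String> names = new ArrayList<>();
        final StringBuilder text = new StringBuilder();
        DefaultHandler handler = new DefaultHandler() {
            @Override public void startElement(String uri, String local, String qName, Attributes atts) {
                text.setLength(0); // reset the buffer at each element start
            }
            @Override public void characters(char[] ch, int start, int length) {
                text.append(ch, start, length);
            }
            @Override public void endElement(String uri, String local, String qName) {
                if ("item".equals(qName)) names.add(text.toString());
            }
        };
        // This is the boilerplate android.util.Xml.parse replaces:
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new InputSource(new StringReader(xml)), handler);
        return names;
    }
}
```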

In MapView, an ItemizedOverlay with no items still needs populate() called on it, or your app will crash as soon as you interact with the map. This is a known issue; I'm not sure if it's a bug, so I'm mainly noting it here for search engines.

It turns out Activity.onCreate (the standard entry point for Android apps) gets called again whenever the screen rotates, which happens if you open the keyboard on the dev phone/G1. If you used onCreate to fire off a bunch of threads and load data, you need to stop those threads in onDestroy, return the data you want to keep from onRetainNonConfigurationInstance, and get it back with getLastNonConfigurationInstance. The Android Guys explain all this and more in a three-part series, but the second part was most useful to me.

Compilers and code generation and XML config files are all fine, but the Android manifest file is king. If you're using the Google MapView library you need to declare the com.google.android.maps library in the manifest or your app will unceremoniously crash.

Likewise if you're using the network or asking for location information, you'll need to add the relevant permissions to the manifest, or your app will fail (with no explanation). I'm not sure why this is the case, or why the SDK even lets you compile an app that requires internet access without prompting you to add the INTERNET permission, but it does. As far as I can see these permissions aren't exposed to end users, but perhaps they'll help people navigate the Android Marketplace once there are devices out there with varying capabilities?
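For reference, the relevant AndroidManifest.xml entries look something like this sketch (the package name and label here are made up):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.mapapp">
    <!-- forget these and network/location calls fail with no explanation -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <application android:label="Map App">
        <!-- required for MapView; omit it and the app crashes on launch -->
        <uses-library android:name="com.google.android.maps" />
    </application>
</manifest>
```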

All in all it took me just over a day to get to the point where I felt confident that the phone was doing what I was telling it, that there wasn't too much magic, and that surprise crashes would be rare. The next thing I want to investigate is the OpenGL ES implementation, which I'm hoping is as slick as the iPhone's. I've been keeping a list of android links I think are worth reading at del.icio.us/TomC/android - let me know if there are any other neat/essential APIs in the Android universe that you think I should take a look at.

Michael Driscoll of Dataspora invited a few people to a Dataviz Salon yesterday evening. Mike and I went along and huddled in a brick-built basement in SoMa to listen to the following:

Two talks about baseball stats. The first was from Michael and featured his latest R-driven experiments using pitching data and 2D colour ramps in the CIELUV colour space. He has a nice little R webserver running which can do the clever business with kernel density plots (look closely at those pitch charts, those aren't stacked circles). The second was from Shane Booth who showed sketches for StrategyFan, a forthcoming project with Rio Goodman that lets people create and visualise their own metrics for players and teams. Think "day trading for baseball". (Shane was probably the designeringest person there, and has a lovely tumblelog that ffffound hhhhounds will love).

A talk from Lee Byron, who sadly (criminally) couldn't tell us what he's up to as a data scientist at Facebook (because that's not how they roll, obviously), but did tell some good stories about being the industrial design / motion graphics spanner-in-the-works at the NYTimes during the Olympics last year. (And hey, he got to push a personal project through and include an easter egg in there, well done!)

A talk from Pete Skomoroch, whose visit to town catalysed the whole event in the first place. Pete showed us some work he's been doing at Juice Analytics, lifting clients out of the dark ages and automating previously laborious data mining processes. People cluster search referrals manually, apparently. In 2009.

Brendan O'Connor from Dolores Labs showed us some of the stuff they're doing with (and without) Mechanical Turk, including those lovely colour name diagrams and the surprising news that the hot-or-not site genre is definitely not dead, and can also produce interesting stats by posing relatively simple questions to millions of people.

Thanks to Michael for putting on a great event and getting everything together at such short notice. Hopefully there'll be another one soon!

I see [ubiquitous computing] as analogous to "Physics" or "Psychology," terms that describe a focus for investigation, rather than an agenda.

Why don't others see it the same? I think it's because the term is fundamentally different because it has an implied infinity in it. Specifically, the word "ubiquitous" implies an end state, something to strive for, something that's the implicit goal of the whole project. That's of course not how most people in the industry look at it, but that's how outsiders see it. As a side effect, the infinity in the term means that it simultaneously describes a state that practitioners cannot possibly attain ("ubiquitous" is like "omniscient"--it's an absolute that is impossible to achieve) and a utopia that others can easily dismiss. It's the worst of both worlds.

Mike also identifies Artificial Intelligence and Ambient Intelligence as having this problem too. In the eyes of your detractors you'll never get there, you're crazy for thinking it's worth trying, and the steps along the way don't measure up to the vision. I'd add that Virtual Reality also has this issue, since the reality part is unattainable (and if the uncanny valley is to be believed, steps towards it can actually make things worse).

I like the solution Mike offers to this. Rather than inventing new terms, he's simply asserting that ubicomp has already happened, and has been with us since around 2005. There's more on this in his talk from UX Week last August which was great, and no doubt also in his upcoming book.

I like the idea of framing these unattainable words as being about now, not some distant future, and working with that to see where we go next. It's fun to imagine a light misting of comp, that will steadily increase in saturation until it's ubi... a luminous bath, some might say. A version of Gibson's "the future is already here, it's just not very evenly distributed", perhaps.

I'm also wondering if there's something to these limitless phrases that attracts academics. I have degrees in artificial intelligence and in virtual reality so you might think I'd know, but I always felt late to the party in those circles, like I'd missed the initial buzz and arrived in time for the hard defensive slog. And hey, Web 2.0 feels like that sometimes too - arguably, whatever's next is already here and we should take a leaf out of Mike's book and start declaring it so. When Web 2.0 was first coined, it wasn't about the future!

At Stamen we've just finished building a new map for LOCOG (the London Organising Committee of the Olympic Games). This map builds on the work we did last year, with some new work on the back-end to expose a wider variety of content and another round of improvements to the Modest Maps powered front-end. This time we're trying to organise and make spatial sense of the thousands of geocoded articles and photos that the London 2012 team are producing, highlight the ongoing works in the Olympic Park, London and the UK, and showcase the depth and breadth of information available on the main site.

As always when we've just released something, I haven't had a lot of time to reflect on what's been done since I stopped working on it every day, but I wanted to get some words down while the paint's still wet. As always, but sometimes it's important to state clearly: I write for me here, not for Stamen (though I'm not sure what I'd change) and certainly not for LOCOG (you shouldn't take any of this as an endorsement from them). As always, and sometimes you can't say it enough: not all the work shown here is by me, I'm part of a bigger team at Stamen and almost all of us had a hand in this one. We also have very attentive and supportive clients!

We've had a lot of fun paying attention to their brand; going to town with the bright colours, seamless transitions, polygon shards, flags and so on whilst keeping that controversial logo moving nicely. It's sometimes tricky to stay within the guidelines and still have things make sense on top of the maps we've made, but the style guide is tough but fair and it's definitely worth it in the end. Since the branding already pushes things from the graphic design standpoint we've taken the opportunity to push the interactive end of things. The map allows you to filter the content by category, time, search terms and place, with all those (except the search terms) happening client-side to give you an immediate update.

From a technical standpoint the trickiest bit was getting the clustering right. It uses multiple levels of the UK's administrative hierarchy behind the scenes to group different categories of content together into those numbered and coloured flags. When you click on a flag we display an info bubble with tabs containing excerpts from all the content. All of those elements update when the filters change, either immediately or with a slight (and hopefully imperceptible) pause, and hundreds or thousands of animations get kicked off every second if you drag the time slider. With all that going on, the clustering had to be robust!
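Conceptually the grouping works something like the toy sketch below (the class and field names are mine, not the production code): pick an administrative level from the hierarchy based on the current zoom, and bucket content by it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of zoom-dependent clustering against an administrative
// hierarchy. The real map does much more: per-category counts,
// flag placement, and animation as the filters change.
class ClusterSketch {
    static class Item {
        final String country, region, district;
        Item(String country, String region, String district) {
            this.country = country; this.region = region; this.district = district;
        }
    }

    // Coarser zoom levels cluster at coarser admin levels.
    static Map<String, List<Item>> cluster(List<Item> items, int zoom) {
        Map<String, List<Item>> groups = new HashMap<>();
        for (Item it : items) {
            String key = zoom < 7 ? it.country : zoom < 12 ? it.region : it.district;
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(it);
        }
        return groups;
    }
}
```

Zoomed out you get one flag per country; zoomed in, one per district, which keeps the numbered flags stable as the filters reshuffle the content underneath.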

It's one thing to identify that your map has too much content when it's zoomed out, or that when you're zoomed in some things are overlapping. But it's another thing to group things together in intuitive ways, and yet another thing to have those groupings behave appropriately with other UI elements, and to have the content (which is really all that matters) remain accessible at all times. Throughout the final stages of the project we were worried about cramming too much stuff into the info-bubbles that appear when you click on the flags, and we considered sending you to a separate page section below the map to read extended search results. In the end though we went with the tabbed info bubble approach (I felt a little better about this idea after seeing that people like Mapeed were taking a similar approach). This can sometimes present you with a lot of scrolling to do, but with the added control given by the filters (and the constant updating of the content in the info bubble) we're happy with how that turned out.

Anyway, it's not all about technical achievement, even if that was my personal focus. Some of the features are very simple conceptually, such as showing and hiding webcams depending on whether you're zoomed-in or not. But if you zoom into the park and it happens to have snowed, you can be greeted with a pleasant surprise:

And sometimes we're really just trying to get out of the way, so that the park can speak for itself:

They also have an interesting PDF explaining how to interpret the data. Heathrow had nothing of the sort when I worked with their schedules at my last job; predicting gate assignments there was more a combination of hearsay, logic and rules of thumb. Good stuff.

If you're the kind of (mainly 2d) graphics programmer that I am, the thing you find most attractive about Processing is the one-click publishing to make a webpage and show people what you've been doing. Everything else after that is a bonus.

If you're not that kind of programmer, and the web isn't your primary concern, then you should definitely check out LÖVE. It looks like they're having a lot of fun over there, and Lua is just nicely mind-bending enough but still familiar if you're coming from Java or Actionscript.

November 2008 marked two years at Stamen for me, and I'm not done yet. Three purely technological things I'm excited about working with in 2009:

Realtime messaging and XMPP. After some initial experiments, I'm really excited by the possibility of visualisations driven by realtime data feeds. I like the idea of XMPP, and although scaling it out gives me the fear, it's a fear I'd like to confront in 2009 on a real project.

Custom cartography and up-to-date maps. I'm a long-time cheerleader and supporter of the OpenStreetMap project, which is reaching a level of completeness and complexity that competes with commercial map providers. I'm looking forward to writing tools and maps that work with OSM data in a way that just wouldn't be possible with Google-Maps-style mapping APIs, or would require data well out of the budget range of most of our projects.

Visualisation and vector mapping in a web-browser using NotFlash technologies. The healthy competition between Gecko (used in Firefox) and WebKit (used in Safari, Android, the iPhone etc.) is improving the performance of javascript, canvas and svg (not to mention the new CSS transforms). This means that the potential for interactive vector graphics in the browser is almost on a par with Flash. I imagine the developer tools will keep me with Flash for a long time, but I'm looking for the right project to kick-start a comparable tool chain for in-browser vector graphics, and I'm looking forward to thinking about what that might look like this year.

This post could probably use some supporting links, but I thought I'd get it out there before my first week back at work ended. Happy 2009 to you all.

Random Etc. is the weblog and anagram of Tom Carden, a British design technologist based in San Francisco. Tom is a Data Visualization Engineer at Square and was previously a co-founder of Bloom and a Designer/Developer at Stamen.