Crowdsourcing Our Cultural Heritage

About this blog

Posts from a cultural heritage technologist on digital humanities, heritage and history, and user experience research and design. A bit of wishful thinking about organisational change thrown in with a few questions and challenges to the cultural heritage sector on audience research, museum interpretation, interactives and collections online.

I’m Mia, I was dev/design team lead on Serendipomatic, and I’ll be talking about how play shaped both what you see on the front end and the process of making it.

How did play shape the process?

The playful interface was a purposeful act of user advocacy – we pushed against the academic habit of telling, not showing, which you see in some form here. We wanted to entice people to try Serendipomatic as soon as they saw it, so the page text, graphic design, 1 – 2 – 3 step instructions you see at the top of the front page were all designed to illustrate the ethos of the product while showing you how to get started.

How can a project based around boring things like APIs and panic be playful? Technical decision-making is usually a long, painful process in which we juggle many complex criteria. But here we had to practise ‘rapid trust’ – in people, in languages/frameworks, in APIs – and this turned out to be a very freeing experience compared to everyday work. First, two definitions as background for our work…

Just in case anyone here isn’t familiar with APIs, APIs are a set of computational functions that machines use to talk to each other. Like the bank in Monopoly, they usually have quite specific functions, like taking requests and giving out information (or taking or giving money) in response to those requests. We used APIs from major cultural heritage repositories – we gave them specific questions like ‘what objects do you have related to these keywords?’ and they gave us back lists of related objects.

The term ‘UX’ is another piece of jargon. It stands for ‘user experience design’, which is the combination of graphical, interface and interaction design aimed at making products both easy and enjoyable to use. Here you see the beginnings of the graphic design being applied (by team member Amy) to the underlying UX related to the 1-2-3 step explanation for Serendipomatic.
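To make that conversation concrete, here’s a minimal sketch of the kind of question we sent – illustrative only: the endpoint, parameters and key are made up, and this isn’t Serendip-o-matic’s actual code:

```python
from urllib.parse import urlencode

def build_search_url(base_url, keywords, api_key):
    # Ask a repository: 'what objects do you have related to these keywords?'
    params = {"q": " ".join(keywords), "api_key": api_key}
    return f"{base_url}?{urlencode(params)}"

# Hypothetical repository endpoint and demo key, for illustration only
url = build_search_url("https://api.example-museum.org/v2/items",
                       ["whale", "scrimshaw"], "DEMO_KEY")
```

The repository’s answer comes back as a structured list of matching objects (usually JSON) that the calling code can then mix into results.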

Feed.

The ‘feed’ part of Serendipomatic parsed text given in the front page form into simple text ‘tokens’ and looked for recognisable entities like people, places or dates. There’s nothing inherently playful in this except that we called the system that took in and transformed the text the ‘magic moustache box’, for reasons lost to time (and hysteria).
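A toy version of that tokenising step might look like this – a sketch under my own assumptions, not the actual magic moustache code:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "on"}

def extract_tokens(text):
    # Split into lowercase word tokens and four-digit numbers, drop stopwords
    tokens = re.findall(r"[a-z]+|\d{4}", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

def find_dates(tokens):
    # Treat four-digit numbers in a plausible range as years
    return [t for t in tokens if t.isdigit() and 1000 <= int(t) <= 2100]

tokens = extract_tokens("The Zuni people of New Mexico, photographed in 1903")
```

Real entity extraction (people, places) needs more than a regex, of course, but the input/output shape is the same: free text in, query-ready terms out.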

Whirl.

These terms were then mixed into database-style queries that we sent to different APIs. We focused on primary sources from museums, libraries and archives available through big cultural aggregators. Europeana and the Digital Public Library of America have similar APIs so we could get a long way quite quickly. We added Flickr Commons to the list because it has high-quality, interesting images and brought in more international content. [It also turns out this made it more useful for my own favourite use for Serendipomatic, finding slide or blog post images.] The results were then whirled up so there was a good mix of sources and types of results. This is the heart of the magic moustache.
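The ‘whirling up’ can be as simple as a round-robin interleave across sources – a sketch of the pattern, not the production code:

```python
from itertools import chain, zip_longest

def whirl(*result_lists):
    # Round-robin across sources so no single provider dominates the page
    mixed = chain.from_iterable(zip_longest(*result_lists))
    return [r for r in mixed if r is not None]

mixed = whirl(["europeana1", "europeana2"],
              ["dpla1", "dpla2", "dpla3"],
              ["flickr1"])
# mixed alternates between sources until each list runs out
```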

Marvel.

User-focused design was key to making something complicated feel playful. Amy’s designs and the Outreach team’s work were a huge part of it, but UX also encompasses micro-copy (all the tiny bits of text on the page), interactions (what happened when you did anything on the site), plus loading screens, error messages and user documentation.

We knew lots of people would be looking at whatever we made because of OWOT publicity; you don’t get a second shot at this so it had to make sense at a glance to cut through social media noise. (This also meant testing it for mobiles and finding time to do accessibility testing – we wanted every single one of our users to have a chance to be playful.)

Without all this work on the graphic design – the look and feel that reflected the ethos of the product – the underlying playfulness would have been invisible. This user focus also meant removing internal references and in-jokes that could confuse people, so there are no references to the ‘magic moustache machine’. Instead, ‘Serendhippo’ emerged as a character who guided the user through the site.

But how does a magic moustache make a process playful?

The moustache was a visible signifier of play. It appeared in the first technical architecture diagram – a refusal to take our situation too seriously was embedded at the heart of the project. This sketch also shows the value of having a shared physical or visual reference – outlining the core technical structure gave people a shared sense of how different aspects of their work would contribute to the whole. After all, if there’s no structure or rules, it isn’t a game.

This playfulness meant that writing code (in a new language, under pressure) could then be about making the machine more magic, not about ticking off functions on a specification document. The framing of the week as a challenge and as a learning experience allowed a lack of knowledge or the need to learn new skills to be a challenge, rather than a barrier. My role was to provide just enough structure to let the development team concentrate on the task at hand.

In a way, I performed the role of old-fashioned games master, defining the technical constraints and boundaries much as someone would police the rules of a game. Previous experience with cultural heritage APIs meant I was able to make decisions quickly rather than letting indecision or doubt become a barrier to progress. Just as games often reduce complex situations to smaller, simpler versions, reducing the complexity of problems created a game-like environment.

UX matters

Ultimately, a focus on the end user experience drove all the decisions about the backend functionality, the graphic design and micro-copy and how the site responded to the user.

It’s easy to forget that every pixel, line of code or piece of text is there either through positive decisions or decisions not consciously taken. User experience design processes usually involve lots of conversation, questions, analysis, more questions, but at OWOT we didn’t have that time, so the trust we placed in each other to make good decisions and in the playful vision for Serendipomatic created space for us to focus on creating a good user experience. The whole team worked hard to make sure every aspect of the design helps people on the site understand our vision so they can get on with exploring and enjoying Serendipomatic.

Some possible real-life lessons I didn’t include in the paper

One Week One Tool was an artificial environment, but here are some thoughts on lessons that could be applied to other projects:

Conversations trump specifications and showing trumps telling; use any means you can to make sure you’re all talking about the same thing. Find ways to create a shared vision for your project, whether on mood boards, technical diagrams, user stories, imaginary product boxes.

Find ways to remind yourself of the real users your product will delight and let empathy for them guide your decisions. It doesn’t matter how much you love your content or project, you’re only doing right by it if other people encounter it in ways that make sense to them so they can love it too (there’s a lot of UXy work on ‘on-boarding’ out there to help with this). User-centred design means understanding where users are coming from, not designing based on popular opinion. You can use tools like customer journey maps to understand the whole cycle of people finding their way to and using your site (I guess I did this and various other UXy methods without articulating them at the time).

Document decisions and take screenshots as you go so that you’ve got a history of your project – some of this can be done by archiving task lists and user stories.

Having someone who really understands the types of audiences, tools and materials you’re working with helps – if you can’t get that on your team, find others to ask for feedback – they may be able to save you lots of time and pain.

Design and UX resources really do make a difference, and it’s even better if those skills are available throughout the agile development process.

Update: and already we’ve had feedback that people love the experience and have found it useful – it’s so amazing to hear this, thank you all! We know it’s far from perfect, but since the aim was to make something people would use, it’s great to know we’ve managed that:

After five days and nights of intense collaboration, the One Week | One Tool digital humanities team has unveiled its web application: Serendip-o-matic <http://serendipomatic.org>. Unlike conventional search tools, this “serendipity engine” takes in any text, such as an article, song lyrics, or a bibliography. It then extracts key terms, delivering similar results from the vast online collections of the Digital Public Library of America, Europeana, and Flickr Commons. Because Serendip-o-matic asks sources to speak for themselves, users can step back and discover connections they never knew existed. The team worked to re-create that moment when a friend recommends an amazing book, or a librarian suggests a new source. It’s not search, it’s serendipity.

Serendip-o-matic works for many different users. Students looking for inspiration can use one source as a springboard to a variety of others. Scholars can pump in their bibliographies to help enliven their current research or to get ideas for a new project. Bloggers can find open access images to illustrate their posts. Librarians and museum professionals can discover a wide range of items from other institutions and build bridges that make their collections more accessible. In addition, millions of users of RRCHNM’s Zotero can easily run their personal libraries through Serendip-o-matic.

Serendip-o-matic is easy to use and freely available to the public. Software developers may expand and improve the open-source code, available on GitHub. The One Week | One Tool team has also prepared ways for additional archives, libraries, and museums to make their collections available to Serendip-o-matic.

If you’d asked me at 6pm, I would have said I’d have been way too tired to blog later, but it also felt like a shame to break my streak at this point. Today was hard work and really tiring – lots to do, lots of finicky tech issues to deal with, some tricky moments to work through – but particularly after regrouping back at the hotel, the dev/design team powered through some of the issues we’d butted heads against earlier and got some great work done. Tomorrow will undoubtedly be stressful and I’ll probably triage tasks like mad but I think we’ll have something good to show you.

As I left the hotel this morning I realised an intense process like this isn’t just about rapid prototyping – it’s also about rapid trust. When there’s too much to do and barely any time for communication, let alone checking someone else’s work, you just have to rely on others to get the bits they’re doing right and rely on goodwill to guide the conversation if you need to tweak things a bit. It can be tricky when you’re working out where everyone’s sense of boundaries between different areas is as you go, but being able to trust people in that way is a brilliant feeling. At the end of a long day, I’ve realised it’s also very much about deciding which issues you’re willing to spend time finessing and when you’re happy to hand over to others or aim for a first draft that’s good enough to go out, with the intention to tweak it if you ever get time. I’d asked in the past whether a museum’s obsession with polish hinders innovation, so I can really appreciate how freeing it can be to work in an environment where getting a product that works, let alone something really good, out in the time available is a major achievement.

Anyway, enough talking. Amrys has posted about today already, and I expect that Jack or Brian probably will too, so I’m going to hand over to some tweets and images to give you a sense of my day. (I’ve barely had any time to talk to or get to know the Outreach team so ironically reading their posts has been a lovely way to check in with how they’re doing.)

Our GitHub repository punch card report tells the whole story of this week – from nothing to huge levels of activity on the app code

I keep looking at the #OWOT commits and clapping my hands excitedly. I am a great. big. dork.— Mia (@mia_out) August 1, 2013

OH at #owot ‘I just had to get the hippo out of my system’ (More seriously, so exciting to see the design work that’s coming out!)— Mia (@mia_out) August 1, 2013

OH at #OWOT ‘I’m not sure JK Rowling approves of me’. Also, an earlier unrelated small round of applause. Progress is being made.— Mia (@mia_out) August 1, 2013

We’ve made great progress on our mysterious tool. And it has a name! Some cool design motifs are flowing from that, which in turn means we can really push the user experience design issues over the next day and a half (though we’ve already been making lots of design decisions on the hoof so we can keep dev moving). The Outreach team have also been doing some great communications work, including a press release, and have lots more in the pipeline. The Dev/Design team did a demo of our work for the Outreach team before dinner – there are lots of little things left, but the general framework of the tool works as it should – it’s amazing how far we’ve come since lunchtime yesterday. We still need to do a full deployment (server issues, blah blah), and I’ll feel a lot better when we’ve got that process working and then running smoothly, so that we can keep deploying as we finish major features up to a few hours before launch rather than doing it at the end in a mad panic. I don’t know how people managed code before source control – not only does GitHub manage versions, it makes pulling in code from different people so much easier.

There’s lots to tackle on many different fronts, and it may still end up in a mad rush at the end, but right now, the Dev/Design team is humming along. I’ve been so impressed with the way people have coped with some pretty intense requirements for working with unfamiliar languages or frameworks, and with high levels of uncertainty in a chaotic environment. I’m trying to keep track of things in Github (with Meghan and Brian as brilliant ‘got my back’ PMs) and keep the key current tasks on a whiteboard so that people know exactly what they need to be getting done at any time. Now that the Outreach team have worked through the key descriptive texts, name and tagline we’ll need to coordinate content production – particularly documentation, microcopy to guide people through the process – really closely, which will probably get tricky as time is short and our tasks are many, but given the people gathered together for OWOT, I have faith that we’ll make it work.

Things I have learnt today: despite two years working on a PhD in digital humanities/digital history, I still have a brain full of technical stuff – it’s a relief to realise it hasn’t atrophied through lack of use. I’ve also realised how much the work I’ve done designing workshops and teaching since starting my PhD has fed into how I work with teams, though it’s hard right now to quantify exactly *how*. Finally, it’s re-affirmed just how much I like making things – but also that it’s important to make those things in the company of people who are inter-disciplinary and scholarly (or at least thoughtful) about subjects beyond tech, and ideally to make things that engage the public as well as researchers. As the end of my PhD approaches, it’s been really useful to step back into this world for a week, and I’ll definitely draw on it when figuring out what to do after the PhD. If someone could just start a CHNM in the UK, I’d be very happy.

I still can’t tell you what we’re making, but I *can* tell you that one of these photos in this post contains a clue (and they all definitely have nothing to do with mild lightheadedness at the end of a long day).

Day two of One Week, One Tool. We know what we’re making, but we’re not yet revealing exactly what it is. (Is that mean? It’s partly a way of us keeping things simple so we can focus on work.) Yesterday (see Working out what we’re doing: day one of One Week, One Tool) already feels like weeks ago, and even this morning feels like a long time ago. I can see that my posts are going to get less articulate as the week goes on, assuming I keep posting. I’m not sure how much value this will have, but I suppose it’s a record of how fast you can move in the right circumstances…

We spent the morning winnowing the ideas we’d put up for feedback overnight down from about 12 to 4, then 3, then 2, then… It’s really hard killing your darlings, and it’s also difficult choosing between ideas that sound equally challenging or fun or worthy. There was a moment when we literally wiped ideas that had been ruled out from the whiteboard, and it felt oddly momentous. In the end, the two final choices both felt like approaches to the same thing – perhaps because we’d talked about them for so long that they started to merge (consciously or not), or because they both fell into a sweet spot of being accessible to a wide audience and had something to do with discovering new things about your research (which was the last thing I tweeted before we made our decision and decided to keep things in-house for a while). Finally, eventually, we had enough of a critical mass behind one idea to call it the winner.

Personally, our decision only started to feel real as we walked back from lunch – our task was about to get real. It’s daunting but exciting. Once back in the room, we discussed the chosen idea a bit more and I got a bit UX/analysty and sketched stuff on a whiteboard. I’m always a bit obsessed with sketching as a way to make sure everyone has a more concrete picture (or shared mental model) of what the group is talking about, and for me it also served as a quick test of the technical viability of the idea. CHNM’s Tom Scheinfeldt then had the unenviable task of corralling/coaxing/guiding us into project management, dev/design and outreach teams. Meghan Frazer and Brian Croxall are project managing, I’m dev/design team lead, with Scott Kleinman, Rebecca Sutton Koeser, Amy Papaelias, Eli Rose, Amanda Visconti and Scott Williams (and in the hours since then I have discovered that they all rock and bring great skills to the mix), and Jack Dougherty is leading the outreach team of Ray Palin and Amrys Williams in their tasks of marketing, community development, project outreach, grant writing, documentation. Amrys and Ray are also acting as user advocates and they’ve all contributed user stories to help us clarify our goals. Lots of people will be floating between teams, chipping in where needed and helping manage communication between teams.

The Dev/Design team began with a skills audit so that we could figure out who could do what on the front- and back-end, which in turn fed into our platform decision (basically PHP or Python; Python won), then a quick list of initial tasks that would act as further reality checks on the tool and our platform choice. The team is generally working in pairs on parallel tasks so that we’re always moving forward on the three main functional areas of the tool and to make merging updates on GitHub simpler. We’re also using existing JavaScript libraries and CSS grids to make the design process faster. I then popped over to the Outreach team to check in with the descriptions and potential user stories they were discussing. Meghan and Brian got everyone back together at the end of the day, and the dev/design team had a chance to feed back on the outreach team’s work (which also provided a very ad hoc form of requirements elicitation, but it started some important conversations that further shaped the tool). Then it was back over to the hotel lobby where we planned to have a dev/design team meeting before dinner, but when two of our team were kidnapped by a shuttle driver (well, sorta) we ended up working through some of the tasks for tomorrow. We’re going to have agile-style stand-up meetings twice a day, with the aim of giving people enough time to get stuck into tasks while still keeping an eye on progress, with a forum to help deal with any barriers or issues. Some ideas will inevitably fall by the wayside, but because the OWOT project is designed to run over a year, we can put ideas on a wishlist for future funded development, leave them as hooks for other developers to expand on, or revisit them once we’re back home. In hack day mode I tend to plan so that there’s enough working code that you have something to launch, then go back and expand features in the code and polish the UX with any time left. Is this the right approach here? Time will tell.

I’m sitting in a hotel next to George Mason University’s Fairfax campus with a bunch of people I (mostly) met last night, trying to work out what tool we’ll spend the rest of the week building. We’re all here for One Week, One Tool, a ‘digital humanities barn raising’, and our aim is to launch a tool for a community of scholarly users by Friday evening. The wider results should be some lessons about rapidly developing scholarly tools, particularly building audience-focused tools, and hopefully a bunch of new friendships and conversations, and in the future, a community of users and other developers who might contribute code. I’m particularly excited about trying to build a ‘minimum viable product’ in a week, because it’s so unlike working in a museum. If we can keep the scope creep in check, we should be able to build the most lightweight possible interaction that will let people use our tool while allowing room for the tool to grow according to use.

We met up last night for introductions and started talking about our week. I’m blogging now in part so that we can look back and remember what it was like before we got stuck into building something – if you don’t capture the moment, it’s hard to retrieve. The areas of uncertainty will reduce each day, and based on my experience at hack days and longer projects, it’s often hard to remember how uncertain things were at the start.

Are key paradoxes of #owot a) how we find a common end user, b) a common need we can meet and c) a common code language/framework?— Mia (@mia_out) July 29, 2013

Meghan herding cats to get potential ideas summarised

Today we heard from CHNM team members Sharon Leon on project management, Sheila Brennan on project outreach and Patrick Murray-John on coding, and then got stuck into the process of trying to figure out what on earth we’ll build this week. I don’t know how others felt, but by lunchtime I felt super impatient to get started because it felt like our conversations about how to build the imaginary thing would be more fruitful when we had something concrete-ish to discuss. (I think I’m also used to hack days, which are actually usually weekends, where you’ve got much less time to try and build something.) We spent the afternoon discussing possible ideas, refining them, bouncing up and down between levels of detail, finding our way through different types of jargon, swapping between problem spaces and generally finding our way through the thicket of possibilities to some things we would realistically want to make in the time. We went from a splodge of ideas on a whiteboard to more structured ‘tool, audience, need’ lines based on agile user stories, then went over them again to summarise them so they’d make sense to people viewing them on IdeaScale.

So now it’s over to you (briefly). We’re working out what we should build this week, and in addition to your votes, we’d love you to comment on two specific things:

How would a suggested tool change your work?

Do you know of similar tools (we don’t want to replicate existing work)?

So go have a look at the candidate ideas at http://oneweekonetool.ideascale.com and let us know what you think. It’s less about voting than it is about providing more context for ideas you like, and we’ll put all the ideas through a reality check based on whether they have identifiable potential users and whether we can build them in a few days. We’ll be heading out to lunch tomorrow (Virginia time) with a decision, so it’s a really short window for feedback: 10am American EST. (If it’s any consolation, it’s a super-short window for us building it too.)

If life is what happens to you while you’re busy making other plans, then I’m glad I’ve been busy planning various things, because it meant that news of the EU-funded iTacitus project was a pleasant surprise. The project looked at augmented reality, itinerary planning and contextual information for cultural tourism.

As described on their site and this gizmowatch article, it’s excitingly close to the kind of ‘Dopplr for cultural heritage’ or ‘pocket curatr’ I’ve written about before:

Visitors to historic cities provide the iTacitus system with their personal preferences – a love of opera or an interest in Roman history, for example – and the platform automatically suggests places to visit and informs them of events currently taking place. The smart itinerary application ensures that tourists get the most out of each day, dynamically helping them schedule visits and directing them between sites.

Once at their destination, be it an archaeological site, museum or famous city street, the AR component helps bring the cultural and historic significance to life by downloading suitable AR content from a central server.

There’s a video showing some of the AR stuff (superimposed environments, annotated Landscapes) in action on the project site. It didn’t appear to have sound so I don’t know if it also demonstrated the ‘Spatial Acoustic Overlays’.

I think hack days are great – sure, 24 hours in one space is an artificial constraint, but the sheer brilliance of the ideas and the ingenuity of the implementations is inspiring. They’re a reminder that good projects don’t need to take years and involve twenty circles of sign-off, even if that’s the reality you face when you get back to the office.

I’m also interested in creating something like a Dopplr for museums – you tell it what you’re interested in, and when you go on a trip it makes you a map and list of stuff you could see while you’re in that city.

Like: I like Picasso, Islamic miniatures, city museums, free wine at contemporary art gallery openings, [etc]; am inspired by early feminist history; love hearing about lived moments in local history of the area I’ll be staying in; I’m going to Barcelona.

The ‘list of cultural heritage stuff I like’ could be drawn from stuff you’ve bookmarked, exhibitions you’ve attended (or reviewed) or stuff favourited in a meta-museum site.

(I don’t know what you’d call this – it’s like a personal butlr or concierge who knows both your interests and your destinations – curatr?)

The talks on RDFa (and the earlier talk on YQL at the National Maritime Museum) have inspired me to pick a ‘good enough’ protocol, implement it, and see if I can bring in links to similar objects in other museum collections. I need to think about the best way to document any mapping I do between taxonomies, ontologies, vocabularies (all the museumy ‘ies’) and different API functions or schemas, but I figure the museum API wiki is a good place to draft that. It’s not going to happen instantly, but it’s a good goal for 2009.

Tom Morris gave a lightning talk on ‘How to use Semantic Web data in your hack’ (aka SPARQL and semantic web stuff).

He’s since posted his links and queries – excellent links to endpoints you can test queries in.

The semantic web is often thought of as a long-promised magical elixir; he’s here to say it can be used now, by showing examples of queries that can be run against semantic web services. He’ll demonstrate two different online datasets and one database that can be installed on your own machine.

First – dbpedia – scraped lots of Wikipedia, put it into a database. dbpedia isn’t like your average database; you can’t draw a UML diagram of Wikipedia. It’s done in RDF and Linked Data. It can be queried in SPARQL, a language that looks like SQL but isn’t – it’s a W3C standard, and they’re currently working on SPARQL 2.

Go to dbpedia.org/sparql – submit query as post. [Really nice – I have a thing about APIs and platforms needing a really easy way to get you to ‘hello world’ and this does it pretty well.]

[Line by line comments on the syntax of the queries might be useful, though they’re pretty readable as it is.]

‘select thingy, wotsit where [the slightly more complicated stuff]’
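Filled out, a query of that shape might look like the following – an illustrative example I’ve added rather than one from the talk, built as the form-encoded POST body that dbpedia.org/sparql expects:

```python
from urllib.parse import urlencode

# 'select thingy, wotsit where [...]' made concrete: people born in London
query = """
SELECT ?person ?birth WHERE {
  ?person <http://dbpedia.org/ontology/birthPlace> <http://dbpedia.org/resource/London> .
  ?person <http://dbpedia.org/ontology/birthDate> ?birth .
} LIMIT 10
"""

# The endpoint takes the query as a 'query' parameter; 'format' picks the
# result serialisation (XML, JSON, HTML, 'spreadsheet'...)
payload = urlencode({"query": query, "format": "application/json"})
```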

Can get back results in XML, also HTML, ‘spreadsheet’, JSON. Ugly but readable. Typed.

[Trying a query challenge set by others could be fun way to get started learning it.]

One problem – fictional places are in Wikipedia e.g. Liberty City in Grand Theft Auto.

Libris – how library websites should be
[I never used to appreciate how much most library websites suck until I started back at uni and had to use one for more than one query every few years]

Has a query interface through SPARQL

Comment from the audience BBC – now have SPARQL endpoint [as of the day before? Go BBC guy!].

Playing with mulgara, an open source Java triple store. [mulgara looks like a kinda faceted search/browse thing] It has its own query language called TQL, which can do more interesting things than SPARQL. Why use it? Schemaless data storage. It is to SQL what dynamic typing is to static typing. [did he mean ‘is to SPARQL’?]

Question from the audience: how do you discover what you can query against?
Answer: the dbpedia website should list the concepts they have in there. There’s also some documentation of categories you can look at. [Examples and documentation are so damn important for the uptake of your API/web service.]

Coming soon [?] SPARUL – update language, SPARQL2: new features

The end!

[These are more (very) rough notes from the weekend’s Open Hack London event – please let me know of clarifications, questions, links or comments. My other notes from the event are tagged openhacklondon.

Quick plug: if you’re a developer interested in using cultural heritage (museums, libraries, archives, galleries, archaeology, history, science, whatever) data – a bunch of cultural heritage geeks would like to know what’s useful for you (more background here). You can comment on the #chAPI wiki, or tweet @miaridge (or @mia_out). Or if you work for a company that works with cultural heritage organisations, you can help us work better with you for better results for our users.]

There were other lightning talks on Pachube (pronounced ‘patchbay’, about trying to build the internet of things, making an API for gadgets because e.g. connecting hardware to the web is hard for small makers) and Homera (an open source 3d game engine).

The systems architecture of Dopplr lets them combine third-party systems with their own stuff without tying their servers up in knots.

At a rough count, Dopplr uses about 25 third party web APIs.

If you’re going to make a web service, site, concentrate on the stuff you’re good at. [Use what other people are good at to make yours ace.]

But this also means you’re outsourcing part of your reliability to other people. Each bit of service you add introduces network latency, putting another bit of risk into your web architecture. Use messaging systems to make server-side stuff asynchronous.

‘&’ is his favourite thing about Linux. It’s fundamental in Unix that work is divided into small pieces, each doing the thing it does well – not even very tightly coupled. Anything that can be run on the command line: stick ‘&’ on the end and it runs in the background. You can forget about things running in the background – you don’t have to manage the processes; it’s not tightly coupled.
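The same fire-and-forget idea, sketched in Python (my own example, not from the talk): start the work, carry on, and only collect the result when you need it.

```python
import subprocess
import sys

# The `command &` idea: Popen starts the child process and returns immediately,
# so the foreground code is free to carry on with other work.
child = subprocess.Popen(
    [sys.executable, "-c", "print('done in background')"],
    stdout=subprocess.PIPE, text=True)

# ... foreground work happens here, not blocked on the child ...

out, _ = child.communicate()  # gather the result only when we actually need it
```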

In the physical world, big machines use gearing – having different bits of system run at different speeds. Also things can freewheel then lock in to system again when done.

When building big systems, there’s a worry that one machine, one bit it depends on can bring down everything else.

[Slide of a] Diagram of all the bits of the system that don’t run because someone has sent an HTTP request – [i.e. background processes]

Flickr is doing less database work up front to make pages load as quickly as possible. They queue other things in the background. e.g. photos load, tags added slightly later. (See post ‘Flickr engineers do it offline‘.)

Enterprise Integration Patterns (Hohpe et al) is a really good book. Banks have been using messaging for years to manage the problems. Atomic packets of data can be sent on a channel – ‘Email for applications’.

Designing – think about what needs to be done now, what can be done in the background? Think of it as part of product design – what has instant effect, what has slower effect? Where can you perform the ‘sleight of hand’ without people noticing/impacting their user experience?

Example using web services 1: Dopplr and AMEE. What happens when someone asks to see their carbon impact? A request for carbon data goes to Ruby on Rails (memory hungry, not the fastest thing in the world, try to take things off that and process elsewhere). Refresh user screen ‘check back soon’, send request to message broker (in JSON). Worker process connected to message broker sends request to AMEE. Update database.
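A toy, in-process version of that broker/worker flow, with Python’s queue module standing in for the real message broker (a sketch of the pattern only – the job payload and carbon figure are invented, and this isn’t Dopplr’s code):

```python
import queue
import threading

broker = queue.Queue()   # stands in for the message broker
results = []

def worker():
    # The worker runs outside the web request/response cycle
    while True:
        request = broker.get()
        if request is None:          # sentinel: shut the worker down
            break
        # ... here the real worker would call out to the AMEE API ...
        results.append({"trip": request["trip"], "carbon_kg": 42})
        broker.task_done()

threading.Thread(target=worker, daemon=True).start()

# The web tier just enqueues the request and tells the user 'check back soon'
broker.put({"trip": "LHR-SFO"})
broker.join()                        # only needed here so we can inspect results
broker.put(None)
```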

Example using web services 2: Flickr pictures on Dopplr page. When you request a trip page, the page loads with all usual stuff and empty div in page with a piece of Javascript on a timer that polls Flickr.

Keeps open connection, a way to push messages to the client while it’s waiting to do something.

When processing lots of stuff, worker processes write to memcache as a form of progress bar, but the process is actually disconnected from the webserver so load/risk is outsourced.
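Sketched with a plain dict standing in for memcache (my own illustration of the pattern):

```python
cache = {}  # stand-in for memcached, shared between worker and web tier

def process_photos(job_id, photos):
    # The worker does the heavy lifting and writes its progress to the cache;
    # the webserver only ever reads the progress key, so load stays decoupled.
    for i, photo in enumerate(photos, 1):
        # ... resize/upload/etc. ...
        cache[f"progress:{job_id}"] = i / len(photos)

process_photos("job42", ["a.jpg", "b.jpg", "c.jpg", "d.jpg"])
```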

‘Sites built with glue and string don’t automatically scale for free.’ You can have many webservers, but the bottleneck might be in the database. Splitting work into message queues is a way of building so things can scale in parallel.

Slide of services, companies that offer messaging stuff. [Did anyone get a photo of that?]

Because of abstraction and with things happening in the background, it’s a different flow of control than you might be used to – monitoring is different. You can’t just sit there with a single debugger.

[Slide] “If you can’t see your changes take effect in a system your understanding of cause and effect breaks down” – not just about it being hard to debug, it’s also about user expectations.

I really liked this presentation – it’s always good to learn from people who are not only innovating, but are also really solid on performance and reliability as well as the user experience.