Month: March 2008

Today we launched the first short story at We Tell Stories, called The 21 Steps. It’s a thriller written by the acclaimed spy writer Charles Cumming, and it’s set within Google Maps. I’m genuinely pleased by the way the design of the experience meshed with Charlie’s excellent story, and so I’d really recommend you read it.

We Tell Stories has been – and still is – an interesting challenge, because what we’re trying to do is tell stories in a way that can only be told online. We aren’t adapting stories – we’re working with authors to create entirely new stories that are native to the web. In the past few weeks, I’ve called the process ‘designing a story’, and I talked a little about it in a Gamasutra article published today:

The first story looks to use Google Maps in some way – how did you work with the author to make this happen?

What the Google Maps story does is force us to think about the reader experience. While they might not realize it, authors simply don’t have to think about this when it comes to books, since they already implicitly know the ‘design’ of books – it’s words on a page, divided up into chapters, and you can flick back and forth through the pages to look at the ‘story history’, and bookmark pages to keep your place.

The design of books is so great that it hasn’t changed for hundreds of years, and so we just don’t think about it any more.

When we had the idea for a story based around Google Maps, we knew that it had to incorporate a lot of movement – otherwise what’s the point of having a map? So one early idea was a travelogue – a little like Around The World in 80 Days. Another was a thriller, like The 39 Steps. We ended up taking the latter option, due to its frenetic pace, and we asked Charles Cumming, an acclaimed British spy thriller author, to write a story for us.

To begin with, we simply told Charles to ‘bake movement in’ to the story. However, from early on, it became clear that this was rather trickier than any of us had thought; it wasn’t enough to have the protagonist walking and driving and flying around the place, they had to do it all the time.

Early drafts of the story saw the protagonist having a very tense discussion for a couple of chapters – riveting stuff – but it was all in one room. Luckily we had a great relationship with Charles and we worked together to incorporate more movement, or references to other locations, in every chapter.

We would often give suggestions about scenes that would fit the design, and Charles was always very open to revising the story and coming up with new ideas. Ultimately, I think it was his flexibility that really made things fit together.

Something worth mentioning is that none of the authors we’re working with are particularly tech-savvy – some of them are the complete opposite. And while being tech-savvy does help, it only helps up to a point. From my point of view, I can teach an author about technology and interaction, but I can’t teach someone how to write.

I spoke about the subject of stories and games at Barcamp Brighton on Sunday (incidentally I wouldn’t call The 21 Steps a game, but it is an interactive experience). The Barcamp was a wonderful experience, and I’m sure I’ll do it again. Rachel Clarke did a great writeup of my presentation on her blog, and I’ve also included the slides below:

One of the many sad results of Perplex City being put ‘on hold’ is that I can’t explore the effect of cognitive enhancement on society. As a former neuroscientist who studied experimental psychology at university, I always enjoyed writing about my pet fictional company, Cognivia, and its range of cognitive enhancements including Ceretin (wide-spectrum enhancement), Mnemosyne (memory booster), Cardinal (maths), Synergy (creativity) and others. I still think the names are really cool as well.

In a recent commentary in the journal Nature, two Cambridge University researchers reported that about a dozen of their colleagues had admitted to regular use of prescription drugs like Adderall, a stimulant, and Provigil, which promotes wakefulness, to improve their academic performance. The former is approved to treat attention deficit disorder, the latter narcolepsy, and both are considered more effective, and more widely available, than the drugs circulating in dorms a generation ago.

… One person who posted anonymously on the Chronicle of Higher Education Web site said that a daily regimen of three 20-milligram doses of Adderall transformed his career: “I’m not talking about being able to work longer hours without sleep (although that helps),” the posting said. “I’m talking about being able to take on twice the responsibility, work twice as fast, write more effectively, manage better, be more attentive, devise better and more creative strategies.”

Would I take cognitive enhancers? I would certainly like to give Provigil a try, if only to see what it’s like. I have concerns about its long-term efficacy, and obviously there are issues of developing a dependency on it (if not physiological, psychological). There are already many people out there who regularly use caffeine and Pro-Plus to pep themselves up. You could argue that the stimulant properties of caffeine are merely a side-effect, and that the reason people drink coffee is because it tastes nice, but I find that as hard to believe as the notion that people drink alcohol only because they enjoy the taste.

The fact is, we already widely use cognitive enhancers, whether it’s caffeine or sugar. They do improve our performance. They are not natural in the slightest, unless natural somehow means ‘old’. So the question becomes, are we prepared to allow use of cognitive enhancers that are even more powerful, more reliable, and with fewer side-effects?

Imagine, for example, a novel designed to take advantage of the features of the new must-have geek hipster accessory: the iPhone. When you download a new novel to your iPhone, the calendar might automatically remind you it’s the birthday of one of the characters in a few days’ time, or you might get access to the appointments schedule of the missing journalist in your thriller. The weather-forecast widget could give you the option to view the weather in London in 1880, the setting for your historical romance. Or your purchase of one of those classic Harry Potters could add The Daily Prophet to your automatic newspaper subscriptions. Stories could become pervasive: when you’re lost in a good book, your whole online world could blend seamlessly with it. The technology to do all this doesn’t exist yet, but it’s far from impossible.

Of course, all that additional content will have to be written. Therein lies one of the problems. As Adrian Hon, chief creative of the online games company Six to Start, says: “Authors don’t need to be great artists or programmers right now. They ‘just’ need to write. To make anything more advanced than a normal story, though, you need more skills.” Most authors aren’t also computer programmers, and most programmers aren’t novelists. As Hon says: “Web people come up with cool ideas, such as telling stories via web 2.0 services – wikis, e-mails or Twitter – but it fails because they can’t write a good story for it.” This needn’t be an insuperable hurdle. We may see a new partnership added to the traditional artist-and-writer combination for illustrated books, or the musician-and-writer team for songs. Writers could work with programmers in this new form of storytelling.

Obviously my position is a bit more nuanced than this, but the quote gets the point across. While a lot of ‘stories on the web’ today involve some interesting technology, unfortunately, they’re just not very interesting stories. This leads a lot of people to conclude that the format of a book is superior. Of course, I disagree; we need to put a lot more thought into designing stories for the web, and that needs to be a collaborative process between not just writers and programmers, but also people who design interactive experiences on the web, who we might as well just call game or ARG designers.

The article also has a few tidbits about what we’re doing with Penguin (then again, you’ll find out much more early next week), and a review of the Amazon Kindle. The reviewer, a novelist called Stephen Amidon, has a rather plaintive lament about what eBooks and, I imagine, technology in general, hold for the future of his vocation.

There’s a fascinating series of articles at the New York Times Magazine this week about charitable giving. While many of the articles tend to cover the same ground (e.g. the move towards measuring the effectiveness of donations) there are some real gems there:

Consider Mr. Improvident, who is just like us except that he is not wired to care about his future. (There’s one in every family.) Mr. Improvident gets no neural kick from saving for tomorrow. Yet we can see that he has an objective reason to do so. He is, after all, a person extended in time, not a series of disconnected selves.

We ought to be able to see a similarly objective reason for altruism, one rooted, as the philosopher Thomas Nagel observed, in “the conception of oneself as merely a person among others equally real.” My reason for taking steps to relieve the suffering of others is, in this way of thinking, as valid as my reason for taking steps to avert my own future suffering. Both reasons arise from our understanding of what sort of beings we are, not from the vagaries of natural selection.

This was from an article about the nature of altruism, the discussion of which tends to concentrate on genetic explanations like kin selection and reciprocity. The suggestion that there is an objective reason for altruism – or at least one as objective and valid as our reason for saving for our own future – is interesting. There is of course an argument that we are more likely to save for ourselves because we are going to be ourselves in the future – but the problem with this is the existence of Mr. Improvident. If his corollary, Mr. Provident, exists, then why can’t a Mr. Altruist? Anyway…

Another great article is What Makes People Give? To me, the article is misnamed, since it’s more about ‘how can we use psychology to make people donate more?’ – which is the reason why I recommended it to the Let’s Change the Game winning team. There are some fascinating discoveries listed in the article, and while they can’t be used for all fundraising projects, I’m sure some will prove very useful, e.g.:

A matching gift effectively reduces the cost of making a donation. Without a match, you would have to spend $400 to make your favorite charity $400 richer. With a three-to-one match in place, it would cost you only $100 to add $400 to the charity’s coffers.

… But the size of the match in the experiment didn’t have any effect on giving. Donors who received the offer of a one-to-one match gave just as often, and just as much, as those responding to the three-to-one offer. That was surprising, because a larger match is effectively a deeper discount on a person’s gift. Yet in this case, the deeper discount didn’t make an impact. It was as if Starbucks had cut the price of a latte to $2 and sales didn’t increase.

and

List set out to see whether donors cared about so-called seed money. Fund-raisers generally like to have raised a large portion of their ultimate goal, sometimes as much as 50 percent, before officially announcing a new campaign. This makes the goal, as well as the cause, seem legitimate.

To see whether the strategy made sense, List and Reiley wrote letters to potential donors saying that the university wanted to buy computers for a new environmental-research center. They varied the amount of money that supposedly had already been raised. In some letters, they put the amount in hand at $2,000, out of the $3,000 they needed for a given computer; in others, they said they had raised only $300 and still needed $2,700. The results were overwhelming. The more upfront money Central Florida claimed to have on hand, the more additional money it raised. When paired with the matching-gift research, the study suggests that seed money is a better investment for charities than generous matches.
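The matching-gift finding above comes down to simple arithmetic: a match reduces what a donation costs the donor, so a bigger match ought to be a bigger discount. A quick sketch of that calculation (the function name and figures are just illustrative, echoing the $400 example from the article):

```python
def donor_cost(charity_gain, match_ratio):
    """Out-of-pocket cost for a donor who wants the charity to
    receive `charity_gain`, when each dollar donated is matched
    with `match_ratio` extra dollars (0 means no match,
    3 means a three-to-one match)."""
    return charity_gain / (1 + match_ratio)

# No match: delivering $400 costs the donor the full $400.
print(donor_cost(400, 0))  # 400.0
# One-to-one match: the same $400 costs the donor $200.
print(donor_cost(400, 1))  # 200.0
# Three-to-one match: the same $400 costs only $100.
print(donor_cost(400, 3))  # 100.0
```

The surprise in the experiment is that this deepening discount made no difference to donors, which is exactly why it is such a counterintuitive result.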

Through play, an individual avoids what he called the lure of ‘‘false endpoints,’’ a problem-solving style more typical of harried adults than of playful youngsters. False endpoints are avoided through play, Bateson wrote, because players are having so much fun that they keep noodling away at a problem and might well arrive at something better than the first, good-enough solution.

If it’s not clear from the quote above, here’s what a false endpoint is: imagine you need to come up with a new advertising slogan for a chocolate bar. There’s no urgent deadline to it, and so you might work on it for a long time. However, if you’re a bit stressed out and the process of brainstorming the slogan isn’t fun, as soon as you come up with a slogan that’s just ‘good enough’ then you might call it a day and stop working. That’s a false endpoint; you could’ve come up with a much better slogan if the process was more fun – like play – and you’d kept working away on it.

I’ve never heard of false endpoints before, and I can’t find anything about the term on Google. I find it hard to believe that no-one has ever thought of the concept before, but perhaps no-one’s given it a name until now. In any case, it ought to be more widely studied.

In highly creative fields like storytelling and game design, where you have to work to deadlines and commercial demands mean that coming up with ideas isn’t always fun, there’s a real danger of falling into false endpoints – simply settling on a ‘good enough’ idea because you just want to move on. I’ve seen and experienced it myself. That’s why there are so many games and stories that are pretty good, but are lacking in certain very obvious ways.

I wonder whether incorporating play into the development process (not necessarily production, although I’m sure it would help to a degree there as well) of highly creative fields would demonstrably prevent or reduce false endpoints. The notion of introducing ‘fun’ into the workplace is not new, and deservedly groan-worthy, but perhaps making creative development not simply fun but interesting in a way that encourages this ‘noodling away at a problem’ is a smarter way to approach it.