Tuesday, December 23, 2008

Last post I talked about personalities in computing, but I only briefly touched on why we might want personalities in computer programs or robots.

Aside from the obvious applications where a personality is necessary - robot pet, disabled care bots, sex bots, etc - the idea of putting a personality into the equation doesn't seem like a good one.

The most obvious example is Clippy. The basic idea of Clippy was that Word was getting rather complicated, and MS wanted some way to have a human-feeling tutorial system. A "smart guide".

Of course, it was a travesty, and is a sterling example of why NOT to make your programs have a personality. After all, if the solution or path is clear, the user doesn't need someone popping up and getting in their way. And, usually, if the solution or path isn't clear, it's easier to do a keyword search rather than clumsily interact with some primitive personality program.

But I think there are situations in which personable programs are required. A personable program may not have an overt, anthropomorphic personality, but there are times when even going that far might be useful.

First off, the basic idea of a personable program is that it understands the user better than current programs do. As time goes on and our need for context-sensitive data increases, our programs are going to have to get better at handling that. One example is a music bot that would pull music off the internet for you to listen to. Right now, the music bot software that's out there can be customized to your overall preferences, but it doesn't understand that sometimes you feel like listening to reggae, sometimes pop, sometimes filk. Humans have moods, and the current generation of software is only dimly aware of that fact.

A more advanced music bot would try to track your moods, trying to determine if, for example, you generally like funk in the morning and rap in the evening. A more physically present robot would probably try to determine your mood by scanning you - facial expression, body temperature, what clothes you decided to wear, whatever.
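The mood-tracking idea doesn't require anything exotic; a bare-bones version is just bookkeeping on what you play - and skip - at different times of day. Here's a minimal sketch (the class, its names, and its behavior are all invented for illustration, not any real music bot's API):

```python
from collections import defaultdict

class MoodGuesser:
    """Hypothetical sketch: guess a genre from time-of-day listening history."""

    def __init__(self):
        # counts[time_bucket][genre] = net score from plays and skips
        self.counts = defaultdict(lambda: defaultdict(int))

    def _bucket(self, hour):
        # Coarse buckets: morning, afternoon, evening, night
        if 5 <= hour < 12:
            return "morning"
        if 12 <= hour < 17:
            return "afternoon"
        if 17 <= hour < 22:
            return "evening"
        return "night"

    def record_play(self, genre, hour, skipped=False):
        # A skipped song counts against the genre; a full play counts for it.
        self.counts[self._bucket(hour)][genre] += -1 if skipped else 1

    def suggest(self, hour):
        genres = self.counts[self._bucket(hour)]
        if not genres:
            return None
        return max(genres, key=genres.get)

bot = MoodGuesser()
bot.record_play("funk", hour=8)
bot.record_play("funk", hour=9)
bot.record_play("rap", hour=8, skipped=True)
bot.record_play("rap", hour=20)
print(bot.suggest(8))   # funk
print(bot.suggest(20))  # rap
```

The physically present robot described above would just feed more signals (facial expression, clothing, temperature) into the same kind of tally.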

At this point the software can't be said to have an anthropomorphic personality. There's no pair of glasses with a mouth popping up and saying "It looks like you're trying to listen to Aerosmith. Would you like some help?"

But, despite that, you can see that the program is developing some rudimentary personality. In trying to predict your personal preferences, it will show personality, at least as far as you are concerned. Today it played military marches all day. Why? Yesterday it seemed to like the B52s. How interesting. It's got a personality, although not a clear or deep one.

Now the issue is that you've got this thing with a personality, but your interactions with it are extremely limited, largely consisting of fast-forwarding through songs. So, when it gets some weird idea in its head, you don't really have any way to (A) figure out why it's doing that or (B) get it to stop doing that.

On the other hand, if we do have a pair of glasses with a mouth pop up, we can see why it thinks what it thinks. Obviously, a cutesy icon is not what I'd choose. I'd probably either go for a blinking cursor or a sexy librarian, but we'll stay neutral for the moment.

You are fast-forwarding through songs it really thought you would like. Instead of flailing around randomly to try to determine if you're in a shitty mood or have a guest or something, it can gently pop up and say "Hey, what's the deal? You in a bad mood or something?"

And you can type back - "entertaining guests" or something. The software will understand, or at least fake it, and try to find a mode of music that fits your interests better.
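That kind of context hint is easy to picture as a thin override layer sitting on top of whatever the bot has learned. A minimal sketch, with all the context-to-genre mappings made up for illustration:

```python
# Hypothetical: an explicit context hint from the user trumps whatever
# the bot would have guessed on its own.
context_modes = {
    "entertaining guests": "jazz",
    "working late": "ambient",
}

def pick_genre(learned_guess, user_context=None):
    # Fall back to the learned guess when no context is given
    # (or when the context is one the bot doesn't recognize).
    return context_modes.get(user_context, learned_guess)

print(pick_genre("military marches"))                         # military marches
print(pick_genre("military marches", "entertaining guests"))  # jazz
```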

...

Now, at this stage, I've only covered about a quarter of the reason that software may require personalities.

The other half of the first half is that, as time goes on, this kind of contextual data feeding will become more and more critical and overwhelming. What sounds like a cute little semi-feature will become much more necessary when you're, say, trawling through Twitter, Flickr, YouTube, and whatever comes out next trying to siphon out today's trends.

The second half happens on the back end - the side that doesn't face you. The programs require a personality because the data they'll be filtering - and how they filter it - is based on personalities.

If you don't have a personality, how can you form a meaningful opinion on whether a cute cat video is fun or not? On whether your friend (another piece of software) would like to hear this news from an earthquake-destroyed city? On this insider detail about a specific game developer...

As the volume of our media increases, we'll see a corresponding increase in the number of agents (programs) trawling through it, trying to make sense of it. Because the media is fundamentally based on humans, it's easiest to judge if you have some semblance of human in you. A personality.

This is especially true when it comes to creating and navigating the semantic net that will arise from all this data filtering...

So, in long, I think that we're going to see a rise in personality-filled programs because we're going to see a rise in the number of programs that have to interact with personalities.

Saturday, December 20, 2008

I've been thinking about Nintendogs, and Tamagotchi, and Animal Crossing - all these games which feature the player nurturing some kind of pseudo-realtime entities.

A lot of people really like these games. But that's not terribly surprising to me. What is terribly surprising to me is the reason they like these games. And one of those reasons is the same surprising reason that people like The Sims, even though a lot of people who like The Sims don't like Nintendogs and vice-versa.

When you talk to these people, they talk about how their dogs or citizens or whatever other characters there are feel. They enjoy "keeping their Nintendog happy", and they notice that "one of my people has a grudge against another".

In regards to how I feel about it, maybe I dive a bit too deeply, a bit too quickly. My drive is not to figure out how these characters are feeling, it's to figure out how they feel things. So, when I dive into Nintendogs and The Sims, I am rapidly disappointed by the shallow simulation and brutally predictable responses. But the satisfied players either don't notice or don't care about that sort of thing.

I think it might be a race between how quickly a character can establish itself in your mind and how quickly you can tear its mind apart. The question in my mind is whether the characters could establish themselves even after I've figured them out. What I mean is this: I stop playing Nintendogs and The Sims because I figure them out, so I get bored and close up shop. But if I was forced to play them every day for a month, would I grow to have personal emotions about the characters?

I think I would. They would probably be hate, though, because it'd be a real bother to have to play the same boring game day after day. But... could it be another emotion? Could I grow to like a character whose actions and emotions are painfully transparent?

Well, research suggests yes, but the dynamics are a bit more difficult. In order to make it so that I like the character, the character has to serve some useful purpose in my life. That purpose could be almost anything, but it can't be "to waste your time".

The problem is that software isn't really advanced enough to do useful things on a personable, semi-automated level. For example, we could theoretically make our calendar management software a character - a virtual secretary - but it turns out that interacting with our virtual secretary is more difficult than simply filling out the raw calendar ourselves. So, even though she theoretically exists to help us out, in actuality she just wastes our time. Example: Clippy.

But I think the solution can't be applied because the problem doesn't exist yet. Right now, there aren't many applications that require a "personable heuristic", so there aren't many spaces to put one.

There are a lot of hints of this kind of thing on the horizon, though. One example is the elderly care robots you might see in Japan. These robots are pretty bad, not due to any design problems, but simply because interacting with the real world is quite difficult and our techniques aren't quite there yet. The end result is that most of these robots are close to useless. We can imagine them growing more capable over time, however, and it stands to reason that they will get more and more common until they're ubiquitous.

Another possible example of a hardware sort are prosthetics or meta-prosthetics. For example, if you're deaf, I can see a personable piece of software detecting all the sounds around you and regurgitating the important ones as text. It would need to be personable because what sounds any given person wants to be alerted to at any given time will change, as will the "expression" they should be reported in.

A prosthetic arm is fairly straightforward and probably wouldn't require a personality of any sort. But what about larger meta-prosthetics such as cars, houses, and computers as they become ever more closely linked to us? It makes sense to give them a personality that can not only adapt to the mood of the owner and situation, but can also express its own "mood" to easily reveal the very complex details of the system's state.

Pure software is also an option. Right now it doesn't seem to make any sense to have a web browser with a personality: we've restricted our use of the web to that which Google can provide us. However, even in that limited term, Google's search engine attempts to adapt to our requests and even do some contextual analysis. In essence, it's a primitive version of personable software.

Is there any reason to think this will do anything besides advance? Let's look at a version that could be made today:

What if we had a Twitter aide? A virtual character who exists to feed us twitters. By knowing the sorts of things, people, and places we're interested in, he could bring us relevant twitters and, judging our responses to them, give us more twitters along the same line or even send us out to web pages with more information.

Moreover, such an aide could "time-compress" Twitter. For example, if I want to follow someone's Twitter, but a lot of it is crap, I could have the character give me a summary, or at least filter out the crap.

Right now all this stuff can be done manually, of course, but the point is to give you a hint of what might come in the future. The amount of data spooling out of Twitter is a microscopic fraction of the amount of data that will spool out of the social networks of tomorrow, and the idea that you'll spend time manually going through all those entries is silly.
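For flavor, here's roughly what the dumbest possible version of that filtering aide might look like - weighted keyword interests with a feedback nudge. Everything here (the topics, weights, and function names) is invented for illustration; a real aide would need actual language understanding, which is the whole point:

```python
# Hypothetical sketch of a feed-filtering aide: score incoming posts against
# learned interest weights, pass along the promising ones, and nudge the
# weights based on how the user reacts.

interests = {"robots": 2.0, "game design": 1.5, "boston": 0.5}

def score(post, weights):
    # Sum the weight of every interest topic the post mentions.
    text = post.lower()
    return sum(w for topic, w in weights.items() if topic in text)

def filter_feed(posts, weights, threshold=1.0):
    # Only posts scoring at or above the threshold get through to the user.
    return [p for p in posts if score(p, weights) >= threshold]

def reinforce(post, weights, liked, step=0.2):
    # If the user clicked through, bump every matching topic up;
    # if they ignored or dismissed it, bump those topics down.
    text = post.lower()
    for topic in weights:
        if topic in text:
            weights[topic] += step if liked else -step

feed = [
    "New robots shown at the expo today",
    "What I had for lunch",
    "Thoughts on game design and pacing",
]
print(filter_feed(feed, interests))  # the robots post and the game design post
```

The `reinforce` step is the part that slowly turns into a "personality": two users running the same aide end up with completely different filters.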

"But, Craig, why would these entities need a personality? Sure, I understand for things like pets and nursemaids, but why would a super-Twitter aggregator need a personality?"

Well, I think that the next tier of software is going to have to have a personality because it will allow the users to psychologically deal with the complexity of the software. That was the idea behind Clippy, after all: create a simple interface to the complicated parts of Microsoft Word.

Microsoft Word isn't complex enough to require that kind of assistance, but high-volume data filtering very well may be. As users, we have to "understand" why our entity is bringing us this particular data, and we have to not get upset by the communication difficulties we're going to have with these entities. Both of these things are easily accomplished by a carefully chosen avatar, and that's quite apart from the fact that our software entity (whether strictly software or partially hardware) will need to empathize with us and our moods.

In some regards, this can be seen as depressing: oh, old people cared for by soul-less ro-bots instead of hu-mans. People making friends with ro-bots instead of hu-mans. Pretty sad! Or is it?

I think it's a cultural artifact. I think that once we're there, we'll find it's not sad at all.

Friday, December 19, 2008

So, I wasn't planning on reviewing Dead Space, since it's... not really very interesting. It's basically System Shock II minus Shodan, psychic powers, and hacking. IE, a random repairman against The Many.

I was just gonna let it slide peacefully into oblivion but, but...

Then I met the Asteroid Shooting Minigame. A mandatory minigame where they put you in a seat and make you play Tie Fighter. Not the later ones. The original one. On the Apple II. The one that isn't even listed in Wikipedia, presumably because it was someone's basement hack.

I always hated that game, and I have not improved any with experience. So I've tried to beat this dumb, infuriating, pointless thing several times. Each time I think to myself, "IF I WANTED TO PLAY TIE FIGHTER, I'D TIME TRAVEL BACK TO 1988! SHUT UP AND BE A SHOOTER!"

But every time I die. And every time, I get a little further. Oh, good, maybe I'll eventually beat it!

Except your captain-type-dude is sitting in your ear the whole time. "Almost got it! Just a little more!" "You said that five minutes ago, dipshit, what's the point of lying to me? Just to fool me into thinking maybe I'll win this time, maybe I'll hold out long enough on this idiotic, sub-par minigame designed by brain-damaged, idiot monkey-men and implemented by sadistic, gibbering idiots and playtested, evidently, by savants with astonishingly good control over the MOST IRRITATING MOTIONS ON THE CONTROLLER? We'll call them idiot savants, just to keep with the theme."

Yeah, I'm really enjoying it.

So the game went from being "decent" to being "totally shitty" in one fell swoop.

Reading the walkthroughs, I find that not only does nobody have any useful suggestions aside from "not sucking", but there's ANOTHER ONE OF THEM LATER ON.

See, this kind of shit is just bad game design. The idea here is that they mix it up a bit, you know? Give you a break from the regular gameplay. Maybe the game needs it: the regular gameplay consists almost entirely of walking around slowly, then freezing and dismembering zombies. It's not exactly rapid-fire. The minigame certainly is.

But you know what? Mandatory minigames are a sign that your game design is fundamentally flawed. Doesn't matter what they are - quicktime sequences, turret fighting sequences, PRESS A REALLY QUICK sequences... they're all a sign of shitty basic play being desperately propped up by other shitty play.

You can put minigames in, sure. System Shock II, which Dead Space obviously wanted to be, had a hacking minigame. I'm sure it irritated some people. But you know what?

IT WAS OPTIONAL.

Man, I go on and on about weird, advanced little theories about game design, but then I go play a so-called triple-A game and I find they need BASIC DESIGN LESSONS.

I can't imagine the designers were really this bad. All I can think of is that they had a boss breathing down their neck and two days to do something. Because it's really bad. Ugh.

UGH.

The funny thing is that every other review on the planet seems to have loved the game. Not only did they not even notice this minigame, they thought the game itself was better than I think it is. This is probably the most negative review written about the game, but even before I got into this dumbass minigame, I didn't consider the game to be so great.

Maybe I'm spoiled by the fact that I'm kind of a scifi-survival-horror specialist. All these people comparing Resident Evil to Dead Space. Nooo, you did NOT. No wonder you think it's good. Maybe you should play System Shock II again. Or, hell, Shadowgrounds is scarier than this is.

And this artificially-crippled-camera crap? It doesn't make the game scarier to me. At all. More responsive - even eagle-eye - cameras work just fine because in survival horror, a big part of sustaining the scare is in maneuvering. And there isn't any in RE4 OR in Dead Space.

Tuesday, December 16, 2008

Stolen more or less (more less and less more, I guess) verbatim from the MBTA here in Boston:

"Hi, this is Dan Grabauskas, general manager of the MBTA. Safety is our number one concern on the T.

"As our eyes and ears in the system, it's more important than ever for you to keep alert for suspicious behavior and activities. Even though there has not been a significant terrorist attack on a bus, train, or subway in America in our lifetime, even though Boston has never been the target of a significant terrorist attack, and even though there is no reason to think that we will be targeted any time soon, we're relying on you to report any suspicious activities, such as people with funny hats.

"Remember: in these dark times where literally ones of Americans are being murdered by foreigners every day, occasionally within two thousand miles of our borders, paranoia is a virtue and a welcome distraction. So, see something, say something: it's better to wrongfully arrest ten innocent civilians with dark skin than to let one person remember that they are safer in Boston than virtually anywhere else on the planet.

"Please stay tuned for another, back-to-back announcement on the topic of unfounded paranoia, and remember: watch that dirty rotten foreigner two seats down LIKE A HAWK."

Sunday, December 14, 2008

I happened to watch this today, only a day after writing this, and the two overlap.

I always get irritated at the beginning of these kinds of talks. It's so easy for people to get worked up about the potential horrors that theoretically await us. Perhaps unsurprisingly, Bill Joy isn't as blindly reactive as many of these kinds of people, but I thought I'd take the moment to give MY opinion on the matter.

The basic concern is that as individuals gain more capability to create higher-technology devices, someone will do something horrible. For example, once we have home printers for printing life forms (virii and bacteria first), what's to stop someone from printing up some new superdisease, causing an epidemic, and killing a billion people? The home printers for printing life forms are not scifi, they're... ten years away. Twenty at the most. Then we'll all die OMG!

Welllllll, the answer isn't easy, mostly because the question presumes things that are false. It supposes an incorrect social dynamic: it is in error about the way human minds work. I suppose you could call it a misunderstanding of basic memetics, except that basic memetics doesn't exist. It ALSO misunderstands the nature of science in a fundamental way. These two misunderstandings radically alter the dynamics of this kind of terrorism.

One thing that isn't an answer that I need to address is regulation. Regulation will not work when you're regulating home use. If people can do something in their own houses, it cannot be regulated without destroying the society you're trying to protect. Once I have a DNA printer, it's impossible to stop me, impossible to catch me, short of omnipresent monitoring from an insanely overpowered government.

Nothing less will accomplish anything, as you can clearly see from the explosion of music, video, and more flat-out illegal things that is saturating modern computers. These things - especially toxic data such as child porn - are controlled by law. Stringently. But they still range somewhere between available and omnipresent, especially if you have access to gray networks (such as Freenet) or can speak multiple languages (to search outside your government's jurisdiction).

Some people are happy to settle for omnipresent monitoring - a total loss of privacy - out of the stark fear that someday some asshole with a DNA printer will kill off half the world's population. Those people, as I mentioned, are operating under at least two false premises: they misunderstand human dynamics and they misunderstand scientific dynamics.

I'll cover the scientific dynamics first.

It's forty years in the future. I'm at home with my printer and, having been spurned by my erobot, I'm planning to destroy the world with a particularly clever little microbe that makes people explode violently. Like in video games.

I release it into the wild. What happens?

Well, if I released it TODAY, man, I'd kill off half the world. It would be aweso... ful. Yeah. Awful.

But I'm not releasing it today, I'm releasing it forty years from now. When the technology has been developed to allow a home user to build this sort of thing.

I can't predict precisely what the defense will be, but I know there will be defenses, because we always develop defenses in line with the technology. Sometimes those are nontechnical (such as the very political defenses against atomic weapons), but the majority of times they evolve as sister technologies or practices and grow to a level where we stop even considering the original technology a threat.

For example, a government worker can easily email billions of dollars of secrets to China and get paid to an anonymous account. Nobody the wiser. But it's not something people run around screaming about.

Because we've evolved defenses against it. Some are technical - email monitoring from secure sites, security clearance requirements, etc - and some are political. I don't really know what those are, but given that I could get around the technical requirements in half an hour, I presume they exist.

I'm going to list a few technical defenses that may evolve simultaneously with home DNA printers, but in honesty I think the biggest defense will be a sister technology allowing for a more... socially advanced?... human. I don't really want to go into it here, but it's pretty clear that this level of technological change fundamentally changes human society.

Anyway, technical. I think these are the most likely:

1) Alarm vaccines. These are intelligent defenses, probably bacteria, that scan against a wide variety of known intruders and actually report on the kinds of microbial intruders you're experiencing. If they find a NEW one, you can see the alarms howl! And, of course, they immediately report the structure and activities of this new form, as well as beginning basic defensive operations. Sort of like a biological Norton, if you want.

2) Airborne scanners. Same idea, but in boxes rather than people. This has the advantage of being easier to update and able to use technology that couldn't exist in people's bodies. However, it would also be less widespread and unable to accurately watch the progress of a new monster against a host. Plus side, they'd catch nonhuman-targeted microbes.

3) DNA Reverse Engineering. Get a sample of the new critter, run it through the reverse engineer, and you've got instant vaccine and cure. Release the hounds within an hour of detecting the thing in the first place.

4) Internal Weather Control. Instead of injecting a drug, we inject a bacteria that can manufacture the drugs we want. Moreover, the bacteria can be recalibrated if it turns out that cocktail isn't working.

Now these all sound hopelessly farfetched and futuristic, but you have to remember that we're talking about fighting off a microbe that I created on my home machine. They're not really more futuristic than that.

The point is that by the time we have home machines for this, we'll have developed defenses and responses to the threats the home machines create.

You can argue this isn't true - that we don't have defenses against nuclear bombs, for example - but those can't be created at home. Computers and the internet were once predicted to have catastrophic consequences for humanity back in the eighties, if you recall. "A hacker brought down wall street! OMG!"

But that passed because we developed safeguards, built them right into the infrastructure of the system. Built them right in because we WERE the system. The home users actually enabled the development of the defenses we use against the many horrors of the internet.

Same idea. Technology doesn't advance on its own. It advances for a large number of people and in tandem with a large number of brother and sister technologies.

...

The OTHER issue here is a fundamental misunderstanding of humans. And this is perhaps a more serious misunderstanding.

HUMANS ARE NOT GOOD AT REMOTE ACTIVITIES.

Nearly every crime ON THE PLANET is perpetrated by someone who spends a great deal of time near the victim. Likely it's a member of their family. Kidnapped children? It's usually a family member. Abuse? It's usually a family member or a trusted friend. Theft? It's often someone you know, the rest of the time it's usually someone who lives relatively nearby. Mugging? Usually muggers stay pretty close to home or their primary hang out, believe it or not.

Humans shit where we eat. Criminals commit most of their crimes within spitting distance of their normal zone of activity.

There are exceptions. For example, a lot of people knock over convenience stores. But it's usually a convenience store they've been in before or at least driven by a bunch. The convenience store makes a particularly appetizing target, so I guess it's understandable.

"But that doesn't hold true for terrorists and serial murderers... does it?"

Yyyyeah, it does. To a large extent.

It seems the majority of terrorist attacks happen in the same city as the terrorist lives. Nearly all of the rest happen in the same nation. You'll notice that most nations that suffer terrorist attacks have a higher percentage of immigrants from so-called "at risk cultures".

Our very own terrorist attack was a black swan. A particularly aggressive gambit using a particularly appetizing method against a particularly appetizing target.

I need to be clear. I'm not arguing against immigration or whatever else your personal concerns are. I'm explaining that terrorists do not simply pop over on a boat and blow you up. They blow themselves up near where they live, in the same way that muggers tend to mug people on the same set of streets, often within walking distance (or bus distance) of their home.

The same is true for our theoretical disease-criminal of the future. He is operating not out of coordinated misanthropy, but out of greed or UNcoordinated misanthropy. He won't cause any more damage than criminals today do. In fact, he'll probably cause less, because we'll be operating under augmented reality, which will make us very community-driven organisms. But let's assume the same level of misanthropy as today. Not a big threat.

"How can you be so sure? Isn't it DANGEROUS?!?!?!"

Well, hell, I can put together a mean chemical cocktail out of the crap under my sink and kill seventy people on a crammed-full red line train. Anyone can. But it hasn't happened, because that's not how humans work. We just don't work like that.

There are some amazing exceptions, most of them American. The unabomber, for example. But to call these uncommon is an understatement. They are so rare that we still remember him, even though he only killed three people. THREE PEOPLE! Similarly, some of you remember the Japanese guy who attempted pretty much what I described in the last paragraph. But more people are harmed in bowling-related accidents than by these kinds of coordinated criminal misanthropy. Perhaps it's because anyone whose brain is working so poorly as to WANT to do these things can't possibly do them WELL. I'd like to think that if I went off to kill people, I'd do it quite a lot better... but maybe there's something inherent in the act, some kind of mental screw-up? Perhaps there's some other reason. But... these people seem incapable of doing massive harm, even though they already have the tools to do so.

There are exceptions. Black-swan events like 9/11. That's what the technical defenses are for.

Also, there are some memetic factors worth considering. For example, suicide rates go up when there's lots of news coverage of suicides. School shootings went up when the media started going nuts about school shootings. (In fact, they basically didn't exist until the media went nuts.)

These are worth considering, but it's not something that we can stop progress for.

In fact, you can't stop progress at all.

Even if I haven't convinced you, in twenty years you'll find me sipping lemonade as I genetically engineer a green puppy.

As time goes on and I build little demos and tabletop games, I find myself less and less interested in play. But in a really weird way. Lemme sum up.

I have noticed a huge difference in how I design tabletop vs how I design computer games. Here's an example of what I mean. I've designed several pseudo-star-wars games, both as computer game prototypes and as tabletop games (mostly RPGs).

When I design the computer game, I spend eons going over the play dynamics. "Well, how should light saber combat be? Just like so, using time-sensitive balance and yardeyaryaryar?"

When I design the tabletop game, I spend eons going over the, um, "narrative components". "Well, here, I've drawn a picture of the ambassador from Mrrhork, and his stats are on the back. He'll be useful because he's good friends with a rebel naval admiral and..."

Now, it's important to note that I'm not designing tabletops with common RPG mechanics. I'm not popping off to GURPS or d20 and pulling out some tired standard. I'm actually creating very new and strange mechanics for every tabletop game I make. But... but it's so easy. It takes maybe half an hour.

On the computer, it takes ten times that long just to duplicate freakin' pong, man. There's no time left for the story - I've spent all my time on trying to get the game to work in the first place!

The big factor at work here is that the interface for generating and executing rules for tabletop games is significantly simpler than computer games. IE, I write them down, then I remember how to do it. If it's vague or something I didn't expect crops up, I can modify them on the fly.

But! But!

But I hate using standard gameplay.

A big part of my dislike for the new Fallout game (which I thought was merely "quite good") was that they literally used the exact same system as for Oblivion. Even though the system wasn't very well suited to the Fallout theme. (I also disliked the world design, which was also inherited from Oblivion and had the same problem.)

Sure, they applied desperate patches and some varnish. (I'm SPECIAL. How cute.) But underneath it, we're looking at a recycled system. I can't stand that.

It's why I can't stand D&D or d20 or GURPS, either: these are systems that aren't designed specifically for the game the players are playing today. So I can't stand it. It feels like someone's trying to hammer a round peg into a Hulk-Hogan-shaped hole.

I want my gameplay to match the story (narrative elements, whatever). The game is a cohesive experience, and the idea that half of the game can be largely recycled from a completely unrelated experience is repugnant to me. So I quite literally cannot take a piece of gaming middleware such as Game Maker and make a game out of it. I hit a wall where I feel the grind between what dynamics the game needs to have and the dynamics that the middleware allows.

In some cases you can go digging, script up your specific gameplay system... but it takes just as long as making the damn thing from scratch!

It used to be that I'd make my little demos, focus on the gameplay aspects, and then drop them. But I'm having a really hard time doing that now, because I'm starting to really feel the missing half of the experience. It's like I'm arranging furniture in a house that has no roof or walls.

The horrible part is that this isn't a problem when designing tabletop games of any sort. Doesn't matter how hideously complex the game is. 200 page GM guide on time-traveling probability mathematicians? No problem, takes me a week. Board game with 500 illustrated cards? No problem, happy to spend the time.

Saturday, December 13, 2008

I've mentioned this before, but I guess it bears repeating. A lot of the stuff we now consider information wasn't considered information until technology allowed the local user to create the final form from information.

I know it's not terribly clear, so let me give an example: music.

Music wasn't information for most of the history of humans. It might CONTAIN information, but the music itself was created by the use of complicated instruments, and would be considered to be a product of that instrument plus someone who could play it.

Then we got phonographs which, in combination with records, allowed us to play back music that was recorded on any number of instruments with a very simple instrument: the speaker. The physical presence of the instrument and the time of its playing was removed from the equation: you could listen to the symphony or a jug band, all out of your home device.

This continued to improve, of course, with the advent of radio stations, tapes, CDs, and now MP3s. Moreover, some music is created without anyone ever playing an instrument at any point! At each step, the creation of the song and the end user's enjoyment of it is separated more and more. It becomes information.

It may be difficult to imagine a time when music wasn't considered information: back when songs weren't protected by law, because it wasn't the schematic of the song that mattered, but the individual rendering it.

As songs became more and more informationesque, laws were pushed to limit the spread of the information component. Whereas before anyone could pretty much play whatever song they wanted, now there are extremely strict limits on which songs musicians are allowed to play and in what situations they are allowed to play them. There are similarly complex laws governing the final rendering of music. This absurdity that most of the world takes as a given only exists to commoditize something that has always existed freely, but has only recently become useful.

I'm not going to argue whether it's good or bad, and I'm not someone who's advocating an information anarchy. I'm simply pointing out that as songs have had their physical requirements lifted and become information, laws have been made about who is allowed to access and replicate that information. Information that, only a few generations ago, would have been happily passed from person to person without anyone even noticing.

Of course, we all break those laws every day. Well, except me, the sterling example of not-a-music-pirate (cough). But that's not because we're evil, that's because it's gotten to the point where music is about half a millimeter from actually being nothing but information. It has become so insanely easy to transmit and replicate that it is almost impossible for us computer nerds to really imagine it restricted.

Now, the point of this essay is that many things march towards information in the same way songs did. This is not obvious because music is the only one we're really very familiar with in our modern culture. But here are some examples:

Writing! Writing wasn't originally information, although it conveyed information. Instead, writing was in heavy books or scrolls (or cave walls or bark or whatever). It required the physical presence of these displays to exist. Now, of course, we have a billion displays that can render writing on the fly, and writing has turned into information. Just like music: we no longer need to have the physical form that writing originally required. Our ability to manufacture any "blueprint" of writing on a wide variety of personal screens or printers makes those things obsolete. We take the information - the "blueprint" of the book or essay - and we render it on our screen instantly and painlessly. You're doing it right now.

We're seeing slow strides in that direction for many, many, many things. For example, Pepsi keeps their recipes secret because, unlike three hundred years ago, that information would allow their competitors to easily "render" Pepsi. It's not feasible for individuals to do it, but other large companies can easily do it. That means that Pepsi is largely an information product. Sure, you buy a can of Pepsi, but that can's contents could be manufactured by anyone with a soda mixing plant anywhere on the planet. Pepsi's existence as a unique product is only true because of the countless laws that protect their specific mixture from being stolen and duplicated.

Does that seem odd, to consider a soft drink "information"? Well, how about coffee? More and more homes are getting coffee-savvy, with their own coffee machines, espresso makers, and so forth. Obviously, they're still using beans that are physically grown somewhere, but the final drinks they're making are the result of simple recipes - information about coffee.

My uncle makes something almost indistinguishable from a Starbucks Frappuccino. In fact, we call it a Frappuccino. My dad prefers hot drinks, so on his machine he simply makes coffee, but he grinds the beans to a very specific level and lets them steep for a very specific amount of time under a very specific heat, and it often varies from bean to bean. He's producing, from his home, coffee far superior to most public coffeehouses because his recipes - his "coffee information" - are superior and calibrated specifically to his taste.

I can't go so far as to say that coffee is information, but I can say that it is moving in that direction in the same way that the record began making music into information. It sounds insane to say that someday we'll just say "Coffee, Barbados blend from 2024", and out will pop coffee. But two hundred years ago, it would have sounded insane to say that someday we could just say "Bach, concerto #7, Philharmonic symphony orchestra 2001"... but today, that's almost literally what we can do.

"But that's so far in the future it doesn't bear to be considered!"

It's actually amusing to look back at those times and see their predictions for the future, with people saying things like, "The phonograph of the future will allow the family to listen to up to thirty different concerts in the quiet of their own home!"

The thing about progress is it's not linear. It's exponential. Just when you can see far enough to start realizing what might happen, it explodes into insanity.

So, let me go ahead and explode this "informatization" of products into insanity.

Cars as information. Download a blueprint from OpenSourceCars, print it out on your home machine, and drive a Ferrarenti to work today. I'm officially coining that pun.

Genetics as information. Plan out a garden consisting entirely of plants to grow in exactly your soil, in exactly your weather conditions, and form a stable biome. Print them out. AS PLANTS, not seeds.

Neighborhoods as information. Get together with your community and pound out the specifics of your community power generation, water filtering, shared spaces, optimal parking... then grow it using a pseudo-life form that excretes complex structures like a clam forms a shell.

Insanely over-futuristic?

Well, it's certainly not going to happen tomorrow!

But all it needs is something that lets you render a product locally. Just a machine that lets you print out the things you want.

Wednesday, December 03, 2008

There are more and more games that try for an open world with some level of social play. Fable II, for example, lets you perform any pose at anyone. But though the world has many NPCs in it, none of them feel like a character. They all feel like cardboard cut-outs.

There's something to be said for better play involving characters. In fact, I've said tons on the subject myself. So I'll skip that stuff.

I've come to believe that's only half the issue. I think the other half lies in how you interact with the character.

Most games of this sort give you generic interaction options that can be pointed at any character. For example, in Fable II you can "dance" at anyone, you can "yell" at anyone, etc. Other games featuring open-world (or character-gen) social systems use the same idea, although they often use a different set of generics. Even I have, in the past, done this for nearly all of my prototypes.

But I think that's the flaw. I think that's the big flaw.

The idea behind it is that the characters will react to your generic action in a specific way, showing off their personality. But that's a crippled framework because it depends on the generic message. It's a reply to a generic comment. It's like this: you want to get a feel for how interesting a photograph is. But you're only allowed to ask a few specific questions: how big is it? How red is it? Is there a person in it?

Even if the photo is very interesting, you'll be hard-pressed to tell because your questions are so shaky. And if you have enough generic, pre-defined questions to tell you, then you have thousands of generic questions, 99.9% of which are useless in any given situation.

Social gameplay is the same way. Even if the character is interesting, you won't see that in how they respond to your thumbs-up or your yell. You can't really get to know a character because your AVATAR can't get to know the character. Your avatar's behavior doesn't change to be more specific to the character.

What if it did?

Let's pretend that instead of generic responses like thumbs-up and laugh and so forth, we have four "social action slots". The slots are filled by the places where your character's personality, the other character's personality, and your relationship collide.

The idea is that there could be hundreds, thousands, even billions of potential interactions that could be loaded up. They could be built out of sub-interactions, for example, or even made with a numeric scale involved.
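To make the idea concrete, here's a minimal Python sketch of filling those slots. Everything in it is hypothetical: the trait names, the action list, and the scoring weights are all made up for illustration. The only point is the mechanism: merge the two personalities and the relationship into one context, score every candidate interaction against it, and surface only the best few.

```python
# Hypothetical personality model: each trait is a 0..1 value.
# Action and trait names are illustrative, not from any real game.
ACTIONS = {
    "tease":     {"playful": 0.8, "bold": 0.5},
    "confide":   {"trust": 0.9, "warmth": 0.6},
    "challenge": {"bold": 0.9, "rivalry": 0.7},
    "comfort":   {"warmth": 0.9, "trust": 0.5},
    "gossip":    {"playful": 0.6, "trust": 0.4},
    "apologize": {"warmth": 0.5, "tension": 0.8},
}

def score(action_reqs, avatar, other, relationship):
    """Score one action against the 'collision' of avatar, other,
    and relationship: merge the three trait sets (taking the
    strongest value for each trait), then weight the traits the
    action cares about."""
    context = {}
    for traits in (avatar, other, relationship):
        for trait, value in traits.items():
            context[trait] = max(context.get(trait, 0.0), value)
    return sum(weight * context.get(trait, 0.0)
               for trait, weight in action_reqs.items())

def fill_slots(avatar, other, relationship, slots=4):
    """Return the best-fitting actions for the four social slots."""
    ranked = sorted(ACTIONS,
                    key=lambda a: score(ACTIONS[a], avatar,
                                        other, relationship),
                    reverse=True)
    return ranked[:slots]
```

With a playful avatar and a bold stranger, for instance, "tease" and "challenge" would outscore "confide", so those are the slots the player sees. The selection, not the actions themselves, is what changes per character.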

A simple example is a hug. If you can hug someone in a game, they use the same animation no matter who you're hugging. But you hug people very, very differently depending on your relationship, your current mood, etc. And they hug you back (or punch you in the nose) very, very differently. Not just big differences like bum-grabbing or carefully maintaining inches of air between you, but small differences like how long they hesitate before returning your hug, exactly where on your back their hands fall, etc.

Calling it a "hug" is a crime caused by the limitations of our language. There are a billion different kinds of hugs. Instead of trying to make a big list of them, it would be much better to create a hug generating system.
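A hug generating system could work by mapping relationship and mood values onto animation parameters, rather than picking from a list of canned hugs. Here's a tiny Python sketch of that idea; the parameter names and the formulas are pure invention, just to show that a few continuous inputs can produce "a billion different kinds of hugs."

```python
from dataclasses import dataclass

@dataclass
class HugParams:
    hesitation_s: float  # pause before returning the hug
    gap_cm: float        # air carefully maintained between bodies
    hand_height: float   # 0 = lower back, 1 = shoulders
    duration_s: float

def generate_hug(closeness: float, mood: float,
                 formality: float) -> HugParams:
    """All inputs in 0..1. Illustrative mapping only: closer
    relationships mean less hesitation and less distance,
    formality pushes the hands up toward the shoulders, and
    good moods linger."""
    clamp = lambda x: min(max(x, 0.0), 1.0)
    closeness, mood, formality = map(clamp, (closeness, mood, formality))
    return HugParams(
        hesitation_s=1.5 * (1.0 - closeness),
        gap_cm=20.0 * (1.0 - closeness) * (0.5 + 0.5 * formality),
        hand_height=0.3 + 0.6 * formality,
        duration_s=0.5 + 2.5 * closeness * mood,
    )
```

The animation system would then blend or procedurally drive the hug from those parameters, so no two relationships produce quite the same gesture.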

...

I should stress that this is not an always-on thing. You cannot simply walk up to anyone and hug them. You only get the option to when your avatar thinks a hug would be appropriate (or, well, whenever he wants to).

This is a breach of common game design philosophy. We've grown very used to the idea that the player should be permitted to do any legal action at any time. If you take away a player's ability to jump "because his character doesn't feel like it", the players will probably crucify you.

This is really no different, and I expect the players would be irritated that they can't simply choose to try to hug (or whatever) their favorite character whenever they like. However, if it's built fairly transparently, getting that option would be one of the fun gameplay challenges.

But, as with all this week, this relies heavily on the player's avatar having a personality.

Monday, December 01, 2008

To continue this theme of hollow characters, I'd like to do a little thought exercise. I think you should try it too, because it's fun.

Picture one of the games you've played recently that had a hollow or window main character. Now replace them with a very strongly NON-hollow character. Imagine how the game would feel different, and then imagine how, if the game had been designed with that character in mind from the start, it might be fundamentally different.

In most regards, Kirk is easy to picture in Shepard's shoes. It's not much of a stretch to see Kirk going to war like that, although it's not exactly what he would do in Trek. I can't see Kirk driving around on barren planets looking for mineral deposits and loot, but I couldn't see Shepard doing that, either.

From a plot perspective, Kirk fits fine into the role of the young renegade captain sent on a special mission with a special ship. However, all the interpersonal aspects of the plot will screw up a bit because Kirk's character doesn't really have those dynamics. His romances are always held at arm's length, for example.

The whole idea of "choosing noble or asshole" is still viable, but it would be done in true Kirkian style instead of mushy, wishy-washy hollow character. "Garrus... you can't... go around killing people!" "It's Wrex - he's gone out of... control!"

The fact that Kirk actually has a personality lets the writing justify letting Kirk take the lead more often, whereas Shepard is ENTIRELY a reactive character. This is because writing an active role for a hollow character leaves the players feeling cheated out of the options they would really like... but that problem is much reduced if the main character has such a strong personality that the player can't deny that the options are all that make sense for him.

If the game was designed with Kirk in mind from the start, I think it would feature the ship more centrally, because Kirk is a captain above all. Sure, he gets in fist fights and fires lasers and boffs a space elf, but the whole purpose of his character is to be the beating heart of his ship.

This sort of character replacement is kind of a fun exercise. That example was pretty straightforward - a substitution of a bland character with a very similar non-bland character. But it's often fun to imagine really zany mixups, and we can still claim it's educational so long as we think about how it would actually change the game.

For example, imagine Mirror's Edge with Raz from Psychonauts as the main character.

Or imagine Ash from Evil Dead as the main character in Crackdown. (Or Ash from Pokemon, I suppose.)

Or imagine Shodan as the "main character" in SimCity. Go nuts.

The point isn't "How would these characters fit into the game?" The point is "How would the game change to fit in these characters?"

...

I especially like that SimCity one. Imagine a city-building game where you play an evil artificial intelligence. Ha! "The only thinggggs of beauty in the dirt you call a citttyyyy... are the thiinnnnggggss IIIiiii builllllt therrrrrre."