How the Metaverse was Won

Most people deeply involved in Virtual Worlds, from researchers to developers to enthusiastic users, have read Neal Stephenson’s novel “Snow Crash.”

In it, Stephenson coins the term “Metaverse.” He describes it as a perceptually immersive successor to the Internet, populated by avatars interacting with each other in a collaboratively created virtual space.

If you’ve read it, you probably remember the Metaverse with its cool motorcycles, thrilling swordfights between avatars in The Black Sun, and the endless glittering stretch of The Street.

Seductive stuff, yes?

But I bet most of you don’t remember something mentioned in the novel. And I can sum it up in a simple question:

What was the one thing that made the Metaverse in Snow Crash broadly successful?

Even the most ardent fans of the novel seldom seem to remember the answer to this question. Which is funny, because it’s a pretty important question.

And I think it absolutely applies to anyone trying to build the Metaverse in reality.

The answer to this question is described in four paragraphs in Snow Crash. Let’s look at them.

Juanita (an early programmer of the Metaverse) was trying to hide a personal issue from her grandmother. She starts by describing her conversation with her grandmother over dinner:

“I avoided her until we all sat down for dinner. And then she figured out the whole situation in, maybe, ten minutes, just by watching my face across the dinner table. I didn’t say more than ten words. I don’t know how my face conveyed that information, or what kind of internal wiring in my grandmother’s mind enabled her to accomplish this incredible feat. To condense fact from the vapor of nuance.

I didn’t even really appreciate all of this until about ten years later, as a grad student, trying to build a user interface that would convey a lot of data very quickly. I was coming up with all kinds of elaborate technical fixes like trying to implant electrodes directly into the brain. Then I remembered my grandmother and realized, my God, the human mind can absorb and process an incredible amount of information – if it comes in the right format. The right interface. If you put the right face on it.”

Finally, another character in the story explains the relevance of Juanita’s insight to the early development of the Metaverse:

“And once they got done counting their money, marketing the spinoffs, soaking up the adulation of others in the hacker community, they all came to the realization that what made this place a success was not the collision-avoidance algorithms or the bouncer daemons, or any of that other stuff. It was Juanita’s faces.

Just ask the businessmen in the Nipponese Quadrant. They come here to talk turkey with suits from around the world, and they consider it just as good as face-to-face. They more or less ignore what is being said as a lot gets lost in translation, after all. They pay attention to the facial expressions and body language of the people they are talking to. And that’s how they know what’s going on inside a person’s head – by condensing fact from the vapor of nuance.”

And there you have it. What made the Metaverse in Snow Crash insanely successful was technology that could read Real Life facial expressions and body language, then instantaneously and realistically reflect that information on avatars, where other people could see and understand it.

The Metaverse was won through Networked Social Signalling.

“Networked Social Signalling” is the most concise and clear phrase I can think of for this technology. If you can think of a better one, please let me know. Here’s my definition:

Networked Social Signalling: technology that detects and transfers paralinguistic cues (e.g., facial expressions and body language) from human beings to avatars for the purpose of real-time social signal communication and empathy in virtual worlds.
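
In software terms, that definition amounts to a small pipeline: detect a cue, encode it, send it over the network, and render it on the receiving avatar. Here is a minimal sketch in Python of the encode/decode leg; the `SocialSignal` type and cue names are my own illustrative assumptions, not part of any existing system.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SocialSignal:
    """One paralinguistic cue detected in real life (names are illustrative)."""
    kind: str         # e.g. "smile", "frown", "lean_back"
    intensity: float  # 0.0 (barely perceptible) to 1.0 (exaggerated)
    timestamp: float  # seconds since the epoch, for real-time ordering

def encode(signal: SocialSignal) -> bytes:
    """Serialize a cue for transmission over the network."""
    return json.dumps(asdict(signal)).encode("utf-8")

def decode(packet: bytes) -> SocialSignal:
    """Reconstruct the cue on the receiving client, ready to render."""
    return SocialSignal(**json.loads(packet.decode("utf-8")))

# Round trip: what the sender's sensors saw is what the receiver's avatar renders.
cue = SocialSignal(kind="smile", intensity=0.4, timestamp=time.time())
assert decode(encode(cue)) == cue
```

The hard part, of course, is the detection layer in front of this, not the plumbing behind it.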

We ignore this lesson at our own peril. As I wrote in a previous blog post, being human is what happens when human minds touch other minds. And as human beings, we need to deeply understand each other in order for those connections to be made. I think most people forget this critical lesson in Snow Crash simply because we tend to undervalue things we do best.

Especially those things that make us most human. Like understanding the flood of emotional data behind a fleeting smile.

In my experience, most programmers and developers of virtual worlds tend to ignore “soft” scientific concepts like empathy, emotions and the squishy human nuances of interpersonal communication. They primarily want to build tools that allow people to create and exchange *things*. They want to enable cool motorcycles and swordfights. Working on something like networked social signalling tends to get swept under the rug. And it doesn’t help when sociologists and the popular press inform us that folks in the tech industry seem to have an unusually high propensity for lacking instinctive empathic skills.

I see hope on the horizon. Some researchers are experimenting with ways to read body language and facial expressions, reflecting these data onto avatars. Anton Bogdanovych at the University of Western Sydney recently posted this great video of a motion capture suit conveying body language and subtle gestures onto an avatar in Second Life. And new systems like Microsoft Kinect show how this sort of thing might be done without having to wear anything at all.

The future success of the Metaverse is full of other challenges. How to create interoperability. How to build intuitive software. How to create a space that is as democratic as possible, not dictated by corporate interests. How to cultivate virtual economies and business opportunities. Numerous developers and innovators continue to hammer away at all of these.

But what we really need is for developers of virtual worlds to deeply understand the importance of networked social signalling, and for them to build this kind of functionality into their virtual worlds from the start. That hasn’t happened. At least, not yet.

Condensing fact from the vapor of nuance is hard. But it’s something our human minds can do almost effortlessly.

The trick is to use the right interface.

And that interface is ourselves.

-John “Pathfinder” Lester

(Back in June I was invited to speak at a Virtual Worlds Research Workshop in Denmark, hosted by the Virtual Worlds Research Project at Roskilde University and Copenhagen Business School. I spoke about this concept of “networked social signalling” and related topics. You can watch a video of my keynote if you’re interested in more.)

62 thoughts on “How the Metaverse was Won”

“But what we really need is for developers of virtual worlds to deeply understand the importance of networked social signalling, and for them to build this kind of functionality into their virtual worlds from the start. That hasn’t happened. At least, not yet.”

You might be interested in a paper from Nick Yee published in 2007. It doesn’t exactly concern what you are talking about, but it does suggest that real-world social norms mediated by quite subtle non-verbal communication may be observed in even relatively crude virtual worlds like SL.

Like you say, humans are adept at discerning (or perhaps projecting) emotional meaning in interactions across a range of modalities, so motion-capture suits and the like may be overkill.

Thanks for the link. That’s a great paper. I’ve been following Yee’s work for many years, and I remember reading that one when it came out.

What really speaks to me in his findings is how much people *want* to use nonverbal cues, even when they are couched in clunky environments that only partially support them. People adapt to the affordances of the tools at hand, and use them the best they can.

This burning desire to use nonverbal cues even when folks are forced to use clunky tools tells me that folks need better tools. Motion capture suits will never catch on, in my opinion. But something like a future version of Kinect that can detect subtle facial expressions as well as body language without a person having to wear anything? That, I think, will be the tipping point.

“What really speaks to me in his findings is how much people *want* to use nonverbal cues, even when they are couched in clunky environments that only partially support them.”

I think I finally *really* understand why emoticons have taken off like they have. I mean, we’ve all understood why they are popular in more generic “fun” terms, but I think that all of this may provide the true, deeper origin of this human *need*. There aren’t many significant facial cues in a line of text.

The good news here for people who don’t like emoticons is that they will probably just poof when something more expressive comes along, like Bogdanovych’s work.

You’ve hit the nail right on the head. How many years have we heard people say, “Oh, wait, you didn’t ‘get’ what I meant (in chat, or in an IM, or on a forum) because you couldn’t tell I meant it tongue in cheek!” Forget to add a 😉 or a 🙂 to your gibe, jest or sarcastic statement at your own peril.
The sort of technology that could transmit such things as body language, facial expressions, or even more subtle cues such as the shifting of one’s eyes or the tremble of a lip would not only make communication more intuitive and reliable but would add a whole new dimension to things like separated lovers communicating while apart, not to mention that cyber S-E-X stuff that used to be so popular. 😉

Great post, John.
Excellent points. Also, you recently spoke about the “Uncanny Valley,” which is a factor in attempts that don’t quite make it. Your blog-readers may enjoy hearing your thoughts on that topic too.
There has been some very serious work done on facial animation in CG; it just hasn’t made it into Virtual Worlds yet.
There needs to be some additional “interdisciplinary collaboration” on this, so the programmers in Virtual Worlds don’t have to re-invent every wheel that already exists really well or better elsewhere, just not in VEs yet.
Terry Beaubois, Director
Creative Research Lab
Montana State University

Curious how this might be applicable to those of us who like to wear non-human avatars. It seems like either we’d have to use human signals on a non-human body, becoming (even more) cartoonish, or hope that everyone else could read the meanings in our tail wags, ear tilts, scratches, etc. Should be an interesting problem!

I think it would be a lot of fun to play with remapping human social signals onto nonhuman avatars. Our spoken language has evolved over time in large part because of the malleability of sounds. But human social signals have remained mostly unchanged for eons, since our physical anatomy is kinda hard to change. Perhaps the malleability of our physical representation through avatars is the key to finally evolving our language of social signalling? Oooh, that’s an interesting concept…

There are two ways one could approach this as a developer trying to build a commercially successful virtual world. One could really try to facilitate social signalling by refining the sort of facial-recognition technology you describe, but this sounds like it would be hard to do.

A better bet might be to make a world that effectively created the illusion that social signalling was going on. For this you need exactly the opposite of a genuinely responsive avatar; you need an avatar that is a blank screen, upon which users can project the emotional responses they desire. This would produce the appearance that deep connections were being made, which would be enough to psychologically outweigh all the miscommunication that might suggest the opposite was true. Such a world would probably be more popular than the first kind, since it’s always easier to deal with idealised internal objects than with awkward real people.

My issue with this technique is that the social signals are all consciously initiated. Feeling sad? You manually click the frowny face, and your avatar frowns.

But that’s only half of the equation. Some of the most useful social signals are subconsciously initiated. Those fleeting facial expressions we don’t even know we’re making. The subtle shifting of your body in a chair when you are uncomfortable about something in a conversation (a cue I pay attention to particularly in business meetings involving negotiation).

So I think the ideal system would incorporate both consciously and subconsciously initiated social signals. That’s how we use social signals in real life.

And I totally agree that it’s a big technical challenge to build a system that can read human facial expressions in real-time. But that’s a challenge for clever developers. I’m just tossing out the design specs. 😉
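
A system that honors both channels could be sketched as a simple priority merge: a consciously chosen expression always overrides whatever the sensors detected, and the avatar falls back to neutral when neither channel is active. This is a hypothetical sketch; the function and expression names are my own inventions for illustration.

```python
from typing import Optional

def resolve_expression(detected: Optional[str], manual: Optional[str]) -> str:
    """Pick the expression the avatar should render right now.

    A consciously chosen expression always wins; otherwise fall back to
    whatever the sensors detected subconsciously; otherwise stay neutral.
    """
    if manual is not None:
        return manual
    if detected is not None:
        return detected
    return "neutral"

assert resolve_expression(detected="frown", manual="smile") == "smile"  # manual override
assert resolve_expression(detected="frown", manual=None) == "frown"     # subconscious cue passes through
assert resolve_expression(detected=None, manual=None) == "neutral"
```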

I find the phrase “social signaling” to be too broad for what you’re talking about. Social signaling can include things besides body language, like hanging up the phone without saying goodbye, or not sharing your cookie with your friend.

Why not just call it “transmitted body language” rather than “networked social signaling”?

The software to capture real-time facial expressions from live video now exists, to a certain extent. For example, see the software used to make the Benjamin Button movie.

Of course, once you can transmit body language, you can intercept the transmission and replace it with fake / fraudulent body language. For example, you could cause your avatar to exude confidence, even if in reality you are full of fear.

And you’re totally right, using such a system to convey fake or misleading body language is definitely a possibility. Not sure how one could compensate for that. Like most tech, this would be a double-edged sword, that’s for sure.

More thoughts on the “fake body language” issue. Perhaps the solution lies in a business opportunity. A company could come up with a plug-in that uses strong encryption and trusted authentication to guarantee that the other person on the line is using “legit” real-life body language and not a simulation.

“The avatar you are currently interacting with is a registered user of TrustMyBody®. Helping you trust your virtual empathy since 2021.”
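
Joking aside, the “legit body language” guarantee such a service would sell could rest on ordinary message authentication: the capture device tags each signal packet with a keyed hash, and the receiving client checks the tag before trusting the data. A minimal sketch in Python, assuming a shared secret key; a real service would need per-device keys, key management, and tamper-resistant capture hardware.

```python
import hmac
import hashlib

# Shared secret between the capture device and the verifying client.
# Hypothetical: real deployments would use per-device keys and rotation.
DEVICE_KEY = b"example-device-key"

def sign(packet: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Tag a body-language packet so tampering or substitution is detectable."""
    return hmac.new(key, packet, hashlib.sha256).digest()

def verify(packet: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Accept the packet only if the tag matches (constant-time compare)."""
    return hmac.compare_digest(sign(packet, key), tag)

packet = b'{"kind": "smile", "intensity": 0.4}'
tag = sign(packet)
assert verify(packet, tag)                                               # legit signal
assert not verify(b'{"kind": "confident_grin", "intensity": 1.0}', tag)  # forged signal
```

Of course, this only proves the packet came from the device, not that the device itself wasn’t pointed at an actor.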

I have the opposite concern from yours. What kept going through my mind through much of the above was that your proposed system would be useless to me if my own body language and reactions were different from those of my avatar.

For many of us, the point of virtual worlds is to be people we aren’t in First Life. I no more want authentication of “legit” body language than I want authentication of “legit” identity, age, gender, location, or what have you. (Which is to say that I abhor all such suggestions.) I think an intuitive way of indicating any sort of body language of your choice is an essential requirement here.

I agree. I’ve met many people who want their avatar to reflect a completely unique facet of their inner identity. And I’ve met many people on the opposite end, who want their avatar to very closely reflect their real life identity. But I’ve met even more people who are a blend of both. Their avatars are a mix of facets of their inner and outer selves. This mix is different for everyone, and the nature of the mix can change over time.

So I think the trick is to give people as many options as possible. When I’m talking to someone in my real life family in a virtual world, I might want my avatar to completely mirror my real life facial expressions and body language. When I’m out socializing in a virtual nightclub, perhaps I’ll want to remap things a bit, like causing my slightly nervous smile in real life to be reflected as a suave and confident smile on my avatar. Or maybe even turn off this mapping altogether, giving me the ability to manually set my expressions the way I want them to be.

Share what you want to share, when you want to share it. That’s how I imagine a future “networked social signalling” system would absolutely have to be designed to be successful.
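
That “share what you want, when you want” design could be expressed as a per-context remapping table sitting between the sensors and the avatar. A hypothetical sketch; the contexts and expression names are invented for illustration.

```python
from typing import Optional

# Per-context remapping of detected real-life expressions onto the avatar.
# Contexts and expression names are hypothetical illustrations.
REMAP = {
    "family": {},                                       # mirror everything as-is
    "nightclub": {"nervous_smile": "confident_smile"},  # soften a tell
    "private": None,                                    # mapping off: manual control only
}

def map_expression(detected: str, context: str) -> Optional[str]:
    """Return the expression to show on the avatar, or None when
    real-life mirroring is switched off for this context."""
    table = REMAP.get(context, {})
    if table is None:
        return None
    return table.get(detected, detected)

assert map_expression("nervous_smile", "family") == "nervous_smile"
assert map_expression("nervous_smile", "nightclub") == "confident_smile"
assert map_expression("nervous_smile", "private") is None
```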

I’ve never read “Snow Crash”; the imaginings of a fiction writer, no matter how sexy, have zero relevance to the application. I doubt that when you alienated the EDU community by taking the bullet for the legal department et al., you said to yourself, “this is how Stephenson would have handled it.”

It’s never easy to take one for the team, especially when the team has the loyalty of an alley cat.

Good Luck John, if I see a project that you’re suited for, I’ll be sure to pass it along…

When I read science fiction, I treat it as an opportunity to find interesting concepts that could be useful in real life. Science fiction authors extrapolate on current trends, pose novel ideas, and come up with stories about hypothetical outcomes. So while I think some of Stephenson’s ideas about the Metaverse are very useful to real life applications (like I just blogged about), I don’t agree with all of them. I do my best to take everything with a grain of salt. Stephenson’s work is far from a broad template for reality, in my opinion. After all, the primary goal of fiction authors is simply to entertain.

where snow crash most diverges from second life is how everyone who was anyone in the Metaverse had to parade up and down “Main Street.” EVERYONE was there. the problem in second life is that you can’t get everyone there without lagging the sim to death. on the topic of “transmitted body language”… i find an extraordinary amount of understanding of unstated communication occurs in our real Metaverse. surprisingly small nuances are picked up, at least by those intimate friends who you “click” with. (interesting pun, that.) i’m surprised at the frequency of what i call “synchrotype” — a term i invented for the phenomenon where two people chance to type the same exact words at the same instant. this is evidence of how two avies can be on the same wavelength (communicating in the fullest sense of the word). i think it’s interesting too when i wear my random smiler at the same time that i’m wearing my “repulsed” animation. this causes an interesting variation in facial expression, which is totally random, but others interpret the gestures as if they were intended. lastly, i would like to observe that i find Jung’s Synchronicity Principle active all the time in the Metaverse. why do you suppose that is?

Great observations! I too have experienced Jung’s Synchronicity Principle strangely often in Second Life. And I’ve heard lots of anecdotal evidence from other people.

Why is this happening? That’s an excellent question. I think it has something to do with the malleability of your identity and your environment in virtual worlds. There are more channels for conveying your subconscious thoughts, and other people can pick up on them subconsciously.

Hmmm. I think I’ll have to expand on that idea in a future blog post. Thank you for planting the seed. 🙂

As a counselor and a trainer of counselors, I can say with some authority that all humans are not equal in their ability to perceive or send complex information using the display of the face. Yes, those who are good at it can often gain dramatic insights. But I spent a significant part of my life in the Army, and during that time I trained people in the art of Interrogation. What we had to deal with was the fact that many humans are REALLY good at suppressing all of that display.

It’s my contention that SL and other virtual media offer these inexpressive, damped-down folks a place where they are on a more-or-less equal plane with others, and in this environment they have an opportunity to enter into relationships without feeling handicapped. I am not sure I want this option to go away.

Good points. I also agree with you that virtual worlds currently level the playing field for people who are inexpressive in real life, and I think that’s a good thing.

But I had a very interesting experience years ago when I was working with a group of patients dealing with Moebius Syndrome. In real life, they could not use any facial expressions. And they gravitated towards text-based online communities. I created a web-based forum where I provided a large library of emoticons and expressive facial imagery that they could embed in their text conversations. And they loved it.

Technology that only reads real-life facial expressions would be useless and marginalizing to folks like this. So perhaps the answer is to also include a system of tools where people can design their own facial expressions and manually cue them up on their avatars. And give people the option of using either system.

That’s one big reason I was sad when Robin left LL. Are there any women on LL’s board?

I am, however, a bit more optimistic about the next generation. There’s a lot more acceptance (and even sprinklings of positive reinforcement!) of young girls being interested in technology and science. But it’s going to take a powerful force with lots of money at its disposal to change Silicon Valley.

You think it’s just the US? Is the same problem shared in Europe? Japan? Korea? China? Oh, geez, it’ll be embarrassing if China beats the US at gender equality at executive levels, considering the pressure the US puts on China about human rights …

According to Linden Lab’s website, the two women currently in senior management are Robin Ducot (VP of Web Development) and Dana Evan (Board, Strategic Finance and Operations).

In terms of the best countries in the world for gender equality, there are a few different ways to measure it, but Scandinavia is always at the top of the list (Denmark, Norway, Sweden, Finland, Iceland). And the US always lags behind those countries.

My gut tells me the next major innovation in virtual worlds will involve Scandinavia. So I’m keeping a close eye on those folks these days. Perhaps the next “Nokia” of virtual worlds will spring forth with a product that will change the whole landscape.

Also, I was just invited to speak at an upcoming conference in Finland at Åbo Akademi University on “The Prospects of Learning in Second Life.” Really looking forward to networking and learning more about what’s going on in Scandinavia regarding virtual worlds research. I can’t wait. 🙂

This is definitely an important path of development for virtual worlds. But, like using voice instead of text, it’ll have its drawbacks . . . and similar ones. It could lead to confusing situations while multitasking (“No, I didn’t mean to make that face at YOU, I was frowning at what this other guy said in this IM I didn’t want you to know I was having during our meeting!”).

Just as we cope with the background noise of Voice, we’ll have to deal with facial expressions that haven’t got a thing to do with the conversation at hand, but which might be inspired by the cat having a hairball under the RL desk. So, even as the tech moves forward, we’re going to have to develop new social conventions, or better poker faces. I think it’ll work out okay, though I’m sure there’ll be some bumps along the way (remember that poor educator who fell asleep during an event the first week we got Voice and snored for an hour while others yelled, trying to wake them up? I wonder what facial expressions went with that snoring!)

I’m excited about real facial expressions in a virtual world. But the more convinced I am that a big gamechanger like this will happen soon, the more I consider potential pitfalls and how they might be avoided.

As an educator and researcher in experimental economics, Snow Crash got me interested in looking at virtual worlds, and then there was Second Life. I’m curious to what extent the ability to look through your avatar, but at some distance from your self, is what makes virtual worlds appealing. If so, this may affect which parts of our nonconscious selves we might want to expose, and more than likely there will be a time-and-place element to this as well. My research suggests that people start using theory of mind with only a few interpersonal cues, so an important question is how much bandwidth is really needed. Again, I suspect the answer is circumstantial, and this problem will be interesting for developers, because they have some control over the circumstances.

I’m actually working on a post right now that discusses how virtual worlds allow us to share parts of our unconscious selves, and a very interesting side effect of that. Will be up by tomorrow morning. 😉

Late to the party on this response, but did want to say, Path, that your “Networked Social Signaling” is a brilliant description of the inherent value of immersive communication in virtual worlds! Although I understand your interest in capturing and relaying (authentically) subconscious cues to make this interaction more “real”, I have to side with Jonny and Kimberly’s comments above that this is something a lot of people have a problem with in RL, and they value VWs’ ability to mask these personal deficiencies when socially interacting with others. I can’t help but think of the first few chapters of Greg Egan’s book Permutation City, where your avatar is a proxy for social interaction, from fending off salesmen to talking to ex-lovers, and making sure your true emotions are not read like an open book. Yes, the technology would need to capture all your subconscious cues, but it could use a learned model that then proxies these interactions (in real time) to convey a chosen front/appearance. In this regard, today’s crude AO HUDs in SL are appropriately valued.

Great to have you joining the party, my friend. 🙂 I found Permutation City so fascinating on many levels. It’s sadly out of print now, but I know there are used copies always available on Amazon.

Developing a learned model to proxy interactions in real time would be a killer tool. It’s amazing to watch things like AO HUDs in SL paving the way for future innovations like that.

I also love to follow the gradual evolution of all kinds of novel communication technologies throughout history. How things grew from the telegraph to the telephone, for example. I think we can learn a lot by observing these historical events, giving us new insight into the arcs of current technological trends.

Coming events cast their shadows before. Hmmm…I feel the seeds of a future blog post in that proverb…

I defended my dissertation on the effects of instructor-avatar immediacy in Second Life. Telepresence and immediacy are indeed quite an interesting topic, and a challenging one in computer-mediated communication, all the more intriguing in VWs, where we use an avatar as a proxy for human interaction. Research has demonstrated that the more presence an instructor can demonstrate during a CMC course, the more involved the students are (hence other distance-ed issues get addressed: retention, motivation, actual learning, etc.).
On the mechanics of immediacy in VWs, we have a few options that call for some light training on the part of instructors: setting up their gesture shortcuts, and their emote and full-body gesture HUDs and AOs. But instructors should also be trained in the importance of the degree of emphasis of a gesture as well as its frequency. Too frequent, and the instructor loses his or her credibility (in my workshops I explain this as finding the good middle ground on the spectrum between the zombie (no gestures, no lipsync, no body orientation) and the demented pixie (too many gestures, too often)).
oh goodness, so much to talk about. This is really an exciting topic.
Some cool experiments have been done using your usual webcam and software. CamTrax Technologies developed a software-only solution that recognizes any object as an interactive controller via a common webcam (http://www.camtraxtechnologies.com). Their CamSpace has potential for full-body motion impacting objects inworld, but the software is still limited. Kapor Enterprise’s Hands Free 3D is a prototype for a full-body tracking application in SL for a hands-free 3D experience (http://www.kei.com/news.html). I believe that Kapor is working with Linden Lab on something like this now; I need to find that info to confirm it again. And there is also Cassassovici at VR-Wear, who developed an application using a webcam to trigger head motion in his avatar in SL. I have not seen the released product on that one yet. I am curious about what Project Natal by Microsoft will bring to the table. The sensor device is supposed to respond to voice commands, recognize 48 joint points on the human player, and analyze and transfer the facial and body movement to the avatar. The project is supposedly limited to the Xbox 360 console, so I can’t wait to see how this sensor can be adapted to ANY virtual world’s avatars.
I am very pleased with this post. Thank you for bringing this up. Immediacy does not have to be exact in a virtual world to transfer emotions (well…..depending on the context and the human behind the avatar), but it is essential to full human communication.
Looking forward to more posts on this, here.

Ever since my entry into SL I have been fascinated with what body language is available. The bottom line is that people choose how they look, what they wear, and how they move, and those choices reveal tons of stuff about the person behind the avatar. So many people think they can mask or play a role, but often those masks are just our internal struggles or fantasies turned outward on a world that allows you to create whatever is inside of you. Many are totally unaware of how, even in the virtual world as it stands now, you are still revealing so much about yourself: what you think, how you feel and who you are.

Thank you, Pathfinder, it’s nice to find someone who understands the semiotics of body language and their importance. I did a presentation on the semiotics of SL and its culture. There were so many things I didn’t have time to say, but your quotes from the Snow Crash book are things I strongly identified with, and with that particular character. They are some of the things that matter to me, and I would love to hear more and learn more about developments in this area… also considering there will be those who will not want to give their true selves away through such an intimate interface, but will instead still crave what they believe to be anonymity through a less technically developed interface.

I’m the person developing the motion capture interface that John mentions in this post.

Found this intriguing discussion from the link on Hamlet’s blog.
I appreciate your interest, ideas and concerns. Will try to take as much of those on board as I possibly can.

So keep up the good work and help to shape the future with your ideas.

Cheers,
Anton.

P.S. And please don’t get stuck with the feeling that what we develop can only be used with this particular hardware. The reason we chose to start with this expensive mocap suit is that it’s the best and most precise technology available today: every sensor can be tracked in real time with millimetre precision. That’s the perfect testbed for the technology we develop, letting us show the world its possibilities, test the limitations and get the industry interested in the topic. Our work is not going to be limited to just sensors, nor will our technology only work with this particular suit. Any camera-based solution that is precise enough and is capable of sending the motion data through a socket to our software will work just fine. In my experience, camera-based technologies are not yet anywhere near the precision of the suit I’m wearing in the demo, but I’m pretty sure they will be soon.

I’m actually far more interested in cheap-hardware webcam face-tracking software. I think Kinect / Natal will be a total flop due to the cost, inaccuracy, not wanting to be standing for hours while you use a gaming system / PC, and the gigantic amount of space it requires.

However, I look at things like what’s embedded with most Logitech webcams nowadays, and it’s really impressive.

I don’t want to negate the research into mocap – but I think that will always be more for professional simulations and pre-baked animation capture. Well, maybe in 10 – 15 years when the price drops to sub-$500 ….

Pathfinder, first let me say that as a full-time animator for SL, by the time any of this ever becomes mainstream in SL, I’ll be retired, and I’m not that old. I use a camera-based mocap system to make animations in SL. My Animation Override system was created from scratch, and I have a chat gesture system, the ability to walk together with someone, and an interactive greeting system all built into the AO HUD. Each HUD has over 150 animations in it, each customized for the different kinds of avatars that people want to be.

The gesture system is why I wanted to mention this. SL has one of the most amazing systems for creating and using animation. Whoever created it (I would love to know who) is truly a genius. Much like everything else in SL, it is plagued with bugs, which residents have already found and corrected in the code, but LL has yet to fix anything, even when handed fixes on a platter. Anyway, my gesture system works off of chat words. The last time I worked on it, it had 150 words that trigger animations as you chat. Only the parts of the body that need to move for the gesture get animated, while the rest of the body keeps playing whatever mocap stand is playing at the time.

Of course it is still quite young, and working on it seems like a never-ending thing, as the number of words and gestures I could create is endless. Just imagine this for RP players, and it is modifiable. Even working full time in SL, I still don’t have enough time to work on it because of all the customer support, updating, new products, and fixes I have to do. Right now a bunch of animators have to fix their typing overrides because of something that changed in the latest viewer; thank god mine still works.
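[Editor’s sketch] The chat-word gesture mechanism Medhu describes can be sketched in a few lines of Python. The real system is an in-world LSL HUD with 150 triggers; the trigger words and animation names below are invented purely for illustration.

```python
# Hypothetical sketch of a chat-triggered gesture system: scan each chat
# line for trigger words and fire the matching partial-body animation
# while the base mocap stand keeps playing underneath.
# GESTURES and its contents are made up, not Medhu's actual word list.

GESTURES = {
    "hello": "wave_arm",     # animates only the arm
    "yes": "nod_head",       # animates only the head
    "laugh": "shake_torso",  # animates only the upper body
}

def triggered_gestures(chat_line):
    """Return the partial-body animations triggered by words in a chat message."""
    words = [w.strip(".,!?") for w in chat_line.lower().split()]
    return [GESTURES[w] for w in words if w in GESTURES]
```

A speech-to-text front end like the one wished for below would feed its transcript straight into this same lookup, which is why spoken chat would trigger the gestures too.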

What I’d like to see, besides streamed facial expressions, is a speech-to-text and text-to-speech system in SL. When I’m on the web, I never actually read anything; I text-to-speech it. I’ve got work to do, who has time to actually read? Plus, we need this for the Google translator to work, which, as a merchant, I love. Plus, speech-to-text would trigger my chat gestures, lol.

As a merchant who hangs out at my store so that I can talk directly to my customers, I hear first hand what they like and what they want. You’d be greatly surprised at what I hear from them. Also, I’ve done a number of surveys. Believe me, everyone is different and has their own reasons why.

Medhu, I’d like to talk to you more about your AO. Maybe we can meet in SL? Please IM Willow Shenlin in SL. Pathfinder, thanks again for this post. And Cathy, thanks for pointing me to it. Great resources have popped up.

This is all well and good, but until virtual worlds do some “thing” better than other tools, they won’t grow much.

This is indeed very cool and hearkens back to virtual reality, and ideologically it sounds all fine to connect on this level, but to what end?

I would argue that seeing facial nuances does not necessarily translate to a deeper connection; great literary works come to mind…

Lady Macbeth’s “Out, damn’d spot!” has always reached me more deeply than any actor’s rendition of it.

I also have a soapbox about tying a one-for-one correlation between avatar and real person. Whether I am tall, short, skinny, fat, Jewish, Christian, wear glasses, or have my genitals inside or outside my body does not define my talents and passions. Avatars help break down some of those prejudices by not being very close copies of ourselves.

But for some reason, we want that primal physical connection and seemingly fight allowing our spirits to soar beyond human limits. We put real photos on LinkedIn despite having decided, as a nation, that photos on resumes were often used for discrimination.

Avatars let us do more than our bodies can, so why limit that? I only say that because there will come a time when people ask, “How come you are not making facial gestures?”, just like the questions now about why someone doesn’t use their real photo for Gravatar or LinkedIn.

But hey, women are still heavily discriminated against in the US when you look at wages, so why not add some additional means to judge others? (See, I told you it was a soapbox, with maybe a sprinkling of rage.) =p

But all in all, I’d use Kinect or similar. It would be fun to dance in-world and break a sweat in the real one! And for building? If it was accurate enough, that would totally rock!

I think the trick is to remember that you don’t necessarily need a direct correlation between RL facial expressions and your avatar’s expressions. For example, imagine someone with an avatar that looks like a bird. When the person laughs in RL, that could be reflected on the bird avatar by having it flap its wings excitedly.

And if someone doesn’t wish to have their RL expressions mapped to an avatar, they could simply disable them. Or perhaps design completely new expressions that are not based on anything in RL.

We can represent ourselves and our expressions in fluid ways, thanks to the malleability of virtual worlds and software. We can remap them however we wish. The same goes for our identity.
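[Editor’s sketch] That remapping idea can be made concrete in a few lines of Python. The event names, mapping tables, and animation names here are all invented for illustration; the point is only that the RL signal and the avatar’s response are decoupled.

```python
# Hypothetical sketch: one detected RL event can drive different avatar
# behaviors depending on the active mapping, or none at all when the
# user opts out of expression streaming.

HUMAN_MAP = {"laugh": "avatar_laugh", "frown": "avatar_frown"}
BIRD_MAP = {"laugh": "flap_wings_excitedly", "frown": "ruffle_feathers"}

def express(event, mapping, enabled=True):
    """Return the avatar animation for a detected RL expression, if any."""
    if not enabled:
        return None  # user disabled RL expression mapping entirely
    return mapping.get(event)  # unrecognized events simply do nothing
```

Swapping `HUMAN_MAP` for `BIRD_MAP` is the laughing-bird example above; passing `enabled=False` is the opt-out; and a user could supply a mapping of completely new expressions with no RL counterpart at all.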

The fact that we’re already doing a lot of this speaks to the amazing flexibility of our minds and our sense of self.

Thank you John, I could not agree more. Incidentally, just before reading this I had an epiphany about all this talk of how Second Life can be “saved”: simple answer, Linden Lab has to hire a whole bunch of sociologists – FULL TIME!!!