It's the first time in a long while I have been that impressed. The whole thing looks pretty realistic, with no big Uncanny Valley moments I could see. The emotions were extremely well rendered, and what little animation was there did not look too bad either. I would love to play their next game if it's based on this.

Quantic Dream makes amazing tech demos. I've enjoyed all their games since Indigo Prophecy and while I've been impressed with all of them, they tend to be somewhat flawed as *games* as opposed to as marvelous technical feats. But whatever game they put out, I'll buy at launch based on this demo and their track record.

Quote:

The whole thing looks pretty realistic, with no big Uncanny Valley moments I could see.

It's of course very subjective, but to me it was one big uncanny-valley moment. It's unquestionably very good, but I think it's actually too deep in that territory for me to find it really enjoyable. Compared to, say, MGS4 (which had much less emotive characters, but much more effective characterization), I get a little on-edge watching this.

I don't think you leap over the Uncanny Valley by modeling every muscle in the face. I think you do it with strong characterization that makes the viewer want to believe in it (both Metal Gear Solid 4 and Final Fantasy 13 come to mind) and by letting the viewer's mind fill in the blanks.


I think that's a shortcut that may be handy now (and thus more of a band-aid that depends on a greater degree of suspension of disbelief), but eventually digital modelling has to surpass the Uncanny Valley obstacle. It's just a matter of work and time.


There's no "has" about it. I have seen nothing to indicate that human-behavior simulation is an algorithmically solvable problem, at least as we currently understand it. It is in no way a foregone conclusion that this is a solvable problem (and my money would be on the solution being very far away if it is).

You can throw all the triangles and shaders you want at a problem, put as many mocap dots on a character's face as you want, but that doesn't mean you're actually going to adequately simulate a human by doing it.


I don't really agree. Obviously we're not there yet so there's no actual proof that we will get there but there has been a steady approach, and if we follow that graph it seems likely to me that we'll get there eventually. With enough computing power, plus perhaps advances in interface I don't see why we can't model humans realistically... I'm sure it will take a lot more than improvements in polygon counts though, that's not what the Uncanny Valley is about. (and I've read some work that is fairly critical about the veracity of the Uncanny Valley in the first place). And certainly we'll develop more powerful solutions to the nuances of human interaction than mo-cap dots as time goes on.

A "steady approach"? My friend, you know I respect your brainmeats, but I question what glue you are sniffing in this instance. The problem isn't a technical one except insofar as it can support the art necessary to present a convincing human.

If computing power limits were a serious factor, I'd wager we'd have seen significant improvement over time as computing power has exploded--but we haven't really improved in this area in at least eleven years, you realize that? We are not appreciably further along than Square managed to be in 2001, and that was entirely CG-rendered. Go watch The Spirits Within (with a lot of booze because man, did that film suck) and say that you can tell a difference between that and today's state-of-the-art. No, TSW wasn't realtime, but I didn't say "in realtime," because if you take that away we should already have sufficient computing power.

We already have "enough computing power." You want two, three, four orders of magnitude more computing power than you've already got? Spend more time on it, and there you go. That should be a hint that it's not about the computrons, it's about using them. Setting up a tent in the Uncanny Valley in realtime doesn't, in practice, get you any closer to the other side of it in prerender, and that, almost by definition, will have to come first.
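To put rough numbers on the "spend more time" trade (illustrative figures, not benchmarks of any actual renderer):

```python
# A real-time renderer at 30 fps has ~33 ms of compute per frame.
# Giving the same hardware three orders of magnitude more time per
# frame -- i.e. rendering offline -- buys a 1000x compute budget.
realtime_frame_s = 1.0 / 30
extra_orders = 3
offline_frame_s = realtime_frame_s * 10 ** extra_orders

print(f"{offline_frame_s:.1f} s per offline frame")  # ~33.3 s

# Even a feature-length render stays tractable (on one machine,
# before you parallelize across a render farm):
frames = 2 * 3600 * 24  # a 2-hour film at 24 fps
days = frames * offline_frame_s / 86400
print(f"{days:.0f} machine-days for the whole film")  # ~67 days
```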

All the bogoflops in existence mean jack-shit when your CG skeeves people out because its eyes are flat.

The worst part of that was the old, computer gaining sentience/becoming human canard, executed in the usual overwrought fashion that is typical of Quantic Dream. I wish they'd tone down the drama.

Also, why do animators feel that having characters constantly make macroscopic shifts in their eyes and slow, exaggerated gestures makes them more human-like? Because it does exactly the opposite. It isn't just the character in this demo; everyone does it. It just makes the characters look shifty and robotic. Even the lip movements get exaggerated: when Kara says "thanks" at the end, her lip curls in an odd way that makes her smile look like a half-sneer.

Wow. I really enjoyed that. Would love it if instead of a tech demo, they'd just make a game around Kara. That demo would be the intro to the game, and you'd control her when she's sold as a servant to a rich guy. The rest of the game deals with "the entire world is crazy," similar to Planet of the Apes where only the protagonist is sympathetic. It could be like a Blade Runner story where Kara really shouldn't be hunted but she's disassembled anyway.

Quote:

The worst part of that was the old, computer gaining sentience/becoming human canard, executed in the usual overwrought fashion that is typical of Quantic Dream. I wish they'd tone down the drama.

Also, why do animators feel that having characters constantly make macroscopic shifts in their eyes and slow, exaggerated gestures makes them more human-like? Because it does exactly the opposite. It isn't just the character in this demo; everyone does it. It just makes the characters look shifty and robotic. Even the lip movements get exaggerated: when Kara says "thanks" at the end, her lip curls in an odd way that makes her smile look like a half-sneer.

Well, like anything, it's not for everyone. I thought the music was a little overwrought, but overall I found the whole presentation convincing. At the beginning I thought an android demo was distracting, but by the end I cared that the guy didn't break her down.

At the very, very least, we have to remember these are video games. Not books, not film. This is the same industry that gave us trite garbage like Dom searching for Maria, or Master Chief trying to emote with Cortana. And I know for a fact there are gamers who get pissed when you call out those examples as bad writing/acting.

We're at the point where most daytime soaps and movies of the week can write circles around most game developers. And I'm not saying this to be mean; I consider that statement fact. It'll get better over time, but the acting/animation/writing still isn't there.

You realize this is real-time, right? "State of the art" means different things in different contexts. Thinking is fun--do it more.

And frankly, Avatar wasn't leaps and bounds better than The Spirits Within, released in 2001, from a technical perspective (though its art direction was much more appealing--the bright colors helped considerably, and using non-human CGI subjects allowed for a lot more leeway in terms of what was shown).

Now 1) is a pretty large problem. You're looking at things like subsurface scattering to emulate the diffusion of light through semi-translucent surfaces, like skin. After all, a character isn't going to look real if its skin looks like cardboard in sunlight. This is computationally expensive, but also easy to model because it's a solved algorithmic problem - if you have the algorithm, you can implement it. What isn't easy to model, so much, is proportions. It's not as easy as slapping a photo in a viewport and tracing a 3D model out of it. There's a slight asymmetry to most human faces, there are blemishes and imperfections, and it's hard to get these things absolutely right because CG modelling tends to make things too perfect.
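As an aside, the cheapest real-time stand-in for full subsurface scattering is "wrap" diffuse lighting, which softens the light/shadow terminator roughly the way light bleeding through skin does. A toy sketch in plain Python (the wrap factor is just an illustrative knob, not any engine's actual shader):

```python
def lambert(n_dot_l):
    """Plain diffuse shading: hard black past the terminator,
    which is exactly the cardboard-in-sunlight look."""
    return max(n_dot_l, 0.0)

def wrap_diffuse(n_dot_l, wrap=0.5):
    """Wrap lighting: lets light 'wrap' past the terminator, a crude
    imitation of light diffusing through a translucent surface."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# A point on the face turned slightly away from the light:
n_dot_l = -0.25
print(lambert(n_dot_l))       # 0.0  -- hard cutoff
print(wrap_diffuse(n_dot_l))  # ~0.17 -- soft, skin-like falloff
```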

And even if you do get past that block, as many have, the real issue is, even if you make an actual honest-to-goodness 100% replica of the human muscle/skeletal system in your model, you still get an unemotive android that tips into the Uncanny Valley simply by virtue of one thing: interaction and emotional response. It's the small things, like how you move your eyes when you speak, how often you blink, the curl of your lip when you sneer or smirk, the ever-so-slight wrinkling of the skin at the corners of your eyes when you grin, and a million other things like body language and gait and posture: it's easy to fuck these things up if you're not doing full-body mocap.
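One concrete example of those small things: people blink at irregular intervals, very roughly every two to ten seconds, and a character who blinks on a fixed metronome (or not at all) reads as robotic immediately. A toy scheduler with jittered gaps (the interval numbers are rough rules of thumb, not a vetted perceptual model; mocap gets this variation for free):

```python
import random

def blink_times(duration_s, mean_gap=4.0, jitter=2.0, seed=1):
    """Return irregular blink timestamps: each gap is the mean
    plus random jitter, so no two intervals repeat exactly."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += mean_gap + rng.uniform(-jitter, jitter)  # gap in [2, 6] s
        if t >= duration_s:
            return times
        times.append(t)

schedule = blink_times(30.0)
print([round(t, 1) for t in schedule])  # uneven spacing, not a metronome
```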

That's what Avatar tried to do with its performance capture. It succeeded to an extent, but the outcome wasn't perfectly photorealistic - though of course, we don't have a real-world reference to compare a photorealistic cat-thing to, so that's a bit moot. The problem with Avatar, of course, is that the CG stands out because it's mixed with live-action, and as good as CG artists might be, fusing the two seamlessly isn't going to happen until we've modelled every sort of interaction of light and materials and physics and implemented algorithms for all of them.

Which brings us to 2) - we've been able to implement an impressive number of algorithms in CGI, from the aforementioned subsurface scattering to radiosity to caustics by emulating light rays as beams of photons, and soft/hard-body physics and skeletal rigging and all that stuff. But are we close to simulating an objective reality in a viewport with all the laws of physics in tow? Not quite.
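For anyone curious what "emulating light rays as beams of photons" cashes out to: a photon mapper fires huge numbers of photon samples from the lights, bounces each one around the scene, and kills it probabilistically (Russian-roulette absorption); the stored hit points then approximate the indirect lighting. A drastically simplified sketch that only tracks bounce depth, with a made-up reflectance value (real implementations store 3D hit points in a kd-tree):

```python
import random

def trace_photons(n_photons, reflectance=0.6, seed=0):
    """Russian-roulette photon tracing with the geometry omitted:
    at each surface hit a photon survives with probability
    `reflectance`. Returns a histogram of depth at absorption."""
    rng = random.Random(seed)
    depths = {}
    for _ in range(n_photons):
        depth = 0
        while rng.random() < reflectance:  # reflected, keep bouncing
            depth += 1
        depths[depth] = depths.get(depth, 0) + 1
    return depths

hist = trace_photons(100_000)
# Geometric falloff: ~40% absorbed on the first hit, ~24% after one
# bounce (0.6 * 0.4), ~14% after two, and so on.
print({d: hist[d] for d in sorted(hist)[:4]})
```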

We're emulating that stuff as much as we can, but there's always going to be something slightly off when you're fusing CGI on top of live-action footage: we can't simulate all 70 trillion photons in that real-world scene hitting our CGI character unless we have a supercomputing Dyson Sphere doing the goddamn calculations, and we can't easily handle physical interactions like grass underfoot getting trampled by our fake character's feet.

So yeah, there's always going to be some element of the uncanny valley to a feature like Avatar. But if you model the entire scene from scratch, at some point the fidelity of the simulation will be just good enough for most people to ignore the slightly uncanny parts. We aren't there yet, I don't think - even Pixar, with all its research, knows well enough to make its characters stylised and not photoreal.

It's a problem that Avatar acknowledged far better than TSW (which was mostly keyframed, IIRC), by capturing as high-fidelity a version of human performances as we can currently, and then transposing them to CG models, but it went about it in the wrong way. It's too easy to pick out the differences between CG and real-world scenes because the fidelity of our physical simulation just isn't there yet.

What QD is doing with Kara is pretty much the way the issue's going to be tackled in the future, in real-time or with off-line rendering: an extremely high-resolution performance capture, but with the entire scene existing purely in an artist's renderbox somewhere.

Sorry for the braindump. The topic's something I've always been interested in.