No kidding. It's less that they need faster animation rendering than that they need better stories.

Learn from your competition - don't crank out crap just because it'll "sell"; make sure you're on the right track to make something good every time. Pixar hasn't had a dud yet, and they freely admit to taking a number of their stories back to formula because someone said "hey, this doesn't seem to be working" rather than pushing ahead with something crappy.

Wasn't it Jobs with Pixar that gave an interview about Disney, something about them only being able to mine past IP and come up with "crap like squirrels" or some such? All this will mean is Disney can crank out "(insert name of past character) story (insert number)" even faster. Wow, I'm soooo grateful Intel, really.

Thanks for making dreck producible at the speed of light; now they'll be able to crank out 5 sequels before the first line of toys is done being made out of PCBs in China! Thanks, Intel.

I think this has the potential to make animation more like live-action film. A director working with live actors can order more takes if he's not getting the performances he needs from his actors or the shot's composition is less than perfect. This system sounds like it might give animators the same direct feedback, allowing them to more easily compose those perfect shots.

The sad thing is that we are talking Intel here. Both AMD and NVIDIA are so far ahead of Intel when it comes to rendering technologies that you have to wonder if Intel is just throwing money at DreamWorks to experiment with stuff Intel has. Do you REALLY think that DreamWorks would have initiated that relationship when there are better alternatives available?

You honestly don't believe they have the entire thing storyboarded and written BEFORE they even get as far as starting to render?

Written, yes; storyboarded, probably not. In fact, storyboards are on the way out - the new hotness is animatics: basically low-res renderings of the scene. Unlike a storyboard, which is a flat, static, comic-like picture, an animatic actually details the motion, which can include stuff like camera angles and such.

But yes, given that South Park has a one-week turnaround time for each episode, this appears to be a technology that they could take huge advantage of. That would give them a lot more time to polish the turds they drop off on people's heads.

Sure we can! While it was cute and good at stringing together pop culture references, frankly that is pretty much ALL it was: a long string of pop culture references. Notice how something like Bambi or Fantasia didn't NEED constant pop culture references? That is because when you have a real quality movie you don't NEED a constant stream of them!

In 30 years or less Shrek will seem as hacky and dated as those 60s TV shows that threw in "daddy-O" and every other piece of slang that was cool at that second.

Yeah, my first thought was "Why are animators rendering their sequences instead of just using the GPU viewports?"

Then I remembered that he had been talking this up in regards to Larrabee last year. There's certainly a lot of room for improvement in parallelism. I'm working on the side for Caustic Graphics, which is also working on a hardware card to make rendering more parallel and efficient. And I'm sure they would also love to get their hands on Knights Corner. But I don't know that DreamWorks is "revolutionizing" anything here.

For most purposes, GPU viewports are fine. It's just lighting where having a fast render would be helpful. Lighting is largely a process of adjust a little, rerender, turn that light up, rerender, move that a little right, rerender, put up the fog density a tiny bit, rerender... until you have it looking just perfect. You can't viewport it because viewports can't render lighting and fog in perfect detail, and the only way to make things just right is to keep trying until you perfect it.
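
To make that concrete, here's a toy sketch of that feedback loop - every name in it is invented, and this is nobody's production code. The only real point is that a preview at a low sample count makes each adjust/rerender step cheap, and only the final pass pays full price:

    // Toy sketch, not anyone's production code: the adjust/rerender
    // lighting loop, assuming a renderer that trades samples for speed.
    #include <cstdio>

    struct Scene { float key_intensity, fog_density; };

    // Hypothetical renderer: fewer samples = faster, noisier preview.
    void render(const Scene& s, int samples) {
        std::printf("render: key=%.2f fog=%.3f (%d samples)\n",
                    s.key_intensity, s.fog_density, samples);
    }

    int main() {
        Scene s{1.0f, 0.010f};
        render(s, 16);           // fast, noisy preview
        s.key_intensity = 1.2f;  // "turn that light up"
        render(s, 16);           // ...rerender
        s.fog_density = 0.012f;  // "put up the fog density a tiny bit"
        render(s, 16);           // ...rerender
        render(s, 4096);         // looks just perfect? pay for the slow final pass
    }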

The quote was that it takes animators a few days per 2 seconds of animation. *Animators* shouldn't need more than GPU lighting.

Interactive lighting is a godsend. I just used it on a TV spot a couple months ago for the first time with Brazil 3 and it was like upgrading from a tricycle to a car. Now I never want to go back to adjust, rerender, adjust, rerender ever again.

Yeah, since Blinn's Law [wordpress.com] still applies here. Sure, rendering in real time is a boon for animators since they can use it for previews, but the final renders/composites will probably still take way too long for real time.

Why are you focused on animated movies? Think about how this technology can be extended out.

South Park. They have a one-week turnaround per episode.

How about if they can scale this technology out to the gaming market? The article points this out. Rendering is the roadblock to obtaining better and more realistic environments; you need progressively beefier GPUs to render more complex environments. Maybe if this technology takes off you will be able to see a reduction in the power requirements of GPUs that is more than incremental.

So management finally discovered SMP and threading about 20 years or so after it was introduced onto the types of systems all of these outfits have been using since the beginning of time?

Sedate this fellow before he starts to perpetrate some more "management".

No need.

As good animation is dependent upon good writing and good voice work, all this could do is bring even lower-quality animated items to the masses. Seems to me, when I look over the total production time of any feature (despite the apparent cookie-cutter approach to movies), the writing takes up a tremendous amount of time. Perhaps a gifted ad-libber could do something well, making it up as he/she goes along, but you don't see a lot of those.

Much better. My six year old has grown tired of Scooby Doo. She much prefers Bill Nye and Fetch! with Ruf Ruffman. On a side note, at least I can stand to be in the same room when she gets to watch TV.

Great example.

As another, I'd like to direct your attention to the last Warner Brothers / Looney Tunes cartoons, up to 1969. I have many of these on DVD and they're pretty depressing to watch compared to the wonderfully thoughtful cartoons of the 40s and 50s. (The Hillbilly Hare square dance was inspired by the square-dancing craze in Hollywood at the time, attended by many of the studio's production crew - inspired!)

Just gluing together pretty scenes with no interesting narrative won't do much, but faster iteration in the right hands certainly can't hurt.

Take the counter-example to DreamWorks: Pixar. The average Pixar movie takes 4 years to make. The first 3 years are spent on plot, story, and character development. Voice acting is brought in at the end of year 2. Rendering and animation are done in the last year.

On the other hand, which studio was it that brought us the Toy Story movies, and Up? While DreamWorks was producing... well, more Shrek sequels. Pixar really doesn't like doing sequels - the Toy Story movies and Cars 2 are the only ones.

I think what they mean is that most of these animators, who may have had a top-of-the-line 4-6 core CPU that could only do 60-100 GFLOPS per socket, are now going to have access to a 50-core add-in card that can do 1 TFLOP - roughly a 10-16x jump.

When you get a sudden change like that, you need some professional help to take advantage of it.

Well, I think they're referring to writing code for Intel's MIC (Knights Corner), which is pretty different from writing for a single CPU. It acts more like a cluster of networked computers than a multicore CPU.

Also, even with threading and SMP, renderers have problems with parallel tasks once you start ray tracing, since each ray spawns more rays, and those rays spawn rays; keeping track of them all and loading the correct memory into the L1, L2, and L3 caches causes delays. You might start with 12 rays which quickly fan out into thousands.
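
Back-of-envelope sketch of that fan-out (illustrative numbers only - I'm assuming each hit spawns about 3 secondary rays):

    // Counts rays in a recursive ray tree: each hit spawns `branching`
    // secondary rays (reflection/refraction/shadow), down to `depth` bounces.
    #include <cstdio>

    long rays_in_tree(int depth, int branching) {
        if (depth == 0) return 1;                 // just the ray itself
        return 1 + branching * rays_in_tree(depth - 1, branching);
    }

    int main() {
        for (int depth = 1; depth <= 6; ++depth)  // 12 primary rays to start
            std::printf("depth %d: %ld rays\n", depth, 12 * rays_in_tree(depth, 3));
        // depth 6 is already ~13,000 rays, pointing every which way in memory --
        // exactly the access pattern that thrashes the L1/L2/L3 caches.
    }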

It is *way* harder than you imagine. One of the smartest people I have ever met is very involved in this -- one of the principal engineers. It is an incredibly tough thing to achieve, and if anyone can pull it off, he can.

When the PS3 and Xbox 360 came out, their graphics were higher quality than anything but the most up-to-date gaming PCs could produce. Of course, these production-quality animators are using server farms to render movies, so we're really talking apples and oranges here.

If you're unfamiliar with the technique, watch the drug film A Scanner Darkly.

This would be completely appropriate for a live event like the republican debate; I don't think those guys make sense without being on "substance D" anymore. (note I used to be a hard core "R" before they went completely batshit insane a decade or so ago, so I'm allowed to talk about "us" using that kind of language)

I think it was the late 90s when in Finland we had this TV show involving a talking dog, "Galilei", to which kids could make phone calls and solve puzzles while the dog's facial expressions matched the actor's. Kinda cool. They thanked Silicon Graphics in the ending credits. :)

Real-time rendering can only be a productivity tool for the human animator pipeline, not a production rendering technique. Otherwise your render farm is 100% idle unless someone is changing something, so you're wasting virtually all the time and rendering power that could be going towards better quality. Slow rendering gives the hardware something productive to do while the humans are thinking/sleeping.

I think Avatar claimed they spent something like 40 hours on each frame, so no way are we going from hours per frame to real time.

Then again, the 3D in Avatar was so good I would describe it as hallucinatory. Remember, it wasn't filmed on location, because its location didn't exist anywhere except in a computer. 40 hours per frame for that actually seems reasonable when you look at the final product.

The animation in The Incredibles or other animated films isn't anywhere close to the graphics of Avatar, and I'm sure that kind of animation could be rendered in real time. As it is, Team Fortress 2 looks very much like The Incredibles already.

Skilled animators are very expensive. If you can throw expensive hardware at the animators and give them a 50% increase in productivity, you can lay off 1/3 of your animation team - which will easily cover the cost of the hardware.

Or, the current state of the art becomes the live GPU render preview they work from, and the server farm still renders an even higher-quality animation. 40 hours per frame is probably an average of all the human hours spent - including modeling and texturing - which won't be sped up by this. You can imagine that if it really were 40 machine-hours per frame, anything over 30 seconds would take years just to render (30 seconds is 720 frames; at 40 hours each, that's 28,800 hours, over 3 years on a single machine).

7 != 70.
1.7x = less than the gain you would expect any technology to make every year, i.e. not newsworthy.
70x = puts them 5 years ahead, in most industries dramatically changing their competitiveness, i.e. news.

Is there a video? I just see two articles that barely even explain the idea being covered. I don't see a video, just a picture. I don't think the author knew what he was talking about; he seems to be confusing rendering time and production time (much as he confused percents and multiples). They certainly don't go into any detail on how this increase is achieved, other than "we're working with Intel and we've written new software".

The number of very expensive US staff needed with the skills to get the "math" or fantasy art right?
The detail needed to make it 4K- and 8K-ready needs many new Intel boxes?
Or is it just a huge set of current products linked together with some old distributed-computing protocol at very new hardware prices?

Since the last Blender Conference in October, Cycles has been announced as the new renderer for Blender. This is a realtime renderer that uses progressive rendering and can render in realtime on CPU, GPU, or both! This can also be done for animation.

But this has been possible for quite a while, since Blender can also do OpenGL rendering in realtime, and you can use the BGE as a realtime viewport. So: been there, done that, open-sourced it.
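
For anyone wondering what "progressive rendering" means in practice, here's the general idea in a few lines (a generic sketch, not actual Cycles code):

    // Generic progressive rendering: keep averaging cheap 1-sample passes
    // into an accumulation buffer, so the viewport shows a noisy image
    // immediately and refines it the longer you leave it alone.
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int main() {
        std::vector<float> accum(4, 0.0f);                // a tiny 4-pixel "image"
        for (int pass = 1; pass <= 64; ++pass) {
            for (std::size_t p = 0; p < accum.size(); ++p) {
                // stand-in for one path-traced sample of pixel p
                float sample = std::rand() / float(RAND_MAX);
                accum[p] += (sample - accum[p]) / pass;   // running mean
            }
            // display(accum) would go here; noise falls off ~1/sqrt(pass)
        }
        std::printf("pixel 0 settled at %.3f\n", accum[0]);
    }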

It's better than it used to be. I'd put it on par with Maya and Max as far as 'user-friendliness' goes... which is appropriate. If you want a super easy 3d program, there is always Anim8or [anim8or.com]. Beyond that there is the Japanese program DOGA [doga.co.jp], which is probably as easy as it will ever get for 3d graphics.

The more CPU time you've got, the more quality you get. Real-time animation is not going to look as good as non-realtime animation. However, the good news from the article summary is that CPUs, and the algorithm they devised, are now fast enough to deliver real-time rendering that DreamWorks finds good enough to call animation quality! Nice.

And that is the flaw: this obsession with what a CPU can do. GPU power isn't just about rendering; a GPU has far greater computational power for many things than the CPU at this point (due to its many parallel pixel pipelines). You want to see something? Try checking the performance of Folding@Home on a GPU vs. the best CPU version, and you will see why the CPU isn't always the best place to get work done.

The problem with GPUs is how they get their performance, and how that interacts with branching.

GPUs organize their 1000+ cores into groups. Each group gets streamed the same instructions. Say you have 1600 cores and 8 groups with 200 cores per group. If even one core has a branch within a group, the other 199 cores in that group stall and wait for that core to merge back into the common instruction stream.

You can see how GPUs are great for some types of calculations and horrible for others. Intel's many-core design uses fewer, fuller cores that can each branch independently, so it doesn't stall the same way.
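
If you want to see the stall in code, here's about the smallest CUDA kernel that triggers it (a sketch; on real hardware the "group" is a 32-thread warp, not the round numbers above):

    #include <cuda_runtime.h>

    // Even/odd threads take different paths, so within each warp the two
    // sides execute one after the other instead of in parallel.
    __global__ void divergent(float* data) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i % 2 == 0)
            data[i] = data[i] * data[i];   // half the warp runs this...
        else
            data[i] = data[i] + 1.0f;      // ...while the other half waits its turn
    }

    int main() {
        float* d;
        cudaMalloc(&d, 1024 * sizeof(float));
        divergent<<<4, 256>>>(d);          // 4 blocks x 256 threads = 1024
        cudaDeviceSynchronize();
        cudaFree(d);
    }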

...Huh? Any cartoons which don't use 3D with a toon effect to look like 2D will be hand drawn. Whether it's on paper/cels or through the use of a tablet is another matter. It's not like we've advanced to the point where we don't actually need artists drawing things anymore.

No matter how fast a computer is, it is not going to speed up how long it takes an animator to animate.

The animator still has to consider the movement they want to achieve, move the rig, adjust keyframes, etc., etc. This will always take time to do.

I'm not sure that rendering is necessarily the thing that most needs speeding up for the artist. What would help the artist more is speeding up things that require simulation, such as hair, particles, fluids, physics, etc.

There will never be a way to render out high-quality renders in real time. There isn't enough processing power in the world, or RAM, to catch up to the production demands.

Global illumination models with indirect lighting, where multiple bounces occur throughout the scene... through glass, through motion blur... etc. It's just not going to happen. The amount of real-time displacement mapping that is required is simply not possible. The RAM and processing demands are going to be ridiculous. It's why we can't render them out in real time today.

We're slowly creeping up on photo-realism in games. Once we hit that point, it won't matter if "production" quality is mathematically better, because the brain won't be able to perceive the difference.

At some point the quality of the pixels will all be the same; it's the number of pixels you can push that will make the difference. Production will have a benefit here, as you can pre-render your scenes non-realtime at crazy high resolutions, but at some point we will also hit the limits of the human eye. Once we do, that difference disappears too.

This isn't a problem for animators. Getting the models and motion right requires only the quality of a good graphics card. It's tweaking lighting, shaders, and post effects, the "look", that needs many repeats of full rendering. That's done by colorists and post people.

Just imagine being able to put the camera in any spot you like. Hell, walk around the movie first-person style. Explore the world, triggering the rest of the movie as you go along? Branching story paths? Triggerable easter eggs?

There's so much potential for this. Game and animated movie will start to blend even more, at high quality and at any desired resolution.