
snowtigger writes "The day we'll be doing movie rendering in hardware has come: Nvidia today released Gelato, a hardware rendering solution for movie production with some advanced rendering features: displacement, motion blur, raytracing, flexible shading and lighting, a C++ interface for plugins and integration, plus lots of other goodies used in television and movie production. It will be nice to see how this will compete against the software rendering solutions used today. And it runs under Linux too, so we might be seeing more Linux rendering clusters in the future =)" Gelato is proprietary (and pricey), which makes me wonder: is there any Free software capable of exploiting the general computing power of modern video cards?

For those who don't remember, BMRT was a really cool RenderMan-based renderer that Pixar had some sort of love/hate relationship with. IIRC, they used it, yet they sued the company. In the end nVidia bought them, though it wasn't clear why at the time.

And here's a nice Siggraph [siggraph.org] presentation of some of the capabilities of BMRT.

Interestingly, BMRT was free as in $$$ but not as in Free Software. This was one of the first software packages that made me recognize how big that distinction is. (A free-as-in-Free-Software program would probably have continued on, as people could have coded around some of the disputed intellectual property; a free-as-in-$$$ program could be killed with the carrot and stick of a lawsuit and a buyout opportunity.)

See here [aqsis.com] and here [sourceforge.net]. More and more pieces of the moviemaking toolchain are available openly; it's only a matter of time before someone adds a GUI wrapper to integrate it all. Will they dare call it Raxip? (-:

Not only with hardware manufacturers/drivers, but also general software. ISVs are getting annoyed by Microsoft's dominance of the desktop market and, through that, its (heavy) influence on desktop software. It's not inconceivable that in a decade, Microsoft could control every aspect of the standard desktop PC and desktop software market. At the moment, the only really strong ISVs in their respective areas are Adobe, Corel, Intuit, Macromedia, Oracle, and a few specialized companies. Expect a big ISV push towards a "neutral" platform, like Linux or FreeBSD. Windows is too big to stop supporting, but ISVs would be smart to at least try to carve out a suitable alternative and avoid being completely dominated by Microsoft. If things don't go the way of a vendor-neutral platform, all most ISVs will be able to hope for in a decade is being bought out by, or making deals with, Microsoft.

How do you know ISVs are getting annoyed? Do you go to lunch with ISVs every other day?

No, but I work for a medium-sized ISV that deals with Microsoft (we buy bulk embedded XP licenses for use in custom gaming machines), so I can tell you a few things about how Microsoft deals with customers. They have actually tried to offer us better deals if we discontinued our Linux solutions and marginalized our dealings with our Russian partners, who produce hardware and software for use with Mandrake Linux 9.x in gaming solutions. (Sounds impossible? Think again.) I can only imagine how much more underhanded Microsoft is when dealing with bigger ISVs.

Not only are you crudely generalizing, I think your point is actually not sound at all. You think Adobe cares about Microsoft dominating?

I'm sure Netscape and Sun didn't care either, until Microsoft took them out of the market. You are really insulting the intelligence of the Adobe executives if you think that they haven't considered this possibility or what they could do to avoid something similar happening.

Naturally. If the product is intended to be cross-platform right from the beginning, then the developers would prefer to work on Linux and port it to Windows rather than the other way round, so you can expect the Linux version to be released slightly earlier. What is perhaps a little surprising is that they announced it before the Windows version was ready.

It makes sense, really... If you're building an app that's intended to be used by clusters, why would you write it for XP? Having to spend an extra $100 per node really starts adding up when you've got several hundred or thousands of machines...

Sadly, the hardware accelerations that consumer 3D graphics cards provide aren't useful for the high-quality renderings needed for film and television. The needs of games are just different, partially because of the need to render in realtime. So I doubt whether there's much scope for free software to make use of them for that purpose...

But NVIDIA's Quadro lineup *IS* just PCB-hacked consumer cards. Some PCI ID (or BIOS, for the NV3x cards) hacking can get you a Quadro out of a GeForce easily, minus the extra video memory present on the Quadros. I've done this heaps of times with my GeForce4 Ti 4200 8x (to a Quadro 780 XGL and even a 980 XGL), and I believe people have done it with the NV3x/FX cards as well.

This film renderer is different. It uses the GPU and CPU together as powerful floating-point processors (not sure if Gelato does anything more than that).

How do you know this? Did you run benchmarks comparing it to a real Quadro?

A couple of years ago I got a GeForce4 4800 and a Quadro4 900 XGL. I performed the required resistor mod and flashed the GeForce4 with the Quadro4's BIOS.

Sure, the GeForce4 got recognised as a Quadro4 900 XGL in the Windows display control panel, but when I ran benchmarks like SPECviewperf it was obvious the modded GeForce4 did NOT perform like a real Quadro4 900 XGL. Capabilities like the HW-accelerated clip planes did not see

This is not really correct. The graphics cards Gelato uses are consumer hardware. This doesn't mean that the image is generated directly by the card! The 3D hardware is used as a specialized, fast, parallel calculation unit, especially for geometric calculations (matrix-vertex multiplication, essentially) and other stuff. This (of course) means that the rendering is NOT done in realtime.

Actually, there have been reports of using such hardware to produce results similar to the high-end, software-based methods like those used in films. The trick is to break the job (typically the complex RenderMan shaders) into many passes, and feed them to the graphics card to process. By many passes, I mean 100~200 passes. The outcome is something like rendering a frame in a few seconds (we're not talking about real-time rendering here), which is MUCH faster than the software-based approaches. The limit in the past was that the color representation inside the GPUs used a small number of bits per channel, and with lots of passes over the data, round-off errors would degrade the quality of the results. But now nVidia supports 32-bit floating-point representation for each color channel (i.e. 128 bits per pixel for RGBA!), and this brings back the idea of using the GPU with many passes to complete the job. Please note that in the film and TV business we're talking about large clusters of machines and weeks of rendering, and bringing that down to days with a smaller number of machines is very big progress.
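For a feel of the round-off problem described above, here's a toy numerical sketch (plain Python, not actual shader code; the pass count and the accumulated value are illustrative assumptions): accumulate a small contribution over 150 passes, once quantizing to 8 bits per channel after every pass the way older GPU framebuffers effectively did, and once in full float precision.

```python
# Toy sketch of the round-off argument: accumulate a small value
# over many passes, quantizing to 8 bits per channel after each
# pass (as older GPUs effectively did), versus keeping full float
# precision throughout. Numbers are illustrative, not from Gelato.

def quantize8(x):
    """Round a [0, 1] color value to the nearest 8-bit step."""
    return round(x * 255) / 255

passes = 150                  # "100~200 passes" per the comment
contribution = 0.7 / passes   # each pass adds a tiny amount

acc8 = accf = 0.0
for _ in range(passes):
    acc8 = quantize8(acc8 + contribution)  # 8-bit framebuffer
    accf += contribution                   # float framebuffer

print(f"8-bit accumulation: {acc8:.4f}")   # drifts well below 0.7
print(f"float accumulation: {accf:.4f}")   # ~0.7000
```

With these numbers the 8-bit path loses over 15% of the signal, which is exactly the degradation described; with 32-bit float channels the drift disappears.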

Exactly. Floating-point colour channels provide a larger dynamic range, which prevents the banding and saturation you see when doing multiple passes with, say, 8-bits/channel colour. This has been the crucial development enabling this. Conditionals and looping for shaders weren't generally required, as Renderman-style languages can be decomposed into a series of standard OpenGL operations. (I remember researching all of this for an essay I wrote on this sort of stuff at uni a couple of years ago - interesting stuff.)

As another person has already said, you can only have one AGP slot. PCI Express is the next-generation, high-speed replacement for PCI. Remember how the first couple of generations of 3D accelerators were PCI-based?

Besides, you don't need killer bus bandwidth with this because you're not trying to pump out 100fps using a couple of hundred megs of geometry and textures on a card with only a hundred or so megs of memory. (That means you have to send loads of data over the bus 100 times each second.)
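Rough numbers make the point (the scene size here is a hypothetical figure, not a measurement): re-sending a whole scene every frame at game framerates needs far more bandwidth than AGP has, while an offline renderer only crosses the bus once per frame.

```python
# Why realtime games can't stream the whole scene over the bus each
# frame, while an offline renderer can afford to. Figures illustrative.
scene_mb = 200       # geometry + textures per frame (hypothetical)
fps = 100
needed = scene_mb * fps   # MB/s if the scene is re-sent every frame
agp_upstream = 2000       # ~2 GB/s theoretical AGP 8x upload

print(f"required:  {needed} MB/s")        # 20000 MB/s
print(f"available: {agp_upstream} MB/s")  # an order of magnitude short
```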

The problem was that they got random rendering artefacts when rendering on different cards - different colors etc. - and couldn't figure out why.

I have seen this problem in software renderers as well. The problem seemed to be that part of the rendering farm was running on different processors (some Intel, some AMD, and many different speeds and revisions), and one of them supposedly had a little difficulty with high-precision floating point and computed the images with a greenish tone. It took over a week to figure this one out.

I used to see that with 3ds4 as well when I was rendering this [couk.com]. One was a Pentium and one was a Pentium Pro.

Ah, those were the days. We were on a deadline and rendered it over Christmas. After four hours the disks would be full and it would be time to transfer it to the DPS-PVR. I spent six days where I couldn't leave the room for more than four hours, sleep included. Was pretty wild!

VH1 viewers voted it second-best CGI video of all time, behind Gabriel's Sledgehammer, so I guess it was worth it!

It's certainly possible that different hardware or even different drivers on the different machines doing the rendering can create subtle (or not-so-subtle) differences in each resulting frame, but standardising the hardware and drivers across machines should solve that completely.

Yes, but it's not a good idea to tie one's business to a particular hardware company, let alone one product and one driver.

Also, many of the people who make the money decisions will balk at making particular changes.

Also, it's common at animation shops to co-opt desktop CPUs when people aren't working - you'd either have to give this up or standardize the hardware/software in your renderfarm with the hardware/software the desktops use (which would be silly).

My question is, if a GPU on an AGP card can send the render results back to the system, what is the point of PCI-Express? That was supposed to be one of the "enhancements" of PCI-E. PCI-X was said to have the same limitations as PCI and AGP.

The opening quote from the article's poster is ignorant. Movie rendering has been done in hardware forever. He seems to be mixing up doing rendering in hardware with rendering on the fly in a video card.

What we have here is a slight mix of the two, but by no means anything new on the market. It's only letting you use your Quadro for movie rendering acceleration if you already have one. I certainly would not buy one for this purpose. I imagine it's still in

Is there any Free software capable of exploiting the general computing power of modern video cards?

Well, since they released "a C++ interface for plugins and integration" for Gelato (ice cream in Italian, btw), this probably means that free software can (and eventually will) support all these high-end functions... or am I completely wrong?

For instance, just imagine Blender with a Gelato plug-in for rendering... hmmmm... Now I understand why they named it "Gelato"...

The AGP bus has asymmetrical bandwidth. Upstream to the video card is something like 10x faster than downstream to the CPU. So you can dump tons of data to the GPU, but you can't get the data back for further processing fast enough, which defeats the purpose.

You are missing the point that nVidia will be using a bridge on its first PCI Express setup. The chips will basically talk to the bridge over AGP and will suffer from the same asymmetry problems today's AGP cards suffer. Only the second generation of PCI Express cards from nVidia will be native solutions, and those will use the bridge the other way around (to use a PCI Express chip inside an AGP system).

I believe the problem is not the AGP bus, but rather the GPUs, which were NOT designed in the first place to transfer anything back to memory. In normal 3D applications you just feed the graphics card all sorts of data - textures, geometry, shaders... - and the result goes out through the VGA connector! You don't need to give it back to the CPU or memory. The GPU and the memory architecture of the graphics card are simply designed to receive data at the highest speed from the CPU.

However if the GPU can be left to crunch for most of the time and return say a row of pixels at a time I doubt the 1/10th speed of the AGP bus downstream would be a big problem. For complex scenes the input (textures, geometry, shaders) may well exceed the output (pixels) in terms of data.

You're right that the AGP port is asymmetric, but this is unlikely to be a bottleneck if they can do enough of the processing on the card.

For 3D rendering, especially non-realtime cinematic rendering, you have large source datasets - LOTS of geometry, huge textures, complex shaders - but a relatively small result. You also generally take long enough to render (seconds or even minutes, rather than fractions of a second) that the readback speed is not so much an issue.

Upload to the card is plenty fast enough (theoretical 2 GB/s, but achieved bandwidth is usually a lot less) to feed it the source data, if you're doing something intensive like global illumination (which will take a lot more time to render than the upload time). Readback speed (around 150 MB/s) is indeed a lot slower, but when your result is only e.g. 2048x1536x64 (FP16 OpenEXR format, 24 MB per image), you can typically read that back in 1/6 of a second. Not to say PCIe won't help, of course, in both cases.
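The parent's arithmetic checks out; as a quick back-of-envelope verification (using the parent's own figures, which are themselves approximations for 2004-era hardware):

```python
# Back-of-envelope check of the readback numbers quoted above.
width, height = 2048, 1536
bytes_per_pixel = 8        # FP16 RGBA: 4 channels x 2 bytes = 64 bits
agp_readback = 150e6       # ~150 MB/s AGP readback (parent's figure)

frame_bytes = width * height * bytes_per_pixel
frame_mb = frame_bytes / 2**20
seconds = frame_bytes / agp_readback

print(f"frame size: {frame_mb:.0f} MB")   # 24 MB
print(f"readback time: {seconds:.2f} s")  # ~0.17 s, about 1/6 s
```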

Readback is more of an issue if you can't do a required processing stage on the GPU and have to retrieve the partially-complete image, work on it, then send it back for more GPU processing, etc. But with fairly generalised 32-bit float processing you can usually get away with just using a different algorithm, even if it's less efficient, and keep it on the card.

Another issue might be running out of onboard RAM, but in most cases you can just dump source data instead & upload it again later.

So you can dump tons of data to the GPU but you can't get the data back for further processing fast enough, which defeats the purpose.

Fast enough for what exactly? What purpose does that defeat?
Compared to the cost of (eg) sending a frame across a LAN, the time taken to pull a frame across the bus would be utterly, utterly insignificant.

Still, I think it's a good idea to mention when software for Linux is proprietary. That just saved me the trouble of clicking the link and checking it out ;).

ps: I recently visited a project trying to "harness the power of GPUs". I think it was something like seti/folding/ud/... but it tried to have all the calculations done by the GPU of your 3D card. If someone knows what I'm talking about, please post it - I can't find it anymore ;)

I've toyed with shaders some and implemented a system for image processing on GPUs. Quite a lot of fun really, though we didn't do any comparisons with CPU to see how much faster it was. (That project isn't published anywhere though.)

If I had mod points I'd mod you up. I'm getting tired of the slashdot mentality that everything has to be free. If a company makes a professional grade product, and there is demand for that product, they should be able to be rewarded for their efforts.

Um... depends on what you're looking for/expect. This isn't intended for you to buy and use at home. This is more likely for smaller developers (big developers usually write their own... think Pixar). Professional-grade equipment is all expensive. The first common digital nonlinear editor was the Casablanca, and with an 8 gig SCSI drive it ran close to a grand when it was released. This was just a single unit.

I bet the type of people that buy this are like big time architects that have a few machines set up to do renders for clients, and want to perhaps do some additional effects for promo/confidence value, that likely already have people running that type of hardware.

Then again all those Quadro users could be CAD people and they've got no audience. =)

I work for a smaller graphics house that is part of an architecture firm and does mostly (but not all) architecture work.

This product is competing against other rendering engines like MentalRay, Vray, Renderman, etc. And at $2750 per license it's definitely not for smaller developers or architects. There are plenty of other rendering engines out there that are significantly cheaper and don't require a video card that costs as much as an entire x86 rendernode.

Certainly true - more than Entropy's $1500 (when they sold it), more than many others, but still cheaper than PRMan's $3500 + $700/yr.

I think the point is not that it can render just like other engines, but that it can do so at far greater speed (with a lot more flexibility and features than the PURE card). That would indeed be worth the money to all but the smallest studios - much faster feedback at full quality is an artist's dream, quite apart from the (more expensive) option of using it to accelerate

Perhaps you could even use a render box with a bunch of PCI-X cards in it (not sure if PCI-X allows that, I sure hope it does). Give it a few months and the current top of the line cards will have halved in price and you can actually put together such a render machine for a reasonable amount of money.

No, you can get raytracing hardware for less than the software and a Quadro FX would cost you.

For example, there's the ART PURE P1800 card [artvps.com] which is a purpose-built raytracing accelerator. It's a mature product with an excellent featureset, speaks renderman and has good integration into all the usual 3d packages. It's generally acknowledged as a very fast piece of kit with excellent image quality, and plenty of quality/speed trade-off options. And if you've a

Then again all those Quadro users could be CAD people and they've got no audience. =)

Not just CAD - I do server-side Java programming, and we've all recently been bought new PCs. The spec we went for included a Quadro FX 500; don't ask me why, it just did... (it was that, or a similar machine with a GeForce - I didn't make the choice)

Yes. I've done it with the BrookGPU library and two Nvidia cards (1 AGP, 1 PCI). The benchmark wasn't that much faster, though, presumably because the datasets weren't big enough to just leave the cards running - I suspect the main CPU was too busy feeding them to get any real work done.

Is there any Free software capable of exploiting the general computing power of modern video cards?

Take a look at the Jahshaka [jashaka.com] project. It is a real-time video editing suite, and the designers have been working with - and supposedly getting support from - Nvidia, so they may already have access, and I'd imagine certainly will have access, to these video cards. I can't imagine them not taking advantage of this technology.

The other nice thing is that, if memory serves, this program is being designed to work on Windows, Linux and OS X, so good news all around.

Seems like the ex-Exluna staff (bought by NVidia) are going to kick PRMan's a$$ at the hardware level: they tried it at the software level with Entropy but got sued into oblivion by Pixar. Now it's time for revenge?

Software, like much technology, follows a classic cycle from rare/expensive to common/cheap as the knowledge and means required to build it get cheaper.

"Moore's Law" is simply the application of this general law to hardware. But it applies also to software.

Free software is an expression of this cycle: at the point where the individual price paid by a group of developers to collaborate on a project falls below some amount (which is some function of a commercial license cost), they will naturally tend to produce a free version.

This is my theory, anyhow.

We can use this theory to predict where and how free software will be developed: there must be a market (i.e. enough developers who need it to also make it) and the technology required to build it must be itself very cheap (what I'd call 'zero-price').

History is full of examples of this: every large scale free software domain is backed by technologies and tools that themselves have fallen into the zero-price domain.

Thus we can ask: what technology is needed to build products like Gelato, and how close is this coming to the zero-price domain?

Incidentally, a corollary of this theory is that _all_ software domains will eventually fall into the zero-price domain.

And a second corollary is that this process can be mapped and predicted to some extent.

Windows is a specific product, not a technology. The technology would be the operating system. An OS is based on a series of other technologies, most obviously compilers, networking, disk management systems, kernel models, etc.

Since these underlying technologies have been zero-priced since the 1980s (mainly thanks to Unix), the OS as a technology has indeed fallen into the zero-price domain as well.

In other words: a small team can today build a product that competes fairly well with Windows, using off-the-shelf

is there any Free software capable of exploiting the general computing power of modern video cards?

I expect that once it suddenly becomes clear that the GPU in a modern video card has serious processing power, someone will release a version of the SETI@Home [berkeley.edu] client that can use the rendering engine as a processor.
Bearing in mind that most computers use their GPUs for a very small percentage of their logged-in life, I suspect there is real potential for using them in distributed computing projects.

Check out www.jahshaka.com. It's an open source video compositing / FX package that leverages the 3D accelerator chip on your graphics card to do incredible things. This is one to watch, it's definitely going places.

You can download binaries for Linux and Windows (and Mac), and source tarballs are available for the savvy.

I know, it's not strictly a "renderer", but it employs many of the functions of a renderer to create realtime effects and transitions.

Almost every FX house worth its salt in the CG business uses Pixar's Renderman on UNIX or Linux machines. The reasons behind this choice are very simple.

Renderman is proven technology and has been since the early '90s. Renderman is well known, its results are predictable, and it is a fast renderer. Also, current production pipelines are optimised for Renderman. UNIX and Linux are quite good when it comes to distributed environments (can anyone say Render Farm?) and handle large file sizes well (think a 2k by 2k image file, or large RIB files). And last but not least, Renderman is available with a source code license.

Hardware accelerated film rendering is in essence nothing but processor operations, some memory to hold objects and some I/O stuff to get the source files and output the film images. Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine? Correct, there aren't any advantages, only disadvantages. (More expensive, proprietary hardware, unproven etc.)

Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine?

Why do you think 3D hardware exists at all, when all it's doing is a load of integer maths? Surely a Linux machine is capable of adding numbers together, right? Obviously, dedicated hardware is faster, sh*tloads faster. Durr. The benefits to the artist's equivalent of the compile-edit-debug cycle are fairly obvious here, and worth rejigging the production pipeline to accommodate.

PRMan is a fine product, but it has its limitations, as well as its price. There are numerous competitors, many of which use the same Renderman interface but offer more speed and/or more features at a lower price (BMRT and Entropy are[were] notable, and relevant, until Pixar squashed them with the threat of an expensive court case). Brazil, AIR, etc - these RIB-based renderers drop into the same place in the workflow.

Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine?

Only if you explain why your average UNIX or Linux machine is better than a Commodore 64 or a PDA, which is also "in essence nothing but processor operations" etc. :-) If you listed SPEED in there, you're on the right track.

A modern GPU has far more floating-point hardware than any general-purpose CPU, and it's all geared towards the process of rendering pixels. For certain tasks, one of those expensive dedicated rendering devices from nVidia could be better than FIFTY of your "average" UNIX or Linux machines! Is that enough of an advantage to consider?

Personally, I'd put that rather firmly in the advantage column, and for a number of reasons. You could either render your movie with a smaller farm (always a plus) or render even more complex scenes in the same time period - which is probably what most people would use this technology for. On the commentary track of Monsters Inc, the guys from Pixar note that despite having MUCH faster hardware (and a lot more of it), the average time to render a single frame of Monsters Inc was just as long as a single frame of Toy Story. Why? Because the frames were FAR more complex.

I think this is a Good Thing(tm) at least for the people who have the imagination to use it.

2) Renderman is by no means alone in the FX world. MentalRay and Brazil most immediately come to mind.
3) As to UNIX and Linux being quite good in a distributed environment... well, duh... so are Windows and Mac boxes. As to file sizes: with the 4 GB limit on 32-bit boxes, that's a pretty damned big file, and that's the active-in-memory limit. For files on disk, the limit is more like terabytes. As to renderfarms, almost every renderer under the sun handles this the same

Actually, with the transistor counts on GPUs these days, I like to think of the CPU as the co-processor. (I'm actually halfway serious. My company does far more raw computation on the GPU than on the CPU.)

I think there is indeed free software to do movies and rendered animations using raytracing. First, Cinelerra can use a Linux cluster for movie rendering.
Second, there's a whole bunch of modellers/raytracers out there that perform very well: Povray is the oldest and most advanced and can run on a PVM cluster; YafRay is relatively recent and can use an openMosix cluster for networked rendering; and Blender now integrates a raytracer AND exports to YafRay. Those are the four programs I know of and use, but there are more - I just haven't looked for more.
So, yes, there is free software for movie rendering already!

Repeat after me: studios don't use FULL-SCENE raytracing, but they use raytracing for certain things where raycasting can't do a good approximation. Hollow Man is one movie where raytracing was used. They used PRMan for ordinary rendering, and then BMRT was called upon for the raytracing.

And, the major studios want: Speed, quality and a good clean API(so they can add their own stuff too)

Instead of just using the native 3D engine in the GPU, as done in games, Gelato also uses the hardware as a second floating point processor.
Does this mean that I could eventually use my GeForce to do things like matrix inversion for me?
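Nothing in the article confirms Gelato exposes anything like this, but the kind of dense, regular floating-point loop the question has in mind looks like the Gauss-Jordan sketch below. A stream-programming layer such as BrookGPU (mentioned elsewhere in the thread) would be one way to push the inner loops onto the GPU; this pure-CPU version only shows the computation itself.

```python
# A pure-CPU Gauss-Jordan matrix inversion -- the kind of dense,
# regular floating-point work that maps well onto stream processing.
# Running the inner loops on a GPU is left to a layer like BrookGPU.

def invert(m):
    """Invert a square matrix via Gauss-Jordan elimination."""
    n = len(m)
    # Augment each row with the corresponding identity-matrix row.
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        # Partial pivoting for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        scale = aug[col][col]
        aug[col] = [v / scale for v in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

a = [[4.0, 7.0], [2.0, 6.0]]
inv = invert(a)
print(inv)   # approximately [[0.6, -0.7], [-0.2, 0.4]]
```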

ART, OGL-assisted, now Gelato. Sure, there's a place for this, but how do I stick an FX card into my several hundred 1U racks, either physically or financially? Have you seen the size of these cards anyway? Sure, some vendors (mental images) are leveraging GPU power and have done the same with OGL for some time, but unless the GPU calls are handled by calls to the renderer, so you hide behind a consistent API, it's a waste of your hard-earned time getting your pipeline into shape in the first place. Long live the GPU, but I do

I was under the impression that it's hard to use a video card for general computing tasks because of the way that AGP is designed. It's really good at shunting massive amounts of data into the card (textures, geometry, lighting, etc) but terrible at getting a good data rate back into the computer. They're designed to take a load of data, process it and push the output back to the screen, not the processor. This is the major reason, IMHO.

If anyone's interested, the dino on the http://film.nvidia.com/page/gelato.html page was one of Entropy's flagship images. Entropy was a pay-to-play renderer made to the Renderman spec by the guy who wrote BMRT. Pixar sued the company that made both of these Renderman-compliant renderers and basically forced them into business with Nvidia, who quickly snatched up the company and paid off Pixar. Nvidia had been trying to come up with a hardware shader language like that of Renderman, and thusly ca

Admittedly it's not exactly the same thing as NVIDIA's solution, but the main component - breaking big movie-quality shaders into multiple passes - is in ATI's Ashli (http://www.ati.com/developer/ashli.html). The big plus is that instead of costing thousands of dollars, it's free.
Also, I noticed everyone is saying AGP readbacks make this sort of thing useless. The fact is that most of the scenes rendered will take seconds to hours on the graphics card (vs. minutes to days on a CPU). The slow AGP reads aren't

I have mod points, and I really want to do a little bit of smacking down, but I'll just go for correcting instead. I phear the metamods, yo!

ASHLI is *not* a renderer. It isn't anywhere near doing what Gelato does. Gelato takes a scene file and gives you a picture. It does it very nicely, using motion blur, programmable shading, and all sorts of fun stuff like that. It is written by the ex-Exluna boys (Larry Gritz, Matt Pharr, Craig Kolb - three mofos who know their shizzle).

Gelato is a $2,750 software package. It is intended to be used with an Nvidia Quadro FX 4000 workstation video card. That card has not hit the market yet, but the Quadro FX 3000 goes for $1,300, which brings the total cost for the package to between $4000 and $5000 per machine.

Key to this doctrine of no compromises is the nature of how Gelato uses the NVIDIA Quadro FX GPU. Instead of just using the native 3D engine in the GPU, as done in games, Gelato also uses the hardware as a second floating point processor.

That sounds like an old Siggraph presentation I saw a decade or two ago, back when I used to go to Siggraph. Lucasfilm, I think. (The fine sample picture in the article showed a motion-blurred image of a set of pool balls in motion.)

When rendering an image using raytracing, there are several effects that are achieved by similar over-rendering processes, i.e. you raytrace several times, varying a parameter:

- Depth-of-field (use different points on the iris of the "camera", blurring things at different distances from the "focal plane".)

- Diffuse shadows (use different points on the diffuse light source(s) when computing the illumination of a point.)

- Motion blur (use different positions for the objects and "camera", evenly {or randomly} distributed along their paths during the "exposure" - ideally pick the positions of the whole set of objects by picking several intermediate times, rather than picking the position of each object separately, to avoid artifacts of improper position combinations.)

- Anti-aliased edges. (Pick different points in the pixel when computing whether you hit or missed the object or which color patch of its texture you hit.)

As I recall there were about five effects that worked similarly, but I don't recall the other(s?) just now.

To do any one of them requires rendering the frame N times {for some N} with the parameter varied, then averaging the frames. (Eight times might be typical.) Naively, to do them all would require N**5 renderings - 32,768 raytracings of the frame to do all five.

The insight was to realize that the effects could be computed SIMULTANEOUSLY. Pseudorandomly pick one of the N from each effect's set for each frame and only render N frames, rather than N**5. Eight is a LOT smaller than 32K. B-)
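The counting argument above can be sketched in a few lines (a toy stand-in, not the actual Lucasfilm algorithm: each "render" here just averages its five jittered parameters, standing in for a frame's contribution):

```python
import random

# Distributed (stochastic) sampling: with N samples per effect and
# five effects, rendering every combination costs N**5 frames, while
# drawing one jittered parameter per effect per frame costs only N.
random.seed(1)

N = 8
effects = 5   # depth of field, soft shadows, motion blur, AA, ...
print(N ** effects)   # naive cross product: 32768 renders
print(N)              # distributed sampling: 8 renders

def render(params):
    # Toy frame contribution: just the mean of the jittered parameters.
    return sum(params) / len(params)

frames = [render([random.random() for _ in range(effects)])
          for _ in range(N)]
estimate = sum(frames) / N
print(estimate)   # typically close to the true mean of 0.5
```

The averaged result converges on the same answer as the full cross product; the jitter turns the missing combinations into noise instead of bias, which is exactly why 8 frames can stand in for 32,768.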

Sounds like Nvidia ported this hack to the firmware for their accelerator.

I hope you know that all the high-end 3D packages are available on Linux: XSI, Maya, Houdini, Real3d. And then you have some cool open source, like Wings3D, that can cover a lot of ground in the modeling field. Combine that with Blender & YafRay, and (theoretically) you have a complete open source animation studio!