The gallery is a bit too Pixar oriented, I'll see what else I can find.

Sort of related: we stopped at the second hand thrift store last night, and picked up (among other things) a VHS tape of The Last Starfighter. The kids watched it today and liked it a lot. My brother stopped by and mentioned that the FX looked a bit less realistic than he remembered. (In contrast, I was a real film effects freak when it came out, so it's pretty much as I remember it - fun film).

Ah! I just remembered the name of the island image at the bottom of the collection - Carla's Island. I remember watching it back in the '80s at the local college and being bored because nothing happened in the video.

Edit: Oops, that's the name of another classic image I'll have to add.

Oh, I didn't know that - Because I've seen it so many times I thought they just used a famous RenderMan image for the book cover.

The gallery is a bit too Pixar oriented

I think in the early days of computer graphics there was only Lucasfilm (with its division ILM and its spin-off Pixar), so it will be difficult to find something different.

The first CGI animations in live-action films I saw were the Genesis sequence in Star Trek II and the stained glass knight in Young Sherlock Holmes.
But my all-time favorite is of course the Pseudopod in The Abyss - Jeez, this was more than (and almost, respectively) 20 years ago. IIRC they were all made by ILM using Pixar's software.

One director who took that leap was James Cameron, who had been auditioning effects houses for his 1989 film "The Abyss" with particular concern about how to create the "water snake" that would lend a fantastic element to the undersea drama. Other shops had pitched stop-motion, sculptural-replacement animation and even hydraulic water systems, but Cameron went with Muren's suggestion to do the shot using computer animation.

Thus the first photo-realistic 3-D computer effect snaked its way into film history. The effect was made possible largely as a result of a little computer program written by ILM animator John Knoll: PhotoShop.

I had my doubts about the "Photoshop" bit, but it appears here as well:

In 1989, Muren took a year off to learn to use a Macintosh and Photoshop. When he returned, he helped take ILM's compositing - a method of blending separate visual components into one shot - into the digital age. That made possible the morphing T-1000 in Terminator 2 and the fleet-footed dinosaurs in Jurassic Park.

Interestingly, sometimes he gets credit for using Photoshop, while in other articles he gets credit for writing it. (The more informed articles name him and his brother as the authors, so I tend to believe that story.) So it's likely he used the code to build things like reflection maps, which were then fed into RenderMan.

All right, I finally Googled up something useful:

In 1989 an underwater adventure movie was released called "The Abyss." This movie had a direct impact on the field of CGI for motion pictures. James Cameron, director and screenwriter for Abyss, had a specific idea in mind for a special effect. He wanted a water creature like a fat snake to emerge from a pool of water, extend itself and explore an underwater oil-rig and then to interact with live characters. He felt it couldn't be done with traditional special effects tools and so he put the effect up for bid and both Pixar and ILM bid on it. ILM won the bid and used Pixar's software to create it. Catmull explains, "We really wanted to do this water creature for the Abyss, but ILM got the bid, and they did a great job on it."

So it looks like they used RenderMan. I ran across some SIGGRAPH course notes saying that ILM used PhotoRealistic RenderMan on all their FX work after The Abyss.

(I'll try to add the pictures in the morning, or you can add them in yourself).

Ok, with a really stupid bug fixed and the first two optimizations in place, performance is acceptable now. The teapot takes (depending on the zoom level) between 20 and 70ms to subdivide and render on the MacBook. If the teapot is small (about half the size of the screen), it's very fast (30ms) because the dicer is bypassed. If the teapot is fully visible it is a bit slower (70ms) because nearly all of the faces have to be diced to level 3. If you zoom further in it gets faster again (20ms), because off-screen faces are culled, so just a few faces have to be diced (to higher levels, but the dicer is quite fast, and the overhead shrinks with the subdivision level).
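The decision described above - bypass the dicer for small faces, cull off-screen faces, and pick a dice level from the projected size otherwise - could look roughly like this. This is a made-up sketch (names and the pixel threshold are illustrative, not JPatch's actual code):

```java
// Hypothetical sketch of an adaptive dicing decision, assuming each
// subdivision level halves the screen-space edge length of a quad.
public class DiceLevelSketch {
    static final double PIXELS_PER_QUAD = 4.0; // target quad size (assumed)
    static final int MAX_LEVEL = 6;            // dicer's maximum level

    /** Returns -1 = culled, 0 = bypass the dicer, 1..MAX_LEVEL = dice level. */
    static int diceLevel(double screenEdgePixels, boolean onScreen) {
        if (!onScreen) {
            return -1;                          // view-frustum culled
        }
        if (screenEdgePixels <= PIXELS_PER_QUAD) {
            return 0;                           // small enough: skip the dicer
        }
        // each level quadruples the quad count, i.e. halves the edge length
        int level = (int) Math.ceil(
                Math.log(screenEdgePixels / PIXELS_PER_QUAD) / Math.log(2));
        return Math.min(level, MAX_LEVEL);
    }
}
```

This also illustrates why zooming in can get faster again: most faces fall into the culled case, and only a few reach the higher levels.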

I agree that all bugs are stupid, at least once they've been found. This one was really stupid. It added all vertices of the mesh to a list - unfortunately it did it inside a loop, once per face, so instead of transforming 400 vertices it suddenly had to deal with 200,000 vertices, which explains the slight performance drop.
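The bug boils down to something like this - a made-up reconstruction, not the actual JPatch code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the bug described above: collecting the whole vertex list
// inside the per-face loop duplicates it once per face.
public class VertexListBug {
    static List<Integer> buggy(List<Integer> vertices, int faceCount) {
        List<Integer> out = new ArrayList<>();
        for (int f = 0; f < faceCount; f++) {
            out.addAll(vertices);  // BUG: runs once per face
        }
        return out;
    }

    static List<Integer> fixed(List<Integer> vertices, int faceCount) {
        List<Integer> out = new ArrayList<>(vertices); // add once, up front
        for (int f = 0; f < faceCount; f++) {
            // per-face work that doesn't touch the vertex list
        }
        return out;
    }
}
```

With 400 vertices and 500 faces the buggy version produces exactly the 200,000-entry list mentioned above.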

...and the first two optimizations in place, performance is acceptable now.

If this ever runs in hardware, it's going to fly.
I haven't tried yet, but I imagine it's too complex to fit into a fragment program on a currently affordable graphics card. But one day it will!

Anyway, I'll add the third optimization, a fast SDS evaluator for "simple" faces (i.e. no creases/corners and no hierarchy). With all of them in place it should be faster than the old patch-based version in most cases.
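For what it's worth, the reason "simple" faces admit a fast evaluator is that a regular Catmull-Clark face (all quads, all valence-4 vertices, no creases) is exactly a uniform bicubic B-spline patch over its 4x4 control-point neighborhood, so it can be evaluated directly instead of subdivided recursively. A minimal sketch of that direct evaluation, for a single coordinate (this is the standard B-spline formula, not JPatch's planned code):

```java
// Direct evaluation of a uniform bicubic B-spline patch, which is what a
// regular (crease-free, hierarchy-free) Catmull-Clark face converges to.
public class BSplinePatch {
    // uniform cubic B-spline basis functions at parameter t in [0, 1]
    static double[] basis(double t) {
        double s = 1 - t;
        return new double[] {
            s * s * s / 6.0,
            (3 * t * t * t - 6 * t * t + 4) / 6.0,
            (-3 * t * t * t + 3 * t * t + 3 * t + 1) / 6.0,
            t * t * t / 6.0
        };
    }

    /** Evaluates the patch at (u, v); cp is the 4x4 control net (one coordinate). */
    static double eval(double[][] cp, double u, double v) {
        double[] bu = basis(u), bv = basis(v);
        double p = 0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                p += bu[i] * bv[j] * cp[i][j];
        return p;
    }
}
```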

Btw, I haven't settled on a term for the faces that result after one level of subdivision (the quadrilaterals). Would you prefer "slate" or "facette" for such a face? Or something different?
And should it be "dice" or "tessellate"?

sascha wrote:It added all vertices of the mesh to a list - unfortunately it did it inside a loop...

Oooh! That is a good one!

Anyway, I'll add the third optimization, a fast SDS evaluator for "simple" faces (i.e. no creases/corners and no hierarchy). With all of them in place it should be faster than the old patch-based version in most cases.

That's just an added bonus. The important bit is that you've replaced patches with a more stable implementation, so you can finally stop revisiting it.

Btw, I haven't settled on a term for the faces that result after one level of subdivision (the quadrilaterals). Would you prefer "slate" or "facette" for such a face? Or something different?

I think the word is 'facet', unless you're intentionally spelling it that way.

However, I've always (incorrectly) associated 'tessellation' with the process of converting to a triangular mesh. I wonder if other people have made the same misassociation. Wikipedia notes:

Wikipedia wrote:Normally, at least for real-time rendering, the data is tessellated into triangles, which is sometimes referred to as triangulation.

But I suspect people use the words interchangeably.

On the other hand, 'dice' is associated with chopping up into smaller parts, and is commonly used when talking about REYES-style renderers. I don't think it's got the same baggage as 'tessellate'.

Perhaps 'tiled' would be a better term? (I think) people associate tiles with quadrilaterals, and you could refer to a face as a 'tiled face' or a 'tiled quad', and to the process of 'dicing' as 'tiling'.

Oh, thanks. It's spelled Facette in German, but since it's a foreign word I thought it was the same in English.

Ok, now we've got three options: facet, tile and slate. I took the term slate from a paper about subdivision - my algorithm is roughly based on theirs, so I thought I'd keep the term. But it wasn't written by native English speakers, so I'm not sure if slate is something you'd normally associate with this (the dictionary says it's normally used as in "roof slate"). My initial pick was facet, since its "parent" object is already called "face", but again I'm not sure whether my association is correct.

About tessellating, tiling and dicing: I haven't read "tile" in this context before - I don't know. You mentioned REYES, and what my algorithm does is quite close to what REYES does - that's why I changed the term from tessellating to dicing. I agree too that tessellation somehow suggests triangles.

sascha wrote:Oh, thanks. It's spelled Facette in German, but since it's a foreign word I thought it was the same in English.

It makes sense as facette ("little face"), but that word is more commonly associated with the faces on a gemstone.

I took the term slate from a paper about subdivision - my algorithm is roughly based on theirs, so I thought I'd keep the term.

Slate brings to mind a writing slate, made out of slate stone (thus, the name) used in an old schoolhouse. The shape of the slate would often be rectangular, which is why they probably chose the term. But it's more associated with writing than the shape. As you mention, a roof tile is the other common use.

My initial pick was facet, since its "parent" object is already called "face", but again I'm not sure whether my association is correct.

No, Wikipedia has a good example. The "-ette" refers to the size of the face, not a parent/child relationship.

I'm not happy with "diced face" though: keep in mind that the "objects formerly known as slates" (the faces after one level of subdivision) are the input to the dicer. The job of the dicer is to subdivide them up to level 6, and the output is tons of quadrilaterals that can be rendered using OpenGL.
So "diced face" is a bit misleading - such a face is the input to the dicer, not its output.
What about "tile"?

In the LionSnake model, you connect the vertices, and when the last vertex connects to the first, the modeler knows there's supposed to be a face, and tries to build one.

Wait a minute. What if the user doesn't connect the last vertex to the first one, but to some other vertex? There are potentially hundreds of faces that could be formed using the newly added edges - how could the modeler possibly guess which faces it should create?

The vertices that are in the current circuit of new edges are all tagged so that the modeler can tell when the user clicks on a vertex that is already part of the circuit (they're also highlighted in a purplish color). When such a vertex is clicked, the modeler can tell that the face is complete. It can recognize when the user tries to make a two-vertex polygon, and simply takes the edge between the two vertices out of the circuit.
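The circuit logic described above could be sketched roughly like this (hypothetical names, not the actual LionSnake code - the "tag" is modeled simply as membership in the circuit list):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the edge-circuit idea: clicking an untagged vertex extends the
// circuit; clicking a tagged vertex closes a face from that vertex onward,
// except that a two-vertex "polygon" just removes the edge again.
public class CircuitSketch {
    final List<Integer> circuit = new ArrayList<>(); // tagged vertices, in click order

    /**
     * Handles a vertex click. Returns the completed face's vertex loop,
     * or null while the circuit is still open (or when a two-vertex
     * polygon attempt is dissolved).
     */
    List<Integer> click(int vertex) {
        int i = circuit.indexOf(vertex);
        if (i < 0) {                  // untagged: extend the circuit
            circuit.add(vertex);
            return null;
        }
        // tagged vertex clicked: the run from here to the end closes a loop
        List<Integer> tail = circuit.subList(i, circuit.size());
        List<Integer> loop = new ArrayList<>(tail);
        tail.clear();                 // untag the loop's vertices
        return loop.size() >= 3 ? loop : null; // 2 vertices: just drop the edge
    }
}
```

Closing on a mid-circuit vertex answers the "which face?" question above: the face is simply the run of new edges between the two clicks on that vertex.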

Yes, I see. JPatch uses a similar approach now. You can add new segments (which can later be used e.g. by the extrude or lathe tools). Once you connect the segments to form a loop, a double-click on that loop will transform it into a new face (it's not done automatically because you might want to use the loop e.g. to lathe a torus).