Google’s Virtual World

Google recently bought the company that made SketchUp, an easy-to-use 3D modeling program, and has made it available free of charge. What's good about this program is that it allows fairly swift realization of simple design ideas. That's also its main limitation. And while I think it's important to have free 3D modeling tools to foster a marketplace of 3D models, we should remember that this isn't the first free (as in speech, or beer) 3D modeler on the scene, and that marketplace has existed for years.

Can Google's blessing change the face of 3D and virtual worlds? I'm still skeptical. I don't doubt Google's implicit marketing muscle for a moment. But 3D models have had the same basic problem since I was a teenager making games on my Commodore 64: a model is just a model. No matter how good the modeling software, the result in almost every case is points, lines, and polygons: simple mathematical entities, with the better tools allowing us to skin these with colorful texture maps and add (gasp) curves to smooth things out. That's so 1987. And for the most part, we're still there.
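To make that concrete, here is a rough sketch (not any particular file format) of what nearly every mesh format boils down to: a list of vertex positions and a list of faces that index into it.

```python
# A unit cube, the way most 3D formats have stored shapes since the
# late 1980s: vertex positions plus faces indexing into them.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top four corners
]
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),  # bottom, top
    (0, 1, 5, 4), (3, 2, 6, 7),  # front, back
    (1, 2, 6, 5), (0, 3, 7, 4),  # right, left
]
# That's the whole vocabulary. Nothing here says "cube", "wall",
# or "car"; the semantics live only in the viewer's head.
print(len(vertices), len(faces))  # 8 vertices, 6 faces
```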

Sure, most modern graphics cards use vertices and polygons to render. But that’s a solution for efficient hardware design. We’re humans. We work on a fundamentally different level.

The Stagnation and Proliferation of 3D Object Formats

It’s no wonder that almost every file format since 1990 is a variant on the same old theme. It’s no wonder it’s still so hard to move 3D objects from one system to another. If English were like 3D object files, we’d have six words and everyone would pick a different six to express themselves. And why not? It’s easy to speak, if not to understand. Even if you manage to translate (and it’s not hard, just annoying), you’ve lost all but the most basic meaning. Can’t we do better? And is having one powerful company pick the blessed six words (or even provide universal translation) a solution to the underlying problem?

After I left Keyhole in 2001, I started consulting and wound up at Linden Lab (now Linden Research, makers of Second Life). Of the year of work I did for them (much of it obsolete by now), the most useful part was the two weeks I spent writing the code that generates all [non-avatar, non-terrain] objects in the world. That code was based on some work I did in college and was just a few hundred lines long. Given a few parameters, it could generate the primitive elements of every house, every sculpture, every lamp in the world. Assembling and tweaking those to perfection is left to the artist, the creator, the user, as it should be.

I certainly didn’t invent the idea. It’s called parametric or procedural modeling. And the reason they needed it (and why Will Wright uses something like it for the upcoming EA "Spore" game; see what Will has to say about the future of content) is that it allows complex objects to be expressed in just a few bytes and changed in remarkable ways with just a few operations. For networked apps, that’s a huge savings over polygons. It also means the 3D representation is finally decoupled from the hardware: high- and low-end computers can effortlessly scale the quality of the models based on their own capabilities, your virtual proximity, and so on. Only now, with the "geometry programs" of the next generation of graphics hardware, is this becoming commonplace.
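A minimal sketch of the idea (my own illustration, not Linden's actual code): describe a lamp as a 2D profile plus a segment count, and let each machine expand it into however many polygons it can afford.

```python
import math

def lathe(profile, segments):
    """Expand a 2D profile (list of (radius, height) pairs) into a
    surface of revolution: one ring of 3D vertices per profile point."""
    verts = []
    for r, h in profile:
        for i in range(segments):
            a = 2 * math.pi * i / segments
            verts.append((r * math.cos(a), h, r * math.sin(a)))
    return verts

# The parametric description is a handful of numbers...
lamp_profile = [(0.4, 0.0), (0.1, 0.2), (0.1, 1.0), (0.5, 1.2)]

# ...which expands to as fine a mesh as the hardware can handle.
low  = lathe(lamp_profile, 8)    # coarse mesh for a slow machine
high = lathe(lamp_profile, 64)   # smooth mesh for a fast one

print(len(low), len(high))  # 32 vs. 256 vertices from the same 8 numbers
```

The network only ever ships the eight numbers; the receiving end chooses the polygon count, which is exactly the decoupling from hardware described above.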

But since the rest of the world still uses points and polygons to express objects, here’s the typical result, as illustrated by a Terra Nova comment and some indirectly linked screenshots. So you build a nicely scripted car in Second Life. It has physics in Second Life. You can break it into pieces in Second Life. It even carries ownership information; you can buy and sell it for Linden Dollars.

Now take that model and stick it into Google Earth. You’ve lost everything but the points and polygons, and if you’re lucky, you kept color and texture. But you’ve even lost a sense of scale, since each program makes its own assumptions. That’s cute, but not exactly earth-changing. It gets worse if you try to take your object into World of Warcraft, where the visual style is very different from, say, reality: you’d stick out like a wounded, bloody thumb. And worse still if you try to take that object back to Second Life (which you probably can’t): it would have lost all of its internal structure and definition.
