I just saw a video about what the publishers call the "next major step after the invention of 3D". According to the person speaking in it, they use a huge number of atoms grouped into clouds instead of polygons, to reach a level of unlimited detail.

They tried their best to make the video understandable to people with no knowledge of rendering techniques, and, whether for that reason or another, left out all details of how their engine works.

The level of detail in their video does look quite impressive to me.

How is it possible to render scenes using custom atoms instead of polygons on current hardware, in terms of speed and memory?

If this is real, why has nobody else even thought about it so far?

As an OpenGL developer, I'm really baffled by this and would like to hear what experts have to say. I also don't want this to look like a cheap advert, so I will include the link to the video only if requested, in the comments section.

This question came from our site for professional and enthusiast programmers.


Well, they've invented the most revolutionary thing since the beginning of computer graphics, yet they don't know how many millimeters fit into an inch. What does that tell you?
– Damon, Aug 1 '11 at 20:42


They are so limited on detail (forgive the pun) about their technology that a discussion is hard. From what I understand from the video, it's 20fps in software. I see a lot of static geometry and a whole bunch of instancing, and I don't know how much of the data is precomputed or generated on the fly. Still interesting, though. I wouldn't want to call shenanigans completely, not with the funding acquired, although that does not mean a whole lot.
– Bart, Aug 1 '11 at 20:45


It's always suspicious when someone makes fantastic claims and only shows footage of something entirely unrelated (such as Crysis). Even more so when there are claims like "technology as used in medicine/space travel", mixing several things that have nothing to do with each other. Certainly it is possible to produce (nearly) infinite detail procedurally, and it is certainly possible to render that. So what; every 1984 Mandelbrot demo could do that. However, the claim is that they render objects like that elephant in infinite detail. And that's just bollocks, because they can't.
– Damon, Aug 1 '11 at 21:35


"Your graphics are about to get better by a factor of 100,000 times." Extraordinary claims require extraordinary evidence.
– Brad Larson, Aug 1 '11 at 22:18

6 Answers

It's easy to do that. Using an octree, you simply divide the world into progressively smaller pieces until you reach the level of detail needed; this might be the size of a grain of sand, for example. Think Minecraft taken to an extreme.

What do you render, then? If the detail is small enough, you may consider rendering blocks: the leaf nodes of the octree. Other options include spheres or other geometric primitives. A color and normal can be stored at each node, and for reduced LOD, composite information can be stored at higher levels of the tree.
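To make that concrete, here is a minimal sketch of the idea (Python purely for illustration; the names `OctreeNode`, `build`, and the layout are my own, not anything from the demo): build an octree over a coloured point cloud, subdividing until a minimum cell size, and store an averaged colour at interior nodes for reduced LOD.

```python
class OctreeNode:
    def __init__(self, center, half):
        self.center, self.half = center, half
        self.children = [None] * 8   # one slot per octant
        self.color = None            # leaf colour, or averaged colour for LOD

def octant(center, p):
    # octant index 0..7 from the sign of each axis relative to the cell centre
    return (p[0] >= center[0]) | ((p[1] >= center[1]) << 1) | ((p[2] >= center[2]) << 2)

def build(points, center, half, min_half):
    # points: list of (position, colour) tuples falling inside this cell
    if not points:
        return None
    node = OctreeNode(center, half)
    if half <= min_half or len(points) == 1:
        # leaf: average the colours of the points that landed in this cell
        node.color = tuple(sum(c[i] for _, c in points) / len(points) for i in range(3))
        return node
    buckets = [[] for _ in range(8)]
    for p, c in points:
        buckets[octant(center, p)].append((p, c))
    q = half / 2
    for i, bucket in enumerate(buckets):
        off = [q if (i >> a) & 1 else -q for a in range(3)]
        child_center = tuple(center[a] + off[a] for a in range(3))
        node.children[i] = build(bucket, child_center, q, min_half)
    # interior node: composite (averaged) colour of its children, for coarse LOD
    kids = [ch for ch in node.children if ch]
    node.color = tuple(sum(ch.color[i] for ch in kids) / len(kids) for i in range(3))
    return node
```

Rendering at a reduced LOD then just means stopping the descent early and drawing a block with the node's averaged colour.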

How can you manage so much data? If the tree is an actual data structure, you can have multiple pointers reference the same subtrees, much like reusing a texture, but including geometry too. The trick is to get as much reuse as possible at all levels. For example, if you connect four octants in a tetrahedral arrangement all to the same child node at every level, you can make a very large 3D Sierpinski fractal using almost no memory. Real scenes will be much larger, of course.
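Here is a toy illustration of that reuse (my own sketch, nothing to do with the actual demo): four octants at every level reference the same child node, so the logical leaf count grows exponentially while the number of nodes actually allocated grows only linearly with depth.

```python
class Node:
    def __init__(self, children=()):
        self.children = children  # empty tuple means leaf voxel

def sierpinski(depth):
    node = Node()  # a single leaf voxel
    for _ in range(depth):
        # tetrahedral arrangement: four octants all share the SAME child object
        node = Node((node, node, node, node))
    return node

def logical_leaves(node):
    # leaves as a renderer would see them: grows as 4^depth
    if not node.children:
        return 1
    return sum(logical_leaves(c) for c in node.children)

def unique_nodes(node, seen=None):
    # nodes actually held in memory: grows as depth + 1
    seen = set() if seen is None else seen
    if id(node) not in seen:
        seen.add(id(node))
        for c in node.children:
            unique_nodes(c, seen)
    return len(seen)
```

At depth 8 the renderer sees 4^8 = 65,536 leaf voxels, yet only 9 node objects exist in memory.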

The problem is that this only works for static geometry, because real animation would require manipulating all that data every frame. Rendering, however, especially with variable LOD, is no problem.

How do you render such a thing? I'm a big fan of ray tracing, and it handles this type of structure quite well, with or without a GPU.
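For a flavour of how a ray tracer walks such a structure, here is a minimal, unoptimized sketch (plain dicts stand in for octree nodes to keep it self-contained; a real traversal would be far more careful): a slab test against each node's box, then a nearest-child-first descent to the first leaf hit.

```python
def ray_box(origin, direction, lo, hi):
    # slab test: return (t_enter, t_exit), or None if the ray misses the box
    t0, t1 = -float("inf"), float("inf")
    for a in range(3):
        if direction[a] == 0:
            if not (lo[a] <= origin[a] <= hi[a]):
                return None
            continue
        ta = (lo[a] - origin[a]) / direction[a]
        tb = (hi[a] - origin[a]) / direction[a]
        ta, tb = min(ta, tb), max(ta, tb)
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:
            return None
    return (t0, t1) if t1 >= 0 else None

def first_hit(node, origin, direction):
    # depth-first descent, visiting children nearest-first, returning the
    # first leaf the ray actually intersects
    if ray_box(origin, direction, node["lo"], node["hi"]) is None:
        return None
    if not node["children"]:
        return node
    hits = []
    for child in node["children"]:
        h = ray_box(origin, direction, child["lo"], child["hi"])
        if h is not None:
            hits.append((h[0], child))
    for _, child in sorted(hits, key=lambda x: x[0]):
        found = first_hit(child, origin, direction)
        if found is not None:
            return found
    return None
```

Because octree children are disjoint, visiting them in order of entry distance means the first leaf found is the visible one, which is exactly what makes these structures pleasant to ray trace.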

All of this is speculation of course. I have no specific information on the case you're talking about. And now for something related but different:

A huge amount of data rendered

EDIT: And here is one that I did, but I deliberately altered the normals to make the boxes more apparent:

Stanford bunny in voxels

That frame rate was on a single core, IIRC. Doubling the depth of the tree will generally cut the frame rate in half, while using multiple cores scales nicely. Normally I keep primitives (triangles and such) in my octree, but for grins I decided to render the leaf nodes of the tree itself in this case. Better performance can be had if you optimize around a specific method, of course.

Somewhere on ompf there is a car done with voxels that is really fantastic, except that it's static. I can't seem to find it now...

I agree with this assessment: I just watched the video myself and was struck by how static their scenes are (funny when they compare with polygon grass; at least it's blowing in the wind while theirs clearly isn't).
– timday, Aug 1 '11 at 22:16

How is it possible to render scenes using custom atoms instead of polygons on current hardware? (Speed, memory-wise)

From watching the video nothing indicates to me that any special hardware was used. In fact, it is stated that this runs in software at 20fps, unless I have missed something.

You'll perhaps be surprised to know, though, that there has been quite a lot of development in real-time rendering using a variety of technologies such as ray tracing, voxel rendering and surface splatting. It's difficult to say what has been used in this case. (If you're interested, have a look at http://igad2.nhtv.nl/ompf2/ for a great real-time ray tracing forum, or http://www.atomontage.com/ for an interesting voxel engine. Google "surface splatting" for some great links on that topic.)

If you look at the movie you'll notice that all geometry is static and although detailed, there is quite a lot of object repetition, which might hint at instancing.

And there will most likely be a lot of aggressive culling, levels of detail and space partitioning going on.
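As a toy example of the LOD part (my own sketch, with made-up parameter names, not anything known about their engine): a traversal can simply stop descending once a node projects to roughly one pixel on screen.

```python
import math

def projected_pixels(node_size, distance, fov_y, screen_height):
    # approximate screen-space height, in pixels, of a node of world-space
    # size `node_size` seen at `distance`, given a vertical field of view
    return node_size * screen_height / (2.0 * distance * math.tan(fov_y / 2.0))

def lod_depth(root_size, distance, fov_y, screen_height, max_depth):
    # each octree level halves the node size; descend only while a node
    # still covers more than one pixel
    depth, size = 0, root_size
    while depth < max_depth and projected_pixels(size, distance, fov_y, screen_height) > 1.0:
        depth += 1
        size /= 2.0
    return depth
```

Distant geometry then terminates at a shallow depth (cheap), while nearby geometry descends further, which is what makes "unlimited detail" claims plausible for static scenes.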

If you look at the visual quality (not at geometrical complexity) it does not look all that impressive. In fact it looks fairly flat. The shadowing shown might be baked into the data and not be evaluated in real-time.

I would love to see a demo with animated geometry and dynamic lighting.

If this is real, why has nobody else even thought about it so far?

Unless I'm completely wrong (and it wouldn't be the first time), my first answer would suggest a (perhaps very clever) use of existing technology, optimized and extended to create this demo. Making it into an actual game engine, though, with all the tasks it involves besides rendering static geometry, is a whole different ball game.

Of course all this is pure speculation (which makes it a lot of fun to me). All I'm saying is that this is not necessarily a fake (in fact I don't think it is and am still impressed), but probably not as groundbreaking as they make it sound either.

These atoms actually are not that magic/special/alien to current graphics hardware. It's just a kind of point cloud or voxel-based rendering. So instead of triangles they render points or boxes, nothing unachievable with current hardware.

This has been done before and is not a super-invention, but maybe they came up with a more memory- and time-efficient way to do it. Although it looks and sounds quite interesting, you should take this video with a grain of salt. Rendering 100,000 points instead of a fully textured polygon (that already takes up only a few pixels on screen) doesn't make your graphics quality better by a factor of 100,000.

And by the way, I've heard id Software is also trying out GPU-accelerated voxel rendering, but I have a bit more trust in John Carmack than in the speaker of this video :)

As for the idea, it isn't feasible on current non-dedicated hardware. The number of points you would need to avoid gaps when looking at something close up is far beyond the number of points you could fit in today's RAM. Even if you could, I don't know of any data structures or search algorithms that would yield anything near the performance shown in the demo. And even if it were somehow possible to search these points in real time, cache misses and memory bandwidth would ensure that you couldn't do it fast enough.
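A quick back-of-envelope calculation makes the memory point concrete (the scene size and per-point layout here are my own assumptions, chosen to be generous to the naive approach):

```python
# Naive (non-instanced, non-sparse) point cloud storage estimate
bytes_per_point = 3 * 4 + 4          # float3 position + packed RGBA colour
points_per_m3 = 1000 ** 3            # one point per cubic millimetre
scene_m3 = 100 * 100 * 10            # a modest 100 m x 100 m x 10 m level

total_bytes = bytes_per_point * points_per_m3 * scene_m3
tib = total_bytes / 2 ** 40          # ≈ 1455 TiB, far beyond any RAM
```

So without massive instancing, sparse storage and compression, millimetre-scale "atoms" for even a small level are out of the question.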

I'm not saying such images can't be achieved in real time, just not with the method presented. My guess is that the demos were rendered with voxels, which have been used for decades and can already produce fairly high detail in real time: http://www.youtube.com/watch?v=BKEfxM6girI http://www.youtube.com/watch?v=VpEpAFGplnI

From what I saw, it seems like they are using parametric shapes instead of simple polygon shapes; in other words, they change the geometry according to the required resolution.

This can be done using techniques such as geometry shaders and Perlin noise.
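A toy sketch of that idea (all names are my own, and simple hash-based value noise stands in for proper Perlin noise to keep it short): sample a noisy ridge with a step size that depends on viewing distance, so nearby geometry gets more vertices.

```python
import math
import random

def value_noise(x, seed=0):
    # deterministic pseudo-random value in [0, 1) at integer lattice points,
    # smoothly interpolated in between (a stand-in for Perlin noise)
    def lattice(i):
        return random.Random(hash((i, seed))).random()
    i = math.floor(x)
    f = x - i
    t = f * f * (3 - 2 * f)  # smoothstep interpolation
    return lattice(i) * (1 - t) + lattice(i + 1) * t

def sample_ridge(x0, x1, distance):
    # view-dependent tessellation: closer geometry gets a finer sampling
    # step, i.e. more generated vertices, with no fixed mesh stored at all
    step = max(0.01, distance / 100.0)
    xs = []
    x = x0
    while x <= x1:
        xs.append((x, value_noise(x)))
        x += step
    return xs
```

On real hardware the same idea runs in a geometry or tessellation shader, generating vertices on the fly instead of storing them.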

Another possibility is using GPGPU (e.g. CUDA) to render a scene including non-polygonal shapes and to perform ray tracing (for z-ordering and shadows). Yet another is custom hardware that renders formulas instead of triangles.

Of all their claims, the memory compression seems like an exaggeration to me, though I could understand something like RLE compression having a great impact. In the end I think this system will have a lot of "pros", but also a lot of "cons", much like ray tracing or iso-surface rendering with marching cubes.
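For reference, RLE itself is trivial; here is a sketch over a hypothetical voxel column, where long runs of empty space or identical material collapse to (value, count) pairs:

```python
def rle_encode(column):
    # collapse consecutive identical values into (value, run_length) pairs
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def rle_decode(runs):
    # expand (value, run_length) pairs back into the original column
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

A mostly empty column of 65 voxels encodes as just three pairs, which is why RLE-style schemes work so well on voxel data with large homogeneous regions.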

As far as rendering 'trillions' of atoms goes, I don't think they're claiming that. What they are instead doing is searching for W * H atoms, i.e. one atom per pixel on the screen. This could be accomplished in a lot of slow, difficult ways. Some ways of speeding it up are KD-trees, BSP trees, octrees, etc. In the end, though, it is a lot of data being sorted through, and the fact that their demo apparently searches 1440x720 atoms more than once per frame, because of the shadows and reflections in their demo, is amazing. So kudos!
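A toy version of that per-pixel search (my own simplification: orthographic view over a dense boolean grid, where a real engine would walk a hierarchical structure instead): march each pixel's ray front to back and keep the first filled cell.

```python
def render_depth(grid):
    # grid[z][y][x], with z = 0 nearest the camera (orthographic view);
    # the result is one "atom" depth per pixel, or None where nothing is hit
    depth_z, height, width = len(grid), len(grid[0]), len(grid[0][0])
    image = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for z in range(depth_z):
                if grid[z][y][x]:
                    image[y][x] = z  # first hit wins; stop searching this ray
                    break
    return image
```

The whole game is making that inner search sublinear, which is exactly what the KD-trees, BSP trees and octrees mentioned above buy you.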