Posted by Soulskill on Tuesday August 02, 2011 @02:26AM
from the future-of-gaming-or-complete-bunk dept.

trawg writes "A small Australian software company — backed by almost AUD$2 million in government assistance — is claiming they've developed a new technology which is '100,000 times better' for computer game graphics. It's not clear what exactly is getting multiplied, but they apparently 'make everything out of tiny little atoms instead of flat panels.' They've posted a video to YouTube showing their new tech, apparently running at 20 FPS in software. It's (very) light on the technical details, and extraordinary claims require extraordinary evidence, but they say an SDK is due in a few months — so stay tuned for more."
John Carmack had this to say about the company's claims: "No chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging."

Most games recently just kind of suck and rest upon the shoulders of innovative graphics. This does not make me hopeful for the future of gaming.

Generally speaking, I'm in agreement on the suck part, but hold on a second there with the conclusion. If this technology is real and games do see a massive jump forward in graphics, wouldn't that allow for an end to each successive title needing to simply out-polygon the competition? Isn't it equally likely this would force a paradigm shift where, if nothing else, art - real art - would supplant technical graphics specs?

The "goal" of crazy people who don't actually understand computers has always been to make graphics (and sometimes logic) based on "atoms"/particles/etc. The problem is not that it can't be done - anyone who has ever used a 3D modeling program with fluid dynamics has that power right in front of them - the problem is that it can't realistically be done in real time with our technology. Hell, it can't realistically be done pre-rendered without a supercomputer.

So sure, it could make it '100,000 times better.' No one is really debating that, and it isn't news to anyone who knows the first thing about graphics. What would be news would be hardware that better supported it. Somehow, I don't think that's what we have here. Notice the lack of specifics as to what KIND of graphics they seek to improve.

Addendum: I watched the video (OK, skimmed it). As far as particles go, this doesn't look like it is actually intended to be a full particle system, but rather some kind of hybrid, like particle effects are done now. So sure, that could be something new - but still, given this, their claims are phrased in a very misleading way.

I did however notice an extremely questionable statement which makes me seriously suspect this is a scam.

5:45 - he makes the claim that real-world scanned objects can't be used in games because the resolution is too high. This is completely false. Game developers have scanned objects for a long time, and even more often have made extremely high resolution models on purpose. The models are then reduced to a usable resolution, and the differences between the low-res and high-res models are baked into a normal (bump) map. This is how almost all first-person game textures are made these days. (The benefits of this process mainly come from textures holding depth data more efficiently than polygons, especially at varying distances, where complex geometry results in extreme aliasing, and from the fact that high-poly models cause serious issues with more advanced lighting schemes.) To make the claim this guy just did is highly suspect.
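
To make the baking step concrete, here's a toy sketch of the idea (my own illustration, not any studio's actual pipeline): treat the fine detail as a heightmap, take its gradient, and pack the resulting surface normals into RGB the way bump/normal maps do:

    // Toy normal-map bake: detail lives in a heightmap, and we encode its
    // surface normals into an RGB texture instead of keeping dense geometry.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const int W = 256, H = 256;
        std::vector<float> height(W * H);
        // Stand-in "high-res detail": a bumpy procedural surface.
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                height[y * W + x] = 0.5f * std::sin(x * 0.1f) * std::cos(y * 0.1f);

        std::vector<unsigned char> nmap(W * H * 3);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                // Central differences approximate the surface gradient.
                float dx = height[y * W + std::min(x + 1, W - 1)] -
                           height[y * W + std::max(x - 1, 0)];
                float dy = height[std::min(y + 1, H - 1) * W + x] -
                           height[std::max(y - 1, 0) * W + x];
                float len = std::sqrt(dx * dx + dy * dy + 1.0f);
                unsigned char* p = &nmap[(y * W + x) * 3];
                // Normal = normalize(-dx, -dy, 1), remapped from [-1,1] to [0,255].
                p[0] = (unsigned char)(((-dx / len) * 0.5f + 0.5f) * 255.0f);
                p[1] = (unsigned char)(((-dy / len) * 0.5f + 0.5f) * 255.0f);
                p[2] = (unsigned char)(((1.0f / len) * 0.5f + 0.5f) * 255.0f);
            }
        std::printf("baked a %dx%d normal map\n", W, H);
    }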

So, what you're saying is that people scan real world objects, but don't actually use those models in games... so... once one accounts for market speak "you can't use a scan of a real-world object in a game [without dropping enough detail so that you're not using the original scan]."

No, they don't use the raw point cloud models, they perform some processing first. And I doubt these guys use unprocessed point clouds for their engine either, so it's a ludicrous claim.

Their specific claim is that they are using point clouds. My thought is that if you strategically collapse points of the model (like a dynamic LOD sort of thing), you could feasibly accomplish something similar to what they're doing.

I mean, seriously, they're talking about large point fields for grains of sand... if that were true, then why wouldn't they be able to use a raw point cloud from scanned objects?

How is that possible? Crysis 2, featured in the video, was only released this year. I'm not saying this ISN'T a scam or that you might not have seen a similar video, but you did not see THIS video a few years ago.

Parts of the video were awfully familiar, especially in the beginning. If I remember correctly, last time around they produced this with functions for curves etc. and created models with these curves, allowing "unlimited" detail.

They claim they can do in realtime what you say is impossible. Now, if you don't actually have any technical argument, I'll take the view of an expert: John Carmack does not think it is a scam. That said, there are always big challenges in going from tech demo to finished product, and they are unlikely to make it, especially in the current game market, which is already struggling to create content.

Here, kid, as an actual graphics programmer, I'll translate Carmack's producer- and marketing-approved Twitter into plain, run-of-the-mill English for the simple-minded:

Statement: "No chance of a game on current gen systems, but maybe several years from now."

Translation: "No chance of a game on current-gen systems, nor what will be the next generation, as Wii U devkits have already been seeded to developers and it'd be foolish to think that Sony or Microsoft are very far behind. Insofar as nobody, not

So, no real difference from the raytracing demo presented by Intel. That is still a level above 'scam' and 'impossible, duh'.

Almost every day there is an article about a revolutionary new material, source of energy, cure for cancer,... The vast majority of them never make it to an actual product, for lots of different reasons (doesn't scale, too weird, politics, bad timing, fashion, $$$, 90%-there syndrome, overoptimism, fatal flaws). That is still interesting, certainly more interesting than the 5 articles there will

It would be interesting (to me, as a graphics programmer in the games industry), if they stopped bullshitting. The claims in that video, when written down, are absolutely absurd. 20,000 GB of RAM. That's right. 20,000 GB of RAM (at least!) to store the number of 'atoms' they claim they are displaying. Now, that simply can't be true - so they must either have left out a hell of a lot of information (such as: we are drawing the same object 20,000,000 times, or we are throwing everything at some procedural geometry shader)

The idea that they've come up with a new LoD algorithm for point cloud data is reasonable. It would then allow their ridiculous claims about the size of datasets to be (technically) true. But if everything is held procedurally, then it must have a low-complexity description in order to compress that vast dataset (say 20,000 GB) into something that can be processed. Low-complexity descriptions tend to exist for highly regular geometry, and if you look at their demo, they appear to have very high detail objects in a very coarse, regular and repetitive mesh, to the extent that when they zoom out it looks like Minecraft.

No need for it to be a hoax. I'm guessing that they can make horrific-looking (regular, crappily lit, static) graphics, as they claim in the video, with the projected data sizes they refer to. What they gloss over is that it can't just be translated onto a real level design and scaled up to the complexity that you see in real level design.

It would be kind of like me saying "hey, I can draw circles at an infinite level of detail, equivalent to trillions of line segments. Can't draw more complex shapes like faces yet though....."

It would be interesting (to me, as a graphics programmer in the games industry), if they stopped bullshitting. The claims in that video, when written down, are absolutely absurd. 20,000 GB of RAM. That's right. 20,000 GB of RAM (at least!) to store the number of 'atoms' they claim they are displaying.

What figures are you using for that calculation?

Now, that simply can't be true - so they must either have left out a hell of a lot of information (such as, we are drawing the same object 20,000,000 times, or we are throwing everything at some procedural geometry shader)

Well of course they are drawing each object many times. So do all the polygon-based games. It would be stupid not to. No one would store the full unique geometry for each blade of grass.
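
Back-of-envelope, with made-up but plausible numbers, for why instancing changes the storage math completely:

    // Toy arithmetic: one detailed grass model referenced by millions of
    // instances, versus storing unique geometry everywhere.
    #include <cstdio>

    int main() {
        const long long pointsPerBlade   = 1000000LL;   // one detailed model
        const long long bytesPerPoint    = 20LL;        // xyz + color, say
        const long long instances        = 20000000LL;  // placements, not copies
        const long long bytesPerInstance = 64LL;        // a 4x4 float transform

        long long unique    = pointsPerBlade * bytesPerPoint * instances;
        long long instanced = pointsPerBlade * bytesPerPoint +
                              instances * bytesPerInstance;
        std::printf("unique geometry everywhere: %lld TB\n",
                    unique / 1000000000000LL);
        std::printf("one model + transforms:     %.2f GB\n", instanced / 1e9);
    }

With those numbers, "everything unique" costs 400 TB, while "one model plus transforms" fits in about 1.3 GB - which is how a demo can honestly contain trillions of "atoms" without containing trillions of anything unique.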

Simple tool to magically convert polygons (that we've been lambasting for the last 5 mins) into an infinite detail point cloud (thereby adding detail to the mesh that was not there to begin with? WTF?)

You're ranting. The polygons they are lambasting are, for example, tree trunks with 6-12 sides. If instead they take a model with a very high polygon count, higher than would be used in a polygon-based game, and convert it to their "point cloud" system, that would be quite

Raytracing is quite a different situation. The issue with realtime raytracing is that it can make the lighting worse; unless you use a large number of rays (which can become prohibitive, especially with other lighting methods at work, which can complicate ray paths), it looks splotchy and staticy. It is far from impossible given current technology - you can forcibly enable it in some games, even - just not really that helpful when it comes to making a scene look better.

I find myself wondering if what they are doing is using voxels to step down the detail level on distant objects while stepping it up on near objects, and not even bothering with the objects out of view.

Actually, it was being done realistically in near real-time over 10 years ago, using splatting-based techniques (see surfels and QSplat, http://graphics.stanford.edu/software/qsplat/). These systems weren't really suitable or fast enough for games at the time, but 10 years is a long time for hardware and software to progress.

His LinkedIn goes into a bit more detail: [linkedin.com] "The Unlimited Detail system consists of a compiler that takes point cloud data and converts it into a compressed format; the engine is then capable of accessing this data in such a way that it only accesses the pixels needed on screen and ignores the others, generating

This looks a lot like sparse voxel octrees [wikimedia.org]. As a concept, SVO is nothing new at this point, and id has been considering using it as part of their id Tech 6 engine.

A sparse voxel octree is basically a hierarchical structure for points in 3D space. The advantage of using a hierarchical structure is that you can stop looking at any time, and so zooming works very well: you just traverse the tree until you get so far down that further detail won't be visible, then you render. If the player moves closer, you simply descend further into the tree.
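
A minimal sketch of that descent (hypothetical node layout, nothing to do with Euclideon's actual code): stop as soon as a node's projected size drops below a pixel.

    #include <cstdio>

    // Hypothetical node layout; null children encode empty space (the "sparse" part).
    struct Node {
        bool leaf;
        unsigned char color[3];
        Node* child[8];
    };

    // Descend only while a node would still cover more than one pixel on screen.
    // (Simplification: children reuse the parent's distance to the camera.)
    void traverse(const Node* n, float nodeSize, float distance, float pixelAngle) {
        if (!n) return;                                  // empty octant: skip entirely
        if (n->leaf || nodeSize / distance < pixelAngle) {
            // Small enough (or a leaf): splat its color and stop descending.
            std::printf("splat rgb(%d,%d,%d) at size %.3f\n",
                        n->color[0], n->color[1], n->color[2], nodeSize);
            return;
        }
        for (int i = 0; i < 8; ++i)
            traverse(n->child[i], nodeSize * 0.5f, distance, pixelAngle);
    }

    int main() {
        Node leaf{true, {200, 180, 90}, {}};
        Node root{false, {0, 0, 0}, {&leaf}};            // 7 empty octants
        traverse(&root, 10.0f, 100.0f, 0.001f);          // ~0.057 degrees per pixel
    }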

Thing is though, some members of the demo scene have been doing really impressive things with particle-based rendering systems over the past few years.

To see what can be done, you should check out the demos "Ceasefire (All falls down)" and "Numb Res" by CNCD & Fairlight. They both make very heavy use of particle-based rendering engines - the latter features a rather long section of real-time particle-based computational fluid dynamics simulation - the kind of stuff that one te

I remember when Wolfenstein 3D came out. It seemed unbelievable that a world of textured polygons was being manipulated in real time on a 4.77 MHz PC. We'd seen nothing like it!

Later the details of how Carmack had done it came out. This wasn't the traditional matrix manipulation of 3D points, hidden surface removal, plotting of textures and the painter's algorithm we were used to. It was 2D raycasting from a simplified data structure. Each ray cast allowed the plotting of 128 pixels. Only 280 rays had to be cast.
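
The trick is easy to sketch (a crude reconstruction from memory, using fixed-step marching where the real thing stepped exactly to cell boundaries and corrected for fisheye):

    // Wolfenstein-style column rendering: one 2D ray per screen column
    // against a tile map; wall slice height comes from hit distance.
    #include <cmath>
    #include <cstdio>

    int main() {
        const int MAP = 8;
        const int grid[MAP][MAP] = {
            {1,1,1,1,1,1,1,1},
            {1,0,0,0,0,0,0,1},
            {1,0,0,0,0,0,0,1},
            {1,0,0,1,0,0,0,1},
            {1,0,0,0,0,0,0,1},
            {1,0,0,0,0,0,0,1},
            {1,0,0,0,0,0,0,1},
            {1,1,1,1,1,1,1,1},
        };
        const float px = 4.5f, py = 4.5f;                // player in the middle
        const int columns = 320;                         // one ray per column
        for (int c = 0; c < columns; ++c) {
            float angle = (c / (float)columns - 0.5f);   // ~57 degree field of view
            float dx = std::cos(angle), dy = std::sin(angle);
            for (float t = 0.0f; t < 12.0f; t += 0.02f) { // crude fixed-step march
                int mx = (int)(px + dx * t), my = (int)(py + dy * t);
                if (grid[my][mx]) {                      // hit a wall cell
                    int wallHeight = (int)(200.0f / t);  // closer = taller slice
                    if (c % 64 == 0)
                        std::printf("column %3d: distance %.2f, height %d\n",
                                    c, t, wallHeight);
                    break;
                }
            }
        }
    }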

"I certainly don't see anything here that is so impossible he must be a scammer."

That's actually my biggest issue with it. I don't see anything in the video *at all* that can't be done with current technology, some good hardware, and very clean programming. The claims they are making do not seem to align with what is being shown, and indeed, the claims seem to be somewhat self-contradictory.

I don't think that a massive revolution in rendering is impossible (although it would almost certainly require new hardware)

(I've done some things on computers that were 'impossible', I just didn't accept the limitations and did something nobody had thought of before. Many cool pieces of programming were considered impossible before someone went and pulled it off anyhow. So the way I see it, if I and other people can do the 'impossible' with software, I see no reason a bunch of other smart people can't do it. In a decade or two after release, nobody will understand why it took so long for someone to do it this way, just wait.)

No, things that are impossible to do on computers are simply impossible to do. Time travel, for example. That's impossible. Storing 21 trillion (as they claim in the video) anythings on a computer is impossible on current gen hardware. Unless they are expecting the PS4 to ship with 20,000 GB+ of RAM, it will still be impossible on the next generation of hardware. If you can show me how to store 21 trillion unique and random values on a PS3, well sir, I shall forever be your servant, because I'd have a lot to learn.

This is probably not actually what is generally called "voxels", but a hierarchical point cloud system consisting of points on the surface of objects, rendered via some kind of weighted splatting mechanism. There was a lot of research into such systems for visualising some of the very high resolution point clouds coming out of digital laser scanning systems (for example QSplat, which came out of the Digital Michelangelo project: http://graphics.stanford.edu/software/qsplat/).

(I submitted this article) I fired off a request for more information from the developers about this and they got back to me indicating they're willing to answer some more questions, so I've summarised some of the main ones that I've seen around the place.

We're based in the same city as this company (Brisbane, Australia) so I'm hoping that I might be able to actually go out there and eyeball this stuff myself to get a feel for it (and possibly drag along a graphics programmer to do some grilling).

Euclideon is just spinning up the marketing bullshit and trying to make a profit off of it all. They don't even have good lighting, they're just doing forward shading for each voxel ray-cast intersection using diffuse lighting with a single global point light source. And they haven't demonstrated robust animation yet.

Guess what, it is possible to animate voxel octrees, but Euclideon never came up with the method either. Some researcher in Germany came up with a working solution for his bachelor's thesis: http://www.youtube.com/watch?v=Tl6PE_n6zTk

Except that this isn't voxels. Euclideon is marketing for Unlimited Tech now. Go do a search for Unlimited graphics engine; they've been showing off their work for the last 2 years. The only thing new here is their marketing partner.

The technology is rather related to point cloud rendering, which is about 10 years old now. This is the most clever implementation of point cloud rendering that I am aware of, and it is pretty cool: http://graphics.stanford.edu/software/qsplat/ It renders amazingly fast.

It has its share of problems, including requiring a lot of precomputation, and as far as I know no one has been able to do proper antialiasing on point clouds. Texture interpolation in the traditional sense has also not been solved to my knowledge, because with these point clouds all you can do is give individual points colors, so you will always have hard edges between points. Those two combined result in a lot of visual noise that destroys the illusion in the demo videos that I have seen so far.

I saw the exact same images/videos they are showing now a year ago, so you're safe. The main problem is, it's only static data, and the gaming world moved beyond static scenery when we noticed doors could open in Wolfenstein 3D.

So that means 100,000 times more work to make everything that detailed?

Or else everyone who makes games uses a standard library of objects to cut/paste and so the games end up looking the same anyway?

This is voxels all over again, in a modern iteration. Yeah, it looks cool, but it increases your development time and isn't anywhere near as fast as other techniques, and all those graphical "shortcuts" that standard 3D cards do are done for a reason - nobody *really* notices or cares so long as the game runs smoothly.

If they really could do realtime graphics that were "100,000 times" more detailed than current stuff, they'd do one of two things:

1) Release a demo so people could actually try it and see it working on their systems, to prove it was real. Or more likely...

2) License that shit to a company in the industry. Intel would be extremely interested if it ran on CPUs, as they'd love for people to spend more money on CPUs and none on GPUs. Any game engine maker would be extremely interested either way. Wouldn't matter if things still had to be hammered out; at the point they claim to be at, that would be more than plenty to sign a licensing deal and get to work.

So I'm calling bullshit and saying it is a con. This is classic con man strategy: You show a demo, but one that is hands off, where the people watching only get to see what you want them to see and don't actually get to play with your product. You make all sorts of claims as to how damn amazing it is, but nobody actually gets to try it out.

This has been a con tactic for centuries, I've no reason to believe it is any different here.

So to them I say: Put up or shut up. Either release a demo people can download that will let them see this run on their own systems, or get a reputable company to license it. If Intel comes out and says "This is for real, we've licensed the technology and will be releasing a SDK for people as soon as it is ready," I'll believe them, as they have a history of delivering on promises. So long as it is some random guys posting Youtube videos, I call bullshit.

You should look into the underlying engine. The reason that they call it 'unlimited' is because the performance is based on a search engine that only has to be executed once per pixel, instead of once per polygon as in traditional engines. With traditional engines, the more polygons, the more performance suffers; with the Unlimited engine, adding more points has a negligible effect on performance. Adding higher resolution, on the other hand, has a significant impact.

You still run into the same problem as you increase the resolution of your voxel octrees: increasing the depth of your octrees or spatial lookup structures means you have to recurse more levels as you perform a spatial query, although it scales logarithmically instead of linearly. Still no reason to call it "unlimited."
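
To put rough numbers on "scales logarithmically" (my own back-of-envelope, using the 21 trillion figure from the video):

    octree depth ≈ log2(21×10^12) / 3 ≈ 44.3 / 3 ≈ 15 levels
    1024×768 frame ≈ 786,432 pixels × ~15 node visits ≈ 1.2×10^7 lookups per frame

That per-frame cost is tiny next to touching 21 trillion points, which is the whole pitch; the catch is that each lookup is a dependent memory access, which is exactly where hierarchical schemes hurt on real hardware.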

So did they just essentially develop a super intelligent LOD loading system that uses procedural instancing? I'm pretty sure you could put together similarly impressive demos using the latest tricks from Nvidia and ATI using standard polygon rendering. The fact they are using points vs. polygons isn't that interesting to me.

What is fundamentally missing here? Animation, lighting and shadows. Those are going to be really hard problems to solve and I'm curious how they will go about it.

I love the detail of the models in the demos. I'd love to see games without the polygonal trees and such shown in the video. But I agree the lighting of their demos could use some serious work. It's as if there's a uniform light source shining from all directions at once in their palm tree world. There are a few shadows in their demo, but the contrast is way too low.

I'd love to see the combination of good lighting and this non-polygonal world.

Think of it this way: a model is made up of a whole lot of points, billions of them even. This engine takes one pixel of the output and searches for which points will fill it. Imagine the whole world as points (not polygons) in a giant cube. The pixel is actually 1 point with a contracting square extrusion coming out of it. The engine starts close and works further and further away until the entire pixel is filled with points. The resulting image is then compressed back down to 1 pixel and sent for output as the final color.

It'll take a while for this tech to get turned into an engine with animation/shading/lighting working, and no game developer will touch it until that happens. Euclideon had the right idea making a converter to turn polygonal models into voxel models, since no one was going to dedicate the money to create high-quality voxel assets that couldn't be used if they decided to scrap the tech and use a normal polygon engine. This tech is risky, so the first game to use it is likely going to be a cheaply-made game, p

It'll take a while for this tech to get turned into an engine with animation/shading/lighting working, and no game developer will touch it until that happens.

That's also my biggest doubt. How does the system handle animation at all? The demos they have shown so far have no movement aside from the camera.

I am very, very much looking forward to this. I can barely imagine the amount of creative potential being freed up if for most real-life objects you don't need hours of artist time anymore, but simply throw them into a laser scanner and be done with it. Your artists could focus on other things.

Here is what I think they probably do, similar to raytracing: they fire one "photon" from each pixel of the screen into the scene. As opposed to raytracing, this photon is never divided into multiple copies; it travels until it reaches something. The photon travels through the scene by adding X,Y,Z steps from a pre-calculated table until it reaches a box with something in it; then it halves the step for x,y,z, looking for even smaller boxes, etc., until the box is so small it represents one pixel, OR the photon is outside the box.
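
If that reading is right, the core loop per pixel would look something like this (a toy sketch with a procedural sphere standing in for their spatial structure; note that coarse steps like this can skip thin features, which their box tests would presumably avoid):

    #include <cstdio>

    // Stand-in for a lookup into their spatial structure: a unit sphere at origin.
    bool occupied(float x, float y, float z) {
        return x * x + y * y + z * z < 1.0f;
    }

    int main() {
        // One ray from the eye at (0,0,-3), pointing down +z.
        const float ox = 0, oy = 0, oz = -3;
        float t = 0, step = 0.5f;                    // start with coarse steps
        while (step > 1e-4f && t < 10.0f) {
            if (occupied(ox, oy, oz + (t + step)))   // something within one step?
                step *= 0.5f;                        // yes: halve and look closer
            else
                t += step;                           // no: stride forward
        }
        std::printf("surface at t = %.4f (expected 2.0)\n", t);
    }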

I think their smarts are in modeling the environment data in such a way that they don't have to move gigabytes around for every image. Also, as they claim in TFA, they have a very limited "shade" model. They probably cut a lot of corners when it comes to reflectiveness and secondary light sources and all that.

We had this discussion some time ago, and from what I remember it came out that the procedural creation of "atoms" is kinda powerful and scalable, but inherently will not allow any kind of collision detection and/or animation.

so basically, yes, this can be used to create a very detailed static world.

First of all, I have nothing against the government spending money on computer game graphics engines; in fact, I think such money is wisely spent (more wisely than on most defense projects, at least). However, out of sheer curiosity, I'd like to know how a small software company can get AUD$2 million in government funding.

Euclideon is unjustly taking credit for other people's hard work. They say they've invented the methods and algorithms behind it all; well, that's just pure fantasy. Here's what Euclideon is basing their technology on:

I see a lot of very skeptical responses, and I must admit I am a bit skeptical too. But then I thought back to the first time I saw "Doom" on a 486 and almost had my eyes fall out. It was just such a big step... I couldn't have imagined it. All it took was someone with a very bright idea. Perhaps we might be in for a similar surprise...

Towards the end of their video, they talk about a demo of an island using 21 billion points, which is pretty much impossible to keep in RAM on anything less than a minicomputer.

Let's assume that each point is storing the bare minimum of data needed - xyz position (each as 32-bit ints) and two pieces of color information (diffuse and specular, also 32 bits apiece). So that's 20 bytes of data per point, which comes out to 391 GB of data (for a static, unanimated mesh, I remind you). You can't store that in RAM on anything close to consumer hardware.
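
For anyone checking the arithmetic (the 391 figure is gibibytes):

    21×10^9 points × 20 bytes = 4.2×10^11 bytes = 420 GB ≈ 391 GiB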

High quality voxel graphics with dynamic deformation would allow a whole new level of user-generated content.

Imagine something like world of warcraft meets second life, but without all the furries. (Something where if you take a shovel, and dig, you can dig up rocks, and other bits-- or even bury loot, or build a house out of ambient materials, and have it be persistent.)

Some people might complain that it opens the doors to world vandalism ([sarcasm]Oh dear, somebody wrote the word "Penis" in 30-foot letters

>>High quality voxel graphics with dynamic deformation would allow a whole new level of user-generated content.

Yeah, that would actually be pretty damn neat. None of what they showed was dynamic, though.

About 10 years ago, when I was doing a lot of work with voxels, I'd arrange all the voxels in an octree and could adjust the framerate/detail simply by how far down each object's octree I'd traverse. I could have large, coarse voxels, or small, precise ones, adjust for distance from the viewer, and so on.

Allegedly they have 21 trillion atoms in that scene. Now pardon my skepticism, but if that's, say, 1 byte per 'atom' (a massively conservative estimate), then you'll need about 20,000 GB of data storage alone. Now, they are either a) lying, or b) bending the truth massively (i.e., we only have 1 model, instanced 200,000,000 times). They also claim that they can convert a polygon mesh into a point cloud. Well, that's not hard to do, but you will be inherently limited by the detail of the original mesh, so it's s

OK, so we don't see them all at once. To be honest, if a middleware company can't write a frustum cull, they would be closed by now!

But what do they do then when they are not seen? Sod off for a holiday in the cloud? Seriously, I think you are missing the point. Where the hell is this data being stored, and what is the size of the data set? It's got to be in memory *at some point*, and on hard disk if it's not. So how much RAM/disk space will this thing use exactly? OK, so 'most of it is calculated, somewhat like fractals' - well, OK. But which bits? Are the trees fractals (or L-systems maybe)? Just the leaves? The models of the rocks they have scanned in? The 3ds Max models they have converted to point clouds? The whole island? Answers to these questions need to be provided before any games developer would even bother looking at this tech. Either it's all procedural (in which case it's utterly useless for game designers), it's primarily procedural (in which case the art director will struggle to achieve a consistent look), it's partially procedural (which will annoy the modelling & texturing departments), or it's a load of made-up lies. I'm erring towards the latter.....

Procedural generation works better than you would expect.
Look at these two examples: .debris (http://91.202.41.234/debris/) and .kkrieger (http://91.202.41.234/kkrieger) - they occupy virtually no space, and are lengthy, interactive, and perfectly playable on any modern machine with average CPU capabilities.

That's not exactly true. While they require almost no disk space, they do require quite a bit of RAM. Just because all the textures and models are procedurally generated doesn't make the need to store them go away. If things were dynamically generated each frame in a geometry or pixel shader, things might look different, but that is a whole lot more complicated than just procedural generation.
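
A toy example of that tradeoff (a made-up integer hash standing in for farbrausch's actual generators): the generator is a handful of code, but the texture it expands into still occupies RAM.

    #include <cstdio>
    #include <vector>

    int main() {
        const int W = 512, H = 512;
        std::vector<unsigned char> tex(W * H);       // 256 KB of RAM, regardless
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                // A tiny integer hash fills the whole texture procedurally.
                unsigned n = (unsigned)x * 374761393u + (unsigned)y * 668265263u;
                n = (n ^ (n >> 13)) * 1274126177u;
                tex[(size_t)y * W + x] = (unsigned char)(n >> 24);
            }
        std::printf("%zu bytes of texture from a handful of code bytes\n", tex.size());
    }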

Compared to the raw number of triangles your average geforce card can theoretically process, that's very true

And no mention in the video of what kind of hardware is powering that humble 20 fps "real time" preview. Even if we accept that statement, if it takes a supercomputer to get to 20 fps, that's not going to have much of a market. Given that this tech is totally different from where the industry is going, they should probably be talking with NVidia / AMD about what hardware could even make it feasible. Carmack is right: the hardware simply isn't there, and for that matter is not even trending that way.

I think this is all just a ploy to provide Intel with a market for their Knights Ferry chips -- this won't run on GPU hardware on current systems apparently, so you need CPU might. Where do you get that? Knights Ferry, obviously.

Still, it sounds very cool, if only for statically-rendered stuff like wallpapers and movies.

From the almost content-free article, it sounded like they were rendering point clouds. You can get very nice results from using radial basis functions to generate volumes from point clouds. Splatting is insanely fast, but the results aren't so great. Discrete ray tracing produces beautiful results, but is very slow. There are some hybrid techniques around that let you get almost the quality of ray tracing in a fraction of the CPU cost.

Sadly, that isn't the problem with your idea. The problem with your idea is that all the game engines are tied to the consoles, which are some seriously old shit ATM, and frankly, from the looks of it, the next gen won't be much better. Hell, they are using an ATI 4xxx series for the new Nintendo console, and that's what... 3 years and 2 generations behind? And the console isn't even released? I've already read that the next gen engines like the next Unreal are just sitting on ice because there is no way the consoles w

Yep. As a static voxel engine (and I can only assume it to be, as they don't appear to have demonstrated anything else), it's impressively fast at that high a resolution, but not particularly useful for a game engine. A dynamic voxel engine [youtube.com] however...

1) Except the games industry is bigger than Hollywood, by far.
2) The department that provided the funding looks to be Commercialisation Australia [commercial...lia.gov.au], which seems to basically be a government-backed VC-like operation - I can only imagine that exists because of the paltry VC in Australia.

Instead of 3d voxels in the traditional sense, it would be 1d points in 3d space, with luminance, specularity, and fuzziness variables assigned. After that it is just lighting and pixel shading, which would be embarrassingly parallel. You would render the scene as a 2d canvas that fills the whole viewport.
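
A minimal sketch of that canvas idea (my own toy numbers and layout): project every point, keep the nearest one per pixel with a z-buffer, and leave shading for a later pass.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Point { float x, y, z; unsigned char lum; };

    int main() {
        const int W = 320, H = 240;
        std::vector<float> depth(W * H, 1e9f);       // z-buffer
        std::vector<unsigned char> canvas(W * H, 0); // the 2D output

        // Stand-in cloud: a wavy ring of 10,000 points in front of the camera.
        std::vector<Point> cloud;
        for (int i = 0; i < 10000; ++i) {
            float a = i * 6.2832f / 10000.0f;
            cloud.push_back({std::cos(a), std::sin(a),
                             3.0f + 0.3f * std::sin(5 * a), 200});
        }

        const float f = 200.0f;                      // focal length, in pixels
        for (const Point& p : cloud) {
            int sx = (int)(W / 2 + f * p.x / p.z);   // perspective-project the point
            int sy = (int)(H / 2 - f * p.y / p.z);
            if (sx < 0 || sy < 0 || sx >= W || sy >= H) continue;
            size_t i = (size_t)sy * W + sx;
            if (p.z < depth[i]) {                    // nearest point wins the pixel
                depth[i] = p.z;
                canvas[i] = p.lum;                   // luminance; shading comes later
            }
        }
        std::printf("splatted %zu points onto a %dx%d canvas\n", cloud.size(), W, H);
    }

Since every point is handled independently, the loop parallelizes trivially - the "embarrassingly parallel" part.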

nVidia and AMD are currently looking at real-time ray tracing, because that's where intel is going and they have to compete. There is also CUDA and OpenCL, and the next stepping for GPUs is almost half of the current. (meaning performance/cost ~doubles) Anandtech says AMD promises a 22nm card this year still. GPUs are no longer toys; they are a form-factor for supercomputers.

I don't think for example caustics would work very well with voxels, but a hybrid solution would perhaps be ideal, where you could hav

Remember, they only need to search the point cloud once for each pixel on the screen. The number of points in the cloud has a much smaller effect on their performance than the number of pixels on the screen. So they can probably run a 640x480 output on a fairly low-end machine. Yeah, running a 3x1080p monitor setup would probably require some amazing hardware, but for us mere mortals, I don't think that's quite as much of a concern.

AMD would be nice, but honestly, Google would be the best to have a hack at it

Correct, but they actually use ray-casting, not ray-tracing. Ray-casting only involves a single ray collision test and sample per pixel, and then you need to use an alternative means to compute lighting, such as a deferred shading and lighting compositor. Full ray-tracing doesn't scale well for real-time graphics in shared memory systems due to the memory access patterns involved. Intel has some demos with some simple car models doing full recursive ray-tracing, but it only runs at a few FPS even on 64 cores.
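
For illustration, the "alternative means" per pixel can be as simple as this (toy values; plain Lambert from a single point light, which is roughly the level of lighting the demo has been criticized for):

    #include <cmath>
    #include <cstdio>

    // One G-buffer texel, as a single ray-cast sample per pixel would produce it.
    struct Sample { float nx, ny, nz, px, py, pz; };

    int main() {
        Sample g{0, 1, 0, 0, 0, 5};                  // upward-facing point at z=5
        const float lx = 3, ly = 4, lz = 5;          // a single point light
        float dx = lx - g.px, dy = ly - g.py, dz = lz - g.pz;
        float len = std::sqrt(dx * dx + dy * dy + dz * dz);
        float ndotl = (g.nx * dx + g.ny * dy + g.nz * dz) / len;
        float diffuse = ndotl > 0 ? ndotl : 0;       // Lambert term, nothing fancier
        std::printf("diffuse = %.2f\n", diffuse);
    }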