I dunno, how often does someone burst in with an entirely different way of doing things in any given field?

Well, in space launch (which I used earlier as an example), it seems like all the good ideas were had within the first decade or so; it's just that no one tried them. One of the less conventional ideas getting traction lately is the SABRE engine: the required heat exchanger technology was demonstrated in 2012, but SABRE itself was conceived nearly four decades ago, and it was basically a variant of a concept from 1955.

Now, space launch may not be a good analogy, since the barriers to entry are immense and the capital flow woefully inadequate compared with computer graphics. But all that really means is that the good ideas in computer graphics have had a much better chance of at least being tried, rather than that they haven't been thought of yet.

It's possible something like Mach effect propulsion could shake up space launch to the point of making everything else hopelessly obsolete. But that would be a much larger paradigm shift than I think it's reasonable to expect in computer science; even quantum computing doesn't seem to have much applicability to graphics...

Quote:

Most good techniques seem to be around for years in non-interactive graphics before there's enough power to do them in interactive stuff. [...] I can see power continuing to get cheaper, and more viability for interactive raytracing eventually, but that's about it.

If I had to guess, I'd say the rasterized triangle killer will be ray tracing.

Then again, Moore's Law seems to be on its last legs, so we may never actually get there...

I don't mean to be rude, but did anyone actually look at my link? Raymarching (a form of raytracing) works in realtime on consumer GPUs now. The Unreal Engine is already using it for soft shadows. There are plenty of examples around the web that will run right in your browser, even on a crappy Intel GPU like mine. Granted, it doesn't work so well for animated models of humans, but it's definitely already practical for other things.

Rahsennor wrote:
I don't mean to be rude, but did anyone actually look at my link? Raymarching (a form of raytracing) works in realtime on consumer GPUs now.

Sorry, I didn't say anything to acknowledge either of the two things you mentioned (point cloud rendering, or raymarching), but I thought they were both good examples of alternative rendering paradigms (the point cloud especially).

I didn't feel a need to say anything about raymarching specifically because it's in the same family as raytracing, and its advantages (e.g. it can express curves and complex shapes with relatively simple code) and disadvantages (mainly poor performance) are pretty much the same as for other forms of raytracing.
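To make the "relatively simple code" point concrete, here's a minimal sketch of sphere tracing (the core of raymarching) in plain Python. This isn't from any of the linked demos; the function names and scene are made up for illustration. The whole renderer's inner loop is just "step forward by the distance the SDF reports":

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def raymarch(origin, direction, sdf, max_steps=64, hit_eps=1e-3, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value, which is a
    safe lower bound on the distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < hit_eps:
            return t  # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None  # miss

# A unit sphere at the origin, viewed from z = -5 looking down +z.
scene = lambda p: sphere_sdf(p, (0.0, 0.0, 0.0), 1.0)
hit = raymarch((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), scene)  # hits at t = 4.0
```

A GPU version is the same idea run once per pixel in a fragment shader, which is why a whole curved, animated scene can fit in a few dozen lines of shader code.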

Rahsennor wrote:

The Unreal Engine is already using it for soft shadows. There are plenty of examples around the web that will run right in your browser, even on a crappy Intel GPU like mine. Granted, it doesn't work so well for animated models of humans, but it's definitely already practical for other things.

I was also alluding to stuff like this when I said I've seen pixel shader raytracers in the demoscene. Years ago I saw the Valleyball demo, which was released with its source code, and it was really cool to be able to just see everything right there in the shader code. Since then Shadertoy has appeared, which has been a cool place for lots of things like this.

(I like the application to shadows; I hadn't seen that yet. Very interested to learn how the SDF is being stored, which those slides don't really explain... got some research to do later, thanks.)

As far as how practical it is... none of those examples run at a "playable" game framerate at fullscreen for me (with a half decent GPU), and all of those scenes are relatively very simple. A fish swimming in water like this should be a walk in the park for traditional rendering, well above 60fps (e.g. 16 year old Xbox demo), and yet this demo struggles to do 3FPS in fullscreen on a modern GPU. This also doesn't scale well: more objects mean more code in the shader, and the limit of what you can fit in a shader program is VERY low here. The demos with an infinite field of balls do it with recursion/loops, but you can't use recursion to e.g. turn a "dinner table" function into a "furnished dining room"; recursion only really makes fractal shapes. You will need code for each and every unique element, and that quickly eats up both your shader size budget and your performance budget.
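The scaling problem can be spelled out in a toy sketch (the shapes here are hypothetical, not from any of the linked demos): an SDF scene is the min over every object's distance function, so each unique object is another function call in code that runs per pixel, per march step.

```python
import math

def sphere(p, c, r):
    # Distance to a sphere of radius r centered at c.
    return math.dist(p, c) - r

def box(p, c, half):
    # Exact distance to an axis-aligned box (Quilez-style construction).
    q = [abs(p[i] - c[i]) - half[i] for i in range(3)]
    outside = math.sqrt(sum(max(v, 0.0) ** 2 for v in q))
    inside = min(max(q), 0.0)
    return outside + inside

def scene(p):
    # Every unique object in the scene is another term in this min().
    # A "furnished dining room" means one line like these per table leg,
    # chair, and plate, all evaluated on every step of every ray.
    return min(
        sphere(p, (0.0, 1.0, 0.0), 1.0),
        box(p, (0.0, -0.5, 0.0), (4.0, 0.5, 4.0)),  # hypothetical floor slab
    )
```

The repeating-ball demos avoid this by wrapping one object in a cheap domain-repetition modulo, but distinct objects each need their own code, which is exactly the shader-size problem described above.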

(Though you could make a case for future tech where shaders have access to much, much larger shared program memory? Kind of like the idea of space-inefficient self modifying code for faster PPU upload on the NES... a big huge optimized spatial partitioning tree as unrolled code... might solve the memory half of the limitation at least.)

The fish looks really neat, but that fish is also modeled with like 100 difficult lines of code, really editable only by a specialized programmer + artist. Doing the same thing with traditional triangles would be straightforward work for any pro modeller, could perform well in more complicated scenes, and you could make a full game out of it. I'm impressed by it, but only because I know it's being done in such a difficult way.

I loooove the picture here of the snail being modelled from functions. That's really cool, and also really illustrates both the power and the problem of this approach.

3D modelers are used to sculpting tools like ZBrush that work much like their physical clay counterpart: click somewhere to add detail, squish or stretch the shape, cut a groove, add a blob... A triangle mesh handles these actions very well; each one just moves some of the points around, and the detail is directly there in the data. A point is where you put it, not dependent on any chain of actions, etc.

Modelling from high-level shapes, though, is a really painstaking thing to do! The whole process is inverted: you start from high-level shapes and make something interesting-looking with as few additions as possible, to keep the code complexity down. Every curve, every twist, every groove adds another node of code to run to produce that detail. The 3D model is like an edit history (sort of like the old Sierra games). The main problem is that if the "edit history" gets too deep, performance is dead, and that's a very hard constraint to model around. Every piece of the tree affects everything else, and making changes higher up requires corrective tweaks everywhere on the model.
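The "edit history" structure can be sketched as nested CSG operations (this is a hypothetical toy model, not any particular tool's format). Every groove or blob is another nested call, and all of it re-runs for every distance query:

```python
import math

def sphere(p, c, r):
    return math.dist(p, c) - r

def union(d1, d2):
    # CSG union of two distance fields.
    return min(d1, d2)

def subtract(d1, d2):
    # Carve shape 2 out of shape 1.
    return max(d1, -d2)

def model(p):
    # The model *is* its construction tree. Changing the body near the
    # root shifts where the dent lands, so edits ripple down the tree.
    body = union(sphere(p, (0.0, 0.0, 0.0), 1.0),
                 sphere(p, (0.9, 0.0, 0.0), 0.6))     # blob stuck on the side
    return subtract(body, sphere(p, (0.0, 1.0, 0.0), 0.5))  # dent cut on top
```

Three primitives already take three nested operations; a detailed character is hundreds, and each one costs evaluation time on every ray step.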

Maybe I should take a moment to mourn the loss of Java on the web, because one of my favourite websites no longer works at all. Ken Perlin's website had so many very interesting Java demos of different ideas for rendering and animation. http://mrl.nyu.edu/~perlin/

rainwarrior wrote:
(I like the application to shadows; I hadn't seen that yet. Very interested to learn how the SDF is being stored, which those slides don't really explain... got some research to do later, thanks.)

IIRC, in Unreal the SDF is a simple 3D texture. I found some good blog posts on the subject (not by Epic themselves). Quílez uses CSG.
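A CPU-side sketch of the 3D-texture approach (my own illustration, not Unreal's actual code): sample the analytic SDF onto a dense grid once, then trilinearly interpolate at runtime, exactly as a GPU texture unit would for a volume texture.

```python
import math

N = 16  # grid resolution per axis; a real engine would use more per mesh

def analytic_sdf(x, y, z):
    # Source shape to bake: a sphere of radius 0.5 at the origin.
    return math.sqrt(x * x + y * y + z * z) - 0.5

def coord(i):
    # Map grid index 0..N-1 onto the baked volume [-1, 1].
    return -1.0 + 2.0 * i / (N - 1)

# Bake: the "3D texture" is just a dense grid of sampled distances.
grid = [[[analytic_sdf(coord(i), coord(j), coord(k))
          for k in range(N)] for j in range(N)] for i in range(N)]

def sample(x, y, z):
    # Trilinear interpolation between the 8 surrounding texels.
    def axis(v):
        f = (v + 1.0) * (N - 1) / 2.0
        i = min(int(f), N - 2)
        return i, f - i
    (i, fx), (j, fy), (k, fz) = axis(x), axis(y), axis(z)
    total = 0.0
    for di, wi in ((0, 1 - fx), (1, fx)):
        for dj, wj in ((0, 1 - fy), (1, fy)):
            for dk, wk in ((0, 1 - fz), (1, fz)):
                total += wi * wj * wk * grid[i + di][j + dj][k + dk]
    return total
```

The appeal is that the runtime cost no longer depends on how complicated the source model was; the tradeoff is the memory discussed below.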

Quote:

As far as how practical it is... none of those examples run at a "playable" game framerate at fullscreen for me (with a half decent GPU), and all of those scenes are relatively very simple.

Huh. The fish is a slideshow on my system too, but the others run fine even in fullscreen. Maybe Intel's Linux drivers are better than I thought...?

Quote:

The fish looks really neat, but that fish is also modeled with like 100 difficult lines of code, really editable only by a specialized programmer + artist. Doing the same thing with traditional triangles would be straightforward work for any pro modeller, could perform well in more complicated scenes, and you could make a full game out of it. I'm impressed by it, but only because I know it's being done in such a difficult way.

As mentioned above, you can also use 3D textures. There's some pretty cool GPU octree/dag techniques out there too, for both this and other forms of raytracing.
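A sketch of the octree idea (my own toy version, not any particular paper's): the SDF itself tells you which cells to prune, because a cell whose center distance exceeds its half-diagonal can't contain any surface and never needs subdividing.

```python
import math

def sdf(x, y, z):
    # Shape to store: a sphere of radius 0.4 at the origin.
    return math.sqrt(x * x + y * y + z * z) - 0.4

def build(cx, cy, cz, half, depth, max_depth=4):
    """Subdivide only cells the surface might pass through. If |d| at the
    cell center exceeds the half-diagonal, the surface cannot intersect
    the cell, so it becomes a single leaf instead of a subtree."""
    d = sdf(cx, cy, cz)
    if abs(d) > half * math.sqrt(3.0) or depth == max_depth:
        return {"leaf": True, "dist": d}
    h = half / 2.0
    children = [build(cx + sx * h, cy + sy * h, cz + sz * h, h, depth + 1,
                      max_depth)
                for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    return {"leaf": False, "children": children}

def count_leaves(node):
    if node["leaf"]:
        return 1
    return sum(count_leaves(c) for c in node["children"])

# Octree over the cube [-1, 1]^3; a dense grid at depth 4 would be 8**4 = 4096 cells.
tree = build(0.0, 0.0, 0.0, 1.0, 0)
```

The DAG variants go further by merging identical subtrees, which is where the really dramatic compression numbers come from.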

I've found a couple of games using raytracing already. I think Voxel Quest switched to point cloud rasterization at some point, and YMMV on whether they count as games or tech demos, but hey, points for trying.

Quote:

Modelling from high-level shapes, though, is a really painstaking thing to do! The whole process is inverted: you start from high-level shapes and make something interesting-looking with as few additions as possible, to keep the code complexity down. Every curve, every twist, every groove adds another node of code to run to produce that detail. The 3D model is like an edit history (sort of like the old Sierra games). The main problem is that if the "edit history" gets too deep, performance is dead, and that's a very hard constraint to model around. Every piece of the tree affects everything else, and making changes higher up requires corrective tweaks everywhere on the model.

Probably also explains why Farbrausch's .werkkzeug never took off. Triangles are like the QWERTY of 3D graphics: everyone's used to them so they're here to stay.

Hm, I was a bit confused by the diagrams; it looked like they were doing it with 2D textures, so I wondered if there was some sort of nearest-surface thing going on. 3D textures, though... ha ha, the memory involved once you go to 3D textures kinda scares me.

Rahsennor wrote:

Quote:

As far as how practical it is... none of those examples run at a "playable" game framerate at fullscreen for me (with a half decent GPU), and all of those scenes are relatively very simple.

Huh. The fish is a slideshow on my system too, but the others run fine even in fullscreen. Maybe Intel's Linux drivers are better than I thought...?

Oh, well, maybe I shouldn't have tried to say everything about three different things in the same sentence... The two scenes with one repeating simple object do run okay on my desktop (though maybe like 10fps on my laptop in fullscreen). The fish doesn't really run well on either in fullscreen.

Rahsennor wrote:

I've found a couple of games using raytracing already. I think Voxel Quest switched to point cloud rasterization at some point, and YMMV on whether they count as games or tech demos, but hey, points for trying.

I'd like to try out either of these. Voxel Quest seems to have gone silent about a year ago, and that second link points at a dead website.

Edit: Seems like there's a bunch of builds of Voxelquest available on the site, but I could only get the very earliest build to run, which did not seem to be interactive.

I guess a 512^3 4 channel texture map would be half a gig. Maybe that's not an unreasonable thing to budget, especially if that's going to be your game's thing. (Maybe also I need to update my internal idea of how much memory GPUs have these days too...)
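Spelling out that arithmetic (assuming one byte per channel, e.g. an RGBA8 texture; a half-float distance field would double it):

```python
# Memory for a dense 512^3 volume texture with 4 channels of 1 byte each.
side = 512
channels = 4
bytes_per_channel = 1  # RGBA8; 16-bit float channels would double this

total_bytes = side ** 3 * channels * bytes_per_channel
gib = total_bytes / 2 ** 30
# 512^3 * 4 = 536,870,912 bytes = exactly 0.5 GiB
```

And halving the resolution to 256^3 divides that by 8, down to 64 MiB, which is why engines tend to store many small per-object volumes rather than one big scene volume.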

I remember a long time ago there was a bowling game that advertised itself as using ray tracing. I seem to recall the demo ran OK at 320x200, at least, and this was probably 15 years ago. (The game's website is long defunct, and I can't find a working download of the demo anywhere.)

Voxelquest reminds me a bit of a game that was made a few years ago called Love. Not really using "offbeat" rendering techniques, but it had a distinct visual style that was entirely procedural.

I guess for games, though, I'm a lot more interested in the result than the process, at least as a player. (I am interested as a game developer, but that's somewhat separate.) If using a weird method of rendering resulted in some interesting art, that's great, but people do that pretty often without special techniques too! Going back to that fish as an example: it's really impressive as a 3D model and animation done entirely in shader code, but as a 3D animated fish I'd consider it pretty shoddy compared to the fish in the game I linked above, Fishing Planet, which look a lot better and aren't even remotely a performance hazard for their engine.

Last edited by rainwarrior on Fri Sep 29, 2017 7:20 am, edited 1 time in total.

Is there anything that just interpolates the edges of 3D objects, instead of interpolating every polygon?

You mean like rounding the edges of a box or something? That's probably just easiest to do in the 3D model directly.

You could make subdivision surfaces that place more triangles at the edges of a patch than the middle, but I can't think of a case where that would really be beneficial.

With procedural generation, rounding the edges of a box is actually a huge pain, ha ha. Maybe the easiest expression is just the superellipsoid, but that ain't cheap to compute.
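In the SDF world, at least, there's a cheaper trick than a superellipsoid, popularized by Quílez: subtract a radius from an ordinary box's distance field. The shrunken box plus the constant offset rounds every edge and corner for the cost of one subtraction. A Python sketch (my own, for illustration):

```python
import math

def box_sdf(p, half):
    # Exact SDF of an axis-aligned box with half-extents `half`.
    q = [abs(p[i]) - half[i] for i in range(3)]
    outside = math.sqrt(sum(max(v, 0.0) ** 2 for v in q))
    inside = min(max(q), 0.0)
    return outside + inside

def rounded_box_sdf(p, half, r):
    # Shrink the box by r, then subtract r from its distance field:
    # every edge and corner picks up a radius-r fillet for free.
    shrunk = [h - r for h in half]
    return box_sdf(p, shrunk) - r
```

This only works because an SDF's isosurfaces are "inflated" copies of the shape; there's no equally cheap trick for a triangle mesh, where rounding an edge genuinely requires more geometry.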

There's also an anti-aliasing optimization where you only multisample if the pixel is on the edge of a triangle. Pixels in the middle of a triangle are already effectively anti-aliased by mip-mapping, so instead of computing every multisample fragment it just samples one for the whole pixel (and writes the same value to all its fragments).
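A toy software version of that adaptive scheme (hypothetical edge and shading functions, just to show the control flow): evaluate cheap coverage at every sample point, but only run the expensive shading per-sample when the pixel actually straddles an edge.

```python
def inside(x, y):
    # Hypothetical half-plane "triangle edge": covered where x + y < 1.
    return x + y < 1.0

def shade(x, y):
    shade.calls += 1  # count expensive "fragment shader" invocations
    return (1.0, 0.0, 0.0) if inside(x, y) else (0.0, 0.0, 0.0)

shade.calls = 0

OFFSETS = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def resolve_pixel(px, py):
    coverage = [inside(px + ox, py + oy) for ox, oy in OFFSETS]
    if all(coverage) or not any(coverage):
        # Uniform pixel (fully covered or empty): shade once, replicate.
        c = shade(px + 0.5, py + 0.5)
        samples = [c] * 4
    else:
        # Edge pixel: shade every sample.
        samples = [shade(px + ox, py + oy) for ox, oy in OFFSETS]
    # Resolve: average the four samples down to one pixel color.
    return tuple(sum(ch) / 4 for ch in zip(*samples))

image = [[resolve_pixel(x, y) for x in range(4)] for y in range(4)]
```

In this 4x4 grid only one pixel straddles the edge, so it costs 19 shade calls instead of the 64 full supersampling would need, which is the whole appeal of the optimization.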

You could make subdivision surfaces that place more triangles at the edges of a patch than the middle, but I can't think of a case where that would really be beneficial.

Google "blender edge crease" to see what modelers have done with configuring a Catmull-Clark subdivider to push triangles toward certain edges.

rainwarrior wrote:

There's also an anti-aliasing optimization where you only multisample if the pixel is on the edge of a triangle. Pixels in the middle of a triangle are already effectively anti-aliased by mip-mapping, so instead of computing every multisample fragment it just samples one for the whole pixel (and writes the same value to all its fragments).

Couldn't it use the two multisamples to improve the anisotropic filtering of a texture seen close to edge-on?

There's also an anti-aliasing optimization where you only multisample if the pixel is on the edge of a triangle. Pixels in the middle of a triangle are already effectively anti-aliased by mip-mapping, so instead of computing every multisample fragment it just samples one for the whole pixel (and writes the same value to all its fragments).

Couldn't it use the two multisamples to improve the anisotropic filtering of a texture seen close to edge-on?

I'm not aware of any implementation of MSAA that uses the surface angle to select whether to multisample, no. Anisotropic filtering can already be provided at the texture fetch level anyway, which might be more effective?

Hardware that supports MSAA (the edge-only optimization) also supports FSAA (true supersampling), so you could choose that as an option if you thought it was worthwhile. Normally you wouldn't, though, because the visual difference is pretty negligible while the performance difference is huge.

The specific implementation of stuff like this is getting into a grey area where it changes subtly between revisions/vendors, too. A lot of stuff at this level is "approximate" computation to begin with, so there is leeway for different results, especially for texture filtering and supersampling techniques specifically.
