I think it's still widely used, just not so common in real-time rendering (i.e. games). Maybe it's too computationally intensive in comparison (just my random guess, maybe just BS): while hardware nowadays may be good enough to handle it reasonably well, it may still be cheaper to draw A WHOLE LOT of flat surfaces (triangles or quads) to get comparably good-looking results and performance, so people still choose polygons for games.

NURBS was sort of popular with earlier OpenGL stuff but not really used in games. Having hardware to do some sort of curve tessellation eventually just became the generalized thing of "Geometry Shader" and/or some of the other newer parts of the pipeline between the vertex and pixel shaders. You can do NURBS with that, or various other things.

I don't think you see it that much in games because it's basically trading memory (which would be needed for more polygons) for GPU time (used to do the subdivisions). Vertex memory tends to be small potatoes compared to all those HD textures (especially when you have diffuse map, normal map, gloss map, etc. all on there), so I think in that majority case in games this just isn't a good tradeoff.

I have a NES tech demo inspired by this game that I want to get back to after my current SNES commission is done! I was considering making my own game more like Human Entertainment's Clock Tower on the Super Famicom.

Minimal weapons, or none at all, and a constant threat somewhere in the map, looking for the player.

The player sprites are scaled, using a simple "squishing" effect, to transition between each major frame.

I figured it was probably something like that. In my line of work, we sometimes find that going to a higher-order numerical technique doesn't pay for itself in compute time, or doesn't offer enough of an advantage to make the additional complexity worth it. For example, high-order root finding techniques exist, but nobody cares; everybody just uses Newton's method.
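For anyone unfamiliar, here's a minimal sketch of Newton's method in Python (the function names and the example f(x) = x^2 - 2 are my own illustration, not from the post):

```python
# Minimal Newton's method; f is the function, df its derivative.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) / f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: sqrt(2) as the positive root of x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Convergence is quadratic near the root, which is part of why higher-order root-finding schemes rarely pay for their extra per-step cost.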

This is what I was thinking of: supporting actual curves instead of having to fake them with a bunch of lines. Of course this will always look nicer, but will it always be more GPU intensive? As in, wouldn't traditional modeling become less efficient the smaller (fewer pixels) each polygon is? Of course, with 4K we now have 4x the number of pixels in the equivalent space...

Yeah, but the GPU implementations of NURBS were subdivision surfaces (i.e. "faking them with a bunch of lines").

You can raytrace in a pixel shader, too; there's been a lot of demoscene stuff that does this over the years. But if your question is whether this can be done in a generally useful way for typical game situations, e.g. modelling and animating a 3D human in realtime... well, the most practical/efficient way to do that so far is subdivision surfaces, not raytracing.

Though, I would say the difference between a sufficiently fine subdivision surface and a "real" curve is probably not an important practical distinction? It's the same result at a certain point.

It may not be worth it in most cases to go past Newton's method, but it's absolutely worth it to go past Euler's method, or Godunov's method. There's still a limit, but the simplest possible method (analogous to triangles for a general 3D surface) is not the best one in those cases.

Or perhaps voxels would be a better analogy...?

One is tempted to wonder if this field is really mature, or if it's just gotten stuck in a rut and looks mature like the space launch industry did...

EDIT: I was careless. Just so we're clear, I am aware that there are a number of root-finding techniques more primitive than Newton's method. Voxels = bisection method?

Last edited by 93143 on Wed Sep 27, 2017 8:59 pm, edited 1 time in total.

93143 wrote:

One is tempted to wonder if this field is really mature, or if it's just gotten stuck in a rut and looks mature like the space launch industry did...

This is what I was thinking. Triangles are the most efficient for GPUs, but modern GPUs are also built around rendering triangles, as in they are built to do this in hardware. Using thousands of small triangles to emulate a smooth surface just seems... primitive? It's not that being basic is inherently bad, but we have developed tons of new, better lighting techniques as we have gotten more processing power, while actual modeling has stayed about the same for 20 years. Like I said earlier, I'm even curious whether you would get better performance at this point from a GPU that natively handles curves. It's like how it used to be more efficient to just draw over everything, while it is now more efficient to use z-buffering.

There aren't really any promising algorithms for making this efficient, though. Intel has been researching real-time raytracing for years (using supercomputers), but there's never been a promise of efficiency from it. Pixar experimented with raytracing (sparingly) for Cars, and then decided they should probably never do it again... (Edit: thefox points out below that Pixar has since gone back to that well.)

It's more that there are some things that raytracing inherently does easily (e.g. reflections, shadows, some curves), and if we ever reach enough CPU/GPU power in real time you can get some of those, but only by trading a huge performance advantage for it. For it to take over as a paradigm though, those benefits have to trade better than just throwing more polygons at the problem. It could be the difference between 10 raytraced trees and 1000 rasterized trees at the same framerate; yeah maybe those trees have great shadows and look nice in some special way, but I can just do so much more stuff with rasterization-- maybe I could even use some of that excess to fake or substitute for the special raytraced effect!

The efficiency difference between, e.g., raytracing a bi-cubic curve patch vs. subdividing it with a GPU is immense. Designing the hardware around this feature wouldn't negate the nature of the algorithm. They're accomplishing the same task in vastly different ways, and one of them just takes a lot more time to compute. If you have so much power that the difference in efficiency doesn't matter, then maybe you'd consider raytracing, but commodity graphics hardware isn't going to make that irrelevant any time soon.

"Better lighting techniques" were developed and in use 30 years before GPU pixel shaders had enough power to run them. Depth buffers were the same: known long before GPUs made them standard. The examples you're using we could see coming from decades away; it was just a matter of when we'd have enough cheap power/memory to make them practical.

Rasterized triangles are actually a really good solution for rendering in 3D. I don't really see them going anywhere; just because a solution is old doesn't mean it's in danger of being replaced (maybe the opposite?). Hey, are we still using knives to cut our food? Isn't there a better way yet? What about robots and lasers? (Not really trying to be mocking with this analogy, just trying to get across that some ideas become stable for good reason.)

Last edited by rainwarrior on Thu Sep 28, 2017 10:30 pm, edited 1 time in total.

Maybe to illustrate another approach besides raytracing, what about trying to rasterize a curve?

To rasterize any shape, you need to be able to trace the boundary of the shape, and then find pairs of points along horizontal lines across that boundary.

With a triangle, you have 3 points. Think about ways that 3 points could be arranged on the screen. Which one is at the top? Which one is at the bottom? Is the middle one on the left or right side? Is the triangle degenerate (i.e. are all 3 points on a line or on the same pixel)?

There's a bunch of cases to consider, but in general you end up splitting the triangle into an upper wedge (from the top point to the split at the middle) and a lower wedge (from the middle split to the bottom point). Once split into these two wedges, it's easy to trace the two boundary lines in parallel (with some appropriate efficient algorithm) and use those as the endpoints of each raster.
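As a rough illustration of the scanline idea (hypothetical code, simplified to intersect all three edges per scanline rather than doing the full wedge split):

```python
def raster_triangle(p0, p1, p2):
    """Return the filled pixels of a triangle by scanning horizontal lines.
    Simplification of the wedge approach: for each scanline, intersect
    all three edges and span from the leftmost to the rightmost hit."""
    pts = sorted([p0, p1, p2], key=lambda p: p[1])
    (x0, y0), (x1, y1), (x2, y2) = pts
    if y0 == y2:  # degenerate: all three points on one scanline
        xs = sorted([x0, x1, x2])
        return {(x, y0) for x in range(round(xs[0]), round(xs[2]) + 1)}
    pixels = set()
    edges = [((x0, y0), (x1, y1)), ((x1, y1), (x2, y2)), ((x0, y0), (x2, y2))]
    for y in range(round(y0), round(y2) + 1):
        xs = []
        for (ax, ay), (bx, by) in edges:
            if ay != by and min(ay, by) <= y <= max(ay, by):
                t = (y - ay) / (by - ay)          # interpolate along the edge
                xs.append(ax + t * (bx - ax))
        for x in range(round(min(xs)), round(max(xs)) + 1):
            pixels.add((x, y))
    return pixels

px = raster_triangle((0, 0), (4, 0), (0, 4))
```

A real rasterizer does the wedge split so it only ever walks two active edges, but the output is the same idea: pairs of span endpoints per scanline.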

So... why not just use curved lines to trace the boundary of a curved shape and rasterize that? Well, in some cases you can: there are effective circle rasterization algorithms, for example. In that case, it's solving a quadratic equation instead of a linear one, so each step is more complex than the edge of a triangle, but it's still doable.
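The classic example is the midpoint circle algorithm, which traces the boundary with integer-only arithmetic and mirrors one octant into the other seven (my own sketch, not from the post):

```python
def circle_points(r):
    """Midpoint circle algorithm: walk one octant of the boundary using
    an integer decision variable, and mirror each point 8 ways."""
    pts = set()
    x, y = r, 0
    d = 1 - r                      # decision variable for the midpoint test
    while x >= y:
        for sx, sy in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            pts.add((sx, sy))
        y += 1
        if d < 0:                  # midpoint inside: keep x
            d += 2 * y + 1
        else:                      # midpoint outside: step x inward
            x -= 1
            d += 2 * (y - x) + 1
    return pts

boundary = circle_points(3)
```

Note how much of this leans on the circle's symmetry, which is exactly what a general curved patch doesn't give you.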

However, a circle is an exceptionally "simple" case: it has perfect symmetry that can be exploited for efficiency, a very well-defined shape and boundary, etc.

Let's think about 3D curves in general, though. What's the simplest curved surface? Maybe a bi-quadratic patch? I.e. a quad, but with an additional piece of curvature data on each axis.

The triangle had a bunch of cases, but ultimately we could split it into two simple wedges. What do we need to do to split a bi-quadratic patch? How do we find its boundary on the screen? The surface isn't contained by the 4 points of the quad; the curve bulges outside it! Does the bulge go left or right, up or down, in or out? Does part of the curve overlap itself from this viewpoint? (Triangles can't do that!) How do I split it to avoid the overlap? Splitting this into rasterizable pieces is not easy! We're only one step above triangles here and there's already a combinatorial explosion of possibilities to deal with. It can be done, but there's a huge loss of efficiency in dealing with all these cases. On top of that, even once you've split it up, tracing/interpolating a curved edge inherently takes more computation than tracing a line, as does interpolating across the raster between the split edges. Every single step of the operation has this added quadratic complexity to it!

The easier-to-build alternative is basically raytracing. Find a bounding rectangle on the screen that you know will contain all the points of the curve to be rasterized, and then just raytrace every point in that rectangle. This is not that hard to implement (it's just one easily calculated surface intersection), but it's orders of magnitude more computation than rasterization. Not to mention that the bounding rectangle is necessarily wasteful: a lot of the pixels in it will be off the shape and ultimately discarded, but you have to do the intersection calculation to find that out. This is an even bigger loss of efficiency, but at least it's easier to implement.
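A toy 2D version of this bounding-rectangle approach, using a disc as the implicit shape (my own example; a real version would intersect a ray with a patch instead of testing x^2 + y^2 <= r^2):

```python
def raster_disc_bruteforce(cx, cy, r):
    """Test every pixel in the bounding rectangle of a disc against the
    implicit equation. Pixels that fail the test are wasted work, which
    is the inefficiency described above."""
    inside = set()
    tested = 0
    for y in range(cy - r, cy + r + 1):
        for x in range(cx - r, cx + r + 1):
            tested += 1                       # every pixel pays for the test
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                inside.add((x, y))
    return inside, tested

pixels, tested = raster_disc_bruteforce(0, 0, 10)
```

A noticeable fraction of `tested` falls outside the disc and is thrown away; the corners of the rectangle are pure overhead.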

Then back to subdivision surfaces: the GPU automatically splits a patch into smaller triangles, and then regular triangle rasterization takes over. It's the same input data as the two other options, and the result is visually as close as you want to make it, but the subdivision process is relatively easy to implement and fairly efficient at the same time. You can even do level-of-detail adjustments where close stuff gets subdivided more than far-away stuff. It's easy to scale up if you want smoother curves, and just as easy to scale back if you want coarser curves to save computation.
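A 2D analogue of this trade-off (my own sketch): adaptive de Casteljau subdivision of a quadratic Bezier curve, splitting until each piece is flat enough to draw as a line segment. The `tol` knob is the level-of-detail dial described above.

```python
def flatten_bezier(p0, p1, p2, tol=0.01):
    """Recursively split a quadratic Bezier with de Casteljau's algorithm
    until each piece is nearly flat, then emit it as a line segment.
    Smaller tol = smoother result, more segments."""
    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    # Flatness test: how far the control point sits from the chord midpoint.
    chord_mid = mid(p0, p2)
    if abs(p1[0] - chord_mid[0]) + abs(p1[1] - chord_mid[1]) < tol:
        return [p0, p2]

    # de Casteljau split at t = 0.5: m lies exactly on the curve.
    a, b = mid(p0, p1), mid(p1, p2)
    m = mid(a, b)
    left = flatten_bezier(p0, a, m, tol)
    right = flatten_bezier(m, b, p2, tol)
    return left + right[1:]                  # drop the duplicated join point

poly = flatten_bezier((0, 0), (1, 2), (2, 0))
```

GPU tessellation of a patch is the 3D, two-parameter version of the same idea, with the triangles handed off to the ordinary rasterizer.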

93143 wrote:

It may not be worth it in most cases to go past Newton's method, but it's absolutely worth it to go past Euler's method, or Godunov's method. There's still a limit, but the simplest possible method (analogous to triangles for a general 3D surface) is not the best one in those cases.

...

One is tempted to wonder if this field is really mature, or if it's just gotten stuck in a rut and looks mature like the space launch industry did...

In cases where accuracy is needed, there are all sorts of advanced methods for calculating numerically stable solutions to problems like that. I don't think there's really a lack of advanced methods in applications where they're needed, especially in physical simulations, scientific research, etc.

If you're talking about games, then relatively simple methods are usually sufficient. Accuracy isn't necessarily a problem; when the goal is just making something that's fun to interact with and responds acceptably, that doesn't always require advanced computational techniques. In a lot of cases, keeping the algorithm simple also makes it easier to handle all the ways things can interact.

...or sometimes it's just an advantage to use a simple method, even if it's not the best tool, because it's easy to implement and you don't understand, or don't want to take the time to properly think out, the "better" solution. Sometimes you just have to roll your own solution with limited knowledge and move on, because the quick solution was good enough for this project.

But: if you look at physics engine middleware for games, they tend to have pretty well thought out computational methods. When you have experts that can specialize on something like that, they do get this stuff right. I wouldn't call solutions like Havok "immature" at all, from what I've seen.

The idea I was trying to get across is that there are optimum methods. In numerical physics, the tradeoff is not so much between speed and accuracy, but between how much time it takes to do a thing and how often you have to do it to get the accuracy you want. This means that there will be a method that does the job faster than either a more primitive method or a more sophisticated method.

Time marching is a good example. Heun's method (predictor-corrector) is a significant improvement over Euler's method because you can take much longer time steps, but I believe it's been shown that going past a certain order is counterproductive because it takes so long to compute each step. The commonly used Runge-Kutta (4,5) scheme is about as high as you want to go.
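A quick sketch comparing the two (my own example, integrating y' = y from 0 to 1 with the same number of steps for each method; the exact answer is e):

```python
import math

def euler_step(f, t, y, h):
    """One explicit Euler step: first-order accurate."""
    return y + h * f(t, y)

def heun_step(f, t, y, h):
    """One Heun (predictor-corrector) step: second-order accurate.
    Predict with Euler, then correct with the averaged slope."""
    predictor = y + h * f(t, y)
    return y + (h / 2) * (f(t, y) + f(t + h, predictor))

f = lambda t, y: y          # y' = y, y(0) = 1, exact solution e^t
h, steps = 0.1, 10
ye = yh = 1.0
for i in range(steps):
    ye = euler_step(f, i * h, ye, h)
    yh = heun_step(f, i * h, yh, h)
exact = math.e
```

For the same step count, Heun costs two slope evaluations per step instead of one but lands an order of magnitude closer, which is the "longer time steps for the same accuracy" trade in action.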

With computer graphics, the tradeoff is a bit different, because the frame time is fixed (more or less) and the accuracy of the representation is not. But it's fundamentally the same idea - there will be a method that produces the best picture within that frame time. Maybe that method is texture-mapped triangles. Maybe it's something that hasn't been thought of yet. But triangles do seem to have some nice advantages over both simpler and more complicated representations...

rainwarrior wrote:

Pixar experimented with raytracing (sparingly) for Cars, and then decided they should probably never do it again...

They may have said that in 2006, but in recent years there has been a pretty significant shift towards using path tracing (a form of ray tracing) in movies. See this paper and these SIGGRAPH course notes for example.

Hmm, yeah, thinking back on this, the comment I was referring to was something I heard someone say at a SIGGRAPH talk around 2008. Basically that person expressed that the raytracing in Cars probably hadn't been worth the rendering-time burden. It may have been an isolated opinion, and it may have changed by now.

93143 wrote:

It's just an analogy. ... Maybe it's something that hasn't been thought of yet. But triangles do seem to have some nice advantages over both simpler and more complicated representations...

I'm trying to think of times where there's been apparent leaps in computer graphics.

Espozo mentioned depth buffering, but I thought it was important to note that it wasn't a new idea when it became standard in GPUs. It was just that the first generation of GPUs barely had enough power to do what they were doing at all; a depth buffer requires not just more RAM for an extra layer of framebuffer, but also an extra read, compare, and write for every pixel. We just didn't quite have the bandwidth to spare at the time. Depth buffering was a convenience rather than a necessity, but once you had a certain amount of power there was no reason to do without it. It very quickly became a standard feature.
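The extra per-pixel work is easy to see in a sketch (hypothetical code, not from the thread):

```python
def draw_pixel(framebuffer, zbuffer, x, y, color, depth):
    """Depth-buffered pixel write: one extra read, one compare, and (on
    pass) one extra write versus a plain framebuffer blit."""
    if depth < zbuffer[y][x]:        # is this fragment closer than what's there?
        zbuffer[y][x] = depth
        framebuffer[y][x] = color
    # else: fragment is behind existing geometry, discard it

W = H = 4
fb = [[0] * W for _ in range(H)]
zb = [[float('inf')] * W for _ in range(H)]   # start "infinitely far away"

draw_pixel(fb, zb, 1, 1, color=0xFF0000, depth=5.0)  # far pixel lands
draw_pixel(fb, zb, 1, 1, color=0x00FF00, depth=2.0)  # nearer pixel overwrites
draw_pixel(fb, zb, 1, 1, color=0x0000FF, depth=9.0)  # farther pixel rejected
```

Draw order stops mattering, which is the whole convenience; the price is that read-compare-write on every single fragment.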

Shader programs were another big change, but again not really a new idea, as software renderers had inherently been "programmable" since the beginning. It was just newly available to GPUs as their power grew enough that we could afford to let them run programs instead of optimizing the hardware to do only one very specific thing. The years of fixed function must have been hell for ARB standards; every new idea had to be implemented in hardware and given its own new API extension. Things have been simplified tremendously since fixed function was abandoned.

Most of the stuff that's changed is just due to predictable steady increase in available power. Most good techniques seem to be around for years in non-interactive graphics before there's enough power to do them in interactive stuff.

A new idea that changes everything, something better than a triangle... that'd be amazing, but I haven't seen anything of the sort on the horizon. I can see power continuing to get cheaper, and more viability for interactive raytracing eventually, but that's about it. There are other oddball rendering methods too... I dunno, how often does someone burst in with an entirely different way of doing things in any given field? It'd be exciting if it happened, but I'm not expecting it.

With the way hardware has gone the last few years I've actually felt the opposite, that we finally have reached a really good generic solution for 3D rendering, and all the hardware is starting to settle down and look pretty much the same / interchangeable. I think this is a really good thing, too, because as things standardize programs can be more accessible and have more longevity.
