I know this shadow volumes stuff is getting to be pretty tedious. I kind of whiffed my initial explanation, so lots of people are a little confused about how it works. Then I’ve endlessly fussed with it without clearing up the earlier confusion. I think this is the last time I’ll bring it up for a long while. It won’t even take up the entire entry. And next we’ll do something fun. Just humor me for a bit longer.

Right now the world is cut up into chunks, and those chunks are often irregular shapes. In 2D, they’re something like this:

The geometry shader looks for points that are right on the edge between the light side and dark side of the object, and extrudes those out to make a shadow volume.

Roughly.
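If the edge test itself is unclear, here's a minimal C++ sketch of the idea (the names `Vec3` and `IsSilhouetteEdge` are mine for illustration, not from the actual shader, and real implementations do this on the GPU): an edge belongs on the silhouette when exactly one of the two faces sharing it points toward the light.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// An edge sits on the light/dark boundary when one of the two faces that
// share it is facing the light and the other is facing away. Those are
// the edges that get extruded away from the light to form the volume.
bool IsSilhouetteEdge(const Vec3& face_normal_a,
                      const Vec3& face_normal_b,
                      const Vec3& to_light) {
  bool a_lit = Dot(face_normal_a, to_light) > 0.0f;
  bool b_lit = Dot(face_normal_b, to_light) > 0.0f;
  return a_lit != b_lit;
}
```

In the real geometry shader this runs per-edge using triangle adjacency information, but the test is the same sign comparison.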

You’ll notice that even though the top edge is flat, it’s still broken up into a row of six line segments. What would happen if I broke this odd shape up using an octree? I’d be able to remove a lot of these redundant points, at the cost of breaking a single volume into many smaller volumes. The geometry shader would have to consider fewer vertices, but it might have to extrude more of them. The question of “is this worth it?” depends a lot on how the stuff is shaped in the first place. If an entire chunk was one great big cube, then breaking it into an octree would be objectively, drastically better. But if the shape is really irregular, then the division might do more harm than good.

It’s impossible for me to intuit where the break-even point might be, so let’s just do the experiment and see what we get. I have it divide chunks using an octree, which results in large areas being consolidated into single cubes at the cost of creating a bunch more volumes. So this single volume:

Becomes many smaller ones, made up of fewer points:

The idea here is that there are fewer vertices (the red dots) to process, and fewer total polygons (line segments, in our 2D diagram) to worry about, but more overlapping shadows. The end result might look something like this:

Note that I’m depicting the shadows stacking on top of each other just so you can see what I’m talking about. In practice, the shadows won’t actually be darker or different. But you can see these smaller, simpler shapes get turned into more total volumes.

The results would vary a lot depending on how your scenery is shaped, but in my case using an octree reduces the polygons to a third of their original number. At the same time, it cuts the framerate in half. So in this case fewer polygons means a slower-running program.
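To make the tradeoff concrete, here's a toy C++ model of what the octree buys and costs (all names here are mine; the real code is obviously much more involved). A fully solid region collapses into one consolidated cube, while a partially solid region pays one volume per solid child:

```cpp
#include <cassert>
#include <vector>

// Toy model of the consolidation step: given the solid/empty flags of an
// octree node's eight children, how many shadow volumes does this region
// produce? All solid -> one big consolidated cube. Otherwise each solid
// child becomes its own smaller, simpler volume.
int ShadowVolumeCount(const std::vector<bool>& children) {
  int solid = 0;
  for (bool c : children)
    if (c) ++solid;
  if (solid > 0 && solid == static_cast<int>(children.size()))
    return 1;   // the whole region is one cube
  return solid; // one volume per solid child
}
```

This is why the break-even point is so shape-dependent: a mostly solid chunk consolidates beautifully, while an irregular one just fragments into more volumes.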

Now, we could probably achieve some kind of polygon reduction without using an octree. But… I don’t wanna. At least not now. That would be a long research project (it would basically be a 3D version of this) and not the sort of thing I want to mess with right now. Let’s just move on.

(There. All done now. No more shadow volumes for a bit.)

You know, I kind of want to see what this looks like with the marching cubes I was messing around with back in 2012. I’ve still got the code, and it’s basically ready for copy/paste right into the project.

Wow. That’s… really striking. Now, you might remember that back on project octant, this same idea looked like this:

Both images are using the same sort of beveled shapes, but the latter image is using smooth shading. I haven’t brought over the code for smoothing out the surface normals, and now that I’ve seen it flat-shaded I don’t think I’ll bother. Flat-shaded marching cubes have a certain geometric charm.

The only thing I don’t like about the old code is the way variables are named. The term “marching cubes” is a really odd name for a thing. That’s a verb and a noun, and the noun is already used elsewhere in the code to describe something different. (Actual cubes.) Ideally you want one-word descriptions for things in your code. And if you ARE going to have two-word names, then you probably don’t want one of the words to be a VERB. In C++ (and lots of other languages) coders usually get really fussy about naming things nouns and verbs. If a function does something then you want to give it an active, verb-y name, usually in the form of thing+action to perform on thing:

PlayerKill ();

But if it just returns information then you give it a more passive name:

PlayerHeight ();

Note that this is just one of many approaches. The goal here isn’t to make the One True Naming Convention, but only to pick a system where you should be able to guess the name (or purpose) of something without needing to look it up. So having a thing with a verb in its name creates a situation where you end up with verbs where you don’t want them. This doesn’t hurt anything, mind you, but it’s annoying when you settle on a set of rules and have this ONE THING that doesn’t follow them.

Back in 2012 I dealt with this in various ways. Some places they’re called marching_cubes (too long, has a verb) and other places marched_cubes (too long, kinda awkward) and other places marches (UGH, terrible), all in some kind of desperate attempt to make this thing less annoying to use.

This is stupid. They aren’t cubes. They do not march, or perform any marching-type activity. The name is long and confusing. So why not just call them “blobs”?

So I have to run through this old code and touch it up, re-naming all these ridiculous variables.

Other than this bit of housekeeping, I’m actually really pleased at how nice the code is. Sometimes I go through old code and cringe. Sometimes I’ll find a whole page of complex operations with no comments, numerous sections of unused code, or several things with nearly the same name, and get frustrated with my former self. This is especially true if the code was originally written as a prototype. You start off just slapping stuff together to see what works, with the idea that you’ll come back later and clean it up if it works out. And then you… don’t.

Some people deal with this by insisting on writing everything right the first time. That doesn’t work for me. It slows down work too much, especially when you’re making sweeping changes. It also results in a lot of wasted time and effort. Earlier in this entry I talked about making octree-based shadow volumes. Those were hacked into place with all sorts of shenanigans. And since I ended up deleting the entire thing the moment the experiment was complete, it would have been a huge waste of time to pretty up the code as I worked.

For other coders: My usual prototyping crimes are: Public class variables. Function names that no longer describe the function they perform, or do so poorly. No comments. Blocks of disabled code. Excessive nesting. Grossly inefficient code. Overly terse variable names. Passing around huge structs or classes by value instead of by reference. Pretty much the same sins everyone else commits, I’d imagine.
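As an aside, the "huge structs by value" sin is an easy one to illustrate. A hypothetical chunk-sized struct (sizes and names invented for the example) copies its entire payload on every by-value call, while the const-reference version costs nothing extra:

```cpp
#include <cassert>
#include <array>

// A deliberately large struct, like the kind I pass around when prototyping.
struct Chunk {
  std::array<int, 32 * 32 * 32> voxels{};  // ~128 KB of voxel flags
};

// Prototyping sin: the whole ~128 KB array is copied on every call.
int CountSolidByValue(Chunk chunk) {
  int n = 0;
  for (int v : chunk.voxels) n += (v != 0);
  return n;
}

// Cleaned up: same result, no copy.
int CountSolidByRef(const Chunk& chunk) {
  int n = 0;
  for (int v : chunk.voxels) n += (v != 0);
  return n;
}
```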

This is less of a problem if you know exactly what you’re doing. If you’re working from a detailed specification (or if you’re doing something inherently straightforward) then it’s a lot easier to write it all correctly the first time. But when I’m prototyping, I think the best approach is to do LEGO-style building: Dump everything out and make a mess, build what you need, then clean up when you’re done. The only reason this is a problem is that sometimes I run into an interesting distraction just as it comes time to clean up. I make a note to come back later, and then get so involved with the New Thing that I forget about the mess I left for myself.

The condition of the Pixel City code was ghastly. I can’t look at it now without flinching. I’m really sorry to everyone who had to untangle that mess. (I suppose my self-imposed time limit was a major contributor to that.) Project Octant was left in far better shape, and so I’ve been able to recycle bits of it without much effort.

Anyway.

I think marching cubes blobs make for less processing-intensive shadow volumes. So not only does this look nicer and more unique, but I can push the view distance a little higher before it hurts the framerate.

So this is where it leaves us:

And (roughly) the same thing in VR-view:

Looking forward to getting my Oculus Rift, although I have no idea where this project will go or how I’ll write about it once that happens. We’ll figure that out when the time comes. In the meantime, I’m kind of looking for excuses to tinker with this a bit more. I’ve basically met my goals with regards to shaders, but this is kind of fun. On the other hand, I might get back to Good Robot. Based on what I’ve learned, I could rip the guts out of the particle and sprite-drawing engine and replace it with something much, much more efficient.

When I am prototyping software my biggest crimes are having ‘god’ classes and having multiple versions of functions with all but one commented out. I am one of those people who get it working first, then fix it up later. Some of my friends are the opposite and format and plan completely from the start so they never get lost. Both have their advantages, but for single-person stuff I find prototyping is better.

Those shadows look really good on the marching cubes…some might even call it an artistic style.

I’ll be interested to see you applying this new shader knowledge to Good Robot, especially since that will involve moving back into the 2D space which I myself have never used OpenGL shaders with.

I’m also a prototype-ish person, like you and Shamus. I think the biggest reason I don’t plan everything out the first time is that outside of academia, you never know everything about how the code needs to work at the start. So if you try to plan everything out beforehand, you make a lot of assumptions, which probably end up half wrong. You end up with a lot of features that weren’t needed, and other features missing because you didn’t think of them. :)

I try to find a middle ground. Write up a simple design doc, detailing just what I need to do right now (a task such as “read a certain file format”), then write an interface for that task (class methods, function signatures), and figure out how the finished product would be used, and only then start coding.

I don’t need to determine everything at the outset, but I try to make sure that the interface is set before I start coding. That way, I can improve it later with only minimal modifications to the calling code, and since I first determined how it should be used, I can keep coupling to a minimum.

Having the high level design document on hand is also useful to be sure that the design I chose can actually do what I want it to do. Quick prototyping can be useful for small projects, but I’ve found that as things grow more complex it becomes impossible to advance.

Heh. I remember years ago when I learned Java, I got distressed about not having any global variables available like I had in C, so I created a class called “globals” and put ’em all in there. After all, as the saying goes, “Real Programmers can write FORTRAN programs in any language” (google it).

I start with a tiny kernel feature, whatever the most basic thing is, and I get that working. Then I layer on functionality from there, slowly growing the app with each iteration.

The biggest thing I run into is blocks of commented/disabled code scattered throughout my source. This is a bad habit that I picked up from when I started writing code in the mid-90s. You might know this as the pre-ubiquitous-source-control, pre-visual-editor days; or simply “the dark times”. Back then, for someone like myself that always likes to leave the product in a working condition as often as possible, refactoring a method was scary without having an alternate working version to fall back on.

Since I already operate in an iterative work style, the rise of Agile methodologies in the enterprise space has been a really nice change.

At some point, the light calculation has to cash out to a pixel shader, right? So you have to draw an area based on where the light is, but not include the shadows, then alpha-blend it with the original texture. Do you have to work out the intersection of the shadow volume and the surface polygons? If so, how do you do that efficiently? Do GPUs understand volumes or do you have to use the CPU? Could you draw a new polygon based on where the sizes of the shadow intersect the surface?

If I’m grasping correctly, then you could do this by, for example, setting the normal map to be perpendicular to the light everywhere in the shadow. (Inefficient, but should work.)

Kindly excuse me if I’m being dense. I consider ‘read [this] again’ to be a possibly reasonable response in this case.

Not that I’ve ever implemented stencil shadows, but there’s no need for blending or normal maps. You render the bits in shadow with (lighting – some value), then the bits not in shadow with lighting, then stick the two together and you have a completed image. “Stencil” refers to the fact that you treat the shadows as if they’re a “cut-out” from the main image.

I think you’re used to using some sort of graphics engine, hence the thing about normal map values; stencil shadowing is happening at a more fundamental level and doesn’t require you to even have a lighting model to function.
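For the curious, the per-pixel stencil arithmetic can be sketched in plain C++ (this is the classic depth-pass variant, with names invented for the example; real implementations do this on the GPU via the stencil buffer):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Depth-pass stencil counting for one pixel. Each shadow-volume face that
// covers this pixel and passes the depth test bumps the counter: +1 for
// front faces, -1 for back faces. A nonzero count means the scene geometry
// at this pixel sits inside at least one volume, i.e. it's in shadow.
bool InShadow(const std::vector<std::pair<bool, float>>& volume_faces,
              float scene_depth) {
  int count = 0;
  for (const auto& face : volume_faces) {
    bool front_facing = face.first;
    float face_depth = face.second;
    if (face_depth < scene_depth)  // face is in front of the geometry
      count += front_facing ? 1 : -1;
  }
  return count != 0;
}
```

When the geometry is inside a volume, the volume's front face passes the depth test but its back face doesn't, so the counts don't cancel; that's the "cut-out" the stencil buffer records.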

I think we’re kind of at cross purposes, but I agree that procedural normal maps certainly help with detailing this sort of scene. However:

a) lighting a surface by changing the values in its normal map is putting the cart before the horse, really – you can just call it “in shadow” and subtract some value in the shader – which would allow you to use both techniques simultaneously anyway.
b) getting normal maps to cast shadows is actually pretty easy if you’re lighting them in tangent space; all you need is some idea of height values and a direction vector to your light; although at that point you might as well just do steep parallax mapping instead.
c) both of those techniques are independent of an implementation of stencil shadows.

Looking at the last image with my eyes crossed, it looks really great. You can see the individual grass halms (is this what they’re called in English?) much more clearly. It also seems to be higher resolution, although it obviously isn’t. This is making me excited for the Oculus Rift.

What framerate can you currently achieve? And at what resolution? Because that will determine whether it will look really great with the Oculus Rift, or make you want to vomit.

I write and maintain software used internally by a business for their business-y needs. Personal experience has been that I can either clean up things right now or … never. So, I usually do that sort of stuff as soon as I can, lest it snowball into an unmaintainable mess. (It helps a lot to be working in C# and using Resharper, which means syntactic support for, say, properties and a lot of easily used context actions for renaming things, etc.)

On a personal project, I’d probably do the same thing out of habit, but I can’t see that it matters overmuch.

Present participle, if I’m not mistaken; used whenever a verb root is used in the role of an adjective. Usually the same spelling as the gerund, which would be if it was used as a noun. In either case, it’s verb+ing.

So while it’s definitely an adjective in the phrase “marching cubes”, when one changes the form to make a method name, one will find oneself with unexpected verbs when one didn’t want them.

Declension is a cool term, but the wrong one–you’re thinking of conjugation. Declensions are things that happen to nouns in Latin and Old English and stuff. There are none in English.
Basically, in English you use word order and little extra words to figure out what’s the subject of the verb, what’s the object, what’s a possession (“‘Her’ guitar”) and so on. In Old English they had little endings you stuck on the nouns that told you what the subject of the verb was etc., no matter where in the sentence it happened to be sitting.
The most fun explanation of declensions would be here: https://www.youtube.com/watch?v=IIAdHEwiAy8
(Life of Brian grammar lesson scene)
But yeah, declensions are to nouns what conjugation is to verbs. Which is to say, a pain in the rear and I’m glad we got rid of them.

I agree that using marching cubes and similar beveling algorithms can produce some pleasant effects, but they can make things a bit confusing when it comes to minecraft clones. I remember playing Planet Explorers and Starforge and being really frustrated while digging because I wasn’t able to remove the volumes from the dirt that I was wanting or expecting.

The letterman jacket person will be right with you when they’re done flushing my head! :D

One could say the same about ‘harmony’ – it (strictly) applies to music, but has a secondary meaning which works by analogy, as in ‘live in harmony.’ ‘Consonance’ has a similar secondary meaning, I’d say, such that the two secondaries are effectively synonymous (and I went with the latter for the weak riff on ‘dissonance’).

So, we’re both right, I think! Yay. (Although, only one of us has their head down a toilet. Boo.)

Back when I wrote code (so back in school), I’d comment as I wrote. I’d assume that I would forget what I was doing the next day. Or after lunch. Or after going to the bathroom. Or writing a different bit of code.

So basically I documented under the assumption that I had the memory of a goldfish.

Which works – I’ve pulled out code from almost a decade ago, written in languages I’ve entirely forgotten, and still managed to follow what was going on.

This is actually the advice that a lot of places on the internet give. It’s the motto I follow too. Every once in a while, I look at something from last week and am reminded why I always write lots of comments. :)

It is obviously “wrong”, but from a certain distance away you will not notice it.
This allows you to “cheat” by saving cycles overall and up the detail close up (since you now have cycles to spare).

Depending on the threshold distance you could have a threshold “margin” where the high LOD and low LOD are blended (sort of fading the high LOD shadow in) this should avoid shadow LOD “popping”.
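A blend like that could be as simple as a linear fade between two distances. A minimal C++ sketch, with made-up parameter names:

```cpp
#include <cassert>

// Weight of the high-LOD shadow: 1 inside near_dist, 0 beyond far_dist,
// and a linear fade across the margin between them to avoid popping.
float HighLodWeight(float dist, float near_dist, float far_dist) {
  if (dist <= near_dist) return 1.0f;
  if (dist >= far_dist) return 0.0f;
  return 1.0f - (dist - near_dist) / (far_dist - near_dist);
}
```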

I have no idea if Shadow LOD is the correct term here. (I surely can’t be the first to think of this idea as it seemed rather obvious to me at least).

If objects LOD as well then the shadow LOD should be able to reuse that data, and if possible object LOD and shadow LOD should not have the exact same threshold. (to minimize any “popping” effects further, I also suspect humans notice shadows LOD before object LOD anyway).

What does Shadow LOD mean in practice though?
Well in image 5 we see 8 shadows cast (16 polygons?)
Using my Shadow LOD idea it would be just 1 shadow (2 polygons?)
Which means a potential performance increase of 8 times (800%) which is nothing to sneeze at.

This means squarish things cast square shadows from afar, rectangular things cast rectangular shadows, circular cast circular, cylindrical cast cylindrical, and triangular… well, you get the point: the shadow LODs are primitive-based.

I’d actually like to know if Shamus has LoD at all. Like, if he used a version with less cubes for the farther-away stuff, then he’d get the cheaper shadows for free, with the cost of generating the lower-LoD cubes.

If you use a different mesh to create shadows, then you need to spend CPU cycles and memory to juggle the two (or more) different meshes. This is generally not something you want to do unless the GPU savings are pretty big. The GPU savings in this case come mostly from generating the volumes in the geometry shader, since rendering the volumes to the stencil buffer is really lightweight. The actual savings are very much dependent on the meshes in question and how much they can be simplified, and this value is likely to be highly variable, not just between different applications, but also between different situations in a game, for example.

Also, the less detailed meshes need to be created, either by the CPU or an artist, which can be anywhere from no extra cost (models with different detail levels are often created anyway) to an unacceptable resource hog.

Also, when you started talking about octrees I imagined you were just going to reduce the vertex and face count but then do the same thing you’d be doing if you didn’t have them, i.e. still just one shadow volume for the whole thing, but now with fewer vertices to check, and (thanks to the octree stuff) easier-to-find corners …

So cool! I notice that the “bricks” are not beveled, but the “dirt” is. Can you get something half-way? Is it possible to make… half beveled cubes? Or, variably beveled cubes?

It could work out to be a really valuable visual game shorthand to have the “beveled amount” of each cube represent some internal state. Like, how much the terrain has been eroded, or chipped away by the player’s pick, or simply how hard the material is in the first place.

Can you set the bevel to change dynamically? I think it would look super awesome to have music playing, and the bevel of all the cubes “pulsing” slightly in time to the beat. So many neat things you could do with this!

But, yeah, you really should go back and fix up Good Robot. This code will keep. Like you said, it’s in really good shape now, so you should be able to pick it up again later without much fuss.

I really do appreciate the side-by-side VR views, because I learned a trick whereby you can cross your eyes by the right amount and bring two VR images like that together to make an image that really looks 3D.

I was wondering, does the Oculus Rift have a way of passing along to the program the “focus depth” (probably wrong term) of the user’s eye? I mean, where the user is focusing. Something really annoying in 3D movies, for example, is that they do “depth of field” effects, which blur everything but what the director wants you to focus on.

Games do something similar, but apply the blur to everything not centered on the screen (Skyrim’s default is pretty bad about it).

If I have the illusion of a 3d space, however, I would expect to be able to focus on elements at different “depths”, even without moving my head, the same way I can do in a real 3d environment. If I can’t, I imagine it would throw me off. Especially if the game tries to “help” by applying blur on where it thinks I shouldn’t be focusing.

Pupil tracking is a developing technology. They’re looking for ways to use it with the Oculus, but it’s not ready yet. In the development guide, they strongly advise against using a lot of the traditional full-screen effects like motion blur, depth of field, and bloom. These don’t look as impressive in the Oculus, and lead to disorientation or eye strain.

But as far as I can tell: No, you wouldn’t get blur effects from focusing your own eyes. One of the reasons that people get eyestrain with 3D (any kind of 3D) is that while your eyes are “crossing” to converge on different things, they’re always focused on the same depth. Sure, you might look at the mountains in the distance or the object passing right in front of your face, but your eyes are still focused on the movie screen that’s always a fixed distance away. It’s postulated that people who “can’t see” 3D are people who can’t do this. Their eyes refuse to converge at one depth while focusing on another, so the whole thing falls apart for them.

In the case of the rift, that means that everything should always be in crisp focus. (Or at least, as crisp as possible.)

I have seen an IMAX 3D movie once (years ago) that used DoF, and did it fairly well. The problem, though:

1: It works pretty well as long as the things that are in focus are also the focus of the viewers’ attention, so their eyes are looking there anyway. Bonus points if that object is actually (virtually) in the screen plane, because then eye parallax and focus all come together.

2: In general, the least straining 3D view is always when the focus of the viewer’s attention is at the same distance the screen appears to be. You can’t change the parallax setting during a scene (at least not much and not quickly), so it would seem that DoF would help direct the viewer’s attention, but it actually makes things worse: if the image is blurred, the viewer has a much harder time figuring out exactly how cross-eyed to look at the screen to align the two pictures. So it’s a double penalty on anyone in the audience whose eyes stray from where the director intended them to go, unless it’s done very lightly, or you can be reasonably sure the viewer will be focused on whatever is in focus in the scene as well.

The Hobbit, for example, has some very slight DoF, but really quite little, which contributes to the audience being able to identify some of the background scenery as artificial, when in traditional 2D movies it would be too blurred for that.

The reason you see things you’re not focusing on blurry in real life is because your eye is a camera, and it has to be focused for the object’s image to display crisply on the retina. When you focus past something or ahead of something, the image isn’t formed right.

With the Oculus, everything is drawn at the same physical distance from your eye. Even though the simulated parallax gives your brain the illusion of depth, it doesn’t cheat your eyes.

In fact, if you tried to focus your eyes where your brain thinks the image is in virtual space, everything would turn blurry because you’d be focusing past the surface where the image is actually drawn.

Almost irrelevant note, but fits here better than most places because this is a programming post:
I know Shamus is into procedural content in games, and is interested in cases where this is explored more than it usually seems to be. So I was wondering if he’s heard of the game Limit Theory: http://ltheory.com/
It’s a science fiction game that seems to be somewhere between a sandbox thing and a 4X, where one of the major things is that all the content is generated procedurally and is effectively infinite in potential scope. To quote the website,
“All of the content that populates each universe is generated by the computer, using a technique known as procedural content generation. This means that next time you start a game, not only will the universe be different – but all things therein as well, including factions, goods, weapons, ships, mission opportunities, stations, planets, AI pilots, and more. Each universe is a totally unique experience without end.”

I don’t think the game is all the way finished. But what’s there so far is pretty, and the guy does these monthly-or-so videos where he shows stuff and talks about his progress writing the game. It just seemed like the kind of thing Shamus might have an interest in.

I know this is pretty old, now, and everyone has moved on, but I had to throw this out there because I’m dumb or something. I think a better name for the marching cubes than blobs would be marchmallows.