Just a quick update... I've mostly been busy with work, but in the free hours I've managed to get a lot done, even if visually it may not seem like it.

I rebuilt almost the entire engine. It is much cleaner, with less code and better organization. No copy-paste code: the same routine handles all chunk generation and scheduling. Multithreading works a lot better now - rather than redundantly processing chunk edges, it synchronizes the chunk processing. Not coincidentally, I seem to have completely eliminated the crashes I encountered every so often. (Word to the wise: do not try to use C++ vectors inside a thread, even if each one is exclusive to its own thread.) I also implemented better mipmap generation and handling. Overall, rendering is 2-10 times faster depending on settings, and chunk generation is (as promised!) over 100 times faster, although there is still much room for improvement (particularly in choosing which visible chunks to render). But for now, I am very happy with performance and probably won't optimize much more for the time being.
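The threading fix boils down to one pattern: workers pull from a single synchronized job queue instead of each mutating its own container. Here is a minimal sketch of that idea; `ChunkJob`, `ChunkQueue`, and `drainWithThreads` are names I made up for illustration, not the engine's actual API.

```cpp
#include <atomic>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// One chunk's worth of work, identified by its grid coordinates.
struct ChunkJob { int cx, cy, cz; };

// A mutex-guarded queue that every worker thread pulls from.
class ChunkQueue {
public:
    void push(ChunkJob job) {
        std::lock_guard<std::mutex> lock(mutex_);
        jobs_.push(job);
    }
    // Returns false when no work is left.
    bool pop(ChunkJob& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (jobs_.empty()) return false;
        out = jobs_.front();
        jobs_.pop();
        return true;
    }
private:
    std::mutex mutex_;
    std::queue<ChunkJob> jobs_;
};

// Spin up N workers that drain the queue; returns how many jobs ran.
int drainWithThreads(ChunkQueue& q, int workers) {
    std::atomic<int> processed{0};
    std::vector<std::thread> pool;
    for (int i = 0; i < workers; ++i)
        pool.emplace_back([&] {
            ChunkJob job;
            while (q.pop(job)) processed.fetch_add(1);
        });
    for (auto& t : pool) t.join();
    return processed.load();
}
```

The nice side effect is that chunk ordering and scheduling live in one place, instead of being smeared across per-thread containers.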

I wrote my own GUI and text renderer from the ground up. It can render without shader state changes, meaning it can all be compiled into a single display list. It is generated almost entirely on the GPU (though text is read from a bitmap). It intentionally uses uniform spacing and character widths (all part of the old-school look :D). Still in the early stages, but here is a screenshot (characters have backgrounds just for the hell of it, but they can easily be disabled).
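For the curious, uniform glyph widths are what make the bitmap lookup trivial. A sketch, assuming a 16x16 ASCII glyph grid in a square texture atlas (the engine's real atlas layout may differ):

```cpp
// Texture-coordinate rectangle for one glyph in a bitmap-font atlas.
struct UVRect { float u0, v0, u1, v1; };

// Map an ASCII code to its cell in an assumed 16x16 glyph grid.
// With uniform widths there is no kerning table, no per-glyph metrics -
// just integer divide and modulo.
UVRect glyphUV(unsigned char c) {
    const int cols = 16, rows = 16;
    const float w = 1.0f / cols, h = 1.0f / rows;
    int col = c % cols;
    int row = c / cols;
    return { col * w, row * h, col * w + w, row * h + h };
}
```

Because the mapping is pure arithmetic, the quad positions and UVs can be emitted once into a display list with no state changes in between.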

Overall, I am very happy with the progress made; the engine feels much cleaner, faster, and more stable.

Here are two videos I uploaded. Keep in mind it's still very unoptimized and there is a lot of room for improvement. It is running at over 60 fps, but input is not threaded or smoothed correctly, so it may appear a bit "jerky." Everything is being generated on the fly (very slow, but I strongly believe I can make it load over 100 times faster with optimization). Also, it can be (pre)generated and cached to disk for near-instant load times. Here you go (also, direct links here and here):

Just a quick update, since I got a lot of questions about texturing - any voxel can have its own custom material properties. Here is a very simple example I whipped up, with a kind of crude grass and dirt texture (dirt is really just a solid color, dithered by voxel towards where the grass grows). In my opinion, this is very crude and was hastily put together - I promise future versions will look a lot better :). I am also pleased with the initial reactions - my visitors have skyrocketed to over a thousand in one day (not a lot, but that is up from 1-10 visitors per day ;) ). I included one shot with a smooth simplex-noise (and crappy looking) terrain and another with a sharper (and, I think, better looking) Voronoi-based terrain.
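The dirt-to-grass dither can be as simple as a coordinate hash compared against a depth-based threshold. A hypothetical sketch - the hash and the falloff here are my own stand-ins, not what the engine actually does:

```cpp
#include <cstdint>

// Cheap deterministic 3D coordinate hash (stand-in for the engine's noise).
uint32_t hash3(int x, int y, int z) {
    uint32_t h = (uint32_t)x * 73856093u ^ (uint32_t)y * 19349663u
               ^ (uint32_t)z * 83492791u;
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return h;
}

// Decide grass vs. dirt per voxel. depthBelowGrass: 0 = at the surface,
// larger = deeper. Inside the dither band, the chance of grass falls off
// linearly with depth, jittered per voxel by the hash.
bool isGrassVoxel(int x, int y, int z, int depthBelowGrass,
                  int ditherBand = 4) {
    if (depthBelowGrass <= 0) return true;           // at/above the grass line
    if (depthBelowGrass >= ditherBand) return false; // solid dirt below
    uint32_t r = hash3(x, y, z) % (uint32_t)ditherBand;
    return (int)r >= depthBelowGrass;
}
```

Because the decision depends only on coordinates, the dither pattern is stable: revisiting a chunk regenerates the exact same speckling.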

Improved the shading a bit and created an interesting voro-noise texture that looks surprisingly like my ray-casted stuff, even though I started more or less from scratch without referencing the old code. Believe it or not, this runs in real time on a (good) mobile Radeon chip, in the range of 60-70 fps (it can easily go past 200 if settings and view distance are tweaked). Performance is probably 2-3x better on a real (discrete) graphics card. It is still horribly unoptimized but runs quite well regardless. I could do some 3D mipmapping to really improve performance.
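For reference, the core of a Voronoi/cellular noise like the one behind this texture is just "distance to the nearest jittered feature point." A minimal 2D sketch under that assumption, with a stand-in hash (the engine's actual variant is fancier):

```cpp
#include <cmath>
#include <cstdint>

// Deterministic per-cell hash (stand-in for the engine's own).
static uint32_t cellHash(int x, int y, uint32_t seed) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u + seed;
    h = (h ^ (h >> 13)) * 1274126177u;
    return h ^ (h >> 16);
}

// Voronoi (cellular) noise: place one pseudo-random feature point per
// integer grid cell, return the distance from (px, py) to the nearest
// point among the 3x3 neighboring cells.
float voronoi2(float px, float py, uint32_t seed = 0) {
    int cx = (int)std::floor(px), cy = (int)std::floor(py);
    float best = 1e9f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int gx = cx + dx, gy = cy + dy;
            uint32_t h = cellHash(gx, gy, seed);
            // Feature point jittered inside its cell.
            float fx = gx + (h & 0xffffu) / 65535.0f;
            float fy = gy + ((h >> 16) & 0xffffu) / 65535.0f;
            float d = (px - fx) * (px - fx) + (py - fy) * (py - fy);
            if (d < best) best = d;
        }
    return std::sqrt(best);
}
```

The hard cell edges are what give the sharper, faceted terrain look compared to smooth simplex noise.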

So, I have decided to use polygons (!), but only to render the quasi-voxels - it's just easier and faster than messing with ray-traced volumes. Also dumped Qt -- too bloated for my needs; just using barebones (cross-platform) libraries. Here is a shot of a landscape and another two shots with just lighting (click for enlarged version). This uses Simplex Noise, Voronoi Noise, and just Plain Old Noise (TM). It also features ambient occlusion and shadows, on a per-voxel-thing basis, and loads a virtually infinite landscape over time. Last but not least, I just added multithreading - it maxes out every core, and performance skyrocketed after that (partly because processing chunks no longer blocks the main thread). I wrote it in a few days. Uh, yeah, it looks like a crappy version of Minecraft - you want to fight about it?
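The per-voxel ambient occlusion can be sketched with the classic flat-shaded voxel AO rule: darken each face vertex based on the three neighboring voxels that touch its corner. The exact weights below are illustrative, not necessarily what my engine uses:

```cpp
// Ambient-occlusion level for one vertex of a visible voxel face.
// side1/side2 are the two edge-adjacent neighbors at that corner,
// corner is the diagonal neighbor; true = solid voxel present.
// Returns 0 (fully occluded) .. 3 (fully open).
int vertexAO(bool side1, bool side2, bool corner) {
    if (side1 && side2) return 0; // corner is fully pinched off
    return 3 - ((side1 ? 1 : 0) + (side2 ? 1 : 0) + (corner ? 1 : 0));
}
```

Each face vertex then gets its brightness scaled by this value, which is what produces the soft darkening in the creases between voxels, at essentially zero runtime cost.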

For the past few years I've been jumping between projects, but mostly I've been distracted with surviving. I have been failing for so long that I think I may have learned, a little, how not to do it. My goal is to actually get a product out the door - a game or an engine (or both). I want to build it with volumetric/solid modeling and be able to generate pixel-art-esque imagery. Starting is the easy part, as they say, but I really want to finish something. What I have learned over the years is all common knowledge, but I am stubborn and had to learn it the hard way:

1) Premature optimization is death. An unfinished product that runs twice as fast is worth nothing.
2) Non-prioritization is death. You will want to work on the most interesting-to-build, trivial features, but you should not.
3) Scrapping and rebuilding is death. You think you have learned so much, but the truth is you never stop learning.
4) Boredom is death. If a project is too mundane, it will die.
5) Too many features is death. Do a few things well, then expand from there.

I have worked on a number of platforms across Windows, Mac, and Linux. Windows was (by far) the easiest OS to program on (especially given the availability of tools and existing code), but it is also the hardest to develop cross-platform applications with. I am also ditching my PCs, I think, in favor of my newer Mac (I am not a huge fan of Macs, but I do most of my contracting work on them). So, after spending literally months of my little free time trying to settle on a platform, I am using Qt with OpenGL 2.x (3.x support is quirky on Macs, especially with Qt).

Having spent a fairly long time (10 years) working with polygons, I have decided that I hate them. I am using voxels because they are so much easier to do procedural generation with: you have a finite level of detail, which means you can generate objects implicitly based on a given coordinate. I once wrote an engine with polygon-based Voronoi generation; it took a month of hard work. By contrast, what you see below was generated in a few hours of work (after spending a countless amount of time getting the shaders, texture loading, etc. working under Qt). I am using volume rendering, although the underlying data can be rendered with any method (e.g. optimized into sparse voxel octrees).
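"Generate objects implicitly based on a given coordinate" just means the world is a pure function of position and seed, so any chunk can be built independently, in any order, and always comes out the same. A toy sketch of the idea - the hash here stands in for the real simplex/Voronoi mix:

```cpp
#include <cstdint>

// Deterministic coordinate hash (illustrative stand-in for real noise).
static uint32_t coordHash(int x, int y, int z, uint32_t seed) {
    uint32_t h = seed;
    h ^= (uint32_t)x * 0x8da6b343u;
    h ^= (uint32_t)y * 0xd8163841u;
    h ^= (uint32_t)z * 0xcb1ab31fu;
    h = (h ^ (h >> 13)) * 0x5bd1e995u;
    return h ^ (h >> 15);
}

// 1 = solid, 0 = air. A pure function of (x, y, z, seed): the same
// inputs always yield the same voxel, so no chunk depends on any other.
int voxelAt(int x, int y, int z, uint32_t seed) {
    // A rolling ground height derived purely from the coordinates.
    int ground = 8 + (int)(coordHash(x / 4, 0, z / 4, seed) % 5);
    return y <= ground ? 1 : 0;
}
```

This is exactly why voxels and procedural generation get along: there is no global mesh state to stitch, just a function you can evaluate anywhere.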

So here is a screenshot: a 128x128x128 Voronoi grid with my own special magic-sauce algorithms. I have also implemented shadows and normals (normals not shown), but the sampling methods still need improvement to get rid of stair-stepping and rotation artifacts.