I’ve been working on refactoring the terrain and “location” code for the XL Engine. There were several deficiencies in the original DaggerXL system that I was seeking to rectify, which I’ll talk about below. Everything I show in this post was rendered with the software renderer, running in 8-bit mode. As I’ve talked about before, GPU support has also been expanded – Shader Model 2.0 support will no longer be required. The new requirement for GPU rendering is OpenGL fixed function support.

The sky in Daggerfall is basically a flat plane that is scrolled as the camera is rotated, very much like the backgrounds in pseudo-3D games such as the original Mario Kart. The result is that the scrolling of the background doesn’t match the rotation in the foreground. In vanilla Daggerfall this looked strange, but when running at smoother framerates and higher resolutions it looks rather disturbing. The XL Engine now uses a “tapered cylinder” to map the 2D sky textures onto 3D geometry in the scene. The result is proper scrolling and perspective while maintaining the appearance of “infinite” distance.
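To make the idea concrete, here is a minimal sketch of how a 2D sky strip could be wrapped onto a tapered cylinder around the camera – the function, parameters and vertex layout are my own illustration, not the engine’s actual code:

```cpp
#include <cmath>
#include <vector>

// Wrap the 2D sky texture around the camera as a ring of quads whose top
// edge pulls inward (the "taper"), so camera rotation scrolls the texture
// with correct perspective while the sky still reads as infinitely distant.
struct SkyVertex { float x, y, z, u, v; };

std::vector<SkyVertex> buildSkyRing(int segments, float radius,
                                    float height, float topTaper)
{
    std::vector<SkyVertex> verts;
    for (int i = 0; i <= segments; ++i) {
        float t = float(i) / float(segments);   // 0..1 around the circle
        float a = t * 6.28318530718f;           // angle in radians
        float c = std::cos(a), s = std::sin(a);
        // Bottom edge: full radius, v = 1 (horizon end of the texture).
        verts.push_back({ c * radius, 0.0f, s * radius, t, 1.0f });
        // Top edge: tapered inward, v = 0 (zenith end of the texture).
        float r = radius * topTaper;
        verts.push_back({ c * r, height, s * r, t, 0.0f });
    }
    return verts; // render as a triangle strip centered on the camera
}
```

Because the ring is always centered on the camera, only rotation (not translation) affects the sky, which is exactly the “infinite distance” behavior a scrolling background is meant to fake.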

One visual problem in DaggerXL was the “rounded cliffs” that showed up in the terrain, especially around locations. The terrain generation has been improved; changes in elevation in the base heightmap are visualized in a much nicer way now. In addition, medium-detail data has been added/generated, resulting in more variety. Terrain textures are handled properly now as well, without the seams and hard lines that were in DaggerXL. There are still occasional seams, when a combination of terrain types isn’t supported by the art, but these are rare and were present in vanilla Daggerfall as well. Finally, coastlines look much better now than in the last release of DaggerXL.

In DaggerXL, transitions between cells caused very noticeable hiccups as new terrain tiles were generated and locations loaded. Due to the terrain refactoring, these pauses are now very rarely long enough to be noticeable. Only the parts of the terrain that need updating are regenerated; the area that stays the same is simply “moved” into the correct place for the new view. This means that only a fraction of the visible terrain needs to be touched, instead of all of it like before.
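The “move what survives, rebuild only the edge” idea can be sketched in a few lines – this is a toy model of the technique, not the engine’s implementation:

```cpp
#include <vector>

// The visible terrain is an n x n window of tiles over the world's cell
// grid. When the view shifts by (dx, dy) cells, tiles that are still
// visible are just relocated; only the newly exposed strip is rebuilt.
struct Tile { int cellX, cellY; };

// Returns how many tiles had to be regenerated; the rest were moved as-is.
int shiftTerrainWindow(std::vector<Tile>& grid, int n, int dx, int dy)
{
    std::vector<Tile> next(n * n);
    int regen = 0;
    for (int y = 0; y < n; ++y) {
        for (int x = 0; x < n; ++x) {
            int sx = x + dx, sy = y + dy;   // source slot in the old window
            if (sx >= 0 && sx < n && sy >= 0 && sy < n) {
                next[y * n + x] = grid[sy * n + sx];  // reuse: just "moved"
            } else {
                // Newly exposed cell: this is where heightmap generation
                // would actually run. grid[0] holds the old window origin.
                next[y * n + x] = { grid[0].cellX + dx + x,
                                    grid[0].cellY + dy + y };
                ++regen;
            }
        }
    }
    grid.swap(next);
    return regen;
}
```

For a one-cell step, only n of the n² tiles are rebuilt, which is why the per-transition cost drops from “regenerate everything” to a small, usually unnoticeable fraction.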

Object positioning is handled using two sets of coordinates – “local world space” and “global.” The global coordinates identify the “cell” where the object exists in the world. The “local world space” coordinates are the object’s fine-grained position within that cell. To position objects relative to the camera, they are converted into “relative” coordinates – basically centered around the cell that the player or camera currently occupies. In the previous release, objects were responsible for updating these relative coordinates in their game logic code. This was obviously a very bad idea – something easy to miss or fail to update. Now these relative coordinates are no longer explicitly stored; only the local world space and global coordinates are actually stored and handled by the objects. The conversion to relative coordinates is done, as needed, at a low level. All code goes through this path, freeing the game logic from having to worry about it at all. This fixes a variety of issues, including the “phantom collisions” and other collision-related issues.
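A minimal model of this two-level scheme looks something like the following – the names and cell size here are illustrative assumptions, not the engine’s actual values:

```cpp
#include <cstdint>

static const float kCellSize = 4096.0f;   // assumed size of one cell

struct WorldPos {
    int32_t cellX, cellY;   // "global": which cell in the world
    float   localX, localY; // "local world space": position within the cell
};

// Derive "relative" coordinates centered on the cell the camera occupies.
// Because this runs at a low level whenever it's needed, game logic never
// stores relative positions and can never forget to update them.
void toRelative(const WorldPos& obj, int32_t camCellX, int32_t camCellY,
                float& outX, float& outY)
{
    outX = float(obj.cellX - camCellX) * kCellSize + obj.localX;
    outY = float(obj.cellY - camCellY) * kCellSize + obj.localY;
}
```

Keeping the large part of the position in integers and only the fine-grained part in floats also avoids the precision loss you would get from storing one huge floating-point world coordinate.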

Foliage updates and rendering speed have also improved, resulting in much better exterior rendering performance even on low-end hardware or when using software rendering. The flats are also scaled properly, fixing the scaling issues that the previous release had with several flats.

This post only scratches the surface; a lot of work has been completed since the last update. Below I show a movie of some of this in action.

For the movie, the in-game UI has been disabled (just like in all the screenshots) and enemies are not loaded. However, the release will have all the UI elements, monsters and so forth that the previous release had. Animated sprites have not been fully reincorporated yet either, but they will be in the final release too.

There are more features, which I didn’t talk about above, that will also be in the release. This release will be more complete in terms of gameplay as well as visuals. Features include waves in the ocean (as shown in the video), NPCs that walk around the locations, and an improved UI – fixing a variety of issues such as click-through and mouse offset, and implementing missing features such as unequipping items, getting item info, scroll bars and more. So this release is not just about the refactoring and engine merger; it is about improving and fixing existing features, improving performance, improving hardware support – with fixed function support and the software renderer – paving the way for cross-platform support, adding networking support (though no actual multiplayer in this release), implementing new features for both gameplay and visuals, and merging all projects under one engine.

You mentioned in the past that “Alpha will feature pretty complete dungeon gameplay, character advancement and itemization.” Does that still hold true, or do you plan on holding back on the gameplay features for now since you’ve been held back by the merger and developing the software renderer?

Bravo.
Likely it’s too late by now, but anyway: why not use a quadtree instead of the global/local coordinates? To be sure, you could then efficiently reject large-scale portions of the terrain outside the frustum without much difficulty or work.
What approach to texture mapping do you use?
Will it be possible to play the nudity-free version of the game if one so wishes?
Thank you very much!

Rejecting terrain cells in a flat grid is pretty efficient as-is. Culling terrain chunks is not a bottleneck, so going with a straightforward grid – like the original (but expanded) – is sufficient. Culling terrain cells works the same way as culling geometry chunks, which all lie in a precise grid in the original data. That said, for LODs it’s really more of a multi-resolution grid – it winds up being similar to a quadtree, but at a larger scale.
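To illustrate why the flat grid is already cheap to cull: each cell’s bounds are implicit in its grid coordinates, so rejection is a couple of plane tests per cell. This is a generic top-down sketch of that idea under my own assumptions, not the engine’s code:

```cpp
#include <cmath>
#include <vector>

// 2D (top-down) frustum plane: a point p is on the inside when
// n·p + d >= 0. A handful of these describe the view frustum's footprint.
struct Plane { float nx, ny, d; };

// Conservative visibility test for one grid cell's bounding square.
bool cellVisible(int cx, int cy, float cellSize,
                 const std::vector<Plane>& frustum)
{
    // Center and half-extent of the cell, derived from grid coords alone.
    float x = (cx + 0.5f) * cellSize, y = (cy + 0.5f) * cellSize;
    float r = 0.5f * cellSize;
    for (const Plane& p : frustum) {
        // Distance the square can reach along the plane normal.
        float reach = r * (std::fabs(p.nx) + std::fabs(p.ny));
        if (p.nx * x + p.ny * y + p.d < -reach)
            return false;   // entirely outside this plane
    }
    return true;            // inside or intersecting all planes
}
```

A quadtree would let you reject whole blocks of cells at once, but with cell counts this small the per-cell test is already far from being a bottleneck, which matches the reasoning above.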

For texture mapping I’m going to assume you’re talking about terrain texturing. I use the original textures as-is. For the hardware renderer they are put into an atlas but the software renderer takes a more interesting approach. I’ll talk about that in a future post.

As for nudity, the original game supported a feature to turn that off and I will make sure that support works properly in DaggerXL as well. If there are areas where that didn’t work in the original, feel free to point them out and I’ll make sure they work properly.

Thanks for the reply!
By “the approach to texture mapping”, I meant the actual texture-mapping algorithm. There are many of them, Quake’s FPU divide overlapped with lerping 16 or so pixels being probably the most well-known technique (and a certain quad subdivision-based one being less so). The divide, however, can be eliminated without sacrificing correct perspective distortion. Also, how do you perform hidden-surface removal?
Our gratitude for the software renderer!

The algorithm is similar to that used in Quake or the original Daggerfall. Spans are usually 8-32 pixels in size, based on resolution. Things like per-pixel lighting are accelerated by generating accurate lighting values at the span edges and doing linear interpolation in between. So while there are other texturing techniques, I went with the “classic” method in order to accelerate other calculations as well. Also, triangles that are small on screen go through a simplified pure-affine texture mapper, since the visual difference is very hard to notice (unless switching back and forth – which I did to test).
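For readers unfamiliar with the classic span technique: u/z, v/z and 1/z interpolate linearly in screen space, so the expensive perspective divide is only done at span boundaries, with an affine lerp in between. Here is a toy sketch of one scanline (parameter names and the output buffers are my own illustration, not the renderer’s actual interface):

```cpp
#include <vector>

// Walk one scanline of 'width' pixels in spans of 'spanLen' pixels,
// producing perspective-correct-at-span-edges texture coordinates.
void textureSpan(std::vector<float>& outU, std::vector<float>& outV,
                 int width, int spanLen,
                 float uoz, float voz, float ooz,   // left edge: u/z, v/z, 1/z
                 float duoz, float dvoz, float dooz) // per-pixel gradients
{
    float z0 = 1.0f / ooz;              // divide at the left edge only
    float u0 = uoz * z0, v0 = voz * z0;
    for (int x = 0; x < width; x += spanLen) {
        int len = (x + spanLen <= width) ? spanLen : (width - x);
        // One divide at the far edge of the span...
        float ooz1 = ooz + dooz * len;
        float z1 = 1.0f / ooz1;
        float u1 = (uoz + duoz * len) * z1;
        float v1 = (voz + dvoz * len) * z1;
        // ...then a pure-affine lerp across it; the error over 8-32 pixels
        // is small enough to be hard to notice.
        for (int i = 0; i < len; ++i) {
            float t = float(i) / float(len);
            outU.push_back(u0 + (u1 - u0) * t);
            outV.push_back(v0 + (v1 - v0) * t);
        }
        u0 = u1; v0 = v1;
        uoz += duoz * len; voz += dvoz * len; ooz = ooz1;
    }
}
```

The same span edges are natural places to evaluate anything else that is expensive per-pixel (such as lighting) and lerp in between, which is the “accelerate other calculations” point above.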

For hidden surface removal I use a simple 16-bit Z-buffer, again similar to vanilla Daggerfall. The main difference is that the z values are remapped to a more linear range, so that the Z-buffer works well over large z ranges (over 8,000 units). So the hidden surface removal is more precise than vanilla Daggerfall’s and supports much larger ranges while using the same amount of memory (well, at 320×200 – obviously more memory at higher resolutions).
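The motivation for the remap: storing raw 1/z in 16 bits piles nearly all of the precision up close to the camera, while a value proportional to linear z spreads it evenly over the full range. A simple remap of that kind might look like this – the formula is my own illustration, not the engine’s exact mapping:

```cpp
#include <cstdint>

// Map a view-space depth to a 16-bit Z-buffer value, linear in z so that
// precision is uniform across the whole near..far range (8000+ units)
// instead of being concentrated near the camera as with raw 1/z.
uint16_t depthToZ16(float z, float zNear, float zFar)
{
    if (z < zNear) z = zNear;
    if (z > zFar)  z = zFar;
    float t = (z - zNear) / (zFar - zNear);   // 0..1, linear in z
    return uint16_t(t * 65535.0f + 0.5f);
}
```

With a uniform mapping, each 16-bit step covers the same slice of world-space depth everywhere, which is why the buffer stays usable at long distances without spending more memory per pixel.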

I’ve considered other methods, such as a C-buffer or S-buffer, but ultimately decided to stick with a simpler approach and optimize in other ways. For example, I generate custom scanline rendering code depending on which features are enabled and required – using a permutation system similar to that used for shaders.
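The permutation idea can be illustrated with a feature bitmask selecting a specialized span function – the flags and function names below are hypothetical; the engine generates real specialized code for each combination it needs:

```cpp
#include <cstdint>

// Each enabled feature sets a bit; the combined mask selects a span
// routine specialized for exactly those features, so a span with
// lighting disabled never pays for per-pixel lighting tests.
enum FeatureBits : uint32_t {
    FEAT_PERSPECTIVE = 1u << 0,
    FEAT_LIGHTING    = 1u << 1,
    FEAT_TRANSPARENT = 1u << 2,
};

typedef void (*SpanFunc)();

// Stand-ins for the specialized scanline routines.
static void spanFlat() {}
static void spanPersp() {}
static void spanPerspLit() {}

SpanFunc selectSpanFunc(uint32_t features)
{
    switch (features) {
        case 0:                                return spanFlat;
        case FEAT_PERSPECTIVE:                 return spanPersp;
        case FEAT_PERSPECTIVE | FEAT_LIGHTING: return spanPerspLit;
        default:                               return spanFlat; // fallback
    }
}
```

This is the same trade-off shader permutation systems make: more code variants in exchange for inner loops with no per-pixel branching on features.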