Doomsday Blog
http://blog.dengine.net

Further rendering explorations – Part 3 (25 Jul 2018)

This is the final installment of a short series of posts detailing what I’ve learned while exploring the possible directions that a redesigned Doomsday renderer could take. This post is about integrating the new renderer into the existing engine.

Importing levels from Doom/Hexen formats

As with the Doomsday 1 renderer, the original levels stored in WAD files are not directly usable for rendering. DOOM levels are authored as polygonal sectors with lines representing walls, and the original Doom renderer is based on a 2D BSP where the sectors have been split into convex subsectors plus related pieces of walls. At runtime, the BSP is traversed to quickly sort the wall segments in those subsectors in front-to-back order. However, this approach loses much of its relevance once one assumes that the entire level is already in GPU memory. For larger levels, it is still beneficial to roughly split the level into chunks and sort them to avoid unnecessary overdraw, but by and large the GPU is powerful enough to draw all the triangles of the map without additional culling. If culling is necessary due to an excessively complex map, it is still best to do it entirely on the GPU using indirect draws. I haven’t yet explored this direction very far, but ultimately it will depend on how much additional detail gets added to the surface meshes.

So, what exactly is needed for rendering? A triangulation of each sector’s planes and walls is still required (i.e., converting the polygons to a corresponding set of triangles). Walls are rather trivial, but triangulating arbitrary polygons is a different matter: one needs to account for polygons with arbitrary holes inside, and single edge points that connect to multiple lines. Fortunately, this problem has been studied for decades, so there are several solutions available. In the end, I wrote my own routines that import the sector polygons from a WAD file and go through various manipulations to end up with a clean set of sector polygons and triangulations (i.e., a triangle mesh that covers the sector area).

The nice part is that this does not actually require creating any new vertices; unlike the subsectors produced by BSP construction, sector divisions can use the corner points themselves for subdividing the polygon. The end result is a smaller number of vertices and triangles. In other words, the importer can rely on the original representation of the level as created by the level author, and ignore the BSP tree and other such derived data. (The BSP data is still useful for gameplay routines, though.)

Although I already have the level importer up and running, there are still details that need some fine-tuning. For example, there are special flags that control how wall textures get positioned, and that behavior needs to be replicated in the shaders when calculating texture coordinates for materials. The trickiest part here is getting everything to match the original software renderer, particularly when it comes to emulating the various hacks and quirks there are for creating certain special effects. These will have to be detected during the importing phase so that corresponding geometry can be generated without the renderer actually having any knowledge of this special behavior.

Map packages

The Doomsday 2 package system plays an integral part in the new renderer. The package system was designed to simplify and modularize finding and loading resources. Here it acts as the sole delivery mechanism for feeding data to the renderer: everything from level geometry to textures will be contained within one or more packages.

The D1 renderer grew rather organically (without knowledge of the future, naturally), so the internal boundaries did not form cleanly: for example, the way textures are loaded is convoluted, and the logic spans many subsystems from the file system code to the image loader. In a clean architecture, components are isolated from each other and require minimal knowledge of each other to function; interfaces stay small. For the D2 renderer, this is a central concern. It needs no knowledge of how id Tech 1 textures are found and loaded, for instance; it just needs to be given images and coordinates. This is also the core reason why my exploration work has been done in isolation, separate from the old engine: it is easier to rethink and re-evaluate the necessity of every detail and focus on the minimal required set of functionality.

In practice, the map importer writes a map package using the same format that a manually-authored Doomsday 2 map package would use. One immediate advantage is that map packages can be cached, so the importing process does not need to be repeated. When the game loads a particular level, the renderer is set up simply by loading the corresponding map package. Internally, this greatly simplifies the APIs needed for the renderer, since there is no need to programmatically build and configure the renderer data structures. Everything can be handed over in a self-contained package.

Assets like materials, textures, shaders, and scripts can be embedded in the map package, or shared from other packages (e.g., one package for all textures imported from DOOM.WAD). This provides an unambiguous way to handle PWAD custom textures, too. Even 3D model assets can be embedded in map packages because they use the same package system that the 3D model renderer is already using. This is useful for both surface decoration models and customized objects.

In the cases where data from the original WAD levels is still needed (like the aforementioned 2D BSP), the contents of the original unmodified WAD lumps can be included as separate files inside the map package, where the data can then be loaded easily. Some of this data could also be regenerated on the fly, but the importer should rely on the original data as much as possible.
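To make this concrete, here is a sketch of what a cached map package might look like on disk. All of the file and folder names below are hypothetical; they only illustrate the idea of a self-contained package that bundles geometry, assets, and the original lumps.

    example-e1m1.pack/
        Info              (package metadata)
        geometry          (triangulated sector and wall meshes)
        materials/        (material definitions, or references to shared packages)
        models/           (optional embedded 3D assets for surface decorations)
        original/         (unmodified WAD lumps, e.g., the 2D BSP for gameplay)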

Next steps

Perhaps the biggest missing feature that I still want to explore is global illumination. DOOM maps often have large rooms and areas that are lit in an ambient fashion, or by large luminous surfaces, so it is important to have the lighting behave in the appropriate volumetric way in these spaces. I have a good feeling about using the GPU for processing this lighting data (in a semi-dynamic fashion), but it will take some work to find the correct approach.

Currently the new rendering code resides in a completely separate test application, but eventually it will be integrated into the main Doomsday executable. I have already split the new renderer into a library of its own, to keep its dependencies clean, so the rest of Doomsday can pretty much treat it as a black box that is given a map package to display. The “classic renderer” will probably be removed completely in a future version — time is better spent improving the new renderer. It may actually be quite a challenge to remove all the old rendering code, as there may be some unexpected internal dependencies and side-effects.

While the 3D model renderer is already using shaders and PBR materials, it will still need some updating to integrate nicely with the new renderer (e.g., draw to the G-buffer with separate colors, normals, etc.). This will be pretty trivial to implement, but existing custom 3D model shaders will also need tweaking.

I’m currently targeting OpenGL 3.3 Core Profile for compatibility with older GPUs, but at some point in the future I intend to also look into Vulkan support. However, this is more of a long-term plan. The graphics code has been written in such a way that the underlying graphics API is abstracted away during normal use, so switching it shouldn’t be a massive challenge. Vulkan would provide opportunities for optimizing performance and doing async compute operations (e.g., for global illumination).

Finally, I’m still eyeing ports for Android, iOS, and Raspberry Pi. At its core, the new renderer is much better aligned with these mobile platforms as it relies on static data wherever possible. However, the shaders will need to be ported to OpenGL ES and likely simplified somewhat when it comes to the more advanced techniques. For example, I don’t expect displacement mapping to be feasible on mobile GPUs any time soon, particularly given the relatively high screen resolutions.

Conclusion

I hope you have enjoyed this short series about rendering! It was a blast exploring this stuff during the first half of the year, and it is a large part of why I haven’t paid much attention to the stable builds recently. There are a number of nice improvements in the 2.1 unstable branch that I will likely end up pushing out as a stable build sooner or later. However, many of the planned improvements for 2.1 will almost certainly be postponed to a later release… I will need to reorganize the Roadmap a bit once again so it’s aligned with my current thinking, particularly when it comes to integrating the new rendering code.

Further rendering explorations – Part 2 (11 Jul 2018)

During the spring I’ve been exploring the possibilities and potential directions that a completely redesigned renderer could take. Continuing from part one, this post contains more of the results and related thoughts about where things could and should be heading.

Parallax mapping

I’ve always thought that parallax mapping would bring a lot of nice detail to DOOM-style maps that contain large planar surfaces. When writing new material shaders, one of the first things I tried was parallax mapping. This is a technique where the surfaces remain planar, but texture coordinates are shifted to create the illusion of depth variations. Naturally, one must have a height (displacement) map for each surface that uses this effect.

I tried various versions of parallax mapping. In the screenshot above, the wall on the right is a simple quad that gets displaced in the fragment shader. In this final version, displacement is also applied in the depth buffer, which makes the end result virtually indistinguishable from actual mesh-based geometry. However, it is quite costly for the GPU in terms of per-pixel computation, and the maximum depth of the effect needs to be limited to avoid visual artifacts. It could still be useful for certain special use cases, but I’m leaning toward mesh-based surface detailing because that can be stored as static vertex data and thus can be cheaply rendered.

The simpler version of parallax mapping could also be useful in certain situations such as very rough surfaces where normal mapping doesn’t quite hide the flatness of the surface. In practice, the height/displacement values can be stored in the alpha channel of the normal map, so they can be conveniently present for selected materials.
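As a rough sketch of the simple variant: the fragment shader shifts the sampled UV along the tangent-space view direction in proportion to the height read from the normal map’s alpha channel. The snippet below is embedded as a C++ string constant, as shader sources typically are in engine code; the uniform and variable names and the scale factor are made up for the example, not Doomsday’s actual shader interface.

    // Simple parallax mapping sketch. Height is stored in the alpha
    // channel of the normal map.
    static char const *parallaxFragSource = R"(
        #version 330 core
        uniform sampler2D uNormalHeightMap; // rgb: normal, a: height
        uniform float     uParallaxScale;   // maximum depth of the effect
        in vec2 vUV;
        in vec3 vViewDirTS;                 // view direction in tangent space
        out vec4 outColor;

        vec2 parallaxUV(vec2 uv, vec3 viewDir)
        {
            float height = texture(uNormalHeightMap, uv).a;
            // Shift the coordinates along the view direction; higher
            // height values make the pixel appear to recede more.
            return uv - viewDir.xy / viewDir.z * (height * uParallaxScale);
        }

        void main()
        {
            vec2 uv = parallaxUV(vUV, normalize(vViewDirTS));
            vec3 normal = texture(uNormalHeightMap, uv).rgb * 2.0 - 1.0;
            outColor = vec4(normal * 0.5 + 0.5, 1.0); // debug: show normals
        }
    )";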

Materials and textures

Speaking of material shading, materials suitable for physically-based rendering (PBR) are nowadays the common solution. Explained briefly: instead of having a single texture map containing the final appearance of the material, one has multiple texture maps that describe various aspects of the surface. This takes up quite a bit more memory, but the benefit is that lighting applied to the surface can accurately and dynamically adapt to the surface properties.

(The 3D model renderer in Doomsday 2 already supports PBR materials: one can provide specular/gloss and emissive maps in addition to albedo color and normal vectors.)

Reflections are a crucial part of physically-based rendering; in addition to shadows, light bouncing around is basically what gives a realistic appearance to a scene. In addition to light that is coming directly from a light source, light can also come from the surrounding environment. Solving this accurately requires ray tracing, but fortunately plausible results are possible with cubic environment maps and screen-space techniques.

The “sky box” is one environment map that can be assumed to be always available. It is convenient as a fallback for reflections where no better information is available. The smallest mipmap levels of the sky are also good for approximating overall ambient light in the scene, although better global illumination techniques would yield improved results.
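A minimal sketch of that fallback, assuming the sky cube map is bound with a full mip chain (the names are made up for the example):

    // Ambient-light approximation from the sky box. Sampling the last
    // (smallest) mip level of the cube map yields a heavily blurred,
    // average sky color for a given direction.
    static char const *skyAmbientSnippet = R"(
        uniform samplerCube uSkyBox;
        uniform float       uSkyMaxMip; // log2 of the cube face size

        vec3 skyAmbient(vec3 normal)
        {
            return textureLod(uSkyBox, normal, uSkyMaxMip).rgb;
        }
    )";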

I have been thinking about rendering cube maps on the fly as needed for localized reflections but haven’t implemented this yet. This is crucial for sharp reflections (such as mirrors/chrome), but in the big picture it is a relatively niche requirement — during gameplay one does not notice that the reflections are physically inaccurate, especially if they are a bit blurry anyway.

A big open question is whether it is possible to generate good-looking PBR materials based on the original Doom textures. I have a feeling this could be possible with a set of manually-created templates and generic surface detail patterns. The other option is to rely on manually-prepared texture packs. A mixture of these two approaches is also possible. In any case, if one also considers additional surface detail meshes/objects, there is plenty of manual work to create materials with the appropriate level of detail and visual quality.

Liquids

When it comes to rendering liquids, and more generally volumetric effects, there are a few things to take into account: how will the surface be rendered, and what happens to light passing through the liquid?

Compared to basic Doom map data structures, the new renderer has the concept of optionally subdividing sectors into vertical subvolumes. A volume may then be rendered as “air” (invisible), or have additional effects applied to it (such as water or fog).

Liquids are a complex effect to render due to how light behaves when it interacts with a volume of liquid. The reflected component can be rendered in the same manner as with a reflective opaque surface, but there is additional refracted light traveling through the volume and bending as it exits the surface. I’ve been rendering liquids with an additional pass that does a screen-space approximation of what happens to the refracted light. Volumetric fog is an important part of the effect, too, and that requires calculating the distance between the surface of the liquid and the pixels visible behind it. Thanks to the G-buffer, this information is readily available.
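A sketch of the fog-distance part of that pass, with assumed helper names (viewPositionAt() stands in for a reconstruction of view-space position from the G-buffer depth):

    // Underwater fog sketch: attenuate the refracted scene color by the
    // distance the ray travels through the liquid (a Beer-Lambert style
    // falloff). All names here are illustrative.
    static char const *waterFogSnippet = R"(
        uniform sampler2D uRefractedScene; // scene behind the surface
        uniform vec3      uFogColor;       // tint of the liquid
        uniform float     uFogDensity;

        vec3 waterFog(vec2 screenUV, vec3 surfacePosVS)
        {
            vec3 behindPosVS = viewPositionAt(screenUV); // from G-buffer depth
            float travel = distance(surfacePosVS, behindPosVS);
            float fog = 1.0 - exp(-uFogDensity * travel);
            vec3 refracted = texture(uRefractedScene, screenUV).rgb;
            return mix(refracted, uFogColor, fog);
        }
    )";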

One further aspect to note is the surface wave pattern. So far I’ve only used a very simple sum of two moving noise patterns, which is a cheap way to get some plausible rippling waves. However, one can go very deep on this by actually simulating how waves would behave on the surface, even adding an interactive component when objects move through the liquid. This is more of a nice-to-have feature, though, and I am not planning to dive into such depths in this domain.
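The two-noise approach could look something like this (a sketch; the scales and scroll speeds are arbitrary):

    // Wave-height sketch: two noise samples scrolling at different scales
    // and directions, summed to break up the repetition.
    static char const *waveSnippet = R"(
        uniform sampler2D uNoise; // tiling noise texture
        uniform float     uTime;

        float waveHeight(vec2 uv)
        {
            float a = texture(uNoise, uv * 1.0 + uTime * vec2( 0.03, 0.01)).r;
            float b = texture(uNoise, uv * 1.7 + uTime * vec2(-0.02, 0.04)).r;
            return 0.5 * (a + b);
        }
    )";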

[Screenshot: Greenish water with displacement-mapped surface waves. Note how the surfaces appear bent underwater.]

HDR, bloom, and tone mapping

After all the light reaching the visible pixels has been calculated, it needs to be converted to a color space suitable for the user’s screen. Thanks to the floating-point framebuffer, the intensity of visible light may vary greatly. This can be compensated with automatic exposure scaling and tone mapping. Automatic scaling is based on measuring average color values from the previously completed frame and tuning a scaling factor accordingly, to keep the peak values below a certain threshold. There is a lot of room for tuning the algorithm to make the effect more natural. Exposure control also ties into the final tone mapping. I experimented with a few tone mapping functions, but choosing the best one partially depends on the overall lighting system and what works with the kind of lights in use.
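For illustration, a minimal exposure-plus-tone-mapping step might look like this, assuming the exposure factor has been derived from the previous frame’s averages and using the well-known Reinhard operator as a stand-in for whichever function ends up being chosen:

    // Tone mapping sketch: scale HDR color by exposure, then compress to
    // a displayable range. The Reinhard operator is just one candidate.
    static char const *toneMapSnippet = R"(
        uniform sampler2D uHdrFrame; // 16-bit float RGB framebuffer
        uniform float     uExposure; // tuned from previous frame's average

        vec3 toneMap(vec2 uv)
        {
            vec3 hdr = texture(uHdrFrame, uv).rgb * uExposure;
            return hdr / (hdr + vec3(1.0)); // Reinhard
        }
    )";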

Additionally, a bloom effect can be applied to accentuate the really bright parts of the frame. A simple bloom effect already exists in the D1 renderer but is quite limited since it has to deal with only having 8-bit RGB colors as input. That means colors at the high end of the range will always result in bloom regardless of whether the situation warrants it or not. But now with floating point colors, bloom can be applied only to the truly bright pixels.
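A sketch of the corresponding bright-pass, assuming a luminance threshold (names illustrative); the extracted pixels would then be blurred and added back to the frame:

    // Bloom bright-pass sketch: with floating-point input, only pixels
    // whose luminance exceeds the threshold feed the bloom blur.
    static char const *bloomBrightPassSnippet = R"(
        uniform sampler2D uHdrFrame;
        uniform float     uBloomThreshold; // in linear light, e.g. > 1.0

        vec3 brightPass(vec2 uv)
        {
            vec3 hdr = texture(uHdrFrame, uv).rgb;
            float luma = dot(hdr, vec3(0.2126, 0.7152, 0.0722));
            return luma > uBloomThreshold ? hdr : vec3(0.0);
        }
    )";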

To be continued…

In the next part, I’ll discuss importing levels from WAD files and what remains to be done next.

Further rendering explorations – Part 1 (27 Jun 2018)

When it comes to graphics, things sure have changed since the beginning of the project. Back in the day — almost 20 (!) years ago — GPUs were relatively slow, you had a few MBs of VRAM, and screen resolutions were in the 1K range. Nowadays most of these metrics have grown by an order of magnitude or two, and the optimal way of using the GPU has changed. While CPUs have gotten significantly faster over the years, GPU performance has grown even more dramatically. Today, a renderer needs to be designed around feeding the GPU and allowing it to do its thing as independently as possible.

With so much computation power available, rendering techniques and algorithms are allowed to be much more complex. However, Doomsday’s needs are pretty specific — what is the correct approach to take here? During the spring I’ve been exploring the possibilities and potential directions that a completely redesigned renderer could take. In this post, I’ll share some of the results and related thoughts about where things could and should be heading.

Vertex-shaded world

One of the interesting aspects of the first-generation DOOM engine is that all floors and ceilings (i.e., planes) are allowed to dynamically move up and down at any time. This is due to the 2D nature of the map — the vertical dimension is left unspecified until the world is actually rendered.

In the past, Doomsday has resorted to pushing all new vertex data on every frame to account for this dynamic nature of the world. Combined with robust clipping of visible geometry, this has been mostly an acceptable solution, although it puts all the burden on the CPU to determine which surfaces are actually drawn. Thankfully most maps have only a small number of vertices and triangles so not too much data has been flowing around.

One of the ideas that I’ve found fascinating in recent years is the procedural augmentation of map geometry using 3D meshes. This is analogous to the idea of light source decorations, where lights can be attached to certain pixel offsets on textures. However, the end result is that surface geometry will have a lot more vertices. It will not do to keep streaming all of it dynamically.

There are basically two solutions here: static vertex data accompanied by a buffer of vertical offsets, and instanced mesh rendering. What I’ve implemented so far is that a static, read-only version of the entire map geometry is copied to one buffer, and there is another (much smaller) dynamic buffer containing “plane heights” that get applied to vertex positions in the vertex shader. All vertices connected to a certain plane use the same plane height value, allowing all of them to move by modifying a single value in the vertical offsets buffer.
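A sketch of the vertex-shader side of this arrangement, with illustrative names (here the per-plane heights live in a buffer texture, and each vertex stores the index of the plane it follows):

    // Plane-height sketch: map geometry is static; moving a floor or
    // ceiling only updates one float in the heights buffer. Assumes a
    // z-up coordinate system; all names are illustrative.
    static char const *planeHeightVertSource = R"(
        #version 330 core
        layout (location = 0) in vec3  aPosition;   // static map vertex
        layout (location = 1) in float aPlaneIndex; // plane this vertex follows

        uniform samplerBuffer uPlaneHeights; // dynamic, one float per plane
        uniform mat4          uMvpMatrix;

        void main()
        {
            vec3 pos = aPosition;
            pos.z = texelFetch(uPlaneHeights, int(aPlaneIndex)).r;
            gl_Position = uMvpMatrix * vec4(pos, 1.0);
        }
    )";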

On the other hand, mesh instances are good for repeating the same 3D shapes multiple times. Considering that Doom textures are pretty small and thus repeat multiple times over longer surfaces, 3D decorations can efficiently be rendered as instances. (So far I haven’t actually implemented this.)

There are some interesting side effects to having a single mesh represent a level where floors and ceilings can move arbitrarily. For example, consider the bottom/top section of a wall connecting two floors/ceilings. Depending on which floor/ceiling is higher at a given time, the section of the wall may be facing either the front or back sector of the line. Because the map data remains static in GPU memory and the plane heights can change dynamically, this means both sides of the line must be stored in the vertex data, and the vertex shader needs to determine which side is currently visible.

G-buffer and SSAO

Advanced rendering techniques need a lot more information than one can store in a simple 24-bit RGB framebuffer, like the one used in the Doomsday 1.x (D1) renderer. The basic principle is that instead of directly rendering to visible RGB colors, there are one or more intermediate steps where various kinds of buffers are rendered first, and then subsequent rendering passes can use this cached data to compute the actual visible colors.

In practice, I’m talking about using a “G-buffer” and a separate 16-bit floating point RGB frame buffer. The G-buffer contains depth values, normal vectors, surface albedo colors, material specular/gloss parameters, and emissive lighting values. The benefit of floating-point color is that one can sum up the results of all the intermediate shading steps and end up with a physically accurate representation of total surface lighting.
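As an illustration, the material pass of such a renderer might write to multiple render targets along these lines (the packing below is a guess for the example, not the engine’s actual layout; depth comes from the regular depth attachment):

    // G-buffer output sketch: one fragment shader writing the surface
    // attributes that later passes will read.
    static char const *gbufferOutputsSnippet = R"(
        layout (location = 0) out vec4 outAlbedo;    // rgb: albedo color
        layout (location = 1) out vec4 outNormal;    // rgb: encoded normal
        layout (location = 2) out vec4 outSpecGloss; // rgb: specular, a: gloss
        layout (location = 3) out vec4 outEmissive;  // rgb: emitted light
    )";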

After setting up the G-buffer, the first effect I implemented was classic screen-space ambient occlusion. It is based on sampling the neighborhood of each pixel to estimate how much of the ambient light arriving at the pixel gets occluded. This requires access to per-pixel depth values and normal vectors, which are provided by the G-buffer. While the technique is relatively simple, tuning it to produce exactly the right appearance for Doom maps will be the main challenge. In the end, it should be able to replicate an effect similar to the old textured triangle-based corner shadows that Doomsday has been using in the past — however, with much more fidelity, and the effect will apply to all meshes, including objects. The old triangle-based shadowing also amounts to quite a lot of not-that-straightforward code that will be good to retire. This change will also make it unnecessary to render any kind of additional “shadow blobs” under objects, as these will be automatically produced by the SSAO pass.
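For reference, the core of a classic SSAO pass looks roughly like the following sketch. The kernel size, bias, and the viewPositionAt() helper (reconstructing view-space position from the G-buffer depth) are assumptions for the example, and the usual refinements (range check, noise-based kernel rotation, blur) are omitted:

    // SSAO sketch: sample points in a hemisphere around the pixel and
    // count how many land behind nearby geometry.
    static char const *ssaoSnippet = R"(
        uniform vec3  uKernel[16]; // random points in a unit hemisphere
        uniform float uRadius;
        uniform mat4  uProjMatrix;

        float ambientOcclusion(vec2 uv, vec3 normalVS)
        {
            vec3 originVS = viewPositionAt(uv); // from G-buffer depth
            float occluded = 0.0;
            for (int i = 0; i < 16; ++i)
            {
                // Flip samples into the hemisphere above the surface.
                vec3 dir = uKernel[i];
                if (dot(dir, normalVS) < 0.0) dir = -dir;
                vec3 samplePosVS = originVS + dir * uRadius;

                // Project back to screen space and compare depths.
                vec4 clip = uProjMatrix * vec4(samplePosVS, 1.0);
                vec2 sampleUV = clip.xy / clip.w * 0.5 + 0.5;
                float sceneZ = viewPositionAt(sampleUV).z;
                if (sceneZ > samplePosVS.z + 0.02) occluded += 1.0;
            }
            return 1.0 - occluded / 16.0;
        }
    )";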

Deferred shading

There are essentially two kinds of light sources in Doom maps: point lights (usually attached to objects), and volume lighting (bright sectors). The former translates pretty easily to GPU-based shading, but the latter will require more effort. If one simply takes the old sector boundaries and uses those for lighting objects and surfaces, the result won’t be terribly convincing.

I’ve been thinking about higher-fidelity global illumination approaches but haven’t prototyped one yet. The closest analogy in the D1 renderer is the “bias lighting grid”, which was essentially trying to solve the same problem. However, it was always limited to vertex-based lighting so it had trouble working in spacious/low-poly rooms. It was also updated entirely on the CPU, making it pretty inefficient. Modern GPUs have the capability to construct and update a more complex global illumination data structure on the fly; I intend to explore this direction further in the future.

With the G-buffer providing information about visible pixels in the frame, lighting can be done as a separate rendering pass that affects only the visible pixels. This kind of deferred shading is suitable especially when the scene contains many light sources.
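A sketch of the per-light shading step, again with assumed helpers for reading the G-buffer (viewPositionAt(), gBufferNormalAt(), and gBufferAlbedoAt() are illustrative):

    // Deferred point-light sketch: shade only the pixels the light's
    // volume covers, reading surface data back from the G-buffer.
    static char const *pointLightSnippet = R"(
        uniform vec3  uLightPosVS; // light position in view space
        uniform vec3  uLightColor;
        uniform float uLightRadius;

        vec3 shadePointLight(vec2 uv)
        {
            vec3 posVS    = viewPositionAt(uv);
            vec3 normalVS = gBufferNormalAt(uv);
            vec3 albedo   = gBufferAlbedoAt(uv);

            vec3 toLight  = uLightPosVS - posVS;
            float dist    = length(toLight);
            float atten   = clamp(1.0 - dist / uLightRadius, 0.0, 1.0);
            float diffuse = max(dot(normalVS, toLight / dist), 0.0);
            return albedo * uLightColor * (diffuse * atten * atten);
        }
    )";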

It is interesting to note that this has some similarities to the way the D1 renderer draws dynamic lights. (Setting aside that D1 does not do any kind of per-pixel lighting calculations since it does not use shaders on world surfaces.) While a small number of lights is drawn in a single pass (light textures multiplied with the surface color), additional lights are applied during a separate rendering pass where each dynamic light is represented by quads projected on surfaces. The D1 renderer needs to first project all dynamic lights to walls and planes using the CPU to generate a set of contact quads; in the new renderer, the corresponding operation occurs fully on the GPU using stencils and without assumptions about the shape of the world.

Shadow maps

The D1 renderer hasn’t done much in the area of shadows even though there have been positional light sources since the beginning. Rendering shadows requires quickly drawing the world a few times from different points of view, and this has not been feasible due to the aforementioned inefficient way of handling map geometry.

Good-quality shadows are absolutely essential for creating convincing visuals, so they are an important feature to have in the future. I’ve been looking at two types of shadow maps: basic directional ones for bright outdoors lighting, and cubic shadows for point lights.

Now that I have the world geometry in static form in GPU memory, it is possible to render multiple shadow passes to generate a set of shadow maps. One of the tricky parts here is choosing which lights will cast shadows in the frame — each shadow map takes up rendering time and memory, and a typical map might have dozens or even hundreds of light sources. Cubic shadow maps are also a bit more expensive to update, as they need to be rendered using six different viewing directions.
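The six-direction update could be organized like the sketch below. The face orientations follow the standard OpenGL cube map convention; the render function is a stub standing in for the engine’s actual shadow pass:

    #include <cstdio>

    struct Vec3 { float x, y, z; };
    struct CubeFace { Vec3 dir, up; };

    // Standard OpenGL cube map face orientations (+X, -X, +Y, -Y, +Z, -Z).
    static CubeFace const cubeFaces[6] = {
        {{ 1,  0,  0}, {0, -1,  0}}, // +X
        {{-1,  0,  0}, {0, -1,  0}}, // -X
        {{ 0,  1,  0}, {0,  0,  1}}, // +Y
        {{ 0, -1,  0}, {0,  0, -1}}, // -Y
        {{ 0,  0,  1}, {0, -1,  0}}, // +Z
        {{ 0,  0, -1}, {0, -1,  0}}, // -Z
    };

    // Stub standing in for the real work: set up a camera with a
    // 90-degree square frustum and draw the static map geometry.
    static void renderShadowFace(int face, Vec3 const &eye,
                                 Vec3 const &dir, Vec3 const &up)
    {
        std::printf("face %d: dir (%g, %g, %g)\n", face, dir.x, dir.y, dir.z);
        (void)eye; (void)up;
    }

    void renderPointLightShadow(Vec3 const &lightPos)
    {
        // Six passes, one per cube face; together the 90-degree views
        // cover the full sphere around the light.
        for (int face = 0; face < 6; ++face)
        {
            renderShadowFace(face, lightPos,
                             cubeFaces[face].dir, cubeFaces[face].up);
        }
    }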

The remaining work here is to implement cascaded shadow maps to improve the quality of directional shadows.

To be continued…

There is a lot more to cover so this will be a series of posts. In the next part, I’ll discuss HDR rendering, PBR materials, and liquids.

Status update (2 May 2018)

The spring months have been a little crazy, with various Real Life time sinks preventing me from delving too deep into fun coding. Consequently, there has been little to no progress with the tasks on the roadmap, such as the multiplayer improvements for version 2.1. Given that several months have passed, it will be challenging to find the motivation to restart this work. I am tempted to make some changes to the roadmap to get things rolling along again.

I have managed to steal away some time to explore an exciting new direction for the renderer, though. The basic gist of this effort is to completely revise how the game world is drawn, bringing it up to par with the recently redone 3D model renderer.

Doomsday’s “first-gen” rendering code is well past its best-by date. It was originally written for OpenGL version 1 way back in the year 2000. Since then some superficial improvements have been made, but it fundamentally relies too much on the CPU for processing the map geometry. Nowadays the GPU should be handling all the heavy lifting.

The approach I have chosen is a clean room implementation of an independently designed new renderer. This affords me the opportunity to reconsider all the assumptions about how map data is structured, and revise it to support things like slopes while keeping it fully compatible with Doom-style 2.5D maps.

While one important aspect of the new renderer is to allow efficient low-end rendering on Android/iOS (where the CPU should do as little as possible), it also needs to scale to high-end GPUs. For this kind of scaling to work, both the materials and the surface geometry need to be radically enhanced for the high end. So far I’ve been exploring the different possibilities to bring additional detail to the world (for instance displacement mapping, shown together with screen-space ambient occlusion in the image at the head of this post). Overall it seems pretty clear that all of these enhancements can’t be done automatically: manually created PBR materials and hand-crafted surface details will need to factor into the system.

Once a prototype of the new renderer is up and running and the appropriate rendering techniques have been chosen, it can be introduced as an alternative to the old renderer. It will likely be necessary to keep the Classic renderer around for some time, until the new one is sufficiently polished.

I leave you with this early screenshot where I was testing simple parallax mapping. Other noteworthy features shown here are HDR lighting and shadow mapping using a cubemap.

Downloading data files from the server (4 Nov 2017)

To kick off the multiplayer improvements for version 2.1, I’ve started with adding access to data files over the network. For example, if a server is running a custom PWAD that you don’t have, the client will automatically download a copy before joining the game.

That pretty much sums it up for the user-facing portion of this feature. You will see a popup displaying download progress, and there is an option to cancel.

Much of this work was recently merged to the master branch (in the form of 83 commits) and is now included in unstable builds. Note that a larger influx of changes like this usually leads to new glitches being introduced… I will be improving the code in the coming days/weeks.

Internally, this is part of a larger feature called remote package repositories. The basic idea is that there is a collection of packages stored on a web server, and Doomsday is able to download required packages when necessary. There are two kinds of repositories:

A native package repository is provided by a running multiplayer server. In practice, all data files and packages loaded in the game session are offered as a package repository to clients. However, this excludes vanilla IWADs and Doomsday’s own core packages: the former because of copyright, and the latter to avoid version conflicts.

A web-hosted package repository is a collection of data files and packages on a web server where the file tree is publicly accessible via HTTP. I will talk more about this in a later blog post.

Before going further into the implementation, I would like to mention one nice addition to the Doomsday core library. Network operations can take a few seconds to finish, which makes it important to have a convenient API for asynchronous operations — the goal is to never freeze the UI. For these kinds of tasks, I added a new de::async utility that executes a callback in a background thread and, when that is finished, executes another callback in the main thread. This allows tasks like event processing, UI, and graphics to be performed in the right thread. At some point in the future I expect this will allow getting rid of the old “Busy Mode” background thread.
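The usage pattern is roughly the following sketch. The exact de::async signature may differ from what is shown here, and downloadFile()/showDownloadCompleted() are made-up stand-ins for application code:

    // de::async pattern sketch: the first callback runs in a background
    // thread; the second receives its result in the main thread.
    de::async(
        [] () -> de::Block {
            // Blocking network work, safely off the UI thread.
            return downloadFile("example.pack");
        },
        [] (de::Block const &contents) {
            // Back in the main thread: safe to touch UI and graphics.
            showDownloadCompleted(contents.size());
        });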

de::async has been very helpful with implementing the communication between the client and a file repository. The protocol works roughly as follows:

When first connecting to a repository (e.g., the multiplayer host), a list of the available files and/or packages is downloaded. The client can then quickly determine if required packages are available remotely.

A client may query file metadata such as names, sizes, and modification timestamps. In Doomsday’s virtual file system, this information is used for creating symbolic links that represent the remote files. The links can be manipulated like regular local files, with the exception that the contents need to be downloaded before reading.

Downloaded file contents are cached under <runtime>/cache/remote/ so that in the future the same file doesn’t have to be downloaded again. By default, downloaded files are tagged as “hidden” and “cached” so that they are hidden in the UI. (Note that entering “hidden” in a package search field will show only the hidden packages.)

These changes enable several UI and user experience improvements in the future. For instance, there can now be an option for installing a remote package locally. Also, the Shell can be enhanced with a UI for easily selecting PWADs and other mods to use in multiplayer games, and the clients don’t have to worry about setting up the same files manually.

Doomsday 2.0.3, other minor updates (4 Sep 2017)

Over the past few weeks, I’ve made a couple of stable builds of version 2.0.3. The first one of those was made a bit early, as I was about to leave for a trip abroad, and mainly included one bug fix for Hexen related to saving and restoring object state from save files. Recently I’ve made a couple of additional stable builds to investigate and fix a problem with the Ubuntu Launchpad build scripts, where the “doomsday-stable” packages were correctly built but nothing was actually included in the generated DEB packages.

On the whole, progress has been somewhat slow. Perhaps the biggest advance was in the dengine.net website backend, where I’ve now split the API functionality to a separate api.dengine.net server, so that things like master server and update queries won’t interfere with the normal operation of the project home page and forums. I hope this will alleviate the issue of dengine.net sometimes failing to respond to requests.

Prompted by a forum post, over the weekend I was investigating an audio volume issue on Windows. It turns out there is a problem with the SDL_mixer music volume controls. I have yet to determine if there is a workaround that Doomsday can do to avoid the issue. Such a workaround would be preferable to disabling SDL_mixer music on Windows completely, since SDL_mixer does bring value to the table (e.g., music formats). The situation is also slightly tricky because SDL and SDL_mixer are built in to the engine, so there isn’t a plugin to take out or something simple like that.

Doomsday 2.0.2 released (1 Jul 2017)

The stable version 2.0.2 (build 2372) is now available. The release notes can be found in the Manual.

This is another patch release that fixes incorrect behavior and improves the stability of the engine. Some of the fixes affect data file identification, so if you’re experiencing issues with your WAD files or other resources, try selecting Clear Cache from the DE menu and restarting Doomsday so that all your data is reindexed.

I recommend anyone currently running Doomsday 2 to upgrade to 2.0.2.

Odds and ends (29 Jun 2017)

I’ve been getting back into the swing of things with a set of smaller changes and improvements. I haven’t yet started work on the major focus of the 2.1 release, which is planned to be Multiplayer improvements.

2.0.2 will be out on Saturday, July 1st, with a couple of bugfixes that I hope will be useful. As usual, I will post the details after the build is available.

The recent changes include:

Stability improvements. Several methods that went against the C++ standard were revised. Reliance on undefined behavior was leading to compiler optimization issues, particularly with the latest GCC 7. Also, error reporting is now more robust. For instance, previously when something went wrong during busy mode (like engine startup), it could easily lead to a crash because the client app wasn’t reacting to the situation appropriately. When the error message was being shown, the application was still trying to do something in the background.

The model renderer shaders support more macros for easier customizability. Generally speaking, using macros is more future-proof than writing completely customized shaders.

Resource identification fixes. There was a duplicate IWAD spec for Heretic 1.3 vs. SOSR. Now only the latter remains; if you have problems with your Heretic IWADs, Clear Cache and restart Doomsday. Also, autogenerated package IDs had a problem with special characters in file names. Doomsday chooses package IDs based on the file name of the data file, but it allowed spaces and quotes to be included in the IDs. This led to problems because whitespace characters are used as separators in lists of IDs, and quotes may not be accurately escaped in Info strings. Again, Clear Cache is recommended.

Assimp build option. There’s a new CMake build option (DENG_ASSIMP_EMBEDDED) for disabling the use of Doomsday’s customized version of Assimp. This allows one to configure Doomsday to use the Assimp installed as a system library, which is more common on Linux for instance.

More UI tweaks. Plenty of small things:

Minor dialog layout improvements to avoid awkward dimensions.

Changed the UI font on Windows.

Added popup outlines for visual clarity.

Fixed Home tab scrolling with the mouse wheel so that it doesn’t always jump to the previously selected item when switching between tabs.

Game icons are refreshed when package availability changes. No more missing and incorrect icons shown for game profiles.

Fixed missing window fullscreen state notification (was affecting the appearance of the “Quit” button on Mac).

Monitor refresh rate. Yesterday I started working on monitor refresh rate configuration options. I realized these have been completely missing from the settings for some time now. Doomsday does query the supported display modes, which includes information about refresh rates, so now you have the option of choosing which refresh rate Doomsday will prefer when switching modes on Windows/Linux (display mode changes are not done at all on macOS). However, I recommend you consider adjusting the game pixel density instead of the screen resolution to keep the UI sharp and crispy.

Notes about build 2353 (12 Jun 2017)

I’ve been taking it easy for the past couple of weeks, but last weekend I did manage to merge the recent OpenGL changes to the master branch. Build 2353 now includes this work. Please let me know if something appears broken!

All OpenGL use now happens via OpenGL 3.3 Core Profile. If you go to the Doomsday About box, the GL info popup should indicate that the OpenGL 3 or 4 driver is in use. This means that all reliance on the old OpenGL 2 fixed-function pipeline is gone, and rendering is done via shaders. However, the map renderer itself was not changed yet, so it is still unaware that shaders are available. In practice, everything should be working as before, though with slightly slower performance. The additional overhead comes from the DGL wrapper that provides an API compatible with the old fixed-function drawing code. The use of this wrapper will be removed in Doomsday 2.2 and beyond, as the drawing code is revised to work in a more GPU-friendly way.

There’s a new build option for selecting between OpenGL 3.3, OpenGL ES 3, and OpenGL ES 2. Only the first two of these have been implemented, and some of the shaders don’t work under GLES yet. As work on the mobile version progresses in the future, the GLES variants will be improved as needed. For now, though, desktop OpenGL remains the priority.

In addition to graphics, I’ve also updated a few of the 3rd-party libraries:

The obsolete FMOD Ex has been replaced with the latest FMOD Studio Low-Level Programmer API. The new API is roughly equivalent in functionality as far as Doomsday is concerned.

The road to mobile (20 May 2017)

I’ve spent the past couple of weeks experimenting with an iOS port of Doomsday. This is now at a proof-of-concept level and not playable yet. It isn’t quite the right time to continue much further along this path, but read on for my findings!

OpenGL migration

During May, I’ve been migrating Doomsday’s OpenGL code to version 3.3 of the API. This is quite significant because many of the deprecated legacy OpenGL features are no longer available in this version, and thus Doomsday’s existing code needs a few important adaptations. This version of OpenGL is also well-aligned with OpenGL ES 3, the API that is offered on modern iOS and Android devices.

The long-term goal is to revise the renderer to work in line with the expectations of present-day OpenGL. In the short-term, though, adaptations for old code are important because they at least provide functionally correct rendering even though the performance is nowhere near optimal levels.

These OpenGL 3.3 adaptations are already working quite well. The big upside is that all rendering is finally going through shaders rather than relying on the old fixed-function pipeline.

Mobile platforms

I’ve long been wanting to port Doomsday to iOS and Android. This is one of the reasons why Qt was chosen as the underlying software framework (due to its wide platform support). While mobile devices may not exactly be the optimal way to play a game like Doom, they are also great because they force you to really think about performance optimizations and utilizing the hardware to its fullest. This kind of work will benefit Doomsday on all platforms. There is also something really attractive about having the game in your pocket at all times.

The great thing about OpenGL 3.3 is that it is largely compatible with OpenGL ES 3, the current version of OpenGL available on iOS and Android. This means that as far as the rendering is concerned, most of the code should just work on mobile.

With this in mind, I have been putting together a proof-of-concept build of Doomsday for iOS. From a technical perspective, this actually entails incorporating Doomsday’s rendering and input under a very thin Qt Quick (QML) wrapper. The benefit of this approach, instead of using native software components directly, is that the same setup will work on Android in addition to iOS. In fact, it would also work on the desktop but the environment there is different enough to warrant using the traditional window-based approach.

While the progress so far is quite promising, there are still a couple of big hurdles to overcome. Doing a full-fledged mobile port is plenty of work:

The game input mechanisms need rethinking. On the desktop, Doomsday relies heavily on mouse and keyboard input, neither of which are available on a mobile device. The common approaches here are virtual D-pads/buttons and tilt controls, and of course using supported physical gamepads.

There are various OS/third-party integrations needed for accessing your IWAD files and add-ons. For instance, one might like to load the IWADs from Dropbox or another cloud storage.

The UI needs to be tweaked for small screen touch-only interaction. Doomsday’s UI widgets are already quite flexible when it comes to resizing, but things like scrolling by dragging with your finger need to be implemented separately.

Optimizations of all kinds. It isn’t very nice if the device runs too hot and/or the frame rate is choppy.

Qt 5.8 still uses only static libraries on mobile. I gather this will change in Qt 5.9, which would make it easier to deal with Doomsday’s plugins since they would not have to be statically linked into the executable any more.

The overall plan regarding mobile is that it will be a longer-term goal. The rendering enhancements planned for version 2.2 are quite essential for mobile, so an official release would occur in the 2.3/2.4 timeframe at the earliest. Nevertheless, doing an experimental port at this stage has clarified a number of things for me in terms of what is actually required. This should make it easier to continue making preparations in the future.

Raspberry Pi

In addition to mobile devices, Raspberry Pi is also quite an interesting development platform. The OpenGL situation there is a little bit more constrained, though, with only OpenGL ES 2 supported. Compared to ES 3, it lacks a couple of useful features, but none that are showstoppers for Doomsday.

At this time I would estimate that after the 2.2 renderer enhancements are done, it will be quite straightforward to make a Raspi port as well, but presently it’s a bit premature. It’s easier to develop the mobile support first on iOS/Android, largely due to how slow software compiling is on Raspi (without distributed builds, cross-compiling on Linux, or other such tricks).