General

Here’s an overview of the current state of things. I’ll post some screenshots.
A few videos would have been better, but they take more time to make.

The platform

Image from an older tentative project (art by Max Puliero)

The game (?) doesn’t use any commercial engine. It’s all home-made, starting from the code base of FCX mobile, ported and evolved to currently target OpenGL 4.5. The programming language is C++, plus occasional Python scripts for the build and asset pipeline.
One advantage of having an in-house system is that there is no dependence on specific proprietary software. There are no implicit limitations on what can be done, no licenses to pay and no expiration date on the code base.

The engine features are pretty much common nowadays, so I can’t make any big claims, but nonetheless:

Forward rendered. I was never a fan of deferred rendering (read: z-buffer legacy). Forward rendering is good for my main target, which is VR: 90Hz stereo and MSAA. Eventually at least a z-prepass will be implemented for SSAO and such, but not currently.

Image Based Lighting (IBL) with some relighting features. Cube maps are rebuilt on the fly, then converted to the usual Spherical Harmonics basis for lighting. This is especially useful for elements of GI in the cockpit.

Physically-Based Shaders (PBR). This is nothing particularly advanced. Just the run-of-the-mill PBR shaders. Good looks aside, it’s mostly useful to have a robust art pipeline, as in having a method for artists to produce assets that have a consistent quality and that are relatively independent of the lighting in the scene.

Cascaded Shadow Maps (CSM). This is also a fairly mundane feature… in theory… because in practice shadows are never easy and never fun. A lot of time was spent between the filtering and the tuning to balance the resolution of the cascades. The end result is good enough for now. But shadows are always a pain.

Custom image compression. All images and textures are converted to internal formats, both lossless and lossy. The lossy format is DCT-based (similar to JPEG) and encodes data so that images can be decoded on the fly at reduced resolution. This is ideal for streaming and LOD on demand: for example, a large 8K texture can be decoded at 1K for an object that is far away and may not need the full resolution for a while. This is also useful during development and debugging, to speed up turnaround time: everything loads instantly with tiny textures.
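To illustrate the reduced-resolution decode idea (a generic textbook sketch, not the engine’s actual codec): with a DCT-based format, decoding a block at a lower resolution amounts to running a smaller inverse transform over only the lowest-frequency coefficients. The function and layout below are hypothetical:

```cpp
#include <cmath>
#include <vector>

// Decode an 8x8 DCT coefficient block at a reduced n x n resolution
// (n <= 8) by running an n-point inverse DCT over only the n x n
// lowest-frequency coefficients. Result is correct up to a normalization
// factor relative to a full 8-point decode; a real codec folds that
// factor into the coefficient scaling.
std::vector<float> idctReduced(const float coef[8][8], int n)
{
    const float pi = 3.14159265358979f;
    std::vector<float> out(n * n, 0.f);

    for (int y = 0; y < n; ++y)
    for (int x = 0; x < n; ++x)
    {
        float sum = 0.f;
        for (int v = 0; v < n; ++v)
        for (int u = 0; u < n; ++u)
        {
            float cu = (u == 0) ? std::sqrt(1.f / n) : std::sqrt(2.f / n);
            float cv = (v == 0) ? std::sqrt(1.f / n) : std::sqrt(2.f / n);
            sum += cu * cv * coef[v][u]
                 * std::cos((2 * x + 1) * u * pi / (2 * n))
                 * std::cos((2 * y + 1) * v * pi / (2 * n));
        }
        out[y * n + x] = sum;
    }
    return out;
}
```

The nice property is that the high-frequency coefficients are simply never read, so a streaming system can stop fetching data as soon as it has enough for the requested LOD.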

GUI system. A complete GUI system that is used both for in-game development UI and for the actual game UI. It’s customizable to the point that it’s being used to reproduce the F-35 avionics GUI.
The GUI system also works with multi-touch, game pads and VR controllers.

VR support. Both at the rendering level and at the UI level, there’s strong VR support. As far as hardware goes, currently only the Oculus SDK is supported; however, there’s an abstraction layer (as there should be) that makes it relatively simple to support other SDKs.

3D Audio. There’s a basic 3D audio abstraction, built on OpenAL, including some workarounds for OpenAL’s limitations. However, the audio system should eventually be replaced with a more modern 3D audio API, to be decided.
There’s also a custom audio sample compression format based on wavelets that I wrote in a moment of need during mobile development (nothing special, but simple and fast).

Data driven. There’s a basic structured data definition language that I built for my needs. It’s similar to JSON, but better: types can be specified, the syntax is more lax in some ways, and comments are supported (woohoo !), as well as algebraic expressions for numerical types.
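To give an idea, a definition in such a language might look like the following (the syntax here is entirely made up for illustration — the actual language’s details aren’t shown in this post):

```
// comments are supported
Missile
{
    name    : string = "AGM-65"
    mass_kg : float  = 210 + 57      // airframe + motor
    fins    : int    = 4
}
```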

Physics and collisions. This is actually mostly on the side of game code. The physics engine at its core is just average rigid body dynamics. Acceleration for collision detection and ray-casting is done via a voxel subsystem (I like regular structures).
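For a regular voxel structure, the standard way to accelerate a ray cast is a 3D-DDA grid walk in the style of Amanatides &amp; Woo. A minimal sketch, with made-up names and layout — not the engine’s actual API:

```cpp
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// A flat regular grid of occupancy flags (1 = solid cell).
struct VoxelGrid
{
    int nx, ny, nz;
    std::vector<uint8_t> solid;

    bool at(int x, int y, int z) const
    { return solid[(z * ny + y) * nx + x] != 0; }
};

// Walk cells along a ray (Amanatides & Woo 3D-DDA), starting at
// cell-space position p with direction d. Returns true on hitting a
// solid voxel within maxSteps cells; the hit cell goes in hx, hy, hz.
bool rayCast(const VoxelGrid& g, float px, float py, float pz,
             float dx, float dy, float dz, int maxSteps,
             int& hx, int& hy, int& hz)
{
    const float inf = std::numeric_limits<float>::infinity();

    int x = (int)std::floor(px), y = (int)std::floor(py), z = (int)std::floor(pz);
    int sx = dx > 0 ? 1 : -1, sy = dy > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;

    // Ray-parameter step per axis for crossing one cell.
    float tdx = dx != 0 ? std::fabs(1.f / dx) : inf;
    float tdy = dy != 0 ? std::fabs(1.f / dy) : inf;
    float tdz = dz != 0 ? std::fabs(1.f / dz) : inf;

    // Ray parameter of the first boundary crossing on each axis.
    float tx = dx != 0 ? (dx > 0 ? x + 1 - px : px - x) * tdx : inf;
    float ty = dy != 0 ? (dy > 0 ? y + 1 - py : py - y) * tdy : inf;
    float tz = dz != 0 ? (dz > 0 ? z + 1 - pz : pz - z) * tdz : inf;

    for (int i = 0; i < maxSteps; ++i)
    {
        if (x < 0 || y < 0 || z < 0 || x >= g.nx || y >= g.ny || z >= g.nz)
            return false; // left the grid
        if (g.at(x, y, z)) { hx = x; hy = y; hz = z; return true; }

        // Step into the neighbor whose boundary is crossed first.
        if (tx <= ty && tx <= tz) { x += sx; tx += tdx; }
        else if (ty <= tz)        { y += sy; ty += tdy; }
        else                      { z += sz; tz += tdz; }
    }
    return false;
}
```

The appeal of the regular structure is exactly this: the traversal is a handful of adds and compares per cell, with no tree bookkeeping.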

Particle system. Particles are at the bare minimum. Particles can be very important, but I’m not a fan of creative particle editing. I prefer to build a physically-based behavior in code.

Atmospheric scattering. This is vital for a flying game. It gives a believable feeling of distance, and also a base model for the Sun. It’s based on Sean O’Neil’s article and code from GPU Gems 2. Kind of old, but works great (after a few fixes of some corner cases) and it’s not as complicated to implement as some more modern models.

Continuous build system. A Jenkins-based continuous build takes care of constantly updating the latest binaries and assets. Periodical rebuilds are performed and release packages are deployed to a chosen destination.

The application

The project is not much of a game anymore, rather it’s aiming to be a light-weight simulator, focused on the F-35 Lightning II.

Note 1: a lot of what I learned about flight simulators comes from the users of the Hoggit Discord chat (related to the Reddit forum of the same name). If you’re serious about this stuff, that’s one place where you want to hang out.

Note 2: the plane and cockpit in the screenshots are not the actual F-35, but rather a concept by Max Puliero. He also modeled the terrain that can be seen in the background.

Terrain system

The terrain system is pretty basic, but it can cover terrains of about 500 x 500 km with several large textures.
The build pipeline does polygon reduction (based on Stan Melax’s old Game Developer Magazine article !) and converts large terrain textures to my own LOD format.
There’s currently no vegetation system, so this would be an ideal next step. Much more needs to be done and redone for the terrain.

Weapons system

Weapon systems are a pretty important and fairly complex part of combat simulators. They are a world unto themselves.

I’ve implemented some basic missile guidance, built on a flight model that is based on the airframe (shape, wings, weight), and the rocket motors. The current reference weapon used for development is the venerable AGM-65 Maverick. Although it’s not part of the F-35 arsenal, it’s a missile that is understood well enough to be tested with some confidence.

The guidance model that I implemented is relatively simple. In practice it’s probably similar to classical proportional navigation, but the auto-pilot will have to be refactored to use PI(D) controllers, which may be a better match for proportional navigation (I think).
Still, it’s advanced enough to hit a target by steering the virtual wings, even during the gliding phase, when the rocket motor is spent and careful maneuvering is essential to avoid losing too much energy.

Guidance is also recursively simulated in order to provide a Missile Launch Envelope (MLE) to the plane’s avionics, mostly to be displayed on the HUD/HMD so that the pilot can gauge the likelihood of hitting a target from the current position, velocity and heading. The MLE portion of things was kind of exciting, because of its recursive nature: basically it’s a simulator that has to continuously and extemporaneously simulate potential launches, from start to end. At least that’s how I implemented it; it’s likely that real-world weapon systems use simpler analytical solutions… but if you’re building a simulator, most of the work is already there, so one might as well reuse the simulation code.
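For reference, the textbook form of proportional navigation commands a lateral acceleration proportional to the closing speed times the line-of-sight rotation rate. Here’s a minimal 2D sketch of that classical formulation (not necessarily what the game does — the post above only says the in-game guidance is probably similar to it):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Classical proportional navigation in 2D: lateral acceleration
// command a = N * Vc * losRate, where Vc is the closing speed and
// losRate is the rotation rate of the line of sight (LOS).
// N is the (dimensionless) navigation constant, typically 3..5.
float pnAccel(Vec2 mPos, Vec2 mVel, Vec2 tPos, Vec2 tVel, float N)
{
    Vec2 r  { tPos.x - mPos.x, tPos.y - mPos.y }; // LOS vector
    Vec2 vr { tVel.x - mVel.x, tVel.y - mVel.y }; // relative velocity

    float r2 = r.x * r.x + r.y * r.y;

    // LOS rotation rate = z-component of (r x vr) / |r|^2,
    // closing speed   = -(r . vr) / |r|.
    float losRate = (r.x * vr.y - r.y * vr.x) / r2;
    float vc      = -(r.x * vr.x + r.y * vr.y) / std::sqrt(r2);

    return N * vc * losRate;
}
```

The key intuition: on a collision course the LOS direction doesn’t rotate, so the command is zero; any LOS drift produces a correcting acceleration, which is why PN keeps working even in the unpowered gliding phase.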

The Plane’s Flight Dynamics Model (FDM)

My current plane FDM is weaker than that of the missiles. I started off with this one and it was a learning experience. Some things are implemented right; some others will need more work. For starters, because I didn’t have any official (or unofficial) drag and lift coefficients, I resorted to using specs of the F-16 and the F-15.

Doing a plane’s flight model is pretty interesting, but also frustrating. One big issue is that modern planes are all fly-by-wire (FbW). So, assuming that one implements an accurate flight model, one then also has to implement the Flight Control System (FCS), which in many cases cancels out the aerodynamic properties of the plane !

That’s not to say that making a FDM is a waste of time, because it’s an important underlying factor that influences other things (structural integrity, actual maneuverability, energy consumption), but still, it feels like extra work.

Most crucially, this also means that one can’t quite tell the aerodynamic properties of a plane by just flying it, because who knows what the FCS is really doing behind the scenes (at least in the case of something as sophisticated and classified as the F-35).

Still, much can be done by following the breadcrumbs scattered around between releases of specs and articles on the subject. For example, a test pilot may reveal the maximum Angle-of-Attack (AoA) that a plane can maintain at a certain speed, and from that one can attempt to determine what the lift coefficient (Cl) would be.
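The arithmetic behind that kind of back-of-the-envelope estimate is just the standard lift equation: in sustained level flight lift equals weight, so Cl = 2W / (ρ v² S). A toy sketch (the wing area below is the commonly published F-35A figure; the flight condition is made up):

```cpp
// Lift-coefficient estimate from a sustained level-flight condition:
// L = W and L = 0.5 * rho * v^2 * S * Cl  =>  Cl = 2W / (rho * v^2 * S).
// weightN   - aircraft weight in newtons
// rho       - air density, kg/m^3 (1.225 at sea level)
// speedMs   - true airspeed, m/s
// wingAreaM2- reference wing area, m^2
float estimateCl(float weightN, float rho, float speedMs, float wingAreaM2)
{
    return 2.f * weightN / (rho * speedMs * speedMs * wingAreaM2);
}
```

Example: a roughly 20-tonne aircraft (about 196,200 N) holding level flight at 100 m/s at sea level with a 42.7 m² wing works out to a Cl of about 0.75. Plug in a published sustained-AoA data point instead and you get one anchor for the Cl-vs-AoA curve.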

It’s a big topic and I only scratched the surface.

The Avionics

This is the big one, and the reason why I started looking into the F-35. By avionics here I mean mostly the parts related to the UI.

The F-35 has two large touch screens, with a custom windowing system that allows views to be split into portals that can be configured, maximized or minimized (sort of). The pilot interacts mostly by touch, but there is also support for a cursor that can be moved around (usually with a tiny stick on the throttle handle).

Primary displays, with one portal menu selection open.

I’m pretty pleased with my reproduction of the system. Although many of the widgets that can be selected are incomplete or don’t yet exist, the look and feel seems pretty believable. However, beyond the plain display of some portal view, there are tons of details that can take one down a very deep rabbit hole.

For example, the Tactical Situation Display (TSD) shows the shape of what is probably the SAR (Synthetic Aperture Radar) coverage, showing what areas are in scan range. The shape of the SAR scope is tied to some details of operation, including some blind spots. So, one starts off trying to reproduce a couple of curves on the TSD and may end up looking into radar technology.

Another example is the engine display, which doesn’t simply give an RPM, but also the Exhaust Gas Temperature (EGT). Now, to give a believable display of that element, one needs to build at least some logic to simulate the basic behavior of a jet engine, its stages, and the range of temperatures it reaches.
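As a sketch of what “at least some logic” can mean here (names and constants are entirely made up for illustration): even a first-order lag toward a throttle-dependent target temperature already gives a believable spool-up/cool-down behavior on the gauge, without simulating any real engine thermodynamics:

```cpp
#include <cmath>

// Toy EGT model for a believable gauge readout: the displayed
// temperature chases a throttle-dependent target with a first-order
// lag. Figures below are rough, illustrative values only.
struct EngineEGT
{
    float egtC = 15.f; // current displayed EGT, deg C (cold start)

    void update(float throttle01, float dt)
    {
        float target = 400.f + 500.f * throttle01; // idle ~400 C, full ~900 C
        float tau    = 3.f;                        // response time, seconds
        egtC += (target - egtC) * (1.f - std::exp(-dt / tau));
    }
};
```

One instrument, one lag constant; repeat for RPM, fuel flow and the other gauges, and the engine page starts to feel alive.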

Much work went into the TSD, because it’s what allows one to scan for and schedule a kill-list of possible targets. But also, more recently, work went into the auto-pilot (AP) panel, which allows punching in number codes for altitude, heading and speed for the plane to maintain (it works rather nicely… for some reason it’s now more fun to program the AP than to fly around freely, probably because it feels like the plane has a mind of its own).

Auto-pilot climbing to the selected altitude of 230 (23,000 ft).

The HMD/HUD

There is no fixed HUD in the F-35. All classical HUD symbology is projected directly into the eyes of the pilot, like some sort of AR glasses. This is a great excuse to use VR.

My HMD/HUD implementation is fairly decent, I think, and with a good amount of detail. However, some problems still need work. Specifically, one big issue is the fact that the HMD gives a sense of additive lighting, leaving the background images clearly visible.

Stereo projection allows drawing, for example, a target symbol at a virtual distance far away, right where the target is. However, if a piece of the cockpit is in the way, then the contrast between the far-away symbol and the close-by cockpit element is rather unpleasant, if not downright disturbing in the long run.

I’m not sure how this is solved in practice, but my solution for now is an AR-kind of solution: I project target symbols only as deep as the closest object is. It’s some sort of virtual AR projection, but done in VR (ah !).

Above is an old video (a few things have improved since) showing operation of the main display, and the HMD/HUD see-through capabilities.

Another great thing about the F-35 is the embedded Distributed Aperture System (DAS), which in practice gives some sort of X-ray vision: cameras around the plane are reprojected in a way that makes the pilot feel like he/she can see through the plane. I did implement this feature, although to avoid issues with stereo projection, for now I opted to make it fully opaque, so it’s not as “real”, but it also isn’t a vision hazard. I’m also using grayscale colors… I should move to green, since that’s what’s in the real plane.

All this projection/parallax stuff will require more work, more testing, and ideally some hints from those that have had access to the real thing (call me).

What else

Much more could be said of what’s there and what’s missing (a lot, the technological scope of the actual F-35 is enormous), but this is the state of things, more or less. Hopefully I’ll get a chance to continue working on this soon, or at least on something related to it.

Here’s a long-overdue project update, also with some clearer details on the current direction.

I’ll be writing from a personal perspective, because for the time being it’s just me (Davide) working on this. All art is by Max Puliero, as usual.

First of all, the goal has pretty much shifted towards making an F-35 flight simulator (more or less, considering that most info is classified, and considering the sheer complexity of the real thing).

A Harrier’s HUD. Gritty and functional.

I’ve always been interested in technology more than the games themselves. I had a taste of flight simulation development while creating FCX, where the goal was to make a sci-fi game that was also plausible, which is one reason why there were no guns (other reason being… laziness).

While developing FCX, I often found myself struggling to implement things such as the HUD (Head-Up Display). A HUD looks cool aesthetically, but why is it the way it is, and what do all the symbols mean ?

Once one starts learning how to read the real thing, it’s very hard to look at another game HUD without having a chuckle. It’s a bit like watching someone act in a foreign language: it’s fine until it’s your own native language, and then it’s just funny… gone is the suspension of disbelief, forever.

yeah, whatever…

This project was meant as an evolution of FCX, focused on VR on PC, and therefore it had to offer a compelling cockpit experience. I started taking the F-35 avionics as a reference and then found myself learning more and more about the airplane, and about airplanes in general (well beyond the general passion that I may have had in the past).
I also realized that there is a healthy flight combat simulation community that produces mods with realism that goes well beyond what even the average gamer may imagine. Modern flight combat simulators are a niche unknown to most, but the level of realism and the involvement around them is nothing like the sims from the 90s (the last time they almost weren’t a niche).

As things stand, however, this is still pretty much a hobby project. Ideally, we’ll be able to find someone who believes in the project and is willing to support the development. It’s more likely, however, that I’ll have to continue in my own free time, which is now scarcer than ever. In fact, I pretty much had to pause development for the past two months.
Still, I put so much effort into this (some to be detailed in the next post), and it would be a waste to leave it as it is.

Much work has gone into the game since the last update. Some of it will require a longer post to be explained. I’ll post here some screenshots of the more recent work that is more clearly visible.

Here’s an image of the latest digital display, which represents pretty much all of the avionics GUI in the plane.
The display is heavily based on that of the F-35, as seen in public demonstrations of simulators of the airplane.

The display is divided into 2 screen halves, each of which can contain 2 vertical windows or panels, each of which can have 2 child windows at the bottom.
The F-35 windowing system leaves some room for reorganizing the layout, something I haven’t yet implemented, but that will come eventually.

Some of the windows on display are at least partly functional. Left to right: the Stores Management System (SMS), the Tactical Situation Display (TSD), the Forward-Looking InfraRed display (FLIR) and a generic map display, which will have to be replaced.
The SMS has received some cosmetic improvements, while the TSD received the bulk of the recent work.

The TSD now has a cursor that can be used to select a potential target, zoom in on an area to determine whether more targets are overlapping, and then designate a target to be shot. This is especially important for ground targets, which are usually planned early in a mission.

The FLIR display is not active in the screenshot but, when active, it produces a pseudo-IR zoomed view of the selected target for visual confirmation, both when designating a target and later, after the target is hit, to assess the damage.

Which window controller and keyboard inputs act on is determined by the window the mouse hovers over (non-VR mode), or by the window at the center of the visible area (VR mode). A green border is also used as visual confirmation of the window currently receiving input messages.

The general display quality was also increased, both in terms of resolution and by increasing the number of MSAA samples.
The game uses MSAA anti-aliasing, both for the final render and for the rendering of the cockpit displays. This is important because, if we’re to simulate actual instrumentation with the right proportions, we also need extreme clarity of display.

We’ve recently taken a break from graphics and UI avionics rendering, to focus on physics. More specifically on aerodynamics.

Sample image of an AGM-65 gliding in to hit a tank

This came about because the missiles needed an improvement in hit precision.
A game can cheat at will and always make a missile hit its target, but the effect is not pleasing and the strategic element goes away: behavior is no longer tied to the laws of nature, making the experience less believable.

A missile’s ability to hit a target is determined by its capacity to acquire and track the target, but also by the raw performance of its rocket motor and its airframe, or body. Air-to-ground/surface (AG or AS) missiles don’t necessarily have enough rocket fuel to reach a target, and may often end up reaching it while gliding, much like a smart bomb.

For this reason we decided that improving the aerodynamic simulation was important, to capture the nuances of hitting a target with realistic weapons whose realistic physical parameters give them specific advantages and disadvantages.

We’re currently working from a few real-world references, so as to have a rough idea of what kind of rocket motor and airframe should correspond to a certain performance range.

AGM-65 airframe debug view

Here’s a wire-frame debug screenshot of a model of an AGM-65 Maverick in our game (graphics are in pure simulation mode, not representative of the actual game graphics).

Much more work is necessary, but it’s already exciting to see a missile fly according to the proper laws of physics, although this has complicated things a fair bit.

At this point, the aerodynamics of the actual airplane is less advanced than that of the missiles, but in time the improved model will be transferred to the plane as well… although it’s not that simple with jet fighters, but that’s a topic for another post…

Here’s a brief demonstration of the cockpit instrumentation.
Many things are missing, but some main functions, such as weapon selection and FLIR (infrared display) are working.
We’re still in a pre-alpha phase, so much of what’s in this video is likely to change.
Here the look-around is done via mouse, and arrow keys are used to control flight. At some point in the future, we’ll start recording directly in VR, where movements won’t look as stiff.