The engine is pretty much based on rendertargets (terrain generator); exposing them through an API is possible too.

We are not yet working on any networking, apart from the terrain data downloader and the login system. Technically, it's just a world rendering engine at the moment. However, we plan to add some native networking so it can be used for the multiplayer mode later.

Sounds good. Would be nice to expose the handles to all the rendertargets (including the external cameras). Then I could just do a copy in my plugin to fetch all the data...

Porting to Linux is almost a requirement for my aerospace simulator... At minimum, it must have the potential to run on a real-time OS (or a near-real-time OS, for when visualization like Outerra is present).

I hope that if you're going to hold a discussion about implementing networking (the actual active synchronization protocol - to synchronize variables and objects which change a lot and depend on user interaction, i.e. vehicles, planes, physics objects, etc.), it will be public. I would like to take part in that.

Well there are different requirements for the networking in our game, and for a general networking for a simulator platform.

For the sim platform we expect people to run various simulator cores, but for the outside viewer, the vehicle physics is a black box. For example, you could see someone landing a space shuttle, and he may be running a full-fledged simulator, but for someone zipping on a nearby highway it's just an object with unknown physics, and only some visible properties get synchronized and extrapolated.

Apart from the obvious and universal properties like the object's position and orientation, speed and rotation etc., there are some that come only with some vehicles. For example, flaps on a plane, or suspension state on wheels. Ultimately these translate to some basic transformations on model joints, so I guess it can be made automatic - once you declare that this and that joint's state should be synced, the networking will take care of that, and remote object will be rendered using these values when in the proper LOD level.

Atop of this, one could in theory require syncing additional properties of the object, for example to be able to read some system variables remotely, though I'm not sure if it should use the same system... Maybe you could describe how you are imagining it.

There is no difference in which variables you are synchronizing. Here is a summary of the kind of synchronization that, in my experience, works very well:

Fixed-step networking (from the networking engine's point of view, both clients and servers run at a fixed FPS - say, 30 networking frames per second).

Sending a delta-compressed, "distance-compressed" full world state. That is just a fancy way of saying that every client can potentially receive all objects in the world, and behaves as if the server were sending it all of them. It's up to the server to cull out objects which are outside the client's potential visibility; the client wouldn't even know about it. Delta-compression means that only variables that have significantly changed are sent out.
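To make the delta-compression idea concrete, here is a minimal sketch in C. The struct layout, the field-index encoding, and the significance threshold are all my own illustrative choices, not any actual protocol:

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical per-object networked state; field names are illustrative. */
typedef struct {
    float pos[3];
    float vel[3];
} obj_state;

/* Write only the fields that changed significantly since the baseline
   (the last state the receiver acknowledged). Each changed field is
   encoded as a one-byte field index followed by its new value. */
static size_t delta_write(const obj_state *base, const obj_state *cur,
                          uint8_t *out, size_t cap)
{
    const float *b = (const float *)base, *c = (const float *)cur;
    size_t n = 0;
    for (uint8_t i = 0; i < 6; ++i) {
        if (fabsf(c[i] - b[i]) > 1e-4f) {            /* significance threshold */
            if (n + 1 + sizeof(float) > cap) break;
            out[n++] = i;                             /* field index */
            memcpy(out + n, &c[i], sizeof(float));    /* new value */
            n += sizeof(float);
        }
    }
    return n;   /* bytes used; 0 means "nothing changed, send nothing" */
}

/* Apply a delta packet on the receiving side, on top of the same baseline. */
static void delta_read(obj_state *state, const uint8_t *in, size_t len)
{
    float *f = (float *)state;
    for (size_t n = 0; n + 1 + sizeof(float) <= len; n += 1 + sizeof(float)) {
        uint8_t i = in[n];
        memcpy(&f[i], in + n + 1, sizeof(float));
    }
}
```

A real protocol would use a change-bitmask per object instead of per-field index bytes, but the principle is the same: unchanged variables cost nothing on the wire.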

For precision networking with low object velocities in the current coordinate system (when an object's position on screen must be as precise as possible): simple extrapolation based on the last known position, current time, and last known time (a prediction is made from the previous state; a correction is made once a newer state is received from the server).
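The extrapolate-then-correct step above is just dead reckoning. A one-dimensional sketch (names and the blending factor are my own; real code would use 3D vectors and quaternions for orientation):

```c
/* Last state received from the network for one object. */
typedef struct {
    double pos, vel;    /* 1D for brevity; real code uses 3D vectors */
    double time;        /* time stamp of this state */
} known_state;

/* Prediction: position at 'now', from the last known position and velocity. */
static double extrapolate(const known_state *last, double now)
{
    return last->pos + last->vel * (now - last->time);
}

/* Correction: when a newer authoritative state arrives, blend the prediction
   toward it over a few frames instead of snapping ("pulling" the object). */
static double correct(double predicted, double authoritative, double blend)
{
    return predicted + (authoritative - predicted) * blend;
}
```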

For precision networking with very high object velocities: ultimately there's no "complete" solution, but if the objects move slowly relative to each other (while moving fast in global coordinates, e.g. objects in orbit), the nice way seems to be calculating all physics server-side, and then passing relative positions to the player.

The server sends each packet with networking frame number attached to it. It uses last known networking frame number acknowledged by the client as the base for delta-compression.

The client sends the server a log of all its inputs (alternatively: physics forces) which have acted upon the player since the last frame acknowledged by the server. Or it may send delta-changes in its own state (see below).

The server will delta-compress frames (when sending to a client) based on the networking frame number last received from that client (the server must know which frame the client SURELY has).

The server must store the last acknowledged frame for each client (the state of the world around the player), plus a history of previous world states (only if physics is calculated on the server - e.g. if you need to do lag compensation when shooting projectiles). Clients must store the last acknowledged state of the input/forces log.
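The server-side bookkeeping described above can be sketched as a ring buffer of past world states indexed by networking frame number, plus the last acknowledged frame per client. All names and sizes here are hypothetical:

```c
#include <stdint.h>
#include <string.h>

#define HISTORY     64      /* frames kept; ~2 s at 30 net-frames/s */
#define MAX_CLIENTS 8

typedef struct { double pos[3]; } world_state;  /* stand-in for a full state */

typedef struct {
    world_state history[HISTORY];   /* ring buffer of past world states */
    uint32_t    head_frame;         /* newest frame number stored */
    uint32_t    acked[MAX_CLIENTS]; /* last frame each client SURELY has */
} server_net;

static void store_frame(server_net *s, uint32_t frame, const world_state *w)
{
    s->history[frame % HISTORY] = *w;
    s->head_frame = frame;
}

/* Baseline for delta-compression toward one client: the last state that
   client acknowledged, if it is still inside the history window. */
static const world_state *delta_base(const server_net *s, int client)
{
    uint32_t f = s->acked[client];
    if (s->head_frame - f >= HISTORY)
        return NULL;    /* ack too old: fall back to sending a full state */
    return &s->history[f % HISTORY];
}
```

The same history buffer doubles as the rollback source for lag compensation when physics runs on the server.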

When receiving a client's inputs/forces, the server will, for that player, roll the state back into the past and re-calculate the player's position (and possibly their interaction with surrounding objects) up to the present time. This is only required if you use a forces/input log; if you just send state updates, there's no nice solution.

The client interpolates 15 networking packets per second up to 30 networking frames per second, and then interpolates those 30 networking frames per second up to 60-120-whatever actual rendered frames per second (these two interpolations are actually just one, from 15 to 60-120-whatever).
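The render-time interpolation step is just a linear blend between the two most recent networking frames, keyed by how much of the networking interval has elapsed. A sketch (one-dimensional, names my own):

```c
/* One received networking frame for an object. */
typedef struct { double pos; double time; } net_frame;

/* Blend between frames a (older) and b (newer) for the current render time.
   Clamped so this never extrapolates past the newest received frame -
   extrapolation, if wanted, is a separate step. */
static double interp(const net_frame *a, const net_frame *b, double render_time)
{
    double t = (render_time - a->time) / (b->time - a->time);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return a->pos + (b->pos - a->pos) * t;
}
```

Going straight from 15 packets per second to the render rate works exactly the same way, which is why the two interpolations collapse into one.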

Illustration of the packet flow for synchronizing data from server to clients:

Some of these things are excessive, and some are required; it depends on what kind of networking you want. From what you're saying, I'm assuming you're targeting your game to be a simulator, not an FPS.

If you have just a simulator, you don't need to calculate physics on the server (one exception to this rule: orbital spaceflight). This means clients don't need to send their input/forces backlog, but instead send changes in state. That's a bit easier on the server: no need to store previous world states, and no need to do complex extrapolation of the user's state (in fact, the server can entirely ignore extrapolation and let the clients do it).

Some more to-the-point stuff: I'm programming my networking protocol (in my aerospace simulator) in Lua. The actual state extrapolation and interpolation is performed in C.

It is capable of synchronizing any sort of state - there are two functions called "FrameWrite" and "FrameRead", which write and read state. There are two modes in which clients may handle their updates: they can send their new position/velocity (the protocol allows a client to update any object, or create new objects - like itself - but the server only accepts updates for objects which were created by, or are somehow assigned to, the current player), or they can send a log of forces acting on the vessel (a list of frame numbers and the total force/moment acting upon the vessel at that time).

FrameWrite will compare the current server state to the last server state acknowledged by the client. It compares every networked variable, and writes it to the packet if it has changed. If called on the client, it will compare every object assigned to that client (a client may synchronize several "vessels" over the network - a multi-stage rocket, for example) and send the result to the server, using the last received server state as the base for delta-compression.

FrameRead will read the frame based on the last acknowledged frame. On the server it will update the state of all client-owned objects (alternatively, it takes a past known state of the client and re-computes its flight based on the forces acting upon it). On the client it will simply update the client's state.
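One detail worth making explicit is the ownership rule mentioned above: the server only applies an update if the object belongs to the sending player. A much-simplified sketch (this is the idea only, not the actual FrameRead implementation; all names are hypothetical):

```c
#include <stdint.h>

typedef struct {
    uint32_t net_id;   /* unique handle identifying the object on the server */
    int      owner;    /* player that created / is assigned this object */
    double   pos;      /* stand-in for the full networked state */
} net_object;

static net_object *find_object(net_object *objs, int n, uint32_t net_id)
{
    for (int i = 0; i < n; ++i)
        if (objs[i].net_id == net_id)
            return &objs[i];
    return 0;
}

/* Server side of FrameRead, reduced to the ownership check:
   returns 1 if the update was applied, 0 if rejected. */
static int apply_client_update(net_object *objs, int n, int player,
                               uint32_t net_id, double new_pos)
{
    net_object *o = find_object(objs, n, net_id);
    if (!o || o->owner != player)
        return 0;       /* unknown object, or not owned by sender: ignore */
    o->pos = new_pos;
    return 1;
}
```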

After a frame is read on the client, it will also be extrapolated (since I want to try to show the most precise position, not the most precise trajectory) based on the time marker and/or frame number (I use the time marker, but the frame number could also be used).

(the extrapolation is done every rendering frame, based on last read frame).

As said earlier, instead of extrapolating frames upon receiving them, it is possible to simply delay the rendering by a few frames and display a perfectly interpolated image (based on the last two received frames). At 30 networking frames per second (15 actual packets per second) you need about 4 frames of delay (at 30 FPS, ~100 msec delay). The difference is:

Extrapolation to the current time is good for aircraft and situations where you need to know the exact position of an object but can ignore the trajectory it has traveled (it's more important to know you have not collided than to know that the other player has flown a specific trajectory).

Displaying a nicely interpolated, slightly delayed picture will give you exact trajectories. This is nice when the object's movement is of great importance. The downside: the position on screen lags behind the true position. This is usually perfectly acceptable, and it's the way Quake/Source-based networking works (I've looked at their networking).

I've tried this approach in a networking test of a game I'm writing. It worked fairly well; everyone's movements looked perfectly nice (even though they were slightly lagging behind). The server was capable of rolling back the entire world state for lag compensation if it had to. All the physics were done on the server (it used the clients' movement logs).

The client was pressing keys on his keyboard and was SEEMINGLY moving around the map on HIS screen - this is just the client's "prediction" working. The server was repeating his actions using the log he sends, and the client was sometimes getting "pulled" to the right state. While running, the player would move a little bit "sideways", which was simply the server telling him his position was slightly off.

How I imagine it in Outerra (no actual API ideas yet - I would need to know how you store and work with your objects in the engine, what kinds of types there are, etc.):

Each object passes a table of networked variables to Outerra. These variables somehow point into the object's internal data, and the Outerra networking engine will synchronize them automatically.

Each object has an extrapolation function - this function can take into account all the physics of the object. This is only a visual extrapolation based on the last known state (possibly several last known states!). The object will tell the engine if it needs more than one previous state (this is NOT really required - just an idea for the future, ignore it for now).

The object has an interpolation function - given two states of the object, it returns an intermediate state. If this function is missing, Outerra should just linearly interpolate the networked variables.

The object's state is "separate" from its code. If you use OOP in your engine, ignore this part (with an OOP approach you simply have different objects; if you are programming in C, the object's state needs to live in some data structure and not depend on any other state).

Each object has a "network ID" - a unique handle which defines this object on server. If client wants to update some objects state on server (if we're sending state updates, not clients input backlog...) he just uses this network ID.

Each object has a state comparison function. This is one of the best functions in the object! It will compare one state to another, and determine which variables have significantly changed.
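A sketch of what such a comparison function could look like, with a per-variable "significance" threshold so sub-millimeter position jitter or a tiny flap twitch doesn't generate traffic. The struct, the thresholds, and the names are all illustrative assumptions:

```c
#include <math.h>

/* Hypothetical networked state for a vessel. */
typedef struct {
    double pos[3];
    double flap;        /* example vehicle-specific joint state */
} vessel_state;

typedef struct { int pos_changed; int flap_changed; } state_diff;

/* Compare two states and report which variables changed significantly.
   Each variable gets its own threshold, tuned to what is visible. */
static state_diff compare_state(const vessel_state *a, const vessel_state *b)
{
    state_diff d = {0, 0};
    for (int i = 0; i < 3; ++i)
        if (fabs(a->pos[i] - b->pos[i]) > 0.001)   /* 1 mm */
            d.pos_changed = 1;
    if (fabs(a->flap - b->flap) > 0.01)            /* small joint deflection */
        d.flap_changed = 1;
    return d;
}
```

The result of this function is exactly what the delta-compression layer needs: the list of variables worth putting in the packet.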

This is so far enough to synchronize separate objects. I'm afraid I can't suggest much for doing physics yet - I haven't done any networking with complex joints. I suppose the straightforward way to go about it for now is to just treat each joint as a separate object which synchronizes the joint parameters (if they are static, they will be sent only once).

Joints can just specify the objects they connect together with network IDs.

Synchronizing physics is... tricky. It's the whole reason why the Source Engine is so good - its physics engine is capable of finding solutions when you have two states of the physics scene and need to interpolate between them, while avoiding joint explosions in the process. I would suggest for now to simply propagate the physics of the objects on the clients, and only "pull" them into the right positions received over the network. For flying vessels this will be pretty OK (they don't touch the ground), and for ground vessels it will avoid situations where the vehicle tries to go through the road surface (it may TRY to go through the road surface, but the physics engine state has greater weight than the networking state).

For a client's own objects - they are all simulated on the client and look perfect. The server only does the synchronizing. Clients only extrapolate or interpolate the data they receive (let the objects pick the method themselves!). I suppose objects can also ask to be simulated on the server (this is a bit tricky to do - the server must have the relevant simulator addon installed for that to work). But for the rest of the objects (which are simulated on the client side) the server doesn't need to know anything but their list of networked variables.

That's a bit of a wall of text, just ask or comment anything you want, it's more of a call to a discussion :p

P.S. My aerospace simulator's networking would still have to run in parallel; I use a custom protocol and custom server software...

I don't have too much time at the moment, but I'll return to it later. But yes, we are here talking in the context of simulators, and client-side simulation by the owner, with the server syncing the external states between the clients. By external I mean the ones that affect how the object appears, and which are needed to render the model.

Greetings. Three weeks ago I started work on a new physics engine for aerospace simulation (and actually for simulating terrestrial vessels too). It's a separate physics library now, and is mostly oriented toward a procedural way of specifying models (physics + rendering models).

One of the major features is support for using existing physics engines as "propagators", with automatic switching between propagators. So an object in orbit may be propagated with precise RK4, while an object trying to dock with it is propagated with Bullet physics (to allow for collisions between the two objects).

This is done via runtime conversion of vectors between coordinate systems, with automatic support for non-inertial terms during the conversion.
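The simplest of those non-inertial terms is the one that appears when converting a velocity out of a rotating frame (e.g. planet-fixed) into the inertial frame: v_inertial = v_rotating + omega x r. A sketch of that single term (this illustrates the idea only, not the EVDS API):

```c
/* Cross product of two 3-vectors. */
static void cross(const double a[3], const double b[3], double out[3])
{
    out[0] = a[1] * b[2] - a[2] * b[1];
    out[1] = a[2] * b[0] - a[0] * b[2];
    out[2] = a[0] * b[1] - a[1] * b[0];
}

/* v_inertial = v_rotating + omega x r, where omega is the angular velocity
   of the rotating frame and r the position within it. A full conversion
   would also rotate the vectors and handle accelerations (Coriolis,
   centrifugal, Euler terms). */
static void rotating_to_inertial(const double v_rot[3], const double omega[3],
                                 const double r[3], double v_in[3])
{
    double w_x_r[3];
    cross(omega, r, w_x_r);
    for (int i = 0; i < 3; ++i)
        v_in[i] = v_rot[i] + w_x_r[i];
}
```

For example, a point fixed on a body rotating at 1 rad/s about the z-axis, sitting at (1, 0, 0), has zero velocity in the rotating frame but moves at (0, 1, 0) in the inertial frame.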

Everything is still work in progress, but it's already capable of modelling vessel trajectories given the forces and torques acting on the vessel. Things left to do are the environment module (standard routines for calculating gravity, atmosphere, etc.), some miscellaneous parts of the engine itself, general cleanup, and code to generate 3D meshes.

Some pictures from the debug renderer:

Some pictures from the parametric editor (which lets me enter data for this simulator):

This physics engine, along with my internal systems simulator will be used in my aerospace simulator. I'm going to provide Proland as one of alternative renderers, but I'm still looking towards Outerra support!

I've been working more on the simulator. It is now nearing the first release. I will not list features (they are available in the documentation at http://evds.wireos.com ), but these are future features that might interest Outerra:

Wing and control surfaces simulation (similar to approach X-Plane uses)

There will be a way to run JSBSim/other simulations under EVDS (External Vessel Dynamics Simulator).

Wide support for frames of reference; the physics can output or input coordinates in any reference system Outerra might need.

The simulator is now open source. The code is useless until the first release, but already works. The API will not change much (possibly only names of some constants and function calls). It's available at https://github.com/FoxWorks/EVDS

Proland will not be one of the alternative renderers after all - the code is too French and I can't do much with it. Instead it will be Space Engine. I'm still interested in Outerra, though.

There's now a basic IG control interface in OT, that allows you to use OT as an image generator for external simulators. Right now it's just a simple interface for controlling the camera in UFO mode. The next step in this direction will be an extension of the API for rendering of objects controlled by external simulation code, just as there's an API for scripting/control of objects simulated by one of the internal simulation cores (aircraft/JSBSim or vehicle/BulletPhysics). That's where a simulator like yours will be supposed to plug in.

There are some open questions, like the UI for these plugins. In OT, the UI is written in HTML/JavaScript and can communicate with the backend via the same interfaces that are exposed in C++. This should be made available to the plugins as well. But of course the whole UI can live outside the app, using OT just for visualization.
