I would like to know the correct place to declare game logic and view logic:

I see 3 ways of implementing this:

1. Storing property changes (eg via a setter that records the change to a property) on data entities, and dispatching those changes to the entity's listener(s). The main listener is the view, which contains the declared rendering logic (ie changing the entity's animation based on a property change, or updating the position of the entity's render object).

> State model (x, y, z, state) -> View (listens to changes in x, y, z, state) -> Scene graph object (x, y, z, animation; set by the View)

The issue with this is that it becomes evident that game engines end up executing a lot of string-based switch statements in the View (eg `char[]` comparisons against property names).

2. Having the view update from the entirety of the model. Eg, instead of recording changes as in 1), the view updates the render objects on every tick by inspecting the entire model rather than a provided set of changes. The trade-off between 1 and 2 is the extra cost of storing the property changes on every state tick.

3. Declaring the animation logic inside the state logic. Eg, entities instantiate member render objects that are dummy objects when rendering is not performed.
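To make the contrast between options 1 and 2 concrete, here is a minimal sketch (all names are illustrative, not from any real engine): a listener view that reacts only to dispatched property changes, versus a sweep view that inspects the whole model each tick.

```python
class Unit:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.state = "idle"
        self.listeners = []          # option 1: per-entity listeners

    def set_state(self, value):      # option 1: store the change, then dispatch it
        self.state = value
        for listener in self.listeners:
            listener("state", value)

class ListenerView:
    """Option 1: reacts only to dispatched property changes."""
    def __init__(self):
        self.animation = "idle"

    def __call__(self, prop, value):
        if prop == "state":          # the string switch the question mentions
            self.animation = value

class SweepView:
    """Option 2: inspects the entire model on every tick."""
    def __init__(self):
        self.animation = "idle"

    def update(self, units):
        for unit in units:
            self.animation = unit.state
```

Option 1 pays for dispatch and string comparison on every write; option 2 pays for a full model traversal on every tick even when nothing changed.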

***The questions:***

1. As specified in the title: what is the correct way to react to the game state, in the sense of where one should declare the display/rendering/animation (view) logic that reacts to it?

2. What implementation is used in most game engines that support large projects? (If possible, please answer for specific engines: Supreme Commander and its clones, StarCraft 2, Unreal, id Tech, Frostbite, Anvil.)

3. Which implementation do studios use to architect projects built on those engines? It is apparently possible not to separate model from view in the engines above. Eg, if you implement an entity, you would presumably want it to be viable in multiplayer, co-op, and single player.

So, on tick we update the game data simulation. For example:

> The state of a unit's weapon is updated to firing. However, updating the view of the weapon to firing (by, for example, setting the weapon's animation and animating the hand) is not necessary in all instances of the simulation.

Things should be updated when they need to be! The rendering sub-system in particular is a counter-example to the listener concept. Remember that rendering happens as the last action in the game loop. By that point all of the earlier sub-systems have run, and they should certainly have left the game in a renderable state. In other words, the rendering sub-system simply works on whatever exists, in the state it exists in, when its turn comes.

Another example is input processing. The effect of input (I mean player-controlling input) need not take place immediately. It is prepared when the input sub-system runs at the beginning of the game loop, but its effect takes place later, when the animation sub-system runs.

The above concept avoids invoking listeners at all, which is preferable, of course.
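A minimal sketch of that ordering, with hypothetical names (this is not any real engine's API): input is prepared first, its effect is applied when the animation sub-system runs, and rendering runs last on whatever state then exists, with no listeners involved.

```python
pending_input = []

def input_system():
    # Prepares the input's effect; does not apply it yet.
    pending_input.append("move_right")

def animation_system(world):
    # The input's effect takes place here, later in the loop.
    while pending_input:
        cmd = pending_input.pop(0)
        if cmd == "move_right":
            world["player_x"] += 1.0

def render_system(world):
    # Runs last; simply draws the current state, reacting to nothing.
    return f"player at x={world['player_x']}"

world = {"player_x": 0.0}
input_system()
animation_system(world)
frame = render_system(world)
```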

An additional strategy (at a lower level) is dirty marking, which is useful when elements are updated only sparsely before another step needs their dependents. For example, assume that only 10 placements out of 1000 entities are touched in a simulation step. Then you write the new value to the property and set a "dependents are dirty" flag to true. When a dependent needs to run, it checks whether the properties it depends on are marked dirty, and updates itself only if so.
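A minimal dirty-marking sketch (the names are illustrative): a write sets the flag, and the dependent checks and clears it, so the 990 untouched placements cost nothing beyond one flag test.

```python
class Placement:
    def __init__(self, x=0.0):
        self.x = x
        self.dirty = False

    def set_x(self, value):
        self.x = value
        self.dirty = True            # "dependents are dirty"

class SceneNode:
    """A dependent of a Placement, e.g. a scene graph node."""
    def __init__(self, placement):
        self.placement = placement
        self.render_x = placement.x

    def update(self):
        if self.placement.dirty:     # update self only if marked
            self.render_x = self.placement.x
            self.placement.dirty = False
```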

What works best depends on the use case and the engine's architecture. However, display/rendering (assuming here that rendering does not include mesh generation such as skinning) should never react to the state at all. It just draws the current state. This is often done by having separate layers in the rendering sub-system. The top layer iterates the scene, performs visibility culling, and passes rendering jobs for the visible entities to the lower layer. The lower layer is then responsible for driving the graphics rendering API (D3D, OpenGL, …) accordingly.
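The two-layer split can be sketched like this (a hypothetical structure; real culling uses frustums and spatial indices, and the lower layer would issue actual D3D/OpenGL calls):

```python
def top_layer(scene, view_min, view_max):
    """Iterates the scene, culls, and emits rendering jobs."""
    jobs = []
    for entity in scene:
        if view_min <= entity["x"] <= view_max:   # trivial 1D visibility test
            jobs.append(entity)                    # job for the lower layer
    return jobs

def lower_layer(jobs):
    """Stand-in for driving the graphics API with the visible subset."""
    return [f"draw {job['name']}" for job in jobs]

scene = [{"name": "tank", "x": 5.0}, {"name": "far_tree", "x": 500.0}]
calls = lower_layer(top_layer(scene, 0.0, 100.0))
```

Note that neither layer reacts to state changes; both simply consume whatever the scene contains when the render step runs.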

1. State simulation (eg, position, attack results; this must be functionally isolated in the case of deterministic lockstep).

2. Manipulation of the scene graph objects in response to the deterministic simulation, either as specified in option #1 or #2 above, or declared as direct manipulation within the state simulation as in #3.

3. Rendering of the scene graph objects: the render pass for which you gave an answer, done, as you specified, as a traversal of the tree.

As specified, #2 is sort of a "View" pass, in the sense that it takes the entity state and manipulates the scene graph (it does not create vertices!) into the correct state. But claiming this is the same thing as the render pass, just because it superficially resembles a complete traversal of the tree, is incorrect: the render pass actually has to produce the culled subset of applicable vertices for the GPU from the entirety of the scene graph, and those almost certainly change every frame. The View implements the higher-level declaration with regard to animation and the scene graph objects, as in the example.

The View pass as contemplated in #2 is not going to recreate scene graph objects on every tick. So, logically, it is not a pass but a programmed mutation of the view based on the state.
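That distinction can be sketched as follows (hypothetical names): the scene graph objects persist across ticks, and the "View pass" only mutates them in place from the simulation state, while a render pass would rebuild its vertex subset every frame.

```python
class SceneObject:
    """Persistent scene graph object; never recreated per tick."""
    def __init__(self):
        self.animation = "idle"

def view_pass(sim_states, scene_objects):
    # Programmed mutation: sync each persistent object to the simulation,
    # touching only the objects whose state actually differs.
    for state, obj in zip(sim_states, scene_objects):
        if obj.animation != state:
            obj.animation = state

objs = [SceneObject(), SceneObject()]
first = objs[0]                      # keep a handle to check identity
view_pass(["firing", "idle"], objs)
```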