Maybe you're going to address this in the next part you mentioned, but languages could really use more neat tools to avoid all the boilerplate you have to write and read.
With methods like update and generic things like 'GameObject' you're tempted because you never have to touch that code again, and the codebase 'magically' accounts for everything when you add a new component type. The 'main loop' that just calls the updates is never a real source of errors, as opposed to the explicit style, where you can (and this happens to me a ton) simply forget to add the actual function call, which may cost you precious sanity on a bad day.
I guess a good metaprogramming language, plus a way to 'tag' functions/methods and variables/attributes (templates certainly aren't this) so you can look them up in the metaprogramming language later, is all you'd need there, but C++ doesn't have it...
As you pointed out, the 'static so a programmer has to change it' vs 'dynamic so designers can change it with a tool' distinction is not really a thing, considering the tool can theoretically spit out generated code in whatever language you need. But again the ecosystem bites you, because actually doing that (if you want to be able to change stuff while the game/engine is running) is a lot more complicated than writing something data-driven and loading/reloading a bunch of text files instead.

This is all conjecture, since we probably won't find out until someone tries and fails or succeeds, but I'm not quite as pessimistic about machine learning-based AI being used in game AI, even without better verification and customization tools. I could see a small subset of problems where something like neural networks could be applied fairly well in combination with traditional hand-designed AI techniques.
In particular, I'm thinking about problems where...
... currently there don't exist any really solid/robust solutions with traditional methods, so there's no clear preference for 'let's just do it the way we know it works'.
... AI can be trained without the need of human input (so the target function can be optimized without a player having to play, which costs a lot of time/money), or where human-sourced training data is already readily available.
... error cases are either not a problem (maybe even a feature) or can be identified easily so you can fall back on some kind of default behavior that is known to work.
... Player perception
As an example that I may investigate more closely in the future, AI tactics in RTS games (real-time maneuvering and usage of troops, etc.) is something where even the flagships of the industry like Total War, Starcraft 2, etc. still suffer from major problems: the AI simply malfunctions often, or is incredibly simplistic in its approach to battle and easily abused, just because of the complexity of situations an AI can find itself in. Player discussions about AI in these games are often rife with complaints for good reason, and the player consensus is that playing against other players is the only way to get actually interesting free-form battles.
Where to position troops, and which actions to execute with them in order to achieve a desired win rate, is something I imagine could be optimized well with neural networks, with different networks trained under different designer-picked restrictions in order to get different "AI personalities", such as a maximum number of actions per minute for the AI, or maximized usage of certain unit or action types.
The networks could be trained against pre-existing AI without human input, or successors to pre-existing franchises could make use of existing player replay data. Instead of maximizing winrate, neural networks could be optimized against a target winrate distribution with respect to the given training data in order to arrive at different difficulty levels. So an AI optimized to roughly follow the winrate distribution that a real player of skill level [x] would have against players of various skill levels could possibly also be made to perform roughly like a player of skill level [x].
And neatly, this would be a subset problem for those games in general, and other problems (such as overall game strategy, what units to produce, what buildings to build, etc.) could still be solved with classical methods.
The largest concern I would have for something like this is training time, since simulating full battles is bound to be a slow process.

I have only implemented two variants of these; each was initially a couple of hours of work, and fairly representative in terms of performance of a fully featured implementation.
I'd approach the problem by designing an agnostic API with a stub implementation, and then brute-force choose the best-performing variant once I have a non-trivial project going.
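A minimal sketch of what I mean (all names here are hypothetical, just for illustration): gameplay code only ever talks to an interface, and the brute-force stub behind it can later be swapped for packed arrays, per-system storage, or whatever wins the benchmark.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

using EntityId = std::uint32_t;

struct Position { float x, y; };

// The agnostic API: systems depend only on this, never on the storage.
class IPositionStore {
public:
    virtual ~IPositionStore() = default;
    virtual void set(EntityId e, Position p) = 0;
    virtual Position *get(EntityId e) = 0; // nullptr if entity has no Position
};

// Brute-force stub: a hash map. Slow, but trivially correct, which is all
// you need until a real project exists to profile alternatives against.
class MapPositionStore : public IPositionStore {
public:
    void set(EntityId e, Position p) override { data_[e] = p; }
    Position *get(EntityId e) override {
        auto it = data_.find(e);
        return it == data_.end() ? nullptr : &it->second;
    }
private:
    std::unordered_map<EntityId, Position> data_;
};
```

The point is that swapping MapPositionStore for a faster implementation later touches no system code at all.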
This is strictly only impossible if the implementation has strong implications for how programmers must design the systems that use it, and I'm not sure what would even fit the "strong" qualification there. Even the variant where the components are stored with the systems (and the programmer who writes those systems has to put them there) can probably be circumvented with some macros or code introspection / code generation (hello clang libs, until C++ finally has a proper metaprogramming system) that says "I need physics and renderer related components here" and then figures out where to put that stuff based on the dependency graph of all systems that need it.
I guess if you really want to you can probably circumvent all of those problems by using some kind of DSL.
What am I missing?

The projectileFX action internally calls a function in our graphical effects system. That function is hardcoded to take raw game-world positions, and what the projectileFX action does is take the casting entity and the target, get their positions, and hand them over to that FX function. If we wanted to shoot the projectile not at the target but somewhere else, we'd add a different kind of action (or just an option to projectileFX) that doesn't take the target entity's position, but whatever else you need.
I should add that all over the code, we have cases where 'variables' are really treated as 'this is either a variable or a function' (inspired by functional programming languages), so we can do stuff like changing the target position given to that FX function at runtime (for example, if we want a projectile to track a target instead of flying to the same place when the target moves), or changing the 'arrival time' variable which the spells use to know when to trigger damage.
The lightning bolt thing wouldn't require a tree or anything like that. In our linear action list system, I would just give the thing [n] iterations of a [lightningFX, lightningSFX, damage] action triple, where n is the number of times you want it to jump. Every time the lightningFX is used, it writes the location of the target to some internal variable of the spell that later executions of lightningFX reference. We have a bunch of tracking stuff like that in our spells, too. We have a spell that spawns rotating orbs around a caster, which shoot particles regularly. Since the positions of those always change and they are spawned at runtime, we just add those particle effect objects into a 'spawnedParticleEffects' array that every spell has, and every time that spell shoots a projectile, it picks a random entry out of that array as a source location.

Not sure how much use this is if you want a hardcore 'component'-ized spell system, but in Idle Raiders (and its successor Second Run; you can look it up on Kongregate if you want to play it) we don't solve this generally. Instead there's a whole bunch of (re-usable) hard coding, which is working fairly well. I think we have close to a hundred different spells in the actual game now (playable by people, with more added on a regular basis) and we haven't really encountered major issues.
Our spells are lists of actions (basically functions that get called one after the other, with delays between them). For example, our fireball spell has the actions "projectileFX" (for the graphical effect of launching the projectile, which also computes how long it's going to take), "projectileSound" (for the sound effect), followed by the action "fireDamage" (for dealing damage).
When spells are constructed they are given generic options (string-value pairs), in this case stuff like the file name for the projectile, the damage modifier, the speed of the projectile, etc. We also have an "Ice Shard" spell that could just be implemented as the Fireball spell with different options (for various gameplay reasons it's an entirely separate spell in our system, though).
If we wanted to add an AOE component at the end of your spell, there are two ways we could do it. We could again hard-code it (maybe disabling it by default with an option when the spell is constructed), and just add an "AOEFireDamage" action in addition to or instead of the single-target fireDamage action. The second way would be via our 'passive ability' system: all abilities can trigger other abilities using various gameplay rules, so we could just create a generic "aoedamage" spell that is triggered by the fireball spell. As a practical example, there's an "Ignite" passive skill that triggers a (burning) damage-over-time effect on the target every time a Fireball crits. Oh yeah, there's also an actual AOEDamage ability, which warriors can use to have a chance to "cleave" their melee attacks, that works like this.
This works in a data-driven approach, too. All these 'actions' are created at runtime anyway (it's Javascript so it's easy to do there, but in C++ they would just be function pointers that all have the same type, or if you want to get fancy, instances of classes derived from a SpellAction base class), so it's not a problem to cobble together an editor that assembles new spells from these basic actions.
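A rough sketch of what that could look like in C++ (all names hypothetical; our actual code is Javascript, and delays between actions are omitted here): a spell is just a list of same-typed actions plus a generic option table, so an editor could assemble new spells from the same building blocks.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical context handed to every action: the generic string-value
// options the spell was constructed with, plus whatever state it targets.
struct SpellContext {
    std::map<std::string, double> options; // e.g. "damage", "speed"
    double targetHP = 100.0;
};

using SpellAction = std::function<void(SpellContext &)>;

// One reusable building block; projectileFX, projectileSound, etc. would
// have the same signature.
void fireDamage(SpellContext &ctx) { ctx.targetHP -= ctx.options["damage"]; }

struct Spell {
    std::vector<SpellAction> actions;
    void cast(SpellContext &ctx) {
        for (auto &a : actions) a(ctx); // delays between actions omitted
    }
};
```

Because every action has the same type, a data-driven editor only needs a name-to-action lookup table to assemble new spells.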
There's one major upgrade we could and would like to make to this system, which is to have skills be an action tree instead of an action list. With action trees (so a single action can branch out into multiple follow-up actions, or multiple branches can join back into one), it would be easier to have effects where you need to track multiple instances of something that was previously started by a different action.
For example, a spell could spawn three different projectiles that travel around for a bit, which would mean the action tree branches out into three paths, and at the end there would be a joining action node that waits for all three projectiles to arrive at their targets before doing something. Or the spell launches a random number of projectiles, and each of them does something different (random?) when it reaches its target. That kind of stuff would be a lot easier to handle in code if the "random things" that happen at the end could refer to a parent chain of action nodes, instead of having to find whatever they need within the linear list of actions that exists now.
We haven't done that yet, mostly because we haven't encountered any serious use cases where we couldn't just (again) hard-code around the problem. It sounds dirty, but complicating the system needs to be a productivity win (less time spent creating the same things for the game), which we don't see at the moment.

Is there previous work for something like that? Does it even make sense?
I'm a complete beginner when it comes to network programming. From what I've read it sounds like people mostly try out different networking implementations (regarding protocols used, prediction and interpolation approaches, etc.) by hand and end up using what feels best.
I'm wondering whether or not there exist metrics that measure various aspects of an implementation specifically suited or tailored to games. I figure it would be helpful for automated testing, and maybe speed up the development process when you're trying out a bunch of different approaches.
And if there aren't, I also wonder if it would be worth putting some work into coming up with useful metrics, or if the consensus is "nah, just try until you find something that works best; how networked gameplay feels has too many subjective/complex elements attached to be quantified by metrics" or something like that.
edit: I know this is a very generalized question. What a metric would look like probably depends a lot on what kind of quantities you're looking at. Am I trying to synchronize player positions as best as I can across multiple players? Server-Client or P2P? Etc.? I'm basically having a hard time googling for this stuff and wonder if people more experienced in the field have come across useful stuff. Open to anything.

Just to get on the same page... what are we talking about when we say explicit connections? I'm thinking about
struct ABCEnt
{
    A *a; // points to an A component in the big linear array of A components
    B *b; // same
    C *c; // same
};
Of course, you can do that at runtime with something like
struct Entity
{
    Component *components; // can be filled in a data-driven manner
    int numComponents;
};

Entity e;
A *a = GetComponent<A>(e); // linear search in e's components
But that kind of stuff in my opinion is the opposite of explicit, on a code-implementation level. Are we talking about the same thing?

Well, ECS is just an extended form of composition with data-oriented and data-driven design measures added in. It counters the typical problems you get from deep inheritance trees, so I think it's natural to assume that stance of discussion, and those comparisons are entirely fair. Of course, ECS is just one possible solution to the problem, which you probably don't need if you don't have those particular problems, but everyone should already be aware of that and we're just discussing specifics of ECS.
Regarding explicit vs implicit: the problem with explicit is that it's not data-driven at all, and you can't treat different component combinations in a polymorphic manner when needed. If you have an entity 1 with components A, B, C, an entity 2 with components B, C, and a system that operates on components B and C together, you can't have that system iterate through all entities just like that, because they're different types. Instead you have to manually handle all of those different combinations when updating the system: first loop through all ABC entities, then through all BC entities; if you later decide to add an ABCD entity, you have to introduce that as well, and if they interact, manage that somehow too.
This is probably solvable with dynamic code generation and re-compilation while working in your editor or whatever, but even then you'd have to find a way to let the user define system logic and then have the code generation cobble those different entity types together...
So I think explicit only really works if the entities in your game are not "different, but with some shared subset of components" like that. Or to put it differently: if you have no need for data-drivenness of your core entity types. I wouldn't go that route with a generalist framework, but it probably works pretty well when you're coding your engine in parallel with every game and adapting it to that game's needs.
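A tiny sketch of the combination problem (hypothetical types): with fully explicit entity structs, a system that needs B and C has to be hand-extended for every entity type that happens to contain them.

```cpp
#include <cassert>
#include <vector>

struct A { };
struct B { int hits = 0; };
struct C { int hits = 0; };

// Two explicit entity types that share the B+C subset.
struct ABCEnt { A a; B b; C c; };
struct BCEnt  { B b; C c; };

void updateBC(B &b, C &c) { ++b.hits; ++c.hits; }

// The system update can't iterate one homogeneous collection; it needs a
// separate loop per entity type. Add an ABCDEnt later and this function
// has to change again.
void updateSystem(std::vector<ABCEnt> &abcs, std::vector<BCEnt> &bcs) {
    for (auto &e : abcs) updateBC(e.b, e.c);
    for (auto &e : bcs)  updateBC(e.b, e.c);
}
```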
So overall, it's completely true this is basically a static vs dynamic typing discussion, with all the same trade-offs and reasons why you're sometimes forced to dynamic typing in the end.

I don't know how complex your game is, but my personal experience has been that any mobile device that doesn't run WebGL by now probably isn't going to be nearly fast enough to run a javascript game of non-trivial runtime complexity. For desktop this is less true; we tested more-than-decade-old devices which have proper WebGL browser support (even with all extensions and whatnot) but which are just too slow for the CPU-side code.
On practically all platforms, WebGL support is also just a software problem of users needing a proper browser that supports WebGL, not a hardware problem (since it's based on ES 2.0). So if you plan to roll out your HTML5 app as a packaged application, as opposed to an actual web page that your players have to navigate to in the browser, you can just make sure to use a javascript runtime that supports WebGL in the background. This is much easier than convincing your users to upgrade their browsers (or even worse, switch to a different one) for your game.
Also, canvas performance CPU-wise is pretty bad. I have no idea why, but even on high-end desktop PCs we struggled to draw more than ~600 small images per frame at 60 FPS, even with everything packed into one or two atlases. This is one of the major reasons we decided to go WebGL-only for our next web game.
Another thing to add: mobile & desktop browser WebGL support was at 92.6% globally as of February this year (probably 95%+ by now) according to http://webglstats.com/ , and even mobile devices alone have 90% coverage. With the "roll out as a standalone application" approach you can probably safely put this in the 100% category.
And last but not least, consider that supporting separate rendering paths is more complex than just going pure WebGL!

How do you attempt to read the data? With image load/store you need manual synchronization (using glMemoryBarrier) to make sure the data is available to successive drawing operations or compute calls. For example, if you want to fetch from the texture in a shader directly after writing to it, you need a glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT) between the dispatch call and the draw call that uses the texture.
edit: Whoops, saw you already have a memory barrier in there. Still, you should make sure the correct flag is set. With GL_SHADER_IMAGE_ACCESS_BARRIER_BIT, correct synchronization is only guaranteed if subsequent calls also access the image via image load/store (so no normal texture fetching, for example).
yet another edit: Also, your glMemoryBarrier call is before the dispatch. If you access the texture after the dispatch, you still need another barrier after the dispatch call; the barrier before it only ensures synchronization with everything that happened before it.

Binding different components together using an entity is almost the entire point of the system. It's about having different data and functionalities, and bundling them together with minimal memory, runtime and abstraction overhead (the opposite of which would be huge inheritance trees).
In practical terms it's because too much code will need it. Have an animation system and a combat system? Well, if my character is hit (which inevitably ends up modifying that combat component somehow, since it probably contains all the stats like HP, armor, damage, etc.), I'd like to use the animation system to set up a response for that particular character, which in the end modifies that particular animation component. Get the combat component and the animation component using the entity... ez pz. Stuff like that.
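A minimal sketch of that hit-response case (hypothetical components; a real ECS would look them up through the entity ID in separate storage rather than storing them inline, but the coupling point is the same):

```cpp
#include <cassert>
#include <string>

// Two components owned conceptually by two different systems.
struct CombatComponent { int hp = 100; };
struct AnimationComponent { std::string current = "idle"; };

// The entity is what lets gameplay code reach both of them.
struct Entity {
    CombatComponent combat;
    AnimationComponent animation;
};

// Gameplay glue: combat logic modifies the combat component, then reaches
// the sibling animation component through the same entity, without the
// combat and animation systems knowing each other's internals.
void onHit(Entity &e, int damage) {
    e.combat.hp -= damage;                // combat system's concern
    e.animation.current = "hit_reaction"; // animation system's concern
}
```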
If you're viewing this from a low-level perspective, it's easy to get the impression that you're mostly dealing with independent systems. A physics engine, that takes care of all its own stuff, a renderer that doesn't care about what happens on the outside, why would an audio system require outside components ever?? etc...
But that's not what ECS is really for (big IMO). It's about that part of the code above that which uses all these systems and bundles them together to make a game with it.
edit: Remember that a big part of the complexity of ECS is also requirements like being able to attach and detach components at runtime, being able to do data-driven stuff, etc... So in the end those users of your component system will just end up with similarly complex abstraction layers which tie this stuff together at runtime.

Our game (Idle Raiders and its upcoming successor) deals a metric ton with stats, and it works like this:
We have "modifiers". They are triplets (attribute, operator, value): the attribute to be changed (e.g. damage), the operator used (how the value is applied to the attribute, e.g. a simple ADD which does base = base_value + value, an ADDITIVE_MUL which does base = base + base_value * value, other stuff, etc.), and the value itself.
The core modifier system does no 'runtime tracking' (recomputing every frame or something) of stats, because it's not necessary; we only compute the values when they are changed by gameplay (e.g. player equips a new item, player applies a buff, a buff runs out) and keep them like that until they change again. We also don't use any dirty flags and just recompute an attribute immediately when a modifier is applied/unapplied: even though we sometimes have dozens of entities fighting, modifier changes are so rare relative to the frame update frequency that changing the same modifier on the same entity twice in a frame almost never happens, and even then the computations involved are nearly trivial.
The modifier system also doesn't store the 'active' modifiers themselves; it only stores the combined values per operator type. So when you want to compute the final result of an attribute from all its applied modifiers, it looks up the combined values of the ADD, MUL, etc. modifiers and applies those to the base stat. 'Combined' means: when you do some MULs like this
base_value += base_value * value1 + base_value * value2 + ...
it's of course the same as doing (this is the same for all other operators)
base_value += base_value * (value1 + value2 + ...);
so you can take the
(value1 + value2 + ...)
and store it somewhere. It only gets changed when modifiers are applied/unapplied, and when the value of an attribute is computed, you only need to do that one step using the cached 'combined' value per each operator type.
The tracking of the actual modifier objects that need to be applied/unapplied is left to the gameplay systems that use them, because their lifetimes are dealt with in different ways. For example, the equipment system just stores all modifiers associated with an item in the item itself, and applies/unapplies them when you equip/unequip it. These modifiers are alive for the whole game and applied/unapplied at the behest of the player's actions. The buff system, on the other hand, actually checks once a second to see which modifiers have now run out, etc., and actually discards some of them.
We need ordering of those modifiers, so we use a layer system. Any layer can contain any number of modifiers with all their different operators. We apply the lowest layer first; from that it computes what is basically a new "base value" (a copy, of course; the original base value never gets changed) for the layer above. That layer then applies its modifiers, and so it moves up the ladder through all layers.
For example, if an entity starts with 100 HP and has an item that gives +20 HP and another that gives +50% HP, that results in 170 HP, since equipment modifiers are all in the same layer and operate on the base value. But when the player now triggers a temporary buff that increases the HP of an entity by 20%, that sits in a layer above (since that's just how we want it), and the entity will have a total of 204 HP. And so on.
The math operators described above are applied independently of each other within the same layer, so layer 0's MUL does not take the result of layer 0's ADD into account. Instead, layer 1 takes the final result of layer 0 into account, and so on.
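A sketch of the layer computation with the HP example from above (hypothetical structure; only ADD and MUL shown, each stored as its pre-combined sum):

```cpp
#include <cassert>
#include <vector>

// One layer holds the combined values per operator type.
struct Layer {
    double addSum; // combined ADD modifier values
    double mulSum; // combined MUL modifier values
};

// Within a layer, ADD and MUL are applied independently to that layer's
// incoming base value; each layer's result becomes the base of the next.
double computeAttribute(double base, const std::vector<Layer> &layers) {
    double value = base;
    for (const Layer &l : layers)
        value = value + l.addSum + value * l.mulSum;
    return value;
}
```

With the numbers from the text: layer 0 (equipment, +20 HP and +50% HP) turns 100 into 170, and layer 1 (the +20% buff) turns 170 into 204.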
All gameplay systems target a pre-defined, hardcoded layer, since how the different sources of modifiers "stack" matters in gameplay terms. There are many different ones... we have equipment like armor and weapons that changes stats, all characters can have skills that modify stats, then we have temporary buffs from usable items that modify stats, permanent unlockables that modify stats, a basebuilding system that modifies stats, etc...
As a final bonus step, before storing the computation result in the actual attribute variable (myEntity.damage for example) you can put that through a custom curve if you so desire.
And actually we don't use this layering system or custom curves yet, so that was a lie :D That's just how I would do it if I had the time to code it again. For now we make do with the same system but with only one layer into which everything is thrown, and we've been doing fine so far, except that our usable scrolls that increase raider damage by 10% for 30 seconds only do it based on the base value. That's the only thing we currently don't like about it, and we had more important stuff to work on, so...

Another thing: I just noticed this doesn't work if entityID is just a globally increasing number. Instead those entityIDs must be "per alive entity in the relevant component collection", i.e. if you add/remove a component to/from an entity you have to change its ID, and you have to reuse the IDs of dead entities. Shouldn't be a problem if you hand around your entity as an actual structure.
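A sketch of the ID-reuse part (hypothetical; a free list is just one way to do it): dead IDs get handed out again, so IDs stay dense indices into the component collection instead of a forever-growing global counter.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Per-collection ID allocator: reuses released IDs before minting new ones.
class IdAllocator {
public:
    std::uint32_t allocate() {
        if (!free_.empty()) {
            std::uint32_t id = free_.back(); // reuse a dead entity's ID
            free_.pop_back();
            return id;
        }
        return next_++; // otherwise mint the next dense index
    }
    void release(std::uint32_t id) { free_.push_back(id); }
private:
    std::uint32_t next_ = 0;
    std::vector<std::uint32_t> free_;
};
```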

Nononono. Don't use global arrays, for anything, especially in an ECS.
Indeed, making this actually global is silly; I didn't mean global in the 'singleton' way. I'm currently working on a game where we made one or two systems 'global' (instead of local per game level) like that, and we deeply regret it.