Hi, I was asked to talk a bit more about my implementation of an ANN here in Lua.

Actually, I already wrote this once, but when I tried attaching a file via drag & drop (over the add-file button instead of the text entry), my browser turned the tab into a download tab for my local file and everything got deleted, for my convenience I suppose.

Anyways, I got this neural network algorithm working in Lua because I was too dumb to compile a mainstream NN framework for LÖVE. No, actually, it took me months to understand it by myself with YouTube, because I wanted to build my own. After months of lazy (whenever I had time and interest) YouTube research and three attempts at this, I finally got it working, and reviewing my own code also grew my understanding of it.

The main problem I had was the chain rule. Most videos presented the chain rule as a concept that's built into the network, and while I thought I still had to implement it myself, it was already included in the formula.
It also took me a while to see that backpropagation is basically the same as forward propagation, just reusing the values of the forward pass in the derivative function (that's where the chain rule shows up).
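Roughly, in code (a minimal sketch for illustration, not my actual network code): a single sigmoid neuron, where the backward pass reuses the `out` value cached in the forward pass, and the chain rule is just the product of the local derivatives.

```lua
local function sigmoid(x) return 1 / (1 + math.exp(-x)) end

local w, b, lr = 0.5, 0.0, 0.1
local x, target = 1.0, 0.0

for step = 1, 1000 do
  -- forward pass: remember the intermediate values
  local z   = w * x + b
  local out = sigmoid(z)
  -- backward pass: chain rule, reusing `out` from the forward pass
  -- dLoss/dw = dLoss/dout * dout/dz * dz/dw
  local dloss = out - target     -- derivative of 1/2 * (out - target)^2
  local dz    = out * (1 - out)  -- sigmoid'(z), expressed via out
  w = w - lr * dloss * dz * x
  b = b - lr * dloss * dz
end
print(("trained output: %.4f"):format(sigmoid(w * x + b)))
```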

After I built it, I broke it again by applying random jitter to all kinds of updates to see what effect it had, just playing around with it.

So the dots at the bottom are, vertically, one batch of error values for different inputs. Ideally they reach the bottom of the screen, or stall when a local minimum is reached. (The weights are randomly initialized, so sometimes they get it, sometimes not, for this little XOR dataset.)
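For reference, the dataset is just the classic XOR truth table, something along these lines (field names purely illustrative):

```lua
-- XOR: output is 1 exactly when the two inputs differ
local xor_data = {
  { inputs = {0, 0}, target = 0 },
  { inputs = {0, 1}, target = 1 },
  { inputs = {1, 0}, target = 1 },
  { inputs = {1, 1}, target = 0 },
}
```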

Have you explored generative adversarial networks yet? Might be the next step.

This is really good stuff.

Yeah, they are cool, but I think a Q-learner could be a better fit for LÖVE. Actually, my next step was linking those networks to automate feeding/backpropagation along multiple chained networks.
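Something like this is what I have in mind (a hypothetical sketch with invented names, not working code from my project): anything with a forward and a backward method can be chained, and the gradient just flows back through the chain in reverse.

```lua
local function make_scaler(w)      -- trivial stand-in for a real network
  local net = { w = w }
  function net:forward(x)
    self.x = x                     -- cache what backward() will need
    return self.w * x
  end
  function net:backward(grad)
    self.dw = grad * self.x        -- gradient for this net's own weight
    return grad * self.w           -- gradient handed to the previous net
  end
  return net
end

local chain = { make_scaler(2), make_scaler(3) }

local function chain_forward(nets, x)
  for _, net in ipairs(nets) do x = net:forward(x) end
  return x
end

local function chain_backward(nets, grad)
  for i = #nets, 1, -1 do grad = nets[i]:backward(grad) end
end

print(chain_forward(chain, 1))  --> 6
chain_backward(chain, 1.0)      -- fills each net's dw along the way
```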

Also, those long short-term memory models sound interesting. (https://www.youtube.com/watch?v=WCUNPb-5EYI ; actually I was searching for a video of one of DeepMind's models that guides you through the subway by taking in a photo of the subway map.)

I've always gotten lost in this stuff, and if you're willing to go outside of neural nets you'll lose days and weeks too. k-means is a good place to start for clustering. Then go on to PCA, which can be seen as a relaxed solution of k-means. And if you go in that direction, once you finish with principal component analysis (I finished k-means but never got past PCA), self-organizing maps are supposed to be a fun generalization of PCA.
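To show how little code plain k-means takes, here's a rough sketch (assuming 2D points, squared Euclidean distance, and at least k points; not from any particular library):

```lua
local function kmeans(points, k, iters)
  local centers = {}
  for i = 1, k do centers[i] = { points[i][1], points[i][2] } end
  for _ = 1, iters or 10 do
    local sums = {}
    for i = 1, k do sums[i] = { x = 0, y = 0, n = 0 } end
    -- assignment step: each point goes to its nearest center
    for _, p in ipairs(points) do
      local best, bestd = 1, math.huge
      for i, c in ipairs(centers) do
        local dx, dy = p[1] - c[1], p[2] - c[2]
        local d = dx * dx + dy * dy
        if d < bestd then best, bestd = i, d end
      end
      local s = sums[best]
      s.x, s.y, s.n = s.x + p[1], s.y + p[2], s.n + 1
    end
    -- update step: move each center to the mean of its points
    for i, c in ipairs(centers) do
      if sums[i].n > 0 then
        c[1], c[2] = sums[i].x / sums[i].n, sums[i].y / sums[i].n
      end
    end
  end
  return centers
end

local pts = { {0, 0}, {0, 1}, {10, 10}, {10, 11} }
for _, c in ipairs(kmeans(pts, 2, 10)) do print(c[1], c[2]) end
```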

One day, when I get the time, I'm hoping to explore some of the simpler stuff like random forests, utility systems, and other AI more traditional to games. There's lots of topics worth exploring. Sir, if you expand any more, be sure to ping me!

I bet^^. I think k-means is something I've already got. I'm more about trying to make an AI that could really kick some ass and be fun to play against.
Or imagine a simple zombie AI driven by a neural net: for each zombie, just apply some small random jitter to the weights, and it seems like you get a crowd of zombies that all behave differently.
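The jitter idea in a sketch (names and numbers invented): clone one trained set of weights per zombie and nudge each copy a little so behaviors diverge.

```lua
math.randomseed(os.time())

local function jittered_copy(weights, amount)
  local copy = {}
  for i, w in ipairs(weights) do
    copy[i] = w + (math.random() * 2 - 1) * amount  -- uniform in [-amount, amount]
  end
  return copy
end

local base = { 0.4, -1.2, 0.7 }  -- hypothetical trained weights
local zombies = {}
for i = 1, 20 do
  zombies[i] = { weights = jittered_copy(base, 0.05) }
end
```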

I could imagine using a neural network in combination with k-means to analyze and predict the movement of troops/groups in strategy games, a bit like handwriting prediction.
It could also be combined with a GAN, I think.

Hmm, self-organizing maps also sound interesting... (looks like they could also aid a strategy AI or map generation)

If you want 'fun' AI that is also relatively skilled, or if you are fascinated by emergence, you should look into Utility AI and Behavior Trees.

I've done toy examples (on paper) to demonstrate the concepts to kids, and both behavior trees and utility AI are fairly easy to grasp.
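To give the flavor, a toy sketch (the actions and scoring functions are made up): score every available action each step and run the one with the highest utility.

```lua
local actions = {
  { name = "flee",   score = function(s) return 1 - s.health end },
  { name = "attack", score = function(s) return s.health * s.enemyNear end },
  { name = "wander", score = function() return 0.2 end },  -- constant fallback
}

local function pick_action(state)
  local best, bestScore = nil, -math.huge
  for _, a in ipairs(actions) do
    local sc = a.score(state)
    if sc > bestScore then best, bestScore = a, sc end
  end
  return best.name
end

print(pick_action({ health = 0.3, enemyNear = 1 }))  --> "flee"
```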

I've always wanted to try capturing the output of a utility AI and transforming it into a behavior tree.

If you hook up an NN as the utility function for the utility AI, or just use the jitter method you wrote about, then you could experiment with weight manipulation for the AI in question. Unlike behavior trees, utility has to be recalculated every step, so it can be expensive to track a lot of elements (read the articles about the AI in FEAR2 for more). But if there were some way to capture that and 'freeze' the utility values
as a behavior selector, then some of these calculations might be mitigated.
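Maybe something like this (a speculative sketch building on the toy utility example above; the state bucketing is invented): memoize which action wins per coarse state, so later frames do a cheap table lookup instead of rescoring everything.

```lua
local frozen = {}

local function bucket(state)  -- coarse, illustrative state key
  return math.floor(state.health * 10) .. ":" .. state.enemyNear
end

local function act(state)
  local key = bucket(state)
  if not frozen[key] then
    frozen[key] = pick_action(state)  -- expensive scoring, done once per bucket
  end
  return frozen[key]                  -- cheap lookup afterwards
end
```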

I've seen examples that go in this direction, but where the utility AI is a node in the behavior tree, not the other way around. All in all though, it seems like utility is the 'peanut butter' to the decision and behavior tree 'jelly', and I wish I had the time to explore it more.
Researchers have said the future of AI is in ensemble methods, and from what I can tell, this is a largely unexplored direction.