Phil 2.20.17

PathNet article and paper. Using genetic techniques to produce better NN systems. GAs are treated like gradient descent, which makes sense, as gradient descent and hill climbing are pretty much the same thing.
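To make the connection concrete, here’s a toy hill climber (my illustration, not code from the paper): it accepts only improving steps, which is the same greedy loop as gradient descent with the derivative replaced by a random nudge, and a GA that keeps fitter offspring is effectively a population of these climbers running in parallel.

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

// Toy illustration of the hill-climbing / gradient-descent connection:
// both greedily accept steps that improve the objective. A GA does the
// same search in parallel, with mutation supplying the candidate steps.
public class HillClimb {
    public static double climb(DoubleUnaryOperator f, double x, double step, int iters) {
        Random rng = new Random(42);
        for (int i = 0; i < iters; i++) {
            // random "mutation": nudge x left or right
            double candidate = x + (rng.nextBoolean() ? step : -step);
            if (f.applyAsDouble(candidate) > f.applyAsDouble(x)) {
                x = candidate; // keep only improving moves, like a gradient step
            }
        }
        return x;
    }

    public static void main(String[] args) {
        // Maximize f(x) = -(x - 3)^2; the climber converges near x = 3,
        // just as gradient ascent on the same function would.
        double best = climb(x -> -(x - 3) * (x - 3), 0.0, 0.01, 10_000);
        System.out.printf("best x = %.3f%n", best);
    }
}
```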

“Since scientists started building and training neural networks, Transfer Learning has been the main bottleneck. Transfer Learning is the ability of an AI to learn from different tasks and apply its pre-learned knowledge to a completely new task. It is implicit that with this precedent knowledge, the AI will perform better and train faster than de novo neural networks on the new task.”

Adding angle and mean deltas. Interesting results, but still not sure of the best approach to classify…
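For the record, here’s a minimal sketch of what I mean by the deltas, assuming “angle delta” is the per-tick change in heading and “mean delta” is the per-tick change in distance from the flock centroid (both definitions are my working guesses, not settled sim code):

```java
// Hedged sketch of the two candidate features; the definitions are
// assumptions about "angle and mean deltas", not final.
public class AgentFeatures {
    // Per-tick change in heading, wrapped into (-pi, pi].
    public static double angleDelta(double prevHeading, double heading) {
        double d = heading - prevHeading;
        while (d > Math.PI)   d -= 2 * Math.PI;
        while (d <= -Math.PI) d += 2 * Math.PI;
        return d;
    }

    // Per-tick change in an agent's distance from the flock centroid.
    public static double meanDelta(double[] prev, double[] cur,
                                   double[] prevCentroid, double[] centroid) {
        return dist(cur, centroid) - dist(prev, prevCentroid);
    }

    private static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        // Turning from 170 deg to -170 deg should read as a +20 deg turn, not -340.
        System.out.println(Math.toDegrees(
            angleDelta(Math.toRadians(170), Math.toRadians(-170))));
    }
}
```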

So here’s a pretty typical population. It’s 10% Explorer, 90% Exploiter, and the Exploit social influence radius is 0.2. These settings produce an orbiting flock. Between-group interaction is allowed. This is a grid where the accumulated relationship of each agent to every other agent is shown: red is closest, green is farthest. You can see the different populations pretty well. One thing that isn’t that obvious is that Exploiters are on average slightly closer to each other than to Explorers.
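A minimal sketch of how that grid can be built, assuming it’s just pairwise distances summed over the run (my reconstruction, not the sim’s actual code):

```java
// Reconstruction of the accumulated-relationship grid: grid[i][j] is the
// pairwise distance between agents i and j summed over the whole run.
public class RelationshipGrid {
    // positions[t][i] = {x, y} of agent i at tick t
    public static double[][] accumulate(double[][][] positions) {
        int n = positions[0].length;
        double[][] grid = new double[n][n];
        for (double[][] frame : positions) {
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    double dx = frame[i][0] - frame[j][0];
                    double dy = frame[i][1] - frame[j][1];
                    double d = Math.sqrt(dx * dx + dy * dy);
                    grid[i][j] += d; // symmetric, so fill both halves
                    grid[j][i] += d;
                }
            }
        }
        return grid; // small totals = "closest" (red); large = "farthest" (green)
    }

    public static void main(String[] args) {
        // Two agents drifting apart over three ticks.
        double[][][] run = {
            {{0, 0}, {1, 0}},
            {{0, 0}, {2, 0}},
            {{0, 0}, {3, 0}}
        };
        System.out.println(accumulate(run)[0][1]); // 1 + 2 + 3 = 6.0
    }
}
```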

A more extreme example is where the Exploit influence distance is 10. These tables show just relative position compared to the origin.

Although I can’t figure out how to classify using this data, clustering works pretty well. This is Canopy (WEKA) on the top dataset above:
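For reference, the same clustering can be run programmatically with WEKA’s Canopy clusterer; this sketch assumes the grid data has been exported to an ARFF file (the filename and cluster count are placeholders):

```java
import weka.clusterers.Canopy;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CanopyDemo {
    public static void main(String[] args) throws Exception {
        // "relationships.arff" is a placeholder for the exported grid data.
        Instances data = new DataSource("relationships.arff").getDataSet();

        Canopy canopy = new Canopy();
        canopy.setNumClusters(2); // guessing two clusters: Explorers vs. Exploiters
        canopy.buildClusterer(data);

        System.out.println(canopy); // prints the discovered canopies
        for (int i = 0; i < data.numInstances(); i++) {
            System.out.println(i + " -> cluster "
                + canopy.clusterInstance(data.instance(i)));
        }
    }
}
```

Canopy is cheap enough that it’s often used as a pre-clustering pass to seed k-means, which may be part of why it handles this data well.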