We’re happy to report that neural2d has been tested in a fresh Ubuntu 16.04 installation, and — whew — it works (subject to the caveat below). That’s using CMake 3.5.1 and g++ 5.4. You are invited to comment on what operating systems and tool versions you have successfully used with neural2d.

Webserver

Thanks to a report from an alert participant, we found that the optional webserver is not compiled and linked by default, contrary to what the documentation says. To compile and link the webserver, run cmake with the option -DWEBSERVER=ON, then rebuild the neural2d executable. For example:

cd build
cmake -DWEBSERVER=ON ..
make

Documentation

The README file in the top-level directory has been updated with clearer instructions about preparing input data for the neural net, and with a few additional internal links and references. There are no functional changes, and no new secrets revealed; just wordsmithing. Readers are encouraged to comment on the documentation or to submit additional documentation.

A new diagram showing file relationships was checked into the repository.

Artificial neural nets and biological neural nets share many common characteristics, but one big difference is that artificial neurons typically operate in a static framework by outputting a single scalar value in response to their inputs, while biological neurons have a rich life in the time dimension and output sequences of pulses. Nobody is exactly sure what that means yet, but it’s pretty clear that our artificial neural nets do not yet model the time dimension of biological nets very well.

Here’s an article that explains how the thousands of synaptic inputs to a neuron help it recognize sequences of patterns, not just static patterns. The authors say that they have discovered that the physical arrangement of input synapses can cause the “emergence of a computationally sophisticated sequence memory.” Also see this commentary about the article.

I’m very interested in hearing about your experiments with neural nets recognizing time-dependent sequences of patterns.

One of neural2d’s contributors recently mentioned the advantages of using Git commit templates. It’s an easy way to make commit messages more consistent and useful. It’s such a good idea that I wanted to give it some exposure here.

The Git template places some text in the commit message dialog to help you remember how to format the commit messages consistently. It does not force you to format your commit messages in any particular way; it’s just a reminder. Instructions for setting up your own commit template can be found here.
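Setting one up takes two steps: write a template file and point Git at it. The file path and the template text below are just an assumption for illustration; use whatever conventions your project prefers:

```shell
# Create a template file (the contents here are only an example):
cat > ~/.gitmessage <<'EOF'
# Subject line: imperative mood, 50 characters or less
#
# Body: explain what changed and why, wrapped at 72 characters
EOF

# Tell Git to preload it into every new commit message:
git config --global commit.template ~/.gitmessage
```

After that, every `git commit` opens your editor with the template's comment lines already in place, ready to be replaced with a real message.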

Despite the headlines, the robot in question did not become conscious. It solved a puzzle by following an algorithm. You could use pencil and paper and follow the same algorithmic calculations and arrive at the same answers the robot did.

There’s a big difference between human-like behavior driven by an algorithm, and the same behavior driven by conscious awareness and intention.

The ball-and-stick illustrations used in the neural2d documentation were made with Blender. This article documents the Python scripting used to generate the connectors (sticks) between the neurons (the spheres) for the benefit of any Blender users who are trying to do something similar.
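The article has the actual scripts; as a rough sketch of the geometry involved (not the article's code), the core of the problem is computing where to place a cylinder, how long to make it, and how to rotate it so it spans two sphere centers. In Blender you would then feed these values to something like `bpy.ops.mesh.primitive_cylinder_add(depth=length, location=mid, rotation=(0, theta, phi))`; the math itself is plain Python:

```python
import math

def connector_transform(p1, p2):
    """Return (midpoint, length, theta, phi) for a stick between two
    sphere centers p1 and p2: the cylinder's location, its length, and
    the Euler rotations (about Y by theta, then about Z by phi) that
    align a Z-axis cylinder with the segment from p1 to p2."""
    dx, dy, dz = (b - a for a, b in zip(p1, p2))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    mid = tuple((a + b) / 2.0 for a, b in zip(p1, p2))
    phi = math.atan2(dy, dx)                         # azimuth around Z
    theta = math.acos(dz / length) if length else 0  # tilt away from +Z
    return mid, length, theta, phi
```

For example, the connector between spheres at (0, 0, 0) and (1, 0, 0) sits at (0.5, 0, 0) with length 1, tilted 90 degrees from vertical.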

In neural2d, convolution network layers and pooling layers typically have a depth > 1, where the depth equals the number of kernels to train.

Previously, neural2d imposed certain restrictions on how layers with depth could be connected. The assumption was that if you wanted to go from a convolution network layer to a regular layer, the destination regular layer would have a depth of one.

There was no good reason to impose such a restriction, so neural2d now allows you to define regular layers with depth and connect them in any way to any other kind of layer. This means you can now insert a sparsely connected regular layer in between two convolution network layers with depth > 1 while preserving the depth of the pipeline.

In neural2d terminology, convolution networking is when you have a set of convolution kernels that you want to train to extract features from an input signal. Convolution filtering is when you have a single predetermined, constant kernel that you want to specify.
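The mechanics are the same in both cases; only the origin of the kernel weights differs. As an illustration of the underlying operation (a plain-Python sketch, not neural2d's implementation), here is a valid-mode 2D convolution. For filtering, the kernel is a constant you choose; for networking, you would have many such kernels whose weights are adjusted during training:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as is
    conventional in neural-net code) of 2D list `image` with `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1     # output shrinks by kernel size - 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out
```

Sliding a fixed 3x3 sharpening or edge-detection kernel over an input is convolution filtering; training a stack of, say, ten such kernels at once gives a convolution network layer with depth 10.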