Topic: Hopfield experimentation

What with my interest in artificial intelligence, I wrote a Hopfield network implementation early this morning, and tested it.

Well, the first thing I learned is that I'll have to rewrite my Python module in C if I want to use it on any substantial data, because it's slow as a pig for anything but small images. That being said, I started out with one image:
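For anyone curious what the core amounts to, here's a simplified sketch of the standard Hebbian (outer-product) training rule plus synchronous recall — a minimal illustration, not the actual module I wrote:

```python
def train(patterns):
    """Hebbian outer-product rule: W[i][j] accumulates p[i]*p[j] over patterns."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:                        # patterns are bipolar (+1/-1) lists
        for i in range(n):
            for j in range(n):
                if i != j:                    # zero diagonal: no self-connections
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, max_steps=10):
    """Synchronous updates s_i = sign(sum_j W[i][j]*s[j]) until the state stops changing."""
    n = len(state)
    s = list(state)
    for _ in range(max_steps):
        s_next = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
                  for i in range(n)]
        if s_next == s:                       # reached a fixed point
            break
        s = s_next
    return s
```

With a single stored pattern, flipping one bit and calling `recall` snaps it back in one update.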

When I asked the Hopfield network for the same, it gave me:

...which, when negated, is the same as the original. I remember reading that Hopfield networks have a tendency to remember the inverse of their pattern. So far so good.
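That inversion isn't a bug, by the way: the network's dynamics are symmetric under flipping every neuron, so each stored pattern's negative is an equally good attractor. A quick toy-size check (self-contained, not my actual module):

```python
# With Hebbian weights W[i][j] = p[i]*p[j] (zero diagonal), the local field
# at the negated pattern -p is -(n-1)*p[i] -- exactly the negation of the
# field at p -- so -p is a stable state whenever p is.
p = [1, -1, 1, -1, 1, 1]
n = len(p)
W = [[p[i] * p[j] if i != j else 0 for j in range(n)] for i in range(n)]
neg = [-x for x in p]
update = [1 if sum(W[i][j] * neg[j] for j in range(n)) >= 0 else -1
          for i in range(n)]
assert update == neg  # the inverse image is a fixed point too
```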

The next thing I tried was recalling one of several stored images, which proved more difficult. After making a poor selection of Elder Futhark runes, I attempted to recall Ansuz and got this:

Closer, but it looks like Othala ruined the image. After removing Othala:

"I know a twelfth ..."

I tried Sowilo, but Tiwaz jinxed that one too. After a while I figured out that my network could only remember Raidho and Tiwaz well together. Here's Raidho:

Well, that experiment kind of sucked, but it showed me that the stored originals shouldn't look too similar, especially for monochrome images. I'm going to try Chinese characters next time, and introduce distortion if that experiment turns out to be successful. Stay tuned!!

Very cool... Now and then I experiment with neural networks, but not much. I'd like to take it more seriously... I'll be starting with a crappy model that was proposed for some psychological data, and I suspect it will fail miserably when confronted with the data I've gathered... but let's see. Sometimes the results can be quite surprising.

I've used Matlab for this before, but it can be a pain with all the registration hassle — I need to have the CD in the drive just to launch the program, and the like — so I'd be interested in some open-source alternatives (ideally R or Python).

How do you do them in Python? Are you programming the whole thing, or is there some module for it?

If you click the link in my previous post, it takes you to the source code I personally wrote for the Hopfield model. I believe there are already Python implementations of back-propagating neural networks; in fact, try Googling bpnn.

I learned how to implement the Hopfield network model from the book Neural Network Architectures by Judith Dayhoff. I recommend it because, while it lacks any code, it has reasonably detailed intermediate-level descriptions of neural networks that hand-wave just enough to be very readable, but not so much that they're useless for implementation. In fact, the book was very useful on that front; it's not hard to finagle a proper algorithm out of the author's descriptions. If you want to go further and read all the gory details about gradient descent and the like, there are plenty of books that cover those too.

Thank you! I'll have a look at your code and bpnn. I would only need to implement a backprop network with one hidden layer, so it looks like it won't be much of a problem... I would need to train it with lots of patterns, though, so it might be quite slow...
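Sketching it out, a one-hidden-layer backprop net in plain Python would look something like this — my own quick sketch on a toy XOR problem, not the bpnn module mentioned above:

```python
import math
import random

random.seed(0)  # fixed seed so runs are repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    """One hidden layer, sigmoid activations, plain stochastic backprop."""

    def __init__(self, n_in, n_hid, n_out):
        # small random initial weights; the extra (+1) row holds bias weights
        self.w_ih = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)]
                     for _ in range(n_in + 1)]
        self.w_ho = [[random.uniform(-0.5, 0.5) for _ in range(n_out)]
                     for _ in range(n_hid + 1)]

    def forward(self, x):
        self.a_in = list(x) + [1.0]  # append a constant bias unit
        self.a_hid = [sigmoid(sum(a * w[h] for a, w in zip(self.a_in, self.w_ih)))
                      for h in range(len(self.w_ih[0]))] + [1.0]
        self.a_out = [sigmoid(sum(a * w[o] for a, w in zip(self.a_hid, self.w_ho)))
                      for o in range(len(self.w_ho[0]))]
        return self.a_out

    def train_one(self, x, target, lr=0.5):
        self.forward(x)
        # output-layer deltas; the sigmoid derivative is y * (1 - y)
        d_out = [(t - y) * y * (1 - y) for t, y in zip(target, self.a_out)]
        # hidden-layer deltas, backpropagated through the hidden->output weights
        d_hid = [self.a_hid[h] * (1 - self.a_hid[h]) *
                 sum(d * self.w_ho[h][o] for o, d in enumerate(d_out))
                 for h in range(len(self.a_hid) - 1)]  # skip the bias unit
        for h, a in enumerate(self.a_hid):
            for o, d in enumerate(d_out):
                self.w_ho[h][o] += lr * d * a
        for i, a in enumerate(self.a_in):
            for h, d in enumerate(d_hid):
                self.w_ih[i][h] += lr * d * a
        return sum((t - y) ** 2 for t, y in zip(target, self.a_out))

# toy XOR problem as a smoke test
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = BPNN(2, 4, 1)
losses = []
for epoch in range(2000):
    losses.append(sum(net.train_one(x, t) for x, t in data))
```

For real pattern sets you'd swap the XOR data for your own input/target pairs and tune the learning rate and hidden-layer size.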

In my experience, the number of input patterns isn't the issue so much as the size of the network. That said, I believe there is an upper bound on how many patterns a BPNN of a given size can store, but I'm not sure. I remember that for a Hopfield network you can store something like 0.15 * n patterns, where n is the number of neurons. Don't quote me on anything wrt BPNNs.
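For a sense of scale, that rule of thumb works out like this (0.15 is my from-memory figure; I believe the number usually cited in the literature is closer to 0.138 * n, same ballpark):

```python
# Hopfield capacity rule of thumb: roughly 0.15 * n patterns for n neurons.
n = 100                      # e.g. a 10x10 bipolar image -> 100 neurons
capacity = round(0.15 * n)   # about how many patterns store reliably
print(capacity)              # 15
```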

In my case, I would need at least around 200-300 nodes in the input and output layers... the number of hidden units is to be determined, but should be more than 100 or so. Then the training would be on (at least) 3000 patterns until the performance was good enough.... I don't know if this qualifies as "big", though.
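Just to put a number on the size — assuming 300 units in the input and output layers, 100 hidden units, and one bias unit per layer (my rough figures), the adjustable weight count comes out to:

```python
# back-of-envelope weight count for a one-hidden-layer BPNN
# (layer sizes are rough estimates; +1 accounts for a bias unit per layer)
n_in, n_hid, n_out = 300, 100, 300
weights = (n_in + 1) * n_hid + (n_hid + 1) * n_out
print(weights)  # 60400
```

That's on the order of 60,000 weights updated per pattern per epoch, which is where the slowness would come from, not the 3000 patterns themselves.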