I am using this wonderful framework to create character recognition software, but at the moment I am stuck at one point:

When I train my network, I create lots of images (one image per letter per font) at a large size (30 pt), cut out the actual letter, scale it down to a smaller size (10x10 px), and save it to my hard disk. I can then read all those images back and build my double[] input arrays from the data. At the moment I do this on a per-pixel basis.
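To make "per-pixel basis" concrete, here is a minimal Java sketch of that conversion (this is not my actual code; the class name, the luminance formula, and the [0, 1] normalization are just illustrative assumptions):

```java
import java.awt.image.BufferedImage;

public class PixelFeatures {
    // Turn a (small) letter image into a network input vector,
    // one value per pixel. Luminance is averaged over R, G and B
    // and inverted so that ink pixels give a high activation and
    // the white background gives a value near zero.
    public static double[] toInputVector(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        double[] input = new double[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                double luminance = (r + g + b) / (3.0 * 255.0);
                input[y * w + x] = 1.0 - luminance; // ink = high
            }
        }
        return input;
    }
}
```

So a 10x10 px letter becomes a 100-element double[] that is fed straight into the network's input layer.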

Once the network is successfully trained, I test it by running it on a sample image containing the alphabet at different sizes (uppercase and lowercase).

But the results are not really promising. I trained the network until RunEpoch reported an error of about 1.5, but there are still some letters that do not get identified correctly.

Now my question is: is this caused by a flawed learning approach (pixel-based, versus the use of receptors suggested in this article: http://www.codeproject.com/KB/cs/neural ... k_ocr.aspx), or can it happen because my segmentation algorithm for extracting the letters from the image is bad?

I am not sure receptors will give you much better results on images as small as 10x10, so the pixel-based method is more or less OK. But I would try increasing the size of the images and comparing both methods.

Potentially you may have a segmentation issue, and the letters might not be extracted accurately. Do you have any samples of the images you use for training and for testing?

Another possible issue is that some letters may look very similar to each other at such small image sizes.

Also, you mentioned that you train the network on one image set but test it with different sizes (what about fonts?). Maybe you need to extend the training set to cover more variants of the same letter?

I create a training image set of the letters A-Z, a-z and 0-9 for Arial, Verdana and Tahoma. I render each letter at font size 30, extract the actual letter (deleting the whitespace around it) and then scale it down to 10x10 px.
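The "delete whitespace" step is essentially a tight bounding-box crop around the ink pixels. As a rough Java sketch of that idea (illustrative names; the luminance threshold is an assumed parameter, and it presumes a dark letter on a light background):

```java
import java.awt.image.BufferedImage;

public class LetterCropper {
    // Find the tight bounding box of all "ink" pixels (average
    // luminance below the threshold, 0..255) and return that
    // sub-image. If no ink is found, the image is returned as-is.
    public static BufferedImage cropToInk(BufferedImage img, int threshold) {
        int minX = img.getWidth(), minY = img.getHeight();
        int maxX = -1, maxY = -1;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int lum = (((rgb >> 16) & 0xFF)
                         + ((rgb >> 8) & 0xFF)
                         + (rgb & 0xFF)) / 3;
                if (lum < threshold) {
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
            }
        }
        if (maxX < 0) return img; // blank image, nothing to crop
        return img.getSubimage(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}
```

The cropped result is then what gets scaled down to 10x10 px; if this box is off by even a pixel or two, the scaled letter shifts noticeably, which is exactly the kind of segmentation inaccuracy being discussed.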

You are right, I might not be able to use receptors when the letters segmented out of the images are that small. I hadn't thought of it that way yet...

I also noticed that I have to improve my segmentation algorithm. There seem to be some cases where letters are not cut out correctly, so they look slightly different from the training letters (both in Arial).