I am writing a program that uses a genetic algorithm to 'evolve' a randomly generated image into a template image that I provide. Here is what it looks like:

Every generation it creates (by default) 150 copies with random mutations (ellipse color/shape), and then compares them all to the template picture to see which is most similar; that one moves on to the next generation. The method I currently use to compare them is a pixel-by-pixel Euclidean distance measurement, which takes about 8-9 seconds to evaluate the 150 children. That might not seem like much, but to get a decently evolved image it can take at least 40,000 generations (about 4 days at that speed). This is too slow to test the results of different mutation rate / children count / etc. combinations.
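
The comparison described above — a pixel-by-pixel Euclidean distance — boils down to one tight loop over the raw bytes. As a rough sketch (the asker's code is C#; this C++ version assumes flat grayscale byte buffers of equal size, which is an assumption about the image layout):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sum of squared per-pixel differences; the square root at the end gives
// the Euclidean distance. The flat grayscale layout is an assumption.
double euclideanDistance(const std::vector<std::uint8_t>& a,
                         const std::vector<std::uint8_t>& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double d = static_cast<double>(a[i]) - static_cast<double>(b[i]);
        sum += d * d;  // accumulate squared difference for this pixel
    }
    return std::sqrt(sum);
}
```

A loop like this over two 500x500 buffers is only 250,000 subtract-square-add steps, which is why a correct implementation should take on the order of a millisecond, not tens of milliseconds.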

What faster methods of comparing the two images are there? Or what other ideas do you have to make this more efficient? My only thought so far is to reduce the dimensions of the images, but that takes the fun out of creating a high-quality replica ;)

Sounds like you have either implemented the distance calculation incorrectly or you have a really slow computer. A very naive test in Matlab at full 80-bit floating-point precision with 500x500 images (just estimating the size from your screenshot) takes roughly a millisecond per image pair, and that time includes a few memory allocations for temporary buffers as well. Can you show how you're calculating the distance?

I agree that you have made some kind of critical performance error in your comparison function, especially if your images are just black-and-white.

If you support color images, you also have a logical flaw. I can tell you from personal experience that you can’t work with simple Euclidean distances between pixels and expect anything near an accurate result. I tried it in the early stages of my DXT utility and the results were shameful. If you are working with color images, you need to use MSE as used by PSNR, followed by weighted per-channel sums.

For the record, while MSE is only trivially more complicated than a simple Euclidean compare, you should still be able to handle around (estimating) 500 512 × 512 images per second on an average computer today. So you definitely have an implementation error on your end.
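
MSE as used by PSNR can be sketched as follows (C++; single-channel 0-255 samples and the standard 8-bit PSNR formula are assumptions, since the answer doesn't show code):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Mean squared error over one channel of 0-255 samples.
double mse(const std::vector<std::uint8_t>& a,
           const std::vector<std::uint8_t>& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double d = static_cast<double>(a[i]) - static_cast<double>(b[i]);
        sum += d * d;
    }
    return sum / static_cast<double>(a.size());
}

// PSNR in dB for 8-bit data: 10 * log10(MAX^2 / MSE) with MAX = 255.
double psnr(double mseValue)
{
    return 10.0 * std::log10(255.0 * 255.0 / mseValue);
}
```

As the answer notes, this is barely more work than the plain Euclidean compare — the only extra cost is one division per image and, if you want PSNR itself, one logarithm.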

Make sure you are not converting the pixels every time you compare them, for example to and from floating-point etc.

GetPixel is probably very slow, since it individually locks the image for each pixel.
If this were C++, I'd tell you to lock the whole image and process it with a pointer, for processing times close to that of a copy.
Conversion to ARGB should also be unnecessary for grayscale, and a bit expensive (compared to just adds to step the pointer and a sub to compare raw pixels directly).
You can probably do something like that in C# too; hopefully someone else can tell you how.
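
The lock-once-and-walk-a-pointer idea mentioned above might look like this in C++ (in C# the equivalent is locking the bitmap once with Bitmap.LockBits and reading through an unsafe byte*, or copying the buffer out with Marshal.Copy, instead of calling GetPixel per pixel):

```cpp
#include <cstddef>
#include <cstdint>

// Walk two locked grayscale buffers with raw pointers: one add to step
// each pointer and one sub to compare raw pixels, no per-pixel locking
// or format conversion. The grayscale byte layout is an assumption.
long long sumAbsDiff(const std::uint8_t* a, const std::uint8_t* b,
                     std::size_t count)
{
    long long total = 0;
    for (std::size_t i = 0; i < count; ++i) {
        total += (*a > *b) ? (*a - *b) : (*b - *a);  // |a - b| on raw bytes
        ++a;  // step the pointers...
        ++b;  // ...nothing else per pixel
    }
    return total;
}
```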

That cut the algorithm's time down to 11 ms using the Marshal'ing method. To subtract the RGB values from each other, is it just (aR+aG+aB)-(bR+bG+bB)?

Wouldn't it be more meaningful to compare each channel individually and merge the results at the end (possibly weighted by the perceptual intensities of red, green, and blue)? Blue looks nothing like red, yet they'd get a difference of zero.
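
The flaw being pointed out is easy to demonstrate with two hypothetical pixels, pure red and pure blue: summing the channels before subtracting reports zero difference, while a per-channel compare does not (C++ sketch):

```cpp
#include <cstdlib>

// The summed form from the comment above: channel totals are collapsed
// before subtracting, so different colors with equal totals cancel out.
int summedDiff(int aR, int aG, int aB, int bR, int bG, int bB)
{
    return (aR + aG + aB) - (bR + bG + bB);
}

// Per-channel compare: differences are taken channel by channel and only
// merged afterwards, so pure red vs. pure blue no longer cancels.
int perChannelDiff(int aR, int aG, int aB, int bR, int bG, int bB)
{
    return std::abs(aR - bR) + std::abs(aG - bG) + std::abs(aB - bB);
}
```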


“Wouldn't it be more meaningful to compare each channel individually and merge the results at the end (possibly weighted by the perceptual intensities of red, green, and blue)? Blue looks nothing like red, yet they'd get a difference of zero.”

Yes, it would. When performing an image compare, you should accumulate the squared differences of all the points for each channel separately, then normalize them separately (replace Math.Sqrt(dist) with dist /= nRows * nCols), and then combine them using weights. Also, the squared values should be within a range of 0-1, meaning that if you read pixels from 0 to 255, you should do pixel *= 1.0f / 255.0f and square the result of that. That value is what should be accumulated for each channel.