How robots and spiders cause issues, and how to stop them. We can also talk about CAPTCHAs (Completely Automated Public Turing Tests to Tell Computers and Humans Apart): their use, their compliance issues, porn proxies, PWNtcha, and other ways to defeat them.

In brief, "The averaging attack can be used on image-based captchas if the following conditions are met:

The predominant distortion in the captcha is of noise-like nature.
It is possible to extract a series of different images with the same information encoded in them.

Averaging of a series of images can be used to improve image quality (reduce distortion, or improve the signal-to-noise ratio, so to speak) of captchas and hence to make them more easily recognizable by OCR (optical character recognition) systems."
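
To make the mechanics concrete, here is a minimal sketch of that averaging step in Python. The URL, token parameter, and sample count are hypothetical; the sketch assumes the server re-renders the same underlying text with fresh random noise on every request for the same token.

```python
import urllib.request
from io import BytesIO

import numpy as np
from PIL import Image

CAPTCHA_URL = "http://example.com/captcha?token=abc123"  # hypothetical endpoint
SAMPLES = 20

def fetch_captcha(url):
    """Download one rendering of the CAPTCHA as a grayscale float array."""
    data = urllib.request.urlopen(url).read()
    return np.asarray(Image.open(BytesIO(data)).convert("L"), dtype=np.float64)

# Pixel-wise mean of many renderings: the fixed text reinforces itself
# while the per-request noise averages toward gray, raising the SNR.
stack = np.mean([fetch_captcha(CAPTCHA_URL) for _ in range(SAMPLES)], axis=0)
Image.fromarray(stack.astype(np.uint8)).save("averaged.png")  # hand to OCR
```

The more samples you pull, the closer the noise converges toward its mean and the cleaner the text the OCR stage sees, which is exactly why the second condition above (many images encoding the same information) is required.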

It's a pretty interesting idea, but since I've never really played with defeating CAPTCHAs, I'm not sure if it's a real advance, especially considering the replies it got on Bugtraq (none of them were scathing; they mainly said it's effective against only a small number of sites and that there are better ways anyway).

From my experience, I can't recall having seen many CAPTCHAs that fall within those requirements.

Yes, that's actually a known attack against CAPTCHAs... they are probably the first to write it up in this way, but it's been around for years. Really, that's how I broke the AcuTrust CAPTCHA (converting it into a one-dimensional array): http://ha.ckers.org/acutrust/
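
The linked write-up has the details of that break. Purely as an illustration of what "converting it into a one-dimensional array" can look like, here is one common flattening, a vertical projection profile that counts dark pixels per column; this is not necessarily the exact transform used against AcuTrust.

```python
import numpy as np
from PIL import Image

def projection_profile(path, threshold=128):
    """Collapse a 2-D CAPTCHA image into a 1-D array: the count of
    dark pixels in each pixel column."""
    img = np.asarray(Image.open(path).convert("L"))
    return (img < threshold).sum(axis=0)

profile = projection_profile("captcha.png")  # hypothetical input file
# Columns where the profile drops to ~0 are gaps between characters,
# so the image can be segmented without any full 2-D analysis.
gaps = np.where(profile == 0)[0]
print(profile, gaps)
```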

My attack is slightly different, but the idea is very similar. The basic premise for avoiding this is to never allow more than one version of the same CAPTCHA per token.
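
A minimal sketch of that countermeasure, assuming a Python server-side component (the renderer below is a toy stand-in, not any real CAPTCHA library): render the image exactly once per token, cache the bytes, and serve the identical image on every repeat request, so an attacker never collects multiple noise variants to average.

```python
import io
import random

from PIL import Image, ImageDraw

_cache = {}  # token -> PNG bytes, rendered exactly once

def _render_once(text):
    """Toy renderer: the text plus random pixel noise."""
    img = Image.new("L", (120, 40), 255)
    ImageDraw.Draw(img).text((10, 12), text, fill=0)
    for _ in range(600):  # noise that would otherwise vary per request
        img.putpixel((random.randrange(120), random.randrange(40)),
                     random.randrange(256))
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()

def get_captcha(token, text):
    # Key point: one rendering per token. A second request for the same
    # token gets byte-identical output instead of a fresh noise variant.
    if token not in _cache:
        _cache[token] = _render_once(text)
    return _cache[token]
```

With this in place, condition two of the averaging attack (a series of different images with the same information) can never be met: re-requesting the CAPTCHA just replays the same bytes.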