Posted
by
samzenpus
on Monday August 05, 2013 @03:35PM
from the can-you-read-this? dept.

Bismillah writes "CAPTCHA may be popular with webmasters and others running different sites, but it's a source of annoyance to blind and partially sighted people — and dyslexic people and older ones — who often end up being locked out of important websites as they can't read wonky, obfuscated letters any more than spambots can. A campaign in Australia has started to rid sites of CAPTCHA to improve accessibility for everyone."

"W3C has suggested other techniques such as logic puzzles, limited-use accounts and non-interactive checks to prevent abuse such as fraudulent account creation and spamming."

It's going to be far harder to make an AI that can create a decent logic puzzle that is both accessible and hard for computers to solve than it is to make an image and warp it a bit. I think any such puzzle will probably be worse than the audio captcha button.

Not to mention, logic puzzles are unfair to people who have trouble understanding logic, which, in my experience, is damn near the entire human race.

I think you're missing the idea of what type of logic puzzles they mean. Simple things like image processing (someone in the comments below brought up the example of company logos where you type the name, or pizza toppings matched to the correct pizza) or natural language processing could be used to WRECK a bot. Imagine this: I pose the question as a human verification, "What color was George Washington's favorite white horse?" A human (with half a brain) easily sees how stupid simple it is to find the answer, which is white, but a bot would have hell with that type of question because it involves language processing to determine the appropriate response. That is a pretty simplified example, but you can find these all over the place and they are fairly easy to create.

Some of these could be defeated easily with something like a call to Wolfram Alpha, but you could quite easily find and create things whose logic is not going to be simple to automate, yet would be completely trivial for a human to process, even stupid ones. Language and image processing are RIDICULOUSLY difficult to automate efficiently, which would defeat the purpose of the bots, while making things a lot easier on the people that do have to deal with this sort of thing. I personally hate the current version of CAPTCHAs (hell, I can't read some of the more difficult ones and I write some of the software that USES them), but I do recognize the need for them. No reason they can't be improved upon though.

Wolfram Alpha had no idea about the color of Washington's favorite white horse (it looked up the distance between some town named George, WA and White Horse, NJ), but if you put it into Google, you discover that Washington had no white horses, the closest being a gray named Blueskin.

Whatever you use, you need to be able to generate an arbitrary amount of it without significant repetition, without structure that can be automated towards, and with a large "answer space" (number of possible answers) to make the percentage of 'lucky guess' answers extremely low. Oh, and it needs to be easy for humans but difficult for computers.

Generating distorted text is perfect - random characters, random distortions, nothing about the form of the puzzle that can be used as a shortcut to the answer, guessing strings at random is fruitless, and it hits computers right in the vision, where they (used to) suck and we're really good. Unfortunately that gap is narrowing, and humans on the lower end of visual acuity are getting locked out.

Generating an endless stream of simple trivia questions is going to require a significant bank of facts, and then you're going to hit the problem that if the generation method is known it can be reversed and used against you (e.g. if the answer always appears as a word in the question, just guess a randomly chosen word from the question and you get a trivially easy 10% or so success rate). Automating the question generation is almost as hard as automating the answers...

This kind of thing shouldn't be hard at all. You don't need complicated logic puzzles or any such thing. You just need something that's hard for a computer to figure out, but easy for a human.

For instance, render a 3D scene and ask a question about perspective. "What is the person holding in her right hand?" "What is the person looking at?" and similar such questions. Trivial to render. Hard to figure out, because it's far beyond simple image recognition: you have to see and interpret what's going on.

You need problems that are generated by computer but are hard for a computer to answer. In your example the computer program rendering the image must understand perspective, English grammar, and handedness.

You realize that many of the people complaining about captchas are blind, right?

Easily solved with an appropriate ALT tag, something like "A picture of a person holding a frankfurter in her right hand." In fact, can't all CAPTCHAS be fixed by simple use of the appropriate tag? "A picture of the characters E, Q, 3, 6, T and 9".

Let's say you change it so you have to answer a simple addition math problem. What you get is someone crying, "I have to answer 5+8?! But I dunno maths, you insensitive clod!"

You know that person really exists.

Yes they do. The solution is that they learn simple math so they're a fully functioning member of society. I suggest an intensive period of schooling - say 11-13 years. Oh wait...

Who are you going to cater for next? The guy that can't read the damn form? "But I'm illiterate you insensitive clod"? It's not a question of eliminating all objections, just the ones that actually stump your audience. Captcha is the worst of the worst. You can have a PhD and get it wrong a substantial portion of the time.

On the other hand, captchas have become ridiculously fuzzy as of late. My vision is 19/20 (rough comparison; doctor said I can be anything BUT an aviator) and I still find myself refreshing several captchas because they don't make sense. Sometimes I eyeball a "word" for 10-15 seconds until I'm sure I got it right, I type it in, and ERROR, wrong captcha.

If anything, word captchas have become impossible to solve for most people and very annoying even to those with perfect vision.

Why can't there be a captcha showing a picture and three buttons with possible answers? Like an image of a baby and three buttons saying MAN, WOMAN, BABY. Or a picture of a running man and buttons saying SLEEP, RUN, CHILD.

Because you just plug that image into Google 3 times with each keyword and pick the answer with the highest score. Or, much easier, you just randomly pick one of the options. One in three is a good hit rate, and even if you block by IP, getting past the system hundreds or thousands of times is trivial.

Like an image of a baby and three buttons saying MAN, WOMAN, BABY. Or a picture of a running man and buttons saying SLEEP, RUN, CHILD.

They can't be automatically generated, because anything that can be generated automatically can be solved automatically just as reliably.

So a human would have to design each and every one of them, which is a job that nobody wants to pay somebody to do. There will thus also be a limited sample set, which will easily be learned by a crafty spammer (and like anything else digital, it only takes one copy getting out for the whole set to be compromised).

Why can't there be a captcha showing a picture and three buttons with possible answers? Like an image of a baby and three buttons saying MAN, WOMAN, BABY. Or a picture of a running man and buttons saying SLEEP, RUN, CHILD.

Because then on average, 1/3 of all spambots would succeed. You need thousands of possible answers before it becomes usable as a barrier, and you'll need millions of photos (to prevent learning) and someone will have to choose a correct answer per photo, and make sure all other thousands of answers are incorrect.

Not sure if this is already super well known, but only 1 word is actually used for verification. In this example [wordstream.com] you could type "thrand " and pass it. The verification word always looks similar in font/size to 'thrand'. Oh, and the other word, I believe, is a scan from a book, and if you *do* type it in, it will help the digital scan of the book actually pinpoint what word it is. [google.com]

I am fairly sure that your information is out of date. Not 100% sure admittedly. I have tried the trick of trying to guess which word is the important one before and failed miserably. Try it for yourself, maybe you can do better than I did.

There are already several newer types of captcha nowadays that are much easier to use. One I've seen uses a company logo and you have to type out the company name. Another has you make a pizza with specific toppings. Another has you draw an image. Captchas are necessary... the problem is that they have become ridiculously difficult instead of being easy for normal people to use.

CAPTCHA will be around as long as it is the best way to stop programmatic submissions.

It's well documented that there are several groups who have put up porn sites using collections of images from around the net; they then attack sites that require answering a CAPTCHA. When challenged by the CAPTCHA, they forward it on to someone seeking the "free porn", and then forward that person's answer back to the site they are attacking.

So the CAPTCHA-using site wants a human to solve the CAPTCHA, a human solves the CAPTCHA and gets their porn, while the attacker gets into the "protected" web site that the CAPTCHA was supposed to defend.

Passwords, with no two sites accepting the same format. CAPTCHAs, which often as not even normally sighted people can't read without difficulty. Security questions which are either inane or represent their own special security risk.

God almighty, can't we come up with something to replace all of these?

Captcha fulfills a need - it is, as the name implies, a test to completely automatically tell computers and humans apart. It's necessary to keep spambots from registering accounts and spamming the hell out of us.
Granted, the "type this wobbly word" may not be the most practical (nor safe) solution.
It's easy enough to come up with alternatives. Perhaps show four photographs and ask the user to click on the one that doesn't belong (maybe the kitten out of a picture of 4 cats). Coming up with good ideas? Much harder.

CAPTCHA may be popular with webmasters and others running different sites, but it's a source of annoyance to blind and partially sighted people — and dyslexic people and older ones — who often end up being locked out of important websites as they can't read wonky, obfuscated letters

CAPTCHAs tend to have an audio button where a string of numbers is read off to you.
Even Slashdot has a "mp3" button that reads the letters on the CAPTCHA off to you.
Doesn't that already help all the above people with issues listed here?
(Except possibly the "older ones", who may have hearing issues too.)

I've been developing websites over 10 years and have never needed a captcha system.

This is how I always go about it:

1) Include a form input element labelled as something common, like a telephone number, but on a registration form that would never actually require a telephone number. Hide the parent div using CSS in an external CSS file. When the form is submitted, check to see if the element is filled out. If it is, simply display a message saying you think their registration may be automated, asking them to try again, and, if the problem continues, to contact you by other means (phone, email, etc.) so you can help them through it.

2) Time the registration from the time the page is loaded to the time it is submitted; if it's less than 10 seconds, do the same as above: simply display a message saying you think their registration is automated and to try again, etc.

Used in conjunction, I feel these have cut out 99.9999% of spam or false registrations. The timing method has to be done server side and stored in a session, and is fairly involved, so it's not easy to do properly if you are new to web development. There is also the issue of someone hitting the back button to try again after a failed submission (if you don't use client-side validation) and then submitting from a cached page, but that can be worked around if you know what you are doing.

Obviously it's not bulletproof, and if the CSS file doesn't load then someone would see the extra form element. But it's a small price to pay for effective protection.
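The two checks above can be sketched as a single server-side function. This is a minimal illustration, not the poster's actual code; the field name "phone", the session key, and the 10-second threshold are assumptions taken from the description.

```python
import time

MIN_SECONDS = 10  # humans rarely complete a registration form this fast

def looks_automated(form, session, now=None):
    """Return True if the submission is probably from a bot.

    `form` and `session` are plain dicts standing in for whatever
    your framework provides.
    """
    now = now or time.time()
    # 1) Honeypot: the CSS-hidden "phone" field should still be empty,
    #    because a human never sees it. Bots fill everything in.
    if form.get("phone", "").strip():
        return True
    # 2) Timing: reject forms submitted suspiciously quickly after
    #    the page was served.
    served_at = session.get("form_served_at", 0)
    if now - served_at < MIN_SECONDS:
        return True
    return False
```

On a real page you would record `session["form_served_at"]` when rendering the form, then call `looks_automated()` in the submission handler and show the "we think this is automated" message when it returns True.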

Ironically, what you've described is a form of CAPTCHA. "CAPTCHA" doesn't just refer to obfuscated text, but is rather any public-facing Turing test designed to tell the bots and humans apart from each other. The funky text stuff is just the most common variety, but trivia questions, object recognition, etc. can all be forms of CAPTCHA as well.

People seem to forget that the term "CAPTCHA" (Completely Automated Public Turing test to tell Computers and Humans Apart) applies to a much broader set of tests than just those obfuscated text-based things that most of us loathe. Banning CAPTCHAs is a silly notion that would adversely affect every site currently using them, as they become swarmed by spammers. Instead of banning them, they should be asking people to use sane, simple CAPTCHAs.

For instance, on a forum I run for a group in a game, I use a form of CAPTCHA that has people drag words into categories. As an example, if our group name was "Guild X of Y", I might make the categories "Words in our group's name" and "Words not in our group's name", then ask them to categorize the words "Guild", "Elephants", "X", "Tree", "Honor", "Plus", and "Ocean". I have about two dozen sets of categories and words configured, and so far it's had a 100% success rate at stopping spammers from registering. It's also made it easier for people to register, since the number of e-mails and other off-forum messages I've received complaining about the difficulty of the CAPTCHA has dropped to 0 while registrations have actually picked up.

Such a system would obviously not work for Google or someone that large, since a spammer would just train the bot to know all of the answers, but for smaller sites, there are plenty of solutions that work just fine, and I'm sure we can find more systems that are simple for a human but complicated for a computer. No need to make something that's so complicated for a human to solve.
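The drag-words-into-categories scheme described above reduces to a simple set comparison on the server. This is a hypothetical sketch, not the poster's forum plugin; the challenge data and function names are invented for illustration.

```python
import random

# Each challenge pairs an "in the name" set with decoy words.
# These sets are made-up examples mirroring the "Guild X of Y" story above.
CHALLENGES = [
    {
        "in_name": {"Guild", "X", "of", "Y"},
        "decoys": {"Elephants", "Tree", "Honor", "Plus", "Ocean"},
    },
]

def make_challenge(rng=random):
    """Pick a challenge and return it with its words shuffled for display."""
    challenge = rng.choice(CHALLENGES)
    words = list(challenge["in_name"] | challenge["decoys"])
    rng.shuffle(words)
    return challenge, words

def check_answer(challenge, words_user_put_in_name):
    """The user passes only if the words they dragged into the
    'in our group's name' category match the expected set exactly."""
    return set(words_user_put_in_name) == challenge["in_name"]
```

With a couple dozen challenge sets, a generic spambot that has never seen the site has no better strategy than guessing a subset, which is why this works for small sites even though it would fail at Google scale.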

I recently started getting hundreds of spam signups a day on my site, so I installed a CAPTCHA to prevent that. I set up a standard image CAPTCHA with a plugin for the CMS. More than 80% of the spam signups just walked right through it. Then I changed to an ASCII art CAPTCHA and I haven't had a spam signup since. The ASCII art CAPTCHA is also much easier to read than weird image CAPTCHAs.

Instead of a CAPTCHA, show them two posts and ask them to indicate whether none, one, or both of them are spam. Behind the scenes, one is a post you already know for sure is good or spam, and one you don't know about.

You can use the responses to rate users (how effective is this user at rating posts, based on how well they do identifying spam?) and posts (how likely is this post to be spam based on what users say about it?). Bad users and bad posts get booted from the system.

Anyone using widespread bulletin board software will know that despite hard CAPTCHAs, spammer accounts are registered like crazy.

I include a small set of questions and answers relative to the interests of those who would visit the board. E.g., for Slashdot:

Complete the following sentence:
[randomly select from sentences]
"TFA" is an acronym meaning "The _______ Article". (7 letters)
Another alias for "Anonymous Coward" is "________ Dweller". (8 letters)
--etc--
Prior to instituting this simple questionnaire there were usually hundreds of spammers a day. Afterwards? None.
This is actually trivial to solve, indeed I don't even use the session token as a seed for creating new mappings between the numeric question ID, and the answers. So, a diligent spammer could simply collect all the questions then add the responses to the bot... Only THEN would I escalate to the code I've already written that does the randomized mappings, after first swapping in a new set of questions / answers.

But why?! Why wouldn't I use the MORE secure way right away? Because I'm not a fool. It has to be worth their time to enter an authentication war with me. Let them waste time writing a bot solver first, then immediately have their work become useless. In fact, this has already happened a few times. It's even rarer for spammers to then continue the escalation -- they could just migrate to one of the other boards that is not so hostile, and upon which pre-made automated solvers still work. In fact, I have found good success starting with only a single question. Replace the selection function:
sub random(){ return 4; } # Return truly random number, selected by fair dice roll.
Then I can simply revert to the randomized set of questions to escalate the spammer's coding and deployment cost. Thus, gaining yet another defense at little cost.

Any homogeneous environment has what's called a "Single Point of Failure". This is why sex exists. Combinatorials are a simple way to get some randomness without all the unexpected outcomes that rampant mutations in asexual reproduction would first attempt. Bacteria can use other methods because they've abstracted reproduction from defense: transformation, conjugation, etc. So the uniform use of SSL is stupid, to put it mildly. It could have been like a bacterium: a standardized, abstracted, extensible protocol for defensive encryption. It's not, though; it's stuck with a fixed set of transforms dictated by standards like AES. I mean, virtual machines exist; you're using one to decode font glyphs and Unicode BIDI right now, but not for extensible encryption? How daft. Pervasive use of a single brand of Captcha is equally limiting.

How foolish you humans are to not even learn the most basic of Life's Lessons. Diversity is a defense. When you use science to analyze natural selection's method of Trial and Error, Observation of results and Preservation of favorable outcomes... I bet you don't even make the correlation that Nature invented Science billions of years before you rediscovered it... I bet you don't even realize that's a universal truth inherent to any self improving cybernetic system, from DNA life compilers to C compilers. Ugh. Humans: Can't live with 'em; Can't teach 'em to survive.

I'm neither and they annoy the hell out of me; and those little "validation games" (dump the fish into the bucket, or whatever) are ridiculous time-wasters. I'm also a web developer, so there's that. CAPTCHAs are for lazy web developers to offload the task of anti-bot protection to the user.

Create some dynamic form elements that only display via Javascript DOM and are required by a backend script. Create a per-IP limitation on registrations per 10 minutes. Require a minimum time between form loading and form submission. Require a cookie to submit the form.
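Of the measures listed above, the per-IP limit is the most mechanical. A rough sketch, assuming an in-memory store and arbitrary example numbers (the 10-minute window comes from the comment; the quota of 3 is invented):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600   # the 10-minute window mentioned above
MAX_PER_WINDOW = 3     # example quota: registrations per IP per window

# ip -> timestamps of that IP's recent registrations
_attempts = defaultdict(deque)

def allow_registration(ip, now=None):
    """Return True if this IP is still under its registration quota."""
    now = now or time.time()
    window = _attempts[ip]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_PER_WINDOW:
        return False
    window.append(now)
    return True
```

A real deployment would keep the counters somewhere shared (e.g. a cache or database) rather than in process memory, and, as the comment notes, this is only one layer among several.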

The point is: the more variety of anti-bot systems that exist, the less attractive a target there is for bot makers.

If taking a couple seconds to answer a CAPTCHA is too much effort, I probably don't really care what you have to say in the comment section.

Or a couple of minutes, considering most captchas are illegible.

This!

More and more, captchas take two or three attempts. (Disclaimer: IMHO, I'm not senile, dyslexic, a horrible typist, or blind. Your opinion may vary.)

I suspect some sites are intentionally forcing a fail once or twice, at least occasionally, especially when you enter the word in a timely interval. Bots probably give up after two failures, and they probably answer quickly.

So implementers make it more and more restrictive and throw in bogus failures.

I've been using minteye on my site. It's a visual captcha and works pretty well: you move a slider back and forth to unscramble an image.

I never heard of it, and upon googling it, their own website couldn't get past my NoScript. So right there, a significant and growing number of customers would be turned away.

But I wonder if that would remain effective; after all, bots already exist to recognize letters in images. (Those bots existed before captcha.) So as soon as minteye becomes popular it will be bot-stormed.

I've also seen the word games, these are fairly unique as well. But I'm not sure they couldn't be attacked as soon as they become popular. It almost seems that obscurity is the best we have these days.

Essentially, the guy realized that JPEG pictures with distortions should have a completely different size than the undistorted picture, yet all pictures delivered by minteye were of identical length. He figured they were padding the files with zeros, and he was right. By counting the number of zeros at the end of each file, the local maximum/minimum of padding identified the correct file. He wrote a few lines of javascript, and it was broken.
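The padding attack described above needs only a trailing-zero count per slider frame. A sketch in Python rather than the original javascript; whether the correct frame is the padding maximum or minimum depends on how the distortion changes compressed size, so picking the maximum here is an assumption.

```python
def trailing_zeros(data: bytes) -> int:
    """Count zero bytes at the end of a file's contents."""
    count = 0
    for byte in reversed(data):
        if byte != 0:
            break
        count += 1
    return count

def pick_correct_frame(frames):
    """Given the raw bytes of each slider position (all padded to the
    same total length), return the index whose padding count is the
    extreme -- assumed here to be the maximum, i.e. the frame whose
    real JPEG data is smallest."""
    paddings = [trailing_zeros(f) for f in frames]
    return max(range(len(frames)), key=paddings.__getitem__)
```

The point of the exploit is that no image recognition is needed at all: the server leaks the answer through file sizes it tried to hide with padding.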

These are only first impressions, but it looks ridiculously easy to solve automatically.

First of all, the warp angle jumps significantly more before and after the "correct" image than between other images, so a fairly simple block tracking algorithm would have a very good chance of identifying the correct image.

I have no mod points, so I must say that if everyone had that same reflex you just displayed, of checking one's assumptions when it's trivial to do so, humanity would be conquering the universe at this point.

I'd be curious about what "technical measures" you are talking about. There are some "universal IDs" that help to filter out some of the spam, but it still can slip through in a way that Captchas help prevent. There is also something philosophically wrong with trusting in some huge 3rd party vendor like Facebook, Microsoft, or Google to be processing authentication on your website, not to mention concerns about the NSA tracking everybody who is logging into your website as well.

Again, I'd be curious about what technical measures you are talking about.

The NSA and its friends already track who logs into your website (or at least the IPs that do) so I wouldn't worry about that one too much.

One technical measure that has been floated recently is the idea of using Bitcoin. What you do is provably sacrifice some bitcoins to miner fees, thus creating a kind of anonymous passport. That proof of sacrifice has public keys embedded in it to which you own the private keys, and it was provably expensive to create. So the idea is that you sign up with your passport and then if you misbehave, it can get added to a blacklist kind of like how Spamhaus blacklists IP addresses. Now you can set the cost of abuse to a precise degree. Good users only have to pay once and can use the same passport for years. Abusers find their business models are unprofitable.

Unfortunately the software and protocols for that aren't implemented yet.

Adding rel="nofollow" to any links provided by your untrusted commenters is a good start. It's a promise that Google and other search engines won't do any indexing or page ranking based on the href in the same tag.

Spammers have a pretty common M.O. They sign up with an account and use their spam link as their "home page". They then pollute the blog. The obvious spam is repeated variations on the same topic, and looks like "brand name products, products brand name, brand products name,..."

Lately, link spam is done with a flattering but generic message that looks like it came from a non-native speaker: "I thanking you for your keen insight, have you other similar articles online? I would like to know more how you come to know this." An unwary site operator will often mistake the flattery for a conversation, and allow the spammer to remain a user. (The flattery is script-generated, by the way.) Their "home page" is often a dummy "news portal", which is just replaying whatever feeds they can get. The trick is this news portal has lots of links to the sites the SEO is trying to push.

While rel="nofollow" will render their efforts to associate their spam with a legitimate blog completely wasted, there are two negatives. First, unless the spammer knows it's there, they're going to spam you anyway. Second, it takes away your contribution of "linkiness" for your legitimate users' links to Google's pagerank algorithm. You can fix this with extra work like "probationary" and "full" users, but then you're taking on the task of rating your readers, which may be Sisyphean on a site the size of Slashdot.

There are bots that can automatically register on a site, then check the email account for the activation link, in order to start spamming, so that's not a solution.

The newer 'flash games' e.g. 'out of 5 objects, put the drinks in the cooler' are an interesting solution, but that probably still won't work for people with accessibility issues.

Moderation can work on sites like slashdot, but on lower traffic sites not so much, and the signal to noise ratio will be awful.

If Australia passes this and actually clamps down on 'offenders', it will do more harm than good, as the only recourse webmasters will have is to not allow people to register/interact with the site, because the cost of cleaning up spam will be too high.

It is possible to train an algorithm to recognize CAPTCHAs; even if the success rate isn't 100%, it is high enough to enable bots to register on websites with CAPTCHA. So Australia is only pushing people to find better solutions than CAPTCHA. In the short term, a large number of spammers will rely on optical recognition algorithms to decipher CAPTCHAs anyway.

True, but I think the OP's point is that those smart bots are not that frequently encountered. We know it can be beaten, but in everyday life it is still not common to encounter such bots, and even when you do, you end up blocking 98% of the bots.

As those bots become more common, captcha will become less and less useful. It's a self-solving problem that probably doesn't need any help from government, because government will invariably impose something more stupid and useless.

Bad guys run some pretty high traffic sites that oddly enough, require captchas. Their client bots forward the real site captcha to the bad-guy site, which delivers it to a human who wants access to the bad-guy site and answers it - which answer is passed back to the bot and submitted to the legitimate site in real time. They also compromise legitimate captcha-secured sites for the same method. It's the Mechanical Turk method of defeating CAPTCHA. Machine learning of text recognition is not required.

Agreed, my systems (combined) are hit every 3 seconds by spammers and hackers.
While people may hate Captcha, webmasters do as well; until we have something that works at least as well, it stays, along with my other levels of fighting spam. It's imperfect, troublesome, and a hassle at times, but it's still one of the more effective anti-spam systems out there.

And no, I will not let you login from Twitter or Facebook or any other junk, that opens up a whole new host of issues.

There's an obvious measure: don't allow untrusted users to provide links at all, and sanitize their data (server side) to mangle any protocol prefixes in their text, for example by adding a space before any text matching "://", so that "http://", "https://", or "mailto:" links come out broken. No search engine will try to follow those. You are already sanitizing your inputs to restrict users from posting bad stuff like javascript, right? This is just one more thing to check.
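The mangling trick above is a one-liner; a sketch, with the function name invented for illustration:

```python
import re

def defang_links(text: str) -> str:
    """Break URL schemes in untrusted text by inserting a space
    before every '://', so 'http://x' becomes 'http ://x' and
    neither browsers nor crawlers treat it as a link."""
    return re.sub(r"://", " ://", text)
```

This deliberately keeps the text readable to humans while making the link useless to the spammer, which is the whole point of link spam.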

Offloading some of the responsibility to you as a human co-processor is an effective tactic called Share The Pain. It's not stupid, it's genius. You just don't favor the end result. You can always vote with your mouse and go to another website.

Yes it is stupid. I understand that spam is a problem, but if you run a website, it's *YOUR* problem. CAPTCHAs make it *MY* problem and that's just stupid.

You assume the website needs you more than you need it. For the standard commercial "wall of ads with some random content between" site, sure, what you say holds true.

For a lot of smaller interest-group-themed sites, usually run by a handful of non-IT-gurus, put bluntly you need them more than they need you, and they don't have a full-time body around to read through all new posts to purge the spam.

Now, personally, I prefer the "math word problem" style CAPTCHAs - Because not only do they not discriminate against the blind or the old, they effectively keep out the spam and the stupid. Win-win!

What if I want my users to be able to post the form more than 50 times per day? Cooldowns and caching just won't do it. The only real alternative I see is to hide the form behind a login, which in the end is more inconvenient for the end user than a user-friendly captcha.

There are simple ones that are easy on the eye out there ( like slashdot's ), and you can make your own quite easily as well. There is one widely used one, reCAPTCHA I think, that is just awful and should be avoided.

Unfortunately it's not all coming from a single IP address - there are literally thousands out there - and any one would only post as regularly as a standard user, with randomized text from large templates. You stop them at various layers - DNSBLs, CAPTCHAs, form entry field checks, link checks, specific spam text . . .

Add some fields which start out as regular text fields but then hide them with Javascript. You can give them labels or default values like "Don't change this" in case someone doesn't have Javascript enabled. Give the real fields in your form random names. For the hidden fields, give them names like "subject" or "comments" or "url" (don't use common names for personal info like "email", "fname" etc that the browser might automatically fill out). When they submit the form, check for values in those hidden fields (either any value at all, or a value different than the default). If they are filled out, reject the form. Hiding the fields with Javascript will work for virtually everyone and it doesn't require real people to do anything extra. This will fail against bots that bother to actually render the page or bots that specifically target your site (which can be remedied if you randomize all field names and store the random names in the session to match them up when the form gets submitted), but those are far less common than bots that just get the HTML and parse it to look for form actions and field names.
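The randomized-field-names refinement at the end of the paragraph above can be sketched as two server-side steps. This is a hypothetical illustration: the field lists, prefixes, and function names are made up, and the dicts stand in for whatever your framework's session and form objects are.

```python
import secrets

REAL_FIELDS = ["username", "message"]
# Decoys carry the "tempting" names and are hidden via CSS/JS in the page.
DECOY_FIELDS = ["subject", "comments", "url"]

def issue_form(session):
    """When rendering the form: map each real field to a random name
    and remember the mapping in the session."""
    mapping = {field: "f_" + secrets.token_hex(8) for field in REAL_FIELDS}
    session["field_map"] = mapping
    return mapping

def parse_submission(session, form):
    """When the form comes back: reject if any hidden decoy was filled
    in, otherwise recover the real values via the stored mapping."""
    if any(form.get(decoy, "").strip() for decoy in DECOY_FIELDS):
        return None  # a bot filled a field no human could see
    mapping = session.get("field_map", {})
    return {field: form.get(name, "") for field, name in mapping.items()}
```

Because the real names change per session, a bot that scraped the form's HTML once cannot replay submissions later, and a bot that parses field names heuristically walks straight into the decoys.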

Defeating a human reading the source code is not the point. The point is to defeat a bot reading the source code. Another solution that was pointed out was to use CSS to target the hidden element's parent and hide that through regular CSS, which would eliminate the Javascript. Now you're talking about a bot that renders the entire page and fills out the form visually, which is not common (if done at all).

If you don't know any alternatives, you shouldn't be administering them.

Yeah, I guess the folks at Google, Yahoo, Microsoft, Amazon etc don't know what they are doing either. Captcha is used because there is no real alternative if you want anonymous form submissions on your site. There are certain measures we can put in place, in certain contexts, but no catch all one size solution.

Google et al don't rely on CAPTCHAs exclusively, at least not for important things. Google accounts uses phone verification driven by some very sophisticated analyses of the signup data. You can actually choose to skip the CAPTCHA on Google signup if you like, phone verification is used as a replacement.

I will create a single sign on service where you pay $1 to sign up. If someone reports you as a spam bot, you will be disabled until you pay me another $1. I will take the money and give a small percentage to some charities (EFF probably) and keep the rest as server and administration costs.

If people want to spam or create fake accounts, it will cost them a lot more than just having some guy answer 1000 Captchas for a buck. I could track where I get the money from to locate the spammer's accounts.