Pickens writes "NPR reports that NASA's Lunar Reconnaissance Orbiter is doing such a good job photographing every bit of the moon's surface that scientists can't keep up, so Oxford astrophysicist Chris Lintott is asking amateur astronomers to help review, measure, and classify tens of thousands of moon photos streaming to Earth using the website Moon Zoo, where anyone can log on, get trained, and become a space explorer. 'We ask people to count the craters that they can see ... and that tells us all sorts of things about the history and the age of that bit of surface,' says Lintott. Volunteers are also asked to identify boulders, measure the craters, and generally classify what is found in the images. If one person does the classification — even if they're an expert — then anything odd or interesting can be blamed on them. But with multiple independent classifications, the team can statistically calculate the confidence in the classification. That's a large part of the power of Moon Zoo. Lintott adds the British and American scientists heading up the LRO project have been randomly checking the amateur research being sent in and find it as good as you would get from an expert. 'There are a whole host of scientists ... who are waiting for these results, who've already committed to using them in their own research.'"
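The "multiple independent classifications" idea in the summary can be sketched in a few lines: combine volunteer labels by majority vote and report the agreement fraction as a rough confidence score. This is a simplification for illustration; the function name and the labels are made up, and the Moon Zoo team's actual statistics are surely more involved.

```python
from collections import Counter

def classification_confidence(labels):
    """Given independent volunteer labels for one surface feature, return
    the majority label and the fraction of volunteers who agree with it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

# Hypothetical example: six volunteers classify the same feature.
label, conf = classification_confidence(
    ["crater", "crater", "boulder", "crater", "crater", "crater"])
print(label, round(conf, 2))  # crater 0.83
```

The key point from the summary survives even in this toy version: one expert gives you a single opinion, but several independent amateurs give you a number you can attach error bars to.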

I'm sorry, but astronomers renamed Uranus in 2620 to end that stupid joke once and for all. It's now called Urectum.

Reminds me of the game Mass Effect 2. They had a little Easter egg in there.

You had to harvest planets for mineral resources in order to have the raw materials for upgrading your equipment. You harvest a planet by orbiting it and sending robotic probes to the surface that presumably bring back the raw materials from their landing sites. When you send a probe down to any planet, your ship's computer (an AI) says things like "launching probe" or "probe launched".

You can visit the Solar System in this game. If you orbit Uranus and launch a probe there, the computer voice says "Now Probing Uranus". It says that only once and it's the only time it says anything other than the standard phrase.

The computer only says "Now Probing Uranus" once, but if you keep trying it eventually says, "Really, Commander?" -- usually somewhere around the third attempt; I'm not trolling you into clicking endlessly.

Why would you expect co-authorship or an email for basic data processing? Meanwhile, they HAVE recognized that I contributed to the detection of seven CMEs, but I am just one of over 210 people who collaborated. Their appreciation was considerate but hardly necessary.

However, I bet the data used to confirm the solar prominence you referred to didn't come from them; AFAIK they are only working on historical CME data. The "Spot" and latest "Incoming" data aren't even being applied yet.

A: They're stored with those provided by everyone who comes to Moon Zoo. The Moon Zoo team will carefully analyse the results to make sure that collectively we're producing results that are useful to scientists -- keep an eye on the Moon Zoo blog for details. All results will eventually be made public for anyone to use.

I think the problem here is that it is all take and no give. Categorize our images for us! We'll give you the data "eventually". Crazy idea, how about doing the statistical correlation of multiple contributors in realtime and display that information on an overall map of the Moon so there's some sense of progress at the task.

Crazy idea, how about doing the statistical correlation of multiple contributors in realtime and display that information on an overall map of the Moon so there's some sense of progress at the task.

Sure sounds good in theory but it's much easier to have a bunch of people skew the results if they're posted in real time. Imagine the Colbert Report picking up on this and deciding to tell the viewers to classify every crater as being Stephen Colbert's age... It'd make the automated process much harder and they'll have to spend much more time combing the skewed results.

I think the problem here is that it is all take and no give. Categorize our images for us! We'll give you the data "eventually".

It sounds a bit childish, really. How can you say it is all take and no give, and then immediately say that they WILL be giving you the results, just not until it has gone through that pesky scientific process? <WHINE>But I want it now!</WHINE>

What is the problem with waiting for the right answer? Zakabog has already pointed out that a real time display could be used maliciously, but it could even skew the results by well-intentioned people. If the first person who submits a result for a given region makes a mistake, then the next person who analyses that region might compare their results with the first and "correct" their own mistake. If you use statistics to build confidence in the results then the last thing you should do is tell the subjects what you are currently expecting them to do. That only uses statistics to compound errors.
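The "compound errors" worry is easy to demonstrate with a toy simulation. Everything here is hypothetical: each volunteer is assumed right 70% of the time, and an "anchored" volunteer copies the displayed running majority half the time instead of judging independently.

```python
import random

def simulate(n_volunteers, p_correct, follow_majority, seed=0):
    """Each volunteer independently answers correctly with probability
    p_correct; if follow_majority is set, a volunteer instead copies the
    currently displayed majority half the time (anchoring on earlier input).
    Returns the fraction of correct answers (True = correct)."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_volunteers):
        if follow_majority and votes and rng.random() < 0.5:
            votes.append(votes.count(True) >= votes.count(False))
        else:
            votes.append(rng.random() < p_correct)
    return votes.count(True) / len(votes)

independent = simulate(1000, 0.7, follow_majority=False)
anchored = simulate(1000, 0.7, follow_majority=True)
# Independent volunteers stay near the true 70% rate; anchored volunteers
# amplify whatever the early votes happened to say, for better or worse.
```

In the anchored case the answers are no longer independent samples, so the usual "average many noisy observers" statistics quietly stop applying -- which is exactly the grandparent's point about compounding errors.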

Because it is basic game theory? If you want the little hamster to keep running around the little wheel, you give him a cookie to work for. If he gets little nibbles of the cookie he'll work HARDER trying to get more cookie, thus giving you more work. Hell, nobody is saying they have to give them the actual recorded data in real time; just throw the monkey a reward for pushing the button. Maybe something that ONLY shows how you are doing? Surely that would discourage the cranks while giving the hamster a reason to keep running the wheel.

Galaxy Zoo [galaxyzoo.org], which pioneered this kind of crowd-sourced classification, seems to disprove that need. Most of the people are astronomy fans, and the joy of looking at raw telescope pictures was reward enough. Eventually they did add a list of previously viewed galaxies, and let you mark your favorites for later viewing. Besides, it had 250,000 users. Get each person to look at 20 galaxies on average (just a few minutes' time, easy to do) and you have 5 people looking at each of 1 million galaxies.
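For what it's worth, the arithmetic in that last sentence checks out:

```python
users = 250_000
galaxies_each = 20                     # a few minutes of clicking per person
galaxy_count = 1_000_000
views_per_galaxy = users * galaxies_each / galaxy_count
print(views_per_galaxy)  # 5.0
```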

Aside from the other respondents pointing out issues with this idea, I can tell you that with the Solar Stormwatch program they run, our data was compiled and recognized sooner than I expected (my participation, along with that of over 210 other collaborators, confirmed seven CMEs).

scientists are citizens too, you know. amateur scientists are not scientists, however.

Generally the difference between a skilled amateur and a professional is that the professional is getting paid. Of course there are unskilled amateurs, but for that matter there are also unskilled professionals.

Anyone who follows and correctly applies the scientific method is a scientist. Money changing hands has nothing to do with it. Think about it, if it were otherwise then why would NASA bother to solicit the input of amateurs for a scientific project?

A scientist, in the broadest sense, is any person who engages in a systematic activity to acquire knowledge or an individual that engages in such practices and traditions that are linked to schools of thought or philosophy.

Surely if they do this, then it doesn't matter that they aren't paid or haven't been formally trained in a scientific field. There are limits to what you can achieve without an education, but what defines a scientist is the search for knowledge, not already having knowledge.

Surely if they do this, then it doesn't matter that they aren't paid or haven't been formally trained in a scientific field. There are limits to what you can achieve without an education, but what defines a scientist is the search for knowledge, not already having knowledge.

I'd argue that the scientific method is what makes a scientist, not "systematic activity to acquire knowledge".

Otherwise you end up with crap like "creation science" which starts with premises that ignore observed/tested facts and then runs off giggling into fantasy land.

A great example of this is the same organization's "Solar Stormwatch" program: frequently people will ask in the forums for confirmation of their interpretation of something they've seen. Someone with experience can say "that is X", mark it, or ignore it, as appropriate.

The purpose is to improve the signal-to-noise ratio, which increases the productivity of researchers.

How long has GalaxyZoo been around? Longer than SETI@Home? It's more likely both projects took the hint from how SETI@Home processes data. As another commenter correctly pointed out, these two projects do with spare eyes and brain cycles what SETI@Home does with spare CPU cycles, and all of them rely on having multiple redundant results for the same dataset to verify the integrity of the result. It's not exactly rocket science to figure out such a technique would be useful, but SETI@Home has been around for much longer.

I do the same thing occasionally with GalaxyZoo (www.galaxyzoo.org). After being trained you classify galaxies. The second version is much better than the first iteration and goes into more detail. I like the "progress indicator" idea in the post above, but see no practical way for it to work.

...would be to use the statistically-validated user input in a feed-forward image recognition neural network utilizing error feedback that would "learn" to identify the various features of interest. Use edge detection to identify the features of interest (for instance, by number just like a paint-by-number canvas), and have users "identify" what they see. We're talking about invariant scale here, which vastly simplifies the learning process as well as automated feature measurement.

I was doing this in the '90s using multi-band spectral imagery from LANDSAT with good success. I would imagine there have been some advances in this area since that time.
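For the curious, here is a minimal sketch of the kind of feed-forward network with error feedback the parent describes, trained on synthetic feature vectors standing in for statistically-validated volunteer labels. Nothing here reflects the actual LANDSAT-era pipeline; the data, architecture, and rate are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 4-number feature vectors (think edge-detection
# summaries of an image region) labelled by validated volunteer votes.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(scale=0.5, size=(4, 8))     # input  -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))     # hidden -> output weights

for _ in range(3000):
    h = sigmoid(X @ W1)                      # forward pass
    out = sigmoid(h @ W2)
    delta2 = (out - y) * out * (1 - out)     # error feedback at the output
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # error propagated backwards
    W2 -= 2.0 * (h.T @ delta2) / len(X)      # gradient-descent updates
    W1 -= 2.0 * (X.T @ delta1) / len(X)

accuracy = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == y))
```

The scale invariance the parent mentions matters because it keeps the feature space small: the network learns "crater-shaped" once instead of once per crater size.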

...would be to use the statistically-validated user input in a feed-forward image recognition neural network utilizing error feedback that would "learn" to identify the various features of interest. Use edge detection to identify the features of interest (for instance, by number just like a paint-by-number canvas), and have users "identify" what they see. We're talking about invariant scale here, which vastly simplifies the learning process as well as automated feature measurement.

I was doing this in the '90s using multi-band spectral imagery from LANDSAT with good success. I would imagine there have been some advances in this area since that time.

Actually, since the '90s people have largely switched from using neural nets to support vector machines [wikipedia.org] (or maybe a restricted Boltzmann machine [wikipedia.org]). ;) I do agree that it'd be an interesting training set for a machine learning algorithm, though.
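A from-scratch illustration of the SVM alternative, using the Pegasos subgradient method on synthetic 2-D features (a real image pipeline is out of scope for a comment, so the data here is invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for labelled image features; labels are +/-1.
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] - X[:, 1] > 0, 1.0, -1.0)

# Linear SVM trained with the Pegasos stochastic subgradient method:
# minimise (lam/2)*||w||^2 + average hinge loss.
w = np.zeros(2)
lam = 0.01
for t in range(1, 5001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)                  # decaying step size
    if y[i] * (X[i] @ w) < 1:              # margin violated: hinge term active
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1 - eta * lam) * w            # only the regulariser pulls on w

accuracy = float(np.mean(np.sign(X @ w) == y))
```

The appeal over a neural net is the convex objective: no luck with initial weights required, and the max-margin solution is unique.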

You'd think that would be the case, but there are several reasons why humans are a better solution to this than a computer program:

1. Recognition like this requires complex interpretation. Computers might be able to interpret the images, but you have no way of validating that interpretation, and computers are pretty literal about it anyway. Multiple humans with cross-checked results are going to give you (by and large) more accurate results. If we can't manage it with OCR of clearly-written and cleanly-scanned text, we can't expect it here.

Why not just use a computer to count craters? The current algorithms for optical recognition should work rather well for 'find circles'. Not that it isn't nice that they're involving us normal folk in their fancy science, but this is the sort of mundane task that computers are made for....

You might want to check out some of those pictures before jumping in with speculations.
Craters are lit from various directions, depending on the latitude, longitude and Sun position. This sort of imagery needs a human mind to process it correctly. Furthermore, it's not only about "counting craters", but about identifying other interesting features (such as crater boulders, artificial structures, linear features, mounds and so on). Plus, images have varying degrees of clearness (I found some corrupt images as well; pity you can't report them). The "Boulder Wars" minigame itself is rather interesting too.
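To be fair to the grandparent, "find circles" is a real technique (the Hough transform), and it works nicely on a clean synthetic edge image. The catch is the lighting problem: real moon photos never give you edges this clean. A toy demonstration on a fake crater rim (all numbers invented):

```python
import numpy as np

# Draw a synthetic "crater rim": a circle of radius 10 centred at (30, 40)
# in a 64x64 binary edge image (a stand-in for an edge-detected moon photo).
H = W = 64
cy, cx, r = 30, 40, 10
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
edge = np.zeros((H, W), dtype=bool)
edge[np.round(cy + r * np.sin(theta)).astype(int),
     np.round(cx + r * np.cos(theta)).astype(int)] = True

# Hough transform for circles of known radius: every edge pixel votes for
# every centre that could have produced it; the true centre gets all votes.
acc = np.zeros((H, W))
ys, xs = np.nonzero(edge)
for py, px in zip(ys, xs):
    yc = np.round(py - r * np.sin(theta)).astype(int)
    xc = np.round(px - r * np.cos(theta)).astype(int)
    ok = (yc >= 0) & (yc < H) & (xc >= 0) & (xc < W)
    np.add.at(acc, (yc[ok], xc[ok]), 1)

found = np.unravel_index(np.argmax(acc), acc.shape)
# The accumulator peak lands at (or next to) the true centre (30, 40).
```

With oblique lighting, half the rim is in shadow and the edge map fills with gaps and spurious arcs, so the accumulator peak gets ambiguous -- which is where the human eye still wins.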

...these citizens could volunteer their time and skills in their community and actually make a fellow human's life better on this planet. While this might be a good ploy to pique the interest of some students, I'm trying to figure out how this effort won't be moot in a few years, when computer image recognition/analysis software can do the same task much more efficiently.

Free help is pretty effective. You're helping to educate the public. You're involving the public in the resulting science. This is a very smart way of doing things. Based on your attitude we should wait on doing anything useful until our machine overlords are here and doing it for us, doh!

This is exactly the sort of mind numbing work grad students should be doing for a pittance. This will put them out of work! We are not providing the right incentives to create our next generation of scientists.

And why not use computers? Lintott says they can only identify what they are programmed to look for, and might miss the unusual.
"Computers don't make discoveries," he says. "They don't point at the thing in the corner and ask the question: What's that?"

Computers can however, identify what they are programmed to look for, and then indicate any areas which have features which they do not recognise. At the very least he should write a filter to parse out the completely typical images before getting the general public to do his work for him.
This guy is either too lazy or cheap to write some image analysis software, or a luddite who doesn't trust computers.
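The "filter out the completely typical images" idea can at least be prototyped crudely: flag tiles whose pixel statistics stand out from the population and send only those to humans. The data, threshold, and function name below are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for image tiles: "typical" tiles are plain noise; one tile
# contains a bright anomaly worth a human's attention.
tiles = rng.normal(0.5, 0.05, size=(100, 16, 16))
tiles[7, 4:8, 4:8] += 1.0   # an unusually bright feature in tile 7

def flag_unusual(tiles, z_cutoff=4.0):
    """Flag tiles whose brightest pixel deviates strongly from the
    population mean -- a crude 'send this one to a human' filter."""
    mu, sigma = tiles.mean(), tiles.std()
    z = (tiles.max(axis=(1, 2)) - mu) / sigma
    return np.nonzero(z > z_cutoff)[0]

flagged = flag_unusual(tiles)
# Only the anomalous tile(s) survive the filter.
```

Of course, this only catches anomalies you can describe statistically in advance, which is Lintott's actual argument: the filter can triage, but it can't point at the thing in the corner and ask "what's that?".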

It's a great way to learn about the various images/data being captured, both in our solar system and beyond, while actually contributing something to the scientific community. There is something extremely exciting about watching a clip of the sun and seeing a comet appear out of nowhere and zoom around the sun with its tail pointing away. Or being among the first to notice a new solar storm which might affect astronauts in orbit. Or spotting