The public pays for gathering the data, the public should have access to that data. Kinda hard to find fault with that.

I'm sure people with a dogmatic axe to grind will be an annoying, if minor, problem. Creationists regularly mangle papers, quoting out of context and all that. I can't imagine them being pacified by the messy data.

Oil companies, and people who are dead set against the idea that we -might- be changing the atmosphere, will undoubtedly cherry-pick from the data and take things out of context from studies supporting climate change as a theory.

Get ready for an onslaught of mangled data analysis, with data being taken out of context, the results published to some blog, and people making policy decisions based on those blog postings.

the media will focus on the new controversies this will spawn

That's a guarantee. While in theory I welcome this development, I suspect that in practice it will lead to more chaos than before. Not because the data is shoddy, but because some meteorologist will think that running a data set through an Excel curve fitting algorithm is science.

some meteorologist will think that running a data set through an Excel curve fitting algorithm is science.

Nope -- it's only science if you adjust and filter the data first to make it match your truth. Resist releasing your data, though, or others may adjust and filter it in other ways to make it match their truth. Such is science in a world of research driven by political agendas and egotistical arrogance.
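To make the curve-fitting worry concrete, here is a hypothetical sketch (Python with numpy; the "temperature anomaly" series is fabricated noise, not real climate data) of how a naive high-degree fit can manufacture a dramatic trend out of nothing:

```python
import numpy as np

rng = np.random.default_rng(0)

# 30 years of a flat, made-up "temperature anomaly" series: no real trend,
# just noise around zero.
years = np.arange(1980, 2010)
anomaly = rng.normal(0.0, 0.2, size=years.size)

# The naive analysis: fit a high-degree polynomial and extrapolate.
wild = np.polynomial.Polynomial.fit(years, anomaly, deg=9)
# A restrained linear fit of the same noise, for comparison.
line = np.polynomial.Polynomial.fit(years, anomaly, deg=1)

future = 2030  # two decades past the last data point
print(f"degree-9 fit at {future}: {wild(future):+.2f}")
print(f"linear fit at {future}:   {line(future):+.2f}")
# The degree-9 extrapolation dwarfs the linear one -- a "dramatic trend"
# manufactured entirely from noise.
```

Both fits describe the same data about equally well inside the observed window; only the extrapolation exposes the nonsense, which is exactly the step a blog post tends to skip.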

Disclose; when in doubt, disclose more. Anything less in scientific arenas where others can't repeat your experiments is just a symptom of fear, insecurity, and a lack of confidence that your conclusions will stand up to the view and study of many brains (some better than yours, some worse).

Same argument for why FOSS is better - many eyes reviewing (in theory) and rapid fixes.

some meteorologist will think that running a data set through an Excel curve fitting algorithm is science.

Nope -- it's only science if you adjust and filter the data first to make it match your truth.

I don't think that's what he was saying. He's saying this will lend itself to overly simplistic interpretations. Which is a good prediction in climatology, considering what people got out of "climategate."

Agreed. Asimov wrote in the foreword of one of his robot books, "If knowledge poses a dangerous problem, I can't believe that ignorance is the solution." I think it applies aptly here.

Sure, some people will accidentally misuse the data, and others (hopefully fewer) will intentionally misuse the data, but for many, having that data available has a great potential for increasing the understanding we all have.

A mathematician, an engineer, and a computer scientist are the final candidates for the top tech spot at a major corporation. They are summoned one by one to be interviewed.

The mathematician goes to the interview. The person interviewing him is the CEO of the company. Only one question is asked: "What is 1+1?" The mathematician pulls out a pen and paper, makes a few scribbles, and says, "This is proof that 1+1=2!"

The engineer goes to the interview next. The CEO asks him the same question: "What is 1+1?"

Get ready for an onslaught of mangled data analysis, with data being taken out of context, the results published to some blog, and people making policy decisions based on those blog postings.

Hmm... I think you've brought up another valid point: some researchers might take the data, rehash it and publish it as their own, getting credit for it, much as you have taken my point, restated it with minor additions, and got all the mod points for it.

I think you've brought up another valid point: some researchers might take the data, rehash it and publish it as their own, getting credit for it, much as you have taken my point, restated it with minor additions, and got all the mod points for it.

People making bad conclusions from good data is better than making any conclusions from no data or bad data. Good data gives the proper scientists a chance to use logic and reason to correct people. We can't change the minds of creationists because we are not drawing our conclusions from the same 'data'. People believed in global warming because of data, and now deny it because of doubt in the data. They may be impulsive and believe whoever speaks the loudest, but it does imply we can bring them around.

The issue with the FoIA in the UK is a clause that only requires bodies to comply with a request if the cost of fulfilling it is no more than around £450.

I've seen local government abuse this first-hand by claiming that collating the data would take 18 hours and that their FoI officer is paid £25 an hour (18 × £25 = £450), hence the cost of providing the data is too high. Quite why it requires someone paid £50k a year to collate some basic data that they should already have collated anyway I've no idea, but still, they use this excuse, and the information commissioner allows such abuse of it.

So although, as you say, it's a great theoretical win, I believe it'll make no difference in practice either way, given the ease with which public bodies are able to sidestep FoI requests.

You know the multi-billion dollar LHC? Guess what they did their first physics on. Not finding new exotic particles, but proving that what we think we know so far still stands up. Duplicating data is exactly how things get proven and disproven. If Group A and Group B use exactly the same source data there's no possibility of Group B proving Group A's research wrong.

I totally agree. If people just start looking at each other's data instead of verifying it, a lot of mistakes (or fraudulent data [wikipedia.org]) will never be caught.

Also, I have to wonder what the timeline for releasing data is. My research is funded with government money (NIH and NSF) but it can take years to get enough data to make a worthwhile paper. If I have to release my data before then it will hurt my ability to publish papers without getting scooped. You could end up with a whole cottage industry of people just mining the data others have had to disclose. And, here's the main catch: if you don't have to release results you haven't yet reported on, the problem isn't solved at all, because I could just choose to "not yet publish" any results that don't agree with what I want to say. Nothing says I ever have to publish results I get, so why wouldn't I just sit on them?

Not that sitting on data just because it doesn't agree is a good thing, but it happens. And plenty of good data goes unpublished (experiments fail, uninteresting results happen, journals rarely publish negative results, etc.), so what about that data? Overall this law isn't going to help anything, and will just cause issues.

If Group A and Group B use exactly the same source data there's no possibility of Group B proving Group A's research wrong.

Wrong. If Group B cannot duplicate Group A's analysis of the data, that proves that Group A did something wrong and probably came to the wrong conclusion.

If Group B cannot duplicate the experiment and get the same data (and knowing that means being able to compare both sets) that calls the experiment as a whole into question.

There is more to science than simply applying equation A to data B and getting number C.

This hubbub all came about because of the difficulty of prying the source data out of the hands of the guy who produced the "hockey stick" figures. It's covered in a book called "Broken Consensus", I think. The "hockey stick" is not the "source data"; the source data is all of the individual readings from all the instruments, prior to corrections for sampling errors or known issues. One cannot verify the quality of the "hockey stick" result without having the source data and being able to verify the processing steps that were applied to it.

The downside to free and open access to all data is that research groups get grants to collect AND process the data to come up with results. Opening the data up for free access means that other groups, who have more interest in scooping than being right, have more ability to do that scooping. That leaves the people who did the work in the cold. There is good reason to delay opening the data until the group being paid to collect it has a chance to use it.

Opening the data up for free access means that other groups, who have more interest in scooping than being right, have more ability to do that scooping. That leaves the people who did the work in the cold.

That is not hard to achieve: someone has to make an FoI request, the cost to prepare the data has to be estimated, someone has to get hired to collect and format the data, and then the data is released. That can take a considerable amount of time... but that's not the only issue. In my field of particle physics, raw data is generally useless unless you understand how it was collected and how to analyse it.

Even assuming that you had several petabytes of disk/tape available to store it, raw data from ATLAS would be completely useless to you unless you really understand the detector "warts and all". Trying to understand this data without access to the detector itself and the ability to test and cross-check ideas by looking at (and sometimes carefully tweaking) the hardware is literally impossible... and that is before you get into the thorny international issues about who did what, and so whether it falls under any one country's laws.

These issues were discussed on a previous experiment I worked on in the US and the conclusion was that it did not serve the public to have data released in just about any form: the raw data was useless and even the processed data still had considerable "quirks" which required understanding (e.g. acceptance drops at detector boundaries etc.). This was aptly demonstrated by a pilot project which resulted in no interest at all from the public but which worryingly attracted a few nutters who were more interested in proving their pet theory than in doing science.

So while I am very sympathetic to the "the public paid for it, the public should be able to access it" argument, I do not think that the public's interest is best served by releasing raw data in all (most?) cases. The best way to serve the public interest is to ensure that results and ideas arising from that research are freely available to all and allow the public to build on them.

It wastes scientists' time that would be better spent analysing the data rather than releasing it, it wastes money collecting and disseminating the data, it pollutes the real scientific results with those of nutters trying to prove their pet theory and, in the case of commercially useful data, it risks having companies use the data to develop something commercially useful that will then be locked away behind patents and the public will be charged through the nose for.

There is also the more subjective, human issue: if you don't let the people who have worked like crazy to get the data have at least the first shot at analysing it, recruiting scientists will become extremely hard, and motivating them to perform large-scale experiments even harder. Why would you bother, if you could just sit around and get the data as soon as it is collected?

Is that bad enough? There are ways you could mitigate some of the above, but the bottom line is that nothing is free: it will cost more money to make the data publicly available and, as a taxpayer myself, I see no real benefit from doing it and some serious potential pitfalls.

"This hubbub all came about because of the difficulty of prying the source data out of the hands of the guy who produced the "hockey stick" figures. It's covered in a book called "Broken Consensus", I think. The "hockey stick" is not the "source data"; the source data is all of the individual readings from all the instruments, prior to corrections for sampling errors or known issues. One cannot verify the quality of the "hockey stick" result without having the source data and being able to verify the processing steps that were applied to it."

I threw away some mod points because it irks me how unskeptical the garden-variety climate skeptic actually is when it comes to accepting the claim that the hockey stick has been discredited. Here are a few points you should consider with your skeptic's hat on...

1. Mann's original hockey stick was published in the journal Nature, which is not well known for publishing shoddy work.

2. A Senate inquisition was held on Mann's paper, in which the National Academies of Science were called in to give expert testimony [nationalacademies.org] on the veracity of Mann's paper. As you will no doubt learn when reading the testimony, the NAS came down firmly in favour of Mann, although they did highlight some minor technical problems.

3. Given that the NAS were able to agree with Mann's conclusions under oath at a hostile inquisition, how did they do so without access to the data?

4. The journal Science is also not well known for publishing shoddy work. So why did they then publish a follow-up study by Mann if they were not satisfied that he had not only addressed the minor technical problems in the original but also greatly increased the robustness of the findings?

5. Why can't I find a listing for a book called "Broken Consensus", which you cite as a source? Shouldn't you at least adhere to your own standards of evidence?

6. Why do people believe that some difficult-to-obtain (i.e. time-consuming) data from a few nations means that the other 99.99999% of the raw data [realclimate.org] available on the web is insufficient to recreate the hockey stick?

7. Why is McIntyre only interested in "auditing" climate science that disagrees with his opinion? Could this be because his own paper did not stand up to the traditional auditing method called "the test of time"?

If the above points do not at least cause you to question your sources, then I can only conclude your skeptic's hat must have slipped down over your eyes...

What if Group B notices that a temperature station reports -12.4 C one minute and +12.4 C ten minutes later? On 2010-Apr-21 22:10, Drifting buoy 48534 [sailwx.info] did just that, and that's an automated report; imagine the fun and games when human error gets added in! There are a lot of bad data points in the records, and the records were never intended for the purpose they are being used for, so quality control is even more critical.
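A minimal sketch of the kind of automated sanity check this implies; the 10 C threshold and the helper name are illustrative assumptions, not any agency's operational rule:

```python
# Assumed threshold: no plausible reading changes by more than 10 C
# between 10-minute reports. Purely illustrative, not an operational rule.
MAX_JUMP_C = 10.0

def flag_jumps(readings, max_jump=MAX_JUMP_C):
    """Return the indices of readings that differ too sharply from the
    previous reading and should be routed to human review."""
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > max_jump]

# The buoy example: a sign flip turns -12.4 C into +12.4 C for one report.
buoy = [-12.6, -12.5, -12.4, 12.4, -12.3]
print(flag_jumps(buoy))  # the flip is flagged going in and coming out: [3, 4]
```

A rule this crude only flags candidates; deciding whether a flagged reading is a sign flip, a sensor fault, or a genuine front still takes the human eyes the parent comment is asking for.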

Examining old data has one value and one value alone - verifying that the claim made for the data matches up with the data. [...] Access to raw data for any other reason is pointless.

Hardly. One could analyse the raw data looking for something other than what the original researchers were looking for. There might even be some interesting signal buried in the data that original team, focusing on something else, disregarded as noise. Minute timing errors in, say, solar wind data returned from a spacecraft might turn up some oddity of orbital mechanics, for example. The researcher focusing on the sensor data rather than the timestamps will miss it, but it's all part of the raw data. How many biologists discarded moldy Petri dishes as ruined, without recording that, before Fleming thought to investigate why bacteria didn't grow near the mold?

So the question is, is it possible to request data on ANY publicly funded research going on in the UK?
What about research on SILEX (http://en.wikipedia.org/wiki/Silex_Process), a laser-based uranium enrichment process that is much more efficient than other enrichment processes (and currently very, very classified), or military research?

No. Classified data is still protected by law, even if funded publicly and researched at a public university. Courts are extremely unlikely to ever decide that classified data should be blindly released for any reason, and the public nature of the funding behind it would not be grounds for release.

There is no question that having the data released eventually should be the rule. It shouldn't even be considered proven science until it can be thoroughly recreated.
However, the tricky bit is mandating exactly when it must be released. If a lab has spent a long time, let's say 10 years, accumulating some hard-fought data, they should be allowed the benefit of a few publications before releasing it all, so that better (likely privately) funded labs can't swoop in with easy, rapid analysis.

If a lab has been spending my tax money for 10 years, I want my employees to give me my data right Goddamn now.

Okay, but does that mean you should get to see the data before they're done analyzing it, before they can write a paper on their results? If we instituted such a rule, there would be nothing to stop scientists from bombarding their competitors with FOIA requests, and using the released data to scoop them. At the very least we'd need embargo rules, but even that won't entirely prevent abuses of the system.

Not really. Your problems with these possible situations are based on the deeply flawed system we have in place now.

Give academics the respect and credit they deserve for collecting vast quantities of high quality data rather than merely for the 2 page paper they write about some interesting statistical anomalies they found in said data and this ceases to be a problem.

The way papers are written, reviewed and published today, and the way academics are given credit, is based on a system hundreds of years old, from when it was costly to print hundreds of pages of boring figures.

Now data is cheap beyond words. Publishing a few hundred words or a gigabyte is little different when your audience is fairly small. The way academics publish should reflect that, but the system is too hidebound and dogmatic to change.

A professor who does nothing but produce a high quality and hard to acquire dataset deserves credit even if he comes to no conclusions at all.

The problem is with the system and with the way academics think, not with this possible change.

Give academics the respect and credit they deserve for collecting vast quantities of high quality data rather than merely for the 2 page paper they write about some interesting statistical anomalies they found in said data and this ceases to be a problem.

The problem is that interpreting raw scientific data is enormously time-consuming, because there's so much information available that we can't possibly assimilate it all. I have a PhD in biochemistry and advanced training in crystallography, but I couldn't look at a ribosome structure and easily figure out what it meant, because I don't know very much about ribosomes. The people solving the structure, on the other hand, have exactly the background necessary to perform detailed analyses, and they will undoubtedly notice things that completely escape me. And I think you're understating the value of the scientific literature. A 2 page paper on statistical anomalies won't get you a faculty position at a major university, but a well-written 10 page paper on the meaning of a crystal structure certainly can. This is even more the case if they took additional time to perform non-crystallographic experiments to verify new hypotheses.

I don't deny that there are issues with our system, but you're completely missing the point of writing papers. Simply generating massive amounts of data isn't considered science - figuring out what it means is. I say this as someone who is very good at generating data quickly, but not particularly good at interpreting it. Now I write data analysis software instead, and leave the question-asking to more suitable minds.

The original objection was that if the data is hard to come by then it's unfair to academics who wouldn't get the credit after gathering the data.

Of course simply generating massive amounts of data isn't science but it is a very very very important part of science.

Is an academic who can write that well-written 10 page paper on the meaning of a crystal structure any less mentally capable because he didn't have the funds or facilities to gather the data he's looking at?

Simply generating massive amounts of data isn't considered science - figuring out what it means is. I say this as someone who is very good at generating data quickly, but not particularly good at interpreting it.

Spot on. I have a PhD in Comp. Sci. (Multi-Agent Systems / Market-Based Control). One of the things you learn (maybe in your university degree courses, or at your first paper presentation) is that data does not mean *anything*; what matters is the interpretation of that data.

Nevertheless, I am of the opinion that the programs used for the generation and manipulation of such data should also be free and open to scrutiny, especially those developed during the research, as they are also paid for with taxpayers' money.

In the field I am working in now (agent-based computational economics) a lot of people do these so-called agent-based simulations, then write a nice paper about what their simulations showed and try to publish it. The problem is that they keep their code! And in that respect they are definitely removing a good chunk of the "methods" part of their research. It is absolutely impossible to duplicate that work without the code.
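To illustrate why the code matters, here is a toy sketch (not any published model; the agent count, step count and price rule are all invented) showing that code plus RNG seed makes an agent-based run exactly reproducible, in a way a prose methods section never can:

```python
import random

def run_market(n_agents=50, steps=100, seed=42):
    """Toy agent-based market: each step, every agent randomly buys (+1)
    or sells (-1), and the price moves with net demand."""
    rng = random.Random(seed)  # the seed is part of the "methods"
    price = 100.0
    history = []
    for _ in range(steps):
        demand = sum(rng.choice((-1, 1)) for _ in range(n_agents))
        price *= 1.0 + 0.001 * demand  # price follows net demand
        history.append(price)
    return history

# With the code and the seed in hand, anyone reproduces the run exactly.
print(run_market(seed=42) == run_market(seed=42))  # True
```

A paper that only describes this model in words ("agents trade randomly and the price adjusts") leaves the seed, the update rule, and the order of operations unstated, and that is precisely the irreproducibility being complained about.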

I work for a government lab that produces DNA sequences. We are obligated to release our data into a public database as soon as it has been verified for any samples that come from the US, and we release most of our foreign data, too, unless the other country involved gets pissy.

Nothing good comes of that speed. We get crackpots thinking they've made major discoveries (not one real one yet), we get scooped for major papers (think Science), sometimes by our own collaborators using only our data and none of theirs, and we generally spend a lot of time, effort and *more money* on media spin control. There is such a thing as releasing the raw data too fast.

We get a *ton* of FoI requests, too - people think we are withholding the good data, or being stubborn by not providing them composite statistics in exactly the format they want to see. The truth is, up until I got involved, the data management technology was so far behind the current bog-standard capabilities of the rest of the world, we couldn't actually answer the questions that were being asked, barring Herculean effort.

Don't get me wrong, I think we *should* be releasing all of this data - delayed by just a bit. That way the people who generate it would have a better shot to get recognition/credit for their work, the crackpots would have less ammo for their rants, the press would be more likely to get the facts right the first time, and the scientific integrity of the whole process would be upheld, as everyone would get the raw data to review. It'd probably save a ton of money.

The "reward" for doing publicly funded research is that you keep getting funded.

Collecting good data is hard work, and the payoff is big publications, which you need if you want to continue getting funded. Once you've got that big publication in your pocket, though, you'd better be coughing up that data set. Otherwise, everything you say is suspect. Kudos to the UK for getting this half-way right, but they'd better set some reasonable constraints on the timing of these required data releases, or face any number of frivolous lawsuits from conspiracy theorists and 'data analysis specialists' who don't want to do any of the hard work themselves...

I don't care one whit what you think you're entitled to: if you're taking my money, you work for me.

I don't care if you are a ditch digger or a particle physicist. Doing all the hard work and getting none of the credit sucks regardless of what we are discussing or who is paying the bills. So put up or shut up. Would you be willing to do all of the grunt work in your job, but take none of the recognition? Most people wouldn't - those are the kinds of jobs that make people go 'Postal'. If you aren't doing it (and even if you are), do you really expect anyone else to?

If a lab has been spending my tax money for 10 years, I want my employees to give me my data right Goddamn now......if you're taking my money, you work for me.

Just stop and think for a second about exactly what it is that us scientists are being paid to do. We are NOT being paid to collect data; we are being paid to figure out how the world works and how to apply that knowledge for the betterment of mankind. The data is a means to that end.

Now, do you REALLY want us to spend a serious fraction of our time and money preparing and making available the raw data, in a form which will probably be useless to you, instead of analysing it and coming up with results?

I don't think you understand how scientific funding works. I am not given a lump sum and then told to go figure something out.
This is how it works in the EU:

I am given a sum of money. This has to be accounted for. There are a number of predefined areas where I can spend this money. During the project I have to fill in time sheets detailing what I'm spending my time on, and all the different work areas have spending limits. I.e. I can't just put more time into community outreach.

my experience of this situation comes from protein crystallography and deposition of the hard won data there

Ah, a fellow crystallographer. Welcome, brother!

I was about to post a similar comment. However, I only agree with you up to a point. Once you publish a paper reporting the structure, all of the raw data should be made publicly available (including diffraction images - although deposition of those isn't quite feasible yet). I would apply the same standard to any other field: you shouldn't publish until you are comfortable releasing the underlying data. I don't care if you're still working on some super-secret follow-up paper, as far as I'm concerned your publication is useless if I can't go to the PDB and download the coordinates. And if you're using public resources to solve your structure (like NIH funding, or one of the DOE's synchrotrons), your results are public property.

There was once intense resistance to even mandating coordinate deposition (long before I got started in the field), which just sounds insane now. Some of the people doing the most complaining were in fact some of the best funded. A decade later, the field went through the same bullshit whining with regard to reflection data. Now most journals require both coordinates and reflections, and not only has the field not suffered in the slightest, many more studies are now possible and the majority of structures can be solved without experimental phasing. If we'd left things the way the naysayers wanted it, every group attempting to study, say, ribosome structure would have to either plead with more senior groups for coordinates in order to solve their structures (and, almost certainly, further bloat the author lists and potentially cede some control over their project - which, I imagine, would have suited the senior faculty just fine), or waste half a decade making heavy metal derivatives. It is difficult to convey to non-crystallographers how huge a waste of time and money - most of it coming from tax dollars - this scenario would be.

Now, where it gets messy is situations where you have to release data ASAP, instead of waiting until publication. American structural genomics groups do this (it may be a requirement of the NIH), but PDB deposition is more of an endpoint in itself for them, and no one is going to bother trying to scoop them on most of those proteins. Genomics centers also do this. A grad school classmate of mine worked on a sequencing project where much of the gruntwork was performed by the DOE, and they had extremely strict release rules. She complained that other groups (of bioinformaticists) could start analyzing the data before she'd had a chance to complete her own studies, because the outsiders didn't have to spend a lot of time thoroughly annotating the genome before publishing. (I don't think it held her back in the end - she graduated with several papers in Science.) In many situations like this, to obtain the data you need to agree to an embargo on publications, to prevent that sort of underhanded behavior. I saw an article retraction recently where the scientific content was undisputed, but the investigators had (unintentionally, it appeared) broken an embargo by submitting the paper when they did.

In general, I think the scientific community - especially the part funded by the public - should err on the side of maximum disclosure of data, and I don't have much sympathy for the researchers in this story (and I'm not particularly sympathetic towards "climate skeptics" either). I do worry that rules will be used to harass researchers in supposedly controversial fields (Richard Lenski's adventures with Conservapedia are a particularly nauseating example), but as a scientist, I also think the benefits of making massive amounts of data available to anyone are far too important to let these risks bother us, and the drawbacks of keeping such data private are much worse than having to fight off the occasional knuckle-dragging lunatic.

The public pays for gathering the data, the public should have access to that data. Kinda hard to find fault with that.

No, it isn't. The fault is that the data may contain sensitive information. The Army collects data about enemies, should that be free access for the public? Nope. (I'm not arguing against making university data public, but your logic is flawed)

Do you think grad students were collecting data in the field on iPads in the 1980s?

Most of the data is probably in the form of moldy old penciled notebooks, core samples, B&W photo negatives and microscope slides. I hate to break it to you, but outside of maybe physics or electrical engineering, experimental data wasn't systematically recorded digitally until 15-20 years ago.

They collated and analyzed their data at the time, published their results in peer-reviewed journals, and that was that.

Unfortunately, Climategate proved that, at least in the field of climate research, "peer review" is worthless; Mann et al were actively conspiring to ensure that only "friendly" eyes carried out the reviews; anyone thought to be showing signs of scepticism was blacklisted, whether individuals or publications.

To add to that, Glaciergate proved that much of what was claimed to be peer-reviewed was actually just regurgitated propaganda, often based on anecdotal evidence (reminiscences of mountaineers).

Except the public who paid for the data isn't the same as the public who are paying the researchers.

Large amounts of the data under discussion are from _foreign_ governments. Additionally, researchers frequently have to sign confidentiality agreements in order to gain access to health records and other data. If that data must then be made public, they won't be granted access to it in the first place.

Who cares? Are you arguing for science, or for little confidentiality fiefdoms?

There is literally no point in doing Science (with a capital S) if the data isn't available for scrutiny by everyone. Without scrutiny, it's all he said/she said, rumours and bullshit.

As to signing confidentiality agreements etc, there comes a time when a researcher has to decide: does he want to contribute to human knowledge (=> don't sign) or does he just want to wank around with secret data (=> sign it)?

It sucks to be unable to use purportedly available data, just because it can't be divulged, but it's better that way in the long run.

Unsupported data is worse than useless, it's a cancer that grows every time someone else quotes the unsupported result, until it gets to the level of unchallenged folk wisdom within the community.

Sure, I'll give you the data. But I wasn't funded to put the data in a format that's easy to understand. I've also got a job, and I don't get paid to support a competitor's data analysis attempts. Good luck.

Sure, I'll give you the data. But I wasn't funded to put the data in a format that's easy to understand. I've also got a job, and I don't get paid to support a competitor's data analysis attempts. Good luck.

Your so-called competitors will be sure to mention your viewpoints when your funding runs out and you apply for more. Not only is your research not easy to understand and you don't let others analyze the data to attempt to reproduce your conclusions, but you think that other members in the scientific community are competitors and you feel a need to sabotage their efforts by making it difficult for them to use taxpayer-funded data to advance science. If science is such a business to you, then how about you fund it all yourself from the profits you make?

Absolutely. The public should have access to the data. Public grants then also need to pay for curating the data. Libraries aren't free, archives aren't free, and packaging data in an actually useful form takes time, which is a scientist's most precious resource. Having data in a form that is useful to the 25 people in your research group is very different from providing data that can be used by thousands of people. It's analogous to the difference between the quick bash script that backs up your movies to your external hard drive, and something you're willing to distribute to 1000 people and support.

Now if only the same rules were applied to the fraudsters who promote evolutionism...

Responding to a troll, I know... but if you really want the data on evolution (as opposed to foaming at the mouth and making up words to make yourself feel better about the mythology you chose that tells you that faith is when you blindly believe while being unable to show any data [Hebrews 11:1, bitches]): http://talkorigins.org/ [talkorigins.org]

I'll probably get flamebait or troll for this too, but this has always been the danger of the over-advocacy of climate change. Climate science is not even close to "settled". Nor is evolution, nor is physics. Well established and able to make verifiable predictions, yes. Settled? No.

The direct result of making the absurd claim that some cutting-edge field of science is settled is this: some complete moron then says "see, global warming wasn't settled, so evolution is bunk too". I've seen similar idiotic comments about plate tectonics as well. A number of years ago (far enough back it hasn't been cached), I wrote here that as scientists, we had better be right about climate change. Now we reap what we have sown.

If it annoys you that idiots make claims like "global warming wasn't settled, so how can you be sure about evolution", look to the strident supporters of the cause. They (I'm talking about realclimate etc., here) are as responsible as Beck. By hammering any and all dissent without any concern as to the validity of the claims, they have made this type of comment inevitable. We will be seeing much more of it and we have only ourselves to blame.

Actually, my initial comment was intended to be humorous, as the AC IMHO did a good job replicating the style of Mr. Beck. That Mr. Beck's way of speaking resembles The Almighty Shatner is a topic for another discussion :).

Making publicly funded non-military research available has nothing to do with privacy. Public money is spent for the public good and there is no justifiable reason to keep it hidden from the public, especially if it's meant for the betterment of society.

if you want your data to be private, get your own privately funded money

Putting data into the hands of people who aren't experts often leads to bad things. See every non-expert who believed the Wakefield study because they didn't understand how to interpret data. In that case kids died, and kids are still dying.

In principle I agree with you, but we live in an era where everyone thinks they are a qualified expert in anything. That simply isn't true, and no good will come of this.

The data won't show a flaw in the study because it wasn't used, but he will inevitably cherry-pick data to "prove" the study is wrong. And people like Hannah Devlin are always happy to publish claims without proper study. So no good can come from this, and people need to understand that.

Science journals have long fought this, because their profit model is strongest when they own copyright and are the exclusive publishers of a paper. Exclusive copyright doesn't mesh well with peer review and scientific principles, though, and many academics have either "published" their papers on their own websites or found other ways to work around the journals.

Ridding peer review and science of copyright would be a great improvement.

no, peer review is good. It helps to point out mistakes or inconsistencies. Getting rid of scientific journals is quasi-good (less profit motive in science, but also less chance to get work out there).

This is still an overestimate, I think. The Wiki says there are 5,758 higher education institutions in the US alone. The entire budget of the Wikimedia Foundation hovers around $10,000,000 a year, which is ~$1,736 per year per institution. We could have a project that costs 10 times as much as Wikipedia, containing most likely more than a hundred times more data, for a measly $17,360 per year per institution. This is about as much as one lucky teaching fellow gets paid. This is such a trivial sum of money for the a
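The back-of-the-envelope numbers above can be sanity-checked in a couple of lines. A minimal sketch, assuming the parent's figures (5,758 institutions, a roughly $10M annual Wikimedia budget) rather than verified data:

```python
# Cost-sharing estimate using the parent comment's (unverified) figures.
institutions = 5758              # US higher-education institutions, per the parent
wikimedia_budget = 10_000_000    # rough annual Wikimedia Foundation budget, USD

per_institution = wikimedia_budget / institutions
print(round(per_institution))        # ~1737 USD per institution per year

# A project costing 10x Wikipedia's budget, split the same way:
print(round(10 * per_institution))   # ~17367 USD per institution per year
```

Small rounding differences aside, the point stands: spread across every institution, even ten Wikipedias cost less than a single teaching fellow.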

This has nothing to do with journals. The data was not available anywhere - not in a for-pay journal, not on a website, not on request. It was the researchers that refused to release the raw data - the publishers have no motivation to suppress its release, because it is the published paper that earns them money, not the raw data.

It turns out that "the data" are measurements of petrified tree rings, which were collected in the course of (presumably) a government grant-funded study. Now Queen's University researchers must compile the data for release because of the (UK) Freedom of Information Act. The scientists quoted in TFA apparently did not use the ring data for anything relating to climate studies, but Keenan has that purpose in mind.

Phil Willis, a Liberal Democrat MP and chairman of the Science and Technology Select Committee, said that scientists now needed to work on the presumption that if research is publicly funded, the data ought to be made publicly available.

That doesn't seem unreasonable to me. Appendices with raw data are often included already in the online editions of journals. Of course, if the ruling applies to all data generated in the course of a study, whether it is used in publications or not, it could be onerous indeed.

Michael Mann used the same tree ring data as temperature proxies for his studies and has published papers on this. But now the very same scientists who collected the tree ring data claim that data cannot be used as a temperature proxies - even though they haven't mentioned a word about how this would invalidate Michael Mann's work.

The NSF has recently taken more of an interest in research data management. They're definitely starting to make it a requirement of grant funding that the research data be digitally stored, backed up, and, after a cooling-off period to allow the principal researchers to publish, made available to the public. I'm working on a research data management group at my university, and the researchers generally seem open to the idea, though they're loath to put in any extra effort to make it work.

Yes it does, kinda. Thanks to our publishing overlords, however, these "making available" issues are more difficult than just putting it online. The data cannot be made available as long as a publishing house holds copyright on it, and the publishing house usually takes copyright on all the work for years, including data that is not directly published by them, especially when the work is or becomes popular. However, NSF/NIH grants usually have the requirement to release all data to the public a couple of yea

We can agree that the whole scientific process does not make much sense if we have to believe in the interpretations without seeing the actual data. From this perspective it is crucial for all scientific data to be open.

The other perspective comes from the individual scientist. It might take years to put together a complete data set of a particular phenomenon via experiments, literature review, digging in the ground or looking at the stars. So after looking for something special you finally discover somethi

OK, why does this argument not also apply to teaching? I am paid to teach and do research from the public purse. My teaching is available to anyone who meets certain standards and pays a user fee.
Access to data should be the same.

I am a pretty big cynic, and I remain unconvinced that AGW is a significant problem. It doesn't help that the raw data isn't disclosed. I wish scientists would go back to doing science and quit trying to be policy makers.

I am the story's submitter. My original submission included a link to the mathematician's web page about this [informath.org]; the page has many more details. There have also been other news stories, e.g. at the BBC [bbc.co.uk].

Scientists are always concerned when people who have no idea what they are doing try to interpret data. It has nothing to do with being scared. For example: let's say this guy cherry-picks some data to support his belief, and Oprah finds out about his "findings" and puts him on the air. Suddenly 25 million people who aren't qualified to judge his assessment are hounding politicians over incorrect data. I just spent about 10 years watching this very thing happen to vaccines. Some idiot's bad study gets on Oprah, an

The problem that the climate scientists have created for themselves is that they are hiding the data from everyone. Up until a few months ago, these requests were relatively rare. Some of the requesting parties actually have fairly strong credentials. Steve McIntyre may be hated by the folk at realclimate, but he is an IPCC reviewer. To stonewall him is a little different than refusing to provide it to Jenny McCarthy.

That doesn't matter. The important thing is that the attacks are made. Even if every one is shown to be completely wrong, people will still remember all those (erroneous) anti-global warming reports. Especially since the media will enthusiastically report the initial attack and relegate the news of its rebuttal to a small paragraph on page 34, if they report it at all.

Unless you happen to be a scientist in a related field, raw data tends to be next to useless. Anybody can draw pretty graphs in Excel and get worried about a rising trend line, declining trend line or anomalous result but it takes someone who knows what they're talking about to explain what they actually mean.

That doesn't matter. The important thing is that the attacks are made. Even if every one is shown to be completely wrong, people will still
remember all those (erroneous) anti-global warming reports.

People don't matter. Science doesn't advance by asking what Aunt Rosie from Ohio thinks about a particular result.
Those who matter are scientists, and scientists read peer reviewed journals. Peer review is all about filtering out
all those attacks so that nobody who matters needs to read them.

On the other hand, this will likely produce a whole stream of deliberately inaccurate analyses with ulterior motives behind them.

But with the data public, it'll be easier to shoot them down for picking, choosing, skewing, and what else.

There is no reason why this kind of data should ever be "secret"

Surely it's not hard to see the dynamic that will unfold from this. Yes, the truth is "out there" for better or for worse. However, scientists will have to spend an enormous amount of time and money defending their work against cheap public shots from unqualified critics, instead of a smaller number of competent but dissenting colleagues. That will mean less time for doing research, preparing publications and writing grant proposals.

Consider also that even an expert can misinterpret raw data. Usually it

But with the data public, it'll be easier to shoot them down for picking, choosing, skewing, and what else.

Not sure what regulations are on "release all data to the public" but seems like there are loopholes big enough to drive a bus through. For instance, in my field, no one but me knows how many cells I looked at. Maybe that thing I said happens in these cells happens in all those cells. Maybe I looked at 300 before seeing one doing what I said, took a picture of that one, and that was that. All my data would be that one cell I cherrypicked.

Even if I did take pictures of all 300, no one knows but me. Those other 299 can disappear.

If I'm -not- evil though, this could hurt me. If I looked at say 3000 cells, and 10 were doing a thing that I thought was significant, I could have my reasons. Maybe the other 2990 were the wrong cell type or something. Being the expert, that might be obvious to me just from looking at them. A non expert looking at them might not see that. They would just see that out of 3000 cells, I chose the 10 that supported my data. They might call foul without bothering to have me explain myself.

There's no reason the data should be secret, but most data doesn't stand on its own, and writing up supporting information for -all data gathered- just isn't going to happen.

If I'm -not- evil though, this could hurt me. If I looked at say 3000 cells, and 10 were doing a thing that I thought was significant, I could have my reasons. Maybe the other 2990 were the wrong cell type or something.

Of course you would. And if you truly did find such a strange sample set, you would document those reasons with just such a sentence. Maybe they WERE the wrong cell type, and in your paper you would be expected to say precisely that. Odds are fairly good you would have a citation concernin

Except they aren't experts at knowing what is picking, choosing and skewing and what is a correct and practical analysis of data. Prepare to see a lot of cherry-picking and so-called "experts" interpreting this data incorrectly, many of whom won't even know what a P value is.
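The P-value point is easy to demonstrate: run enough experiments on pure noise and report only the "significant" ones, and you manufacture a result. A toy sketch (the numbers and names here are illustrative, not from any study in the thread):

```python
# Toy demonstration of cherry-picking: under the null hypothesis,
# p-values are uniform on [0, 1), so about 5% of no-effect experiments
# will fall below the conventional 0.05 threshold by chance alone.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def fake_p_value():
    """Simulate the p-value of an experiment with no real effect."""
    return random.random()

experiments = [fake_p_value() for _ in range(100)]
significant = [p for p in experiments if p < 0.05]

# Reporting only `significant` is exactly the cherry-picking at issue:
# every one of these "findings" is noise.
print(f"{len(significant)} of {len(experiments)} null runs had p < 0.05")
```

Correcting for this (e.g. a Bonferroni adjustment, dividing the threshold by the number of tests run) is precisely the kind of detail a non-expert reanalysis is likely to skip.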

And with Climate Science part of the process is showing how you collected and interpreted the data. If you are not willing to share the raw data so other researchers can attempt to replicate your methods and results then don't bother publishing.

On the other hand, this will likely produce a whole stream of deliberately inaccurate analyses with ulterior motives behind them.

I think they're a lot more worried about accurate analyses than inaccurate ones. FUD will be much easier to deal with if it's really FUD because it can be criticized by more people than just the keepers of the sacred data.

Does this mean every biology, chemistry, physics, and engineering research group (I'm talking about grad students and postdocs, here) would have to open their lab notebooks to anyone who asked?

Researchers who ply their trade on the cutting edge of science live in perpetual fear of being "scooped" by another group who publishes their discovery first. These are sometimes literally "races." So now a group at one university could demand access to the notebooks of a group at another university? And vice versa?

Not at all.

It means they have access to each others results and source data when published (once the group is done researching this phase, and is ready to publish). There's no "opening notebooks", simply because that's a terrible metaphor for how data is collected these days.

I am more concerned with the time and effort it will take to format data for external users. An accompanying, more detailed methodology will surely have to be provided for the data to be used correctly.

That is indeed an issue. Presumably the methodology is already published, as is the rule for scientific papers. What could happen is that competent scientists have to waste their time debunking incompetent analyses by axe-grinding cranks.

Actually, if the requirement is specified up front as terms for the grant, I'm not opposed to it. I don't think it'll do any good, mind you, as a rule all that's useful is published, and scientists are generally happy to cooperate if you need more, as long as you have hones

What could happen is that competent scientists have to waste their time debunking incompetent analyses by axe-grinding cranks.

It's much more likely that incompetent scientists will be debunked by more competent analysis, because as soon as there is any controversy regarding a study the scientific community swarms to verify one way or the other.

Also, it's just as important to know what data was disregarded, and why (there are a plethora of valid reasons, but there are even more invalid reasons) as it is to know what was included. The GP's point about the tree ring data that was collected but never used, why wasn't it used? Was it

That is indeed an issue. Presumably the methodology is already published, as is the rule for scientific papers.

There is at least one case in two climate research papers where what the methodologies claimed was impossible because the data to do it didn't even exist. This didn't come out for 16 years, and was only discovered because an FOI request was finally honored.

In this case, the authors of the papers had claimed that the station data that they used was from stations that had "few, if any, changes in instrumentation, location or observation times." (quote from one paper) and "selected stations have relatively few, if any, changes in instrumentation, location, or observation times" (quote from the other paper)

"Hey! We only used great data!"

Now, these two authors used the same data, and one of these authors was actually a co-author of the other paper. These authors are Jones (hello climate gate) and Wang.

Now, they finally sourced the data as being from the Chinese Academy of Sciences, which coincidentally had co-published a report with the US Department of Energy at about the same time as those two research papers, stating quite specifically that DATA OF THAT QUALITY DID NOT EXIST. The report was specifically about the quality of the Chinese climate record.

Both papers concluded that the Urban Heat Island effect was minimal. Too bad that they didn't actually have data good enough to draw that conclusion. They said they did, tho.

None of this would have come out if it wasn't for the Freedom of Information Act. Jones and Wang both obstructed the release of the data (denying FOI requests, etc) for nearly 2 decades.

This all came out several years ago, but the media didn't give a fuck. They did care about hacked emails, tho. Go figure. Now, as it turns out, it probably wasn't Jones who was lying his ass off. Wang was a co-author on Jones's paper and supplied the "data." Jones gets credit for having his email hacked.

You say that until he gets on a major talk show, talks about his improperly interpreted results, and suddenly 20 million people are parroting his incorrect results.

The problem is that we dont apply the same standard to a talk show as we do to a scientific institution.

If a talk show spreads incorrect information absolutely nothing happens, if a scientific institution does the same there will be a royal commission, investigation, scrutiny and even if they are found innocent someone's career is still ruined.

I don't know. The USA (and a lot of other countries) might not be too happy, since it means the UK is saying it's OK for these scientists to release the USA's proprietary data. So I guess you're right that those jerks like the USA (and a lot of other countries) that wanted to profit from this data will get their comeuppance, but I wonder if we now need to increase taxes in order to pay for these services that used to make a profit. So that means that we all need to pay more money because of