An anonymous reader writes "PGP and GnuPG have been using webs of trust to establish authenticity without a centralized certificate authority for a while. Now, a new tool seeks to extend the concept to include scientific publications. The idea is that researchers can review and sign each other's work with varying levels of endorsement, and display the signed reviews with their vitas. This creates a decentralized social network linking researchers, papers, and reviews that, in theory, represents the scientific community. It meshes seamlessly with traditional publication venues. One can publish a paper with an established journal, and still try to get more out of the paper by asking colleagues to review the work. The hope is that this will eventually provide an alternative method for researchers to establish credibility."
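A rough sketch of the data model the summary implies. All names are invented (the actual tool is just a GPG command-line wrapper), and an HMAC stands in for a real GPG signature so the example is self-contained:

```python
import hashlib
import hmac
from dataclasses import dataclass

# Hypothetical data model; HMAC stands in for a real GPG signature.

@dataclass(frozen=True)
class Review:
    paper_hash: str   # SHA-256 of the exact paper text being reviewed
    reviewer: str
    endorsement: str  # e.g. "strong", "weak", "methods-only"
    signature: str

def paper_digest(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def sign_review(secret_key: bytes, reviewer: str,
                paper_text: str, endorsement: str) -> Review:
    digest = paper_digest(paper_text)
    payload = f"{digest}|{reviewer}|{endorsement}".encode()
    sig = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return Review(digest, reviewer, endorsement, sig)

def verify_review(secret_key: bytes, review: Review, paper_text: str) -> bool:
    # Recompute the signature over the claimed paper text and compare.
    expected = sign_review(secret_key, review.reviewer,
                           paper_text, review.endorsement)
    return hmac.compare_digest(expected.signature, review.signature)
```

A researcher would publish the `Review` record alongside their vita; anyone can then check that the endorsement really covers the paper text in hand and hasn't been transplanted onto a different manuscript.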

The problem, of course, is that at some level you still need a known good reference for the whole "web" to work. It doesn't help your credibility at all if you've got a paper signed by 100 of your closest crackpot buddies. What this does provide is a way for someone besides the established authorities to vet a work, so that a well-respected member of the scientific community can easily, and in a verifiable fashion, signify his approval of a paper.

I'm sometimes bothered by the stress on studies being "verified" by something like a peer-review process. Not that I don't understand why it makes sense. It's a pretty reasonable attempt to sort valid work from crap, but...

There's still a certain way in which it's just an appeal to authority. It's people saying, "We should accept what this scientist says because other scientists say that he's right." I guess what I'm saying is that I worry that, as a process like this becomes more technical, people will be more likely to confuse a statement like, "This study has been reviewed by other scientists and seems to have merit," with something more like, "This study is correct, infallible, and indisputable."

And I guess part of the reason I worry about this is that there may be cases where what "everyone thinks" (i.e. the common conception even among experts) is wrong, and some random nutcase is right. It almost never happens, but it happens sometimes. It seems to me that a technical method of assigning trustworthiness of ideas in a web of trust might possibly lead to having all the groundbreaking ideas go into a spam filter somewhere, never to be seen again.

Interesting point. Something that occurs to me, however, is that any paper worth its salt really has two things that can be verified/approved independently of each other. The first, and easier of the two, is the test procedures and any math/established formulas used. Assuming that no flaw can be found with those, you move on to the second part, which is the theory being proposed to explain the results of the tests and/or how any discrepancies between the observed results and the theory are handled. It's entirely possible to have a paper with excellent test results that raise interesting questions, but a completely nutjob theory attached to it. To ignore the results of the tests because the theory is crazy is to throw the baby out with the bath water. Likewise, just because the theory proposed in a paper is well established and respected is no reason to sign off on flawed test results.

Actually, it doesn't need a root. Quite on the contrary, the developing graph could give amazing insight into the structure of research communities. It would be possible to identify researchers forming links between otherwise almost disconnected areas of research, and to find the great minds at the centre of such blocks. There is no "root" to the web of scientists. Even people like Erdős were only ever local subroots.
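The "link researchers" idea above has a classic graph-theoretic reading: they are articulation points (cut vertices) of the collaboration graph, the nodes whose removal disconnects otherwise separate communities. A small sketch over an invented toy graph:

```python
# Find researchers whose removal disconnects the co-signing graph
# (articulation points, via the standard low-link DFS). Toy data;
# all names are invented.

def articulation_points(graph):
    visited, disc, low, cut = set(), {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        visited.add(u)
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in visited:
                low[u] = min(low[u], disc[v])  # back edge
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # No path from v's subtree back above u: u is a cut vertex.
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)
        if parent is None and children > 1:
            cut.add(u)

    for node in graph:
        if node not in visited:
            dfs(node, None)
    return cut

# Two research communities joined only through "carol":
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "carol"],
    "carol": ["alice", "bob", "dave"],
    "dave":  ["carol", "erin"],
    "erin":  ["dave"],
}
```

Here `articulation_points(graph)` returns `{"carol", "dave"}`: carol bridges the two clusters, and dave is erin's only connection to anyone. On a real signing graph, betweenness centrality would give a softer, ranked version of the same signal.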

I think this project is a great idea. Unfortunately, it currently seems to consist of only a command line tool to sign reviews with GPG. That's nowhere near enough if it is to thrive beyond the CS world. It needs a simple, rock-solid GUI, and most importantly, lots of eye-candy for the graph. It will need to look cool and work well to build up the momentum for this to work at all.

As a third party, there is no way I have the time to follow the chain of logic that results in a modern scientific paper from first principles. At some point, I have to accept some of the preconditions of the paper without verifying them, because doing otherwise implies that I am an expert in the particular field the paper is relevant to. And there are plenty of cases where I want to make use of a result from a field that is related to my work but in which I am not an expert.

Appeal to authority is the fundamental reasoning technique I apply in such cases. A respected expert says it is so, and so I will trust them until I have reason to believe otherwise. That trust should not be blind -- if I am presented with reason to, I will happily re-evaluate that trust. Perhaps the expert is mistaken. But, in the interest of actually getting something done myself, I will accept as a default position that the experts know what they're talking about.

Yes, but for the web of trust to have value to the casual observer, certain respected authorities need to be established, which is something people tend to do naturally on their own. If something like this is implemented, it will most likely never have an official authority, but it will have several de facto ones that people come to treat as authority figures. Essentially, someone not well entrenched in a particular field may not know whether Dr. X, whose work is signed by Dr. Y, is any good, but they have heard of Journal Z, which signed for Dr. Y and therefore provides credibility for Dr. X.
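The transitive lookup described here (the reader trusts Journal Z, which signed for Dr. Y, who signed Dr. X) is just a path search over signature edges. A minimal sketch, with hypothetical names and edges:

```python
from collections import deque

# Signature edges: signer -> parties they have signed for.
# All names are hypothetical.
signed = {
    "Journal Z": ["Dr. Y"],
    "Dr. Y": ["Dr. X"],
    "Dr. X": [],
}

def trust_path(anchors, target, signed):
    """BFS from parties the reader already trusts toward the unknown
    target; returns the chain of signatures, or None if no chain exists."""
    queue = deque((a, [a]) for a in anchors)
    seen = set(anchors)
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for nxt in signed.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None
```

Note the edges are directed: Journal Z vouching for Dr. Y says nothing about Dr. Y vouching for Journal Z. A practical system would also discount trust with each hop, so a long chain counts for less than a direct signature.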

There have been numerous attempts to redefine peer review to bring it into the 21st century. There will be many more after this effort.

Peer review is typically anonymous. It represents a trust relationship between the editor and the referee, not directly between the author and the reviewer. If the journal - or rather, the editor - is removed from the equation, then some new mechanism is needed. It isn't obvious that the web of trust as described fits the bill, however.

An equivalent to a distributed certificate authority already exists and is widely used as a metric. The only certification that will be believed - even from professional peers - is a demonstrated need and desire to actually use the results of prior publications. That trust is denoted by the chain of citations: tracing back through the references embedded in subsequent publications themselves.

Indeed, the implementation of flagged revisions [wikipedia.org] is currently being debated for the English Wikipedia, and was the subject of a recent /. article [slashdot.org].

A lot of the debate centers on exactly what the "signing" process will entail in terms of responsibilities and consequences for the articles subject to it.

I don't think a one-size-fits-all approach to trust networks is a good idea. Requirements for effective trust in key sharing, peer review, and wiki content may differ and I think it's appropriate for each to develop a fine-tuned approach, while borrowing good ideas from one another.

In addition, there will be an effect where more prominent scientists will get tons of links and favorable peer reviews, in exchange for being "friended" in this network.

Certainly this effect must exist already, and admittedly a bit of it is good (if someone repeatedly submits excellent papers, it stands to reason that their opinions should hold a bit more weight) but this may amplify the effect far past the point of usefulness. Ultimately, science needs to stand on its own merit, and not just the reputation of the person who published it.

In science (and generally), the "appeal to authority" is complicated, because there are actually several quite different flavors of appeal to authority, which all mean quite different things; but commonly blur together in ordinary use. The following is incomplete; but it hopefully gives a rough outline.

On the one hand, you have the appeal to authority as an argument in itself. This is the classic medieval "According to the philosopher..." stuff. When this happens in science, it is undesirable, since science is supposed to be about the world, not opinion (unless you are doing opinion polls, of course).

On the other hand, you have the appeal to authority as intellectual heuristic: If you don't know about subject X, it is generally most sensible to find somebody who does, and ask them about it. If you don't know who knows, then you ask about that. So, in effect, the statement "X is Y because Professor Z says so." is just (sloppy) shorthand for "I don't know about X; but people I believe to be familiar with the field of X say that Professor Z has done excellent research on X, and Professor Z says that X is Y." This is imperfect, to be sure; but barring the (generally recognized as impractical) strategy of being omniscient, it is more or less the best option.

The picture is further clouded by the way humans actually evaluate information. We didn't evolve our trust metrics to handle scientific papers, we evolved them to deal with social signalling in small hominid kin groups. So, it is often extremely difficult to avoid assigning or subtracting trust for scientifically irrelevant reasons. Again, though, this is something to watch out for, and it is part of why we have to use statistics and logic rather than hunches and feelings; but we don't really have a better option.

The trouble is, when somebody actually makes an argument from authority, they are likely to be mixing more than one flavor into the same statement. It might be a shorthand reference to X's excellent technique and scrupulous data gathering, it might be a sense of respect for X's character, based on personal interactions, it might be some creepy cult of personality thing. These are distinct phenomena; but they can show up together, and in very similar looking statements.

Yes, because the first thing I'd do on seeing a vaguely interesting paper is call up half a dozen random researchers, wait until they weren't busy in the lab to get a comment back, and then eventually have some clue what the consensus among those more directly involved in the field than myself is several hours later. Why not just have them publish their opinions? Then they don't have to answer the same questions repeatedly.

The question isn't "why should we include the hashes?" but more properly "Is there any reason not to use a properly designed digital signature?" The fact that I trust someone is a poor reason to deliberately design a weakness into the review system when it's so easy to avoid. What's that, you need a benefit as well? How about drafts of papers -- using hashes makes it easy to get someone to review the preprint of the paper, and make comments. A later draft could address those comments. Their signature should then only be applied to the first one, not the second, until they review it as well. Revision tracking is a useful feature.
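The draft-hashing idea can be sketched in a few lines: each endorsement binds to the digest of one exact draft, so a revised draft carries no endorsements until it is reviewed again. The storage layout here is invented, and the reviewer names stand in for full signature records:

```python
import hashlib

# Each endorsement binds to the SHA-256 of one exact draft, so a
# revised draft needs a fresh review. Storage layout is invented;
# reviewer names stand in for full signature records.

def draft_hash(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

signatures = {}  # draft hash -> list of endorsing reviewers

def endorse(reviewer: str, draft_text: str) -> None:
    signatures.setdefault(draft_hash(draft_text), []).append(reviewer)

def endorsements_for(draft_text: str) -> list:
    return signatures.get(draft_hash(draft_text), [])

draft1 = "Results: p = 0.04"
draft2 = "Results: p = 0.04 (corrected for multiple comparisons)"
endorse("alice", draft1)
# alice's endorsement applies only to the draft she actually read:
assert endorsements_for(draft1) == ["alice"]
assert endorsements_for(draft2) == []
```

This is exactly the revision-tracking benefit the comment describes: the second draft's hash differs, so the preprint signature cannot be silently carried over to text the reviewer never saw.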

Journal publications are basically used to tell people who don't work in your area that you're doing decent work. If someone is doing decent work in an area you're a specialist in, you probably know them at least by sight and you probably hear about their results fairly soon after they prove them; the journal paper may well come a year or two later.

But if you want funding, or you want a job, you have to convince a bunch of people who know very little about your area that you are a valuable person. The easiest way to do that is to point at recent papers in good journals (which, really, isn't so different to the web of trust idea: I have a paper in CPC because someone thought my work was good enough to go there, that kind of thing).

There are lots of problems with the sort of metric you suggest; you need something relevant to now, you don't want it to discard people who do good work on their own or in tight groups (and there are quite a few of the latter), you don't want it to be distorted by the sort of mathematician who will publish every result they can get in any collaboration (there are quite a few, some of whom are very good and very well-connected but still publish some boring results along with the good ones).

Spam yes. Crackpots piped to math.GM for the amusement of all (e.g. the guy whose 'proof' of the Riemann hypothesis was 20 pages of verbiage boiling down to 'the universe is built on maths and maths is built on primes, so they must behave naturally and therefore the result is true..').

No one? So, tell me, how have you heard about Galileo? Found a dusty tome on a shelf of an old monastery, translated it from Latin, and amazed yourself at how ingenious he was?

He is only known today because his peers "modded him up". Some of his ideas were controversial, going against good ol' Aristotle, but he was a very respected teacher who made brilliant insights into various aspects of physics and mathematics, and only later reached his astounding conclusions. Read his biography.

If a paper is refused for publication, it's because it's plain bad, not because it's controversial. In physics, you have objective criteria of quality, and it is by those that papers are judged.

It might be cool to imagine the lone crackpot that made revolutionary discoveries that are ignored by the scientific community, but that is just romance. Crackpots are just poor bastards that couldn't even get quantum mechanics right and went nuts. What they say may look cool for laymen but is just plain rubbish.

Peer review is actually pretty weak. It's mainly effective at spotting obvious howling errors. Peer review is not the same as replication and, indeed, many reviewers don't bother to check the equations or data presented in a paper unless they are genuinely suspicious of the conclusion. Replication, not peer review, is the gold standard of science.