Let that sink in for a moment: a supposedly respectable academic publisher put its imprimatur on Merck propaganda disguised as scientific journals. They even had the nerve to claim that these weren’t journals, even though one is called The Australasian Journal of Bone and Joint Medicine and is printed to look like any other Elsevier journal.

So, any time you hear people from the publishing industry blathering about how for-profit journals are necessary to maintain peer review, keep this story in mind. The drive for profit is undermining the integrity of peer review, and Elsevier is at the forefront of doing so.

EDIT: Let me elaborate a little, in response to the excellent comment of Greg below. I agree with most of what Greg said, and I think we’re coming from roughly the same position, but putting a different emphasis on things. I absolutely agree that our current publishing system is broken, and needs to be changed.

The present journal system is essentially 19th century in character, except with shorter travel times, and has done very little to capture the free-floating information on the web. But part of building support for moving to a new model is pointing out how deeply flawed the previous model is, and how badly it’s failing us. While this and El Naschie are extreme cases (one could also include the bizarre events around the breakup of K-theory), they’re also vivid illustrations of how unreliable our current system is. I think people are somewhat conservative by nature and are very reluctant to drop a model long after it has become extremely suboptimal. If one is to prod them into considering something new, I think loudly pointing out the failure of the current regime is very necessary, if not at all sufficient.

29 thoughts on “Elsevier plumbing new lows”

I’m as much in favor of freely available math papers as anybody. I’ve also put in my share of professional service to help make it happen.

Even so, I think that blaming Elsevier for the condition of mathematical communication is a little shallow. It’s like blaming Sodexo for university food. Yes, their name is on a lot of food that I don’t much enjoy. I wouldn’t be surprised if there is a lot of sausage-making in their business, both the political kind and the literal kind. And, like any for-profit company, you shouldn’t trust them all that much.

But hey, that’s capitalism. At least for mathematicians, neither Elsevier nor Sodexo is an entrenched monopoly. They are at worst monopolies by default. That is, they have market power because it’s a minor hassle for us to switch to something else, and we mathematicians don’t pay the real costs.

If Elsevier shilled for drug companies, that’s terrible and someone should throw the book at them. But that’s not our territory. Harvard has also shilled for drug companies; should mathematicians boycott Harvard?

Instead of bashing Elsevier, the real solution is to build new modes of scholarly communication, so that eventually none of us will need Elsevier. That is what the math arXiv has accomplished, but of course it’s only a partial victory. I believe that what we need is a new kind of journal, or peer review service, that is a blend of journals and Math Reviews, and that directly reviews arXiv articles. But it has to be implemented properly in order to succeed — any such project is a work of social engineering. And it hasn’t happened yet.

Greg, you claim that Elsevier has “market power”, which seems (to me) to imply you think that “market forces” should be what changes their behavior, if anything.

But I put it to you: what is the reaction of a group of mathematicians who say “we do not like how you do business; we do not believe others who hear about how you do business will like how you do business; we will publicize how you do business; we will encourage ourselves and others to hurt another segment of your business until you change” but a market force?

… they’ve admitted to publishing journals that are un-peer-reviewed advertisements for drug companies.

Small point, but from my reading of Goldacre’s article, I don’t think this is quite right. The published articles were reprints or summaries of old articles – so they presumably had already been peer reviewed.

what is the reaction of a group of mathematicians who say “we do not like how you do business; we do not believe others who hear about how you do business will like how you do business; we will publicize how you do business; we will encourage ourselves and others to hurt another segment of your business until you change” but a market force?

John, I agree with you that this is a market force. But I don’t think that it’s a very effective market force, nor necessarily a very accurate one, even though up to a point I can agree that it is a laudable force. In particular, in the case of Elsevier, I’ve seen denunciations and boycotts for 10-15 years. They have had a limited effect. For that matter, even when negative branding does work, the market responds by offering other brands. It’s just hard to keep track of which brands are almost as bad as Elsevier. Do you want to boycott World Scientific? Kluwer? Taylor and Francis? Pushpa Publishing?

No, the situation can be compared to that of Microsoft. Bashing Microsoft didn’t work either. In hindsight, I think that frustration led people to ratchet up the denunciations until they weren’t even entirely fair. (In fact my own attitude wasn’t all that great on this point.) Linux and Google have been a much more constructive response to Microsoft than bashing Microsoft.

Sure, providing an alternative is what ultimately brings force to bear. But how many individual end-users would have used Linux simply on its own merits? Yes, many people find it superior to Windows. And some of them even recognized that early on. But a large chunk of users found the “Screw Microsoft” factor to be part of the original inducement.

Shorter version: “come for the Elsevier opposition, stay for the _____”. The blank’s important, but the first half is going to help people notice the blank in the first place.

Okay, providing an alternative. The first question is: an alternative to what? While it is true that there is a lot of free-floating information on the web, I don’t think that some kind of Twitter for intellectuals would replace journal publishing. In any case, I don’t understand Twitter very well as a social institution.

As I see it, the main need that journals fill is formal recognition of research results. You want to get hired and you want to get promoted. For that purpose you want a CV that lists your refereed publications. Once upon a time, journals provided the triple service of peer review, typesetting, and distribution. To be fair, I don’t think of this as a 19th century thing; rather its zenith as a social and economic model came in the middle of the 20th century. But it is now clearly a legacy process.

People have tried “quickie review” services that look something like blogs. I think that these have some of the elements of what is needed. But crucially, they were never CV-compatible. So I think that a better approach is a service that is nominally a journal, by design fully CV-ready, but streamlined in various ways to make it as cheap and as fast as possible. Here is my suggested laundry list of reforms:

(1) No typesetting and no copyright. The journal should have a “masthead” just like nytimes.com has a masthead, but it should just be a web page header. “Published” papers should just link to the arXiv.

(2) Again, CV-compatibility is crucial. The journal should have a conservative journal-like name, such as “Interesting Mathematics” or “Proceedings in Mathematics”. It should have a year, maybe a volume, and a “page” number. But since papers aren’t typeset, the “page” numbers should just be consecutive integers. E.g., the first paper in 2010 would be “Proc. Math. 2 (2010), p. 1”, the second one “Proc. Math. 2 (2010), p. 2”, etc.

(3) Since the journal does not claim copyright, there is no point in exclusive peer review, just as a movie can be reviewed once or many times. In fact, the editorial board or anyone else can submit a paper to “Proc. Math.”, not necessarily the author. On the other hand, since the whole point is to help authors with their careers, once a paper is submitted, the author should receive correspondence as if he/she had submitted the paper.

(4) Why would authors want to submit to a newfangled, high-volume journal such as “Proc. Math.”? Well, two reasons. First, it saves a lot of time to “publish” the paper immediately after it is reviewed. Second, I think that this new style of journals should, like Math Reviews, publish a review signed by a referee *for papers that are accepted*. (On the other hand a referee can anonymously decline a paper or even anonymously send messages to the author.) So a really good publication might look like this in your CV:

Proc. Math. 2 (2010), p. 5, reviewed by Pierre Deligne

(5) What about the work that secretaries and editors do to remind referees to submit their reports? I think that this is the hardest part to improve. However, there are some tricks available to streamline this at least somewhat. If a paper will likely be accepted and your confidentiality as a referee won’t last anyway, then the authors themselves could ask the referees to hurry up, after a grace period. When you are asked to referee a paper, you should fill out an entry saying, “I think that I can referee this paper in ____ weeks,” followed by “after which time, (a) the paper should be automatically rejected, or (b) my name should be revealed to the author.”

—-

I came up with this basic plan some years ago. I never found time to try it though.

However, one thing people look for from journals is some crude signal of an article’s quality. If all papers start going through the same journal, you lose the ability to signal a paper’s quality, and that signal would have to be replaced somehow. Not that I know how, necessarily.

Sorry, let me expand on that. If the referee is only provisionally confidential, it’s fine (just as with promotion letters) for an author to suggest a referee, even though the editorial board may in the end use a different referee. If you are dissatisfied with who wrote your review, or with what it says, you can ask for a second review.

“Again, the name of the reviewer, not to mention what he/she has to say.”

Your suggestions are interesting but I think they neglect an interesting phenomenon at work with reputable journals. Someone wants to be employed, and to be employed he needs to be recognized as a good mathematician, and he will be recognized as a good mathematician if good mathematicians say he is a good mathematician. So far so good, “reviewed by Pierre Deligne” will do the job. But “good mathematicians”, in that sense, are (by definition) a scarce commodity (the job market being competitive, all this business is relative and not absolute).

Now, reputable journals perform an interesting function: regardless of who actually read your work, publication lends it social credit, say the social credit of the board of editors. If this board of editors says this is a good article, you can boast that good mathematicians think you are a good mathematician, even though your article was in fact reviewed by Rusty McNail (after all, the reviewing process can “go down” the hierarchy of academia if the first proposed reviewer in turn suggests someone else). The problem with your solution, “reviewed by Rusty McNail and Pierre Deligne”, is that it assumes the original problem is already solved: why would Pierre Deligne read an article already reviewed by Rusty McNail (and thus presumably not so good), when he already has enough to do in reviewing for himself and in reading articles reviewed by Serre, Katz or Voevodsky (and thus presumably much more interesting)? Of course, if the article is sufficiently good (in some absolute sense), you may optimistically think that people (including Deligne) will eventually review it, but that again assumes the problem solved: in the interval during which the article climbs up the hierarchy, its author will not have been employed, which was his reason to publish in the first place.

You also suggest that the author should have the right to ask for a second review. But here again the same problem lurks: nothing stops me from submitting an article to the Annals of Math (and there it will be treated with the same seriousness as any other, though rejected in the blink of an eye); whereas even though nothing stops me from complaining that Wiles hasn’t reviewed my last arXiv paper, I don’t think I can do much about it (again, our operative notion of a good mathematician implies that they are rare, and hence busy).

The crucial point here is that reputable journals somehow act as multipliers of social capital, something that does not obviously happen under your suggestions. Then again, modifying your proposal into Proc. Math. High Quality, Proc. Math. Outstanding, etc. would seem to do the trick.

Olivier: You haven’t convinced me to be pessimistic, but conceivably you are right that names of good mathematicians do not easily substitute for names of good journals.

I think that the thing to do is launch the boat and see if it floats, instead of fretting over why it might sink. I simply have not found the time to try my idea, and maybe it’s not all that easy to do properly. It would require both good judgment and a time commitment.

Greg: you are planning on rejecting some papers. Is the standard mere correctness? If so, then I think you are going to be swamped with uninteresting submissions and your referees are going to be unable to do a thorough job.

I want to see something like this work; and I’m glad you’re working on it.

The way that a paper would be “rejected” would be if neither the author nor the editorial board can find a referee for it. Then too, referees can say that a paper is bad/boring and opt to stay anonymous, in effect withdrawing as referees.

You may be glad that I’m discussing this on this blog, but the fact is that I’m not working on it. I haven’t found the time. But I am happy to consult and even help if anyone else wants to take lead responsibility.

Oh, also a comment about correctness: The question of whether or not a referee checks a paper for correctness is a mess in the current system. First of all, the vast majority of respectable-looking papers simply are correct. Self-respecting authors have big reasons to check their work with or without help of a referee. With a low error rate a priori, and with anonymous referees, how do you really know what they checked? You don’t. Moreover, sometimes the instructions to the referees explicitly say that you don’t have to check every detail in the paper; the larger question is the significance of the results. At the same time many people figure that a paper published in this or that journal has been checked by a referee, whether or not they have any evidence that it was.

Since I recommend a fusion of Math Reviews and journals, such a service could hold back its own reputation by puncturing the illusion that anonymous referees dependably check papers. Sometimes they really do check the paper and that is a good thing, but this certainly doesn’t happen dependably in any journal. (It is likely to be refereed carefully if it is a big-ticket result, especially a big-ticket result sent to a top journal. But with ordinary results, top journals are not much more likely to referee carefully than other journals.)

The best I can suggest is a choice between separate boxes to discuss the paper’s validity and its significance, and a combined box to discuss both. And one of the boxes could be left blank, and you can suggest a second or third referee to discuss validity, if the first referee only discussed significance.

Exaggerating a bit, many of my current colleagues could not tell Pierre Deligne from Rusty McNail. They can, however, tell Annals of Mathematics from Proceedings of the Elbonian Mathematical Society.

The essential problem is that the number of good journals is a lot smaller than the number of good mathematicians. It is possible to vaguely keep track of the good journals. If one is not in a specific subfield, it is impossible to vaguely keep track of the good mathematicians. This is even worse when “good” is replaced by “half-decent” or “not bogus”.

Even if the chair of the search committee at Walden College, who hasn’t done any research since his dissertation on numerical analysis 30 years ago, can’t figure out the difference between me and Rusty McNail, he can figure out the difference between J. Alg. Combin. and Proc. Elb. Math. Soc.

Alex: Well, I think that you are exaggerating. One Google search easily reveals the distinctions that a good reviewer has earned. And the fact is, we already calculate on the basis of reviewers and what they say when it comes to letters of recommendation. So why wouldn’t it work with papers?

Of course, you could wonder if it would work for papers if people aren’t used to it for papers. But again, we already have Math Reviews. I think that it would be a mash-up of existing modes of peer review, and in that sense not a radically new thing.

(By the way, Alex, you have interesting home pages at both Berkeley and Davis, neither of which indicate that you are now at St. Olaf.)

I don’t think positive reviews should be made public. The problem is that it will create pressure on referees to write flattering testimonials to how great the paper is. When a paper is worthy of acceptance, I just want to accept it, without risking offending the author if my public essay praising the paper (written under my own name) isn’t sufficiently complimentary. I have no problem with identifying the referees of accepted papers, but I very much don’t want to have to write a public essay.

I also agree with Alex that people will find it very difficult to judge the reputation of referees. One problem is that mathematical accomplishments are an insufficient basis for judging a referee. Some great mathematicians are careless or have low standards, and others are biased in various ways. To judge the value of a referee’s endorsement, you need to understand this context. Under the current system, editors do this, and that’s reasonable: they have much more experience and connections than the typical mathematician. (And even then, they aren’t responsible for judging referees they didn’t pick themselves.) Under the proposed system, everyone would need a way to track this information, but it’s not easy to discuss in public. I’m not about to announce publicly that I think 2010 Fields medallist Rusty McNail pushes for top journals to accept mediocre papers that extend his own work.

You might argue that these issues already arise for letters of recommendation, but there’s no point in importing the weaknesses of that system into other areas.

P.S. I don’t think Math Reviews is relevant to the question of whether positive reviews should be public. Nobody cares that much about Math Reviews, since nobody thinks the quality or enthusiasm of the review is in any way a basis for the legitimacy or prestige of the paper.

With all due acknowledgement of the many stories of negligent or sloppy refereeing, there are a lot of papers out there (certainly including some of mine) which thank a diligent referee for making valuable suggestions, in some cases even quite substantial contributions. I’m not sure I see that fitting into your system very well. Obviously it could in principle: a reviewer accepts the paper conditionally and the author posts a revision on the arxiv, etc., but as with the whole system, it’s harder to guess what would happen in practice.

I’m making this comment based in part on the fact that I view my role very differently when I referee for a journal than when I review for Math Reviews; in particular I see the latter role as more limited. (I presume, but don’t know, that others make a similar distinction.) So I worry that in your proposed system we might risk losing some of the valuable work that good referees currently do.

I agree that Pierre Deligne would not be a problem. But it is harder to tell Monica Vazirani from James Ruffo (former student of Frank Sottile, now an Assistant Professor at SUNY-Oneonta), and people will be wanting to make that kind of distinction a lot more often than any comparison involving a Fields Medalist.

Most people here would say that most letters of recommendation are not worth the paper they are printed on (or even the bytes of bandwidth they consume).

(Berkeley – fixed. Davis – I thought it was deleted with my account. Web page here – when I have time (sometime mid-summer hopefully).)

As I mentioned above, I at least think of Math Reviews and referees’ reports quite differently. I personally use Math Reviews all the time, but almost never as an indication of the quality of a paper. Instead I essentially read them as a (usually more helpful) substitute for the abstract/introduction to help me decide whether a closer look at the paper is warranted. I write Math Reviews aiming to help others do the same. My understanding is that this is more or less consistent with the aim of Math Reviews.

I worry that in your proposed system we might risk losing some of the valuable work that good referees currently do.

I take your point, but I’m not yet really promising a new “system”. I think that mathematics does need a new system to replace journals, but we can’t really know what we might gain or lose in that new system until we build it. In answer to Ben, I described as best I could how I would start to do this. But it is essential to remember this: first comes the suggestion, then comes success, then comes the revolution. Many proposals never get off the ground because people imagine leaping straight from suggestion to revolution, and naturally you then worry about what the new system would look like.

In other words, it’s hard enough to succeed at all. If success snowballs, you have plenty of time along the way to at least address worries about what you might lose. In any case, if success is irreversible, there is no point in debating whether or not the idea is worth trying; the right question is how to make it work as well as possible.

Anyway it is true that a review in Math Reviews and a review of a paper for a journal are not exactly the same thing. But they seem to combine well to me. It seems like less total work for one person to do both than to have two people do them separately.

The problem I see is quite different: I think the system being suggested runs the danger of creating cliques and making it more difficult for “outsiders” to get published:
Prof. A and Prof. B are in neighbouring universities in Boston, so Prof. B knows about Prof. A’s papers, and reviews them (and probably accepts them).
Prof. C, however, lives in far-off Timbuktu. Prof. B would never review his paper, unless told to do so by the editor of a journal.
So a naive application of Greg’s idea would run the risk of making it nearly impossible for Prof. C to get published, while with a standard print journal, Prof. A and Prof. C would (ideally) have exactly the same probability of being published if they did work of roughly equivalent value.
That’s why the question of who the reviewer is should probably be decided by an editor, and only by an editor. Otherwise favouritism would very quickly kill off an awful lot of good mathematics.

I don’t know if any of you guys have heard about this yet, but Elsevier appears to be sinking to new lows to sell its products. From the BBC news website:

“It’s no surprise that the recent actions of science publisher Elsevier caused a storm. The firm offered a $25 (£15) Amazon voucher to academics who contributed to the textbook Clinical Psychology if they would go on Amazon and Barnes & Noble (a large US books retailer) and give it five stars. Elsevier was quick to disown the actions of its marketing employee and emphasise that it had all been a mistake.

“The company doesn’t pay for positive reviews,” says Tom Reller, director of corporate relations. “This was a recent employee error. We haven’t given out any gift cards under the programme.” ”

Secret Blogging Seminar

A group blog by 8 recent Berkeley mathematics Ph.D.'s. Commentary on our own research, other mathematics pursuits, and whatever else we feel like writing about on any given day. Sort of like a seminar, but with (even) more rude commentary from the audience.