Representation theory, geometry and whatever else we decide is worth writing about today.


Thinking about Elsevier replacements

I’m still considering whether to sign on to the Elsevier boycott. But, in preparation, I’ve started thinking about which Elsevier journals would be hard to find replacements for. This is tricky for me because I really don’t know much about different journals. My algorithm for choosing a place to submit is (1) see if one of my co-authors has an opinion (2) ask someone more senior (3) look at the papers I cite and see who published them.

I thought I’d start a comment thread for people to recommend journals, or to describe niches which they don’t know of a good non-Elsevier journal for. Eventually, this data would probably do better in a wiki, but I think it will be easier to get the discussion going beneath the fold.

Let me repeat the above disclaimer that I really don’t know much about journals, so some of my impressions of what journals are comparable may be quite off base. If so, I hope people will correct me below.

These are the Elsevier journals in which I have published:

Advances in Mathematics A journal for very good general interest papers which are not quite at the top level. In my opinion, there are many good alternatives: Duke, JEMS, Transactions, Compositio, Amer. Jour. of Math. all seem roughly comparable to me. Of course, getting into any of these journals is a challenge, and they are not simply interchangeable, but I think that there are several good options in this niche.

Journal of Combinatorial Theory: Series A This is one of the two journals where I would send a good paper in algebraic combinatorics — the other being Journal of Algebraic Combinatorics. In the past, I’ve alternated between them; I can switch all those submissions to JACO in the future. (Of course, JACO is owned by Springer, which many are arguing is not much better than Elsevier.) If JACO isn’t interested, I’m not sure where else to go. The Electronic Journal of Combinatorics has wonderful editorial policies and is a joy to work with, both as an author and a referee, but I don’t think it is as prestigious. I might submit to it anyway, but I would never recommend that someone without tenure choose the less prestigious journal. Is there a good journal in this niche which I am missing? Am I underestimating EJC?

Discrete Math The only paper I have sent to them was a long time ago. My impression is that they are a top journal for Hungarian-style combinatorics, particularly graph theory. Tim Gowers discusses some alternatives here.

Journal of Algebra This one seems hard to replace to me. They take a lot of papers on standard monomial theory and combinatorial representation theory which aren’t big enough to make it to a general interest journal. If a paper like this is sufficiently combinatorial, it can go to some of the journals which I discussed in the JCTA section. If it is sufficiently representation theoretic, it can go to Transformation Groups, and it might also be possible to shoe-horn it into Linear Algebra and its Applications. But those journals are, I think, not as prestigious and, in any case, there is a gap between JCTA and Transformation Groups which I thought Journal of Algebra filled very well. If it helps focus the discussion, here is the paper I put there.


33 thoughts on “Thinking about Elsevier replacements”

For me, there’s nothing that quite replaces Advances. I confess that this is part of the reason why I suggested it as a possible target for a campaign, over on Tim Gowers’s blog. The problem is that, to my knowledge, Advances is the only really prestigious general-interest journal that seems willing to publish papers in category theory.

I’d be really happy to be proved wrong, but looking at your list of alternatives, I can’t remember ever having seen a category theory paper in any of them.

Vladimir: I think Tom is right as a statistical matter. In 2010 and 2011, Duke published 2 papers with MSC primary category 18 (one of them by you, congratulations!). Transactions took 3 and Compositio took 1. By contrast, Advances took 26.

Part of that is simply that Advances is larger. I hadn’t realized this until now but, in 2011, Advances had 367 articles. By comparison, there were 270 in Transactions, 47 in Compositio and 55 in Duke. But that doesn’t negate Tom’s point: killing Advances would remove a major option for placing category theory articles. In fact, it may weaken my point above — if all of the journals which I think of as comparable to Advances are much smaller, then it will be harder to do without it.

Thanks for the examples, Vladimir. It’s good to know what’s out there.

Of course, it’s a fool’s game to try to draw a sharp dividing line, pronouncing that such-and-such a paper is category theory but such-and-such a paper is not. So I won’t try to say “yes” or “no” to your examples.

(I’ll observe parenthetically that just as a paper can use the word “set” hundreds of times without being set theory, a paper can use the word “category” hundreds of times without being category theory. This observation is not particularly pointed at your examples, though.)

I do think that Advances has a particularly strong tradition of publishing abstract or categorical papers. It currently has on its editorial board Ross Street and Bertrand Toën, and in the past has had John Baez and André Joyal (both now resigned). And it was started by Rota. Actually, I notice that Toën has signed the petition at thecostofknowledge (look under B, not T), so perhaps he’ll be resigning too. Are there any other journals of comparable status that have similarly categorical people on their editorial boards?

I actually haven’t recently checked the current state of the editorial boards for the journals I mentioned, so your comment was a good incentive to do that now. Compositio has Bernhard Keller and Jacob Lurie on its board, which is quite categorical to my taste. For the others, it is easy to point at one somewhat “categorical” editor (e.g. Raphaël Rouquier in Duke), but that’s about it.

Even more specifically, in higher category theory (including monoidal category theory) Advances plays a particularly large role. There's a strange thing where sometimes a general interest journal doubles as the top specialized journal in a field (I'm led to believe that Duke plays this role for number theory), and Advances is where you put very good higher category theory papers whether or not they're of general interest. It's certainly true that the loss of Advances, or even a loss of prestige for Advances, would hurt higher category theorists.

@David: of course you’re right statistically (I am really glad if our brief comment exchange with Tom was informative for him though). I have a mixed feeling about the flow of papers Advances has been releasing recently (I think it was not always the case), even ignoring the main topic of our conversation. For the moment though, I still fully agree that for some topics, categorical and not, Advances remains one of the most important general interest journals.

Thanks for starting this thread! I’ve been waiting for the right place to push Documenta Mathematica as an alternative to Advances. Documenta is in many ways the model citizen in the world of math journals: it’s online, completely free, and (I’ve been led to believe) aspires to be at the level of Crelle.

Now, it’s probably not at that level at the moment. But of course its level will go up if we send good papers there, and if one journal is going to benefit from Elsevier/Advances losing papers, I think Documenta is deserving!

Do you really think that Transactions is anywhere near the same tier as Advances? My perception is that it is much weaker. I also think that Duke is stronger than Advances and that American Journal and Compositio are a bit weaker.

To me, “weaker than Duke but stronger than other general interest journals” is a pretty useful niche.

And I agree with you about Journal of Algebra.

I also happen to really like Comptes Rendus, which I perceive as being better than its competitors (e.g. Proc. AMS) for very short papers.

David, I too think Documenta is a great model! It deserves to get lots of excellent papers at Elsevier’s expense.

Andy, I think such judgements are completely dependent on one’s field. Some generalist journals publish much better articles in some areas than others. It depends very much on tradition and editorial board.

Advances is about 5 times as large in page count as it was 10 years ago. I can’t help but think that average quality is lower now as a result. In my mind, I think of it as not as good as Compositio or American J. and nowhere near Duke. My perception could be biased because, historically, papers with significant amounts of combinatorics have been more likely to be accepted in Advances than in other generalist journals.

Another alternative to JCTA is Annals of Combinatorics, which I would rate near EJC. It is, however, also a Springer journal.

Journal of Algebra has no good alternatives. JPAA is also an Elsevier journal. It’s possible the relatively new Algebra and Number Theory could end up being of similar standard.

Like Advances, Inventiones is also much larger than its competitors in page count.

My impression is also that Duke is substantially better than Advances while Transactions is substantially worse. (Though I’d love to be wrong, as I have a paper in Transactions!)

My impression was that Algebra and Number Theory was already of similar or better quality than Journal of Algebra, but that Journal of Algebra will be hard to replace because it publishes such a huge volume. ANT has fewer pages and includes number theory.

I also think that Algebra and Number Theory has already surpassed the Journal of Algebra by a fair amount. My perception is that they aspire to be for algebraic subjects the equivalent of Geometry and Topology, which I would rank at basically the same level as Advances. But of course they are not there yet.

How do we all acquire this mental ranking of journals? Do you actually form judgements based on papers you’ve read in those journals, and the quality you judge them to have? Or is it based on how good other people say those journals are? (I suppose that’s what prestige is: other people’s opinions.) Or is it something else?

Personally, I don’t recall ever having made use of a paper from Inventiones or Journal of the AMS. I’ve never wanted to consult Duke either, and I don’t think I’d even heard of American Journal of Mathematics until a year or two ago. So my impressions of the swankiness of those journals are entirely formed by what other people say. It’s kind of crazy. If I judged periodicals by what I’ve actually had use for, it would probably be the Springer Lecture Notes series that came out on top.

I think we’re helping the commercial publishers by carrying around these intricate rankings in our heads. We convince ourselves that Elsevier journal A is two and a half notches better than non-exploitative journal B, so we submit to A. I’m as guilty as anyone, and it’s hard to break out of it on your own: hence the importance of mass action.

I think it is crazy how much the journal ranking system depends on innuendo and vague prejudices. Speaking just for myself, it is a combination of three things (1) What I’ve heard from other people, particularly my co-authors. (2) Impressions formed by noticing which journals I frequently see when I click the “View Article” link in MathSciNet. For example, I noticed that Duke is unusually open (at least in algebraic geometry) to papers which give a better proof of a preexisting result. (3) When I am considering what journal to submit to, I usually look at the table of contents of their latest issue and try to judge whether my result would fit in well in terms of length, importance and field.

All of this is very unscientific and prone to random error.

But while I think the way that we acquire these rankings is silly, I don’t think having them is. Committees who are doing hiring or grants apportionment need a very rapid way to get a rough sense of the quality of a paper. Have you done any work where you had to evaluate hundreds (or even dozens) of candidates’ publication records? If you didn’t rely on this sort of sloppy judgment as a first pass, what did you do?

Aside from the anecdotal factors that you mention, one can find lists of journals ranked by various sorts of impact factor, for instance (free to anyone) at http://eigenfactor.org or (free to mathscinet subscribers) at http://ams.org/mathscinet/citations.html . Of course there are various valid objections to impact factors as metrics, but at the very least this is more objective than vague impressions of swankiness based on what other people say, or even based on the non-representative sample of articles you've seen. On a few occasions I've found, contrary to what I might initially have thought, that non-exploitative journal B ranks comparably to Springer-or-Elsevier journal A according to multiple metrics, and moreover has an appropriate editor for my paper, thus making me aware of options I would not otherwise have realized I had.

David, yes, I have had to go through big stacks of applications, and I’m sorry to say I’ve found myself doing all the classic things: taking note when people have attended prestigious institutions, published in big-name journals, or simply published a lot of papers.

So, someone who chooses to work at a non-famous university for personal reasons, who publishes in free online journals for ethical reasons, and who writes relatively few papers because they value quality over quantity, would probably have completely escaped my notice. All I can say in my defence is that, last time round at least, I did slightly hate myself afterwards. I have no good suggestions.

I know this is not the topic of the post, but here is a little something for the humility file: I was talking to someone recently who said he had about 10 papers in the agreed top few journals and about 5 publications with 50+ citations in MathSciNet. There was only one publication in both groups, and also only one of them was among what he thought was his best work. So in his mind, there was no reason to expect that someone's papers in top journals, their most cited publications, and their best work should overlap at all.

I’m not sure if I find your algorithm for selecting a journal depressing or refreshing. The former, because it’s just so … random … that it means that any weight that someone applies to the fact that you published in journal X is meaningless. The latter, because it’s just so … random … that it means that any weight that someone applies to the fact that you published in journal X is meaningless.

Or rather, the fact that there are still people that take this metric seriously is depressing. The fact that you don’t is refreshing.

On David's remark that we have silly ways of ranking journals: I think, on the contrary, that all those observations that make us rank the way we do come from experience and history, usually for good reasons, and that they are actually flexible and make for a healthy system.

If there is a fixed metric, journals, authors, and other participants may be tempted to game it; but having a flexible, open system directly controlled by smart users (editors, hiring committees, researchers) who know how to detect fraud, and who know their peers know that, seems best. And it seems this must pass through subtle observations like those David mentions.

I think this kind of reasoning can be justified in the framework of "mechanism design theory", like some sort of revelation principle (people rank journals/articles/researchers honestly, including their own work). My feeling is that the collective aspect of rating, building on the "local experiences" of individual researchers, is also necessary to provide complexity and fraud-proofness to the system. http://en.wikipedia.org/wiki/Mechanism_design

I would find it quite useful if someone maintained a webpage (perhaps a wiki?) with a list of “good journals”, meaning both good mathematics and good practices. Each entry could have a brief description of the journal, perhaps containing some indication of what kind of papers should be published there and/or what’s so good about their practices. Then the next time that I’m considering submitting a paper to, say, Advances, I would peek at the list and see what other journals I might want to consider instead.

In addition to being useful to people who are trying to be good citizens, such a list might also have the positive effect of boosting the prestige of such journals by bringing them recognition. (That is, if enough people took notice.)

Nick, I wonder whether a wiki would be the appropriate format. I know mathematicians are relatively rigorous in all respects, but wikis tend to avoid "subjectivity". To exaggerate a little, imagine a wiki of good restaurants; that would be difficult to moderate. Rather, other review formats could be useful, like those for consumer appliances: a section listing various statistics ("metrics") of journals (this part could be done in a wiki format), and a discussion/user-review section for each journal where mathematicians could share observations as they did in this blog post, and perhaps give a personal rating (and subratings).

Though I think what was done here in comments would be hard to replicate on a large scale, i.e. systematically for many mathematics journals. But I would sure be interested in such a website.

Some journal statistics, such as impact factors and the MCQ used by AMS, are proprietary to the entities that compile them, so they probably couldn’t be legally put up on a public website.

However, these wishes for a website comparing journals reminded me of something I learned about a while ago on this very blog, at https://sbseminar.wordpress.com/2011/01/13/cowdsourced-department-ranking/ . A crowdsourced journal ranking system could, if enough people contribute to it, provide an alternative measurement of journal quality based on the subjective opinions of a broad sample of people rather than on citation statistics.
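Aggregating such binary comparisons into a ranking is a well-studied problem; one simple approach is an Elo-style rating update, familiar from chess. The sketch below illustrates the idea on hypothetical vote data (the journal names and votes are made up for illustration, not taken from any actual site):

```python
from collections import defaultdict

def elo_rank(comparisons, k=32, initial=1500):
    """Turn a list of (winner, loser) pairwise votes into Elo-style ratings.

    Each vote nudges the winner up and the loser down, by an amount that
    shrinks when the outcome was already expected from the current ratings.
    """
    ratings = defaultdict(lambda: initial)
    for winner, loser in comparisons:
        # Expected probability that `winner` beats `loser` under current ratings.
        expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        delta = k * (1 - expected)
        ratings[winner] += delta
        ratings[loser] -= delta
    return dict(ratings)

# Hypothetical votes, each meaning "the first journal is stronger than the second".
votes = [
    ("Duke", "Advances"),
    ("Advances", "Transactions"),
    ("Duke", "Transactions"),
    ("Advances", "Transactions"),
]
ratings = elo_rank(votes)
ranked = sorted(ratings, key=ratings.get, reverse=True)
# → ["Duke", "Advances", "Transactions"]
```

A real system would want something more robust to vote order and sparsity (e.g. a Bradley-Terry fit over all votes at once), but the Elo update shows how a usable ranking can emerge from nothing but binary comparisons.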

So I’ve taken the liberty of setting up a place where people can make binary comparisons between members of a set of 178 journals, at

Please do contribute your votes, and feel free to spread the word about this if you think that it could be something worthwhile. I prefer to remain anonymous, but can be contacted at rank.journals at gmail should someone want to do so.

Peter, thanks for the link and thanks a lot for the comments and links on Gowers's blog. I guess single-article ratings could be combined with journal ratings. It would probably be very good to have a variety of rating and ranking systems to play with, so that users with different needs could personalize results to suit them. A basic example would be to categorize journals and articles according to the AMS's MSC scheme (this kind of information is easy to retrieve automatically, I think) and allow users to restrict rankings to a category, or just give more weight to one category.

For instance, if I wanted to hire a probabilist, I could double the weight of probability/statistics journals compared to the default and simply look at the resulting ranking of journals, or even put in the applicants' publications and see how they rate as I vary those or other weights. That could be extended to more and more web data, and the system could allow users to enter arbitrary weights: say I want one applicant to have a head start because he has great references from a friend, or I want to value web (MathOverflow?) presence, etc.
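The reweighting idea above is easy to make concrete. Here is a minimal sketch, using entirely hypothetical journal ratings and MSC-style tags (none of these numbers reflect any real measurement): a committee looking for a probabilist doubles the weight of MSC class 60.

```python
# Hypothetical data: each journal gets a base rating and a set of MSC-style tags.
journals = {
    "Annals of Probability": {"rating": 8.0, "tags": {"60"}},      # probability
    "Advances in Mathematics": {"rating": 8.5, "tags": {"18", "05"}},
    "Journal of Algebra": {"rating": 6.5, "tags": {"20", "16"}},
}

def reweight(journals, tag_weights, default=1.0):
    """Scale each journal's base rating by the largest user-chosen weight
    among its subject tags; tags without an explicit weight get `default`."""
    return {
        name: info["rating"] * max(tag_weights.get(t, default) for t in info["tags"])
        for name, info in journals.items()
    }

# Double the weight of probability (MSC 60) relative to everything else.
scores = reweight(journals, {"60": 2.0})
# → Annals of Probability now scores 16.0 and tops the list.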

I don't mean such a system would always be used appropriately. Most scientific tools can be misused, but I do think this one would be overwhelmingly positive, unlike, say, the nuclear bomb. And the more flexible the ranking, the easier it would be to suppress suspected biases. There could be a principal component analysis feature allowing users to cluster the parameters contributing to a given rating (of a journal, researcher, or article), and we could choose to lower a cluster's weight if we suspected it was being manipulated.
I think such a method, suitably formalized, is used to prove some revelation principles (that voters will report preferences honestly under appropriate rules).
(This is my answer to Yemon.)

I just want to add some pointers to efforts within the scientific community that (right now) fail for us (I think mostly because we disseminate our work differently in mathematics).

Namely, the altmetrics project that already spawned total-impact, readermeter and sciencecard. Then there’s also figshare.com which is something else we could use a mathematical counterpart of.

Generally, I think encouraging post-publication peer review via papercritic or researchblogging.org would help more than journal rankings, since it helps us identify good people and interesting results at the same time.

Thanks again, Peter. After looking a bit more at your links (especially papercritic), I see myself really wishing for a comment section at the arXiv home (Cornell). I think this would be an awesome forum, where we could ask questions of authors and have answers right there for all to find easily. We need that open arXiv "people's monopoly", like MathOverflow and math.stackexchange for questions, and Twitter for "chats", I guess.

Then we may ask what role is left for journals. I think they are still important, as expert cores (the world's specialized computers) doing the highly specialized checks, computations, and analyses required, all of which can be neatly summarized by "accepted for publication". It just happens that some things come in discrete amounts, even binary, e.g. our basic mathematical quantity, the theorem.

I mean, for instance, de Branges may post a purported proof of the Riemann hypothesis on the arXiv (or his homepage), but the value of that paper changes dramatically once it is accepted in Math. Annalen. And this ultimately comes from a stability property of (classical logic) mathematical theories: once we have proved a result, we also know all its implications are true (for RH these are many, whence the choice). This stability reflects on our rational agent-based society, and this more dynamical perspective, agents making decisions versus true/false results, is what makes me choose the term "stability".

I hope my babbling does not annoy too much; it is what I come up with when trying to understand what to do about our publishing system, our big world's fate-voting system.

I would like to remind everyone that whatever system we come up with also needs to work for places like Luther College.

Mathematicians at Luther don’t do research primarily to contribute to mathematical knowledge; they do research primarily because being engaged in research helps them convince their students that mathematics is a living enterprise and helps them communicate to students how mathematics is done rather than just its dry facts.

Given their teaching and service responsibilities, most of them probably only publish one somewhat mediocre paper every several years. (Perhaps, having picked a specific example, I should actually look up the faculty on MathSciNet, but I’m too lazy. Sorry.)

As we all know, basically no one reads mediocre papers; crowdsourced ratings or post-publication peer review won’t work for these people because there won’t be a crowd reading their papers.

At the same time, there needs to be a mechanism to check that they are doing real mathematical work (even if it is not all that generally interesting) and have not run off the rails. The tenure and promotion committee at Luther College certainly wants to know. The graduate admissions committee at the University of Iowa also wants to know, since they will occasionally be deciding whether or not to accept a student who studied at Luther College as an undergraduate, in large part based on recommendation letters from mathematicians there.

Secret Blogging Seminar

A group blog by 8 recent Berkeley mathematics Ph.D.'s. Commentary on our own research, other mathematics pursuits, and whatever else we feel like writing about on any given day. Sort of like a seminar, but with (even) more rude commentary from the audience.