The first observation is that the “best strategy” is to write papers that are just good enough to get into the conference that you want, but no better. This way you minimize your work while maximizing your publications.

Even taking the totally cynical “rational” point of view, I doubt that this LPU strategy is “best” or even good. The LPU strategy may indeed be optimal if you want to maximize the number of your publications, but not at all if you want to get hired, get promoted, get grants, or satisfy other selfish utility measures. In most reasonable places, “paper counting” — especially of conference papers — is rarely significant. Reputation (of author and of venue), letters, even citation counts (and impact factors) are used instead — and none of these fares well under the LPU strategy.

So why do so many researchers seem to follow this strategy to some extent? Why do so many of us produce many mediocre papers rather than fewer good papers? I think that the reason is not game-theoretic: most researchers who trade-off quality for quantity are not trying to cynically optimize their career. They know that their many mediocre papers will fool no-one. Well, almost no-one: they do manage to fool themselves. Indeed producing a paper gives us a very concrete sense of achievement. We can add it to our CV, count it, tell our spouse that we wrote it, and in general have something specific to show for our work. It gives us the required psychological boost of success. A fake one, unfortunately, like some drugs do. The antidote: don’t fool yourself. Easier said than done, of course.

What if some mechanism was introduced to make us aware that simply adding a paper to the CV does not make it significant?

What if it became common practice to maintain not one list of publications, but two:

1) A full list of publications.
2) A list of best publications, with at most two un-awarded papers per year.

The second list would shift the focus from quantity to quality, and if a few good papers are indeed more important than many mediocre papers, it would over time outperform the first list as a measure of skill.

Ideally, we would be forced to plan in advance which two papers to put additional effort into, rather than going for every other problem that comes our way.

Over time, hiring committees, and people in general, might even stop paying attention to the first list, which would effectively put an end to the mass production of papers.

Note also that allowing awarded papers not to count towards the limit of two papers would suddenly make initiatives such as NAJ essential in further improving one’s CV.

So: Let us start using lists of best publications, or perhaps more provokingly lists of significant publications, and judge each other by those.

There are many ways to technically incentivize quality over quantity. “List your X best papers in the last Y years” for small values of X is one version (used, e.g., in the European ERC grants). Counting citations rather than papers is another.

I think there is a presumption that one would produce good/great papers if only one focused on doing that. Although it is quite possible to waste time/energy on mediocre papers/projects, very often people publish what they come up with in their quest to do something interesting. That in itself is not bad.

Lest I am misunderstood, let me start by saying that I am not a supporter of the LPU strategy.

Indeed producing a paper gives us a very concrete sense of achievement. We can add it to our CV, count it, tell our spouse that we wrote it, and in general have something specific to show for our work. It gives us the required psychological boost of success.

Noam, what is wrong with this, provided that one is not fooled into believing that one is offering a major contribution to (computer) science just because one might be churning out lots of papers?

Some researchers have a very strong self-belief and a sense that what they are doing is important, even if they might not be producing or publishing papers for a while. Others harbour perennial self-doubt and need paper production to feel that they are still contributing to the research endeavour.

We are all very different, but I like to think that we all try to offer our modest contribution to the development of our science. If, for some of us, this contribution takes the form of a sequence of “minor” papers, then so be it.

I may be wrong, but I feel that the pressure to publish is increasing and that paper counting does play a major role in many hiring and promotion decisions.

You have a good point. If one works on significant problems it can be years between significant successes. It seems totally reasonable and productive that one shares any “small” results obtained along the way in the form of “minor papers”. (Like everyone else, the vast majority of my own papers are certainly quite “minor”.)

The point is that these “minor” papers should not become goals in themselves, neither game-theoretically, nor psychologically (and certainly not scientifically). The problem is that too often they do become a goal — and my claim was that the reason for this is mostly psychological rather than “rational” cynical advancement of career.

For what it is worth, I completely agree with your point. Much good literature has focused on the psychology of creating novel research ideas. There is also, however, the psychology behind convincing oneself that one can (still) actually contribute something, no matter how small, to the research enterprise. “Minor” papers do play a role in this aspect of our psyche and might motivate some of us to tackle more challenging problems we might not have the courage to deal with otherwise.

In many countries, the length of PhD studies is short (e.g. three years in France). In many countries, a PhD student is able to defend her thesis if and only if she has two or three publications in conferences/journals. In many countries, the list of accepted conferences contains a lot of crappy conferences where submitting a decent text in LaTeX format means acceptance.

To my knowledge, it is not infrequent that minor papers are submitted (and published) because they guarantee that the PhD student will be able to defend on time, i.e., that the student will not be “penalized” by a “maybe-too-elitist” policy of his advisor, or by a “maybe-too-unreasonable-in-two-years” research topic.

I disagree with a lot of the thoughts in this post. But I think to articulate why will take a post of my own, at a future date. I have written in the past, though, about how CS, at least in my mind, often works like a local search heuristic — as individuals, we make lots of small steps, but over time, we get quite far, and this is arguably much more effective than everyone sitting around trying to make a big breakthrough that “jumps” us to a better place (by solving some big problem). Let’s take that as my first argument against and I’ll try to have more written later.

This is by far the most refreshing post I’ve ever read on a theory blog. I’ve run into too many colleagues who seem to forget that our goal in all of this should be to advance scientific knowledge, and not simply to publish papers.

I think some of the disagreements on this blog are due to the fact that people are assigning different semantics to the word “minor”.

To me (and the relevant parties can correct me if I am wrong), the “LPU” being referred to by Yehuda and Noam is a paper that is uninteresting and/or trivial. Others (Luca in particular) seem to be using “minor” to refer to an interesting and non-trivial paper that has a small audience and/or is not a “big” result.

It seems to me we all basically agree that publishing “LPUs” to pad one’s resume is bad, but publishing “minor” results that make incremental progress toward some worthwhile goal is research.