I'm sure you remember this experiment, which changed the required number of close/reopen votes from 5 to 3. Before getting to the meat of the analysis, I should point out that the change worked almost exactly as intended. Close votes clustered around 3 and the median time to close dropped noticeably from November 9 to December 9, 2015:

The rates remained steady for the most part in the months leading up to the experiment. For the month of the experiment, closing, editing of closed questions, and reopening all spiked noticeably. (Deletion has a longer lag time, so we don't have the final tally.)

The big number that sticks out is the 73% closure rate. We've already gone round and round about what that number should be, so there's no point in rehashing that argument. I think too many questions closed here are salvageable and other people (who have firsthand experience in the close queue) believe that the lines are clear and most closed questions would be bad for the site. If Jeff couldn't grok this site, I don't see why I would either.

The reason I don't understand the goals of this site is, in fact, because I'm not part of this community. Closing is part of how this community, as Clay Shirky puts it, "defends itself so that it can stay on its sophisticated goals and away from its basic instincts." Bikeshed questions represent a considerable portion of those instincts, I suspect. Closing questions that verge toward bad subjective protects the group from junk answers.

Shirky also points out in "A Group Is Its Own Worst Enemy":

You have to find some way to protect your own users from scale. This doesn't mean the scale of the whole system can't grow. But you can't try to make the system large by taking individual conversations and blowing them up like a balloon; human interaction, many to many interaction, doesn't blow up like a balloon. It either dissipates, or turns into broadcast, or collapses. So plan for dealing with scale in advance, because it's going to happen anyway.

Our solution to scaling historically has been to encourage users to trust fellow programmers. While it's not without problems, the philosophy does allow one community to handle 8k questions a day. It's been ages since anyone could read all the questions on the site, but the system works well enough to produce an enormous (and growing) store of knowledge about coding. Most of the questions are crap, but that's more than balanced by the quality of the content that rises to the top of searches.

It's clear that changing the number of votes required to close questions will allow this site to scale without changing the criteria for closing questions. I completely understand why this would be desirable. But I think it would be a short-term hack. It will not fix the root of the problem.

You might as well know what I think the core problem is: distrust. Over and over since I started this experiment, I've heard (whether this is what was intended or not, I don't know) concern that outsiders will take advantage of this community the moment its guard is dropped. And it's not just outsiders: there is a deep division among high-reputation users and former users of the site that is marked by distrust. I have some suggestions about how to work toward a compromise, but I haven't earned any trust here myself.

But on the off chance that it could still be useful, I'm going to toss out my ideas anyway:

Assume that only the sign and not the magnitude of question scores matters. Voting volume tends to follow the number of views. Views are driven largely by how interested the general public is in the question and its answers. The second most-upvoted question on SO is: How do you undo the last commit? It didn't start off as a particularly good question, but it represents an extremely common problem and so it gets a lot of views. I suppose in a just world, those views would not turn into votes. But in the real world, not everything is tied to merit.

Assume that only the first few answers are ever read. Almost a corollary to the previous point: popular questions will get more answers. If you dig through those answers, it's pretty likely some of them are low-quality. Gallingly, they are likely to have more votes than brilliant answers to obscure questions. But within a question, voting does a good job of surfacing quality. Studies consistently find that people rarely finish reading pages they land on. Either they find the information they are looking for and leave or they give up on the page. Downvote bad answers and trust your fellows to do the same.

There will always be people who misunderstand the nature of this site. Several people pointed me to a comment (on a now-deleted question) that cited this question as a reason for asking something similar. The example has been closed for over a year. Closing questions does not make them less tempting to copy. Don't confuse post hoc justification for motivation. People will ask dumb questions whether there are similar questions around or not.

Be proud of your past. I know this site has gone through a lot of pain. It's also produced some amazing answers. I've seen the accepted answer to Why do game developers prefer Windows? cited several times by people who aren't active on Stack Exchange. The answers to Is there any reason to use C++ instead of C, Perl, Python, etc.? make me wonder if I gave up on C++ templates too soon. While poking around the codebase here at Stack Overflow, I found the answers to What is MVC, really? useful. These bikeshed questions manage to produce exceptional answers in spite of themselves. Or rather, there are some amazing contributors here who have turned leaden questions into answer gold.

Summary

The increase in closure rate makes reducing the number of votes required to close questions impractical. Other effects (particularly the reopen rate and median time to close) were encouraging, but did not counterbalance the cost in terms of new questions closed.

Would it be technically possible to require a different number of close votes for different close reasons? I'm thinking 3 for off-topic (to quickly take care of blatantly off-topic questions) and 5 for everything else (dupes, migrations, too broad, opinion-based).
– yannis Dec 18 '15 at 7:50

When you say "closed" do you mean "on hold or closed?" Or do you mean "[closed]?" It looks like in November, 20% of questions were reopened (as compared to 8% the month before). Which means that a total of only 53% of questions remained closed, compared to about 50% the month before. To me this seems like a very successful experiment - the entire point of "on hold" vs "closed" is to clarify/improve the question prior to getting answers. If the closed (not "on hold") rate is the same isn't this a great thing? Can you help me understand how I'm misunderstanding those numbers?
– enderland Dec 18 '15 at 13:08

Also, is the close rate only for questions asked in the given month? So when 73.5% of questions are closed in November, does that mean that, of the questions asked in November, that many were closed? Or is it saying "1294 total questions asked in November, 951 questions closed in November, 951/1294 = 73.5%"? If it's the latter, it might just be the result of flushing the close vote queue at the beginning of the experiment. In which case the above comment becomes even more meaningful.
– enderland Dec 18 '15 at 16:00

I would upvote because I am very much thankful that you took the time to run this experiment and analyze the data. But I disagree with the conclusion and some of the broader points that were suggested.
– GlenH7♦ Dec 18 '15 at 16:03

Thank you for trying this experiment. I hope that this experience doesn't discourage you from trying it on another site, since good experiments require more than a sample size of one.
– 200_success Dec 18 '15 at 19:35

@200_success because sometimes the moderators will lock it without closing it - or before it completes the process for closing it. For such specific questions, I would ask the moderator who did it. We've got quite a few in this state.
– user40980 Dec 18 '15 at 20:30

@enderland: The "Closed" number means questions that were closed at some point. "Close Rate" uses that number as the numerator and questions asked as the denominator. Some of those questions have since been reopened. But the "Reopened" column uses "Closed" and not questions as the denominator. Similarly, the "Closed->Edited" uses closed questions and the "Cl->Ed->Re" uses closed and edited questions. So while these are positive developments, the total effect is a net increase in closed questions. Not surprising, but a bit disappointing.
– Jon Ericson♦ Dec 18 '15 at 20:37

@200_success as a guess for some of them, as they were migrations from SO, the closure would have meant extra work from a mod to close, clear the rejected migration lock, undelete the posted answers, and relock the question. Much easier to just lock it and forget about it.
– user40980 Dec 18 '15 at 20:38

@200_success: There's no functional advantage to closing a question before locking. Closing first just buries outside readers in the detritus of Stack Exchange politics.
– Jon Ericson♦ Dec 18 '15 at 20:38

@JonEricson it also fails to instruct new users about what was wrong with the question other than the nebulous "not a good question." While the close reasons c. 2011-13 weren't that great, knowing why the question wasn't a good one might have helped future readers avoid the same pitfalls of asking similar questions.
– user40980 Dec 18 '15 at 20:41

@JonEricson as opposed to the question's current state, where it just says that one moderator thinks "it's a bad question", but there's no reason given for it being bad? That's really weird.
– 200_success Dec 18 '15 at 20:41

@200_success: May I point you to #4 and #3 above? This question may be the only place on the internet where this history lesson can be found and you already gotta scroll past a page of irrelevant nonsense (including the question itself) to get to the good stuff. (Oh. See also a bit of #2.) Adding a close notice will make the page less useful without really helping you avoid bad questions. Because the only way we can avoid bad questions is to disable the programmers.stackexchange.com/questions/ask route. Seriously, the [closed] notice does not deter people from copying a question.
– Jon Ericson♦ Dec 18 '15 at 20:59

8 Answers
Were you really expecting a different result?

This is simple mathematics; you reduce the number of votes to close, and the number of questions that get closed is going to increase.

That's not what this is about anyway. It's about whether or not the right questions are getting closed.

I do think that some salvageable questions get incorrectly closed, but I also think it's a relatively small percentage of closed questions. I've worked with the community to get some of these questions reopened, and to relax their standards a bit. I've prevailed in some cases. But that's not the real problem.

The real problem is that most questions that get asked here suck.

I'd like to address some things you've stated in your post.

If Jeff couldn't grok this site, I don't see why I would either.

This is just not a defensible position. Programmers is the easiest site on the entire network to understand. What is it about? Wait for it...

software design and development

That's it. Simple, isn't it?

The only exception to this categorical statement of topicality is writing and troubleshooting code, which we're very specific about because we don't want to be another Stack Overflow.

Our solution to scaling historically has been to encourage users to trust fellow programmers.

That's just not true.

A couple of years ago, Stack Overflow nearly collapsed under the weight of a hundred thousand monkeys bashing away at a hundred thousand typewriters. Your response was to put in an automated system that blocks a significant percentage of all questions asked at Stack Overflow before the community ever sees them. It further blocks those users who manage to get through those quality filters and persistently ask bad questions. That's not trusting the community; it's trusting a machine.

We don't need that at our scale. All we've ever asked for here on Programmers is a way to close crap questions easier, so that we can get them off the front page faster.

Leaving it up to the community at Stack Overflow, by and large, has produced a site scope which basically amounts to "how do I fix my broken code?" A lot of these questions are far too specific to ever be useful to anyone else, though I am consistently surprised by Stack Overflow's ability to answer a programming question I have through the result of a Google Search, a quality I attribute to the sheer number of questions that have been asked and answered there.

Because Programmers only attracts a tiny percentage of Stack Overflow's volume, I doubt that it will ever reach the same critical mass, which means that the quality of the questions we decide to keep here becomes far more important. Stack Overflow can be content with housing millions of questions and having perhaps a hundred thousand of them receive a significant audience; Programmers doesn't have that same luxury, nor can it have the same tolerance for chaff.

What kinds of questions are acceptable here?

That's easy: good questions. Assuming they are on-topic (that is, about software design and development, excluding specific coding questions), good questions:

Are clear and understandable.

Have a specific problem statement.

Don't ask for lists of things.

Don't ask for product or service recommendations.

Don't require extended discussions or lengthy explanations.

Don't ask "which is better" without explaining what "better" specifically means to them, in a way that isn't a tautology ("best practice" is not any better than "better.")

These should already look familiar to you; they are all the same metrics that will cause a question to get closed at Stack Overflow if they are not respected.

Within a question, voting does a good job of surfacing quality

Only if the question asked is a good, on-topic one that follows the rules described above.

There will always be people who misunderstand the nature of this site

Nobody disagrees with this, but there are reasons why this happens more often than it should: old, highly-voted but off-topic questions that serve as bad examples, tags that don't belong here, an extremely vague site title, our checkered past. And there is still a contingent of people who strongly believe that Programmers should be what it once was: a water cooler for programmers.

Closing questions does not make them less tempting to copy

Which is why deletion, not just closing, is required for those questions that do not adequately represent the site.

Be proud of your past...

This whole paragraph basically asks us to accept mounds of chaff on the off-chance that one of those ill-advised questions might produce a golden nugget. The vast majority of them will falter, leaving us with inadequate shovels to clean up the resulting piles of shit. Stack Overflow doesn't put up with this, nor should we.

New users should not be treated as threats, but as potential contributors

Again, nobody disagrees with this. We give people every opportunity to reform their unsuitable questions. Most of them do not oblige, else they wouldn't have posted their question in its original form to begin with. This is nothing new; you've dealt with it for years on Stack Overflow, but with bigger guns.

I'm sad that the experiment failed

Not because it was not a success (I believe it was), but because our goals seem to differ. Your goal seems to be to save the rare ugly ducklings that lay golden eggs (it seems inconceivable to me how you thought reducing the number of close votes would achieve that, unless you were relying on the three votes to reopen to balance the equation). Our goal is to make closing a bit easier so that we can sweep the questions that nobody really wants (except the ones asking them) off the front page so that visitors to the site can see the good questions that accurately represent who we are.

When I go to a Stack Exchange home page, I see a list of questions. If most of those are terrible questions, with little to no indication that I'd be wasting my time by reading them, the value proposition of visiting and participating is diminished: I have better things to do.

Compare that to answers on a specific question: I've made a conscious choice to look into what I think is an interesting question. I already made the decision that the question is worth my time. If I find the answers to be useless, I have a few different options to register my displeasure, including writing my own answer. Being able to write your own answer is key: if your answer is good enough, it'll rise above the junk answers and everyone will be better off for it.

There is no such action for question lists. I can't say "these questions suck, show me this question I just thought up instead": that'd be silly. So, it's imperative the question list have a high signal-to-noise ratio, removing the penalty for those users who do take the time to read a question and later find it to be useless.

Robert Harvey's answer is almost perfect. He touches on one point several times, but I do think it needs more emphasis.

On Programmers, we close questions ruthlessly and viciously. And that's a good thing.

Closing questions quickly is a very good thing. It stops them from getting low-quality answers and lets the asker and the community work to fix them. If they are totally unsalvageable, quick deletion is also a tool that we use here. And this approach works very well for us.

I noticed that more people were involved in question closures. I saw names that I didn't see often. Also, during peak hours, these questions were closed without people mentioning them in The Whiteboard or flagging them to get a mod on it faster. It made voting actually work by letting the community better self-police. My work as a diamond mod was significantly lessened during this time period.

Finally, when I look at your data...I just see an experiment that worked. The number of closed questions went up. I'd be curious to see the average time to close a question - I suspect it went down, perhaps even significantly. The edit and reopen rate also went up. By closing questions quicker, we could engage users faster and get their questions in good shape for reopening or migration.

Admittedly, there are some barriers that should be looked at. Systemically, the reputation requirements to post on Meta and in chat: these are where we want to direct people who need help. I think many times users have sufficient reputation, but I know I've vented in The Whiteboard and TL about these limits preventing people from reaching resources that would help them improve their questions. Culturally, I think we need to improve how we comment on posts, especially posts from first-time users, to provide better guidance and point people toward helpful resources so that those who want to participate and learn can.

The "average (median) time to close a question" is the graph presented. Prior to the experiment, my eyeballs say the median was 200 to 400 minutes since July, and fairly close to the 100 to 200 minute range from last January to this past July. With the 3-close-vote rule, the median was down in the 50 to 60 minute range.
– user40980 Dec 18 '15 at 13:54

I'd like to focus on an aspect that hasn't been addressed yet. Namely, the edit and reopen rate. During the experiment, the site had an atypically high Close->Edit->Reopen rate. I don't believe that was by chance.

Several of us within The Whiteboard noted during the experiment that we had more time available to edit marginal questions so that they could be re-opened. I even made a comment to the effect that one of the major benefits of three votes-to-close was that it freed active community members up to focus on higher order aspects of the site. That's a very, very good thing.

All of us share a common goal of creating high quality content for the site. Robert's answer very eloquently addresses why it is imperative for us to get rid of the crap questions quickly. This site simply doesn't have the volume of quality questions to drown out the crap questions, so we need to be diligent in cleaning the site.

If we treat active users' time as a zero-sum game, then it's fair to explore how we want them investing their time. Personally, I would much rather they focus on editing the salvageable questions in order to foster higher quality content. I think the experiment created the right conditions to allow active users to focus on more valuable contributions.

All that said, I understand there is disagreement in how to interpret these results. I believe that the true results of the experiment may be masked by the relatively short duration of the experiment and natural fluctuations in the various rates.

I'd suggest re-running the experiment but for an extended period of time. At a minimum, let's see what happens after 3 months. Or more preferably, let's run the next experiment for 6 months and then analyze the results. Arguably the only risk is that we'll close more questions than perhaps ought to be closed. On the other hand, we should see higher quality questions resulting from the additional editing that can be provided.

Yes, we closed 951 questions in that period of time. The question is: is that the right number that we should have closed?

It would be interesting to take a sampling... 10% randomly. 95 questions. Or you could make it an even 100 questions. And then do a Stack Overflow Triage style review of them.

Print them out, get five programmers from the office, and ask them to do a triage on them. Fifteen to thirty seconds per question (that's triage) - about 45 minutes per person or so. Mark each with a green, yellow, or red pen for "looks ok", "should be improved" (enough information for a P.SE community member to edit it into shape), or "unsalvageable" (it should be closed). Then tally up the ones where the majority of the reviews were green pen marks.

Are we looking at closing things where 1% should have stayed open? 10%? 50%? One could then do further analysis of the information and see whether these are questions the community considers off topic, or too broad, or opinion-based, or unclear.

Then one can look again: if P.SE is getting bad closes on 3 out of every 100 too-broad questions, or bad closes on 50 out of 100 opinion questions, then we can look at how those close reasons are used.

But if, on the other hand, we're looking at 1 out of 100 questions getting a bad close while 99 out of 100 are properly handled in a more expedited way... then the discussion needs to turn to how we can cut down on poorly asked questions before they are asked, rather than making the process take longer (and give a worse experience for everyone involved) for questions that really should be closed.

But until we've got an idea of what you want and what you don't want, beyond "don't close everything", it is rather hard to make any measurable change in what the site is doing.

Instead, the queue went from about 20 review tasks to over 100 review tasks in a week. And two hours after vote reset, I had two close votes left after going through the queue and the questions that had remained open.

I would be willing to contend that the activity of people actually casting close votes has a larger effect on the close percentage than the close vote count does. I also went from often having several votes left over at vote reset, for a month, to running out early for the past few days. Model the question of what happens if one or two core close voters go on vacation for a week or two.

Phrased another way: is the number of questions closed constrained by the number of people who have used 75% of their close votes in a day, and how does that pool fluctuate over time? During the experiment, was that constraint removed, leaving excess close votes for those who typically cast all of them? There's another quality-centric (and willing to cast close votes) person who is just shy of 3k rep - will this lead to more questions being closed, getting us closer again to that 70% mark where we were finding ourselves with surplus close votes?

tl;dr:

Most question closures, when questions in aggregate are analyzed, are valid. This suggests the experiment was successful - not a failure.

This meta post won't be for the faint of heart.

Abstract

I have often heard/seen the "p.se closes too much!" cry but rarely see this supported by a random selection of questions which are closed and an analysis as to whether this is appropriate.

It is important, when trying to call the closure rate on Programmers "good" or "bad", to understand what the characteristics of the questions are. If 70% of all questions here are "hi give me a book plz" then a close rate above 70% is ok - likewise, if 70% of questions are good, solid, appropriate questions, then a close rate of 70% is unacceptable.

The objective of this book of a post is to provide one perspective regarding closed questions the community has already deemed somewhat valuable (see Methods).

This post is a comprehensive attempt to analyze the aggregate (not focusing on the individual exceptions, which are discussed per-question below) to find meaningful trends. Isolating individual questions and finding fault with their closure is pointless, if the overwhelming majority disagrees. In other words: focusing on outliers may miss what the data indicates. Don't bikeshed.

The objective is to answer:

Are too many questions unfairly closed on Programmers.SE?

Methods

This is an enderland-designed heuristic that tries to flag questions that might not have been good to close. It's neither perfect nor all-inclusive. It won't include deleted questions (regardless of the number of upvotes). But it covers a good chunk of the remaining closed questions that have the highest likelihood of being "maybe should be reopened" closures.

There are, as of writing this meta post, only 127 questions fitting these criteria (of the 951 questions closed during the period). Note that some of these have since been deleted, but because SEDE is not live data, the total will be off (20 in total). A few didn't show as dups in SEDE either, for various reasons.

Enderland's on/off topic philosophy

I am generally a bit less strict when it comes to off-topicness than many others here. For example, of this entire list, I only cast close votes on two of them (despite being active in The Whiteboard far too much).

I am generally a bit more accepting of borderline questions, as indicated by how infrequently I had voted on these questions.

When reviewing, I took care to write "deleted", "marginal", and "reopened" in my descriptions so I could search and count them to quickly summarize totals.

The reason I include this section is to give perspective on how I generally vote to close and reopen.

Results

Of the 107 non-deleted (of the 127) questions, I cast reopen votes on 15. Assuming my heuristic is reasonable and my interpretation of on-topicness is right, that means a very small percentage of questions are being wrongly closed.

I also labeled 24 questions as "marginal" - in some cases I voted to reopen these and some I did not. Most marginal reopen votes were erring on the side of generosity.

That means, of the 107 non-deleted questions, over 80% are ones I (even with a fairly generous review attitude) would leave closed.
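For transparency, here's the back-of-envelope arithmetic behind that figure, using only the counts reported above (this sketch assumes the 15 reopen votes are the only questions I wouldn't leave closed):

```python
# Counts reported above (assumed exact for this rough check).
flagged = 127                 # questions matching the heuristic
deleted = 20                  # since deleted, so not reviewable
reviewed = flagged - deleted  # 107 non-deleted questions
reopened = 15                 # questions I cast reopen votes on

left_closed = reviewed - reopened
print(f"{left_closed}/{reviewed} = {left_closed / reviewed:.0%} left closed")
# prints: 92/107 = 86% left closed
```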

Conclusions

Focusing on the appropriateness of closure, there were a handful of missed questions. I expect a few on this list to be reopened as a result.

What I didn't expect was how few questions on the list are well scoped for a Q/A site. Robert's answer provides considerable insight into why this might be, so I will not elaborate. Given the strong language in the original results, I expected to find considerably more marginal questions, or to run out of reopen votes. Again, I found only 24 questions I considered marginal - only about 1 in 4 questions meeting my heuristic were even marginally on topic. And these are the ones that survived and are supposedly good! And I expect many people will disagree that they are even good enough to be marginal.

Second, regarding the closure rates, focusing purely on the number of questions put on hold is misleading. The entire purpose of "on hold" vs "closed" is to allow users to clarify their question (or have them edited) and have them opened by the community.

With this in mind, analyzing not just the closure rate but the characteristics of the data, we notice some interesting trends. In spite of the significant increase in question closures, the percentage of questions edited after closure was only 15.2% (vs. an average of 11% the year before). This suggests that questions here are abandoned by the poster or the community at a fairly high rate regardless of how quickly they get closed (85% vs. 89% unedited). A single month is not really enough to determine the implications of this.

I do not see this experiment as a "failure." I see this as a pretty good result, to be honest. Let's say I'm wrong and twice as many questions from this list should be reopened, which makes 30. Let's also say another 30 of the deleted questions were deleted by mistake. That works out to a false-closure rate of about 6%.

Focusing on finding ways to help the community deal with the 94% of questions which should have been closed seems preferable to handicapping it over the (generously estimated) 6% which were incorrectly closed.
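For clarity, the arithmetic behind the roughly-6% figure (the doubled reopen count and the 30 bad deletions are deliberately pessimistic assumptions, not measurements):

```python
closed_total = 951    # questions closed during the experiment
bad_reopens = 2 * 15  # pessimistic: twice the reopen-worthy closures I found
bad_deletions = 30    # pessimistic: 30 questions deleted by mistake

rate = (bad_reopens + bad_deletions) / closed_total
print(f"{rate:.1%} false-closure rate")  # prints: 6.3% false-closure rate
```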

Future Work

Ideally, a random sampling of all questions asked during this period would be used in order to arrive at a better understanding of the correctness of closing.

It would be interesting to characterize the votes on closed questions - such as the percentage closed with zero total upvotes, or the percentage at -4 or lower - for the trial period vs. the months prior.

Questions

Here is the list of all the specific questions I reviewed individually and my evaluation.

My perspective is clearly not going to be perfect. You may disagree (or agree) with my reasoning. Be aware the goal here is not the individual questions isolated from each other but the aggregate result.

Note that there were two questions that were way too long for me to review in a meaningful fashion, indicated by tl;dr.

XY at best; accepted an answer that doesn't answer any question about "safety" but rather "how to avoid confirmations", while the real question is "what is the registry and why is it important", which might be scoped for Super User?

I think a significant part of the disagreement here might be as simple as conflicting uses of terms like "off-topic".

Some people use "off-topic" in a far broader sense than I do, encompassing what I personally refer to as "unanswerable" questions in addition to questions which are objectively off-topic. I fear these distinct uses of the word are clouding this discussion somewhat.

The way I use the word:

A poll about the "best" implementation of the factory pattern is on-topic but unanswerable.

A poll about the "best" C++ compiler would be off-topic and unanswerable.

A question about whether a specific implementation of the factory pattern fits a specific use case would hopefully be on-topic and answerable.

A question about whether a specific C++ compiler will perform RVO is off-topic but answerable.

My understanding of StackExchange is that the correct response to each of these questions, in order, is:

Vote to close as "too broad" or "primarily opinion-based".

Vote to close as "off-topic", without voting to migrate.

Edit, upvote, write an answer, or do nothing.

Vote to close as "off-topic", possibly also voting for migration (let's not get into that debate).

In general, what makes a question "unanswerable" applies equally to all Stack Exchange sites. If a question is unanswerable on Stack Overflow, that means it's unanswerable here, it's unanswerable on Code Review, and so on.

In general, what makes a question on- or off-topic differs completely from one Stack Exchange site to the next. If a question is on- or off-topic here, that tells us nothing about whether it's on- or off-topic on any of the other sites.

Robert's and Jon's existing answers to this meta question only make sense to me if Robert is using the narrow definition of "off-topic" while Jon is using the broad definition. This is also the only reason I can come up with for Jon's apparent failure to understand Robert's answer (other than the three highly improbable reasons that Jon suggested in his own answer).

While I think it's an overstatement to claim that we're the easiest site to understand, our scope really is as simple as "software design and development" with the exception of "writing and troubleshooting code". The questions that we close as off-topic (not broad/unclear/POB, but actually off-topic) are most often tool recommendations and code debugging questions. I can't imagine anyone seriously having difficulty understanding those closures.

What I can understand is disagreement over when a question is sufficiently broad, unclear or POB to justify a close vote rather than an edit or some other response. That issue is far more contentious, and it does need to be discussed. But, and this is the whole reason I'm writing this answer: We need to stop confusing the issues of what's too broad/unclear/POB with the largely uncontroversial issue of what is on-topic.

Questions about the "best" X are completely answerable. We have an entire site that thrives on such questions. Granted, they need specialized guidance to keep posts focused, and it's entirely fine to refuse these questions on this site. But let's not be unscientific about why they don't fit here.
– Jon Ericson♦ Dec 19 '15 at 18:56

@JonEricson They started with those specialized guidance bits because they recognized from the start the problems they'd have. The entire community that maintains and curates that site agrees on those principles. You will find much less cohesion around similar principles here, though the portion of the community that maintains and curates the answers tends to be fairly consistent. The principles we have come up with ("name that thing" is problematic, "is it possible" questions need to be refined, "what did so-and-so mean" is guesswork) arose on a case-by-case basis as we recognized problems.
– user40980 Dec 20 '15 at 5:07

Software Design and Development means those things that are directly involved with producing software. I could go into great detail about this, but since it has been discussed exhaustively, I'll give just two examples: scrum is on-topic; licensing questions, by and large, are not,* and it shouldn't be difficult to see why.

As to your three possible reasons, number two seems to fit best. Stack Exchange seems to have forgotten how problematic poll questions, "best of" questions, and Big List questions can be, and concluded that, because a site now exists that has made them work (albeit with special rules), these questions are suddenly suitable everywhere. They are not. Moreover, unlike Software Recommendations, this community has no interest in encouraging, cultivating or curating such questions.

I don't follow this logic

Let's say you have a bin of 100 marbles, 70 of which are white, 20 of which are gray, and 10 of which are black. The black marbles are the ones that are not only well-written, on-topic questions but also have answers that are useful to others. The gray ones are borderline questions that may or may not have useful answers. The white ones are off-topic, too broad, unclear or primarily opinion-based. A tiny portion of these, for whatever reason, survive and produce gem answers, but the vast majority produce nothing but a time sink.

Now take those numbers, and multiply all of them by 100. Given the same relative proportion of users, one might assume that the chances of finding something useful should be the same on each site. But you'd be wrong, for the simple reason that there are 100x more black marbles on the larger site, and those marbles rise to the top of a Google search. This is why scale matters, and why optimizing for black marbles takes on much greater importance on the smaller site.
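The arithmetic behind the analogy can be made explicit. The proportions are the ones from the answer above; the search-results cap is a made-up assumption for illustration:

```python
# Marble proportions from the analogy above (black = gems).
small = {"black": 10, "gray": 20, "white": 70}
large = {color: n * 100 for color, n in small.items()}

# Same proportions, so the *rate* of gems is identical on both sites...
assert large["black"] / sum(large.values()) == small["black"] / sum(small.values())

# ...but if (hypothetically) only a fixed number of questions per topic ever
# rank well in search, the large site fills that quota with gems easily,
# while the small site has a hundred times fewer gems to draw from.
search_quota = 50  # made-up cap on well-ranking questions per topic
print(min(search_quota, large["black"]), min(search_quota, small["black"]))
```

The rate is the same either way; it's the absolute cushion of gems that differs, which is why a small site has more reason to optimize for them.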

It's probably worth noting here that the rash of shoddy questions constantly appearing on the front page is not at all the same problem as the occasional borderline but popular question getting trapped in the closure net, nor should the two problems be conflated. Stack Exchange seems more interested in solving the latter problem than in understanding the former, and I find that more than a little frustrating.

*Yes, I know licensing questions are listed as on-topic in our help center. They are restricted to a very narrow (and frankly uninteresting) subset of all licensing questions, but ultimately they don't really have much to do with core software development. The difficulties inherent in these questions are already adequately chronicled on Meta, so I won't repeat those arguments here.

In recent weeks I've started making a conscious effort to close only the white marbles, so the gray marbles get a chance too. Now that I'm actively looking for the gray ones, I'm seeing a lot more of them than I expected, which makes the extreme rarity of the black ones a lot less depressing. (not that it matters, but I'd suggest the proportions are closer to 60/35/5 than 70/20/10)
– Ixrec Dec 28 '15 at 9:55

Print them out, get five programmers from the office and ask them to do a triage on them.

This is a fine idea. I reported my results internally and asked for volunteers to do this experiment. I haven't heard back from anyone yet, but it is a Friday and there's some movie premiering today, I believe. I'll see about following up in the next few weeks.

Were you really expecting a different result?

Not really. I was, however, hoping for one.

[Not understanding the site] is just not a defensible position. Programmers is the easiest site on the entire network to understand. What is it about? Wait for it...

software design and development

So I can take this one of three ways:

1. I don't understand this site because I'm not a programmer.

As it happens, I've been writing and maintaining software most of my life. But maybe my previous jobs don't count as "design and development"? I've done a lot more work on other people's code than building new systems, so it's possible I don't get the topic after all.

2. I don't understand this site because I don't understand the Stack Exchange model.

I was a beta user of Stack Overflow and have asked questions on dozens of sites with minimal closure. I'm also working as a Community Manager here. On the other hand, I've historically been soft on question closing, so maybe this is my problem.

3. I don't understand this site because it's a community more complex than the cross section of software design and development and Stack Exchange.

I feel this is the most likely explanation because I don't think #1 or #2 apply to me. I certainly don't think either #1 or #2 apply to Jeff Atwood. Therefore, I'm fairly confident that this site can be difficult to understand for reasons entirely separate from the topic.

A couple of years ago, Stack Overflow nearly collapsed under the weight of a hundred thousand monkeys bashing away at a hundred thousand typewriters. Your response was to put in an automated system that blocks a significant percentage of all questions asked at Stack Overflow before the community ever sees them. It further blocks those users who manage to get through those quality filters and persistently ask bad questions. That's not trusting the community; it's trusting a machine.

Correct. I don't think any of us is happy about that outcome. But what do you think we use for input? Other than the annoying title filter, all of the automated systems use community feedback as their primary source of input.

As the review system has improved, we've significantly softened the question block system. In addition, we have added speed bumps to slow down users who have shown a pattern of asking poorly received questions. (Rolling rate limits are now enabled network-wide, by the way.) We used to get many false positives (meaning people were banned who probably shouldn't have been). After making those changes, we see far fewer false positives. We think this is because the system is doing a better job of amplifying other users' feedback to people who, for whatever reason, have a difficult time understanding it.

Stack Overflow can be content with housing millions of questions and having perhaps a hundred thousand of them receive a significant audience; Programmers doesn't have that same luxury, nor can it have the same tolerance for chaff.

I don't follow this logic. Remember there are almost 150 sites of all shapes and sizes. Stack Overflow has a unique scaling problem. As it turns out, the systems we built to scale our flagship site work fairly well for a wide variety of topics and across the full range of question rates. I would phrase the final sentence in reverse: "Programmers has the luxury to be intolerant of chaff." Having a small volume of questions means this community can ignore the systems that allow Stack Overflow to scale. It's a lot of unnecessary work, but you can do it.

On Programmers, we close questions ruthlessly and viciously. And that's a good thing.

I can't help but notice this is an excellent example of the Sanctity moral foundation. The only other site that has anything like Programmers' emphasis on purity of questions is Skeptics. I'd suggest that this community's unique set of values, and not topic confusion, makes this site difficult for outsiders.

Several of us within The Whiteboard noted during the experiment that we had more time available to edit marginal questions so that they could be re-opened. I even made a comment to the effect that one of the major benefits of three votes-to-close was that it freed active community members up to focus on higher order aspects of the site. That's a very, very good thing.

I agree. If we could keep that without dramatically increasing question closing (which is already a deep outlier on the network), then I would be advocating for keeping this change.

(It's drifting too far from the topic at hand, but I don't think treating user time as a zero-sum resource is a reasonable model. People tend to spend more time on sites they enjoy. It's one of the reasons we encourage people to have fun. Why would anyone keep participating in a hobby that makes them miserable?)

This meta post won't be for the faint of heart.

No it is not. ;-)

I'll need to take more time going over it in the future, but my initial sampling of those questions is that I wouldn't vote to close most of them. But that shouldn't surprise you, I guess.

"The only other site that has anything like Programmers emphasis on purity of questions is Skeptics" -- what about Server Fault?
– gnat Dec 19 '15 at 6:05

"Why would anyone keep participating in a hobby that makes them miserable?" - I'd like to touch upon that briefly. I, and other active users, use Programmers to help provide a backstop to the problems we encounter at work. In other words, we use the site to work more efficiently and not purely for fun. If you don't have the benefit of working on a large team, Programmers can be useful for exposure to concepts that you wouldn't otherwise encounter. That said, there is a measure of fun that comes with mastering a skill, and programming is a skill. But we're not looking for frivolity, ...
– GlenH7♦ Dec 19 '15 at 16:45

... as we could simply use "/r/programming" if frivolity was what we were interested in. We specifically want a high signal-to-noise ratio because we're using this site for professional reasons. We still have a good time and enjoy what we do here. So it's a different definition of fun that causes us to moderate in the way that we do. If anything, that difference in perspective is likely what underpins the impedance mismatch in discussing the site's scope and what constitutes an appropriate question.
– GlenH7♦ Dec 19 '15 at 16:52

There's more that I'd add, but I fear that I'm also digressing from the primary purpose of this post and I don't want to hijack it. Thank you again for continuing the discussion regarding this experiment, and I'm grateful that you're engaging in the conversation with an open mind. Even though I still disagree with your current conclusions, you get my upvote.
– GlenH7♦ Dec 19 '15 at 16:55

If you come back with a list of what was closed that you wouldn't close, that might help. I'm not sure if anyone will change their mind anyway, but it's very unlikely without such a list.
– psr Dec 21 '15 at 17:29

It's been some time, and the holidays have passed. Could you consider grabbing our 100 most recently closed questions (make sure you include deleted and migrated ones) and passing them around to the SE developers to get their opinions on how many of them are clear, on-topic, and not too broad or opinion-oriented? I'm really curious how many of them (and which ones) show a disagreement between what community close votes have said and what another pool of devs thinks.
– user40980 Jan 27 '16 at 21:18