Jason, would you say that having 3 readers gives (approximately) the same result as having (say) a set of guidelines for interpreting the numeric values, provided to readers in advance? I like your "stabilizing" approach, it just seems like having 3 readers on each and every script would drive costs up (which isn't much of a concern if people are willing to pay, I suppose).

If only I had the TA manpower to give my students' papers 3 separate readings each! That would cut down on a lot of complaining about grades.

In a perfect world there would exist a reader with perfect taste who applies a perfect set of guidelines perfectly every time. That person (and that set of guidelines) would definitely be worth a lot to us, especially if we could clone her. And yes, having three readers increases our costs dramatically, which is why it's difficult for us to deliver our 3-in-1 coverage service for less than $200 (we're going to be raising our price from $147 to $197 in a couple of weeks).

Unfortunately, and to quote The Breakfast Club, "Screws fall out all the time; the world is an imperfect place." Our coverage rubric is over 45 pages long, and over half of that is taken up with very detailed guidelines for determining which numeric value to choose for each of the 11 categories, with questions the reader needs to ask herself for each category and instructions for how to prove and/or defend her selections with citations and examples from the script.

Yet even with such explicit instructions, three readers agree on the overall rating of a script (that is, Pass/Consider/Recommend) only about a third of the time. The rest of the time at least one of the readers will differ from the other two, and we see three different ratings of the same script fairly often as well. And that's just one attribute of the script. We've never once seen two people apply the exact same numbers to all eleven categories.
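As an aside (my own back-of-envelope baseline, not a figure from the service): if three readers each picked one of the three overall ratings independently and uniformly at random, all three would agree only about 11% of the time, so the observed one-in-three agreement is well above chance while still far from consistency. A quick check of that arithmetic:

```python
# Chance-agreement baseline, assuming (purely for illustration) that
# three readers choose among Pass/Consider/Recommend independently
# and uniformly at random. Real readers are neither uniform nor
# independent, so this is only a floor for comparison.

categories = 3  # Pass, Consider, Recommend
readers = 3

# Probability all three readers land on the same rating:
# sum over each category of (1/3)^3.
p_all_agree = categories * (1 / categories) ** readers
print(round(p_all_agree, 3))  # 0.111, i.e. about 11%
```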

Cynics might read the above and say we're doing it wrong in some way, whether in our approach or our execution or our vetting, training, and management of our readers. I suppose any of that's possible, but having done this now for over six months, I can assert that we're seeing in practice what we all instinctively know to be true: assessing creative work, whether written or performed or sculpted or what have you, is an inherently subjective activity. Reasonable people can and often do have differing opinions, even when viewing something through the same lens, which is what our rubric is designed to provide.

And as for cutting down on complaining about grades, you're absolutely right, we're seeing that happen in practice, too. Our most common feedback from our writer clients boils down to this: "Getting three sets of unbiased comments was eye-opening. I may not agree with everything each reader said, but they each obviously gave it a thorough read and supported their opinions honestly."

Is this a stupid idea? Since you have three readers, maybe they should get in a room like a jury and fight it out until there is agreement. A good debate could reveal weakness or strength in an opinion. Try it once and see if it works.

It's not a bad idea, but it's not what we're going for. Rather than a moderated, committee-driven consensus score, we want a wisdom-of-the-crowd-style composite score based on multiple opinions. Those differing opinions are provided to our clients separately so the writers can see which critiques are common. If all three readers are tripping over a plot hole, or felt the dialogue was too on the nose, or the lead character wasn't active enough, then it's pretty clear that area's worth addressing in the next draft. Conversely, if only one person had a problem with one of those areas, the writer can decide whether he agrees with the comment and address it or simply ignore it and focus on the rest.

That said, while reasonable people can differ, sometimes one person is simply wrong, and we do a virtual version of your idea when readers' comments diverge dramatically from each other. For example, if one person says a given script is a Recommend and another gives it a Strong Pass (a 4 and a 1, numerically), a red flag goes up; one of those people simply isn't following the guidelines. If the gap is that wide, the readers have an email discussion to figure out what's going on and then scores get changed. In cases where the gap is only two notches (a Recommend and a Pass, or a 4 and a 2, numerically), we still take a close look at the comments to make sure nothing fishy is going on.
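The flagging rule described above is mechanical enough to sketch in a few lines. This is my own illustrative encoding (the function name and return labels are hypothetical, not the service's actual system), using the numeric scale from the thread: 1 = Strong Pass, 2 = Pass, 3 = Consider, 4 = Recommend.

```python
def divergence_action(scores):
    """Decide the review action for one script's reader scores.

    Per the rule described above: a gap of three or more notches
    (e.g. a 4 and a 1) means the readers discuss and reconcile;
    a gap of exactly two (e.g. a 4 and a 2) means a close look
    at the comments; anything tighter needs no intervention.
    """
    gap = max(scores) - min(scores)
    if gap >= 3:
        return "discuss"  # readers hash it out by email; scores may change
    if gap == 2:
        return "review"   # check the comments for anything fishy
    return "ok"

print(divergence_action([4, 3, 1]))  # discuss
print(divergence_action([4, 3, 2]))  # review
print(divergence_action([3, 3, 2]))  # ok
```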

The readers collaborate on the synopsis in a similar way, too -- three synopses of the same script would be a waste of everyone's time, so the first reader does the first draft and the second and third readers edit and refine it.

Well, you said something that makes sense: an approach BL doesn't offer.
I like some discussion before settling on a rating when there's serious disagreement. Comedy especially is so personal that coming to consensus could be impossible. I like your three-reader approach and a method for dealing with broad disagreement. You're making a serious attempt to get to the truth. Mike D

Thanks. To the extent one can even apply the idea of "truth" to this field, yes. We're serious, at the very least.

I see new and young writers obsessing about discovering consistent and quantifiable results from readers. When you've been writing for a while, you start to understand and accept the reality of divergent opinions on a script, which only proves this is an art and not a science.

I agree with others here who are saying it's a waste of energy trying to "fix" the process. This is the only thing you should be devoting your time to:

1. Think up the best concept you possibly can, pitch the idea to everyone and anyone to see if it sounds interesting or confusing, ditch it if people just aren't getting it and think up the next best concept you possibly can. Repeat the process until you have a winner.

2. Write the best logline you possibly can and go through the same vetting process as above until it's absolutely solid.

3. Write the best script you possibly can (not just good enough), get feedback on it (1) from a writers group populated only by professionally-minded writers, not dilettantes, and/or (2) from script report services. Rewrite the script until it's the best it can possibly be.

4. Write the best query you possibly can and vet it. Target a few managers, agents, and production companies as a litmus test. If you get no read requests, revise your query and vet it. Expand your queries to other managers, agents, and prodcos.

5. While waiting, start the process all over with your next logline.

That's all you should be doing. Notice this does not include chasing rainbows and trying to capture the empirical reader because (spoiler alert) she's as elusive as a leprechaun. But if you don't want to listen to this advice, don't worry, it only means less competition for me.

Nobody has suggested that screenwriting is a science; that's a straight-up strawman.

Just about every time someone reminds us of the artistic, unquantifiable nature of story or screenwriting, their very next comment seems to be some version of: "write a GOOD story" or "write the BEST concept/logline/script/query you can." The issue here is that such statements lack any meaning or substance in the subjective contexts their utterers insist we are dealing with.

But translating story or poetry into numerals isn't magic; it's an acquired skill with particular traits. To wonder what kinds of mechanisms are available that would encourage greater consistency in the judgements of readers is not to characterize screenwriting as a calculative enterprise. Having consistent standards benefits everyone in the end because it lends heft and substance to the ratings that are assigned, and that would be a demonstrable step up from the status quo. Writers would have more reliable ideas as to where their writing stands, and readers would be less prone to pass over those few gems that come along but happen to be in genres with conventions of which they are largely ignorant. (A reader who writes romantic comedies doesn't learn the slasher genre by reading hundreds of shitty slasher scripts; they learn it by watching slasher movies, which they may or may not have done and may or may not be inclined ever to do.)

I still stand by my statement. The goal of a new writer is to break into the industry by getting representation, selling specs, and landing assignments. Trying to engineer subjectivity out of the process shouldn't even be a concern. In the world you're talking about, everyone who passed on the original STAR WARS would have been sat down to explain themselves and find some methodology by which they all would have recommended the project.

Here's the truth: This industry is patently unfair. If you accept and understand this notion you will free yourself from wasted energy. The ONLY way to combat it is to write strong material and network intelligently. That is how the industry decreases its unfairness.