As we are in the middle of study section meetings for NIH grants submitted for the June-July dates and heading toward yet another revised-application due date, I'm thinking about the way amended applications are reviewed. The amount of information available to a given reviewer on the previous history of a particular amended application is variable, leading to much dissatisfaction on the part of the applicants. The system could stand to be improved.

When an amended NIH application (A1, A2) is reviewed, it picks up a 6th major category of critique, namely the degree to which it is "Responsive to Prior Critique". There are some very important problems with this process, namely the delay caused by revision churning and the lack of effect it has on the eventual conduct of the science; I've touched on that before. In this post, however, I want to consider the more prosaic and practical implications of the way amended or revised applications are reviewed. The reviewer of the amended application is provided with the most recent summary statement and the revised application. Period. There is no provision of the actual prior application(s) or prior summary statements (for an A2 revision), and certainly not the prior score ranges, the identity of the prior reviewers or anything else.
The first and biggest problem is the failure to provide the prior application to the reviewer to facilitate assessment of "responsiveness" to the prior critique. This means that a new reviewer for the application is basically guessing, on the basis of the summary statement, at what was contained in the prior application. [Grantsmanship sidebar: This is why it is a Good Idea to follow the convention of highlighting changes in the application with a vertical black line in the margin next to substantially revised text passages.] S/he is also hampered in the ability to determine what parts of the summary statement may be unjustified, in which case a failure to respond to a particular critique may be neutral. You can imagine, first, how difficult this is for a reviewer new to the application and, second, the degree to which summary statement writing that is too brief or too poorly matched to the actual review/discussion can introduce big problems.
The second problem is related and stems from the fact that revised applications not infrequently have reviewers who have reviewed prior versions, reviewers who may have been present at the discussion (but not have been assigned the application) and reviewers for whom the proposal is novel. Thus there will be a varying level of recollection of the prior version(s) of the application and of the issues that arose at the prior review. This can be really frustrating, for example when a set of critiques comes back with some saying "highly responsive" and another saying "didn't appropriately respond to the comments on X, Y and Z". You can imagine the reasons. A previous reviewer may recall the key points of discussion better and be able to focus on the criticisms that are most important to address. Alternatively, a novel reviewer may not have any ego bound up in the process and may think a partial response perfectly appropriate, whereas the person who made the comment is unsatisfied by anything less than wholesale adoption of the critique. My point is not to say that either scenario is always correct or always incorrect, just that this is a source of variance in the process.
Finally, a further related point is that even for reviewers who have reviewed the application before, their memory of what happened 8 months ago (the usual 2 review rounds necessary to revise) can vary tremendously. The official rules insist that reviewers are to destroy all information related to the review, including copies of the applications and any notes made during review. I wonder how seriously people take this. I have for certain received summary statements of my revised applications in which substantial blocks of text are reproduced verbatim or nearly so. It takes no great leap to see that the return reviewers have held on to their prior critique files for re-use on the revised application. Since they cannot reasonably predict which revisions they will later receive to review, it is likewise no leap to assume that reviewers may hold onto all of the reviews they have written. Do they hold onto notes and the applications as well? Surely some of them do!
So there is clearly going to be a wide range in the familiarity a set of reviewers has with the history of, say, an A2 amended application, from a very intimate appreciation (a reviewer who has seen all three versions, has great memory for the review discussions and has retained the prior applications and notes) to essentially minimal historical information (a reviewer new to the panel and application). In and of itself this is bound to introduce variance into the process. As a related concern, the difference in familiarity is going to bias the perceived authority of the reviewers and the impact their comments have on the rest of the panel. On my panel at least, people are plenty forthright about saying "I've reviewed this one the prior two times and...." or "Well, I didn't see the prior version but it seems responsive to me...". This is going to have a disproportionate impact on the rest of the panel. As above, I'm not saying this is monolithic in direction. One might assume the clearly more-knowledgeable reviewer will seem more authoritative. However, sometimes the "more informed" reviewer comes across as the dog who will just not relinquish his/her favorite bone, which the rest of the panel can see should long since have been buried.
There seems to be a very simple solution that would go a long way towards reducing these sources of variance: all reviewers should be provided with all the prior versions and summary statements. This would go a long way towards equalizing the amount of "extra" information a given reviewer brings to a revised application.

Good suggestion. Now that most submissions are electronic, it should be easy to provide reviewers with PDFs of the prior versions and their accompanying summary statements.
As far as destroying the documents from a previous cycle of reviews, HAHAHAHAHAHAHA! We've had plenty of cut-and-paste reviews; both good and bad.
I also think that someone reviewing an -A1 or -A2, who hasn't reviewed any previous versions, should be required to explicitly state that in the committee meeting and in the summary statement-- a sort of full disclosure in the interest of transparency.
This will do two things: (1) it will communicate to the table that his/her review should be viewed in the context of their n00bness to the grant; and (2) it provides an investigator with some leveraged discussion points for the Program Officer. For example, if the PO checks the SRA's notes and sees that the n00b liked the grant, the PI can argue that the n00b came in with no scientific conflicts and their review should hold more weight. Conversely, if the n00b didn't like the grant, the PI can argue that the relevance of their review should be viewed within the context that they were not privy to previous versions and their discussion.
Along similar lines, I think that the summary statement should include the raw scores the grant received. Perhaps the raw scores could be indicated along the lines of "Primary Reviewer Scores" and "Table Scores". Say your grant gets a 180, and you look at the raw scores to see that one of the primary reviewers and one of the table reviewers gave it a 120. Such a discrepancy could then be used in the 'off the record' discussion with the PO, perhaps aiding your case this time (or next).

True dat...but the PO doesn't always make it. For example, we've had SEP reviews before, and oftentimes the SEP panels meet around the time Council meets. If that happens, the PO usually can't attend the study section meeting since he/she is at Council. Either way, I still think the raw scores should be available.

25% of grants scoring between 45%-50% get funded? Am I reading this correctly?
I don't think you are reading correctly, no. The score on the X-axis is for the original A0 submission. The numbers getting funded at 45-50th %ile are essentially zero in the 2004-2005 rounds. Some of these got funded as A1 applications, i.e., after being revised and re-scored. Presumably the A1s received much more attractive scores in order to be funded. This graph is attempting to communicate the cumulative funding probability and eventual fate, given a particular score on the A0 and assuming that the revisions are submitted.

Either way, I still think the raw scores should be available.
Are you talking about the full panel numbers or just the preliminary and/or post-discussion scores?
Either way...what a nightmare. Look, Program fields enough whining from applicants, and they are motivated to do whatever they can to minimize it. It is my firmest conviction that most of the noncommunicative robo-speak from POs, the failure to provide firm funding lines, etc., is based on the fear of massive PI whining. I think we can all see that this is a very reasonable fear!
There are frequently disparate scores. This does not always mean that something is "wrong" with review. Yet think about how applicants would take it if they were privy to the score ranges. Just about every other applicant would be complaining about how the 0.3 pt outlier was clear evidence of malfeasance on the part of one reviewer....