In the Shape of a Crowd / Jen Boyle


1. As co-editor of the postmedieval “Becoming Media” issue (3.1, 2012), I have spent considerable time over the last year thinking about “open” scholarly peer review. Alongside my colleague, Martin Foys, I have also spent more than a few rewarding but frustrating hours working with an alternative peer review system. Real labor was shared with the willful network objects and actors of the review system itself: interfaces, discrete disciplinary networks, typeface fonts, blog personas, comment plug-ins, missing paragraph numbers, the ethos of vetting, and WordPress (.com, .org) were all determinative, and sometimes resistant, actants in the system that was (and is) the crowd review experiment. It is this network of actants that I want to touch on in relation to recent discussions of peer review in the humanities. Before doing so, I offer some background on the concept of the “crowd.”

2. Eileen Joy, co-editor of postmedieval, was the first to suggest that the process we were envisioning for “Becoming Media” would be better described as a crowd review than as an open review. We had already selected the essays that would be included in the issue, and thus the entirety of the editorial process was not completely open. Crowd aptly described the processes and configuration of our review: as editors we gathered together to make decisions about what to include in the issue; networks of scholars and graduate students, accessed through conversation, personal requests, and social media, then “gathered” to offer feedback on the essays, along the way digitally mingling with one another; and finally, the entire text of those exchanges was archived as a component of the journal issue, as a kind of living imprint of the event as a collection of texts and as a lingering conversation. In a further sense, however, the crowd concept indelibly outlines the often hidden or erased messiness of peer review in any shape or form.

3. As Kathleen Fitzpatrick and Bonnie Wheeler beautifully elucidate, the premise of traditional “closed” (or, more problematically, “blind”) review is that of filtering and purification (Fitzpatrick, 2010; Wheeler, 2011). Fitzpatrick’s project expands on the contradictory tenor of filtering: a device for both vetting/improvement and censorship. The concept of purification is an even more interesting register. Fitzpatrick draws on the relevance of Bill Readings’s model of the university to the system of peer review. Readings argues that we have been lulled into complicity with the corporate “University of Excellence” — an empty designation that implies a quantitative measure of achievement in the most obscure terms possible. Fitzpatrick extends this notion of empty consensus to the current model of closed peer review, and she points to how this system relies on a “forced agreement” about the purity of the process: clear and exhaustive standards; a common and translatable language among readers, reviewers, and authors. Wheeler highlights how this process is often perceived not as preserving excellence so much as inhibiting the emergence of “new work” (Wheeler, 2011, 313). Both Wheeler and Fitzpatrick speculate profoundly on how we might reform the culture of excellence and purification that informs traditional peer review (moving away from individual reputation and toward “community” and distributed reward and responsibility). The value of such a transition, however, is still highly contested.

4. One of the concerns about open review is that it is not, in the end, really open. Some of the most insightful critiques illuminate a seeming disconnect between the rhetorical frameworks surrounding digital review and exchange and the actualities of its manifestations. Sharon O’Dair identifies many of the most worrying, fast-approaching “catalysts” of “internet-based cultural democracy” (O’Dair, 2011). Though such systems are argued for as “open,” “democratic,” and “non-hierarchical,” she points to how these gestures often obscure the realities of online commentary writ large: a few, often uninformed interlocutors offering up highly reductive surface engagements. Most significant from my perspective is the further implicit disclosure of how these systems feed back into a creeping, all-encompassing neo-liberal control of discourse and the production of knowledge.1

5. I am not convinced, however, that the remedy for this can be found in the status quo of academic professionalism (in the guise of traditional peer review or otherwise). And here again, the “crowd” takes on an important role. Many of the arguments made in favor of traditional peer review deploy the ideals associated with individuated professional expertise. Such expertise is defended as rigorous, objective, and most saliently, as a discrete embodiment of evaluative “standards.” Further assertions along these lines associate this class of professional judgment with an ethos resistant to a consumerist model of education and neoliberalism. Amid a reductive gesture not unlike the sweeping claims for online democracy, the crowd is shaped as an indistinct contaminant to this sphere of skilled and professional evaluation. In many senses, this argument relies on an image of the individual scholar protected from the contaminating influences of the tyranny of the uninformed: the crowd, the mob, the naïve and many.

6. Ironically, it is an early media theorist, Marshall McLuhan, who offers a very different tracing of this trajectory from the naïve crowd to the expert professional (McLuhan, 1964). As McLuhan argues, there is a highly anxious but symbiotic relationship between crowd psychology and the aggrandizement of “ratio”: numbers, digits, and rational measure. The appeal to the singularity of quantification is born out of an anxiety that is of a piece with the crowd, the horde, or the heap. Ratios and numbers, whether conceived behind the abstractions of “standards” or “excellence,” are not a classificatory system separate from the crowd but an extension of it. The “shadow of number,” as McLuhan refers to it, is not an image that guides one above Plato’s cave, but a phantasmagoria that proffers the illusion that numbers and digits are natural extensions of the mind/body of the authoritative (“non-tribal”) individual. As McLuhan shows, the Greek notion of “common sense” initiates this sleight-of-hand between figures of the crowd and the sensorium of the private body as a rational calculus (the one; the individual).

7. We can extend McLuhan’s observations via Bruno Latour to describe how “modern” systems like academic review require that this ratio of excellence eradicate attention to the specifics of “translation networks” (Latour, 1993). Translation networks refer to the effects and distributed causation of a multiplicity of actors in any given system — human and non-human. There is an unstated assumption that the main authority in peer evaluation is the judgment of an authoritative expert. Yet the reality of the peer review process is closer to Latour’s concept of a “parliament of things”: an equal constitution between humans, objects, and networks. This is true of both traditional and alternative models of review: the design of a commenting interface or the tardiness of a correspondence between reviewer and reviewed have as much impact as an evaluating scholar on what we might regard as a “decision.” The process is never pure, but within traditional review the shadow show of the “work of purification” trumps the “work of mediation” (Latour, 1993, 131-138).

8. Graham Harman offers up an image of Latour’s take on democratizing that cannot simply be dismissed as “pandering to the spirit of our age”: production of knowledge not as “one against the crowd, but as one in the shape of a crowd of allies” (Harman, 2009, 88). Allies in this passage does not imply consensus (with the crowd or the expert) as much as it does an attention to the shifting alliances and networks that make thought realizable. This emphasis on shape versus reduction via ratio or number I think has tremendous power for speculating about what peer review might become.

9. Shape invokes a model that is as much descriptive as it is evaluative; a model that does not exclude the work of mediation. Though not designed specifically for academic production, the in-progress internet evaluation system Hypothes.is offers a future example of such a model: a system of platforms that make transparent how categories of evaluation (“domain experts”; “merit”) form up in direct relationship to their emergent and constitutive judgments of content. This would require a limited departure from the exclusive emphasis on individual reputation and isolated evaluative judgment within thought communities (see also Martin Foys’s response in this Forum on a companion image, the scholar in isolation). This model will work only if we can embody dissensus as a shape that does not necessarily have to become a discrete judgment. It also requires attention to how things help constitute thought as a category. How might we include the determinative influence of interface design, network nodes, and blog correspondence as real actants in the process and development of the meaning and value of communicated thought?

10. As Latour would argue, descriptive realism does not imply a lack of judgment, but opens up the representational field in ways that expose the myth that epistemology and political valuation can be cordoned off from one another (politics here including the polemics of professional communities as well). We now have systems available to us that make such a peer-to-peer model possible (digital networks, blogs, electronic archives). What remains to be seen is if we can find merit and value in peer review as a shape in “plain view” (Felski, 2009, 30).2

Notes

1. In particular, we can think of how far apart the claims for digital democracy are from the realities of the too often behind-the-scenes struggles over corporate control of databases and network protocols; proprietary software; and legal rights to everything from broadband access to academic journals.

2. For similar “descriptive turns” within academic methodology see Love (2010) and Moretti (2005).