Critical Discourse in the Digital Humanities by Fred Gibbs

This post is a moderately revised version of a talk I gave as part of MITH’s Digital Dialogues series, titled “Criticism in the Digital Humanities.” The original audio and slides have been posted; this version has benefitted from the thoughtful questions and comments that followed my presentation. Many thanks to MITH for both warm hospitality and provocative discussion.

My interest in the role and nature of criticism in the Digital Humanities grows out of a question that Alan Liu has asked in a few places this year: Where is the cultural criticism in the digital humanities? Although I’m not convinced that DH needs its own brand of cultural criticism beyond what its constituents would normally do as humanists, the question resonated with me because it made me wonder (with only silence to follow): where is the criticism in the digital humanities?

My ideas center around three main points that I’ll briefly sketch out here.

1) DHers have not created an effective critical discourse around their work.
By this I don’t mean that we haven’t gone far enough in publicly trashing each other’s goals and results. Nor do I mean simply that there hasn’t been enough peer review (though that’s true). Rather, I suggest that DH criticism needs to go beyond typical peer review and inhabit a genre of its own—a critical discourse—that itself provides a valuable service both inside and outside the community. More importantly, criticism and the intellectual work that it does make the value of our work clearer to those outside of the DH community.

2) To achieve more effective criticism, we need more rubrics for evaluating DH work.
Although scholarly communication and peer review have been highly active topics in DH circles for several years, especially this last one, I haven’t seen truly useful evaluative criteria emerge from these discussions that appreciate the differences between digital and traditional work (not that there is a strict dichotomy). If some are around, they’re not nearly as pervasive as they need to be. Everyone in the field knows that the most innovative DH projects cannot be fully evaluated through the traditional, critical, and theoretical lenses of the humanities. But what lenses do we have? How do we know when to use them? How can we help others outside the field use them?

3) DH work requires a different kind of peer review to produce effective criticism.
Effective single-author reviews are almost an impossible expectation given the complexity of most DH work. A new model based on collaborative, self-mediated peer review will make for much more effective criticism. Not only do we need a more crystallized rubric, but new models of publishing now require a fundamentally new kind of peer review—and I don’t mean simply online peer review or open peer review, two efforts that I think have gotten the lion’s share of (much-needed) attention when it comes to reforming antiquated review processes. The multifaceted nature of DH requires a different kind of critique than is typical in the humanities because it places rather unusual demands on both critics and criticism itself.

I: towards a critical discourse

We all know that disciplinary boundaries are notoriously difficult to define. Yet they do somehow exist beyond professional titles and departmental affiliations. This boundary problem also gives rise to the question of whether there is any real difference between the humanities and the digital humanities—an interminable debate that need not detain us now. It will suffice for present purposes to say that digital humanities is different enough from the analog humanities, at least at the moment. But allow me a brief moment of justification that will be important later on. The digital humanities are of course not fundamentally different in any larger epistemological or hermeneutic sense from the humanities at large. Both are fueled by humanistic inquiry about the human condition and good stuff like that. But there are lots of different methodologies, many objects of study, and many ways of writing about them. Why should the digital be any different?

Part of what defines a discipline is its rhetoric and the aesthetics of its scholarly discourse. Philosophy texts sound different from history texts, which sound different from literary analysis. These differences become especially apparent during collaborative projects. As much as we champion cross-disciplinary work, there is an inherent unease to it, in no small part because it becomes harder to tell how to evaluate it. Given a particular piece of scholarship: How should one read it? Which criteria should be applied? Of course these lines in the sand are easily blurred and effectively dissolve if one looks too closely. But in the larger view, they’re there.

If rhetoric and aesthetics can help characterize and delineate different kinds of scholarly work, they do so in no small degree through a critical discourse shaped by community consensus and convention, and by the definition of practical and theoretical ideals. One major way in which DH is in fact separate from the humanities (again, at least for now) is that it requires new ways of evaluating very complex work in terms that are often unfamiliar to most humanists.

One of my favorite illustrations of this is William Thomas’s article, “Writing a Digital History Journal Article from Scratch.” The article is from 2007, but describes events that seem ancient now, circa 2003. How did analog historians critique history scholarship in the form of a website? Despite the project’s many virtues, reviewers could only wonder what it did better than the standard practice, and whether “the rewards [of the electronic article] were simply not commensurate with the effort and confusion involved.” Well, it was a long time ago, you say. Agreed. But I’m not sure that a similar exercise today would yield significantly different criticism from a non-DH audience. There is indeed more sensitivity to digital work, but the work itself has gotten considerably more complex as well.

This is not to criticize the average humanist for not knowing the value of normalized datasets, relational databases, or valid XML. This is to say that those who do know their value haven’t been particularly clear about why these things are useful in the contexts in which they are employed. What we might perceive as ignorance on the part of reviewers arises at least in part because the rhetoric and aesthetics of DH work are not particularly well established. In other words, the critical sphere has not yet materialized.

Why might this be? One reason for the difficulty in fostering a critical discourse might center around the nature of the DH community—a rallying point for many, if not most, self-proclaimed digital humanists. As a community, we’ve been encouraging and supportive, tending to include and welcome everyone with open arms. The Big Tent theme from the 2011 Digital Humanities Conference suggests this spirit is ongoing. Such an approach has been essential and ultimately very successful in terms of broadening the scope and influence of the field. This should, and hopefully will, continue.

However, such strong community solidarity and support may inadvertently curtail public criticism. This is not to say that we should become rude, exclusionary, and inwardly hostile. But we can’t be unhappy that tradition-bound hiring committees, promotion and tenure committees, deans, and other humanists don’t appreciate the value of our work when we haven’t really outlined how it’s different and how it should be appreciated. In other words, we haven’t provided a public critical discourse that offers the traditional signals to those who are not expert as to what work is good and what is not—and thus serves as a compass for practitioners, critics, and outsiders alike.

In sum, DH needs more critical theory that grows out of its own work and also from farther afield, drawing on critical methods from those working in new media and history of technology, as well as platform, hardware, and software studies.

Post-talk chatter via Twitter prompts me to clarify two important points: To argue for a critical discourse is not to suggest that DH projects are inherently flawed and must offer more tempered epistemological claims and greater transparency—and that they should be criticized when they don’t. Good criticism will, of course, address these issues, but that’s not really the point here. Criticism serves a much larger role beyond pointing out flaws, as I try to argue in the following section. I should also emphasize that to suggest that there is an insufficient critical discourse surrounding DH work is not to suggest that DHers are uncritical idiots. We’ve all criticized projects, approaches, results, and what-have-you behind closed doors. We always want to learn from and improve upon past work; we all think carefully about how to do our best work and sound scholarship. But the most useful critical discourse is a public one. Exactly what constitutes the sound scholarship that we want to do (and actually do) is not nearly as apparent to others, especially those outside the DH community, as it should be. It befalls the producers of that good scholarship to explain what is and what is not considered good, and why.

II: the value of criticism

What is the function of the critical discourse? Certainly not simply lambasting each other’s work. Criticism is fundamentally about interpretation. It outlines utility and value, blemishes and flaws; it identifies sources, commonalities, and missed opportunities. It points out true innovation when it’s perhaps not obvious that paint slopped onto a canvas is actually worth thinking about. And it calls out success claims that amount to little more than—to adapt a phrase from Michael Joyce—technological frosting on a stale humanities cake.

Have we not all seen intriguing, if not jaw-dropping, visualizations that made virtually no sense? Of course the real thrill of these is recognizing the beauty in the fact that some obscene amount of data can be viewed in a small space, possibly interactively. Anyone who’s even thought about trying it knows how difficult it is. But what does everyone else think? We need to discuss, for example, the value of being able to automate the creation of such visuals apart from the communication that happens as a result of their design. Is this a methodological triumph, or an interpretive one? How can an explanation of the creation of such visuals ease fears of black-box manipulation? This is just one instance where a critical discourse for DH would be far more valuable than grant applications that sell potential work and post-facto white papers that champion whatever work happened to get completed. We need more than traditional journal articles that describe the so-called “real” humanities research that came out of digital projects.

Perhaps most importantly, as I’ve already suggested, criticism serves a crucial signaling function. Matthew Arnold, in “The Function of Criticism at the Present Time,” defined it as “a disinterested endeavor to learn and propagate the best that is known and thought in the world” (1864/5, 75). I think that this pretty well describes what we need to do. The staggering rate of DH project abandonment has caused some alarm of late. We’re painfully aware that most academics aren’t that good at marketing beyond their disciplinary peers. Criticism selects and propagates projects that merit attention and can serve as models. To continue with the previous example, we need criticism that praises the technological achievement of visualization while condemning poor design practices; we need criticism that lauds the interpretive potential while critiquing the extent to which anyone can use the methodology. Again, the point of such criticism isn’t to ridicule or minimize difficult work, but to advance the field.

Of course criticism has to be good and original, not dogmatic. Irving Howe, the influential cultural critic from the mid-20th century, remarked that “power of insight counts far more than allegiance to a critical theory or position…No method can give the critic what he needs most: knowledge, disinterestedness, love, insight, style.” It’s not easy to have these! But when someone does, and can write clearly, the resulting criticism performs extremely valuable scholarly work—work that goes far beyond the original project, but makes it even more useful at the same time. Such criticism is especially good at establishing and debating the terms by which to analyze a particular work. This discourse of critique is where new standards get hammered out. It provides the connective tissue between projects that pronouncements from on high simply cannot.

so what do we look for?
This last year in particular has seen much energetic rethinking of scholarly publishing. Part of this discussion recognizes that what we’re publishing is different. The MLA, for example, maintains a site that outlines types of digital work and offers guidelines for evaluating it. To their credit, the MLA has been one of the most visible scholarly societies in facilitating these kinds of discussions. But here and elsewhere, the focus has remained on getting non-print work recognized and promoting the value of process over results.

These were important arguments to make (and to continue in some cases), but we have to go beyond that now as well. Even if digital work is more acceptable, we haven’t really created sufficient guidelines for evaluating digital work (broadly defined) on its own terms. As far as the MLA goes, the guidelines for evaluating digital work are not all that different from those for evaluating analog work. On one hand, that is exactly their point! On the other hand, it’s perhaps a bit counter-productive because it doesn’t consider what’s unique about digital work. More helpful, I think, are the NINES guidelines for peer review. But they are at once too general to enable rigorous criticism, and too specific to NINES projects. They are, however, an excellent starting point.

I’d like to outline a few very general criteria that might be broadly applicable to digital work, as disparate as it can be. Certainly this list is not comprehensive, but rather a starting point. I won’t dwell on them here; I don’t pretend to have all the answers. I only hope that this list can serve as a small step in furthering the discussion.

Transparency

Can we really understand what’s going on? If not, it’s not good scholarship. “I used a certain proprietary tool to get this complicated visualization that took a gazillion hours to encode everything in my own personal schema–I won’t bore you with the details–but here’s what I learned from the diagram…” This cannot be considered good scholarship, no matter what the conclusions are. It’s like not having footnotes. Even though we don’t generally check footnotes, we like to think that we can. So it’s natural to expect resistance when a footnote resembles a black box. DHers have gained some traction in encouraging others to value process over product. Transparency helps us to evaluate whether a process is really innovative or helpful, or if it’s just frosting.

Reusability

Can people take away what you’ve done and apply it to existing or future projects? This embodies so much of what is central to the ethos of the community. We’re always looking for better ways of doing things. This applies to methodology, code, and data; it applies also to both process and product. It creates an interesting gray area for generalized tools. Are these to be shared? For now, I would say: absolutely! It’s part of the effort, as Matthew Arnold put it, to propagate the best. Obviously, not everything is reusable, but discussions about what must be reusable and what cannot be are important theoretical positions that will get worked out in a vibrant critical discourse, both about concrete work and in abstract theoretical terms.

Data

Needless to say, most if not all DH projects rely on data. It must be available! Not just for retesting, but for use in other places. Exactly how data should look is far from obvious. If nothing else, discussing a project’s use of data will encourage conversations about ownership, copyright, the limits of what can be shared, and so on. Those issues are becoming more relevant than ever as we create new research corpora that bridge historically separate datasets. I think that a critical discourse about our projects is a fruitful venue for this, and perhaps more effective on the ground than more abstract, theoretical proclamations (like this essay…). It’s unacceptable simply to keep our data hidden in order to avoid the difficult issues, even if doing so is sometimes necessary in practice. But it’s not good enough to point people towards a raw data source only to say: “well, I cleaned this up, standardized it, and reformatted it…but I’m going to keep that work invisible and hoard it.” It’s like footnotes without page numbers.

Design

There is great value, I think, in representing humanities scholarship in non-traditional forms. We’re seeing more scholarship online, and new kinds of “publications”. However, academic convention dictates that—at least in terms of scholarly content—we privilege content over form. In reality there is a palpable tension in the separation of content and form. On one hand, especially in the context of new media, web design separates these; on the other hand, as McLuhan pointed out long ago, the medium is the message. Even more broadly, by ‘design’ I really mean organizing principles. Why is a particular design strategy the best one or not? DH projects might be more explicit about such choices, but certainly our critique of such work must address these issues as well. Design is not only graphic in nature: it must also apply to the decisions behind database design, encoding, markup, code, etc.

III: new kinds of peer review / criticism

Again, my point isn’t just that DH projects should embrace these values. Obviously, many already do. My point is that they need to get critiqued explicitly and publicly. But how do we really DO it?

As everyone is well aware, the nature of publishing has changed; we now do many digital projects that are never really done or officially published (with an imprimatur of review and vetting and so on). This means that the typical review process has been turned on its head. Getting a grant is too often an end in itself, taken to justify even the completed work. But this signing-off by the scholarly community happens before any work gets done. While traditional scholarship (books and articles) is held accountable to its stated goals and methodologies (as far as the medium permits), digital projects have not had that accountability from the scholarly community. This is a grave disservice in two ways: Projects learn less from each other, and projects remain isolated from relevant scholarly discourse.

It may sound as if i’m simply advocating for more peer review, and conversations about scholarly communication have often made similar suggestions. For example, in late spring of this year, Jennifer Howard wrote a very nice piece for the Chronicle titled “No Reviews of Digital Scholarship = No Respect.” The gist of the article is evident from the title. She ends by saying that scholarly societies and editors of traditional journals need to step up and encourage this work. I think that this has been a popular viewpoint.

Obviously, I agree with Howard’s point that peer review is necessary for legitimization. However, while change on the part of societies and journals would be nice, I think that it is entirely incumbent on practitioners to set the terms. Let’s not wait for a few gatekeepers to dictate terms to us. More importantly, I’m not sure that getting a formal review and thus the imprimatur of serious scholarship is enough. Let’s be honest: we do need more peer review! But that’s not all. We need a fundamentally different kind of peer review. Just as the nature of publishing is changing, the nature of peer review must evolve, especially for large DH projects, but for individual ones as well. Digital humanities work requires a different kind of criticism than most academic criticism because of the very nature of the work. DH projects often serve much broader audiences, and embody interdisciplinarity in a way that eludes traditional models of critique.

As a way of fostering useful criticism, peer review needs to be more collaborative than before. I mentioned earlier the unease of situating interdisciplinary work in professional pigeonholes. It makes for difficult reviews as well, which need to be collaborative in two ways:

More people to review individual projects. How many people can really critique the various facets of a digital humanities project, ranging from graphic design and interface design to code and encoding standards? Even if one could, it’s a herculean task not befitting the typical lone reviewer.

Collaborate to organize these reviews. Given the complex publishing models of DH projects, why not move away from editor-mediated peer review, which minimizes the public effect of the critique? What is the code like? What is the data like? What conversations is the project trying to join? What work does it enable at either methodological or interpretive levels?

Furthermore, it is incumbent upon projects to build time for these critiques into grants and to get this feedback during and after the project. They should get not just empty, laudatory critiques but ones that try to shape the project in productive, if challenging, ways. These critiques should be published as part of the project, since doing so helps to foster the vibrant critical discourse that is crucial. DH work is often iterative in nature, and the review process needs to be as well. Just as digital humanities projects are inherently more public than the typical humanities project, everyone benefits when their critiques are more public.

Funders need to broaden expectations of sustainability beyond access and infrastructure to include how a project situates itself within the larger scholarly discourse. In other words, projects need a social contract with the broader scholarly community, not just with their funders. Funders must prioritize and encourage public critiques as a way of establishing scholarly value, rather than relying on grant selection alone. A project without accountability, without connectedness, without critique, simply fills another plot in the DH project graveyard.

Not only do we need to do this ourselves, but it needs to be part of the DH curriculum. It seems like there’s a new slate of DH classes each year, if not each semester. These courses need to explicitly teach critical methods for confronting the unique issues in DH work. Both theory and practice are essential here. We must have more than gossipy complaints that don’t go beyond the classroom walls, or vapid reviews that fill the backs of most printed journals. Good criticism is very difficult. Students need practice pointing out what’s good and what’s lacking in a project in a way that benefits both the project and the average humanist who needs to understand (and not just evaluate) it.

IV: lastly

The criteria I mentioned (transparency, reusability, data, design) operate in a larger theoretical framework as well. We can benefit, I think, from considering an adaptation of a well-known diagram of criticism from M. H. Abrams. With DH work at the center, four proximal spheres of criticism might guide our approach. The formalist critique examines the form of the work, considering how well its structure, form, and design serve its purpose in the context of similar works; didactic criticism focuses on the work’s ability to reach, inform, and educate an audience; mimetic criticism might evaluate the extent to which the DH work truly is humanist work or facilitates it (this replaces “universe” in the original diagram); expressive critique discusses how the work reflects the unique characteristics and style of the creator(s). Here a “team” anchor replaces the original “artist” label, but only reluctantly, since DH projects require, as I have argued, a more complex kind of criticism than the typical scholarly work, perhaps one more akin to literary criticism. Of course these spheres are not entirely separate. Throughout each of them, for instance, we must remember that code and metadata, as well as data and whatever structures govern it, are not entirely objective entities but are informed, attacked, and defended by ideology and theory. Perhaps not to the same extent as a work of art, but they matter and they need to be discussed. These spheres of criticism are of course applicable to any humanities research; they are especially crucial to digital work when so much of it is misunderstood.

In the end, I hope I’ve suggested how it would be useful for our projects to have their own kind of critical discourse that will do the essential work of outlining what’s good and why. My remarks and exhortation to criticism should absolutely not be taken as any kind of attack on the worth or value of the Digital Humanities or its practitioners. Obviously, I am offering criticism of existing practices (including my own), but only because I want the DH community to enjoy even greater, more efficient success and broader acceptance within the humanities at large. A more sophisticated critical discourse seems like one productive, if not essential, avenue towards those goals.

I really respect and admire these notes toward the development of a critical discourse “native” to DH. I think we do need such a thing. And if I were trying to sketch such a discourse, I would produce a less-lucid version of what you’ve produced here. At the same time, I want to mark the possibility of an alternative. I increasingly feel that certain kinds of DH (especially text mining) are going to have mostly heuristic rather than evidentiary value: in other words, we may use topic modeling, etc., to discover a historical lead that we then pursue and write up using relatively traditional historical/critical methods and evidence. Where this is the goal, I’m not sure that the critical standards “native” to DH (transparency, etc.) are going to be quite as central. To be sure, it’s still nice if the topic-modeling process we use isn’t a black box hidden somewhere inside a silo. But the bottom-line question may be simply: Are leads of this sort turning out to be useful, by conventional humanistic standards?