Linking big data to fine analysis: the challenge of textual methods in the digital humanities.

ABSTRACT

In this talk I will illustrate and explain some of the guiding principles behind our efforts with the Indiana Philosophy Ontology (InPhO) and its partner projects. Although the term "ontology" in the realm of computer science has come to mean a formal structure primarily designed around the requirements of computers, our mantra is that ontologies are for people too. In the digital humanities especially, this means that representational flexibility and human interpretation should remain at the core of what we do. Furthermore, large-scale analysis alone is insufficient for the needs of humanities scholars: both the scholars and the algorithmic analyses need to be connected to the primary texts in ways that help support scholarly interpretation. I will illustrate these principles through the "Digging by Debating" joint project between InPhO and other partners in the U.S. and U.K., which has been funded by the NEH and JISC through the Digging Into Data challenge. In this project we seek to connect high-level views of large quantities of digitized text (such as selections from the HathiTrust/Google Books collection) to close analyses of specific pages containing philosophical arguments that are of significance to the history and philosophy of science.

Since its initial role in artificial intelligence research during the early 1970s, computer vision --- defined, for the purposes of this talk, as the automated description and reconstruction of the physical world (including its subjects and objects) through algorithms --- has grown increasingly accessible to a wide variety of audiences through a broad range of consumer electronics. For instance, consider the number of cultural heritage projects relying extensively on optical character recognition. Or, in commonplace apps like iPhoto, note the use of face detection techniques for image description and searching. Elsewhere, web-based repositories such as Thingiverse house museum collections of 3D scans and print-on-demand models (e.g., from the Art Institute of Chicago) generated by both staff and patrons. And now Kinect hacks are practically ubiquitous on the web, with people regularly repurposing the sensor to create games, build DIY robots, and construct playful interfaces. Unpacking these phenomena across academic and popular domains, this talk highlights the need for digital humanities practitioners not only to engage with how computer vision is embedded in our research but also to explore how it actively transduces our materials, with an emphasis on the production of prototypes --- or "fabrications" --- that do not yet exist in the physical world. Here, the talk draws examples from recent research conducted by the Maker Lab in the Humanities at the University of Victoria, where --- through its "Z-Axis" research initiative --- practitioners are conducting experiments in stitching (i.e., translating 2D photos into 3D models), decimation (i.e., reducing the polygon count of models), and displacement (i.e., pushing and pulling the geometry of models to generate depth and detail) in order to articulate new-form arguments about literary and cultural histories.
The Lab's Z-Axis methodologies develop existing digital humanities research in speculative computing (Drucker and Nowviskie), geospatial expression (Moretti), data visualization (Manovich), algorithmic criticism (Ramsay), and ruination (McGann, Sample, and Samuels) in order to: 1) build persuasive objects that, like written essays, function as scholarship; 2) explore the potential of 3D techniques, desktop fabrication, and critical making for humanities research; 3) open material culture and history to unique modes of perception and interpretation; and 4) resist quotidian assumptions that computer vision affords neutral, high-fidelity replicas of our lived, social realities. To "lie" with computer vision, then, is to tinker with its default settings and transductions, reconfigure them, and mobilize them toward novel and unanticipated forms of scholarly persuasion.

Whitney Trettien, PhD Candidate in English, Duke University

Short-Circuiting the Hardware of History.

ABSTRACT

The past is, as Wolfgang Ernst has provocatively written, the “artifactual hardware, so to speak, upon which historical discourse operates like a form of software.” Taking up the implications of Ernst’s statement, this talk explores how tinkering with the material weight of history, its hardware, through the creative/critical use of digital media has the power to update the software of our discourse. By deliberately engaging the charged differences of electronic media – their material strangeness in relation to historical artifacts – tactical methods of creative deformation and critical making have the power to short-circuit scholarly conventions, forcing current methods of reading, writing, and communicating to run along new paths.

The Forum is free to attend and open to participants beyond KU, but space is limited.

Questions may be directed to the Institute for Digital Research in the Humanities, idrh@ku.edu