Saturday, July 8, 2017

Images and Objectivity

Objectivity attempts to trace the emergence of scientific objectivity as a concept, ideal, and moral framework for researchers during the nineteenth century. In particular, the book focuses on shifting ideas about scientific images during the period. In the eighteenth and early nineteenth centuries, Daston and Galison argue, the scientific ideal was “truth-to-nature,” in which particular examples are primarily useful for the ways in which they reflect and help construct an ideal type: not this leaf, specifically, but this type of leaf. Under this regime scientific illustrations did not attempt to reconstruct individual, imperfect specimens, but instead to generalize from specimens and portray a perfect type.

Objectivity shows how, as the nineteenth century progressed and new image technologies such as photography shifted the possibilities for scientific imagery, truth-to-nature fell out of favor, while objectivity rose to prominence.

And that's what interests me, the focus on images, and the rise of photography:

In debates about the virtues of illustration versus photography, for instance, illustration was touted as superior to the relative primitivism of photography—technologies such as drawing and engraving simply allowed finer detail than blurry nineteenth-century photography could. Nevertheless, photography increasingly dominated scientific images because it was seen as less susceptible to manipulation, less dependent on the imagination of the artist (or, indeed, of the scientist).

Images, of course, are clearly distinct from the prose in which they are (often) set. Images are a form of objectification, though it takes more than objectification to yield objectivity.

Cordell then goes on to discuss computational criticism (aka distant reading), where "computation is invoked as a solution to problems of will that are quite familiar from decades of humanistic scholarship." Computational critics

might argue that methods such as distant reading or macroanalysis seek to bypass the human will that constructed such canons through a kind of mechanical objectivity. While human beings choose what to focus on for all kinds of reasons, many of them suspect, the computer will look for patterns unencumbered by any of those reasons. The machine is less susceptible to the social, political, or identity manipulations of canon formation.

Interesting stuff. I've got two comments:

1) Consider one of my touchstone passages by Sydney Lamb, a linguist of Chomsky’s generation but of a very different intellectual temperament. Lamb cut his intellectual teeth on computer models of language processes and was concerned about the neural plausibility of such models. In his major systematic statement, Pathways of the Brain: The Neurocognitive Basis of Language (John Benjamins, 1999), he remarked on the importance of visual notation (p. 274): “... it is precisely because we are talking about ordinary language that we need to adopt a notation as different from ordinary language as possible, to keep us from getting lost in confusion between the object of description and the means of description.” That is, we need the visual notation in order to objectify language mechanisms.

Note that I think of objectification (in the sense immediately above) as a prerequisite for objectivity, but it is by no means a guarantee of it. That requires empirical evidence. A computer model will give us objectification, but no more.

2) Tyler Cowen has an interesting and wide-ranging interview with Jill Lepore in which she notes that Frederick Douglass was the most widely photographed man of nineteenth-century America: "In the 1860s, he writes all these essays about photography in which he argues that photography is the most democratic art. And he means portrait photography. And that no white man will ever make a true likeness of a black man because he’s been represented in caricature — the kind of runaway slave ad with the guy, the little figure, silhouette of the black figure carrying a sack."