About Me

I am Professor of Digital Humanities at the University of Glasgow and Theme Leader Fellow for the 'Digital Transformations' strategic theme of the Arts and Humanities Research Council. I tweet as @ajprescott.

This blog is a riff on digital humanities. A riff is a repeated phrase in music, used by analogy to describe an improvisation or commentary. In the 16th century, the word 'riff' meant a rift; Speed describes riffs in the earth shooting out flames. The poet Jeffrey Robinson points out that riff perhaps derives from riffle, to make rough.

Maybe we need to explore these other meanings of riff in thinking about digital humanities, and seek out rough and broken ground in the digital terrain.

5 July 2012

Making the Digital Human: Anxieties, Possibilities, Challenges

This lecture was given to the Digital Humanities Summer School, Oxford University, 6 July 2012.

During my time in charge of the stunning
Founders’ Library at St David’s College Lampeter in Wales, one volume which particularly
fascinated me was this early thirteenth century theological manuscript, the
oldest in the library. When George Borrow visited Lampeter in 1854, he was told
that the leaves of this manuscript were stained with the blood of monks
slaughtered at the time of the Reformation. The story of the monks’ blood is
apocryphal, but this manuscript is remarkable in other ways, because it is an
early manuscript of Peter of Capua’s Distinctiones
Theologicae. The collections of biblical extracts known as distinctiones compiled by Peter the Chanter,
Peter of Capua and others represent a key moment in human history, because they
are among the earliest experiments in alphabetization. Collections of biblical
extracts in alphabetical form enabled preachers more readily to locate relevant
texts. Contemporaries expressed amazement at the richness of innovative references
in the sermons of preachers who made use of this remarkable new tool. These
manuscripts of the distinctiones
were, as Richard and Mary Rouse have pointed out, the direct ancestor of all
later alphabetical and searchable tools.

The idea that texts could be arbitrarily
arranged according to an abstract system such as the letters of the alphabet
was a startling one in the middle ages, which had previously sought in
arranging texts to illustrate their relationship to the natural order. But the distinctiones showed the advantages of
more abstract methods, and they paved the way for the first concordance to the
scriptures, which was compiled under the supervision of the Dominican Hugh of
St Cher between 1235 and 1249 at the Dominican monastery of St Jacques in
Paris. This is a manuscript of the first verbal concordance from St Jacques.
The creation of this concordance, which organized every word in the bible
alphabetically, was one of the greatest-ever feats of information engineering.
It is said that about 500 Dominicans worked on compiling the concordance. The
organization of the project was almost industrial in its scale and conception,
with each Dominican assigned blocks of letters for indexing. The idea that a sacred
text like the Bible could be approached in such an abstract and arbitrary
fashion was revolutionary. Not only was the creation of the concordance a great
technical and intellectual advance, but it implied a change in the relationship
between man, text and God. The development of alphabetical tools changed the
way people behaved and thought. Previously, memory had been the key attribute
used in engaging with and making accessible the Bible. With these new alphabetical
tools, the cultivation of memory became less important and it was the ability
to manipulate these new knowledge systems which counted. The distinctiones and concordances altered
the way in which man explored his relationship with God; they changed
conceptions of what it meant to be human.

In 1875, the librarians at the British
Museum were sorting through duplicate books prior to disposing of them. To
their surprise, they found among the refuse a manuscript which had been
acquired by Sir Hans Sloane, the founder of the British Museum, and was among
his greatest treasures. This volume contained William Harvey’s notes for the course
of public lectures in 1616 in which he first described the circulation of the
blood. Harvey’s discovery of the circulation of the blood was another moment
when understanding of what it meant to be human was radically changed. Harvey
portrayed a world in which the human heart seemed no more than a pump, so that
the body started to sound like a machine. As Allison Muri has discussed in her
fascinating study, The Enlightenment
Cyborg, Harvey’s discovery was to usher in from the end of the seventeenth
century a vigorous debate about the extent to which the human is a machine and
whether machines could become human.

It is possible to interpret the history of
much science and technology as one of constant renegotiation of our
understanding of the nature of being human and of the place of the human in the
wider universe. When Edmond Halley calculated the dates of astronomical events
which he knew he would never see, this raised many issues about the wider place
of the human in the universe and changed human self-perception. The ferocious
objections to Jenner’s use of vaccination against smallpox were largely due to
his introduction of animal matter into the human bloodstream. Likewise, industrialization
fundamentally reshaped many aspects of human life and behaviour: Wordsworth
portrays factory workers as having been fundamentally dehumanized and turned
into machines.

In 1948, Claude Shannon’s landmark paper A Mathematical Theory of Communication
established many of the fundamentals of digital theories of communication and
introduced the concept of the bit as a unit of measurement of information. Shannon
calculated that a wheel used in an adding machine comprised three bits. Single-spaced
typing represented 10³ bits. Shannon considered that the
genetic constitution of man represented 10⁵ bits. With the decoding
of the human genome, the reduction of humanity to bits and bytes implicit in
Shannon’s calculation seems complete. It seems that this reengineering of our
understanding of the human is daily assuming greater speed and depth. In her
celebrated cyborg manifesto of 1985, Donna Haraway declared that ‘we are all chimeras, theorized and fabricated hybrids of
machine and organism; in short, we are cyborgs’. This ushered in the idea that
we are post-human – that is to say, that the Enlightenment understanding of the
relationship between body and mind has ceased to be relevant as a result of
technological advances. Exactly what our post-human condition might be is of
course not clear, but it is clearly very different to the understanding that
say Halley might have had of his position in the universe.
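Shannon’s unit is easy to make concrete: a source with N equally likely states carries log₂N bits of information, which is why a ten-position adding-machine wheel comes out at roughly three bits. A minimal sketch (the helper name is mine, not Shannon’s):

```python
import math

def bits(n_states: int) -> float:
    """Bits needed to distinguish n_states equally likely outcomes: log2(N)."""
    return math.log2(n_states)

# A decimal wheel in an adding machine has ten stable positions:
print(round(bits(10), 2))  # → 3.32, Shannon's rough "three bits"
# A binary on-off signal is the unit itself:
print(bits(2))             # → 1.0
```

On the same logic, Shannon’s 10³ bits for a typed page and 10⁵ bits for the human genetic constitution are order-of-magnitude estimates of the number of equally likely alternatives each could encode.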

I don’t here want to venture into a
complex area of critical theory which I am ill equipped to discuss. In
considering the implications of the post-human, it is better to refer to the
works of much larger intellects than mine, such as particularly Katherine
Hayles. The formulation post-human (first recorded in 1888) is a deliberately provocative
one. It does not merely mean that humans will somehow be pushed aside by
machines – this is an oversimplistic and perhaps philosophically impossible
notion. The term post-human rather suggests that our sense of what it is to be
human has changed – as Katherine Hayles puts it, the post-human is a state of
mind, a realization that mankind has finally understood that it is definitely
not the centre of the universe. My concern here is to consider the implications
of this post-human state of mind for our understanding and practice of the
digital humanities. Although the debates about what the digital humanities are
have ranged far and wide, the focus of the discussion has mainly been on the
digital side of the equation. There has been little discussion of what we mean
by the humanities. The orthodox view of the humanities which prevailed when I
was a young man was best summarized by the American literary critic Ronald
Crane in his 1967 book The Idea of the
Humanities. For Crane, the humanities was first and foremost the study of
human achievement. Crane described how human beings (of course, chiefly men in
his view) developed languages, produced works of literature and art, and created
philosophical, scientific and historical systems. Such human achievements were
for Crane at the heart of the study of the humanities.

Since Crane wrote, the idea that the
humanities should explore and celebrate mankind’s achievements has been progressively
challenged. The human has ceased to be the exclusive focus of the humanities.
This partly reflects the impact of technology, which has become so pervasive
and so deeply integrated into everyday life that influential theorists such as
McLuhan and Kittler portray technology as displacing the human. But the
dethroning of the human also reflects wider shifts in understanding. Historians
such as Fernand Braudel have shown how human society may be shaped by deep
underlying geographical factors. Cary Wolfe and others have forcefully reminded
us that the relationship between human society and the animal and plant worlds
is complex and symbiotic, and by no means a one-way traffic. All these trends
have helped displace the human from the centre of debate.

Another assumption central to Crane’s
view of the humanities was that there is a neatly packaged cultural canon defining the heights of human achievement. This view has been
subject to sustained and justifiable attack. In a British context, for example,
Raymond Williams charged that the concept of a cultivated minority which helped
preserve civilised standards from the threat of a ‘decreated’ mass was both
arrogant and socially damaging. For Williams, ‘culture is ordinary in every
society and in every mind’. In response to these developments it has been argued
that we need to develop a post-humanities which overturns any vestiges of an
elitist view of the humanities, while also seeing the human in a more
interactive sense. Thus, Geoffrey Winthrop-Young has proposed that the post-humanities
should be characterized by a focus on technology accompanied by a critical
engagement with biological matters – a post-humanities which looks at the
interaction of climates and computers, mammals and machines, media and
microbes.

In a compelling series of recent talks and lectures, Tim Hitchcock has discussed the implications for humanities scholars
of tools like Google’s n-Gram viewer or the use of visualisations to analyse
data from corpora like the Old Bailey Proceedings. Tim forcefully argues that
the interests of humanities scholars need to shift towards interrogating and
manipulating in new ways the vast quantities of data which have now become
available. Hitchcock says that he dreams of ‘a bonfire of the disciplines’
which would release scholars from the constraints of their existing
methodologies and allow them to develop new approaches to the large datasets
now becoming available. Tim’s position is a recognizably post-human one. Tim’s
call for a bonfire of the humanities echoes the frustration expressed by Neil
Badmington in his outline of ‘Cultural Studies and the Posthumanities’.
Badmington describes how he was writing in the Humanities Building in Cardiff
University and declares ‘I wish for the destruction of this cold, grey
building. I wish for the dissolution of the departments that lie within its
walls. I wish, finally, that from the rubble would arise the Posthumanities’.

Discussion of the digital humanities frequently
gives vent to impatience with disciplinary boundaries and expresses a desire to
reshape the humanities. This has been pithily put by Mark Sample: ‘It’s all about innovation and disruption. The digital
humanities is really an insurgent humanities’. Comments such as this have
excited the ire of the eminent critic Stanley Fish who noted that little is
said of the ‘humanities’ part of the digital humanities, and asked ‘Does the
digital humanities offer new and better ways to realize traditional humanities
goals? Or does the digital humanities completely change our understanding of
what a humanities goal (and work in the humanities) might be?’ Fish’s questions
are fair ones, and are not asked often enough. Is the digital humanities
aligned with a conventional Ronald Crane view of the humanities, or do they
seek to help move us towards – as both Hitchcock’s and Sample’s comments seem
to suggest – a post-humanities?

In Britain, digital humanities centres
have recently been very active in creating directories of projects which
provide us with an overview of the current intellectual agenda of the digital
humanities in the UK. A comprehensive listing of projects is available on
arts-humanities.net, but this includes a number of commercial and other
packages not produced by digital humanities centres. In order to get a clearer
idea of what the digital humanities as formally constituted in Britain
represents, it is best to look at the directories of projects created by the
major digital humanities centres. Let’s start with my own centre at King’s College London. The type of humanities represented by the directory of projects
undertaken by the Department of Digital Humanities at King’s College is one
which would have gladdened the heart of Ronald Crane. Of the 88 content
creation projects listed, only 8 are concerned in any way with anything that
happened after 1850. The overwhelming majority – some 57 projects – deal with
subjects from before 1600, and indeed most of them are concerned with the
earliest periods, before 1100. The geographical focus of most of the projects
is on the classical world and western Europe. The figures that loom largest
are standard cultural icons: Ovid, Shakespeare, Ben Jonson, Jane Austen,
Chopin. This is an old-style humanities, dressed out in bright new clothes for
the digital age.

Oxford University has recently launched
a very impressive directory digitalhumanities@Oxford, which lists around 190
content creation projects in the humanities at the University. While Oxford
seems a little more willing to countenance modernity than King’s College, the
figures are still not impressive: about 30 of the 190 projects at Oxford are
concerned with the period after 1850. While these include some projects on
major modern themes such as the First World War archive and the Around 1968
project, the connection of other projects with the modern world is more
tangential, such as Translations of Classical Scholarship, which just happens
to extend to 1907. At Oxford, the centre of gravity of the digital humanities
is also firmly rooted in earlier periods, with about half of the projects being
concerned with the period before 1600. And again we are presented with an
extremely conservative view of the humanities, in which the classical world has
an elevated position, and names like Chaucer, Leonardo, Holinshed, John Foxe
and Jonathan Swift dominate. The smaller range of projects produced by the
Humanities Research Institute at Sheffield reflects a similar bias, with just
over half dealing with the period before 1700. Glasgow, I am pleased to say,
has by far the highest proportion of more modern projects, with almost a half
of its forty projects covering the period since 1850. However, this stronger
emphasis on more modern subjects at Glasgow doesn’t seem generally to reflect a
difference in intellectual approach – the projects are dominated by such
old-style male cultural icons as Burns, Mackintosh and Whistler.

For all the rhetoric about digital
technologies changing the humanities, the overwhelming picture presented by the
activities of digital humanities centres in Great Britain is that they are
busily engaged in turning back the intellectual clock and reinstating a view of
the humanities appropriate to the 1950s which would have gladdened the heart of
Ronald Crane. One of the great achievements of humanities scholarship in the
past fifty years is to have widened our view of culture and to have expanded
the subject matter of scholarship beyond conventional cultural icons. There is
virtually no sense of this in digital humanities as it is practiced in Britain.
If recent scholarship in the humanities has managed (in the words of Raymond
Williams) to wrest culture away from the Cambridge teashop, the digital humanities
seems intent on trying to entice culture back to the Earl Grey and scones. This
use of digital technologies to inscribe very conservative views of culture is
not restricted to digital humanities centres. Libraries and museums have
frequently seen digital technologies as a means of giving access to their so-called
‘treasures’, so that it is the elite objects rather than the everyday to which
we get access. The sort of priorities evident in the British Library are very
similar to those of digital humanities centres: the Codex Sinaiticus, Caxton
editions of Chaucer, illuminated manuscripts from the old library of the
English Royal Family, early Byzantine manuscripts, and Renaissance Festival
Books.

There are some more intellectually and
culturally imaginative projects at the British Library, such as the excellent
UK Soundmap, but significantly they do not come from the mainstream library
areas. Digital technologies have generally not enabled libraries and archives to
enhance access to concealed and hidden material in their collections, and do
not offer those outside the library fresh perspectives on their collections.
Here’s one example. As a legal deposit library, the British Library has
historically received vast quantities of printed material which it does not
have the resources to catalogue. Thousands of such items lurk under one-line ‘dump
entries’ which can be located in the printed catalogue but are paradoxically
very difficult to find in the new online ‘Explore the British Library’. This
unknown and unrecorded material in the British Library includes for example
thousands of estate agents’ prospectuses for new suburban developments in the
1930s. This material is of potentially very great cultural, historical and
local importance, but at present it is completely inaccessible. Shouldn’t the
British Library be giving a higher priority to making available documents like
these, recording an everyday culture, rather than making its so-called
‘treasures’ available in an ever-increasing range of technological forms?

I am conscious that my remarks are
based very much on Britain and of course in painting such a general picture
there are always bound to be major exceptions (I would for example suggest that
the Old Bailey Proceedings has developed a very different cultural and
intellectual agenda to the majority of British digital humanities projects).
But nevertheless I feel confident in my general charge: to judge from the
projects it produces, the digital humanities as formally constituted has been
party to a concerted attempt to reinstate an outmoded and conservative view of
the humanities. The reasons for this are complicated, and again the American
situation is different to the British one in some important respects, but in
Britain the problem is I think that the digital humanities has failed to
develop its own distinctive intellectual agendas and is still to all intents
and purposes a support service. The digital humanities in Britain has generally
emerged from information service units and has never fully escaped these
origins. Even in units which are defined as academic departments, such as my
own in King’s, the assumption generally is that the leading light in the project
will be an academic in a conventional academic department. The role of the
digital humanities specialists in constructing this project is always at root a
support one. We try and suggest that we are collaborating in new ways, but at
the end of the day a unit like that at King’s is simply an XML factory for
projects led by other researchers. We are interdisciplinary, in that we work
with different departments, but so do other professional services. Departments
like ours can only keep people in work if we constantly secure funding for new
research projects. So we are sitting ducks – if a good academic has a bright
idea for a project, it is difficult to say no, because otherwise someone might
be out of a job. But this means that intellectually, the digital humanities is
always reactive. Above all, it means that it is vulnerable to those subjects,
like classics or medieval studies, who are anxious about their continued relevance
and funding, who are desperate to demonstrate that their subjects can be
switched on, up to date and digital. The digital humanities has become caught up
in a form of project dependency which will eventually kill it unless it can be
weaned off the collaborative drug.

Now I am a medievalist by training, and,
recovering recently from my broken leg, I realized that there is nothing I
would like now to do as much as spend my time using the remarkable online archive of medieval legal records created by Robert Palmer in Texas. But I
also subscribe strongly to a point of view which sees Super Mario or Coronation
Street or Shrek as just as culturally interesting and significant as Ovid and
Chaucer. It is an article of faith for me that YouTube is just as worthy of
scholarly examination as an illuminated manuscript. One of the stimulating
things about working somewhere like the British Library is that it brings home
just how many amazing forms culture takes. On the shelves of the British
Library, you regularly encounter an ancient potsherd with writing by an Ethiopian merchant
next to a Regency laundry list underneath an Aztec picture manuscript and just
across the corridor from a Fats Waller LP. One of the exciting things about
digital cultures is that they give us access to such an eclectic, boundary-crossing
view of culture, and if our digital humanities fails to embrace such an
inclusive and all-embracing view of culture and of the humanities, then there
will always be a disjunction between the digital humanities and the digital
world it professes to inhabit. But our academic collaborators in classics or
history or even literature will want to keep us close to hand and prevent us
wandering away down such paths. Until we seize control of our own intellectual
agendas, the digital humanities are doomed to be – at best – no more than an
ancillary discipline (the term frequently applied in the past to paleography
and bibliography).

Our stress on collaboration and
interdisciplinarity is our worst enemy. I take pride in having been returned
for three different panels for research assessment exercises, so I feel that I
have really committed personally to interdisciplinarity. However, as far as the
digital humanities are concerned, interdisciplinarity is just a cover for the lack
of a distinctive intellectual agenda. We rarely assemble truly
interdisciplinary teams – Tim Hitchcock’s current collaboration with social
scientists and mathematicians is an exception which proves the rule. Similarly,
team working has become routine with the establishment of research council
funding in the humanities. We are not unusual because we work in teams – it is
the lone scholar which is more of a rarity nowadays. Everyone claims to be
interdisciplinary today, so for the digital humanities to claim this as one of its
distinctive characteristics is to claim nothing.

Another major obstacle preventing the
digital humanities developing its own scholarly identity is our interest in
method. If we focus on modelling methods used by other scholars, we will simply
never develop new methods of our own. The idea – at the heart of a lot of
thinking about methods, models and scholarly primitives - that a synthesis could
be developed from these methods to produce a sort of alchemical essence of
scholarship is absurd. If we truly believe that digital technologies can be
potentially transformative, the only way of achieving that is by forgetting the
aging rhetoric about interdisciplinarity and collaboration, and starting to do
our own scholarship, digitally. A lot of this will be ad hoc, will pay little
attention to standards, won’t be seeking to produce a service, and won’t worry
about sustainability. It will be experimental.

The starting point is to start saying no to other people’s projects if they don’t enthuse us. Everyone now
accepts that digital technology is changing scholarship. We don’t need to
convince them and don’t need to embrace as a convert every humanities academic
who thinks that a computer might help. What we need more urgently to do is to
develop our own projects that are innovative, inspiring, and different, rather
than endlessly cranking up what Torsten Reimer has called the digital
photocopier. We might start by seeking closer contact with our colleagues in
Cultural and Media Studies. There is a huge body of scholarship on digital
cultures with which we engage only patchily and which offers us powerful
critical frameworks in articulating our own scholarly programme. One lesson which immediately emerges from
dipping a toe into this burgeoning scholarship is that those of us in the
digital humanities need to engage more with the born digital. Humanities scholars are increasingly studying
the digital, yet the digital humanities (paradoxically) does not get much
involved in this discussion – the huge preponderance of projects concerned with
the period before 1600 is an eloquent declaration that British digital
humanities is mostly not very interested in what is currently happening on the
internet. Again, we might link this to the way in which the digital humanities
has become annexed by a very conservative view of the nature of humanities
scholarship – digital humanities practitioners have too often seen their role
as being responsible for shaping on-line culture and for ensuring the provision
of suitably high-brow material. But this is a futile enterprise as the culture
of the web has exploded. The internet has become a supreme expression of how
culture is ordinary and everywhere, and there is a great deal for us to
explore.

I’m sure you will have seen the videos of very young children instinctively using an iPad or iPhone, which are used to
illustrate how readily children accept the digital (or at least a tablet). But watching a
child playing with an iPad raises a host of other issues about text, record and
memory. My former colleague at Glasgow, Chris Philo, has produced some very thought-provoking papers about the methodological issues posed by recording
childhood activities such as writing and drawing. Since researchers have their own memories of childhood, and frequently of parenthood as well, they often impose those memories when recording conversations with children or encouraging children to write and draw. Correspondingly, the
children themselves are eager to please and this will shape their drawing and
writing for adult researchers. Philo raises the question of whether, given such
complex feedback loops, archives of childhood are ever feasible. He asks how we can ever accurately document childhood. He also
suggests that maybe new technology provides an answer. Can we gain a more direct insight into
childhood by recording and analyzing how a three-year old uses an iPad? Maybe
this is the sort of new digital humanities, analyzing the human intersection
with the machine, which we might pursue.

There are also increasing quantities of
born digital materials more recognizable as the conventional stuff of
humanities research. For major disruptive events such as terrorist attacks, our
information has in the past often been largely textual or produced by
professional media, so that the information is often restricted to the
immediate incident. The July 2005
bombings in London were among the first events that were recorded in a variety
of ways: apart from conventional media, there were blog reports, mobile phone images
uploaded on Flickr, SMS traffic, CCTV coverage. What is fascinating in the
reportage of July 7th is the way in which these different media
affect the way in which we can explore the nature and structure of the event.
While there are a few dramatic mobile phone images from the bombed tube trains,
the vast majority of the pictures of the July 7th bombings on Flickr
show the disruption in the streets: people trying to find their way home,
gathering anxiously to get news. The emergency services nervously try to
control the situation; normally busy streets are eerily deserted. This is a
curiously de-centered view of the event. For many people, the memory of 7th
July was one of confusion, waiting and
uncertainty. This is an aspect of such major events which often is not recorded
in conventional media, but one that we can explore here.

The chief issue which emerges from this
material on 7th July is that of memorialization, as a recent special issue of Memory Studies has
discussed. The engagement of people with the events of that day was heavily
mediated in many different ways through technology, and they also sought to
use technology to memorialize and record their experiences on that day.
Different forms of technology created different forms of memorialisation – the
mobile phone interaction (as itself memorialized through Flickr) was very
different to that in blogs or in conventional media. Moreover, the new media
also enabled older informal methods of communication and memorialization to be
recorded. Presumably on other occasions in the past, poignant handwritten
notices and posters had appeared, but generally they are not recorded. However,
the availability and cheapness of mobile phones and digital cameras means that
this distinctive type of textuality from such disruptive events has been
recorded.

While the digital and textual traces of
the July bombings provide rich material for investigating the memorialization of
major events, this does not mean that our focus needs to be restricted to the
contemporary and recent. One feature of the digital humanities should be that
we provide the historical and critical range and depth to help provide new
contexts for contemporary technologies – we understand that the internet in
some ways is the heir of the thirteenth-century concordance. We might compare the
digital and media traces of July 7th to the way in which earlier
major events such as the Fire of London or the Peasants’ Revolt of 1381 appear
in the media of the period. Major events such as these appear differently when
viewed through the lens of broadside ballads or medieval chronicles. This kind
of historical media studies is one rich area for a future digital humanities.

One major theme which would emerge from
such a study is the intersection between technology and the different types of
human memory and understanding to which we give the overall label of textuality.
The blog is used in a different way to the mobile phone which in turn is used
in a different way to the handwritten poster. These differ from printed ballads
or manuscript chronicles. A fundamental aspect of our engagement with
textuality is a materiality which should be at the heart of the digital
humanities, and which should enable us to bridge the gap between the born
digital and the medieval. Although much cultural commentary on the digital
portrays it as disembodied, flickering, volatile and elusive, digital
technology is as material as (maybe more so than) writing and printing. As Matt
Kirschenbaum has reminded us in the ‘Grammatology of the Hard Drive’ in his book Mechanisms, the
computer in the end comes down to a funny whirring thing that works much like a
gramophone. The internet is not magic; it depends on vast cables protected by
one of the great Victorian discoveries, gutta percha. At Porthcurno Beach in
Cornwall, fourteen cables linked Britain to the rest of the British Empire, and
the internet still comes ashore through cables at Porthcurno.

Katherine Hayles has described how one
of the fundamental issues in the emergence of the post-human derived from
Claude Shannon’s work on improving the quality of telegraph communication over cables
like those at Porthcurno. Shannon found that on-off signals – bits – could be retrieved more efficiently and accurately over cables, and he proposed that information should be treated as separate from the medium carrying it. He declared that the 'fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point' – in other words, communication science should strip a message down to those essentials which could be fixed in such a way that it could be reproduced at a distance. In short, information is about fixing and attempting to stabilize what are construed as the essential elements. Even at the time, there were complaints that Shannon's approach was over-formalistic and that, by ignoring issues such as meaning, it was inadequate as the basis for a theory of communication. But the practical need to improve the
quality of the cable traffic at Porthcurno prevailed. Shannon’s discoveries
form the basis of modern computing, but it by no means follows that in thinking
about the way in which textuality works we should be bound by this model. For
large parts of the humanities, our understanding of the nature of textuality
(in its broadest sense as construing images, video, sound and all other forms
of communication as well as verbal information) is deeply bound up with its
materiality. The interaction between carver and stone is important in
understanding the conventions and structure of different types of inscription.
The craft of the scribe affected the structure and content of the manuscript.
The film director is shaped by the equipment at his disposal. I write differently when I tweet from when I send an e-mail. Text technologies have a complex
interaction with textuality and thus with the whole of human understanding.
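Shannon's principle can be made concrete with a small illustration (mine, not anything in Hayles or in Shannon's paper beyond the standard entropy formula): once a message is construed as a sequence of symbols stripped of its medium, we can compute the average number of bits per symbol needed to reproduce it 'exactly or approximately' at another point.

```python
# Illustrative sketch of Shannon's view of a message as medium-independent
# bits: compute the Shannon entropy of a short text, i.e. the average number
# of bits per symbol a code would need to reproduce it at a distance.
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average bits per symbol needed to encode `message`."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

msg = "the fundamental problem of communication"
bits = shannon_entropy(msg)
print(f"{bits:.2f} bits per symbol")
print(f"about {bits * len(msg):.0f} bits for the whole message")
```

Note what the calculation deliberately ignores: everything material about the message (the hand it was written in, the cable it travelled down) and everything semantic. That is precisely the reduction Shannon's critics objected to.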

Texts are always unstable, chaotic,
messy and deceptive for a simple reason – because they are human. The only way
in which we can recover and explore this human aspect of the text is by
exploring its materiality. It will never become wholly disembodied data. We can
display information from ship’s logs in geographical form and manipulate it in
a variety of ways, but at the end of the day if the captain had bad writing,
was careless in keeping his log or got drunk for days on end, then the data
will be deceptive. We can get a much better idea of the nature of that log and
the human being behind it by exploring its material nature – were there a lot
of ink blots? Were pages ripped out? Were sections corrected? And it is by
exploring this materiality that we can start to reintegrate the human and the digital, and develop a view which transcends the post-humanities: one which, while accepting that technology changes the experience of being human, also enables us to explore in new ways how different textual objects, from manuscripts to films, from papyri to tweets, engage with humans and humanity. It is impossible to come here to Oxford and not mention the name of Don McKenzie, who taught us how historical bibliography is a means of exploring the cultural and social context of text. The mission of the digital humanities should be to bring the vision that McKenzie brought to historical bibliography to bear on the whole range of textual technologies.

And in pursuing such a new vision of
the study of digital cultures and text technologies, we need to create new
scholarly alliances and new conjunctions. I tried to suggest earlier that our
claim to be distinguished by a commitment to interdisciplinarity is a rather
empty one, and that such claims carry increasingly less weight as
interdisciplinarity becomes more widespread. But I nevertheless believe that
digital humanities is uniquely well placed to create new conjunctions between
the humanities scholar, the curator, the scientist, the librarian and the
artist. A focus on the materiality of text enhances such alliances. After the
notebook of William Harvey was rediscovered, it was noticed how badly faded it
was. The infant science of photography was used to try and enhance the damaged
pages of Harvey’s notes. Likewise, we can use new imaging and scanning
techniques to explore Harvey’s manuscript – we can do much, much more than simply digitize it, and we should be developing such projects. Similarly,
much of our evidence for understanding how those Dominican monks compiled the first concordance to the
scriptures in the thirteenth century comes from discarded manuscript fragments they
used in listing the words. We could imagine a project which imaged those
fragments and reintegrated them to understand the working methods of the
compilers of the concordance. But, in doing so, our aims should not simply be
to help breathe new life into medieval studies. We should be seeking to develop
new technologies and new science as a result of this work. We should be seeking
to provide new perspectives on the way in which technology interacts with text.
And in so doing we provide new perspectives on what it means to be human.

I thought I had finished writing this lecture,
when I read a tweet from the Science Museum, which described a new brain scanner which uses magnetic resonance imaging to detect different blood flows in the brain when a different letter of the alphabet is read. The practical application of this is that it potentially allows completely paralysed people to communicate by spelling out words, which could be read by the scanner. But it
potentially represents a huge shift in our ability to explore and investigate
the relationship between the human and text. How does the brain react to the
same letter forms in different typefaces? How do different people react to, say, a
medieval manuscript? What does this tell us about the nature of reading? An
invention like this poses questions for the humanities while also offering the
humanities huge new opportunities. It is the exploration of these new
opportunities which is the business of the digital humanities, and not
preserving antiquated and desiccated forms of scholarship.

6 comments:

I think you set up a false dichotomy between the traditional humanities study of historical, literary and other works, and the study of contemporary culture, media and digital practice. There is as you say room for both in a school of humanities, faculty of liberal arts, or department of digital humanities. But the implication that more modern subject matter is by definition less conservative is fallacious; it's possible to study late twentieth century literature or cinema and be just as theoretically conservative and obsessed with the dead-white-male canon as Ronald Crane. The study of classical inscriptions and papyri enables us to pay attention to women, non-elites, foreigners, slaves and other groups that are under-studied and under-represented in the classical canon. Date of subject is a poor indicator of level of conservatism/innovation. Or to put it another way: the humanities is defined by more than just its subject matter—we are a network of approaches, methods, ideologies and theories, as well as texts (images, multimedia experiences and collations/concordances).

Within the humanities there is a lot of conservative scholarship; areas like palaeography, philology, prosopography, art restoration/conservation, architectural archaeology, historical linguistics are core to our disciplines. They are the foundations on which more experimental and risky and sexy work is built, but if we stop doing the conservative work the foundations will crumble. The fact that some (maybe most) digital humanities is concerned with these conservative subject areas is not a sign of ill-health at all.

A digital humanities scholar (like myself) whose background discipline is one of the historical humanities subjects, and whose focus may be on using digital methods and standards to improve the study and/or dissemination of traditional subject matter, may also have a methodological interest in how digital tools and technologies affect our academic and pedagogical thinking, culture as a whole, and current political events such as protest movements. They may (and often do) write papers and propose funded projects in these areas; they may engage in outreach through various media on these subjects (as I strongly believe all academics should); or their interest may be restrained to conversations over lunch. None of these things get away from the fact that a rigorous humanities training and career has equipped them with the skills and sensibilities to engage with these subjects; they do not cease to be historians and become culture/media studies scholars as soon as they write one paper on reader response to iPad apps. (For one thing, studying media studies for many years and gaining a PhD and teaching experience in the area equips one with a whole other range of skills, overlapping but not identical to those of the historian or philologist.)

I don't disagree with your point that the humanities (and with them the digital humanities) could do with more diversity and variety in our areas as well as our methods of study. The apparent denigration of pre-21st century history and equating it with neo-Victorian canon-worship is simplistic, and I believe inaccurate. Collaboration with colleagues in the humanities is not an unhealthy dependency or addiction, it is what makes us a humanities discipline. Yes, we should be bringing our own research agendas to the projects we work on (many of us already do), and we should be building and running our own projects, and in an ideal world you should never ever work on anything that you are not enthused by. Many computer scientists and informaticians, who do not call themselves "digital humanists" or "interdisciplinary", recognize quite openly that the complex and messy research questions brought by humanities projects contribute in significant ways to both design and implementation of tools and software that are also used for quite different ends, including well-funded areas like medicine and security. I don't think that collaboration weakens them as scholars, and nor does it us.

The argument that the Digital Humanities reinstates a conservative view of the humanities relies, for its own part, on a curiously narrow definition of humanistic inquiry. The field of linguistics has played a crucial role in the development of DH, and it is hard to imagine any of Prescott's critiques applying to 'digital' research in linguistics. Linguistics always seems to have been an 'insurgent humanities,' whether it was Saussure's structural linguistics being appropriated to anthropology and literary theory or Chomsky's wrestling with the application of Shannon's information theory to human language and the consequent refining of the notion of 'human creativity.' The current 'insurgent' form of linguistics is corpus/computational linguistics, a distinctly digital form of research that built itself up as a self-conscious form of 'humanities computing'. It is impossible to say that the procedure for either developing the Brown corpus or tagging it for parts of speech was a case of "modelling methods used by other scholars." Current research built on that foundation includes that of Ian Lancashire (with the computer scientist Graeme Hirst), whose studies of dementia in novelists are situated at the intersection of clinical and corpus linguistics and literary and textual studies. How should we tell him that he needs to "forget the aging rhetoric about interdisciplinarity and collaboration, and start to do [his] own scholarship, digitally"? Prescott speaks approvingly of Tim Hitchcock's work, which itself also depends partly on decades of research in corpus linguistics.

Beyond linguistics, though, perhaps the best rebuttal for Prescott's argument is Kenneth Goldsmith's "About" page for Ubuweb. Every one of Prescott's points is addressed by Goldsmith--Ubuweb is (to quote Prescott) "ad hoc, pay[s] little attention to standards, [doesn't] seek to produce a service, and [doesn't] worry about sustainability. It [is] experimental." It clearly provides a "focus on the materiality of text." As an archive it has struggled with the technologies of restaging works in the digital domain. The periodical Aspen (http://www.ubu.com/aspen/), for example, was already a multimedia experiment issued from 1965-71; its presentation over the web becomes a thought-provoking extension of its original impetus. More important, though, is its determination to operate without the usual kinds of support--grants, advertisements, institutional budgets--that work to circumscribe DH intellectual work as Prescott sees it. Working constantly to stay outside of the usual economy, Ubuweb attains the status of pure DH project that Prescott calls for.

I will start by saying that I take this presentation in the spirit of Manfred Thaller's recent reminder that "research is not driven by harmony, but by controversial discussion" in 'Controversies around the Digital Humanities' (http://hsozkult.geschichte.hu-berlin.de/zeitschriften/ausgabe=6950). Andrew Prescott has, in my view, set out some very important challenges here for the field in the UK, which have wider implications for digital humanities at an international level, but like others, I am concerned at the over-simplistic connection made between the period of the subject matter and the perceived modernity of digital humanities research.

There are many reasons why we are having this debate about the digital humanities now, many of them driven by outside factors (the economy; funding; wider concerns about the role of the humanities) and some by the recent attention given to the field, which has led to a modest growth in academic positions (although not to the degree one might expect), but they have little, if anything, to do with the historical period under focus.

I share concerns expressed here about the perceived 'service' aspect of *some* digital humanities research, but over-polarising the debate does us no favours. Statements like "King's is simply an XML factory for projects led by other researchers" are not only unfortunate for colleagues at King's, but also those whose research involves text modelling more generally (which is not to say that all work involving XML constitutes interesting research of course). While it is true to say that issues of credit on collaborative projects in the humanities are problematic - something the Digital Humanities has been active in exploring in recent years - it is simply not true to imply that XML-related work (whether at the conceptual or development level) does not constitute independent research in its own right, or that DH researchers cannot get respect for their work from other colleagues in the humanities, as I'm sure many colleagues at King's would attest. It is clear that we need to avoid dependency on project funding and to develop distinct intellectual agendas, but all of this is not incompatible with collaboration, or indeed project work per se. There need to be more projects led by DH researchers (in addition to, rather than instead of, collaborative acts), but other humanities disciplines provide a fertile terrain without which our experimentation will struggle to germinate.

Similarly, I do not see a contradiction between carrying out our own research and using standards or considering sustainability. While I agree that much research will need to be experimental and surpass the limits of current frameworks for modelling humanities data, we lose an important advantage of digital technologies if we ignore the advances of the last few years, which have taken digital research away from the kind of black-box mentality (a different form of elitism) which I thought we were trying to avoid. And you cannot manipulate vast quantities of data, as mentioned in the article, if you do not have some common frameworks for doing so.

In summary, we need to strike off in new directions, and to ensure that we protect our intellectual identities, but we do not need to destroy all that we have built as we strive to navigate the digital humanities through fast-moving waters.
