Martin R. Delany was an African American abolitionist and political activist. His only novel, Blake, or, the Huts of America, features the runaway slave Henry Blake, who travels through the American South disseminating a “secret” that implicitly foments a slave rebellion, though the rebellion itself is never explicitly described. Although Delany sought to have the novel published as a book, it was never accepted for publication but did appear twice in serial form. Chapters 1-23 and 29-31 originally appeared in the 1859 Anglo-African Magazine. The complete novel was discovered years later in the 1861-62 Anglo-African Weekly. These issues revealed a new setting of the previously discovered 26 chapters, as well as an additional 44 chapters. Although Floyd J. Miller produced an annotated edition of Blake in 1970 for Beacon Press, publishing all 70 chapters, in preparing his edition he did not collate the two serializations of the first 26 to see whether Delany had revised between 1859 and 1861. These chapters lay uncollated until I took them up this past November as a project for Jerome McGann’s “American Historiography” course.

My materials included the Arno Press reprint of Volume I of the 1859 Anglo-African Magazine and scans of the original 1861-62 Weekly broadsides. I uploaded the texts chapter by chapter into Juxta Commons, collated them, and then used Juxta’s Edition Starter feature to produce HTML files, which I linked together into a navigable website hosted on GitHub. For more on the process of using Juxta for this project, see last week’s post. The purpose of my post today is to highlight a few of the changes my Juxta collations revealed.

The collations revealed many variants between the two serializations. Delany made scattered substantive changes to the work, usually changing one or two words or slightly rearranging a sentence for clarity. The many small changes reflect an author interested in smoothing out and correcting his work but not in altering its original meaning. Delany’s letter to William Lloyd Garrison on 19 February 1859 supports this impression:

The three chapters published in the first number of the Magazine, were full of errors, in consequence of the hurried manner in which it was got out, and the whole will be carefully revised and corrected as far as published up to the time, should the work be taken up by a publisher. [1]

Delany was at this point inquiring after a publisher to print the novel in book form, but it is likely that in anticipation of the next serialization in 1861, Delany would have wished to make these revisions and corrections.

Most of these changes serve to clarify Delany’s meaning or improve the language, but many also render the depictions more powerful and striking. In one of the most shocking chapters of the novel, an abused slave boy is forced to perform tricks for his master’s guests. He is described in 59 as “the miserable child” and in 61 as “the miserable child of pity.” In 59, he is made to caper about on all fours “like an animal,” but in 61 “like a brute.” The two words technically mean the same thing, but “brute” sounds more abrasive. A peculiar change Delany makes is in the physical description of the boy: in 59 he has “protruding upper teeth,” while in 61 he has “protruding under teeth.” Provided this is not a misreading on the part of the compositor, I believe the purpose of the change is to increase the shock of the scene and make it all the more jarring, as an overbite is much more common than an underbite. Delany does not change the overall meaning of the work, but he does make this instance of abuse more dramatic.

Another characteristic instance of increasing the power of a scene occurs in Chapter XXIX: when the runaway slaves evade capture, their white pursuers stand “grinding their teeth” in 59 but “gnashing their teeth” in 61. The word “gnashing” increases the sense of angst as well as lending a Biblical quality to the scene, invoking the image of those in Hell and their “weeping and gnashing of teeth” (Matthew 8:12, 13:42, 25:30; Luke 13:28; Acts 7:54—to name a few). I mention this particular revision because it is an example of the Biblical language which pervades the entire novel, and it is also one of several instances where Delany revises in favor of increasing this Biblical quality. In Blake, Henry is frequently depicted as a Christ figure, a Messiah who will come to bring liberation to the slaves. As Henry travels the South, slave families welcome him into their homes, expecting his arrival without having received any warning of his coming. One couple says they knew Henry by a mysterious “mahk” [“mark”], and descriptions of slaves receiving the “secret” are fraught with harvest images, heightening the millenarian impression. Before running away from his master, Henry tells his mother- and father-in-law, Mammy Judy and Daddy Joe, “time with me is precious” in 59 and “the time with me is precious” in 61. “The” makes Henry’s statement seem less like a declaration of being in a hurry and more like a hint that his time with Judy and Joe is a transient and preset period of time—much like Jesus’ time on Earth, which he knows will end, first in the Crucifixion and finally in the Ascension.

The last issue of Blake in the Weekly shows Henry and the Cuban blacks and mulattoes enraged at the murder of the poet Placido, and the final sentence of the novel is suggestive of the advent of rebellion: “Woe be unto those devils of whites, I say!” However, the absence of any further issues raises the question: did Delany write an ending featuring an actual rebellion, or did he finish on that final powerful but inconclusive imprecation? One theory we discussed in Professor McGann’s course is that an actual rebellion is never intended, but rather that the real revolution is to be an intellectual one, occurring through the enlightenment of the slaves. Throughout the novel, Henry tells his fellow slaves, “Stand still, and see the salvation,” and the emphasis on seeing suggests the focus to be enlightenment rather than warfare.

One of the most intriguing groups of changes is that of dialect orthography, and these changes—although far from settling it—can inform this question of intellectual versus martial revolution. In Blake, the spelling of dialect is used to show the gap between “intelligent” and “unintelligent” characters, both black and white. Henry’s speech is perfectly spelt standard English, and it is he who first recognizes his right to freedom, escapes from his bondage, and disseminates the “secret” to others, which may well be that same recognition of their own right to freedom. The slaves’ non-standard dialect becomes a marker of their intellectual bondage, and the pattern of revision enforces this idea. In 1861, all slave dialect is re-spelt to be easier to read—the original orthography being at times prohibitive to readers’ comprehension. Below are some examples of words that are consistently revised in the slaves’ dialect:

Without evidence in Delany’s correspondence with his editor, it is impossible to know who was responsible for these changes. I strongly suspect that, even if prompted by his editor to improve readability, Delany was involved in making these changes, as they ultimately support the trajectory of the novel. Although the orthography is adjusted, other variants reveal the intent to retain the language divide between the educated and uneducated slaves. This is accomplished through the insertion of subject-verb disagreement in dialect which would otherwise sound too standard due to the improved spelling:

I hate him so! 59] I hates ~ 61
while we wuh at de suppeh 59] ~we was~ 61
Dat’s what I like to know 59] ~I likes~ 61
yeh keep 59] yeh keeps 61

Clearly, those responsible for these revisions wanted Blake to be readable but also wanted to keep the speech gap intact.
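Juxta detects these word-level substitutions automatically, but the underlying comparison can be sketched in a few lines of Python. This is only an illustration of the general technique, not Juxta's actual algorithm; the witness readings are the ones quoted above, and `difflib` is Python's standard sequence-comparison module.

```python
# A minimal sketch of word-level collation: tokenize two witnesses
# and report wherever their word sequences diverge.
from difflib import SequenceMatcher

def word_variants(witness_a, witness_b):
    """Return (reading_in_a, reading_in_b) pairs for each point of variation."""
    a, b = witness_a.split(), witness_b.split()
    matcher = SequenceMatcher(None, a, b)
    return [(" ".join(a[i1:i2]), " ".join(b[j1:j2]))
            for tag, i1, i2, j1, j2 in matcher.get_opcodes()
            if tag != "equal"]

print(word_variants("I hate him so!", "I hates him so!"))
# [('hate', 'hates')]
```

An apparatus entry like “I hate him so! 59] I hates ~ 61” is essentially a human-readable rendering of such a pair, keyed to its location in the text.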

Further evidence of intentional dialect revision can be found in the language of the poor whites, which, unlike that of the slaves, is frequently altered to make these characters sound more backward and less intelligent. For instance, while in 59, the slave trader Harris calls Henry a “fellow,” the word “feller” is consistently substituted in 61. Below are other similar changes:

The orthographical revisions are intriguing to me because they show an increased desire as of 1861 to make it clear in Blake that enlightenment is not racially determined but may be acquired–or not–by all, regardless of race. Enlightenment is critical to the slaves’ rebellion, whether this rebellion be a literal uprising or the realization of the slaves’ equality, and the compromising of the whites’ own enlightenment suggests an even greater certainty of the slaves’ eventual success.

When Blake was serialized in 1859, the Civil War loomed on the horizon. When the Anglo-African Weekly set the first issue of the new serialization (26 Nov. 1861), hostilities had long since begun, and a potential end to the age of slavery was in sight. Blake is full of apocalyptic images, and the revisions revealed in Juxta suggest that when the apocalypse comes, the newly enlightened slaves will be as well qualified as the whites, or better, to assume their new place in society. Given this, actual rebellion becomes a moot point, making it all the more likely that Delany never penned the much anticipated uprising.

These reflections constitute an initial analysis of the results of collating this fascinating work. I will continue my research and appreciate any comments you may have. Those interested may access the complete results at Blake, or, the Huts of America: a collation.

Stephanie Kingsley is a second-year English MA student specializing in 19th-century American literature, textual studies, and digital humanities. She is one of this year’s Praxis Fellows [see Praxis blogs] and Rare Book School Fellows. For more information, visit http://stephanie-kingsley.github.io/.

I have had the opportunity to use Juxta Commons for several editorial projects, and while taking a breath between a Juxta-intensive term project last semester and my Juxta-intensive MA thesis this semester, I would like to offer a few thoughts on Juxta as an editorial tool.

For my term project for Jerome McGann’s American Historiography class last semester, I conducted a collation of Martin R. Delany’s novel, Blake, or, The Huts of America, one of the earliest African American novels published in the United States. Little did I know that my exploration would conduct me into an adventure as much technological as textual, but when Professor McGann recommended I use Juxta for conducting the collation and displaying the results, that is exactly what happened. I input my texts into Juxta Commons, collated them, and produced HTML texts of the individual chapters, each with an apparatus of textual variants, using Juxta’s Edition Starter. I linked these HTML files together into an easily navigable website to present the results to Professor McGann. I’ll be posting on the intriguing results themselves next week, but in the meantime, they can also be viewed on the website I constructed, hosted by GitHub: Blake Project home.

Juxta helped me enormously in this project. First, it was incredibly useful in helping me clean up my texts. My collation involved an 1859 serialization of the novel, and another serialization in 1861-62. The first, I was able to digitize using OCR; the second, I had to transcribe myself. Anyone who has done OCR work knows that every minute of scanning leads to (in my case) an average of five or ten minutes of cleaning up OCR errors. I also had my own transcription errors to catch and correct. By checking Juxta’s highlighted variants, I was able to—relatively quickly—fix the errors and produce reliable texts. Secondly, once collated, I had the results stored in Juxta Commons; I did not have to write down in a collation chart every variant to avoid losing that information, as I would if I were machine- or sight-collating. Juxta’s heat-map display allows the editor to see variants in-line, as well, which saves an immense amount of time when it comes to analyzing results: you do not have to reference page and line numbers to see the context of the variants. Lastly, Juxta enabled me to organize a large amount of text in individual collation sets—one for each chapter. I was able to jump between chapters and view their variants easily.

As helpful as Juxta was, however, I caution all those new to digital collation that no tool can perfectly collate or create an apparatus from an imperfect text. In this respect, there is still no replacement for human discretion—which is, ultimately, a good thing. For instance, the Juxta user can turn off punctuation variants in the display; but if the user does want punctuation shown and the punctuation is not spaced identically in both witnesses, the program highlights the anomalous spacing. Thus, when 59 reads

‘ Henry, wat…

and 61 reads

‘Henry, wat…

Juxta will show that punctuation spacing as a variant, while the human editor knows it is the result of typesetting idiosyncrasies rather than a meaningful variant. Such variants can carry over into the Juxta Edition Builder, as well, resulting in meaningless apparatus entries. For these reasons, you must make your texts perfect to get a perfect Juxta heat map and especially before using Edition Starter; otherwise, you’ll need to fix the spacing in Juxta and output another apparatus, or edit the text or HTML files to remove undesirable entries.
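One pragmatic way to head off such spurious variants is to normalize spacing in the transcriptions before importing them. The sketch below is my own assumption about a sensible cleanup pass, not a Juxta feature; it handles only the curly opening quotes that appear in these particular witnesses.

```python
import re

def normalize_spacing(text):
    """Smooth out typesetting idiosyncrasies that would otherwise
    register as variants in collation."""
    text = re.sub(r"\s+", " ", text.strip())   # collapse runs of whitespace
    text = re.sub(r"([‘“])\s+", r"\1", text)   # "‘ Henry" -> "‘Henry"
    return text

print(normalize_spacing("‘ Henry, wat"))
# ‘Henry, wat
```

A pass like this will not catch everything—split contractions such as “needn t” still need eyes on the text—but it removes the most mechanical class of false positives before collation.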

Spacing issues can also result in disjointed apparatus entries, as occurred in my apparatus for Chapter XI in the case of the contraction needn’t. Notice how because of the spacing in needn t and need nt, Juxta recognized the two parts of the contraction as two separate variants (lines 130 and 131):

This one variant was broken into two apparatus entries because Juxta recognized it as two words. There is really no way of rectifying this problem except by checking and editing the text and HTML apparatuses after the fact.

I mean simply to caution scholars going into this sort of work so that they can better estimate the time required for digital collation. This being my first major digital collation project, I averaged about two hours per chapter (chapters ranging between 1000 and 4000 words each) to transcribe the 61-62 text and then collate both witnesses in Juxta. I then needed an extra one or two hours per chapter to correct OCR and transcription errors.

While it did take me time to clean up the digital texts so that Juxta could do its job most efficiently, in the end, Juxta certainly saved me time—time I would have spent keeping collation records, constructing an apparatus, and creating the HTML files (as I wanted to do a digital presentation). I would be remiss, however, if I did not recommend a few improvements and future directions.

As useful as Juxta is, it nevertheless has limitations. One difficulty I had while cleaning my texts was that I could not correct them while viewing the collation sets; I had, rather, to open the witnesses in separate windows.

The ability to edit the witnesses in the collation set directly would make correction of digitization errors much easier. This is not a serious impediment, though, and is easily dealt with in the manner I mentioned. The Juxta download does allow this in a limited capacity: the user can open a witness in the “Source” field below the collation visualization, then click “Edit” to enable editing in that screen. However, while the editing capability is turned on for the “Source,” you cannot scroll in the visualization—and so cannot navigate to the next error that may need to be corrected.

A more important limitation is the fact that the Edition Starter does not allow for the creation of eclectic texts, texts constructed with readings from multiple witnesses; rather, the user can only select one witness as the “base text,” and all readings in the edition are from that base text.

Most scholarly editors, however, likely will need to adopt readings from different witnesses at some point in the preparation of their editions. Juxta’s developers need to mastermind a way of selecting which reading to adopt per variant; selected readings would then be adopted in the text in Edition Starter. For the sake of visualizing, I did some screenshot melding in Paint of what this function might look like:

Currently, an editor wishing to use the Edition Starter to construct an edition would need to select either the copy-text or the text with the most adopted readings as the base text. The editor would then need to adopt readings from other witnesses by editing the output DOCX or HTML files. I do not know the intricacies of the code which runs Juxta. I looked at it on GitHub, but, alas! my very elementary coding knowledge was completely inadequate to the task. I intend to delve deeper as my expertise improves, and in the meantime, I encourage all the truly code-savvy scholars out there to look at the code and consider this problem. In my opinion, this is the one hurdle which, once overcome, would make Juxta the optimal choice as an edition-preparation tool—not just a collation tool. Another feature which would be fantastic to include eventually would be a way of digitally categorizing variants: accidental versus substantive; printer errors, editor corrections, or author revisions; etc. Then, an option to adopt all substantives from text A, for instance, would—perhaps—leave nothing to be desired by the digitally inclined textual editor. I am excited about Juxta. I am amazed by what it can do and exhilarated by what it may yet be capable of, and taking its limitations with its vast benefits, I will continue to use it for all future editorial projects.

Stephanie Kingsley is a second-year English MA student specializing in 19th-century American literature, textual studies, and digital humanities. She is one of this year’s Praxis Fellows [see Praxis blogs] and Rare Book School Fellows. For more information, visit http://stephanie-kingsley.github.io/, and remember to watch for Ms. Kingsley’s post next week on the results of her collation of Delany’s Blake.

Director of NINES Andrew Stauffer and Project Manager Dana Wheeles will be joining the UVa Scholars’ Lab today to discuss Juxta Commons and possible uses for the software in the classroom. Below is a list of sets included in the demo to illustrate the numerous ways Juxta could draw students’ attention to textual analysis and digital humanities.

Every now and then I like to browse the project list at DHCommons.org, just to get an idea of what kind of work is being done in digital scholarship around the world. This really paid off recently, when I stumbled upon Digital Thoreau, an engaging and well-structured site created by a group from SUNY-Geneseo. This project centers on a TEI-encoded edition of Walden, which will, to quote their mission statement, “be enriched by annotations, links, images, and social tools that will enable users to create conversations around the text.” I highly recommend that anyone interested in text encoding take a look at their genetic text demo of “Solitude,” visualized using the Versioning Machine.

What really caught my attention, however, is that they freely offer a toolkit of materials from their project, including XML documents marked up in TEI. This allowed me to take a closer look at how they encoded the text featured in the demo, and try visualizing it, myself.

This embed shows the same text featured on the Digital Thoreau site, now visualized in Juxta Commons. It is possible to import a file encoded in TEI Parallel Segmentation directly into Juxta Commons, and the software will immediately break down the file into its constituent witnesses (see this example of their base witness from Princeton) and visualize them as a comparison set.

Uploading Parallel Segmentation

Parallel Segmentation file added and processed

Once you’ve successfully added the file to your account, you have access to the heat map visualization (where changes are highlighted blue on the chosen base text), the side-by-side option, and a histogram to give you a global view of the differences between the texts in the set. In this way, the Juxta Commons R&D team hopes to enable the use of our software in concert with other open-source tools.
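For the curious, the idea behind parallel segmentation is simple: the witnesses share one text stream, and each point of variation is wrapped in an `app` element containing one `rdg` (reading) per witness, so any single witness can be reconstituted by walking the tree. The fragment and witness ids below are hypothetical, stripped-down examples (real TEI files use the TEI namespace and far richer markup), but they illustrate the mechanism.

```python
# A rough sketch of extracting one witness from a TEI
# parallel-segmentation fragment using the standard library.
import xml.etree.ElementTree as ET

tei = """<p>their white pursuers stood
<app>
  <rdg wit="#w1859">grinding</rdg>
  <rdg wit="#w1861">gnashing</rdg>
</app> their teeth</p>"""

def extract_witness(xml_text, wit):
    """Rebuild the running text of a single witness from the shared stream."""
    root = ET.fromstring(xml_text)
    parts = [root.text or ""]
    for app in root.iter("app"):
        for rdg in app.iter("rdg"):
            if rdg.get("wit") == wit:      # keep only this witness's reading
                parts.append(rdg.text or "")
        parts.append(app.tail or "")       # text following the variation
    return " ".join("".join(parts).split())

print(extract_witness(tei, "#w1861"))
# their white pursuers stood gnashing their teeth
```

Importing such a file into Juxta Commons effectively runs this separation in reverse order of my sketch: one encoded stream in, a full comparison set of witnesses out.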

I should also note that Juxta Commons allows the user to export any other sets they have created as a parallel-segmented file. This is a great feature for starting an edition of your own, but it in no way includes the complexity of markup one would see in files generated by a rigorous project like Digital Thoreau. We like to think of the Parallel Segmentation export and the new experimental edition builder export as building blocks for future scholarly editions.

Many thanks to the team at Digital Thoreau for allowing us to make use of their scholarship!

What do you get when you collate as many free Google versions of the same text as you can find? Those familiar with Google Books may suggest that you’ll quickly discover rampant OCR errors, or perhaps some truly astounding misinformation in the metadata fields. In my experiment using Juxta Commons to explore the versions of Alfred, Lord Tennyson’s long poem, The Princess, available online, I encountered my fill of both of these issues. But I also discovered a number of interesting textual variations – ones that led me to a deeper study of the poem’s publication history.

In the process of testing the efficacy of the software, I believe I stumbled upon a useful experiment that may prove helpful in the classroom: a new way to introduce students to textual scholarship, to the value of metadata, and to the modes of inquiry made possible by the digital humanities.

Many of the editions of Tennyson’s works offered in Google Books are modern, or modern reprints, and are thus available only in snippet view. Paging through the results, I chose six versions of The Princess that were available in e-book form, and I copied and pasted the text into the text editor in Juxta Commons*. Because the poem is relatively long, I chose to focus solely on its Prologue – not only to expedite the process of collation, but to see if one excerpt could give a more global view of changes to the poem across editions. Another important step was to click on the orange “i” button at the upper left of the screen to save original URLs and basic metadata about the object for future reference.

This step turned out to be invaluable, once I realized that the publication information offered on the title pages of the scanned documents didn’t always agree with the metadata offered by Google (see this example).

Once the set was complete and collated, I noticed right away that significant passages were missing in the 1863 and 1900 editions of the poem.

Stepping chronologically through the set using the witness visibility feature (the eye icons on the left) showed no apparent timeline for this change (why would it be missing in 1863, present in 1866, 1872, 1875, and excised again in 1900?). The answer could only be found in a robust explanation of the revision and publication history of Tennyson’s work.

Without going too deeply into the reasons behind this set of differences (I’ll refer you to Christopher Ricks’ selected critical edition of Tennyson, if you’re interested), The Princess happens to be one of the most revised long poems of Tennyson’s career. The Prologue was expanded in the 5th edition (published in 1853), and it is that version that is generally considered the standard reading text today. However, as we have seen from the Google Books on offer, even in 1900, editions were offered that were based on earlier versions of the poem. Could the fact that both versions missing the stanzas are American editions be important?

I invite Tennyson scholars to help me continue to piece together this puzzle. However, I believe that in this one example we have seen just how powerful Juxta Commons can be for delving into seemingly innocuous editions of one of Tennyson’s poems and exposing a myriad of possible topics of study. Next time you’re wondering just *which* version of a text you’re looking at on Google Books, I hope you’ll consider Juxta Commons a good place to start.

* Please note that Juxta Commons can accept some e-book formats, but those offered by Google Books have image information only, and the text cannot be extracted.

Guest post by NINES Fellow, Emma Schlosser. The full set is embedded at the end of this post.

Juxta Commons now offers a platform by which we can study the evolution of the most visited encyclopedia on the web—Wikipedia! The Wikipedia API feature allows users to easily collate variants that reveal changes made to articles, a useful tool when tracking the development of current events. In light of President Obama’s recent nomination of Senator John Kerry to be Secretary of State following Susan Rice’s withdrawal of her bid for the position, I decided to trace Wikipedia’s article on the September 11th 2012 attack on the U.S. consulate in Benghazi. The attack resulted in the tragic deaths of four Americans, including Ambassador Christopher Stevens.

I prepared thirteen witnesses taken from the course of the article’s history on Wikipedia, stemming back to September 14th, 2012. In selecting the variants, I chose to focus on information most pertinent to the role of Rice, who is U.S. Ambassador to the UN. These witnesses for the most part fall under the article’s “U.S. Government Response” section. As various editors added more information regarding the attack and its aftereffects, I noted that on September 22nd a section had been added to the article entitled “Criticism of U.S. Government Response.”

In a September 16th version of the article, an editor adds that the U.S. government has begun to doubt whether Innocence of Muslims, a low-quality, poorly produced film circulated on YouTube, was in fact behind the attack.

By September 22nd, an entire paragraph had been added to the “U.S. Government Response” section, including quotations from Senator John McCain (R, Arizona) who decried any claim that the attack was spontaneous: “Most people don’t bring rocket-propelled grenades and heavy weapons to demonstrations. That was an act of terror.” A September 27th version reports that Susan Rice appeared on five separate news shows on the 16th, asserting that the attacks were a “spontaneous reaction to a hateful and offensive video widely disseminated throughout the Arab and Muslim world.” The 27th variant also affirms that the Benghazi attack had become a politically fueled issue during the heated presidential race.

The October 28th variant cites under the “Criticism of U.S. Government Response” section that Senator McCain specifically accused the administration of using Susan Rice to cover the true motives of the attack.

As the progression of this Wikipedia article shows, the U.S. government response to the Benghazi attack overshadowed, to some degree, the causes and nature of the attack itself. This, of course, had much to do with the then raging U.S. presidential campaign. Rice’s tangential role in the response to the Benghazi attack, as evidenced by the paucity of references to her within the article, implicitly reveals the nature of political scapegoating. Thanks to Juxta’s Wikipedia API feature it was easy for me to trace the evolution of an article on a contemporary controversy, revealing the methods by which we continually modify and interpret our understanding of current events.
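For readers who want the raw material behind such a comparison, the public MediaWiki API exposes an article's full revision history. The sketch below only builds the query URL (the article title and revision limit are illustrative, and how Juxta's own Wikipedia feature talks to the API is not something I can vouch for); fetching and parsing the JSON response is left to standard HTTP tools.

```python
# A minimal sketch of requesting revision history from the MediaWiki API.
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def revisions_query(title, limit=13):
    """Build a query URL for a page's revision ids, timestamps, and wikitext."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "ids|timestamp|content",  # which fields to return per revision
        "rvlimit": limit,                   # how many revisions to fetch
        "format": "json",
    }
    return API + "?" + urlencode(params)

url = revisions_query("2012 Benghazi attack")
# Each revision in the JSON response can then be saved as a witness
# and imported into a collation set.
```

Thirteen witnesses, like the ones used in this post, would be one such request away.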

For all those attending the Modern Language Association Conference in Boston this year, please join NINES Director Andrew Stauffer and Performant Software for a reception in the exhibit hall (Booth 717) on Saturday, January 5. We’ll be running demos of Juxta Commons and answering your questions about NINES, Juxta and digital humanities software in general.

Congratulations to Tonya Howe, the winner of our Juxta Commons sharing competition, leading up to the MLA Conference in Boston (#MLA13). Be sure to have a look at the side-by-side view of her comparison set, Legend of Good Women, Prologues A and B.

We’ll be featuring the set in the Juxta Commons gallery in the very near future, along with some of the other sets that received lots of interest in the last month.

Juxta Commons users – share your favorite comparison set today, and the set with the most views in the next month wins its creator free swag!

Create and share a collation set and get the MOST UNIQUE VIEWS in one month. You’ll win a lovely Juxta Commons commuter coffee mug, plus fame and glory. We will be keeping track via analytics, and we will announce the winner in early January.

You can share your visualizations by Twitter, Facebook, Google +, link, email and embed, so there are lots of ways to get the word out.

Remember that we are still in beta and are particularly interested in load-testing the site, so bear with us if you experience slow-downs. You can send us feedback and bug reports via our Google Group and you can find information about using the site on our User Guide.

The winner will be announced at our presentation at MLA. Happy collating!

The Juxta R&D team recently did a demo of Juxta Commons at a DH Day conference at NC State University, and one of the attendees brought the site NewsDiffs.org to our attention. It’s a great resource, tracking changes to online, “published” articles from some of the largest media outlets out there. But, while NewsDiffs brings together a bunch of different versions of these online articles, their visualizations are only helpful for two versions at a time.

As an experiment, I took several versions from this example and plugged them into Juxta Commons. When combined in the heat map, the results were truly surprising.

In this example, the newest version (captured on November 6, 2012) is the base witness, with the previous revisions made on the day of the article’s release included in the set. Just imagine: readers visiting the New York Times article at 11:15am would have read a very different set of opening paragraphs than those checking in at 11:45am.