
A Review of Kenneth M. Price’s “Electronic Scholarly Editions”

[Image: Walt Whitman]

Kenneth M. Price’s article “Electronic Scholarly Editions” addresses the advantages and disadvantages of digital publication as a medium for scholarly editions. Price is a Professor of American Literature and is deeply involved in the digital humanities. He co-directs the Walt Whitman Archive with Ed Folsom, editing Whitman’s works online, and also co-directs the Center for Digital Research in the Humanities at the University of Nebraska-Lincoln. He begins his article by acknowledging the cost and time that digital editions demand, yet states that the digital realm remains an “attractive medium” for editors despite its uncertainty as a method of preservation. Price concentrates specifically on scholarly editions, observing that “[m]ere digitizing produces information; in contrast, scholarly editing produces knowledge”.

The first concern Price raises about electronic editions is the lack of dedicated, qualified staff to carry out the work of converting content. Academics, he says, tend to neglect editing because the academy gives priority to literary and cultural theory. The scholarly editions Price refers to are often termed “archives” online, and many examples are available, such as the William Blake Archive, the Dickinson Electronic Archives, and the Einstein Archive Online. Digital archiving “blends features of editing and archiving”. For Price, the edition is only part of the archive: the archive itself contains much more. The Walt Whitman Archive, for example, offers letters, transcriptions, images, manuscripts, audio clips, and other tools and resources. It is much more than a mere edition; it is an interactive history.

Price presents a good argument for producing electronic scholarly editions. He lists their advantages as follows: they are capacious, allowing scholars to go beyond the limits of print publishing; art, colour, audio, and video clips can add depth and richness of meaning to a text; and they bring the edition a wider readership. Digital editions also allow greater scope for editing, or perhaps for the lack of it, since every version of a text can be included alongside commentaries from authors and editors alike. A text no longer has to be whittled down to the author’s final intended version: all versions can be included, and readers can debate the merits of each one. According to Price, with censorship and social pressures removed, a text’s true values and meanings can be questioned. Editorial decisions are not removed, however; issues such as database design and the mark-up of texts must still be settled. Other disadvantages, Price suggests, include the possibility of bias, since the way an edition is presented plays a key role in its interpretation. While Price openly admits that electronic scholarly editions can be challenging to produce, he embraces these challenges and sees them as attractive: “I would argue that these very challenges contribute to the attraction of working in this medium”.
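To make the idea of including every version concrete, here is a hypothetical sketch of how parallel readings might be encoded in TEI XML, the standard Price mentions below. The witness labels are invented for illustration; the variant itself is the familiar difference between the opening line of “Song of Myself” in the 1855 and 1881 editions of Leaves of Grass.

  <!-- Hypothetical TEI critical-apparatus fragment: both readings
       are preserved side by side rather than one being chosen.
       The witnesses #ed1855 and #ed1881 are assumed to be declared
       in a <listWit> in the document's header. -->
  <l>
    <app>
      <rdg wit="#ed1855">I celebrate myself,</rdg>
      <rdg wit="#ed1881">I celebrate myself, and sing myself,</rdg>
    </app>
  </l>

An apparatus of this kind is what lets readers, rather than the editor, weigh the versions against one another.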

Price goes on to describe the difference between digital library editions and electronic scholarly editions, using the Wright American Fiction project as an example. While this section does not add much to his argument, it does offer some insight into the amount of work that goes into such an undertaking. Price raises an interesting point here: the possibility of releasing digital editions as works-in-progress. The advantage of this lies in searchability; however, the stability of an electronic edition is affected by its ever-changing nature.

Price dedicates a large section of his article to the “unresolved issues and unrealized potentials” of digital editions. He believes that the full potential of electronic editing can only be reached through adherence to international standards, such as those set out by the Text Encoding Initiative (TEI) and the Encoded Archival Description (EAD). Price also points out that “scholarly work may be free to the end-user but it is not free to produce”, something the reader of an electronic edition finds very easy to forget. Electronic scholarship lacks the funding essential to its future development, and there is also the problem of undefined roles: “Traditional boundaries are blurring before our eyes as these groups – publishers, scholars, and librarians – increasingly take on overlapping functions”. However, Price once again turns the negative into a positive: “While this situation leaves much uncertainty, it also affords ample room for creativity, too, as we move across newly porous dividing lines”.

Price, while clear-eyed about the challenges facing digital scholarship, is ever-optimistic. With proper funding, he believes, electronic editions will expand audiences and, while not replacing paper-based articles, will certainly add to their informative value and to the preservation of texts. Price sums it up well in his own words when he calls this “a field of expansiveness and tremendous possibility”.


A Review of James Cummings’ “The Text Encoding Initiative and the Study of Literature”

James Cummings is a digital medievalist at the University of Oxford, specialising in TEI XML. This review considers his article “The Text Encoding Initiative and the Study of Literature”.

Cummings begins with a well-grounded description of what the TEI is and why it was founded. He notes that the TEI predates the web, and so “its recommendations have influenced the development of a number of web standards, most notably XML and XML-related standards”. His article is neither a complete history of the TEI nor a general introduction; instead, it samples “some of the history, a few of the issues and some of the methodological assumptions” of the TEI.

Cummings goes on to give a general description of the content and structure of the TEI Guidelines. This seems a rather pointless exercise, as a quick glance at the TEI’s website reveals the same information. The main body of the article deals with the technological and theoretical background of the TEI, beginning with a description of the TEI’s early manifesto, drawn up at a conference at Poughkeepsie in 1987. This is quite interesting, as it allows the reader not only to trace the evolution of the TEI but also to recognise areas of weakness or under-development. According to Cummings, institutions such as the Oxford Text Archive and the University of Virginia’s Electronic Text Center have greatly assisted in establishing the TEI’s standards for text encoding and preservation.

[Image: Text Encoding Model]

One of the main benefits of the TEI, as Cummings points out, is that it is “driven by the needs of its members, but also directed by […] the technologies it employs”. It evolves according to necessity. The TEI incorporates a diverse community of disciplines, resulting in a general encoding structure that can be adapted with basic or specialised modules. It is very much community-based and continually adapts to its users’ needs: “That the nature of the TEI is to be directed by the needs of its users is not surprising given that it is as a result of the need for standardisation and interoperability that the TEI was formed”. Cummings goes on to note that the Guidelines have made the elements “more applicable to a greater number of users”.
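As an illustration of that shared general structure, here is a minimal, invented sketch of a TEI document. The outer skeleton is common to every TEI text; the specialised modules Cummings mentions supply further vocabulary inside it.

  <!-- A minimal, hypothetical TEI document: the metadata header
       and the text body form the fixed skeleton, while
       discipline-specific modules add elements within <body>. -->
  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title>A Sample Text (invented for illustration)</title>
        </titleStmt>
        <publicationStmt>
          <p>Unpublished sketch.</p>
        </publicationStmt>
        <sourceDesc>
          <p>Born-digital; no source exists.</p>
        </sourceDesc>
      </fileDesc>
    </teiHeader>
    <text>
      <body>
        <div type="chapter">
          <head>Chapter 1</head>
          <p>Every TEI document shares this outer structure.</p>
        </div>
      </body>
    </text>
  </TEI>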

However, Cummings also points out the disadvantages of such an approach. He believes it leads to “methodological inequality”, in which some projects use highly specialised markup while others apply only more generalised methods. Cummings’ solution to this problem is the development of “rigorous local encoding guidelines”, as sketched below.
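Cummings does not show what such local guidelines might look like, but the TEI’s own customisation format, ODD, is one mechanism by which a project can formalise them. The sketch below is hypothetical: the module selection and the deleted element are invented for illustration only.

  <!-- A hypothetical ODD customisation: the project selects only
       the modules it needs and removes elements its local
       guidelines forbid, yielding a stricter, project-specific
       schema. -->
  <schemaSpec ident="local-verse-project" start="TEI">
    <moduleRef key="tei"/>            <!-- required infrastructure -->
    <moduleRef key="header"/>         <!-- metadata header -->
    <moduleRef key="core"/>           <!-- common elements -->
    <moduleRef key="textstructure"/>  <!-- body, div, etc. -->
    <moduleRef key="verse"/>          <!-- the one specialised module -->
    <elementSpec ident="said" mode="delete"/> <!-- locally disallowed -->
  </schemaSpec>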

Cummings makes a particularly interesting series of claims towards the centre of his article:

It is needless to say that many involved with the earliest efforts to create systems of markup for computer systems were not literary theorists, but this is not the case with the development of the TEI, which has often benefited from rigorous debate on the very nature of what constitutes a text (McGann 2001: 187).  While the history of textual markup obviously pre-dates computer systems, its application to machine-readable text was partly influenced by simultaneous developments in literary theory and the study of literature.

While these facts may seem obvious to Cummings, they would not be to someone with no previous knowledge of the area. For this reason it seems to me that Cummings is writing for his peers rather than for a general audience; yet a readership with TEI expertise would find his introduction very basic and perhaps a bit pointless.

The article then goes on to hypothesise that New Criticism may have influenced the application of markup to digital text. It would have been interesting had Cummings dwelt on this point a little longer; instead, he brushes over it rather quickly.

Cummings believes that the TEI has greatly advanced our understanding of what a text is. This is a bit far-fetched, considering many people have never even heard of the TEI, but his description of the hierarchy of texts and their overlapping structures is well elucidated.
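The overlap problem he describes is easiest to see in miniature. In the invented fragment below, the logical verse structure and the physical page structure cannot both be nested containers in a single XML tree, so the TEI records the page boundary with an empty “milestone” element instead:

  <!-- Hypothetical example: the stanza/line hierarchy is the
       primary tree; the page turn, which cuts across a line,
       is marked with the empty <pb/> element rather than a
       container that would overlap the <l> element. -->
  <lg type="stanza">
    <l>A line that closes the first page of the witness,</l>
    <l>and a line split in two by the <pb n="2"/>page turn.</l>
  </lg>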

Cummings spends much of the remainder of the article quoting from the TEI Guidelines, which makes for a rather monotonous read. Overall, he makes some good points but takes a long time to reach his main one: the TEI is not a perfect system, but with compromise it makes the digital representation of texts much easier.

Image credits: http://it.wikipedia.org/wiki/Text_Encoding_Initiative and http://scripts.sil.org/cms/scripts/page.php?item_id=IWS-Chapter01