Lit Reviews

A Review of Kenneth M. Price’s “Electronic Scholarly Editions”


Kenneth M. Price’s article, “Electronic Scholarly Editions”, addresses the advantages and disadvantages of digital publication as a medium for scholarly editions.  Price is a Professor of American Literature and is very much involved in the digital humanities.  He is co-director, with Ed Folsom, of the Walt Whitman Archive, a project that edits Whitman’s works online.  He also co-directs the Center for Digital Research in the Humanities at the University of Nebraska-Lincoln.  He begins his article by pointing out the cost and time that go into digital editions and states that, despite its uncertainty as a method of preservation, the electronic medium remains an “attractive medium” for editors.  Price concentrates specifically on scholarly editions, stating that “[m]ere digitizing produces information; in contrast, scholarly editing produces knowledge”.

The first concern Price raises about electronic editions is the lack of dedicated, qualified staff to carry out the task of converting content.  He says that academics tend to neglect editing because literary and cultural theories are given priority by the academy.  The scholarly editions Price refers to are often termed “archives” on the net, and there are many such examples available, such as the William Blake Archive, the Dickinson Electronic Archives, and the Einstein Archive Online.  Digital archiving, he notes, “blends features of editing and archiving”.  Price believes that the edition is only part of the archive: the archive itself contains much more.  For example, the Walt Whitman Archive contains many tools and resources, such as letters, transcriptions, images, manuscripts and audio clips.  It is much more than a mere edition: it is an interactive history.

Price presents a good argument for the production of electronic scholarly editions.  He lists the advantages as follows: they are capacious, and hence allow scholars to go beyond the limits of print publishing; depth and richness can be added through the use of art, colour, audio and video clips; they add depth of meaning to a text and bring a wider readership to the edition.  Digital editions allow a greater scope for editing, or perhaps lack of editing, as all versions of a text can be included along with commentaries from authors and editors alike.  A text no longer has to be whittled down to the author’s final intended version.  All versions can be included, and readers can debate the validity of each one.  According to Price, with censorship and social pressures removed, a text’s true values and meanings can be questioned.  However, editorial decisions are not removed.  There are still issues such as database design and the mark-up of texts to be decided on.  Other disadvantages, Price suggests, include the possibility of bias: the way an edition is presented plays a key role in its interpretation.  While Price openly admits that electronic scholarly editions can be challenging to produce, he embraces these challenges and sees them as attractive: “I would argue that these very challenges contribute to the attraction of working in this medium”.
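The point about mark-up as an editorial act is worth making concrete.  The following is a hypothetical sketch of my own (not from Price’s article) showing how the same manuscript sentence can be encoded either to expose or to suppress an author’s revision; the TEI elements <del> and <add> are real, but the sample text and code are invented:

    # Hypothetical illustration (mine, not from Price's article) of
    # mark-up as an editorial decision.  TEI's <del> and <add> elements
    # are real; the sample sentence and scenario are invented.
    from lxml import etree

    # Encoding A exposes the author's revision to readers:
    exposed = etree.fromstring(
        "<p>I celebrate <del>my self</del><add>myself</add></p>")
    # Encoding B silently accepts it, fixing one "final" text:
    silent = etree.fromstring("<p>I celebrate myself</p>")

    # An edition built from A can still hide deletions on display, but
    # the evidence survives in the markup; B discards it entirely.
    for deletion in exposed.findall("del"):
        print("recorded revision:", deletion.text)  # -> my self

An edition built on the first encoding lets readers see and debate the revision, as Price envisages; one built on the second has already made that decision for them.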

Price goes on to describe the difference between digital library editions and electronic scholarly editions, using the Wright American Fiction project as an example.  While this section does not add much to Price’s argument, it does offer some insight into the amount of work put into such an undertaking.  Price raises an interesting point here: the possibility of releasing digital editions as works-in-progress.  The advantage of this lies in searchability; however, the stability of an electronic edition is affected by its ever-changing nature.

Price dedicates a large section of his article to the “unresolved issues and unrealized potentials” of digital editions.  He believes that the full potential of electronic editing can only be reached through adherence to international standards, such as those set out by the TEI (Text Encoding Initiative) and the EAD (Encoded Archival Description).  Price also points out that “scholarly work may be free to the end-user but it is not free to produce”, something which is very easy for the reader of an electronic edition to forget.  Electronic scholarship is lacking in funding, which is essential to its future development, and there is also the problem of undefined roles: “Traditional boundaries are blurring before our eyes as these groups – publishers, scholars, and librarians – increasingly take on overlapping functions”.  However, Price once again turns the negative into a positive: “While this situation leaves much uncertainty, it also affords ample room for creativity, too, as we move across newly porous dividing lines”.

Price, while able to see the challenges facing digital scholarship, is ever-optimistic.  With proper funding, he believes, electronic editions will expand audiences and, while not replacing print editions, will certainly contribute to their informative value and to the preservation of texts.  Price sums it up well in his own words: electronic editing is “a field of expansiveness and tremendous possibility”.


A Review of James Cummings’ “The Text Encoding Initiative and the Study of Literature”

James Cummings is a digital medievalist at Oxford University, specialising in TEI XML.  His article, “The Text Encoding Initiative and the Study of Literature”, appears in the Blackwell Companion to Digital Literary Studies.

Cummings begins with a well-grounded description of what the TEI is and why it was founded.  He notes that the TEI has existed since before the web was formed, and so “its recommendations have influenced the development of a number of web standards, most notably XML and XML-related standards”.  His article is not a complete history of the TEI, nor is it a general introduction.  Instead it serves to sample “some of the history, a few of the issues and some of the methodological assumptions” of the TEI.

Cummings goes on to give a general description of the content and structure of the TEI Guidelines.  This seems a rather pointless exercise, as a quick glance at the TEI’s website reveals the same information.  The main body of the article deals with the technological and theoretical background of the TEI.  It begins with a description of the TEI’s early manifesto, drawn up at a conference at Poughkeepsie in 1987.  This is quite interesting, as it allows the reader not only to chronicle the evolution of the TEI but also to recognise areas of weakness or under-development.  According to Cummings, institutions such as the Oxford Text Archive and the University of Virginia’s Electronic Text Center have greatly assisted in the firm establishment of the TEI’s standards for text encoding and preservation.

Text Encoding Model

One of the main benefits of the TEI, as Cummings points out, is the fact that it is “driven by the needs of its members, but also directed by […] the technologies it employs”.  It evolves according to necessity.  The TEI incorporates a diverse community of disciplines, resulting in a general encoding structure that can be adapted with basic or specialised modules.  The TEI is very much community-based and continually adapts according to its users’ needs: “That the nature of the TEI is to be directed by the needs of its users is not surprising given that it is as a result of the need for standardisation and interoperability that the TEI was formed”.  Cummings goes on to note that the Guidelines have made the elements “more applicable to a greater number of users”.

However, Cummings also points out the disadvantages of such an approach.  He believes that it leads to “methodological inequality”, where some projects receive specialised markup while others make do with more generalised methods.  Cummings believes that the solution to this problem is the development of “rigorous local encoding guidelines”.
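To make the contrast concrete, here is a minimal sketch of my own (not taken from Cummings’ article): an invented TEI document whose header and structural elements belong to the generalised core, with one piece of more specialised semantic markup (<persName>) inside it, parsed here with Python’s lxml library:

    # A minimal, invented TEI document: the header and structural
    # elements are the generalised core; <persName> is the kind of
    # specialised semantic markup an individual project might add.
    from lxml import etree

    TEI_SAMPLE = b"""<TEI xmlns="http://www.tei-c.org/ns/1.0">
      <teiHeader>
        <fileDesc>
          <titleStmt><title>Sample</title></titleStmt>
          <publicationStmt><p>Unpublished sketch</p></publicationStmt>
          <sourceDesc><p>Born-digital example</p></sourceDesc>
        </fileDesc>
      </teiHeader>
      <text>
        <body>
          <p>A paragraph naming <persName>Walt Whitman</persName>.</p>
        </body>
      </text>
    </TEI>"""

    root = etree.fromstring(TEI_SAMPLE)
    NS = {"tei": "http://www.tei-c.org/ns/1.0"}
    # Specialised markup pays off in retrieval: find every marked name.
    for name in root.findall(".//tei:persName", NS):
        print(name.text)  # -> Walt Whitman

The structural elements would look much the same in any project; whether to add layers like <persName> is exactly the kind of local encoding decision Cummings wants projects to document.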

Cummings makes a very interesting pair of statements towards the centre of his article:

It is needless to say that many involved with the earliest efforts to create systems of markup for computer systems were not literary theorists, but this is not the case with the development of the TEI, which has often benefited from rigorous debate on the very nature of what constitutes a text (McGann 2001: 187).  While the history of textual markup obviously pre-dates computer systems, its application to machine-readable text was partly influenced by simultaneous developments in literary theory and the study of literature.

While these facts may seem obvious to Cummings, they would not be so to someone with no previous knowledge in this area.  For this reason it seems to me that Cummings is writing for his peers rather than a more general audience.  However, a readership with expertise in TEI would find his introduction very basic and perhaps a bit pointless.

The article then goes on to hypothesise that New Criticism may have influenced the application of markup to digital text.  It would have been interesting had Cummings dwelt on this point a little longer; however, he passes over it rather quickly.

Cummings believes that the TEI has greatly advanced our understanding of what a text is.  This is a bit far-fetched considering many people have never even heard of the TEI, but Cummings’ description of the hierarchy of texts and their overlapping structures is well elucidated.
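The overlap problem is easy to show with a small example of my own (not taken from Cummings’ article).  XML enforces a single strict hierarchy, so a verse line that runs across a page break cannot be wrapped by both the poetic and the physical structure; the TEI’s workaround is an empty “milestone” element such as <pb/> (page beginning):

    # Invented example of overlapping hierarchies: the verse line <l>
    # and the physical page cannot both act as containers, so the page
    # break is recorded as the empty milestone element <pb/>.
    from lxml import etree

    OVERLAP = b"""<lg xmlns="http://www.tei-c.org/ns/1.0">
      <l>A line of verse that begins on one page <pb n="2"/>
         and ends on the next.</l>
    </lg>"""

    line = etree.fromstring(OVERLAP)[0]  # the <l> element
    # The page break is a child of the line, not a wrapper around it.
    print(etree.QName(line[0]).localname)  # -> pb

One hierarchy (here the verse line) keeps its container elements while the other is flattened into point-like markers: one standard TEI compromise for the problem Cummings describes.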

Cummings spends much of the remainder of the article quoting from the TEI Guidelines, which makes for a rather monotonous read.  Overall, I think he makes some good points but spends a lot of time getting to his main one: the TEI is not a perfect system, but with compromise it makes the digital representation of texts much easier.



Book Review: Roadside Crosses

Roadside Crosses is a suspense thriller by number-one best-selling author Jeffery Deaver.  It is the final installment of a high-tech trilogy that explores the sinister side of the virtual world.  Roadside Crosses features kinesics expert Kathryn Dance, a CBI agent whose investigation of a series of murders on the Monterey Peninsula leads her into the cyber world.  Her investigation centres on Travis, a troubled teenager seeking revenge on those who cyberbullied him on a popular blog, The Chilton Report, over his part in a fatal car accident.  Deaver explores several aspects of the online world, such as blogging, gaming and social networking.  He plays with the idea that the more information people post about themselves online, the more vulnerable they become.  What sets this novel apart, however, is that it incorporates the digital world into the plot in a very real way.  Every forty or fifty pages there is a URL which the reader can use to look up the fictional blog, The Chilton Report, online and find more clues related to the book’s plot.  This blending of fiction with cyberspace adds a sinister element of reality to the story.  A typical Deaver novel, it has a fast-paced plot that takes a number of twists before the final blow is dealt.  It leaves the reader with a chilling insight into the potential dangers of the minefield that is the virtual world.


A Review of Aimée Morrison’s “Blogs and Blogging: Text and Practice”

Aimée Morrison’s article on blogs and blogging in the Blackwell Companion to Digital Literary Studies is a very informative read, particularly for those with little or no previous knowledge of the topic.  She begins with a short history of the blog, which started as a series of links of interest to a webpage’s author and went on to become a writing genre.  I found her definitions to be simple but detailed, and easy to absorb.  She describes the concepts of archiving and keywords, and the use of blogs as “online diary services”, a hugely popular phenomenon.  I think one of the most likeable aspects of Morrison’s writing is her use of digital terminology, for example “blogosphere”, throughout her article.  She also backs up her statements well, continuously referring to research on the subject, such as that carried out by Herring et al.

Morrison’s informative introduction leads seamlessly on to her hypothesis on the cultural and technological forces “driving the increases in readership and authorship of blogs”.  She cites the 2004 presidential election in America as a major factor in the increase in blog readership and general awareness of the blogosphere itself.  Morrison also points out that a significant proportion of bloggers are students, particularly young male students.  While many of these are using their blogs as online personal diaries, the fact that blogs are so regularly accessed by students points toward their potential as learning tools.  Morrison offers Daniel W. Drezner’s blog as an example of one that “both reflects his academic work in political science and departs from the academic standards that govern his peer-reviewed work”.

As with Liu, Morrison also refers to the significance of software tools being structured so that they can be widely used, thus lowering the “technical barrier to entry” into the blogosphere.  She also considers the other side of blogging – reading: “Reading a blog, of course, requires nothing more than an internet connection and a web browser”.  She describes the role of RSS and Atom in increasing blog readership across “the internet population”.  In the past decade blogs have become very accessible.  Even newspapers now often have a “blogwatch” feature, and there are search engines dedicated specifically to finding blogs (e.g. blogsearch).
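As a rough sketch of what those feed formats enable (my example, not Morrison’s; the feed URL is a placeholder), an aggregator needs only a few lines of code to follow a blog without ever visiting it:

    # Minimal feed-reading sketch using the third-party feedparser
    # library; the URL is a placeholder, not a feed Morrison cites.
    import feedparser

    feed = feedparser.parse("https://example.com/blog/atom.xml")
    # RSS/Atom entries carry the metadata a reader or aggregator
    # needs: the title, link and publication date of each new post.
    for entry in feed.entries[:5]:
        print(entry.title, "->", entry.link)

This kind of machine-readable syndication is what allows a reader to follow dozens of blogs at once rather than checking each one by hand.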

One aspect of Morrison’s article which I found very enlightening was her description of the blogging community.  As well as an overview of the blogroll tool – “[c]ommunities of interest are formed in the intersection of blogs in the blogroll” – Morrison also gives an account of blog “carnivals”.  Blog carnivals are a type of circulating online magazine, something which I had never heard of before reading Morrison’s article.  She cites a website, blogcarnival.com, which keeps a list of editions belonging to 290 different carnivals.

I also particularly enjoyed Morrison’s paragraph on the genres of blogs, especially the opening in which she describes blogging as a rapidly changing “landscape”.  It appears that the blogosphere is indeed a broad landscape encompassing a variety of blogs, from the personal-style diary blog to academic library blogs (“LibLogs”).  She also notes that the power of blogs lies in their immediacy.

Overall, a light but informative read which tracks the progress of the blog from a total of 23 community-based blogs in 1997 to today’s ever-increasing diversity, with new blogs created every second.




A Review of Alan Liu’s “Imagining the New Media Encounter”

Alan Liu is a professor of English at the University of California, Santa Barbara.  In 1994 he set up a website called Voice of the Shuttle, a research portal for the digital humanities, and from there began researching the relationship between literature and technology.

His article “Imagining the New Media Encounter” serves as an introduction to the Blackwell Companion to Digital Literary Studies.  The “encounter” Liu describes is the “boundary between the literary and the digital”.  Liu interestingly points out that “[n]ew media, it turns out, is a very old tale”.  New media encounters have been occurring for centuries, Liu states, beginning with orality and progressing through literacy, manuscripts and print to the digital world.  Liu believes that these encounters are an inevitable part of life and culture.  He compares the digital media encounter to the experience the Egyptian king felt when his people were first introduced to writing.  He quotes Plato’s Phaedrus: “for this discovery of yours [writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves”.  Liu goes on to describe more instances of “first contact”, such as the practice of silent reading, the first book, law, image, and so on.  I found this aspect of Liu’s article very enlightening, as the connection between the digital world and the changes in textual transmission that have been happening for centuries was one I had not made before.

While Liu acknowledges that these encounters have significant impacts on society – “[n]ew media encounters are a proxy wrestle for the soul of the person and the civilisation” – he also argues that sometimes too much emphasis is placed on encounters which he just sees as part of the progression of life: “Dramatizations of the instant when individuals, villages, or nations first wrap their minds around a manuscript, book, telephone, radio, TV, computer, cell phone, iPod, etc., are overdetermined”.  I would agree with Liu that the new media encounter is simply another way of transmitting textual data and that the way in which text is presented can be important.  Liu invokes McLuhan’s famous claim that “the medium is the message” to elucidate the idea that the way in which something is communicated is more important than the content: “The extension of any one sense alters the way we think and act – the way we perceive the world”.  I think Liu makes a valid point when he states that the medium is central to the way we perceive data and that it makes it more widely accessible, but I find it hard to believe that the method of transmission is more relevant than the core message of a text.

Throughout his article Liu uses a biological trope to describe the way in which the new media encounter encompasses everyone.  He describes it as a “narrative genome”, something with both dominant and recessive aspects; he uses the term “identity chiasmus” to describe the “otherness” of the new media encounter.  On first reading I found these to be strange metaphors; however, on second and third readings I found them quite a novel way to describe the multi-faceted existence of the digital world.  Liu describes the digital world as an “ecology”, stating that there is a place for everyone: “There are niches for both the established media aggregators (church, state, broadcast or cable TV, Microsoft, Google, iTunes, YouTube, etc.) and newer species, or start-ups, that launch in local niches with underestimated general potential”.

I think Liu makes one of his best points when he says that narratives of the new media encounter “are not just neutral substrates […] they are doped with human contingencies”, and that it is the human element that makes the new media encounter a welcome experience.  However, he also states that it is the human element that makes media encounters “messy”.  I very much agree with his statement that acceptance of something new into society can be challenging: “multiple generations were needed to convert oral peoples into people who thought in terms of writing”.  It is evident that the same can be said about the digital representation of texts.

Liu summarises his thoughts on the new media encounter well in his paragraph on the “déjà vu haunting of new by old media”.  He believes that old media are recycled and freshly communicated through new media, and that this is caused by the ever-growing needs of new generations.

While Liu’s article proved quite a laborious read and its structure was at times disjointed, his overall points are worth consideration.  He believes the move to digitisation is a transition comparable to the introduction of the novel in the eighteenth century – it is simply a new way of transmitting ideas and information by merging new and old media.

Video: Alan Liu’s talk on Social Computing, given as part of the Centre for Digital Humanities 2010 Future Knowledge lecture series.