#Lazyweb: Textual Studies Primer

One of the things that I love about Twitter is the #lazyweb feature: the ability to ask the world to help you find the answer to questions. Unfortunately my query today needs a bit more space to explain. And it’s not that I’m lazy in this case, it’s just that I’m short on time and I know that many of my colleagues will be able to quickly point me in the right direction.

I’m super excited to be teaching again this fall, and even more excited to be teaching an “Introduction to Digital Humanities.” It’s the first time a course like this will be taught at Emory, and it’s going to give me a great chance to dive more deeply into aspects of the field that I’m less familiar with. As I’ve been turning over the course in my mind, I’ve known that I’ve wanted to do one or more projects with the students, probably using our special collections, which tend to be quite strong in particular swaths of literature.

This week I sat down with Liz Chase, one of our special collections librarians, and brainstormed. We came up with a great project involving our holdings of Carol Ann Duffy’s notebooks. In short, we want to do some comparisons between how she writes in her 1999 volume, The World’s Wife, and her previous volumes. We’re interested in thematic material, vocabulary she uses, poetic styles, and so forth. But as I’ve been working to design the project, I’ve come to realize that the students’ work (to say nothing of my teaching) will be improved by the inclusion of some readings on textual scholarship along these lines. The trouble is, I don’t know this field at all.

What’s more, I’ve been trying to think about what sort of software we might most profitably use to help push our analysis after creating a dataset of the texts. I’m guessing we’ll want to represent word counts, word clouds, line structures, and more. My first thought is SEASR, but I’m not familiar with the tool and I’m not sure if it’s overkill or underkill or totally off the mark. I can always use Wordle, but I would like to have more options. And perhaps if I really knew this field of scholarship then it would be easier for me to know which tools I should be using.
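To make the word-count idea concrete, here is a minimal sketch of the kind of vocabulary comparison I have in mind, written in Python. The sample texts, the stopword list, and the ratio heuristic are all placeholders of my own invention, not Duffy’s poems or the method any particular tool uses:

```python
# A toy vocabulary comparison between two "volumes" of text.
# Sample texts and stopwords are illustrative placeholders only.
import re
from collections import Counter

STOPWORDS = {"the", "a", "and", "of", "to", "in", "is", "her", "his"}

def word_counts(text):
    """Lowercase, strip punctuation, and count words, skipping stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def distinctive_words(counts_a, counts_b, n=5):
    """Words far more frequent in A than in B (simple add-one ratio)."""
    scored = {w: (c + 1) / (counts_b[w] + 1) for w, c in counts_a.items()}
    return sorted(scored, key=scored.get, reverse=True)[:n]

# Toy stand-ins for an earlier and a later volume:
earlier = "The moon hung low and the sea spoke softly of the moon."
later = "She laughed at the myth and rewrote the myth in her own words."

counts_earlier = word_counts(earlier)
counts_later = word_counts(later)
print(distinctive_words(counts_later, counts_earlier))
```

The resulting counts could feed directly into a word-cloud tool like Wordle, which is essentially a visualization of exactly this kind of frequency table.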

What I really need, then, is a suggestion of books or articles that I should read so that our class proceeds thoughtfully on the project with an understanding of what’s been done in the past. Any tool suggestions would be welcome as well.

  1. #1 by kirstyn leuner on July 28, 2011 - 6:41 pm

    Your course sounds fascinating and really fun to teach! Though I get the sense you’re a bit of a DH wizard, here are some tools I would explore — you may already be familiar with them. Hope this helps 🙂

    – Juxta http://www.juxtasoftware.org/: for comparing versions of the same work with visual analysis. I’ve used this to discuss differences between Mary Shelley’s 1818 and 1831 Frankenstein editions (borrowed that lesson from Laura Mandell). If there are versions of the same poem in the collection, this would be a great way to study that.

    https://digitalresearchtools.pbworks.com/w/page/17801708/Text-Analysis-Tools: Compilation of Text Analysis tools that can be used for creating concordances, doing word counts, and visualizing word or string repetitions in a work. I have to say that although it looks really antiquated and clunky, I’ve used AntConc to help me write 2 essays, and it’s pretty straightforward and I love the data it generates. However, I’m sure there are fancier tools out there that I have not yet discovered — if you find some, I’d love it if you’d pass them on. Many of the tools on the list that I’ve wanted to try are for Windows only, so they seem like they’d be sexy, but I have a Mac and, to my detriment, haven’t taken my work to the PC computer lab in the library to try them.

    https://digitalresearchtools.pbworks.com/w/page/17801661/Data-Visualization: Compilation of Data Viz tools. I’ve seen Jon Saklofske demo NewRadial and it’s awesome. It might be a really nice way to visualize and organize the many different genres of works in the Duffy collection. It was designed for Blake (you prob know this already). I’ve never used it myself, but am dying to and will prob try to integrate it into my Brit Lit survey course this fall. I’ll send details when I do.

    – Have students compare textual visualizations using Wordle versus Tagcrowd (these links are in the data viz compilation list). It will not only create visual data, but introduce the idea of critiquing what digital tools do differently — strengths and weaknesses of similar tools. We had great fun with this doing analyses of Othello.
    —> a Wordle brainstorm a colleague had recently: have students upload their *own* writing in Wordle and analyze the visualization. I cannot wait to try this in my fall Brit lit class!

    – TEI demo: Since I’ve done some TEI work before, I have given my classes (and Lori Emerson’s class) a short intro to TEI and textual encoding, and let students play around with it, but not to create anything per se. Just to know how some works that they read online — notably those digitized by libraries — are being produced and to answer the “what’s under the hood” question. Usually, the focus of class is (1) short explanation of encoding and what it looks like, (2) what you need to do it, (3) what the TEI is, and (4) Finale: discussion of 3 different versions of the same poem: (a) the html version displayed on the web, (b) the version showing all the code, and (c) a PDF of the poem. Ask ?s like: is this the same text in all 3 versions, which v. is the most “authentic,” which version is the most useful, etc. (Borrowed lesson from Laura Mandell).

    – TAPoR: http://portal.tapor.ca/portal/portal. If any of the texts exist online, or you can put them online, this would be a fabulous way to do some accessible textual analysis. I’ve never used it, but could easily see myself using it and have wanted to.

    – Omeka: Saw a fabulous talk at STS by Jessica DeSpain on how she has her students make book history exhibits for final projects using Omeka. Perhaps you could have your students curate their own digital Duffy collection in Omeka for a similar project — the diff organizations of the exhibit, explanations, and analysis would be really interesting to see/evaluate.
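    The concordancing that a tool like AntConc performs (mentioned above) can be sketched in a few lines of Python. This is a toy keyword-in-context (KWIC) routine of my own, purely to show the idea, not AntConc’s actual implementation; the sample sentence is invented:

    ```python
    # Toy keyword-in-context (KWIC) concordance: each hit is shown
    # with a few words of context on either side. Illustrative only.
    def kwic(text, keyword, window=3):
        """Return each occurrence of keyword with `window` words of context."""
        words = text.split()
        lines = []
        for i, w in enumerate(words):
            if w.lower().strip(".,;:!?") == keyword.lower():
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                lines.append(f"{left} [{w}] {right}")
        return lines

    sample = "The wife of the myth retold the myth in her own voice."
    for line in kwic(sample, "myth"):
        print(line)
    ```

    Seeing the logic spelled out this way can also help students critique what the polished tools are doing under the hood.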

  2. #2 by Leonardo Flores on July 29, 2011 - 8:39 am

    Peter Shillingsburg does a good job of recapping and to a certain degree harmonizing textual theories. I recommend Scholarly Editing in the Computer Age and Resisting Texts.

    John Bryant’s The Fluid Text: A Theory of Revision does a similar job in an accessible, pragmatic, and more recent way.

  3. #3 by Meagan Timney on July 29, 2011 - 7:31 pm

    I don’t know much about Duffy’s writing, but I would definitely recommend that you and your class read an article that Julie Meloni suggested to me when I was in the process of preparing for my HUMA 150 class:

    Martha Nell Smith, “Computing: What Has American Literary Study To Do with It?”, American Literature 74.4 (2002): 833–857.

    Feel free to take a look at the HUMA150 course website, too. I’m not sure how helpful it will be, but the reading list and lab exercises may be of use: http://mdouglas.etcl.uvic.ca/huma150/
