Friday, November 19, 2010

On being a digital medievalist

by Martin Foys and Asa Simon Mittman


[We are grateful to MF and ASM for composing this guest post, a process that gave us a dizzying view of dynamic electronic collaboration in action: they collaborated via Google docs and FB simultaneously, and allowed us to watch the process unfold. -- JJC]

The Machine in the Ghost: reactions to being (not, not at all, not by a long shot) a digital medievalist rock star
Martin Foys

Yesterday I was fortunate enough to be included in a New York Times article about scholarly efforts that are now considered the digital humanities. I ended up in the Times by being in the right place at the right time - literally the right place, as living twenty miles outside of New York afforded the paper an easy photo-op. The whole experience so far has been twenty-four hours of incongruous weirdness. Friends I haven't heard from in years have emailed me congratulations; the president of my institution wants to have lunch with me, on his birthday no less; a company in Tampa named "In the News" called and asked if I wanted the article mounted on a plaque - one of three such offers I received in the space of a morning.

It's also been a crystallizing, bracing opportunity to think about what, exactly, digital humanities means for me, and why I am doing the work I am. I spent part of my day fruitlessly defining and defending on Facebook my own sense of this work to a medievalist acquaintance who wrote me a litany of protests like "I think this mapping stuff (turning text into data and into an image) is, well, pretty much without consequence, personally."

Newspaper articles are designed to tell you difficult things easily, and by their nature misquote you, even when they print word-for-word a sentence you actually said. The Times article is a fine attempt to talk about some of the work being done in the so-called Humanities 2.0, but such pieces inevitably flatten much of what is exciting here, at least to me. What I didn't realize, though, was how reporting about some humanities scholars' desire to focus on digital modes of data and visualization of information would unintentionally catalyze cries of reductionism and the de-humani(ties)zation of our disciplines. The nearly one hundred comments on the article (I know - why did I read them?) are dominated by harsh runs of posts rejecting the significance and relevance of data in the humanities, and at times the humanities in general (though there was an edifying, late rally of support for such work before comments closed at the end of the day). But at least only one particularly frustrated reader went so far as to call the academics profiled in the piece "pseudo-scholars" (I'm looking at you, #43, Thomas Clarke, from Phoenix, Ariz.).

I am deeply grateful for such sentiments; they remind me of the necessity of living beyond the work you do, and what you think it does, and your like-minded peers - beyond the "horizontal modeling" of your intellect (a phrase itself stolen un-horizontally from an English professor whose arguments I find profoundly misguided: Mark Bauerlein, author of The Dumbest Generation). I thought I'd take a little space here to work out in somewhat more (if still brief and woefully unspecific) detail what this kind of work means to me, and why I find it important. Most of my own early work in applying digital technologies to medieval studies was of a remediating sort - in the 1990s, while designing the now, already, antique Bayeux Tapestry Digital Edition, I hadn't yet worked out that technology didn't allow you to do things better, faster, stronger (a.k.a. the Bionic Man hermeneutic), but instead differently. Around that time, I was at a series of Kalamazoo sessions on digital resources for medieval studies (at the time, mostly new web sites or giant, funded catalogue and/or database initiatives) and Dan O'Donnell (soon to become the founder of Digital Medievalist) asked a question that has haunted me ever since. During the Q&A, Dan said something like, "Well, okay, is all this just improving what we already do, or is it actually changing what we do?" Sitting there, I was appalled - not at the question, but at the answer I myself had - which was "no, not really."

Where, exactly, was the brave new world?


Writing Virtually Anglo-Saxon was, in part, my attempt to work out theoretically some of the possible valencies of new media for medieval study. But it wasn't until about five years later that the a-ha moment came for me regarding digital praxis in medieval studies. At a 2003 conference on "New Technologies and Old Texts," I saw a presentation by the HUMI Project, based at Keio University. The Japanese have a knack for bringing it in ways you do not expect. Like Pocky (seriously, have you seen the design aesthetic here?). HUMI was at the time analyzing the typographic output of Caxton's printing house, generating computer histograms of individual printed letter characters from different books. They were doing this in part to trace evidence of decay in the type being used across Caxton's print runs. In effect, though, they were also rebuilding from printed words on the page a granular history of the objects which produced them. This is where the penny dropped, as I realized that computers had the ability to read those words differently than we instinctively could or would -- a reverse engineering of what I later learned was transcoding, or the ability of new media to read machine code, but simultaneously reformulate it into forms of signification readable by human users. For the computer, the word on the page encoded far more than we could physically see, calculate, or unaided understand.
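(For the code-curious: here is a minimal sketch, mine rather than HUMI's actual pipeline, of what such a letterform comparison might look like. It assumes pre-cropped grayscale images of the same printed character; the file names and the chi-squared metric are my own illustrative choices.)

```python
# A toy version of glyph-histogram comparison, in the spirit of the
# HUMI Project's analysis (not their actual method). Assumes two
# pre-cropped grayscale images of the same printed letter; the file
# names below are hypothetical.
import numpy as np
from PIL import Image

def glyph_histogram(path, bins=32):
    """Normalized grayscale histogram of one cropped letterform."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / hist.sum()

def histogram_distance(h1, h2):
    """Chi-squared distance between two normalized histograms."""
    eps = 1e-12  # avoid division by zero on empty bins
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Compare impressions of the "same" piece of type from an early and
# a late print run:
early = glyph_histogram("caxton_a_early.png")
late = glyph_histogram("caxton_a_late.png")
print(f"chi-squared distance: {histogram_distance(early, late):.4f}")
# A distance that rises across dated runs is one crude, machine-legible
# proxy for the physical decay of the type that printed the letters.
```

The particular metric matters less than the shift it represents: the machine reads the letter not as an "a" but as a distribution of ink, a measurement of the object that produced it.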

"Calculate" is the key word here. The brain needs help to do its scholarly work. We've had technology assisting us for longer than we care to remember - indeed, we've forgotten, interiorized, as Ong puts it, the technology; we’ve embedded the glasses, the pen, the book, and gradually now the computer deep within us - becoming unwitting cyborgs in the process. A number of reader comments in the Times piece took pains to differentiate between the tools and the transformational rhetoric the article espoused, saying, in summary, They're just tools - they don't do anything - the scholar still needs to interpret the data, or, as Anthony Grafton states in the article, “It’s easy to forget the digital media are means and not ends." Sure, fine, yep. But since when haven't tools transformed the human? The old-school cybernetic turn that tools are the extensions of "man" was outmoded at the moment that Norbert Wiener (n.b. best computer geek name ever) began to formulate it in the late 1940's, even by his own admission. The point of tools and humans is not that they augment the human, or technologically determine the human, but rather the constant feedback loop existing between the tool and the human - a kind of cultural technetics of a circulating, hybridized, slow dance of mutual prosthesis. In digital scholarship, we are beginning to generate data that we before did not have the capacity to imagine, both in quantity (the economies of scale are already becoming staggering), but in quality as well. So what happens, to give one emergent example from our own work in Digital Mappaemundi, when we can analyze the coordinate-coded proximity of related images, inscriptions and topographical features across a wide corpus of medieval maps and geographic texts? We can’t wait to find out, so we’re building the tools to do so, but it’s going to be a slow process of discovery.

This is the necessary symbiosis between flesh and machine. One thing I've learned this semester while team-teaching an undergraduate digital humanities class with my computer-scientist collaborator Shannon Bradshaw is that computers are great at things that humans are not, and vice versa. Our mental ability to calculate complex equations is rudimentary at best - a dollar-store calculator outperforms us. But relatively speaking, computing technology is still ghastly when it comes to such things as linguistic facility, or recognizing "image affect" - the capacity of visuals to generate emotion. Fascinating. Numbers, words and pictures - these are what drive our hyperreal world after all, and our interpretation of it; we need machines to produce them, and machines to understand them. One manifest utility of digital technology is that it forces us to admit how porous the boundaries are between such signifying ingredients. We're already quite accomplished at turning everything about our world into words - we've had centuries of scholarly practice - but what happens when we turn words back into objects, or into images, or turn images into numbers?

There's much more that could be discussed, of course - especially the communal and collaborative qualities of new media discourse that are also ardently post-human, but I've written enough, and I think Asa's going to hit this aspect a bit. (Oh, okay - one quick anecdote: I've taught Foucault's "What is an Author?" in my Intro to Lit Crit class for the better part of two decades now. This year was the first year where I actually felt the author function re-historicizing in modes analogous to medieval corporate production, as students brought up Wikipedia and other digital texts as examples of authoritative, yet authorless discourse. It was pretty nifty.)

The work being done by the scholars mentioned in the Times article, and the intense industry of the other "geek/poets" for whom we there stand in as convenient body doubles, is not revolutionary, but revelatory. The article is not, as some have (mis)taken it, meant to indicate that now is the time when the new, exciting digital work is being done (it has been for decades), or that now is the time that changes the game. Change is always. The article is simply a signpost along the slow, inexorable, grinding shift to that which comes next. Anglo-Saxon England didn't end in 1066, despite all the tumult and arguably sudden upsets to the ruling structure. Gradually, later, it just was no longer, well, really what it was before, and had become something different. Mostly. Welcome to the humanities . . . again.


Always was a rockstar (in my mind)...
Asa Simon Mittman

I get a message from Martin, subject line "Dude, you're in the Times." Confusion. Nothing in the message but a link. I click it. The New York Times. Heart begins to pound. When I wrote to friends and colleagues that this was a life-goal achieved, I pretended that I said it with my tongue in my cheek. What has followed has been, as Martin notes, bizarre. I have been contacted by a curator with an exhibition he sees as relevant, by a publisher (no, no new book deal will ensue), by other digital humanities folks, by friends from afar, and by a prof from right here at Chico State who first heard about me by reading the article in the Times. (The Times! Ok, I'm ok.) He is full-time in another department, but serving as a visiting prof in mine, and we've never met. Social networks.

The article does a good job, I think, of presenting some of the newer ideas and efforts (only a tiny sample) in the Digital Humanities, and while not in-depth, it is not nearly as reductive as some of the commentators claim. Like Martin, I read the comments. Like him, I was struck by many, especially one from “Swagato C” (Chicago, IL), who writes:
I am not so sure that 'crowdsourcing' academic ventures is the way forward. Could Kant have written his Critiques by a collaborative effort across continents and schools of thought? At some point the individual still stands apart. The tools, yes, they must indeed be refreshed as necessary.
Martin does a good job here (and an excellent job in Virtually Anglo-Saxon) of pointing out that all technology is a mediation, even (perhaps especially) those technologies that we fail to see because, in our post-human status, they seem mere extensions of ourselves. The pen giving way to the keyboard, vellum being overtaken by paper: these technologies affect what we write, how we write, how we house, organize, and sort that information, and how we find and read it. But Swagato is perhaps (willfully) blind to the historical processes that produced Kant's Critiques. This is the standard romanticizing of the writing process as the work of the lone scholar, toiling in isolation in a rickety garret. Preferably on a mechanical typewriter.
And, if possible, the author's world should become black-and-white. All the better if he (always he in this scenario) has a glass of whiskey or a bottle of absinthe to provide the inspiration.

But this has no more foothold in reality than Jerry Lewis has a typewriter. It is part of a grand narrative about how Great Works come into being (no less prevalent in my field of Art History than in Literature or Philosophy). Never mind that we are not suggesting that we should crowdsource all academic ventures; more to the point, Kant did not write in a vacuum. He was responding to and building on a mass of philosophical thought. Good work occurs in dialogue, out of myriad contacts and readings and replies. Indeed, the curator who contacted me has put together an exhibition on the Republic of Letters (the subject of a very cool digital humanities project out of Stanford, my alma mater, which is doing a lot of good work in the field), and I cannot help but think that those folks would have loved the Internet.

So, really, there is no question about the value of collaboration, is there? This then brings us to the data portion of the argument, which has two prongs:
1. Human passion is the only way to access the humanities (Julia, Hiawassee, GA):
How can the Arts be "quantified"? The Great God Data is, I feel, anathema to creativity and the human soul. Technology may have proven itself valuable in the fields of science and economics, but, please leave the worlds of literature and art alone! To quote Descartes: "The heart has its reason that reason does not know." As an artist and a humanist, "j'accuse".
Again, we are with the Romantics. This notion would have been fairly alien to artists of many periods, including the "artists" -- really, better termed illuminators -- who made the maps that we have been focusing on, many of whom were quite interested in data. Take the Saint John's College MS 17 map, for example (the map itself is at our site, while the whole manuscript is available on a great site headed by Faith Wallis of McGill). The map is housed in a manuscript that begins with a geometry problem (though this is a later addition), and contains, among other things:
Materials related to computus … grammar of numbers; calendar prognostications; note on world-ages; how to calculate embolisms; lunaria; horoscope; Coena Cypriani; riddles; runic, cryptographic and exotic alphabets; demonstration of cryptography; diagrams showing Easter termini, times of sunrises and sunsets; world-map; table of feriae; tables and texts on degrees of consanguinity; taxonomies of knowledge; Byrhtferth's Diagram.
Note the world map as merely one bit of the very technical, mathematical, data-filled context of this manuscript. The “human soul” argument sounds expansive and grand, but is, in truth, historically embedded and quite limiting.
2. Instead of crunching the data, we should just use digital tools as virtual archives, which will help us go about our (already, if unacknowledged, mediated) work as we always have. John Kleeberg (New York, NY) voices this clearly:
This research could easily end up being pointless. Contrast that to the vital importance of scanning and digitizing our libraries and archives so that they can be preserved and made available to all. That way, the next time an Anna Amalia library burns up, or a Cologne archive collapses (and we know for certain that there will be a next time), at least the information will be preserved in digital form.
In fact, Martin, Shannon and I ran into exactly this response from the Kress Foundation. When we applied to a new funding program for electronic work, they told us, “while the Kress Foundation appreciates the significance of the Digital Mappaemundi project, our Digital Resources initiative is new and its early focus will be on what we are calling ‘destination’ photo archives … The Kress Foundation commends the Digital Mappaemundi project’s goals and we are certain that it will indeed provide greater access to scholarly records of the mappaemundi.” It would be such a dreadful shame if computer scientists went through all the trouble of creating these stunning abilities and all we did was use them as filing cabinets, rather than letting them do the sorts of work that, as Martin says, humans just aren’t good at.

As 2010 wanes, I've been thinking a lot about a TV show I used to watch as a kid, Beyond 2000. It promised me flying cars and jet packs (and I am somewhat pissed that I don't have either, yet. Just a 1999 Subaru with its "Check Engine" light on). But the writers of the show missed all the big stuff, and they didn't see the value of humanities computing. We are not "just getting started," or anything like that. We've been at it for ages. Bob Stevick converted Beowulf to punch cards. In 1967. I do agree with Kleeberg, though: some of "this research could easily end up being pointless." Of course it could. Thank goodness that was never the case with print-based scholarship. Or quill-based.

2 comments:

  1. For goodness sakes! The humanities and the arts can't be quantified, runs the complaint. But nobody is claiming to quantify them, except perhaps administrators keen to cut them. We're simply talking about uses of new tools. As a medievalist, I have no trouble with word-searching Migne's digital Patrologia Latina, referring my colleagues to page numbers of the critical edition of Anselm's texts, or composing my articles and book on a state-of-the-art computer. I fail to see where any threat to the arts or humanities lies in all of this, and I applaud the work of those who are willing to use technology to our advantage for humanistic, hermeneutic ends.

  2. As usual, the NYTimes article is slightly off-kilter (the way Martin and Asa are emphasizing quantitative differences between computers and humans seems too subtle to fit the quantitative vs. qualitative binarism) -- but this post wonderfully represents the questions, concerns and messy half-thoughts driving so much digital humanities research right now. Kudos!

    I had the pleasure of meeting Tim Stinson at THATCamp RTP. I was a mild skeptic of genetic databases being anything more than scientizing the humanities (or at least giving precedence to scientific methods, which is already overdone in DH). He patiently convinced me, though, that it isn't a return to objectivity, but a potentially new perspective on how to slice the orange of history. It shifts the conversation.

    Of course, our primary ways of publishing research (not information, but the product of a specific line of inquiry) still haven't changed enough to reflect these changes in methodology. And probably won't -- or, at least, by the time they do, the new methods will have been institutionalized and the changes therefore inevitable, normalized and disarmed.

