A Working Definition of Digital Humanities

Hah! I tricked you. I don’t intend to define digital humanities here—too much blood has already been spilled over that subject. I’m sure we all remember the terrible digital humanities / humanities computing wars of 2004, now commemorated yearly under a Big Tent in the U.S., Europe, or in 2015, Australia. Most of us still suffer from ACH or ALLC (edit: I’ve been reminded the more politically correct acronym these days is EADH).

Instead, I’m here to report the findings of an extremely informal survey, with a sample size of 5, inspired by Paige Morgan’s question of what courses an undergraduate interested in digital humanities should take:

The question inspired a long discussion, worth reading through if you’re interested in digital humanities curricula. I suggested, were the undergrad interested in the heavily computational humanities (like Ted Underwood, Ben Schmidt, etc.), they might take linear algebra, statistics for social science, programming 1 & 2, web development, and a social science (like psych) research methods course, along with all their regular humanities courses. Others suggested removing some courses and adding others, and of course all of these are pipe dreams unless our mystery undergrad is in the six-year program.

The Pipe Dream Curriculum. [via]
The discussion got me thinking: how did the digital humanists we know and love get to where they are today? Given that the basic ethos of DH is that if you want to know something, you just have to ask, I decided to ask a few well-respected DHers how someone might go about reaching expertise in their subject matter. This isn’t a question of how to define digital humanities, but of the sorts of things the digital humanists we know and love learned to get where they are today. I asked:

Dear all,

Some of you may have seen this tweet by Paige Morgan this morning, asking about what classes an undergraduate student should take hoping to pursue DH. I’ve emailed you, a random and diverse smattering of highly recognizable names associated with DH, in the hopes of getting a broader answer than we were able to generate through twitter alone.

I know you’re all extremely busy, so please excuse my unsolicited semi-mass email and no worries if you don’t get around to replying.

If you do reply, however, I’d love to get a list of undergraduate courses (traditional humanities or otherwise) that you believe were or would be instrumental to the research you do. My list, for example, would include historical methods, philosophy of science, linear algebra, statistics, programming, and web development. I’ll take the list of lists and write up a short blog post about them, because I believe it would be beneficial for many new students who are interested in pursuing DH in all its guises. I’d also welcome suggestions for other people and “schools of DH” I’m sure to have missed.

Many thanks,
Scott

The Replies

And because the people in DH are awesome and forthcoming, I got many replies back. I’ll list them first here, and then attempt some preliminary synthesis below.

Ted Underwood

The first reply was from Ted Underwood, who was afraid my question skirted a bit too close to defining DH, saying:

No matter how heavily I hedge and qualify my response (“this is just a personal list relevant to the particular kind of research I do …”), people will tend to read lists like this as tacit/covert/latent efforts to DEFINE DH — an enterprise from which I never harvest anything but thorns.

Thankfully he came back to me a bit later, saying he’d worked up the nerve to reply to my survey because he’s “coming to the conclusion that this is a vital question we can’t afford to duck, even if it’s controversial [emphasis added]”. Ted continued:

So here goes, with three provisos:

  1. I’m talking only about my own field (literary text mining), and not about the larger entity called “DH,” which may be too deeply diverse to fit into a single curriculum.
  2. A lot of this is not stuff I actually took in the classroom.
  3. I really don’t have strong opinions about how much of this should be taken as an undergrad, and what can wait for grad school. In practice, no undergrad is going to prepare themselves specifically for literary text mining (at least, I hope not). They should be aiming at some broader target.

But at some point, as preparation for literary text-mining, I’d recommend

  • A lot of courses in literary history and critical theory (you probably need a major’s worth of courses in some aspect of literary studies).
  • At least one semester of experience programming. Two semesters is better. But existing CS courses may not be the most efficient delivery system. You probably don’t need big-O notation. You do need data structures. You may not need to sweat the fine points of encapsulation. You probably do need to know about version control. I think there’s room for a “Programming for Humanists” course here.
  • Maybe one semester of linguistics (I took historical linguistics, but corpus linguistics would also work).
  • Statistics — a methods course for social scientists would be great.
  • At least one course in data mining / machine learning. This may presuppose more math than one semester of statistics will provide, so
  • Your recommendation of linear algebra is probably also a good idea.

I doubt all of that will fit in anyone’s undergrad degree. So in practice, any undergrad with courses in literary history plus a semester or two of programming experience, and perhaps statistics, would be doing very well.

So Underwood’s reply was that focusing too much in undergrad is not necessarily ideal, but were an undergraduate interested in literary text mining, they wouldn’t go astray with literary history, critical theory, a programming for humanists course, linguistics, statistics, data mining, and potentially linear algebra.

Johanna Drucker

While Underwood is pretty well known for his computational literary history, Johanna Drucker is probably best known in our circles for her work in DH criticism. Her reply was concise and helpful:

Look at http://dh101.humanities.ucla.edu

In the best of all possible worlds, this would be followed by specialized classes in database design, scripting for the humanities, GIS/mapping, virtual worlds design, metadata/classification/culture, XML/markup, and data mining (textual corpora, image data mining, network analysis), and complex systems modeling, as well as upper division courses in disciplines (close/distant reading for literary studies, historical methods and mapping etc.).

The site she points to is an online coursebook that provides a broad overview of DH concepts, along with exercises and tutorials, and would make a good basic grounding in DH. She then lists a familiar set of computer-related and humanities courses that might be useful.

Melissa Terras

The next reply came from Melissa Terras, the director of the DH center (I’m sorry, centre) at UCL. Her response was a bit more general:

My first response is that they must be interested in Humanities research – and make the transition to being taught about Humanities, to doing research in the Humanities, and get the bug for finding out new information about a Humanities topic. It doesn’t matter what the Humanities subject is – but they must understand Humanities research questions, and what it means to undertake new research in the Humanities proper. (Doesn’t matter if their research project has no computing component, it’s about a hunger for new knowledge in this area, rather than digesting prior knowledge).

Like Underwood and Drucker, Terras is stressing that students cannot forget the humanities for the digital.

Then they must become information literate, and IT literate. We have a variety of training courses at our institution, and there is also the “European Driving License in IT” which is basic IT skills. They must get the bug for learning more about computing too. They’ll know after some basic courses whether they are a natural fit to computing.

Without the bug to do research, and the bug to understand more about computing, they are sunk for pursuing DH. These are the two main prerequisites.

Interestingly (but not surprisingly, given general DH trends), Terras frames passion about computing as more important than any particular skill.

Once they get the bug, then taking whatever courses are on offer to them at their institution – either for credit modules, or pure training courses in various IT methods, would stand them in good stead. For example, you are not going to get a degree course in Photoshop, but attending 6 hours of training in that…. plus spreadsheets, plus databases, plus XML, plus web design, would prepare you for pursuing a variety of other courses. Even if the institution doesn’t offer taught DH courses, chances are they offer training in IT. They need to get their hands dirty, and to love learning more about computing, and the information environment we inhabit.

Her stress on hyper-focused courses of a few hours each is also interesting, and very much in line with our “workshop and summer school”-focused training mindset in DH.

It’s at that stage I’d be looking for a master’s program in DH, to take the learning of both IT and the humanities to a different level. Your list excludes people who have done “pure” humanities as an undergrad from pursuing DH, and actually, I think DH needs people who are, ya know, obsessed with Byzantine Sculpture in the first instance, but aren’t afraid of learning new aspects of computing without having any undergrad credit courses in it.

I’d also say that there is plenty room for people who do it the other way around – undergrads in comp sci, who then learn and get the bug for humanities research.

Terras continued that taking everything as an undergraduate would equate more to liberal arts or information science than a pure humanities degree:

As with all of these things, it depends on the make up of the individual programs. In my undergrad, I did 6 courses in my final year. If I had taken all of the ones you suggest: (historical methods, philosophy of science, linear algebra, statistics, programming, and web development) then I wouldn’t have been able to take any humanities courses! which would mean I was doing liberal arts, or information science, rather than a pure humanities degree. This will be a problem for many – just sayin’. 🙂

But yes, I think the key thing really is the *interest* and the *passion*. If your institution doesn’t allow that type of course as part of a humanities degree, you haven’t shot yourself in the foot, you just need to learn computing some other way…

Self-teaching is something that I think most people reading this blog can get behind (or commiserate with). I’m glad Terras shifted my focus away from undergraduate courses, and more on how to get a DH education.

John Walsh

John Walsh is known in the DH world for his work on TEI, XML, and other formal data models of humanities media. He replied:

I started undergrad as a fine arts major (graphic design) at Ohio University, before switching to English literary studies. As an art major, I was required during my freshman year to take “Comparative Arts I & II,” in which we studied mostly the formal aspects of literature, visual arts, music, and architecture. Each of the two classes occupied a ten-week “quarter” (fall, winter, spring, summer), rather than a semester. At the time OU had a department of comparative arts, which has since become the School of Interdisciplinary Arts.

In any case, they were fascinating classes, and until you asked the question, I hadn’t really considered those courses in the context of DH, but they were definitely relevant and influential to my own work. I took these courses in the 80s, but I imagine an updated version that took into account digital media and digital representations of non-digital media would be especially useful. The study of the formal aspects of these different art forms and media and shared issues of composition and construction gave me a solid foundation for my own work constructing things to model and represent these formal characteristics and relationships.

Walsh was the first one to single out a specific humanities course as particularly beneficial to the DH agenda. It makes sense: the course appears to have crossed many boundaries, focusing particularly on formal similarities. I’d hazard that this approach is at the heart of many of the more computational and formal areas of digital humanities (but perhaps less so for those areas more aligned with new media or critical theory).

I agree web development should be in the mix somewhere, along with something like Ryan Cordell’s “Text Technologies” that would cover various representations of text/documents and a look at their production, digital and otherwise, as well as tools (text analysis, topic modeling, visualization) for doing interesting things with those texts/documents.

Otherwise, Walsh’s courses aligned with those of Underwood and Drucker.

Matt Jockers

Matt Jockers’ expertise, like Underwood’s, tends toward computational literary history and criticism. His reply was short and to the point:

The thing I see missing here are courses in Linguistics and Machine Learning. Specifically courses in computational linguistics, corpus linguistics, and NLP. The latter are sometimes found in the CS depts. and sometimes in linguistics, it depends. Likewise, courses in Machine Learning are sometimes found in Statistics (as at Stanford) and sometimes in CS (as at UNL).

Jockers, like Underwood, mentioned that I was missing linguistics; in the Twitter conversation, Heather Froehlich pointed out the same deficiency. He and Underwood also pointed to machine learning, which is particularly useful for the sort of research they both do.

Wrapping Up

I was initially surprised by how homogeneous the answers were, given the much-touted diversity of the digital humanities. I had also asked a few people situated more closely in the new media, alt-ac, and library camps, who for various reasons couldn’t reply at the time, but even so, the similarity among those I did hear from was a bit surprising. Is it that DH is slowly canonizing around particular axes and methods, or are my selection criteria just woefully biased? I wouldn’t be too surprised if it were the latter.

In the end, it seems (at least according to life-paths of these particular digital humanists), the modern digital humanist should be a passionate generalist, well-versed in their particular field of humanistic inquiry, and decently-versed in a dizzying array of subjects and methods that are tied to computers in some way or another. The path is not necessarily one an undergraduate curriculum is well-suited for, but the self-motivated have many potential sources for education.

I was initially hoping to turn this short survey into a list of potential undergraduate curricula for different DH paths (much like my list of DH syllabi), but it seems we’re either not yet at that stage, or DH is particularly ill-suited for undergraduate-style curricula. I’m hoping some of you will leave comments on the areas of DH I’ve clearly missed, but from the view thus far, there seem to be more similarities than differences.

Breaking the Ph.D. model using pretty pictures

Earlier today, Heather Froehlich shared what’s at this point become a canonical illustration among Ph.D. students: “The Illustrated guide to a Ph.D.” The illustrator, Matt Might, describes the sum of human knowledge as a circle. As a child, you sit at the center of the circle, looking out in all directions.

Eventually, he describes, you get various layers of education, until by the end of your bachelor’s degree you’ve begun focusing on a specialty, focusing knowledge in one direction.

A master’s degree further deepens your focus, extending you toward an edge, and the process of pursuing a Ph.D., with all the requisite reading, brings you to a tiny portion of the boundary of human knowledge.


You push and push at the boundary until one day you finally poke through, pushing that tiny portion of the circle of knowledge just a wee bit further than it was. That act of pushing through is a Ph.D.


It’s an uplifting way of looking at the Ph.D. process, inspiring that dual feeling of insignificance and importance that staring at the Hubble Ultra-Deep Field tends to bring about. It also exemplifies, in my mind, one of the broken aspects of the modern Ph.D. But while we’re on the subject of the Hubble Ultra-Deep Field, let me digress momentarily about stars.

Quite a while before you or I were born, Great Thinkers with Big Beards (I hear even the Great Women had them back then) also suggested we sat at the center of a giant circle, looking outwards. The entire universe, or in those days, the cosmos (Greek: κόσμος, “order”), was a series of perfect layered spheres, with us in the middle, and the stars embedded in the very top. The stars were either gems fixed to the last sphere, or they were little holes poked through it that let the light from heaven shine through.


As I see it, if we connect the celestial spheres theory to “The Illustrated Guide to a Ph.D.”, we’d arrive at the inescapable conclusion that every star in the sky is another dissertation, another hole poked letting the light of heaven shine through. And yeah, it takes a very prescriptive view of knowledge and the universe that either you or I could argue with, but for this post we can let it slide because it’s beautiful, isn’t it? If you’re a Ph.D. student, don’t you want to be able to do this?

The problem is I don’t actually want to do this, and I imagine a lot of other people don’t want to do this, because there are already so many goddamn stars. Stars are nice. They’re pretty, how they twinkle up there in space, trillions of miles away from one another. That’s how being a Ph.D. student feels sometimes, too: there’s your research, my research, and a gap between us that can reach from Alpha Centauri and back again. Really, just astronomically far away.


It shouldn’t have to be this way. Right now a Ph.D. is about finding or doing something that’s new, in a really deep and narrow way. It’s about pricking the fabric of the spheres to make a new star. In the end, you’ll know more about less than anyone else in the world. But there’s something deeply unsettling about students being trained to ignore the forest for the trees. In an increasingly connected world, the universe of knowledge about it seems to be ever-fracturing. Very few are being trained to stand back a bit and try to find patterns in the stars. To draw constellations.

I should know. I’ve been trying to write a dissertation on something huge, and the advice I’ve gotten from almost every professor I’ve encountered is that I’ve got to scale it down. Focus more. I can’t come up with something new about everything, so I’ve got to do it about one thing, and do it well. And that’s good advice, I know! If a lot of people weren’t doing that a lot of the time, we’d all just be running around in circles and not doing cool things like going to the moon or watching animated pictures of cats on the internet.

But we also need to stand back and take stock, to connect things, and right now there are institutional barriers in place making that really difficult. My advisor, who stands back and connects things for a living (like the map of science below), gives me the same prudent advice as everyone else: focus more. It’s practical advice. For all that universities celebrate interdisciplinarity, in the end you still need to get hired by a department, and if you don’t fit neatly into their disciplinary niche, you’re not likely to make it.

My request is simple. If you’re responsible for hiring researchers, or promoting them, or in charge of a department or (!) a university, make it easier to be interdisciplinary. Continue hiring people who make new stars, but also welcome the sort of people who want to connect them. There certainly are a lot of stars out there, and it’s getting harder and harder to see what they have in common, and to connect them to what we do every day. New things are great, but connecting old things in new ways is also great. Sometimes we need to think wider, not deeper.


Improving the Journal of Digital Humanities

Twitter and the digital humanities blogosphere have been abuzz recently over an ill-fated special issue of the Journal of Digital Humanities (JDH) on Postcolonial Digital Humanities. I won’t get too much into what happened and why, not because I don’t think it’s important, but because I respect both parties too much and feel I am too close to the story to provide an unbiased opinion. Summarizing, the guest editors felt they were treated poorly, in part because of the nature of their content, and in part because of the way the JDH handles its publications.

I wrote earlier on twitter that I no longer want to be involved in the conversation, by which I meant, I no longer want to be involved in the conversation about what happened and why. I do want to be involved in a discussion on how to get the JDH to move beyond the issues of bias, poor communication, poor planning, and microaggression, whether or not any or all of those existed in this most recent issue. As James O’Sullivan wrote in a comment, “as long as there is doubt, this will be an unfortunate consequence.”

Journal of Digital Humanities

The JDH is an interesting publication, operating in part under the catch-the-good model of seeing what’s already out there and getting discussed, and aggregating it all into a quarterly journal. In some cases, that means re-purposing pre-existing videos and blog posts and social media conversations into journal “articles.” In others, it means soliciting original reviews or works that fit with the theme of a current important issue in DH. Some articles are reviewed barely at all – especially the videos – and some are heavily reviewed. The structure of the journal itself, over its five issues thus far, has changed drastically to fit the topic and the experimental whims of editors and guest editors.

The issue that Elijah Meeks and I guest edited changed in format at least three times in the month or so we had to solidify the issue. It’s fast-paced, not always organized, and generally churns out good scholarship that seems to be cited heavily on blogs and in DH syllabi, but not yet so much in traditional press articles or books. The flexibility, I think, is part of its charm and experimental nature, but as this recent set of problems shows, it is not without its major downsides. The editors, guest editors, and invited authors are rarely certain of what the end product will look like, and if there is the slightest miscommunication, this uncertainty can lead to disaster. The variable nature of the editing process also opens the door for bias of various sorts, and because there is not a clear plan from the beginning, that bias (and the fear of bias) is hard to guard against. These are issues that need to be solved.

Roopika Risam, Matt Burton, and I, among others, have all weighed in on the best way to move forward, and I’m drawing on these previous comments for this plan. It’s not without its holes and problems, and I am hoping there will be comments to improve the proposed process, but hopefully something like what I’m about to propose can let the JDH retain its flexibility while preventing further controversies of this particular variety.

  • Create a definitive set of guidelines and mission statement that is distributed to guest editors and authors before the process of publication begins. These guidelines do not need to set the publication process in stone, but can elucidate the roles of each individual and make clear the experimental nature of the JDH. This document cannot be deviated from within an issue publication cycle, but can be amended yearly. Perhaps, as with the open intent of the journal, part of this process can be crowdsourced from the previous year’s editors-at-large of DHNow.
  • Have a week at the beginning of each issue planning phase where authors (if they’ve been chosen yet), guest editors, and editors discuss what particular format the forthcoming issue will take, how it will be reviewed, and so forth. This is formalized into a binding document and will not be changed. The editorial staff has final say, but if the guest editors or authors do not like the final document, they have ample opportunity to leave.
  • Change the publication rate from quarterly to thrice-yearly. DH changes quickly, and the journal shouldn’t be much slower than that, but quarterly seems a bit too tight for this process to work smoothly – especially with the proposed week-long committee session to figure out how the issue will be run.
  • Make the process of picking special issue topics more open. I know the special issue I worked on came about by Elijah asking the JDH editors if they’d be interested in a topic modeling issue, and after (I imagine) some internal discussion, they agreed. The dhpoco special issue may have had a similar history. Even a public statement of “these people came to us, and this is why we thought the topic was relevant” would likely go a long way in fostering trust in the community.
  • Make the process of picking articles and authors more open; this might be the job of special issue guest editors, as Elijah and I were the ones who picked most of the content. Everyone has their part to play. What’s clear is there is a lot of confusion right now about how it works; some on Twitter recently have pointed out that, until recently, they’d assumed all articles came from the DHNow filter. Making content choice more clear in an introductory editorial would be useful.

Obviously this is not a cure for all ills, but hopefully it’s good ground to start on the path forward. If the JDH takes this opportunity to reform some of their policies, my hope is that it will be seen as an olive branch to the community, ensuring to the best of their ability that there will be no question of whether bias is taking place, implicit or otherwise. Further suggestions in the comments are welcome.

Addendum: In private communication with Matt Burton, he and I realized that the ‘special issue’ and ‘guest editor’ role is not actually one that seems to be aligned with the initial intent of the JDH, which seemed instead to be about reflecting the DH discourse from the previous quarter. Perhaps a movement away from special issues, or having a separate associated entity for special issues with its own set of rules, would be another potential path forward.

On MOOCs

Nobody has said so to my face, but sometimes I’m scared that some of my readers secretly think I’m single-handedly assisting in the downfall of academia as we know it. You see, I was the associate instructor of an information visualization MOOC this past semester, and next Spring I’ll be putting together my own MOOC on information visualization for the digital humanities. It’s an odd position to be in, when much of the anti-DH rhetoric is against MOOCs while so few DHers actually support them (and it seems most vehemently denounce them). I’ve occasionally wondered if I’m the mostly-fictional strawman the anti-DH crowd is actually railing against. I don’t think I am, and I think it’s well past time I posted my rationale for why.

This post itself was prompted by two others; one by Adam Crymble asking if The Programming Historian is a MOOC, and the other by Jonathan Rees on why even if you say your MOOC is going to be different, it probably won’t be. That last post was referenced by Andrew Goldstone.

With that in mind, let me preface by saying I’m a well-meaning MOOCer, and I think that if you match good intentions with reasonable actions, MOOCs can actually be used for good. How you build, deploy, and use a MOOC can go a long way, and it seems a lot of the fear behind them assumes there is one and only one inevitable path for them to go down which would ultimately result in the loss of academic jobs and a decrease in education standards.

Let’s begin with the oft-quoted Cathy Davidson, “If we can be replaced by a computer screen, we should be.” I don’t believe Davidson is advocating for what Rees accuses her of in the above blog post. One prevailing argument against MOOCs is that the professors are distant, the interactivity is minimal-to-non-existent, and the overall student experience is one of weak detachment. I wonder, though, how many thousand-large undergraduate lectures offer better experiences; many do, I’m sure, but many also do not. In those cases, it seems a more engaging lecturer, at least, might be warranted. I doubt many-if-any MOOC teachers believe there are any other situations which could warrant the replacement of a university course with a MOOC beyond those where the student experience is already so abysmal that anything might help.

The question then arises, in those few situations where MOOCs might be better for enrolled students, what havoc might they wreak on already worsening faculty job opportunities? The toll on teaching in the face of automation might match the toll of the skilled craftsmen in the face of the industrial and eventually mechanical revolution. If you feel angry at replacing laborers with machines in situations where the latter are just (or nearly) as good as the former, and at a fraction of the cost, then you’ll likely also believe MOOCs replacing giant undergrad lectures (which, let’s face it, are often closer to unskilled than to skilled labor in this metaphor) is also unethical.

Rees echoes this fear of automation on the student’s end, suggesting “forcing students into MOOCs as a last resort is like automating your wedding or the birth of your first child.  You’re taking something that ought to depend upon the glorious unpredictability of human interaction and turning it into mass-produced, impersonal, disposable schlock.” The fear is echoed as well by Adam Crymble in his Programming Historian piece, when he says “what sets a MOOC apart from a classroom-based course is a belief that the tutor-tutee relationship can be depersonalized and made redundant. MOOCs replace this relationship with a series of steps. If you learn the steps in the right order and engage actively with the material you learn what you need to know and who needs teacher?”

MOOCs can happen to you! via cogdogblog.

The problem is that this entire dialogue rests on an assumption, shared by Crymble and others, that those who support MOOCs do so because, deep down, they believe “If you learn the steps in the right order and engage actively with the material you learn what you need to know and who needs teacher?” It is this set of assumptions that I would like to push against; the idea that all MOOCs must inevitably lead to automated teaching, regardless of the intentions, and that they exist as classroom replacements. I argue that, if designed and utilized correctly, MOOCs can lead to classroom augmentations and in fact can be designed in a way that they can no more be used to replace classrooms than massively-distributed textbooks can.

When Katy Börner and our team designed and built the Information Visualization MOOC, we did so using Google’s open source Course Builder, with the intention of opening knowledge to whomever wanted to learn it regardless of whether they could afford to enroll in one of the few universities around that offers this sort of course. Each of the lectures was a recording of a usual lecture for that class, cut up into more bite-sized chunks, with added tutorials on how to use software. We ran the MOOC concurrently with our graduate course of the same focus, and we used the MOOC as a sort of video textbook that (I hope) was more concise and engaging than most information visualization textbooks out there, and (importantly) completely free to the students. Students watched pre-recorded lectures at home and then we discussed the lessons and did hands-on training during class time, not dissimilar from the style of flipped teaching.

For those not enrolled in the physical course, we opened up the lectures to anyone who wanted to join in, and created a series of tests and assignments which required students to work together in small teams of 4-5 on real-world client data if they wanted to get credit for the course. Many just wanted to learn, and didn’t care about credit. Some still took the exams because they wanted to know how well they’d learned the material, even if they weren’t taking the course credit. Some just did the client projects, because they thought these would give them good real-world experience, but didn’t take the tests or go for the credit. The “credit,” by the way, was a badge from Mozilla Open Badges, and we designed the badges to be particularly difficult to achieve because we wanted them to mean something. We also hand-graded the client projects.

The IVMOOC Badge.

The thing is, at no time did we ever equate the MOOC with a graduate course, or suggest that it could be taken as a replacement for credit in a real course. And by building the course in Google’s Course Builder and hosting it ourselves, we retain complete control over it; universities can’t take it and rework it as they see fit to offer it for credit. I suppose it’s possible that some university out there might allow students to waive a methodology credit if they earn our badge, but I fail to see how that would be any different from universities offering course waivers for students who read a textbook on their own and take a standardized test afterward; it’s done, but not often.

In short, we offer the MOOC as a free and open textbook, not as a classroom replacement. Within the classroom, we use it as a tool for augmenting instruction. For those who choose to do the assignments, and perform well on them with their student teams, we acknowledge their good work with a badge rather than a university credit. The fear that MOOCs will necessarily automate teachers away is no better founded than the fear that textbooks and standardized tests would; further, if administrators choose to use MOOCs for that purpose, they are no more justified than they would be in replacing teachers with textbooks. That they still might is of course a frightening prospect, and something we need to guard against, but it should no more be blamed on MOOC instructors than it would be blamed on textbook authors in the alternative scenario. In the end, we don’t seem any different from what Adam Crymble described The Programming Historian to be (recall: definitely not a MOOC).

We’re making it easier for people to teach themselves interesting and useful things. Whether administrators use that for good or ill is a separate issue. Whether more open and free training trumps our need to employ all the wandering academics out there is a separate issue – as is whether or not that dichotomy is even a valid one. As it stands now, though, I’m proud of the work we’ve done on the IVMOOC, I’m proud of the students of the physical course, and I’m especially proud of all the amazing students around the world who came out of the MOOC producing beautiful visualization projects, and are better prepared for life in a data-rich world.

Call for Computational Folkloristics Papers

What’s this? Two CFPs at the Irregular in quick succession? That’s right: first Marten Düring’s fabulous Historical Network Research cfp came out, and it has been followed closely by a call for papers from the great and powerful Tim Tangherlini. Those of you who don’t know him should. Tangherlini organized the wildly successful Networks and Network Analysis for the Humanities NEH Summer Workshop and followup conference, is co-author of a wonderful piece on computational folkloristics, and is a great guy to boot. He also dances comfortably on the bleeding edge of computational humanities research. All of these should be reason enough to either submit to or wait in eager anticipation of Tim’s forthcoming special issue of the Journal of American Folklore, the CFP for which is below.

I should point out that the Journal of American Folklore is not Open Access. If this is something you care about (and you should), but you’re interested in submitting an article, consider emailing the editor of JAF and asking for the journal to join the admirable ranks of Open Folklore, a Bloomington-based initiative that hopes to increase access to folklore material of all varieties. The initiative is also part of the American Folklore Society, which is responsible for the above-mentioned Journal of American Folklore.

One of Tangherlini’s many neat analytic analyses of folklore. via ACM.

——————————————————————————————————–

Over the course of the past decade, a revolution has occurred in the materials available for the study of folklore. The scope of digital archives of traditional expressive forms has exploded, and the amount of machine-readable material available for consideration has increased by many orders of magnitude. Many national archives have made significant efforts to make their archival resources machine-readable, while other smaller initiatives have focused on the digitization of archival resources related to smaller regions, a single collector, or a single genre. Simultaneously, the explosive growth in social media, web logs (blogs), and other Internet resources has made previously hard-to-access forms of traditional expressive culture accessible at a scale so large that it is hard to fathom. These developments, coupled with the development of algorithmic approaches to the analysis of large, unstructured data and new methods for the visualization of the relationships discovered by these algorithmic approaches—from mapping to 3-D embedding, from time-lines to navigable visualizations—offer folklorists new opportunities for the analysis of traditional expressive forms. We label approaches to the study of folklore that leverage the power of these algorithmic approaches “Computational Folkloristics” (Abello, Broadwell, Tangherlini 2012).

The Journal of American Folklore invites papers for consideration for inclusion in a special issue of the journal edited by Timothy Tangherlini that focuses on “Computational Folkloristics.” The goal of the special issue is to reveal how computational methods can augment the study of folklore, and propose methods that can extend the traditional reach of the discipline. To avoid confusion, we term those approaches “computational” that make use of algorithmic methods to assist in the interpretation of relationships or structures in the underlying data. Consequently, “Computational Folkloristics” is distinct from Digital Folklore in the application of computation to a digital representation of a corpus.

We are particularly interested in papers that focus on: the automatic discovery of narrative structure; challenges in Natural Language Processing (NLP) related to unlabeled, multilingual data including named entity detection and resolution; topic modeling and other methods that explore latent semantic aspects of a folklore corpus; the alignment of folklore data with external historical datasets such as census records; GIS applications and methods; network analysis methods for the study of, among other things, propagation, community detection and influence; rapid classification of unlabeled folklore data; search and discovery on and across folklore corpora; modeling of folklore processes; automatic labeling of performance phenomena in visual data; automatic classification of audio performances. Other novel approaches to the study of folklore that make use of algorithmic approaches will also be considered.

A significant challenge of this special issue is to address these issues in a manner that is directly relevant to the community of folklorists (as opposed to computer scientists). Articles should be written in such a way that the argument and methods are accessible and understandable for an audience expert in folklore but not expert in computer science or applied mathematics. To that end, we encourage team submissions that bridge the gap between these disciplines. If you are in doubt about whether your approach or your target domain is appropriate for consideration in this special issue, please email the issue editor, Timothy Tangherlini at tango@humnet.ucla.edu, using the subject line “Computational Folkloristics—query”. Deadline for all queries is April 1, 2013.

All papers must conform to the Journal of American Folklore’s style sheet for authors. The guidelines for article submission are as follows:

Essay manuscripts should be no more than 10,000 words in length, including abstract, notes, and bibliography. The article must begin with a 50- to 75-word abstract that summarizes the essential points and findings of the article. Whenever possible, authors should submit two copies of their manuscripts by email attachment to the editor of the special issue at: tango@humnet.ucla.edu.

The first copy should be sent in Microsoft Word or Rich Text Format (rtf) and should include the author’s name. Figures should not be included in this document, but “call outs” should be used to designate where figures should be placed (e.g., “<insert Figure 1 here>”). A list at the end of the article (placed after the bibliography) should detail the figures to be included, along with their captions.

The second copy of the manuscript should be sent in Portable Document Format (pdf). This version should not include the author’s name or any references within the text that would identify the author to the manuscript reviewers. Passages that would identify the author can be marked in the following manner to indicate excised words: (****). Figures should be embedded in this version just as they would ideally be placed in the published text.

Possible supplementary materials (e.g., additional photographs, sound files, video footage, etc.) that might accompany the article in its online version should be described in a cover letter addressed to the editor.

An advisory board for the special issue consisting of folklorists and computer scientists will initially consider all papers. Once accepted for the special issue, all articles will be subject to the standard refereeing procedure for the journal. Deadline for submissions for consideration is June 15, 2013. Initial decisions will be made by August 1, 2013. Final decisions will be made by October 1, 2013. We expect the issue to appear in 2014.

CfP: “Historical Network Research” at Sunbelt, May 21-26, Germany

Marten Düring, an altogether wonderful researcher who is responsible for this brilliant bibliography of networks in history, has issued a call for papers to participate in this year’s Sunbelt Conference, one of the premier social network analysis conferences in the world.

Historical network. via Marten Düring.

————————-

Call for papers “Historical Network Research” at the XXXIII. Sunbelt Conference, May 21-26 – University of Hamburg, Germany

The concepts and methods of social network analysis are no longer used in historical research merely as a metaphor, but are increasingly applied in practice. In recent decades, several studies in the social sciences have shown that formal methods derived from social network analysis can be fruitfully applied to selected bodies of historical data as well. These studies, however, tend to be strongly influenced by concerns, standards of data processing, and, above all, epistemological paradigms that have their roots in the social sciences. Among historians, the term network was long used in a metaphorical sense alone. It is only recently that this has changed.
We invite papers which successfully integrate social network analysis methods and historical research methods and reflect on the added value of their methodologies. Topics could cover (but are not limited to) network analyses of correspondences, social movements, kinship or economic systems in any historical period.
Submissions close on December 31 at 11:59:59 EST. Please limit your abstract to 250 words. Submit your abstract at http://www.abstractserver.com/sunbelt2013/absmgm/ and select “Historical Network Research” as the session title in the drop-down box on the submission site. Please add a note in the “additional notes” box on the abstract submission form naming Marten During and Martin Stark as the session organizers.
For further information on the venue and conference registration see: http://hamburg-sunbelt2013.org/, for any questions regarding the panel, please get in touch with the session organizers.

Session organizers:
Marten During, Radboud University Nijmegen, martenduering@gmail.com
Martin Stark, University of Hamburg, martin.stark@wiso.uni-hamburg.de

Check https://sites.google.com/site/historicalnetworkresearch/ for a detailed bibliography, conferences, screencasts and other resources.

The Internet Listens

The public science blogosphere has recently been buzzing about an edited online book-review site called Download The Universe. The twist is that the editors review only online-only science books, and their definition of “book” is broadly construed:

[W]e define ebooks broadly. They may be self-published pdf manuscripts. They may be Kindle Singles about science. They can even be apps that have games embedded in them. We hope that we will eventually review new kinds of ebooks that we can’t even imagine yet. And we hope that you will find Download the Universe a useful doorway into that future.

The site aims to fill the publicity gap that prevents interesting and good science ebooks from finding their way into the hands of receptive readers. Traditional reviews and blogs tend not to cover these new media, the editors say. In the spirit of the fast-paced nature of the internet, the entire project was conceived last month at Science Online (#scio12) and already features 8 posts and an editing staff of 16.

Download The Universe

My initial excitement about this project was tempered somewhat when I found that its news feed offered exceptionally tiny snippets of the ebook reviews. That’s no good! I’m subscribed to 361 feeds in Google Reader, with nearly 500 posts a day, and if I don’t have a few paragraphs to see whether an article is interesting, it is unlikely I’d ever click through to the actual page to investigate further. (By the way, if you’re interested in the best of what I read, you can subscribe to my favorite feed items here, where I read through 361 blogs so you don’t have to.) Unfortunately, snippet news feeds are becoming increasingly common, as blogs and sites attempt to entice you to their pages, where they can collect usage statistics and ad views in ways they cannot through a simple RSS feed.

Apparently, when you talk, the internet listens. My disappointment was such that I sent an email to the coordinating editor, science writer Carl Zimmer, explaining my problem. He immediately replied that he would look into the FeedBurner settings, and in short order the RSS became a full, no-snippet news feed. Woah! A big (and public) thank you to Carl Zimmer, and the entire crew at Download The Universe, for putting together a wonderful and important new site and for being so receptive to their readers. Bravo!

Citing ODH’s Summer Institutes

While I generally like to reserve posts for a wider audience, this is the second time I’ve come across this particular issue, and I’d like help from the masses. Every summer, the NEH’s Office of Digital Humanities funds a series of Institutes for Advanced Topics in the Digital Humanities. I’ve had the great fortune of attending one on computer simulations in the humanities, and teaching at one on network analysis for the humanities. I often find myself wishing I could cite one, as a whole, because of all the valuable experience and knowledge I received there. Unfortunately, I have found no standard format for citing whole conferences, workshops, or summer institutes.

Our Great and Glorious Funders

I asked Brett Bobley, the ODH director, if he had any suggestions, but unfortunately he was at as much a loss as I. His reply: “Good question! I’d cite the URL (ex: http://is.gd/QnFs11 ). But we don’t have a format. Want to choose one & we’ll anoint it?” I’m not terribly familiar with citation styles, but I figured I’d try one out and see if The DH Hive Mind had any better ideas. If so, please post in the comments. Ideally, the citation should include the URL of the grant, the PI(s), the date, the location, and the grant number (the last is very important for tracking the impact of these summer institutes). While the PI is important, the cited ideas come not from the PI alone but from the entire institute, so I have chosen to place the institute name first.

“Network Analysis for the Humanities.” August 15-27, 2010. ODH Institute for Advanced Topics in the Digital Humanities: HT-50016-09. Tim Tangherlini, PI. https://securegrants.neh.gov/PublicQuery/main.aspx?f=1&gn=HT-50016-09.

“Computer Simulations in the Humanities.” June 1-17, 2011. ODH Institute for Advanced Topics in the Digital Humanities: HT-50030-10. Marvin J. Croy, PI. https://securegrants.neh.gov/PublicQuery/main.aspx?f=1&gn=HT-50030-10
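For those who manage references in BibTeX, the same information might be captured in an `@misc` entry along these lines. This is only a sketch of my proposed format, not an anointed standard; the entry key and the choice of fields are my own illustration:

```bibtex
@misc{odh-network-analysis-2010,
  title        = {Network Analysis for the Humanities},
  howpublished = {ODH Institute for Advanced Topics in the Digital
                  Humanities: HT-50016-09},
  note         = {Tim Tangherlini, PI},
  year         = {2010},
  month        = aug,
  url          = {https://securegrants.neh.gov/PublicQuery/main.aspx?f=1&gn=HT-50016-09}
}
```

The grant number rides in the `howpublished` field so it survives in the rendered citation, and the PI is relegated to `note`, mirroring the institute-first ordering argued for above.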

What thoughts?