Argument Clinic

Zoe LeBlanc asked how basic statistics lead to a meaningful historical argument. A good discussion followed, worth reading, but since I couldn’t fit my response into tweets, I hoped to add a bit to the thread here on the irregular. I’m addressing only one tiny corner of her question, in a way that is peculiar to my own still-forming approach to computational history; I hope it will be of some use to those starting out.

In brief, I argue that one good approach to computational history cycles between data summaries and focused hypothesis exploration, driven by historiographic knowledge, in service to finding and supporting historically interesting agendas. There’s a lot of good computational history that doesn’t do this, and a lot of bad computational history that does, but this may be a helpful rubric to follow.

In the spirit of Monty Python, the video below has absolutely nothing to do with the discussion at hand.

Zoe’s question gets at the heart of one of the two most prominent failures of computational history in 2017 1: the inability to go beyond descriptive statistics into historical argument. 2 I’ve written before on one of the many reasons for this inability, but that’s not the subject of this post. This post covers some good practices in getting from statistics to arguments.

Describing the Past

Historians, for the most part, aren’t experimentalists. 3 Our goals vary, but they often include telling stories about the past that haven’t been told, by employing newly-discovered evidence, connecting events that seemed unrelated, or revisiting an old narrative with a fresh perspective.

Facts alone usually don’t cut it. We don’t care what Jane ate for breakfast without a so what. Maybe her breakfast choices say something interesting about her socioeconomic status, or about food culture, or about how her eating habits informed the way she lived. Alongside a fact, we want why or how it came to be, what it means, or its role in some larger or future trend. A sufficiently big and surprising fact may be worthy of note on its own (“Jane ate orphans for breakfast” or “The government did indeed collude with a foreign power”), but such surprising revelations are rare, not the only purpose for historians, and still beg for context.

Computational history has gotten away with a lot of context-free presentations of fact. 4 That’s great! It’s a sign there’s a lot we didn’t know that contemporary methods & data make easily visible. 5 Here’s an example of one of mine, showing that, despite evidence to the contrary, there is a thriving community at the intersection of history and philosophy of science:

My citation analysis showing a bridge between history & philosophy of science.

But, though we’re not running out of low-hanging fruit, the novelty of mere description is wearing thin. Knowing that a community exists between history & philosophy of science is not particularly interesting; knowing why it exists, what it changes, or whether it is less tenuous than any other disciplinary borderland are more interesting and historiographically recognizable questions.

Context is Key

So how to get from description to historical argument? Though there’s no right path, and the route depends on the type of claim, this post may offer some guidance. Before we get too far, though, a note:

Description has little meaning without context and comparison. The data may show that more people are eating apples for breakfast, but there’s a lot to unpack there before it can be meaningful, let alone relevant.

Line chart of # of people who eat apples over time.

It may be, for example, that the general population is growing just as quickly as the number of people who eat apples. If that’s the case, does it matter that apple-eaters themselves don’t seem to be making up any larger percent of the population?

Line chart of # of people who eat apples over time (left axis) compared to general population (right axis).

The answer for a historian is: of course it matters. If we were talking about casualties of war, or the number of cities in a country, rather than apples, a twofold increase in absolute value (rather than percentage of population) makes a huge difference. It’s more lives affected; it’s more infrastructure and resources for a growing nation.

But the nature of that difference changes when we know our subject of study matches population dynamics. If we’re looking at voting patterns across cities, and we notice population density correlates with party affiliation, we can use that as a launching point for so what. Perhaps sparser cities rely on fewer social services to run smoothly, leading the population to vote more conservative; perhaps past events pushed conservative families towards the outskirts; perhaps.

Without a ground against which to contextualize our results (a base map, like general population), the fact of which cities voted in which direction gives us little historical meat to chew on.

On the other hand, some surprising facts, when contextualized, leave us less surprised. A two-fold increase in apple eating across a decade is pretty surprising, until you realize it happened alongside a similar increase in population. The fact is suddenly less worthy of report by itself, though it may have implications for, say, the growth of the apple industry.

But Zoe asked about statistics, not counting, in finding meaning. I don’t want to divert this post into teaching stats, nor do I want to assume statistical knowledge, so I’ll opt for an incredibly simple metric: a ratio.

The illustration above shows an increase in both population and apple-eating, and eyeball estimates suggest they are growing at roughly the same pace. If we divide the total population by the number of people eating apples, however, our story is complicated.

Line chart of # of people who eat apples over time (left axis) compared to general population (right axis). A thick blue line in the middle (left axis) shows the ratio between the two.
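
In code, that contextualization is a one-liner. Below is a minimal sketch in Python, using invented numbers rather than the data behind the charts; the column names and figures are purely illustrative.

```python
# Minimal sketch: contextualizing a raw count against population with a ratio.
# All numbers and column names here are invented for illustration.
import pandas as pd

years = range(1800, 1811)
data = pd.DataFrame({
    "population":   [1000, 1040, 1080, 1125, 1170, 1220, 1300, 1390, 1490, 1600, 1720],
    "apple_eaters": [200, 208, 216, 225, 234, 244, 250, 257, 264, 271, 278],
}, index=years)

# A rising ratio means apple-eating is failing to keep pace with population growth.
data["ratio"] = data["population"] / data["apple_eaters"]
print(data)
```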

Though both population and apple-eating increase, in 1806 the population begins rising much more rapidly than the number of apple-eaters. 6 It is in this statistically-derived difference that the historian may find something worth exploring and explaining further.

There are many ways to compare and contextualize data, of which this is one. They aren’t worth enumerating here, but the importance of contextualization is relevant to what comes next.

Question- and Data-Driven History

Computational historians like to talk about question-driven analysis. Computational history is best, we say, when it is led by a specific question or angle. The alternative is dumping a bunch of data into a statistics engine, describing it, finding something weird, and saying “oh, this looks interesting.”

When push comes to shove, most would agree the above dichotomy is false. Historical questions don’t pop out of thin air, but from a continuously shifting relationship with the past. We read primary and secondary sources, do some data entry, do some analysis, do some more reading, and through it all build up a knowledge-base and a set of expectations about the past. We also by this point have a set of claims we don’t quite agree with, or some secondary sources with stories that feel wrong or incomplete.

This is where the computational history practice begins: with a firm grasp of the history and historiography of a period, and a set of assumptions, questions, and mild disagreements.

From here, if you’re reading this blog post, you’re likely in one of two camps:

  1. You have a big dataset and don’t know what to do with it, or
  2. You have a historiographic agenda (a point to prove, a question to answer, etc.) that you don’t know how to make computationally tractable.

We’ll begin with #1.

1. I have data. Now what?

Congratulations, you have data!


This is probably the thornier of the two positions, and the one more prone to end in mere description. You want to know how to turn your data into interesting history, but you may end up doing little more than enumerating the blades of grass on a field. To avoid that, you must embark on a process sometimes called scalable reading, or a special case of the hermeneutic circle.

You start, of course, with mere description. How many records are there? What are the values in each? Are there changes over time or place? Who is most central? Before you start quantifying the data, write down the answers you expect to these questions, with a bit of a causal explanation for each.

Now, barrage your dataset with visualizations and statistical tests to find out exactly what makes it up. See how the results align with the hypotheses you noted down. If you created the data yourself, one archival visit at a time, you won’t find a lot that surprises you. That’s alright. Be sure to take time to consider what’s missing from the dataset, due to archival lacunae, bias, etc.
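
As a rough illustration, here is what that first descriptive barrage might look like for a tabular dataset, assuming it loads cleanly into pandas; the file and column names are hypothetical.

```python
# A first descriptive pass over a tabular dataset.
# The file name and column names ("year", "place", "person") are hypothetical.
import pandas as pd

records = pd.read_csv("my_archive_data.csv")

print(len(records))                                 # how many records are there?
print(records.describe(include="all"))              # what values are in each field?
print(records["year"].value_counts().sort_index())  # changes over time?
print(records["place"].value_counts().head(10))     # changes over place?
print(records["person"].value_counts().head(10))    # who appears most often?
```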

If any results surprise you, dig into the data to try to understand why. If none do, think about claims from secondary sources–do any contradict the data? Align with it?

This is also a good point to bring in contextualization. If you’re looking at the number of people doing something over time, try to compare your dataset to population dynamics. If you’re looking at word usage, find a way to compare your data to base frequencies of that word in similar collections. If you’re looking at social networks, compare them to random networks of similar size to see whether their average path length or degree distribution is surprising. Every unexpected result is an opportunity for exploration.
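
For the network case, a sketch of that baseline comparison might look like the following, using networkx; the edge-list file is a hypothetical stand-in for your own network, and this is only one of many reasonable null models.

```python
# Compare an observed network's average path length to random networks
# of the same size. The edge-list file name is hypothetical.
import networkx as nx

observed = nx.read_edgelist("correspondence_edgelist.txt")
n, m = observed.number_of_nodes(), observed.number_of_edges()

def largest_component_apl(g):
    """Average shortest path length of the largest connected component."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant)

observed_apl = largest_component_apl(observed)

# The same statistic over a handful of random graphs with the same n and m.
random_apls = [
    largest_component_apl(nx.gnm_random_graph(n, m, seed=seed))
    for seed in range(20)
]

print("observed:", observed_apl)
print("random baseline:", sum(random_apls) / len(random_apls))
```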

Internal comparisons may also yield interesting points to pursue further, especially if you think your data are biased. Given a limited dataset of actors, their genders, their roles, and play titles, for example, you may not be able to make broad claims about which plays are more popular, but you could see how different roles are distributed across genders within the group.
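
A sketch of that internal comparison, again with hypothetical file and column names: a simple cross-tabulation, normalized within each gender so the smaller group isn’t swamped by the larger one.

```python
# How are role types distributed across genders within a limited dataset of actors?
# File and column names are hypothetical.
import pandas as pd

actors = pd.read_csv("actors.csv")  # columns: name, gender, role_type, play_title

# Share of each role type within each gender.
print(pd.crosstab(actors["role_type"], actors["gender"], normalize="columns"))
```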

Internal comparisons could also be temporal. Given a dataset of occupations over time within a particular city, if you compare those numbers to population changes over time, you could find the moments where population and occupation dynamics part ways, and focus on those instances. Why, suddenly, are there more grocers?

The above boils down into two possible points of further research: deviations from expectation, or deviations from internal consistency.

Deviations from expectation–your own or that of some notable secondary source–can be particularly question-provoking. “Why didn’t this meet expectations” quickly becomes “what is wrong or incomplete about this common historical narrative?” From here, it’s useful to dig down into the points of data that exemplify such deviations, and see if you can figure out why and how they break from expectations.

Deviations from internal consistency–that is, when comparisons within the data wind up showing different trends–lead to positive rather than negative questions. Instead of “why is this theory wrong?”, you may ask, “why are these groups different?” or “why does this trend cease to keep pace with population during these decades?” Here you are asking specific questions that require new or shifted theories, whereas with deviations from expectations, you begin by seeing where existing narratives fail.

It’s worth reiterating that, in both scenarios, questions are drawn from deviations from some underlying theory.

In deviations from expectation, the underlying theory is what you bring to your data; you assume the data ought to look one way, but it doesn’t. You are coming with an internal, if not explicit, quantitative model of how the data ought to look.

In deviations from internal consistency, the data’s descriptive statistics provide the underlying theory against which there may be deviations. Apple-eaters deviating in number from population growth is only interesting if, at most points, apple-eaters grow in step with the population. That is, you assume general statistics should be the same between groups or over time, and if they are not, it is worthy of explanation.

This is an oversimplification, but a useful one. Undoubtedly, combinations of the two will arise: maybe you expect the differences between men and women in the roles they play will be large, but it turns out they are small. This provides a deviation of both kinds, but no less legitimate for it. In this case, your recourse may be looking for other theatrical datasets to see if the gender dynamics play out the same across them, or if your data are somehow special and worthy of explanation outside the context of larger gender dynamics.

Which brings us, inexorably, to the cyclic process of computational history. Scalable reading. The hermeneutic circle. Whatever.

Point is, you’re at the point where some deviation or alignment seems worth explanation or exploration. You could stop here. You could present this trend, give a convincing causal just-so story of why it exists, and leave it at that. You will probably get published, since you’ve already gone farther than mere description, the trap of so much computational history.

But you shouldn’t stop here. You should take this opportunity to strengthen your story. Perhaps this is the point where you put your “traditional” historian’s cap back on, and go dust-diving for archival evidence to support your claims. I wouldn’t think less of you for it, but if you stop there, you’d only be reaping half the advantages of computational history.

In the example above, looking for other theatrical datasets to contextualize the gender results in your own hinted at the second half of the computational history research cycle: creating computationally tractable questions. Recall that this section described the first half: making sense of data. Although I presented the two as separate, they productively feed on one another.

Once you’ve gone through your data to find how it aligns with your or others’ preconceived notions of the past, or how by its own internal deviations it presents interesting dilemmas, you have found yourself in the second half of the cycle. You have questions or theories you want to ask of data, but you do not yet have the data or the statistics to explore them.

This seems counter-intuitive. Why not just use the data or statistics already gathered, sometimes painstakingly over several years? Because if you use the same data & stats to both generate and answer questions, your evidence is circular. Specifically, you risk making a scientistic claim of what could easily be a spurious trend. It may simply be that, by random chance, the breakfast record-keeper lost a bunch of records from 1806-1810, thus causing the decline seen in the population ratio.

To convincingly make arguments from a historical data description, you must back it up using triangulation–approaching the problem from many angles. That triangulation may be computational, archival, archaeological, or however else you’re used to historying, but we’ll focus here on computational.

2. Computationally Tractable Questions

So you’ve got a historiographic agenda, and now you want to make it computationally tractable. Good luck! This is the hard part.


“Sparse areas relied less on social services.” “The infrastructure of science became less dependent on specific individuals over the course of the 17th century.” “T-Rex was a remarkable climber.” “Who benefited most from the power vacuum left by the assassination?” These hypotheses and questions do not, on their own, lend themselves to quantitative analysis.

Chief among the common difficulties of turning a historiographic agenda into a computationally tractable hypothesis is a lack of familiarity with computational methods. If you don’t know what a computer is good at, you can’t form an experiment to use one.

I said that history isn’t experimental, but I lied. Archival research can be an experiment if you go in with a hypothesis and a pre-conceived approach or set of criteria that would confirm it. Computational history, at this stage, is also experimental. It often works a little like this (but it may not): 7

  1. Set your agenda. Start with a hypothesis, historiographic framework, or question. For example, “The infrastructure of science became less dependent on specific individuals over the course of the 17th century.” (that question’s mine, don’t steal it.)
  2. Find testable hypotheses. Break it into many smaller statements that can be confirmed, denied, or quantitatively assessed. “If science depends less on specific individuals over the 17th century, the distribution of names mentioned in scholarly correspondence will flatten out. That is, in 1600 a few people will be mentioned frequently, whereas most will be mentioned infrequently; in 1700, the frequency of name mentions will be more evenly distributed across correspondence.” Or “If science depends less on specific individuals over the 17th century, when an important person died, it affected the scholarly network less in 1700 than in 1600.” (Notice in these two examples how finding evidence for the littler statements will corroborate the bigger hypothesis, and vice-versa.)
  3. Match hypotheses to approaches. Come up with methodological proxies, datasets, and/or statistical tests that could corroborate the littler statements. Be careful, thorough, and specific. For example, “In a network of 17th-century letter writers, if removing a central figure changes the network’s average path length less in 1700 than in 1600, central figures likely came to play less important structural roles. This will be most convincing if the effect of node removal smoothly decreases across the century.” (This is the step in which you need to come to the table with knowledge of different computational methods and what they do; a minimal sketch of this kind of test appears after this list.)
  4. Specify proxies. List specific analytic approaches needed for the promising tests, and the data required to do them. For example, you need a list of senders and recipients of scholarly letters, roughly evenly distributed across time between 1600 and 1700, and densely-packed enough to perform network analysis. There could be a few different analytic approaches, including removing highly-central nodes and re-calculating average path length; employing measurements of attack tolerance; etc. Probably worth testing them all and seeing whether each conforms to the pre-existing theory.
  5. Find data. Find pre-existing datasets that will fit your proxies, or estimate how long it will take to gather enough data yourself to reasonably approach your hypotheses. Opt for data that will work for as many approaches as possible. You may find some data that will suggest new hypotheses, and you’ll iterate back and forth between steps #3-#5 a few times.
  6. Collect data. Run experiments. Uh, yeah, just do those things. Easy as baking apple pie from scratch.
  7. Match experimental results to hypotheses. Here’s the fun part: you get to see how many of your predictions matched your results. Hopefully a bunch, but even if they didn’t, it’s an excuse to figure out why, and to start the process anew. You can also start exploring the newly gathered datasets to help you develop new questions. The astute may have noticed that this step brings us back to the first half of computational historiography: exploring data and seeing what you can find. 8
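
To make step #3 above concrete, here is a minimal sketch of the node-removal proxy, assuming two hypothetical edge-list files standing in for correspondence networks around 1600 and 1700; a fuller analysis would also test attack tolerance and intermediate decades.

```python
# Remove the most central figure from each snapshot network and compare how
# much the average path length changes. Edge-list file names are hypothetical.
import networkx as nx

def largest_component_apl(g):
    """Average shortest path length of the largest connected component."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant)

def removal_effect(graph):
    """Change in average path length after removing the highest-betweenness node."""
    centrality = nx.betweenness_centrality(graph)
    most_central = max(centrality, key=centrality.get)
    reduced = graph.copy()
    reduced.remove_node(most_central)
    return largest_component_apl(reduced) - largest_component_apl(graph)

network_1600 = nx.read_edgelist("letters_circa_1600.txt")
network_1700 = nx.read_edgelist("letters_circa_1700.txt")

print("effect of removing a central figure, ca. 1600:", removal_effect(network_1600))
print("effect of removing a central figure, ca. 1700:", removal_effect(network_1700))
```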

From here, it may be worthwhile to cycle back to the data exploration stage, then back here to computationally tractable hypothesis exploration, and so on ad infinitum.

By now, making meaning out of data probably feels impossible. I’m sorry. The process is much more fluid and intertwined than is easily unpacked in a blog post. The back-and-forth can take hours, days, months, or years.

But the important thing is, after you’ve gone back-and-forth a few times, you should have a combination of quantitative, archival, theoretical, and secondary support for a solidly historical argument.

Contexts of Discovery and Justification

Early 20th-century philosophy of science cared a lot about the distinction between the contexts of discovery and justification. Violently shortened, the context of discovery is how you reached your conclusion, and the context of justification is how you argue your point, regardless of the process that got you there.

I bring this up as a reminder that the two can be distinct. By the 1990s, quantitative historians who wanted to remain legible to their non-quantitative colleagues often saved the data analysis for an appendix, and even there the focus was on the actual experiments, not the long process of coming up with tests, re-testing, collecting more data, and so on.

The result of this cyclical computational historiography need not be (and rarely is, and perhaps can never be) a description of the process that led you to the evidence supporting your argument. While it’s a good idea to be clear about where your methods led you astray, the most legible result to historians will necessarily involve a narrative reconfiguration.

Causality and Truth

Small final notes on two big topics.

First, Causality. This approach won’t get you there. It’s hard to disentangle causality from correlation, but more importantly in this context, it’s hard to choose between competing causal explanations. The above process can lead you to plausible and corroborated hypotheses, but it cannot prove anything.

Consider this: “My hypothesis about apples predicts these 10 testable claims.” You test each claim, and each test agrees with your predictions. It’s a success, but a soft one; you’ve shown your hypothesis to be plausible given the evidence, but not inevitable. A dozen other equally sensible hypotheses could have produced the same 10 testable claims. You did not prove those hypotheses wrong, you just chose one model that happened to work. 9

Even if no alternate hypothesis presents itself, and all of your tests agree with your hypothesis, you still do not have causal proof. It may be that the proxies you chose to test your claims are bad ones, or incomplete, or your method has unseen holes. Causality is tricky, and in the humanities, proof especially so.

Which leads us to the next point: Truth. Even if somehow you devise the perfect process to find proof of a causal hypothesis, the causal description does not constitute capital-T Truth. There are many truths, coming from many perspectives, about the past, and they don’t need to agree with each other. Historians care not just about what happened, but how and why, and those hows and whys are driven by people. Messy, inconsistent people who believe many conflicting things within the span of a moment. When it comes to questions of society, even the most scientistic of scholars must come to terms with uncertainty and conflict, which after all are more causally central to the story of history than most clever narratives we might tell.

Notes:

  1. Also called digital history, and related to quantitative history and cliometrics in ways we don’t often like to admit.
  2. The other most prominent failure in computational history is our tendency to group things into finite discrete categories; in this case, a two-part list of failures.
  3. With some notable exceptions. Some historians simulate the past, others perform experiments on rates of material decay, or on the chemical composition of inks. It’s a big world out there.
  4. When I say fact, assume I add all the relevant post-modernist caveats of the contingency of objectivity etc. etc. Really I mean “matters of history that the volume of available evidence make difficult to dispute.”
  5. Ted Underwood and I have both talked about the exciting promise of incredibly low-hanging fruit in new approaches.
  6. OK in retrospect I should have used a more historically relevant example – I wasn’t expecting to push this example so far.
  7. If this seems overly scientistic, worry not! Experimental science is often defined by its recourse to rote procedure, which means pretty much any procedural explanation of research will resemble experimental science. There are many ways one can go about scalable reading / triangulation of computational historiography, not just the procedural steps #1-#7 above, but this is one of the easier approaches to explain. Soft falsification and hypothesis testing are plausible angles into computational history, but not necessary ones.
  8. A brief addendum to steps #6-#7: although I’d argue Null-Hypothesis Significance Testing or population-based statistical inferences may not be relevant to historiography, especially when it’s based in triangulation, they may be useful in certain cases. Without delving too deeply into the weeds, they can help you figure out the extent to which the effect you see may just be noise, not indicative of any particular trend. Statistical effect sizes may also be of use, helping you see whether the magnitude of your finding is big enough to have any appreciable role in the historical narrative.
  9. Shawn Graham and I wrote about this in relation to archaeology and simulation here, on the subject of underdetermination and abduction.

Submissions to DH2017 (pt. 1)

As I have many times before, I’m analyzing the international digital humanities conference, this time the 2017 conference in Montréal. The data I collect is available to any conference peer reviewer, though I do a bunch of scraping, cleaning, scrubbing, shampooing, anonymizing, etc. before posting these results.

This first post covers the basic landscape of submissions to next year’s conference: how many submissions there are, what they’re about, and so forth.

The analysis is opinionated and sprinkled with my own preliminary interpretations. If you disagree with something or want to see more, comment below, and I’ll try to address it in the inevitable follow-up. If you want the data, too bad—since it’s only available to reviewers, there’s an expectation of privacy. If you are sad for political or other reasons and live near me, I will bring you chocolate; if you are sad and do not live near me, you should move to Pittsburgh. We have chocolate.

Submission Numbers & Types

I’ll be honest, I was surprised by this year’s submission numbers. This will be the first ADHO conference held in North America since it was held in Nebraska in 2013, and I expected an influx of submissions from people who haven’t been able to travel off the continent for interim events. I expected the biggest submission pool yet.

Submissions per year by type.

What we see, instead, are fewer submissions than Kraków last year: 608 in all. The low number of submissions to Sydney was expected, given it was the first conference held outside Europe or North America, but this year’s numbers suggest the DH Hype Machine might be cooling somewhat, after five years of rapid growth.

Annual presentations at DH conferences, compared to growth of DHSI in Victoria, 1999-2015.

We need some more years and some more DH-Hype-Machine Indicators to be sure, but I reckon things are slowing down.

The conference offers five submission tracks: Long Paper, Short Paper, Poster, Panel, and (new this year) Virtual Short Paper. The distribution is pretty consistent with previous years, with the only deviation being in Sydney in 2015. Apparently Australians don’t like short papers or posters?

I’ll be interested to see how the “Virtual Short Paper” works out. Since authors need to decide on this format before submitting, it doesn’t allow the flexibility of seeing if funding will become available over the course of the year. Still, it’s a step in the right direction, and I hope it succeeds.

Co-Authorship

More of the same! If nothing else, we get points for consistency.

Percent of Co-Authorships

Same as it ever was, nearly half of all submissions are by a single author. I don’t know if that’s because humanists need to justify their presentations to hiring and tenure committees who only respect single authorship, or if we’re just used to working alone. A full 80% of submissions have three or fewer authors, suggesting large teams are still not the norm, or that we’re not crediting all of the labor that goes into DH projects with co-authorships. [Post-publication note: See Adam Crymble’s comment, below, for important context]

Language, Topic, & Discipline

Authors choose from several possible submission languages. This year, 557 submissions were received in English, 40 in French, 7 in Spanish, 3 in Italian, and 1 in German. That’s the easy part.

The Powers That Be decided to make my life harder by changing up the categories authors can choose from for 2017. Thanks, Diane, ADHO, or whoever decided this.

In previous years, authors chose any number of keywords from a controlled vocabulary of about 100 possible topics that applied to their submission. Among other purposes, it helped match authors with reviewers. The potential topic list was relatively static for many years, allowing me to analyze the change in interest in topics over time.

This year, they added, removed, and consolidated a bunch of topics, as well as divided the controlled vocabulary into “Topics” (like metadata, morphology, and machine translation) and “Disciplines” (like disability studies, archaeology, and law). This is ultimately good for the conference, but makes it difficult for me to compare this against earlier years, so I’m holding off on that until another post.

But I’m not bitter.

This year’s options are at the bottom of this post in the appendix. Words in red were added or modified this year, and the last list are topics that used to exist, but no longer do.

So let’s take a look at this year’s breakdown by discipline.

Disciplinary breakdown of submissions

Huh. “Computer science”—a topic which last year did not exist—represents nearly a third of submissions. I’m not sure how much this topic actually means anything. My guess is the majority of people using it are simply signifying the “digital” part of their “Digital Humanities” project, since the topic “Programming”—which existed in previous years but not this year—used to only connect to ~6% of submissions.

“Literary studies” represents 30% of all submissions, more than any previous year (usually around 20%), whereas “historical studies” has stayed stable with previous years, at around 20% of submissions. These two groups, however, can be pretty variable year-to-year, and I’m beginning to suspect that their use by authors is not consistent enough to take as meaningful. More on that in a later post.

That said, DH is clearly driven by lit, history, and library/information science. L/IS is a new and welcome category this year; I’ve always suspected that DHers are as much from L/IS as the humanities, and this lends evidence in that direction. Importantly, it also makes apparent a dearth in our disciplinary genealogies: when we trace the history of DH, we talk about the history of humanities computing, the history of the humanities, the history of computing, but rarely the history of L/IS.

I’ll have a more detailed breakdown later, but there were some surprises in my first impressions. “Film and Media Studies” is way up compared to previous years, as are other non-textual disciplines, which refreshingly shows (I hope) the rise of non-textual sources in DH. Finally. Gender studies and other identity- or intersectional-oriented submissions also seem to be on the rise (this may be an indication of US academic interests; we’ll need another few years to be sure).

If we now look at Topic choices (rather than Discipline choices, above), we see similar trends.

Topical distribution of submissions

Again, these are just first impressions, there’ll be more soon. Text is still the bread and butter of DH, but we see more non-textual methods being used than ever. Some of the old favorites of DH, like authorship attribution, are staying pretty steady against previous years, whereas others, like XML and encoding, seem to be decreasing in interest year after year.

One last note on Topics and Disciplines. There’s a list of discontinued topics at the bottom of the appendix. Most of them have simply been consolidated into other categories, however one set is conspicuously absent: meta-discussions of DH. There are no longer categories for DH’s history, theory, how it’s taught, or its institutional support. These were pretty popular categories in previous years, and I’m not certain why they no longer exist. Perusing the submissions, there are certainly several that fall into these categories.

What’s Next

For Part 2 of this analysis, look forward to more thoughts on the topical breakdown of conference submissions; preliminary geographic and gender analysis of authors; and comparisons with previous years. After that, who knows? I take requests in the comments, but anyone who requests “Free Bird” is banned for life.

Appendix: Controlled Vocabulary

Words in red were added or modified this year, and the last list contains topics that used to exist but no longer do.

Topics

  • 3D Printing
  • agent modeling and simulation
  • archives, repositories, sustainability and preservation
  • audio, video, multimedia
  • authorship attribution / authority
  • bibliographic methods / textual studies
  • concording and indexing
  • content analysis
  • copyright, licensing, and Open Access
  • corpora and corpus activities
  • crowdsourcing
  • cultural and/or institutional infrastructure
  • data mining / text mining
  • data modeling and architecture including hypothesis-driven modeling
  • databases & dbms
  • digitisation – theory and practice
  • digitisation, resource creation, and discovery
  • diversity
  • encoding – theory and practice
  • games and meaningful play
  • geospatial analysis, interfaces & technology, spatio-temporal modeling/analysis & visualization
  • GLAM: galleries, libraries, archives, museums
  • hypertext
  • image processing
  • information architecture
  • information retrieval
  • interdisciplinary collaboration
  • interface & user experience design/publishing & delivery systems/user studies/user needs
  • internet / world wide web
  • knowledge representation
  • lexicography
  • linking and annotation
  • machine translation
  • metadata
  • mobile applications and mobile design
  • morphology
  • multilingual / multicultural approaches
  • natural language processing
  • networks, relationships, graphs
  • ontologies
  • project design, organization, management
  • query languages
  • scholarly editing
  • semantic analysis
  • semantic web
  • social media
  • software design and development
  • speech processing
  • standards and interoperability
  • stylistics and stylometry
  • teaching, pedagogy and curriculum
  • text analysis
  • text generation
  • universal/inclusive design
  • virtual and augmented reality
  • visualisation
  • xml

Disciplines

  • anthropology
  • archaeology
  • art history
  • asian studies
  • classical studies
  • computer science
  • creative and performing arts, including writing
  • cultural studies
  • design
  • disability studies
  • english studies
  • film and media studies
  • folklore and oral history
  • french studies
  • gender studies
  • geography
  • german studies
  • historical studies
  • italian studies
  • law
  • library & information science
  • linguistics
  • literary studies
  • medieval studies
  • music
  • near eastern studies
  • philology
  • philosophy
  • renaissance studies
  • rhetorical studies
  • sociology
  • spanish and spanish american studies
  • theology
  • translation studies

No Longer Exist

  • Digital Humanities – Facilities
  • Digital Humanities – Institutional Support
  • Digital Humanities – Multilinguality
  • Digital Humanities – Nature And Significance
  • Digital Humanities – Pedagogy And Curriculum
  • Genre-specific Studies: Prose, Poetry, Drama
  • History Of Humanities Computing/digital Humanities
  • Maps And Mapping
  • Media Studies
  • Other
  • Programming
  • Prosodic Studies
  • Publishing And Delivery Systems
  • Spatio-temporal Modeling, Analysis And Visualisation
  • User Studies / User Needs

Summary: Martin & Runyon’s “Digital Humanities, Digital Hegemony”

Today’s post just summarizes an article recently shared with me, as an attempt to boost the signal:

Those following along at home know I’ve been exploring how digital humanities infrastructure reinforces pre-existing cultural biases, most recently with Nickoal Eichmann & Jeana Jorgensen looking at DH Conferences, 2000-2015.

One limitation of our study is that we know very little about the content of conference presentations or the racial identities of authors, which means we can’t assess bias in those directions. John D. Martin III & Carolyn Runyon recently published preliminary results that more thoroughly address race & gender in DH from a funding perspective, focusing on the content of grants:

Martin, John D., III, and Carolyn Runyon. “Digital Humanities, Digital Hegemony: Exploring Funding Practices and Unequal Access in the Digital Humanities.” SIGCAS Computers and Society 46, no. 1 (March 2016): 20–26. doi:10.1145/2908216.2908219.

By hand-categorizing 656 DH-oriented NEH grants from 2007-2016, totaling $225 million, Martin & Runyon found 110 projects whose focus involved gender or individuals of a certain gender, and 228 which focused on race/ethnicity or individuals identifiable with particular races/ethnicities.

From the article

Major findings include:

  • Twice as much money goes to studying men as to women.
  • On average, individual projects about women are better-funded.
  • The top three race/ethnicity categories by funding amount are White ($21 million), Asian ($7 million), and Black ($6.5 million).
  • White men are discussed as individuals, and women and non-white people are focused on as groups.

Their results fit well with what I and others have found, which is that DH propagates the same cultural bias found elsewhere within and outside academia.

A next step, vital to this project, is to find equivalent metrics for other disciplines and data sources. Until we get a good baseline, we won’t actually know if our interventions are improving the situation. It’s all well and good to say “things are bad”, but until we know the compared-to-what, we won’t have a reliable way of testing what works and what doesn’t.

Representation at Digital Humanities Conferences (2000-2015)

Nickoal Eichmann (corresponding author), Jeana Jorgensen, Scott B. Weingart 1

NOTE: This is a pre-peer reviewed draft submitted for publication in Feminist Debates in Digital Humanities, eds. Jacque Wernimont and Elizabeth Losh, University of Minnesota Press (2017). Comments are welcome, and a downloadable dataset / more figures are forthcoming. This chapter will be released alongside another on the history of DH conferences, co-authored by Weingart & Eichmann (forthcoming), which will go into further detail on technical aspects of this study, including the data collection & statistics. Many of the materials first appeared on this blog. To cite this preprint, use the figshare DOI:  https://dx.doi.org/10.6084/m9.figshare.3120610.v1

Abstract

Digital Humanities (DH) is said to have a light side and a dark side. Niceness, globality, openness, and inclusivity sit at one side of this binary caricature; commodification, neoliberalism, techno-utopianism, and white male privilege sit at the other. At times, the plurality of DH embodies both descriptions.

We hope a diverse and critical DH is a goal shared by all. While DH, like the humanities writ large, is not a monolith, steps may be taken to improve its public face and shared values through positively influencing its communities. The Alliance of Digital Humanities Organizations’ (ADHO’s) annual conference hosts perhaps the largest such community. As an umbrella organization of six international digital humanities constituent organizations, as well as 200 DH centers in a few dozen countries, ADHO and its conference ought to represent the geographic, disciplinary, and demographic diversity of those who identify as digital humanists.

The annual conference offers insight into how the world sees DH. While it may not represent the plurality of views held by self-described digital humanists, the conference likely influences the values of its constituents. If the conference glorifies Open Access, that value will be taken up by its regular attendees; if the conference fails to prioritize diversity, this too will be reinforced.

This chapter explores fifteen years of DH conferences, presenting a quantified look at the values implicitly embedded in the event. Women are consistently underrepresented, in spite of the fact that the most prominent figures at the conference are as likely women as men. The geographic representation of authors has become more diverse over time—though authors with non-English names are still significantly less likely to pass peer review. The topical landscape is heavily gendered, suggesting a masculine bias may be built into the value system of the conference itself. Without data on skin color or ethnicity, we are unable to address racial or related diversity and bias here.

There have been some improvements over time and, especially recently, a growing awareness of diversity-related issues. While many of the conference’s negative traits are simply reflections of larger entrenched academic biases, this is no comfort when self-reinforcing biases foster a culture of microaggression and white male privilege. Rather than using this study as an excuse to write off DH as just another biased community, we offer statistics, critiques, and suggestions as a vehicle to improve ADHO’s conference, and through it the rest of self-identified Digital Humanities.

Introduction

Digital humanities (DH), we are told, exists under a “big tent”, with porous borders, little gatekeeping, and, heck, everyone’s just plain “nice”. Indeed, the term itself is not used definitionally, but merely as a “tactical convenience” to get stuff done without worrying so much about traditional disciplinary barriers. DH is “global”, “public”, and diversely populated. It will “save the humanities” from its crippling self-reflection (cf. this essay), while simultaneously saving the computational social sciences from their uncritical approaches to data. DH contains its own mirror: it is both humanities done digitally, and the digital as scrutinized humanistically. As opposed to the staid, backwards-looking humanities we are used to, the digital humanities “experiments”, “plays”, and even “embraces failure” on ideological grounds. In short, we are the hero Gotham needs.

Digital Humanities, we are told, is a narrowly-defined excuse to push a “neoliberal agenda”, a group of “bullies” more interested in forcing humanists to code than in speaking truth to power. It is devoid of cultural criticism, and because of the way DHers uncritically adopt tools and methods from the tech industry, they in fact often reinforce pre-existing power structures. DH is nothing less than an unintentionally rightist vehicle for techno-utopianism, drawing from the same font as MOOCs and complicit in their devaluing of education, diversity, and academic labor. It is equally complicit in furthering both the surveillance state and the surveillance economy, exemplified in its stunning lack of response to the Snowden leaks. As a progeny of the computer sciences, digital humanities has inherited the same lack of gender and racial diversity, and any attempt to remedy the situation is met with incredible resistance.

The truth, as it so often does, lies somewhere in the middle of these extreme caricatures. It’s easy to ascribe attributes to Digital Humanities synecdochically, painting the whole with the same brush as one of its constituent parts. One would be forgiven, for example, for coming away from the annual international ADHO Digital Humanities conference assuming DH were a parade of white men quantifying literary text. An attendee of HASTAC, on the other hand, might leave seeing DH as a diverse community focused on pedagogy, but lacking in primary research. Similar straw-snapshots may be drawn from specific journals, subcommunities, regions, or organizations.

But these synecdoches have power. Our public face sets the course of DH, via who it entices to engage with us, how it informs policy agendas and funding allocations, and who gets inspired to be the next generation of digital humanists. Especially important is the constituency and presentation of the annual Digital Humanities conference. Every year, several hundred students, librarians, staff, faculty, industry professionals, administrators and researchers converge for the conference, organized by the Alliance of Digital Humanities Organizations (ADHO). As an umbrella organization of six international digital humanities constituent organizations, as well as 200 DH centers in a few dozen countries, ADHO and its conference ought to represent the geographic, disciplinary, and demographic diversity of those who identify as digital humanists. And as DH is a community that prides itself on its activism and its social/public goals, if the annual DH conference does not celebrate this diversity, the DH community may suffer a crisis of identity (…okay, a bigger crisis of identity).

So what does the DH conference look like, to an outsider? Is it diverse? What topics are covered? Where is it held? Who is participating, who is attending, and where are they coming from? This essay offers incomplete answers to these questions for fifteen years of DH conferences (2000-2015), focusing particularly on DH2013 (Nebraska, USA), DH2014 (Lausanne, Switzerland), and DH2015 (Sydney, Australia). 2 We do so with a double-agenda: (1) to call out the biases and lack of diversity at ADHO conferences in the earnest hope it will help improve future years’ conferences, and (2) to show that simplistic, reductive quantitative methods can be applied critically, and need not feed into techno-utopic fantasies or an unwavering acceptance of proxies as a direct line to Truth. By “distant reading” DH and turning our “macroscopes” on ourselves, we offer a critique of our culture, and hopefully inspire fruitful discomfort in DH practitioners who apply often-dehumanizing tools to their subjects, but have not themselves fallen under the same distant gaze.

Among other findings, we observe a large gender gap for authorship that is not mirrored among those who simply attend the conference. We also show a heavily gendered topical landscape, which likely contributes to topical biases during peer review. Geographic diversity has improved over fifteen years, suggesting ADHO’s strategy to expand beyond the customary North American / European rotation was a success. That said, there continues to be a visible bias against non-English names in the peer review process. We could not get data on ethnicity, race, or skin color, but given our regional and name data, as well as personal experience, we suspect in this area, diversity remains quite low.

We do notice some improvement over time and, especially in the last few years, a growing awareness of our own diversity problems. The #whatifDH2016 3 hashtag, for example, was a reaction to an all-male series of speakers introducing DH2015 in Sydney. The hashtag caught on and made it to ADHO’s committee on conferences, who will use it in planning future events. Our remarks here are in the spirit of #whatifDH2016; rather than using this study as an excuse to defame digital humanities, we hope it becomes a vehicle to improve ADHO’s conference, and through it the rest of our community.

Social Justice and Equality in the Digital Humanities

Diversity in the Academy

In order to contextualize gender and ethnicity in the DH community, we must take into account developments throughout higher education. This is especially important since much of DH work is done in university and other Ivory Tower settings. Clear progress has been made from the times when all-male, all-white colleges were the norm, but there are still concerns about the marginalization of scholars who are not white, male, able-bodied, heterosexual, or native English-speakers. Many campuses now have diversity offices and have set diversity-related goals at both the faculty and student levels (for example, see the Ohio State University’s diversity objectives and strategies 2007-12). On the digital front, blogs such as Conditionally Accepted, Fight the Tower, University of Venus, and more all work to expose the normative biases in academia through activist dialogue.

From both a historical and contemporary lens, there is data supporting the clustering of women and other minority scholars in certain realms of academia, from specific fields and subjects to contingent positions. When it comes to gender, the phrase “feminization” has been applied both to academia in general and to specific fields. It contains two important connotations: that of an area in which women are in the majority, and the sense of a change over time, such that numbers of women participants are increasing in relation to men (Leathwood and Read 2008, 10). It can also signal a less quantitative shift in values, “whereby ‘feminine’ values, concerns, and practices are seen to be changing the culture of an organization, a field of practice or society as a whole” (ibid).

In terms of specific disciplines, the feminization of academia has taken a particular shape. Historian Lynn Hunt suggests the following propositions about feminization in the humanities and history specifically: the feminization of history parallels what is happening in the social sciences and humanities more generally; the feminization of the social sciences and humanities is likely accompanied by a decline in status and resources; and other identity categories, such as ethnic minority status and age/generation, also interact with feminization in ways that are still becoming coherent.

Feminization has clear consequences for the perception and assignation of value of a given field. Hunt writes: “There is a clear correlation between relative pay and the proportion of women in a field; those academic fields that have attracted a relatively high proportion of women pay less on average than those that have not attracted women in the same numbers.” Thus, as we examine the topics that tend to be clustered by gender in DH conference submissions, we must keep in mind the potential correlations of feminization and value, though it is beyond the scope of this paper to engage in chicken-or-egg debates about the causal relationship between misogyny and the devaluing of women’s labor and women’s topics.

There is no obvious ethnicity-based parallel to the concept of the feminization of academia; it wouldn’t be culturally intelligible to talk about the “people-of-colorization of academia”, or the “non-white-ization of academia.” At any rate, according to a U.S. Department of Education survey, in 2013 79% of all full-time faculty in degree-granting postsecondary institutions were white. The increase of non-white faculty from 2009 (19.2% of the whole) to 2013 (21.5%) is very small indeed.

Why does this matter? As Jeffrey Milem, Mitchell Chang, and Anthony Lising Antonio write in regard to faculty of color, “Having a diverse faculty ensures that students see people of color in roles of authority and as role models or mentors. Faculty of color are also more likely than other faculty to include content related to diversity in their curricula and to utilize active learning and student-centered teaching techniques…a coherent and sustained faculty diversity initiative must exist if there is to be any progress in diversifying the faculty” (25). By centering marginalized voices, scholarly institutions have the ability to send messages about who is worthy of inclusion.

Recent Criticisms of Diversity in DH

In terms of DH specifically, diversity within the community and conferences has been on the radar for several years, and has recently gained special attention, as digital humanists and other academics alike have called for critical and feminist engagement in diversity and a move away from what seems to be an exclusionary culture. In January 2011, THATCamp SoCal included a section called “Diversity in DH,” in which participants explored the lack of openness in DH and, in the end, produced a document, “Toward an Open Digital Humanities” that summarized their discussions. The “Overview” in this document mirrors the same conversation we have had for the last several years:

We recognize that a wide diversity of people is necessary to make digital humanities function. As such, digital humanities must take active strides to include all the areas of study that comprise the humanities and must strive to include participants of diverse age, generation, sex, skill, race, ethnicity, sexuality, gender, ability, nationality, culture, discipline, areas of interest. Without open participation and broad outreach, the digital humanities movement limits its capacity for critical engagement. (ibid)

This proclamation represents the critiques of the DH landscape in 2011, in which DH practitioners and participants were assumed to be privileged and white, to exclude student-learners, and to hold myopic views of what constitutes DH. Most importantly for this chapter, THATCamp SoCal’s “Diversity in DH” section participants called for critical approaches to, and social justice in, DH scholarship and participation, including “principles for feminist/non-exclusionary groundrules in each session (e.g., ‘step up/step back’) so that the loudest/most entitled people don’t fill all the quiet moments.” They also advocated defending the least-heard voices “so that the largest number of people can benefit…”

These voices certainly didn’t fall flat. However, since THATCamps are often comprised of geographically local DH microcommunities, they benefit from an inclusive environment but suffer as isolated events. As result, it seems that the larger, discipline-specific venues which have greater attendance and attraction continue to amplify privileged voices. Even so, 2011 continued to represent a year that called for critical engagement in diversity in DH, with an explicit “Big Tent” theme for DH2011 held in Stanford, California. Embracing the concept the “Big Tent” deliberately opened the doors and widened the spectrum of DH, at least in terms of methods and approaches. However, as Melissa Terras pointed out, DH was “still a very rich, very western academic field” (Terras, 2011), even with a few DH2011 presentations engaging specifically with topics of diversity in DH. 4

A focus on diversity-related issues has only grown in the interim. We’ve recently seen greater attention and criticism of DH exclusionary culture, for instance, at the 2015 Modern Language Association (MLA) annual convention, which included the roundtable discussion “Disrupting Digital Humanities.” It confronted the “gatekeeping impulse” in DH, and echoing THATCamp SoCal 2011, these panelists aimed to shut down hierarchical dialogues in DH, encourage non-traditional scholarship, amplify “marginalized voices,” advocate for DH novices, and generously support the work of peers. 5 The theme for DH2015 in Sydney, Australia was “Global Digital Humanities,” and between its successes and collective action arising from frustrations at its failures, the community seems poised to pay even greater attention to diversity. Other recent initiatives in this vein worth mention include #dhpoco, GO::DH, and Jacqueline Wernimont’s “Build a Better Panel,” 6 whose activist goals are helping diversify the community and raise awareness of areas where the community can improve.

While it would be fruitful to conduct a longitudinal historiographical analysis of diversity in DH, more recent criticisms illustrate a history of perceived exclusionary culture, which is why we hope to provide a data-driven approach to continue the conversation and call for feminist and critical engagement and intervention.

Data

While DH as a whole has been critiqued for its lack of diversity and inclusion, how does the annual ADHO DH conference measure up? To explore this in a data-driven fashion, we have gathered publicly available annual ADHO conference programs and schedules from 2000-2015. From those conference materials, we have entered presentation and author information into a spreadsheet to analyze various trends over time, such as gender and geography as indicators of diversity. Particular information that we have collected includes: presentation title, keywords (if available), abstract and full-text (if available), presentation type, author name, author institutional affiliation and academic department (if available), and corresponding country of that affiliation at the time of the presentation(s). We normalized and hand-cleaned names, institutions, and departments, so that, to the best of our knowledge, each author entry represented a unique person and, accordingly, was assigned a unique ID. Next, we added gender information (m/f/other/unknown) to authors by a combination of hand-entry and automated inference. While this is problematic for many reasons, 7 since it does not allow for diversity in gender options or for tracing gender changes over time, it does give us a useful preliminary lens through which to view gender diversity at DH conferences.

For 2013’s conference, ADHO instituted a series of changes aimed at improving inclusivity, diversity, and quality. This drive was steered by that year’s program committee chair, Bethany Nowviskie, alongside 2014’s chair, Melissa Terras. Their reformative goals match our current goals in this essay, and speak to a long history of experimentation and improvement on ADHO’s part. Their changes included making the conference more welcoming to outsiders by ending policies that only insiders knew about; making the CFP less complex and easier to translate into multiple languages; taking reviewer language competencies into account systematically; and streamlining the submission and review process.

The biggest noticeable change at DH2013, however, was the institution of a reviewer bidding process and a phase of semi-open peer review. Peer reviewers were invited to read through and rank every submitted abstract according to how qualified they felt to review it. Following this, the conference committee matched submissions to qualified peer reviewers, taking conflicts of interest into account. Submitting authors were invited to respond to reviews, and the committee made a final decision based on the various reviews and rebuttals. This continues to be the process through DH2016. Changes continue to be made, most recently in 2016 with the addition of “Diversity” and “Multilinguality” as new keywords authors can append to their submissions.

While the list of submitted abstracts was private, accessible only to reviewers, we had access to the submissions during the bidding phase as reviewers ourselves. We used this access to create a dataset of conference submissions for DH2013, DH2014, and DH2015, which includes author names, affiliations, submission titles, author-selected topics, author-chosen keywords, and submission types (long paper, short paper, poster, panel).

We augmented this dataset by looking at the final conference programs in ‘13, ‘14, and ‘15, noting which submissions eventually made it onto the final conference program, and how they changed from the submission to the final product. This allows us to roughly estimate the acceptance rate of submissions, by comparing the submitted abstract lists to the final programs. It is not perfect, however, given that we don’t actually know whether submissions that didn’t make it to the final program were rejected, or if they were accepted and withdrawn. We also do not know who reviewed what, nor do we know the reviewers’ scores or any associated editorial decisions.

The original dataset, then, included fields for title, authors, author affiliations, original submission type, final accepted type, topics, keywords, and a boolean field for whether a submission made it to the final conference program. We cleaned the data by merging duplicate people, ensuring, for example, that if “Melissa Terras” was an author on two different submissions, she counted as the same person. For affiliations, we semi-automatically merged duplicate institutions, found the countries in which they reside, and assigned those countries to broad UN regions. We also added data to the set, first automatically guessing a gender for each author and then correcting the guesses by hand.
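
To make the cleaning workflow concrete, here is a minimal sketch of the sort of steps described above, written in Python with pandas. The file name, column names, and the tiny region lookup are hypothetical stand-ins for our actual spreadsheet and hand-checked tables, not the pipeline we ran.

```python
import pandas as pd

# Hypothetical file and columns; our real spreadsheet has more fields.
df = pd.read_csv("dh_programs_2000_2015.csv")

def normalize_name(name):
    """Collapse 'Terras, Melissa' and 'Melissa Terras' to one canonical form."""
    name = name.strip().lower()
    if "," in name:
        last, first = [part.strip() for part in name.split(",", 1)]
        name = f"{first} {last}"
    return name

df["author_key"] = df["author"].map(normalize_name)
df["author_id"] = df.groupby("author_key").ngroup()   # one unique ID per person

# Map institutional countries to broad UN regions (stand-in lookup table).
un_region = {"united kingdom": "Europe", "canada": "Americas", "japan": "Asia"}
df["region"] = df["country"].str.lower().map(un_region).fillna("Unknown")

# Keep the automated gender guess, but let a hand-entered correction override it.
df["gender"] = df["gender_corrected"].fillna(df["gender_guess"])
```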

Given that abstracts were submitted to conferences with an expectation of privacy, we have not released the full submission dataset; we have, however, released the full dataset of final conference programs. 8

We would like to acknowledge the gross and problematic simplifications involved in this process of gendering authors without their consent or input. As Miriam Posner has pointed out with regard to the Getty Union List of Artist Names, “no self-respecting humanities scholar would ever get away with such a crude representation of gender in traditional work”. And yet we represent authors in just this crude fashion, labeling them as male, female, or unknown/other. We did not encode changes of author gender over time, even though we know of at least a few authors in the dataset for whom this applies. We do not use the affordances of digital data to represent the fluidity of gender. This is problematic for a number of reasons, not least because, when we take a cookie cutter to the world, everything in the world winds up looking like cookies.

We made this decision because, in the end, all data quality is contingent on the task at hand. It is possible to acknowledge an ontology’s shortcomings while still occasionally using that ontology to positive effect. This is not always the case: poor proxies often get in the way of a research agenda (e.g., citations as indicators of “impact” in digital humanities) rather than align with it. In the humanities, poor proxies are much more likely to hinder research than to help it along, and they make it easier to justify insensitive or reductivist decisions in the name of “scale”.

For example, in looking for ethnic diversity of a discipline, one might analyze last names as a proxy for country of origin, or analyze the color of recognized faces in pictures from recent conferences as a proxy for ethnic genealogy. Among other reasons, this approach falls short because ethnicity, race, and skin color are often not aligned, and last names (especially in the U.S.) are rarely indicative of anything at all. But they’re easy solutions, so people use them. These are moments when a bad proxy (and for human categories, proxies are almost universally bad) does not fruitfully contribute to a research agenda. As George E.P. Box put it, “all models are wrong, but some are useful.”

Some models are useful. Sometimes, the stars align and the easy solution is the best one for the question. If someone were researching immediate reactions of racial bias in the West, analyzing skin tone may get us something useful. In this case, the research focus is not someone’s racial identity, but someone’s race as immediately perceived by others, which would likely align with skin tone. Simply: if a person looks black, they’re more likely to be treated as such by the (white) world at large. 9

We believe our proxies, though grossly inaccurate, are useful for the questions of gender and geographic diversity and bias. The first step to improving DH conference diversity is noticing a problem; our data show that problem through staggeringly imbalanced regional and gender ratios. With regards to gender bias, showing whether reviewers are less likely to accept papers from authors who appear to be women can reveal entrenched biases, whether or not the author actually identifies as a woman. With that said, we invite future researchers to identify and expand on our admitted categorical errors, allowing everyone to see the contours of our community with even greater nuance.

Analysis

The annual ADHO conference has grown significantly in the last fifteen years, as described in our companion piece, 10 which also contains a fuller discussion of our methods. This piece, rather than covering overall conference trends, focuses specifically on issues of diversity and acceptance rates. We cover geographic and gender diversity from 2000-2015, with additional discussions of topicality and peer review bias beginning in 2013.

Gender

Women comprise 36.1% of the 3,239 authors of DH conference presentations over the last fifteen years, counting every unique author only once. Melissa Terras’ name appears on 29 presentations between 2000-2015, and Scott B. Weingart’s name appears on 4 presentations, but for the purpose of this metric each name counts only once. Female authorship representation fluctuates between 29%-38% depending on the year.

Weighting every authorship event individually (i.e., Weingart’s name counts 4 times, Terras’ 29 times), women’s representation drops to 32.7%. This reveals that women are less likely to author multiple pieces than their male counterparts. More than a third of the DH authorship pool are women, but fewer than a third of the names that appear on presentations are women’s. Even fewer single-authored pieces are by women; only 29.8% of the 984 single-authored works between 2000-2015 were female-authored. About a third (33.4%) of first authors on presentations are women. See Fig. 1 for a breakdown of these numbers over time. Note the lack of periodicity, suggesting gender representation is not affected by whether the conference is held in Europe or North America (until 2015, the conference alternated locations every year). The overall ratio wavers, but is neither improving nor worsening over time.
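
The difference between those two counting schemes is easy to miss, so here is a minimal sketch of both calculations, assuming a hypothetical table with one row per name-on-a-presentation and columns author_id, gender, and n_authors.

```python
import pandas as pd

# Hypothetical file: one row per authorship event.
events = pd.read_csv("authorships_2000_2015.csv")

# Counting every unique person once (36.1% in our data):
unique_share = (events.drop_duplicates("author_id")["gender"] == "f").mean()

# Weighting every authorship event (32.7% in our data):
weighted_share = (events["gender"] == "f").mean()

# Share of single-authored works by women (29.8% in our data):
solo = events[events["n_authors"] == 1]
solo_share = (solo["gender"] == "f").mean()

print(unique_share, weighted_share, solo_share)
```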

Figure 1. Representation of Women at ADHO Conferences, 2000-2015.

The gender disparity sparked controversy at DH2015 in Sydney. It was, however, at odds with a common anecdotal awareness that many of the most respected role-models and leaders in the community are women. To explore this disconnect, we experimented with using centrality in co-authorship networks as a proxy for fame, respectability, and general presence within the DH consciousness. We assume that individuals who author many presentations, co-author with many people, and play a central role in connecting DH’s disparate communities of authorship are the ones who are most likely to garner the respect (or at least awareness) of conference attendees.

We created a network of authors connected to their co-authors from presentations between 2000-2015, with ties strengthening the more frequently two authors collaborate. Of the 3,239 authors in our dataset, 61% (1,750 individuals) are reachable by one another via their co-authorship ties. For example, Beth Plale is reachable by Alan Liu because she co-authored with J. Stephen Downie, who co-authored with Geoffrey Rockwell, who co-authored with Alan Liu. Thus, 61% of the network is connected in one large component, and there are 299 smaller components, islands of co-authorship disconnected from the larger community.
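
A minimal sketch of how such a network and its components can be computed with networkx, assuming the same hypothetical authorship table as above (one row per author per presentation):

```python
import itertools
import networkx as nx
import pandas as pd

events = pd.read_csv("authorships_2000_2015.csv")    # hypothetical columns

G = nx.Graph()
G.add_nodes_from(events["author_id"].unique())        # include single authors too

for _, group in events.groupby("presentation_id"):
    for a, b in itertools.combinations(sorted(set(group["author_id"])), 2):
        # strengthen the tie each time two people co-author
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

components = sorted(nx.connected_components(G), key=len, reverse=True)
share_in_giant = len(components[0]) / G.number_of_nodes()
print(share_in_giant, len(components) - 1)   # ~0.61 and 299 smaller islands in our data
```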

The average woman co-authors with 5 other authors, and the average man co-authors with 5.3 other authors. The median number of co-authors for both men and women is 4. The average and median of several centrality measurements (closeness, betweenness, pagerank, and eigenvector) for both men and women are nearly equivalent; that is, any given woman is just as likely to be near the co-authorship core as any given man. Naturally, this does not imply that half of the most central authors are women, since only a third of the entire authorship pool are women. It means instead that gender does not influence one’s network centrality. Or at least it should.

The statistics show a curious trend for the most central figures in the network. Of the top 10 authors who co-author with the most others, 60% are women. Of the top 20, 45% are women. Of the top 50, 38% are women. Of the top 100, 32% are women. That is, over half of the DH co-authorship stars are women, but the further toward the periphery you look, the more men occupy the middle-tier positions (i.e., not stars, but still fairly active co-authors). The same holds true for the various centrality measurements: betweenness (60% women in top 10; 40% in top 20; 32% in top 50; 34% in top 100), pagerank (50% women in top 10; 40% in top 20; 32% in top 50; 28% in top 100), and eigenvector (60% women in top 10; 40% in top 20; 40% in top 50; 34% in top 100).
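
And a sketch of the centrality comparison, continuing from the graph G and the hypothetical authorship table in the sketch above:

```python
gender = (events.drop_duplicates("author_id")
                .set_index("author_id")["gender"].to_dict())

def share_of_women(scores, n):
    """Proportion of women among the n highest-scoring authors."""
    top = sorted(scores, key=scores.get, reverse=True)[:n]
    return sum(gender.get(a) == "f" for a in top) / n

degree = dict(G.degree())                             # number of distinct co-authors
betweenness = nx.betweenness_centrality(G)
pagerank = nx.pagerank(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)

for n in (10, 20, 50, 100):
    print(n, share_of_women(degree, n), share_of_women(betweenness, n),
          share_of_women(pagerank, n), share_of_women(eigenvector, n))
```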

In short, half or more of the DH conference stars are women, but as you creep closer to the network periphery, you are increasingly likely to notice the prevailing gender disparity. This helps explain the mismatch between the anecdotal sense that women play a huge role in DH and the data showing they are poorly represented at conferences. The results also match the fact that women are disproportionately more likely to write about management and leadership, discussed at greater length below.

The heavily-male gender skew at DH conferences may lead one to suspect a bias in the peer review process. Recent data, however, show that if such a bias exists, it is not direct. Over the past three conferences, 71% of women and 73% of men who submitted presentations passed the peer review process. The difference is not great enough to rule out random chance (p=0.16 using χ²). The skew at conferences is more a result of fewer women submitting articles than of women’s articles not getting accepted. The one caveat, explained more below, is that certain topics women are more likely to write about are also less likely to be accepted through peer-review.
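
For readers curious how that check works, here is a minimal chi-squared sketch with scipy; the counts are placeholders chosen only to illustrate the shape of the test, not our actual submission numbers.

```python
from scipy.stats import chi2_contingency

# Rows: women, men. Columns: accepted, not accepted. Placeholder counts only.
table = [[355, 145],    # women, ~71% accepted
         [730, 270]]    # men,   ~73% accepted

chi2, p, dof, expected = chi2_contingency(table)
print(p)  # with our real counts, p = 0.16: too high to rule out random chance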

This does not imply a lack of bias in the DH community. For example, although only 33.5% of authors at DH2015 in Sydney were women, 46% of conference attendees were women. If women were simply uninterested in DH, the gap between attendance and authorship would not be so wide.

In regard to discussions of women in different roles in the DH community – less the publishing powerhouses and more the community leaders and organizers – the concept of the “glass cliff” can be useful. Research on the feminization of academia in Sweden uses the term “glass cliff” as a “metaphor used to describe a phenomenon when women are appointed to precarious leadership roles associated with an increased risk of negative consequences when a company is performing poorly and for example is experiencing profit falls, declining stock performance, and job cuts” (Peterson 2014, 4). The female academics (who also occupied senior managerial positions) interviewed in Helen Peterson’s study expressed concerns about increasing workloads, the precarity of their positions, and the potential for interpersonal conflict.

Institutional politics may also play a role in the gendered data here. Sarah Winslow says of institutional context that “female faculty are less likely to be located at research institutions or institutions that value research over teaching, both of which are associated with greater preference for research” (779). The research, teaching, and service divide in academia remains a thorny issue, especially given the prevalence of what has been called the pink collar workforce in academia, or the disproportionate amount of women working in low-paying teaching-oriented areas. This divide likely also contributed to differing gender ratios between attendees and authors at DH2015.

While the gendered implications of time allocation in universities are beyond the scope of this paper, it might be useful to note that there might be long-term consequences for how people spend their time interacting with scholarly tasks that extend beyond one specific institution. Winslow writes: “Since women bear a disproportionate responsibility for labor that is institution-specific (e.g., institutional housekeeping, mentoring individual students), their investments are less likely to be portable across institutions. This stands in stark contrast to men, whose investments in research make them more highly desirable candidates should they choose to leave their own institutions” (790). How this plays out specifically in the DH community remains to be seen, but the interdisciplinarity of DH along with its projects that span multiple working groups and institutions may unsettle some of the traditional bias that women in academia face.

Locale

Until 2015, the DH conference alternated every year between North America and Europe. As expected, until recently, the institutions represented at the conference have hailed mostly from these areas, with the primary locus falling in North America. In fact, since 2000, North American authors were the largest authorial constituency at eleven of the fifteen conferences, even though North America only hosted the conference seven times in that period.

With that said, as opposed to gender representation, national and institutional diversity is improving over time. Using an Index of Qualitative Variation (IQV), institutional variation begins around 0.992 in 2000 and ends around 0.996 in 2015, with steady increases over time. National IQV begins around 0.79 in 2010 and ends around 0.83 in 2015, also with steady increases over time. The most recent conference was the first that included over 30% of authors and attendees arriving from outside Europe or North America. Now that ADHO has implemented a three-year cycle, with every third year marked by a movement outside its usual territory, that diversity is likely to increase further still.
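
The Index of Qualitative Variation used here is a standard measure: with K observed categories and proportions p_i, IQV = (K / (K - 1)) * (1 - sum of squared p_i), so 0 means every author shares one country or institution and 1 means authors are spread evenly across categories. A minimal sketch:

```python
from collections import Counter

def iqv(categories):
    """Index of Qualitative Variation over a list of category labels."""
    counts = Counter(categories)
    k = len(counts)
    if k < 2:
        return 0.0
    total = sum(counts.values())
    sum_sq = sum((c / total) ** 2 for c in counts.values())
    return (k / (k - 1)) * (1 - sum_sq)

# e.g., iqv(list_of_author_countries_for_2015) gives the national IQV for that year
```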

The most well-represented institutions are not as dominant as some might expect, given the common view of DH as a community centered on particular powerhouse departments or universities. The university with the most authors contributing to DH conferences (2.4% of the total) is King’s College London, followed by the Universities of Illinois (1.85%), Alberta (1.83%), and Virginia (1.75%). The most prominent university outside North America or Europe is Ritsumeikan University, contributing 1.07% of all DH conference authors. In all, over a thousand institutions have contributed authors to the conference, and that number increases every year.

While these numbers represent institutional origins, the available data do not allow any further diving into birth countries, native languages, ethnic identities, etc. The 2013-2015 dataset, which includes peer review information, does yield some insight into geography-influenced biases that may map to language or identity. While the peer review data do not show any clear bias by institutional country, there is a very clear bias against names which do not appear frequently in the U.S. Census or Social Security Index. We discovered this when attempting to statistically infer the gender of authors using these U.S.-based indices. 11 From 2013-2015, presentations written by those with names appearing frequently in these indices were significantly more likely to be accepted than those written by authors with non-English names (p < 0.0001). Whereas approximately 72% of authors with common U.S. names passed peer review, only 61% of authors with uncommon names passed. Without more data, we have no idea whether this tremendous disparity is due to a bias against popular topics from non-English-speaking countries, a higher likelihood of peer reviewers rejecting text written by non-native writers, an implicit bias by peer reviewers when they see “foreign” names, or something else entirely.

Topic

When submitting a presentation, authors are given the opportunity to provide keywords for their submission. Some keywords can be chosen freely, while others must be chosen from a controlled list of about 100 potential topics. These controlled keywords are used to help in the process of conference organization and peer reviewer selection, and they stay roughly constant every year. New keywords are occasionally added to the list, as in 2016, where authors can now select three topics which were not previously available: “Digital Humanities – Diversity”, “Digital Humanities – Multilinguality”, and “3D Printing”. The 2000-2015 conference dataset does not include keywords for every article, so this analysis will only cover the more detailed dataset, 2013-2015, with additional data on submissions for DH2016.

From 2013-2016, presentations were tagged with an average of six controlled keywords per submission. The most-used keywords are unsurprising: “Text Analysis” (tagged on 22% of submissions), “Data Mining / Text Mining” (20%), “Literary Studies” (20%), “Archives, Repositories, Sustainability And Preservation” (19%), and “Historical Studies” (18%). The most frequently-used keyword potentially pertaining directly to issues of diversity, “Cultural Studies”, appears on 14% of submissions from 2013-2016. Only 2% of submissions are tagged with “Gender Studies”. The two diversity-related keywords introduced this year are already being used surprisingly frequently, with 9% of 2016 submissions tagged “Digital Humanities – Diversity” and 6% tagged “Digital Humanities – Multilinguality”. With over 650 conference submissions for 2016, this translates to a reasonably large community of DH authors presenting on topics related to diversity.
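
A minimal sketch of how those keyword proportions can be computed, assuming a hypothetical submissions table with one row per submission and a semicolon-separated keywords column:

```python
import pandas as pd

subs = pd.read_csv("submissions_2013_2016.csv")   # hypothetical file and columns
subs["keywords"] = subs["keywords"].fillna("").str.split(";")

exploded = subs.explode("keywords")
exploded["keywords"] = exploded["keywords"].str.strip()

# Share of submissions carrying each controlled keyword
shares = (exploded.groupby("keywords")["submission_id"].nunique()
          / subs["submission_id"].nunique())
print(shares.sort_values(ascending=False).head(10))
```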

Joining the topic and gender data for 2013-2015 reveals the extent to which certain subject matters are gendered at DH conferences. 12 Women are twice as likely to use the “Gender Studies” tag as male authors, whereas men are twice as likely to use the “Asian Studies” tag as female authors. Subjects related to pedagogy, creative / performing arts, art history, cultural studies, GLAM (galleries, libraries, archives, museums), DH institutional support, and project design/organization/management are more likely to be presented by women. Men, on the other hand, are more likely to write about standards & interoperability, the history of DH, programming, scholarly editing, stylistics, linguistics, network analysis, and natural language processing / text analysis. It seems DH topics have inherited the usual gender skews associated with the disciplines in which those topics originate.

We showed earlier that there was no direct gender bias in the peer review process. While that holds, there appears to be indirect bias with respect to how certain gendered topics fare with DH conference peer reviewers. A woman has just as much chance of getting a paper through peer review as a man if they both submit a presentation on the same topic (e.g., both women and men have a 72% chance of passing peer review if they write about network analysis, or a 65% chance if they write about knowledge representation), but topics that are heavily gendered toward women are less likely to be accepted. Cultural studies has a 57% acceptance rate, gender studies 60%, pedagogy 51%. Male-skewed topics have higher acceptance rates, like text analysis (83%), programming (80%), or Asian studies (79%). The female-gendering of DH institutional support and project organization also supports our earlier claim that, while women are well represented among the DH leadership, they are less well represented in the topics that the majority of authors are discussing (programming, text analysis, etc.).

The widespread devaluation of women’s labor may help to explain the clustering – and devaluing – of topics that women tend to present on at DH conferences. We discussed the feminization of academia above, and indeed this is a trend seen in practically all facets of society. The addition of emotional labor or caretaking tasks complicates this further. Economist Teresa Ghilarducci explains: “a lot of what women do in their lives is punctuated by time outside of the labor market — taking care of family, taking care of children — and women’s labor has always been devalued…[people] assume that she had some time out of the labor market and that she was doing something that was basically worthless, because she wasn’t being paid for it.” In academia specifically, the labyrinthine relationship of pay to tasks and labor further obscures value: we are rarely paid per task (per paper published or presented) on the research front; service work is almost entirely invisible; and teaching factors in with course loads, often with more up-front transparency for contingent laborers such as adjuncts and part-timers.

Our results seem to point less to an obvious bias against women scholars than to a subtler bias against topics that women tend to gravitate toward, or are seen as gravitating toward. This is in line with the concept of postfeminism, the notion that feminism has met its main goals (e.g., securing women the right to vote and the right to an education) and is thus irrelevant to contemporary social needs and discourse. Thoroughly enmeshed in neoliberal discourse, postfeminism makes discussing misogyny seem obsolete and obscures the subtler ways in which sexism operates in daily life (Pomerantz, Raby, and Stefanik 2013). While individuals may or may not choose to identify as postfeminist, the overarching beliefs associated with postfeminism have permeated North American culture at a number of levels, leading us to posit the acceptance of postfeminist ideals as one explanation for the devaluing of topics that seem associated with women.

Discussion and Future Research

The analysis reveals an annual DH conference with a growing awareness of diversity-related issues, with moderate improvements in regional diversity, stagnation in gender diversity, and unknown (but anecdotally poor) diversity with regard to language, ethnicity, and skin color. Knowledge at the DH conference is heavily gendered, though women are not directly biased against during peer review, and while several prominent women occupy the community’s core, women occupy less space in the much larger periphery. No single institution or small set of institutions dominates conference attendance, and though North America’s influence on ADHO cannot be overstated, recent ADHO efforts are significantly improving the geographic spread of its constituency.

The DH conference, and by extension ADHO, is not the digital humanities. It is, however, the largest annual gathering of self-identified digital humanists, 13 and as such its makeup holds influence over the community at large. Its priorities, successes, and failures reflect on DH, both within the community and to the outside world, and those priorities get reinforced in future generations. If the DH conference remains as it is—devaluing knowledge associated with femininity, comprising only 36% women, and rejecting presentations by authors with non-English names—it will have significant difficulty attracting a more diverse crowd without explicit interventions. Given the shortcomings revealed in the data above, we present some possible interventions that can be made by ADHO or its members to foster a more diverse community, inspired by #WhatIfDH2016:

  • As pointed out by Yvonne Perkins, ask presenters to include a brief “Collections Used” section, when appropriate. Such a practice would highlight and credit the important work being done by those who aren’t necessarily engaging in publishable research, and help legitimize that work to conference attendees.

  • As pointed out by Vika Zafrin, create guidelines for reviewers explicitly addressing diversity, and provide guidance on noticing and reducing peer review bias.

  • As pointed out by Vika Zafrin, community members can make an effort to solicit presentation submissions from women and people of color.

  • As pointed out by Vika Zafrin, collect and analyze data on who is peer reviewing, to see whether, and to what extent, biases creep in at that stage.

  • As pointed out by Aimée Morrison, ensure that the conference stage is at least as diverse as the conference audience. This can be accomplished in a number of ways, from conference organizers making sure their keynote speakers draw from a broad pool, to organizing last-minute lightning lectures specifically for those who are registered but not presenting.

  • As pointed out by Tonya Howe, encourage presentations or attendance from more process-oriented liberal arts delegates.

  • As pointed out by Christina Boyles, encourage the submission of research focused around the intersection of race, gender, and sexuality studies. This may be partially accomplished by including more topical categories for conference submissions, a step which ADHO has already taken for 2016.

  • As pointed out by many, take explicit steps in ensuring conference access to those with disabilities. We suggest this become an explicit part of the application package submitted by potential host institutions.

  • As pointed out by many, ensure the ease of participation-at-a-distance (both as audience and as speaker) for those without the resources to travel.

  • As requested by Karina van Dalen-Oskam, chair of ADHO’s Steering Committee, send her an email on how to navigate the difficult cultural issues facing an international organization.

  • Give marginalized communities greater representation in the DH conference peer reviewer pool. This can be done at the grassroots level, with each of us reaching out to colleagues to volunteer as reviewers, and organizationally, perhaps by ADHO creating a volunteer group to seek out and encourage more diverse reviewers.

  • Consider the difference between diversifying (verb) vs. talking about diversity (noun), and consider whether other modes of disrupting hegemony, such as decolonization and queering, might be useful in these processes.

  • Contribute to the #whatifDH2016 and #whatifDH2017 discussions on twitter with other ideas for improvements.

Many options are available to improve representation at DH conferences, and some encouraging steps are already being taken by ADHO and its members. We hope to hear more concrete steps that might be taken, especially those learned from experience in other communities or outside academia, in order to foster a healthier and more welcoming conference going forward.

In the interest of furthering these goals and improving the organizational memory of ADHO, the public portion of the data (final conference programs with full text and unique author IDs) is available alongside this publication [will link in final draft]. With this, others may test, correct, or improve our work. We will continue this work by extending the dataset back to 1990, continuing to collect data for future conferences, and creating an infrastructure that will allow the database to connect to others with similar collections. This will include the ability to encode more nuanced and fluid gender representations, and for authors to correct their own entries. Further work will also include exploring topical co-occurrence, institutional bias in peer review, how institutions affect centrality in the co-authorship network, and how authors who move between institutions affect all these dynamics.

The Digital Humanities will never be perfect. It embodies the worst of its criticisms and the best of its ideals, sometimes simultaneously. We believe a more diverse community will help tip those scales in the right direction, and present this chapter in service of that belief.

Works Cited

#whatifdh2015 “TAGS Searchable Twitter Archive,” n.d. http://hawksey.info/tagsexplorer/arc.html?key=10C2c1phG1QywDmy4lG4mro6VBiv0UuZlLL_uZ8HFfkc&gid=400689247

ADHO. “Our Mission,” n.d. http://adho.org/

“ADHO Announces New Steering Committee Chair.” ADHO, n.d. http://www.adho.org/announcements/2015/adho-announces-new-steering-committee-chair

“All Models Are Wrong.” Wikipedia, September 20, 2015. https://en.wikipedia.org/w/index.php?title=All_models_are_wrong&oldid=681908687

Blevins, Cameron, and Lincoln Mullen. “Jane, John … Leslie? A Historical Method for Algorithmic Gender Prediction.” Digital Humanities Quarterly 9, no. 3 (2015). http://www.digitalhumanities.org/dhq/vol/9/3/000223/000223.html

Boyles, Christina. “#WhatIfDH2016 Made Space for Scholars Who Are Interested in the Intersection(s) between DH and Race, Gender, and Sexuality Studies?” @clboyles, July 1, 2015. https://twitter.com/clboyles/statuses/616080151365861376

Burton, John W. Culture and the Human Body: An Anthropological Perspective. Prospect Heights, Ill.: Waveland Press, 2001.

“centerNet,” n.d. http://www.dhcenternet.org/

Cohen, Dan. “Catching the Good.” Dan Cohen, March 30, 2012. http://www.dancohen.org/2012/03/30/catching-the-good/

“Conditionally Accepted.” Inside Higher Education, n.d. https://www.insidehighered.com/users/conditionally-accepted

“Conference.” ADHO, n.d. http://adho.org/conference

“Congrats, You Have an All Male Panel!” n.d. http://allmalepanels.tumblr.com/

“DH Dark Sider (@DHDarkSider) | Twitter,” n.d. https://twitter.com/dhdarksider

“DH Enthusiast (@DH_Enthusiast) | Twitter,” n.d. https://twitter.com/DH_Enthusiast

“Disrupting the Digital Humanities.” Disrupting the Digital Humanities, n.d. http://www.disruptingdh.com/

Diversity in DH @ THATCamp. “Toward an Open Digital Humanities,” January 11, 2011. https://docs.google.com/document/d/1uPtB0xr793V27vHBmBZr87LY6Pe1BLxN-_DuJzqG-wU/edit?usp=sharing

Drucker, Johanna. “Humanistic Theory and Digital Scholarship.” In Debates in the Digital Humanities. University of Minnesota Press, 2012. http://dhdebates.gc.cuny.edu/debates/text/34

“Fight The Tower : Women of Color in Academia,” n.d. http://fighttower.com/

Ghilarducci, Teresa. “Why Women Over 50 Can’t Find Jobs.” Portside, n.d. http://portside.org/2016-01-18/why-women-over-50-can’t-find-jobs

“Global Outlook::Digital Humanities | Promoting Collaboration among Digital Humanities Researchers World-Wide,” n.d. http://www.globaloutlookdh.org/

Golumbia, David. “Right Reaction and the Digital Humanities.” Uncomputing, July 3, 2015. http://www.uncomputing.org/?p=1666

Howe, Tonya. “#whatifDH2016 Advocated for More Process-Oriented Liberal Arts Delegates?” Microblog. Twitter.com/howet, June 30, 2015. https://twitter.com/howet/statuses/616045260570030080

Hunt, Lynn. “Has the Battle Been Won? The Feminization of History.” Perspectives on History, May 1998. https://www.historians.org/publications-and-directories/perspectives-on-history/may-1998/has-the-battle-been-won-the-feminization-of-history

Lothian, Alexis. “THATCamp and Diversity in Digital Humanities.” Queer Geek Theory, n.d. http://www.queergeektheory.org/2011/01/thatcamp-and-diversity-in-digital-humanities/

Milem, Jeffrey F., Mitchell J. Chang, and Anthony Lising Antonio. “Making Diversity Work on Campus: A Research-Based Perspective.” Association of American Colleges and Universities, 2005. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.129.2597&rep=rep1&type=pdf

Morrison, Aimée. “#WhatIfDH2016 Had as Many Women on the Stage as in the Audience? http://www.scottbot.net/HIAL/?p=41355 #dh2015.” Microblog. @digiwonk, June 30, 2015. https://twitter.com/digiwonk/status/616042963093835776

Mullen, Lincoln. Ropensci/gender: Predict Gender from Names Using Historical Data, n.d. https://github.com/ropensci/gender

Nowviskie, Bethany. “Asking for It.” Bethany Nowviskie, February 8, 2014. http://nowviskie.org/2014/asking-for-it/

———. “Cats and Ships.” Bethany Nowviskie, November 2, 2012. http://nowviskie.org/2012/cats-and-ships/

Ohio State University. “Diversity Action Plan,” n.d. https://www.osu.edu/diversityplan/index.php

Perkins, Yvonne. “International Researchers Value Work of Australian Libraries and Archives.” Stumbling Through the Past, July 20, 2015. https://stumblingpast.wordpress.com/2015/07/21/intnl_researchers_value_oz_libraries_archives/

Peterson, Helen. “An Academic ‘Glass Cliff’? Exploring the Increase of Women in Swedish Higher Education Management.” Athens Journal of Education 1, no. 1 (February 2014): 32–44.

Pomerantz, Shauna, Rebecca Raby, and Andrea Stefanik. “Girls Run the World? Caught between Sexism and Postfeminism in the School.” Gender & Society 27, no. 2 (April 1, 2013): 185-207. doi:10.1177/0891243212473199

Posner, Miriam. “What’s Next: The Radical, Unrealized Potential of Digital Humanities.” Miriam Posner’s Blog, July 27, 2015. http://miriamposner.com/blog/whats-next-the-radical-unrealized-potential-of-digital-humanities/

“Postcolonial Digital Humanities | Global Explorations of Race, Class, Gender, Sexuality and Disability within Cultures of Technology,” n.d. http://dhpoco.org/

Steiger, Kay. “The Pink Collar Workforce of Academia: Low-Paid Adjunct Faculty, Who Are Mostly Female, Have Started Unionizing for Better Pay—and Winning.” The Nation, July 11, 2013. http://www.thenation.com/article/academias-pink-collar-workforce/

Terras, Melissa. “Disciplined: Using Educational Studies to Analyse ‘Humanities Computing.’” Literary and Linguistic Computing 21, no. 2 (June 1, 2006): 229–46. doi:10.1093/llc/fql022

———. “Peering Inside the Big Tent: Digital Humanities and the Crisis of Inclusion.” Melissa Terras’ Blog, July 26, 2011. http://melissaterras.blogspot.com/2011/07/peering-inside-big-tent-digital.html

“THATCamp Southern California 2011 | The Humanities and Technology Camp,” n.d. http://socal2011.thatcamp.org/

“University of Venus.” Inside Higher Education, n.d. https://www.insidehighered.com/blogs/university-venus

U.S. Department of Education, National Center for Education Statistics. “Race/ethnicity of College Faculty,” 2015. https://nces.ed.gov/fastfacts/display.asp?id=61

Weingart, Scott. “Acceptances to Digital Humanities 2015 (part 4).” The Scottbot Irregular, June 28, 2015. http://www.scottbot.net/HIAL/?p=41375

———. “The Myth of Text Analytics and Unobtrusive Measurement.” The Scottbot Irregular, May 6, 2012. http://www.scottbot.net/HIAL/?p=16713

Wernimont, Jacqueline. “Build a Better Panel: Women in DH.” Jacqueline Wernimont. Accessed January 14, 2016. https://jwernimont.wordpress.com/2015/09/19/build-a-better-panel-women-in-dh/

———. “No More Excuses.” Jacqueline Wernimont, September 19, 2015. https://jwernimont.wordpress.com/2015/09/19/no-more-excuses/

Winslow, Sarah. “Gender Inequality and Time Allocations Among Academic Faculty.” Gender & Society 24, no. 6 (December 1, 2010): 769–93. doi:10.1177/0891243210386728.

Zafrin, Vika. “#WhatIfDH2016 Created Guidelines for Reviewers Explicitly Addressing Diversity & Providing Guidance on Reducing One’s Bias?” Microblog. @veek, June 30, 2015. https://twitter.com/veek/status/616041712163680256

———. “#WhatIfDH2016 Encouraged ALL Community Members to Reach out to Women & POC and Solicit Paper Submissions?” Microblog. @veek, June 30, 2015. https://twitter.com/veek/statuses/616041931949363200

———. “#WhatIfDH2016 Expanded ConfTool Pro to Record Reviewer Biases along Gender, Race, Country-of-Origin GDP Lines?” Microblog. @veek, June 30, 2015. https://twitter.com/veek/statuses/616043562799636481

Notes:

  1. Each author contributed equally to the final piece; please disregard authorship order.
  2. See Melissa Terras, “Disciplined: Using Educational Studies to Analyse ‘Humanities Computing.’” Literary and Linguistic Computing 21, no. 2 (June 1, 2006): 229–46. doi:10.1093/llc/fql022. Terras takes a similar approach, analyzing Humanities Computing “through its community, research, curriculum, teaching programmes, and the message they deliver, either consciously or unconsciously, about the scope of the discipline.”
  3. The authors have created a browsable archive of #whatifDH2016 tweets.
  4. Of the 146 presentations at DH2011, two stand out in relation to diversity in DH: “Is There Anybody out There? Discovering New DH Practitioners in other Countries” and “A Trip Around the World: Balancing Geographical Diversity in Academic Research Teams.”
  5. See “Disrupting DH,” http://www.disruptingdh.com/
  6. See Wernimont’s blog post, “No More Excuses” (September 2015) for more, as well as the Tumblr blog, “Congrats, you have an all male panel!”
  7. Miriam Posner offers a longer and more eloquent discussion of this in, “What’s Next: The Radical, Unrealized Potential of Digital Humanities.” Miriam Posner’s Blog. July 27, 2015. http://miriamposner.com/blog/whats-next-the-radical-unrealized-potential-of-digital-humanities/
  8. [Link to the full public dataset, forthcoming; will be made available by time of publication]
  9. We would like to acknowledge that race and ethnicity are frequently used interchangeably, though both are cultural constructs with their roots in Darwinian thought, colonialism, and imperialism. We retain these terms because they express cultural realities and lived experiences of oppression and bias, not because there is any scientific validity to their existence. For more on this tension, see John W. Burton (2001), Culture and the Human Body: An Anthropological Perspective. Prospect Heights, Illinois: Waveland Press, 51-54.
  10. Weingart, S.B. & Eichmann, N. (2016). “What’s Under the Big Tent?: A Study of ADHO Conference Abstracts.” Manuscript submitted for publication.
  11. We used the process and script described in: Lincoln Mullen (2015). gender: Predict Gender from Names Using Historical Data. R package version 0.5.0.9000 (https://github.com/ropensci/gender) and Cameron Blevins and Lincoln Mullen, “Jane, John … Leslie? A Historical Method for Algorithmic Gender Prediction,” Digital Humanities Quarterly 9.3 (2015).
  12. For a breakdown of specific numbers of gender representation across all 96 topics from 2013-2015, see Weingart’s “Acceptances to Digital Humanities 2015 (part 4)”.
  13. While ADHO’s annual conference is usually the largest annual gathering of digital humanists, that place is constantly being vied for by the Digital Humanities Summer Institute in Victoria, Canada, which in 2013 boasted more attendees than DH2013 in Lincoln, Nebraska.

Acceptances to DH2016 (pt. 1)

[note: originally published as draft on March 17th, 2016]

DH2016 announced their final(ish) program yesterday and, of course, that means it’s analysis time. Every year, I scrape submission data from the reviewer interface and then scrape the final conference program to report acceptance rates and basic stats for the annual event. See my 7.2 million previous posts on the subject. Nobody gives me data, I take it (capta, amiright?), so take these results with as many grains of salt as you’ll find at the DH2016 salt mines.

As expected, this will be the biggest ADHO conference to date, continuing a mostly-consistent trend of yearly growth. Excluding workshops & keynotes, this year’s ADHO conference in Kraków, Poland will feature 417 posters & presentations, up from 259 in 2015 (an outlier, held in Australia) and the previous record of 345 in 2014 (Switzerland). At this rate, the number of DH presentations should surpass our human population by the year 2126 (or earlier in the case of unexpected zombies).

Number of conference presentations since 2000.

Acceptance rates this year are on par with previous years. An email from ADHO claims this year’s overall acceptance rate to be 62%, and my calculations put it at 64%. Previous years were within this range: 2013 Nebraska (64%), 2014 Switzerland (59%), and 2015 Australia (72%). Regarding form, the most difficult type of presentation to get through review is a long paper, with only 44% of submitted long papers being accepted as long papers. Another 7.5% of long papers were accepted as posters, and 10% as short papers. In total, 62% of long paper submissions were accepted in some form. Reviewers accepted 75% of panels and posters, leaving them mostly in their original form. The category least likely to get accepted in any form was the short paper, with an overall 59% acceptance rate (50% accepted as short papers; 8% accepted as posters). The moral of the story is that your best bet to get accepted to DH is to submit a poster. If you hate posters, submit a long paper; even if it’s not accepted as a long paper, it might still get in as a short or a poster. But if you do hate posters, maybe just avoid this conference.
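
For the curious, these by-type numbers come from a simple cross-tabulation of submitted type against final type. A hedged sketch, with hypothetical file and column names:

```python
import pandas as pd

subs = pd.read_csv("dh2016_submissions_vs_program.csv")  # hypothetical

# Each row: one submission, its submitted type, and its type in the final
# program (or "rejected/withdrawn" if it never appeared there).
crosstab = pd.crosstab(subs["submitted_type"], subs["final_type"],
                       normalize="index")
print(crosstab.round(2))

# Acceptance in any form, by submission type
accepted = subs["final_type"] != "rejected/withdrawn"
print(accepted.groupby(subs["submitted_type"]).mean())
```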

Proportion of acceptances by type, 2016. Submission type on left, acceptance type or rejection on right.

About a third of this year’s presentations are single-authored, another third dual-authored, and the last third are authored by three or more people. As with 2013-2015, more authors means a more likely acceptance: reviewers accepted 51% of single-authored presentations, 66% of dual-authored presentations, and 74% of three-or-more-authored presentations.

Acceptance rate by number of authors.

Topically, the landscape of DH2016 will surprise few. A quarter of all presentations will involve text analysis, followed by historical studies (23% of all presentations), archives (21%), visualizations (20%), text/data mining (20%), and literary studies (20%). DH self-reflection is always popular, with this year’s hot-button issues being DH diversity (10%), DH pedagogy (10%), and DH facilities (7%). Surprisingly, other categories pertaining to pedagogy are also growing compared to previous years, though mostly because more was submitted in that area, not because reviewers warmed to it. Reviewers still don’t rate pedagogy presentations very highly, but more on that in the next post. Some topical low spots compared to previous years include social media (2% of all presentations), anthropology (3%), VR/AR (3%), crowdsourcing (4%), and philosophy (5%).

This will likely be the most linguistically diverse conference thus far: 92% English, 7% French, 0.5% German, with other presentations in Spanish, Italian, Polish, etc. (And by “most linguistically diverse” obviously I mean “really not very diverse, but have you seen the previous conferences?”) Submitting in a non-English language doesn’t appreciably affect acceptance rates.

That’s all for now. Stay tuned for Pt. 2, with more thorough comparisons to previous years, actual granular data/viz on topics, analyses of gender and geography, and interpretations of what the changing landscape means for DH.

Submissions to DH2016 (pt. 1)

tl;dr Basic numbers on DH2016 submissions.


Twice a year I indulge my meta-disciplinary sweet tooth: once to look at who’s submitting what to ADHO’s annual digital humanities conference, and once to look at which pieces get accepted (see the rest of the series). This post presents my first look at DH2016 conference submissions, the data for which I scraped from ConfTool during the open peer review bidding phase. Open peer review bidding began in 2013, so I have 4 years of data. I opt not to publish this data, as most authors submit pieces under an expectation of privacy, and might violently throw things at my face if people find out which submissions weren’t accepted. Also ethics.

Submission Numbers & Types

The basic numbers: 652 submissions (268 long papers, 223 short papers, 33 panels / multiple paper sessions, 128 posters). For those playing along at home, that’s:

  • 2013 Nebraska: 348 (144/118/20/66)
  • 2014 Lausanne: 589 (250/198/30/111)
  • 2015 Sydney: 360 (192/102/13/53)
  • 2016 Kraków: 652 (268/223/33/128)
Comparison of submission types, DH2013-DH2016.

DH2016 submissions are on pace to continue the consistent-ish trend of yearly growth since 1999; the large dip in 2015 is unsurprising given its very different author pool and the fact that it was the first time the conference visited the southern hemisphere or the Asia-Pacific region. The different author pool in 2015 also likely explains why it was the only conference to deviate from the normal submission-type ratios.

Co-Authorship

Regarding co-authorship, the distribution has shifted this year, though not enough to pass any significance tests.

Co-authorship in DH2013-DH2016 submissions.

DH2016 has proportionally slightly fewer single authored papers than previous years, and slightly more 2-, 3-, and 4-authored papers. One submission has 17 authors (not quite the 5,154-author record of high energy physics, but we’re getting there, eh?), but mostly it’s par for the course here.

Topics

Topically, DH2016 submissions continue many trends seen previously.

Authors must tag their submissions with multiple categories, or topics, from a controlled vocabulary. The figure presents the list of topics tagged to submissions, ordered top to bottom by the proportion of 2016 submissions carrying each tag. Nearly 25% of DH2016 submissions, for example, were tagged with “Text Analysis”. The dashed lines represent previous years’ tag proportions, with the darkest representing 2015, getting lighter toward 2013. New topics, those which just entered the controlled vocabulary this year, are listed in red: 3D Printing, DH Multilinguality, and DH Diversity.

Scroll past the long figure below to read my analysis:

Topics tagged to DH2016 submissions, with dashed lines showing 2013-2015 proportions.

In a reveal that will shock all species in the known universe, text analysis dominates DH2016 submissions—the proportion even grew from previous years. Text & data mining, archives, and data visualization aren’t far behind, each growing from previous years.

What did actually (pleasantly) surprise me was that, for the first time since I began counting in 2013, history submissions outnumber literary ones. Compare this to 2013, when literary studies were twice as well represented as historical. Other top-level categories experiencing growth include: corpus studies, content analysis, knowledge representation, NLP, and linguistics.

Two areas which I’ve pointed out previously as needing better representation, geography and pedagogy, both grew compared to previous years. I’ve also pointed out a lack of discussion of diversity, but part of that lack was that authors had no “diversity” category to label their research with—that is, the issue I pointed out may have been as much a problem with the topic taxonomy as with the research itself. ADHO added “Diversity” and “Multilinguality” as potential topic labels this year, which were tagged to 9.4% and 6.5% of submissions, respectively. One-in-ten submissions dealing specifically with issues of diversity is encouraging to see.

Unsurprisingly, since Sydney, submissions tagged “Asian Studies” have dropped. Other consistent drops over the last few years include software design, A/V & multimedia (sadface), information retrieval, XML & text encoding, internet & social media-related topics, crowdsourcing, and anthropology. The conference is also getting less self-referential, with a consistent drop in DH histories and meta-analyses (like this one!). Mysteriously, submissions tagged with the category “Other” have dropped rapidly each year, suggesting… dunno, aliens?

I have the suspicion that some numbers are artificially growing because there are more topics tagged per article this year than previous years, which I’ll check and report on in the next post.

It may be a while before I upload the next section due to other commitments. In the meantime, you can fill your copious free time reading earlier posts on this subject or my recent book with Shawn Graham & Ian Milligan, The Historian’s Macroscope. Maybe you can buy it for your toddler this holiday season. It fits perfectly in any stocking (assuming your stockings are infinitely deep, like Mary Poppins’ purse, which as a Jew watching Christmas from afar I just always assume is the case).

Work with me! CMU is hiring a DH Developer

Carnegie Mellon University is hiring a DH Developer!

I’ve had a blast since starting as Digital Humanities Specialist at CMU. Enough administrators, faculty, and students are on board to make building a DH strength here pretty easy, and we’re neighbors to Pitt DHRX, a really supportive supercomputing center, and great allies in the Mayor’s Office keen on a city rich with art, data, and both combined.

We want a developer to help jump-start our research efforts. You’ll work as a full collaborator on projects from all sorts of domains, and as a review board member you’ll have a strong say in which projects we take on and how they get implemented. You and I will work together on achievable rapid prototyping, data analysis, and web deployment.

The idea is we build or do stuff that’s scholarly, interesting, and can have a proof-of-concept or article done in a semester or two. With that, the project can go on to seek additional funding and a full-time specialized programmer, or we can finish there and all be proud authors or creators of something we enjoyed making.

Ideally, you have a social science, humanities, journalism, or similar research background, and the broad tech chops to create a d3 viz, DeepDream some dogs into a work of art, manage a NoSQL database, and whatever else seems handy. Ruby on Rails, probably.

We’re looking for someone who loves playing with new tech stacks, isn’t afraid to get their hands dirty, and knows how to talk to humans. You probably have a static site and a github account. You get excited by interactive data stories, and want to make them with us. This job values breadth over depth and done over perfect.

The job isn’t as insane as it sounds—you don’t actually need to be able to do all this already, just be the sort of person who can learn on the fly. A bachelor’s degree or similar experience is required, with a strong preference for candidates with some research background. You’ll need to submit or point to some examples of work you’ve done.

We’re an equal opportunity employer, and would love to see applications from women, minorities, or other groups who often have a tough time getting developer jobs. If you work here you can take two free classes a semester. Say, who wants a fancy CMU computer science graduate degree? We can offer an awesome city, friendly coworkers, and a competitive salary (also Pittsburgh’s cheap so you wouldn’t live in a closet, like in SF or NYC).

What I’m saying is you should apply ’cause we love you.


The ad, if you’re too lazy to click the link, or are scared CMU hosts viruses:

Job Description
Digital Humanities Developer, Dietrich College of Humanities and Social Sciences

Summary
The Dietrich College of Humanities and Social Sciences at Carnegie Mellon University (CMU) is undertaking a long-term initiative to foster digital humanities research among its faculty, staff, and students. As part of this initiative, CMU seeks an experienced Developer to collaborate on cutting edge interdisciplinary projects.

CMU is a world leader in technology-oriented research, and a highly supportive environment for cross-departmental teams. The Developer would work alongside researchers from Dietrich and elsewhere to plan and implement digital humanities projects, from statistical analyses of millions of legal documents to websites that crowdsource grammars of endangered languages. Located in the Office of the Dean under CMU’s Digital Humanities Specialist, the developer will help turn faculty projects into functioning prototypes that can acquire sustaining funding to hire specialists for more focused development.

The position emphasizes rapid, iterative deployment and the ability to learn new techniques on the job, with a focus on technologies intersecting data science and web development, such as D3.js, NoSQL, Shiny (R), IPython Notebooks, APIs, and Ruby on Rails. Experience with digital humanities or computational social sciences is also beneficial, including work with machine learning, GIS, or computational linguistics.

The individual in this position will work with clients and the digital humanities specialist to determine achievable short-term prototypes in web development or data analysis/presentation, and will be responsible for implementing the technical aspects of these goals in a timely fashion. As a collaborator, the Digital Humanities Developer will play a role in project decision-making, where appropriate, and will be credited on final products to which they extensively contribute.

Please submit a cover letter, phone numbers and email addresses for two references, a résumé or cv, and a page describing how your previous work fits the job, including links to your github account or other relevant previous work examples.

Qualifications

  • Bachelor’s Degree in humanities computing, digital humanities, informatics, computer science, related field, or equivalent combination of training and experience.
  • At least one year of experience in modern web development and/or data science, preferably in a research and development team setting.
  • Demonstrated knowledge of modern machine learning and web development languages and environments, such as some combination of Ruby on Rails, LAMP, relational databases or NoSQL (MongoDB, Cassandra, etc.), MV* & JavaScript (including D3.js), PHP, HTML5, Python/R, as well as familiarity with open source project development.
  • Some system administration experience.

Preferred Qualifications

  • Advanced degree in digital humanities, computational social science, informatics, or data science. Coursework in data visualization, machine learning, statistics, or MVC web applications.
  • Three or more years at the intersection of web development/deployment and machine learning (e.g. data journalism or digital humanities) in an agile software environment.
  • Ability to assess client needs and offer creative research or publication solutions.
  • Any combination of GIS, NLTK, statistical models, ABMs, web scraping, mahout/hadoop, network analysis, data visualization, RESTful services, testing frameworks, XML, HPC.

Job Function: Research Programming

Primary Location: United States-Pennsylvania-Pittsburgh

Time Type: Full Time

Organization: DIETRICH DEAN’S OFFICE

Minimum Education Level: Bachelor’s Degree or equivalent

Salary: Negotiable

What’s Counted Counts

tl;dr. Don’t rely on data to fix the world’s injustices. An unusually self-reflective and self-indulgent post.

[Edit: this question was prompted by a series of analyses and visualizations I’ve done in collaboration with Nickoal Eichmann, but I purposefully left her out of the majority of this post, as it was one of self-reflection about my own personal choices. A respected colleague pointed out in private that by doing so, I nullified my female collaborator’s contributions to the project, for which I apologize deeply. Nickoal’s input has been integral to all of this, and she and many others, including particularly Jeana Jorgensen and Heather Froehlich (who has written on this very subject), have played vital roles in my own learning about these issues. Recent provocations by Miriam Posner helped solidify a lot of these thoughts and inspired this post. What follows is a self-exploration, recapping what many people have already said, but hopefully still useful to some. Mistakes below shouldn’t reflect poorly on those who influenced or inspired me. The post from this point on is as it originally appeared.]


Someone asked yesterday why I cared enough 1 about gender equality in academia to make this chart (with Nickoal Eichmann).

Gender representation as authors at DH conferences over the last decade. Context. (Women consistently represent around 33% of authors)

I didn’t know how to answer the question. Our culture gives some people more and better opportunities than others, so in order to make things better for more people, we must reveal and work towards resolving points of inequality. “Why do I care?” Don’t most of us want to make things better? We just go about it in different ways, and have different ideas of what “better” means.

But the question did make me consider why I’d started with gender equality, when there are clearly so many other equally important social issues to tackle, within and outside academia. The answer was immediately obvious: ease. I’d attempted to explore racial and ethnic diversity as well, but it was simply more fraught, complicated, and less amenable to my methods than gender, so I started with gender and figured I’d work my way into the weeds from there. 2

I’ll cut to the chase. My well-intentioned attempts at battling inequality suffer their own sort of bias: by focusing on measurements of inequality, I bias that which is easily measured. It’s not that gender isn’t complex (see Miriam Posner’s wonderful recent keynote on these and related issues), but at least it’s a little easier to measure than race & ethnicity, when all you have available to you is what you can look up on the internet.


Saturday Morning Breakfast Cereal. [source]
While this problem is far from new, it takes special significance in a data-driven world. That which is countable counts, and damn the rest. At its heart, this problem is one of classification and categorization: those social divides which have the clearest seams are those most easily counted. And in a data-driven world, it’s inequality along these clear divides which gets noticed first, even when injustice elsewhere is far greater.

Sex is easy, compared to gender. At most 2% of people are born intersex according to most standards (though that doesn’t account for gender dysphoria and the like). And gender is relatively easy compared to race and ethnicity. Nationality is pretty easy because of bureaucratic requirements for passports and citizenship, and country of residence is even easier, unless you live somewhere like Palestine.

But even the Palestine issue isn’t completely problematic, because counting still works fine when one thing exists in multiple categories, or may be categorized differently in different systems. That’s okay.

Where math gets lost is where there are simply no good borders to draw around entities—or worse, there are borders, but those borders themselves are drawn by insensitive outgroups. We see this a lot in the history of colonialism. Have you ever been to the Pitt Rivers Museum in Oxford? It’s a 19th century museum that essentially shows what the 19th century British mind felt about the world: everything that looks like a flute is in the flute cabinet, everything that looks like a gun is in the gun cabinet, and everything that looks like a threatening foreign religious symbol is in the threatening foreign religious symbol cabinet. Counting such a system doesn’t reveal any injustice except that of the counters themselves.

Pitt Rivers Museum [source]
And I’ll be honest here: I want to help make the world a better place, but I’ve got to work to my strengths and know my limits. I’m a numbers guy. I’m at my best when counting stuff, and when there are no sensitive ways to classify, I avoid counting, because I don’t want to be That Colonizing White Dude who tries to fit everything into boxes of his own invention to make himself feel better about what he’s doing for the world. I probably still fall into that trap a lot anyway.

So why did I care enough to count gender at DH conferences? It was (relatively) easy. And it’s needed, as we saw at DH2015 and we’ve seen throughout the digital humanities – we have a gender issue, and a feminism issue, and they both need to be pointed out and addressed. But we also have lots of other issues that I’ll simply never be able to approach, and don’t know how to approach, and am in danger of ignoring entirely if I only rely on quantitative evidence of inequality.

useless by xkcd

Of course, only relying on non-quantitative evidence has its own pitfalls. People evolved and are socialized to spot patterns, to extrapolate from limited information, even when those extrapolations aren’t particularly meaningful or lead to Jesus in a slice of toast. I’m not advocating we avoid metrics entirely (for one, I’d be out of a job), but echoing Miriam Posner’s recent provocation, we need to engage with techniques, approaches, and perspectives that don’t rely on easy classification schemes. Especially, we need to listen when people notice injustice that isn’t easily classified or counted.

“Uh, yes, Scott, who are you writing this for? We already knew this!” most of you are likely asking if you’ve read this far. I’m writing to myself in early college, an engineering student obsessed with counting, who’s slowly learned the holes in a worldview that only relies on quantitative evidence. The one who spent years quantifying his health issues, only to discover the pursuit of a number eventually took precedence over the pursuit of his own health. 3

Hopefully this post helps balance all the bias implicit in my fighting for a better world from a data-driven perspective, by suggesting “data-driven” is only one of many valuable perspectives.

Notes:

  1. Upon re-reading, the original question was actually “Why did you do it? (or why are you interested?)”. Still, this post remains relevant.
  2. I’m light on details here because I don’t want this to be an overlong post, but you can read some more of the details on what Nickoal and I are doing, and the decisions we make, in this blog series.
  3. A blog post on mental & physical health in academia is forthcoming.

Acceptances to Digital Humanities 2015 (part 4)

tl;dr

Women are (nearly but not quite) as likely as men to be accepted by peer reviewers at DH conferences, but authors with first names unfamiliar in the US are less likely than either to be accepted. Some topics are more likely to be written on by women (gender, culture, teaching DH, creative arts & art history, GLAM, institutions), and others are more likely to be discussed by men (standards, archaeology, stylometry, programming/software).

Introduction

You may know I’m writing a series on Digital Humanities conferences, of which this is the zillionth post. 1 This post has nothing to do with DH2015, but instead looks at DH2013, DH2014, and DH2015 all at once. I continue my recent trend of looking at diversity in Digital Humanities conferences, drawing especially on these two posts (1, 2) about topic, gender, and acceptance rates.

This post will be longer than usual, since Heather Froehlich rightly pointed out my methods in these posts aren’t as transparent as they ought to be, and I’d like to change that.

Brute Force Guessing

As someone who deals with algorithms and large datasets, I desperately seek out those moments when really stupid algorithms wind up aligning with a research goal, rather than getting in the way of it.

In the humanities, stupid algorithms are much more likely to get in the way of my research than to help it along, and they tempt me into insensitive or reductivist decisions in the name of “scale”. For example, in looking at the ethnic diversity of a discipline, I can think of two data-science-y approaches to the problem: analyzing last names for country of origin, or analyzing the color of recognized faces in pictures from recent conferences.

Obviously these are awful approaches, for a billion reasons I need not enumerate, including the facts that ethnicity and skin color often don’t align, and last names (especially in the States) are rarely indicative of anything at all. But they’re easy solutions, so you see people using them pretty often. I try to avoid that.

Sometimes, though, the stars align and the easy solution is the best one for the question. Let’s say we were looking to understand immediate reactions of racial bias; in that case, analyzing skin tone may get us something useful because we don’t actually care about the race of the person, what we care about is the immediate perceived race by other people, which is much more likely to align with skin tone. Simply: if a person looks black, they’re more likely to be treated as such by the world at large.

This is what I’m banking on for peer review data and bias. For the majority of my data on DH conferences, Nickoal Eichmann and I have been going in and hand-coding every single author with a gender that we glean from their website, pictures, etc. It’s quite slow and far from perfect (see my note), but it’s at least more sensitive than the brute force method; we hope to improve it quite soon with user-submitted genders, and it gets us a rough estimate of gender ratios at DH conferences.

But let’s say we want to discuss bias, rather than diversity. In that case, I actually prefer the brute force method, because instead of giving me a sense of the actual gender of an author, it can give me a sense of what the peer reviewers perceive an author’s gender to be. That is, if a peer reviewer sees the name “Mary” as the primary author of an article, how likely is the reviewer to think the article is written by a woman, and will this skew their review?

That’s my goal today, so instead of hand-coding like usual, I went to Lincoln Mullen’s fabulous package for inferring gender from first names in the programming language R. It works by looking up the percentage of men and women with a given first name in the US Census and Social Security data, and it gives you both the ratio of men-to-women with that name and the most likely guess of the person’s gender.
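
For the curious, the lookup is only a few lines of R. Here’s a minimal sketch (not my actual analysis script) with made-up example names; note the SSA data behind method = "ssa" may need to be installed separately alongside the package.

    # Minimal sketch: infer likely gender from first names with Lincoln Mullen's
    # 'gender' package, using the US Social Security ("ssa") method.
    # install.packages("gender")   # the underlying 'genderdata' package may also be required
    library(gender)

    first_names <- c("Mary", "Scott", "Melissa", "Jukka")   # hypothetical examples
    guesses <- gender(first_names, method = "ssa")

    # The result includes proportion_male, proportion_female, and a best-guess gender;
    # names missing from the data return no row at all ("could not be inferred").
    guesses[, c("name", "proportion_female", "gender")]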

Inferring Gender for Peer Review

I don’t have a palantír and my DH data access is not limitless. In fact, everything I have I’ve scraped from public or semi-public spaces, which means I have no knowledge of who reviewed what for ADHO conferences, the scores given to submissions, etc. What I do have is the titles and author names for every submission to an ADHO conference since 2013 (explanation), and the final program of those conferences. This means I can see which submissions don’t make it to the presentation stage; that’s not always a reflection of whether an article gets accepted, but it’s probably pretty close.

So here’s what I did: created a list of every first name that appears on every submission, rolled that list into Lincoln Mullen’s gender inference machine, and then looked at how often authors guessed to be men made it through to the presentation stage, versus how often authors guessed to be women made it through. That is to say, if an article is co-authored by one man and three women, and it makes it through, I count it as one acceptance for men and three for women. It’s not the only way to do it, but it’s the way I did it.

I’m arguing this can be used as a proxy for gender bias in reviews and editorial decisions: that if first names that look like women’s names are more often rejected 2 than ones that look like men’s names, there’s likely bias in the review process.
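
Concretely, the counting works something like the sketch below. The file and column names are hypothetical stand-ins for my scraped data; each row represents one author on one submission, plus a flag for whether that submission reached the final program.

    # Sketch of the authorship-event counting described above (hypothetical
    # columns: first_name, accepted). Each row is one author on one submission.
    library(gender)

    authorships <- read.csv("authorship_events.csv")   # hypothetical file

    guesses <- gender(unique(authorships$first_name), method = "ssa")
    authorships$guess <- guesses$gender[match(tolower(authorships$first_name),
                                              tolower(guesses$name))]
    authorships$guess[is.na(authorships$guess)] <- "unknown"   # name not inferrable

    # Acceptance rate per inferred group, counting every authorship separately
    aggregate(accepted ~ guess, data = authorships, FUN = mean)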

Results: Bias in Peer Review?

Totaling all authors from 2013-2015, the inference machine told me 1,008 names looked like women’s names; 1,707 looked like men’s names; and 515 could not be inferred. “Could not be inferred” is code for “the name is foreign-sounding and there’s not enough data to guess”. Remember as well, this is counting every authorship as a separate event, so if Melissa Terras submits one paper in 2013 and one in 2014, the name “Melissa” appears in my list twice.

*drum roll*

Figure 1. Acceptance rates to DH2013-2015 by gender.

So we see that in 2013-2015, 70.3% of woman-authorship-events get accepted, 73.2% of man-authorship-events get accepted, and only 60.6% of uninferrable-authorship-events get accepted. I’ll discuss gender more soon, but this last bit was totally shocking to me. It took me a second to realize what it meant: that if your first name isn’t a standard name on the US Census or Social Security database, you’re much less likely to get accepted to a Digital Humanities conference. Let’s break it out by year.

Figure 2. Acceptance rates to DH2013-2015 by gender and year.

We see an interesting trend here, some of it surprising, some not. Least surprising is that the acceptance rates for non-US names are most equal this year, when the conference is being held so close to Asia (the region the inference machine seems to have the most trouble with). My guess is that A) more non-US people who submit are actually able to attend, and B) reviewers this year are more likely to be from the same sorts of countries the inference machine has difficulty with, so they’re less likely to be biased against non-US first names. There’s also potentially a language issue here: non-US submissions may be more likely to be rejected because they are either written in another language, or written in a way that native English speakers find difficult to understand.

But the fact of the matter is, there’s a very clear bias against submissions by people with names non-standard to the US. The bias, oddly, is most pronounced in 2014, when the conference was held in Switzerland. I have no good guesses as to why.

So now that we have the big effect out of the way, let’s get to the small one: gender disparity. Honestly, I had expected it to be worse; it is worse this year than the two previous, but that may just be statistical noise. It’s true that women do fare worse overall by 1-3%, which isn’t huge, but it’s big enough to mention. However.

Topics and Gender

However, it turns out that the entire gender bias effect we see is explained by the topical bias I already covered the other day. (Scroll down for the rest of the post.)

Figure 3. Topic by gender. Total size of horizontal grey bar equals the number of submissions to a topic. Horizontal black bar shows the percentage of that topic with women authors. Orange line shows the 38% mark, which is the expected number of submissions by women given the 38% submission ratio to DH conferences. Topics are ordered top-to-bottom by highest proportion of women. The smaller the grey bar, the more statistical noise / less trustworthy the result.

What’s shown here will be fascinating to many of us, and some of it more surprising than others. A full 67% of authors on the 25 DH submissions labeled “gender studies” are labeled as women by Mullen’s algorithm. And remember, many of those may be the same author; for example if “Scott Weingart” is listed as an author on multiple submissions, this chart counts those separately.

Other topics that are heavily skewed towards women: drama, poetry, art history, cultural studies, GLAM, and (importantly) institutional support and DH infrastructure. Remember how I said a large percentage of those responsible for running DH centers, committees, and organizations are women? This is apparently the topic they’re publishing in.

If we look instead at the bottom of the chart, those topics skewed towards men, we see stylometrics, programming & software, standards, image processing, network analysis, etc. Basically either the CS-heavy topics, or the topics from when we were still “humanities computing”, a more CS-heavy community. These topics, I imagine, inherit their gender ratio problems from the various disciplines we draw them from.

You may notice I left out pedagogical topics from my list above, which are heavily skewed towards women. I’m singling that out specially because, if you recall from my previous post, pedagogical topics are especially unlikely to be accepted to DH conferences. In fact, a lot of the topics women are submitting in aren’t getting accepted to DH conferences, you may recall.

It turns out that the gender bias in acceptance ratios is entirely accounted for by the topical bias. When you break out topics that are not gender-skewed (ontologies, UX design, etc.), the acceptance rates between men and women are the same – the bias disappears. What this means is the small gender bias is coming at the topical level, rather than at the gender level, and since women are writing more about those topics, they inherit the peer review bias.

Does this mean there is no gender bias in DH conferences?

No. Of course not. I already showed yesterday that 46% of attendees to DH2015 are women, whereas only 35% of authors are. What it means is the bias against topics is gendered, but in a peculiar way that actually may be (relatively) easy to solve, and if we do solve it, it’d also likely go a long way in solving that attendee/authorship ratio too.

Get more women peer reviewing for DH conferences.

Although I don’t know who’s doing the peer reviews, I’d guess that the gender ratio of peer reviewers is about the same as the ratio of authors; 34% women, 66% men. If that is true, then it’s unsurprising that the topics women tend to write about are not getting accepted, because by definition these are the topics that men publishing at DH conferences find less interesting or relevant 3. If reviewers gravitate towards topics of their own interest, and if their interests are skewed by gender, it’d also likely skew results of peer review. If we are somehow able to improve the reviewer ratio, I suspect the bias in topic acceptance, and by extension gender acceptance, will significantly reduce.

Jacqueline Wernimont points out in a comment below that another way of improving the situation is to break the “gender lines” I’ve drawn here, and to make sure to attend presentations on topics outside your usual scope if (like me) you gravitate more towards one side than the other.

Obviously this is all still preliminary, and I plan to show the breakdown of acceptances by topic and gender in a later post so you don’t just have to trust me on it, but at the 2,000-word-mark this is getting long-winded, and I’d like feedback and thoughts before going on.

Notes:

  1. rounding up to the nearest zillion
  2. more accurately, if they don’t make it to the final program
  3. see Jacqueline Wernimont’s comment below

Acceptances to Digital Humanities 2015 (part 3)

tl;dr

There’s a disparity between gender diversity in authorship and attendance at DH2015; attendees are diverse, authors aren’t. That said, the geography of attendance is actually pretty encouraging this year. A lot of this work draws on a project on the history of DH conferences I’m undertaking with the inimitable Nickoal Eichmann. She’s been integral to the research behind everything you read here about conferences pre-2013.

Diversity at DH2015: Preliminary Numbers

For those just joining us, I’m analyzing this year’s international Digital Humanities conference, being held in Sydney, Australia (part 1, part 2). This is the 10th post in a series of reflective entries on Digital Humanities conferences, throughout which I explore the landscape of Digital Humanities as it is represented by the ADHO conference. There are other Digital Humanities (a great place to start exploring them is Alex Gil’s arounddh), but since this is the biggest event, it’s also the main reflection of our community to the public and the non-DH academic world.

Figure 1. Map from Around DH in 80 Days.

If the DH conference is our public face, we all hope it does a good job of representing our constituent parts, big or small. It does not. The DH conference systematically underrepresents women and people from parts of the world that are not Europe or North America.

Until today, I wasn’t sure whether this was an issue of underrepresentation, an issue of lack of actual diversity among our constituents, or both. Today’s data have shown me it may be more underrepresentation than lack of diversity, although I can’t yet say anything with certainty without data from more conferences.

I come to this conclusion by comparing attendees at the conference with authors of presentations at the conference. My assumption is that if authorship and attendee diversity are equally poor, then we have a diversity problem. If instead attendance is diverse but authorship is not, then we have a representation problem. It turns out, at least in this dataset, the latter is true. I’ve been able to reach this conclusion because the conference organizing committee (themselves a diverse, fantastic bunch) have published and made available the DH2015 attendance list.

Because this is an important subject, this post is more somber and more technically detailed than most others in this series.

Geography

The published Attendance List was nice enough to already attach country names to every attendee, so making an interactive map of attendees was a simple matter of cleaning the data (here it is as csv), aggregating it, and plugging it into CartoDB.
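
For anyone wanting to reproduce the map, the aggregation step is trivial; a sketch of it in R might look like this (the file and column names are my assumptions, not the committee’s actual headers).

    # Hypothetical sketch: count attendees per country and write a small CSV
    # that can be uploaded to CartoDB for an interactive map.
    attendees <- read.csv("dh2015_attendance_list.csv")   # assumed columns: name, country

    counts <- as.data.frame(table(attendees$country))
    names(counts) <- c("country", "attendees")

    write.csv(counts, "dh2015_attendees_by_country.csv", row.names = FALSE)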

Despite a lack of South American and African attendees, this is still a pretty encouraging map for DH2015, especially compared to earlier years. The geographic diversity of attendees is actually mirrored in the conference submissions (analyzed here), which to my mind means the ADHO decision to hold the conference somewhere other than North America or Europe succeeded in its goal of diversifying the organization. From what I hear, they hope to continue this trend by moving to a three-year rotation, between North America, Europe, and elsewhere. At least from this analysis, that’s a successful strategy.

Figure 2. DH submissions broken down by UN macro-continental regions (details in an earlier post).

If we look at the locations of authors at ADHO conferences from 2004-2013, we see a very different profile than is apparent this year in Sydney. The figure below, made by my collaborator Nickoal Eichmann, shows all author locations from ADHO conferences in this 10-year range.

Figure 3. ADHO conference author locations, 2004-2013. Figure by Nickoal Eichmann.

Notice the difference in geographic profile from this year?

This also hides the sheer prominence of the Americas (really, just North America) at every single ADHO conference since 2004. The figure below shows the percentage of authors from different regions at DH2004-2013, with Europe highlighted in orange during the years the conference was held in Europe.

Figure 4. Geographic home of authors to ADHO conferences 2004-2013. Years when Europe hosted are highlighted in orange.

If you take a second to study this visualization, you’ll notice that with only one major exception in 2012, even when the conference was held in Europe, the majority of authors hailed from the Americas. That’s cray-cray, yo. Compare that to 2015 data from Figure 2; the Americas are still technically sending most of the authors, but the authorship pool is significantly more regionally diverse than the decade of 2004-2013.

Actually, even before the DH conference moved to Australia, we’ve been getting slightly more geographically diverse. Figure 5, below, shows a slight increase in diversity score from 2004-2013.

Figure 5. Regional diversity of authors at ADHO conferences, 2004-2013.
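
I won’t dwell on the scoring details here, but to give a concrete sense of what a “diversity score” can mean, one common option (purely illustrative, and not necessarily the exact score plotted above) is a Shannon-entropy index over each year’s regional shares of authors.

    # Illustrative only: a Shannon-entropy index over the share of authors from
    # each region in a given year (higher = authors spread more evenly across
    # regions). The counts below are made-up numbers, not my data.
    regional_counts <- c(Americas = 120, Europe = 60, Asia = 15, Oceania = 5)

    shares <- regional_counts / sum(regional_counts)
    shannon_diversity <- -sum(shares * log(shares))
    shannon_diversity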

In sum, we’re getting better! Also, our diversity of attendance tends to match our diversity of authorship, which means we’re not suffering an underrepresentation problem on top of a lack of diversity. The lack of diversity is obviously still a problem, but it’s improving, in no small part thanks to the efforts of ADHO to move the annual conference further afield.

Historical Gender

Gravy train’s over, folks. We’re getting better with geography, sure, but what about gender? Turns out our gender representation in DH sucks, it’s always sucked, and unless we forcibly intervene, it’s likely to continue to suck.

We’ve probably inherited our gender problem from computer science, which is weird, because such a large percentage of leadership in DH organizations, committees, and centers are women. What’s more, the issue isn’t that women aren’t doing DH, it’s that they’re not being well-represented at our international conference. Instead they’re going to other conferences which are focused on diversity, which as Jacqueline Wernimont points out, is less than ideal.

So what’s the data here? Let’s first look historically.

Figure 6. Gender ratio of authors to presentations at DH2004-DH2013. First authorship ratio is in red. In collaboration with Nickoal Eichmann.

Figure 6 shows percentage of women authors at DH2004-DH2013. The data were collected in collaboration with Nickoal Eichmann. 1

Notice the alarming tendency for DH conference authorship to hover between 30-35% women. Women fare slightly better as first authors—that is to say, if a woman authors an ADHO presentation, they’re more likely to be a first author than a second or third. This matches well with the fact that a lot of the people on the governing bodies of DH organizations are women, and yet the ratio does not hold in authorship. I can’t really hazard a guess as to why that is.

Gender in 2015

Which brings us to 2015 in Sydney. I was encouraged to see the organizing committee publish an attendance list, and immediately set out to find the gender distribution of attendees. 2 Hurray! I tweeted. About 46% of attendees to DH2015 were women. That’s almost 50/50!

Armed with the same hope I’ve felt all week (what with two fantastic recent Supreme Court decisions, a Papal decree on global warming, and the dropping of confederate flags all over the country), I set out to count gender among authors at DH2015.

Preliminary results show 34.6% 3 of authors at DH2015 are women. Status quo quo quo quo.

So how do we reconcile the fact that only 35% of authors at DH2015 are women, yet 46% of attendees are? I’m interpreting this to mean that we don’t have a diversity problem, but a representation problem; for some reason, though women comprise nearly half of active participants at DH conferences, they only comprise a third of what’s actually presented at them.

This representation issue is further reflected by the topical analysis of DH2015, which shows that only 10% of presentations are tagged as cultural studies, and only 1% as gender studies. Previous years show a similar low number for both topics. (It’s worth noting that cultural studies tend to have a slightly lower-than-average acceptance rate, while gender studies has a slightly higher-than-average acceptance rate. Food for thought.)

Given this, how do we proceed? At an individual level, obviously, people are already trying to figure out paths forward, but what about at the ADHO level? Their efforts, and efforts of constituent members, have been successful at improving regional diversity at our flagship annual event. What sort of intervention can we create to similarly improve our gender representation problems? Hopefully comments below, or Twitter conversation, might help us collaboratively build a path forward, or offer suggestions to ADHO for future events. 4

Stay tuned for more DH2015 analyses, and in the meantime, keep on fighting the good fight. These are problems we can address as a community, and despite our many flaws, we can actually be pretty good at changing things for the better when we notice our faults.

Notes:

  1. It’s worth noting we made a lot of simplifying assumptions that we very much shouldn’t have, as Miriam Posner so eloquently pointed out with regards to Getty’s Union List of Artist Names.

    We labeled authors as male, female, or unknown/other. We did not encode changes of author gender over time, even though we know of at least a few authors in the dataset for whom this would apply. We hope to remedy this issue in the near future by asking authors themselves to help us with identification, and we ourselves at least tried to be slightly more sensitive by labeling author gender by hand, rather than by using an algorithm to guess based on the author’s first name.

    This series of choices was problematic, but we felt it was worth it as a first pass, a vehicle to point out bias and lack of representation in DH, and we hope you all will help us improve our very rudimentary dataset soon.

  2. This is an even more problematic analysis than that of conference authorship. I used Lincoln Mullen’s fabulous gender guessing library in R, which guesses gender based on first names and statistics from US Social Security data, but obviously given the regional diversity of the conference, a lot of its guesses are likely off. As with the above data, we hope to improve this set as time goes on.
  3. Very preliminary, but probably not far off; again using Lincoln Mullen’s R library.
  4. Obviously I’m far from the first to come to this conclusion, and many ADHO committee members are already working on this problem (see GO::DH), but the more often we point out problems and try to come up with solutions, the better.