Summary: Martin & Runyon’s “Digital Humanities, Digital Hegemony”

Today’s post just summarizes an article recently shared with me, as an attempt to boost the signal:

Those following along at home know I’ve been exploring how digital humanities infrastructure reinforces pre-existing cultural biases, most recently with Nickoal Eichmann & Jeana Jorgensen looking at DH Conferences, 2000-2015.

One limitation of our study is we know very little about the content of conference presentations or the racial identities of authors, which means we can’t assess bias in those directions. John D. Martin III & Carolyn Runyon recently published preliminary results more thoroughly addressing race & gender in DH from a funding perspective, and focused on the content of grants:

Martin, John D., III, and Carolyn Runyon. “Digital Humanities, Digital Hegemony: Exploring Funding Practices and Unequal Access in the Digital Humanities.” SIGCAS Computers and Society 46, no. 1 (March 2016): 20–26. doi:10.1145/2908216.2908219.

By hand-categorizing 656 DH-oriented NEH grants from 2007-2016, totaling $225 million, Martin & Runyon found 110 projects whose focus involved gender or individuals of a certain gender, and 228 which focused on race/ethnicity or individuals identifiable with particular races/ethnicities.

From the article

Major findings include:

  • Twice as much money goes to studying men as to women.
  • On average, individual projects about women are better-funded.
  • The top three race/ethnicity categories by funding amount are White ($21 million), Asian ($7 million), and Black ($6.5 million).
  • White men are discussed as individuals, while women and non-white people are discussed as groups.

Their results fit well with what I and others have found, which is that DH propagates the same cultural bias found elsewhere within and outside academia.

A next step, vital to this project, is to find equivalent metrics for other disciplines and data sources. Until we get a good baseline, we won’t actually know if our interventions are improving the situation. It’s all well and good to say “things are bad”, but until we know the compared-to-what, we won’t have a reliable way of testing what works and what doesn’t.

Who sits in the 41st chair?

tl;dr Rich-get-richer academic prestige in a scarce job market makes meritocracy impossible. Why some things get popular and others don’t. Also agent-based simulations.

Slightly longer tl;dr This post is about why academia isn’t a meritocracy, at no intentional fault of those in power who try to make it one. None of the presented ideas is novel on its own, but I do intend this as a novel conceptual contribution in its connection of disparate threads. In particular, I suggest the predictability of research success in a scarce academic economy as a theoretical framework for exploring successes and failures in the history of science.

But mostly I just beat a “musical chairs” metaphor to death.

Positive Feedback

To the victor go the spoils, and to the spoiled go the victories. Think about it: the Yankees; Alexander the Great; Stanford University. Why do the Yankees have twice as many World Series appearances as their nearest competitors, how was Alex’s empire so fucking vast, and why does Stanford get all the cool grants?

The rich get richer. Enough World Series victories, and the Yankees get the reputation and funding to entice the best players. Ol’ Allie-G inherited an amazing army, was taught by Aristotle, and pretty much every place he conquered increased his military’s numbers. Stanford’s known for amazing tech innovation, so they get the funding, which means they can afford even more innovation, which means even more people think they’re worthy of funding, and so on down the line until Stanford and its neighbors (Google, Apple, etc.) destroy the local real estate market and then accidentally blow up the world.

Alexander’s Empire [via]
Okay, maybe I exaggerated that last bit.

Point is, power begets power. Scientists call this a positive feedback loop: when a thing’s size is exactly what makes it grow larger.

You’ve heard it firsthand when a singer with a microphone walks too close to her speaker. First the mic picks up what’s already coming out of the speaker. The mic, doing its job, sends what it hears to an amplifier, which sends an even louder version to the very same speaker. The speaker replays a louder version of what it just produced, which is once again received by the microphone, until the sound feeds back onto itself enough times to produce the ear-shattering squeal fans of live music have come to dread. This is a positive feedback loop.

Feedback loop. [via]
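To make the loop mechanics concrete, here’s a toy numerical sketch (my own illustration, not from any of the studies discussed below): a signal fed back on itself with a loop gain above 1 grows exponentially until something physical clips it.

```python
# Toy positive feedback loop: each pass, the speaker's output is picked up by
# the mic and re-amplified. Any loop gain above 1.0 grows without bound.
loop_gain = 1.5   # combined mic pickup x amplifier gain (assumed, > 1)
level = 0.001     # a tiny initial sound level
for step in range(10):
    level *= loop_gain
    print(f"pass {step + 1}: level = {level:.4f}")
# After ten passes the level is ~58x where it started: the feedback squeal.
```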
Positive feedback loops are everywhere. They’re why the universe counts logarithmically rather than linearly, or why income inequality is so common in free market economies. Left to their own devices, the rich tend to get richer, since it’s easier to make money when you’ve already got some.

Science and academia are equally susceptible to positive feedback loops. Top scientists, the most well-funded research institutes, and world-famous research all got to where they are, in part, because of something called the Matthew Effect.

Matthew Effect

The Matthew Effect isn’t the reality TV show it sounds like.

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath. —Matthew 25:29, King James Bible.

It’s the Biblical idea that the rich get richer, and it’s become a popular party trick among sociologists (yes, sociologists go to parties) describing how society works. In academia, the phrase is brought up alongside evidence that shows previous grant-recipients are more likely to receive new grants than their peers, and the more money a researcher has been awarded, the more they’re likely to get going forward.

The Matthew Effect is also employed metaphorically, when it comes to citations. He who gets some citations will accrue more; she who has the most citations will accrue them exponentially faster. There are many correct explanations, but the simplest one will do here: 

If Susan’s article on the danger of velociraptors is cited by 15 other articles, I am more likely to find it and cite her than another article on velociraptors containing the same information that has never been cited. That’s because when I’m reading research, I look at who’s being cited. The more Susan is cited, the more likely I’ll eventually come across her article and cite it myself, which in turn increases the likelihood that much more that someone else will find her article through my own citations. Continue ad nauseam.

Some of you are thinking this is stupid. Maybe it’s trivially correct, but missing the bigger picture: quality. What if Susan’s velociraptor research is simply better than the competing research, and that’s why it’s getting cited more?

Yes, that’s also an issue. Noticeably awful research simply won’t get much traction. 1 Let’s disqualify it from the citation game. The point is there is lots of great research out there, waiting to be read and built upon, and its quality isn’t the sole predictor of its eventual citation success.

In fact, quality is a mostly-necessary but completely insufficient indicator of research success. Superstar popularity of research depends much more on the citation effects I mentioned above – more citations beget even more. Previous success is the best predictor of future success, mostly independent of the quality of the research being shared.

Example of positive feedback loops pushing some articles to citation stardom. [via]
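A minimal sketch of that citation dynamic (my own toy model, not the actual analysis behind the figure above): each new paper cites an existing one with probability proportional to the citations it already has, plus a small constant so uncited work can still be discovered. Quality never enters the model, yet a handful of early papers end up with most of the citations.

```python
import random

def simulate_citations(n_papers=2000, smoothing=1.0, seed=42):
    """Preferential attachment: new papers cite old ones in proportion to the
    citations they've already accrued (plus `smoothing` so the uncited can
    still be discovered). Returns the citation count of every paper."""
    rng = random.Random(seed)
    citations = [0]  # start with a single, uncited paper
    for _ in range(n_papers - 1):
        weights = [c + smoothing for c in citations]
        cited = rng.choices(range(len(citations)), weights=weights)[0]
        citations[cited] += 1
        citations.append(0)  # the new paper enters the pool uncited
    return citations

counts = sorted(simulate_citations(), reverse=True)
top_share = sum(counts[:len(counts) // 10]) / sum(counts)
print(f"top 10% of papers hold {top_share:.0%} of all citations")
```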
This is all pretty hand-wavy. How do we know success is more important than quality in predicting success? Uh, basically because of Napster.

Popular Music

If VH1 were to produce a retrospective on the first decade of the 21st century, perhaps its two biggest subjects would be illegal music sharing and VH1’s I Love the 19xx… TV series. Napster came and went, followed by LimeWire, eDonkey2000, AudioGalaxy, and other services sued by Metallica. Well-known early internet memes like Hamster Dance and All Your Base Are Belong To Us spread through the web like socially transmitted diseases, and researchers found this the perfect opportunity to explore how popularity worked. Experimentally.

In 2006, a group of Columbia University social scientists designed a clever experiment to test why some songs became popular and others did not, relying on the public interest in online music sharing. They created a music downloading site which gathered 14,341 users, each one to become a participant in their social experiment.

The cleverness arose out of their experimental design, which allowed them to get past the pesky problem of history only ever happening once. It’s usually hard to learn why something became popular, because you don’t know what aspects of its popularity were simply random chance, and what aspects were genuine quality. If you could, say, just rerun the 1960s, changing a few small aspects here or there, would the Beatles still have been as successful? We can’t know, because the 1960s are pretty much stuck having happened as they did, and there’s not much we can do to change it. 2

But this music-sharing site could rerun history—or at least, it could run a few histories simultaneously. When they signed up, each of the site’s 14,341 users were randomly sorted into different groups, and their group number determined how they were presented music. The musical variety was intentionally obscure, so users wouldn’t have heard the bands before.

A user from the first group, upon logging in, would be shown songs in random order, and given the option to listen to a song, rate it 1-5, and download it. Users from group #2, instead, were shown the songs ranked in order of their popularity among other members of group #2. Group #3 users were shown a similar rank-order of popular songs, but this time determined by the songs’ popularity within group #3. So too for groups #4-#9. Every user could listen to, rate, and download music.

Essentially, the researchers put the participants into 9 different self-contained petri dishes, and waited to see which music would become most popular in each. Ranking and download popularity from group #1 was their control group, in that its members judged music on quality alone, without access to social influence. Members of groups #2-#9 could be influenced by what music was popular with their peers within the group. The same songs circulated in each petri dish, and each petri dish presented its own version of history.

Music sharing site from Columbia study.
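Here’s a rough sketch of the experiment’s logic—my own invention with made-up parameters, not the researchers’ code: identical songs with fixed intrinsic quality, one world where listeners choose on quality alone, and several where displayed download counts nudge the choice.

```python
import random

def run_world(qualities, social_influence, n_users=1000, seed=None):
    """One 'petri dish': each user downloads one song. A song's appeal is its
    intrinsic quality, optionally boosted by its current download count
    (the social signal). All numbers are invented for illustration."""
    rng = random.Random(seed)
    downloads = [0] * len(qualities)
    for _ in range(n_users):
        appeal = [q + (0.5 * d if social_influence else 0)
                  for q, d in zip(qualities, downloads)]
        choice = rng.choices(range(len(qualities)), weights=appeal)[0]
        downloads[choice] += 1
    return downloads

random.seed(0)
qualities = [random.uniform(1, 10) for _ in range(48)]   # 48 obscure songs
control = run_world(qualities, social_influence=False, seed=1)
worlds = [run_world(qualities, social_influence=True, seed=s) for s in range(2, 10)]
print("control's biggest hit   :", max(control))
print("each world's biggest hit:", [max(w) for w in worlds])
# The social-influence worlds tend to produce much bigger "hits", and
# different songs win in different worlds despite identical quality.
```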

No superstar songs emerged out of the control group. Positive feedback loops weren’t built into the system, since popularity couldn’t beget more popularity if nobody saw what their peers were listening to. The other 8 musical petri dishes told a different story, however. Superstars emerged in each, but each group’s population of popular music was very different. A song’s popularity in each group was slightly related to its quality (as judged by ranking in the control group), but mostly it was social-influence-produced chaos. The authors put it this way:

In general, the “best” songs never do very badly, and the “worst” songs never do extremely well, but almost any other result is possible. —Salganik, Dodds, & Watts, 2006

These results became even more pronounced when the researchers increased the visibility of social popularity in the system. The rich got even richer still. A lot of it has to do with timing. In each group, the first few good songs to become popular are the ones that eventually do the best, simply by an accident of circumstance. The first few popular songs appear at the top of the list, for others to see, so they in turn become even more popular, and so on ad infinitum. The authors go on:

experts fail to predict success not because they are incompetent judges or misinformed about the preferences of others, but because when individual decisions are subject to social influence, markets do not simply aggregate pre-existing individual preferences.

In short, quality is a necessary but insufficient criterion for ultimate success. Social influence, timing, randomness, and other non-qualitative features of music are what turn a good piece of music into an off-the-charts hit.

Wait what about science?

Compare this to what makes a “well-respected” scientist: it ain’t all citations and social popularity, but they play a huge role. And as I described above, simply out of exposure-fueled propagation, the more citations someone accrues, the more citations they are likely to accrue, until we get a situation like the Yankees (40 World Series appearances, versus 20 appearances by the Giants) on our hands. Superstars are born who are miles beyond the majority of working researchers in terms of grants, awards, citations, etc. Social scientists call this preferential attachment.

Which is fine, I guess. Who cares if scientific popularity is so skewed as long as good research is happening? Even if we take the Columbia social music experiment at face-value, an exact analog for scientific success, we know that the most successful are always good scientists, and the least successful are always bad ones, so what does it matter if variability within the ranks of the successful is so detached from quality?

Except, as anyone studying their #OccupyWallstreet knows, it ain’t that simple in a scarce economy. When the rich get richer, that money’s gotta come from somewhere. Like everything else (cf. the law of conservation of mass), academia is a (mostly) zero-sum game, and to the victors go the spoils. To the losers? Meh.

So let’s talk scarcity.

The 41st Chair

The same guy who introduced the concept of the Matthew Effect to scientific grants and citations, Robert K. Merton (…of Columbia University), also brought up “the 41st chair” in the same 1968 article.

Merton’s pretty great, so I’ll let him do the talking:

In science as in other institutional realms, a special problem in the workings of the reward system turns up when individuals or organizations take on the job of gauging and suitably rewarding lofty performance on behalf of a large community. Thus, that ultimate accolade in 20th-century science, the Nobel prize, is often assumed to mark off its recipients from all the other scientists of the time. Yet this assumption is at odds with the well-known fact that a good number of scientists who have not received the prize and will not receive it have contributed as much to the advancement of science as some of the recipients, or more.

This can be described as the phenomenon of “the 41st chair.” The derivation of this tag is clear enough. The French Academy, it will be remembered, decided early that only a cohort of 40 could qualify as members and so emerge as immortals. This limitation of numbers made inevitable, of course, the exclusion through the centuries of many talented individuals who have won their own immortality. The familiar list of occupants of this 41st chair includes Descartes, Pascal, Moliere, Bayle, Rousseau, Saint-Simon, Diderot, Stendahl, Flaubert, Zola, and Proust

[…]

But in greater part, the phenomenon of the 41st chair is an artifact of having a fixed number of places available at the summit of recognition. Moreover, when a particular generation is rich in achievements of a high order, it follows from the rule of fixed numbers that some men whose accomplishments rank as high as those actually given the award will be excluded from the honorific ranks. Indeed, their accomplishments sometimes far outrank those which, in a time of less creativity, proved enough to qualify men for this high order of recognition.

The Nobel prize retains its luster because errors of the first kind—where scientific work of dubious or inferior worth has been mistakenly honored—are uncommonly few. Yet limitations of the second kind cannot be avoided. The small number of awards means that, particularly in times of great scientific advance, there will be many occupants of the 41st chair (and, since the terms governing the award of the prize do not provide for posthumous recognition, permanent occupants of that chair).

Basically, the French Academy allowed only 40 members (chairs) at a time. We can be reasonably certain those members were pretty great, but we can’t be sure that equally great—or greater—women existed who simply never got the opportunity to participate because none of the 40 members died in time.

These good-enough-to-be-members-but-weren’t were said to occupy the French Academy’s 41st chair, an inevitable outcome of a scarce economy (40 chairs) when the potential beneficiaries of that economy far outnumber the goods available (40 seats). The population occupying the 41st chair is huge, and growing, since the same number of chairs has existed since 1634, but the population of France has quadrupled in the intervening four centuries.

Returning to our question of “so what if rich-get-richer doesn’t stick the best people at the top, since at least we can assume the people at the top are all pretty good anyway?”, scarcity of chairs is the so-what.

Since faculty jobs are stagnating compared to adjunct work, yet new PhDs are being granted faster than new jobs become available, we are presented with the much-discussed crisis in higher education. Don’t worry, we’re told, academia is a meritocracy. With so few jobs, only the cream of the crop will get them. The best work will still be done, even in these hard times.

Recent Science PhD growth in the U.S. [via]
Unfortunately, as the Columbia social music study (among many other studies) showed, true meritocracies are impossible in complex social systems. Anyone who plays the academic game knows this already, and many are quick to point it out when they see people in much better jobs doing incredibly stupid things. What those who point out the falsity of meritocracy often get wrong, however, is intention: the idea that there is no meritocracy because those in power talk the meritocracy talk, but don’t then walk the walk. I’ll talk a bit later about how, even if everyone is above board in trying to push the best people forward, occupants of the 41st chair will still often wind up being more deserving than those sitting in chairs 1-40. But more on that later.

For now, let’s start building a metaphor that we’ll eventually over-extend well beyond its usefulness. Remember that kids’ game Musical Chairs, where everyone’s dancing around a bunch of chairs while the music is playing, but as soon as the music stops everyone’s got to find a chair and sit down? The catch, of course, is that there are fewer chairs than people, so someone always loses when the music stops.

The academic meritocracy works a bit like this. It is meritocratic, to a point: you can’t even play the game without proving some worth. The price of admission is a Ph.D. (which, granted, is more an endurance test than an intelligence test, but academic success ain’t all smarts, y’know?), a research area that at least a few people find interesting and believe you’d do good work in, and so forth. It’s a pretty low meritocratic bar, since it described 50,000 people who graduated in the U.S. in 2008 alone, but it’s a bar nonetheless. And it’s your competition in Academic Musical Chairs.

Academic Musical Chairs

Time to invent a game! It’s called Academic Musical Chairs, the game where everything’s made up and the points don’t matter. It’s like Regular Musical Chairs, but more complicated (see Fig. 1). Also the game is fixed.

Figure 1: Academic Musical Chairs

See those 40 chairs in the middle green zone? People sitting in them are the winners. Once they’re seated they have what we call in the game “tenure”, and they don’t get up until they die or write something controversial on Twitter. Everyone bustling around them, the active players, are vying for seats while they wait for someone to die; they occupy the yellow zone we call “the 41st chair”. Those beyond that, in the red zone, can’t yet (or may never) afford the price of game admission; they don’t have a Ph.D., they already said something controversial on Twitter, etc. The unwashed masses, you know?

As the music plays, everyone in the 41st chair is walking around in a circle waiting for someone to die and the music to stop. When that happens, everyone rushes to the empty seat. A few invariably reach it simultaneously, until one out-muscles the others and sits down. The sitting winner gets tenure. The music starts again, and the line continues to orbit the circle.

If a player spends too long orbiting in the 41st chair, he is forced to resign. If a player runs out of money while orbiting, she is forced to resign. Other factors may force a player to resign, but they will never appear in the rulebook and will always be a surprise.

Now, some players are more talented than others, whether naturally or through intense training. The game calls this “academic merit”, but it translates here to increased speed and strength, which helps some players reach the empty chair when the music stops, even if they’re a bit further away. The strength certainly helps when competing with others who reach the chair at the same time.

A careful look at Figure 1 will reveal one other way players might increase their chances of success when the music stops. The 41st chair has certain internal shells, or rings, which act a bit like that fake model of an atom everyone learned in high-school chemistry. Players, of course, are the electrons.

Electron shells. [via]
You may remember that the further out the shell, the more electrons can occupy it(-ish): the first shell holds 2 electrons, the second holds 8; third holds 18; fourth holds 32; and so on. The same holds true for Academic Musical Chairs: the coveted interior ring only fits a handful of players; the second ring fits an order of magnitude more; the third ring an order of magnitude more than that, and so on.

Getting closer to the center isn’t easy, and it has very little to do with your “academic rigor”! Also, of course, the closer you are to the center, the easier it is to reach either the chair, or the next level (remember positive feedback loops?). Contrariwise, the further you are from the center, the less chance you have of ever reaching the core.

Many factors affect whether a player can proceed to the next ring while the music plays, and some factors actively count against a player. Old age and being a woman, for example, take away 1 point. Getting published or cited adds points, as does already being friends with someone sitting in a chair (the details of how many points each adds can be found in your rulebook). Obviously the closer you are to the center, the easier you can make friends with people in the green core, which will contribute to your score even further. Once your score is high enough, you proceed to the next-closest shell.

Hooray, someone died! Let’s watch what happens.

The music stops. The people in the innermost ring who have the luckiest timing (thus are closest to the empty chair) scramble for it, and a few even reach it. Some very well-timed players from the 2nd & 3rd shells also reach it, because their “academic merit” has lent them speed and strength to reach past their position. A struggle ensues. Miraculously, a pregnant black woman sits down (this almost never happens), though not without some bodily harm, and the music begins again.

Oh, and new shells keep getting tacked on as more players can afford the cost of admission to the yellow zone, though the green core remains the same size.

Bizarrely, this is far from the first game of this nature. A Spanish boardgame from 1587 called The Courtier’s Philosophy had players move figures around a board, inching closer to living a luxurious life in the shadow of a rich patron. Random chance ruled their progression—a roll of the dice—and occasionally they’d reach a tile that said things like: “Your patron dies, go back 5 squares”.

The Courtier’s Philosophy. [via]
But I digress. Let’s temporarily table the scarcity/41st-chair discussion and get back to the Matthew Effect.

The View From Inside

A friend recently came to me, excited but nervous about how well they were being treated by their department at the expense of their fellow students. “Is this what the Matthew Effect feels like?” they asked. Their question is the reason I’m writing this post, because I spent the next 24 hours scratching my head over “what does the Matthew Effect feel like?”.

I don’t know if anyone’s looked at the psychological effects of the Matthew Effect (if you know of any such work, please comment), but my guess is it encompasses two feelings: 1) impostor syndrome, and 2) hard work finally paying off.

Since almost anyone who reaps the benefits of the Matthew Effect in academia will be an intelligent, hard-working academic, a windfall of accruing success should feel like finally reaping the benefits one deserves. You probably realize that luck played a part, and that many of your harder-working, smarter friends have been equally unlucky, but there’s no doubt in your mind that, at least, your hard work is finally paying off and the academic community is beginning to recognize that fact. No matter how unfair it is that your great colleagues aren’t seeing the same success.

But here’s the thing. You know how in physics, gravity and acceleration feel equivalent? How, if you’re in a windowless box, you wouldn’t be able to tell the difference between standing still on Earth and being pulled by a spaceship accelerating at 9.8 m/s² through deep space? Success from merit and success from the Matthew Effect probably act similarly, such that it’s impossible to tell one from the other from the inside.

Gravity vs. Acceleration. [via]
Incidentally, that’s why the last advice you ever want to take is someone telling you how to succeed from their own experience.

Success

Since we’ve seen that explosive success requires skill, quality, and intent but isn’t determined by them, the most successful people are not necessarily in the best position to understand the reason for their own rise. Their strategies may have paid off, but so did timing, social network effects, and positive feedback loops. The question you should be asking is, why didn’t other people with the same strategies also succeed?

Keep this especially in mind if you’re a student and your tenured professor advises you to seek an academic career. They may believe that giving you their strategies for success will help you succeed, when really they’re just giving you one of 50,000 admission tickets to Academic Musical Chairs.

Building a Meritocracy

I’m teetering well-past the edge of speculation here, but I assume the communities of entrenched academics encouraging undergraduates into a research career are the same communities assuming a meritocracy is at play, and are doing everything they can in hiring and tenure review to ensure a meritocratic playing field.

But even if gender bias did not exist, even if everyone responsible for decision-making genuinely wanted a meritocracy, even if the game weren’t rigged at many levels, the economy of scarcity (41st chair) combined with the Matthew Effect would ensure a true meritocracy would be impossible. There are only so many jobs, and hiring committees need to choose some selection criteria; those selection criteria will be subject to scarcity and rich-get-richer effects.

I won’t prove that point here, because original research is beyond the scope of this blog post, but I have a good idea of how to do it. In fact, after I finish writing this, I probably will go do just that. Instead, let me present very similar research, and explain how that method can be used to answer this question.

We want an answer to the question of whether positive feedback loops and a scarce economy are sufficient to prevent the possibility of a meritocracy. In 1971, Tom Schelling asked an unrelated question which he answered using a very relevant method: can racial segregation manifest in a community whose every actor is intent on not living a segregated life? Spoiler alert: yes.

He answered this question by simulating an artificial world—similar in spirit to the Columbia social music experiment, except instead of using real participants, he experimented on very simple rule-abiding game creatures of his own invention. A bit like having a computer play checkers against itself.

The experiment is simple enough: a bunch of creatures occupy a checker board, and like checker pieces, they’re red or black. Every turn, one creature has the opportunity to move randomly to another empty space on the board, and their decision to move is based on their comfort with their neighbors. Red pieces want red neighbors, and black pieces want black neighbors, and they keep moving randomly ’till they’re all comfortable. Unsurprisingly, segregated creature communities appear in short order.

What if our checker-creatures were more relaxed in their comforts? Say they’d be comfortable as long as they were in the majority—at least 50% of their neighbors being the same color. Again, let the computer play itself for a while, and within a few cycles the checker board is once again almost completely segregated.

Schelling segregation. [via]
What if the checker pieces are excited about the prospect of a diverse neighborhood? We relax the criteria even more, so red checkers only move if fewer than a third of their neighbors are red (that is, they’re totally comfortable with 66% of their neighbors being black)? If we run the experiment again, we see, again, the checker board breaks up into segregated communities.

Schelling’s claim wasn’t about how the world worked, but about what the simplest conditions were that could still explain racism. In his fictional checkers-world, every piece could be generously interested in living in a diverse neighborhood, and yet the system still eventually resulted in segregation. This offered a powerful support for the theory that racism could operate subtly, even if every actor were well-intended.
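For the curious, here’s roughly what such a toy model looks like in code—my own compact sketch, not Schelling’s original implementation or Hart & Case’s: agents on a grid relocate to a random empty cell whenever fewer than a third of their neighbors share their color, and clusters emerge anyway.

```python
import random

def schelling(size=20, threshold=1/3, empty_frac=0.1, steps=30_000, seed=0):
    """Toy Schelling model: 0 = empty cell, 1 = red, 2 = black. An agent is
    unhappy, and moves to a random empty cell, when fewer than `threshold`
    of its (wraparound) neighbours share its colour."""
    rng = random.Random(seed)
    n_empty = int(size * size * empty_frac)
    n_agents = size * size - n_empty
    cells = [0] * n_empty + [1] * (n_agents // 2) + [2] * (n_agents - n_agents // 2)
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def unhappy(r, c):
        me, same, other = grid[r][c], 0, 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                n = grid[(r + dr) % size][(c + dc) % size]
                same += n == me
                other += n not in (0, me)
        return (same + other) > 0 and same / (same + other) < threshold

    for _ in range(steps):
        r, c = rng.randrange(size), rng.randrange(size)
        if grid[r][c] != 0 and unhappy(r, c):
            empties = [(i, j) for i in range(size) for j in range(size) if grid[i][j] == 0]
            er, ec = rng.choice(empties)
            grid[er][ec], grid[r][c] = grid[r][c], 0
    return grid

for row in schelling():
    print("".join(" RB"[cell] for cell in row))  # clusters of R and B emerge
```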

Vi Hart and Nicky Case created an interactive visualization/game that teaches Schelling’s segregation model perfectly. Go play it. Then come back. I’ll wait.


Such an experiment can be devised for our 41st-chair/positive-feedback system as well. We can even build a simulation whose rules match the Academic Musical Chairs I described above. All we need to do is show that a system in which both effects operate (a fact empirically proven time and again in academia) produces fundamental challenges for meritocracy. Such a model would show that simple meritocratic intent is insufficient to produce a meritocracy. Hulk-smashing the myth of the meritocracy seems fun; I think I’ll get started soon.
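To give a flavor of what that simulation might look like, here’s a toy sketch of my own—not the model I’m promising to build, and every parameter is invented: candidates have a true merit score, hiring committees honestly pick the strongest-looking active player each round, but what they see is merit inflated by visibility that compounds with past luck, and there are only 40 chairs.

```python
import random

random.seed(7)
N_CANDIDATES, N_CHAIRS = 1000, 40

candidates = [{"merit": random.gauss(0, 1), "visibility": 0.0, "tenured": False}
              for _ in range(N_CANDIDATES)]

for _ in range(N_CHAIRS):            # one chair opens per round
    for c in candidates:
        # A small random success (a grant, a citation burst) lands more often
        # on the already-visible: the positive feedback loop.
        if not c["tenured"] and random.random() < 0.1 + 0.05 * c["visibility"]:
            c["visibility"] += 1
    active = [c for c in candidates if not c["tenured"]]
    # The committee honestly picks the strongest-looking player, but what it
    # sees is merit inflated by accumulated visibility (the Matthew Effect).
    winner = max(active, key=lambda c: c["merit"] + c["visibility"])
    winner["tenured"] = True

by_merit = sorted(candidates, key=lambda c: c["merit"], reverse=True)
hired_from_top = sum(c["tenured"] for c in by_merit[:N_CHAIRS])
print(f"{hired_from_top}/{N_CHAIRS} chairs went to the genuinely top-merit candidates")
```

In runs like this, only a minority of the genuinely top-merit candidates typically end up seated; the rest occupy the 41st chair, even though the committee acted in good faith every round.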

The Social Network

Our world ain’t that simple. For one, as seen in Academic Musical Chairs, your place in the social network influences your chances of success. A heavy-hitting advisor, an old-boys cohort, etc., all improve your starting position when you begin the game.

To put it more operationally, let’s go back to the Columbia social music experiment. Part of a song’s success was due to quality, but the stuff that made stars was much more contingent on chance timing followed by positive feedback loops. Two of the authors from the 2006 study wrote another in 2007, echoing this claim that good timing was more important than individual influence:

models of information cascades, as well as human subjects experiments that have been designed to test the models (Anderson and Holt 1997; Kubler and Weizsacker 2004), are explicitly constructed such that there is nothing special about those individuals, either in terms of their personal characteristics or in their ability to influence others. Thus, whatever influence these individuals exert on the collective outcome is an accidental consequence of their randomly assigned position in the queue.

These articles are part of a large literature on predicting popularity, viral hits, success, and so forth. There’s The Pulse of News in Social Media: Forecasting Popularity by Bandari, Asur, & Huberman, which showed that a top predictor of newspaper shares was the source rather than the content of an article, and that a major chunk of articles that do get shared never really make it to viral status. There’s Can Cascades be Predicted? by Cheng, Adamic, Dow, Kleinberg, and Leskovec (all-star cast if ever I saw one), which shows the remarkable reliance on timing & first impressions in predicting success, and also the reliance on social connectivity. That is, success travels faster through those who are well-connected (shocking, right?), and structural properties of the social network are important. This study by Susarla et al. also shows the importance of location in the social network in helping push those positive feedback loops, affecting the magnitude of success of YouTube video shares.

Twitter information cascade. [via]
Now, I know, social media success does not an academic career predict. The point here, instead, is to show that in each of these cases, judging a thing on its merit alone—before sharing occurs and without taking social effects into account—success is predictable, but stardom is not.

Concluding, Finally

Relating it to Academic Musical Chairs, it’s not too difficult to say whether someone will end up in the 41st chair, but it’s impossible to tell whether they’ll end up in seats 1-40 unless you keep an eye on how positive feedback loops are affecting their career.

In the academic world, there’s a fertile prediction market for Nobel Laureates. Social networks and Matthew Effect citation bursts are decent enough predictors, but what anyone who predicts any kind of success will tell you is that it’s much easier to predict the pool of recipients than it is to predict the winners.

Take Economics. How many working economists are there? Tens of thousands, at least. But there’s this Econometric Society which began naming Fellows in 1933, naming 877 Fellows by 2011. And guess what, 60 of 69 Nobel Laureates in Economics before 2011 were Fellows of the society. The other 817 members are or were occupants of the 41st chair.

The point is (again, sorry), academic meritocracy is a myth. Merit is a price of admission to the game, but not a predictor of success in a scarce economy of jobs and resources. Once you pass the basic merit threshold and enter the 41st chair, forces having little to do with intellectual curiosity and rigor guide eventual success (ahem). Small positive biases like gender, well-connected advisors, early citations, lucky timing, etc. feed back into increasingly larger positive biases down the line. And since there are only so many faculty jobs out there, these feedback effects create a naturally imbalanced playing field. Sometimes Einsteins do make it into the middle ring, and sometimes they stay patent clerks. Or adjuncts, I guess. Those who do make it past the 41st chair are poorly-suited to tell you why, because by and large they employed the same strategies as everybody else.

Yep, Academic Musical Chairs

And if these six thousand words weren’t enough to convince you, I leave you with this article and this tweet. Have a nice day!

Addendum for Historians

You thought I was done?

As a historian of science, this situation has some interesting repercussions for my research. Perhaps most importantly, it and related concepts from Complex Systems research offer a middle ground framework between environmental/contextual determinism (the world shapes us in fundamentally predictable ways) and individual historical agency (we possess the power to shape the world around us, making the world fundamentally unpredictable).

More concretely, it is historically fruitful to ask not simply what non-“scientific” strategies were employed by famous scientists to get ahead (see Biagioli’s Galileo, Courtier), but also what did or did not set those strategies apart from the masses of people we no longer remember. Galileo, Courtier provides a great example of what we historians can do on a larger scale: it traces Galileo’s machinations to wind up in the good graces of a wealthy patron, and how such a system affected his own research. Using recently-available data on early modern social and scholarly networks, as well as the beginnings of data on people’s activities, interests, practices, and productions, it should be possible to zoom out from Biagioli’s viewpoint and get a fairly sophisticated picture of trajectories and practices of people who weren’t Galileo.

This is all very preliminary, just publicly blogging whims, but I’d be fascinated by what a wide-angle (dare I say, macroscopic?) analysis of the 41st chair could tell us about how social and “scientific” practices shaped one another in the 16th and 17th centuries. I believe this would bear previously-impossible fruit, since grasping ten thousand tertiary actors at once is a fool’s errand for a lone historian, but a walk in the park for my laptop.

As this really is whim-blogging, I’d love to hear your thoughts.

Notes:

  1. Unless it’s really awful, but let’s avoid that discussion here.
  2. short of a TARDIS.

Acceptances to Digital Humanities 2015 (part 3)

tl;dr

There’s a disparity between gender diversity in authorship and attendance at DH2015; attendees are diverse, authors aren’t. That said, the geography of attendance is actually pretty encouraging this year. A lot of this work draws on a project on the history of DH conferences I’m undertaking with the inimitable Nickoal Eichmann. She’s been integral to the research behind everything you read here about conferences pre-2013.

Diversity at DH2015: Preliminary Numbers

For those just joining us, I’m analyzing this year’s international Digital Humanities conference being held in Sydney, Australia (part 1, part 2). This is the 10th post in a series of reflective entries on Digital Humanities conferences, throughout which I explore the landscape of Digital Humanities as it is represented by the ADHO conference. There are other Digital Humanities (a great place to start exploring them is Alex Gil’s arounddh), but since this is the biggest event, it’s also the most visible reflection of our community to the public and to the non-DH academic world.

Figure 1. Map from Around DH in 80 Days.

If the DH conference is our public face, we all hope it does a good job of representing our constituent parts, big or small. It does not. The DH conference systematically underrepresents women and people from parts of the world that are not Europe or North America.

Until today, I wasn’t sure whether this was an issue of underrepresentation, an issue of lack of actual diversity among our constituents, or both. Today’s data have shown me it may be more underrepresentation than lack of diversity, although I can’t yet say anything with certainty without data from more conferences.

I come to this conclusion by comparing attendees to the conference to authors of presentations at the conference. My assumption is that if authorship and attendee diversity are equal, and both poor, then we have a diversity problem. If instead attendance is diverse but authorship is not, then we have a representation problem. It turns out, at least in this dataset, the latter is true. I’ve been able to reach the conclusion because the conference organizing committee (themselves a diverse, fantastic bunch) have published and made available the DH2015 attendance list.

Because this is an important subject, this post is more somber and more technically detailed than most others in this series.

Geography

The published Attendance List was nice enough to already attach country names to every attendee, so making an interactive map of attendees was a simple matter of cleaning the data (here it is as csv), aggregating it, and plugging it into CartoDB.
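The aggregation step is a one-liner worth showing; here’s a minimal sketch (the filename and the “Country” column name are my guesses, not necessarily the published file’s actual headers):

```python
import pandas as pd

# Hypothetical filename and column name; adjust to match the real CSV.
attendees = pd.read_csv("dh2015_attendance.csv")
by_country = (attendees.groupby("Country")
                       .size()
                       .sort_values(ascending=False)
                       .rename("attendees"))
print(by_country.head(10))
# The resulting country -> count table is what gets uploaded to CartoDB.
by_country.to_csv("dh2015_attendance_by_country.csv")
```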

Despite a lack of South American and African attendees, this is still a pretty encouraging map for DH2015, especially compared to earlier years. The geographic diversity of attendees is actually mirrored in the conference submissions (analyzed here), which to my mind means the ADHO decision to hold the conference somewhere other than North America or Europe succeeded in its goal of diversifying the organization. From what I hear, they hope to continue this trend by moving to a three-year rotation, between North America, Europe, and elsewhere. At least from this analysis, that’s a successful strategy.

Figure 2. DH submissions broken down by UN macro-continental regions (details in an earlier post).

If we look at the locations of authors at ADHO conferences from 2004-2013, we see a very different profile than is apparent this year in Sydney. The figure below, made by my collaborator Nickoal Eichmann, shows all author locations from ADHO conferences in this 10-year range.

Figure 3. ADHO conference author locations, 2004-2013. Figure by Nickoal Eichmann.

Notice the difference in geographic profile from this year?

This also hides the sheer prominence of the Americas (really, just North America) at every single ADHO conference since 2004. The figure below shows the percentage of authors from different regions at DH2004-2013, with Europe highlighted in orange during the years the conference was held in Europe.

Figure 4. Geographic home of authors to ADHO conferences 2004-2013. Years when Europe hosted are highlighted in orange.

If you take a second to study this visualization, you’ll notice that with only one major exception in 2012, even when the conference was held in Europe, the majority of authors hailed from the Americas. That’s cray-cray, yo. Compare that to 2015 data from Figure 2; the Americas are still technically sending most of the authors, but the authorship pool is significantly more regionally diverse than the decade of 2004-2013.

Actually, even before the DH conference moved to Australia, we’ve been getting slightly more geographically diverse. Figure 5, below, shows a slight increase in diversity score from 2004-2013.

Figure 5. Regional diversity of authors at ADHO conferences, 2004-2013.

In sum, we’re getting better! Also, our diversity of attendance tends to match our diversity of authorship, which means we’re not suffering an underrepresentation problem on top of a lack of diversity. The lack of diversity is obviously still a problem, but it’s improving, in no small part thanks to the efforts of ADHO to move the annual conference further afield.

Historical Gender

Gravy train’s over, folks. We’re getting better with geography, sure, but what about gender? Turns out our gender representation in DH sucks, it’s always sucked, and unless we forcibly intervene, it’s likely to continue to suck.

We’ve probably inherited our gender problem from computer science, which is weird, because such a large percentage of leadership in DH organizations, committees, and centers are women. What’s more, the issue isn’t that women aren’t doing DH, it’s that they’re not being well-represented at our international conference. Instead they’re going to other conferences which are focused on diversity, which as Jacqueline Wernimont points out, is less than ideal.

So what’s the data here? Let’s first look historically.

Figure 6. Gender ratio of authors to presentations at DH2004-DH2013. First authorship ratio is in red. In collaboration with Nickoal Eichmann.

Figure 6 shows percentage of women authors at DH2004-DH2013. The data were collected in collaboration with Nickoal Eichmann. 1

Notice the alarming tendency for DH conference authorship to hover between 30-35% women. Women fare slightly better as first authors—that is to say, if a woman authors an ADHO presentation, they’re more likely to be a first author than a second or third. This matches well with the fact that a lot of the governing body of DH organizations are women, and yet the ratio does not hold in authorship. I can’t really hazard a guess as to why that is.

Gender in 2015

Which brings us to 2015 in Sydney. I was encouraged to see the organizing committee publish an attendance list, and immediately set out to find the gender distribution of attendees. 2 Hurray! I tweeted. About 46% of attendees to DH2015 were women. That’s almost 50/50!

Armed with the same hope I’ve felt all week (what with two fantastic recent Supreme Court decisions, a Papal decree on global warming, and the dropping of confederate flags all over the country), I set out to count gender among authors at DH2015.

Preliminary results show 34.6% 3 of authors at DH2015 are women. Status quo quo quo quo.

So how do we reconcile the fact that only 35% of authors at DH2015 are women, yet 46% of attendees are? I’m interpreting this to mean that we don’t have a diversity problem, but a representation problem; for some reason, though women comprise nearly half of active participants at DH conferences, they only comprise a third of what’s actually presented at them.

This representation issue is further reflected by the topical analysis of DH2015, which shows that only 10% of presentations are tagged as cultural studies, and only 1% as gender studies. Previous years show a similar low number for both topics. (It’s worth noting that cultural studies tend to have a slightly lower-than-average acceptance rate, while gender studies has a slightly higher-than-average acceptance rate. Food for thought.)

Given this, how do we proceed? At an individual level, obviously, people are already trying to figure out paths forward, but what about at the ADHO level? Their efforts, and efforts of constituent members, have been successful at improving regional diversity at our flagship annual event. What sort of intervention can we create to similarly improve our gender representation problems? Hopefully comments below, or Twitter conversation, might help us collaboratively build a path forward, or offer suggestions to ADHO for future events. 4

Stay tuned for more DH2015 analyses, and in the meantime, keep on fighting the good fight. These are problems we can address as a community, and despite our many flaws, we can actually be pretty good at changing things for the better when we notice our faults.

Notes:

  1. It’s worth noting we made a lot of simplifying assumptions that  we very much shouldn’t have, as Miriam Posner so eloquently pointed out with regards to Getty’s Union List of Author Names.

    We labeled authors as male, female, or unknown/other. We did not encode changes of author gender over time, even though we know of at least a few authors in the dataset for whom this would apply. We hope to remedy this issue in the near future by asking authors themselves to help us with identification, and we ourselves at least tried to be slightly more sensitive by labeling author gender by hand, rather than by using an algorithm to guess based on the author’s first name.

    This series of choices was problematic, but we felt it was worth it as a first pass as a vehicle to point out bias and lack of representation in DH, and we hope you all will help us improve our very rudimentary dataset soon.

  2. This is an even more problematic analysis than that of conference authorship. I used Lincoln Mullen’s fabulous gender guessing library in R, which guesses gender based on first names and statistics from US Social Security data, but obviously given the regional diversity of the conference, a lot of its guesses are likely off. As with the above data, we hope to improve this set as time goes on.
  3. Very preliminary, but probably not far off; again using Lincoln Mullen’s R library.
  4. Obviously I’m far from the first to come to this conclusion, and many ADHO committee members are already working on this problem (see GO::DH), but the more often we point out problems and try to come up with solutions, the better.

Acceptances to Digital Humanities 2015 (part 2)

Had enough yet? Too bad! Full-ahead into my analysis of DH2015, part of my 6,021-part series on DH conference submissions and acceptances. If you want more context, read the Acceptances to DH2015 part 1.

tl;dr

This post’s about the topical coverage of DH2015 in Australia. If you’re curious about how the landscape compares to previous years, see this post. You’ll see a lot of text, literature, and visualizations this year, as well as archives and digitisation projects. You won’t see a lot of presentations in other languages, or presentations focused on non-text sources. Gender studies is pretty much nonexistent. If you want to get accepted, submit pieces about visualization, text/data, literature, or archives. If you want to get rejected, submit pieces about pedagogy, games, knowledge representation, anthropology, or cultural studies.

Topical analysis

I’m sorry. This post is going to contain a lot of giant pictures, because I’m in the mountains of Australia and I’d much rather see beautiful vistas than create interactive visualizations in d3. Deal with it, dweebs. You’re just going to have to do a lot of scrolling down to see the next batch of text.

This year’s conference presents a mostly-unsurprising continuation of the status quo (see 2014’s and 2013’s topical landscapes). Figure 1, below, shows the top author-chosen topic words of DH2015, as a proportion of the total presentations at the conference. For example, an impressive quarter, 24%, of presentations at DH2015 are about “text analysis”. Authors were able to choose multiple topics for each presentation, which is why the percentages add up to well more than 100%.

Scroll down for the rest of the post.

Figure 1. Topical coverage of DH2015. Percent represents the % of presentations which authors have tagged with a certain topical keyword. Authors could tag multiple keywords per presentation.

Text analysis, visualization, literary studies, data mining, and archives take top billing. History’s a bit lower, but at least there’s more history than the abysmal showing at DH2013. Only a tenth of DH2015 presentations are about DH itself, which is maybe impressive given how much we talk about ourselves? (cf. this post)

As usual, gender studies representation is quite low (1%), as are foreign language presentations and presentations not centered around text. I won’t do a lot of interpretation this post, because it’d mostly be repeat of earlier years. At any rate, acceptance rate is a bit more interesting than coverage this time around. Figure 2 shows acceptance rates of each topic, ordered by volume. Figure 3 shows the same, sorted by acceptance rate.

The topics that appear most frequently at the conference are on the far left, and the red line shows the percent of submitted articles that will be presented at DH2015. The horizontal black line is the overall acceptance rate to the conference, 72%, just to show which topics are above or below average.

Figure 2. Acceptance rates of topics to DH2015, sorted by volume. Click to enlarge.
Figure 3. Acceptance rates of topics to DH2015, sorted by acceptance rate. Click to enlarge.

Notice that all the most well-represented topics at DH2015 have a higher-than-average acceptance rate, possibly suggesting a bit of path-dependence on the part of peer reviewers or editors. Otherwise, it could mean that, since a majority of peer reviewers were also authors at the conference, and since (as I’ve shown) the majority of authors lean toward text, lit, and visualization, that’s also what they’re likely to rate highly in peer review.

The first dips we see under the average acceptance rate are “Interdisciplinary Studies” and “Historical Studies” (☹), but the dips aren’t all that low, and we ought not read too much into them without comparing them to earlier conferences. More significant are the low rates for “Cultural Studies”, and even more so the two categories on Teaching, Pedagogy, and Curriculum. Both categories’ acceptance rates are about 20% under the average, and although they’re obviously correlated with one another, the acceptance rates are similar to 2014 and 2013. In short, DH peer reviewers or editors are less likely to accept submissions on pedagogy than on most other topics, even though they sometimes represent a decent chunk of submissions.

Other low points worth pointing out are “Anthropology” (huh, no ideas there), “Games and Meaningful Play” (that one came as a surprise), and “Other” (can’t help you here). Beyond that, the submission counts are too low to read any meaningful interpretations into the data. The Game Studies dip is curious, and isn’t reflected in earlier conferences, so it could just be noise for 2015. The low acceptance rates in Anthropology are consistent 2013-2015, and it’d be worth looking more into that.

Topical Co-Occurrence, 2013-2015

Figure 4, below, shows how topics appear together on submissions to DH2013, DH2014, and DH2015. Technically this has nothing to do with acceptances, and little to do with this year specifically, but the visualization should provide a little context to the above analysis. Topics connect to one another if they appear on a submission together, and the line connecting them gets thicker the more connections two topics share.

Figure 4. Topical co-occurrence, 2013-2015. Click to enlarge.
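For anyone wanting to rebuild a figure like this, here’s a minimal sketch of the co-occurrence counting (with toy data standing in for the real submission metadata, which isn’t reproduced here):

```python
from itertools import combinations
from collections import Counter

# Each submission is a set of author-chosen topic keywords (toy examples;
# the real 2013-2015 submission records would be read from file).
submissions = [
    {"Text Analysis", "Literary Studies", "Visualization"},
    {"Archives", "Digitisation", "Text Analysis"},
    {"Visualization", "Data Mining", "Text Analysis"},
    {"Gender Studies", "Cultural Studies"},
]

edge_weights = Counter()
for topics in submissions:
    for a, b in combinations(sorted(topics), 2):  # every topic pair on one submission
        edge_weights[(a, b)] += 1

for (a, b), w in edge_weights.most_common(5):
    print(f"{a} -- {b}: {w}")
# Feeding edge_weights into a network library (e.g. networkx with a
# force-directed layout) yields a graph like the co-occurrence figure above.
```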

Although the “Interdisciplinary Collaboration” topic has a low acceptance rate, it understandably ties the network together; other topics that play a similar role are “Visualization”, “Programming”, “Content Analysis”, “Archives”, and “Digitisation”. All unsurprising for a conference where people come together around method and material. In fact, this reinforces our “DH identity” along those lines, at least insofar as it is represented by the annual ADHO conference.

There’s a lot to unpack in this visualization, and I may go into more detail in the next post. For now, I’ve got a date with the Blue Mountains west of Sydney.

Acceptances to Digital Humanities 2015 (part 1)

[Update!] Melissa Terras pointed out I probably made a mistake on 2015 long paper -> short paper numbers. I checked, and she was right. I’ve updated the figures accordingly.

tl;dr

Part 1 is about sheer numbers of acceptances to DH2015 and comparisons with previous years. DH is still growing, but the conference locale likely prohibited a larger conference this year than last. Acceptance rates are higher this year than previous years. Long papers still reign supreme. Papers with more authors are more likely to be accepted.

Introduction

It’s that time of the year again, when all the good little boys, girls, and other genders of DH gather around the scottbot irregular in pointless meta-analysis (er, quiet self-reflection). As most of you know, the 2015 Digital Humanities conference occurs next week in Sydney, Australia. They’ve just released the final program, full of pretty exciting work, which means I can compare it to my analysis of submissions to DH2015 (1, 2, & 3) to see how DH is changing, how work gets accepted or rejected, etc. This is part of my series on analyzing DH conferences.

Part 1 will focus on basic counts, just looking at percentages of acceptance and rejection by the type of presentation, and comparing it with previous years. Later posts will cover topical, gender, geography, and kangaroos. NOTE: When I say “acceptances”, I really mean “presentations that appear on the final program.” More presentations were likely accepted and withdrawn due to the expense of traveling to Australia, so take these numbers with appropriate levels of skepticism. 1

Volume

Around 270 papers, posters, and workshops are featured in this year’s conference program, down from last year’s ≈350 but up from DH2013’s ≈240. Although this is the first conference since 2010 with fewer presentations than the previous year’s, I suspect this is due largely to geographic and monetary barriers, and we’ll see a massive uptick next year in Poland and the following in (probably) North America. Whether or not the trend will continue to increase in 2018’s Antarctic locale, or 2019’s special Lunar venue, has yet to be seen. 2

Annual presentations at DH conferences, compared to growth of DHSI in Victoria.

As you can see from the chart above, even given this year’s dip, both DH2015 and the annual DHSI event in Victoria reveal that DH is still on the rise. It’s also worth noting that last year’s DHSI was likely the first whose attendance exceeded that of the international ADHO conference.

Acceptance Rates

A full 72% of submissions to DH2015 will be presented in Sydney next week. That’s significantly more inclusive than previous years: 59% of submitted manuscripts made it to DH2014 in Lausanne, and 64% to DH2013.

At first blush, the loss of exclusivity may seem a bad sign of a conference desperate for attendees, but to my mind the exact opposite is true: this is a great step forward. Conference peer review & acceptance decisions aren’t particularly objective, so using acceptance as a proxy for quality or relevance is a bit of a misdirection. And if we can’t aim for consistent quality or relevance in the peer review process, we ought to aim at least for inclusivity, or higher acceptance rates, and let the participants themselves decide what they want to attend.

Form

Acceptance rates broken down by form (panel, poster, short paper, long paper) aren’t surprising, but are worth noting.

  • 73% of submitted long papers were accepted, but only 45% of them were accepted as long papers. The other 28% were accepted as posters or short papers.
  • 61% of submitted short papers were accepted, but only 51% as short papers; the other 10% became posters.
  • 85% of posters were accepted, all of them as posters.
  • 85% of panels were accepted, but one of them was accepted as a long paper.
  • A few papers/panels were converted into workshops.
How submitted articles eventually were rejected or accepted. (e.g. 45% of submitted long papers were accepted as long papers, 14% as short papers, 15% as posters, and 27% were rejected.)
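The figure above is just a normalized cross-tabulation of submitted form against final form. Here’s a minimal sketch, assuming a hypothetical table with submitted_as and accepted_as columns, where accepted_as is marked “rejected” for pieces that never made the program:

    # Sketch: what each submitted form turned into on the final program.
    # Hypothetical columns: submitted_as, accepted_as ("rejected" if cut).
    import pandas as pd

    subs = pd.read_csv("submissions.csv")
    conversion = pd.crosstab(subs["submitted_as"], subs["accepted_as"], normalize="index")

    # Each row sums to 1.0: the long-paper row shows what share of submitted
    # long papers ended up as long papers, short papers, posters, or rejections.
    print((conversion * 100).round(1))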

Weirdly, short papers have tended to have a lower acceptance rate than long papers over the last three years. I suspect that’s because a rejected long paper is usually far enough along in the process that it’s more likely to be secondarily accepted as a poster, but even that doesn’t account for the entire differential in acceptance rates. Anyone have any thoughts on this?

Looking over time, we see an increasingly large slice of the DH conference pie is taken up by long papers. My guess is this is just a natural growth as authors learn the difference between long and short papers, a distinction which was only introduced relatively recently.

With the updated data, this is simply wrong (tip of the hat to Melissa Terras for pointing it out); the ratio of long papers to short papers is still in flux. My “guess” from earlier was just that: a post-hoc explanation attached to an incorrect analysis. Matthew Lincoln has a great description of why we should be wary of these just-so stories. Go read it.

A breakdown of presentation forms at the last three DH conferences.

The breakdown of acceptance rates for each conference isn’t very informative, due in part to the fact I only have the last three years. In another few years this will probably become interesting, but for those who just can’t get enough o’ them sweet sweet numbers, here they are, special for you:

Breakdown of conference acceptances 2013-2015. The right-most column shows the percent of, for example, long papers that were not only accepted, but accepted AS long papers. Yellow rows are total acceptance rates per year.

Authorship

DH is still pretty single-author-heavy. It’s getting better; over the last 10 years we’ve seen an upward trend in number of authors per paper (more details in a future blog post), but the last three years have remained pretty stagnant. This year, 35% of presentations & posters will be by a single author, 25% by two authors, 13% by 3 authors, and so on down the line. The numbers are unremarkably consistent with 2013 and 2014.

Percent of accepted presentations with a certain number of co-authors in a given year. (e.g. 35% of presentations in 2015 were single-authored.)

We do, however, see an interesting trend in acceptance rates by number of authors. The more authors on your presentation, the more likely it is to be accepted; this holds for 2013, 2014, and 2015. Single-authored works have a 54% chance of acceptance, while two-authored works have a 67% chance. If your submission has more than 7 authors, you’re incredibly unlikely to get rejected.
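Again, for the tinkerers: this is just a groupby on author count. A sketch, assuming a hypothetical table with year, n_authors, and accepted columns:

    # Sketch: acceptance rate by number of authors, per year.
    import pandas as pd

    subs = pd.read_csv("submissions.csv")  # hypothetical columns: year, n_authors, accepted
    rates = (subs.groupby(["year", "n_authors"])["accepted"]
                 .agg(rate="mean", submissions="count")
                 .reset_index())

    # e.g. compare single-authored vs. two-authored acceptance rates per year.
    print(rates[rates["n_authors"] <= 2])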

Acceptance rates by number of authors, 2013-2015. The more authors, the more likely a submission will be accepted.

Obviously this is pure description and correlation; I’m not saying multi-authored works are higher quality or anything else. Sometimes, works with more authors simply have more recognizable names, and thus are more likely to be accepted. That said, it is interesting that large projects seem to be favored in the peer review process for DH conferences.

Stay tuned for parts 2, π, 16, and 4, which will cover such wonderful subjects as topicality, gender, and other things that seem neat.

Notes:

  1. The appropriate level of skepticism here is 19.27
  2. I hear Elon Musk is keynoting in 2019.

Digital History, Saturn’s Rings, and the Battle of Trafalgar

History and astronomy are a lot alike. When people claim history couldn’t possibly be scientific (how can you do science without direct experimentation?), astronomy makes an immediate counterexample.

Astronomers and historians both view their subjects from great distances; too far to send instruments for direct measurement and experimentation. Things have changed a bit in the last century for astronomy, of course, with the advent of machines sensitive enough to create earth-based astronomical experiments. We’ve also built ships to take us to the farthest reaches, for more direct observations.

Voyager 1 Spacecraft, on the cusp of interstellar space. [via]
It’s unlikely we’ll invent a time machine any time soon, though, so historians are still stuck looking at the past in the same way we looked at the stars for so many thousands of years: through a glass, darkly. Like astronomers, we face countless observational distortions, twisting the evidence that appears before us until we’re left with an echo of a shadow of the past. We recreate the past through narratives, combining what we know of human nature with the evidence we’ve gathered, eventually (hopefully) painting ever-clearer pictures of a time we could never touch with our fingers.

Some take our lack of direct access as a good excuse to shake away all trappings of “scientific” methods. This seems ill-advised. Retaining what we’ve learned over the past 50 years about how we construct the world we see is important, but it’s not the whole story, and our situation has enough parallels with 17th-century astronomy that we might learn some lessons from that example.

Saturn’s Rings

In the summer of 1610, Galileo observed Saturn through a telescope for the first time. He wrote with surprise that

Galileo’s observation of Saturn through a telescope, 1610. [via]

the star of Saturn is not a single star, but is a composite of three, which almost touch each other, never change or move relative to each other, and are arranged in a row along the zodiac, the middle one being three times larger than the two lateral ones…

This curious observation would take half a century to resolve into what we today see as Saturn’s rings. Galileo wrote that others, using inferior telescopes, would report seeing Saturn as oblong, rather than as three distinct spheres. Lo and behold, within months, several observers reported an oblong Saturn.

Galileo’s Saturn in 1616.

What shocked Galileo even more, however, was an observation two years later, when the two smaller bodies disappeared entirely. They had appeared consistently, with every observation, and then one day, poof, they were gone. And when they eventually did come back, they looked remarkably odd.

Saturn sometimes looked as though it had “handles”, one connected to either side, but the nature of those handles was unknown to Galileo, as was the reason why Saturn sometimes appeared to have handles, sometimes moons, and sometimes nothing at all.

Saturn was just really damn weird. Take a look at these observations from Gassendi a few decades later:

Gassendi’s Saturn [via]
What the heck was going on? Many unsatisfying theories were put forward, but there was no real consensus.

Enter Christiaan Huygens, who in the 1650s was fascinated by the Saturn problem. He believed a better telescope was needed to figure out what was going on, and eventually got some help from his brother to build one.

The idea was successful. Within short order, Huygens developed the hypothesis that Saturn was encircled by a ring. This explanation, combined with the various angles from which we view Saturn and its ring from Earth, accounted for the multitude of appearances Saturn could take. The figure below explains this:

Huygens’ Saturn [via]
The explanation, of course, was not universally accepted. An opposing explanation by an anti-Copernican Jesuit contended that Saturn had six moons, the configuration of which accounted for the many odd appearances of the planet. Huygens countered that the only way such a hypothesis could be sustained would be with inferior telescopes.

While the exact details of the dispute are irrelevant, the proposed solution was very clever, and speaks to contemporary methods in digital history. The Accademia del Cimento devised an experiment that would, in a way, test the opposing hypotheses. They built two physical models of Saturn, one with a ring, and one with six satellites configured just-so.

The Model of Huygens’ Saturn [via]
In 1660, the experimenters at the academy put the model of a ringed Saturn at the end of a 75-meter / 250-foot hallway. Four torches illuminated the model but were obscured from observers, so they wouldn’t be blinded by the torchlight.  Then they had observers view the model through various quality telescopes from the other end of the hallway. The observers were essentially taken from the street, so they wouldn’t have preconceived notions of what they were looking at.

Depending on the distance and quality of the telescope, observers reported seeing an oblong shape, three small spheres, and other observations that were consistent with what astronomers had seen. When seen through a glass, darkly, a ringed Saturn does indeed form the most unusual shapes.

In short, the Accademia del Cimento devised an experiment, not to test the physical world, but to test whether an underlying reality could appear completely different through the various distortions that come along with how we observe it. If Saturn had rings, would it look to us as though it had two small satellites? Yes.

This did not prove Huygens’ theory, but it did prove it to be a viable candidate given the observational instruments at the time. Within a short time, the ring theory became generally accepted.

The Battle of Trafalgar

So what does Saturn’s ring have to do with the price of tea in China? Or with digital history?

The importance is in the experiment and the model. You do not need direct access to phenomena, whether they be historical or astronomical, to build models, conduct experiments, or generally apply scientific-style methods to test, elaborate, or explore a theory.

In October 1805, Lord Nelson led the British navy to a staggering victory against the French and Spanish during the Napoleonic Wars. The win is attributed to Nelson’s unusual and clever battle tactic of dividing his forces into columns perpendicular to the single line of the enemy ships. Twenty-seven British ships defeated thirty-three Franco-Spanish ones. Nelson didn’t lose a single British ship, while the Franco-Spanish fleet lost twenty-two.

Horatio Nelson [via]
But let’s say the prevailing account is wrong. Let’s say, instead, due to the direction of the wind and the superior weaponry of the British navy, victory was inevitable: no brilliant naval tactician required.

This isn’t a question of counterfactual history; it’s simply a question of competing theories. But how can we support this new theory without venturing into counterfactual thinking and speculation? Obviously Nelson did lead the fleet, obviously he did use novel tactics, and obviously a resounding victory ensued. These are indisputable historical facts.

It turns out we can use a similar trick to what the Accademia del Cimento devised in 1660: pretend as though things are different (Saturn has a ring; Nelson’s tactics did not win the battle), and see whether our observations would remain the same (Saturn looks like it is flanked by two smaller moons; the British still defeated the French and Spanish).

It turns out, further, that someone’s already done this. In 2003, two Italian physicists built a simulation of the Battle of Trafalgar, taking into account details of the ships, various strategies, wind direction, speed, and so forth. The simulation is a bit like a video game that runs itself: every ship has its own agency, with the ability to make decisions based on its environment, to attack and defend, and so forth.  It’s from a class of simulations called agent-based models.

When the authors directed the British ships to follow Lord Nelson’s strategy of two columns, the fleet performed as expected: little British loss of life, a major victory, and so forth. But when they ran the model without Nelson’s strategy, a combination of wind direction and superior British firepower still secured a British victory, even though the fleet was outnumbered.

…[it’s said] the English victory in Trafalgar is substantially due to the particular strategy adopted by Nelson, because a different plan would have led the outnumbered British fleet to lose for certain. On the contrary, our counterfactual simulations showed that English victory always occur unless the environmental variables (wind speed and direction) and the global strategies of the opposed factions are radically changed, which lead us to consider the British fleet victory substantially ineluctable.

Essentially, they tested assumptions of an alternative hypothesis, and found those assumptions would also lead to the observed results. A military historian might (and should) quibble with the details of their simplifying assumptions, but that’s all part of the process of improving our knowledge of the world. Experts disagree, replace simplistic assumptions with more informed ones, and then improve the model to see if the results still hold.

The Parable of the Polygons

This agent-based approach to testing theories about how society works is exemplified by the Schelling segregation model. This week the model shot to popularity through Vi Hart and Nicky Case’s Parable of the Polygons, a fabulous, interactive discussion of some potential causes of segregation. Go click on it, play through it, experience it. It’s worth it. I’ll wait.

Finished? Great! The model shows that, even if people only move homes if less than 1/3rd of their neighbors are the same color that they are, massive segregation will still occur. That doesn’t seem like too absurd a notion: everyone being happy with 2/3rds of their neighbors as another color, and 1/3rd as their own, should lead to happy, well-integrated communities, right?

Wrong, apparently. It turns out that people wanting 33% of their neighbors to be the same color as they are is sufficient to cause segregated communities. Take a look at the community created in Parable of the Polygons under those conditions:

Parable of the Polygons

This shows that very light assumptions of racism can still easily lead to divided communities. It’s not making claims about racism, or about society: what it’s doing is showing that this particular model, where people want a third of their neighbors to be like them, is sufficient to produce what we see in society today. Much like Saturn having rings is sufficient to produce the observation of two small adjacent satellites.
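If you want to see the machinery without the pretty polygons, the core of a Schelling-style model fits in a few dozen lines. Here’s a bare-bones grid sketch of my own (not Hart & Case’s implementation), where an agent relocates whenever fewer than a third of its neighbors share its color:

    # Sketch: a bare-bones Schelling segregation model on a wrapping grid.
    # Agents move to a random empty cell when fewer than THRESHOLD of their
    # occupied neighbors share their color.
    import random

    SIZE, EMPTY_FRAC, THRESHOLD, ROUNDS = 30, 0.2, 1 / 3, 50

    def neighbors(grid, x, y):
        cells = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        return [c for c in cells if c is not None]

    def unhappy(grid, x, y):
        me, near = grid[x][y], neighbors(grid, x, y)
        return bool(near) and sum(n == me for n in near) / len(near) < THRESHOLD

    # Random initial layout: two colors plus some empty cells (None).
    grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
             for _ in range(SIZE)] for _ in range(SIZE)]

    for _ in range(ROUNDS):
        movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
                  if grid[x][y] is not None and unhappy(grid, x, y)]
        empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
        random.shuffle(movers)
        for x, y in movers:
            ex, ey = empties.pop(random.randrange(len(empties)))
            grid[ex][ey], grid[x][y] = grid[x][y], None
            empties.append((x, y))

Run it a few times and count same-color neighbor fractions before and after; even with this mild preference, the grid typically ends up visibly clustered, which is the whole point of the parable.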

More careful work is needed, then, to decide whether the model is an accurate representation of what’s going on, but establishing that base, that the model is a plausible description of reality, is essential before moving forward.

Digital History

Digital history is a ripe field for this sort of research. Like astronomers, we cannot (yet?) directly access what came before us, but we can still devise experiments to help support our research, in finding plausible narratives and explanations of the past. The NEH Office of Digital Humanities has already started funding workshops and projects along these lines, although they are most often geared toward philosophers and literary historians.

The person doing the most thoughtful theoretical work at the intersection of digital history and agent-based modeling is likely Marten Düring, who is definitely someone to keep an eye on if you’re interested in this area. An early innovator and strong practitioner in this field is Shawn Graham, who actively blogs about related issues.  This technique, however, is far from the only one available to historians for devising experiments with the past. There’s a lot we can still learn from 17th century astronomers.

Submissions to Digital Humanities 2015 (pt. 3)

This is the third post in a three-part series analyzing submissions to the 2015 Digital Humanities conference in Australia. In parts 1 & 2, I covered submission volumes, topical coverage, and comparisons to conferences in previous years. This post will briefly address the geography of submissions, further exploring my criticism that this global-themed conference doesn’t feel so global after all. My geographic analysis shows the conference to be more international than I originally suspected.

I’d also like to explore whether submissions to DH2015 are broader in scope than those to previous conferences, but given time constraints, I’ll leave that exploration to a later post in this series, which covers submissions and acceptances at DH conferences since 2013.

For this analysis, I looked at the universities of the submitting (usually lead) author on every submission, and used a geocoder to extract country, region, and continent data for each given university. This means that every submission is attached to one and only one location, even if other authors are affiliated with other places. Not perfect, but good enough for union work. After the geocoding, I corrected the results by hand 1, and present those results here.
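The geocoding step itself looks roughly like the sketch below. I’m using geopy’s free Nominatim geocoder here as a stand-in for whatever you prefer (the affiliation strings and handling are illustrative, not my actual pipeline), and as noted above, hand-correction afterwards is non-negotiable.

    # Sketch: resolve institutional affiliations to countries.
    # geopy's Nominatim is a free stand-in; any geocoder with address
    # details will do, and the results still need correcting by hand.
    import time
    from geopy.geocoders import Nominatim

    geolocator = Nominatim(user_agent="dh-conference-analysis")

    def country_of(affiliation):
        loc = geolocator.geocode(affiliation, addressdetails=True)
        return loc.raw["address"].get("country") if loc else None

    affiliations = ["Indiana University", "Universitat de Barcelona"]  # made-up sample
    for a in affiliations:
        print(a, "->", country_of(a))
        time.sleep(1)  # be polite to the free service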

It is immediately apparent that the DH2015 authors represent a more diverse geographical distribution than those in previous years. DH2013 in Nebraska was the only conference of the three where over half of submissions were concentrated in one continental region, the Americas. The Switzerland conference in 2014 had a slightly more even regional distribution, but still had very few contributions (11%) from Asia or Oceania. Contrast these heavily skewed numbers against DH2015 in Australia, with a third of the contributions coming from Asia or Oceania.

DH submissions broken down by UN macro-continental regions.

The trend continues when broken down by UN micro-continental regions. The trends are not unexpected, but they are encouraging. When the conference was in Switzerland, Northern and Western Europe were much better represented, as was (surprisingly?) Eastern Asia. This may suggest that Eastern Asia’s involvement in DH is on the rise, even without taking conference locations into account. Submissions for 2015 in Sydney are well-represented by Australia, New Zealand, Eastern Asia, and even Eastern Europe and Southern Asia.

DH conferences broken down by % covered from region in a given year.

One trend is pretty clear: the dominance of North America. Even at its lowest point in 2015, authors from North America comprise over a third of submissions. This becomes even more stark in the animation below, on which every submitting author’s country is represented.

DH2013-2015 with dots sized by the percent coverage that year.

The coverage from the United States barely changes over the course of the last three years, and Canada’s shrinks only slightly when the conference moves away from North America. The UK also pretty much retains its coverage from 2013 to 2015, hovering around 10% of submissions. Everywhere else the trend is pretty clear: a slow move eastward as the conference moves east. It’ll be interesting to see how things change in Poland in 2016, and wherever it winds up going in 2017.

In sum, it turns out “Global Digital Humanities 2015” is, at least geographically, much more global than the conferences of the previous two years. While the most popular topics are pretty similar to those in earlier years, I haven’t yet done an analysis of the diversity of the less popular topics, and it may be that they actually prove more diverse than those in earlier years. I’ll save that analysis for when the acceptances come in, though.

Notes:

  1. It’s a small enough dataset. There are 648 unique institutional affiliations listed on submissions from 2013-2015, which resolved to 49 unique countries in 14 regions on 4 continents.

Submissions to Digital Humanities 2015 (pt. 2)

Do you like the digital humanities? Me too! You better like it, because this is the 700th or so in a series of posts about our annual conference, and I can’t imagine why else you’d be reading it.

My last post went into some summary statistics of submissions to DH2015, concluding in the end that this upcoming conference, the first outside the Northern Hemisphere, with the theme “Global Digital Humanities”, is surprisingly similar to the DH we’ve seen before. This post will compare this year’s submissions to those of the previous two conferences, in Switzerland and Nebraska. Part 3 will go into more detail on geography and globalizing trends.

I can only compare the sheer volume of submissions this year to 2013 and 2014, which is as far back as I’ve got hard data. As many pieces were submitted for DH2015 as were submitted for DH2013 in Nebraska – around 360. Submissions to DH2014 shot up to 589, and it’s not yet clear whether the subsequent dip is an accident of location (Australia being quite far away from most regular conference attendees), or whether this signifies the leveling out of what’s been fairly impressive growth in the DH world.

DH by volume, 1999-2014. This chart shows how many DHSI workshops occurred per year (right axis), alongside how many pieces were actually presented at the DH conference annually (left axis). This year is not included because we don’t yet know which submissions will be accepted.

This graph shows a pretty significant recent upward trend in DH by volume; if acceptance rates to DH2015 are comparable to recent years (60-65%), then DH2015 will represent a pretty significant drop in presentation volume. My gut intuition is this is because of the location, and not a downward trend in DH, but only time will tell.

Replying to my most recent post, Jordan T. T-H commented on his surprise at how many single-authored works were submitted to the conference. I suggested this was a product of our humanistic disciplinary roots, and that further analysis would likely reveal a trend of increasing co-authorship. My prediction was wrong: at least over the last three years, co-authorship numbers have been stagnant.

This chart shows that roughly 40% of submissions to DH conferences over the past three years have been single-authored.

Roughly 40% of submissions to DH conferences over the past three years have been single-authored; the trend has not significantly changed any further down the line, either. Nickoal Eichmann and I are looking into data from the past few decades, but it’s not ready yet at the time of this blog post. This result honestly surprised me; just from watching and attending conferences, I had the impression we’ve become more multi-authored over the past few years.

Topically, we are noticing some shifts. As a few people noted on Twitter, topics are not perfect proxies for what’s actually going on in a paper; every author makes different choices about how they tag their submissions. Still, it’s the best we’ve got, and I’d argue it’s good enough to run this sort of analysis on, especially as we start getting longitudinal data. This is an empirical question: if we wanted to test my assumption, we’d gather a bunch of DHers in a room and see to what extent they all agree on submission topics. It’s an interesting question, but beyond the scope of this casual blog post.

Below is the list of submission topics, ordered by how much topical coverage has changed since 2013. For example, this year 21% of submissions were tagged as involving Text Analysis; only 15% were tagged as Text Analysis in 2013, a growth of six percentage points over the last two years. Similarly, Internet and World Wide Web studies comprised 7% of submissions this year, whereas that number was 12% in 2013, a shrinkage of five percentage points. My more detailed evaluation of the results is below the figure.
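For the record, the ranking in the figure below is just a subtraction of each year’s coverage percentages. A trivial sketch, using the two examples from the paragraph above:

    # Sketch: change in topical coverage between DH2013 and DH2015.
    # coverage = share of that year's submissions tagged with the topic.
    import pandas as pd

    cov_2013 = pd.Series({"Text Analysis": 0.15, "Internet / World Wide Web": 0.12})
    cov_2015 = pd.Series({"Text Analysis": 0.21, "Internet / World Wide Web": 0.07})

    change = ((cov_2015 - cov_2013) * 100).sort_values(ascending=False)
    print(change.round(1))  # +6 points for Text Analysis, -5 for the Web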

Change in topical coverage of submissions, DH2013 to DH2015.

We see, as I previously suggested, that Text Analysis (unsurprisingly) has gained a lot of ground. Given the location, it should be unsurprising as well that Asian Studies has grown in coverage, too. Some more surprising results are the re-uptake of Digitisation, which has been pretty low recently, and the growth of GLAM (Galleries, Libraries, Archives, Museums); I suspect that if we could look even further back, we’d spot a consistent upward trend there. I’d guess it’s due to the proliferation of DH Alt-Ac careers within the GLAM world.

Not all of the trends are consistent: Historical Studies rose significantly between 2013 and 2014, but dropped a bit in submissions this year to 15%. Still, it’s growing, and I’m happy about that. Literary Studies, on the other hand, has covered a fifth of all submissions in 2013, 2014, and 2015, remaining quite steady. And I don’t see it dropping any time soon.

Visualizations are clearly on the rise, year after year, which I’m going to count as a win. Even if we’re not branching outside of text as much as we ought, the fact that visualizations are increasingly important means DHers are willing to move beyond text as a medium for transmission, if not yet as a medium of analysis. The use of Networks is also growing pretty well.

As Jacqueline Wernimont just pointed out, representation of Gender Studies is incredibly low. And, as the above chart shows, it’s even lower this year than it was in both previous years. Perhaps this isn’t so surprising, given the gender ratio of authors at DH conferences recently.

Gender ratio of authors at DH conferences 2010-2013. Women consistently represent a bit under a third of all authors.

Some categories involving Maps and GIS are increasing, while others are decreasing, suggesting small fluctuations in labeling practices, but probably no significant upward or downward trend in their methodological use. Unfortunately, most non-text categories dropped over the past three years: Music, Film & Cinema Studies, Creative/Performing Arts, and Audio/Video/Multimedia all fell. Image Studies grew, but only slightly, and it’s too soon to say whether this represents a trend.

We see the biggest drops in XML, Encoding, Scholarly Editing, and Interface & UX Design. This won’t come as a surprise to anyone, but it does show how much the past generation’s giant (putting together, cleaning, and presenting scholarly collections) is making way for the new behemoth (analytics). Internet / World Wide Web is the other big coverage loss, but I’m not comfortable giving any causal explanation for that one.

This analysis offers the same conclusion as the earlier one: with the exception of the drop in submissions, nothing is incredibly surprising. Even the drop is pretty well-expected, given how far the conference is from the usual attendees. The fact that the status is pretty quo is worthy of note, because many were hoping that a global DH would seem more diverse, or appreciably different, in some way. In Part 3, I’ll start picking apart geographic and deeper topical data, and maybe there we’ll start to see the difference.

Submissions to Digital Humanities 2015 (pt. 1)

It’s that time of the year again! The 2015 Digital Humanities conference will take place next summer in Australia, and as per usual, I’m going to summarize what is being submitted to the conference and, eventually, how those submissions become accepted. Each year reviewers get the chance to “bid” on conference submissions, and this lets us get a peek into the general trends in DH research. This post (pt. 1) will focus solely on this year’s submissions, and the next post will compare them to previous years and locations.

It’s important to keep in mind that trends in the conference over the last three years may be temporal, geographic, or accidental. The 2013 conference took place in Nebraska, 2014 in Switzerland, 2015 in Australia, and 2016 is set to happen in Poland; it’s to be expected that regional differences will significantly inform who is submitting pieces and what topics will be discussed.

This year, 358 pieces were submitted to the conference (about as many as were submitted to Nebraska in 2013, but more on that in the follow-up post). As with previous years, authors could submit four varieties of works: long papers, short papers, posters, and panels / multi-paper sessions. Long papers comprised 54% of submissions, panels 4%, posters 15%, and short papers 30%.

In total, there were 859 named authors on submissions – this number counts authors more than once if they appear on multiple submissions. Of those, 719 authors are unique. 1 Over half the submissions are multi-authored (58%), with 2.4 authors per submission on average, a median of 2 authors per submission, and a max of 10 authors on one submission. While the majority of submissions included multiple authors, the sheer number of single-authored papers still betrays the humanities roots of DH. The histogram is below.

A histogram of authors-per-submission.

As with previous years, authors may submit articles in any of a number of languages. The theme of this year’s conference is “Global Digital Humanities”, but if you expected a multi-lingual conference, you might be disappointed. Of the 358 submissions, 353 are in English. The rest are in French (2), Italian (2), and German (1).

Submitting authors could select from a controlled vocabulary to tag their submissions with topics. There were 95 topics to choose from, and their distribution is not especially surprising. Two submissions each were tagged with 25 topics, suggesting they are impressively far reaching, but for the most part submissions stuck to 5-10 topics. The breakdown of submissions by topic is below, where the percentage represents the percentage of submissions which are tagged by a specific topic. My interpretation is below that.

Percentage of submissions tagged with a specific topic.

A full 21% of submissions include some form of Text Analysis, and a similar number claim Text or Data Mining as a topic. Other popular methodological topics are Visualizations, Network Analysis, Corpus Analysis, and Natural Language Processing. The DH-o-sphere is still pretty text-heavy; Audio, Video, and Multimedia are pretty low on the list, GIS even lower, and Image Analysis (surprisingly) even lower still. Bibliographic methods, Linguistics, and other approaches more traditionally associated with the humanities appear pretty far down the list. Other tech-y methods, like Stylistics and Agent-Based Modeling, are near the bottom. If I had to guess, the former is on its way down, and the latter on its way up.

Unsurprisingly, regarding disciplinary affiliations, Literary Studies is at the top of the food chain (I’ll talk more about how this compares to previous years in the next post), with Archives and Repositories not far behind. History is near the top tier, but not quite there, which is pretty standard. I don’t recall the exact link, but Ben Schmidt argued pretty convincingly that this may be because there are simply fewer new people in History than in Literary Studies. Digitization seems to be gaining back some of the ground it lost in previous years. The information science side (UX Design, Knowledge Representation, Information Retrieval, etc.) seems reasonably strong. Cultural Studies is pretty well-represented, and Media Studies, English Studies, Art History, Anthropology, and Classics are among the other DH-inflected communities out there.

Thankfully we’re not completely an echo chamber yet; only about a tenth of the submissions are about DH itself – not great, not terrible. We still seem to do a lot of talking about ourselves, and I’d like to see that number decrease over the next few years. Pedagogy-related submissions are also still a bit lower than I’d like, hovering around 10%. Submissions on the “World Wide Web” are decreasing, which is to be expected, and TEI isn’t far behind.

All in all, I don’t really see the trend toward “Global Digital Humanities” that the conference theme is meant to push, but perhaps a more complex content analysis will reveal a more global DH than we’ve seen in the past. The self-written Keyword tags (as opposed to the Topic tags, which are a controlled vocabulary) reveal a bit more internationalization, although I’ll leave that analysis for a future post.

It’s worth pointing out there’s a statistical property at play that makes it difficult to see deviations from the norm. Shakespeare appears prominently because many still write about him, but even if Shakespearean research were outnumbered by work on more international playwrights, it’d be difficult to catch, because I have no category for “international playwright” – each one would be siphoned off into its own category. Thus, even if the less well-known long-tail topics significantly outweigh the more popular topics, that fact would be tough to spot.

All in all, it looks like DH2015 will be an interesting continuation of the DH tradition. Perhaps the most surprising aspect of my analysis was that nothing in it surprised me; half-way around the globe, and the trends over there are pretty identical to those in Europe and the Americas. It’ll take some more searching to see if this is a function of the submitting authors being the same as previous years (whether they’re all simply from the Western world), or whether it is actually indicative of a fairly homogeneous global digital humanities.

Stay tuned for Part 2, where I compare this analysis to previous years’ submissions, and maybe even divine future DH conference trends using tea leaves or goat entrails or predictive modeling (whichever seems the most convincing; the jury’s still out).

Notes:

  1. As far as I can tell – I used all the text similarity methods I could think of to unify the nearly-duplicate names; a rough sketch of one such pass is below.
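Here’s what one of those similarity passes might look like, with made-up names and Python’s built-in difflib (the real cleanup combined several measures and plenty of eyeballing):

    # Sketch: unify nearly-duplicate author names before counting uniques.
    # difflib's ratio is one crude similarity measure among several worth trying.
    from difflib import SequenceMatcher

    def similar(a, b, cutoff=0.9):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff

    names = ["Jane Q. Scholar", "Jane Q Scholar", "J. Random Humanist"]  # made-up examples
    canonical = []
    for name in names:
        if not any(similar(name, c) for c in canonical):
            canonical.append(name)

    print(len(names), "name strings ->", len(canonical), "unique authors")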

Barriers to Scholarship & Iterative Writing

This post is mostly just thinking out loud, musing about two related barriers to scholarship: a stigma related to self-plagiarism, and various copyright concerns. It includes a potential way to get past them.

Self-Plagiarism

When Jonah Lehrer’s plagiarism scandal first broke, it sounded a bit silly. Lehrer, it turned out, had taken some sentences he’d used in earlier articles, and reused them in a few New Yorker blog posts. Without citing himself. Oh no, I thought. Surely, this represents the height of modern journalistic moral depravity.

Of course, later it was revealed that he’d bent facts, and plagiarized from others without reference, and these were all legitimately upsetting. And plagiarizing himself without reference was mildly annoying, though certainly not something that should have attracted national media attention. But it raises an interesting question: why is self-plagiarism wrong? And it’s as wrong in academia as it is in journalism.

Lehrer chart from Slate. [via]
I can’t speak for journalists (though Alberto Cairo can, and he lists some of the good reasons why non-referenced self-plagiarism is bad, linking to not one, but two articles about it), but for academia, the reasons behind the wrongness seem pretty clear.

  1. It’s wrong to directly lift from any source without adequate citation. This only applies to non-cited self-plagiarism, obviously.
  2. It’s wrong to double-dip. The currency of the academy is publications / CV lines, and if you reuse work to fill your CV, you’re getting an unfair advantage.
  3. Confusion. Which version should people reference if you have so many versions of a similar work?
  4. Copyright. You just can’t reuse stuff, because your previous publishers own the copyright on your earlier work.

That about covers it. Let’s pretend academics always cite their own works (because, hell, it gives them more citations), so we can do away with #1. Regular readers will know my position on publisher-owned copyright, so I just won’t get into #4 here to save you my preaching. The others are a bit more difficult to write off, but before I go on to try to do that, I’d like to talk a bit about my own experience of self-plagiarism as a barrier to scholarship.

I was recently invited to speak at the Universal Decimal Classification seminar, where I presented on the history of trees as a visual metaphor for knowledge classification. It’s not exactly my research area, but it was such a fun subject, I’ve decided to write an article about it. The problem is, the proceedings of the UDC seminar were published, and about 50% of what I wanted to write is already sitting in a published proceedings that, let’s face it, not many people will ever read. And if I ever want to add to it, I have to change the already-published material significantly if I want to send it out again.

Since I presented, my thesis has changed slightly, I’ve added a good chunk of new material, and I’ve fleshed out the theoretical underpinnings. I now have a pretty good article that’s ready to be sent out for peer review, but if I want to do that, I can’t just include a reference saying “half of this came from a published proceeding.” Well, I could, but apparently there’s a slight taboo against this. I was told to “be careful,” that I’d have to “rephrase” and “reword.” And, of course, I’d have to cite my earlier publication.

I imagine most of this comes from the fear of scholars double-dipping, or padding their CVs. Which is stupid. Good scholarship should come first, and our methods of scholarly attribution should mold themselves to it. Right now, scholarship is enslaved to the process of attribution and publication. It’s why we willingly donate our time and research to publishing articles, and then have our universities buy back our freely-given scholarship in expensive subscription packages, when we could just have the universities pay for the research upfront and then release it for free.

Copyright

The question of copyright is pretty clear: how much will the publisher charge if I want to reuse a significant portion of my work somewhere else? The publisher in question, Ergon Verlag, is, I’ve heard, pretty lenient about such things, but what if I were reprinting from a different publisher?

There’s an additional, more external, concern about my materials. It’s a history of illustrations, and the manuscript itself contains 48 illustrations in all. If I want to use them in my article, for demonstrative purposes, I not only need to cite the original sources (of course), I also need to get permission to use the illustrations from the publishers who scanned them – and this can be costly and time-consuming. I’ve priced a few of them so far, and they range from free to hundreds of dollars.

A Potential Solution – Iterative Writing

To recap, there are two things currently preventing me from sending out a decent piece of scholarship for peer-review:

  1. A taboo against self-plagiarism, which requires quite a bit of time for rewriting, permission from the original publisher to reuse material, and/or the dissolution of such a taboo.
  2. The cost and time commitment of tracking down copyright holders to get permission to reproduce illustrations.

I believe the first issue is largely a historical artifact of print-based media. Scholars have this sense that a cited source is fixed because, for hundreds of years, nearly every printing of a single text was largely identical. Sure, there were occasionally a handful of editions, some small textual changes, some page number changes, but citing a text could easily be done, and so we developed a huge infrastructure around citations and publications that exists to this day. It was costly and difficult to change a printed text, so it wasn’t done often, and now our scholarly practices are based around the idea that scholarly material has to be permanent, unchanging, and finished if it is to enter the canon and become a citeable source.

In the age of Wikipedia, this is a weird idea. Texts grow organically, they change, they revert. Blog posts get updated. A scholarly article, though, is relatively constant, even in online-only publications. Among the major exceptions are ArXiv-like pre-print repositories, which allow an article to go through several versions before the final one goes off to print. But generally, once the final version goes to print, no further changes are made.

The reasons behind this seem logical: it’s the way we’ve always done it, so why change a good thing? It’s hard to cite something that’s constantly changing; how do we know the version we cited will be preserved?

In an age of cheap storage and easily tracked changes, this really shouldn’t be a concern. Wikipedia does this very well: you can easily cite the version of an article from a specific date and, if you want, easily see how the article changed between then and any other date.

Changes between versions of the Wikipedia entry on History.

This would be more difficult to implement in academia because article hosting isn’t centralized. It’s difficult to be certain that the URL hosting a journal article now will persist for 50 years, both because of ownership and design changes, and it’s difficult to trust that whomever owns the article or the site won’t change the content and not preserve every single version, or a detailed description of changes they’ve made.

There’s an easy solution: don’t just reference everything you cite, embed everything you cite. If you cite a picture, include the picture. If you cite a book, include the book. If you cite an article, include the article. Storage is cheap: if your book cites a thousand sources and includes a copy of every single one, it’ll be at most a gigabyte, and probably quite a bit smaller. That way, if the material changes down the line, everyone reading your research will still be able to refer to the original material. Further, because you include a full reference, people can go and look the material up to see if it has changed or been updated in the time since you cited it.

Of course, this idea can’t work – copyright wouldn’t let it. But again, this is a situation where the industry of academia is getting in the way of potential improvements to the way scholarship can work.

The important thing, though, is that self-plagiarism would become a somewhat irrelevant concept. Want to write more about what you wrote before? Just iterate your article. Add some new references, a paragraph here or there, change the thesis slightly. Make sure to keep a log of all your changes.

I don’t know if this is a good solution, but it’s one of many improvements to scholarship – or at least, a removal of barriers to publishing interesting things in a timely and inexpensive fashion – which is currently impossible because of copyright concerns and institutional barriers to change. Cameron Neylon, from PLOS, recently discussed how copyright put up some barriers to his own interesting ideas. Academia is not a nimble beast, and because of it, we are stuck with a lot of scholarly practices which are, in part, due to the constraints of old media.

In short: academic writing is tough. There are ways it could be easier, that would allow good scholarship to flow more freely, but we are constrained by path dependency from choices we made hundreds of years ago. It’s time to be a bit more flexible and be more willing to try out new ideas. This isn’t anywhere near a novel concept on my part, but it’s worth repeating.

The last big barrier to self-plagiarism, double dipping to pad one’s CV, still seems tricky to get past. I’m not thrilled with the way we currently assess scholarship, and “CV size” is just one of the things I don’t like about it, but I don’t have any particularly clever fixes on that end.