Teaching Yourself to Code in DH

tl;dr Book-length introductions to programming or analytic methods (math / statistics / etc.) aimed at or useful for humanists with limited coding experience.

I’m collecting programming & methodological textbooks for humanists as part of a reflective study on DH, but figured it’d also be useful for those interested in teaching themselves to code, or teachers who need a textbook for their class. Though I haven’t read them all yet, I’ve organized them into very imperfect categories and provided (hopefully) some useful comments. Short coding exercises, books that assume some pre-existing knowledge of coding, and theoretical introductions are not listed here.

Thanks to @Literature_Geek, @ProgHist, @heatherfro, @electricarchaeo, @digitaldante, @kintopp, @dmimno, & @collinj for their contributions to the growing list. In the interest of maintaining scope, not all of their suggestions appear below.

Historical Analysis

  • The Programming Historian, 1st edition (2007). William J. Turkel and Alan MacEachern.
    • An open access introduction to programming in Python. Mostly web scraping and basic text analysis. Probably best to look to newer resources, due to the date. Although it’s aimed at historians, the methods are broadly useful to all text-based DH.
  • The Programming Historian, 2nd edition (ongoing). Afanador-Llach, Maria José, Antonio Rojas Castro, Adam Crymble, Víctor Gayol, Fred Gibbs, Caleb McDaniel, Ian Milligan, Amanda Visconti, and Jeri Wieringa, eds.
    • Constantly updating lessons, ostensibly aimed at historians, but useful to all of DH. Includes introductions to web development, text analysis, GIS, network analysis, etc. in multiple programming languages. Not a monograph, and no real order.
  • Computational Historical Thinking with Applications in R (ongoing). Lincoln Mullen.
    • A series of lessons in R, still under development with quite a few chapters missing. Probably the only programming book aimed at historians that actually focuses on historical questions and approaches.
  • The Rubyist Historian (2004). Jason Heppler.
    • A short introduction to programming in Ruby. Again, ostensibly aimed at historians, but really just focused on the fundamentals of coding, and useful in that context.
  • Natural Language Processing for Historical Texts (2012). Michael Piotrowski.
    • About natural language processing, but not an introduction to coding. Instead, an introduction to the methodological approaches of natural language processing specific to historical texts (OCR, spelling normalization, choosing a corpus, part-of-speech tagging, etc.). Teaches a variety of tools and techniques.
  • The Historian’s Macroscope (2015). Graham, Milligan, & Weingart.
    • Okay, I’m cheating a bit here! This isn’t teaching you to program, but Shawn, Ian, and I spent a while writing this intro to digital methods for historians, so I figured I’d sneak a link in.

Literary & Linguistic Analysis

  • Text Analysis with R for Students of Literature (2014). Matthew Jockers.
    • Step-by-step introduction to learning R, specifically focused on literary text analysis, both for close and distant reading, with primers on the statistical approaches being used. Includes approaches to, e.g., word frequency distribution, lexical variety, classification, and topic modeling.
  • The Art of Literary Text Analysis (ongoing). Stéfan Sinclair & Geoffrey Rockwell.
    • A growing, interactive textbook similar in scope to Jockers’ book (close & distant reading in literary analysis), but in Python rather than R. Heavily focused on the code itself, and includes such methods as topic modeling and sentiment analysis.
  • Statistics for Corpus Linguistics (1998). Michael Oakes.
    • Don’t know anything about this one, sorry!

General Digital Humanities

Many of the above books are focused on literary or historical analysis only in name, but are really useful for everyone in DH. The books below are similar in scope, but don’t aim themselves at one particular group.

  • Humanities Data in R (2015). Lauren Tilton & Taylor Arnold.
    • General introduction to programming through R, and broadly focused on many approaches, including basic statistics, networks, maps, texts, and images. Teaches concepts and programmatic implementations.
  • Digital Research Methods with Mathematica (2015). William J. Turkel.
    • A Mathematica notebook (thus, not accessible unless you have an appropriate reader) teaching text, image, and geo-based analysis. Mathematica itself is an expensive piece of software without an institutional license, so this resource may be inaccessible to many learners. [NOTE: Arno Bosse wrote positive feedback on this textbook in a comment below.]
  • Exploratory Programming for the Arts and Humanities (2016). Nick Montfort.
    • An introduction to the fundamentals of programming specifically for the arts and humanities, using the languages Python and Processing, that goes through statistics, text, sound, animation, images, and so forth. Much more expansive than many other options listed here, but not as focused on the needs of text analysis (which is probably a good thing).
  • An Introduction to Text Analysis: A Coursebook (2016). Brandon Walsh & Sarah Horowitz.
    • A brief textbook with exercises and explanatory notes specific to text analysis for the study of literature and history. Not an introduction to programming, but covers some of the mathematical and methodological concepts used in these sorts of studies.
  • Python Programming for Humanists (ongoing). Folgert Karsdorp and Maarten van Gompel.
    • Interactive (Jupyter) notebooks teaching Python for statistical text analysis. Quite thorough, teaching methodological reasoning and examples, including quizzes and other lesson helpers, going from basic tokenization up through unsupervised learning, object-oriented programming, etc.

Statistical Methods & Machine Learning

  • Statistics for the Humanities (2014). John Canning.
    • Not an introduction to coding of any sort, but a solid intro to statistics geared at the sort of stats needed by humanists (archaeologists, literary theorists, philosophers, historians, etc.). Reading this should give you a solid foundation in statistical methods (sampling, confidence intervals, bias, etc.).
  • Data Mining: Practical Machine Learning Tools and Techniques, 4th edition (2016). Witten, Frank, Hall, & Pal.
    • A practical intro to machine learning in Weka, Java-based software for data mining and modeling. Not aimed at humanists, but legible to the dedicated amateur. It really gets into the weeds of how machine learning works.
  • Text Mining with R (2017). Julia Silge and David Robinson.
    • Introduction to text mining in the statistical programming language R, aimed at data scientists. Some knowledge of R is expected; the authors suggest using R for Data Science (2016) by Grolemund & Wickham to get up to speed. This is for those interested in current data science coding best practices, though it does not get as in-depth as some other texts focused on literary text analysis. Good as a solid base to learn from.
  • The Curious Journalist’s Guide to Data (2016). Jonathan Stray.
    • Not an intro to programming or math, but rather a good guide to thinking quantitatively through evidence and argument. Aimed at journalists, but of potential use to more empirically minded humanists.
  • Six Septembers: Mathematics for the Humanist (2017). Patrick Juola & Stephen Ramsay.
    • Fantastic introduction to simple and advanced mathematics, written by and for humanists. Approachable, prose-heavy, and grounded in humanities examples. Covers topics like algebra, calculus, statistics, and differential equations. Definitely a foundations text, not an applications one.

Data Visualization, Web Development, & Related

  • D3.js in Action, 2nd edition (2017). Elijah Meeks.
    • Introduction to programmatic, online data visualization in JavaScript and the D3.js library. Not aimed at the humanities, but written by a digital humanist; easy to read and follow. The great thing about D3 is that it’s a library for visualizing anything in whatever fashion you might imagine, so this is a good book for those who want to design their own visualizations rather than use off-the-shelf tools.
  • Drupal for Humanists (2016). Quinn Dombrowski.
    • Full-length introduction to Drupal, a web platform that allows you to build “environments for gathering, annotating, arranging, and presenting their research and supporting materials” on the web. Useful for those interested in getting started with the creation of web-based projects but who don’t want to dive head-first into from-scratch web development.
  • (Xe)LaTeX appliqué aux sciences humaines (2012). Maïeul Rouquette, Brendan Chabannes, and Enimie Rouquette.
    • French-language introduction to LaTeX for humanists. LaTeX is the primary means by which scientists prepare documents (instead of MS Word or similar software), and it allows for more sustainable, robust, and easily typeset scholarly publications. For humanists who wish to publish in natural (or some social) science journals, this is an important skill.

Submissions to DH2017 (pt. 1)

Like many times before, I’m analyzing the international digital humanities conference, this time the 2017 conference in Montréal. The data I collect is available to any conference peer reviewer, though I do a bunch of scraping, cleaning, scrubbing, shampooing, anonymizing, etc. before posting these results.

This first post covers the basic landscape of submissions to next year’s conference: how many submissions there are, what they’re about, and so forth.

The analysis is opinionated and sprinkled with my own preliminary interpretations. If you disagree with something or want to see more, comment below, and I’ll try to address it in the inevitable follow-up. If you want the data, too bad: since it’s only available to reviewers, there’s an expectation of privacy. If you are sad for political or other reasons and live near me, I will bring you chocolate; if you are sad and do not live near me, you should move to Pittsburgh. We have chocolate.

Submission Numbers & Types

I’ll be honest, I was surprised by this year’s submission numbers. This will be the first ADHO conference held in North America since it was held in Nebraska in 2013, and I expected an influx of submissions from people who haven’t been able to travel off the continent for interim events. I expected the biggest submission pool yet.

Submissions per year by type.

What we see, instead, are fewer submissions than Kraków last year: 608 in all. The low number of submissions to Sydney was expected, given it was the first conference held outside Europe or North America, but this year’s numbers suggest the DH Hype Machine might be cooling somewhat, after five years of rapid growth.

Annual presentations at DH conferences, compared to growth of DHSI in Victoria, 1999-2015.

We need some more years and some more DH-Hype-Machine Indicators to be sure, but I reckon things are slowing down.

The conference offers five submission tracks: Long Paper, Short Paper, Poster, Panel, and (new this year) Virtual Short Paper. The distribution is pretty consistent with previous years, with the only deviation being in Sydney in 2015. Apparently Australians don’t like short papers or posters?

I’ll be interested to see how the “Virtual Short Paper” works out. Since authors need to decide on this format before submitting, it doesn’t allow the flexibility of seeing if funding will become available over the course of the year. Still, it’s a step in the right direction, and I hope it succeeds.


More of the same! If nothing else, we get points for consistency.

Percent of Co-Authorships

Same as it ever was, nearly half of all submissions are by a single author. I don’t know if that’s because humanists need to justify their presentations to hiring and tenure committees who only respect single authorship, or if we’re just used to working alone. A full 80% of submissions have three or fewer authors, suggesting large teams are still not the norm, or that we’re not crediting all of the labor that goes into DH projects with co-authorships. [Post-publication note: See Adam Crymble’s comment, below, for important context]

Language, Topic, & Discipline

Authors choose from several possible submission languages. This year, 557 submissions were received in English, 40 in French, 7 in Spanish, 3 in Italian, and 1 in German. That’s the easy part.

The Powers That Be decided to make my life harder by changing up the categories authors can choose from for 2017. Thanks, Diane, ADHO, or whoever decided this.

In previous years, authors chose any number of keywords from a controlled vocabulary of about 100 possible topics that applied to their submission. Among other purposes, it helped match authors with reviewers. The potential topic list was relatively static for many years, allowing me to analyze the change in interest in topics over time.

This year, they added, removed, and consolidated a bunch of topics, as well as divided the controlled vocabulary into “Topics” (like metadata, morphology, and machine translation) and “Disciplines” (like disability studies, archaeology, and law). This is ultimately good for the conference, but makes it difficult for me to compare this against earlier years, so I’m holding off on that until another post.

But I’m not bitter.

This year’s options are at the bottom of this post in the appendix. Words in red were added or modified this year, and the last list contains topics that used to exist, but no longer do.

So let’s take a look at this year’s breakdown by discipline.

Disciplinary breakdown of submissions

Huh. “Computer science”, a topic which did not exist last year, represents nearly a third of submissions. I’m not sure how much this topic actually means anything. My guess is the majority of people using it are simply signifying the “digital” part of their “Digital Humanities” project, since the topic “Programming”, which existed in previous years but not this year, used to connect to only ~6% of submissions.

“Literary studies” represents 30% of all submissions, more than in any previous year (usually around 20%), whereas “historical studies” has stayed stable with previous years, at around 20% of submissions. These two groups, however, can be pretty variable year-to-year, and I’m beginning to suspect that their use by authors is not consistent enough to take as meaningful. More on that in a later post.

That said, DH is clearly driven by lit, history, and library/information science. L/IS is a new and welcome category this year; I’ve always suspected that DHers are as much from L/IS as from the humanities, and this lends evidence in that direction. Importantly, it also makes apparent a dearth in our disciplinary genealogies: when we trace the history of DH, we talk about the history of humanities computing, the history of the humanities, the history of computing, but rarely the history of L/IS.

I’ll have a more detailed breakdown later, but there were some surprises in my first impressions. “Film and Media Studies” is way up compared to previous years, as are other non-textual disciplines, which refreshingly shows (I hope) the rise of non-textual sources in DH. Finally. Gender studies and other identity- or intersectional-oriented submissions also seem to be on the rise (this may be an indication of US academic interests; we’ll need another few years to be sure).

If we now look at Topic choices (rather than Discipline choices, above), we see similar trends.

Topical distribution of submissions

Again, these are just first impressions; there’ll be more soon. Text is still the bread and butter of DH, but we see more non-textual methods being used than ever. Some of the old favorites of DH, like authorship attribution, are staying pretty steady against previous years, whereas others, like XML and encoding, seem to be decreasing in interest year after year.

One last note on Topics and Disciplines. There’s a list of discontinued topics at the bottom of the appendix. Most of them have simply been consolidated into other categories; however, one set is conspicuously absent: meta-discussions of DH. There are no longer categories for DH’s history, theory, how it’s taught, or its institutional support. These were pretty popular categories in previous years, and I’m not certain why they no longer exist. Perusing the submissions, there are certainly several that fall into these categories.

What’s Next

For Part 2 of this analysis, look forward to more thoughts on the topical breakdown of conference submissions; preliminary geographic and gender analysis of authors; and comparisons with previous years. After that, who knows? I take requests in the comments, but anyone who requests “Free Bird” is banned for life.

Appendix: Controlled Vocabulary

Words in red were added or modified this year, and the last list contains topics that used to exist, but no longer do.


  • 3D Printing
  • agent modeling and simulation
  • archives, repositories, sustainability and preservation
  • audio, video, multimedia
  • authorship attribution / authority
  • bibliographic methods / textual studies
  • concording and indexing
  • content analysis
  • copyright, licensing, and Open Access
  • corpora and corpus activities
  • crowdsourcing
  • cultural and/or institutional infrastructure
  • data mining / text mining
  • data modeling and architecture including hypothesis-driven modeling
  • databases & dbms
  • digitisation – theory and practice
  • digitisation, resource creation, and discovery
  • diversity
  • encoding – theory and practice
  • games and meaningful play
  • geospatial analysis, interfaces & technology, spatio-temporal modeling/analysis & visualization
  • GLAM: galleries, libraries, archives, museums
  • hypertext
  • image processing
  • information architecture
  • information retrieval
  • interdisciplinary collaboration
  • interface & user experience design/publishing & delivery systems/user studies/user needs
  • internet / world wide web
  • knowledge representation
  • lexicography
  • linking and annotation
  • machine translation
  • metadata
  • mobile applications and mobile design
  • morphology
  • multilingual / multicultural approaches
  • natural language processing
  • networks, relationships, graphs
  • ontologies
  • project design, organization, management
  • query languages
  • scholarly editing
  • semantic analysis
  • semantic web
  • social media
  • software design and development
  • speech processing
  • standards and interoperability
  • stylistics and stylometry
  • teaching, pedagogy and curriculum
  • text analysis
  • text generation
  • universal/inclusive design
  • virtual and augmented reality
  • visualisation
  • xml


  • anthropology
  • archaeology
  • art history
  • asian studies
  • classical studies
  • computer science
  • creative and performing arts, including writing
  • cultural studies
  • design
  • disability studies
  • english studies
  • film and media studies
  • folklore and oral history
  • french studies
  • gender studies
  • geography
  • german studies
  • historical studies
  • italian studies
  • law
  • library & information science
  • linguistics
  • literary studies
  • medieval studies
  • music
  • near eastern studies
  • philology
  • philosophy
  • renaissance studies
  • rhetorical studies
  • sociology
  • spanish and spanish american studies
  • theology
  • translation studies

No Longer Exist

  • Digital Humanities – Facilities
  • Digital Humanities – Institutional Support
  • Digital Humanities – Multilinguality
  • Digital Humanities – Nature And Significance
  • Digital Humanities – Pedagogy And Curriculum
  • Genre-specific Studies: Prose, Poetry, Drama
  • History Of Humanities Computing/digital Humanities
  • Maps And Mapping
  • Media Studies
  • Other
  • Programming
  • Prosodic Studies
  • Publishing And Delivery Systems
  • Spatio-temporal Modeling, Analysis And Visualisation
  • User Studies / User Needs

Lessons From Digital History’s Antecedents

Below is the transcript of my October 29 keynote presented to the Creativity and The City 1600-2000 conference in Amsterdam, titled “Punched-Card Humanities”. I survey historical approaches to quantitative history, how they relate to the nomothetic/idiographic divide, and discuss some lessons we can learn from past successes and failures. For roughly 200 relevant references, see this Zotero folder.

Title Slide

I’m here to talk about Digital History, and what we can learn from its quantitative antecedents. If yesterday’s keynote was framing our mutual interest in the creative city, I hope mine will help frame our discussions around the bottom half of the poster; the eHumanities perspective.

Specifically, I’ve been delighted to see at this conference a rich interplay between familiar historiographic and cultural approaches and digital or eHumanities methods, all being brought to bear on the creative city. I want to take a moment to talk about where these two approaches meet.

Yesterday’s wonderful keynote brought up the complicated goal of using new digital methods to explore the creative city without flattening it into reductive indices. Are we living up to that goal? I hope a historical take on this question might help us move in that direction: by learning from those historiographic moments when formal methods failed, we can do better this time.

Creativity Conference Theme

Digital History is different, we’re told. “New”. Many of us know historians who used computers in the 1960s, for things like demography or cliometrics, but what we do today is a different beast.

Commenting on these early punched-card historians in 1999, Ed Ayers wrote, quote, “the first computer revolution largely failed.” The failure, Ayers claimed, was in part due to their statistical machinery not being up to the task of representing the nuances of human experience.

We see this rhetoric of newness or novelty crop up all the time. It cropped up a lot in pioneering digital history essays by Roy Rosenzweig and Dan Cohen in the 90s and 2000s, and we even see a touch of it, though tempered, in this conference’s theme.

In yesterday’s final discussion on uncertainty, Dorit Raines reminded us that the difference between quantitative history in the 70s and today’s Digital History is that today’s approaches broaden our sources, whereas early approaches narrowed them.

Slide (r)evolution

To say “we’re at a unique historical moment” is something common to pretty much everyone, everywhere, forever. And it’s always a little bit true, right?

It’s true that every historical moment is unique. Unprecedented. Digital History, with its unique combination of public humanities, media-rich interests, sophisticated machinery, and quantitative approaches, is pretty novel.

But as the saying goes, history never repeats itself, but it rhymes. Each thread making up Digital History has a long past, and a lot of the arguments for or against it have been made many times before. Novelty is a convenient illusion that helps us get funding.

Not coincidentally, it’s this tension I’ll highlight today: between revolution and evolution, between breaks and continuities, and between the historians who care more about what makes a moment unique, and those who care more about what connects humanity together.

To be clear, I’m operating on two levels here: the narrative and the metanarrative. The narrative is that the history of digital history is one of continuities and fractures; the metanarrative is that this very tension between uniqueness and self-similarity is what swings the pendulum between quantitative and qualitative historians.

Now, my claim that debates over continuity and discontinuity are a primary driver of the quantitative/qualitative divide comes a bit out of left field — I know — so let me back up a few hundred years and explain.


Francis Bacon wrote that knowledge would be better understood if it were collected into orderly tables. His plea extended, of course, to historical knowledge, and inspired renewed interest in a genre already over a thousand years old: tabular chronology.

These chronologies were world histories, aligning the pasts of several regions, each of which reckoned the passage of time differently.

Isaac Newton inherited this tradition, and dabbled throughout his life in establishing a more accurate universal chronology, aligning Biblical history with Greek legends and Egyptian pharaohs.

Newton brought to history the same mind he brought to everything else: one of stars and calculations. Like his peers, Newton relied on historical accounts of astronomical observations to align simultaneous events across thousands of miles. Kepler and Scaliger, among others, also partook in this “scientific history”.

Where Newton departed from his contemporaries, however, was in his use of statistics for sorting out history. In the late 1500s, the average or arithmetic mean was popularized by astronomers as a way of smoothing out noisy measurements. Newton co-opted this method to help him estimate the length of royal reigns, and thus the ages of various dynasties and kingdoms.

On average, Newton figured, a king’s reign lasted 18-20 years. If the history books record 5 kings, that means the dynasty lasted between 90 and 100 years.
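Newton's trick is simple enough to sketch in a few lines. The 18-20 year average reign comes from the text above; the function name and its shape are my own, purely illustrative:

```python
# A sketch of Newton's estimation method: multiply an assumed average
# reign length by the number of kings the chronicles record to bound
# how long a dynasty lasted.

def dynasty_length(num_kings, avg_reign_low=18, avg_reign_high=20):
    """Return (low, high) estimates, in years, for a dynasty's duration."""
    return num_kings * avg_reign_low, num_kings * avg_reign_high

low, high = dynasty_length(5)
print(low, high)  # 90 100
```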

Newton was among the first to apply averages to fill in chronologies, though not the first to apply them to human activities. By the late 1600s, demographic statistics of contemporary life — of births, burials and the like — were becoming common. They were ways of revealing divinely ordered regularities.

Incidentally, this is an early example of our illustrious tradition of uncritically appropriating methods from the natural sciences. See? We’ve all done it, even Newton!  

Joking aside, this is an important point: statistical averages represented divine regularities. Human statistics began as a means to uncover universal truths, and they continue to be employed in that manner. More on that later, though.

Musgrave Quote

Newton’s method didn’t quite pass muster, and skepticism grew rapidly on the whole prospect of mathematical history.

Criticizing Newton in 1782, for example, Samuel Musgrave argued, in part, that there are no discernible universal laws of history operating in parallel to the universal laws of nature. Nature can be mathematized; people cannot.

Not everyone agreed. Francesco Algarotti passionately argued that Newton’s calculation of average reigns, the application of math to history, was one of his greatest achievements. Even Voltaire tried Newton’s method, aligning a Chinese chronology with Western dates using average length of reigns.

Nomothetic / Idiographic

Which brings us to the earlier continuity/discontinuity point: quantitative history stirs debate in part because it draws together two activities Immanuel Kant sets in opposition: the tendency to generalize, and the tendency to specify.

The tendency to generalize, later dubbed Nomothetic, often describes the sciences: extrapolating general laws from individual observations. Examples include the laws of gravity, the theory of evolution by natural selection, and so forth.

The tendency to specify, later dubbed Idiographic, describes, mostly, the humanities: understanding specific, contingent events in their own context and with awareness of subjective experiences. This could manifest as a microhistory of one parish in the French Revolution, a critical reading of Frankenstein focused on gender dynamics, and so forth.  

These two approaches aren’t mutually exclusive, and they frequently come in contact around scholarship of the past. Paleontologists, for example, apply general laws of biology and geology to tell the specific story of prehistoric life on Earth. Astronomers, similarly, combine natural laws and specific observations to trace the origins of our universe.

Historians have, with cyclically recurring intensity, engaged in similar efforts. One recent nomothetic example is cliodynamics: its practitioners use data and simulations to discern generalities such as why nations fail or what causes war. Recent idiographic historians associate more with the cultural and theoretical turns in historiography, often focusing on microhistories or the subjective experiences of historical actors.

Both tend to meet around quantitative history, but the conversation began well before the urge to quantify. They often fruitfully align and improve one another when working in concert; for example when the historian cites a common historical pattern in order to highlight and contextualize an event which deviates from it.

But more often, nomothetic and idiographic historians find themselves at odds. Newton extrapolated “laws” for the length of kings’ reigns, and was criticized for thinking mathematics had any place in the domain of the uniquely human. Newton’s contemporaries used human statistics to argue for divine regularities, and this was eventually criticized as encroaching on human agency, free will, and the uniqueness of subjective experience.

Bacon Taxonomy

I’ll highlight some moments in this debate, focusing on English-speaking historians, and will conclude with what we today might learn from foibles of the quantitative historians who came before.

Let me reiterate, though, that quantitative history is not the same as nomothetic history; but since the two invite one another, I hope it isn’t too ahistorical to treat them together.

Take Henry Buckle, who in 1857 tried to bridge the two-culture divide posed by C.P. Snow a century later. He wanted to use statistics to find general laws of human progress, and apply those generalizations to the histories of specific nations.

Buckle was well aware of historiography’s place between nomothetic and idiographic cultures, writing: “it is the business of the historian to mediate between these two parties, and reconcile their hostile pretensions by showing the point at which their respective studies ought to coalesce.”

In direct response, James Froude wrote that there can be no science of history. The whole idea of Science and History being related was nonsensical, like talking about the colour of sound. They simply do not connect.

This was a small exchange in a much larger Victorian debate pitting narrative history against a growing interest in scientific history. The latter rose on the coattails of growing popular interest in science, much like our debates today align with broader discussions around data science, computation, and the visible economic successes of startup culture.

This is, by the way, contemporaneous with something yesterday’s keynote highlighted: the 19th-century drive to establish ‘urban laws’.

By now, we begin seeing historians leveraging public trust in scientific methods as a means for political control and pushing agendas. This happens in concert with the rise of punched cards and, eventually, computational history. Perhaps the best example of this historical moment comes from the American Census in the late 19th century.

19C Map

Briefly, a group of 19th century American historians, journalists, and census chiefs used statistics, historical atlases, and the machinery of the census bureau to publicly argue for the disintegration of the U.S. Western Frontier in the late 19th century.

These moves were, in part, made to consolidate power in the American West and wrest control from the native populations who still lived there. They accomplished this, in part, by publishing popular atlases showing that the western frontier was so fractured that it was difficult to maintain and defend.

The argument, it turns out, was pretty compelling.

Hollerith Cards

Part of what drove the statistical power and scientific legitimacy of these arguments was the new method, in 1890, of entering census data on punched cards and processing them in tabulating machines. The mechanism itself was wildly successful, and the inventor’s company wound up merging with a few others to become IBM. As was true of punched-card humanities projects through the time of Father Roberto Busa, this work was largely driven by women.

It’s worth pausing to remember that the history of punch card computing is also a history of the consolidation of government power. Seeing like a computer was, for decades, seeing like a state. And how we see influences what we see, what we care about, how we think.  

Recall the Ed Ayers quote I mentioned at the beginning of this talk. He said the statistical machinery of early quantitative historians could not represent the nuance of historical experience. That doesn’t just mean the math they used; it means the actual machinery involved.

See, one of the truly groundbreaking punch card technologies at the turn of the century was the card sorter. Each card could represent a person, or a household, or whatever else, which is legible enough one at a time, but unmanageable in giant stacks.

Now, this is still well before “computers”, but machines were being developed which could sort these cards into one of twelve pockets based on which holes were punched. So, for example, if you had cards punched for people’s ages, you could sort the stack into ten pockets to break the population up by age group: 0-9, 10-19, 20-29, and so forth.

This turned out to be amazing for eyeball estimates. If your 20-29 pocket was twice as full as your 10-19 pocket after all the cards were sorted, you had a pretty good idea of the age distribution.
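The sorter’s one-pass logic is easy to mimic in code. Below is a minimal Python sketch; the function name and the stack of ages are invented for illustration, not drawn from any census data:

```python
from collections import Counter

def sort_into_pockets(ages):
    """Group ages into decade 'pockets', the way a tabulating sorter would."""
    pockets = Counter()
    for age in ages:
        decade = (age // 10) * 10
        pockets[f"{decade}-{decade + 9}"] += 1
    return pockets

# A hypothetical stack of cards, one age per card.
pockets = sort_into_pockets([4, 12, 15, 23, 24, 27, 29, 31, 45, 67])
print(pockets["20-29"], pockets["10-19"])  # 4 2
```

Counting by pocket is exactly the “eyeball estimate”: the relative sizes of the buckets stand in for the age distribution.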

Over the next 50 years, this convenience would shape the social sciences. Consider demographics or marketing. Both developed in the shadow of punch cards, and both relied heavily on what’s called “segmentation”: the breaking of society into discrete categories based on easily punched attributes, such as age ranges or racial background. These would be used to, among other things, determine who was interested in what products.

They’d eventually use statistics on these segments to inform marketing strategies.

But if you look at the statistical tests that already existed at the time, these segmentations weren’t always the best way to break up the data. Age, for example, flows smoothly between 0 and 100; you could easily contrive a statistical test showing that, as a person ages, she becomes gradually more likely to buy one product over another, modeled as a smooth function of age.

That’s not how it worked though. Age was, and often still is, chunked up into ten or so distinct ranges, and those segments were each analyzed individually, as though they were as distinct from one another as dogs and cats. That is, 0-9 is as related to 10-19 as it is to 80-89.

What we see here is the deep influence of technological affordances on scholarly practice, and it’s an issue we still face today, though in different form.

As historians began using punch cards and social statistics, they inherited, or appropriated, a structure developed for bureaucratic government processing, and were rightly soon criticized for its dehumanizing qualities.

Pearson Stats

Unsurprisingly, given this backdrop, historians in the first few decades of the 20th century often shied away from or rejected quantification.

The next wave of quantitative historians, who reached their height in the 1930s, approached the problem with more subtlety than the previous generations in the 1890s and 1860s.

Charles Beard’s famous Economic Interpretation of the Constitution of the United States used economic and demographic stats to argue that the US Constitution was economically motivated. Beard, however, did grasp the fundamental idiographic critique of quantitative history, claiming that history was, quote:

“beyond the reach of mathematics — which cannot assign meaningful values to the imponderables, immeasurables, and contingencies of history.”

The other frequent critique of quantitative history, still heard, is that it uncritically appropriates methods from stats and the sciences.

This also wasn’t entirely true. The slide behind me shows famed statistician Karl Pearson’s attempt to replicate the math of Isaac Newton that we saw earlier using more sophisticated techniques.

By the 1940s, Americans with graduate training in statistics like Ernest Rubin were actively engaging historians in their own journals, discussing how to carefully apply statistics to historical research.

On the other side of the Channel, the French Annales historians were advocating longue durée history: a move away from biographies to prosopographies, from events to structures. In its own way, this was another historiography teetering on the edge between the nomothetic and idiographic, an approach that sought to uncover the rhymes of history.

Interest in quantitative approaches surged again in the late 1950s, led by a new wave of Annales historians like Fernand Braudel and American quantitative manifestos like those by Benson, Conrad, and Meyer.

William Aydelotte went so far as to point out that all historians implicitly quantify when they use words like “many”, “average”, “representative”, or “growing” – the question wasn’t whether there could be quantitative history, but when formal quantitative methods should be used.

By 1968, George Murphy, seeing the swell of interest, asked a very familiar question: why now? Why were the 1960s different from the 1860s or 1930s; why were historians, in that moment, finally able to do it right? His answer was that it wasn’t just the new technologies, the huge datasets, the innovative methods: it was the zeitgeist. The 1960s was the right era for computational history, because it was the era of computation.

By the early 70s, there was a historian using a computer in every major history department. Quantitative history had finally grown into itself.

Popper Historicism

Of course, in retrospect, Murphy was wrong. Once the pendulum swung too far towards scientific history, theoretical objections began pushing it the other way.

In The Poverty of Historicism, Popper rejected scientific history, but mostly as a means of rejecting historicism outright. Popper’s arguments represent an attack from outside the historiographic tradition, but one that eventually had significant purchase even among historians, as an indication of the failure of nomothetic approaches to culture. It is, to an extent, a return to Musgrave’s critique of Isaac Newton.

At the same time, we see growing criticism from historians themselves. Arthur Schlesinger famously wrote that “important questions are important precisely because they are not susceptible to quantitative answers.”

There was a converging consensus among English-speaking historians, as in the early 20th century, that quantification erased the essence of the humanities, that it smoothed over the very inequalities and historical contingencies we needed to highlight.

Barzun’s Clio

Jacques Barzun summed it up well, if scathingly, saying history ought to free us from the bonds of the machine, not feed us into it.

The skeptics prevailed, and the pendulum swung the other way. The post-structural, cultural, and literary-critical turns in historiography pivoted away from quantification and computation. The final nail was probably Fogel and Engerman’s 1974 Time on the Cross, which reduced American slavery to economic figures, and didn’t exactly treat the subject with nuance and care.

The cliometricians, demographers, and quantitative historians didn’t disappear after the cultural turn, but their numbers shrank, and they tended to find themselves in social science departments, or fled here to Europe, where social and economic historians were faring better.

Which brings us, 40 years on, to the middle of a new wave of quantitative or “formal method” history. Ed Ayers, like George Murphy before him, wrote, essentially: this time it’s different.

And he’s right, to a point. Many here today draw their roots not to the cliometricians, but to the very cultural historians who rejected quantification in the first place. Ours is a digital history steeped in the values of the cultural turn, one that respects social justice and seeks to use our approaches to shine a light on the underrepresented and the historically contingent.

But that doesn’t stop a new wave of critiques which, if not repeating old arguments, certainly rhyme with them. Take Johanna Drucker’s recent call to rebrand data as capta: when we treat observations as objective, as if they were the same as the phenomena observed, we collapse the critical distance between the world and our interpretation of it. And interpretation, Drucker contends, is the foundation on which humanistic knowledge is based.

Which is all to say, every swing of the pendulum between idiographic and nomothetic history was situated in its own historical moment. It’s not a clock’s pendulum, but Foucault’s pendulum, with each swing’s apex ending up slightly off from the last. The issues of chronology and astronomy are different from those of eugenics and manifest destiny, which are themselves different from the capitalist and dehumanizing tendencies of 1950s mainframes.

But they all rhyme. Quantitative history has failed many times, for many reasons, but there are a few threads that bind them which we can learn from — or, at least, a few recurring mistakes we can recognize in ourselves and try to avoid going forward.

We won’t, I suspect, stop the pendulum’s inevitable about-face, but at least we can continue our work with caution, respect, and care.

Which is to be Master?

The lesson I’d like to highlight may be summed up in one question, asked by Humpty Dumpty to Alice: which is to be master?

Over several hundred years of quantitative history, the advice of proponents and critics alike tends to align with this question. Indeed, R.G. Collingwood wrote specifically that “statistical research is for the historian a good servant but a bad master,” referring to the fact that statistical historical patterns mean nothing without historical context.

Schlesinger, whom I mentioned earlier as saying historical questions are interesting precisely because they can’t be quantified, later acknowledged that while quantitative methods can be useful, they can also lead historians astray. Instead of tackling good questions, he said, historians will tackle easily quantifiable ones; Schlesinger was uncomfortable with the tail wagging the dog.

Which is to be master – questions

I’ve found many ways in which historians have accidentally given over agency to their methods and machines over the years, but these five, I think, are the most relevant to our current moment.

Unfortunately, since we’re running out of time, you’ll just have to trust me that these are historically recurring.

Number 1 is the uncareful appropriation of statistical methods for historical uses. Such a method controls us precisely because it offers a black box whose output we don’t truly understand.

A common example I see these days is in network visualizations. People visualize nodes and edges using what are called force-directed layouts in Gephi, but they don’t exactly understand what those layouts mean. As these layouts were designed, the physical proximity of nodes is not meant to represent relatedness, yet I’ve seen historians interpret two neighboring nodes as related simply because of their visual adjacency.

This is bad. It’s false. But because we don’t quite understand what’s happening, we get lured by the black box into nonsensical interpretations.

The second way methods drive us is in our reliance on methodological imports. That is, we take the time to open the black box, but we only use methods that we learn from statisticians or scientists. Even when we fully understand the methods we import, if we’re bound to other people’s analytic machinery, we’re bound to their questions and biases.

Take the example I mentioned earlier: demographic segmentation, punch card sorters, and their influence on social-scientific statistics. The very mechanical affordances of early computers influenced the sorts of questions people asked for decades: how do discrete groups of people react to the world in different ways, and how do they compare with one another?

The next thing to watch out for is naive scientism. Even if you know the assumptions of your methods, and you develop your own techniques for the problem at hand, you can still fall into the positivist trap that Johanna Drucker warns us about — collapsing the distance between what we observe and some underlying “truth”.

This is especially difficult when we’re dealing with “big data”. Once you’re working with so much material you couldn’t hope to read it all, it’s easy to forget the distance between your operationalizations and what you actually intend to measure.

For instance, if I’m finding friendships in Early Modern Europe by looking for particular words written in correspondence, I will completely miss the friends who were neighbors, and thus had no reason to write letters for us to eventually read.

A fourth way we can be misled by quantitative methods is the ease with which they lend an air of false precision or false certainty.

This is the problem Matthew Lincoln and the other panelists brought up yesterday, where missing or uncertain data, once quantified, falsely appears precise enough to make comparisons.

I see this mistake crop up in early and recent quantitative histories alike. Say we measure the changing rate of transnational shipments over time and notice a positive trend. The problem is that the positive difference is quite small, easily attributable to error; but because numbers always look precise, it still feels like we’re being more rigorous than a qualitative assessment would allow, even when that precision is unwarranted.
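To make the worry concrete, here is a hedged Python sketch with invented shipment figures: a small “positive trend” whose 95% confidence interval comfortably includes zero.

```python
import math

def diff_ci(mean_a, se_a, mean_b, se_b, z=1.96):
    """95% confidence interval for the difference between two noisy estimates."""
    diff = mean_b - mean_a
    se = math.sqrt(se_a ** 2 + se_b ** 2)
    return diff - z * se, diff + z * se

# Hypothetical shipment rates for two periods, with standard errors.
lo, hi = diff_ci(mean_a=102.0, se_a=4.0, mean_b=106.0, se_b=4.5)
print(lo < 0 < hi)  # True: the 'trend' is within the margin of error
```

The point estimate looks precise (+4.0), but the interval runs from roughly -7.8 to +15.8: the data cannot distinguish the trend from noise.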

The last thing to watch out for, and maybe the most worrisome, is the blinders quantitative analysis places on historians who don’t engage with other historiographic methods. This has been the downfall of many waves of quantitative history in the past: the inability to care about, or even see, that which can’t be counted.

This was, in part, what led Time on the Cross to become the excuse to drive historians from cliometrics. The measurable indicators of slavery were sufficient to show it having some semblance of economic success for black populations; but it was precisely those aspects of slavery they could not measure that were the most historically important.

So how do we regain mastery in light of these obstacles?

Which is to be master – answers

1. Uncareful Appropriation – Collaboration

Regarding the uncareful appropriation of methods, we can easily sidestep the issue of accidentally misusing a method by collaborating with someone who knows how the method works. This may require a translator; statisticians can as easily misunderstand historical problems as historians can misunderstand statistics.

Historians and statisticians can fruitfully collaborate, though, if they have someone in the middle trained to some extent in both — even if they‚Äôre not themselves experts. For what it‚Äôs worth, Dutch institutions seem to be ahead of the game in this respect, which is something that should be fostered.

2. Reliance on Imports – Statistical Training

Getting away from reliance on disciplinary imports may take some more work, because we ourselves must learn the approaches well enough to augment them, or create our own. Right now in DH this is often handled by summer institutes and workshop series, but I’d argue those are not sufficient here. We need to make room in our curricula for actual methods courses, or even degrees focused on methodology, in the same fashion as social scientists, if we want to start a robust practice of developing appropriate tools for our own research.

3. Naive Scientism – Humanities History

The spectre of naive scientism is one we need to be careful of, but it is also one we are already well-equipped to deal with. If we want to combat the uncareful use of proxies in digital history, we need only teach the history of the humanities: why the cultural turn happened, what has gone wrong with positivist approaches to history in the past, and so on.

Incidentally, I think this is something digital historians already guard well against, but it’s still worth keeping in mind and making sure we teach it. Particularly, digital historians need to remain aware of parallel approaches from the past, rather than tracing their background only to the textual work of people like Roberto Busa in Italy.

4. False Precision & Certainty – Simulation & Triangulation

False precision and false certainty have some shallow fixes, and some deep ones. In the short term, we need to be better about understanding things like confidence intervals and error bars, and use methods like what Matthew Lincoln highlighted yesterday.

In the long term, though, digital history would do well to adopt triangulation strategies to help mitigate against these issues. That means trying to reach the same conclusion using multiple different methods in parallel, and seeing if they all agree. If they do, you can be more certain your results are something you can trust, and not just an accident of the method you happened to use.

5. Quantitative Blinders – Rejecting Digital History

Avoiding quantitative blinders – that is, the tendency to only care about what’s easily countable – is an easy fix, but I’m afraid to say it, because it might put me out of a job. We can’t call what we do digital history, or quantitative history, or cliometrics, or whatever else. We are, simply, historians.

Some of us use more quantitative methods, and some don’t, but if we’re not ultimately contributing to the same body of work, both sides will do themselves a disservice by not bringing every approach to bear in the wide range of interests historians ought to pursue.

Qualitative and idiographic historians will be stuck unable to deal with the deluge of material that can paint us a broader picture of history, and quantitative or nomothetic historians will lose sight of the very human irregularities that make history worth studying in the first place. We must work together.

If we don’t come together, we’re destined to remain punched-card humanists – that is, we will always be constrained and led by our methods, not by history.

Creativity Theme Again

Of course, this divide is a false one. There are no purely quantitative or purely qualitative studies; close-reading historians will continue to say things like “representative” or “increasing”, and digital historians won’t start publishing graphs with no interpretation.

Still, silos exist, and some of us have trouble leaving the comfort of our digital humanities conferences or our “traditional” history conferences.

That’s why this conference, I think, is so refreshing. It offers a great mix of both worlds, and I’m privileged and thankful to have been able to attend. While there are a lot of lessons we can still learn from those before us, from my vantage point, I think we’re on the right track, and I look forward to seeing more of those fruitful combinations over the course of today.

Thank you.


  1. This account is influenced by some talks by Ben Schmidt. Any mistakes are from my own faulty memory, and not from his careful arguments.

[f-s d] Cetus

Quoting Liz Losh, Jacqueline Wernimont tweeted that behind every visualization is a spreadsheet.

But what, I wondered, is behind every spreadsheet?

Space whales.

Okay, maybe space whales aren’t behind every spreadsheet, but they’re behind this one, dated 1662, notable for the gigantic nail it hammered into the coffin of our belief that heaven above is perfect and unchanging. The following post is the first in my new series full-stack dev (f-s d), where I explore the secret life of data. 1

Hevelius. Mercurius in Sole visus (1662).

The Princess Bride teaches us that a good story involves "fencing, fighting, torture, revenge, giants, monsters, chases, escapes, true love, miracles". In this story, Cetus, three of those play prominent roles: (red) giants, (sea) monsters, and (cosmic) miracles. Also Greek myths, interstellar explosions, beer-brewing astronomers, meticulous archivists, and top-secret digitization facilities. All together, they reveal how technologies, people, and stars aligned to stick this 350-year-old spreadsheet in your browser today.

The Sea

When Aethiopian queen Cassiopeia claimed herself more beautiful than all the sea nymphs, Poseidon was, let’s say, less than pleased. Mildly miffed. He maybe sent a sea monster named Cetus to destroy Aethiopia.

Because obviously the best way to stop a flood is to drown a princess, Queen Cassiopeia chained her daughter to the rocks as a sacrifice to Cetus. Thankfully the hero Perseus just happened to be passing through Aethiopia, returning home after beheading Medusa, that snake-haired woman whose eyes turned living creatures to stone. Perseus (depicted below as the world’s most boring 2-ball juggler) revealed Medusa’s severed head to Cetus, turning the sea monster to stone and saving the princess. And then they got married, because traditional gender roles, I guess?

Corinthian vase depicting Perseus, Andromeda and Ketos. [via]
Cetaceans, you may recall from grade school, are those giant carnivorous sea-mammals that Captain Ahab warned you about. Cetaceans, from Cetus. You may also remember we have a thing for naming star constellations and dividing the sky up into sections (see the Zodiac), and that we have a long history of comparing the sky to the ocean (see Carl Sagan or Star Trek IV).

It should come as no surprise, then, that we’ve designated a whole section of space as ‘The Sea’, home of Cetus (the whale), Aquarius (the God) and Eridanus (the water pouring from Aquarius’ vase, source of river floods), Pisces (two fish tied together by a rope, which makes total sense I promise), Delphinus (the dolphin), and Capricornus (the goat-fish. Listen, I didn’t make these up, okay?).

Jamieson’s Celestial Atlas, Plate 21 (1822). [via]
Jamieson’s Celestial Atlas, Plate 23 (1822). [via]
Ptolemy listed most of these constellations in his Almagest (ca. 150 A.D.), including Cetus, along with descriptions of over a thousand stars. Ptolemy’s model, with Earth at the center and the constellations just past Saturn, set the course of cosmology for over a thousand years.

Ptolemy’s Cosmos [by Robert A. Hatch]
In this cosmos, which reigned in Western Europe for centuries past Copernicus’ death in 1543, the stars were fixed and motionless. There was no vacuum of space; every planet was embedded in a shell made of aether or quintessence (quint-essence, the fifth element), and each shell sat atop the next until reaching the celestial sphere. This last sphere held the stars, each one fixed to it as with a pushpin. Of course, all of it revolved around the earth.

The domain of heavenly spheres was assumed perfect in all sorts of ways. They slid across each other without friction, and the planets and stars were perfect spheres which could not change and were unmarred by inconsistencies. One reason it was so difficult for even “great thinkers” to believe the earth orbited the sun, rather than vice-versa, was that such a system would be at complete odds with how people knew physics to work. It would break gravity, break motion, and break the outer perfection of the cosmos, which was essential (…heh) 2 to our notions of, well, everything.

Which is why, when astronomers with their telescopes and their spreadsheets started systematically observing imperfections in planets and stars, lots of people didn’t believe them—even other astronomers. Over the course of centuries, though, these imperfections became impossible to ignore, and helped launch the earth in rotation ’round the sun.

This is the story of one such imperfection.

A Star is Born (and then dies)

Around 1296 A.D., over the course of half a year, a red giant star some 2 quadrillion miles away grew from 300 to 400 times the size of our sun. Over the next half year, the star shrank back down to its previous size. Light from the star took 300 years to reach earth, eventually striking the retina of German pastor David Fabricius. It was very early Tuesday morning on August 13, 1596, and Pastor Fabricius was looking for Jupiter. 3

At that time of year, Jupiter would have been near the constellation Cetus (remember our sea monster?), but Fabricius noticed a nearby bright star (labeled ‘Mira’ in the figure below) which he did not remember from Ptolemy or Tycho Brahe’s star charts.

Mira Ceti and Jupiter. [via]
Spotting an unrecognized star wasn’t unusual, but one so bright in so common a constellation was certainly worthy of note. He wrote down some observations of the star throughout September and October, after which it seemed to have disappeared as suddenly as it had appeared. The disappearance prompted Fabricius to write a letter about it to famed astronomer Tycho Brahe, who had described a similar appearing-then-disappearing star between 1572 and 1574. Brahe jotted Fabricius’ observations down in his journal. This sort of behavior, after all, was a bit shocking for a supposedly fixed and unchanging celestial sphere.

More shocking, however, was what happened 13 years later, on February 15, 1609. Once again searching for Jupiter, pastor Fabricius spotted another new star in the same spot as the last one. Tycho Brahe having recently died, Fabricius wrote a letter to his astronomical successor, Johannes Kepler, describing the miracle. This was unprecedented. No star had ever vanished and returned, and nobody knew what to make of it.

Unfortunately for Fabricius, nobody did make anything of it. His observations were either ignored or, occasionally, dismissed as an error. To add injury to insult, a local goose thief killed Fabricius with a shovel blow, thus ending his place in this star’s story, among other stories.

Mira Ceti

Three decades passed. On the winter solstice of 1638, Johannes Phocylides Holwarda, preparing to view a lunar eclipse, reported with excitement the star’s appearance and, by August 1639, its disappearance. The new star, Holwarda claimed, should be considered of the same class as Brahe, Kepler, and Fabricius’ new stars. As much a surprise to him as it had been to Fabricius, Holwarda saw the star again on November 7, 1639. Although he was not aware of it, his new star was the same one Fabricius had spotted 30 years prior.

Two more decades passed before the new star in the neck of Cetus would be systematically sought and observed, this time by Johannes Hevelius: local politician, astronomer, and brewer of fine beers. By that time many had seen the star, but it was difficult to know whether it was the same celestial body, or even what was going on.

Hevelius brought everything together. He found recorded observations from Holwarda, Fabricius, and others, from today’s Netherlands to Germany to Poland, and realized these disparate observations were of the same star. Befitting its puzzling and seemingly miraculous nature, Hevelius dubbed the star Mira (miraculous) Ceti. The image below, from Hevelius’ Firmamentum Sobiescianum sive Uranographia (1687), depicts Mira Ceti as the bright star in the sea monster’s neck.

Hevelius. Firmamentum Sobiescianum sive Uranographia (1687).

Going further, from 1659 to 1683, Hevelius observed Mira Ceti more consistently than anyone before. There had been eleven recorded observations in the 65 years between Fabricius’ first sighting of the star and Hevelius’ undertaking; Hevelius himself recorded 75 more. Oddly, while Hevelius was a remarkably meticulous observer, he insisted the star was inherently unpredictable, with no regularity in its reappearances or variable brightness.

Beginning shortly after Hevelius, the astronomer Ismaël Boulliau also undertook a thirty-year search for Mira Ceti. He even published a prediction that the star would go through its vanishing cycle every 332 days, which turned out to be incredibly accurate. As today’s astronomers note, Mira Ceti’s brightness increases and decreases by several orders of magnitude every 331 days, caused by an interplay between radiation pressure and gravity in the star’s gaseous exterior.
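A prediction like Boulliau’s amounts to simple date arithmetic: take one observed maximum, add a fixed period, repeat. A minimal Python sketch of the idea; the anchor date below is invented for illustration, not one of Boulliau’s actual reference observations:

```python
from datetime import date, timedelta

PERIOD_DAYS = 332  # Boulliau's published period; the modern figure is ~331 days

def next_maxima(last_maximum, n=3, period=PERIOD_DAYS):
    """Project the dates of the next n brightness maxima from the last one seen."""
    return [last_maximum + timedelta(days=period * k) for k in range(1, n + 1)]

# A purely illustrative anchor date (Python uses the proleptic Gregorian calendar,
# which 17th-century observers did not).
for d in next_maxima(date(1660, 1, 1)):
    print(d.isoformat())
```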

Mira Ceti composite taken by NASA’s Galaxy Evolution Explorer. [via]
While of course Boulliau didn’t arrive at today’s explanation for Mira’s variability, his solution did require a rethinking of the fixity of stars, and eventually contributed to the notion that maybe the same physical laws that apply on Earth also rule the sun and stars.

Spreadsheet Errors

But we’re not here to talk about Boulliau, or Mira Ceti. We’re here to talk about this spreadsheet:

Hevelius. Mercurius in Sole visus (1662).

This snippet represents Hevelius’ attempt to systematically collect prior observations of Mira Ceti. Unreasonably meticulous readers of this post may note an inconsistency: I wrote that Johannes Phocylides Holwarda observed Mira Ceti on November 7th, 1639, yet Hevelius here shows Holwarda observing the star on December 7th, 1639, an entire month later. The little notes on the side are basically the observers saying: “wtf, this star keeps reappearing???”

This mistake was not a simple printer’s error. It reappeared in Hevelius’ printed books three times: in 1662, 1668, and 1685. It is an early example of what Raymond Panko and others call a spreadsheet error; such errors appear in nearly 90% of 21st-century spreadsheets. Hand-entry is difficult, and mistakes are bound to happen. In this case, a game of telephone also played a part: Hevelius may have pulled some observations not directly from the original astronomers, but from the notes of Tycho Brahe and Johannes Kepler, to which he had access.
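A basic cross-check, of the kind modern data-entry pipelines automate, would have surfaced the slip: compare two transcriptions of the same observations and flag any disagreement. The dictionaries below are invented stand-ins keyed by observer; only Holwarda’s two conflicting dates come from the discussion above:

```python
def flag_discrepancies(source_a, source_b):
    """Compare two transcriptions of the same observations, keyed by observer,
    and report any values that disagree between them."""
    return {
        observer: (value_a, source_b[observer])
        for observer, value_a in source_a.items()
        if observer in source_b and value_a != source_b[observer]
    }

# Hypothetical transcriptions of observation dates.
original = {"Holwarda": "1639-11-07", "Fabricius": "1596-08-13"}
hevelius = {"Holwarda": "1639-12-07", "Fabricius": "1596-08-13"}
print(flag_discrepancies(original, hevelius))  # {'Holwarda': ('1639-11-07', '1639-12-07')}
```

Trivial as it is, this is the check that a hand-copied, thrice-reprinted table never got.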

Unfortunately, with so few observations, and many of the early ones so sloppy, mistakes compound themselves. It’s difficult to predict a variable star’s periodicity when you don’t have the right dates of observation, which may have contributed to Hevelius’ continued insistence that Mira Ceti kept no regular schedule. The other contributing factor, of course, is that Hevelius worked without a telescope and under cloudy skies, and stars are hard to measure under even the best circumstances.

To Be Continued

Here ends the first half of Cetus. The second half will cover how Hevelius’ book was preserved, the labor behind its digitization, and a bit about the technologies involved in creating the image you see.

Early modern astronomy is a particularly good pre-digital subject for full-stack dev (f-s d), since it required vast international correspondence networks and distributed labor in order to succeed. Hevelius could not have created this table, compiled from the observations of several others, without access to cutting-edge astronomical instruments and the contemporary scholarly network.

You may ask why I included that whole section on Greek myths and Ptolemy’s constellations. Would as many early modern astronomers have noticed Mira Ceti had it not sat in the center of a familiar constellation, I wonder?

I promised this series will be about the secret life of data, answering the question of what’s behind a spreadsheet. Cetus is only the first story (well, second, I guess), but the idea is to upturn the iceberg underlying seemingly mundane datasets to reveal the complicated stories of their creation and usage. Stay tuned for future installments.


  1. I’m retroactively adding my blog rant about data underlying an equality visualization to the f-s d series.
  2. this pun is only for historians of science
  3. Most of the historiography in this and the following section is summarized from Robert A. Hatch’s “Discovering Mira Ceti: Celestial Change and Cosmic Continuity.”


Last week, I publicly outed myself as a non-tenure-track academic diagnosed on the autism spectrum, 1 hoping that doing so might help other struggling academics find solace knowing they are not alone. I was unprepared for the outpouring of private and public support. Friends, colleagues, and strangers thanked me for helping them feel a little less alone, which in turn helped me feel much less alone. Thank you all, deeply and sincerely.

In a similar spirit, for interested allies and struggling fellows, this post is about how my symptoms manifest in the academic world, and how I manage them. 2

Navigating the social world is tough, a fact that may surprise some of my friends and most of my colleagues. I do alright at conferences and in groups, when conversation is polite and skin-deep, but it requires careful concentration and a lot of smoke and mirrors. Inside, it feels like I’m translating from Turkish to Cantonese without knowing either language. Every time this is said, that is the appropriate reply, though I struggle to understand why; I just possess a translation book, and recite what is expected. Stimulus and response. This skill was only recently acquired.

Looking at the point between people’s eyes makes it appear as though I am making direct eye contact during conversations. Certain observations (“you look tired”) are apparently less well-received than others (“you look excited”), and I’ve mostly learned which are which.

After a long day keeping up this appearance, especially at conferences, I find a nice dark room and stay there. Sharing conference hotel rooms with fellow academics is never an option. Some strategies I figured out myself; others, like the eye contact trick, I built over extended discussions with an old girlfriend after she handed me a severely-highlighted copy of The Partner’s Guide to Asperger Syndrome.

ADHD and Autism Spectrum Disorder are highly co-morbid, and I have been diagnosed with either or both by several independent professionals in the last twenty years. Working is hard, and often takes at least twice as much time for me as it does for the peers with whom I have discussed this. When interested in something, I lose myself entirely in it for hours on end, but a single break in concentration will leave me scrambling. It may take hours or days to return to a task, if I do at all. My best work is done in marathon, and work that takes longer than a few days may never get finished, or may drop in quality precipitously. Keeping the internet disconnected and my phone off during regular periods every day, locked in my windowless office, helps keep distractions at bay. But, I have yet to discover a good strategy to manage long projects. A career in the book-driven humanities may have been a poor choice.

Paying bills on time, keeping schedules, and replying to emails are among the most stressful tasks in my life. When I don’t adequately handle all of these mundane tasks, it sets in motion a cycle of horror that paralyzes my ability to get anything done, until I eventually file for task bankruptcy and inevitably disappoint colleagues, friends, or creditors to whom action is owed. Poor time management and stress-cycles lead me to over-promise and under-deliver. On the bright side, I recently received help developing strategies to improve that, and they work. Sometimes.

Friendships, surprisingly, are easy to maintain but difficult to nourish. My friends consider me trustworthy and willing to help (if not necessarily always dependable), but I lose track of friends or family who aren’t geographically close. Deeper emotional relationships are rare or, for swaths of my life, non-existent. I get no fits of anger or depression or elation or excitement. Indeed, my friends and family remark how impossible it is to tell whether I like a gift they’ve given me.

People occasionally describe my actions as offensive, rude, or short, and I get frustrated trying to understand exactly why what I’m doing fits into those categories. Apparently, early in grad school, I had a bit of a reputation for asking obnoxious questions in lectures. But I don’t like upsetting people, and actively (maybe successfully?) try to curb these traits when they are pointed out.

Thankfully, academic life allows me the freedom to lock myself in a room and focus on a task. Using work as a coping mechanism for social difficulties may be unhealthy, but hey, at least I found a career that rewards my peculiarities.

My life is pretty great. I have good friends, a loving family, and hobbies that challenge me. As long as I maintain the proper controlled environment, my fixations and obsessions are a perfect complement to an academic career, especially in a culture that (unfortunately) rewards workaholism. The same tenacity often compensates for difficulties in navigating romantic relationships, of which I’ve had a few incredibly fulfilling and valuable ones over my life thus far.

Unfortunately, my experience on the autism spectrum is not shared by all academics. Some have enough difficulty managing the social world that they end up alienating colleagues who are on their tenure committees, to disastrous effect. From private conversations, it seems autistic women suffer more from this than men, as they are expected to perform more service work and to be more social. Supportive administrators can be vital in these situations, and autism-spectrum academics may want to negotiate accommodations for themselves as part of their hiring process.

Despite some frustrations, I have found my atypical way of interacting with the world to be a feature, not a bug. My atypicality presents as what used to be called Asperger Syndrome, and it is easier for me to interact with the world, and easier for the world to interact with me, than many other autistic individuals. That said, whether or not my friends and colleagues notice, I still struggle with many aspects common to those diagnosed on the autism spectrum: social-emotional difficulties, alexithymia, intensity of focus, hypersensitivity, system-oriented thinking, etc.

Relationships or friendships with someone on the spectrum can be tough, even with someone who doesn’t outwardly present common characteristics, like me. An old partner once vented her frustrations that she couldn’t turn to her friends for advice, because: “everyone just said Scott is so normal and I was thinking [no], he’s just very very good at passing [as socially aware].” Like many who grow up non-neurotypical, I learned a complex set of coping strategies to help me fit in and succeed in a neurotypical world. To concentrate on work, I create an office cave to shut out the world. I use a complicated set of journals, calendars, and apps to keep me on task and ensure I pay bills on time. To stay attentive, I sit at the front of a lecture hall; it even works, sometimes. Some ADHD symptoms are managed pharmacologically.

These strategies give me the 80% push I need to be a functioning member of society, to become someone who can sustain relationships, not get kicked out of his house for forgetting rent, and can almost finish a PhD. Almost. It’s not quite enough to have prevented a dozen incompletes on my transcripts, but I make do. A host of unrealistically patient and caring friends, family, and colleagues helps. (If you’re someone to whom I still owe work, but am too scared to reply to because of how delinquent I am, thanks for understanding! waves and runs away). Caring allies help. A lot.

My life so far has been a series of successes and confusions. Not unlike anybody else’s life, I suppose. I occupy my own corner of weirdness, which is itself unique enough, but everyone has their own corner. I doubt my writing this will help anyone understand themselves any better, but hopefully it will help fellow academics feel a bit safer in their own weirdness. And if this essay helps our neurotypical colleagues be a bit more understanding of our struggles, and better-informed as allies, all the better.


  1. The original article, Stigma, was written for the Conditionally Accepted column of Inside Higher Ed. Jeana Jorgensen, Eric Grollman and Sarah Bray provided invaluable feedback, and I wouldn’t have written it without them. They invited me to write this second article for Inside Higher Ed as well, which was my original intent. I wound up posting it on my blog instead because their posting schedule didn’t quite align with my writing schedule. This shouldn’t be counted as a negative reflection on the process of publishing with that fine establishment.
  2. Let me be clear: I know very little about autism, beyond that I have been diagnosed with it. I’m still learning a lot. This post is about me. Knowing other people face similar struggles has been profoundly helpful, regardless of what causes those struggles.

“Digital History” Can Never Be New

If you claim computational approaches to history (“digital history”) let historians ask new types of questions, or that they offer new historical approaches to answering or exploring old questions, you are wrong. You’re not actually wrong, but you are institutionally wrong, which is maybe worse.

This is a problem, because the rhetoric from practitioners (including me) is that we can bring something “new” to the table, and when we don’t, we’re called out for not doing so. The exchange might (but probably won’t) go like this:

Digital Historian: And this graph explains how velociraptors were of utmost importance to Victorian sensibilities.

Historian in Audience: But how is this telling us anything we haven’t already heard before? Didn’t John Hammond already make the same claim?

DH: That’s true, he did. One thing the graph shows, though, is that velociraptors in general tend to play much less important roles across hundreds of years, which lends support to the Victorian thesis.

HiA: Yes, but the generalized argument doesn’t account for cultural differences across those times, so doesn’t meaningfully contribute to this (or any other) historical conversation.

New Questions

History (like any discipline) is made of people, and those people have Ideas about what does or doesn’t count as history (well, historiography, but that’s a long word so let’s ignore it). If you ask a new type of question or use a new approach, that new thing probably doesn’t fit historians’ Ideas about proper history.

Take culturomics. They make claims like this:

The age of peak celebrity has been consistent over time: about 75 years after birth. But the other parameters have been changing. Fame comes sooner and rises faster. Between the early 19th century and the mid-20th century, the age of initial celebrity declined from 43 to 29 years, and the doubling time fell from 8.1 to 3.3 years.

Historians saw those claims and asked “so what”? It’s not interesting or relevant according to the things historians usually consider interesting or relevant, and it’s problematic in ways historians find things problematic. For example, it ignores cultural differences, does not speak to actual human experiences, and has nothing of use to say about a particular historical moment.

It’s true. Culturomics-style questions do not fit well within a humanities paradigm (incommensurable, anyone?). By the standard measuring stick of what makes a good history project, culturomics does not measure up. A new type of question requires a new measuring stick; in this case, I think a good one for culturomics-style approaches is the extent to which they bridge individual experiences with large-scale social phenomena, or how well they are able to reconcile statistical social regularities with free or contingent choice.

The point, though, is that a culturomics presentation would fit few of the boxes expected at a history conference, and so would be considered a failure. Rightly so, too: it’s a bad history presentation. But what culturomics is successfully doing is asking new types of questions, whether or not historians find them legitimate or interesting. Is it good culturomics?

To put too fine a point on it, since history is often a question-driven discipline, new types of questions that are too different from previous types are no longer legitimately within the discipline of history, even if they are intrinsically about human history and do not fit in any other discipline.

What’s more, new types of questions may appear simplistic by historians’ standards, because they fail at fulfilling even the most basic criteria usually used to measure historical worth. It’s worth keeping in mind that, to most of the rest of the world, our historical work often fails at meeting their criteria for worth.

New Approaches

New approaches to old questions share a similar fate, but for different reasons. That is, if they are novel, they are not interesting, and if they are interesting, they are not novel.

Traditional historical questions are, let’s face it, not particularly new. Tautologically. Some old questions in my field are: what role did now-silent voices play in constructing knowledge-making instruments in 17th century astronomy? How did scholarship become institutionalized in the 18th century? Why was Isaac Newton so annoying?

My own research is an attempt to provide a broader view of those topics (at least, the first two) using computational means. Since my topical interest has a rich tradition among historians, it’s unlikely any of my historically-focused claims (for example, that scholarly institutions were built to replace the really complicated and precarious role people played in coordinating social networks) will be without precedent.

After decades, or even centuries, of historical work in this area, there will always be examples of historians already having made my claims. My contribution is the bolstering of a particular viewpoint, the expansion of its applicability, the reframing of a discussion. Ultimately, maybe, I convince the world that certain social network conditions play an important role in allowing scholarly activity to be much more successful at its intended goals. My contribution is not, however, a claim that is wholly without precedent.

But this is a problem, since DH rhetoric, even by practitioners, can understandably lead people to expect such novelty. Historians in particular are very good at fitting old patterns to new evidence. It’s what we’re trained to do.

Any historical claim (to an acceptable question within the historical paradigm) can easily be countered with “but we already knew that”. Either the question’s been around long enough that every plausible claim has been covered, or the new evidence or theory is similar enough to something pre-existing that it can be taken as precedent.

The most masterful recent discussion of this topic was Matthew Lincoln’s Confabulation in the humanities, where he shows how easy it is to make up evidence and get historians to agree that they already knew it was true.

To put too fine a point on it, new approaches to old historical questions are destined to produce results which conform to old approaches; or if they don’t, it’s easy enough to stretch the old & new theories together until they fit. New approaches to old questions will fail at producing completely surprising results; this is a bad standard for historical projects. If a novel methodology were to create truly unrecognizable results, it is unlikely those results would be recognized as “good history” within the current paradigm. That is, historians would struggle to care.

What Is This Beast?

What is this beast we call digital history? Boundary-drawing is a tried-and-true tradition in the humanities, digital or otherwise. It’s theoretically kind of stupid but practically incredibly important, since funding decisions, tenure cases, and similar career-altering forces are at play. If digital history is a type of history, it’s fundable as such, tenurable as such; if it isn’t, it ain’t. What’s more, if what culturomics researchers are doing is also history, their already-well-funded machine can start taking slices of the sad NEH pie.

Artist's rendition of sad NEH pie. [via]
So “what counts?” is unfortunately important to answer.

This discussion around what is “legitimate history research” is really important, but I’d like to table it for now, because it’s so often conflated with the discussion of what is “legitimate research” sans history. The former question easily overshadows the latter, since academics are mostly just schlubs trying to make a living.

For the last century or so, history and philosophy of science have been smooshed together in departments and conferences. It’s caused a lot of concern. Does history of science need philosophy of science? Does philosophy of science need history of science? What does it mean to combine the two? Is what comes out of the middle even useful?

Weirdly, the question sometimes comes down to “does history and philosophy of science even exist?”. It’s weird because people identify with that combined title, so I published a citation analysis in Erkenntnis a few years back that basically showed that, indeed, there is an area between the two communities, and indeed those people describe themselves as doing HPS, whatever that means to them.

Look! Right in the middle there, it's history and philosophy of science.

I bring this up because digital history, as many of us practice it, leaves us floating somewhere between public engagement, social science, and history. Culturomics occupies a similar interstitial space, though inching closer to social physics and complex systems.

From this vantage point, we have a couple of options. We can say digital history is just history from a slightly different angle, and try to be evaluated by standard historical measuring sticks, which would make our work easily criticized as not particularly novel. Or we can say digital history is something new, occupying that in-between space, which could render the work unrecognizable to our usual communities.

The either/or proposition is, of course, ludicrous. The best work being done now skirts the line, offering something just novel enough to be surprising, but not so far out of traditional historical bounds as to be grouped with culturomics. But I think we need to be more deliberate and organized in this practice, lest we end up like History and Philosophy of Science, still dealing with basic questions of legitimacy fifty years down the line.

In the short term, this probably means trying not just to avoid the rhetoric of newness, but to actively curtail it. In the long term, it may mean allying with like-minded historians, social scientists, statistical physicists, and complexity scientists to build a new framework of legitimacy that recognizes the forms of knowledge we produce which don’t always align with historiographic standards. As Cassidy Sugimoto and I recently wrote, this often comes with journals, societies, and disciplinary realignment.

The least we can do is steer away from a novelty rhetoric, since what is novel often isn’t history, and what is history often isn’t novel.

“Branding” – An Addendum

After writing this post, I read Amardeep Singh’s call to, among other things, avoid branding:

Here’s a way of thinking that might get us past this muddle (and I think I agree with the authors that the hype around DH is a mistake): let’s stop branding our scholarship. We don’t need Next Big Things and we don’t need Academic Superstars, whether they are DH Superstars or Theory Superstars. What we do need is to find more democratic and inclusive ways of thinking about the value of scholarship and scholarly communities.

This is relevant here, and good, but tough to reconcile with the earlier post. In an ideal world, without disciplinary brandings, we can all try to be welcoming of works on their own merits, without relying on our preconceived disciplinary criteria. In the present condition, though, it’s tough to see such an environment forming. In that context, maybe a unified digital history “brand” is the best way to stay afloat. This would build barriers against whatever new thing comes along next, though, so it’s a tough question.

The Turing Point

Below are some crazy, uninformed ramblings about the least-complex possible way to trick someone into thinking a computer is a human, for the purpose of history research. I’d love for some genuine AI/machine intelligence researchers to point me to the actual discussions on the subject. These aren’t original thoughts; they spring from countless sci-fi novels and AI research from the ’70s-’90s. Humanists beware: this is super sci-fi speculative, but maybe an interesting thought experiment.

If someone’s chatting with a computer, but doesn’t realize her conversation partner isn’t human, that computer passes the Turing Test. Unrelatedly, if a robot or piece of art is just close enough to reality to be creepy, but not close enough to be convincingly real, it lies in the Uncanny Valley. I argue there is a useful concept in the simplest possible computer which is still convincingly human, and that computer will be at the Turing Point. 1

By Smurrayinchester – self-made, based on image by Masahiro Mori and Karl MacDorman, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2041097

Forgive my twisting Turing Tests and Uncanny Valleys away from their normal use, for the sake of outlining the Turing Point concept:

  • A human simulacrum is a simulation of a human, or some aspect of a human, in some medium, which is designed to be as-close-as-possible to that which is being modeled, within the scope of that medium.
  • A Turing Test winner is any human simulacrum which humans consistently mistake for the real thing.
  • An occupant of the Uncanny Valley is any human simulacrum which humans consistently doubt represents a “real” human.
  • Between the Uncanny Valley and Turing Test winners lies the Turing Point, occupied by the least-sophisticated human simulacrum that can still consistently pass as human in a given medium. The Turing Point is a hyperplane in a hypercube, such that there are many points of entry for the simulacrum to “phase-transition” from uncanny to convincing.

Extending the Turing Test

The classic Turing Test scenario is a text-only chatbot which must, in free conversation, be convincing enough for a human to think it is speaking with another human. A piece of software named Eugene Goostman sort-of passed this test in 2014, convincing a third of judges it was a 13-year-old Ukrainian boy.

There are many possible modes in which a computer can act convincingly human. It is easier to make a convincing simulacrum of a 13-year-old non-native English speaker who is confined to text messages than to make a convincing college professor, for example. Thus the former has a lower Turing Point than the latter.

Playing with the constraints of the medium will also affect the Turing Point threshold. The Turing Point for a flesh-covered robot is incredibly difficult to surpass, since so many little details (movement, design, voice quality, etc.) may place it into the Uncanny Valley. A piece of software posing as a Twitter user, however, would have a significantly easier time convincing fellow users it is human.

The Turing Point, then, is flexible to the medium in which the simulacrum intends to deceive, and the sort of human it simulates.

From Type to Token

Convincing the world a simulacrum is any old human is different than convincing the world it is some specific human. This is the token/type distinction; convincingly simulating a specific person (token) is much more difficult than convincingly simulating any old person (type).

Simulations of specific people are all over the place, even if they don’t intend to deceive. Several Twitter-bots exist as simulacra of Donald Trump, reading his tweets and creating new ones in a similar style. Perhaps in imitation of Poe’s Law, certain people’s styles, or certain types of media (e.g. Twitter), may provide such a low Turing Point that it is genuinely difficult to distinguish humans from machines.

Put differently, the way some Turing Tests may be designed, humans could easily lose.
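For the curious, the style-imitation trick behind bots like the Trump simulacra is often as simple as a word-level Markov chain: record which words follow which in a corpus, then walk that table to generate new text. A minimal sketch, trained on invented example tweets (not a real bot's code):

```python
import random

def train(corpus_lines):
    """Map each word to the list of words observed to follow it."""
    chain = {}
    for line in corpus_lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Random-walk the chain from a start word; fixed seed for reproducibility."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Invented "tweets" standing in for a scraped corpus.
tweets = [
    "velociraptors are tremendous believe me",
    "believe me velociraptors are important",
]
chain = train(tweets)
print(generate(chain, "velociraptors"))
```

Because frequent word-pairs are sampled more often, the output statistically resembles the source's style, which is exactly why low-bandwidth media like Twitter offer such a low Turing Point.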

It’ll be useful to make up and define two terms here. I imagine the concepts already exist, but couldn’t find them, so please comment if they do so I can use less stupid words:

  • A type-bot is a machine designed to represent something at the type-level. For example, a bot that can be mistaken for some random human, but not some specific human.
  • A token-bot is a machine designed to represent something at the token-level. For example, a bot that can be mistaken for Donald Trump.

Replaying History

Using traces to recreate historical figures (or at least things they could have done) as token-bots is not uncommon. The most recent high-profile example of this is a project to create a new Rembrandt painting in the original style. Shawn Graham and I wrote an article on using simulations to create new plausible histories, among many other examples old and new.

This all got me thinking: if we reach the Turing Point for some social media personalities (that is, it is difficult to distinguish between their social media presence and a simulacrum of it), what’s to say we can’t reach it for an entire social media ecosystem? Can we take a snapshot of Twitter and project it several seconds/minutes/hours/days into the future, a bit like a meteorological model?

A few questions and obvious problems:

  • Much of Twitter’s dynamics are dependent upon exogenous forces: memes from other media, real world events, etc. Thus, no projection of Twitter alone would ever look like the real thing. One can, however, potentially use such a simulation to predict how certain types of events might affect the system.
  • This is way overkill, and impossibly computationally complex at this scale. You can simulate the dynamics of Twitter without simulating every individual user, because people on average act pretty systematically. That said, for the humanities-inclined, we may gain more insight from the ground-level of the system (individual agents) than macroscopic properties.
  • This is key. Would a set of plausibly-duplicate Twitter personalities on aggregate create a dynamic system that matches Twitter as an aggregate system? That is, just because the algorithms pass the Turing Test, because humans believe them to be humans, does that necessarily imply the algorithms have enough fidelity to accurately recreate the dynamics of a large-scale social network? Or will small unnoticeable differences between the simulacrum and the original accrue atop each other, such that in aggregate they no longer act like a real social network?

The last point is, I think, a theoretically and methodologically fertile one for people working in DH, AI, and Cognitive Science: whether reducing human-appreciable differences between machines and people is sufficient to simulate aggregate social behavior, or whether human-appreciability (i.e., the Turing Test) is a strict enough criterion for making accurate predictions about societies.
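To make the meteorological-model analogy concrete, here is a deliberately tiny agent-based sketch of "projecting" a social network forward. Everything in it is invented for illustration (50 agents, a random follower graph, one copy-your-followee rule); the open question above is precisely whether individually-plausible agents like these would aggregate into realistic system-level dynamics.

```python
import random

def step(topics, followees, copy_prob, rng):
    """One tick: each agent may adopt the current topic of a random followee."""
    new = topics[:]
    for agent, follows in enumerate(followees):
        if follows and rng.random() < copy_prob:
            new[agent] = topics[rng.choice(follows)]
    return new

def project(n_agents=50, n_steps=20, copy_prob=0.3, seed=42):
    """Initialize a random network snapshot and run it forward n_steps ticks."""
    rng = random.Random(seed)
    # Each agent follows three random others (a crude stand-in for Twitter's graph).
    followees = [rng.sample(range(n_agents), 3) for _ in range(n_agents)]
    topics = [rng.choice(["cats", "politics", "weather"]) for _ in range(n_agents)]
    for _ in range(n_steps):
        topics = step(topics, followees, copy_prob, rng)
    return topics

final = project()
counts = {t: final.count(t) for t in set(final)}
print(counts)  # macroscopic state after 20 ticks
```

Even this toy shows the design tension in the bullets above: the macroscopic topic counts are cheap to compute, but nothing guarantees they track what real, exogenously-buffeted humans would have done.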

These points aside, if we ever do manage to simulate specific people (even in a very limited scope) as token-bots based on the traces they leave, it opens up interesting pedagogical and research opportunities for historians. Scott Enderle tweeted a great metaphor for this:

Imagine, as a student, being able to have a plausible discussion with Marie Curie, or sitting in an Enlightenment-era salon. 2 Or imagine, as a researcher (if individual Turing Point machines do aggregate well), being able to do well-grounded counterfactual history that works at the token level rather than at the type level.

Turing Point Simulations

Bringing this slightly back into the realm of the sane, the interesting thing here is the interplay between appreciability (a person’s ability to appreciate enough difference to notice something wrong with a simulacrum) and fidelity.

We can specifically design simulation conditions with incredibly low-threshold Turing Points, even for token-bots. That is to say, we can create a condition where the interactions are simple enough to make a bot that acts indistinguishably from the specific human it is simulating.

At the most extreme end, this is obviously pointless. If our system is one in which a person can only answer “yes” or “no” to pre-selected preference questions (“Do you like ice-cream?”), making a bot to simulate that person convincingly would be trivial.
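Trivial in the literal sense: in that degenerate medium, a token-bot is nothing but a lookup table of the person's recorded answers. A sketch, with invented answers:

```python
# Degenerate token-bot: the medium only allows yes/no answers to preset
# questions, so a dict of recorded answers is a perfect simulacrum.
# The answers below are invented, not anyone's real preferences.
scott_bot = {
    "Do you like ice-cream?": "yes",
    "Do you like conferences?": "no",
}

def answer(bot, question):
    """Answer from the recorded table; default 'no' for unseen questions."""
    return bot.get(question, "no")

print(answer(scott_bot, "Do you like ice-cream?"))  # yes
```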

Putting that aside (lest we get into questions of the Turing Point of a set of Turing Points), we can potentially design reasonably simplistic test scenarios that would allow for an easy-to-reach Turing Point while still being historiographically or sociologically useful. It’s sort of a minimization problem in topological optimization. Such a goal would limit the burden of the simulation while maximizing the potential research benefit (but only if, as mentioned before, the difference between true fidelity and the ability to win a token-bot Turing Test is small enough to allow for generalization).

In short, the concept of a Turing Point can help us conceptualize and build token-simulacra that are useful for research or teaching. It helps us ask the question: what’s the least-complex-but-still-useful token-simulacrum? It’s also kind-of maybe sort-of like Kolmogorov complexity for human appreciability of other humans: that is, the simplest possible representation of a human that is convincing to other humans.

I’ll end by saying, once again, I realize how insane this sounds, and how far-off. And also how much an interloper I am to this space, having never so much as designed a bot. Still, as Bill Hart-Davidson wrote,

the possibility seems more plausible than ever, even if not soon-to-come. I’m not even sure why I posted this on the Irregular, but it seemed like it’d be relevant enough to some regular readers’ interests to be worth spilling some ink.


  1. The name itself is maybe too on-the-nose, being a pun on turning point and thus connected to the rhetoric of singularity, but ¯\_(ツ)_/¯
  2. Yes yes I know, this is SecondLife all over again, but hopefully much more useful.

Summary: Martin & Runyon’s “Digital Humanities, Digital Hegemony”

Today’s post just summarizes an article recently shared with me, as an attempt to boost the signal.

Those following along at home know I’ve been exploring how digital humanities infrastructure reinforces pre-existing cultural biases, most recently with Nickoal Eichmann & Jeana Jorgensen looking at DH Conferences, 2000-2015.

One limitation of our study is that we know very little about the content of conference presentations or the racial identities of authors, which means we can’t assess bias in those directions. John D. Martin III & Carolyn Runyon recently published preliminary results more thoroughly addressing race & gender in DH from a funding perspective, focused on the content of grants:

Martin, John D., III, and Carolyn Runyon. “Digital Humanities, Digital Hegemony: Exploring Funding Practices and Unequal Access in the Digital Humanities.” SIGCAS Computers and Society 46, no. 1 (March 2016): 20–26. doi:10.1145/2908216.2908219.

By hand-categorizing 656 DH-oriented NEH grants from 2007-2016, totaling $225 million, Martin & Runyon found 110 projects whose focus involved gender or individuals of a certain gender, and 228 which focused on race/ethnicity or individuals identifiable with particular races/ethnicities.

From the article

Major findings include:

  • Twice as much money goes to studying men as to studying women.
  • On average, individual projects about women are better-funded.
  • The top three race/ethnicity categories by funding amount are White ($21 million), Asian ($7 million), and Black ($6.5 million).
  • White men are discussed as individuals, while women and non-white people are discussed as groups.

Their results fit well with what I and others have found, which is that DH propagates the same cultural bias found elsewhere within and outside academia.

A next step, vital to this project, is to find equivalent metrics for other disciplines and data sources. Until we get a good baseline, we won’t actually know whether our interventions are improving the situation. It’s all well and good to say “things are bad”, but until we know the compared-to-what, we won’t have a reliable way of testing what works and what doesn’t.

Who sits in the 41st chair?

tl;dr Rich-get-richer academic prestige in a scarce job market makes meritocracy impossible. Why some things get popular and others don’t. Also agent-based simulations.

Slightly longer tl;dr This post is about why academia isn’t a meritocracy, at no intentional fault of those in power who try to make it one. None of the presented ideas is novel on its own, but I do intend this as a novel conceptual contribution in its connection of disparate threads. In particular, I suggest the predictability of research success in a scarce academic economy as a theoretical framework for exploring successes and failures in the history of science.

But mostly I just beat a “musical chairs” metaphor to death.

Positive Feedback

To the victor go the spoils, and to the spoiled go the victories. Think about it: the Yankees; Alexander the Great; Stanford University. Why do the Yankees have twice as many World Series appearances as their nearest competitors, how was Alex’s empire so fucking vast, and why does Stanford get all the cool grants?

The rich get richer. Enough World Series victories, and the Yankees get the reputation and funding to entice the best players. Ol’ Allie-G inherited an amazing army, was taught by Aristotle, and pretty much every place he conquered increased his military’s numbers. Stanford’s known for amazing tech innovation, so they get the funding, which means they can afford even more innovation, which means even more people think they’re worthy of funding, and so on down the line until Stanford and its neighbors (Google, Apple, etc.) destroy the local real estate market and then accidentally blow up the world.

Alexander’s Empire [via]
Okay, maybe I exaggerated that last bit.

Point is, power begets power. Scientists call this a positive feedback loop: when a thing’s size is exactly what makes it grow larger.

You’ve heard it firsthand when a microphoned singer walks too close to her speaker. First the mic picks up what’s already coming out of the speaker. The mic, doing its job, sends what it hears to an amplifier, which sends an even louder version to the very same speaker. The speaker replays a louder version of what it just produced, which is once again received by the microphone, until the sound feeds back onto itself enough times to produce the ear-shattering squeal fans of live music have come to dread. This is a positive feedback loop.

Feedback loop. [via]
Positive feedback loops are everywhere. They’re why the universe counts logarithmically rather than linearly, or why income inequality is so common in free market economies. Left to their own devices, the rich tend to get richer, since it’s easier to make money when you’ve already got some.

Science and academia are equally susceptible to positive feedback loops. Top scientists, the most well-funded research institutes, and world-famous research all got to where they are, in part, because of something called the Matthew Effect.

Matthew Effect

The Matthew Effect isn’t the reality TV show it sounds like.

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath. (Matthew 25:29, King James Bible)

It’s the Biblical idea that the rich get richer, and it’s become a popular party trick among sociologists (yes, sociologists go to parties) describing how society works. In academia, the phrase is brought up alongside evidence that shows previous grant-recipients are more likely to receive new grants than their peers, and the more money a researcher has been awarded, the more they’re likely to get going forward.

The Matthew Effect is also employed metaphorically when it comes to citations. He who gets some citations will accrue more; she who has the most citations will accrue them exponentially faster. There are many correct explanations, but the simplest one will do here:

If Susan’s article on the danger of velociraptors is cited by 15 other articles, I am more likely to find it and cite her than another, never-cited article on velociraptors containing the same information. That’s because when I’m reading research, I look at who’s being cited. The more Susan is cited, the more likely I’ll eventually come across her article and cite it myself, which in turn makes it that much more likely that someone else will find her article through my own citations. Continue ad nauseam.
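This snowball dynamic is easy to sketch in a few lines of Python (a toy model of my own, not drawn from any real citation data): every new citation goes to a paper with probability proportional to the citations it already has.

```python
import random

def simulate_citations(n_papers=100, n_cites=5000, seed=42):
    """Preferential attachment: each new citation picks a paper with
    probability proportional to the citations it has already accrued
    (everyone starts with one 'virtual' citation so they can be found)."""
    rng = random.Random(seed)
    counts = [1] * n_papers
    for _ in range(n_cites):
        target = rng.choices(range(n_papers), weights=counts, k=1)[0]
        counts[target] += 1
    return sorted(counts, reverse=True)

counts = simulate_citations()
top_share = sum(counts[:10]) / sum(counts)
# Every paper here has identical "quality" (none!), yet a few hoard citations
print(f"Top 10% of papers hold {top_share:.0%} of all citations")
```

The papers have no quality at all, yet a handful ends up with a wildly disproportionate share, purely from feedback on early random luck.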

Some of you are thinking this is stupid. Maybe it’s trivially correct, but missing the bigger picture: quality. What if Susan’s velociraptor research is simply better than the competing research, and that’s why it’s getting cited more?

Yes, that’s also an issue. Noticeably awful research simply won’t get much traction. 1 Let’s disqualify it from the citation game. The point is that there is lots of great research out there, waiting to be read and built upon, and its quality isn’t the sole predictor of its eventual citation success.

In fact, quality is a mostly-necessary but completely insufficient indicator of research success. Superstar popularity of research depends much more on the citation effects I mentioned above: more citations beget even more. Previous success is the best predictor of future success, mostly independent of the quality of the research being shared.

Example of positive feedback loops pushing some articles to citation stardom. [via]
This is all pretty hand-wavy. How do we know success is more important than quality in predicting success? Uh, basically because of Napster.

Popular Music

If VH1 were to produce a retrospective on the first decade of the 21st century, perhaps its two biggest subjects would be illegal music sharing and VH1’s I Love the 19xx… TV series. Napster came and went, followed by LimeWire, eDonkey2000, AudioGalaxy, and other services sued by Metallica. Well-known early internet memes like Hamster Dance and All Your Base Are Belong To Us spread through the web like socially transmitted diseases, and researchers found this the perfect opportunity to explore how popularity worked. Experimentally.

In 2006, a group of Columbia University social scientists designed a clever experiment to test why some songs became popular and others did not, relying on the public interest in online music sharing. They created a music downloading site which gathered 14,341 users, each one to become a participant in their social experiment.

The cleverness arose out of their experimental design, which allowed them to get past the pesky problem of history only ever happening once. It’s usually hard to learn why something became popular, because you don’t know which aspects of its popularity were simply random chance, and which were genuine quality. If you could, say, just rerun the 1960s, changing a few small details here or there, would the Beatles still have been as successful? We can’t know, because the 1960s are pretty much stuck having happened as they did, and there’s not much we can do to change it. 2

But this music-sharing site could rerun history, or at least run a few histories simultaneously. When they signed up, each of the site’s 14,341 users was randomly sorted into a group, and their group number determined how they were presented music. The musical selection was intentionally obscure, so users wouldn’t have heard the bands before.

A user from the first group, upon logging in, would be shown songs in random order, and given the option to listen to a song, rate it 1-5, and download it. Users from group #2, instead, were shown the songs ranked in order of their popularity among other members of group #2. Group #3 users were shown a similar rank-order of popular songs, but this time determined by popularity within group #3. So too for groups #4-#9. Every user could listen to, rate, and download music.

Essentially, the researchers put the participants into 9 different self-contained petri dishes, and waited to see which music would become most popular in each. Ranking and download popularity from group #1 was their control group, in that members judged music based on their quality without having access to social influence. Members of groups #2-#9 could be influenced by what music was popular with their peers within the group. The same songs circulated in each petri dish, and each petri dish presented its own version of history.

Music sharing site from Columbia study.

No superstar songs emerged out of the control group. Positive feedback loops weren’t built into the system, since popularity couldn’t beget more popularity if nobody saw what their peers were listening to. The other 8 musical petri dishes told a different story, however. Superstars emerged in each, but each group’s population of popular music was very different. A song’s popularity in each group was slightly related to its quality (as judged by ranking in the control group), but mostly it was social-influence-produced chaos. The authors put it this way:

In general, the “best” songs never do very badly, and the “worst” songs never do extremely well, but almost any other result is possible. (Salganik, Dodds, & Watts, 2006)

These results became even more pronounced when the researchers increased the visibility of social popularity in the system. The rich got richer still. A lot of it has to do with timing. In each group, the first few good songs to become popular are the ones that eventually do best, simply by accident of circumstance. The first few popular songs appear at the top of the list, for others to see, so they in turn become even more popular, and so on ad infinitum. The authors go on:

experts fail to predict success not because they are incompetent judges or misinformed about the preferences of others, but because when individual decisions are subject to social influence, markets do not simply aggregate pre-existing individual preferences.

In short, quality is a necessary but insufficient criterion for ultimate success. Social influence, timing, randomness, and other non-qualitative features of music are what turn a good piece of music into an off-the-charts hit.
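As a toy sketch of the multiple-worlds design (my own, not the Salganik et al. code; the weighting rule and every parameter here are assumptions), we can run several isolated “worlds” where a song’s current download count multiplies its visibility, with quality providing only a small nudge:

```python
import random

def run_world(qualities, n_users=2000, social=True, seed=0):
    """One 'petri dish': each new user downloads one song, influenced
    (if social) by how often peers in the same world already have."""
    rng = random.Random(seed)
    downloads = [0] * len(qualities)
    for _ in range(n_users):
        if social:
            # Visibility grows with current popularity; quality only nudges
            weights = [q * (1 + d) for q, d in zip(qualities, downloads)]
        else:
            weights = list(qualities)  # control world: quality alone
        pick = rng.choices(range(len(qualities)), weights=weights, k=1)[0]
        downloads[pick] += 1
    return downloads

quality_rng = random.Random(1)
qualities = [quality_rng.uniform(0.5, 1.5) for _ in range(48)]  # 48 songs
winners = {max(range(48), key=run_world(qualities, seed=s).__getitem__)
           for s in range(8)}  # the #1 song in each of 8 social worlds
print(f"{len(winners)} different #1 songs across 8 social worlds")
```

Identical songs, identical rules, and each world still crowns its own superstar; only the control world (`social=False`) tracks quality alone.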

Wait what about science?

Compare this to what makes a “well-respected” scientist: it ain’t all citations and social popularity, but they play a huge role. And as I described above, simply out of exposure-fueled propagation, the more citations someone accrues, the more citations they are likely to accrue, until we get a situation like the Yankees (40 World Series appearances, versus 20 by the Giants) on our hands. Superstars are born who are miles beyond the majority of working researchers in terms of grants, awards, citations, etc. Social scientists call this preferential attachment.

Which is fine, I guess. Who cares if scientific popularity is so skewed, as long as good research is happening? Even if we take the Columbia social music experiment at face value, as an exact analog for scientific success, we know the most successful are always good scientists and the least successful are always bad ones, so what does it matter if variability within the ranks of the successful is so detached from quality?

Except, as anyone studying their #OccupyWallstreet knows, it ain’t that simple in a scarce economy. When the rich get richer, that money’s gotta come from somewhere. Like everything else (cf. the law of conservation of mass), academia is a (mostly) zero-sum game, and to the victors go the spoils. To the losers? Meh.

So let’s talk scarcity.

The 41st Chair

The same guy who introduced the concept of the Matthew Effect to scientific grants and citations, Robert K. Merton (…of Columbia University), also brought up “the 41st chair” in the same 1968 article.

Merton’s pretty great, so I’ll let him do the talking:

In science as in other institutional realms, a special problem in the workings of the reward system turns up when individuals or organizations take on the job of gauging and suitably rewarding lofty performance on behalf of a large community. Thus, that ultimate accolade in 20th-century science, the Nobel prize, is often assumed to mark off its recipients from all the other scientists of the time. Yet this assumption is at odds with the well-known fact that a good number of scientists who have not received the prize and will not receive it have contributed as much to the advancement of science as some of the recipients, or more.

This can be described as the phenomenon of “the 41st chair.” The derivation of this tag is clear enough. The French Academy, it will be remembered, decided early that only a cohort of 40 could qualify as members and so emerge as immortals. This limitation of numbers made inevitable, of course, the exclusion through the centuries of many talented individuals who have won their own immortality. The familiar list of occupants of this 41st chair includes Descartes, Pascal, Moliere, Bayle, Rousseau, Saint-Simon, Diderot, Stendhal, Flaubert, Zola, and Proust.


But in greater part, the phenomenon of the 41st chair is an artifact of having a fixed number of places available at the summit of recognition. Moreover, when a particular generation is rich in achievements of a high order, it follows from the rule of fixed numbers that some men whose accomplishments rank as high as those actually given the award will be excluded from the honorific ranks. Indeed, their accomplishments sometimes far outrank those which, in a time of less creativity, proved enough to qualify men for this high order of recognition.

The Nobel prize retains its luster because errors of the first kind, where scientific work of dubious or inferior worth has been mistakenly honored, are uncommonly few. Yet limitations of the second kind cannot be avoided. The small number of awards means that, particularly in times of great scientific advance, there will be many occupants of the 41st chair (and, since the terms governing the award of the prize do not provide for posthumous recognition, permanent occupants of that chair).

Basically, the French Academy allowed only 40 members (chairs) at a time. We can be reasonably certain those members were pretty great, but we can’t be sure that equally great, or greater, women existed who simply never got the opportunity to participate because none of the 40 members died in time.

These good-enough-to-be-members-but-weren’ts were said to occupy the French Academy’s 41st chair, an inevitable outcome of a scarce economy (40 chairs) in which the potential beneficiaries far outnumber the goods available. The population occupying the 41st chair is huge, and growing, since the same number of chairs has existed since 1634, but the population of France has quadrupled in the intervening four centuries.

Returning to our question of “so what if rich-get-richer doesn’t stick the best people at the top, since at least we can assume the people at the top are all pretty good anyway?”, scarcity of chairs is the so-what.

Since faculty jobs are stagnating compared to adjunct work, yet new PhDs are being granted faster than new jobs become available, we are presented with the much-discussed crisis in higher education. Don’t worry, we’re told, academia is a meritocracy. With so few jobs, only the cream of the crop will get them. The best work will still be done, even in these hard times.

Recent Science PhD growth in the U.S. [via]
Unfortunately, as the Columbia social music study (among many other studies) showed, true meritocracies are impossible in complex social systems. Anyone who plays the academic game knows this already, and many are quick to point it out when they see people in much better jobs doing incredibly stupid things. What those who point out the falsity of meritocracy often get wrong, however, is intention: the idea that there is no meritocracy because those in power talk the meritocracy talk, but don’t then walk the walk. I’ll talk a bit later about how, even if everyone is above board in trying to push the best people forward, occupants of the 41st chair will still often wind up being more deserving than those sitting in chairs 1-40. But more on that later.

For now, let’s start building a metaphor that we’ll eventually over-extend well beyond its usefulness. Remember that kids’ game Musical Chairs, where everyone’s dancing around a bunch of chairs while the music is playing, but as soon as the music stops everyone’s got to find a chair and sit down? The catch, of course, is that there are fewer chairs than people, so someone always loses when the music stops.

The academic meritocracy works a bit like this. It is meritocratic, to a point: you can’t even play the game without proving some worth. The price of admission is a Ph.D. (which, granted, is more an endurance test than an intelligence test, but academic success ain’t all smarts, y’know?), a research area at least a few people find interesting and believe you’d do good work in, and so on. It’s a pretty low meritocratic bar, since it described the 50,000 people who graduated in the U.S. in 2008 alone, but it’s a bar nonetheless. And it’s your competition in Academic Musical Chairs.

Academic Musical Chairs

Time to invent a game! It’s called Academic Musical Chairs, the game where everything’s made up and the points don’t matter. It’s like Regular Musical Chairs, but more complicated (see Fig. 1). Also, the game is fixed.

Figure 1: Academic Musical Chairs
Figure 1: Academic Musical Chairs

See those 40 chairs in the middle green zone? People sitting in them are the winners. Once they’re seated they have what we call in the game “tenure”, and they don’t get up until they die or write something controversial on Twitter. Everyone bustling around them, the active players, are vying for seats while they wait for someone to die; they occupy the yellow zone we call “the 41st chair”. Those beyond that, in the red zone, can’t yet (or may never) afford the price of game admission; they don’t have a Ph.D., they already said something controversial on Twitter, etc. The unwashed masses, you know?

As the music plays, everyone in the 41st chair is walking around in a circle waiting for someone to die and the music to stop. When that happens, everyone rushes to the empty seat. A few invariably reach it simultaneously, until one out-muscles the others and sits down. The sitting winner gets tenure. The music starts again, and the line continues to orbit the circle.

If a player spends too long orbiting in the 41st chair, he is forced to resign. If a player runs out of money while orbiting, she is forced to resign. Other factors may force a player to resign, but they will never appear in the rulebook and will always be a surprise.

Now, some players are more talented than others, whether naturally or through intense training. The game calls this “academic merit”, but it translates here to increased speed and strength, which helps some players reach the empty chair when the music stops, even if they’re a bit further away. The strength certainly helps when competing with others who reach the chair at the same time.

A careful look at Figure 1 will reveal one other way players might increase their chances of success when the music stops. The 41st chair has certain internal shells, or rings, which act a bit like that fake model of an atom everyone learned in high-school chemistry. Players, of course, are the electrons.

Electron shells. [via]
You may remember that the further out the shell, the more electrons can occupy it(-ish): the first shell holds 2 electrons, the second holds 8; third holds 18; fourth holds 32; and so on. The same holds true for Academic Musical Chairs: the coveted interior ring only fits a handful of players; the second ring fits an order of magnitude more; the third ring an order of magnitude more than that, and so on.

Getting closer to the center isn’t easy, and it has very little to do with your “academic rigor”! Also, of course, the closer you are to the center, the easier it is to reach either the chair or the next level (remember positive feedback loops?). Contrariwise, the further you are from the center, the less chance you have of ever reaching the core.

Many factors affect whether a player can proceed to the next ring while the music plays, and some factors actively count against a player. Old age and being a woman, for example, take away 1 point. Getting published or cited adds points, as does already being friends with someone sitting in a chair (the details of how many points each adds can be found in your rulebook). Obviously the closer you are to the center, the easier you can make friends with people in the green core, which will contribute to your score even further. Once your score is high enough, you proceed to the next-closest shell.

Hooray, someone died! Let’s watch what happens.

The music stops. The people in the innermost ring who have the luckiest timing (thus are closest to the empty chair) scramble for it, and a few even reach it. Some very well-timed players from the 2nd & 3rd shells also reach it, because their “academic merit” has lent them the speed and strength to reach past their position. A struggle ensues. Miraculously, a pregnant black woman sits down (this almost never happens), though not without some bodily harm, and the music begins again.

Oh, and new shells keep getting tacked on as more players can afford the cost of admission to the yellow zone, though the green core remains the same size.

Bizarrely, this is far from the first game of this nature. A Spanish board game from 1587 called the Courtly Philosophy had players move figures around a board, inching closer to living a luxurious life in the shadow of a rich patron. Random chance ruled their progression (a roll of the dice), and occasionally they’d reach a tile that said things like: “Your patron dies, go back 5 squares”.

The courtier’s philosophy. [via]
But I digress. Let’s temporarily table the scarcity/41st-chair discussion and get back to the Matthew Effect.

The View From Inside

A friend recently came to me, excited but nervous about how well they were being treated by their department at the expense of their fellow students. “Is this what the Matthew Effect feels like?” they asked. Their question is the reason I’m writing this post, because I spent the next 24 hours scratching my head over “what does the Matthew Effect feel like?”.

I don’t know if anyone’s looked at the psychological effects of the Matthew Effect (if you have, please comment!), but my guess is it encompasses two feelings: 1) impostor syndrome, and 2) hard work finally paying off.

Since almost anyone who reaps the benefits of the Matthew Effect in academia will be an intelligent, hard-working academic, a windfall of accruing success should feel like finally reaping the benefits one deserves. You probably realize that luck played a part, and that many of your harder-working, smarter friends have been equally unlucky, but there’s no doubt in your mind that, at least, your hard work is finally paying off and the academic community is beginning to recognize that fact. No matter how unfair it is that your great colleagues aren’t seeing the same success.

But here’s the thing. You know how, in physics, gravity and acceleration feel equivalent? How, if you’re in a windowless box, you wouldn’t be able to tell the difference between being stationary on Earth and being pulled by a spaceship at 9.8 m/s² through deep space? Success from merit and success from the Matthew Effect probably act similarly, such that it’s impossible to tell one from the other from the inside.

Gravity vs. Acceleration. [via]
Incidentally, that’s why the last advice you ever want to take is someone telling you how to succeed from their own experience.


Since we’ve seen that explosive success requires skill, quality, and intent, but doesn’t reduce to them, the most successful people are not necessarily in the best position to understand the reason for their own rise. Their strategies may have paid off, but so did timing, social network effects, and positive feedback loops. The question you should be asking is: why didn’t other people with the same strategies also succeed?

Keep this especially in mind if you’re a student and your tenured professor advises you to seek an academic career. They may believe that giving you their strategies for success will help you succeed, when really they’re just handing you one of 50,000 admission tickets to Academic Musical Chairs.

Building a Meritocracy

I’m teetering well past the edge of speculation here, but I assume the communities of entrenched academics encouraging undergraduates into a research career are the same communities assuming a meritocracy is at play, and are doing everything they can in hiring and tenure review to ensure a meritocratic playing field.

But even if gender bias did not exist, even if everyone responsible for decision-making genuinely wanted a meritocracy, even if the game weren’t rigged at many levels, the economy of scarcity (the 41st chair) combined with the Matthew Effect would ensure that a true meritocracy is impossible. There are only so many jobs, and hiring committees need to choose some selection criteria; those criteria will be subject to scarcity and rich-get-richer effects.

I won’t prove that point here, because original research is beyond the scope of this blog post, but I have a good idea of how to do it. In fact, after I finish writing this, I probably will go do just that. Instead, let me present very similar research, and explain how that method can be used to answer this question.

We want an answer to the question of whether positive feedback loops and a scarce economy are sufficient to prevent the possibility of a meritocracy. In 1971, Tom Schelling asked an unrelated question which he answered using a very relevant method: can racial segregation manifest in a community whose every actor is intent on not living a segregated life? Spoiler alert: yes.

He answered this question by simulating an artificial world, similar in spirit to the Columbia social music experiment, except that instead of using real participants, he experimented on very simple rule-abiding game creatures of his own invention. A bit like having a computer play checkers against itself.

The experiment is simple enough: a bunch of creatures occupy a checker board, and like checker pieces, they’re red or black. Every turn, one creature has the opportunity to move randomly to another empty space on the board, and their decision to move is based on their comfort with their neighbors. Red pieces want red neighbors, and black pieces want black neighbors, and they keep moving randomly ’til they’re all comfortable. Unsurprisingly, segregated creature communities appear in short order.

What if our checker-creatures were more relaxed in their comforts? They’d be comfortable as long as they were in the majority; say, at least 50% of their neighbors were the same color. Again, let the computer play itself for a while, and within a few cycles the checker board is once again almost completely segregated.

Schelling segregation. [via]
What if the checker pieces are excited about the prospect of a diverse neighborhood? We relax the criteria even more, so red checkers only move if fewer than a third of their neighbors are red (that is, they’re totally comfortable with 66% of their neighbors being black). If we run the experiment again, we see, again, the checker board break up into segregated communities.

Schelling’s claim wasn’t about how the world actually works, but about the simplest conditions that could still produce segregation. In his fictional checkers-world, every piece could be genuinely interested in living in a diverse neighborhood, and yet the system still eventually resulted in segregation. This offered powerful support for the theory that racism could operate subtly, even if every actor were well-intended.
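A minimal sketch of a Schelling-style model (my own toy version, assuming a wrap-around grid and random relocation, not Schelling’s exact procedure) shows the effect in a few dozen lines:

```python
import random

def schelling(size=20, threshold=1/3, steps=50000, seed=0):
    """Toy Schelling model: an unhappy agent (fewer than `threshold` of
    its occupied neighbors share its color) jumps to a random empty cell.
    Returns the mean share of like-colored neighbors (~0.5 if integrated)."""
    rng = random.Random(seed)
    n = size * size
    agents = n * 3 // 8                      # leave ~25% of cells empty
    cells = ['R'] * agents + ['B'] * agents + [None] * (n - 2 * agents)
    rng.shuffle(cells)
    grid = {(x, y): cells[x * size + y] for x in range(size) for y in range(size)}

    def occupied_neighbors(x, y):
        return [grid[(x + dx) % size, (y + dy) % size]
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0) and grid[(x + dx) % size, (y + dy) % size]]

    def happy(x, y):
        around = occupied_neighbors(x, y)
        return not around or sum(c == grid[x, y] for c in around) / len(around) >= threshold

    for _ in range(steps):
        x, y = rng.randrange(size), rng.randrange(size)
        if grid[x, y] and not happy(x, y):
            new_home = rng.choice([p for p, v in grid.items() if v is None])
            grid[new_home], grid[x, y] = grid[x, y], None

    like = [sum(c == grid[p] for c in occupied_neighbors(*p)) / len(occupied_neighbors(*p))
            for p in grid if grid[p] and occupied_neighbors(*p)]
    return sum(like) / len(like)

mean_like = schelling()
print(f"Mean share of like-colored neighbors: {mean_like:.2f}")
```

Even though every agent tolerates being in a two-thirds minority, the average agent ends up surrounded mostly by its own color.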

Vi Hart and Nicky Case created an interactive visualization/game that teaches Schelling’s segregation model perfectly. Go play it. Then come back. I’ll wait.

Such an experiment can be devised for our 41st-chair/positive-feedback system as well. We can even build a simulation whose rules match the Academic Musical Chairs I described above. All we need to do is show that a system in which both effects operate (a fact empirically proven time and again in academia) produces fundamental challenges for meritocracy. Such a model would show that simple meritocratic intent is insufficient to produce a meritocracy. Hulk-smashing the myth of the meritocracy seems fun; I think I’ll get started soon.
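To gesture at what such a simulation might look like (entirely my own sketch with made-up parameters, not the model the post promises): give every player a merit score, let attention accrue by rich-get-richer weighted by merit, and hand each open chair to the most-visible player still standing.

```python
import random

def musical_chairs(n_players=500, n_chairs=40, cites_per_round=50, seed=7):
    """Sketch: everyone here already passed the merit bar; citations then
    accrue by rich-get-richer (nudged by merit), and each open chair
    goes to the most-cited player still standing."""
    rng = random.Random(seed)
    merit = [rng.random() for _ in range(n_players)]
    citations = [1.0] * n_players
    hired = []
    for _ in range(n_chairs):               # one chair opens per round
        for _ in range(cites_per_round):    # attention accrues meanwhile
            i = rng.choices(range(n_players),
                            weights=[m * c for m, c in zip(merit, citations)],
                            k=1)[0]
            citations[i] += 1
        standing = [i for i in range(n_players) if i not in hired]
        hired.append(max(standing, key=citations.__getitem__))
    top_merit = set(sorted(range(n_players), key=merit.__getitem__)[-n_chairs:])
    return len(set(hired) & top_merit) / n_chairs

overlap = musical_chairs()
print(f"Chairs held by the 40 highest-merit players: {overlap:.0%}")
```

Merit influences every step, yet the 40 chairs never line up cleanly with the 40 highest-merit players, because early random leads in visibility compound.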

The Social Network

Our world ain’t that simple. For one, as seen in Academic Musical Chairs, your place in the social network influences your chances of success. A heavy-hitting advisor, an old-boys cohort, etc., all improve your starting position when you begin the game.

To put it more operationally, let’s go back to the Columbia social music experiment. Part of a song’s success was due to quality, but the stuff that made stars was much more contingent on chance timing followed by positive feedback loops. Two of the authors from the 2006 study wrote another in 2007, echoing this claim that good timing was more important than individual influence:

models of information cascades, as well as human subjects experiments that have been designed to test the models (Anderson and Holt 1997; Kubler and Weizsacker 2004), are explicitly constructed such that there is nothing special about those individuals, either in terms of their personal characteristics or in their ability to influence others. Thus, whatever influence these individuals exert on the collective outcome is an accidental consequence of their randomly assigned position in the queue.

These articles are part of a large literature on predicting popularity, viral hits, success, and so forth. There’s The Pulse of News in Social Media: Forecasting Popularity by Bandari, Asur, & Huberman, which showed that a top predictor of newspaper shares was the source rather than the content of an article, and that a major chunk of shared articles never really make it to viral status. There’s Can Cascades be Predicted? by Cheng, Adamic, Dow, Kleinberg, and Leskovec (an all-star cast if ever I saw one), which shows the remarkable reliance on timing & first impressions in predicting success, as well as on social connectivity. That is, success travels faster through those who are well-connected (shocking, right?), and structural properties of the social network are important. This study by Susarla et al. also shows the importance of location in the social network in helping push those positive feedback loops, affecting the magnitude of success in YouTube video shares.

Twitter information cascade. [via]
Now, I know, social media success does not an academic career predict. The point here, instead, is to show that in each of these cases, before sharing occurs and without taking social network effects into account (that is, relying solely on the merit of the thing itself), success is predictable, but stardom is not.

Concluding, Finally

Relating it to Academic Musical Chairs: it’s not too difficult to say whether someone will end up in the 41st chair, but it’s impossible to tell whether they’ll end up in seats 1-40 unless you keep an eye on how positive feedback loops are affecting their career.

In the academic world, there’s a fertile prediction market for Nobel Laureates. Social networks and Matthew Effect citation bursts are decent enough predictors, but what anyone who predicts any kind of success will tell you is that it’s much easier to predict the pool of recipients than it is to predict the winners.

Take Economics. How many working economists are there? Tens of thousands, at least. But there’s this Econometric Society which began naming Fellows in 1933, naming 877 Fellows by 2011. And guess what, 60 of 69 Nobel Laureates in Economics before 2011 were Fellows of the society. The other 817 members are or were occupants of the 41st chair.

The point is (again, sorry), academic meritocracy is a myth. Merit is a price of admission to the game, but not a predictor of success in a scarce economy of jobs and resources. Once you pass the basic merit threshold and enter the 41st chair, forces having little to do with intellectual curiosity and rigor guide eventual success (ahem). Small positive biases like gender, well-connected advisors, early citations, lucky timing, etc. feed back into increasingly larger positive biases down the line. And since there are only so many faculty jobs out there, these feedback effects create a naturally imbalanced playing field. Sometimes Einsteins do make it into the middle ring, and sometimes they stay patent clerks. Or adjuncts, I guess. Those who do make it past the 41st chair are poorly-suited to tell you why, because by and large they employed the same strategies as everybody else.

Figure 1: Academic Musical Chairs

And if these six thousand words weren’t enough to convince you, I leave you with this article and this tweet. Have a nice day!

Addendum for Historians

You thought I was done?

As a historian of science, I find this situation has some interesting repercussions for my research. Perhaps most importantly, it and related concepts from Complex Systems research offer a middle ground framework between environmental/contextual determinism (the world shapes us in fundamentally predictable ways) and individual historical agency (we possess the power to shape the world around us, making the world fundamentally unpredictable).

More concretely, it is historically fruitful to ask not simply what non-“scientific” strategies were employed by famous scientists to get ahead (see Biagioli’s Galileo, Courtier), but also what did or did not set those strategies apart from the masses of people we no longer remember. Galileo, Courtier provides a great example of what we historians can do on a larger scale: it traces Galileo’s machinations to wind up in the good graces of a wealthy patron, and how such a system affected his own research. Using recently-available data on early modern social and scholarly networks, as well as the beginnings of data on people’s activities, interests, practices, and productions, it should be possible to zoom out from Biagioli’s viewpoint and get a fairly sophisticated picture of the trajectories and practices of people who weren’t Galileo.

This is all very preliminary, just publicly blogging whims, but I’d be fascinated by what a wide-angle (dare I say, macroscopic?) analysis of the 41st chair could tell us about how social and “scientific” practices shaped one another in the 16th and 17th centuries. I believe this would bear previously-impossible fruit, since a lone historian grasping ten thousand tertiary actors at once is a fool’s errand, but a walk in the park for my laptop.

As this really is whim-blogging, I’d love to hear your thoughts.


  1. Unless it’s really awful, but let’s avoid that discussion here.
  2. Short of a TARDIS.

Fixing the irregular

Our word “fix” comes from fixus: unwavering; immovable; constant; fixed/fastened. Well, the scottbot irregular has been slowly breaking for years, finally broke last week, and it was time to fix it.

Broken how?

A combination of human error (my own), accruing chaos, the complexities of WordPress, and the awful-but-cheap hosting solution that is bluehost.com. As many noticed, the site’s been slowing down, interactive elements like my photo gallery stopped working, and by last week, pages would go dark for hours at a time. By this week, Bluehost no longer allowed me FTP or cPanel access. So yesterday I took my business to ReclaimHosting.com, the hands-down best (and friendliest) hosting service for personal and small-scale academic websites.

Quoth the Server “404”

I still haven’t figured out what finally did it in, but with so many moving parts, it seemed better to start fresh than repair the beast. I’m currently working on a Jekyll static website; this new WordPress blog you’re reading now is an interim solution. However, I couldn’t just cut my losses and start over, since I’ve put a lot of my soul into the 100+ blog posts & pages I’ve written here since 2009.

More importantly, my site has been cited in dozens of articles, and appears on the syllabus of hundreds of courses, DH and otherwise. If I delete the content, I’m destroying part of the scholarly record, and potentially ruining the morning of professors who assign my blog posts as reading, only to find out at the last minute that it no longer exists.

Here lies the problem. Because I no longer had back-end access to my website, I could not download my content through the usual channels. Because of the peculiarities of my various WordPress customizations, not worth detailing here, I could not use a plugin to export my site and its contents.

Since I wanted the form of my site preserved for the scholarly record, the only solution I could come up with was to crawl my entire site, externally, and download static HTML versions of each page on scottbot.net as it used to exist.

the old scottbot irregular

Fixed how?

This is where the double meaning of fix, described above, comes into play. I wanted the site functioning again, not broken, but I also wanted to preserve the old website as it existed at the URLs everyone has already linked to. For example, a bunch of syllab(uses|i) link to http://www.scottbot.net/HIAL/?tag=networks-demystified to direct their students to my networks demystified posts. I wanted to make sure that URL would continue to point to the version of the site they intended to link to, while also migrating the old content into a new system that I’d be able to update more fluidly. Thankfully the old directory for the site, /HIAL/ (the site used to be called History Is A Lab), made that easier: the new version of the irregular would reside on scottbot.net, and the archive would remain on scottbot.net/HIAL/.

This apparently isn’t trivial. The first step was to use wget (explained and taught by Ian Milligan on the Programming Historian) to download a static version of the entire original irregular. After fiddling with the wget parameters and redownloading my site a few times, I ended up with a mostly-complete mirror of all the old content. Then I uploaded the entire mirror to my new host in the /HIAL/ directory. Yay!
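I didn’t record the exact invocation I eventually settled on, but a typical wget mirroring command along these lines will produce that kind of static snapshot. The flags shown are standard wget options, not necessarily the ones I used; --restrict-file-names=windows, in particular, is what replaces ? with @ in the saved filenames:

```shell
# Mirror the whole site, rewriting internal links to work locally,
# adding .html extensions, and renaming dynamic URLs (?tag=...)
# to filesystem-safe static names (@tag=...)
wget --mirror --page-requisites --convert-links \
     --adjust-extension --restrict-file-names=windows \
     --no-parent http://www.scottbot.net/HIAL/
```

The --no-parent flag keeps the crawl inside /HIAL/ rather than wandering up to the rest of the domain.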


The only catch was that old dynamic page URLs, like scottbot.net/HIAL/?tag=networks-demystified, were saved by wget as static HTML pages, like scottbot.net/HIAL/index.html@tag=networks-demystified.html. The solution, which Dave Lester helped me figure out last night, was to edit the .htaccess file so that people linking to & visiting HIAL/?tag=networks-demystified are automatically redirected to HIAL/index.html@tag=networks-demystified.html.

The .htaccess file sits on your server, quietly directing traffic to the various places it should go. In my case, I needed to use regular expressions (remember that thing Shawn, Ian, and I taught in The Historian’s Macroscope?) to redirect all traffic pointing to HIAL/?[anything] to HIAL/index.html@[anything]. An hour or so of learning how .htaccess worked resulted in:

RewriteEngine On
RewriteBase /HIAL/
# Only fire when a query string actually exists, capturing it as %1
RewriteCond %{QUERY_STRING} ^(.+)$
# Send /HIAL/?<query> to the static /HIAL/index.html@<query>.html;
# the trailing ? strips the query string from the redirect target
RewriteRule ^$ index.html@%1.html? [L,R=301]

which, after some false starts, seems to work. The old site is now fixed, as in constant; secured; unwavering, at scottbot.net/HIAL/. The new irregular, at scottbot.net, is now fixed, as in functional, dynamic. It will continue to evolve and change.
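For readers who find mod_rewrite’s syntax opaque, the same mapping can be sketched in Python. The function name and regex below are mine, purely illustrative of what the rule does:

```python
import re

def rewrite(url):
    """Mimic the .htaccess rule: map a dynamic /HIAL/ URL with a
    query string onto the static filename wget produced."""
    match = re.match(r"^/HIAL/\?(.+)$", url)
    if match:
        # e.g. 'tag=networks-demystified' becomes
        # 'index.html@tag=networks-demystified.html'
        return "/HIAL/index.html@%s.html" % match.group(1)
    return url  # no query string: leave the URL alone
```

So rewrite("/HIAL/?tag=networks-demystified") yields "/HIAL/index.html@tag=networks-demystified.html", while a bare "/HIAL/" passes through untouched.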

the scottbot irregular is dead. Long live the scottbot irregular!