Contextualizing networks with maps

Last post, I talked about combining textual and network analysis. Both are becoming standard tools in the methodological toolkit of the digital humanist, sitting next to GIS in what is shaping up to be the Big Three of computational humanities.

Data as Context, Data as Contextualized

Humanists are starkly aware that no particular aspect of a subject sits in a vacuum; context is key. A network on its own is a set of meaningless relationships without a knowledge of what travels through and across it, what entities make it up, and how that network interacts with the larger world.  The network must be contextualized by the content. Conversely, the networks in which people and processes are situated deeply affect those entities: medium shapes message and topology shapes influence. The content must be contextualized by the network.

At the risk of the iPhonification of methodologies 1,  textual, network, and geographic analysis may be combined with each other and traditional humanities research so that they might all inform one another. That last post on textual and network analysis was missing one key component for digital humanities: the humanities. Combining textual and network analysis with traditional humanities research (rather than merely using the humanities to inform text and network analysis, or vice-versa) promises to transform the sorts of questions asked and projects undertaken in Academia at large.

Just as networks can be used to contextualize text (and vice-versa), the same can be said of networks and maps (or texts and maps for that matter, or all three, but I’ll leave those for later posts). Instead of starting with the maps we all know and love, though, we’ll jump straight into the deep end and treat a map as any sort of representative landscape in which a network can be situated. In fact, I’m going to start off by using the network as a map against which certain relational properties can be overlaid. That is, I’m starting by using a map to contextualize a network, rather than the more intuitive other way around.

Using Maps to Contextualize a Network

The base map we’re discussing here is a map of science. They’ve made their rounds, so you’ve probably seen one, but just in case you haven’t here’s a brief description: some researchers (in this case Kevin Boyack and Richard Klavans) take tons of information from scholarly databases (in this case the Science Citation Index Expanded and the Social Science Citation Index) and create a network diagram from some set of metrics (in this case, citation similarity). They call this network representation a Map of Science.

Base Map of Science built by Boyack and Klavans from 2002 SCIE and SSCI data.

We can debate the merits of these maps till we’re blue in the face, but let’s avoid that for now. To my mind, the maps are useful, interesting, and incomplete, and the map-makers are generally well-aware of their deficiencies. The point here is that it is a map: a landscape against which one can situate oneself, and with which one may be able to find paths and understand the lay of the land.

NSF Funding Profile

In Boyack, Börner 2, and Klavans (2007), the three authors set out to use the map of science to explore the evolution of chemistry research. The purpose of the paper doesn’t really matter here, though; what matters is the idea of overlaying information atop a base network map.

NIH Funding Profile

The images above are the funding profiles of the NIH (National Institutes of Health) and NSF (National Science Foundation). The authors collected publication information attached to all the grants funded by the NSF and NIH and looked at how those publications cited one another. The orange edges show connections between disciplines on the map of science that were more prevalent within the context of a particular funding agency than they were across the entire map of science. Boyack, Börner 3, and Klavans created a map and used it to contextualize certain funding agencies. They and other parties have since used such maps to contextualize universities, authors, disciplines, and other publication groups.

From Network Maps to Geographic Maps

Of course, the Where’s The Beef™ section of this post has yet to be discussed, with the beef in this case being geography. How can we use existing topography to contextualize network topology? Network space rarely corresponds to geographic place; however, neither alone can ever fully represent the landscape within which we are situated. A purely geographic map of ancient Rome would not accurately represent the world in which the ancient Romans lived, as it does not take into account the shortening of distances through well-trod trade routes.

Roman Network by Elijah Meeks, nodes laid out geographically

Enter Stanford DH ninja Elijah Meeks. In two recent posts, Elijah discussed the topology/topography divide. In the first, he created a network layout algorithm which took a network with nodes originally placed in their geographic coordinates, and then distorted the network visualization to emphasize network distance. The visualization above shows the network laid out geographically. The one below shows the Imperial Roman trade routes with network distances emphasized. As Elijah says, “everything of the same color in the above map is the same network distance from Rome.”

Roman Network by Elijah Meeks, nodes laid out geographically and by network distance.

Of course, the savvy reader has probably observed that this does not take everything into account. These are only land routes; what about the sea?

Elijah’s second post addressed just that, impressively applying GIS techniques to determine the likely routes ships took to get from one port to another. This technique drives home the point he was trying to make about transitioning from network topology to network topography. The picture below, incidentally, is Elijah’s re-rendering of the last visualization taking into account both land and sea routes. As you can see, the distance from any city to any other has decreased significantly, even taking into account his network-distance algorithm.

Roman Network by Elijah Meeks, nodes laid out using geography and network distance, taking into account two varieties of routes.

The above network visualization combines geography, two types of transportation routes, and network science to provide a more nuanced at-a-glance view of the Imperial Roman landscape. The work he highlighted in his post on transitioning from topology to topography in edge shapes is also of utmost importance; however, that topic will need to wait for another post.

The Republic of Letters (A Brief Interlude)

Elijah was also involved in another Stanford-based project, one very dear to my heart: Mapping the Republic of Letters. Much of my own research has dealt with the Republic of Letters, especially during my time under Bob Hatch, and at Stanford, Paula Findlen, Dan Edelstein, and Nicole Coleman have been heading up an impressive project on that very subject. I’ll go into more details about the Republic in another post (I know, promises promises), but for now the important thing to look at is their interface for navigating the Republic.

Stanford’s Mapping the Republic of Letters

The team has gone well beyond the interface that currently faces the public; however, even the original map is an important step. Overlaid against a map of Europe are the correspondences of many early modern scholars. The flow of information is apparent temporally, spatially, and through the network topology of the Republic itself. Now, as any good explorer knows, no map is a substitute for a thorough knowledge of the land itself; instead, it is to be used for finding unexplored areas and for synthesizing information at a large scale. For contextualizing.

If you’ll allow me a brief diversion, I’d like to talk about tools for making these sorts of maps, now that we’re on the subject of letters. Elijah’s post on visualizing network distance included a plugin for Gephi to emphasize network distance. Gephi’s a great tool for making really pretty network visualizations, and it also comes with a small but potent handful of network analysis algorithms.

I’m on the development team of another program, the Sci² Tool, which shares a lot of Gephi’s functionality, although it has a much wider scope and includes algorithms for textual, geographic, and statistical analysis, as well as a somewhat broader range of network analysis algorithms.

This is by no means a suggestion to use Sci² over Gephi; they both have their strengths and weaknesses. Gephi is dead simple to use, produces the most beautiful graphs on the market, and is all-around fantastic software. They both excel in different areas, and by using them (and other tools!) together, it is possible to create maps combining geographic and network features without ever having to resort to programming.

The Correspondence of Hugo Grotius

The above image was generated by combining the Sci² Tool with Gephi. It is the correspondence network of Hugo Grotius, a dataset I worked on while at Huygens ING in The Hague. They are a great group doing fantastic Republic of Letters research of their own, and they provided this letters dataset. We just developed this particular functionality in Sci² yesterday, so it will take a bit of time before we work out the bugs and release it publicly; as soon as it is released, though, I’ll be sure to post a full tutorial on how to make maps like the one above.

This ends the public service announcement.

Moving Forward

These maps are not without their critics. Especially prevalent were questions along the lines of “But how is this showing me anything I didn’t already know?” or “All of this is just an artefact of population densities and standard trade routes – what are these maps telling us about the Republic of Letters?” These are legitimate critiques; however, as mentioned before, these maps are still useful for at-a-glance synthesis of information at large scales, or for learning something new about areas in which one is not yet an expert. Another problem has been that the lines on the map don’t represent actual travel routes; those sorts of problems are beginning to be addressed by the type of work Elijah Meeks and other GIS researchers are doing.

To tackle the suggestion that these are merely representing population data, I would like to propose what I believe to be a novel idea. I haven’t published on this yet, and I’m not trying to claim scholarly territory here, but I would ask that if this idea inspires research of your own, please cite this blog post or my publication on the subject, whenever it comes out.

We have a lot of data. Of course it doesn’t feel like we have enough, and it never will, but we have a lot of data. We can use what we have, for example collecting all the correspondences from early modern Europe, and place them on a map like this one. The more data we have, the smaller time slices we can have in our maps. We create a base map that is a combination of geographic properties, statistical location properties, and network properties.

Start with a map of the world. To account for population or related correlations, do something similar to what Elijah did in this post,  encoding population information (or average number of publications per city, or whatever else you’d like to account for) into the map. On top of that, place the biggest network of whatever it is that you’re looking at that you can find. Scholarly communication, citations, whatever. It’s your big Map of YourFavoriteThingHere. All of these together are your base map.

Atop that, place whatever or whomever you are studying. The correspondence of Grotius can be put on this map, just as the NIH was overlaid atop the Map of Science, and areas would light up and become larger if they are surprising against the base map. Are there more letters between Paris and The Hague in the Grotius dataset than one would expect if the dataset were just randomly plucked from the whole Republic of Letters? If so, make that line brighter and thicker.
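
To make the base-map idea a bit more concrete, here is a minimal sketch in Python of how such a "surprise" score might be computed. Everything in it is invented for illustration (the city pairs, the counts, and the simple observed-versus-expected ratio); it is one possible reading of the approach sketched above, not an implementation of any existing tool.

```python
# Hypothetical sketch: score edges in a study dataset (e.g. a single scholar's letters)
# against a base map built from all collected early modern correspondence.
# City pairs and counts are invented for illustration.

base_edges = {                      # letters between city pairs in the base map
    ("Paris", "The Hague"): 1200,
    ("Paris", "London"): 900,
    ("The Hague", "Amsterdam"): 700,
}
sample_edges = {                    # letters in the dataset being contextualized
    ("Paris", "The Hague"): 85,
    ("Paris", "London"): 10,
}

base_total = sum(base_edges.values())
sample_total = sum(sample_edges.values())

for edge, observed in sample_edges.items():
    # expected count if the sample were plucked at random from the base map
    expected = sample_total * base_edges.get(edge, 0) / base_total
    ratio = observed / expected if expected else float("inf")
    print(edge, f"observed={observed} expected={expected:.1f} ratio={ratio:.2f}")
    # edges with a ratio well above 1 would be drawn brighter and thicker
```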

By combining geography, point statistics, and networks, we can create base maps against which we can contextualize whatever we happen to be studying. This is just one possible combination; base maps can be created from any of a myriad of sources of data. The important thing is that we, as humanists, ought to be able to contextualize our data in the same way that we always have. Now that we’re working with a lot more of it, we’re going to need help in those contextualizations. Base maps are one solution.

It’s worth pointing out one major problem with base maps: bias. Until recently, those Maps of Science making their way around the blogosphere represented the humanities as a small island off the coast of the social sciences, if they showed them at all. This is because the primary publication venues of the arts and humanities were not represented in the datasets used to create these science maps. We must watch out for similar biases when constructing our own base maps; however, the problem is significantly more difficult for historical datasets, because the underrepresented are too dead to speak up. For a brief discussion of historical biases, you can read my UCLA presentation here.


Notes:

  1. putting every tool imaginable in one box and using them all at once
  2. Full disclosure: she’s my advisor. She’s also awesome. Hi Katy!
  3. Hi again, Katy!

Topic Modeling and Network Analysis

According to Google Scholar, David Blei’s first topic modeling paper has received 3,540 citations since 2003. Everybody’s talking about topic models. Seriously, I’m afraid of visiting my parents this Hanukkah and hearing them ask “Scott… what’s this topic modeling I keep hearing all about?” They’re powerful, widely applicable, easy to use, and difficult to understand — a dangerous combination.

Since shortly after Blei’s first publication, researchers have been looking into the interplay between networks and topic models. This post will be about that interplay, looking at how they’ve been combined, what sorts of research those combinations can drive, and a few pitfalls to watch out for. I’ll bracket the big elephant in the room, the question of whether these sorts of models capture the semantic meaning for which they’re often used, until a later discussion. This post also attempts to introduce topic modeling to those not yet aware of its potential.

Citations to Blei (2003) from ISI Web of Science. There are even two citations already from 2012; where can I get my time machine?

A brief history of topic modeling

In my recent post on IU’s awesome alchemy project, I briefly mentioned Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) during the discussion of topic models. They’re intimately related, though LSA has been around for quite a bit longer. Without getting into too much technical detail, we should start with a brief history of LSA/LDA.

The story starts, more or less, with a tf-idf matrix. Basically, tf-idf ranks words based on how important they are to a document within a larger corpus. Let’s say we want a list of the most important words for each article in an encyclopedia.

Our first pass is simple: for each article, just attach a list of words sorted by how frequently they’re used. The problem with this is immediately obvious to anyone who has looked at word frequencies; the top words in the entry on the History of Computing would be “the,” “and,” “is,” and so forth, rather than “turing,” “computer,” “machines,” etc. The problem is solved by tf-idf, which scores the words based on how special they are to a particular document within the larger corpus. Turing is rarely used elsewhere, but used exceptionally frequently in our computer history article, so it bubbles up to the top.
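
For readers who want to see the arithmetic, here is a toy sketch of tf-idf in plain Python. The two "articles" and the particular weighting (raw term frequency times log inverse document frequency) are just one common variant, chosen for illustration rather than taken from any specific implementation.

```python
import math
from collections import Counter

# Toy corpus: each "article" is a list of tokens (invented for illustration).
corpus = {
    "history_of_computing": "the turing machine and the computer and the war".split(),
    "cheese_making": "the milk and the cheese and the rennet".split(),
}

def tf_idf(term, doc_tokens, corpus):
    tf = Counter(doc_tokens)[term] / len(doc_tokens)             # term frequency
    docs_with_term = sum(1 for tokens in corpus.values() if term in tokens)
    idf = math.log(len(corpus) / docs_with_term)                 # inverse document frequency
    return tf * idf

doc = corpus["history_of_computing"]
ranked = sorted(set(doc), key=lambda t: tf_idf(t, doc, corpus), reverse=True)
print(ranked)  # "turing", "machine", "computer" rank above "the" and "and"
```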

LSA and pLSA

LSA utilizes these tf-idf scores 1 within a larger term-document matrix. Every word in the corpus is a different row in the matrix, each document has its own column, and the tf-idf score lies at the intersection of every document and word. Our computing history document will probably have a lot of zeroes next to words like “cow,” “shakespeare,” and “saucer,” and high marks next to words like “computation,” “artificial,” and “digital.” This is called a sparse matrix because it’s mostly filled with zeroes; any given document uses only a tiny fraction of the words found across the entire corpus.

With this matrix, LSA uses singular value decomposition to figure out how each word is related to every other word. Basically, the more often words are used together within a document, the more related they are to one another. 2 It’s worth noting that a “document” is defined somewhat flexibly. For example, we can call every paragraph in a book its own “document,” and run LSA over the individual paragraphs.
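
As a rough sketch of that pipeline, here is what LSA might look like with scikit-learn: build a tf-idf matrix, reduce it with truncated SVD, and compare word vectors in the reduced space. The four-"document" corpus is invented, and real LSA work uses far more documents and dimensions; this is only meant to show the moving parts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Each string is a "document" (e.g. a paragraph); texts invented for illustration.
docs = [
    "turing machines and digital computation",
    "computation artificial intelligence digital logic",
    "shakespeare sonnets and elizabethan poetry",
    "poetry meter rhyme and sonnets",
]

tfidf = TfidfVectorizer()
doc_term = tfidf.fit_transform(docs)        # documents x terms (the transpose of the
                                            # word-row, document-column matrix above)

svd = TruncatedSVD(n_components=2)          # collapse the matrix into a small "concept" space
doc_vectors = svd.fit_transform(doc_term)   # each document as a vector in that space
term_vectors = svd.components_.T            # each word as a vector in the same space

# Words used in similar documents end up with similar vectors.
terms = list(tfidf.get_feature_names_out())
sims = cosine_similarity(term_vectors)
print(dict(zip(terms, sims[terms.index("computation")].round(2))))
```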

To get an idea of the sort of fantastic outputs you can get with LSA, do check out the implementation over at The Chymistry of Isaac Newton.

Newton Project LSA

The method was significantly improved by Puzicha and Hofmann (1999), who did away with the linear algebra approach of LSA in favor of a more statistically sound probabilistic model, called probabilistic latent semantic analysis (pLSA). Now is the part of the blog post where I start getting hand-wavy, because explaining the math is more trouble than I care to take on in this introduction.

Essentially, pLSA imagines an additional layer between words and documents: topics. What if every document isn’t just a set of words, but a set of topics? In this model, our encyclopedia article about computing history might be drawn from several topics. It primarily draws from the big platonic computing topic in the sky, but it also draws from the topics of history, cryptography, lambda calculus, and all sorts of other topics to a greater or lesser degree.

Now, these topics don’t actually exist anywhere. Nobody sat down with the encyclopedia, read every entry, and decided to come up with the 200 topics from which every article draws. pLSA infers topics based on what will hereafter be referred to as black magic. Using the dark arts, pLSA “discovers” a bunch of topics, attaches them to a list of words, and classifies the documents based on those topics.

LDA

Blei et al. (2003) vastly improved upon this idea by turning it into a generative model of documents, calling the model Latent Dirichlet allocation (LDA). By this time, as well, some sounder assumptions were being made about the distribution of words and document length — but we won’t get into that. What’s important here is the generative model.

Imagine you wanted to write a new encyclopedia entry, let’s say about digital humanities. Well, we now know there are three elements that make up that process, right? Words, topics, and documents. Using these elements, how would you go about writing this new article on digital humanities?

First off, let’s figure out what topics our article will consist of. It probably draws heavily from topics about history, digitization, text analysis, and so forth. It also probably draws more weakly from a slew of other topics, concerning interdisciplinarity, the academy, and all sorts of other subjects. Let’s go a bit further and assign weights to these topics; 22% of the document will be about digitization, 19% about history, 5% about the academy, and so on. Okay, the first step is done!

Now it’s time to pull out the topics and start writing. It’s an easy process; each topic is a bag filled with words. Lots of words. All sorts of words. Let’s look in the “digitization” topic bag. It includes words like “israel” and “cheese” and “favoritism,” but they only appear once or twice, and mostly by accident. More importantly, the bag also contains 157 appearances of the word “TEI,” 210 of “OCR,” and 73 of “scanner.”

LDA Model from Blei (2011)

So here you are, you’ve dragged out your digitization bag and your history bag and your academy bag and all sorts of other bags as well. You start writing the digital humanities article by reaching into the digitization bag (remember, you’re going to reach into that bag for 22% of your words), and you pull out “OCR.” You put it on the page. You then reach for the academy bag and reach for a word in there (it happens to be “teaching,”) and you throw that on the page as well. Keep doing that. By the end, you’ve got a document that’s all about the digital humanities. It’s beautiful. Send it in for publication.
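
Here is the same story as a few lines of Python, with made-up topic bags and the 22%/19%/5% weights from above. Real LDA draws both the document's topic mixture and the bags themselves from Dirichlet distributions, and weights the words inside each bag; this sketch only mimics the "reach into a bag" step.

```python
import random

# Toy topic "bags" and a document-level topic mixture (all numbers invented).
topics = {
    "digitization": ["TEI", "OCR", "scanner", "metadata"],
    "history":      ["archive", "century", "manuscript", "war"],
    "academy":      ["teaching", "tenure", "department", "journal"],
}
doc_mixture = {"digitization": 0.22, "history": 0.19, "academy": 0.05}
total = sum(doc_mixture.values())
doc_mixture = {t: w / total for t, w in doc_mixture.items()}   # renormalize for the sketch

def generate_document(n_words=20):
    words = []
    for _ in range(n_words):
        # reach into a bag chosen according to the document's topic weights...
        topic = random.choices(list(doc_mixture), weights=list(doc_mixture.values()))[0]
        # ...and pull a word out of it (real LDA also weights words within the bag)
        words.append(random.choice(topics[topic]))
    return " ".join(words)

print(generate_document())  # a nonsensical but recognizably "on-topic" document
```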

Alright, what now?

So why is the generative nature of the model so important? One of the key reasons is the ability to work backwards. If I can generate an (admittedly nonsensical) document using this model, I can also reverse the process and infer, given any new document and a topic model I’ve already generated, which topics that new document draws from.
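
As a quick illustration of working backwards, here is a sketch using the gensim library (my choice for the example; any LDA implementation exposes something similar). The corpus is tiny and invented, so the inferred proportions won't mean much, but the shape of the workflow is the point: train on an existing corpus, then infer the topic mixture of a document the model has never seen.

```python
from gensim import corpora, models

# Tiny invented corpus of already-tokenized "documents"
texts = [
    ["ocr", "scanner", "tei", "digitization"],
    ["archive", "manuscript", "history", "century"],
    ["teaching", "tenure", "department", "journal"],
]
dictionary = corpora.Dictionary(texts)
bows = [dictionary.doc2bow(t) for t in texts]

# Fit a topic model over the existing corpus...
lda = models.LdaModel(bows, num_topics=2, id2word=dictionary, passes=10)

# ...then reverse the process on a new document, inferring which topics it draws from.
new_doc = ["digitization", "history", "ocr", "archive"]
print(lda.get_document_topics(dictionary.doc2bow(new_doc)))
```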

Another factor contributing to the success of LDA is the ability to extend the model. In this case, we assume there are only documents, topics, and words, but we could also make a model that assumes authors who like particular topics, or assumes that certain documents are influenced by previous documents, or that topics change over time. The possibilities are endless, as evidenced by the absurd number of topic modeling variations that have appeared in the past decade. David Mimno has compiled a wonderful bibliography of many such models.

While the generative model introduced by Blei might seem simplistic, it has been shown to be extremely powerful. When a newcomer sees the results of LDA for the first time, they are immediately taken by how intuitive they seem. People sometimes ask me “but didn’t it take forever to sit down and make all the topics?” thinking that some of the magic had to be done by hand. It wasn’t. Topic modeling yields intuitive results, generating what really feels like topics as we know them 3, with virtually no effort on the human side. Perhaps it is the intuitive utility that appeals so much to humanists.

Topic Modeling and Networks

Topic models can interact with networks in multiple ways. While a lot of the recent interest in digital humanities has surrounded using networks to visualize how documents or topics relate to one another, the interfacing of networks and topic modeling initially worked in the other direction. Instead of inferring networks from topic models, many early (and recent) papers attempt to infer topic models from networks.

Topic Models from Networks

The first research I’m aware of in this niche was from McCallum et al. (2005). Their model is itself an extension of an earlier LDA-based model called the Author-Topic Model (Steyvers et al., 2004), which assumes topics are formed based on the mixtures of authors writing a paper. McCallum et al. extended that model for directed messages in their Author-Recipient-Topic (ART) Model. In ART, it is assumed that topics of letters, e-mails or direct messages between people can be inferred from knowledge of both the author and the recipient. Thus, ART takes into account the social structure of a communication network in order to generate topics. In a later paper (McCallum et al., 2007), they extend this model to one that infers the roles of authors within the social network.

Dietz et al. (2007) created a model that looks at citation networks, where documents are generated by topical innovation and topical inheritance via citations. Nallapati et al. (2008) similarly created a model that finds topical similarity in citing and cited documents, with the added ability to predict citations that are not present. Blei himself joined the fray in 2009, creating the Relational Topic Model (RTM) with Jonathan Chang, which itself could summarize a network of documents, predict links between them, and predict words within them. Wang et al. (2011) created a model that allows for “the joint analysis of text and links between [people] in a time-evolving social network.” Their model is able to handle situations where links exist even when there is no similarity between the associated texts.

Networks from Topic Models

Some models have been made that infer networks from non-networked text. Broniatowski and Magee (2010 & 2011) extended the Author-Topic Model, building a model that would infer social networks from meeting transcripts. They later added temporal information, which allowed them to infer status hierarchies and individual influence within those social networks.

Many times, however, rather than creating new models, researchers create networks out of topic models that have already been run over a set of data. There are a lot of benefits to this approach, as exemplified by the Newton’s Chymistry project highlighted earlier. Using networks, we can see how documents relate to one another, how they relate to topics, how topics are related to each other, and how all of those are related to words.

Elijah Meeks created a wonderful example combining topic models with networks in Comprehending the Digital Humanities. Using fifty texts that discuss humanities computing, Elijah created a topic model of those documents and used networks to show how documents, topics, and words interacted with one another within the context of the digital humanities.

Network generated by Elijah Meeks to show how digital humanities documents relate to one another via the topics they share.

Jeff Drouin has also created networks of topic models in Proust, as reported by Elijah.

Peter Leonard recently directed me to TopicNets, a project that combines topic modeling and network analysis in order to create an intuitive and informative navigation interface for documents and topics. This is a great example of an interface that turns topic modeling into a useful scholarly tool, even for those who know little-to-nothing about networks or topic models.

If you want to do something like this yourself, Shawn Graham recently posted a great tutorial on how to create networks using MALLET and Gephi quickly and easily. Prepare your corpus of text, get topics with MALLET, prune the CSV, make a network, visualize it! Easy as pie.
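
For a sense of what the "prune the CSV, make a network" steps might look like in code, here is a sketch using Python and networkx rather than Gephi's own importer. It assumes you have already boiled MALLET's doc-topics output down to a plain three-column CSV of document, topic, weight (the raw format varies between MALLET versions, hence the pruning); the filename and the 0.1 cutoff are arbitrary choices for the example.

```python
import csv
import networkx as nx

# Assumed input: one row per document-topic pair, e.g.  letter_042.txt,7,0.31
G = nx.Graph()
with open("doc_topics_pruned.csv", newline="") as f:
    for doc, topic, weight in csv.reader(f):
        weight = float(weight)
        if weight > 0.1:                        # keep only substantial doc-topic links
            G.add_node(doc, kind="document")
            G.add_node(f"topic_{topic}", kind="topic")
            G.add_edge(doc, f"topic_{topic}", weight=weight)

nx.write_gexf(G, "doc_topic_network.gexf")      # open in Gephi, lay out, visualize
```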

Networks can be a great way to represent topic models. Beyond simple uses of navigation and relatedness as were just displayed, combining the two will put the whole battalion of network analysis tools at the researcher’s disposal. We can use them to find communities of similar documents, pinpoint those documents that were most influential to the rest, or perform any of a number of other workflows designed for network analysis.

As with anything, however, there are a few setbacks. Topic models are rich with data. Every document is related to every other document, even if only barely. Similarly, every topic is related to every other topic. By deciding to represent document similarity as a network, you must decide precisely how similar a set of documents must be before they are linked. Having a network in which every document is connected to every other document is scarcely useful, so generally we’ll make our decision such that each document is linked to only a handful of others. This allows for easier visualization and analysis, but it also destroys much of the rich data that went into the topic model to begin with. This information can be more fully preserved using other techniques, such as multidimensional scaling.
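
To see the trade-off in miniature, here is a small sketch: start from a dense table of document-topic proportions, compute every pairwise similarity, then keep only each document's strongest few links. The numbers, the cosine measure, and the top-two cutoff are all arbitrary choices made for the example; change the cutoff and you get a different network from the same model.

```python
import numpy as np
import networkx as nx

# Rows are documents, columns are topic proportions (invented numbers).
doc_topics = np.array([
    [0.70, 0.20, 0.10],
    [0.65, 0.25, 0.10],
    [0.05, 0.15, 0.80],
    [0.10, 0.10, 0.80],
])

# Cosine similarity between every pair of documents: a dense, fully connected matrix.
norms = np.linalg.norm(doc_topics, axis=1)
sims = doc_topics @ doc_topics.T / np.outer(norms, norms)

# Keep only each document's top 2 neighbours; everything else is thrown away.
G = nx.Graph()
k = 2
for i, row in enumerate(sims):
    neighbours = [j for j in np.argsort(row)[::-1] if j != i][:k]
    for j in neighbours:
        G.add_edge(int(i), int(j), weight=float(row[j]))

print(G.edges(data=True))
```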

A somewhat more theoretical complication leaves these network representations useful as tools for navigation, discovery, and exploration, but not necessarily as evidentiary support. Creating a network of a topic model of a set of documents piles abstraction upon abstraction. Each of these systems comes with very different assumptions, and it is unclear what complications arise when combining these methods ad hoc.

Getting Started

Although there may be issues with the process, the combination of topic models and networks is sure to yield much fruitful research in the digital humanities. There are some fantastic tutorials out there for getting started with topic modeling in the humanities, such as Shawn Graham’s post on Getting Started with MALLET and Topic Modeling, as well as on combining them with networks, such as this post from the same blog. Shawn is right to point out MALLET, a great tool for starting out, but you can also find the code used for various models on many of the model-makers’ academic websites. One code package that stands out is Chang’s implementation of LDA and related models in R.


Notes:

  1. Ted Underwood rightly points out in the comments that other scoring systems are often used in lieu of tf-idf, most frequently log entropy.
  2. Yes yes, this is a simplification of actual LSA, but it’s pretty much how it works. SVD reduces the size of the matrix to filter out noise, and then each word row is treated as a vector shooting off in some direction. The vector of each word is compared to every other word, so that every pair of words has a relatedness score between them. Ted Underwood has a great blog post about why humanists should avoid the SVD step.
  3. They’re not, of course. We’ll worry about that later.

Are we bad social scientists?

There has been a recent slew of fantastic posts about critical theory and discourse in the digital humanities. To sum up: hacking, yacking, we need more of it, we already have enough of it thank you very much, just deal with the French names already, openness, data, Hope! The unabridged version is available for free at an Internet near you. At this point in the conversation, it seems the majority involved agree that the digital needs more humanity, the humans need more digital, and the two aren’t necessarily as distinct as they seem.

The conversation reminds me of a theme that came up at the NEH Institute on Computer Simulations in the Humanities this past summer. At the beginning of the workshop, Tony Beavers introduced himself as a Bad Humanist. What is a bad humanist? We tossed the phrase out a lot during those three weeks — we even made ourselves a shirt — but there was never much real discussion of what that meant. We just had the general sense that we were all relatively bad humanists.

One participant was from “The Centre for Exact Humanities” (what is it, then, that everyone else is doing?) in Hyderabad, India; many participants had backgrounds in programming or mathematics or economics. All of our projects were heavily computational, some were economic or arguably positivist, and absolutely none of them felt like anything I’d ever read in a humanities journal. Are these sorts of computational humanistic projects Bad Humanities? Of course the question is absurd. These are not Bad Humanities projects; they’re simply new types of research. They are created by people with humanities training, who are studying things about humans and doing so in legitimate (if as-yet-untested) ways.

Stephen Crowley printed this wonderful t-shirt for the workshop participants.

Fast forward to this October at the bounceback for NEH’s Network Analysis in the Humanities summer institute. The same guy who called himself a bad humanist, Tony Beavers, brought up the question of whether we were just adopting old social science methods without bothering to become familiar with the theory behind the social science. As he put it, “are we just bad social scientists?” There is a real danger in adopting tools and methods developed outside of our field for our own uses, especially if we lack the training to know their limitations.

In my mind, however, both the ideas of a bad humanist (lacking the appropriate yack) or of a bad social scientist (lacking the appropriate hack) fundamentally miss the point. The discourse and theory discussion has touched on the changing notions of disciplinarity, as did I the other day. A lot of us are writing and working on projects that don’t fit well within traditional disciplinary structures; their subjects and methods draw liberally from history, linguistics, computer science, sociology, complexity theory, and whatever else seems necessary at the time.

As long as we remain aware of and well-grounded in whatever we’re drawing from, it doesn’t really matter what we call what we do — so long as it’s done well. People studying humans would do well not to ignore the last half-century of humanities research. People using, for example, network analysis should become very familiar with its theoretical and methodological limitations. By and large, though, the computational humanities projects I’ve come across are thoughtful, well-informed, and ultimately good research. Whether it actually is still good humanities, good social science, or good anything else doesn’t feel terribly relevant.

Zotpress is so cool.

So, you may have noticed this site has been overhauled over the past few days. The old WP theme really wasn’t doing it for me, so I decided to switch to the Great and Glorious Suffusion theme, which is more customizable than a barrel of monkeys. The switch to the new theme opened up all sorts of real estate for new content, and a brief look around the #DH blogosphere landed me on Zotpress.

Do you guys use Zotero? You should use Zotero. It’s a fantastic citation management program that snuggles up nice and close to your browser and turns it into a super research machine.

Dear Zotero, I ♥ you.

Anyway, Zotpress is a WordPress plugin that allows you to put the power of Zotero into your blog. Want to reference stuff? Easy! Want to make a list of most recently read items? Cake! (See the right side of this blog for that particular feature.) This is one of those plugins that I never thought I needed, but now that I have it I cannot imagine blogging efficiently without it.

For your reading pleasure, below is a list of some super cool articles, courtesy of Zotpress:

[zotpress item=”34ABEHCE,D9SRGW5H,H52588XW,EF3KZ27G,HK6XQ3CI,42WF9AT7,SH7RT4P5″]

Bridging the gap

Traditional disciplinary silos have always been useful fictions. They help us organize our research centers, our journals, our academies, and our lives. Whatever simplicity we gain from quickly and easily being able to place research X into box Y, however, is offset by the requirement of fitting research X into one and only one box Y. What we gain in simplicity, we lose in flexibility.

The academy is facing convergence on two fronts.

A turn toward computation, complicated methodologies, and more nuanced approaches to research is erecting increasingly complex barriers to entry on basic scholarship. Where once disparate disciplines had nothing in common besides membership in the academy, now they are connected by a joint need for computer infrastructure, algorithm expertise, and methodological training. I recently commiserated with a high energy physicist and a geneticist on the difficulties of parallelizing certain data analysis algorithms. Somehow, in the space of minutes, we three very unrelated researchers reached common ground.

An increasing reliance on consilience provides the other converging factor. A steady and relentless rise in interest in interdisciplinarity has manifested itself in scholarly writing through increasingly wide citation patterns. That is, scholars are drawing from sources further from their own, and with growing frequency. 1 Much of this may be attributed to the rise of computer-aided document searches. Whatever the reasons, scholars are drawing from a much wider variety of research, and this in turn often brings more variety to their own research.

Google Ngrams shows us how much people like to say "interdisciplinarity."
Measuring the interdisciplinarity of papers over time. From Guo, Hanning, Scott B. Weingart, and Katy Börner. 2011. “Mixed-indicators model for identifying emerging research areas.” Scientometrics 89 (June 21): 421-435.

Methodological and infrastructural convergence, combined with subject consilience, is dislodging scholarship from its traditional disciplinary silos. Perhaps, in an age when one-item-one-box taxonomies are rapidly being replaced by more flexible categorization schemes and machine-assisted self-organizations, these disciplinary distinctions are no longer as useful as they once were.

Unfortunately, the boom of interdisciplinary centers and institutes in the ’70s and ’80s left many graduates untenurable. By focusing on problems outside the scope of any one traditional discipline, graduates from these programs often found themselves outside the scope of any particular group that might hire them. A university system that has existed in some recognizable form for the last thousand years cannot help but pick up inertia, and that indeed is what has happened here. While a flexible approach to disciplinarity might be better if we were starting all over again, the truth is we have to work with what we have, and a total overhaul is unlikely.

The question is this: what are the smallest and easiest possible changes we can make, at the local level, to improve the environment for increasingly convergent research in the long term? Is there a minimal amount of work one can do such that the returns are sufficiently large to support flexibility? One inspiring step is Bethany Nowviskie‘s (and many others’) #alt-ac project and the movement surrounding it, which pushes for alternative or unconventional academic careers.

Alternative Academic Careers

The #alt-ac movement seems to be picking up the most momentum among those straddling the tech/humanities divide; however, it is equally important for those crossing any traditional academic divide. This includes divides between traditionally diverse disciplines (e.g., literature and social science), between methods (e.g., unobtrusive measures and surveys), between methodologies (e.g., quantitative and qualitative), or in general between C.P. Snow’s “Two Cultures” of science and the humanities.

These divides are often useful and, given that they are reinforced by tradition, it’s usually not worth the effort to attempt to move beyond them. The majority of scholarly work still fits reasonably well within some pre-existing community. For those working across these largely constructed divides, however, an infrastructure needs to exist to support their research. National and private funding agencies have answered this call admirably; however, significant challenges still exist at the career level.

C.P. Snow bridging the "Two Cultures." Image from Scientific American.

Novel and surprising research often comes from connecting previously unrelated silos. For any combination of communities, if there exists interesting research which could be performed at their intersection, it stands to reason that those which have been most difficult to connect would be the most fruitful if combined. These combinations would likely be the ones with the most low-hanging fruit.

The walls between traditional scholarly communities are fading. In order for the academy to remain agile and flexible, it must facilitate and adapt to the changing scholarly landscape. “The academy,” however, is not some unified entity which can suddenly change directions at the whim of a few; it is all of us. What can we do to affect the desired change? On the scholarly communication front, scholars are adapting  by signing pledges to limit publications and reviews to open access venues. We can talk about increasing interdisciplinarity, but what does interdisciplinarity mean when disciplines themselves are so amorphous?

Have any great ideas on what we can do to improve things? Want to tell me how starry-eyed and ignorant I am, and how unnecessary these changes would be? All comments welcome!

[Note: Surprise! I have a conflict of interest. I’m “interdisciplinary” and eventually want to find a job. Help?]

Notes:

  1. Increasingly interdisciplinary citation patterns is a trend I noticed when working on a paper I recently co-authored in Scientometrics. Over the last 30 years, publications in the Proceedings of the National Academy of Sciences have shown a small but statistically significant trend in the interdisciplinarity of citations. Whereas a paper 30 years ago may have cited sources from one or a small set of closely related journals, papers now are somewhat more likely to cite a larger number of journals in increasingly disparate fields of study. This does take into account the average number of references per paper. A similar but more pronounced trend was shown in the journal Scientometrics. While this is by no means a perfect indicator for the rise of interdisciplinarity, a combination of this study and anecdotal evidence leads me to believe it is the case.

Alchemy, Text Analysis, and Networks! Oh my!

“Newton wrote and transcribed about a million words on the subject of alchemy.” —chymistry.org

 

Besides bringing us things like calculus, universal gravitation, and perhaps the inspiration for certain Pink Floyd albums, Isaac Newton spent many years researching what was then known as “chymistry,” a multifaceted precursor to, among other things, what we now call chemistry, pharmacology, and alchemy.

Pink Floyd and the Occult: Discuss.

Researchers at Indiana University, notably William R. Newman, John A. Walsh, Dot Porter, and Wallace Hooper, have spent the last several years developing The Chymistry of Isaac Newton, an absolutely wonderful history of science resource which, as of this past month, has digitized all 59 of Newton’s alchemical manuscripts assembled by John Maynard Keynes in 1936. Among the site’s features are heavily annotated transcriptions, manuscript images, scholarly synopses, and examples of alchemical experiments. That you can try at home. That’s right, you can do alchemy with this website. They also managed to introduce alchemical symbols into Unicode (U+1F700 – U+1F77F), which is just indescribably cool.

Alchemical experiments at home! http://webapp1.dlib.indiana.edu/newton/reference/mineral.do

What I really want to highlight, though, is a brand new feature introduced by Wallace Hooper: automated Latent Semantic Analysis (LSA) of the entire corpus. For those who are not familiar with it, LSA is somewhat similar to LDA, the algorithm driving the increasingly popular Topic Models used in Digital Humanities. They both have their strengths and weaknesses, but essentially what they do is show how documents and terms relate to one another.

Newton Project LSA

In this case, the entire corpus of Newton’s alchemical texts is fed into the LSA implementation (try it for yourself), and then based on the user’s preferences, the algorithm spits out a network of terms, documents, or both together. That is, if the user chooses document-document correlations, a list is produced of the documents that are most similar to one another based on similar word use within them. That list includes weights – how similar are they to one another? – and those weights can be used to create a network of document similarity.

Similar Documents using LSA

One of the really cool features of this new service is that it can export the network either as CSV for the technical among us, or as an nwb file to be loaded into the Network Workbench or the Sci² Tool. From there, you can analyze or visualize the alchemical networks, or you can export the files into a network format of your choice.

Network of how Newton’s alchemical documents relate to one-another visualized using NWB.

It’s great to see more sophisticated textual analyses being automated and actually used. Amber Welch recently posted on Moving Beyond the Word Cloud using the wonderful TAPoR, and Michael Widner just posted a thought-provoking article on using Voyeur Tools for the process of paper revision. With tools this easy to use, it won’t be long now before the first thing a humanist does when approaching a text (or a million texts) is to glance at all the high-level semantic features and various document visualizations before digging in for the close read.

Who am I?

As this blog is still quite new, and I’m still nigh-unknown, now would probably be a good time to mark my scholarly territory. Instead of writing a long description that nobody would read, I figured I’d take a cue from my own data-oriented research and analyze everything I’ve read over the last year. The pictures below give a pretty accurate representation of my research interests.

I’ll post a long tutorial on exactly how to replicate this later, but the process was fairly straightforward and required no programming or complicated data manipulation. First, I exported all my Zotero references since last October in BibTeX format, a common bibliographic standard. I imported that file into the Sci² Tool, a data analysis and visualization tool developed at the center I work in, and normalized all the words in the titles and abstracts. That is, “applied,” “applies” and “apply” were all merged into one entity. I got a raw count of word use and stuck it in everybody’s favorite word cloud tool, Wordle; the result is the first image below. [Post-publication note: Angela does not approve of my word-cloud. I can’t say I blame her. Word clouds are almost wholly useless, but at least it’s still pretty.]

I then used Sci² to extract a word co-occurrence network, connecting two words if they appeared together within the title+abstract of a paper or book I’d read. If they appeared together once, their connection was given a score of 1; if they appeared together twice, 2; and so on. I then re-weighted the connections by exclusivity; that is, if two words appeared exclusively with one another, they scored higher. “Republ” appeared 32 times, “Letter” appeared 47 times, and 31 of those times they appeared together, so their connection is quite strong. On the other hand, “Scienc” appeared 175 times, “Concept” 120 times, but they only appeared together 32 times, so their connection is much weaker. “Republ” and “Letter” appeared with one another nearly as frequently as “Scienc” and “Concept,” but because “Scienc” and “Concept” were so much more widely used, their connection score is lower.
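
For the curious, the re-weighting step can be sketched in a few lines of Python using a Jaccard-style exclusivity score: co-occurrences divided by the number of records containing either word. The exact formula Sci² uses may differ, and the word sets below are invented stand-ins for my stemmed titles and abstracts, but the effect is the same: pairs that rarely appear apart score higher than pairs that merely co-occur often.

```python
from itertools import combinations
from collections import Counter

# Each record is the set of stemmed words in one title+abstract (invented data).
records = [
    {"republ", "letter", "scienc"},
    {"republ", "letter"},
    {"scienc", "concept", "network"},
    {"scienc", "concept"},
]

word_counts = Counter(w for rec in records for w in rec)
pair_counts = Counter(frozenset(p) for rec in records
                      for p in combinations(sorted(rec), 2))

for pair, together in pair_counts.items():
    a, b = sorted(pair)
    # exclusivity: co-occurrences relative to how often either word appears at all
    exclusivity = together / (word_counts[a] + word_counts[b] - together)
    print(f"{a} -- {b}: co-occur={together}, exclusivity={exclusivity:.2f}")
```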

Once the general network was created, I loaded the data into Gephi, a great new network visualization tool. Gephi clustered the network based on what words co-occurred frequently, and colored the words and their connections based on that clustering. The results are below (click the image to enlarge it).

These images sum up my research interests fairly well, and a look at the network certainly splits my research into the various fields and subfields I often draw from. Neither of these graphics are particularly sophisticated, but they do give a good at-a-glance notion of the scholarly landscape from my perspective. In the coming weeks, I will post tutorials to create these and similar data visualizations or analyses with off-the-shelf tools, so stay-tuned.

Rippling o’er the Wave

The inimitable Elijah Meeks recently shared his reasoning behind joining Google+ over Twitter or Facebook. “G+ seems to be self-consciously a network graph that happens to let one connect and keep in touch.” For those who haven’t made the jump, Google+ feels like a contact list on steroids; it lets you add contacts, organize them into different (often overlapping) “circles,” and ultimately you can share materials based on those circles, video chat, send messages, and so forth. By linking your pre-existing public Google profile (and rolling in old features like Buzz and Google Reader), Google has essentially socialized web presences rather than “web presencifying” the social space.

It’s a wishy-washy distinction, and not entirely true, but it feels true enough that many who never worried about social networking sites are going to Google+. This is also one of the big distinctions between the loved-but-lost Google Wave, which was ultrasocial but also ultraprivate; it was not an extended Twitter, but an extended AIM or gmail — really some Frankenstein of the two. It wasn’t about presences and extending contacts, but about chatting alone.

True to Google form, they’ve already realized the potential of sharing in this semi-public space. If Twitter weren’t so minimalistic, they too would have caught on early. Yesterday, via G+ itself, Ripples rippled through the social space. Google+ Ripples describes itself as “a way to visualize the impact of any public post.” This link 1 shows the “ripples” of Ripples itself 2, or the propagation of news of Ripples through the G+ space.

They do a great job invoking the very circles used to organize contacts. Nested circles show subsequent generations of the shared post, and in most cases nested circles also represent followers of the most recent root node. Below the graph, G+ displays the posting frequency over time and allows the user to rewind the clock, seeing how the network grew. Hidden at the bottom of the page, you can find the people with the most public reshares (“influencers”), basic network statistics (average path length, not terribly meaningful in this situation; longest chain; and shares-per-hour), and languages of reshared posts. You can also read the reshares themselves on the right side of the screen, which immediately moved this from my mental “toy” box to the “research tool” box.

Make no mistake, this is a research tool. Barring the lack of permanent links or the ability to export the data into some manipulable file 3, this is a perfect example of information propagation done well. When doing similar research on Twitter, one often requires API-programming prowess to get even this far; in G+, it’s as simple as copying a link. By making information-propagating-across-a-network something sexy, interesting, and easily accessible to everyone, Google is making diffusion processes part of the common vernacular. For this, I give Google +1.

 

 

Notes:

  1. One feature I would like would be the ability to freeze Ripples links. The linked content will change as more people share the initial post – this is potentially problematic.
  2. Anything you can do I can do meta.
  3. which will be necessary for this to go from “research tool” to “actually used research tool”

Psychology of Science as a New Subdiscipline in Psychology

Feist, G. J. 2011. “Psychology of Science as a New Subdiscipline in Psychology.” Current Directions in Psychological Science 20 (October 5): 330-334. doi:10.1177/0963721411418471.

Gregory Feist, a psychologist from San Jose State University, recently wrote a review of the past decade of findings in the psychology of science. He sets the discipline apart from history, philosophy, anthropology, and sociology of science, defining the psychology of science as “the scientific study of scientific thought and behavior,” both implicit and explicit, in children and adults.

Some interesting results covered in the paper:

  • “People pay more attention to evidence when it concerns plausible theories than when it concerns implausible ones.”
  • “Babies as young as 8 months of age understand probability… children as young as 4 years old can correctly draw causal inferences from bar graphs.” (I’m not sure how much I believe that last one – can grown scientists correctly draw causal inferences from bar graphs?)
  • “children, adolescents, and nonscientist adults use different criteria when evaluating explanations and evidence, they are not very good at separating belief from fact (theory and evidence), and they persistently give their beliefs as evidence for their beliefs.”
  • “one reason for the inability to distinguish theory from evidence is the belief that knowledge is certain and absolute—that is, either right or wrong”
  • “scientists use anomalies and unexpected findings as sources for new theories and experiments and that analogy is very important in generating hypotheses and interpreting results”
  • “the personality traits that make scientific interest more likely are high conscientiousness and low openness, whereas the traits that make scientific creativity more likely are high openness, low conscientiousness, and high confidence.”
  • “scientists are less prone to mental health difficulties than are other creative people,” although “It may be that science tends to weed out those with mental health problems in a way that art, music, and poetry do not.”

It is somewhat surprising that Feist doesn’t mention the old use of “psychology of science,” which largely surrounded Reichenbach’s (1938) context distinctions, as echoed by the Vienna Circle and many others. The context of discovery (rather than the context of justification) deals with the question that, as Salmon (1963) put it, “When a statement has been made, … how did it come to be thought of?” Barry F. Singer (1971) wrote “Toward a Psychology of Science,” where he quoted S.S. Stevens (1936, 1939) on the subject of a scientific psychology of science.

It is exciting that the psychology of science is picking up again as an interesting object of study, although it would have been nice for Feist to cite someone earlier than 1996 when discussing this “new subdiscipline in psychology.”

From Wired Magazine

#humnets paper/review

UCLA’s Networks and Network Analysis for the Humanities this past weekend did not fail to impress. Tim Tangherlini and his mathemagical imps returned in true form, organizing a really impressively realized (and predictably jam-packed) conference that left the participants excited, exhausted, enlightened, and unanimously shouting for more next year (and the year after, and the year after that, and the year after that…) I cannot thank the ODH enough for facilitating this and similar events.

Some particular highlights included Graham Sack’s exceptionally robust comparative analysis of a few hundred early English novels (watch out for him, he’s going to be a Heavy Hitter), Sarah Horowitz‘s really convincing use of epistolary network analysis to weave the importance of women (specifically salonières) in holding together the fabric of French high society, Rob Nelson’s further work on the always impressive Mining the Dispatch, Peter Leonard‘s thoughtful and important discussion on combining text and network analysis (hint: visuals are the way to go), Jon Kleinberg‘s super fantastic wonderful keynote lecture, Glen Worthey‘s inspiring talk about not needing All Of It, Russell Horton’s rhymes, Song Chen‘s rigorous analysis of early Asian family ties, and, well, everyone else’s everything else.

Especially interesting were the discussions, raised most pointedly by Kleinberg and Hoyt Long, about what exactly we are looking at when we construct these networks. The union of so many subjective experiences surely is not objective truth, but neither is it a proxy for objective truth – what, then, is it? I’m inclined to say that this Big Data, aggregated from individual experiences, provides us a baseline subjective reality with local basins of attraction; that is, the trends we see are measures of how likely a certain person is to experience the world in a certain way, given where in the network/world they are situated. More thought and research must go into the global and local meaning of this Big Data, and it will definitely reveal very interesting results.

 

My talk on bias also seemed to stir some discussion. I gave up counting how many participants looked at me during their presentations and said “and of course the data is biased, but this is preliminary, and this is what I came up with and what justifies that conclusion.” And of course the issues I raised were not new; further, everybody in attendance was already aware of them. What I hoped my presentation would inspire, and it seems to have been successful, was open discussion of data biases and the constraints they put on conclusions, within the context of presenting those conclusions.

Some of us were joking that the issue of bias means “you don’t know, you can’t ever know what you don’t know, and you should just give up now.” This is exactly opposite to the point. As long as we’re open and honest about what we do not or cannot know, we can make claims around those gaps, inferring and guessing where we need to, and let the reader decide whether our careful analysis and historical inferences are sufficient to support the conclusions we draw. Honesty is more important than completeness or unshakable proof; indeed, neither of those is yet possible in most of what we study.

 

There was some twittertalk surrounding my presentation, so here’s my draft/notes for anyone interested (click ‘continue reading’ to view):

Continue reading “#humnets paper/review”