Note: This post was originally posted on the Complexity and Social Networks Blog.
Along with Riley Crane (of DARPA Challenge and Colbert Report fame), physicist Gourab Ghoshal, and quantitatively minded art historian Max Schich, I’m putting together a workshop on High Throughput Humanities as a satellite meeting at this year’s European Conference on Complex Systems in Lisbon this September. The general idea is to bring together people who ask interesting questions of massive data sets. More specifically – as the title implies – we want to figure out how to use computers to do research in the humanities in a way that extends beyond what can currently be accomplished by human beings.
Entire libraries are in the process of being scanned, and we would like to begin to investigate questions like: Are there patterns in history that are currently ‘invisible’ due to the fact that humans have limited bandwidth – that we can only read a small fraction of all books in a lifetime?
We have an exciting program committee so it should be an interesting day!
Confirmed Programme Committee Members
- Albert-László Barabási, CCNR Northeastern University, USA.
- Guido Caldarelli, INFM-CNR Rome, Italy.
- Gregory Crane, Tufts University, USA.
- Lars Kai Hansen, Technical University of Denmark.
- Bernardo Huberman, HP Laboratories, USA.
- Martin Kemp, Trinity College, Oxford, UK.
- Roger Malina, Leonardo/ISAST, France.
- Franco Moretti, Stanford University, USA.
- Didier Sornette, ETH Zurich, Switzerland.
Practical information can be found at the conference website. Oh, and did I mention that Lisbon is beautiful in September? Sign up and join us. The workshop abstract is reprinted below.
Abstract
The High Throughput Humanities satellite event at ECCS’10 establishes a forum for high throughput approaches in the humanities and social sciences, within the framework of complex systems science. The symposium aims to go beyond massive data acquisition and to present results beyond what can be manually achieved by a single person or a small group. Bringing together scientists, researchers, and practitioners from relevant fields, the event will stimulate and facilitate discussion, spark collaboration, and connect approaches, methods, and ideas.
The main goal of the event is to present novel results based on analyses of Big Data (see NATURE special issue 2009), focusing on emergent complex properties and dynamics, which allow for new insights, applications, and services.
With the advent of the 21st century, increasing amounts of data from the domain of qualitative humanities and social science research have become available for quantitative analysis. Private enterprises (Google Books and Earth, YouTube, Flickr, Twitter, Freebase, IMDb, among others) as well as public and non-profit institutions (Europeana, Wikipedia, DBpedia, Project Gutenberg, WordNet, Perseus, etc.) are in the process of collecting, digitizing, and structuring vast amounts of information, and creating technologies, applications, and services (Linked Open Data, OpenCalais, Amazon’s Mechanical Turk, reCAPTCHA, ManyEyes, etc.), which are transforming the way we do research.
Utilizing a complex systems approach to harness these data, the contributors to this event aim to make headway into the territory of the traditional humanities and social sciences, understanding history, arts, literature, and society on global, meso, and granular levels, using computational methods to go beyond the limitations of the traditional researcher.