Tooling Up for Digital Humanities

  • Home
  • Workshop Series
  • About
  • Virtual You
    • 1: Virtual You
    • 2: Keeping a Finger on the Pulse
    • 3: Building Community
    • 4: Further Reading
    • 5: Discussion
  • Digitization
    • 1: Making Documents Digital
    • 2: Metadata and Text Markup
    • 3: Further Reading
    • 4: Discussion
  • Text Analysis
    • 1: The Text Deluge
    • 2: A Brief History
    • 3: Stylometry
    • 4: Content-Based Analysis
    • 5: Metadata Analysis
    • 6: Conclusion
    • 7: Further Reading
    • 8: Discussion
  • Spatial Analysis
    • 1: The Spatial Turn
    • 2: Spatial History Lab
    • 3: Geographic Information Systems
    • 4: Further Reading
    • 5: Discussion
  • Databases
    • 1: The Basics
    • 2: Managing Your Bibliography
    • 3: Cloud Computing
    • 4: Organizing Images
    • 5: Further Reading
    • 6: Discussion
  • Pedagogy
    • 1: In the Classroom
    • 2: Student Collaboration
    • 3: Debating Pedagogical Efficacy
    • 4: Further Reading
    • 5: Discussion
  • Data Visualization
    • 1: Introduction
    • 2: Getting Started
    • 3: For Analysis and Understanding
    • 4: For Communication and Storytelling
    • 5: Visualizations and Accountability
    • 6: Recommended Reading/Viewing
    • 7: Discussion
  • Discussion

1: The Text Deluge

According to one estimate, human beings created some 150 exabytes (billion gigabytes) of data in 2005 alone. This year, we will create approximately 1,200 exabytes. The Library of Congress recently announced its decision to archive Twitter, which adds some 50 million tweets per day. A search in Google Books for the phrase “slave trade” in July 2010, for example, returned the following: “About 1,600,000 results (0.21 seconds).” Scholars once accustomed to studying a handful of letters or a couple hundred diary entries now face massive amounts of data that cannot possibly be analyzed in traditional ways.

This deluge of information raises the question Gregory Crane posed in 2006: “What do you do with a million books?” “My answer to that question,” wrote Tanya Clement and others in a 2008 article, “is that whatever you do, you don’t read them, because you can’t.”

Luckily, scholars need not adhere to traditional methods alone. Increasingly, humanities scholars are adopting digital tools to analyze large quantities of data in new ways. New forms of analysis have emerged as computing has progressed, allowing greater maneuverability within large datasets: processing power that required a mainframe a couple of decades ago now fits inside an iPhone. In addition to processing power, other advances have improved access to data and the speed and ease of transferring it. Text mining lets scholars cope with this massive quantity of data by drawing out patterns that may not be visible to human readers.
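The kind of pattern extraction described above can be sketched in a few lines of Python. This toy example simply counts word frequencies; the sample text and the resulting counts are illustrative, not drawn from any actual corpus, but the same counting scales to millions of digitized pages no human could read.

```python
import re
from collections import Counter

def top_terms(text, n=5):
    """Return the n most frequent words in text, ignoring case and punctuation."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words).most_common(n)

# A toy "corpus" standing in for a real collection of digitized documents.
sample = ("The slave trade grew rapidly. Records of the trade survive in "
          "port ledgers, and the trade left traces in thousands of letters.")
print(top_terms(sample, 2))  # [('the', 3), ('trade', 3)]
```

Real text-mining workflows add steps on top of this, such as filtering out common “stopwords” like *the* and *of* so that substantive terms rise to the top.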

Let's dive right in!
