In 2010, the Library of Congress used its Facebook page to announce it was acquiring the entire Twitter archive - all public tweets - back to March 2006. And it has been archiving public tweets ever since. Think about that. In the few minutes it will take you to read this, over three million new tweets will have flooded the Internet, joining the roughly 400 million tweets that Twitter estimates are sent every day.
In the couple of years since the Library of Congress made its announcement, no details have emerged as to how this database of tweets is going to be made available to the public. As it turns out, the Library of Congress hasn't figured that out yet.
“People expect fully indexed - if not online searchable - databases, and that's very difficult to apply to massive digital databases in real time,” said Deputy Librarian of Congress Robert Dizard Jr. “The technology for archival access has to catch up with the technology that has allowed for content creation and distribution on a massive scale. Twitter is focused on creating and distributing content; that's the model. Our focus is on collecting that data, archiving it, stabilizing it and providing access; a very different model.”
Gnip is a Colorado company providing “Full historical access to the Twitter firehose.” Gnip manages the flow of tweets to the Library of Congress archive. Each tweet arrives at the archive with multiple fields of metadata, including where the tweet originated, how many times it was retweeted, who follows the account that posted the tweet, and more. But the Library of Congress has yet to determine how it is going to sort its 133 terabytes of Twitter data, received from Gnip in chronological bundles. Robert Dizard Jr. says:
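To make that metadata concrete, here is a minimal sketch of the kind of record being described. The field names follow the conventions of Twitter's public API, but the exact structure of the Gnip bundles delivered to the Library is not public, and the sample values are invented for illustration.

```python
import json

# An illustrative (not authoritative) tweet record: the text itself plus
# the metadata the article mentions - origin, retweet count, audience size.
sample_tweet = """
{
  "created_at": "Tue Mar 21 20:50:14 +0000 2006",
  "text": "just setting up my twttr",
  "retweet_count": 120000,
  "user": {
    "screen_name": "jack",
    "followers_count": 4800000,
    "location": "San Francisco, CA"
  }
}
"""

tweet = json.loads(sample_tweet)
print(tweet["text"])                     # the tweet itself
print(tweet["user"]["location"])         # where it originated
print(tweet["retweet_count"])            # how often it was retweeted
print(tweet["user"]["followers_count"])  # reach of the posting account
```

Even this toy record hints at the problem: multiply a few hundred bytes of JSON per tweet by hundreds of millions of tweets a day, and the “lake” Dizard describes below fills up fast.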
It's pretty raw. You often hear a reference to Twitter as a fire hose, that constant stream of tweets going around the world. What we have here is a large and growing lake. What we need is the technology that allows us to both understand and make useful that lake of information.
As it stands, the Library is not able to provide access to people wanting to research the database. It's cost-prohibitive, and the Library has been hit with budget cuts. Without a major overhaul of its technological infrastructure, the Library doesn't have the ability to process even the most basic of search requests.
“We know from the testing we've done with even small parts of the data that we are not going to be able to, on our own, provide really useful access at a cost that is reasonable for us,” Dizard said. “For even just the 2006 to 2010 [portion of the] archive, which is about 21 billion tweets, just to do one search could take 24 hours using our existing servers.”
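The numbers in that quote imply a striking sustained throughput. A rough back-of-envelope check (my arithmetic, not the Library's):

```python
# Scanning ~21 billion tweets in 24 hours implies a sustained rate of
# roughly a quarter million tweets per second - and that is for a
# single query on the 2006-2010 slice alone.
tweets = 21_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

scan_rate = tweets / seconds_per_day
print(f"{scan_rate:,.0f} tweets/second")  # ≈ 243,056 tweets/second
```

That rate is comparable to the peak write load of the live service, which is why a brute-force scan on general-purpose servers is so far from the millisecond expectations Dizard mentions next.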
“Milliseconds is not uncommon for expected latency from when the tweet happened to when someone would be able to get it and analyze it,” he said.
When access does arrive, researchers will have to visit the Library of Congress and do their work in person. Dizard says this was a condition of the deal with Twitter, which gifted the archive, so that the Library won't be “competing with the commercial sector.”
Certainly this project is further evidence of the fact that what you say online is going to be online forever.
What do you think about this project to archive all our tweets? Is it a useful one?