Like most people in the Bay Area, I’ve been watching the tragedy of the natural gas explosion in San Bruno on September 9 with great concern and sympathy. Focused on the current pain and suffering, I hadn’t really thought of the history of the area much, until an avid LookBackMaps user, Robert Bowdidge, posted a comment on an entry he had submitted months earlier from the San Bruno Public Library. It was a photo of the Crestmoor housing development sometime in the sixties, and the newly minted homes you can see in the foreground, he noted, “are the ones that were destroyed by the gas line explosion and subsequent fires on Sept. 9, 2010.”

As I looked closer and compared this photo to those from news reports and Flickr, I wondered if it preceded the 30″ gas pipeline, since the lower section of the neighborhood had not been built yet. Not familiar with the neighborhood, I used the Flickr map to zoom into the location and find recent photos, where I stumbled across the image below, which combined what is presumably a press photo with Google Maps.

This image was posted by Flickr user G Clark to indicate the precise location of the explosion. The source of the photo on the right is unknown.

Investigations will no doubt look into construction in this area and any work that has been done recently. They’ll figure out how and where the pipeline was checked for leaks. It’s interesting to consider what role mapping technology may play in these investigations. It’s clear that community mapping and satellite imagery have played an enormously valuable role in disaster recovery, as we saw recently in Haiti. But in this case, we have an archival record of a crime scene. A close look at the location of the blast with Google Street View hauntingly shows multiple pavings above the blast site, along with recent Underground Service Alerts marking the pipeline in yellow spray paint, the color code for gas. We can’t tell at this resolution whether it’s properly marked, nor do we know when the image was taken, but records like these, particularly if accompanied by higher-resolution, time-stamped images, may be valuable to the investigation. In any case, exploring the area with Google Street View below, starting at the blast site, allows us to visit a quiet neighborhood that will never be the same again.

This tragic event reminds me of the importance and possibilities of marking images within time and space, and making those images publicly available. A photograph of what is just a normal street scene to us now may offer valuable clues or reminders in the days and years after a disaster. More often, such photos may simply give us a sense of what life was like long after memories have faded. Imagine being able to see a current Google Street View of New Orleans’ Lower 9th Ward, then turn it back a year or two, or even close to five, to see what it looked like after the levees broke. And then to go further still and see that neighborhood as it was before Katrina, to see what we lost when the levees failed the citizens of New Orleans. Presumably, these images still exist somewhere, and they should by all means be preserved.

As technology continues to evolve, we’ll continue to improve the tools that allow us to roll back the clock to remember what was there. While this may still be feasible only for the Googles and Microsofts of the world, I hope we’ll see more collaboration with public institutions to protect these images and views of our common history. One hundred years from now, long after I’m gone, perhaps kids will be able to visit their local library website and see a 360-degree view of my block; chances are, by then, it will be a much more immersive experience. Unfortunately, for those who have lost everything in this disaster or others, this may offer little comfort.

I’m proud to announce the launch this week of the new website for the Civil War Data 150 project (“CWD150”). As we explain on the site, CWD150 is a collaborative project to share and connect Civil War-related data across local, state, and federal institutions during the sesquicentennial of the American Civil War, beginning in April of 2011. The project will use Linked Open Data to find and create connections between archives and help increase the discovery of these resources by researchers and the general public alike.

The partnership currently includes the Archives of Michigan, the Internet Archive, Freebase, and the Digital Scholarship Lab at the University of Richmond. In the coming months, we’ll be adding more participating institutions as we begin to move into the data collecting phase.

You may wonder what this has to do with LookBackMaps, and the answer is, “everything.” The purpose of LookBackMaps is to find connections between photographs and places, and Civil War Data 150 takes that to another level, using Linked Data to enable the public to help make connections between disparate sets of data publicly available on the web. This kind of public research, when shared according to agreed-upon standards, becomes useful to researchers, librarians, archivists, and others. So what begins as a passion project for me may contribute to solving a puzzle someone else is working on!

It’s hard to imagine the magnitude of the 1906 earthquake in San Francisco. But we built the LookBackMaps iPhone app for that very purpose: to see the then and now out on the streets of San Francisco. There’s just something about finding the same place a photographer stood over one hundred years ago and looking through your own camera view to see what they saw then. It certainly puts into perspective the destructive scale of that quake, and for a minute, puts you back into that time and place.

Organizer: Jon Voss, LookBackMaps
Description:
For centuries, libraries, archives, and museums have been creating structured data, organizing information, and managing metadata in order to organize and share cultural artifacts and knowledge with the public. Unfortunately, the bulk of these systems evolved in isolation, long before the advent of the World Wide Web. However, the convergence of developments in culture and technology is resulting in exciting new ways for individuals and developers alike to interact directly with unprecedented amounts of structured data, historical photos and archives, and more. Expert developers and project managers in this field will lead a discussion focused on the question: How can developers leverage the open data that libraries, archives, and museums are making available to the public? Panelists will review new developments and highlight examples, considering use cases involving Linked Data, Flickr Commons, Smithsonian Commons, mobile apps, and scalability.

There are a lot of great things about being involved with THATCamp, and I’m super excited to have the opportunity to help organize THATCamp Bay Area on October 9-10, 2010. Not the least of those perks is meeting all kinds of super smart and motivated people who are using their intellect, expertise, and contacts to do something for each other and for the common good. But there are other aspects, typical of a growing community or network, that require tough decisions that can’t please everyone.

One that we’ve struggled with at THATCamp Bay Area is the question of a curated or crowdsourced gathering, a question very relevant to those in the library, archives, and museum space already! What I mean by that in this context is whether we should open our rendition of THATCamp to anyone and everyone (crowdsource) and let the chips fall where they may, or use an application process to vet invitees and create purposeful cross-disciplinary dynamics (curate). I’ve had the opportunity to be involved in both types of unconferences, as have some of the other organizers, and we’ve certainly seen some of the pros and cons of both. This is something that deserves more discussion amongst the THATCamp community, to be sure.

While every regional THATCamp has the ability to organize things its own way, there are several key characteristics (listed on THATCamp.org/about) that we wanted to be sure to abide by. Granted, these are not set in stone, and as THATCamp seems to be a growing movement, some of us have gotten together at THATCamps and other places to talk about these kinds of organizing and network-weaving questions. But one of the key elements that helped us make the curate vs. crowdsource decision was that THATCamps have no more than 100 participants. Since there had never been a THATCamp in the Bay Area, 75-100 seemed like a reasonable number to shoot for, and we began the search for a (non-academic, but that’s a different story) space to accommodate about that many people, and sponsors to support it. Once Automattic, the people behind WordPress.com, got behind us and offered to host THATCamp Bay Area at their space, we were on. We figure we can accommodate about 75 people, and we decided that if we got more applicants than that, we’d need to do some curating.

Now, just over a week and a half into our month-long application window, we already have over 75 applicants. Assuming there will be more, we’re going to have to make some tough decisions. So, to the extent that we have to, we’ll be making decisions based on several factors intended to extend the reach of THATCamp and inspire more cross-disciplinary events like it in the Bay Area and beyond. For the sake of transparency, and in the hope that our process can help inform other organizers, these are the things we’ll be taking into consideration as we curate this gathering.

Your applications matter. We are not asking for a lot of information, but we’re trying to make sure that the people who attend have a passion for their work, vocation, or hobby and want to share their experience with others as well as learn new things. You don’t need technical skills or academic credentials.

We’re aiming to create cross-disciplinary connections across a wide array of sectors, and so are looking for applicants from as many diverse fields as possible, without too many from any one organization, institution, or sector.

We’re looking for catalysts to keep this conversation going. We hope that people will take what they learn from this and share it widely, act on it, build collaborations, pursue ongoing conversations, and include others. Because we have more people interested than we can facilitate, we have an added responsibility to continue opening the conversation.

Clearly, for future THATCamps in the Bay Area, we’ll need to either plan for more participants or adopt a Foo Camp-style nomination model. It will be worth discussing, and in the meantime, there’s great excitement over the demand!

I’ll be on a whirlwind of summer meetings/vacation/travel around the Midwest/East Coast, and may be coming to your town. Definitely drop me a line (jon at lookbackmaps dot net) to set up a get together or informational meeting about LookBackMaps, LookBackApps, Civil War Data 150, or Linked Data in libraries, archives and museums.

Abstract
Jon Voss will discuss Civil War Data 150 (“CWD150”), a collaborative project of the Archives of Michigan, the Internet Archive, and Freebase. CWD150 seeks to link Civil War archives and data from separate state and national sources in an open, community-maintained database (Freebase), and to create interactive web applications to help crowdsource the data linking. The project presents research questions of particular interest to archivists regarding the use of strong identifiers and shared ontologies, as well as uses for shared metadata in the context of the Semantic Web.

Yesterday I was working on a project to retrieve 1st- and 2nd-degree Twitter followers for an unconference, before an official list had been built. The handles were listed on individual WordPress pages under a single directory, so I used a two-step process to extract them.

1. I used an old Java tool called WebSPHINX, which gave me the ability to crawl the directory of the site I was looking at and concatenate each of the pages into one massive page.

2. I posted that page in the sandbox of my site and directed Dapper to it. From there, I was able to create a Dapp identifying the fields I wanted, group them together, and create a CSV document to put into Excel.

This was my first time playing with Dapper, and I can definitely see a lot of great uses for it!
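For anyone who wants to reproduce the extraction step without Dapper, here’s a rough Python sketch of the same idea: pull handles out of the concatenated page with a regular expression and write them to a CSV. The function names, regex, and CSV column are my own assumptions for illustration, not anything Dapper actually generated.

```python
import csv
import re

# A Twitter handle is "@" followed by 1-15 word characters. This is a
# simplification: it will also match things like the tail of an email
# address, so results may need a manual once-over.
HANDLE_RE = re.compile(r"@(\w{1,15})")

def extract_handles(html: str) -> list[str]:
    """Return unique Twitter handles found in a blob of HTML, in order of
    first appearance, deduplicated case-insensitively."""
    seen: list[str] = []
    for handle in HANDLE_RE.findall(html):
        if handle.lower() not in (h.lower() for h in seen):
            seen.append(handle)
    return seen

def handles_to_csv(handles: list[str], path: str) -> None:
    """Write one handle per row, mirroring the Dapp-to-CSV step."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["twitter_handle"])
        for h in handles:
            writer.writerow([h])

# Example: a concatenated "massive page" like the crawler output.
page = "<p>Follow @alice and @Bob_42; contact @alice for details.</p>"
print(extract_handles(page))  # ['alice', 'Bob_42']
```

From there the CSV opens straight into Excel, just as in the Dapper workflow.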

I put together a brief write-up following up on my Mapping Social Networks white paper, in which I used the Twitter API to monitor a network of unconference participants before and after the event. This kind of analysis can be a useful proxy for visualizing network connections and growth without using traditional questionnaires or surveys.
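The white paper has the details, but the core of that before-and-after comparison can be sketched in a few lines: take follower-ID snapshots for each participant before and after the event (pulled from the Twitter API in the original; hard-coded stand-ins here) and diff the sets. The function and field names are illustrative, not taken from the write-up.

```python
def network_change(before: dict, after: dict) -> dict:
    """For each participant, report which followers they gained, lost,
    and retained between two snapshots (name -> set of follower IDs)."""
    report = {}
    for user, old in before.items():
        new = after.get(user, set())
        report[user] = {
            "gained": new - old,     # followers present only after the event
            "lost": old - new,       # followers present only before
            "retained": old & new,   # followers present in both snapshots
        }
    return report

# Toy snapshots standing in for API results.
before = {"alice": {"bob", "carol"}, "dave": {"alice"}}
after = {"alice": {"bob", "carol", "dave"}, "dave": {"alice", "bob"}}
print(network_change(before, after)["alice"]["gained"])  # {'dave'}
```

Summing the sizes of the "gained" sets across participants gives a rough measure of network growth attributable to the event, without asking anyone to fill out a survey.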