Tag Archives: Swift River

Although I usually don’t advocate any particular tool, today I do want to talk about SwiftRiver. SwiftRiver enables users to make sense of large amounts of information from across the web in a timely manner. The ability to aggregate data is a key feature that should make this (mostly free) open-source platform attractive to emergency managers.

Why do emergency managers need to aggregate data from the web and social media sites? The answer is simple: they don’t. But if your community is using social media to connect with citizens during non-disaster events, you should expect that your citizens will use that same conduit to ask for assistance or to inform the city/county of problems during a crisis. The recent snowstorm in the Northeast provided ample examples of how this may occur. Cory Booker, the Mayor of Newark, had an active Twitter account (over a million followers) before the blizzard; after the snow fell in prodigious amounts, citizens used this communications platform to let him know about the problems they were experiencing.

The picture at left has a few examples; most were messages about streets that had not been plowed or problems with signs and fire hydrants. Although it appears he was able to handle the amount of information and questions he had to sift through during the storm’s aftermath, I think this type of engagement made a lot of emergency managers nervous: how will we be able to keep up with the torrent of requests from citizens during large-scale disasters?

That’s one of the reasons I wanted to introduce SwiftRiver today. In the very near future, we will need to be able to do four things quickly:

curate relevant/new situational awareness data as seen from our citizens’ perspective (they are everywhere–our city workers are not)

verify information from non-governmental sources

discard duplicative information

display information in an interactive format with access to and from multiple agencies (including, potentially, volunteer organizations)
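To make the four steps above concrete, here is a minimal illustrative sketch (my own, not SwiftRiver’s actual code) of what such a pipeline over incoming citizen reports might look like; the keywords, author names, and trust list are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str          # e.g. "twitter", "sms", "email"
    author: str
    text: str
    verified: bool = False

def curate(raw_messages):
    """Step 1: keep only messages that look like situational reports."""
    keywords = ("plow", "hydrant", "power", "road", "flood", "tree down")
    return [m for m in raw_messages if any(k in m.text.lower() for k in keywords)]

def deduplicate(reports):
    """Step 3: discard exact duplicates (real systems use fuzzier matching)."""
    seen, unique = set(), []
    for r in reports:
        key = r.text.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def verify(reports, trusted_authors):
    """Step 2: mark reports from known, trusted non-governmental sources."""
    for r in reports:
        r.verified = r.author in trusted_authors
    return reports

def display(reports):
    """Step 4: hand off to a shared dashboard or map; here we just print."""
    for r in reports:
        flag = "VERIFIED" if r.verified else "unverified"
        print(f"[{flag}] {r.source}/{r.author}: {r.text}")

messages = [
    Report("twitter", "resident1", "Elm St still not plowed!"),
    Report("sms", "resident2", "elm st still not plowed!"),
    Report("twitter", "cert_lead", "Hydrant buried at 4th and Main"),
]
display(verify(deduplicate(curate(messages)), trusted_authors={"cert_lead"}))
```

Even a toy version like this makes the division of labor clear: the mechanical steps (filtering, deduplication) are easy to automate, while deciding whom to trust remains a human policy decision.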

Currently there are not too many platforms that will do all of those tasks, but the folks at SwiftRiver have been working on this concept since March of 2009. From their website:

The SwiftRiver platform offers organizations an easy way to combine natural language/artificial intelligence process, data-mining for SMS and Twitter, and verification algorithms for different sources of information. Swift’s user-friendly dashboard means that users need not be experts in artificial intelligence or algorithms to aggregate and validate information. The intuitive dashboard allows users to easily manage sources of information they wish to triangulate, such as email, Twitter, SMS and RSS feeds from the web.

The key word in this description is “users”. Although there is quite a bit of automation in this platform, it still requires actual humans to comb through the data. Some of the tedious work is eliminated (for example, the software deletes duplicates or flags potentially duplicative information), but the act of assigning a veracity score still depends on the user. If your organization does choose to employ a tool like this one, I would recommend that multiple people be trained in how to sort the data. (Could this be a job for CERT members?)
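To give a rough sense of what automated duplicate flagging can and cannot do, here is a hypothetical sketch (not SwiftRiver’s implementation) using simple string similarity; the 0.8 threshold and the sample reports are assumptions for illustration:

```python
from difflib import SequenceMatcher

def flag_potential_duplicates(texts, threshold=0.8):
    """Flag pairs of messages similar enough to likely be duplicates.
    A human operator still makes the final call and assigns veracity."""
    flagged = []
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            ratio = SequenceMatcher(None, texts[i].lower(), texts[j].lower()).ratio()
            if ratio >= threshold:
                flagged.append((i, j, round(ratio, 2)))
    return flagged

reports = [
    "Main St is blocked by a fallen tree",
    "main street is blocked by a fallen tree!!",
    "Power is out on the east side",
]
print(flag_potential_duplicates(reports))
```

The first two reports get flagged as a likely duplicate pair, but the software cannot know whether they are two witnesses confirming each other or one person repeating themselves; that judgment, like the veracity score, stays with the trained user.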

This application works well with the mapping tool “Ushahidi” and its developers are members of the Ushahidi team. Ushahidi, at its core, is simply a platform that allows users to place information on a map. You might recall its use during the Haitian earthquake response in which volunteers mapped crowdsourced information regarding damage and injuries.

I expect that in the future there will be many competitors for the Swift product, but for now it is difficult to find anything else with all of these capabilities. Nonetheless, the website still has the feel of a start-up, and I’m not sure how long they will be able to offer the product for free. They still list it as a “free and open source” platform, but their website has a tab for “pricing”. I’m guessing parts of it will remain free, but organizations such as local governments will probably be expected to pay if they want to take full advantage of all of its capabilities.

Whenever I mention the concept of obtaining situational-awareness information from citizens, the people in logo shirts cringe. The question of data veracity is always the chief concern, as demonstrated by the discussion on this blog a couple of weeks ago about the Oil Spill Crisis Map (which displays an aggregation of citizen reports regarding the BP Oil Spill). Others in emergency management dismiss the notion out of hand entirely.

The international humanitarian response community, however, does not have the luxury of ignoring “real-time streams of data” from citizens impacted by man-made or natural disaster events. Instead of throwing the baby out with the bathwater, as my Texas mother would say, processes, both human and technological, have been developed to address the issue. I should note that this effort is occurring mostly in the NGO sector. (But see a tangentially related initiative by the U.S. State Department called Civil Society 2.0, announced last December.)

The organization leading the way is the non-profit tech company, Ushahidi. What is Ushahidi?

Ushahidi …specializes in developing free and open source software for information collection, visualization and interactive mapping. We build tools for democratizing information, increasing transparency and lowering the barriers for individuals to share their stories. We’re a disruptive organization that is willing to fail in the pursuit of changing the traditional way that information flows.

Since the Ushahidi software is available for any organization (public or private) to use, the creators developed a guide for users that specifically addresses how to verify data from citizens. You can peruse the one-page document, but in general it touches on everything from direct communication with the source to looking out for “poison data,” or intentionally misleading information.

Another way to verify data is with the deployment of their newly upgraded software, “SwiftRiver.” This software enables the user to do several things: mine intelligence from the web; aggregate data from multiple sources; monitor mentions of your company, organization, or agency; and categorize information based on semantic context. From their website:

SwiftRiver is a free and open source platform that helps people make sense of a lot of information in a short amount of time. …

In practice, SwiftRiver enables the filtering and verification of real-time data from channels such as Twitter, SMS, Email and RSS feeds. This free tool is especially useful for organizations who need to sort their data by authority and accuracy, as opposed to popularity. These organizations include the media, emergency response groups, election monitors and more.

The SwiftRiver platform offers organizations an easy way to combine natural language/artificial intelligence process, data-mining for SMS and Twitter, and verification algorithms for different sources of information. Swift’s user-friendly dashboard means that users need not be experts in artificial intelligence or algorithms to aggregate and validate information. The intuitive dashboard allows users to easily manage sources of information they wish to triangulate, such as email, Twitter, SMS and RSS feeds from the web.
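The phrase “by authority and accuracy, as opposed to popularity” is the part worth dwelling on. As a toy illustration (entirely my own; the authority weights and sample messages are invented), sorting by a source-authority score rather than a retweet count might look like this:

```python
def rank_by_authority(messages, authority):
    """Sort messages by the sender's authority score (hypothetical weights),
    not by popularity metrics such as retweet counts."""
    return sorted(messages, key=lambda m: authority.get(m["author"], 0.0), reverse=True)

# Hypothetical trust weights an organization might assign to its sources.
authority = {"fire_dept": 0.9, "known_volunteer": 0.6, "unknown_user": 0.1}

messages = [
    {"author": "unknown_user", "text": "bridge out on rt 9", "retweets": 500},
    {"author": "fire_dept", "text": "Rt 9 bridge closed, use detour", "retweets": 3},
]

ranked = rank_by_authority(messages, authority)
print(ranked[0]["author"])  # → fire_dept
```

Under a popularity sort, the viral but unverified report would come first; under an authority sort, the fire department’s three-retweet message does. That inversion is exactly what response organizations need.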

I think this is interesting because it is a completely different way to sort information during a response. Although currently Ushahidi might be one of the few companies developing these technologies, I suspect many more software applications will become available as organizations, response and otherwise, see the benefits in “mining data”. I also predict that privacy concerns will surface as these practices become more common.