Last week I attended a brainstorming session at the (newly renamed) ThingTank, an Internet of Things innovation outfit in Toronto. One of the ideas we considered was a home security product that aimed to be cheaper and more connected than the stuff currently on the market by taking advantage of cheap sensors and cheap transmitters. The idea was to piggyback on the Internet connection that most homes already have. The problem is that home Internet connections are infuriatingly unstable, and while that’s a bummer when your movie doesn’t download properly, it could be fatal if your alerts didn’t go off. The liability issues are enormous.

The interesting consequence is that it’s possible to invent something which is strictly better than the alternative (unreliably connected is preferable to not connected at all) but still isn’t suitable for sale. By taking on a challenge that the other product doesn’t even attempt, you take on more responsibility.

Fluid Nexus is a new Android/Windows/Linux app designed to exchange messages without the need for a centralized network. That's a useful feature if you, say, wanted to coordinate a protest on a public transit system that was shutting down its cell network. Fluid Nexus is explicitly advertised as a tool for activists:

Fluid Nexus bypasses Internet intermediaries’ control over the identification and circulation of messages. This makes Fluid Nexus an important tool for activists. Access to the data stored by Fluid Nexus requires a search warrant for your own devices—or another device running the software. No identifying information regarding the sender is attached to a message, putting the sender in control. And in conjunction with other software such as ObscuraCam identities can be further obfuscated as desired or necessary.

The argument here is that there is a vast difference in the responsibility you hold depending on whether your tool is advertised as fit for a particular purpose or whether people take your tool and use it for an unintended purpose. The moral question I'd like to ask is: "Is there a moment when the second situation can morph into the first one?"

When things kicked off in Iran, Twitter was quick to recognize the importance the network had gained as a communication channel, and with some alleged nudging from the State Department, they delayed planned maintenance to keep the tweets running. Twitter gained no small amount of positive press for that decision, just as Facebook has for the way it was used in the various Arab Springs. Is there a moment when these services have to recognize that they are being used in a certain way and begin to ensure their fitness for that use?

Perhaps that moment has already happened. Consider Twitter's response to the Wikileaks subpoena, or Facebook's intervention in Tunisia. They know what's happening. Do they have a moral responsibility to do something about it? These tools aren't static. They are services undergoing constant revision and upgrades. Feature sets change all the time.

The uneasy dance of authority and individual continues. The US is sponsoring an Internet in a suitcase and cell towers on army bases to keep activists online abroad, while allowing the shutdown of networks at home. And Wikileaks depends on Tor, a technology developed by (amongst others) the US Navy.