
I got in about 1pm to She’s Geeky, an unconference for women who self-identify as geeks.

I’m here on a mission: to find developers who want to move to Portland! Emma, Urban Airship, Puppet Labs, About Us and BankSimple are all hiring (and BankSimple is even interested in remote hires).

My favorite conference session yesterday was about leadership and management, the difference between the two, and how to work with managers. We had an amazing discussion, with @noisegirl, Allison Randal, and Ursula Kallio leading much of it. Topics ranged from how to carve out time for individual contribution when you take on a management role, to dealing with insane micromanagement, to exploring the limits of change in an organization.

Another discussion I participated in was “Startup. Now what?” We talked about the issues each woman faced in starting her own business, and I asked a lot of questions. 🙂

The main gist of the discussion was about encouraging Congress to think carefully about the legislation and the business environment created (or stifled) by new data regulations. The contention is that activity data stored in “personal data stores” (PDS) is inherently of value — we already know this because our data is currently bought and sold without our consent or knowledge. So, why not create a system where businesses can do this, but with the consent and knowledge of consumers? I’d probably say “citizens” there instead of consumers, but you know. Whatever. 🙂

I’m not sure I fully understand the issues yet. I tried at one point to draw a link between PDS and “owning your own logs,” but that didn’t seem to resonate. Kaliya said something about respecting definitions, so I think that I still don’t quite understand what defines a PDS.

Or, put another way, I am having a hard time understanding the distinction, because the freedom issues seem to be very much the same.

I tweeted a bit about my thoughts on APIs related to PDS, and here’s one conversation that tumbled out of it:

Forgetting should be built into our applications by default. I just spent the weekend at FooCamp, and I held a session to discuss this idea, and some of the possible consequences if it were implemented.

To explain why I think this, I’m going to take an extreme stance for a moment and argue a position that I’d like to see rebutted. So, please have at it! 🙂

For too long we have allowed decisions made by developers – default application settings – to determine what ultimately becomes our level of surveillance.

There are notable counterexamples: 4chan intentionally expires postings every few days. Riseup keeps no logs. The EFF documents what we do and do not legally need to keep. These, however, are the efforts of a tiny minority when considered against the rest of the web.
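To make “default expire” concrete, here is a minimal sketch of what it might look like in application code. This is purely illustrative — it does not mirror how 4chan or Riseup actually implement expiry, and the names (`ExpiringStore`, `DEFAULT_TTL`) are my own invention. The key design choice: callers must opt *in* to longer retention, rather than opting out of it.

```python
import time

# Assumed default: records expire after three days, echoing 4chan-style expiry.
DEFAULT_TTL = 3 * 24 * 60 * 60

class ExpiringStore:
    """A toy key-value store where forgetting is the default behavior."""

    def __init__(self, ttl=DEFAULT_TTL):
        self.ttl = ttl
        self._items = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, ttl=None):
        # Retaining longer than the default requires an explicit ttl argument.
        expires_at = time.time() + (ttl if ttl is not None else self.ttl)
        self._items[key] = (value, expires_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if now >= expires_at:
            del self._items[key]  # expired: the record simply fades away
            return None
        return value
```

The point of the sketch is that expiry lives in the storage layer itself, so a developer would have to make a deliberate, visible choice to keep something forever.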

Over time, our conception of what is reasonable has changed around logging and accounting for vast periods of our activities. Never before would a silly recording taken by a 15-year-old be stored indefinitely, and then be documented as a watershed event because of how many times it was viewed in a vast global network, rather than for the content of the cultural artifact itself. The log of views itself became the cultural artifact, and it is celebrated.

Fading away isn’t evil. But we act like it is when we pipe what once was ephemeral into archive.org for indefinite storage.

Why have we decided to participate in this social experiment? It really wasn’t a collective decision. Some software developers and investors decided that archival on a massive scale was important or profitable. We started calling these things “part of history” and just storing them without thinking about it. Saving became default.

I’m not saying that archiving the internet, search robots or “opting in” are bad things. But those who least understand archiving’s effect on personal privacy may be the ones most likely to suffer in the future.

The ripple effects of the decision to move from “default expire” to “default save” are vast. Consider for a moment if we were to call the ability to intentionally forget on the internet a human right.

Instead, what we’ve done is to say to millions of people – you do not have the right to forget. Companies will take your locations and status updates, and never delete them. And privacy is rapidly becoming a privilege of those who can afford to buy it.

For the sake of argument, consider the difference between narrative historical documentation and collections of “facts.” The narrative is an aggregation, full of embellishments and forgetting and kernels of truth. Facts are collected, supposedly objectively. Both approaches to capturing historical thought suffer from the fallacy that historical “fact” is fixed and doesn’t evolve based on the viewer and reteller over time. How much worse is this effect when our collections of facts are now ballooning to include every blog post, photo, tweet and web access log you’ve ever made?

The point is not that individuals wish to change history or even obscure events which may reflect poorly on them. (Even though we all do!)

We need to give people a real choice – not a set of ACLs and rules. Choice about what is archived about them, control over that process and a clear delineation between personal artifact and public property.

Kathy Sierra deleted her twitter stream and was accused of removing a piece of history, and of possibly the worst internet offense – taking away conversations. Taken at face value, isn’t that the point of conversation? That it is ephemeral?

Conversations leave echoes in changed thoughts, and light or deep impressions in the minds of the participants. Just because Twitter has by default chosen to retain these conversations indefinitely doesn’t change the nature of conversation itself. No one would argue that just because we share our thoughts, we are obligated to share every thought.

In the same way, we are not obligated to maintain a record of our sharing. And if we do maintain and share a record of our own end of a conversation, we still have the right to ultimately destroy it.

Once shared, of course, an artifact of a conversation can’t be taken away from those that have copies. But authors and owners of the original work must always retain the right to destroy.

So, that brings me to what is ethical in our applications. When we say: “we’re keeping your data forever” and “delete means your account will still be here when you come back”, application developers and companies are making an ethical choice. They are saying, “your shared thoughts aren’t your own – to remember or forget. We are going to remember all of this for you, and you no longer have the right to remove them.”

Connectedness is not the same as openness. Storing vast logs of data related to individuals which connect thousands of facts over the course of their lives should be presented as the ethical choice it is, rather than a technical choice about “defaults”. Picking what we decide to log and store is an ethical and political decision. And it should also be possible for it to be a personal decision.
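One way to treat logging as the ethical decision it is, rather than a hidden default, would be to refuse to deploy a schema until every stored field has a declared retention period. The sketch below is a hypothetical illustration of that idea — `RetentionPolicy`, `validate_schema`, and the field names are all assumptions of mine, not any real system’s API.

```python
from dataclasses import dataclass

@dataclass
class RetentionPolicy:
    """An explicit, reviewable statement of how long a field is kept."""
    field: str
    days: int  # retention in days; there is deliberately no silent "forever"

def validate_schema(fields, policies):
    """Refuse a schema whose fields lack a declared retention decision.

    This inverts the usual default: instead of saving everything unless
    someone remembers to delete it, nothing ships without a stated policy.
    """
    declared = {p.field for p in policies}
    missing = [f for f in fields if f not in declared]
    if missing:
        raise ValueError(f"no retention declared for: {missing}")

# Example: each field's lifetime is a visible, deliberate choice.
policies = [
    RetentionPolicy("status_update", days=30),
    RetentionPolicy("location", days=7),
]
validate_schema(["status_update", "location"], policies)  # passes
```

Whether thirty days is right for status updates is exactly the kind of question this forces teams – and ideally users – to answer out loud.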