The aim is to specify social requirements in information terms, to develop socio-technical standards for online social interaction.

6.1 Access Control

In computing, decision support systems recommend decisions, access control systems permit them, and control systems implement them. Access control began with multi-user computing, as users sharing the same system came into conflict (Karp et al., 2009). Traditional access control systems use a subject-by-object access permission matrix to allocate rights (Lampson, 1969). As computing evolved, the logic offered local access control for distributed systems and roles for many-person systems. With these variants, the matrix approach has worked for military (Department of Defense, 1985), commercial (Clark & Wilson, 1987), organizational (Ferraiolo & Kuhn, 2004), distributed (Freudenthal et al., 2002), peer-to-peer (Cohen, 2003) and grid (Thompson et al., 1999) applications.

Today, access control in social networks (SNs) is more about access than control. The permission matrix for friend interactions increases geometrically, not linearly, with group size, so for hundreds of millions of people the possible connections are astronomical. Each person may add hundreds or thousands of photos or comments a year, and they want the sort of domain control previously reserved for system administrators. Social networkers want local control, not just to read, write and execute (Ahmad & Whitworth, 2011), but to define their own social structure without central permission (Sanders & McCormick, 1993): e.g. to restrict a photo to family or friends. Social networks vastly increase ACS complexity, as millions of users want rights to billions of resources. STS is the perfect storm for the traditional ship of access control.

The current rules of social network interaction are based on designer intuitions rather than formal models, so they vary between systems and over time, with public outrage the only check. There is no agreed scheme for allocating permissions to create, edit, delete or view object entities, let alone manage roles. The aim here is to fill that gap, to develop a socio-technical access control model that is legitimate, efficient, consistent and understandable.

6.2 Rights

Communities, by norms, laws or culture, grant citizens rights, or social permissions to act. Rights reduce physical conflict, as parties who agree on rights do not have to fight. The conflict moves from the physical level to the informational or legal level (footnote 2). Physical society expresses rights in terms of ownership (Freeden, 1991), so specifying who owns what online can specify rights in a way that designers can support and users can understand (Rose, 2000). This does not mechanize online interaction, as rights are choices not obligations; e.g. the right to sue does not force us to sue. Legitimate access defines what online actors can do, not what they must do.

In traditional computing people are software users, as if software were a drug they depended on. Socio-technology calls people actors because they are not just part of the software. Users cannot switch software but actors can. As sales assistants can see a sale or a customer, so IT designers can choose to see a user or an actor. One is the human level and the other the information level.

An online actor is any party in a social interaction that can act independently of input, i.e. it is able to act not just react. Actors initiate acts based on internal choice or autonomy (footnote 3). A program that only responds to input has no autonomy, so is not an actor (footnote 4).

A person is an actor whose ego-self can be held to account. A citizen is a person a community holds to account (footnote 5). To hold people to account is the basis of all social interaction (footnote 6). Currently, only people can both initiate acts and be accountable for them.

While philosophers argue over free will, all communities find their citizens accountable and govern accordingly, e.g. criminals are punished and the mentally incompetent are put into care. Communities hold citizens to account for the effects of their acts, not just on themselves but also on others (footnote 7). Accountability is the social key without which communities fail. It only applies to people, e.g. in car accidents the driver is accountable, not the car (footnote 8). Likewise online, the company that writes an installation program is accountable for it.

Rights arise as the formal expression of legitimacy concepts like fairness. In physical communities, police and courts direct citizens to follow laws that grant rights. Online, the same applies, but now the code is law, police, judge and jury. The following derives informational rights from community requirements stated on the personal level.

In information terms, a right is a community permission given to an actor (A) applying an operation (O) to an entity (E):

Right = (Actor, Entity, Operation) = (A, E, O)

Rights can be stored as (Actor, Entity, Operation) triplets, where an actor represents an accountable person, an entity is any object, actor (footnote 9) or right, and an operation is any one available to the entity. A right transmitted or stored is often called a permission.
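As a minimal sketch (all names here are illustrative, not from the text), an ACS could store such triplets in a set and answer permission queries by direct lookup:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Right:
    """A community permission: actor A may apply operation O to entity E."""
    actor: str
    entity: str
    operation: str

class AccessControlSystem:
    """Stores rights as (Actor, Entity, Operation) triplets."""
    def __init__(self):
        self.rights = set()

    def grant(self, actor, entity, operation):
        self.rights.add(Right(actor, entity, operation))

    def permits(self, actor, entity, operation):
        return Right(actor, entity, operation) in self.rights

acs = AccessControlSystem()
acs.grant("Alice", "photo1", "view")
```

The frozen dataclass makes each triplet hashable, so a right granted and a right queried compare by value, as permissions should.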

6.3 A Rights Specification

Socio-technical systems can be modeled as data entities and program operations as follows:

Links are discussed elsewhere (Whitworth & Bieber, 2002). The following outlines general principles for online social interactions in any socio-technical system. While many of these may seem obvious, recall that to software nothing is obvious and everything must be specified.

6.4 The System

The information system itself is the first entity, owned by the system administrator (SA), who is the first actor. A tyrant SA might alter posts or votes by whim, or a benevolent one might follow Plato's best form of rule and give citizens rights. Currently, no online system elects its system administrator; e.g. even Wikipedia is not a democracy. Yet as today's online kings and emperors die, some form of succession will have to be worked out.

An ACS controls the system at the informational level. If it is not to be in charge, it must allocate all system rights to people who are, giving the first ACS operational principle:

P1. All non-null entity rights are allocated to actors at all times.

It is not necessary to allocate null rights that have no effect. This principle implies that every entity is ultimately owned by a person. If it is not so, an access control system must at some point respond to an access request from itself, which is impossible. An information system has no self to act socially. That all rights are always set means they are not added or deleted, but allocated and re-allocated.

6.5 Personae

An online persona represents an offline actor, e.g. an avatar, mail profile, wall or channel can represent an offline person, group or organization. An online persona is activated by a logon operation, which equates it to the offline party. An online computer agent can act for a group, as installation software does for a company, but social acts must ultimately trace back to people and online is no different (footnote 13). If an installation misleads, we sue the company directors, not the software (footnote 14).

Who owns a persona? Open systems let people self-register, to create their personae. If freedom applies online, one should own one's online self. Yet many systems do not permit this. Can you delete a Wikipedia or Wordpress profile? (footnote 15) The requirement of freedom gives the ACS principle:

P2. A persona should be owned by itself.

Some complexities are that a persona can be:

Abandoned. HotMail accounts inactive for over 90 days are permanently deleted, i.e. if not activated, they "die."

Transferred. One can permanently pass a persona to another, along with its reputation (footnote 16).

Delegated. One can ask an agent to act on your behalf, e.g. by a proxy vote.

Orphaned. If the person behind a persona dies, their will is physically respected, but online programs act as if death does not exist, e.g. one can get an eerie Facebook message from a person after going to his funeral. In a few decades Facebook will represent millions of obituaries, so we need online wills. Death is a social reality online and offline.

Spaces. As leaves need branches, so items need spaces; e.g. an online wall that accepts photos is an information space. A space is a complex object with dependents. It can be deleted, edited or viewed like an item but can also contain objects; e.g. a bulletin board is a space. Spaces within spaces give object hierarchies, with the system itself the first space.

A space is a parent to the child entities it contains, who depend on it to exist. So deleting a space deletes its contents; e.g. deleting a board deletes its posts. The move operation changes the parent space of an object. The enter space operation shows the objects on display in it. As every entity is in the system space:

P3. Every entity has a parent space, up to the system space.

If every entity has a parent space (footnote 17), its ancestors are the set of all spaces that contain it, up to the system itself, the first ancestor. The offspring of a space are any child objects it contains, their children, etc. So all entities have owners and ancestors, and any space can have offspring.
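A hedged sketch of this hierarchy, with invented names, might model parents, ancestors and offspring as follows:

```python
class Entity:
    """Every entity has an owner and a parent space (None only for the system space)."""
    def __init__(self, name, owner, parent=None):
        self.name, self.owner, self.parent = name, owner, parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):
        """All containing spaces, up to the system space (P3)."""
        node, chain = self.parent, []
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain

    def offspring(self):
        """Child objects, their children, and so on."""
        out = []
        for child in self.children:
            out.append(child)
            out.extend(child.offspring())
        return out

system = Entity("system", owner="SA")              # the first space, owned by SA
board = Entity("board", owner="Alice", parent=system)
post = Entity("post", owner="Bob", parent=board)
```

Walking the parent chain always terminates at the system space, so deleting a space can recursively delete its offspring, as the text requires.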

Operation sets. Operations can be clustered for access control purposes; e.g. in the delete set: "delete" flags an entity for destruction, "undelete" reverses that, and "destroy" kills it permanently. An ACS that can manage one delete set operation can manage all of them. Likewise in the edit set: "edit" alters an entity value, "append" extends it, "version" is edit with backup, and Wikipedia's "revert" is the inverse of version. Again, variants of a set present the same ACS issues, so to resolve one is to resolve all. While edit changes an existing entity, the create operation set adds a new entity; e.g. to create or duplicate a Wikipedia stub for others to edit. Table 6.2 shows the operation sets for various entity types.

View. Operations like view are null acts that do not change their informational level target, but in some cultures staring at another is an act of aggression. The psychological process is social facilitation, where being looked at energizes the viewed party (Geen and Gange, 1983). Viewing people in a social system affects them because success in a social group depends on how others see you. Privacy, to control information about ourselves, is important for the same reason. The act of viewing also has effects on the community level; e.g. a "viral" online video makes others want to view it too.

The right to use an entity cannot be applied if the actor cannot see it, giving the ACS operational principle:

P4. Any right to use an object implies a right to view it.
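One way to honor P4 in code, sketched here under the assumption that rights are plain triplets, is to let a view request be satisfied by any other right held on the same entity:

```python
def permits(rights, actor, entity, operation):
    """P4: any right to use an entity implies the right to view it."""
    if (actor, entity, operation) in rights:
        return True
    # A view request succeeds if the actor holds any right at all to the entity.
    if operation == "view":
        return any(a == actor and e == entity for (a, e, o) in rights)
    return False

# Alice can edit the document, so by P4 she can also view it.
rights = {("Alice", "doc", "edit")}
```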

Communication. In a simple communicative act, a sender creates a message that a receiver views. It is by definition a joint act where both parties have choice so communication should be by mutual consent. Privacy as the right to remain silent, not to communicate and not to receive messages, arises from this mutuality. In the physical world, people ask "Can I talk to you?" to get permission to communicate.

Some online systems, however, like email, do not recognize this. They give anyone the right to send a message to anyone, whether they will or no, and so invite spam. In contrast, in Facebook, chat, Skype and Twitter, one needs prior permission to message someone. The details of legitimate communication, where a channel is opened by mutual consent before messages are sent, are given in Whitworth and Liu (2009). The resulting ACS operational principle is:

P5. Any communication act should have prior mutual consent.
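A minimal sketch of P5 (names invented for illustration) might model a channel that carries messages only after both parties consent:

```python
class Channel:
    """P5: a channel opens by mutual consent; messages flow only once it is open."""
    def __init__(self, a, b):
        self.parties = {a, b}
        self.consented = set()
        self.messages = []

    def consent(self, party):
        if party in self.parties:
            self.consented.add(party)

    def is_open(self):
        return self.consented == self.parties

    def send(self, sender, text):
        # Refusing one-sided sends is what distinguishes this model from email.
        if not self.is_open():
            raise PermissionError("no mutual consent: message refused")
        self.messages.append((sender, text))

chat = Channel("Alice", "Bob")
chat.consent("Alice")
chat.consent("Bob")
chat.send("Alice", "Can I talk to you?")
```

Email, in this model, is a channel that opens on the sender's consent alone, which is why it invites spam.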

Progress in telephony illustrates how a technical communication medium has evolved socially. At first phones just transmitted information — the phone rang and one answered, not knowing who was calling. This allowed telemarketing, the forerunner of spam. Then cell phones showed caller id by default, so one could choose to respond, i.e. it was more mutual. Yet cell phone users still have to personally type in contact list names, while social networks let each of us type in our own name for others to add to their contact lists. Cell phone companies could use this synergy, but as TV remote engineers are locked into the physical level, so cell-phone companies are locked into an information level mind-set (footnote 18). They cannot see that people naturally share.

6.8 Roles

Roles, like parent, friend or boss, simplify rights management by covering many cases, but still remain understandable, so people can review, evaluate and accept them. They are equally useful online, e.g. Wikipedia citizens can aspire to steward, bureaucrat or sysop roles by good acts. Slashdot's automated rating system offers the moderator role (Benkler, 2002) to readers who are registered (not anonymous), regular users (for a time), and those who have a positive "karma" (how others rate their comments). Every registered reader has five influence points to spend on others as desired over a three day period (or they expire). In this role democracy, highly rated commenters get more karma points and so more say on who is seen. The technology lets a community democratically direct its governance.

In information terms, a role is a variable rights statement, e.g. a friend role is a set of people with extra permissions. Roles are generic rights, giving the ACS operational principle:

P6. A role is a right expressed in general terms using sets.

Roles are the variables of social logic:

Role = (Actor, Entity, Operation)

Each term is a variable, e.g. the owner role can be generally defined as any party who has all rights to an entity:

RoleOwner = (Owner, Entityi, OperationAll)

Making a person the owner just allocates the Owner set to include their persona. Roles are flexible, e.g. the friend role lets one change who can see photos posted on a Facebook wall:

RoleFriend = (Friend, EntityWall, OperationView)

where Friend is a persona set. To "friend" another is to add them to this role set, and to "unfriend" is to remove them. As a variable can be undefined, so a role can be empty, i.e. a null friend set.
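As a sketch with illustrative names, a friend role could be a mutable actor set attached to a fixed entity and operation:

```python
class Role:
    """A role is a right expressed in general terms using sets (P6)."""
    def __init__(self, entity, operation):
        self.actors = set()        # may be empty, like an undefined variable
        self.entity = entity
        self.operation = operation

    def friend(self, persona):
        """To 'friend' is to add a persona to the role's actor set."""
        self.actors.add(persona)

    def unfriend(self, persona):
        """To 'unfriend' is to remove a persona from it."""
        self.actors.discard(persona)

    def allows(self, persona):
        return persona in self.actors

wall_friends = Role("AliceWall", "view")
wall_friends.friend("Bob")
```

Note that friending here changes only Alice's local role, not Bob's persona, which is exactly the point P7 makes below.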

To "friend" is spoken of as an act on a person, but it does not change the persona entity, so it is really an act upon a local role (Figure 6.1). You decide your friend set, and do not need permission to friend or unfriend anyone, so bulletin boards can ban people at will (Figure 6.2). If banning were an act on another's persona it would need their consent. That it is an act on my role gives the ACS principle:

P7. A space owner can ban or give entry to a persona without its owner's permission.

Re-allocating actors is not the only way to alter a role. By definition, one can change a role's:

Actor. The role actor set.

Entity. The entities it applies to.

Operation. The operations it allows.

For example, a friend role could limit the objects it applies to, with some photos for family only. It could also allow adding comments to photos or not. Few current systems fully use the power of local roles; e.g. social networks could let actors define an acquaintance role, with fewer rights than a friend but more than the public, or an extended family role.

6.9 Meta-rights

Owning an object is the right to use it:

RightOwn = R(Owner, Entityi, OperationUse),

but an entity right can also be acted on, i.e. re-allocated. A meta-right is the right to re-allocate a right. In formal terms:

RightMetaRight = R(Owner, RightOwn, OperationAllocate),

where the entity acted on is a right. An owner with all rights to an entity also has its meta-rights, i.e. the right to change its rights. Paradoxically, fully owning an entity implies the right to give it away entirely. Reachability (footnote 19) requires meta-rights to be absolute, so there are no meta-meta-rights, giving the ACS operational principle:

P8. A meta-right is the right to allocate any entity right, including the meta-right itself.

Previously, to own an entity was to have all rights, but giving away use rights while keeping meta rights is still ownership; e.g. renting an apartment gives a tenant use rights, but the landlord still owns it, as they keep the meta-right. The tenant can use it but the owner says who can use it.
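The distinction between use rights and the meta-right can be sketched as follows (an illustrative model with invented names, not a prescribed implementation):

```python
class OwnedEntity:
    """Ownership keeps the meta-right even when use rights are delegated."""
    def __init__(self, name, owner):
        self.name = name
        self.meta_holder = owner        # who may re-allocate rights
        self.use_holders = {owner}      # who may use the entity

    def delegate_use(self, by, to):
        """Delegation hands over use rights but never the meta-right (like renting)."""
        if by != self.meta_holder:
            raise PermissionError("only the meta-right holder can allocate")
        self.use_holders.add(to)

    def transfer(self, by, to):
        """Transfer re-allocates everything, including the meta-right itself (P8)."""
        if by != self.meta_holder:
            raise PermissionError("only the meta-right holder can allocate")
        self.meta_holder = to
        self.use_holders = {to}

flat = OwnedEntity("apartment", owner="Landlord")
flat.delegate_use("Landlord", "Tenant")
```

After the delegation the tenant can use the apartment, but the landlord still decides who may use it, matching the renting example above.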

6.10 The Act of Creation

To create an object from nothing is as impossible in an information space as it is in a physical one. Creation cannot be an act upon the object created, which by definition does not exist before it is created. An actor cannot request ACS permission to create an object that does not exist. To create an information object, its data structure must be known, i.e. exist within the system. So creation is an act upon the system, or in general, an act on the space immediately containing the created object, giving the ACS operational principle:

P9. Creation is an act on a space, up to the system space.

This rule is well defined if the system itself is the first space. Creating is an act upon a space because it changes the space that contains the created object. If creation is an act upon a space, the right to create in a space belongs initially to the space owner:

RightCreate = R(SpaceOwneri, Spacei, OperationCreate)

The right to create in a space initially belongs to its owner, who can delegate it to others. The logic generalizes well; e.g. to add a board post, YouTube video or blog comment requires the board, video, or blog owner's permission. One can only create in a space if its owner permits. Now an ACS can be simply initialized as a system administrator owning the system space with all rights, including create rights. The system administrator must give rights away for the community to evolve.

Creator ownership. Object creation is a simple technical act, but a complex social one, e.g. how are newly created entity rights allocated? The 17th century British philosopher John Locke argued that creators owning what they create is both fair and increases community prosperity, i.e. it is legitimate (Locke, 1690/1963). The logic applied whether the product was a farmer's crop, a painter's painting or a hunter's catch. If the creator of something chooses to sell or give it away, that is another matter. A community that grants producers the right to their products encourages creativity. Conversely, why produce for others to own? This gives the ACS operational principle:

P10. The creator of a new entity should immediately gain all rights to it.

Creator ownership conveniently resolves the issue of how to allocate new object rights — they go to the creator, including meta-rights. Yet a program can act any way it likes; e.g. it could make the system administrator own all created objects. Creator ownership is a social, not a technical, requirement, i.e. a social success condition.
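P9 and P10 together can be sketched as a space that gates creation and hands all rights of a new object to its creator (names are illustrative):

```python
class Space:
    """Creation is an act on a space (P9); the creator owns what they create (P10)."""
    def __init__(self, name, owner):
        self.name, self.owner = name, owner
        self.contents = []
        self.may_create = {owner}   # the create right starts with the space owner

    def delegate_create(self, to):
        """The space owner can delegate the right to create in the space."""
        self.may_create.add(to)

    def create(self, actor, item_name):
        if actor not in self.may_create:
            raise PermissionError("no create right in this space")
        item = {"name": item_name, "owner": actor}   # all rights go to the creator
        self.contents.append(item)
        return item

blog = Space("blog", owner="Alice")
blog.delegate_create("Bob")
comment = blog.create("Bob", "first comment")
```

The create check is made against the space, not against the (not-yet-existing) object, which is the point of P9.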

Creation conditions. A creation condition is when a space owner partially delegates creation. It can limit:

Object type. The object type created; e.g. the right to create a conference paper is not the right to create a mini-track space.

Operations. The operations allowed on created objects; e.g. blog comments are not usually editable once added, but ArXiv lets authors edit publications as new versions.

Viewing. Who can view created objects, e.g. bulletin boards let others view your submission but conferences in the paper review phase do not.

Editing. The field values of a created object, e.g. date added, may be non-editable. The space owner may also set field default values.

A space owner can delegate creation rights as needed; e.g. to show vote results only to people who have voted, to avoid bias.

Transparency. Yet fairness dictates a creator's right to know creation conditions in advance. In general, transparency is the right to view rights and rules of governance that affect you or might affect you. So those who create in a space should know the creation rules in advance. The ACS principle is:

P11. A person can view in advance any rights that could apply to him or her.

Successful socio-technical systems like Facebook, YouTube and Wikipedia support transparency. A space owner can delegate the right to create in whole or part, but should disclose creation conditions up front, so that potential creators can decide to create or not.

For any entity, the system can assign these roles:

Owner. With meta rights to the entity.

Parent. The containing space owner.

Ancestors. Ancestor space owners (SA the first ancestor).

Offspring (space only). The owners of any entities contained in a space.

Local public (space only). Actors who can enter the space.

A space's local public role defines what others can do in the space:

RoleLocalPublic = (LocalPublic, Spacei, OperationAny)

It can be set manually, as friends are allocated, or point to a GlobalPublicList.

Ancestor role. A conference paper's ancestors are its mini-track, track and conference chairs. An entity, being part of the space it is in, should be visible to the owner of that space. Privacy does not contradict this, as it refers to the display of personal information, not created object information. Generalizing, the ACS principle is:

P12. A space owner should have the right to view any offspring.

So the ancestor role for any entity is given view rights to it:

RoleAncestor = (Ancestors, Entityi, View)

For example, a paper posted on a conference mini-track should be visible to mini-track, track and conference chairs, but not necessarily to other track or mini-track chairs. Ancestors should be notified of new offspring and offspring of new ancestors. The social logic is that a paper added to a mini-track is also added to the track, so the track chair can also view it.

Offspring role. An entity created in a parent space must be created by an actor with the right to enter that space. If a space bans the owner of an object in it, the object is disowned, contradicting P1. A child object's owner must be able to enter its space to act on it, even if they cannot do anything else. By extension, they can also enter any ancestor space. This does not imply any other rights. The ACS principle is:

P13. An entity owner should be able to enter any ancestor space.

e.g. adding a mini-track paper should let one enter the track and conference spaces, even if one cannot see or do anything there. Any space should allow its offspring owners to enter it:

RoleOffspring = (Offspring, Space, Enter)

Table 6.3 summarizes the basic access rights for entities and spaces.

Entity        View   Delete  Edit   Display  Allocate
Ancestor      √
Parent        √                     √1
Owner         √      √       √      √2       √
LocalPublic   √1,2

Space also    Enter  Create
Ancestor      √
Owner         √      √
LocalPublic   √1     √1

1 As allocated by the owner. 2 As allocated by the parent.

Table 6.3: Entity and space access rights

6.11 Display

To display an object is to let others view it. The right to display is not the right to view; e.g. viewing a video online does not let you display it on your web site (footnote 20). Display is the meta-right to view, i.e. the right to give the right to view an object to others; e.g. privacy is the meta right to display the persona object. As people have private numbers in a phone book, so Facebook or Linkedin persona are displayed to the public by owner consent. The phone company that owns a phone book list can also choose not to display a listing, giving the ACS principle:

P14. Displaying an entity in a space requires both persona and space owner consent.

Displaying an item in a space is its owner giving display rights to the space owner. For example, to put a physical notice on a shopkeeper's notice board involves these steps:

Creation. Create a notice. You own it and can still change it, or rip it up.

Permission. Ask the board owner if it can be posted on the notice board.

Post. The board owner may vet notices in advance or let people post themselves.

Removal. As the notice is displayed by mutual consent, either can remove it.

The shopkeeper's right to take a notice down is not the right to destroy it, because he or she does not own it. Nor can he or she alter (deface) notices on the board.

The same social logic applies online. Creating a YouTube video gives you view rights to it, but it is not yet displayed to the public. Giving the right to display a YouTube video is like giving a notice to a shopkeeper to post on their board. The item owner delegates the right to display their video to the space owner, YouTube, who then can choose to display it in their space. In general, to display any video, photo or text in any online space requires mutual consent, as one party gives another the right to display, giving the ACS principle:

P15. An entity owner must give view meta-rights to a space owner to display in that space.

                          Space owner
Display result            Accept    Reject
Object owner  Submit      YES       NO
              Withdraw    NO        NO

Table 6.4: A display interaction

Display as a rights transaction is the basis of all publishing, whether of a video, a book or a paper. Table 6.4 shows how the display result depends on the interaction between author (object owners) and publisher (space owners) rights. A space can delegate display rights, to let creators display as desired, e.g. YouTube. Or it may vet items before display and reject some, e.g. ArXiv, which also lets authors withdraw submissions. Bulletin boards let anyone submit but not withdraw, and reserve the right to moderate postings, i.e. reject later.
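The interaction in Table 6.4 reduces to a simple conjunction of consents, sketched here for illustration:

```python
def display_result(object_owner_act, space_owner_act):
    """P14: an item is displayed only if its owner submits AND the space owner accepts.

    A withdraw by the owner, or a reject by the space owner, gives NO either way.
    """
    shown = object_owner_act == "submit" and space_owner_act == "accept"
    return "YES" if shown else "NO"
```

Either party alone can prevent display, which is what makes it a mutual-consent transaction rather than a one-sided act.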

Authors who publish must give some rights to the publisher to publish. After that, they cannot "un-publish", nor can a publisher (footnote 21). Yet authors do not give all rights to the publisher, e.g. attribution rights. Usually the right to publish is given once only, but some publisher contracts take the right to do so many times; e.g. publishing an IGI book chapter led to its re-publication in other collections without the author's permission (footnote 22) (Whitworth & Liu, 2008).

Entity creation. Technically, creating an entity is simple — the program just creates it — but socially adding into another's space is not a one-step act. Adding a YouTube video involves:

Registration. Create a YouTube persona.

Entry. Enter YouTube (not banned).

Creation. Create and upload a video.

Edit. Edit video title, notes and properties.

Submit. Request YouTube to display the video to their public.

Display. The public sees it and can vote or comment.

YouTube lets anyone registered in the public role (1) enter their space and (2) create a video by uploading or recording, (3) which they then own. (4) They can view and edit its details in private; at this point the video is visible to them and to administrators, but not to the public, and they can still delete it. (5) It is then submitted to YouTube for display to its public, which (6) occurs quickly because display rights are delegated. To create, edit and display a video are distinct steps. YouTube can still reject videos that fail its copyright or decency rules. This is not a delete, as the owner can still view, edit and resubmit the video. A technology design that let space owners permanently delete videos would discourage participation.

Consistency. For the above logic to be consistent, it should also apply when the video itself is a space for comments or votes. Indeed it is, as video owners have the choice to allow comments or votes just as YouTube had the right to accept their video (Figure 6.3). That YouTube gives the same rights to others as it takes for itself is a key part of its success as a socio-technical system.

6.12 Rights Operations

The right to re-allocate rights makes social interaction complex, but it also lets socio-technical systems evolve from an initial state of one administrator with all rights to a community sharing rights. Use and meta rights can be re-allocated, as follows:

Transfer. Re-allocate all rights, including meta-rights. Rights are irrevocably given to the new owner; e.g. after selling a house, the old owner has no rights to it.

Delegate. Re-allocate use rights but not meta-rights. It can be reversed, e.g. renting.

Divide. A right divided among an actor set requires them to join to permit the act, so any party can stop it; e.g. couples who jointly own a house must both agree to sell it.

Share. A right shared across an actor set lets each exercise it as if they owned it exclusively; e.g. couples who severally share a bank account can each take out all the money.
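In a small illustrative sketch, the AND/OR difference between divided and shared rights is just a difference in set logic:

```python
def divided_right_permits(holders, approvals):
    """A divided right is an AND set: every holder must agree to the act."""
    return set(holders) <= set(approvals)

def shared_right_permits(holders, approvals):
    """A shared right is an OR set: any one holder alone can exercise it."""
    return any(h in approvals for h in holders)

# A couple jointly (divided) or severally (shared) owning an asset.
couple = {"Ann", "Ben"}
```

With a divided right, Ann alone cannot sell the house; with a shared right, she can act as if she owned it exclusively.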

                  Allocated by              Allocated to
Right operation   Meta-rights  Use rights   Meta-rights  Use rights
Transfer                                    √            √
Delegate          √                                      √
Merge use         √            ½√                        ½√
Merge all         ½√           ½√           ½√           ½√
Share use         √            √                         √
Share all         √            √            √            √

Table 6.5: Resulting use and meta-right re-allocations

Table 6.5 shows the resultant states of each rights operation for allocator and allocatee. Dividing a right means that all must agree to it, while sharing a right means that any party alone can activate it. In information terms, dividing is an AND set and sharing is an OR set. This is not just splitting hairs, as if a couple owns a house jointly, both must sign the sale deed to sell it, but if they own it severally, either party can sell it and take all the money. Re-allocating rights applies to many social situations; e.g. submitting a paper online can give all rights to a primary author, let the primary author delegate rights to others, merge rights so that all authors must confirm changes, or share rights among all authors. Each has different consequences; e.g. sharing an edit right is risky but invites participation, while merging it among the authors is safe but makes contributing harder.

Delegation. Delegation, by definition, does not give meta-rights, so a delegatee cannot pass rights on. Renting an apartment gives no right to sub-let, and lending a book does not give the right to on-lend it. It is not hard to show that if delegatees delegate, accountability is lost. If one lends a book to someone who lends it to another who loses it, who is accountable? This gives the operational principle:

P16. Delegating does not give the right to delegate.

Allocating use rights to an existing object makes the target person accountable for it, so it requires consent; e.g. one cannot add a paper co-author without agreement. The principle is:

P17. Allocating existing object use rights to a person requires their consent.

An ACS might ask:

"Bob offers you edit rights to 'The Plan', do you accept?"

In contrast, rights to null acts, like view or enter, or to acts like create, can be allocated without consent because they imply no accountability:

P18. Allocating null rights to existing objects, or the right to create, requires no consent.

So Facebook wall owners can freely delegate entry, view or create rights to their wall space without the recipients' consent.
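P17 and P18 can be sketched as an allocation function that demands consent only for non-null operations (the operation names here are illustrative):

```python
# Null acts and creation imply no accountability, so no consent is needed (P18).
NULL_OPERATIONS = {"view", "enter", "create"}

def allocate(rights, to_actor, entity, operation, consent=False):
    """P17: use rights on existing objects make the recipient accountable,
    so they require the recipient's consent; null rights (P18) do not."""
    if operation not in NULL_OPERATIONS and not consent:
        raise PermissionError("allocating a use right requires the person's consent")
    rights.add((to_actor, entity, operation))
    return rights

rights = set()
allocate(rights, "Bob", "wall", "view")                   # no consent needed
allocate(rights, "Bob", "ThePlan", "edit", consent=True)  # "Bob offers you edit rights..."
```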

Social networks. Social networks currently send messages like:

"Bob wants to be friends with you."

Friendship is assumed to be a tit-for-tat social trade, where I offer to make you a friend if you make me one. Yet by P7, one can befriend another without their permission (footnote 23). If the software allowed it, we might get messages like:

"Bob considers you a friend, please visit his page. "

This is giving friendship, not trading it. As one can love a child unconditionally, even if they do not return the favour, so friendship need not be a commercial transaction.

For a social network to consider that the friends of my friends are also my friends contradicts P16. As liking someone does not guarantee that one will like their friends, so making a friend should not reset my friend list. This illustrates a technical option that failed because it had no social basis.

6.13 Implementation

Traditional access control enforcement is done by a security kernel mechanism. A security kernel is a trusted software module that intercepts every access request submitted to a system and decides if it should be granted or denied, based on some specified access policy model. Usually a centralized approach is used, so one policy decision point handles all resource requests. The access request gets either an executed action result or a permission denied message. Social networks have millions of users, so centralized or semi-decentralized certificate schemes become a bottleneck. This, plus the social need for local ownership by content contributors, suggests a strategy of distributed certificates to implement the ACS policy model outlined here. Allowing local policy decision points to handle resource requests also ensures local user control over resources. If distributed certificates are stored in the stakeholder's namespace, only he or she can access and modify them (Figure 6.4).
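A hedged sketch of this distributed approach (all names invented) places certificates and decisions in each stakeholder's own namespace:

```python
class LocalPolicyPoint:
    """A stakeholder's namespace holds its own certificates, so resource
    requests are decided locally rather than by one central kernel."""
    def __init__(self, owner):
        self.owner = owner
        self.certificates = set()   # (actor, entity, operation) permissions

    def issue(self, by, actor, entity, operation):
        # Only the namespace owner can access and modify their certificates.
        if by != self.owner:
            raise PermissionError("only the stakeholder can modify their certificates")
        self.certificates.add((actor, entity, operation))

    def decide(self, actor, entity, operation):
        """Local policy decision point: grant or deny without a central authority."""
        return (actor, entity, operation) in self.certificates

alice_pdp = LocalPolicyPoint("Alice")
alice_pdp.issue("Alice", "Bob", "photo1", "view")
```

Each such point answers only for its owner's resources, so there is no single decision point to overload, and local user control falls out of the storage layout itself.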

6.14 An example: Tapping the Internet

Technology will soon allow, if it does not already, the storage of every phone call, text message, e-mail, tweet and online posting, every day from now on. The PRISM system's capacity to record all global communications all the time illustrates what is possible. The privacy problem is that listening in on people gives power over them, and computer databases extend that power to the indefinite past. Knowing that what you post online can come back to haunt you later reduces synergy, as in a police state. Security experts say that if you do nothing wrong you have nothing to fear, but who among us is perfect? And happenstance can make anyone a person of interest, e.g. an innocent neighbour or relative of a terrorist, whose online past is then open to public or government scrutiny.

The social view that violating everyone's rights to catch some criminals is counter-productive was built into the US constitution by its founders, with rules preventing random search and unwarranted monitoring. A government should not deny its citizens freedom or privacy without just cause, but it now seems that PRISM has been doing just that online, justified firstly on the grounds that it only collects "meta-data", like call length and phone number, and secondly that it is necessary for security.

In information terms, meta-data is data about data, like an average. In contrast, call length is data about a call, i.e. just data. In a database, call length is a property of the call entity, just as call content is. The meta-data argument therefore lacks validity, but the claim that community security trumps personal privacy does not, because a citizen is part of the community. One has privacy with respect to others but not with respect to the state one is in. So the fourth amendment allows monitoring provided it is truly a community act, as formally checked by a judge. Wiretapping requires a warrant to stop vested interests abusing community power, as when the US tax department targeted groups based on political affiliation. As a community cannot permit what it does not know of, such systems should not be secret. This social logic applies whether tapping a phone line or the Internet in general.

Technology power offers not just more capability but also more options. In the past, a wire-tap revealed a communication, content and all, but PRISM ignores the content and claims that makes it acceptable. A technology evolution has undone a prior social evolution, namely the fourth amendment of the US constitution, so lawmakers have to go back to original causes. The reason for this amendment is shown by a use case: suppose a computer system tapping all phone calls uncovers terrorist plots, which thus fail. Organized terrorists then stop using phones and, say, use texts instead, although foolish novices may still be picked up. If texts are tapped, they move on to e-mail, blogs, chat, etc. Spam and virus wars follow this pattern. The result is a system that monitors all the communications of its good citizens but is bypassed by most terrorists, and is now ripe for corruption, abuse and hijack. Monitoring all communications may find some terrorists, but is it worth it? The founders of the US constitution certainly did not think so, and electronic surveillance is no different from physical surveillance.

The technology that causes a social problem can also solve it, if built with social principles like privacy in mind. One solution is to consider all personal data "radioactive" and so not collect it: what is not stored cannot be stolen or subpoenaed. Companies like Vodafone that bill people personally have to collect personal data, but companies like Google have no reason to store names and addresses that governments can later demand. If personal data is collected, let it be kept behind a firewall, apart from the general network data that systems like PRISM analyze. The latter could then be provided, but getting individual names would require a warrant, just as an individual phone wiretap does. With a little ingenuity, technology can fight terrorism without going socially backwards.
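The firewall design just described can be sketched in code: traffic analysis works on opaque identifiers, while resolving an identifier to a name requires a warrant. This is a hypothetical illustration of the design choice, not any real system; the class, identifiers and warrant token are all invented for the example.

```python
# Hypothetical sketch: personal data kept apart, released only by warrant.
class IdentityVault:
    def __init__(self):
        self._names = {}  # opaque id -> personal data, behind the "firewall"

    def register(self, opaque_id, name):
        self._names[opaque_id] = name

    def resolve(self, opaque_id, warrant=None):
        # Releasing a name is a community act: it needs a warrant.
        if warrant is None:
            raise PermissionError("a warrant is required to resolve identities")
        return self._names[opaque_id]

vault = IdentityVault()
vault.register("u42", "Alice Example")

# General network data (caller id, callee id, call length) stays analyzable:
traffic = [("u42", "u77", 120), ("u42", "u13", 45)]
total_call_time = sum(length for _, _, length in traffic)

try:
    vault.resolve("u42")  # no warrant: denied
except PermissionError:
    pass
name = vault.resolve("u42", warrant="court-order-17")  # warranted: released
```

Analysts can thus work with the traffic data freely, while the step from an opaque identifier to a person remains a formally checked exception, like an individual phone wiretap.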

6.15 A Sociotechnical Standard

Legitimate access control can assign owner, parent, ancestor, offspring and local public rights to objects and spaces in a way that encourages social system success. We hope the following ACS principles are a first step to developing common standards in socio-technical design:

All non-null entity rights should be allocated to actors.

A persona should be owned by itself.

Every entity has a parent space, up to the system space.

Any right to use an object implies a right to view it.

Any communication act should have prior mutual consent.

A role is a right expressed in general terms using sets.

A space owner can ban or give entry to a persona without its owner's permission.

A meta-right is the right to allocate any entity right, including the meta-right itself.

Creation is an act on a space, up to the system space.

The creator of a new entity should immediately gain all rights to it.

A person can view in advance any rights that could apply to them.

A space owner should have the right to view any offspring.

An entity owner should be able to enter any ancestor space.

Displaying an entity in a space requires both persona and space owner consent.

To display an entity in a space, the entity owner gives view meta-rights to the space owner.

Delegating does not give the right to delegate.

Allocating existing object use rights to a person requires their consent.

Allocating null rights to existing objects, or the right to create, requires no consent.
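Several of the principles above can be expressed as invariants on a simple entity model. The sketch below is an illustrative assumption about how such a model might look in code, covering only the parent-space chain, creator rights and use-implies-view principles; all names are invented for the example.

```python
# Minimal sketch encoding a few ACS principles as checkable invariants.
class Entity:
    def __init__(self, name, creator, parent):
        self.name = name
        self.parent = parent  # every entity has a parent space, up to system
        # the creator immediately gains all rights, including the meta-right:
        self.rights = {creator: {"view", "edit", "delete", "meta"}}

    def ancestors(self):
        node, chain = self.parent, []
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain

def can_view(entity, actor):
    # any right to use an object implies a right to view it
    return bool(entity.rights.get(actor))

system = Entity("system", creator="system", parent=None)  # the system space
wall = Entity("wall", creator="Alice", parent=system)
photo = Entity("photo", creator="Alice", parent=wall)

assert system in photo.ancestors()       # parent chain ends at system space
assert "meta" in photo.rights["Alice"]   # creator gains all rights
assert can_view(photo, "Alice")          # use rights imply view
assert not can_view(photo, "Bob")        # no rights, no view
```

A fuller model would add spaces versus items, roles as sets, and consent checks on rights allocation, but even this much shows how the principles can be machine-checked rather than left to designer intuition.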

6.16 Discussion Questions

The following questions are designed to encourage thinking on the chapter and exploring socio-technical cases from the Internet. If you are reading this chapter in a class, whether academic or commercial, the questions might be discussed in class first, and then students can choose questions to research in pairs and report back to the next class.

What is access control? What types of computer systems use it? Which do not? How does it traditionally work? How do social networks challenge this? How has access control responded?

What is a right in human terms? Is it a directive? How are rights represented as information? Give examples. What is a transmitted right called? Give examples.

What is the difference between a user and an actor? Contrast user goals and actor goals. Why are actors necessary for online community evolution?

Is a person always a citizen? How do communities hold citizens to account? If a car runs over a dog, is the car accountable? Why is the driver accountable? If online software cheats a user, is the software accountable? If not, who is? If automated bidding programs crash the stock market and millions lose their jobs, who is accountable? Can we blame technology for this?

Contrast an entity and an operation. What is a social entity? Is an online persona a person? How is a persona activated? Is this like "possessing" an online body? Is the persona "really" you? If a program activated your persona, would it be an online zombie?

What online programs greet you by name? Do you like that? If an online banking web site welcomes you by name each time, does that improve your relationship with it? Can such a web site be a friend?

Compare how many hours a day you interact with people via technology with the time spent interacting with programs alone. Be honest. Can any of the latter be called conversations? Give an example. Are any online programs your friend? Try out an online computer conversation, e.g. with Siri, the iPhone app. Ask it to be your personal friend and report the conversation. Would you like a personal AI friend?

Must all rights be allocated? Explain why. What manages online rights? Are AI programs accountable for rights allocated to them? In the USS Vincennes tragedy, a computer program shot down an Iranian civilian airliner. Why was it not held to account? What actually happened and what changed afterwards?

Who should own a persona and why? For three STSs, create a new persona, use it to communicate, then try to edit it and delete it. Compare what properties you can and cannot change. If you delete it entirely, what remains? Can you resurrect it?

Describe two ways to join an online community and give examples. Which is easier? More secure?

Describe, with examples, current technical responses to the social problems of persona abandonment, transfer, delegation and orphaning. What do you recommend in each case?

Why is choice over displaying oneself to others important for social beings? What is the right to control this called? Who has the right to display your name in a telephone listing? Who has the right to remove it? Does the same apply to an online registry listing? Investigate three online cases and report what they do.

How do information entities differ from objects? How do spaces differ from items? What is the object hierarchy and how does it arise? What is the first space? What operations apply to spaces but not items? What operations apply to items but not spaces? Can an item become a space? Can a space become an item? Give examples.

How do comments differ from messages? Define the right to comment as an AEO triad. If a comment becomes a space, what is it called? Demonstrate with three commenting STSs. Describe how systems with "deep" commenting (comments on comments, etc.) work. Look at who adds the depth. Compare such systems to chats and blogs – what is the main difference?

For each operation set below, explain the differences, give examples, and add a variant to each set:

Delete: Delete, undelete, destroy.

Edit: Edit, append, version, revert.

Create: Create.

Define a fourth operation set.

Is viewing an object an act upon it? Is viewing a person an act upon them? How is viewing a social act? Can viewing an online object be a social act? Why is viewing necessary for social accountability?

What is communication? Is a transfer like a download a communication? Why does social communication require mutual consent? What happens if it is not mutual? How does opening a channel differ from sending a message? Describe online systems that enable channel control.

Answer the following for three different but well known communication systems: Can a sender be anonymous to a receiver? Can a receiver be anonymous to a sender? Can senders or receivers be anonymous to moderators? Can senders or receivers be anonymous to the transmission system?

Answer the following for a landline phone, mobile phone and Skype: How does the communication request manifest? What information does a receiver get and what choices do they have? What happens to anonymous senders? How does one create an address list? What else is different?

What is a role? Can it be empty or null? How is a role like a maths variable or computing pointer? Give role examples from three popular STSs. For each, give the ACS triad, stating what values vary. What other values could vary? Use this to suggest new useful roles for those systems.

How can roles, by definition, vary? For three different STSs, describe how each role variation type might work. Give three different examples of implemented roles and suggest three future developments.

If you unfriend a person, should they be informed? Test and report what actually happens on three common social networks. Must a banned bulletin board "flamer" be notified? What about someone kicked out of a chat room? What is the general principle here?

What is a meta-right? Give physical and online examples. How does it differ from other rights? Is it still a right? Can an ACS act on meta-rights? Are there ACS meta-meta-rights? If not, why not? What then does it mean to "own" an entity?

Why can creating an item not be an act on that item? Why can it not be an act on nothing? What then is it an act upon? Illustrate with online examples.

Who owns a newly created information entity? By what social principle? Must this always be so? Find online cases where you create a thing online but do not fully own it.

In a space, who, initially, has the right to create in it? How can others create in that space? What are creation conditions? What is their justification?

Find online examples of creation conditions that limit the object type, operations allowed, access, visibility and restrict edit rights. How obvious are the conditions to those creating the objects?

Give three examples of creating an entity in a space. For each, specify the owner, parent, ancestors, offspring and local public. Which role(s) can the owner change?

For five different STS genres, demonstrate online creation conditions by creating something in each. How obvious were the creation conditions? Find examples of non-obvious conditions.

For the following, explain why or why not. Suppose you are the chair of a computer conference with several tracks. Should a track chair be able to exclude you, or hide a paper from your view? Should you be able to delete a paper from their track? What about their seeing papers in other tracks? Should a track chair be able to move a paper submitted to their track by error to another track? Investigate and report comments you find on online systems that manage academic conferences.

An online community has put an issue to a member vote. Discuss the effect of these STS options:

Voters can see how others voted, by name, before they vote.

Voters can see the vote average before they vote.

Voters can only see the vote average after they vote, but before all voting is over.

Voters can only see the vote average after all the voting is over.

An online community has put an issue to a member vote. Discuss the effect of these STS options:

Voters are not registered, so one person can vote many times.

Voters are registered, but can change their one vote any time.

Voters are registered, and can only vote once, with no edits.

Which option might you use and when?

Can the person calling a vote legitimately define vote conditions? What happens if they set conditions such as that all votes must be signed and will be made public?

Is posting a video online like posting a notice in a local shop window? Explain, covering permission to post, to display, to withdraw and to delete. Can a post be deleted? Can it be rejected? Explain the difference. Give online examples.

Give physical and online examples of rights re-allocations based on rights and meta-rights. If four authors publish a paper online, list the ownership options. Discuss how each might work out in practice. Which would you prefer and why?

Should delegating give the right to delegate? Explain, with physical and online examples. What happens to ownership and accountability if delegatees can delegate? Discuss a worst case scenario.

If a property is left to you in a will, can you refuse to own it, or is it automatically yours? What rights cannot be allocated without consent? What can? Which of these rights can be freely allocated: Paper author. Paper co-author. Track chair. Being friended. Being banned. Bulletin board member. Logon ID. Bulletin board moderator. Online Christmas card access? Which rights allocations require receiver consent?

Investigate how SN connections multiply. For you and five others find out the number of online friends and calculate the average. Based on this, estimate the average friends of friends in general. Estimate the messages, mails, notifications etc. you get from all your friends per week, and from that calculate an average per friend per day. If you friended all your friend's friends, potentially, how many messages could you expect per day? What if you friended your friend's friend's friends too? Why is the number so large? Discuss the claim of the film Six Degrees of Separation, that everyone in the world is just six links away from everyone else.

Demonstrate how to "unfriend" a person in three social networks. Are they notified? Is unfriending "breaking up"? That an "anti-friend" is an enemy suggests "anti-Facebook" sites. Investigate technology support for people you hate, e.g. celebrities or a relationship ex. Try anti-organization sites, like sickfacebook.com. What purpose could technology support for anti-friendship serve?

Author(s)

Born in England and brought up in New Zealand, Brian Whitworth currently works at Massey University in Auckland, New Zealand. After doing a mathematics degree, and a Master's thesis on split-brain neuropsychology, Brian joined the New Zealand Army, where he was the first specialist to complete regular army officer cadet training. He worked as an army psychologist, and then in computer operational simulations (wargames), while simultaneously raising four wonderful children, until he retired in 1989 as a Major. Brian then completed his doctorate on online groups, and students at his university used the social voting system he built until the World Wide Web arrived. In 1999, he worked in the USA as a professor, and published in journals like Small Group Research, Group Decision and Negotiation, Communications of the AIS, IEEE Computer, Behavior and Information Technology, and Communications of the ACM. More recently, he was the senior editor of the Handbook of Research on Socio-Technical Design and Social Networking Systems, written by over a hundred leading experts worldwide. His interests include computing, psychology, quantum theory and motor-cycle riding.

Adnan Ahmad was born in Lahore, Pakistan. He received his Bachelor of Science (Hons.) from Govt. College University (GCU, 2005), with a major in software engineering, and a Masters of Science from Lahore University of Management Sciences (LUMS, 2009), with a major in distributed systems. He worked for five years in industry, getting hands-on experience with cutting-edge software and hardware technologies. His PhD in Information Technology at Massey University, New Zealand, was on a formal model of distributed rights allocation in online social interaction. He has published in well-known conferences like Worldcomp, IFIP SEC, IAS, IAIT and Trustcom. His current research applies socio-technical design principles to computer security, and his other interests include crowd sourcing, distributed systems, spam and image processing.