Tag: ethics

This blog post started life as a comment on Martin Weller’s post about language and how it affects behaviour and thoughts in Edtech. The comment mysteriously disappeared as I posted it, so I thought that I would repost it here and link from Martin’s post. The title of the post, “Let’s think inside the box”, made me think about Blackboxing.
Whilst I agreed with many of the ideas in the post, Martin’s statement “But this post isn’t about politics” stopped me short. I wondered what those of us thinking of making a submission to OER17 would make of this. I have been puzzling for a few weeks about what politics can mean in the context of “The Politics of Open” theme for OER17, and I guess that others may be doing so too. I think that the concept of politics being broad and interpreted in different contexts will make for a great conference that enriches our understanding of Open Educational Practices and Resources.
So I would like to encourage Martin to think about how his post could be about politics and ‘fit’ OER17 – maybe even rework it as a submission 🙂 I was grateful to him for getting me to think about politics and language, and my Google Scholar search turned up two corking resources.

My subject discipline was Information Systems (which bears some parallels to Edtech) and in our teaching, we included social and organisational aspects as well as the technological ones. Ethics, gender and power were definitely on the curriculum. When we think about education and/or technology without considering the politics – global, national, organisational, personal, interpersonal – we are filtering our understanding, and it’s no wonder that our noble aspirations don’t always work out as we would hope. This is why I think the OER17 theme is so exciting – it can help us expand our thinking and our practice.

In my writing, reading and thinking during the last year or so, some of the recurring themes are ethics, learning, diversity, popularity and polarisation in Internet culture. Encouraged by my experience at the smallest federated wiki, I am trying different ways of writing, experimenting with partially-formed ideas, linking with and building on what others have written. I have always blogged in this way, but the experience of federated wiki has encouraged me to work in smallish chunks of writing that I can link to others’ smallish chunks without any overarching plan of where I am going.

This post is a bit different from my fedwiki writing. I want to set down some different ideas that seem connected to me because I suspect that I can come back to them later and make different connections, with your help. I did this in an earlier post that is still ripe for connections for me. If this post generates anything like the richness of the earlier post’s comment stream, it will be productive labour.

In some ways, we are moving away from limiting old binaries and dualisms like real/virtual, global/local in our exploration of (digital) communication and culture.

Just like its government equivalent, voting on social networks is also a nice way to give the illusion that anything and anyone can succeed on merit while actually maintaining the status quo through sociotechnical structures. Tech entrepreneurs deploy voting to show allegiance to their fantasy of a color-blind and genderless meritocracy, predicated on what PJ Rey has shown to be an outdated and debunked notion that the Internet allows us to transcend race, class, and gender by entering a space of pure information. Popular posts are good, the logic goes, because only the best makes it to the front page.

David’s critique of voting on social networks argues powerfully that the ‘binaries’ of up-voting and down-voting are inadequate for dealing with ambiguity and divisive topics. They are a tool for polarisation, not a means of going beyond it, and as David suggests, the status quo of domination of spaces by white males is maintained, and even reinforced, by sociotechnical aspects such as ‘voting’. The idea that someone’s popularity lends additional weight to what they have to say is interesting and deserves to be unpicked.

Another ‘binary’ that has attracted much attention is public/private – which probably never was a binary and certainly is no longer one – and it is deeply embedded in power relations. danah boyd’s work has revealed that young people can regard online privacy as a strategy, more to do with who’s there than with the features of the space itself.

We are all finding our way in the complex private/public spaces we increasingly inhabit and so it’s important to reflect, acknowledge our successes and mistakes and think of how we might do things differently.

In a powerful post, based on recent events and her own frightening experiences, Audrey Watters drew our attention to the nasty practice of doxxing, posting online someone’s personal information (such as social security number or home address). Audrey highlighted the network aspects of doxxing.

After all, doxxing relies on these sorts of large networks. Doxxing relies on amplification.

So even if we are reposting something already ‘made public’, we can be increasing the risks of the doxxed person being subjected to threats and nuisances by others. We can become part of that network of harm. In the example that Audrey gives, both the original poster and the re-poster may well have seen themselves as on the side of the angels in their wish to defend student privacy.

And for me that links back to polarisation: it may be that we are in most danger of being drawn into networks of harm when we are hell-bent on supporting a good cause.

The other thing that is puzzling me is whether or not the binary nature of much of our online participation – like/not like, friend/not friend, follow/not follow, click/not click, upvote/downvote, block/not block – might be seeping into our culture, as well as the platforms on which we enact it. These are hard, clickable binaries trying to capture a world where dualisms can get in the way of understanding complex contexts. I am not suggesting an essentialist view that our use of ‘likes’, etc. will cause polarisation of views, but wondering what the impacts of ‘binary participation’ may be in different communication contexts. It’s not just about our choice to click/like but also about how that is used by the algorithms that serve up our feeds, shaping our view of what others say and do.
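To make the point about binary participation concrete, here is a toy sketch (my own illustration, not any real platform’s algorithm): if a feed orders posts purely by net votes, a crowd-pleaser floats to the top while a divisive-but-thoughtful post, which attracted both up- and down-votes, sinks.

```python
# Hypothetical posts with up/down vote counts (illustrative data only).
posts = [
    {"text": "nuanced take", "up": 12, "down": 10},
    {"text": "crowd-pleaser", "up": 80, "down": 2},
    {"text": "dissenting view", "up": 5, "down": 40},
]

def rank(posts):
    # Net score collapses all reaction into a single number:
    # ambivalence and controversy become indistinguishable from mediocrity.
    return sorted(posts, key=lambda p: p["up"] - p["down"], reverse=True)

for p in rank(posts):
    print(p["text"], p["up"] - p["down"])
```

The “nuanced take” here drew almost as many down-votes as up-votes, so it ranks barely above the openly disliked post – the binary signal flattens away why people reacted as they did.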

Wikipedia defines a troll as “a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community”.

The Electronic Frontier Foundation (EFF) is a serious and respected organisation that defends the Internet as a platform for free speech. There is a culture, visible but not universal in some gaming communities, technical forums and the tech community in general, that is voluble about its defence of Internet free speech, seeing it as under threat from any attempt to curb what many of us might see as pathological behaviours. In this culture, the word troll is used ambiguously, suggesting playfulness and defending anything that is done to combat those who are seen to challenge free speech. This article, from its own particular slant, suggests a political link to free speech: “To conservatives and right-leaning libertarians, it’s a welcome pushback against left-wing cultural diktat, particularly in the area of gender politics.”

EFF’s defence of anonymity is admirable when considered as a means of enabling whistle-blowers, say, but anonymity is experienced differently when short-lived anonymous Twitter accounts post obscene images and threats to their chosen targets. Internet culture can be a culture of truth/lies, right/wrong – no shades within the discourse – and some see trolling as a necessary part of free speech online.

“The trolls are the immune system of the internet, and if you have the immune system ganging up on you, then you need to fight back or give up, according to whether they are right or you are right.” Commenter on Kathy Sierra’s article in Wired

There is a growing body of anecdotal evidence of women being abused online, but social media may be exposing an existing epidemic rather than causing that abuse. It’s tempting to see a gender split between male trolls and women victims, but it’s much more complicated than that. There are trolls and victims of all genders and races, and ‘trolling’ is subjective – the same post can be perceived very differently by different people.

For me, the death threats and intimidation that Kathy Sierra experienced were criminal harassment, and such harassment should be pursued vigorously under the laws of the country in which it happened. But this extreme behaviour exists in the context of online networks where apparently rational people can disbelieve the existence of threats, or even condone them as justifiable when the recipient is seen as challenging Internet free speech or favouring censorship (even when they haven’t).

When I searched for posts aimed at Kathy Sierra’s Twitter account, I came across these amongst the last sent to her stream.

I was struck by two aspects: first the characterisation of Kathy Sierra as a passive-aggressive troll; and secondly, the patronising way in which she was advised how to behave online. The author of those tweets expressed himself in a rational manner, not appearing to be the sort of person who would harass anyone online. But it seems to me that in the circles around the extreme harassers are rational people with strong views on freedom who lend support to the less rational who carry out the vile harassment.

The silenced audience amused themselves by completing a bingo card with phrases the male allies used in their speeches.

The thing that is supposed to come with freedom, responsibility, may help shed some light on what is going on here. If the culture in which you live your life online tells you that it’s OK if harassment is collateral damage in the cause of freedom of speech on the Internet, then you are wanting that freedom without responsibility. You may claim that you are not directly responsible. If you are a leader in technology who tells yourself that you are helping to solve gender discrimination in the workplace by telling women how to behave differently, then you are ducking your responsibility.

Homophily is the principle that contact between similar people occurs at a higher rate than among dissimilar people. Van Alstyne and Brynjolfsson (2004) characterised this on the Internet as voluntary balkanisation, with a loss of shared experiences that is damaging to the development of both democratic societies and de-centralised organisations.

So the same Internet that promised freedom and a Global Village is the one where we allow ourselves to reinforce our views by hanging around with people who think like us, and over time we lose trust in people who might think slightly differently. Surely we can do better than this?
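A toy simulation can make the homophily effect concrete. This is my own illustration, not Van Alstyne and Brynjolfsson’s actual model, and the parameter `p_similar` is a made-up knob: agents split into two camps form ties, preferring their own camp with probability `p_similar`, and the resulting network is dominated by within-camp ties.

```python
import random

random.seed(1)  # for reproducibility of this sketch

def simulate(n=100, ties_per_agent=5, p_similar=0.9):
    """Fraction of ties formed within an agent's own camp."""
    views = [i % 2 for i in range(n)]  # two equal camps, 0 and 1
    same, total = 0, 0
    for i in range(n):
        for _ in range(ties_per_agent):
            if random.random() < p_similar:
                # prefer a partner from the same camp (homophily)
                j = random.choice([k for k in range(n)
                                   if views[k] == views[i] and k != i])
            else:
                j = random.choice([k for k in range(n)
                                   if views[k] != views[i]])
            same += views[j] == views[i]
            total += 1
    return same / total

print(round(simulate(), 2))  # typically close to p_similar
```

Even a mild preference for similar others, compounded over many small choices of whom to follow or reply to, leaves each camp talking largely to itself – which is the balkanisation worry in miniature.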

I have been thinking recently about the (lack of) diversity in my own Twitter network, and I captured some of the ideas sparked by others in a Storify that I concluded with

“But it is a good question – when we network across multiple channels how can we maintain the tension between diversity and drowning? If we pick “really great people” to connect to we are subject to their (lack of) diversity and to the Spiral of Silence.”

Maybe we can employ intersectionality, which Wikipedia defines as “the study of intersections between forms or systems of oppression, domination or discrimination.” So when we think about tech culture, we don’t think just in terms of gender but also class, race and other categories. I am a practical person, and I am trying to extend my Twitter network to include people whose struggles I understand less well.

My last example is of institutional appropriation of social media and the ethics of anonymous monitoring. Sara Ryan blogged the experiences of her 18-year-old son, who had learning disabilities and was admitted to an Assessment Centre, where he later died after an epileptic seizure while unattended in the bath (fuller account here). An independent report found his death to have been preventable, and Sara has been using social media to campaign for #justiceforLB.

Last week, Sara discovered that, despite assurances to the contrary from the Chair of the Health Board, her blog had been monitored and a briefing issued that seemed to be aimed at damage limitation and media management. Two worrying issues emerge from the briefing: first, that monitoring of the blog revealed Sara’s concerns about the dangers of her son fitting unattended, yet these were ignored; and secondly, that the Trust saw itself as the victim of ‘trolling’ on Twitter.

I think that all of these examples show the balkanisation of social, organisational and extra-organisational cultures mediated through face to face and Internet communication. The question is – what can we do about it?

We can look at how we can understand and appreciate the experiences of others by really listening

We can take a break like Kathy Sierra and Julie Pagano have done

We can challenge hostile and malignant cultures by mocking them like #ghcmanwatch did or when a grave injustice has occurred by campaigns like #justiceforLB

Note: These are complex ideas and this post is my first attempt to work through them. I welcome factual corrections and constructive criticism.

My dear friends Jenny Mackness and Mariana Funes, who have helped me think about the ethics of online learning and being, and how things might be different: Trust and security in open networks
http://marianafun.es/ct101/2014/10/07/reflection-3-your-stalker-my-friend/

Victoria (best wishes with that PhD) who wrote a great post about trolling
http://digitalmentalhealth.co.uk/wp/when-is-a-troll-not-a-troll-and-who-decides-2/

My great Twitter network who help me reflect, look near and far, and cheer me up when I get grumpy

As part of a MOOC on rhizomatic learning that performs itself in many different spaces (Facebook, P2PU, G+, Twitter and others), I am a member of an ‘open’ Facebook group. It is endlessly fascinating, and has given me a lot of scope for reflection about back channels and the exchange of information between open and closed spaces. Of course, I say that as if a space could be categorised as open or closed: it’s often a lot more complicated than that, acted out by technical aspects of the space and by the agency of the people who interact there. Facebook groups can be open, closed or secret, the meanings of these being laid out in the Facebook help.

See what I did there? I linked to the ‘closed’ space of Facebook, visible only if you are one of the 1.3 billion members of Facebook. Now of course, we don’t know if there are exactly that many Facebook members, but let’s settle for there being a very large number of them. Facebook is not completely open from the outside, but it doesn’t seem very closed either. For example, if I log out of Facebook and Google for “Facebook pictures dog”, I will see lots of peeks into Facebook groups that might entice me to sign up or log in.

Now I am neglecting those of you who aren’t Facebook members and can’t see the Facebook help page. Facebook describe the visibility of posts as:

Who can see what members post in the group?
Open – Anyone
Closed – Only members
Secret – Only members

Yes, that means that anyone who has the link to an open Facebook group post or comment can share it inside or outside Facebook, and it can be opened by any Facebook member (not just group members). In the case of the rhizo14 MOOC, participants who are not Facebook members are excluded from sight of posts in the Facebook group, whilst a very large number of Facebook members who have never heard of rhizo14 could check it out if you sent them the link.
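The help-page rules above can be recast as a tiny sketch (my own illustration of the settings as described, not Facebook’s actual code): the key point is that even an ‘open’ group sits behind the Facebook login.

```python
# Visibility of posts per group type, as in the help-page table above.
POST_VISIBILITY = {
    "open": "anyone",          # any Facebook member
    "closed": "only members",  # group members only
    "secret": "only members",  # group members only
}

def can_see_posts(group_type, is_member, is_facebook_user):
    """Can this person read posts in a group of the given type?"""
    if not is_facebook_user:
        # Even 'open' group posts are invisible without a Facebook account.
        return False
    return POST_VISIBILITY[group_type] == "anyone" or is_member
```

So `can_see_posts("open", is_member=False, is_facebook_user=True)` is true, while a non-Facebook-member rhizo14 participant fails the very first check, whatever the group type.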

Ethical dilemmas

I will sketch out a few of the ethical dilemmas I have observed in the rhizo14 Facebook group. These are early reflections, and I would welcome your comments on this post.

How do we behave around here?

The rhizo14 MOOC offers no explicit written norms, behavioural or otherwise, and the strapline for the FB group is “An attempt to create a feed for Rhizomatic Learning posts from around the web.” The Facebook group has become not only a site for sharing blog posts, other rhizo14 creations and links to interesting and relevant stuff; it has also become a place for demonstrations of friendship and affection by some members. It’s clear that a number of people (significantly fewer than the full 240-ish membership) regard the group as a semi-private backchannel where they can expect support and sympathy for their encounters elsewhere. What the remainder of the membership think about this is less clear. They may be just letting it all flow past, or ‘lurking’ – a behaviour that has attracted discussion in the FB group.

The implicit norms on lurking in the FB group are to some extent discernible, but the norms on other behaviours sometimes seem to be taken as read by some active members of the group. I posited a relational approach to power early on in the rhizo14 MOOC, but that is not a common view in the Facebook group, where a one-dimensional view of power is more common. We can discern a flexible approach to ‘rule-breaking’ in rhizomatic learning in the discussion around rhizomatic learning and ethics in the comments on this post, so it’s unlikely that a set of ‘rules’ to govern behaviour would work on rhizo14. Teachers and moderators can model ethical behaviour, and communities usually engage with norm-building online, where misunderstanding is not uncommon. Overt moderation and norm-building activities have been generally absent from rhizo14 in general and the FB group in particular.

What does sharing mean within and beyond the rhizo14 community?

A lot of sharing goes on at rhizo14, and there is a sense that openness is a value of rhizo14. The remix culture has been very evident in rhizo14, and creativity and remix have brought a lot of pleasure. The Communities of Practice literature and others have identified the importance of the boundary in the propagation of knowledge. The facility for stuff and people to cross boundaries presents great opportunities, but with these come tricky questions of how we share and what we do with what is shared. One great set of ‘rules’ that has helped sharing is Creative Commons licences: not always enforceable, but signifying intent about sharing and use.

A dilemma presented by research data sharing is currently playing out in the rhizo14 FB group, and it raises, for me at any rate, some very interesting issues about how we do Open Research. The Twitter stream fragment gives an indication of the controversy, and the discussion of what constitutes appropriate use of open research data by ‘others’ (those ‘outside’ the community) is playing out on the rhizo14 Facebook group. I raised the issue of the ethics of use of open/closed data for research purposes in rhizo14 when it became clear that a group doing auto-ethnography and a group of which I am a part were both doing research around rhizo14.

I became involved in this discussion because I had contributed to the auto-ethnography open Google document but had deferred on authorship of any subsequent outputs (as was possible within the invitation). The complication that I presented was that I had asked not to be quoted (on reflection this created problems for the auto-ethnographers).
I resolved my personal dilemma by deleting my contribution, in consultation with members of the auto-ethnography group. However, the issue of who can use the information in the auto-ethnography is still the subject of discussion – on the FB group and publicly on Twitter.

Discussion of Agency

As humans we can have moral agency – the thoughts and feelings that guide the way we act. I suggest that sharing our ethical stance with others can help our moral agency within a network of human and technical agents. I am not thinking of a set of rules, but rather the expectations and ethical stance that we could share with other moral agents. What I have observed in the discussion on the ethics of use of information from the auto-ethnography document is that some participants seem to assume there is a ‘common decency’ approach to the use of ‘open’ information. This is a little dangerous, I think, and may be explained by an unwarranted assumption of community.

We can also think of technology as a ‘moral agent’ where permissions and constraints on agency can be coded into a system, as I described in who can access a FB group. This can be useful, especially if it helps clarify expectations – hard rules and hard boundaries can be explained in help pages and observed in action when we fail to access the FB group link because we are not logged into Facebook. Even these hard rules can be overcome by human agency. I cut and pasted one part of the FB help page – this was within my personal ethics, whereas doing the same with a rhizo14 FB group post would not be. When I did accidentally violate that in a blog post, I tried to resolve the situation by consulting the person affected.

Some Tentative Conclusions

An important element of the digital moral agent’s backpack, complementing their ethical literacy, is the digital literacy of actively understanding the ethical and other implications of using a digital space or service for communication. This is especially important when one’s practice in those spaces is research.

As well as the overhead of ‘cluttering’ the communication, there are benefits in clarifying the use of information, utterances and multimedia in practice. This may be done informally within a close group of friends, rarely discussed, but the more open the use and sharing of information, the more important it is to clarify how we expect that information to be used, if we wish to minimise problems. This applies just as much to Facebook (whose Help extract above is unclear about whether ‘member’ means a Facebook member or a group member, and whether ‘anyone’ means anyone on the Internet or anyone on Facebook) as to those of us participating and researching in rhizo14. I hope that rhizo14 research subjects benefit from our statement of how we use the research data. I know I would have benefited from a clearer statement of expectations and behaviours in rhizo14. I am not suggesting a set of hard and fast rules, but rather a starting point for discussion on how we behave around rhizo14.

Because digital literacies are a moving target (digital literacy is ongoing) and communication in open spaces is tricky, we need flexible repair strategies for when things go wrong. If we state our expectations and promote discussion of expectations within a group as a starting point, then we may be able to minimise, but not eliminate, problems. That’s when it’s useful to promote friendly interpretation of the words and actions of others, and to have some strategies for conflict resolution.

Who said it was easy to practise learning and research in online spaces?