From the 22nd to the 30th of October it was Dutch Design Week (DDW) in Eindhoven, a week in which hundreds of new design projects are shown to the public. Many emergent projects can be found in Eindhoven during DDW, but one project was especially appealing to the press and the visitors of the design fair: an anti-surveillance coat that makes credit cards unreadable and mobile phones untraceable. The project is called Project KOVR; KOVR is pronounced as “cover” and comes from Esperanto. The designers, Marcha Schagen and Leon Baauw, created the coat to protect “you and your privacy against the threats of our information-driven environment” (projectkovr.com).

The anti-surveillance coat, Project KOVR

There are multiple ways to protect online privacy: use many different and complicated passwords, use the Tor Browser, use a VPN connection, block third-party cookies in your browser, use obfuscation tools (TrackMeNot), et cetera. There are few options, however, to protect your credit or debit card and mobile phone from getting hacked. There are metal cases that users can put their cards or phone in for protection. The anti-surveillance coat is easier, as users can simply throw their cards, phones, or anything else vulnerable to ‘cybercrime’ into its pockets. By putting these items in the coat’s pockets, users no longer have to worry about their personal online information being stolen while they are out. A mobile phone in the coat is not only untraceable, it is unreachable as well. The designers understood that users do not always want to be unreachable, which is why one pocket is see-through. This see-through pocket at the front of the coat allows signals to enter, so users can still receive phone calls and messages. This addition shows the designers have really thought about the needs and wishes of their potential customers.

Cybercrime is a growing threat, and people have been more aware of it ever since Edward Snowden blew the whistle on how much online information is being collected about American citizens (Williams). Online privacy has been on the radar for quite some time now. The worry of protecting one’s online privacy is extending to the real world, with banking cards and mobile phones out in the open that can be hacked. More and more debit cards offer an option to pay wirelessly, which is convenient but also vulnerable to outside attacks. Smartphones, too, are vulnerable objects containing a lot of personal information. The anti-surveillance coat claims to protect people’s online privacy even when they are out on the streets, offering a solution to problems that are becoming ever more significant.

Although online privacy is considered very important, people are simultaneously sharing a lot about themselves on social media (Soffer 2). However, the kind of information shared on social media is not of the same sensitivity as information that can be taken from banking cards or mobile phones. Users make considerable effort to avoid their personal data being stolen (Cho 410). So if users are willing to take action to protect their personal information, they might also be willing to buy the anti-surveillance coat, which protects fairly obvious information. Still, the online privacy problem reaches beyond the solution of Project KOVR. Companies such as Google and Facebook are considered the main threats to online privacy. These companies ‘own’ so much information about their users that they can sell it to advertisers. The online privacy problem lies in the fact that users can hardly prevent these large companies from collecting data about them; it is a battle of asymmetrical proportions between a giant and midgets (Brunton). This privacy issue is not easily solved. The coat solves one small but important privacy matter, and that is the way to handle this situation: step by step, users can gain more control over their own online personal data and how to protect it.

Unfortunately, the Kickstarter project has not been funded enough to actually be realized. It is astonishing that the project has only 34 backers, given how much attention it received from national and international press. Although it is an emergent and relevant design, potential buyers are scarce; perhaps the unisex design of the coat is not appealing to everyone. Hopefully Schagen and Baauw will find another way to make this project successful, because it is a very interesting and new way to look at the protection of digital privacy, and a nice contribution to all the online solutions that have been proposed for the privacy issue.

Call for Applications – MA in New Media and Digital Culture at the University of Amsterdam 2017-2018 (14 Nov 2016)

One-year and two-year international Master’s programs in New Media available:

MA Media Studies: New Media and Digital Culture (one year, full time)

Research MA Media Studies: New Media and Digital Culture (two years, full time)

MA New Media and Digital Culture
The MA Program in Media Studies: New Media and Digital Culture offers a comprehensive and critical approach to new media research, practices and theory. It is an internationally renowned program in critical media theory, dedicated to the study of the social transformations brought about by digital culture. The program provides in-depth training in the latest digital research methods, with the opportunity to participate in data sprints and to collaborate with international researchers. It is situated within a pioneering new media cultural scene in Amsterdam and an academic environment ranked among the top 6 universities worldwide (QS World University Rankings by Subject 2016: Communication & Media Studies).

Application and Deadlines

As of mid-November 2016, it will be possible to apply for a Master’s programme at the Graduate School of Humanities. All Master’s programmes start in September 2017. Application website.

“Sydney park destroyed as thousands of Pokémon Go players descend” wrote Irish news website TheJournal.ie on the first of August. This is just one example of a situation wherein Niantic’s popular augmented reality game Pokémon GO puts a considerable strain on public places within the urban environment.

Pokémon GO

For all of you who are late to the party, let me get you up to speed real fast. Pokémon GO is the very popular augmented reality (AR) app created by Niantic.

AR allows users of the app to view digital information and images that have been layered onto the “real physical” environment (Verhoeff & Cooley 209). During gameplay, users explore the urban environment searching for Pokémon to catch and looking for important in-game locations, so-called PokéStops and Poké Gyms (Pokémon GO). Recently, Niantic finalized a sponsorship deal with McDonald’s, and more are on the way. Such partnerships enable commercial companies to attract more customers by making the premises of their business an important in-app location.

Participatory democracy?

In a piece for The Guardian, Francesca Perry quotes Patrick Lynch, who suggests that a Pokémon GO space is easily accessible, open and completely democratic, “unlike traditional plazas, whose development is often dictated by historical or economic motives” (Perry). The game has been praised for its social and health benefits: Pokémon GO is supposedly free to use and stimulates people to go outside and explore their urban environment.

Unfortunately, the benefits that this game is said to provide are not equally accessible to everyone. For one, important in-game locations were chosen based on crowdsourced data from Ingress, Niantic’s previous AR game.

The problem here is that Pokémon GO has a far broader reach than Ingress, which was only used by a very select demographic. This led to various problems like uneven distribution of points of interest amongst different neighborhoods: “crowdsourcing is only as representative as the crowd doing the sourcing” (Huffaker).

Also, not everybody can safely roam around the city at any given time. Huffaker wrote an interesting piece about the experience of black men playing Pokémon GO: “multiple black players have worried that they will face racial profiling while wandering in circles playing the game” (Huffaker). For women in general, too, there are safety concerns with regard to wandering through the city at any given time.

Issues & responsibility

In their article ‘Layar-ed Places’, Liao and Humphreys argue that there has been little empirical research concerning the way that people are using mobile AR technologies and forming social practices around them. They stress that there is more interest in building AR technologies than studying their social implications (Liao and Humphreys 1419).

In the case of Pokémon GO the social implications were huge and not all favorable.

Niantic launched the game without providing the possibility to ‘opt out’. Public spaces have been flooded with gamers due to the in-game locations that are attached to real-life city landmarks. Spatial practices within hospitals have also been disturbed, and inappropriate behaviour at memorial centers has been reported. As a result, several public places have requested to be excluded from the game (Schiffer).

This leads us to the question: who is responsible for the unfavorable consequences of the popularity of Niantic’s AR game? Julia Ask, a media analyst at Forrester Inc., states that it is not Niantic’s place to tell people how to experience their space and how to behave in it (Schiffer).

This statement is questionable, considering that the commercialized geography of the game also guides the exploration of the urban environment and therefore influences the way people experience the public spaces within it. Arguably, the way people experience public spaces is already guided by Niantic’s economic interests, and the agency of Pokémon GO users is limited.

So, if Niantic is not responsible; who is?

Of course, there is the argument that Pokémon GO players are responsible for their own conduct during gameplay and that common sense should rule. Unfortunately, most of the time it doesn’t. News reports of the past months attest to the fact that when playing AR games, people are not exactly acting like the homo economicus that neoliberal society wants them to be (Read), but rather like moths drawn to a flame (Schiffer).

Damage to public spaces

The responsibility to fix damage doesn’t fall on Niantic’s plate, but instead on the public authorities.

In his article ‘Pokémon GO and public space’, Iveson points out that Niantic turns the physical public spaces of the urban environment into a playground without making any contribution to their provision or maintenance. He argues that Niantic, a private commercial entity, is making money off public spaces by utilizing them as a playground for its very lucrative AR game, but in turn does nothing to support them or even take responsibility for damage resulting from the game (Iveson). Morozov argues in his article for The Guardian that the inability of governments to tax the profits made by big digital corporations actually contributes to the hollowing out of the state’s capacity to fund public services (Morozov).

This puts Niantic’s ‘free to use’ claim in a whole different perspective. When Lynch suggests that a Pokémon GO space is completely democratic, he is clearly forgetting about the sponsorship deals being made between Niantic and companies like McDonald’s, which make the premises of their businesses important in-game locations, and about the so-called lures business owners can buy to attract Pokémon to their premises for a timespan of 30 minutes, thereby making their business a temporary point of interest. Concerns about the democracy of public spaces should be weighed against the fact that the assumed agency of augmented reality users is already limited by the commercialized geography of the game.

As far as the ‘let common sense rule’ argument goes: is it really common sense to let a commercial entity like Niantic make public authorities clean up its mess while it is making money off of it?

Read, Jason. “A Genealogy of Homo Economicus: Neoliberalism and the Production of Subjectivity.” A Foucault for the 21st Century: Governmentality, Biopolitics and Discipline in the New Millennium. By Sam Binkley and Jorge Capetillo Ponce. Newcastle upon Tyne: Cambridge Scholars Pub., 2009. 215. Print.

‘Mark Zuckerberg promises to do more about the circulation of fake news on Facebook’, Dutch news website nu.nl reported on 13 November 2016. This isn’t the first time Facebook has been criticized lately. The social media network has been under fire for its new, opaque algorithm that determines which information is prioritized in the user’s timeline.

This algorithm doesn’t only decide whose baby pictures show up in your timeline, but also what kind of news items you get to see. The algorithm prioritizes content that fits your interests and already existing world view, based on the user profile Facebook has created of you by analyzing your posts, reactions, likes, personal information, connections, behavior on the site itself, and whatever information it can gather from tracking cookies.
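The kind of interest-based prioritization described above can be sketched as a toy scoring function. This is a hypothetical illustration only: Facebook's actual ranking signals and weights are not public, and all field names and interest lists below are invented for the example.

```python
# Hypothetical sketch of interest-based feed ranking -- NOT Facebook's
# real algorithm, whose signals and weights are not public.
def rank_feed(posts, user_interests):
    """Order posts so items overlapping the user's inferred interests
    come first, mimicking profile-based prioritization."""
    def score(post):
        # Count how many of the post's topics match the profile a
        # platform might infer from likes, clicks and reactions.
        return len(set(post["topics"]) & set(user_interests))
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Election analysis", "topics": ["politics"]},
    {"title": "Baby photos",       "topics": ["family", "friends"]},
    {"title": "Campaign rally",    "topics": ["politics", "events"]},
]
feed = rank_feed(posts, user_interests=["politics"])
```

With a profile that only contains "politics", the two political items rise to the top and the baby photos sink to the bottom of the feed, which is the filter-bubble dynamic in miniature.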

In the podcast ‘Aflevering 105: Facebook en president Trump’ by NUtech, Kraan and Van Hoek warn that this can create a filter bubble, because Facebook will only show you news that reaffirms or fits into your established world view (Kraan & Van Hoek).

For example: a user leans towards voting for Trump in the 2016 U.S. presidential election and likes a couple of positive news items about him on Facebook. Suddenly, he only gets to see news items that are positive about Trump and negative about Hillary Clinton – one of the other candidates – or he doesn’t receive any news about Clinton at all anymore.

This has led to speculation about the potential effects this form of news prioritization has had on the outcome of the elections, especially in combination with the high quantity of fake news articles in circulation.

Fake news articles

Mark Zuckerberg states that Facebook has not influenced the outcome of the elections, and the idea that fake news on the social network has influenced the elections sounds ludicrous to him (Hermans).

When asked about the circulation of fake news, Zuckerberg states that 99% of the news on Facebook is real, that only one percent consists of fake news and hoaxes, and that these don’t originate from just one political side. According to him, people are perfectly capable of determining whether a news report is fake or real (Hermans). This is a rather bold claim, considering how often hoaxes are retweeted and reposted every time they emerge. In their podcast, Kraan and Van Hoek explain that while most media companies and news organizations do their own fact- and source-checking – since they are responsible for the authenticity of their content and have to obey media laws – Facebook doesn’t.

The social network does not view itself as a media company as such and therefore does not feel the responsibility to vigorously check the authenticity of the content because this isn’t their core business (Kraan & Van Hoek).

Yet, while Facebook does not view or identify itself as a media company and is not an official news organization, a 2015 Pew Research survey reveals that 62% of U.S. adults get their news from social media sites like Facebook (Mitchell & Holcomb).

Tools & optimization

As a result of this emerging trend, social media platforms have invested in the optimization of their streaming services and have created special publishing tools with user-friendly features.

Digital companies have become indispensable as distributors and intermediaries between content and consumers, because they enable content publishers to reach their target audience more efficiently. In turn, the digital platforms benefit from extended user engagement (Mitchell & Holcomb). The New York Times and Washington Post are already avid users of Facebook’s new publishing tool ‘Instant Articles’. An Instant Article is an HTML5 document that is optimized for mobile devices; because it loads very quickly, it has a lot of commercial potential.

Publishers can introduce their own advertisers or sell via Facebook, which will cost them 30% of the revenue. Instant Articles enables Facebook to gather even more data about its users, which can be used for user profiling (Frankwatching). Publishers now create content that is specially geared to digital production, and digital companies have become the most important intermediaries between content and consumer.

“Increasingly, the data suggest that the impact these technology companies are having on the business of journalism goes far beyond the financial side, to the very core elements of the news industry itself” (Mitchell & Holcomb).

Algorithms and prioritization

Facebook has been under fire for the algorithm that decides which content on our timeline is prioritized. However, is prioritization of news really a new practice, or has it been around much longer? Zuckerberg rightly states that newspapers have always prioritized certain news above other news by selecting and ordering it.

This doesn’t take away from the growing influence digital companies have on the way news is produced and consumed. “Over time, technology companies like Facebook and Apple have become an integral, if not dominant player in most of these arenas, supplanting the choices and aims of news outlets with their own choices and goals” (Mitchell & Holcomb). Facebook is not the only company to jump on the ‘algorithm bandwagon’; in May, messaging app Snapchat also announced that it will begin using an algorithm for news story selection.

Facebook, for one, sees these self-learning systems as a logical progression of existing and emerging technology. However, if these algorithms and self-learning systems get to decide – ever more autonomously – which information and news items are accessible to us, they effectively influence the way we view the world.

To think of this phenomenon as something “new” leads us to wonder about its contemporary causes – for instance, new aspects of our access and engagement with ideas and information. New media, and particularly major platforms such as YouTube, have often been seen as posing new challenges to traditional ways of tackling this problem (Rieder 2009).

As a subsidiary of Google, YouTube is, for instance, the second-largest search engine in the world and, like its neighbors in Silicon Valley, often embraces its role as a global infrastructure enabling richer democratic dialogue (Gillespie 2010). But with the success of Google’s PageRank, Facebook’s News Feed and custom-made interfaces to access and post information, the platform has, since 2010, specialised in engineering personalised access to consuming and posting information (Levene 2010).

The fact that information is processed by extremely sophisticated algorithms delivering tailor-made results to every user’s search queries — and with such astute mastery of circumstances — leads us to question the extent of YouTube’s responsibility in preventing users from being exposed to politically differing, if not purely random, information.

This project seeks to examine the role of YouTube’s search ranking algorithm in polarising political dialogue, and how it could be “tweaked” in such a way that information is accessed in a more diplomatic fashion. By “diplomatic fashion”, we mean that search results should be suggestive of more diverse and less partial ideas. To do so, we compare a YouTube query on “terrorism” in Urdu and in English respectively, and then envisage how the platform could alter its algorithm to distribute information in such a way that Urdu- and English-speaking parties would be able to mediate each other’s points of view on the issue of terrorism.

The issue of terrorism is especially polarising for Urdu and English speakers. In a sense, the difference between the two groups’ access to information on terrorism perpetuates the geo-political differences dividing Western and Muslim populations, and the issue of terrorism, as organised by YouTube’s search ranking algorithm, is as vague and unpredictable as the very definition of the term.

Methodology and documentation

Our methodology was not without shortcomings. Getting a clear picture of how YouTube’s search ranking algorithm functions is a notoriously difficult task; Gillespie and Sandvig comment on how secretive the platform can be about its internal mechanisms in the face of competitors (Gillespie 2009; Sandvig 2014). Since the release of a paper on its then-coined recommendation system in 2010, hard and fast information on YouTube’s principal algorithms has been conspicuously absent (see Davidson et al. 2010 for the original paper). The information we collected on the algorithm thus consists of inferences taken from the company’s own paper releases and assessments from third-party experts and consultants who have had to study the platform for various reasons (third-party sources included Snickars and Vondereau 2009, blog posts such as Gielen and Rosen’s and Giamas’, and an interview with YouTube’s chief engineer Cristos Goodrow).

Collecting data on Urdu and English search results with the Digital Methods Initiative’s YouTube Data Tools and the Issue Discovery Tool allowed us to fragment video metadata (titles, channels) into more concise terms constitutive of video metadata discourses. However effective these tools have proven to be, some terms are omitted and require a manual reconstruction of the discourses in question, in a visualisation we judged most appropriate.
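The fragmentation of metadata into discourse terms can be approximated with a simple term count over video titles. This is a rough, hypothetical analogue of what a tool like the Issue Discovery Tool produces, not its actual implementation; the titles and stopword list below are invented for illustration.

```python
from collections import Counter
import re

def extract_terms(titles, stopwords):
    """Break video titles into lowercase word tokens and count them,
    a rough analogue of fragmenting metadata into discourse terms."""
    counts = Counter()
    for title in titles:
        # Keep only alphabetic tokens (apostrophes allowed).
        for token in re.findall(r"[a-z']+", title.lower()):
            if token not in stopwords:
                counts[token] += 1
    return counts

titles = [
    "Terrorism and the War on Terror",
    "Understanding Terrorism: A Debate",
]
stop = {"a", "and", "on", "the"}
terms = extract_terms(titles, stop)
```

Running this over the 50 titles per language would surface the most frequent terms of each discourse; the manual step in our method then regroups the omitted or ambiguous ones.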

We also grappled with the notion of providing ‘diplomatic solutions’ without attempting to be paternalistic or having an active editorial stance.

How, then, can YouTube’s search ranking algorithm be deemed to be polarising? Answering this question led us to examine how this algorithm functions and deduce its role in shaping our English and Urdu search results to the query on “terrorism”.

The role of YouTube’s search ranking algorithm in polarising results

Some of YouTube’s engineers purport that the platform’s search ranking and recommendation system is an important realisation of new objectives they set out in 2010 (Davidson et al. 2010). To prevent information from being too scattered and irrelevant, search ranking needs to associate users with the results most relevant to them (ibid., 293).

Pinpointing such results by sorting through the semantic data of the myriad videos closest to the query term is the first of many steps the algorithm takes to choose which videos to rank (Davidson et al. 2010, 293). In a second step, videos are ranked in order of relevance and affinity with a user’s browsing history, which, being linked to Google’s database, includes data tracked across the entire Google platform. Videos are then sorted by global popularity (watch time) amongst users and are diversified across genres and channels (Davidson et al. 2010, 294).

Fig 1. A sketch of YouTube’s search ranking algorithm
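The pipeline sketched above can also be written out as toy code. This reflects our reading of Davidson et al. (2010) – candidate selection on semantic data, affinity with browsing history, popularity, then diversification – and is emphatically not YouTube's real implementation; all data structures and field names are invented.

```python
# Toy sketch of the four ranking steps we infer from Davidson et al.
# (2010) -- not YouTube's actual code. Field names are invented.
def search_rank(videos, query, history_topics):
    # Step 1: candidate selection on semantic data (here: tags).
    candidates = [v for v in videos if query in v["tags"]]
    # Steps 2-3: order by affinity with the user's browsing history,
    # then by global popularity (approximated here by watch time).
    candidates.sort(
        key=lambda v: (
            len(set(v["topics"]) & set(history_topics)),  # affinity
            v["watch_time"],                              # popularity
        ),
        reverse=True,
    )
    # Step 4: diversify by keeping at most one video per channel.
    seen, ranked = set(), []
    for v in candidates:
        if v["channel"] not in seen:
            seen.add(v["channel"])
            ranked.append(v)
    return ranked

videos = [
    {"id": "a", "tags": ["terrorism"], "topics": ["news"],
     "watch_time": 100, "channel": "c1"},
    {"id": "b", "tags": ["terrorism"], "topics": ["news"],
     "watch_time": 50, "channel": "c1"},
    {"id": "c", "tags": ["terrorism"], "topics": ["politics"],
     "watch_time": 10, "channel": "c2"},
    {"id": "d", "tags": ["music"], "topics": ["news"],
     "watch_time": 999, "channel": "c3"},
]
ranked = search_rank(videos, "terrorism", history_topics=["news"])
```

Even in this miniature, the user's history ("news") pushes the matching channel to the top while the off-query video never enters the results: the personalising and filtering effects discussed below are built into the pipeline's very structure.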

Spot the terrorist: polarisation between English and Urdu video discourses on terrorism

The first 50 search results for the search query ‘terrorism’ in English and Urdu were polarised in ways that reflected the intellectual history of the term “terrorism” peculiar to each of these languages and their speakers, and which are seemingly obstructed from crossing-over in inoffensive terms.

English search results were predominantly framed around the notion of a dangerous and violent global phenomenon caused and perpetrated by “radical Islam”, a view often articulated by American conservative (news) channels (Fox News, GOP War Room). Other left- and centre-leaning channels (CNN, CNBC, CBS) insist that terrorism only be attributed to “extremist” thinkers, and other outliers included results that framed terrorism as part of political conspiracies or as a subject of entertainment.

Another voice to take considerable space in our English results — itself particularly antagonistic to our Urdu results — is that articulated by Indian channels tending to link terrorism to Pakistan. These results often bore a slant against Pakistan, articulating a predominantly Indian perspective of events and ideologies.

Urdu results “respond” from a more defensive stance. “Terrorism” is less an issue of who, when and why than an attempt to understand a concept somewhat ambiguous and alien to Pakistani history. The term was contextualized in a debate rooted in the geo-political conflicts involving Pakistan, referring to domestic terrorist attacks and addressing these as an internal issue, sometimes on localized ethnic or denominational grounds, sometimes on theological grounds.

Still, some videos almost manifested a direct response to Indian allegations that Pakistan is involved in cross-border terrorism, while others included a rhetoric of terrorism as a war against the West.

Fig 2. Search results to the query “terrorism” in English (left) and Urdu (right)

How does YouTube’s search ranking system contribute to this polarisation?

YouTube’s search ranking algorithm may have a role in organising such results in a number of ways, but it is important to note that YouTube’s sorting and filtering mechanisms are not necessarily problematic in themselves. Their particular implementation and its consequences are what we focus on.

The fact that YouTube’s search ranking algorithm sorts videos based on semantic data (titles, tags, descriptions) makes language a first element of discrimination. A query in English will give us a range of ideas and information belonging to a worldview organised and articulated by English speakers (who, in this case, were American and Indian) (Rogers 2013; Kavoori 2011; Danet and Herring 2010; Levene 2010; Holt 2004).

Additionally, the very term “terrorism” is itself discriminatory: the fact that it came to be most widely used in the context of the US’ war on terror places its uses within paradigms proper to this historical instance, its actors, allies and enemies (Wolfsfeld 2004). A term like “terrorism” is a profoundly complex ensemble of concepts, and such nuances are often lost in search results due to the ranking system’s favouring of popularity metrics.

Fig 3. One of the first ranking mechanisms sorts videos on the basis of semantic data and user activity data

Prioritising users’ personal affinities to sort search results may also have contributed to polarising our results. Ever since Google celebrated the success of PageRank in 1998, the “echo-chamber effect” has been a long-documented problem attributed to the fast emergence of personalised access to data (Barberá 2015; Pariser 2012; Jamieson 2010). The danger of this effect lies not only in being shut off from differing viewpoints, but also in lacking the critical disposition to confront one’s own thoughts (Morozov 2013). And because personalising mechanisms encompass not only search results but also YouTube’s (and Google’s) personalised recommendations across the web, the chances that a user steps out of this homogeneity are slim, shielding users from engaging with purely random and different content (Helmond 2015).

The categorisation of search results by order of popularity extends this problem. Aggregating videos based on a greater number of people engaging with them over a period of time suggests that dominant and popular discourses will always dominate one’s results, pushing “quieter” voices out of sight and perpetuating a vicious cycle of dominance and well-established power plays (Yardi and boyd 2010).

Fig. 4 The second ensemble of mechanisms to rank search results gives priority to video popularity, a user’s personal affinities and limited diversification

And although the algorithm’s diversification technique is a notable effort not to enclose users in far too homogenised information spheres, it does not respond to our need to have information distributed equally across channels and languages.

Diversifying the YouTube algorithm and interface

To date, academic and digital actors alike have come up with various solutions to the problem of political polarisation online, across multiple disciplines (communication studies, political science, diplomacy, and media studies), but they have not aimed particularly at how algorithms could be remediated to address this problem (Prina et al. 2013; Ahmed and Forst 2005; Holt 2004; Anderson et al. 2004; Prosser 1985).

As (hypothetical) third, academic parties, we propose that the algorithm’s semantic-, user- and popularity-centered sorting be tweaked to realise new “diplomatic” objectives without significantly interfering with YouTube’s own commercial interests. Such modifications are an effort to make way for diversity (plurality in viewpoints) and better historical contextualisation, as described below.

To the problem of semantic analysis addressed above, we propose that video metadata be sorted across multiple languages through automatic title translation and video subtitling in the language the user is using, or is known to have used (Danet and Herring 2010). A multi-linguistic solution attempts to broaden the ideological and cultural context of a specific search query without favouring any specific actor or ideology (Alatis 1993). To nuance already-discriminating query terms such as “terrorism”, we suggest that semantic analysis should allow crawling other historical uses and associations of the queried term (Rogers 2013).

To prevent results from being an ongoing reflection of a user’s browsing history, we propose that the algorithm’s user-specificity sorting mechanism make way for a small margin of randomised results and more extensive snowballing crawling techniques. Ultimately, we propose that YouTube encourage a user’s pursuit of curiosity, allowing them to break free from the blinkered worldview they receive from their general YouTube browsing (Morozov 2013).

This proposition extends to our tweaking of the video-popularity sorting mechanism: here, we propose that results in other, automatically translated languages and results belonging to more distant topic clusters be given a place. The same goes for our alteration of the algorithm’s “diversification” mechanism: instead of only diversifying videos across genres and channels, an additional audio-visual sentiment analysis would allow the very content of the videos to be diversified (Kacimi and Gamper 2011).
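The randomised-margin idea can be made concrete with a minimal sketch. Everything here is our own invention for illustration – the function, the 10% margin, and the page size are hypothetical parameters, not anything YouTube exposes.

```python
import random

# Sketch of the "diplomatic" tweak we propose: reserve a small margin
# of each results page for randomised, less personalised videos. The
# 10% margin and page size of 10 are assumptions for illustration.
def diplomatic_rank(personalised, candidate_pool, page_size=10, margin=0.1):
    n_random = max(1, int(page_size * margin))
    # Fill most of the page from the personalised ranking as usual.
    top = personalised[: page_size - n_random]
    # Draw the remaining slots at random from videos the personalised
    # ranking would otherwise have hidden from the user.
    leftovers = [v for v in candidate_pool if v not in top]
    top += random.sample(leftovers, min(n_random, len(leftovers)))
    return top

# Toy usage: videos are just integers; 0-19 are the personalised
# ranking, 0-99 the full candidate pool.
page = diplomatic_rank(list(range(20)), list(range(100)))
```

The design choice is deliberate: the margin is small enough not to wreck relevance (and YouTube's commercial interests), but guarantees that every page contains at least one item the user's profile would never have surfaced.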

Fig 8. A “diplomatic” YouTube interface

From “tech” to “media”: bearing civic responsibility for better political dialogue

In a sense, such solutions would also help substantiate YouTube and Google’s vision of a more open and transparent world if, at least in this case, YouTube embraces more transparency regarding its own activities and socio-technical role in our informational environments (Rieder 2009). Doing so would be a constructive way to engage civil society and political actors in critiquing the digital mediums through which they access information, and would be the first of many steps towards fostering a less fragmented world.

The alarm rings and you get out of bed. Before doing anything else, you go into the kitchen to make a pot of coffee. Before getting dressed for the day, you sit down in your favourite chair, turn on the radio, and read about last night’s presidential debate. Interrupting your reading is a well-known sound, telling you that you have an e-mail concerning today’s meeting. You decide it is a sign for you to start your day, but first you want to confirm your date is still on for tonight, so you know what to wear.

This short description illustrates just how online our everyday activities can be. You do not even need to leave your home to work, study, shop or socialize. Everyday life is moving towards the world of cyberspace with increasing online innovation and digitalisation.

This does not only change the lives of adults. The youngest generations are today born into a world where an online presence has become normative. Increasingly, children learn, play and socialize through these tools (Livingstone & Das 1).

Children today thus need to explore and learn about the world of the internet concurrently with exploring the world they live in. Although these worlds are not disconnected, they do generate different risks and opportunities.

These risks and opportunities shift along with the experience and age of the user, which in turn is linked to changing interests.
To first-time users, however, the internet and the online world can seem like a big, scary jungle, full of endless opportunities but with dangers and risks lurking between the trees.

First-time users might therefore need a friend to help them explore, to guide them to the opportunities of the internet, and at the same time to make them aware of the risks that come with this jungle.

ONLINE RISKS & OPPORTUNITIES

The overall topic of this project is “children and media”, spanning New Media Studies and Educational Studies. Within this framework, the project draws on aspects of nudging, privacy and control.

Sonia Livingstone is a professor who has researched youth in today’s digital age. She remarks that the online world needs to be understood in terms of its “risks and opportunities” for children (Livingstone, Haddon et al. 16). For younger children, risks lie mainly in overusing technology. When the internet is used for learning, children are less likely to encounter risks than when playing games or chatting online. Examples of what can happen to older children (teenagers) who overuse the internet include “academic failure, difficulty in completing class assignments, lack of attentiveness in class, sleep deprivation, and depression (Douglas, et al., 2008)” (18). At the same time, there are also many advantages to using the internet: it “fosters a sense of belonging and wellbeing, and boosts their self-esteem — and it also offers opportunities for them to improve their offline social life” (26). Parents should therefore be aware of the risks, but not be too conservative about internet use. Moreover, new media play an important role in today’s society, and this role will very likely grow further. It is best to let children use new media and discover the online world, as they will need these skills in their future lives. To conclude: children should make moderate, regulated use of new media. Digital tools should not replace “real life” but supplement it, and the conservative view that children should not use new media (too much) is misguided.

Livingstone also incorporates the role of parents into her conceptualization of children’s use of new media. Apps or extensions for children should take the role of parents (as supervisors) into consideration. This project believes in letting children explore the web freely, with the degree of parental control that younger children still need (Lee 469).

This project focuses on children aged 7-10. Most products for children are divided into the age groups 0-6, 7-10 and 10+. Seven is also the age at which children begin going to school and learning skills such as reading and writing. Livingstone (12) argues that seven “is the average age for first [individual] internet use” in many countries. It is also an age at which children have a great desire to explore the world around them and at which social relations gain new momentum.

We therefore chose to create a browser extension as our project objective. The extension has two main functions: creating opportunities and diminishing risks. These functions are delivered in the form of a character, because children listen better to a personalized “thing” than to parents or teachers (Sillitoe and Wainwright). This project is based on both empirical and literature studies, as research on children and media is mostly done empirically.

SURFER.SAFE

Surfer.safe is a browser extension that helps children explore the web, increasing opportunities and reducing risks. The extension introduces Elli the elephant as the online friend who helps children explore. The name surfer.safe was chosen because most parents want online safety for their children (Livingstone et al. 34). Since parents will be the ones downloading the extension, it has to appeal foremost to them, and then to children.

Elli
Surfer.safe is represented by Elli the elephant. Elli is the children’s online friend, representing intelligence, geniality and reliability. Elephants are known for their outstanding intelligence and live in herds where everyone takes good care of one another and of the young (worldwildlife.org). Elli is a virtual friend of both parents and children.

Functions of the extension
Surfer.safe offers several functions to help children and parents make children’s surfing experience a great one. Children explore the web with the help of Elli, an elephant living in the jungle that is the internet. This jungle is full of wonderful things to discover, but some dangers also lie in wait. Elli’s different functions help children do their own exploring and guide them through this jungle, the online world.

Surfer.safe will appear on the browser’s homepage and whenever a new tab is opened, so Elli is the first thing children encounter when they start to browse. S/he welcomes the child to the online world and shows some options to help him or her explore. Young children mostly use the web for playing games, watching videos and learning new skills (Livingstone et al. 14). The options Elli gives will become more personal after the extension has been used for a while, because cookies will be collected to track children’s online behaviour. The collected data will not be shared with third parties and serves only to create the best online experience for children. It only gives children options; it does not prevent them from exploring other websites. “Personalization can be valuable and convenient for users” (Van Eijk 57), but tracking children’s behaviour can make parents worry about the information being misused (Bolier 2). That is why surfer.safe emphasizes that the collected data will be used for personalization purposes only. Moreover, parents can always decide to delete the collected cookies.
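As a minimal sketch of how such locally stored behaviour data could drive Elli’s suggestions, the hypothetical Python function below simply counts which activity categories a child visits most often. The category labels and function name are our own illustrative choices, not part of any existing extension.

```python
from collections import Counter

def suggest_options(visit_log, n=3):
    """Rank the activity categories a child visits most often; these
    become the options Elli shows first. visit_log is a list of category
    labels, e.g. derived from the cookies the extension stores locally."""
    counts = Counter(visit_log)
    return [category for category, _ in counts.most_common(n)]

# A child who mostly plays games would see "games" suggested first.
log = ["games", "videos", "games", "learning", "games", "videos"]
options = suggest_options(log)
```

Because the ranking is computed from data the extension keeps on the child’s own machine, no behavioural profile needs to leave the browser, which is consistent with the no-third-parties promise above.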

Elli is always in the corner of the browser, surfing along with the children. When children want suggestions while browsing, they can click on surfing Elli. If Elli detects inactivity, or notices that one website has been open for quite some time (over an hour), s/he will pop up asking: “Still surfing?”. This function gently nudges children to do other activities when they have, for example, been playing a game for too long. Surfer.safe believes children sometimes need to be reminded that they have been doing the same activity for a while. Instead of warning them or telling them how much time has passed, the extension chooses to do it in a gentler way.

Some parents will perhaps want to manage their children’s online time; this is also an option. The stopwatch function can be used to set the maximum amount of time children are allowed to spend online. When their online time is up, Elli will show up holding a stop sign.
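The hour-long “Still surfing?” nudge and the parental stopwatch could both be handled by one small decision function. The Python sketch below is a hypothetical simplification; the thresholds and return values are illustrative assumptions, not a specification of the extension.

```python
def elli_prompt(seconds_on_page, total_seconds_online,
                daily_limit=None, page_threshold=3600):
    """Decide which prompt Elli shows, if any.
    - 'stop' when the parental daily limit is reached (stopwatch function)
    - 'still_surfing' after an hour on the same website
    - None otherwise
    The real extension would reset seconds_on_page whenever the child
    navigates to a new website."""
    if daily_limit is not None and total_seconds_online >= daily_limit:
        return "stop"
    if seconds_on_page >= page_threshold:
        return "still_surfing"
    return None
```

Checking the parental limit before the gentler nudge reflects the design described above: the stop sign is a hard rule set by parents, while “Still surfing?” is only a reminder.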

Even though surfer.safe’s goal is to create opportunities for children, there is also a blocking function that protects them from content that is inappropriate for young children. Children aged 7-10 are less likely to encounter sexual content than older children (Livingstone et al. 28), but a blocking function is still useful. Elli will prevent children from entering unseemly websites by holding up a “turn back” sign. Inappropriate advertisements will be blocked as well.

Lastly, the extension can be put in sleep mode for when parents want to use the browser without Elli intervening. This mode can only be turned on with a password, so children cannot use it. When the browser has not been used for some time, the extension will automatically turn on again, so children do not accidentally surf without Elli. And s/he will always be awake when the browser starts up.

Conclusion
Surfer.safe is the browser extension that emerged after a long journey. With the help of Elli, children will be able to safely explore and discover the internet jungle. It creates opportunities for children while being a friend to both parents and children.

Livingstone, Sonia, Jasmina Byrne, and Monica Bulger. Researching Children’s Rights Globally in the Digital Age. Report of a seminar held on 12-14 February 2015. London: London School of Economics and Political Science, 2015.

The financial crisis of 2008 marked the advent of the sharing economy, whose motto “access trumps ownership” (The Economist) emerged as a good way to deal with a crisis that occurred mainly as a result of excessive consumption (Henten and Windekilde 5). But to describe the sharing economy it is necessary to look at different definitions, since it is an emergent marketplace that has been developing and changing continuously.

Sundararajan describes the sharing economy as a combination of both commercial and social activity (39). In other words, people do not only provide and buy goods and services; they also establish closer connections, perhaps become friends, and share further interactions. This social and cultural role of the sharing economy is one of the most important features distinguishing it from earlier marketplaces. Moreover, Botsman in her TED talk points to another important aspect of the sharing economy by describing it as a “social and economic activity driven by network technologies”. Thus, new media technologies facilitate the most essential activities of the sharing economy.

According to Sundararajan, the sharing economy has five distinctive characteristics. First of all, it fosters economic activity by making both goods and services exchangeable. This leads to “high-impact capital”, meaning that every asset can be capitalized and used to “their full capacity”. As the variety of assets – including both goods and services – expands, the provision of money and workforce becomes “decentralized”. Furthermore, the expansion of the types of assets provided by the sharing economy also weakens the boundary between the “personal” and the “professional”. Many peer-to-peer activities once regarded as “personal”, such as sharing your house or giving someone a ride, have become professional activities that help people make a profit. Lastly, unlike in traditional markets, the type of labour practised in the sharing economy does not necessarily require long-term responsibilities and “continuum”, which effaces the distinction between work and leisure (Sundararajan 40).

Considering these features of the sharing economy, it is clear that its successful platforms have brought several advantages. Botsman and Rogers express the value of the sharing economy as “the enormous benefits of access to products and services over ownership, and at the same time save money, space, and time; make new friends; and become active citizens again” (qtd. in Henten and Windekilde). Beyond this, the sharing economy has also introduced new transaction options that reduce costs. For instance, with the help of network technologies it is considerably easier to search for the right place to stay or to find the most convenient means of transportation.
However, the sharing economy has also engendered risks, mainly caused by issues of trust. Ert et al. suggest that because services provided by sharing-economy platforms are “produced and consumed simultaneously” (63), people cannot know what to expect, so there is a financial risk. And since every step of this exchange takes place online, people risk more than money, which increases the importance of trust immensely. Staying in a complete stranger’s house, or sharing a ride with someone we do not know, raises the question of safety.

As Sundararajan indicates, the definition of trust changes according to context, so he proposes James Coleman’s definition as the most suitable: “a willingness to commit to a collaborative effort before you know how the other person will behave” (79). What causes people not to trust someone is thus mainly information asymmetry, the situation in which the different parties do not have the same amount of knowledge (Finley 17). The more people know about what kind of service they are getting, and from whom, the more trust can be built.

Airbnb as a case study

To understand how platforms overcome this information asymmetry, we decided to look into one of the most prominent pioneers of the sharing economy, namely Airbnb. Airbnb identifies itself as “a trusted community marketplace for people to list, discover, and book unique accommodations around the world” (Airbnb.com). In simpler terms, Airbnb encourages people to invite strangers into their homes and to stay at strangers’ places, with relatively unknown consequences. Given that people – guests and hosts – who engage in such activities get to know and trust each other exclusively through Airbnb’s mediation, we argue that studying this platform helps us understand how trust is established in such newly evolved marketplaces.

In order to overcome information asymmetry, Airbnb provides several services: a reliable reputation system that includes reviews, trustworthy pictures and visuals, smooth transactions, and the incorporation of social media accounts, all of which are effective ways of reducing information asymmetry.

In the aforementioned self-description, Airbnb itself recognizes that trust plays a major role in the platform’s operations. Joe Gebbia, one of Airbnb’s co-founders, regards reputation as the key factor for building trust between hosts and guests (TED). He believes that a high reputation based on reviews helps overcome the natural social biases that people tend to have toward strangers. To take it a step further, reputation can nowadays be considered to serve “not only as a psychological reward or currency, but also as an actual currency – called reputation capital” (Botsman and Rogers 337). According to Botsman, reputation capital is gained through participation in collaborative consumption, and the more reputation capital we earn, the more we can participate (338).

It is true that highly ranked Airbnb hosts get more reservations, and that guests with positive reviews are less likely to have their booking requests cancelled (Newman and Antin). Yet one cannot help but notice that novice users who lack an established reputation can also engage in transactions and reap the benefits of this sharing community. It is therefore reasonable to assume that other factors allowing individuals to overcome trust barriers and engage in home-sharing interactions are also at play.

Visualising the development of trust in Airbnb

In an attempt to conveniently convey the possible factors that contribute to the establishment of trust, a visual timeline seemed like a logical solution. After all, building trust is a continuous process that takes time.

The resulting trust timeline covers the period from the creation of Airbnb’s first prototype (Designer’s IDSA Connecting Guide) in October 2007, when co-founders Joe Gebbia and Brian Chesky rented out airbeds in their apartment to other visitors of the event, until the present. It comprises:

screenshots of Airbnb’s homepage at www.airbnb.com and other important subpages to capture interface changes;

newly introduced or changed features and policies of the platform;

some of the most notable problems Airbnb encountered in the course of its development, and the company’s responses.

With regard to the last point, we focused on incidents that went public and could potentially damage the company’s profile globally.

Additionally, some other interesting points, such as the night-booking milestones reported by Airbnb, the emergence of Airbnbhell.com and the timing of the platform’s rebranding, were included to provide context.

We offer several paths through which to explore the timeline. These pre-created narratives focus on specific aspects of building trust: through visuals, through added functionality and rules, and through the company’s reactions to different events. The timeline also allows users to see the whole picture of the events “negatively” affecting Airbnb and Airbnb’s “positive” reactions; this view is accessed by pressing the “Show overview” button. From there, users can also zoom into particular elements themselves and move between slides in any order. We believe such free interaction with the timeline allows users to form their own view of the meaning of the various developments that took place at Airbnb over time, to independently establish possible correlations, and to form their own opinion about the company and its dealings with trust.

The timeline is published publicly on the Masters of Media blog and on the Prezi platform, and is thus accessible to anyone interested in how trust relations are established in the sharing economy, and at Airbnb in particular. It can also be seen as a tool that could help other companies understand what constitutes participation in the sharing economy and what sorts of problems should be anticipated.

Findings

The timeline shows how a fast-growing platform such as Airbnb copes with all the elements related to trust. Looking at the timeline, we see several moments where problems with Airbnb surface, for example the death of a woman due to carbon-monoxide poisoning or the stories of vandalized apartments. Some time afterwards, new additions were made to Airbnb that might have solved some of these problems. It thus seems plausible that the level of trust towards Airbnb is in constant flux.

When we looked at the design changes made to Airbnb’s homepage over the years, it became obvious that photography has grown into a central element. While in the beginning the images were produced by users, the current website uses only high-quality material, and these visuals take up far more space on all of the pages than before. This coincides with the theory of “visual-based trust” (Ert et al.), which focuses on the effect of imagery on users’ perception of the platform. That research suggests that using a substantial amount of imagery meets hosts’ needs for personal interaction (69).

Our research also revealed several developments introduced to the platform in response to unfortunate Airbnb experiences that gained a lot of media attention. For instance, after a blog post about a trashed apartment went viral, Airbnb added a Trust & Safety section to its website and introduced its Host Guarantee policy. And while Airbnb’s reactions to such events may seem appropriate, they nevertheless raise the question of why the company had not been proactive in preventing certain disasters in the first place.

During the research we also noticed that in the last two years Airbnb has incorporated the pages devoted to safety into its “trust pages”. This seems to tell the world that, for Airbnb, trust is the main element in creating a safe environment for its users. At the same time, it must be noted that Airbnb has done a lot to improve safety.

Limitations

We recognize that the timeline is far from an exhaustive record of all the events that could influence trust. We had to rely on our own judgment in deciding which information to put on the timeline and which to leave out, given our research topic. Moreover, as some things are not disclosed by Airbnb, we could only use information that had made its way out into the open.

It is also difficult to determine whether there is actually any cause-and-effect relation between two seemingly connected events on the timeline. Establishing this as fact would only be possible if the people involved confirmed the story and backed it up with evidence. In the meantime, we can only speculate.

Even though we consider Prezi to be one of the most useful tools for creating presentation-based visualisations in a short period of time, we also recognize its limitations which were reflected in the aesthetics and functional capabilities of the timeline. A diverse colour palette, more text editing options and broader slide customization features could have all helped to produce richer and clearer narratives.

Newman, Riley, and Antin. “Building for Trust: Insights from Our Efforts to Distill the Fuel for the Sharing Economy.” Airbnb Engineering, 29 Mar. 2016. Web. 24 Sept. 2016. <http://nerds.airbnb.com/building-for-trust>

Sundararajan, Arun. The Sharing Economy: The End of Employment and The Rise of Crowd-Based Capitalism. Cambridge: The MIT Press, 2016.

Nowadays technological solutions are everywhere, and ever more of them are offered to make our lives easier and more convenient. Many posts on this blog discuss particular technological developments and solutions. In this research we want to reflect on the economy of creating solutions for issues for the sake of creating solutions, as these technological developments often produce solutions that are not beneficial in the long run. Many people misplace or lose things: one study shows that in the United Kingdom an adult loses on average about 200,000 items in a lifetime. This does not mean these items are lost for good, but that they have to be looked for (Ahmad, Rouyu and Hussain 530). We wanted to create a “quick fix” for an issue that some people experience daily: not knowing where you parked your bike!

Theoretical Framework | Solutionism

Solutionism is “the idea that given the right code, algorithms and robots, technology can solve all of mankind’s problems, effectively making life ‘frictionless’ and trouble-free” (Tucker). Of course this can be a good thing, as long as there is a problem to be solved. But according to Morozov there is a much darker side to solutionism: “It’s an intellectual pathology that recognizes problems as problems based on just one criterion, whether they are ‘solvable’ with a nice and clean technological solution at our disposal. Thus, forgetting and inconsistency become ‘problems’ simply because we have the tools to get rid of them — and not because we’ve weighed all the philosophical pros and cons” (Perils of Perfection). Many of the problems that solutionists consider in need of solving are not even problematic in the first place. Design theorist Michael Dobbins states that solutionism “presumes rather than investigates the problems that it is trying to solve, reaching for the answer before the questions have been fully asked” (Morozov 6). For the Silicon Valley solutionists, whatever the issue, there is no problem too big and no fix too insignificant to launch a new app or start-up. Morozov argues that this drive to eliminate imperfection and make everything efficient leads to an algorithm-driven world where Silicon Valley, rather than elected governments, determines the shape of our future (Tucker).

Coming up with solutions for problems is in theory not a bad thing; in fact, thinking of solutions is and always has been one of our primary instincts. This makes solutionism a complicated debate in which defining what is right or wrong can be difficult. Take gamification, for example: turning social practices into games based on handing out badges, points and rewards. The fact that something can be gamified does not mean it should be: “Do you want people to turn off the lights because they will get a coupon or because they have some ethical, environmental concerns? You don’t hear people in Silicon Valley talk about the ethical and moral dimension. They are not concerned with anything like citizenship at all.” (Tucker). As Morozov states: “None of this is to deny that technology—from sensors to games—can be used to improve the human condition; as we have seen, it can provoke debate and lead us to question dominant social and political norms. But this can happen only if our geeks, designers, and social engineers take the time to study what makes us human in the first place. Trying to improve the human condition by first assuming that humans are like robots is not going to get us very far.” (350).

Even apart from the question of whether solutionists are knowingly trying to make us complacent, the fact that their current solutions make this possible should be enough to make us ask whether we really need that new app that reminds us to read the news and rewards us for it. Instead of instinctively thinking of solutions whenever technology companies claim that our broken world must be fixed, our initial impulse should be to ask: are we sure our world is broken in exactly the way that Silicon Valley claims it is? What if the engineers are wrong, and frustration, inconsistency, forgetting, perhaps even partisanship, are the very features that allow us to morph into the complex social actors that we are (Morozov, Perils of Perfection)?

Despite all this, solutionism is not all bad; making use of the new possibilities provided by technological innovation to make life a little easier can be both extremely cost-effective and timesaving. This makes solutionism a complicated debate in which neither side is obviously right.
In creating the BikeLoc app we intended to show how difficult the public and academic debate on solutionism can be. By going through the process of solving a problem with technology ourselves, we experience it first-hand in order to gain a better understanding of solutionism and its actors.

An introduction to BikeLoc

After two brainstorming sessions we came up with the idea of an app that could help its user find his or her bike. There are already several solutions for finding your bike or other belongings, such as Tile and Sherlock, costing around 25 euros or more each. Tile is a small tag that you can attach to your keys or your bike, or put into your wallet. You can use the Tile app to play a sound on the tag’s tiny speaker or to locate it with a proximity tracker. The connection between the tag and the app is based on Bluetooth. The downsides of this technology are that Bluetooth only works up to about 30 metres, and that the tag has a battery life of about a year and cannot be recharged (thetileapp.com). In the specific case of Tile, then, the user has to be close to his or her belongings to track them, and after about a year the product becomes useless. Even less convenient is that users can never know in advance when their Tile is running out of power. Sherlock is another solution for tracking down your bike. It works with a tracker that you build into a part of your bike, for example the handlebars. Via the app you can almost always locate the device using GPS. This device has a rechargeable battery and can be securely locked into your bike. Its downsides, however, are that you have to recharge the battery every week and that it needs its own SIM card to function (sherlock.bike), which creates extra costs and upkeep time for your tracking device.

We took these downsides into account while thinking of our own solution for a product that could help when looking for your bike, and tried to keep our idea as cheap to produce and simple to use as possible. That is how we came up with BikeLoc. Our idea was that you need only a bike and the app installed on your smartphone. Relying on pre-existing technologies used by current activity-tracker apps, BikeLoc would follow the user’s movements and work out whether they are cycling or walking, based on GPS, pedometer and movement-sensor data. Using this data, BikeLoc would intervene once users arrive at their parking spot, notifying them of the app’s ability to remember the location by pinning it on the map. The app would briefly show the map with the pinpoint and then save it in its cache memory. Later, when users come back to the area looking for their bike, the app could alert them that they are close to their original parking spot, helping them locate their bicycle more easily. Conversely, the app uses GPS to warn users if they stray too far from their bike, notifying them with a specific message when they move more than 30 metres away from the original pin location.
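The 30-metre check could be implemented with a standard haversine (great-circle) distance between the pinned parking spot and the user’s current GPS position. The Python sketch below is a minimal illustration, not BikeLoc’s actual code; the coordinates are made up, and a production app would read them from the phone’s location services.

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two GPS points, in metres."""
    earth_radius_m = 6371000
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_m * asin(sqrt(a))

def too_far_from_bike(pin, current, limit_m=30):
    """True when the user has strayed more than limit_m from the pinned spot."""
    return distance_m(*pin, *current) > limit_m

pin = (52.3676, 4.9041)       # hypothetical parking spot in Amsterdam
nearby = (52.36761, 4.90411)  # a few metres away: no warning yet
```

Because both the pin and the current position already come from the phone’s GPS, this check adds no hardware cost, which is exactly the advantage BikeLoc has over Tile and Sherlock.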

Engaging with the academic debate on solutionism

Instead of observing certain technologies from the outside, we think it could lead to more interesting insights if we actually experienced the entire process of creating a new app to solve a particular problem. The theoretical framework indicates that one of the main reasons the academic debate is so complex is the difficulty of defining what is right or wrong about a certain solution or use of technology, since these are subjective judgments that change over time and depend on the perspective of those involved. Morozov does not provide any tools for investigating the idea of solutionism or for exploring whether a problem should be solved by technology at all. After researching theorists with a point of view similar to Morozov’s, we discovered an interesting lecture by Neil Postman [video]. He addresses the problems we face in our new technological society by posing a series of questions. Moreover, he states that the questions are more important than the (subjective) answers. Since his questions primarily concern the new technologies and media of the 1990s, we made some adjustments to ensure their relevance to our project. We selected the following questions to engage with the debate on solutionism:

What is the problem this new app solves?
BikeLoc solves the problem of people losing time looking for their bike, or losing the bike itself.

Whose problem is it?
This problem could affect tourists, international students and people who have trouble remembering things. Indirectly, however, it is also a problem for the citizens of Amsterdam, who are bothered by the bikes that are left behind, and for the City of Amsterdam, which eventually has to remove those bikes.

What new problems do we create by solving this problem?When you ensure the users of the app that they will always be able to find their bikes, they may park it in any random location instead of using the designated parking facilities. Therefore we potentially create the new problem of people parking their bikes in inconvenient places.
Reflecting on these three questions, we realised we remain in the mindset of our solution to the problem. However, we might not be focusing enough on the actual problem itself. The app offers a quick fix, but how can we make sure our solution is the best approach? Therefore, we must look at the origin of the problem.

Why is this a problem?

Why do people get lost in Amsterdam? The city centre consists of hundreds of small streets and bridges, and it is getting more crowded every year. More specifically, why do people forget where they parked their bike? Because of all the small streets in the city centre, there are not many large designated parking facilities apart from those at the stations and larger squares. So people park their bikes in small bike racks, or in any spot where they can lock them. The problem we are trying to solve (people looking for their bikes) may not be the fault of the bike owner, but rather, for example, of the city government or city planners. Should they create more parking spaces for bikes, or place more direction signs and area maps? However, this does not mean technology cannot be part of the solution to the problem.

Did we create the right app?

Instead of creating an app to find your bike, it could be of more value to create an app that helps you find a place to park it. It could list designated parking spaces and their availability. This way, it is possible to cooperate with the city government and, for example, bike rental companies in finding a lasting solution.

Conclusion

When we developed the idea of BikeLoc we took on the role of solutionists ourselves, and focused completely on creating a clever tool to solve our problem. We discovered the seemingly endless possibilities and freedom technology has to offer. After finalising BikeLoc, we reflected on our work to investigate the other side of the debate. Raising questions about the problem instead of the solution made us think critically about our initial approach. In the end, we came to the conclusion that all the different components of a problem need to be taken into consideration in order to find a lasting solution. We think it is important to reflect on the approach, and to acknowledge that a quick technological fix might not be preferable in cases where it does not address the cause or origin of the problem. Of course, technology has created many innovative and often useful ways of coping with existing problems in our society. However, we believe the best results can be achieved when tech entrepreneurs collaborate with other relevant actors.

Autonomous cars are starting to take over our highways faster than you would think. The notion of the self-driving car has been a prominent topic in media and public debate lately. Many car companies are eager to become market leaders in this new technology. These driverless cars are supposed to make our lives easier and safer by taking over the activity of driving. According to Google, even aging and visually impaired loved ones will be able to drive around independently and safely in the near future. The main question that does not seem to be fully addressed by these companies is what will happen when things go wrong.

First Fatal Car Crash While on Autopilot Mode Was In a Tesla Car

Back in May of 2016, the first known death involving a self-driving car occurred. Without a doubt, this fatal crash caused many consumers to second-guess the trust they might put in the autonomous vehicle industry. The driver who was killed was in a Tesla Model S in autopilot mode, which controls the car during highway driving (Morris; Tynan and Yadron).

The car’s sensor system failed to distinguish a large white truck against the bright blue sky. When Tesla released a statement regarding the fatal crash, they tried to shift the blame for the accident, saying that their autonomous software is designed to “nudge” drivers to keep their hands on the wheel (Mearian; Tynan and Yadron). They went on to say that even though Autopilot is getting better all the time, it is not a perfect system and still requires the driver to remain alert (Morris; Tynan and Yadron).

Implications of the Fatal Crash

This crash was not only a tragic loss of life; it also brought to light an array of challenges and ethical questions, and, in a way, made theoretical debates very real (Mearian). It is hard to say who is to “blame” for this crash. Tesla’s Autopilot is not perfect, but the bigger picture here is that two humans created a situation that the automated system failed to save them from (Mearian; Morris).

The question with autonomous cars is that of ‘agency’. The word ‘agency’ comes from the Latin word ‘agentia’ which in turn is derived from the verb ‘agere’, meaning ‘to act’ or ‘to do’. In the case of the driverless vehicle, the agency or action is distributed between the car and the human.

Sociologist Bruno Latour states that actions are always dependent on their material context and thus agency is always distributed. Designers can delegate specific responsibilities to an artifact by inscribing them into the design of the object. To illustrate: the speed bump is inscribed with the responsibility of making sure a driver does not drive too fast (Verbeek 361-362). Like the speed bump, the driverless car is also delegated certain responsibilities by design. The most important one to companies and consumers seems to be driving in a safe way. However, with autonomous cars this inscription is loaded with a moral dimension.

What exactly the consequences will be is hard to predict, as fully autonomous cars are not yet embedded in society. Therefore, we will look at some of the biggest developers of these cars at the moment, and the current focus of their version of an autonomous car. By doing this we are able to identify the main influences of the current autonomous technologies used in cars, and from there we can provide a basis for thinking about the potential ethical consequences.

Volvo

In their vision of the Volvo of 2020, Volvo mostly emphasises less stress, lower emissions and more safety. This is made clear by their slogan: ‘Volvo Cars IntelliSafe: Travel calmer, safer, cleaner.’ They intend to use radar and cameras to avoid critical traffic situations such as crashes. Their car will detect a dangerous situation and brake automatically as a consequence. Lin raises the question of whether braking automatically is always the best solution (74). For example, we need to know how hard the car would need to brake, and how it knows what kind of object is in front of it. For the coming years, it seems Volvo will keep focusing on these kinds of technical or software-based tools, as a transition towards possibly fully autonomous cars.

Volvo’s IntelliSafe raises the question of with whom the responsibility lies when a crash actually does happen. One could say that the owner of a driverless car cannot be held responsible because the actions do not rely on a user (Duffy and Hopkins 111). With the Volvo IntelliSafe cars this is even more complicated, because they can be switched into an autonomous drive mode. So the question remains whether these Volvo IntelliSafe cars can truly be called driverless.

Toyota

Toyota seems to take a similar approach to Volvo’s. Rather than trying to match the efforts of parties like Google to build fully autonomous cars right away, they also envision drivers and software sharing control over the years to come. Gill Pratt, lead researcher of the Toyota Research Institute, calls this form of autonomous driving ‘parallel autonomy’. Toyota takes safety as one of its main priorities, especially with regard to software taking over in cases of emergency; Pratt refers to this as the ‘guardian angel’ approach. For assessing these safety issues, Toyota is looking into prediction software as well as pattern recognition software, which can, for example, detect kids who are about to cross the street while looking at their phones. Pratt mentions how techniques like deep learning could be applied to this combination of parallel autonomy and pattern recognition. This brings up two interesting points regarding the ethical consequences of Toyota’s view on autonomous cars.

First, just as with the Volvo approach, the question arises of who is responsible when a crash happens. In this case, however, it concerns software that should recognise danger before it is actually present, instead of responding to it by braking. This might lead to people expecting the car to recognise danger, so that they pay less attention to the road, whether as drivers or as pedestrians. According to Forrest and Konca, “What we consider common sense because everyone has to learn the rules of the road would no longer be known by all people on the road” (48). Secondly, although we might expect technology to behave in a particular way, it is easy for manufacturers to point to the human as still being responsible when a parallel autonomous system is used. But when this autonomy is improved by adding, for example, deep learning, as Pratt mentions, or other corrective mechanisms, we could actually be able to blame the car when it is able to understand its fault and learn from it, “which will prevent an artifactually intelligent autonomous robot/softbot from repeating similar mistakes” (Crnkovic and Çürüklü 7).

Mercedes-Benz

Mercedes-Benz is the first manufacturer to actually make a statement about the ethical side of its design. They state that their new 2017 E-Class will always prioritise the safety of the driver, since that is the only safety they can guarantee. Put into practice, this means that when the software has to decide between saving a child on the road by steering into a tree or hitting the child in question, the software is not going to act (Dogson). This might sound crude, but according to Bonnefon, Shariff, and Rahwan, the majority of people want their car to save them in case of a crash (1573). Could this be seen as a form of discrimination against non-drivers? (Lin 70-73)

Mercedes’ car is “Luxury in Motion”. When driving a semi-autonomous Mercedes, you can switch to Drive Pilot in situations where driving is not much fun, so that the driver can do better things with his or her time.

However, as Eric Adams of the magazine Wired found out, Drive Pilot is not perfect. It is supposed to take over the act of driving, but in reality he had to take control a couple of times while driving. When Mercedes-Benz was questioned about responsibility, they replied that the driver should always be ready to take control. If we were to believe the marketing campaign, autonomous cars would allow us to do other activities while driving. They are designed to take over the act of driving, but when it comes to liability, the human is still responsible. As Sophia Duffy and Jamie Patrick Hopkins ask: is it morally permissible to impose liability on the user based on a duty to pay attention to the road and intervene if necessary? (629)

Tesla

Many companies on the market are trying their hand at creating the best version of the autonomous car, but without a doubt, Tesla has exposed society to driverless tech the most thus far (Muoio). Tesla is seen as the forerunner in this autonomous vehicle craze because the company has for a while now offered its drivers autonomous driving features, such as automatic lane change and collision avoidance. The newest Tesla 8.0 software update is their biggest yet, and it includes significant updates to their semi-autonomous driving system, ‘Autopilot’ (Muoio). Tesla CEO Elon Musk has even stated that this update will make Tesla cars “three times safer than any other cars on the road” (Muoio; Thompson).

Tesla 8.0 has improved the accuracy of the Autopilot system by making better use of the radar sensor on their vehicles. Until now, the radar was always a supplementary sensor, but it now plays a greater role in determining whether an object poses a danger (Thompson). The data collected by the improved radar system will also carry more weight in deciding how the car should react in Autopilot (Thompson).
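To make the general idea of weighting one sensor more heavily concrete, here is a deliberately simplified sketch. Tesla’s actual fusion algorithm is not public; the function name, the fixed weight and the two-sensor setup are purely our own illustrative assumptions.

```python
def fused_distance(radar_m, camera_m, radar_weight=0.7):
    """Combine two distance estimates (in metres) into one, trusting
    the radar reading more than the camera reading.

    Purely illustrative: real driver-assistance systems fuse many
    sensors with far more sophisticated filtering, and the actual
    weighting used by any manufacturer is an assumption here.
    """
    camera_weight = 1.0 - radar_weight
    return radar_weight * radar_m + camera_weight * camera_m

# The radar reports an obstacle 48 m away, the camera 60 m away;
# the fused estimate leans towards the radar reading.
print(fused_distance(48.0, 60.0))
```

The point is not the arithmetic but the design decision it encodes: giving the radar “more weight” means that when the two sensors disagree, as in the fatal crash described above, the system’s behaviour is largely decided by whichever sensor the engineers chose to trust.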

Tesla Autonomous Car Illustration

The focus on safety and its implications

All of the above-mentioned companies seem to have safety, in the use of autonomous technologies, as their first priority. Also, at this moment, they all seem to be focusing on the current transition phase between human driving and fully autonomous driving, by gradually applying more and more autonomous technology to a human-driven car. The main question that emerges is with whom the responsibility lies. But there are other, smaller questions that come up, which are part of this bigger notion of responsibility within the shift of agency.

One small question that arises is which ethics are being programmed into the autonomous car to deal with crash scenarios. Mercedes, for example, will always prioritise the safety of the driver, even at the cost of pedestrians. This is the most preferred option in this dilemma, because people will not buy a car that would sacrifice them. Volvo and Toyota take an approach of preventing accidents, but they do not make a statement about their ethical choices. The same goes for Tesla, which also refrains from giving its opinion on ethical concerns.

All the car producers try to develop a system or software for dealing with dangerous situations. Auto manufacturers can, for example, implement targeting algorithms or crash-optimization algorithms, as we have seen from the four main developers of autonomous cars and their priorities discussed above. Lin concludes from this that car manufacturers discriminate in traffic situations: “Somewhat related to the military sense of selecting targets, crash-optimization algorithms may involve the deliberate and systematic discrimination of, say, large vehicles or Volvos to collide into” (73). A question of concern is: can we let programmers make such decisions? These kinds of questions involve judgements about ethics and are difficult to approach.

Another question concerns the liability of autonomous cars, especially because, according to these car companies, the driver is still responsible for paying attention, even though drivers are encouraged through marketing to relax and let the car do the work. But when we rely on this system to keep us safe, who is responsible when a crash does happen? Can the driver be held responsible for paying attention while the responsibility of driving is delegated elsewhere? According to Duffy and Hopkins: “Existing laws governing vehicles and computers do not provide a means to assess liability for autonomous cars” (123).

These are all questions that car developers should take into account when designing fully autonomous cars, just as much and perhaps even more so when developing autonomous technology that is added onto human-driven cars. When the moment comes that we as humans have no part in driving at all, it might be easier to simply blame the car or the manufacturer when things go wrong. But it seems the current status of shared agency actually creates more ethical questions to take into account than a status of fully autonomous cars would. Critical thinking on the ethical implications of autonomous software should push car manufacturers and the government in the right direction, one that is truly best for society (Duffy and Hopkins 123). This way, the promises on safety made by car manufacturers can become reality, and, more importantly, a proper division of responsibility can be made in a society of distributed agency.

Muoio, Danielle. “After Riding in Uber’s Self-Driving Car, I Think Tesla Will Have Some Serious Competition.” Business Insider, 2016. 9 October 2016. <http://www.businessinsider.com/teslas-biggest-competitor-with-driverless-tech-is-uber-2016-9?international=true&r=US&IR=T>.

Takahashi, Dean. “Toyota Research Chief Waxes on How to Save 1.2M Lives a Year with Driverless Cars.” VentureBeat, 2016. 7 April 2016. <http://venturebeat.com/2016/04/07/toyota-research-chief-waxes-on-how-to-save-1-2m-lives-a-year-with-driverless-cars/>.

Has it ever occurred to you how much of our precious time can be wasted struggling to create a family or adopt a child? And we don’t blame you; who does not want that picturesque family idyll? At Tindoption we believe that what humanity needs right now is a simple and easy solution for adoption. Wouldn’t it make the world a better place if both children and parents could be brought together by a simple swipe right on each other’s pictures? We are certain it would, and that is why we present to you the Tindoption app. Your future family is just a swipe away!

The Swipe Society

Did you feel there was something a little scary and wrong, but at the same time familiar, about this “new app” presentation? Since the launch of Tinder in 2012, we have witnessed the increasing popularity of apps based on the so-called swipe interface. Starting with Tinder, one of the most popular dating apps so far, and continuing with swipe-logic-based apps for finding a job, looking for accommodation, choosing baby names, finding friends for your dog, shopping and deciding what to eat, we see evidence of a tendency towards the Swipe Society. What we mean by this concept is a society that normalises and chooses to accommodate various topics and notions into the swiping binary of yes (swipe right) and no (swipe left). In a sense, Tinder’s binary mechanism can become ‘a template for a whole way of life’ (Eler and Peyser).

The increasing popularity of the swipe logic (and therefore the increasing number of apps based on the concept) can be explained by two key characteristics: it is efficient and it is fun (David and Cambre). These two characteristics that make the swipe logic so attractive to users are the same two reasons critics are concerned, and they problematise the notion of the Swipe Society itself. Can ‘efficient and fun’ be a solution to various complex social or cultural relations such as, for example, politics? Is it really okay to appropriate composite and sensitive topics into the swiping dichotomy of 0 and 1? As one of the key figures in questioning solutionism and gamification, Evgeny Morozov, would argue: probably not. The author’s argument can be illustrated by a single quote: ‘Constructing a world preoccupied only with the most efficient outcomes – rather than with the process through which these outcomes are achieved – is not likely to make them aware of the depth of human passion, dignity and respect’ (234). And indeed the swipe logic in this case represents a ‘tendency to believe a little magic dust can fix any problem’ (Wu).

Even though many could argue that Tinder or other current swipe-based apps may not yet reflect Morozov’s concerns, we believe that the popularity of this simple interface makes a normative claim about what decision making is supposed to look like (Stanfill 1061). This grammatization of users’ actions (swiping left and right) (Agre), without any room for other choices, could result in the swipe model becoming the de facto decision-making model: the Swipe Society that critics and academic scholars are already concerned with.

To make it clear, it was never our intention to criticise Tinder and its users, or to moralise about whether swipe-based apps are necessarily right or wrong. We are rather concerned with the somewhat invisible, ‘technological unconscious’ (Thrift) tendency towards the Swipe Society, which is growing due to the normalisation of swipe logic. Instead of blindly agreeing with Morozov’s work, we aim to communicate his critical ideas to the population outside academia, the population that normalises ‘swiping’ without realising it. Therefore, this project aims to start a dialogue by simply asking people: should we start problematising the swipe logic, and when should we start to worry about becoming the Swipe Society?

The tool for conversation: Tindoption app

To address these questions to the public, we needed a tool which would allow us to highlight the aspects of the swipe logic that academics are concerned with, and provoke people to engage in the discussion. We came up with the idea of a tool that would mildly play on shock value to trigger critical thinking. The product of our reasoning was, as you have probably guessed, Tindoption. Tindoption is an imagined app based on the Tinder interface, but it accommodates a new, sensitive and ethically complex topic: child adoption.

Image 1. Swiping pages and a message page

The app would work in the standard Tinder manner: parents go through children’s profiles, swiping right if they like the child and left if they are not interested, and children do the same with parents’ profiles. If the parent(s) and the child both swipe right, they are matched, and a conversation can be started about meeting up and the further adoption process. To visualise our imagined app, we created mock-up interfaces: swiping pages for parents and children (Image 1), a messaging page (Image 1), profile pages (Image 2) and a match page (Image 3).

Image 2. Child profile page

Image 3. A match page
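The mutual-match rule described above, the same binary logic that underlies Tinder-style apps, fits in a few lines of code, which is itself telling about how little the swipe interface demands of a decision. The sketch below is our own illustration (the class and method names are assumptions, not an actual implementation of either app):

```python
# Minimal sketch of the mutual-match ("both swipe right") logic of
# Tinder-style apps, as described for Tindoption. Names are our own
# illustrative assumptions, not an actual implementation.

class SwipeApp:
    def __init__(self):
        self._likes = set()  # (who, whom) pairs recorded on right swipes

    def swipe_right(self, who, whom):
        """Record a right swipe and report whether it creates a match."""
        self._likes.add((who, whom))
        # A match exists only if the other party already swiped right too.
        return (whom, who) in self._likes

    def swipe_left(self, who, whom):
        """A left swipe is simply not recorded; no match can result."""
        return False

app = SwipeApp()
print(app.swipe_right("parent_1", "child_7"))  # False: no mutual like yet
print(app.swipe_right("child_7", "parent_1"))  # True: both swiped right
```

Everything a user can express is reduced to membership in a set of yes/no pairs, which is exactly the dichotomy of 0 and 1 that the Swipe Society concept problematises.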

As mentioned before, we chose to treat Tindoption and its mock-up interface as a tool for discussion rather than an actual app to be developed. Therefore, we did not go into detail on how the app would function in real life, as long as we had a basic understanding relevant and interesting for encouraging a conversation (for example, we imagine that children would be supervised by an adult while using the app).

Tindoption, therefore, is our way of communicating the possible problematic side of the Swipe Society to the wider public. As a tool, Tindoption does not state what the issues are (if there are any) but encourages viewers to rethink certain aspects of these new media notions for themselves, from their own moral or rational perspectives.

Interviews

To collect data on participants’ reactions and ideas, we conducted eight semi-structured interviews (and three test interviews). This part was essential to the project to see whether Tindoption actually produces reactions and motivates discussion on the topic.

During the interviews, the mock-up interfaces and a description of the app were presented to the interviewees. The questions concerned the use of Tinder, opinions and problems regarding Tindoption, ideas about the increasing number of swipe-logic-based apps, and other topics. The interviews were treated as ‘a conversation with a purpose’ (Burgess) and were therefore open-ended; interviewees could elaborate on the topics they found most interesting. The research participants were presented with information and consent sheets before the interviews.

Naturally, we encountered some obstacles at this stage of the research. During the test interviews we noticed a tendency of respondents to get defensive, believing that Tindoption was mocking Tinder or its users (in some cases, the interviewees themselves). As a result of this feedback, a disclaimer was introduced. We are also aware of the rather small sample used for our research, but believe it provided meaningful insights, especially given the restricted time and resources.

Our Findings

To visualise the analysed data, we created two mind maps representing the main opinions and reactions about Tindoption expressed during the interviews. The first one (Mindmap 1) visualises a significant difference between what people had to say about Tinder and about Tindoption. We believe it reflects the two major themes on which our further research was based: firstly, how normalised Tinder and its use are among our participants; secondly, how the presentation of Tindoption had shock value and resulted in interviewees problematising the app. The second mind map (Mindmap 2) presents various other themes that were coded while analysing the interview transcripts. Some of the most important insights from the qualitative data are explained below.

Mindmap 1. Reactions and Opinions (click for a larger image)

To start with, the majority of the interviewees had a strong feeling that Tinder is more appropriate than Tindoption and tried to explain this via rational reasoning (for example, ‘adults can consent, while children cannot’) or moral reasoning (‘it seems like you are shopping for a child’). Still, everyone concluded that the increasing number of such apps is something to be concerned about, and admitted that they had not considered the problematic side of it before they were faced with the ridiculousness of Tindoption.

Mindmap 2. Other thoughts on Tindoption (click for a larger image)

What is more, at this stage of the research we noticed that interviewees reacted with an emerging concern about the process of child adoption (realising that, in reality, child adoption is not that different from the basic principle suggested by Tindoption). We realised that Tindoption, even though originally intended as a tool to problematise the increasing popularity of the binary swipe, as a side effect also questioned and challenged the process of child adoption. We therefore acknowledge that Tindoption could be used to satirically intervene in and engage with the issues related to this topic, but we still choose to concentrate on our initial object: the swipe logic.

One interviewee suggested that the selection of children or of a date based on looks existed before these apps; simplified, swipe-based apps like Tinder and Tindoption just allow these notions to grow and become a popular norm in our society. We believe this is a very interesting and valid point. Swipe-binary-based apps such as Tinder or Tindoption, rather than distorting and ‘flattening’ the content, actually just facilitate and accelerate already existing controversial aspects of human behaviour.

Despite a number of interesting claims and remarks, it would be inaccurate to draw firm generalisations or big claims from such a small sample of interviewees. The most important result of the interviews conducted is the realisation that Tindoption does work as a tool to stir up discussion on the normalisation of the swiping interface. The presentation of the app made people problematise and discuss various contemporary notions, and brought the everyday use of new media out of the unconscious. Furthermore, the participants enjoyed being involved in the conversations on Tindoption and were happy about the ideas they came up with themselves (which can be seen from the mostly positive feedback).

Where do we go from here?

Due to the positive feedback during the interviews and a lot of encouragement to share our idea with more people, we decided to create a website that presents Tindoption as a real app. Apart from the presentation of the app, the website includes happy customers’ reviews and a comments section for people who visit the webpage. This allows us to get a larger reach and, hopefully, a greater number of engagements with the discourse questioning the notion of the swipe logic. The comments section is of high importance, as it allows us to document and track more of the reactions people have to the app. It is only through the link to the ‘about the project’ section that visitors are informed that the app is imaginary and a product of a student project.

Our main concern with the encouragement to share Tindoption publicly was Tinder’s copyright. To create Tindoption we quite straightforwardly used the Tinder interface, and the name of our app (Tindoption) is an obvious reference to Tinder itself. We chose to pursue the website despite these concerns by adding a link to the explanation that the app is a satirical, non-profit intervention by a group of students. Moreover, we believe it is unlikely that our project will reach a level of popularity that could attract the attention of Tinder itself. However, should that happen, it would result in the project making exactly the statement it set out to make.