Category Archives: Ends Of Identity

The US Children’s Online Privacy Protection Act (COPPA) explicitly forbids the collection of data from children under 13 without parental consent.

Rather than go through the complicated verification processes that involve getting parental consent, Facebook, like most online platforms, has previously stated that children under 13 simply cannot have Facebook accounts.

Of course, that has been one of the biggest white lies of the internet, along with clicking the little button which says you’ve read the Terms of Use; many, many kids have had Facebook accounts — or Instagram accounts (another platform wholly-owned by Facebook) — simply by lying about their birth date, which neither Facebook nor Instagram seek to verify if users indicate they’re 13 or older.

Many children have utilised some or all of Facebook's features using their parents' or older siblings' accounts as well. Facebook's internal messaging functions and the standalone Messenger app have, at times, been shared by the named adult account holder and one or more of their children.

Sometimes this will involve parent accounts connecting to each other simply so kids can video chat, somewhat messing up Facebook's precious map of connections.

Enter Messenger Kids, Facebook’s new Messenger app explicitly for the under-13s. Messenger Kids is promoted as having all the fun bits, but in a more careful and controlled space directed by parental consent and safety concerns.

To use Messenger Kids, a parent or caregiver uses their own Facebook account to authorise Messenger Kids for their child. That adult then gets a new control panel in Facebook where they can approve (or not) any and all connections that child has.

Kids can video chat, message, access a pre-filtered set of animated GIFs and images, and interact in other playful ways.

PHOTO: The app has controls built into its functionality that allow parents to approve contacts. (Supplied: Facebook)

In the press release introducing Messenger Kids, Facebook emphasises that this product was designed after careful research, with a view to giving parents more control and giving kids a safe space to interact and grow as budding digital creators. Which is likely all true, but only tells part of the story.

As with all of Facebook’s changes and releases, it’s vitally important to ask: what’s in it for Facebook?

While Messenger Kids won’t show ads (to start with), it builds a level of familiarity and trust in Facebook itself. If Messenger Kids allows Facebook to become a space of humour and friendship for years before a “real” Facebook account is allowed, the odds of a child signing up once they’re eligible become much greater.

Facebook playing the long game

In an era when teens are showing less and less interest in Facebook’s main platform, Messenger Kids is part of a clear and deliberate strategy to recapture their interest. It won’t happen overnight, but Facebook’s playing the long game here.

If Messenger Kids replaces other video messaging services, then it’s also true that any person kids are talking to will need to have an active Facebook account, whether that’s mum and dad, older cousins or even great grandparents. That’s a clever way to keep a whole range of people actively using Facebook (and actively seeing the ads which make Facebook money).

Facebook wants data about you. It wants data about your networks, connections and interactions. It wants data about your kids. And it wants data about their networks, connections and interactions, too.

When they set up Messenger Kids, parents have to provide their child’s real name. While this is consistent with Facebook’s real names policy, the flexibility to use pseudonyms or other identifiers for kids would demonstrate real commitment to carving out Messenger Kids as something and somewhere different. That’s not the path Facebook has taken.

Facebook might not use this data to sell ads to your kids today, but adding kids into the mix will help Facebook refine its maps of what you do (and stop kids using their parents' accounts for video chat, messing up that data). It will also mean Facebook understands much better who has kids, how old they are, who they’re connected to, and so on.

One more rich source of data (kids) adds more depth to the data that makes Facebook tick. And make Facebook profit. Lots of profit.

Facebook’s main app, Messenger, Instagram, and WhatsApp (all owned by Facebook) are all free to use because the data generated by users is enough to make Facebook money. Messenger Kids isn’t philanthropy; it’s the same business model, just played out over a longer timescale.

Facebook isn’t alone in exploring variations of their apps for children.

Google, Amazon and Apple want your kids

As far back as 2013 Snapchat released SnapKidz, which basically had all the creative elements of Snapchat, but not the sharing ones. However, its kids-specific app was quietly shelved the following year, probably for lack of any sort of business model.

Since early 2017, Google has also shifted to allowing kids to establish an account managed by their parents. It’s not hard to imagine why, when many children now chat with Google daily using the Google Home speakers (which, really, should be called “listeners” first and foremost).

Google Home, Amazon’s Echo and Apple’s soon-to-be-released HomePod all but remove the textual and tactile barriers which once prevented kids interacting directly with these online giants.

A child’s Google Account also allows parents to give them access to YouTube Kids. That said, the content that’s permissible on YouTube Kids has been the subject of a lot of attention recently.

In short, if dark parodies of Peppa Pig in which Peppa has her teeth painfully removed to the sounds of screaming are going to upset your kids, it’s not safe to leave them alone to navigate YouTube Kids.

Nor will the space created by Messenger Kids stop cyberbullying; it might not be anonymous, but parents will only know there’s a problem if they consistently talk to their children about their online interactions.

Facebook often proves unable to regulate content effectively, in large part because it relies on algorithms and a relatively small team of people to very rapidly decide what does and doesn’t violate Facebook’s already fuzzy guidelines about acceptability. It’s unclear how Messenger Kids content will be policed, but the standard Facebook approach doesn’t seem sufficient.

At the moment, Messenger Kids is only available in the US; before it inevitably arrives in Australia and elsewhere, parents and caregivers need to decide whether they’re comfortable exchanging some of their children’s data for the functionality that the new app provides.

And, to be fair, Messenger Kids may well be very useful; a comparatively safe space where kids can talk to each other, explore tools of digital creativity, and increase their online literacies, certainly has its place.

Most important, though, is this simple reminder: Messenger Kids isn’t (just) screen time, it’s social time. And as with most new social situations, from playgrounds to pools, parent and caregiver supervision helps young people understand, navigate and make the most of those situations. The inverse is true, too: without discussion about new spaces and situations, the chances of kids getting into awkward, difficult, or even dangerous situations go up dramatically.

Messenger Kids isn’t just making Facebook feel normal, familiar and safe for kids. It’s part of Facebook’s long game in terms of staying relevant, while all of Facebook’s existing issues remain.

Tama Leaver is an Associate Professor in the Department of Internet Studies at Curtin University in Perth, Western Australia.

Last week’s Digitising Early Childhood conference here in Perth was a fantastic event which brought together so many engaging and provocative scholars in a supportive and policy/action-orientated environment (which I suppose I should call ‘engagement and impact’-orientated in Australia right now). For a pretty well documented overview of the conference itself, you can see the quite substantial tweets collected via the #digikids17 hashtag on Twitter, which I’d really encourage you to look over. My head is still buzzing, so instead of trying to synthesise everyone else’s amazing work, I’m just going to quickly point to the material arising from my three different talks in case anyone wishes to delve further.

If you’d like to hear the talk that goes with the slides, there’s an audio recording you can download here. (I think these were filmed, so if a link becomes available at some point, I’ll update and post it here.) There was a great response to my talk, which was humbling and gratifying at the same time. There was also quite a lot of press interest, so here are the best pieces that are available online (and may prove a more accessible overview of some of the issues I explored):

This paper began as a discussion after our piece about DaddyOFive in The Conversation, where the complicated questions about children in media first became prominent. Crystal wasn’t able to be there in person, but did a fantastic Snapchat-recorded 5-minute intro, while I brought home the rest of the argument live. Crystal has a great background page on her website, linking this to her previous work in the area. There was also press interest in this talk, and here’s the best piece to listen to (in which you can hear Crystal and me in dialogue, even though our parts were recorded at different times, on different continents!):

Over the next month, I’m lucky enough to be involved in three separate events focused on infancy online, digital media and early childhood. The details …

[1] Thinking the Digital: Children, Young People and Digital Practice – Friday, 8th September, Sydney – is co-hosted by the Office of the eSafety Commissioner; Institute for Culture and Society, Western Sydney University; and Department of Media and Communications, University of Sydney. The event opens with a keynote by visiting LSE Professor Sonia Livingstone, and is followed by three sessions discussing youth, childhood and the digital age in various forms. While Sonia Livingstone is reason enough to be there, the three sessions are populated by some of the best scholars in Australia, and it should be a really fantastic discussion. I’ll be part of the second session on Rights-based Approaches to Digital Research, Policy and Practice. Places are limited and there is a small fee, so if you’re interested in attending, registration is a must! To follow along on Twitter, the official hashtag is #ThinkingTheDigital.

The US YouTube channel DaddyOFive, which features a husband and wife from Maryland “pranking” their children, has pulled all its videos and issued a public apology amid allegations of child abuse.

The “pranks” would routinely involve the parents fooling their kids into thinking they were in trouble, screaming and swearing at them, only to reveal “it was just a prank” as their children sob on camera.

Despite its removal, the content continues to circulate in summary videos from Philip DeFranco and other popular YouTubers who are critiquing the DaddyOFive channel. And you can still find videos of parents pranking their children on other channels around YouTube. But the videos also raise wider issues about children in online media, particularly where the videos make money. With over 760,000 subscribers, it is estimated that DaddyOFive earned between US$200,000-350,000 each year from YouTube advertising revenue.

Philip DeFranco / WOW… We Need To Talk About This…

The rise of influencers

Kid reactions on YouTube are a popular genre, with parents uploading viral videos of their children doing anything from tasting lemons for the first time to engaging in baby speak. Such videos pre-date the internet, with America’s Funniest Home Videos (1989-) and other popular television shows capitalising on “kid moments”.

In the era of mobile devices and networked communication, the ease with which children can be documented and shared online is unprecedented. Every day parents are “sharenting”, archiving and broadcasting images and videos of their children in order to share the experience with friends.

One of us (Tama) has argued, though, that photos and videos shared with the best of intentions can inadvertently lead to “intimate surveillance”, where online platforms and corporations use this data to build detailed profiles of children.

YouTube and other social media have seen the rise of influencer commerce, where seemingly ordinary users start featuring products and opinions they’re paid to share. By cultivating personal brands through creating a sense of intimacy with their consumers, these followings can be strong enough for advertisers to invest in their content, usually through advertorials and product placements. While the DaddyOFive channel was clearly for-profit, the distinction between genuine and paid content is often far from clear.

From the womb to celebrity

As with DaddyOFive, these influencers can include entire families, including children whose rights to participate, or choose not to participate, may not always be considered. In some cases, children themselves can be the star, becoming microcelebrities, often produced and promoted by their parents.

South Korean toddler Yebin, for instance, first went viral as a three-year-old in 2014 in a video where her mom was teaching her to avoid strangers. Since then, Yebin and her younger brother have been signed to influencer agencies to manage their content, based on the reach of their channel which has accumulated over 21 million views.

Baby Yebin / Mom Teaches Cute Korean baby Yebin a Life Lesson.

As viral videos become marketable and kid reaction videos become more lucrative, this may well drive more and more elaborate situations and set-ups. Yet, despite their prominence on social media, such children in internet-famous families are not clearly covered by the traditional workplace standards (such as child labour laws and the Coogan Law in the US), which historically protected child stars in mainstream media industries from exploitation.

This is concerning especially since not only are adult influencers featuring their children in advertorials and commercial content, but some are even grooming a new generation of “micro-microcelebrities” whose celebrity and careers begin in the womb.

Greater transparency

The question of children, commerce and labour on social media is far from limited to YouTube. Australian PR director Roxy Jacenko has, for example, defended herself against accusations of exploitation after launching and managing a commercial Instagram account for her young daughter Pixie, who at three years old was dubbed the “Princess of Instagram”. And while Jacenko’s choices for Pixie may differ from many other parents, at least as someone in PR she is in a position to make informed and articulated choices about her daughter’s presence on social media.

Already some influencers are assuring audiences that child participation is voluntary, enjoyable, and optional by broadcasting behind-the-scenes footage.

Television, too, is making the most of children on social media. The Ellen DeGeneres Show, for example, regularly mines YouTube for viral videos starring children in order to invite them as guests on the show. Often they are invited to replicate their viral act for a live audience, and the show disseminates these program clips on its corporate YouTube channel, sometimes contracting viral YouTube children with high attention value to star in their own recurring segments on the show.

Sophia and Rosie Grace featured on Ellen after their viral Nicki Minaj video.

Ultimately, though, children appearing on television are subject to laws and regulations that attempt to protect their well-being. On for-profit channels on YouTube and other social media platforms there is little transparency about the role children are playing, the conditions of their labour, and how (and if) they are being compensated financially.

Children may be a one-off in parents’ videos, or the star of the show, but across this spectrum, social media like YouTube need rules to ensure that children’s participation is transparent and their well-being paramount.

On Friday, 7 April at 4pm I’ll be giving a public talk entitled “Saving the Dead? Digital Legacy Planning and Posthumous Profiles” as part of the John Curtin Institute of Public Policy (JCIPP) Curtin Corner series. It’ll touch on both ethical and policy issues relating to the traces left behind on digital and social media when someone dies. Here’s the abstract for the talk:

When a person dies, there exist a range of conventions and norms regarding their mourning and the ways in which their material assets are managed. These differ by culture, but the inescapability of death means every cultural group has some formalised rules about death. However, the comparable newness of social media platforms means norms regarding posthumous profiles have yet to emerge. Moreover, the usually commercial and corporate, rather than governmental, control of social media platforms leads to considerable uncertainty as to which, if any, existing laws apply to social media services. Are the photos, videos and other communication history recorded via social media assets? Can they be addressed in wills and be legally accessed by executors? Should users have the right to wholesale delete their informatic trails (or leave instructions to have their media deleted after death)? Questions of ownership, longevity, accessibility, religion and ethics are all provoked when addressing the management of a deceased user’s social media profiles. This talk will outline some of the ways that Facebook and Google currently address the death of a user, the limits of these approaches, and the coming challenges for future internet historians in addressing, accessing and understanding posthumous profiles.

Facebook’s accidental ‘death’ of users reminds us to plan for digital death

The accidental “death” of Facebook founder Mark Zuckerberg and millions of other Facebook users is a timely reminder of what happens to our online content once we do pass away.

Earlier this month, Zuckerberg’s Facebook profile displayed a banner which read: “We hope the people who love Mark will find comfort in the things others share to remember and celebrate his life.” Similar banners populated profiles across the social network.

After a few hours of users finding family members, friends and themselves(!) unexpectedly declared dead, Facebook realised its widespread error. It resurrected those affected, and shelved the offending posthumous pronouncements.

For many of the 1.8 billion users of the popular social media platform, it was a powerful reminder that Facebook is an increasingly vast digital graveyard.

It’s also a reminder for all social media users to consider how they want their profiles, presences and photos managed after they pass away.

The legal uncertainty of digital assets

Your material goods are usually dealt with by an executor after you pass away.

But what about your digital assets – media profiles, photos, videos, messages and other media? Most national laws do not specifically address digital material.

As most social networks and online platforms are headquartered in the US, they tend to have “terms of use” which fiercely protect the rights of individual users, even after they have died.

Requests to access the accounts of deceased loved ones, even by their executors, are routinely denied on privacy grounds.

While most social networks, including Facebook, explicitly state you cannot let another person know or log in with your password, for a time leaving a list of your passwords for your executor seemed the only easy way to allow someone to clean up and curate your digital presence after death.

Five years ago, as the question of death on social media started to gain interest, this legal uncertainty led to an explosion of startups and services that offered solutions from storing passwords for loved ones, to leaving messages and material to be sent posthumously.

Dealing with death

Public tussles with grieving parents and loved ones over access to deceased accounts have led most big social media platforms to develop their own processes for dealing with digital death.

Facebook now allows users to designate a “legacy contact” who, after your death, can change certain elements of a memorialised account. This includes managing new friend requests, changing profile pictures and pinning a notification post about your death.

The only other option is to leave specific instructions for your legacy contact to delete your profile in its entirety.

Instagram, owned by Facebook, allows family members to request deletion or (by default) locks the account into a memorialised state. This respects existing privacy settings and prevents anyone logging into that account or changing it in the future.

Twitter will allow verified family members to request the deletion of a deceased person’s account. It will never allow anyone to access it posthumously.

LinkedIn is very similar to Twitter and also allows family members to request the deletion of an account.

Google’s approach to death is decidedly more complicated, with most posthumous options being managed by the not very well known Google Inactive Account Manager.

This tool allows a Google user to assign the data from specific Google tools (such as Gmail, YouTube and Google Photos) to either be deleted or sent to a specific contact person after a specified period of “inactivity”.

The minimum period of inactivity that a user can assign is three months, with a warning one month before the specified actions take place.

But as anyone who has ever managed an estate would know, three months is an absurdly long time to wait to access important information, including essential documents that might be stored in Gmail or Google Drive.

If, like most people, the user did not have the Inactive Account Manager turned on, Google requires a court order issued in the United States before it will consider any other requests for data or deletion of a deceased person’s account.

Planning for your digital death

The advice (above) is for just a few of the more popular social media platforms. There are many more online places where people will have accounts and profiles that may also need to be dealt with after a person’s death.

Currently, the laws in Australia and globally have not kept pace with the rapid digitisation of assets, media and identities.

This paper examines two ‘ends’ of identity online – birth and death – through the analytical lens of specific hashtags on the Instagram platform. These ends are examined in tandem in an attempt to surface commonalities in the way that individuals use visual social media when sharing information about other people. A range of emerging norms in digital discourses about birth and death are uncovered, and it is significant that in both cases the individuals being talked about cannot reply for themselves. Issues of agency in representation therefore frame the analysis. After sorting through a number of entry points, images and videos with the #ultrasound and #funeral hashtags were tracked for three months in 2014. Ultrasound images and videos on Instagram revealed a range of communication and representation strategies, most highlighting social experiences and emotional peaks. There are, however, also notable privacy issues, as a significant proportion of public accounts share personally identifiable metadata about the mother and unborn child, although these issues are not apparent in relation to funeral images. Unlike other social media platforms, grief on Instagram is found to be more about personal expressions of loss rather than affording spaces of collective commemoration. A range of related practices and themes, such as commerce and humour, were also documented as a part of the spectrum of activity on the Instagram platform. Norms specific to each collection emerged from this analysis, which are then compared to document research about other social media platforms, especially Facebook.

Visual content is a critical component of everyday social media, on platforms explicitly framed around the visual (Instagram and Vine), on those offering a mix of text and images in myriad forms (Facebook, Twitter, and Tumblr), and in apps and profiles where visual presentation and provision of information are important considerations. However, despite being so prominent in forms such as selfies, looping media, infographics, memes, online videos, and more, sociocultural research into the visual as a central component of online communication has lagged behind the analysis of popular, predominantly text-driven social media. This paper underlines the increasing importance of visual elements to digital, social, and mobile media within everyday life, addressing the significant research gap in methods for tracking, analysing, and understanding visual social media as both image-based and intertextual content. In this paper, we build on our previous methodological considerations of Instagram in isolation to examine further questions, challenges, and benefits of studying visual social media more broadly, including methodological and ethical considerations. Our discussion is intended as a rallying cry and provocation for further research into visual (and textual and mixed) social media content, practices, and cultures, mindful of the specificities of each form, but also, and importantly, of the ongoing dialogues and interrelations between these communication forms.

At yesterday’s outstanding Controlling Data: Somebody Think of the Children symposium I presented the first version of my new paper “Intimate Surveillance: Normalizing Parental Monitoring and Mediation of Infants Online.” Here’s the abstract:

Parents are increasingly sharing information about infants online in various forms and capacities. In order to more meaningfully understand the way parents decide what to share about young people, and the way those decisions are being shaped, this paper focuses on two overlapping areas: parental monitoring of babies and infants through the example of wearable technologies; and parental mediation through the example of the public sharing practices of celebrity and influencer parents. The paper begins by contextualizing these parental practices within the literature on surveillance, with particular attention to online surveillance and the increasing importance of affect. It then gives a brief overview of work on pregnancy mediation, monitoring on social media, and via pregnancy apps, which is the obvious precursor to examining parental sharing and monitoring practices regarding babies and infants. The examples of parental monitoring and parental mediation will then build on the idea of “intimate surveillance” which entails close and seemingly invasive monitoring by parents. Parental monitoring and mediation contribute to the normalization of intimate surveillance to the extent that surveillance is (re)situated as a necessary culture of care. The choice not to surveil infants is thus positioned, worryingly, as a failure of parenting.

My chapter is a key part of my Ends of Identity project; here I start to think about ‘intimate surveillance’, which is where parents and loved ones digitally document and surveil their offspring, from sharing ultrasound photos to tracking newborn feeding and eating patterns. Intimate surveillance is a deliberately contradictory term: something done with the best of intentions but with possibly quite problematic outcomes. Here’s the full abstract:

The moment of birth was once the instant where parents and others first saw their child in the world, but with the advent of various imaging technologies, most notably the ultrasound, the first photos often precede birth (Lupton, 2013). In the past several decades, the question is no longer just when the first images are produced, but who should see them, via which, if any, communication platforms? Should sonograms (the ultrasound photos) be used to announce the impending arrival of a new person in the world? Moreover, while that question is ostensibly quite benign, it does usher in an era where parents and loved ones are, for the first years of life, the ones deciding what, if any, social media presence young people have before they’re in a position to start contributing to those decisions.

This chapter addresses this comparatively new online terrain, postulating the provocative term intimate surveillance, which deliberately turns surveillance on its head, raising the question of whether sharing affectionately, and with the best of intentions, can or should be understood as a form of surveillance. Firstly, this chapter will examine the idea of co-creating online identities, touching on some of the standard ways of thinking about identity online, and then starting to look at how these approaches do and do not explicitly address the creation of identity for others, especially parents creating online identities for their kids. I will then review some ideas about surveillance and counter-surveillance with a view to situating these creative parental acts in terms of the kids and others being created. Finally, this chapter will explore several examples of parental monitoring, capturing and sharing of data and media about their children, using various mobile apps, contextualising these activities not with a moral finger-waving, but by surfacing specific questions and literacies which parents may need to develop in order to use these tools mindfully, and ensure decisions made about their children’s online presences are purposeful decisions.

The ‘beginnings’ issue of the M/C Journal, edited by Bjorn Nansen (University of Melbourne) and me, has just been published. We’re really pleased with how this issue has turned out: a number of articles engage with the beginnings of life — from pregnancy apps to social media microcelebrity infants to infant media use — but there are also some fantastically creative engagements, from the beginnings of spreadsheets in terms of both history and practice through to the rhetorical beginnings of new technologies such as smart contact lenses. As with all issues of M/C, the content is free and open access.