A blog by sociologist Deborah Lupton

Digital health technologies configure a certain way of practising medicine and public health, a certain type of patient or lay person and a specific perspective on the human body. The techno-utopian approach to using digital health technologies tends to assume that these tacit norms and assumptions are shared and accepted by all the actors involved, and that they are acting on a universal human body. Yet a cursory examination of surveys of digital health technology use demonstrates that social structural factors such as age, gender, education level, occupation and race/ethnicity, as well as people's state of health and their geographical location, play a major role in influencing how such technologies are taken up among lay people or the extent to which they are able to access the technologies.

An American study of the use of some digital health technologies, using representative data collected by the National Cancer Institute in 2012, for example, found no evidence of differences by race or ethnicity, but significant differences for gender, age and socioeconomic status (Kontos et al. 2014). Female respondents were more likely to use online technologies for health-related information, as were younger people (under 65) and those of higher socioeconomic status. People of low socioeconomic status were less likely to go online to look for a healthcare provider, use email or the internet to connect with a doctor, track their personal health information online, use a website to help track diet, weight or physical activity, or download health information to a mobile device. However, they were more likely to use social media sites to access or share health information. Women were more likely than men to engage in all of these activities.

While there is little academic research on how different social groups use apps, market research reports have generated some insights. One report showed that women install 40 per cent more apps than men and buy 17 per cent more paid apps, while men use health and fitness apps slightly more (by about 10 per cent) than women (Koetsier 2013). A Nielsen market report on the use of wearable devices found that while men and women used fitness activity bands in equal numbers, women were more likely to use diet and calorie counter apps (Nielsen 2014).

As these findings suggest, gender is one important characteristic that structures the use of digital health technologies. The digital technology culture is generally male-dominated: most technology designers, developers and entrepreneurs are male. As a result, a certain blindness to the needs of women can be evident. For example, when the Apple Health app was announced in 2014, destined to be included as part of a suite of apps on the Apple Watch, it did not include a function for the tracking of menstrual cycles (Eveleth 2014). Gender stereotypes are routinely reproduced in devices such as health and medical apps. As I noted in my study of sexuality and reproduction self-tracking apps, the sexuality apps tend to focus on documenting and celebrating male sexual performance, with little acknowledgement of women’s sexuality, while reproduction apps emphasise women’s over men’s fertility.

App designers and those who develop many other digital technologies for medical and health-related purposes often fail to recognise the social and cultural differences that may influence how people interact with them. Just as cultural beliefs about health and illness vary from culture to culture, so too do responses to the cultural artefacts that are digital health technologies. Aboriginal people living in a remote region of Australia, for example, have very different notions of embodiment, health and disease from those that tend to feature in the health literacy apps that have been developed for mainstream white Australian culture (Christie and Verran 2014). It is therefore not surprising that a review of the efficacy of a number of social media and apps developed for health promotion interventions targeted at Aboriginal Australians found no evidence of their effectiveness or benefit to this population (Brusse et al. 2014).

Few other analyses have sought to highlight the cultural differences in the ways in which people respond to and use digital health technologies. This kind of research is surely imperative if we are to challenge existing assumptions about 'the user' of these technologies and gain greater insight into their benefits and limitations.

Next month I will be visiting England to give talks and meet colleagues. It's a whirlwind visit, with eight talks at seven universities in five days. The itinerary and further details are provided below for those who might be interested in coming along to any of the talks.

As a digital sociologist, I have become fascinated by the social and cultural implications of 3D printing technologies. Few sociologists or other critical academic commentators have begun to investigate how 3D printing is beginning to affect society. Yet as 3D printing technologies move into an expanding realm of contexts, there is much opportunity to analyse their effects. Not only are these technologies having an impact on industrial manufacturing and the distribution of goods; makers, artists and designers are also taking them up in intriguing ways. 3D printing is being used in medicine and dentistry, public relations and marketing and in fan cultures. These technologies are being introduced into schools and incorporated into the curriculum. As the price of 3D printers falls, they will find their way into more households. There are significant environmental and legal issues in relation to how they are used, including questions about intellectual property.

As part of my initial explorations into the sociology of 3D printing, last week I published two pieces on these technologies. One was an article for The Conversation, in which I discussed the phenomenon of the 3D self replica. This is a figurine that can be made of a person using the digital data derived from 3D scanning software. The technologies to generate these artefacts are rapidly moving into a range of leisure domains, including sporting events, shopping centres, airports, concerts and amusement parks as well as fan cultures and marketing programs. 3D printed self replicas can even be made at home using a software package developed for the Xbox Kinect and a home 3D printer. Some commentators have referred to these replicas as '3D selfies' because they involve the production of a personal likeness. In the article I speculated about the ways in which people may start to use these figures as markers or mementos of their bodies and social relationships.

The second piece was an academic article that discusses the 3D printing of what I term 'digital body objects' for medical and health-related purposes. The article explores the use of non-organic materialisations of people's body parts for medical purposes as well as the fabrication of self-tracked bodily data into objects. The abstract is below; the full paper can be accessed here:

The advent of 3D printing technologies has generated new ways of representing and conceptualising health and illness, medical practice and the body. There are many social, cultural and political implications of 3D printing, but a critical sociology of 3D printing is only beginning to emerge. In this article I seek to contribute to this nascent literature by addressing some of the ways in which 3D printing technologies are being used to convert digital data collected on human bodies and fabricate them into tangible forms that can be touched and held. I focus in particular on the use of 3D printing to manufacture non-organic replicas of individuals' bodies, body parts or bodily functions and activities. The article is also a reflection on a specific set of digital data practices and the meaning of such data to individuals. In analysing these new forms of human bodies, I draw on sociomaterialist perspectives as well as the recent work of scholars who have sought to reflect on selfhood, embodiment, place and space in digital society and the nature of people's interactions with digital data. I argue that these objects incite intriguing ways of thinking about how digital data on embodiment, health and illness are interpreted and used across a range of contexts. The article ends with some speculations about where these technologies may be headed and outlines future research directions.

These initial forays into a sociology of 3D printing represent merely a small component of possible avenues for theorising and research into the social impact of this technology. What I am particularly interested in at the moment is the implications for people’s data practices, or how the material objects that are generated from 3D printing technologies act as ‘solidified’ personal data. Future writings will investigate these issues in greater depth.

Digital Sociology has now been published and is available from Amazon and from the publisher.

The publisher’s blurb is below:

We now live in a digital society. New digital technologies have had a profound influence on everyday life, social relations, government, commerce, the economy and the production and dissemination of knowledge. People’s movements in space, their purchasing habits and their online communication with others are now monitored in detail by digital technologies. We are increasingly becoming digital data subjects, whether we like it or not, and whether we choose this or not.

The sub-discipline of digital sociology provides a means by which the impact, development and use of these technologies and their incorporation into social worlds, social institutions and concepts of selfhood and embodiment may be investigated, analysed and understood. This book introduces a range of interesting social, cultural and political dimensions of digital society and discusses some of the important debates occurring in research and scholarship on these aspects. It covers the new knowledge economy and big data, reconceptualising research in the digital era, the digitisation of higher education, the diversity of digital use, digital politics and citizen digital engagement, the politics of surveillance, privacy issues, the contribution of digital devices to embodiment and concepts of selfhood and many other topics.

Digital Sociology is essential reading not only for students and academics in sociology, anthropology, media and communication, digital cultures, digital humanities, internet studies, science and technology studies, cultural geography and social computing, but for other readers interested in the social impact of digital technologies.

The latest excerpt from my forthcoming book Digital Sociology (due to be released by Routledge on 12 November 2014). This one is from Chapter 7: Digital Politics and Citizen Digital Public Engagement.

The distinction between public and private has been challenged and transformed via digital media practices. Indeed, it has been contended that via the use of online confessional practices, as well as the accumulating masses of data that are generated about digital technology users' everyday habits, activities and preferences, the concept of privacy has changed. Increasingly, as data from many other users are aggregated and interpreted using algorithms, one's own data have an impact on others by predicting their tastes and preferences (boyd, 2012). The concept of 'networked privacy' developed by danah boyd (2012) acknowledges this complexity. As she points out, it is difficult to make a case for privacy as an individual issue in the age of social media networks and sousveillance. Many people who upload images or comments to social media sites include other people in the material, either deliberately or inadvertently. As boyd (2012: 348) observes, 'I can't even count the number of photos that were taken by strangers with me in the background at the Taj Mahal'.

Many users have come to realise that the information about themselves and their friends and family members that they choose to share on social media platforms may be accessible to others, depending on the privacy policy of the platform and the ways in which users have operated privacy settings. Information that is shared on Facebook, for example, is far easier to limit to Facebook friends if privacy settings restrict access than are data that users upload to platforms such as Twitter, YouTube or Instagram, which have few, if any, settings that can be used to limit access to personal content. Even within Facebook, however, users must accept that their data may be accessed by those that they have chosen as friends. They may be included in photos that are uploaded by their friends even if they do not wish others to view the photo, for example.

Open source data harvesting tools are now available that allow people to search their friends' data. Using a tool such as Facebook Graph Search, people who have joined that social media platform can mine the data uploaded by their friends and search for patterns. Such elements as 'photos of my friends in New York' or 'restaurants my friends like' can be identified using this tool. In certain professions, such as academia, others can use search engines to find out many details about one's employment and accomplishments (just one example is Google Scholar, which lists academics' publications as well as how often and where they have been cited by others). Such personal data as online photographs or videos of people, their social media profiles and online comments can easily be accessed by others using search engines.

Furthermore, not only are individuals' personal data shared in social networks; they may now be used to make predictions about others' actions, interests, preferences or even health states (Andrejevic, 2013; boyd, 2012). When people's small data are aggregated with others' to produce big data, the resultant datasets are used for predictive analytics (Chapter 5). As part of algorithmic veillance and the production of algorithmic identities, people become represented as configurations of others in the social media networks with which they engage and the websites visited by people characterised as 'like them'. There is little, if any, opportunity to opt out of participation in these data assemblages that are configured about oneself.
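The kind of prediction at work here can be illustrated with a toy sketch. This is not any platform's actual algorithm: the topics, ratings and similarity measure are all invented for the example. It estimates one person's unknown preference as a similarity-weighted average of the preferences of other users in an aggregated dataset, a minimal version of the logic by which a person comes to be represented as a configuration of people 'like them'.

```python
# Toy sketch of preference prediction from aggregated user data.
# Not any platform's real algorithm: topics, ratings and the
# similarity measure are invented purely for illustration.

from math import sqrt

def similarity(u, v):
    """Inverse-distance similarity between two preference vectors:
    1.0 for identical users, approaching 0 as they diverge."""
    distance = sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return 1.0 / (1.0 + distance)

def predict(target, others, item):
    """Estimate the target user's unknown rating of `item` as a
    similarity-weighted average of other users' ratings of it."""
    shared = [topic for topic in target if topic != item]
    weighted_sum, weight_total = 0.0, 0.0
    for other in others:
        if item not in other:
            continue
        sim = similarity([target[t] for t in shared],
                         [other.get(t, 0) for t in shared])
        weighted_sum += sim * other[item]
        weight_total += sim
    return weighted_sum / weight_total if weight_total else None

# Ratings on a 0-5 scale; the target user's interest in 'fitness' is unknown.
target = {'cooking': 5, 'travel': 4}
others = [
    {'cooking': 5, 'travel': 5, 'fitness': 4},  # very similar to the target
    {'cooking': 1, 'travel': 2, 'fitness': 1},  # quite different
]
estimate = predict(target, others, 'fitness')
```

Because the first user's profile is much closer to the target's, the estimate is pulled towards that user's rating: the target's tastes are, in effect, inferred from the data of the people most 'like them', whether or not the target ever supplied that information.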

A significant tension exists in discourses about online privacy. Research suggests that people hold ambivalent and sometimes paradoxical ideas about privacy in digital society. Many people value the use of dataveillance for security purposes and for improving economic and social wellbeing. It is common for digital media users to state that they are not concerned about being monitored by others online because they have nothing to hide (Best, 2010). On the other hand, however, there is evidence of unease about the continuous, ubiquitous and pervasive nature of digital surveillance. It has become recognised that there are limits to the extent to which privacy can be protected, at least in terms of individuals being able to exert control over access to digital data about themselves or enjoy the right to be forgotten (Rosen, 2012; Rosenzweig, 2012). Some commentators have contended that notions of privacy, indeed, need to be rethought in the digital era. Rosenzweig (2012) has described previous concepts as ‘antique privacy’, which require challenging and reassessment in the contemporary world of ubiquitous dataveillance. He asserts that in weighing up rights and freedoms, the means, ends and consequences of any dataveillance program should be individually assessed.

Recent surveys of Americans by the Pew Research Center (Rainie and Madden, 2013) have found that the majority still value the notion of personal privacy but also value the protection against criminals or terrorists that breaches of their own privacy may offer. Digital technology users for the most part are aware of the trade-off between protecting their personal data from others’ scrutiny or commercial use, and gaining benefits from using digital media platforms that collect these data as a condition of use. This research demonstrates that the context in which personal data are collected is important to people’s assessments of whether their privacy should be intruded upon. The Americans surveyed were more concerned about others knowing the content of their emails than their internet searches, and were more likely to experience or witness breaches of privacy in their own social media networks than to be aware of government surveillance of their personal data.

Another study using qualitative interviews with Britons (The Wellcome Trust, 2013) investigated public attitudes to personal data and the linking of these data. The research found that many interviewees demonstrated a positive perspective on the use of big data for national security and the prevention and detection of crime, improving government services, the allocation of resources and planning, identifying social and population trends, convenience and time-saving when doing shopping and other online transactions, identifying dishonest practices and making vital medical information available in an emergency. However the interviewees also expressed a number of concerns about the use of their data, including the potential for the data to be lost, stolen, hacked or leaked and shared without consent, the invasion of privacy when used for surveillance, unsolicited marketing and advertising, the difficulty of correcting inaccurate data on oneself and the use of the data to discriminate against people. Those interviewees of low socioeconomic status were more likely to feel powerless about dealing with potential personal data breaches, identity theft or the use of their data to discriminate against them.

References

Andrejevic, M. (2013) Infoglut: How Too Much Information is Changing the Way We Think and Know. New York: Routledge.

Best, K. (2010) Living in the control society: surveillance, users and digital screen technologies. International Journal of Cultural Studies, 13, 5–24.

Another excerpt from my forthcoming book Digital Sociology (due to be released on 12 November 2014). From chapter 8: ‘The Digitised Body/Self’.

Such is the extent of our intimate relations with digital technologies that we often respond emotionally to the devices themselves and to the content contained within or created by these devices. The design of digital devices and software interfaces is highly important to users' responses to them. Devices such as iPhones are often described in highly affective and aestheticised terms: as beautiful playthings, glossy and shiny objects of desire, even as edible or delicious. Advertising for the iPhone and other Apple devices often focuses on inspiring child-like wonder at their beauty and magical capabilities (Cannon and Barker 2012).

Affective responses to material objects are integral to their biographical meaning to their owners and their participation in intimate relationships. Writers on material culture and affect have noted the entangling of bodies/selves with physical objects and how artefacts act as extensions or prostheses of the body/self, becoming markers of personhood. Objects become invested with sentimental value by virtue of their association with specific people and places, and thus move from anonymous, mass-produced items to biographically-inscribed artefacts that bear with them personal meanings. Through use and over time, such initially anonymised objects become personalised prosthetics of the self, their purely functional status and monetary value replaced by more personal and sentimental value (Miller 2008; Turkle 2007).

… Bell and Dourish (2011) refer to the mythologies and the mess of ubiquitous computing technologies. By myths they mean the cultural stories, values and meanings that are drawn upon to make sense of and represent these technologies. The myths surrounding new digital technologies tend to focus on their very novelty, their apparent divergence from what has come before them and their ability to provide solutions to problems. The 'mess' of digital technologies inheres in the challenges that the 'practical reality' of their everyday use (Bell & Dourish, 2011, p. 4) poses to myths suggesting that they are infallible and offer an ideal solution to a problem. When digital technologies operate as we expect them to, they feel as if they are inextricably part of our bodies and selves. Inevitably, however, there are moments when we become aware of our dependence on technologies, or find them annoying or difficult to use, or lose interest in them. Technologies break down or fail to work as expected; infrastructure and government regulations may not support them adequately; users may become bored with using them or their bodies may rebel and develop over-use symptoms. There may be resistances, personal or organised, to their use, and contestations over their meanings and value (Lupton, 1995; Miller & Horst, 2012).

Freund (2004, p. 273) uses the term 'technological habitus' to describe the 'internalised control' and kinds of consciousness required of individuals to function in technological environments such as those currently offered in contemporary western societies. The human/machine entity, he argues, is not seamless: rather, there are disjunctions – or, as he puts it, 'seams in the cyborg' – where fleshly body and machine do not intermesh smoothly, and discomfort, stress or disempowerment may result. The use of technologies can disrupt sleep patterns, increase work and commuting time and reduce leisure time, for example, causing illness, stress and fatigue. Our bodies may begin to alert us that these objects are material in the ways that they affect our embodiment: through eye-strain, hand, neck or back pain or headaches from using the devices too much (Lupton, 1995).

People may feel overwhelmed by the sheer mass of data conveyed by their digital devices and the need to keep up with social network updates. Analyses of social media platforms such as Facebook are beginning to appear that suggest that users may simultaneously recognise their dependence upon social media to maintain their social network but may also resent this dependence and the time that is taken up in engaging with them, even fearing that they may be ‘addicted’ to their use (Davis, 2012). Users may also feel ‘invaded’ by the sheer overload of data that may be generated by membership of social networking sites and the difficulty of switching off mobile devices and taking time out from using them (boyd, 2008).

Technology developers are constantly working on ways to incorporate digital devices into embodiment and everyday life, to render them ever less obtrusive and ever more part of our bodies and selves. As the technical lead and manager of the Google Glass (a wearable device that is worn on the face like spectacles) project contends, 'bringing technology and computing closer to the body can actually improve communication and attention – allowing technology to get further out of the way' (Starner, 2013, no page number; emphasis in the original). He asserts that by rendering these devices smaller and more easily worn on the body, they recede further into the background rather than dominating users' attention (as is so overtly the case with the current popular smartphone and tablet computers). Despite these efforts, Glass wearers have been subjected to constant attention from others that is often negative and based on the presumption that the device is too obvious, unstylish and unattractive, or that the people who wear them are wealthy computer nerds who do not respect the privacy of others. They have reported many incidents of angry responses from others when wearing Glass in public, even to the point of people ripping the device off their faces or asking them to leave a venue (Gross, 2014). The design of digital devices, therefore, may incite emotional responses not only in the users themselves but also in onlookers.

Some people find wearable self-tracking devices not fashionable enough, or not water-proof enough, or too clunky or heavy, or not comfortable enough to wear, or find that they get destroyed in the washing machine when the user forgets to remove them from their clothing. One designer (Darmour, 2013) has argued that if these technologies remain too obvious, 'bolting' these devices to our bodies will 'distract, disrupt, and ultimately disengage us from others, ultimately degrading our human experience'. She asserts that instead these objects need to be designed more carefully so that they may be integrated into the 'fabric of our lives'. Her suggested ways of doing this include making them look more beautiful, like jewellery (brooches, necklaces, bracelets, rings), incorporating them into fashionable garments, making them peripheral and making them meaningful: using colours or vibrations rather than numbers to display data readings from these devices.

I have had a new article published in the journal Sport, Education and Society on the topic of how school health and physical education (HPE) is becoming digitised and how technologies of self-tracking are being introduced into classes. As its title suggests – 'Data assemblages, sentient schools and digitised HPE (response to Gard)' – the article outlines some thoughts in response to a piece published in the same journal by another Australian sociologist, Michael Gard. Gard contends that a new era of HPE seems to be emerging in the wake of the digitising of society in general and the commercialising of education, one which incorporates the use of digital technologies.

Few commentators in education, health promotion or sports studies have begun to realise the extent to which digital data surveillance (‘dataveillance’) and analytics are now encroaching into many social institutions and settings and the ways in which actors and agencies in the digital knowledge economy are appropriating these data. In my article I give some examples of the types of surveillance technologies that are being introduced into school HPE. Apps such as Coach’s Eye and Ubersense are beginning to be advocated in HPE circles, as are other health and fitness apps. Some self-tracking apps have been designed specifically for HPE teachers for use with their students. For example the Polar GoFit app with a set of heart rate sensors is expressly designed for HPE teachers as a monitoring tool for students’ physical activities during lessons. It allows teachers to distribute the heart rate sensors to students, set a target zone for heart rate levels and then monitor these online while the lesson takes place, either for individuals or the class as a group.
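The monitoring logic described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Polar's actual software or API; the student names, readings and zone boundaries are all invented.

```python
# Hypothetical sketch of teacher-facing heart-rate zone monitoring.
# Not Polar's actual software or API: names, readings and the
# target zone are invented for illustration.

def zone_report(readings, low, high):
    """Classify each student's latest heart-rate reading (beats per
    minute) against the teacher's target zone [low, high]."""
    report = {}
    for student, bpm in readings.items():
        if bpm < low:
            report[student] = 'below zone'
        elif bpm > high:
            report[student] = 'above zone'
        else:
            report[student] = 'in zone'
    return report

# Latest readings streamed from the students' chest-strap sensors.
readings = {'student_a': 92, 'student_b': 145, 'student_c': 178}
report = zone_report(readings, low=120, high=160)

# Aggregate view for the whole class, as a dashboard might display it.
in_zone = sum(1 for status in report.values() if status == 'in zone')
```

Even this minimal version makes the political point visible: every student's exertion is individually classified against a teacher-set norm, in real time, with the results available for inspection and comparison.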

I argue that there are significant political and ethical implications of the move towards mobilising digital devices to collect personal data on school students. I have elsewhere identified a typology of five modes of self-tracking that involve different levels of voluntary engagement and ways in which personal data are employed. ‘Private’ self-tracking is undertaken voluntarily and initiated by the participant for personal reasons, ‘communal’ self-tracking involves the voluntary sharing of one’s personal data with others, ‘pushed’ self-tracking involves ‘nudging’ or persuasion, ‘imposed’ self-tracking is forced upon people and ‘exploited’ self-tracking involves the use of personal data for the express purposes of others.

Digitised HPE potentially involves all five of these modes. In the context of the institution of the school and the more specific site of HPE, the previous tendencies of HPE to exert paternalistic disciplinary control over the unruly bodies of children and young people and to exercise authority over what the concepts of 'health', 'the ideal body' and 'fitness' should mean can only be exacerbated. More enthusiastic students who enjoy sport and fitness activities may willingly and voluntarily adopt or consent to dataveillance of their bodies as part of achieving personal fitness or sporting performance goals. However, when students are forced to wear heart rate monitors to demonstrate that they are conforming to the exertions demanded of them by the HPE teacher, there is little room for resistance. When certain very specific targets of appropriate numbers of steps, heart-rate levels, body fat or BMI measurements and the like are set and students' digitised data are compared against them, the capacity of the apparatus of HPE to constitute a normalising, surveilling and disciplinary gaze on children and young people is enhanced, as is the capacity for these data to be used for public shaming.

The abstract of the article is below. If you would like a copy, please email me on deborah.lupton@canberra.edu.au.

Michael Gard (2014) raises some important issues in his opinion piece on digitised health and physical education (HPE) in the school setting. His piece represents the beginning of a more critical approach to the instrumental and solutionist perspectives that are currently offered on digitised HPE. Few commentators in education, health promotion or sports studies have begun to realise the extent to which digital data surveillance and analytics are now encroaching into many social institutions and settings and the ways in which actors and agencies in the digital knowledge economy are appropriating these data. Identifying what is happening and the implications for concepts of selfhood, the body and social relations, not to mention the more specific issues of privacy and the commercialisation and exploitation of personal data, requires much greater attention than these issues have previously received in the critical social literature. While Gard has begun to do this in his article, there is much more to discuss. In this response, I present some discussion that seeks to provide a complementary commentary on the broader context in which digitised HPE is developing and manifesting. Whether or not one takes a position that is techno-utopian, dystopian or somewhere in between, I would argue that to fully understand the social, cultural and political resonances of digitised HPE, such contextualising is vital.

A body of literature on self-tracking has been established in human-computer interaction studies. Contributors to this literature tend to take a cognitive or behavioural psychology approach to theorising and explaining self-tracking. Such an approach is limited to understanding individual behaviour. Yet self-tracking is a profoundly social practice, both in terms of the enculturated meanings with which it is invested and the social encounters and social institutions that are part of the self-tracking phenomenon. In this paper I contend that sociological perspectives can contribute some intriguing possibilities for human-computer interaction research, particularly in developing an understanding of the wider social, cultural and political dimensions of what I refer to as 'self-tracking cultures'. The discussion focuses on the following topics: self-optimisation and governing the self; entanglements of bodies and technologies; the valorisation of data; data doubles; and social inequalities and self-tracking. The paper ends by outlining some directions for future research on self-tracking cultures that go beyond the individual to the social.

Tomorrow the Big Data Cultures symposium that I have convened at the University of Canberra is taking place. There is a very interesting program from a range of Australian academics working on the social, cultural and political dimensions of the big data phenomenon. Here are the abstracts:

Keynote: ‘Visual dimensions’

Greg More, RMIT University

It's a small problem for data to scale, but a wicked problem for us to make sense of big data that scales to infinity. The aim of this presentation is to explore the translation of data into geometrical relationships: the art and design of creative forms of data visualisation that give data a meaningful visual dimension. Data has dimensionality, but not in a geometrical sense. Topology – the mathematical study of shape – will be used as a lens to examine projects in which designers utilise metaphors and abstraction to construct visual languages for data: a cartography of data that makes sense of a scaleless territory. What is important in this examination is how the designers of data visualisations understand the character of the data itself – the texture, nuance and signal contained within the information – and use this to make data tangible and at a scale we can interact with.

‘To hold a social form in your hand: how far are interactive holograms of social data?’

Alexia Maddox, Deakin University and Curtin University

Starting with the question, ‘can we reanimate social data into three-dimensional forms?’, this paper explores the possibility of presenting research findings in three-dimensional formats. These formats could include information that we can print through 3D printers or animate through interactive holograms. This paper will interrogate this approach to data presentation and discuss from a sociological point of view the ways it could engage with Big Data. The combination of visual presentation derived from digital trace data provides us with a lens through which to investigate social patterns and trends. Building data into three-dimensional formats has the capacity to enhance the cognitive literacy of information and its presentation to diverse stakeholders. A social surface, that which is defined by form, needs a conceptual framework upon which to gain dynamic presence and dimension in space. Through my research into the Herpetological community, I explored the interior structures of community and patterns of socio-technical engagement. The resulting conceptual approach from this work seeks to situate mediated sociability within social ecologies and build social data into social form. This environmental approach aligns with current trends in geodemographic analysis and incorporates the socio-technical actor that moves beyond physical space and into virtual terrains. The challenge of this conceptual approach is to explore how Big Data can be incorporated as environmental information or digital trace data.

‘Stranded deviations: Big Data and the contextually marginalised’

Andrew McNicol, University of New South Wales

As social and practical interactions have moved to the digital realm, facilitated by technological breakthroughs and social pressures, many have become understandably concerned about user privacy. With the increased scale and complexity of stored information, giving rise to the term ‘Big Data’, the potential for another person to scrutinise our personal information in a way that makes us uncomfortable increases. However, as attention is a finite resource, in the majority of cases user information never comes under scrutiny by unwanted human eyes – it is lost in the noise and is only treated as data available for computational analysis. In a big data society privacy breaches increasingly occur as a result of algorithms allowing targets to emerge from data sets. This means that in any context certain individuals become disproportionately targeted for unwanted privacy breaches, and those who are regularly contextually marginalised have the most to lose from participating in a culture of Big Data, raising issues of equal access. In this paper I bring these ideas together to argue that the privacy discourse should focus not only on the potential for scrutiny of personal data, but also on the systems in place, both social and technological, that facilitate an environment where some users are safer than others.

‘Health, big data and the culture of irresponsibility’

Bruce Baer Arnold and Wendy Bonython, University of Canberra

The analysis of whole-of-population clinical, hospital and genomic data offers major potential benefits for public health administration, pharmaceutical research and wellness through the identification of susceptibilities to health conditions. Achievement of those benefits will be fundamentally inhibited by a ‘health big data culture of irresponsibility’ in the public and private sectors. This paper critiques health big data cultures through reference to problematical initiatives such as 23andme (a global direct-to-consumer DNA service) and the mismanaged release of weakly de-identified health data covering millions of people in the UK. It notes whole-of-population health data mining projects such as those involving DeCODE (Iceland) and Maccabi-Merck (Israel) that are more problematical than the so-called ‘vampire project’ involving Indigenous peoples. It draws on the authors’ work regarding privacy, bioethics, consumer protection and the OECD Health Information Infrastructure initiative. It highlights the need for coherent national and global health data management frameworks that address issues such as the genomic commons, intergenerational implications of genetic data and insurance redlining. It also highlights questions about media representations of big data governance.

While Big Data have clear implications for the knowledges produced by the social sciences, the various practices of the Digital Humanities have taken the methods associated with Big Data and applied them to objects rarely thought to be ‘Big’ or even ‘Data’. Scholars have used computation to examine literary history, visualising massive literary data sets in ways that make claims that, methodologically at least, are often perceived as threats to the humanities at a moment when traditional methods of teaching and performing humanistic scholarship are likewise under attack from a corporatized managerial university system. This paper uses the debates surrounding the Digital Humanities to investigate the political and institutional arguments that have emerged around Big Data methodologies in the humanities, along with the contrasting knowledge claims that ground these debates. I argue that, in their emphasis on methodology, these discussions overlook how academic publics have been transformed over the past decades. I suggest that normative claims about Big Data in the humanities must investigate its ‘public problems’—moments in which a specific culture defined around the technologically mediated circulation of discourse produces internal norms that are concealed for the sake of external legitimation and funding.

‘Big data’s golems: bots as a technique of tactical media’

Chris Rodley, University of Sydney

Big data has enabled the creation of a diverse range of bots which collect, analyse and process digital information programmatically. While corporations and political parties were early adopters of bots, a growing number of activists, artists and programmers have recently begun to create their own data-driven bots on social platforms such as Twitter as a way of critiquing or disrupting dominant discourses. This paper considers a selection of bots created to comment on issues including NSA surveillance and gun control, arguing that they represent a radical departure from the Situationist strategy of détournement or the tactical disruptions envisaged by Michel de Certeau. It considers the ethics of adopting the techniques of the sensor society – or what Mark Andrejevic has termed “drone logic” – and the implications of bots entering the public sphere as semi-autonomous political actors. Like the Golem of Prague in Jewish folklore, these personifications of big data may simultaneously represent both a powerful defensive strategy as well as a potentially destructive, uncontrollable force.

‘“Paranoid nominalism” as cultural technique of the quantified self’

Christopher O’Neill, University of Melbourne

The Quantified Self movement constitutes a growing community of those committed to practices of self-tracking through mobile sensors and apps. This paper will offer a critique of contemporary Quantified Self discourse, arguing that it is characterised by a certain ‘paranoid nominalism’: an inability to ‘reconcile’ the intimacy of sensors with the abstraction of statistical technologies. This critique shall be pursued through a genealogical investigation of the precursors of some of the key technologies of the Quantified Self movement, especially Étienne-Jules Marey’s work on developing a ‘second positivism’ through sensor technologies, and Adolphe Quetelet’s production of statistical technologies of governance. Drawing on the ‘cultural techniques’ approach of media theory, this paper will investigate these technological prehistories of the Quantified Self movement in order to probe its ideological aporias.

‘There’s an app for that: digital culture and the rise of technologism’

Doug Lorman, Deakin University

Humans have always used technology to overcome bodily and mental boundaries and limitations in the pursuit of personal transcendence. The development of digital technologies such as ‘apps’ and wearable technology has helped to further this pursuit. Digital technologies allow us to collect, store and analyse data on ourselves and take appropriate action. The growth of self-quantification means that technology is no longer disconnected from us, but is part of being human. Technology and its user are mutually constitutive; one influences the other.

The benefits of self-quantification have been touted elsewhere. My concern is that with our inherent desire to conquer nature and override the natural way of doing things we are placing an inordinate amount of faith in the ability of technology to resolve our issues. My talk will argue that the development of a blind faith in digital technologies is creating a phenomenon I call technologism: the belief that technological outputs or results (big data) are the absolute and only justifiable solutions to personal issues. The result of this is that we pay less attention to our surroundings and our lived events, and instead put our faith in technology, relying on it to guide us, help us, heal us, and so on.

‘Database activism’

Mathieu O’Neil, University of Canberra

When data was rare, the focus lay in finding it and collecting it. Now that there is an overabundance of data, databases have assumed a central role in the sorting, organising, querying and representation of data. In the realm of science, databases operate as both scientific instruments and as a means of communicating results (Hine 2006). Similarly, in the news media field, journalists are increasingly using databases to render the flow of data meaningful and, through visualisation, to make important and pertinent information memorable. Like scientists, data journalists have to be concerned with the integrity of data, and present their methods and findings; database literacy is increasingly framed as a mandatory journalistic skill. At the same time the reliance on databases has led to the emergence of new forms of collective emotions and indignations (Parasie 2013). Unlike journalists, “civic hackers” (such as maplight.org, which tracks the influence of money on US politics) do not aim to reveal victims and guilty parties hidden in the data, or to organise collective indignations. Data itself is held captive by governing authorities and must be freed: civic hackers reveal, without denouncing.

Hine, C. (2006) “Databases as scientific instruments and their role in the ordering of scientific work”, Social Studies of Science 36(2), pp. 269-298.

A fascinating, cross-cutting case study in big data cultures lies in the dynamic, evolving, and contested space of contemporary disability and digital technology. Disability is now recognized as a significant part of social life, identity, and the life course. Over the past twenty years, digital technology – especially computers, the Internet, mobile media, social media, apps, geolocation technologies, and now wearable computers and even technologies such as driverless cars – has emerged as a significant part of the mediascape, cultural infrastructure, social support system, and personal identity and repertoire of many people with disabilities. New social relations of disability are premised on – and increasingly ‘congealed’ in – forms of digital technology. In the Australian context, we might think, for instance, of the present conjuncture and its coincidence of two big national projects where disability and digital technology are both entangled – the National Disability Insurance Scheme (NDIS) and the National Broadband Network (NBN).

There is an emerging research, policy, design, and activist engagement with disability and digital technology, but as yet questions of disability and big data have not been well canvassed. This is significant given that, historically, the emergence of forms of data concerning disability has been bound up with classification, exclusion, government, and discrimination, as well as with the new forms of knowledge and governmentality associated with new socially oriented models and paradigms of disability.

Accordingly, this paper provides a preliminary exploration of the forms, affordances, characteristics, issues, challenges, ethics, and possibilities of what might be termed ‘disability data cultures’. Firstly, I identify and discuss particular kinds of digital technologies, infrastructures, and softwares, and their distinctive affordances and design trajectories relating to disability. As well as explicitly nominated and dedicated disability data technologies, I also discuss the emergence of health, self-tracking, and quantified self apps by which normalcy and ability are exnominated (or naturalized). Secondly, I look at the kinds of applications, harvesting, computational logics, and the will to power emerging in order to provide more comprehensive and targeted data on disability – for citizens and users, and service, political, and cultural intermediaries, as well as disability service providers, agencies, and governments. Thirdly, I look at the nascent disability-inflected contribution to, and participation in, open data and citizen data initiatives and experiments.

‘Theoretical perspectives on privacy, selfhood and big data’

Janice Richardson, Monash University

Big data practices produce specific anxieties about privacy, based upon the fact that information about us, of which we were previously unaware, may be revealed to our detriment. The concerns of the “masters of suspicion” (Nietzsche, Marx, Freud) provide a cultural background view that some important aspects of our lives are hidden or inaccessible to us. This framework has given way to the Foucauldian position that big data could be characterised as having the potential to create new ways in which we are categorised rather than revealing our hidden essence or truth. However, this shift from revelation to construction does nothing to undermine our need to control such potentially harmful practices by both companies and government. As a result, it is necessary to consider how to conceptualise an ethical basis for such privacy claims, which arise as a result of unpredictable knowledge that is produced, rather than as a breach of confidence of pre-existing knowledge. I consider the potential for Spinoza – and his distinction between adequate and inadequate knowledge – to provide such a framework.

The phenomenon of big data represents a socio-technical assemblage of services and devices that are involved in data collection and analysis. One example of this is personal ‘sensor’ devices (Andrejevic and Burdon 2014), like smartphones. Here users are interfaced into big data: they simultaneously use big data for their own needs while fuelling it with their personal information, becoming the target of data collection and dataveillance/surveillance. With the popularity of these devices it is important to consider what implications this interfacing has, and the relationship between users and big data/surveillance. This paper describes the results of empirical research into users and their interfaces – conceptualised under Lee’s (2013) idea of convergent mobile technologies (CMTs) – and the implications of user interfacing with big data and surveillance. Highlighted is how these interfaces valorise user experiences that are ‘immediate’ over all others. In the context of their relationship with big data this is problematic, as users dismiss or disengage from issues of security and surveillance as long as rapidity is maintained. These CMT interfaces can thus be understood as contributing to the creation of ‘docile data subjects’, who happily bleed personal information into the big data (and surveillant) assemblage(s), in exchange for an experiential state deemed valuable.

For nearly a decade citizens have taken to social media to launch public conversations and connective action around issues of civic concern – conversations which have various impacts on the shaping of policy, regulation and governance. Now Facebook, LinkedIn and Twitter are increasingly being used to build, inform and influence informal expert networks, particularly around emerging technologies and practices, and their associated policy problems. Such networks link actors from data cultures such as computing science and medical research to those in hybrid industrial ecologies, like that of mobile health software development. Their conversations are often transnational. They promote and market as much as debate and mobilise. Thus they complicate Gov 2.0 assumptions about democratic participation and engagement, as well as data security. In this paper we argue that it is vital to have new analytic frameworks to measure and evaluate the identity, reach, and relative agency of actors in those networks, in order to understand their potential impact on policy development.

We model one such framework – a mixed-method social media network analysis (SMNA) and digital ethnography used to analyse agency and influence in Twitter conversations about mhealth. Using hashtagged conversations captured in the wake of the U.S. Food & Drug Administration’s September 2013 release of guidelines on Mobile Medical Applications, we visualize the network communications then locate and profile the key influencers, exploring their motivations for engagement. Drawing on this data and altmetrics research we discuss registers of impact in expert social media networks and propose a research agenda for exploring the political, cultural and economic value of Twitter conversations in policy formation.

‘Capturing capacity: quantified self technologies and the commodification of affect’

Miranda Bruce, Australian National University

The Quantified Self (QS) movement is part of a growing technological trend that exploits and modulates the potential of human life. QS finds ways to quantify the active and passive dimensions of the daily processes of human existence, in order to extract meaning from them and modify the ways that we move in and through the world. This paper will explore, firstly, the idea that QS represents a commodification of human capacity, an extraction of power, or form of immaterial labour consistent with the logic of neoliberal capitalism. I will then turn to Deleuzian affect theory to open up a quite different ontological and ultimately practical approach to the problem of QS, which stresses the excessive, and thus un-capturable, nature of lived potential. Finally, I will offer some reflections on the relationship of this technology to broader trends concerning the technological modulation of human capacity.

‘Live data/sociology: what digital sociologists can learn from artists’ responses to big data’

Deborah Lupton, University of Canberra

The big data phenomenon has attracted much publicity in public forums, both in terms of its potential for offering insights into manifold aspects of social and economic life and for its negative associations with mass surveillance and the reduction of the complexity of behaviour into quantifiable data. In this paper I will discuss some of the ways in which artists have responded to big data. I contend that their conceptualisations and critiques of big data offer intriguing insights into the tacit assumptions and emotions (fears and anxieties as well as pleasures and satisfactions) that these digitised methods of knowledge production engender. Digital data are lively in a number of ways: they have become forms of ‘lively capital’ (that is, drawing commercial value from human embodiment, or life itself); they generate embodied and affective responses; they contribute recursively to life itself; and they have a social life of their own, constantly circulating and transforming as they are appropriated and re-purposed. Artists’ responses can contribute to what might be described as a ‘live data/sociology’ (drawing on Les Back’s concept of a ‘live sociology’ that departs from ‘zombie sociology’) which identifies and theorises the forms of liveliness that big digital data may encompass.