On Friday journalist Paul Mason published a fairly long article in the Guardian entitled ‘The End of Capitalism Has Begun.’ It features some interesting thoughts, and will hopefully help disseminate to a broader audience some ideas which have been floating about in academia for quite a while. That said, there are a few things in the piece which I think are somewhat naive and require a response.

The main thrust of Mason’s argument is that capitalism is inevitably on the way out because of several social changes being wrought by contemporary networked information processing technologies. Firstly, Mason argues that because of the increased levels of automation brought by digital systems, there will be a dramatic reduction in the volume of work required within a society. Secondly, he argues that the fundamental laws of economics have been broken by an information economy within the contemporary state of informational abundance. Finally, he argues that ‘cognitive capitalism’ is predicated on a mode of collaborative and networked social production which itself is contradictory to the type of individualised wealth production associated with capitalism.

The first of these points is hardly new. The displacement of labour from humans into various forms of machinery is, of course, something which has occurred for at least a couple of hundred years, as was presciently observed and described by Karl Marx (in the Fragment on Machines, a text which Mason cites later in his essay). Alongside the ongoing historical transformation of production processes, there has always been the accompanying claim that technology will make everyone’s lives better by reducing the need for arduous and boring labour tasks, instead freeing humanity to enjoy increased levels of leisure time accompanied by a higher level of material wealth and comfort. And whilst there are certainly some humans who are in that situation today, we could also point to the increasing precariousness of work, particularly within neoliberal economies where full employment has never been an important goal, as a reminder that decreasing the overall level of manual labour does not necessarily entail benefits for all.

Rather than seeing work and wealth being divided equally amongst citizens, today we instead find millions of unemployed or underemployed humans who are effectively used as an industrial reserve force to reduce any demands for increased wages, reduced working hours and other kinds of benefits which were associated with the collective action of the twentieth-century trade union movements. Whilst a relatively small number of humans become more materially wealthy than any of their predecessors, this occurs alongside a growing inequality between the global super rich and everyone else. As research last year found, the richest 85 individuals on the planet now own more than the poorest 50% of the global population, around 3.5 billion people.

Additionally, in a ‘creative’ digital economy where communicative acts are themselves commodified over corporate social networks, what does and does not count as productive work is itself problematised. Theorists ranging from autonomist Marxists such as Franco Berardi through to cyberutopian capitalists such as Clay Shirky have argued that what used to count as leisure time is now a key motor of wealth generation, as your online ‘leisure’ activities are used to tailor personal, location-aware advertising to your behaviour.

Which brings us to Mason’s second point, that economics is predicated upon scarcity, and that the current abundance of information demonstrates that we have entered an era where traditional economic theory cannot adequately function. Again, rhetoric surrounding the end of the economics of scarcity is not new, but such thinking fundamentally fails to grasp the dynamics of scarcity surrounding informational systems – and systems is a key word here, because economics is about circulation and flows, not a single thing (be it information, energy or anything else). Information is certainly a crucial component of digital networked ecologies, and the contemporary volume of information – what Mark Andrejevic and Berardi have both described as information overload – certainly constitutes abundance rather than scarcity, but the key is to think in systemic terms about what type of scarcity is generated as a consequence of the abundance of information. The answer is that human attention is what becomes scarce when information is abundant.

Indeed, the notion of the attention economy is not that new, with early versions of the term being deployed by authors such as Michael Goldhaber and Georg Franck around the turn of the century. For an excellent overview of contemporary debates surrounding economies of attention I would suggest reading this article by Patrick Crogan and Sam Kinsley. The key point is that far from rendering the economics of scarcity redundant, the abundance of online information means that human attention is increasingly scarce and thus becomes a desirable and lucrative commodity, which is why heavily targeted online advertising is a booming multi-billion-dollar business, one upon which ventures such as Google’s search engine, Facebook, YouTube and other major online players are almost entirely dependent for their revenues and astronomical market valuations.

The third point Mason raises, that online networks are predicated upon modes of social cooperation and collectivity which are contradictory to the mode of capitalism they are located within, and thus contain the seeds of a new social system which will eventually replace capitalism itself, is arguably the most complex and interesting point he raises. However, this too is hardly a new statement, as it is one of the central tenets of Michael Hardt and Antonio Negri’s triad of books Empire, Multitude and Commonwealth, as well as being an argument which has been raised in differing forms by theorists such as Bernard Stiegler (via the economy of contribution) and Michel Bauwens (via peer-to-peer production). I won’t go into these positions in much detail here, but what I do think is worth highlighting is that many of these claims about biopolitical production, economies of contribution and peer-to-peer production were originally made quite a while ago (Empire was released in 2000), and that since then we have seen the emergence of the big corporate social media players whose financial model is entirely predicated on the exploitation of the free cooperative labour of their users.

This isn’t to say that people don’t get anything from Facebook (basically some cost-free server storage, a fairly clean user interface, and access to the billion-plus strong Facebook network), but that Facebook’s market valuation of over 250 billion US dollars is entirely built upon its ability to commodify the social relationships of its users. Far from existing outside of, and in opposition to, a capitalism which is wrongly assumed to be monolithic and rigid, we see the way that capitalism (which depends upon finding new areas to provide growth) has found a way of extending what it understood to be a commodity, so that many aspects of our social lives, which were previously thought to be intangible, unquantifiable and thus impossible to monetise, are now major players in global financial markets.

Indeed, whereas during the early days of the internet, the underlying technology itself and the modes of cooperation it made possible – such as the distributed mode of production that underpins Free and Open Source Software – were seen as radical new technologically-enabled alternatives to neoliberal capitalism, what we have seen more recently has been the way that capitalism has been able to find novel ways of reintegrating these innovations into financial markets, such as the way that Google utilises open source software outside of search in areas such as Android and Chrome. Indeed, one of the most interesting analyses of contemporary capitalism comes from Jodi Dean, who argues that our current era is marked by a stage of communicative capitalism, whereby far from forming alternatives to global capitalism, participation in networked digital telecommunications has become a central driver of the capitalist economy.

Mason summarises his argument by stating that:

The main contradiction today is between the possibility of free, abundant goods and information; and a system of monopolies, banks and governments trying to keep things private, scarce and commercial. Everything comes down to the struggle between the network and the hierarchy: between old forms of society moulded around capitalism and new forms of society that prefigure what comes next

This presents a straightforward binary opposition between network and hierarchy: between the new, good digital ways which point towards postcapitalism and the bad, old ones which represent our capitalist past and present. However much I might wish this to be the case – and it would be really lovely to think that current technologies will inevitably lead to the replacement of a system of gross global social inequalities and catastrophic climate change with something better – I find the kind of technological determinism present in Mason’s essay to be blinkered at best. As Gilles Deleuze and Felix Guattari remind us in the introduction to A Thousand Plateaus, it is not a case of opposing hierarchical models with networked and decentralised ones, but a case of understanding how these two tendencies occur in different ways in actual systems which are almost always a combination of the two.

Thinking this way means mapping the new hierarchies and modes of exploitation associated with digital technologies whilst also looking for the lines of flight, or positive ways of transforming the situation, that the new technological formations present. That doesn’t mean that there can be no hope for change that involves technology, but that positing this situation as a good/bad binary opposition, or suggesting that technology itself holds essential characteristics which will necessarily transform society in a particular direction, is a misguided approach. Indeed, some of the most interesting materials coming out of the P2P Foundation recently have argued that openness is not enough, that just making things open or collaborative can lead to growing inequalities, as the actors with the most attentional, algorithmic and economic resources are usually those best placed to leverage open data, open culture and open source ventures. Alongside openness, they argue that we need to think about sustainability and solidarity in order to bring about the type of social and ecological transformation that would mark the end of capitalism. That to me sounds like a far more productive call to action than simply gesturing towards the digital technologies whose introduction has not thus far been accompanied by a more egalitarian and sustainable global society.

Last Friday I was at the Loops + Splices symposium hosted at Victoria University Wellington, and co-organised by Victoria and some of my Massey colleagues. Overall it was a fun and entertaining day with some really strong talks.

The keynote speaker was Professor Ian Christie, who was over from Birkbeck College in London. Christie’s presentation was entitled ‘Denying depth: uncovering the hidden history of 3D in photography and film’ and provided a genealogical/media archaeological exploration of stereography, moving from a range of pre-film technologies through to contemporary 3D cinema such as Avatar. Christie’s starting point was the outright dismissal of 3D as a gimmick by film critics such as the late Roger Ebert and respected editor Walter Murch, who argued that ‘It (3D) doesn’t work with our brains and it never will.’ Christie’s argument developed through the interest which pioneering film theorists such as Andre Bazin and Sergei Eisenstein both had in stereoscopy, and passed through a variety of technologies and techniques by which 3D cinema was a reality in the 19th and 20th centuries. It was interesting to learn that until the stabilisation of photographic techniques through the standardisation enacted by cheap consumer cameras such as the Eastman Kodak, 3D images were as popular and common as their non-stereo counterparts. Christie argued that the adoption of 3D imaging and simulation apparatus by professions such as surgeons and pilots demonstrates the range of utility presented by stereo imaging techniques, and that it is wrong to dismiss the technology on the basis of some of the poor narrative qualities of the 3D films which followed Avatar. It also feels worth noting that Christie was a model keynote speaker throughout the day, being engaged with all the panels, asking thoughtful and pertinent questions, and being kind and generous about the various presentations which followed his keynote.

The morning panel following the keynote was composed of Allan Cameron from the University of Auckland and my Massey colleagues Kevin Glynn and Max Schleser, along with myself. Allan’s paper, “Facing the Glitch: Abstraction, Abjection, and the Digital Face” examined the history of glitch as a form within both music and video, and specifically explored the role of the face within glitch videos. The paper outlined ways in which long group-of-pictures compression – where intermediate frames are interpolated from keyframes – serves as a framework for work which uses compression artifacts, pixelation and glitch as an aesthetic strategy. I have to say that, on a personal level, any paper which shows clips of glitched-up sequences from David Cronenberg’s Videodrome is a winner.

Kevin’s paper “Technologies of Indigeneity: Māori Television and Convergence Culture” comes out of his Marsden-funded project working with Julie Cupples on ‘Geographies of Media Convergence: Spaces of Democracy, Connectivity and the Reconfiguration of Cultural Citizenship.’ The paper focused on NZ media representations of the Urewera raids of 2007, and a more recent case where Air NZ, who prominently feature Māori iconography in their branding, terminated an interview with a woman for having a tā moko (traditional body markings), which they claimed would unsettle their customers. The paper explored impacts associated with the introduction of Māori TV and social networking software such as Facebook and Twitter on the ability of Māori to represent themselves and partake in mediated debates surrounding cultural identity.

Max’s paper “A Decade of Mobile Moving Image Practice” was an overview of some of the changes that have occurred over the last ten years with regards to mobile phone filmmaking. Going from the early days of experimenting with low resolution 3GP files which were not designed to be ingested or edited, through to the contemporary situation whereby a range of mobile phone apps exist to provide varying levels of control for users working in High Definition, Max mapped out some of the ways that the portability and intimacy afforded by mobile phones allow for modes of filmmaking which depart from the intrusive nature of working with digital cinema cameras. It was also highly entertaining to see some decade-old pictures of Max looking very young.

My paper, “ArchEcologies of Ewaste” was a look at how media archaeology and media ecologies can be complementary methods in examining a range of issues pertaining to materiality and the deleterious impacts caused by the toxic digital detritus that we discard, focusing particularly on ewaste in New Zealand, where there currently isn’t a mandatory (or even free) nationwide ewaste collection scheme, unlike in the EU where the WEEE directive mandates that all ewaste must be recycled in high tech local facilities. The prezi for the talk is here if you’re interested.

After lunch there were a couple of panels, with some varied and interesting presentations, from which my two highlights were the papers from Michael Daubs and Allen Meek. Michael’s paper “What’s New is Past: Flash Animation and Cartoon History” conducted a re-evaluation of early rhetorics of the revolutionary newness and democratic and transformative potentials of Flash animation, exploring the way in which a range of cell animation techniques such as layering and keyframing were appropriated into Flash, alongside a detailed history of Flash’s adoption in Web-based animation. The paper concluded by mobilising this archaeological exhumation of past deterministic claims surrounding democratising technologies to interrogate some of the hyperbole surrounding HTML5 and CSS3, the still-being-finalised web standards which incorporate scalable vector graphics into the web itself, thus removing the need for a proprietary Flash layer on top of web-native code.

Allen’s paper, “Testimony and the chronophotographic gesture” examined the historical relationships between gesture, imaging technology and biopolitics. The paper began by exploring ways that early film was utilised under Taylorism as a means by which to quantify bodily movements and gestures in order to recombine them in the most temporally efficient manner so as to enact a form of disciplinarity upon the workforce. This history of gesture on film as a tool for quantifying gesture was contrasted with material from Claude Lanzmann’s Shoah, where gesture was used to reawaken embodied but subconscious memories of the Holocaust, which are being recorded for the documentary as a means of bearing witness to those memories. The dialectic between the employment of film as an apparatus of disciplinarity and as a means of witnessing was theorised via Agamben and Foucauldian biopolitics, and made for a fascinating paper.

There were also interesting and enjoyable papers from Damion Sturm, who examined T20 cricket in Australia as an exemplar of the increasing mediatisation of sport, Kirsten Moana Thompson, who used A Single Man as a case study to explore a range of phenomena surrounding digital technologies and colour in cinema, and Leon Gurevitch, who examined some of the relationships between industrial design practices and computational 3D design and animation practices.

On the whole, it was a hugely enjoyable day, and it was great to meet a range of researchers doing various forms of work around media, archaeology, history and technology. A big thanks to the Loops + Splices organising committee, Kirsten Thompson, Miriam Ross, Kathleen Kuehn, Alex Bevan, Radha O’Meara, and Michelle Menzies, for putting everything together.

Last week I was in Auckland for a couple of days to go to the MINA (Mobile Innovation Network Aotearoa) 2013 Symposium at the Auckland University of Technology. Having just recently arrived in New Zealand the symposium seemed like a great opportunity to meet some researchers and artists working in and around pervasive/locative media, and to see what kinds of mobile media research and praxis are going on in New Zealand.

The conference kicked off with a fascinating keynote from Larissa Hjorth from RMIT in Melbourne. Hjorth looked at practices surrounding current cultural usages of mobile imaging technologies from an ethnographic perspective, and characterised this as second generation research in camera phone studies. Whereas the first wave focussed on mobile imaging through the perspectives of networked visuality, sharing/storing/saving, and vernacular creativity, she characterises second generation camera phone studies as focussing on the notions of emplacement through movement, the prominence of geo-temporal tagging and spatial connectivity, intimate co-presence and re-conceptualising casual play as ambient play.

My other highlights on the first day were a fantastic session on activism and mobile video practices which featured papers from Lorenzo Dalvit and Ben Lenzner. Dalvit explored the uploading of user-shot mobile phone videos to The Daily Sun, a tabloid online newspaper which provides a public forum for citizens to publish and attract widespread attention to instances of police brutality within South Africa. In particular Dalvit focussed on a case where police dragged a Mozambican taxi driver to his death through the streets, and mobile footage posted to the Daily Sun was used to contradict the official police account that the taxi driver was armed, and was thus pivotal in bringing the police officers in question to trial for their actions. Dalvit also highlighted the utility of audiovisual media in cultural contexts where literacy cannot be assumed as universal, and the ways that the Daily Sun provided a forum of public discussion surrounding the commonplace acts of police brutality which are primarily aimed at impoverished black youths in SA.

This was followed by a look at some of Lenzner’s PhD research, which compares the usage of mobile video streaming techniques by US activists such as Tim Pool and the Indian community-activist group India Unheard. Similarly to Dalvit’s South African case study, Pool’s footage of Occupy Wall Street was used in court to quash bogus charges fabricated by police against an Occupy protester, again highlighting the ways that citizen journalism, and in particular video evidence, can provide a powerful tool for constructing counter-narratives to official accounts which are often pure fabrications. Whereas Pool was able to stream video live to UStream, community video activists working for India Unheard have to go somewhere to compress and upload material due to the difference in bandwidth between New York and Mumbai. This forced pause means that they produce activist video which is closer to traditional forms of video activism, providing edited stories rather than just a live stream of events. Both these papers were fantastic examples of how the increasing access to media production tools provides ways for previously unheard voices to be heard, and within a legal context, to provide very strong evidence to contradict official statements from powerful institutions linked to the state.

Also on the Thursday were really interesting papers from Craig Hight and Trudy Lane. Hight’s paper focussed on the implications of emerging digital video software, and in particular the various ways that numerous forms of consumer/prosumer software are automating increasing amounts of the editing process. The paper outlined a number of fairly new tools, such as Magisto, whose marketing claims it ‘automatically turns your everyday videos into beautifully edited movies, perfect for sharing. It’s free, quick, and easy as pie!’ Within the software you select which clips you wish to use, a song to act as the soundtrack and a title, and Magisto assembles your video for you. While Hight was quite critical of the extremely formulaic videos this process produces, it’s interesting to think about what this does in terms of algorithmic agency and the unique ability of software to make the types of decisions normally only associated with humans (what Adrian Mackenzie has described as secondary agency).

Lane by contrast is an artist whose recent project A Walk Through Deep Time was the subject of her paper. While the deep time here is not the same as Siegfried Zielinski’s work into mediation and deep time, it does present an exploration of a non-anthropocentric geological temporality, initially realised through a walk along a 457m fence to represent 4.57 billion years of evolution. The project uses an open-source locative platform called Roundware which provides locative audio with the ability for users to upload content themselves whilst in situ, allowing the soundscape to become an evolving and dynamic entity. The ecological praxis at the heart of Lane’s work was something that really resonated with my interests, and it was great to see that there are really interesting locative art/ecology projects going on here.

The second day of the symposium opened with a keynote from Helen Keegan from the University of Salford. Keegan’s presentation centred on a unit she had run as an alternate reality game entitled Who is Rufi Franzen. The project was a way of getting students to engage in a curious and critical way with the course, rather than through the traditional ways of learning we encounter within lectures and seminars. The project saw the students working together across numerous social media platforms to try and piece together the clues as to who Rufi was, how he had been able to contact them, and what he wanted. The project culminated with the students being led to the Triangle in Manchester, where they were astonished to see their works projected on the BBC-controlled big screen there. It looked like a great project, and a fantastic experience.

My highlight of the second day was a paper by Mark McGuire from the University of Otago, who presented on the topic of Twitter, Instagram and Micro-Narratives (Mark’s presentation slides are online via a link on his blog and well worth a look). Taking cues from Henry Jenkins’s recent work into spreadable media, which emphasises the ways that contemporary networked media foregrounds the flow of ideas in easy-to-share formats, McGuire went on to explore the ways that micro-narratives create a shared collaborative experience whereby, through the frequent sharing of ideas and experiences, content creators become entangled within a web of feedback, or creative ecologies, which productively drives the artistic work. Looking at Brian Eno’s notion of an ecology of talent and applying interdisciplinary notions of connectionist thinking and ecological thought and metaphors, McGuire made a convincing case as to why feedback-rich networks provide a material infrastructure which cultivates communities who learn to act creatively together.

There was also a really interesting paper on the second day from Marsha Berry from RMIT, Melbourne, who built upon Hjorth’s notions of emplaced visuality to explore how creative practices and networked sociality are becoming increasingly entangled. Looking in detail at practices of creating retro-aestheticised images using numerous mobile tools including Instagram and retro camera filters, Berry explored these images as continuous with analogue imaging, as a form of paradox, as Derridean hauntology – a nostalgia for a lost future – and finally as the impulse to create poetic imagery, highlighting that for teenagers today there is no nostalgia for 1970s imaging technologies and techniques which pre-date their birth.

Max Schleser and Daniel Wagner also presented interesting papers, looking at projects they had respectively been running which used mobile phone filmmaking. Schleser outlined the 24 Frames 24 Hours project for workshop videos, which featured a really nice UI designed by Tim Turnidge and looked like a great tool for integrating video, metadata and maps. Schleser explored how mobile filmmaking is important to the emergence of new types of interactive documentary, touching on some of the conceptual material surrounding iDocs. Wagner presented the evolution of ELVSS (Entertainment Lab for the Very Small Screen), a collaborative project which has seen Wagner’s Unitec students working alongside teams from AUT, the University of Salford, Bogota and Strasbourg to collectively craft video-based mobile phone projects. The scale of the project is really quite inspiring in terms of thinking what it’s possible to create in terms of global networked interdisciplinary collaborations within higher education today.

Overall, I really enjoyed attending MINA 2013. The community seems friendly, relaxed and very welcoming, the standard of presentations, artworks and keynotes was really high and it’s really helped me in terms of feeling that there are academic networks within and around New Zealand who’re involved in really interesting work. Roll on MINA 2014.

Last year’s iDocs conference at the Watershed in Bristol was a lively and engaging event which looked at a range of critical, conceptual and practical issues around the emerging field of interactive documentary. It focused on several key themes surrounding the genre: participation and authorship, activism, pervasive/locative media and HTML5 authoring tools.

The conference featured a number of practitioners involved in fantastic projects, such as Jigar Mehta’s 18 Days in Egypt, Brett Gaylor, who made the excellent RIP: A Remix Manifesto and is now at Mozilla working on their Popcorn Maker, an HTML5-based JavaScript tool for making interactive web documentaries, and Kat Cizek (via Skype), whose Highrise project is well worth a look. There were also more theoretically inflected contributions from the likes of Brian Winston, Mandy Rose, Jon Dovey and Sandra Gaudenzi (among many others) which made for a really stimulating couple of days.

Whilst This Land is Our Land presents a really useful introduction to the notion of the commons, demarcating a range of types of commons ranging from communally managed land, through to ‘natural resources’ such as air and water, to public services and the Internet – I think that it’s worth taking a step back and considering whether or not classifying these phenomena as the same thing is really all that useful. Whilst none of them are forms of private property, they do exhibit some differing characteristics that are worth further explication.

The first mode of commons I’d like to discuss is the model of common land – what we could think of as a pre-industrial mode of commons, albeit one which still exists today through our shared ownership of and access to things like air. Common land was accessible for commoners to graze cattle or sheep, or to collect firewood or cut turf for fuel. Anyone had access to this communal resource and there was no formal hierarchical management of the common land – no manager or boss who ensured that no one took too much wood or had too many sheep grazing on the land (although there did exist arable commons where lots were allocated on an annual basis). So access to and ownership of this communal resource was distributed, and management was horizontal rather than hierarchical, but access effectively depended upon geographical proximity to the site in question.

A second mode of commons is that of the public service, which we could conceptualise as an industrial model of commonwealth. For example, consider the National Health Service in the UK: unlike common land, this was a public service designed to operate on a national scale, for the common good of the approximately 50 million inhabitants of the UK. In order to manage such a large-scale, industrial operation, logic dictated that a strict chain of managerial hierarchy be established to run and maintain the health service – simply leaving the British population to self-organise the health service would undoubtedly have been disastrous.

This appears to be a case which supports the logic later espoused by Garrett Hardin in his famed 1968 essay ‘The Tragedy of the Commons’, in which Hardin, an American ecologist, forcefully argued that the model of the commons could only be successful in relatively small-scale endeavours, and that within industrial society it would inevitably lead to ruin, as individuals sought to maximise their own benefit whilst overburdening the communal resource. Interestingly, Hardin’s central concern was actually overpopulation, and he argued in the essay that ‘The only way we can preserve and nurture other, more precious freedoms, is by relinquishing the freedom to breed.’ Years later he would suggest that it was morally wrong to give aid to famine victims in Ethiopia as this simply encouraged overpopulation.

More recent developments, however, have shown quite conclusively that Hardin was wrong: the model of the commons is not doomed to failure in large-scale projects. In part this is due to the fact that Hardin’s model of the commons was predicated on a complete absence of rules – it was not a communally managed asset, but a free-for-all – and in part this can be understood as a result of the evolution of information processing technologies which have revolutionised the ways in which distributed access, project management and self-organisation can occur. This contemporary mode of the commons, described by Yochai Benkler and others as commons-based peer production, or by other proponents simply as peer-to-peer (P2P), resembles aspects of the distributed and horizontal access characteristic of pre-modern commons, but allows access to these projects on a nonlocal scale.

Emblematic of P2P processes have been the Free and Open Source Software (FOSS) and Creative Commons movements. FOSS projects often include thousands of workers who cooperate on making a piece of software which is then made readily available as a form of digital commons, unlike proprietary software, which seeks to restrict access to a good whose cost of reproduction is effectively zero. In addition to the software itself, the source code of the program is made available, crucially meaning that others can examine, explore, alter and improve upon existing versions of FOSS. Popular examples of FOSS include WordPress – which now powers a large proportion of new websites, as it allows users with little technical coding ability to create complex and stylish participatory websites – the web browsers Firefox and Chromium (the open-source project on which Chrome is built), and the combination of Apache (web server software) and Linux (operating system) which together form the back end for most of the servers which host World Wide Web content.

What is really interesting is that in each of these cases, a commons-led approach has been able to economically outcompete proprietary alternatives – which in each case have had huge sums of money invested into them. The prevailing economic logic throughout industrial culture – that hierarchically organised private companies were most effective and efficient at generating reliable and functional goods – was shown to be wrong. A further example which highlights this is Wikipedia, the online open-access encyclopaedia, which according to research is not only the largest repository of encyclopaedic knowledge, but for scientific and mathematical subjects is among the most detailed and accurate. Had you said 15 years ago that a disparate group of individuals, freely cooperating in their spare time over the Internet and evolving community guidelines for moderating content which anyone could alter, would be able to create a more accurate and detailed informational resource than a well-funded, established professional company (say, Encyclopaedia Britannica), most economists would have laughed. But again, the ability of people to self-organise over the Internet based on their own understanding of their interests and competencies has been shown to be a tremendously powerful way of organising.

Of course there are various attempts to integrate this type of crowd-sourced P2P model into new forms of capitalism – it would be foolish to think that powerful economic actors would simply ignore the hyper-productive aspects of P2P. But for people interested in commons and alternative ways of organising, a lot can be taken from the successes of FOSS and Creative Commons.

Where some of this gets really interesting is in the current moves towards Open Source Hardware (OSH), sometimes referred to as maker culture, where we move beyond software, or digital content which can be entirely shared over telecommunications networks. OSH is where the design information for various kinds of device is shared. Key amongst these are 3D printers, such as RepRap, an OSH project to design a machine allowing individuals to print their own 3D objects. Users simply download 3D Computer-Aided Design (CAD) files, which they can customise if they wish, before hitting a print button – just as one would print a word-processing document, except that the information is sent to a 3D rather than a 2D printer. Rather than relying on a complex globalised network whereby manufacturing largely occurs in China, this empowers people to start making a great deal of things themselves. It reduces reliance on big companies to provide the products that people require in day-to-day life, and so presents a glimpse of a nascent future in which most things are made locally, using a freely available design commons. Rather than relying on economies of scale, this postulates a system of self-production which could offer a functional alternative with notable positive social and ecological ramifications.
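The ‘download, customise, print’ workflow is easier to grasp with a concrete toy example. The sketch below is purely illustrative (it is not RepRap’s actual toolchain): it generates a parametric cuboid as an ASCII STL file, the standard triangle-mesh format that 3D printing slicers accept, so that changing the function’s parameters ‘customises’ the shared design before it is sent to a printer.

```python
def box_stl(width, depth, height, name="box"):
    """Return an ASCII STL mesh describing a cuboid of the given dimensions."""
    w, d, h = float(width), float(depth), float(height)
    # The 8 corner vertices of the cuboid (x varies fastest, then y, then z).
    v = [(x, y, z) for z in (0.0, h) for y in (0.0, d) for x in (0.0, w)]
    # 12 triangles (two per face), given as triples of vertex indices.
    faces = [(0, 2, 1), (1, 2, 3),   # bottom
             (4, 5, 6), (5, 7, 6),   # top
             (0, 1, 4), (1, 5, 4),   # front
             (2, 6, 3), (3, 6, 7),   # back
             (0, 4, 2), (2, 4, 6),   # left
             (1, 3, 5), (3, 7, 5)]   # right
    lines = [f"solid {name}"]
    for a, b, c in faces:
        lines.append("  facet normal 0 0 0")  # most slicers recompute normals
        lines.append("    outer loop")
        for i in (a, b, c):
            x, y, z = v[i]
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# 'Customising' the commons design is just a matter of changing parameters:
stl_text = box_stl(20, 20, 10)
```

In practice a downloaded design would be far richer than a cuboid, but the principle is the same: because the design is shared as editable information rather than a finished product, anyone can adjust it before fabrication.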

Under the current economic situation, though, people who contribute to these communities, alongside other forms of commons, are often not rewarded for the work they put in, and so have to sell their labour power elsewhere in order to make ends meet financially. Indeed, this isn’t new: capitalism has always been especially bad at remunerating people who do various kinds of work which are absolutely crucial to the functioning of a society – with domestic work and raising children being the prime examples. So the question is, how could this be changed so as to reward people for contributing to cultural, digital and other forms of commons?

One possible answer which has attracted a lot of commentary is the notion of a universal basic income. Here the idea is that as all citizens are understood to actively contribute to society via their participation in the commons, everyone should receive sufficient income to subsist – to pay rent, bills, feed themselves and their dependants, alongside having access to education, health care and some form of information technology. This basic income could be supplemented through additional work – and it is likely that most people would choose to do this (not many people enjoy scraping by with the bare minimum) – however, if individuals wanted to focus on assisting sick relatives, contributing to FOSS projects or helping out at a local food growing cooperative, they would be empowered to do so without the fear of financial ruin. As an idea it’s something that has attracted interest and support from a spectrum including post-Marxists such as Michael Hardt and Antonio Negri through to liberals such as the British Green Party. It certainly seems an idea worth considering, albeit one which is miles away from the Tory rhetoric of strivers and skivers.
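The incentive structure behind this – a floor that is never withdrawn, so additional work always adds to total income – can be made concrete with some toy arithmetic. The figures below are purely illustrative assumptions, not drawn from any actual proposal:

```python
def income_with_ubi(earnings, basic=1000):
    """An unconditional basic income: the grant never tapers away,
    so every pound earned adds a full pound to total income."""
    return basic + earnings

def income_means_tested(earnings, benefit=1000, taper=0.65):
    """A means-tested benefit withdrawn at a taper rate as earnings
    rise, so part of every pound earned is clawed back."""
    remaining = max(0.0, benefit - taper * earnings)
    return remaining + earnings

# Someone earning 500 on top of the (hypothetical) 1000 floor:
print(income_with_ubi(500))      # 1500: extra work is rewarded in full
print(income_means_tested(500))  # 1175.0: 325 of the benefit withdrawn
```

The contrast illustrates why supplementing a basic income is always worthwhile, whereas means-tested systems can leave people little better off for working more.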

For more details on P2P check out the Peer to Peer Foundation which hosts a broad array of excellent articles on the subject.

The article is called Escaping Attention: Digital Media Hardware, Materiality and Ecological Cost, and it looks at ways that discourses around the attention economy and immateriality tend to obscure various material ecological impacts of digital technologies. It’s part of a special edition on the attention economy which was co-edited by Patrick Crogan and Sam Kinsley from the Digital Cultures Research Centre at UWE. Material for the journal was drawn from the 2010 European Science Foundation funded conference entitled ‘Paying Attention: Digital Media Cultures and Generational Responsibility,’ which was convened by the Digital Cultures Research Centre.

Alongside my contribution, the special edition features a substantial introduction by the editors, which presents a critical examination of the workings of the ‘attention economy’ in the context of today’s rapidly emerging realtime, ubiquitous, online digital technoculture. It re-focusses work on this theme of attention in light of the current and emerging digital technocultural media sphere of smart devices, the pervasive mediation of experience, and the massive financial speculation in the attention-capturing potential of social networking media. The special issue includes an interview by Kinsley with P2P Foundation founder Michel Bauwens, essays by key theorists of attention Jonathan Beller, Bernard Stiegler and Tiziana Terranova, and several papers on topics from Facebook and Free and Open Source Software, to the problematic role of digital social networking in Istanbul’s recent (2010) European Capital of Culture project. It’s really great to be published alongside such thought-provoking and insightful pieces.

The Pervasive Media Cookbook is a Hackspace-meets-Jamie-Oliver mash-up where art and engineering mix to inspire entry-level producers to get involved with the new world of location-based media experiences. It promotes the emerging field of Pervasive Media by showing how 12 innovative experiments were made.

The Cookbook launch is the culmination of a two-year AHRC-funded Knowledge Transfer Fellowship led by Professor Jon Dovey as a partnership with the Watershed Arts Trust. The project worked with Creative Economy partners to define the language of Pervasive Media and support the development of its market. Over two years the team ran user tests, workshops, conferences and seminars reaching over 400 people and working closely with SW digital businesses.

The video features excerpts from talks given at the launch event by Professor Jon Dovey of the DCRC and Clare Reddington, director of iShed and the Pervasive Media Studio, alongside interviews with a number of participants whose work is featured in the Pervasive Media Cookbook.