Tag Archives: augmented reality

About two years ago, the exhibition On Display: Immemory, Soft Cinema, After Video at Bilkent University in Ankara brought together projects by Chris Marker, Lev Manovich, and the contributors to the “video book” after.video — including the collaborative AR piece “Scannable Images” that Karin Denson and I made. Recently, Oliver Lerone Schultz (one of the editors of after.video) brought to my attention this “critical tour” of the exhibition, which takes the form of a discussion between Ersan Ocak and Andreas Treske. It is audio only, and you might need to turn up the volume a bit, but it’s an interesting discussion of video and media art.

(See here for more on after.video. Also, I should note that the AR on “Scannable Images” is currently not working due to the ephemeral business models of AR platforms these days, but I hope to port it over to a new platform and get it up and running again soon!)

I am excited to be participating in the NEH-funded Virtual and Augmented Reality Digital Humanities Institute — or V/AR-DHI — next month (July 23 – August 3, 2018) at Duke University. I am hoping to adapt “deformative” methods (as described by Mark Sample following a provocation from Lisa Samuels and Jerome McGann) as a means of transformatively interrogating audiovisual media such as film and digital video in the spaces opened up by virtual and augmented reality technologies. In preparation, I have been experimenting with photogrammetric methods to reconstruct the three-dimensional spaces depicted on two-dimensional screens. The results, so far, have been … modest — nothing yet in comparison to artist Claire Hentschker’s excellent Shining 360 (2016) or Gregory Chatonsky’s The Kiss (2015). There is something interesting, though, about the dispersal of the character Neo’s body into an amorphous blob and the disappearance of bullet time’s eponymous bullet in this scene from The Matrix, and there’s something incredibly eerie about the hidden image behind the image in this famous scene from Frankenstein, where the monster’s face is first revealed and his head made virtually to protrude from the screen through a series of jump cuts. Certainly, these tests stand in an intriguing (if uncertain) deformative relation to these iconic moments. In any case, I look forward to seeing where (if anywhere) this leads, and to experimenting further at the Institute next month.

At the upcoming SCMS conference in Chicago, I will be participating in a workshop on “Deformative Criticism and Digital Experimentations in Film & Media Studies” (panel K3 on Friday, March 24, 2017 at 9:00am):

Deformative criticism has emerged as an innovative site of critical practice within media studies and digital humanities, revealing new insights into media texts by “breaking” them in controlled or chaotic ways. Deformative criticism includes a wide range of digital experiments that generate heretical and non-normative readings of media texts; because the results of these experiments are impossible to know in advance, they shift the boundaries of critical scholarship. Media scholars are particularly well suited to such experimentation, as many of our objects of study exist in digital forms that lend themselves to wide-ranging manipulation. Thus, deformative criticism offers a crucial venue for defining not only contemporary scholarly practice, but also media studies’ growing relationship to digital humanities.

Also participating in the workshop will be Jason Mittell (Middlebury College), Stephanie Boluk (UC Davis), Kevin L. Ferguson (Queens College, City University of New York), Mark Sample (Davidson College), and Virginia Kuhn (USC).

My own presentation/workshop contribution will focus on glitches and augmented reality as a deformative means of engaging with changing media-perceptual configurations, including the following case study:

Glitch, Augment, Scan

Scannable Images is a collaborative art/theory project by Karin + Shane Denson that interrogates post-cinema – its perceptual patterns, hyperinformatic simultaneities, and dispersals of attention – through an assemblage of static and animated images, databending and datamoshing techniques, and augmented reality (AR) video overlays. Viewed through the small screen of a smartphone or tablet – itself directed at a computer screen – only a small portion of the entire spectacle can be seen at once, thus reflecting and emulating the selective, scanning regard of post-cinematic images and confronting the viewer with the materiality of the post-cinematic media regime through the interplay of screens, pixels, people, and the physical and virtual spaces they occupy.

The augmented reality piece featured on the cover of Post-Cinema: Theorizing 21st-Century Film (http://reframe.sussex.ac.uk/post-cinema/), a collaborative piece made by Karin Denson and me, was displayed recently at a glitch-oriented gallery show organized by some nice people associated with Savannah College of Art and Design.

Recently, I posted about a project called after.video, which contains an augmented reality (AR) glitch/video/image-based theory piece that Karin Denson and I collaborated on. It has now been announced that the official launch of after.video, Volume 1: Assemblages — a “video book” consisting of a paperback book and video elements stored on a Raspberry Pi computer packaged in a VHS case, which will also be available online — will take place at the Libre Graphics Meeting 2016 in London (Sunday, April 17th at 4:20pm).

I just saw the official announcement for this exciting project, which I’m proud to be a part of (with a collaborative piece I made with Karin Denson).

after.video, Volume 1: Assemblages is a “video book” — a paperback book and video stored on a Raspberry Pi computer packaged in a VHS case. It will also be available as online video and book PDF download.

Edited by Oliver Lerone Schultz, Adnan Hadzi, Pablo de Soto, and Laila Shereen Sakr (VJ Um Amel), it will be published this year (2016) by Open Humanities Press.

The piece I developed with Karin is a theory/practice hybrid called “Scannable Images: Materialities of Post-Cinema after Video.” It involves digital video, databending/datamoshing, generative text, animated gifs, and augmented reality components, in addition to several paintings in acrylic (not included in the video book).

Theorising a World of Video

after.video realizes the world through moving images and reassembles theory after video. Extending the formats of ‘theory’, it reflects a new situation in which world and video have grown together.

This is an edited collection of assembled and annotated video essays living in two instantiations: an online version – located on the web at http://after.video/assemblages, and an offline version – stored on a server inside a VHS (Video Home System) case. This is both a digital and analog object: manifested, in a scholarly gesture, as a ‘video book’.

We hope that different tribes — from DIY hackercamps and medialabs, to unsatisfied academic visionaries, avantgarde-mesh-videographers and independent media collectives, even iTV and home-cinema addicted sofasurfers — will cherish this contribution to an ever more fragmented, ever more colorful spectrum of video-culture, consumption and appropriation…

Cover artwork and booklet design: Jacob Friedman
Copyright: the authors
Licence: after.video is dual licensed under the terms of the MIT license and the GPL3 (http://www.gnu.org/licenses/gpl-3.0.html)
Language: English
Assembly On-demand
OpenMute Press

Acknowledgements

Co-Initiated + Funded by

Art + Civic Media as part of Centre for Digital Cultures @ Leuphana University.
Art + Civic Media was funded through Innovation Incubator, a major EU project financed by the European Regional Development Fund (ERDF) and the federal state of Lower Saxony.

Ever since our old AR platform was bought out and shut down by Apple, the “data gnomes” that Karin and I developed in conjunction with the Duke S-1: Speculative Sensation Lab’s “Manifest Data” project have been bumbling about in digital limbo, banished to 404 hell. So today I finally made the first steps in migrating our beloved creatures over to a new AR platform (Wikitude), where they’re starting to feel at home. While I was at it, I went ahead and reprogrammed my business card:

The QR code on the front now redirects the browser to shanedenson.com, while the AR content on the back side is made visible with the Wikitude app (free on iOS or Android) — just search for “Shane Denson” and point your phone/tablet’s camera at the image below:

(In case you’re wondering what this is: it’s a “data portrait” generated from my Internet browsing behavior. You can make your own with the code included in the S-1 Lab’s Manifest Data kit.)