Action Coding

BodyLang gestures

P5 gesture & coding window (in real time) performed by Caitlin Sikora

P5 Dictionary

P5 Dictionary

P5 Dictionary

Arduino version, performed at Judson Church, Spring 2016

Arduino version performed at Your Move dance festival, October 2015

This performance of the system is an experiment in live, choreographed computer coding; mistakes may happen and performers may adapt. Dancers write computer code through movement: their gestures are captured by the computer’s vision (a Kinect) and translated by gesture recognition software into the variables, commands, and functions of a program, e.g., {, <, digitalWrite, rest, ().
Dancers will each code a different program instructing the computer to perform various on and off states for light bulbs.

An Excerpt from the Arduino Dictionary drawn for motion capture skeleton

Arduino dictionary detail

System schematic

ACTION CODING is a research work asking the question: what if computer coding were an embodied and visible practice? Could coding be performed and learned in dance clubs, at recess, or in gyms if it sat somewhere between sport, hip-hop, and sign language?

Action Coding challenges current systemic biases of software development by inserting the body as an input device into an increasingly disembodied system. It is a speculative project concerned with the future of the body in a digital world, and a working system consisting of open-source machine learning software and a Kinect that translates physical input into digital output across a wide variety of coding environments.

Investigations in computer vision, movement languages, and machine learning resulted in gesture libraries for three coding environments—Arduino, P5, and BodyLang, a custom stack language written by Ramsey Nasser—performed in two live performances, and several video works.

Action Coding is a system comprising a Kinect and a gesture recognition application that translates full-body movements into information inputs for a variety of other applications (Arduino, Processing, Max/MSP). It is an artist’s research project into alternate methods of learning how to code. Inspired by the artist’s own struggle to learn the abstract literacies and processes of coding, it asks how coding might be made more visible and tangible, like breakdancing, sign language, or aerobic exercise.

A visible and repeatable series of full-body actions aids in the transfer of the building blocks of coding: the mind learns through the body, syntax becomes repeatable phrases, and logic becomes physical patterns, like a dance. The procedural memory required by the physical process amplifies the procedural memory required by computer coding, and the motor programs acquired through this process underscore the computational programs of code. Because coding in Action Coding is, in part, a function of motor learning, a new ‘coder’ may learn and internalize syntax and logic patterns more quickly(1,2), and because they are taken in through the full neuromuscular system, retain them longer(3).

To shift the performance of coding from fingers on a keyboard to the body in space, the artist collaborated with Gene Kogan, a machine-learning expert, and Morgan Hille-Refakis, a choreographer. In the first phase of the project, the group devised a movement language and a system for recognizing the elements of that language and translating them into the functions, variables, and syntax of Arduino. Arduino was chosen because it is a relatively simple language to learn in process and syntax, and because it offers a physical outcome (making a sound, turning a light on, etc.).

By performing gestures from this movement dictionary, anyone (recognized by the Kinect and performing in time to a pre-set capture window) can begin to string together the necessary grammar and syntax of Arduino to, for example, set a single light or group of lights blinking.
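The translation step can be imagined roughly as a lookup from classified gestures to source tokens. The gesture names and token mapping below are hypothetical stand-ins for illustration, not the project’s actual dictionary:

```python
# Hypothetical gesture-to-token dictionary; the real Action Coding
# movement dictionary maps drawn gestures, not these invented labels.
GESTURE_TOKENS = {
    "open_paren": "(",
    "close_paren": ")",
    "digital_write": "digitalWrite",
    "pin_13": "13",
    "comma": ",",
    "high": "HIGH",
    "semicolon": ";",
}

def assemble(gesture_sequence):
    """Translate a sequence of classified gestures into Arduino source text."""
    return " ".join(GESTURE_TOKENS[g] for g in gesture_sequence)

# A dancer's sequence of recognized gestures becomes one line of code:
line = assemble(["digital_write", "open_paren", "pin_13", "comma",
                 "high", "close_paren", "semicolon"])
print(line)  # digitalWrite ( 13 , HIGH ) ;
```

Strung together over many capture windows, such lines accumulate into a complete program the target environment can run.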

Based on this first version, a second library for coding in the P5 environment was created with Caitlin Sikora, allowing anyone to perform drawings and animations in a whole new mode of physical engagement.

Ramsey Nasser wrote a custom language, BodyLang, for this project. Based on Logo, BodyLang is a stack language for drawing. Of the three iterations of the project, the nature of the stack language allows the most direct connection between gesture and code: as new lines are added to the stack, the code executes in real time with no need to compile, upload, or play.
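A minimal sketch can show how a Logo-inspired stack language for drawing executes line by line. The words `forward` and `turn` here are illustrative assumptions; BodyLang’s actual vocabulary and semantics are not documented in this text:

```python
# Toy stack-language interpreter: numbers push themselves onto the stack,
# and drawing words pop their arguments. Each line runs as soon as it
# arrives -- no compile/upload/play step, as with BodyLang.
import math

class StackDrawer:
    def __init__(self):
        self.stack = []            # operand stack
        self.x, self.y = 0.0, 0.0  # pen position
        self.heading = 0.0         # degrees
        self.path = [(0.0, 0.0)]   # points visited: the resulting drawing

    def run_line(self, line):
        """Execute one line of stack code immediately."""
        for word in line.split():
            if word.lstrip("-").isdigit():
                self.stack.append(float(word))  # literal: push
            elif word == "forward":             # pop distance, move the pen
                d = self.stack.pop()
                self.x += d * math.cos(math.radians(self.heading))
                self.y += d * math.sin(math.radians(self.heading))
                self.path.append((round(self.x, 6), round(self.y, 6)))
            elif word == "turn":                # pop angle, rotate heading
                self.heading += self.stack.pop()

drawer = StackDrawer()
for gesture_line in ["10 forward", "90 turn", "10 forward"]:
    drawer.run_line(gesture_line)   # each new line executes in real time
print(drawer.path)  # [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
```

Because each gesture appends and immediately executes one line, the dancer sees the drawing respond as the movement completes.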

The system requires a Kinect and a laptop through which to run it. The Kinect/PC configuration is connected to another laptop running Gene Kogan’s Kinect2Gesture application, along with any peripherals required by the coding environment, such as Arduino or P5 (i.e., an Arduino board, and/or a monitor or projector).

Kinect2Gesture is a free and open source application which uses a neural network to classify, in real time, the physical motions of a full-body coder being tracked by a Kinect depth camera. The coder performs a sequence of choreographed gestures in improvised order, each of which has been associated with a particular class or follow-up action. Simultaneous to the performance, the application sends the classification decisions over a network to other computers or applications which act upon the data, for example an Arduino or some audiovisual software. This has the effect of augmenting the dancer’s movements across multiple modes and media.
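The networked hand-off of classification decisions might look like the following sketch. Kinect2Gesture’s actual wire format is not specified here, so this assumes JSON over UDP purely for illustration:

```python
# Hypothetical sketch of sending classification decisions to a listening
# application; the real Kinect2Gesture transport and message format may differ.
import json
import socket

def send_classification(sock, address, label, confidence):
    """Broadcast one classification decision as a small JSON datagram."""
    msg = json.dumps({"gesture": label, "confidence": confidence}).encode()
    sock.sendto(msg, address)

# Receiver side: an audiovisual app (or an Arduino bridge) acts on each decision.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))        # OS picks a free loopback port
addr = recv.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_classification(sender, addr, "digitalWrite", 0.93)

data, _ = recv.recvfrom(1024)
decision = json.loads(data)
print(decision["gesture"])  # the receiver can now trigger its mapped action
```

Decoupling classification from action over the network is what lets one performance drive Arduino lights, P5 drawings, or audiovisual software interchangeably.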

Kinect2Gesture, the application written by Gene Kogan for Action Coding, differs from other full-body gestural systems in that it uses machine learning algorithms to create gesture libraries. Users may train a computer to recognize any single gesture performed within a pre-set time frame, which the application uses to define the start and end parameters of the movement. To train the system on a new gesture, the gesture must be performed repeatedly (20-60 times). Each repeated performance generates a data set that the computer ‘learns’ from. The wider the variety of approaches in the training process, the greater the system’s accuracy when predicting. As users extend the system with their own gesture libraries, they can apply those libraries to a variety of coding environments, from Arduino to P5 and beyond. As an application, Kinect2Gesture is not constrained to any particular development environment, nor is anyone who engages with it constrained to a limited library of pre-made gestures.
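The training loop above can be illustrated with a toy classifier: repeated performances of each gesture become labeled examples, and the model learns a boundary between gestures. This nearest-centroid scheme is a simplified stand-in for Kinect2Gesture’s actual neural network, and the gesture names and features are invented:

```python
# Toy illustration of training by repetition (nearest-centroid classifier).

def centroid(examples):
    """Average the feature vectors of one gesture's repeated performances."""
    n = len(examples)
    return [sum(col) / n for col in zip(*examples)]

def train(labeled_examples):
    """labeled_examples: {gesture_name: [feature_vector, ...]}"""
    return {name: centroid(ex) for name, ex in labeled_examples.items()}

def classify(model, features):
    """Predict the gesture whose centroid is closest to a new performance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda name: dist(model[name], features))

# Each gesture performed repeatedly (here 3 times instead of 20-60);
# variation between takes is what makes the learned class robust:
model = train({
    "raise_arms": [[0.9, 0.1], [1.0, 0.2], [0.8, 0.0]],
    "crouch":     [[0.1, 0.9], [0.0, 1.0], [0.2, 0.8]],
})
print(classify(model, [0.85, 0.15]))  # raise_arms
```

A real system classifies full skeleton trajectories rather than two-number features, but the principle is the same: more varied repetitions produce a more reliable class.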

Action Coding is best understood in a demonstration format. A system operator can run the system while engaging visitors in the project’s details and movement language, performing pieces of the language for visitors and teaching pieces of it to them, so that they may most fully understand the concept and outcome.