ABSTRACT. Increasingly popular wearable sensing and feedback technology is starting to consider the emotional body as a means for creating affect-competent real-life applications. In this talk, I will discuss opportunities offered by the sensed emotional body and the challenges that need to be addressed for real-world ubiquitous applications. The discussion will be grounded in our work to support chronic pain physical rehabilitation in everyday activity and on altering one’s body perception in healthy populations and in people with body dysmorphic disorder.

Piano&Dancer - Interaction Between a Dancer and an Acoustic Instrument

SPEAKER: unknown

ABSTRACT. Piano&Dancer is an interactive piece for a dancer and an electromechanical acoustic piano. The piece presents the dancer and the piano as two performers on stage whose bodily movements are mutually interdependent. This interdependence reveals a close relationship between physical and musical gestures. Accordingly, the realisation of the piece has been based on creative processes that merge choreographic and compositional methods. In order to relate the expressive movement qualities of a dancer to the creation of musical material, the piece employs a variety of techniques. These include methods for movement tracking and feature analysis, generative algorithms for creating musical structures, and the application of non-conventional scales and chord transformations to shape the modal characteristics of the music. The publication contextualises Piano&Dancer by relating its creation to concepts of embodiment, interactivity and musical structure and by discussing opportunities for creative cross-fertilisation between dance choreography and musical composition. It also provides some details about the challenges and potentials of integrating a mechanical musical instrument into an interactive setting for a dance performance. Finally, the paper highlights some of the technical and aesthetic principles that were used in order to connect expressive qualities of body movements to the creation of music structures.

ABSTRACT. The performing arts, and dance in particular, have been considered intangible cultural heritage by UNESCO since 2003. This acknowledgement reflects the importance of preserving the knowledge generated within this art form for future generations. Nevertheless, clear methodological approaches to what should be preserved, and how, are still lacking. When considering creative processes this seems an even more daunting task, as it goes beyond simply documenting the final product of a creation.
Recent advancements in technology have made it possible to consider approaches to capturing data other than video or photography, which are mostly static and offer a single viewpoint. In this paper we describe how 3D data capture and point cloud visualization techniques have been used to capture and document João Fiadeiro's choreographic and compositional processes. Together with Fiadeiro we have identified a sub-set of core concepts of his method, which we then used to conduct two improvisation sessions involving Fiadeiro and his dancers. These concepts have served as the basis for the development of new visualization techniques that better illustrate, in an interactive system, the complexity of Fiadeiro's creative process.

ABSTRACT. This paper explores movement and its capacity for meaning-making and eliciting affect in human-robot interaction. Bringing together creative robotics, dance and machine learning, our research project develops a novel relational approach that harnesses dancers’ movement expertise to design a non-anthropomorphic robot, its potential to move and capacity to learn. The project challenges the common assumption that robots need to appear human or animal-like to enable people to form connections with them. Our performative body-mapping (PBM) approach, in contrast, embraces the difference of machinic embodiment and places movement and its connection-making, knowledge-generating potential at the center of our social encounters. The paper discusses the first stage of the project, in which we collaborated with dancers to study how movement propels the becoming-body of a robot, and outlines our embodied approach to machine learning, grounded in the robot’s performative capacity.

The Delay Mirror: a Technological Innovation Specific to the Dance Studio

SPEAKER: unknown

ABSTRACT. This paper evaluates the use of the Delay Mirror (DM) in the dance studio. The DM is a device that records a video stream and renders it immediately on a large screen, but with a delay of a few seconds. A dancer can observe her own movements in the same way she would when looking at a normal mirror; the delay, however, allows her to observe dynamic movements that usually cannot be observed other than on video. We evaluate whether this device can be useful in the context of a dance class, and whether it complements the normal mirror, while being less intrusive than a normal video recording that is recorded and then re-played, possibly interrupting the workflow. A qualitative evaluation based on participant observation and in-depth interviews was performed in the context of an advanced-level adult ballet course.
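The core of such a device is a fixed-delay frame buffer: frames enter live and leave a few seconds later. The sketch below is illustrative, not the authors' implementation; frame objects and rates are placeholders.

```python
from collections import deque

class DelayMirror:
    """Fixed-delay video buffer: frames go in live and come out
    delay_seconds later. Frames can be any object (e.g. numpy images)."""

    def __init__(self, delay_seconds=4, fps=30):
        # Ring buffer holding exactly delay_seconds worth of frames.
        self.buffer = deque(maxlen=delay_seconds * fps)

    def push(self, frame):
        """Store the newest frame; return the delayed frame to display,
        or None while the buffer is still filling up."""
        if len(self.buffer) == self.buffer.maxlen:
            delayed = self.buffer.popleft()
        else:
            delayed = None
        self.buffer.append(frame)
        return delayed
```

In a capture loop, each camera frame is pushed and the returned (delayed) frame, once available, is drawn to the large screen.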

ABSTRACT. The paper discusses issues of rhythmicality in the MotionComposer, a therapy device for persons with different abilities that turns movement into music using video-based motion tracking. The design of the device faces an inherent challenge: it aims at both a low barrier to entry and a result rhythmical enough to induce further movement, yet since users are both dancing to the beat of the music and creating it at the same time, they must move rhythmically enough to produce a satisfying result, or the feedback loop breaks down. To address the problem, we apply a number of strategies, which we call triggering, accenting and adaptive. This paper discusses the pros and cons of the various approaches, referring to experiences gathered in the field, and concludes by summarizing possibilities for improvements in the next version of the device.
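One minimal reading of the "triggering" strategy is to snap movement-triggered sound events to the underlying beat grid, so that loosely timed movements still produce rhythmic output. This is a hypothetical sketch; the MotionComposer's actual logic is not detailed in the abstract.

```python
def quantize_trigger(t, bpm=120):
    """Snap a movement trigger at time t (seconds) to the nearest beat
    of the ongoing music, so imprecise movements still sound rhythmic."""
    beat = 60.0 / bpm          # beat period in seconds
    return round(t / beat) * beat
```

An "adaptive" variant might instead nudge the beat grid toward the user's trigger times, meeting less rhythmic movers halfway.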

ABSTRACT. Jinn-Ginnaye is an exploration of movement in place. It is a collection of dance pieces exploring issues of bringing western dance performance to the United Arab Emirates, where local modesty laws influence how women can be shown in public. The pieces use video compositing, motion capture, and Virtual Reality techniques to remove the body of the dancer, but leave behind the dance, and the traces of the desert in which it was created.

ABSTRACT. We have implemented a 2D serious game based on collaboration between players instead of a competitive scenario. It is based on controlling the players' shared center of mass, a physical concept that links participants in real time. We will explain the main pedagogical impacts of this collaborative movement from K-12 to university.
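The linking quantity is the mass-weighted mean of the players' tracked positions. A minimal sketch, with player masses and tracking as placeholders:

```python
import numpy as np

def shared_center_of_mass(positions, masses=None):
    """Weighted mean of the players' 2D positions: the physical concept
    that links participants in real time. Equal masses by default."""
    positions = np.asarray(positions, dtype=float)   # (players, 2)
    if masses is None:
        masses = np.ones(len(positions))
    masses = np.asarray(masses, dtype=float)
    return (masses[:, None] * positions).sum(axis=0) / masses.sum()
```

Updating this point every frame from each player's tracked position gives the single shared object the group must steer together.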

ABSTRACT. This paper presents our recent work on the relevance of introducing digital technologies into education, especially in kindergarten. Specifically, we focus on digital technologies that allow any kind of movement tracking, and on how they can enhance teaching and learning potential in various fields. The prototype we submit is the first of a series focusing on writing and reading education. We present the various influences that led us to this prototype and describe perspectives for further experimentation. We also discuss how this initial work can motivate similar studies in other fields dealing with body gestures.

ABSTRACT. Becoming Light is an immersive world made for live performance and for virtual reality. As a virtual reality installation, participants are free to interact with the world of light and sound on a path through memory and dream-like space. The motions of the body are remembered within the world and re-encountered as ghost-like storytellers along the journey. As the pathway through the world unfolds, voices and recorded poetry are discovered, revealing an ethereal narrative. The shape, timing, and velocity of the body change the way the story is experienced.

As a performance piece, the virtual reality headset is replaced with projectors and a stage. A solo dancer guides the audience through the world of light while following an improvisational somatic movement score.

ABSTRACT. The Box is a prototype 3D puzzle game to study two-handed motion control schemes for spatial rotation. Using two handheld motion tracking devices, players are tasked with rotating a maze cube to roll a small sphere towards a goal inside the cube. We designed the game to iteratively observe how people would spontaneously use the controls, and to develop and refine a ‘natural’ control scheme from that. Initial results indicate no immediately clear best-practice principles.
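One candidate two-handed mapping (an assumption for illustration, not necessarily the scheme the study converged on) is to rotate the cube by the rotation that carries the previous hand-to-hand vector onto the current one, computed via Rodrigues' formula:

```python
import numpy as np

def rotation_between(v0, v1):
    """Rotation matrix taking direction v0 to direction v1, e.g. the
    previous and current vectors between the two handheld trackers."""
    v0 = np.asarray(v0, dtype=float)
    v1 = np.asarray(v1, dtype=float)
    v0 = v0 / np.linalg.norm(v0)
    v1 = v1 / np.linalg.norm(v1)
    axis = np.cross(v0, v1)
    s = np.linalg.norm(axis)        # sin of the angle
    c = np.dot(v0, v1)              # cos of the angle
    if s < 1e-9:                    # vectors parallel: no unique axis
        return np.eye(3)
    axis = axis / s
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues' rotation formula
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```

Applying this incremental rotation to the cube each frame lets the pair of controllers steer it like a rigid rod held at both ends; note this mapping cannot express twist about the hand-to-hand axis, which is exactly the kind of ambiguity such a study probes.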

ABSTRACT. The Stream Project’s founding members, two dancers and a neuroscientist, explored the possibilities of using dancers’ physiological information to create a series of works, called Wired, that disrupts and informs viewers’ understanding of their own physiological state. Brain wave states, heart rate variability and respiratory rate were used to create a series of artistic dance works. The project focuses on bringing scientific exploration into a creative environment, taking full advantage of the visual and auditory possibilities already being used within the field. Wired is an exciting collaboration between dance performance, neuroscience, film, sound and lighting, culminating in a live-feed multimedia performance.

The project worked with a creative coder to develop an installation in which an audience member's heart rate selects different sections of dance film footage. The footage shows dancer Genevieve Say dancing on a bridge in the Peak District. The audience member holds an object with heart rate sensors embedded in it, and their heart rate dictates the section of the film shown.
The installation was shown as part of the Wired series at FACT (Foundation for Art and Creative Technology), Liverpool in 2015. The footage has also been used to make a dance film.
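The heart-rate-to-section mapping can be as simple as thresholding the measured rate into bands. The band boundaries and section count below are placeholders; the installation's actual values are not published.

```python
def section_for_heart_rate(bpm, boundaries=(70, 90, 110)):
    """Map a visitor's heart rate (beats per minute) to a film section
    index: below 70 -> section 0, 70-89 -> 1, 90-109 -> 2, 110+ -> 3."""
    for i, b in enumerate(boundaries):
        if bpm < b:
            return i
    return len(boundaries)
```

Smoothing the sensor reading (e.g. a running average over a few beats) before mapping would prevent the film jumping between sections on noisy samples.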

We would like to propose to show the Wired 2 installation at MOCO17 as we think that it would be a good opportunity for us to show the work and get feedback on areas for development.

ABSTRACT. We present a platform to assist dance students, teachers and choreographers in learning, practicing and teaching dance principles, and in creating new choreographies. The aim of this work is to present the ongoing progress of the WHOLODANCE EU ICT project. Our demonstrations will show different aspects and applications developed within the framework of the project, including: an online repository of dance sequences from different dance styles captured with motion capture; a browsing and visualization interface for the repository that makes use of augmented reality and holographic displays; a “movement sketch” application in which participants’ movement is recorded using low-spec technology, analysed and used to retrieve similar examples from the existing repository; and an authoring tool that blends and merges two different dance sequences into a new one.

ABSTRACT. We demonstrate our prototype, COMO, which allows for distributed and collective gesture recognition. Gestures can be recorded on mobiles, thanks to the motion-sensing capabilities of smartphones. All recorded gestures are then available on a server and can be retrieved on any other connected mobile. The recognition algorithm can run on webpages, on each mobile. During the demonstration, users will be able to test the system using their own mobiles and participate in collective scenarios, including gesture design and music playing using user-defined gestures.
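COMO's actual recognizer is not specified in this abstract; as a stand-in, nearest-template classification with dynamic time warping (DTW) illustrates how a vocabulary of user-recorded gestures can be matched against a new motion signal:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D motion signals,
    tolerant to differences in speed between query and template."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(query, templates):
    """Return the label of the stored gesture template closest to the query."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

In a distributed setting like COMO's, the `templates` dictionary would be fetched from the shared server, while `recognize` runs locally on each device.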

ABSTRACT. A phenomenological approach to interaction design puts the body at the center of inquiry. When designing body-centric interfaces, reflective awareness of how the body moves is an important aspect for consideration. This paper presents a full-body interactive system that allows end users to explore movements using dynamic feedforward visualizations of movement pathways. We propose a demonstration of the system as an interactive installation through which participants are encouraged to move in order to interact with visual representations of their movement characteristics. The system captures each participant’s movement in terms of spatial trajectories and dynamic qualities. It then encourages the participants to improvise using existing movement ideas by embodying, appropriating, and varying them. The global visual canvas makes visible the movement traces of the participants over a specified period of time as feedback and the future possibilities and movement potential as feedforward.

ABSTRACT. Polytropos Project is a set of experiments designed to explore aspects of creativity through the blending of elements from multiple mediums of expression and communication including sound, language, image and movement. The theory underlying and directing the process of blending is Joseph Goguen’s computational account of conceptual blending complemented by an understanding of style as a choice of blending principles. In our view, such experiments allow us to explore the creative possibilities of computational conceptual blending in the field of multimedia art practice presenting possibilities of self-propagating AI modes of creativity.

ABSTRACT. This paper describes an experiment in which the subjects performed a sound-tracing task to vocal melodies. They could move freely in the air with two hands, and their motion was captured using an infrared, marker-based system. We present a typology of distinct strategies used by the recruited participants to represent their perception of the melodies. These strategies appear as ways to represent time and space through the finite motion possibilities of two hands moving freely in space. We observe these strategies and present their typology through qualitative analysis. We numerically verify the consistency of these strategies by conducting tests of significance between labeled and random samples.

ABSTRACT. We present a first approach to an autonomous virtual agent able to play body percussion with real body percussionists. The agent is autonomous in the sense that it can recognize the artists' calls and react to them by playing back a prerecorded sequence. The agent architecture is described, focusing mainly on the artists' call-recognition module. This work, still in progress, has already produced two artistic performances, which were presented in front of an audience.

ABSTRACT. Musical gestures are complex human movements whose machine recognition remains challenging. One difficulty is to be able to understand expressive variation where changes in the quality of gesture result in different musical articulations. We make available two datasets of instrumental gesture performed with several expressive variations. The first is comprised of violinists performing standard pedagogical phrases with variation in dynamics and tempo where gestures are captured by inertial and physiological muscle sensors. The second is a multimodal dataset consisting of motion capture recordings of pianists performing a repertoire piece with variations in tempo, dynamics and articulation. To illustrate the utility of these datasets, we show that they embed different movement qualities (such as motion dynamics and tension) that are reflected in the recorded data. In addition, for the violin dataset, we report on gesture recognition tests on the 5 recorded gestures using two state-of-the-art recognizers: 1) a statistical model learning spatio-temporal variations of input examples and 2) a dynamical model that adapts to a set of predefined variations. While both approaches have limitations, they demonstrate the value of the datasets to create opportunities for further research in the field.

ABSTRACT. Expert musicians’ performances embed a timing variability pattern that can be used to recognize individual performance. However, it is not clear if such a property of performance variability is a consequence of learning or an intrinsic characteristic of human performance. In addition, little evidence exists about the role of timing and motion in recognizing individual music performance. In this paper we investigate these questions in the context of piano playing. We conducted a study during which we asked non-musicians to perform a musical sequence at different speeds. Then we tested their learning performance at a fixed tempo. Focusing on the possibility to identify the participant based on performance features of timing and motion variability, we show that participant classification increases with practice. This suggests that 1) the individual timing signatures are affected by learning and 2) timing and motion variability is structured. Moreover, we show that motion features better classify individual performances than timing features.

ABSTRACT. This paper presents an observational study of the interaction of professional percussionists with a simplified hand percussion instrument. We reflect on how the sound-producing gestural language of the percussionists developed over the course of an hour session, focusing on the elements of their gestural vocabulary that remained in place at the end of the session, and on those that ceased to be used. From these observations we propose a model of movement-based digital musical instruments as a projection downwards from a multidimensional body language to a reduced set of sonic features or behaviours. Many factors of an instrument's design, above and beyond the mapping of sensor degrees of freedom to dimensions of control, condition the way this projection downwards happens. We argue that there exists a world of richness of gesture beyond that which the sensors capture, but which can be implicitly captured by the design of the instrument through its physicality, constituent materials and form. We provide a case study of this model in action.

ABSTRACT. Mirror game is an improvisation exercise for two people, where one person moves and the other acts as their mirror. In the game, the roles of leader and follower can be switched, and eventually the roles can be abolished, and the pair shares leadership, both mutually mirroring each other. The mirror game has been adapted to scientific research, where the game has been simplified to a 1D version with buttons on sliders, and a 2D version where participants move their hands as if drawing in the air. In these studies, the condition of joint leadership has been found to produce movements that are better synchronised and smoother than those in the leader-follower conditions. We extended this game to four people, and are investigating it as a) a method for studying group dynamics in movement coordination, and b) a measure of intersubjectivity.

We use optical motion capture to record these four-player games. Participants stand in a circle, with their right arm and index finger extended towards the centre. Participants are instructed to mirror each others' hand movements, and these hand movements are most accurately tracked by a reflective marker on the participants' index fingers. Other markers on participants' upper body joints allow a whole-body analysis of motion. The average velocity of all the markers, or the quantity of motion of each player, can then be cross-correlated with those of the other players, producing a correlation matrix for the game, showing the dynamics of following and leading in the group.
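The cross-correlation analysis described above can be sketched as follows: for each ordered pair of players, find the lag at which their quantity-of-motion series are maximally correlated; the sign of the lag indicates who leads. Window size and normalisation choices here are assumptions.

```python
import numpy as np

def leadership_matrix(velocities, max_lag=30):
    """For each ordered pair of players, the lag (in frames) at which the
    cross-correlation of their quantity-of-motion series peaks. A positive
    entry [i, j] means player i's motion precedes player j's (i leads j)."""
    v = np.asarray(velocities, dtype=float)        # (players, frames)
    v = v - v.mean(axis=1, keepdims=True)          # remove DC offset
    n = len(v)
    lags = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xc = np.correlate(v[j], v[i], mode="full")
            centre = len(v[i]) - 1                 # index of zero lag
            window = xc[centre - max_lag:centre + max_lag + 1]
            lags[i, j] = int(np.argmax(window)) - max_lag
    return lags
```

Thresholding or averaging this matrix over sliding windows gives the time-varying picture of following and leading in the group that the abstract describes.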

Our pilot results suggest that the four-person game gives rise to "conflicts" in which a performer must make a quick decision about which other player to align their behaviour with, and consequently which other player not to align with. This makes the four-player game very interesting from a social-psychological point of view. Comparing two games, played before and after a different group improvisation exercise, the latter game produced more group synchrony and facilitated the introduction of larger movements. This indicates that the four-player game has potential as an intersubjectivity measure.

In this ongoing research project, more data will be collected and analysed during spring 2017.

Kinetic predictors of spectators' segmentation of a live dance performance

SPEAKER: unknown

ABSTRACT. We present a pilot study that explores the connection between accelerations in dance movements and the temporal segmentation perceived by spectators during a live performance. Our data set consists of recorded accelerations from two seven-minute duo dances that were annotated in real time by 12 spectators. The annotations were indications of perceived starts and endings in the dance. We were able to create an acceleration-based predictor that has a significant correlation with the pooled subjective annotations. Our approach can be useful in the analysis of improvised dance, where segmentation cannot rely on repetitive step patterns. We also present suggestions for the future development of acceleration-based dance analysis.
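A simple stand-in for such a predictor (the paper's actual model, threshold and smoothing choices are not given here) marks a boundary wherever the smoothed acceleration magnitude drops below a threshold, i.e. where movement comes to rest:

```python
import numpy as np

def segment_boundaries(acc, threshold=None, min_gap=30):
    """Predict perceived starts/endings as moments where the smoothed
    acceleration magnitude dips below a threshold. Returns frame indices."""
    acc = np.asarray(acc, dtype=float)
    kernel = np.ones(15) / 15.0                  # half-second-ish smoothing
    smooth = np.convolve(acc, kernel, mode="same")
    if threshold is None:
        threshold = smooth.mean() * 0.5          # heuristic default
    below = smooth < threshold
    boundaries = []
    for i in range(1, len(below)):
        # downward crossing = movement coming to rest
        if below[i] and not below[i - 1]:
            if not boundaries or i - boundaries[-1] >= min_gap:
                boundaries.append(i)
    return boundaries
```

Correlating such predicted boundaries with the pooled spectator annotations is then a matter of comparing the two event series within a tolerance window.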

ABSTRACT. The Syntax Error installation aims to create a generative, ever-transforming sculptural representation from real-time motion data recorded from a dancer, blurring the border between physical space and the digital realm through an interactive feedback loop that constantly re-informs the dancer with generated audiovisual feedback via projection mapping. The digital sculpture's aesthetics are defined by the dancer's constant motion and the continuously varying relationships between moments, creating new input for the physical dance performance. Syntax Error is a metaphor for real-life processes influenced by the precision and fine mechanism of choreography's informal and temporal patterns, which emerge as a digital experience translated back into physical space. The virtual tectonics, accompanied by an interactive noise field, capture the fragility of the dancers’ movements, showing the beauty of human inaccuracy in the syntax of a programmed dance sequence. The digital sculpture is a representation of individual human interpretation and evokes the attributes that distinguish human behavior from mechanical perfection.

A custom algorithm weighs incoming datasets from the choreography and creates different tectonics and subdivisions that represent persistence and change at the same time, just as a dancer subjectively interprets the sound, music and directions differently in his or her performance. The dancer moves to an interactive projection and generated noise fields, where a simple modification of the random seed can iteratively create new versions of the performance. Kinect cameras are used to assemble the intersection of the images into a three-dimensional volume.

In a second, further setup, the project becomes an audiovisual real-time performance: an interactive, reactive system between audio and image, between human and machine. The refined algorithm creates geometry defined by the velocity of multiple visitors and mixes it with the sound information at the time of the recording, producing a 3D-printable geometry output for each individual visitor. A digital representation was created using several CAD programs for post-processing, yielding a non-representational collage of the whole performance in a physical, 3D-printed model. The sculpture captures the motion of the visitors as well as the music played, which directly influenced the CAD data output. The dancers' representations are printed and handed out to the performers in an effort to create a common and shared memory of each participant's individual actions and motions.

ABSTRACT. We demonstrate a soft, malleable fabric controller. Attendees can use it to explore sounds and collaboratively create soundscapes and music. This soft input device, the Musical Skin, senses where it is touched and how much pressure is exerted on it, using a method consisting entirely of fabric components. With this textile matrix sensor, the performer's role changes from manipulating a rigid device to engaging with a malleable material. The sensor pushes performers to explore how the motion of their body maps to the sound, changing not only the performer's experience but also engaging the audience beyond what typical electronic musical input devices would. In this extended abstract, we discuss the sensing mechanism and describe the installation we envision the Musical Skin being used in.
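Textile matrix sensors are typically read the way any sensor matrix is: drive one row at a time and sample every column, yielding a pressure map. The sketch below assumes such a row/column scan; the Musical Skin's real sensing circuit is not detailed in the abstract, and `read_adc` stands in for the hardware read.

```python
def scan_matrix(read_adc, rows=4, cols=4):
    """Row/column scan of a fabric matrix sensor: drive each row in turn
    and read every column, returning a rows x cols pressure map."""
    frame = []
    for r in range(rows):
        frame.append([read_adc(r, c) for c in range(cols)])
    return frame

def touch_position(frame):
    """(row, col, value) of the strongest touch in a scanned frame."""
    return max(
        ((r, c, v) for r, row in enumerate(frame) for c, v in enumerate(row)),
        key=lambda t: t[2],
    )
```

The resulting (position, pressure) pairs are what a sound engine would map to pitch, timbre or amplitude.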

ABSTRACT. This practice-as-research (PaR) paper examines the effects of auditory perception on a physical dance performance, and how internal and external sounds affected the performer’s movement and behaviour during the whole process of creation. The PaR combines research into physical theatre and dance practices with cognitive neuroscience. Throughout this paper, an interdisciplinary approach is established by investigating the auditory perception of the human brain. The orientation of the performer’s body helped to direct the movements in space, enabling the performer to pace their actions to sounds, especially during dance-physical actions. Likewise, this paper draws on cognitive psychology, an integral part of cognitive neuroscience, demonstrating how emotional response changes with different sounds and gestures.

For this experimental and practical qualitative research on auditory perception, the main methodology incorporated was the use of sound technology. Two different types of microphones were used: a regular microphone with its stand, and ten contact microphones. These were used to enhance the performer’s internal sounds, as a means of exploring the different levels of sound and providing high-quality auditory perception. The contact microphones were linked with three different platforms (rostra), creating three ‘sound stages’ on which the sounds of dance-physical movements were amplified and exaggerated, leading to further explorations. These movements involved full-body actions, as the PaR examines dance-physical theatre performance and practice. Through this experimental research and the use of sound technology, a specific methodology was defined, providing binaural audio within the performance space.

At the beginning of this PaR, internal and external sounds are investigated. Starting with the body and exploring its potential for inner and outer physical sounds of varying pitch, such as the breath, voice, teeth and nails, the work portrays how each sound can affect the performer’s physicality. The paper then demonstrates how external sounds, such as an ambulance siren or an aircraft, can interrupt gestures, movements, and even silence or immobility, affecting the quality of the motion and emotion. Finally, the PaR focuses on how these sounds and their effects can influence the performer in the creation and execution of different movement patterns.

Combining the theoretical and practical research from the experimental period, the paper then analyses the findings of this process, which were presented through a solo performance. The findings provide a clear understanding of how amplification, combined with the contrast between fluid, organic movements and jerky, fragmented gestures, created depth and variety of tone for the final practice, resulting in a multi-dimensional performance.

Lastly, this paper offers propositions on how these results can support further research on auditory perception combined with the performing arts. More specifically, new questions have emerged from this paper that will be expanded in PhD research, such as how internal and external sounds affect the performer in the creation and execution of different movements during the creative process of a performance. Furthermore, the ultimate goal of this research is the establishment of a specific methodology and tools for use in the creation of performances, examining how different sounds affect the creation of movement patterns. Thus, the paper suggests developing this practice with the use of an electroencephalogram (EEG) headset, which can provide analysis of the brain’s waves and behaviour under different acoustic stimulations. A further suggestion towards establishing the methodology is to analyse movements with the Microsoft Kinect system. With the EEG headset and Microsoft Kinect, the research can yield quantitative results that will contribute to further insight into the complex mechanisms of the human brain involved in the perception and processing of auditory information.

Measuring Impact of Social Presence Through Gesture Analysis in Musical Performances

SPEAKER: unknown

ABSTRACT. Immersive virtual environments combined with motion capture systems have been used as experimental set-ups for studying the influence of the presence of an audience on musicians' performances. This study highlights that musicians playing in different expressive manners move differently, increasing their kinetic energy and body twisting. These factors are amplified or attenuated by the presence of an audience, depending on the difficulty of the task. Such behavior fits Zajonc's theory of social facilitation.
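Kinetic energy is a standard feature to compute from motion capture: velocities from finite differences of marker trajectories, then E = ½ Σᵢ mᵢ|vᵢ|². The sketch below assumes uniform segment masses, which is a simplification of any real analysis:

```python
import numpy as np

def kinetic_energy(positions, dt=1 / 120, masses=None):
    """Total kinetic energy per frame from marker trajectories:
    E = 0.5 * sum_i m_i * |v_i|^2, with velocities from finite differences.
    positions has shape (frames, markers, 3); masses defaults to uniform."""
    p = np.asarray(positions, dtype=float)
    v = np.diff(p, axis=0) / dt                    # (frames-1, markers, 3)
    if masses is None:
        masses = np.ones(p.shape[1])
    speed2 = (v ** 2).sum(axis=2)                  # |v|^2 per marker
    return 0.5 * (speed2 * masses).sum(axis=1)     # energy per frame
```

Averaging this series over a performance gives a single scalar that can be compared across expressive conditions and audience/no-audience trials.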

ABSTRACT. There are many approaches to generating movement for a robot: programming motors, keyframing poses, recording hand-manipulated sequences (puppeteering), playing back prerecorded motion capture sequences, live motion capture, and using motion capture to train a neural network, to name a few. This research centres on the performance project Pinoke, in which a number of methods for enabling a robot to dance, with and without a human partner, were explored.

ABSTRACT. In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether this then-revolutionary tool for the creation of dance movements could lead to new possibilities of expression in contemporary dance. Over the next two decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, it illustrates how choreographic techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore, with choreographic means, fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

ABSTRACT. This paper presents an investigation into interdisciplinary collaboration between practitioners of distinct performative art forms, carried out with the aim of developing a framework for collaboration informed by the biological phenomenon of symbiosis.

Symbiosis is a pervasive occurrence in nature, describing the close and persistent interaction among organisms of different species. The aim of symbiosis is for at least one of the interacting organisms, or symbionts, to extract benefit from their association, with the different types of symbiotic interactions – mutualism, commensalism, and parasitism – denoting the fitness outcome for each of the symbionts. Over the years, the fine details of symbiosis have been the subject of controversy among researchers of General Biology and General Ecology. However, nowadays there is consensus on the phenomenon’s ubiquity, and its importance in accelerating the rate of many species’ evolutionary process.

Through observing the manner in which diverse organisms interact with each other within symbiotic relationships, I have developed a framework which aims to facilitate collaboration with artists of different disciplines in developing live performance works. By interpreting the different types of symbiotic interactions, as well as their key observable traits – interspecificity, closeness, and persistence – the framework provides artists with a set of actions and precepts that can be employed during all stages of the collaborative practice, including authorship, hierarchy in creative control, aesthetics, development, and live interaction interfaces.

The development of the framework draws insight from findings emerging from my own practice, which focuses on collaboration between disciplines that use sound and physical movement as their predominant mediums of expression. Key precedents in similar collaborative practices have also informed the framework’s development, such as the long-term collaboration between John Cage and Merce Cunningham, as well as contemporary work by practitioners such as Jo Hyde, Sophy Smith, and Marco Donnarumma.

Further to the theoretical presentation of the framework, I also present a number of performance works which activate the notion of symbiosis within collaborative practice.

ABSTRACT. With the continual proliferation of new devices and techniques for motion capture, there is an essential need for the formalization of high-level semantic features describing human movement. The large variety of available sensors provides various perspectives on movement information, but it also comes at the cost of a large disparity of data representations that complicates movement analysis and interaction design. This paper describes a collaborative initiative aiming at formalizing current knowledge in movement signal processing. We introduce the Movement Features Database, an open online repository that collects and formalizes movement feature extraction techniques.
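By way of illustration, one classic descriptor that such a repository might formalize is quantity of motion, the summed joint speeds of a tracked skeleton per frame. The sketch below is illustrative only and is not taken from the Movement Features Database itself; the function name and frame format are assumptions:

```python
import math

def quantity_of_motion(frames, dt=1 / 30):
    """Quantity of motion: summed joint speeds per frame transition.

    frames: list of frames, each a list of (x, y, z) joint positions.
    dt: time between frames in seconds (default 30 fps).
    Returns one scalar per pair of consecutive frames.
    """
    qom = []
    for prev, curr in zip(frames, frames[1:]):
        total = 0.0
        for p, c in zip(prev, curr):
            total += math.dist(p, c) / dt  # joint speed in units/s
        qom.append(total)
    return qom

# A still skeleton yields zero motion; a moving joint raises the value.
still = [[(0, 0, 0), (1, 0, 0)]] * 3
moving = [[(0, 0, 0), (1, 0, 0)], [(0, 0.1, 0), (1, 0, 0)]]
print(quantity_of_motion(still))   # [0.0, 0.0]
```

The same frame-pair loop generalizes to other low-level features (e.g. acceleration or jerk) by differencing the speed sequence again.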

ABSTRACT. Laban Movement Analysis (LMA) is an expert-based method by which Certified Movement Analysts observe, analyze, describe and write movement. LMA is increasingly used in Human Computer Interaction because it articulates a precise language for describing movement expression. In this paper we propose Motif Tiles, a tangible tool for analysing human movement. It is composed of physical tiles that allow for both analyzing and generating temporal patterns of movement. Through our design research experiments, we unfold how the tangible tool engages movement experts and dance professionals in the analysis and embodiment of movement and in a collaborative focus on human movement patterns.

ABSTRACT. This paper reports an educational experience with 75 graduate students on action preparation as a function of sound semantics. In six hours of lessons and through a group project, students investigated modulations in motor preparation timing induced by sounds falling within their peri-personal space (PPS) in a non-visual virtual reality setting. The original modus operandi of our experimental approach allowed us to study and analyse human motor planning of a simple action against potential threats: virtual sound sources with different emotional ratings (ranging from 4.4 to 5.4 in arousal and from 2.1 to 5.7 in valence) rendered via headphones. Results from this experience suggest that semantics differently modulates the process of PPS estimation under auditory stimulation, in terms of pre-motor reaction time and distance perception.

ABSTRACT. Partner dancing requires skilled coordination of synergies between dancers. It is very challenging to quantify the unfolding of this dynamic coupled rhythm in ways that can give dancers and choreographers immediate feedback on their performance. In particular, there is a paucity of methods that automatically reveal synergies within the body parts of each participating dancer while also providing metrics of coupled synergies. In this paper, we introduce a new platform for the tracking of coupled dynamical systems with a direct application to partner dancing. We present visualisation tools of "togetherness" in body parts and profile the stochastic signatures of each individual dancer along with those of the coupled components of their bodies moving in tandem. We use complex dancing segments and non-dancing segments of rehearsal snippets or staged poses to illustrate the use of our methods. Further, we suggest possible ways to quantify the inherent variability in the subtle motion fluctuations that, although seemingly invisible, help the dancers entrain from moment to moment. We hope that these tools are of use to the movement computing community.
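The abstract does not specify the togetherness metric itself. As a hedged sketch of one plausible ingredient, not the authors' actual method, a normalized zero-lag cross-correlation between movement signals of the two dancers gives a first-pass coupling measure:

```python
def togetherness(sig_a, sig_b):
    """Normalized zero-lag cross-correlation between two movement
    signals (e.g. per-frame speeds of a body part of each dancer).

    Returns a value in [-1, 1]; 1 means perfectly coupled motion,
    0 means uncorrelated (or at least one signal is constant).
    """
    n = len(sig_a)
    mean_a = sum(sig_a) / n
    mean_b = sum(sig_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(sig_a, sig_b))
    var_a = sum((a - mean_a) ** 2 for a in sig_a)
    var_b = sum((b - mean_b) ** 2 for b in sig_b)
    if var_a == 0 or var_b == 0:
        return 0.0
    return cov / (var_a ** 0.5 * var_b ** 0.5)

# Two dancers moving in tandem score high.
a = [0.1, 0.5, 0.9, 0.5, 0.1]
b = [0.2, 0.6, 1.0, 0.6, 0.2]   # same shape, offset amplitude
print(togetherness(a, b))        # close to 1.0
```

Computed over sliding windows and over pairs of body parts, such a score could feed the kind of moment-to-moment visual feedback the platform describes.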

Incorporating Kinesthetic Creativity and Gestural Play into Immersive Modeling

SPEAKER: unknown

ABSTRACT. The 3D modeling methods and approach presented in this paper attempt to bring the richness and spontaneity of human kinesthetic interaction in the physical world to the process of shaping digital form, by exploring playfully creative interaction techniques that augment gestural movement. The principal contribution of our research is a novel dynamics-driven approach for immersive freeform modeling, which extends our physical reach and supports new forms of expression. In this paper we examine three augmentations of freehand 3D interaction that are inspired by the dynamics of physical phenomena. These are experienced via immersive augmented reality to intensify the virtual physicality and heighten the sense of creative empowerment.

Data-Driven Design of Sound for Enhancing the Perception of Expressive Robotic Movement

SPEAKER: unknown

ABSTRACT. Since people communicate intentions and inner states through movement, robots can better interact with humans if they too can modify their movements to communicate changing state. These movements, which may be seen as supplementary to those required for workspace tasks, may be termed "expressive." However, robot hardware, which cannot recreate the same range of dynamics as human limbs, often limits expressive capacity. One solution is to augment expressive robotic movement with expressive sound. To that end, this paper presents a study to find a qualitative mapping between movement and sound. Musicians were asked to vocalize sounds in response to animations of a simple simulated upper-body movement performed with different movement qualities, parametrized according to Laban's Effort System. Qualitative labelling and quantitative signal analysis of these sounds suggest a number of correspondences between movement qualities and sound qualities. These correspondences are presented and analyzed here to set up future work that will test user perceptions when expressive movements and sounds are used in conjunction.

ABSTRACT. We present a method for the interactive generation of stylised letters, curves and motion paths similar to those that can be observed in art forms such as graffiti art and calligraphy. We describe various hand styles with a bi-level representation, in which a common geometrical structure is given by a sparse sequence of targets, and different stylisations of the same target sequence are generated by optimising the trajectories of a dynamical system that tracks the spatial layout of the targets. The evolution of the dynamical system is then driven with a stochastic formulation of optimal control, in which we define each target probabilistically as a multivariate Gaussian. The covariance of each Gaussian explicitly defines the variability as well as the curvilinear evolution of trajectory segments. Given this probabilistic definition, the optimisation procedure results in a trajectory distribution rather than a single path. This makes it possible to stochastically sample from the distribution a number of dynamically and aesthetically consistent trajectories, mimicking the variability that is typical of human-made movements. We demonstrate how this system can be used with a simple user interface to explore different stylisations of a letter or shape, which are visually and dynamically similar to those made by a human when drawing or writing.
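A toy sketch of the core idea can be written in a few lines: via-points are sampled from per-target Gaussians and tracked by a critically damped spring, so repeated calls yield a distribution of consistent trajectories. This is a deliberate simplification, a spring tracker standing in for the authors' stochastic optimal control formulation, and all names and parameter values here are assumptions:

```python
import random

def sample_trajectory(targets, sigmas, steps_per_target=30,
                      stiffness=40.0, dt=0.02, seed=None):
    """Sample one stylised 2D path.

    Each target (x, y) is perturbed by its Gaussian (isotropic, one
    sigma per target); a critically damped spring then tracks the
    perturbed targets in turn.  Different seeds give different but
    dynamically consistent variations of the same skeleton.
    """
    rng = random.Random(seed)
    damping = 2.0 * stiffness ** 0.5          # critical damping
    x, y = targets[0]
    vx = vy = 0.0
    path = [(x, y)]
    for (tx, ty), s in zip(targets, sigmas):
        # Perturb the target: one sample from its Gaussian.
        px = rng.gauss(tx, s)
        py = rng.gauss(ty, s)
        for _ in range(steps_per_target):
            ax = stiffness * (px - x) - damping * vx
            ay = stiffness * (py - y) - damping * vy
            vx += ax * dt; vy += ay * dt      # semi-implicit Euler
            x += vx * dt;  y += vy * dt
            path.append((x, y))
    return path

# Two samples of the same letter skeleton differ, like human rewrites.
targets = [(0, 0), (0, 1), (1, 1), (1, 0)]
sigmas = [0.05] * len(targets)
p1 = sample_trajectory(targets, sigmas, seed=1)
p2 = sample_trajectory(targets, sigmas, seed=2)
```

In the paper's formulation the covariance additionally shapes the curvilinear evolution between targets; the isotropic sigma here captures only the positional variability.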

Storytelling with Interactive Physical Theatre: A case study of Dot and the Kangaroo

SPEAKER: unknown

ABSTRACT. This paper examines the way embodied interactive visuals were used as a storytelling device in the physical theatre production of Creature: An adaptation of Dot and the Kangaroo. A number of performers and artists involved in the production were interviewed, and their perceptions of the interactive technology were contrasted with those from a previous study of abstract dance. The animated backgrounds and interactive animal graphics were found to reduce the density of the script by describing the location of the action and the spirit of the character, reducing the need for this to be spoken. Peak moments of the show were identified by the interviewees, and a scene analysis revealed that the most successful scenes featured more integrated storytelling, in which the interaction between performers and digital projections conveyed a key narrative message.

ABSTRACT. We propose the presentation of 0⏎, a performance that is both a game and an experiment in real-time computer generated performance. 0⏎ uses audio and projection to direct performers in a series of escalating actions and movements determined by the computer during the performance in real-time. Performers attempt--and often fail--to carry out the computer’s instructions, which range from the specific and simple to the complex and figurative but are always new and unexpected. The result is an off-balance, hilarious, and occasionally arresting experience that questions the nature of human/computer relationships and interfaces. As our lives are increasingly governed by algorithms, what do the new structures of power and control look like, and what are the distinctions--if there are any--between human and artificial identity?

0⏎ is performed by a core group of three performers, accompanied by two or three additional performers who are “drafted” from the local community. The new performers rehearse the piece once or twice before the performance but are otherwise left to respond naturally to the computer’s instructions. MOCO’s assistance in identifying the additional performers and securing a rehearsal space would be appreciated, but is not necessary.

We would like to present 0⏎ as a forty-five minute to one hour performance. The performance can be staged anywhere from a proscenium stage to a gallery, but it works best when presented in an intimate setting where audience members are free to sit, stand, and move around the space. The performance area must have a projection screen or wall at the back, at least 1.7m x 3m in size. The space must also have a sound system. The projector and sound system are run from a laptop. 0⏎’s performance system is designed for flexibility and portability, so it can be installed quickly and easily and adapted to a variety of presentation formats. Post-performance, we will engage in a brief discussion with audience members who will then be invited to try out the rules of the game for themselves.

The piece is inspired by the collaborators’ decades of working with dance and computation: experimenting with sensors, machine learning, computer vision and other emerging technologies and interfaces. 0⏎ interrogates the ethical and cultural issues raised by this work in a bidirectional manner. On the one hand, how does approaching computation from a dance perspective inform our use of technology by problematizing issues that are often overlooked by the scientific and engineering mainstreams? On the other hand, how does approaching dance through computer code influence our choreographic and movement practices? MOCO’s community and topics provide an ideal environment for raising and discussing these questions, and we look forward to the possibility of presenting our work.

ABSTRACT. There is no stillness in human movement. Even when simply standing, the body is constantly falling, catching itself, and subtly adapting to the environment. And the experience of this seeming stillness is filled with a myriad of inner bodily sensations that remain unseen to the eye. Micro-movements induced by breath and weight shifts connect us to the force of gravity and an organic flow of embodied rhythms through time.

still, moving is a performance for two dancers in which an interactive sound environment responds to subtle changes in muscular activity, disclosing and extending the inner bodily experiences of the performers. We explore how the sonification of micro-movements can increase or disrupt the performers' kinesthetic awareness, and how it affects the kinesthetic empathy between the performers and the audience. The performers are equipped with two Myo Armbands that capture physiological signals such as muscle tension and subtle accelerations. The system mediates the relation between movement and sound through interactive machine learning, in a design that evolves over time, in response to the interplay between the performers.
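The abstract does not detail the interactive machine learning design. A common approach in this space (in the spirit of tools such as Wekinator) is to record demonstrated pairs of sensor features and sound parameters, then interpolate new mappings with distance-weighted k-nearest-neighbour regression. The sketch below is illustrative only; the function names, features and parameter values are assumptions, not the performance's actual system:

```python
def knn_map(examples, query, k=2):
    """Interactive-ML style mapping.

    examples: list of (features, params) pairs demonstrated by the
    performers, e.g. EMG-derived features mapped to sound parameters.
    A query feature vector is mapped to sound parameters by
    distance-weighted k-nearest-neighbour regression.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    ranked = sorted(examples, key=lambda ex: dist(ex[0], query))[:k]
    weights = [1.0 / (dist(f, query) + 1e-9) for f, _ in ranked]
    total = sum(weights)
    n_out = len(ranked[0][1])
    return [sum(w * p[i] for w, (_, p) in zip(weights, ranked)) / total
            for i in range(n_out)]

# Demonstrations: low muscle tension -> quiet low tone, high -> loud high.
examples = [([0.0, 0.0], [0.1, 220.0]),   # ([EMG features], [gain, Hz])
            ([1.0, 1.0], [0.9, 880.0])]
mid = knn_map(examples, [0.5, 0.5])
print(mid)   # roughly midway: gain ~0.5, pitch ~550 Hz
```

Because the model is just the example set, performers can retrain mid-rehearsal by adding or removing demonstrations, which suits a design that "evolves over time" as the abstract describes.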

ABSTRACT. This piece is written for MYO armbands, electro-acoustic ensemble and gestural recognition of soundpainting conducting. In the piece, soundpainting-inspired gestures guide the performers, as does a graphic/textual score which defines the sonic palette and instrumental gestures available to the players. Tensions are negotiated between acoustic and electronic sources, and between bottom-up structured improvisation and top-down guiding via conducting. These continuums are amplified and explored through another layer of shared articulation: machine learning has been applied to the recognition of the composer/conductor’s gestures, with symbolic recognition opening up channels of electronic processing, which then acts upon the acoustic players at moments in the piece. Continuous mappings between conducted motion and sonic transformations have also been learned, creating a tension between the symbology of conducted instruction and that of continuously co-constructed sound, as the conductor and performers share signals and intentional resonance in performance.

This piece was created for my Electro-Acoustic Orchestra (EAO) ensemble at York University, a mixed electronic/acoustic ensemble comprising York students and Toronto-area professional musicians. The piece was premiered in a concert at my DisPerSion Lab at York University. For this updated version for MOCO, my proposition is to conduct the EAO telematically; the ensemble would perform from my lab in Toronto.

ABSTRACT. For MoCo 2017 I propose to present a live coding performance: WebPage in Three Acts, an assemblage of graphic experiments into a hybrid form of composition, combining principles of choreography with the formal structures of web coding.
Like choreography, web design also deals with space, time and movement qualities. It has defined ways of moving, collectively or individually, through fluid yet complex landscapes of information displays, networked spaces, and multimedia environments. The performance being presented, and the notion of ‘choreographic coding’ it embodies, is a technical as much as a social, cultural and aesthetic experiment, one that can be expanded both at the level of web design and at that of choreography.