Creativity and Collaboration: Revisiting Cybernetic Serendipity

The two-day colloquium Creativity and Collaboration: Revisiting Cybernetic Serendipity, which explored how a combination of art, design, science, engineering and medical research can yield productive partnerships, was preceded by a one-day symposium where students from a wide range of disciplines presented their work

National Academy of Sciences, Washington DC 12-14 March 2018

by ALLIE BISWAS

Arthur M Sackler (1913-87), the New York-born doctor, art collector and philanthropist, was a long-standing advocate for the commonalities between art and science. For Sackler, these were not two separate disciplines but, rather, completely interwoven cultures. As well as contributing groundbreaking research within the field of schizophrenia and becoming a renowned publisher and editor of medical research, Sackler was a dedicated patron of the arts. He established museums and art galleries at the Metropolitan Museum of Art, Princeton University, Harvard University and the Smithsonian Institution, among others.

The Sackler Colloquia of the National Academy of Sciences, which take the form of four individual conferences held annually, have continued to develop his work. The series was founded by Dame Jill Sackler to commemorate her late husband’s passions. The talks presented at these events expand on current scientific research, while engaging with the boundary-less nature of such topics, and have become integral to the Academy’s range of scientific programmes held throughout the year.

The two-day colloquium, Creativity and Collaboration: Revisiting Cybernetic Serendipity, set out to underline the close connections between art and technology, specifically examining how multidisciplinarity is shaping creative practices today. The colloquium came into being thanks to Miguel Benavides, editor of Studio International. In July 1968, a special issue of the magazine, Cybernetic Serendipity: The Computer and the Arts, was published in conjunction with the Cybernetic Serendipity exhibition held at London’s Institute of Contemporary Arts from August to October of the same year. The show drew attention to the inventive ways in which artists, scientists and technology experts were working together to make computer-generated graphics, animations and music, as well as cybernetic machines and environments. The idea was that artistic forms could be, and were, spawned by technology. The exhibition was curated by Jasia Reichardt, who also edited the themed edition of Studio International.

Cybernetic Serendipity: the computer and the arts. Edited by Jasia Reichardt. Published by Studio International (special issue), 1968.

To mark the 50th anniversary of this pioneering publication and exhibition, Benavides considered it important that a symposium and accompanying show take place. The National Academy of Sciences (NAS) provided the most fitting context for these initiatives. As well as the colloquium, a display of Paul Brown’s work was curated, outlining the seminal artist’s experimentation with digital computers – a medium he first discovered after visiting Cybernetic Serendipity.

The first day of the programme was dedicated to the work being undertaken by a diverse range of postgraduate students based at US universities. The group of 54 participants came from across the country and presented work on topics including cognitive mapping, bioart, climate awareness, data collection and artist-centric programming tools.

It was the first student symposium incorporated within the NAS’s Sackler Colloquia and received additional funding from Google, signalling the importance of the projects being carried out by these emerging practitioners.

The student symposium was spearheaded by Liese Zahabi, a graphic/interaction designer and assistant professor of graphic/interaction design at the University of Maryland, and Molly Morin, an artist working at the intersection of digital and analogue practices who is assistant professor of art at Weber State University.

Liese Zahabi: Co-Chair, Sackler Student Symposium

Molly Morin: Co-Chair, Sackler Student Symposium

Morin gave a lecture titled Informational Material: poetic data visualisation, CNC fabrication, and embodied systems, in which she elaborated on her approach to art-making, which involves creating material representations of data through generative drawing, digital fabrication and fabric sculptures. Morin received degrees in sculpture before taking on her role at Weber, where she fosters methods for making art with digital tools.

Adam Haar Horowitz, a charismatic researcher at the cutting-edge MIT Media Lab, spoke about his work relating to hypnagogia, the liminal space between wakefulness and sleep. The topic was immediately captivating – and easy to relate to – given that this is a mental state that applies to all of us.

Adam Haar Horowitz: Interaction During Sleep for Expanded Creativity

Haar Horowitz believes that technology has the ability to reveal parts of ourselves that otherwise remain invisible, and studies how to control and capture dreams during the moment of hypnagogia, asserting that such access leads not only to revelation but, ultimately, wellbeing. Haar Horowitz studied mindfulness meditation and mind-wandering during his earlier work at MIT’s McGovern Institute for Brain Research, and also spent time as an artist-scientist at the Marina Abramović Institute.

Jifei Ou, another member of the Media Lab at MIT, is a compelling designer whose subject matter is transformable materials. Starting with the premise that physical materials are generally thought to be static and permanent, Ou seeks to reformat such materials with digital characteristics, making them programmable, for instance, or able to change their shape. Bio-mimicry and bio-derived materials inform his research, and the natural world is just as informative to his thinking as digital technology. The new materials that result from these experiments could be used to serve a number of purposes, from the construction of responsive living environments to the enhancement of existing interactions with products.

Suprahuman is a project in the form of a book led by William Wiebe, a photography student at the School of the Art Institute of Chicago, in collaboration with Dr John Santerre, a computer scientist at the University of Chicago. Observing that contemporary military analysis revolves around the management of huge quantities of surveillance data aided by automated image recognition, Wiebe explained Suprahuman’s ambition to examine the new visual culture that emerges when images are increasingly produced for computer vision rather than for human vision. Their research applies machine-learning technologies to aerial surveillance and media images of individuals held without charge by the US. The pair developed a program to analyse these images, before using t-SNE modelling (a method for visualising high-dimensional data) to rearrange them in three-dimensional space, according to the landscape features the program had learned during its training. Suprahuman consequently gives physical form to the political precariousness of such bodies.
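The t-SNE step described above can be sketched in a few lines. This is a hypothetical illustration, not the Suprahuman team’s actual code: the random matrix stands in for image features learned by their recognition program, and scikit-learn’s TSNE is assumed as the implementation.

```python
# Illustrative sketch of embedding learned image features in 3-D with t-SNE.
# The random matrix is a placeholder for features learned during training.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(60, 64))  # 60 images, each a 64-dimensional feature vector

# Reduce to three dimensions so the images can be arranged in physical space.
embedding = TSNE(n_components=3, perplexity=15, random_state=0).fit_transform(features)
print(embedding.shape)  # (60, 3)
```

Images that the program considers similar end up near one another in the resulting 3-D coordinates, which is what allows the project to rearrange the pictures spatially by learned landscape features.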

An installation by Amy Wetsch, an MFA student at Mount Royal School of Multidisciplinary Art, filled the foyer outside the conference hall. The work detailed her continuing explorations of the human immune system through the medium of sculpture. Wetsch’s practice considers the impact of disease and illness on the body, specifically in relation to auto-immune conditions, something with which the artist has personal experience. Her fascination with the body’s misdirected inclination to attack itself provides the impetus for her visual explorations.

Amy Wetsch: Case Study: Bodily Dissonance and Treatment Validity

For the symposium, Wetsch created Case Study: Bodily Dissonance and Treatment Validity, a group of dome-like objects that emerged from the ground, rising like volcanic eruptions. Covered in moss and gauze, as well as objects such as melted medicine cups and infusion tubes, these configurations in bright pink, lime green and orange signalled Wetsch’s interest in the interplay between synthetic materials and organic forms. The artist has discussed her consideration of trees in relation to the installation, commenting on the interconnected nature of roots and, more widely, the qualities of invasive species. The parallels drawn here between invasive natural growth and the intrusiveness of medicine were apparent, and the display pointedly evoked the dangers of contemporary prescription culture.

• Abstracts by all the Student Fellows are provided at the end of this review.

Creativity and Collaboration: Revisiting Cybernetic Serendipity

The student presentations were followed by two days of talks that formed the central component of the Creativity and Collaboration colloquium. The main event was organised by a group of scholars, chaired by Ben Shneiderman from the University of Maryland. Shneiderman founded the university’s Human-Computer Interaction Lab and is a distinguished professor in the department of computer science. In his introductory talk, he provided some important contextual details relating to the NAS, which was established in 1863 by Abraham Lincoln as a non-governmental organisation of academics that would advise the government on any topic that related to science or art. This combination of the two disciplines was linked by Shneiderman to the spirit of the Renaissance, in particular Leonardo da Vinci, who famously integrated art, science and engineering in his creations and experiments. Such themes were again picked up by David Skorton, Secretary of the Smithsonian Institution, who later on gave the Annual Sackler Lecture, where he borrowed the words of Albert Einstein, who underlined that the sciences and the arts are “branches from the same tree”.

Whereas many of the ideas discussed during the student symposium touched on processes related to making art, Revisiting Cybernetic Serendipity was more concerned with the notion of design, particularly in relation to internet-enabled collaboration, presented here as both a key tool for creativity and a defining feature of contemporary practice.

One of the first talks was given by Jasia Reichardt, curator of Cybernetic Serendipity, who outlined the development of technology during the 1950s before its fully fledged presence within her exhibition. Reichardt’s show was held up as a catalyst for research breakthroughs, and served as the basis of the first panel discussion. Sara Diamond, president of OCAD University, the largest art, design and media university in Canada, spoke about the creation of the university’s Visual Analytics Laboratory, which she leads. The growing need for the visual analysis of complicated data is met by Diamond and her team of designers, artists, graphics specialists and software developers, who deliver new tools and services for a variety of data-dependent industries, such as finance, health, social media and entertainment. Curtis Wong, a principal researcher at Microsoft, discussed his development of technologies that enhance the browsing and sharing of interactive media experiences. Wong conceived and developed ArtMuseum.net, the first broadband 3D virtual museum on the web and, during the early part of his career, honed his skills at Voyager Company, a pioneer in interactive media that created the first eBooks and produced the first multimedia CD-ROMs for Windows.

There were several other standouts during the two-day programme. Katy Börner, professor of information science at Indiana University, was persuasive in her advocacy of the power of interactive maps that allow us to visualise scientific results. Börner is known for her book, Atlas of Science: Visualizing What We Know, which features science maps, data charts and an extensive timeline of science-mapping milestones that serve as a visual index to the evolution of modern science. Börner’s publication followed her curation of the exhibition Places & Spaces: Mapping Science, which described and showed successful mapping techniques. Fernanda Viégas, a computational designer, offered insight into her work at Google, where her focus is to bring design thinking to machine learning. She co-leads Google’s “Big Picture” visualisation group, which focuses on ways to illuminate the data and algorithms used in machine intelligence.

Jonathan Corum, science graphics editor at the New York Times. Photograph: Ben Shneiderman.

Perhaps the most relatable and captivating presentation for non-scientists in the audience was Jonathan Corum’s Revealing Hidden Worlds, which illuminated his role as science graphics editor at the New York Times. Corum’s job is to translate scientific ideas for a general readership, through the creation of meaningful and clear designs that convey the richness of the data at hand. Corum is required to provide visualisations on anything from climate change to the solar system, and spoke about the need to connect his graphic images to the text of the accompanying article. His approach to images is to “collage and combine” in order to find the right balance.

While, in the main, the colloquium was tied to the astonishing developments taking place within the realm of computer technology and machine learning, with reference to art and design as tools with which to formalise and explicate findings, all the speakers made an impressive and convincing case for the critical importance that collaboration plays within their work. There is little doubt that we have come full circle since the time of Leonardo, where art, design, science and engineering are considered not only equal, but in close connection to each other. The partnerships that are being forged between these fields, as outlined by the research of the colloquium’s speakers, are addressing how the future will be formed.

Traditional community theatre tends to be a geographically bounded undertaking, produced by materially co-present practitioners before materially co-present communities. A Stage Reborn (ASR) makes community theatre that is geographically dispersed; produced and performed within the virtual environs of a synthetic videogame world; and generative of an internet-mediated theatre community and audience. Drawing on an ongoing ethnographic study of ASR (a not-for-profit incorporated in Seattle that operates primarily on a data-server in Montreal in the virtual world of the Japanese-made massively multiplayer online roleplaying video game Final Fantasy XIV), this talk will consider how this new mode of theatre-making balances technological affordances with aesthetic goals; expands our understanding of networked videogames as culturally significant spaces for creative practice and development of technical repertoire; and demands a continued reconsideration of ethnographic methodology itself. In doing so, it will gesture towards broader reframings of art, scholarship and community in a technologically mediated global context.

Saleh Ahmed: The University of Arizona Bringing Science to the Society: A Creative Collaboration to Promote Climate Awareness in Coastal Bangladesh

Because of increasing exposure to various climate risks around the world, engaging local communities in climate discourse is more important than ever before. This is particularly true for societies with limited resources, where people often suffer from social marginalisation, poverty and illiteracy, and may have restricted access to modern media. While the entire country is exposed to various climate stresses, densely inhabited coastal Bangladesh, along the Bay of Bengal, constitutes a vulnerability “frontline”. Addressing this challenge, this project highlights a creative way of disseminating information for climate awareness through local street theatre. Street theatre in rural Bangladesh has long provided social, philosophical and spiritual education. Its use for raising climate awareness has great, untapped potential because it does not just disseminate climate information in local communities, but also dramatizes climate knowledge and inspires people to use it. This proposed project creatively utilizes rural arts, cultures and traditions as an integral part of local efforts to address climate challenges, with the aim of having a positive impact on people’s livelihoods.

At the most basic level – be it among atoms, cells, or biological systems – communication is the propagation of information across elements of a system. Similarly, individuals and groups within human societies exchange information through communication. It is hypothesized that most of us develop internal models of the world to make decisions, and these models can change when new information becomes available. Here, we demonstrate how people’s mental models evolve when they communicate with other agents in a collaborative decision-making group whose members present knowledge diversity. To sketch what people develop in their mind, we use Fuzzy-Logic Cognitive Maps (FCM). We will then hybridize FCMs and Agent-Based modeling to simulate knowledge transfer mechanisms through which agents upgrade their mental models while interacting with other agents in a group. Results will provide new insights about knowledge transfer and learning determinants in multidisciplinary collaborative groups whose members communicate different knowledge systems.
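The Fuzzy-Logic Cognitive Map mentioned above can be sketched as a weighted directed graph whose node activations are repeatedly passed through a sigmoid. The concepts, weights and update rule below are illustrative assumptions for a minimal FCM, not the authors’ actual model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(state, weights, clamped=()):
    """One synchronous FCM update: each free concept's new activation is a
    sigmoid of the weighted sum of all concepts' current activations."""
    new = list(state)
    for j in range(len(state)):
        if j in clamped:
            continue  # driver concepts are held at their input value
        new[j] = sigmoid(sum(state[i] * weights[i][j] for i in range(len(state))))
    return new

# Three illustrative concepts: 0 = new information, 1 = trust in the source,
# 2 = belief change. weights[i][j] is the causal influence of concept i on j.
weights = [
    [0.0, 0.0, 0.8],
    [0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0],
]
state = [1.0, 0.6, 0.0]
for _ in range(10):
    state = fcm_step(state, weights, clamped=(0, 1))
print([round(x, 2) for x in state])  # [1.0, 0.6, 0.75]
```

In the paper’s hybrid approach, the edge weights of each agent’s map would themselves be revised as agents exchange knowledge in an agent-based simulation; here the weights are simply fixed to show one map settling into a stable state.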

Composing music involves constructing sounds and manipulating them using notations. Such processes require the transcription of imagined sound. Conventional sounds can be notated using basic parameters such as frequency, amplitude and duration. Given the representational limits of conventional notation, composers working with more subtle parameters and more complex sound structures are faced with a significant challenge. In this project, I illustrate one solution via mathematical modeling. Modeling can create a more concise representation of complex sound structures that employ a wider array of parameters. This ability to represent more intricate sounds opens up the possibility of compositionally utilizing and manipulating sounds that would otherwise not be at our disposal. I will show the compositional benefits of this approach using visual and audio materials, and conclude by playing a four-minute piece, which I composed.

The evolution of the web in recent years has offered many opportunities to develop novel tools to communicate science and engage people. In this context, interactive web visualizations emerge as the perfect medium for teaching science, and astronomy in particular. Motivated by this, we have created http://astrollytelling.io, a project born to bring together astronomy and storytelling. Scrollytelling has gained huge popularity in recent years, especially in journalism, as a way to tell visual stories step by step simply by scrolling down a webpage. Astrollytelling adapts this technique to explain basic astronomy concepts and ideas through visual stories in a compelling and easy-to-understand style, using the JavaScript library d3.js. Originally aimed at undergraduate and graduate students, this visualization tool is also useful for any astronomy enthusiast who wants to learn more about the science astronomers do.

With advances in virtual reality and physiological sensing technology, immersive computer-mediated communication with lifelike characteristics is now possible. In response to the current lack of culture, expression and emotion in VR avatars, we propose a twofold solution. First, the integration of bio-signal sensors into the head-mounted display, with techniques to detect aspects of the emotional state of the user. Second, the use of this data to generate expressive avatars, which we refer to as Emotional Beasts. The creation of Emotional Beasts allowed us to experiment with the manipulation of a user’s self-expression in VR space as well as the perception of others in it, with the goal of pulling the avatar design away from the uncanny valley and making it more expressive and more relatable to our own mannerisms. We have implemented a prototype system in which VR, human motion and physiological signals are integrated to allow avatars to become more expressive in virtual environments in real time.

Elisa Bonnin: University of Washington Cascadia: An Interactive Climate Change Story

Throughout human history, knowledge and information have been handed down through narratives. We are primed to take information in a narrative sense, and this is particularly true for complex issues such as climate change, whose slow-acting yet potentially devastating effects on human society are often difficult to communicate, resulting in a lack of motivation to take action. In order to inspire conversations about the effects of climate change on human society in the future, I have created Cascadia, a narrative-driven video game in partnership with EarthGames UW that explores the lives of people living in a distant, post-climate change future. Cascadia blends elements of storytelling and narrative design with an interactive format that allows players’ decisions to change the outcome of the story.

Jacklyn Brickman: The Ohio State University Spellbreaker: a tree/human interrelationship

Spellbreaker is an interactive techno-sculpture. The black walnut tree, an efficient carbon absorber, is known in folk and herb culture to be a spellbreaker of heredity and the environment. The nut is a physical reference to the signature of the human head and brain. Spellbreaker aims to honor and bridge the gap between folk culture and technology. It references the connections human bodies have to nature by using human breath to activate the artwork that then creates an approximate output of the amount of CO2 the tree absorbs via ink made from the nut husks. As a viewer breathes, a CO2 sensor is activated and drips ink into a carved wooden basin. Just as human existence overwhelms the ecosystem with more CO2 than it can absorb, this system’s basin fills, puddles and the wood becomes oversaturated.

Iliza Butera: Vanderbilt University Music For Cochlear Implants

Cochlear implants (CIs) allow individuals with profound hearing loss to experience sound, some of them for the first time. While often very successful for speech recognition, most users struggle to perceive or enjoy standard music. For instance, some CI users only discriminate notes a full octave apart, making a typical chord sound like noise. To explore potential creative solutions, I began a collaboration with Artiphon – a startup designing adaptive digital instruments – to quantify listening thresholds and tailor musical compositions to those parameters. Unintuitively, this has led us to expand our concept of musicality while limiting tonal range, reverb, and polyphony. I believe these methods can improve the musical experience for CI users … even if the results sound quite foreign to acoustic listeners.

Sharath Chandra Ram: University of Texas at Dallas Listening Machines – New interfaces for Art-Science and Technology Policy

An art-science intervention as a mode of policy engagement is an emerging practice across the intersections of technology, law and society. I am a transmission artist dealing with the material embodiment of “wirelessness” and “signals”, and my PhD research employs creative infrastructural mediation to expose complex dynamics surrounding data ecosystems. An ongoing project, Whose Weather is it Anyway? (ISEA 2017, Manizales & Science Gallery, Bangalore), entails sonification of local weather data archived and decoded from polar orbiting satellites and remote airport sensors. The resulting artistic exploration exposes multiple assemblages that challenge dominant notions of centralized infrastructures tending towards technocultural singularities. Another project involves sonification of NIH genomic sequence data to enhance STEM learning across both science and policy studies. Ongoing research in this direction employs signal processing and machine listening techniques, data sonification and learning algorithms to provide new ways of performing research at the intersection of media art, science and engineering.

Body, Building, Block is a comparative analysis of the architecture of public space and social media. The echo chamber is not simply an architectural analogy; it is an interdisciplinary design challenge. Architectural representation is political representation. At this watershed moment in US political history – voter turnout in the 2016 presidential election was the lowest in 20 years, while the number of Facebook users surpassed two billion in 2017, generating four million “likes” every minute – discourse in public space and on the internet inform and influence one another at an unprecedented scale and produce concrete political outcomes. This project presents an analytical framework for the design of public space and social media and explores how the two can work together to counteract polarizing forces and increase civic participation. The analysis follows three dimensions: 1) public/private nature of the spaces/platforms; 2) personalization of architectural typology/content; and 3) user-experience design of the buildings/interfaces.

John Desnoyers-Stewart: University of Regina, Canada Designing Expressive Mixed Reality Interfaces through Practice-Based Research

As a professional engineer, I often found my creativity hampered by flat interfaces and deterministic methodologies where the result was defined at the outset. I am pursuing a Master’s in Fine Arts in search of ways to bring engineering and art closer together. Through artistically motivated, practice-based research, I have been able to develop new, expressive interfaces that expand on the potential of virtual reality including a “Mixed Reality MIDI Keyboard,” simulated touchscreens, and a fluid-simulation-based natural user interface. These expressive interfaces for mixed reality allow for an embodied mode of interaction which can provide a new paradigm for creativity and participation. Their development demonstrates the creative and productive power which can be harnessed by combining engineering and art: creating new possibilities through artistic methods and rationalizing results through an engineering approach.

This paper/presentation outlines the development of computational aesthetics, art and design practices afforded by computer technologies, which reflect collaboration between computer programmers and digital artists. Stemming from the work of John Berger, I argue that computational methods afford distinct “ways of seeing” and “narrative regimes” that influence how we think about and see data. Clips and images from early computational films illustrate the development of primitive computer art as critical to the advancement of a new way of seeing. In tension with this claim, I show how the definition of programmers as technicians rather than artists has served, and continues to serve, as a limiting fallacy. Highlighting the historical confluence and evolving collaboration of programmer and conceptual artist, I hope to invoke discussion on how a more nuanced historical understanding of cybernetic methods can rethink the role of abstraction in computational arts and design.

Madison Elliot: University of British Columbia Experimental methodologies for vision scientists and visualization experts: innovation through collaboration

Modern technology produces an abundance of data, and communicating information from this data in a way that can be understood by humans is often challenging and problematic. Effective information visualizations can leverage the strengths of the visual system, allowing us to keep humans and their creativity involved in subsequent inference and decision-making. Vision science offers a rich set of methodologies to help us understand how we see patterns and deduce meaning from visual representations of data. Recently, a surge in collaborative effort between visualization and vision science researchers has yielded exciting innovation in both fields. In this talk, I will review some of the more interesting results of this cross-disciplinary effort, as well as my recent work on modeling color perception, which is inspired by questions and needs from vision science, visualization and graphic design.

Although there is an abundance of software that uses Artificial Intelligence (AI) to emulate a single creative process, to our knowledge, there are no systems which provide an accessible interface facilitating an open-ended exploration of AI-based creative processes. Undoubtedly, the complexity of implementing and training a neural network is a barrier to many artists who wish to seamlessly integrate AI into their work. To address this issue, we present a flexible and accessible framework that allows the user to interact with neural networks on a high level, to design systems that generate the creative content of their choosing. This talk will outline the framework and provide interactive demonstrations of its capabilities. The main contribution of this work is a platform which encourages experimentation at a high level, harnessing the generative capacity of AI, to create innovative art in a variety of domains.

Angela Gao: University of Illinois at Chicago Comparing the Effectiveness and Engagement of Comics to 3D Animation in Teaching Advancements in Nanomedicine

Communication of new findings between scientists and medical professionals is essential for discoveries to translate into therapies, especially in the emerging field of nanomedicine. Understanding of mechanisms on the cellular and molecular scale is facilitated through the use of visualizations, such as 3D animation. This project seeks to validate the knowledge transfer of complex biomedical information in nanomedicine using the comic-book format, which has been effective for science communication but untested in medical education. A comic book about how a synthetic high-density lipoprotein gold nanoparticle may be used as a potential therapy for lymphoma will be created and compared to a 3D animation with identical content. This project will explore how these visualization formats – comic v 3D animation – differ in terms of knowledge acquisition, ease of use, engagement, and preference, in order to determine the viability of comics in enhancing communication of nanomedical discoveries to medical students.

Yelena Gluzman: University of California, San Diego Analyzing the Analyst: A reflexive, experimental approach to thinking theater and cognitive neuroscience together

Transdisciplinary efforts to bring together embodied cognition paradigms from cognitive science and theater practice often end up reinforcing the divisions between disciplines. In practice, such arts-science “collaborations,” despite a shared interest in embodiment, tend to re-inscribe the theory/practice and mind/body dualisms that embodiment theory seeks to challenge. This talk presents an alternative paradigm to pursue interdisciplinary dialogue between these disciplines. Analyzing the Analyst is a project undertaken by myself in collaboration with a cognitive neuroscientist, in which an experimental laboratory analysis is inflected through the reflexive layering of experimental theater. The staged and interrogative form of experiments themselves affords this sort of hybridity: experiments are performative in a particular and orchestrated way; they are sites of interaction whose staging shapes the sorts of phenomena that become visible. While experiments are central to the way that cognitive neuroscientists approach the empirical, they are also central to a range of artistic and theatrical practices that interrogate the embodied, empirical conditions of their production and reception.

Jeff Gregorio: Drexel University DrumHenge

We present a system for augmentation of acoustic drums using electromagnetic actuation of the resonant membrane, driven with continuous audio signals. Use of combinations of synthesized tones and feedback taken from the batter membrane extends the timbral and functional range of the drum. The system is designed to run on an embedded, wifi-enabled platform, allowing multiple augmented drums to serve as voices of a spatially distributed polyphonic synthesizer. Semi-autonomous behavior is also explored, with individual drums configured as nodes in a directed graph. EM actuation and wireless connectivity enable a network of augmented drums to function in traditionally percussive roles, as well as harmonic, melodic, and textural roles. This work is developed by an engineer in close collaboration with an artist in residence for use in live performance and interactive sound installation.
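The core of the drive signal described above — a synthesized tone mixed with scaled feedback — can be sketched in a few lines. This is an illustrative model only, not the authors' embedded implementation: the function name, the gain values and the one-sample feedback path are my assumptions.

```python
import math

def drive_signal(n_samples, freq_hz, sample_rate=44100.0,
                 feedback_gain=0.3, tone_gain=0.7):
    """Mix a synthesized sine tone with a simulated feedback signal.

    The 'sensed' value stands in for the signal picked up from the
    batter membrane; here it is simply the previous output sample.
    """
    out = []
    sensed = 0.0
    for i in range(n_samples):
        tone = math.sin(2.0 * math.pi * freq_hz * i / sample_rate)
        sample = tone_gain * tone + feedback_gain * sensed
        sample = max(-1.0, min(1.0, sample))  # clip to actuator range
        out.append(sample)
        sensed = sample  # feedback path: last output re-enters the mix
    return out

sig = drive_signal(1000, 220.0)  # 1,000 samples of a 220 Hz drive tone
```

In the real system the sensed signal would come from the batter membrane itself rather than from the previous output sample, and the clipped result would drive the electromagnetic actuator.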

Sleep is a forgotten country of the mind: a vast majority of our technologies are built for our awake state, even though a third of our lives are spent asleep. Current human-computer-interaction interfaces miss out on an opportunity to access information from the unique cognition ongoing during dreams and drowsiness. Working with neuroscientists, roboticists and artists I have been able to augment human creativity by extending, influencing and capturing dreams in stage 1 sleep. A window of opportunity arises during sleep onset in the form of hypnagogia, a semi-lucid sleep state where we all begin dreaming before we fall fully unconscious. This cognitive state has inspired thinkers as diverse as Dalí and Edison, suspending them between rationality and irrationality, an interface to the interdisciplinary thinkers in each of our heads. This project tracks and directs this state of mind for creative ideation.

Anna Henson: Carnegie Mellon University AR/VR/MR, Body, My Body

Our bodies increasingly collide with computation, as social and personal habits shift with the dissemination of new technologies. The race to define – and own – social space within virtual environments is a multibillion-dollar conquest. Defining human representation and interaction in these new realities (VR/AR/MR) is a complex and vital intersection of the arts and sciences, requiring skilled translation and negotiation of ideas and processes. This talk will discuss Body, My Body, an interdisciplinary artistic and computational exploration of embodied experiences in virtual and mixed realities. An ongoing collaboration between dancers, creative technologists and computer scientists, the project uses volumetric capture to create a virtual reality music video, and HTC Vive trackers with ambisonic sound to explore the overlap between physical and digital environments. Body, My Body asks: “How do we perceive and connect to each other within these new worlds?”

Andi Hess: Arizona State University Using Interdisciplinary Translation for More Effective Team Science

Collaborative discipline-spanning projects are high-risk, high-reward endeavors. The need to master unfamiliar jargon and concepts outside a researcher’s primary discipline makes the barriers to participation high. Interdisciplinary products are also often undervalued, partly owing to the perception that they are of lower quality than those produced from disciplinary expertise. Because there can be high institutional and personal costs associated with undertaking interdisciplinary projects, many researchers choose not to participate in them. If these risks could be mitigated, both researchers and their institutions could benefit from such opportunities, including larger potential audiences, greater impact on social issues, and innovative research outcomes. When properly executed, interdisciplinary research does produce innovative outcomes. Interdisciplinary Translation, a process for actively facilitating the exchange of knowledge across disciplinary languages, mitigates the high risk of interdisciplinary projects and helps teams bridge disciplinary boundaries, producing greater interdisciplinary integration and more effective outcomes.

Mac Hill: North Carolina State University, College of Design Developing a Visual Language for Uncertainty in Data Journalism

Data journalism has become a pervasive part of mass media, with infographics and visualizations appearing in print, online, and in television coverage. While visualizations in mass media can render data accessible to the public, they can also give viewers a false sense of truth and certainty. Uncertainty exists in all data and visualizations; it can be introduced during collection, analysis, or even visualization. Conveying the uncertainty involved in a data set provides viewers with a fuller picture and more robust understanding of an issue. Currently, there is not a perceptually sound visual language for conveying that uncertainty. Drawing from graphic design methods and frameworks, in addition to statistical and scientific methods for conveying uncertainty, this study explores new techniques that data journalists can use to convey uncertainty in statistical and scientific information to a non-expert audience.
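As a toy illustration of the kind of uncertainty the study wants to surface (my example, not drawn from the talk): a poll result reported as a bare percentage hides a sampling interval that may straddle the opposite conclusion.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1.0 - p) / n)

# A poll of 1,000 people finds 52% support for some measure.
p, n = 0.52, 1000
moe = margin_of_error(p, n)
low, high = p - moe, p + moe
# Reporting "52%" alone conveys false certainty: the interval runs
# from roughly 48.9% to 55.1% and straddles the 50% threshold.
```

A visual language for uncertainty would encode the interval itself — for instance as a band or gradient — rather than the deceptively precise point estimate.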

This talk explores socially engaged, environmentally oriented art in the context of the 6th Extinction, through the lens of my engagement with weedy plants and urban ecology. Focusing on my collaborative project, the Environmental Performance Agency (EPA), I trace a transdisciplinary genealogy from feminist land art of the 1960s through social practice art and tactical biopolitics in the 2000s, setting the stage for my current commitment to an artistic methodology I call “public fieldwork”. Drawing on EPA activities, I investigate the benefits of open, publicly accessible exchange between art and ecology, and between humans and nonhumans. Responding to Born and Berry’s concept of the “public experiment”, I explore tactics for resisting damaging dualisms arising from the simplification of fields such as invasion biology and restoration ecology. The artist/fieldworker becomes an intermediary to dispel plant blindness and encourage place-based awareness of the more-than-human habitats we create in constructing our cities.

Garrett Johnson: Arizona State University Responsive Media and Semantically Shallow Computation: Ascending from the Depths of Computational Complexity

We are over our heads in deep computation: machine learning, artificial intelligence and big data promise smooth, seamless futures of smart homes, full automation and synthetic intelligences, but are haunted by untold consequences. The stakes are high, code is brittle – programmers, artists, engineers and futurists must hedge their bets against computation of categorization. Responsive computation offers another way: dumb tech, shallow semantic computation and signal processing leverage the richness of our everyday experience. I present here a research/creation apparatus called Lanterns, a digital-physical hybrid system built for full-bodied interaction, as a sandbox for exploring experiential and material approaches to computation. The code amplifies the sensed movements of the lantern’s analog matter through sound and light media feedbacks, de-emphasizing algorithmic virtuosity in favor of what is already relational, social, and lived. Collapsing the computational black box, Lanterns allow us to imagine futures of dwelling with techne outside dominant narratives of technological progress.

Peter Marting: Arizona State University Forest of the Glowing Symbionts and Azteca Ants

As scientists, we discover small, beautiful truths hidden among us and use rigorous, quantitative tools to understand how they fit into our world. I am passionate about celebrating this endeavor by expressing my original research through different artistic media. I study the collective personality of Azteca ants that live symbiotically in Cecropia trees in the rainforests of Panama. The plants provide hollow internodes for nesting and nutrient-rich food bodies; in return, the ants provide protection from herbivores and encroaching vines. I would like to present two projects. First, Cecropia Treesongs uses data on the ants’ spatial distribution within the tree to generate a unique electronic musical composition for each colony. Second, Forest of the Glowing Symbionts comprises five large, interactive, glowing tree sculptures that light up with the activity patterns of their ant-colony symbionts via hundreds of programmed LEDs and respond when shaken, as the colonies do.

Absurdity as a critical response to technology is a technique that was formally developed within the early 20th-century art movement Dada. Drawing on this tradition, Absurdist Electronics is a design exploration practice aimed at exploring the relationship between the technologically augmented body, humor, alienation and anxiety. It has grown out of a series of artworks titled Urban Armor (2014-ongoing), strange wearable technology pieces which I have built and documented in public spaces and disseminated through the internet. In 2014, the second piece in the Urban Armor series, The Personal Space Dress, a robotic dress that expands when someone comes too close to it, went viral, with major news sources picking it up and reacting to it as if it were a commercial product. The strange and frenzied experience of this attention led me to consider the relationship between the technologically augmented physical body (cyborg), absurdity (clown), and anxiety, in a digital-obsessed era that plans for the body’s obsolescence.

Community-led air-quality data collection has had a troubled history. How might artists bridge the gap of engagement in a meaningful way that would contribute to expert and non-expert knowledge-making? Airtracs is a two-year community project that synthesizes air-quality science, low-cost technologies and data visualization. The artist initiated the project by appropriating the popularity of citizen science and do-it-yourself making as tactics for engaging residents about air quality in their neighborhood. In part one of the project, youth participants augmented remote control toy trucks (rovers) with cellular networking abilities and inexpensive sensors. This coincided with the NY State Department of Environmental Conservation’s air-quality study in the neighborhood, prompting the artist to reach out to the scientists. She invited them to join the youth during a public “rover walk” to collect air data, resulting in an unexpected bridging between residents and scientific research efforts. The second part of the project engages participants in creative mapping and other audiovisual methods using the DEC’s year-long data collection.

Kieran Murphy (with Xinyi Zhu): Physics, University of Chicago Randomness and Mind Games in Virtual Reality

The inability of our brains to make sense of a situation – often ascribed to randomness, whether correctly or incorrectly – is an animator’s tool for creating intrigue and a physicist’s constant foe to push against through research and theories. More broadly, “randomness” is a ubiquitous presence in our lives: much of it we ignore, some of it leads to humor and some to hardship, and all of it adds color and uniqueness to our existence. In this collaboration, we combine our backgrounds to create virtual reality (VR) experiences which experiment with various flavors of randomness and put the player in the middle of this immersive discussion. Our work together has made each of us rethink the role of randomness in our studies and in our daily lives.

Riding his bicycle home from work, 28-year-old Hector Avalos was struck and killed by a drunk driver. In the aftermath, playwright Thomas Murray spoke with Avalos’ family and friends to assemble an oral history of the fallen cyclist. Excerpts from those recorded interviews and contextual interviews with civil engineers, urban planners and transportation historians comprise The Right of Way, a documentary play with immersive multimedia which asks audiences to consider who their city streets have been built for – and how those ideas are changing. Using video excerpts from a recent performance, Murray will introduce the play’s interdisciplinary creation through public workshops with fellow academics and activists in Atlanta and Washington DC. He will also discuss how building partnerships with non-arts organizations can increase citizen participation in planning processes and how the sharing of oral histories can be effective in building public support for Complete Streets initiatives.

Jifei Ou: MIT Media Lab Architecting Materials: Cilllia

This presentation will share my research on designing programmable materials at mesoscale, specifically a collaborative project that was biologically inspired and derived, computationally designed, robotically fabricated and artistically presented. The project demonstrates new materials with input (sensing) and output (actuation) capabilities, which can be exploited for the next generation of Human-Material Interaction design. I would also like to share the experience of interdisciplinary collaboration between scientists, engineers, designers and artists at each stage. Cilllia is an effort to create functional materials at the smallest scale that designers can achieve today. Through the voxel-based modeling tool we developed, we can now 3D print dense hair structures with customizable geometry for each hair. Such hair structures are used to design passive actuators and acoustic sensors for human touch.

Janet Panoch: Indiana University – Purdue University Indianapolis (IUPUI) Translating Videos to Interactive Role-Play Video Simulations for High School Health Classes: Teens Inform PACE-talk – The Game

Healthcare consumers are often unprepared to actively participate in decision-making with providers. While medical schools require the fundamentals of communication for medical interviewing, patients do not receive equal preparation. High school health classes are required for graduation and should include evidence-based patient communication skills training. PACE-talk uses the PACE patient training model in videos of passive/active patients and doctors. It was tested at a high school with significant results using the Medical Communication Competence Scale. Student suggestions for improvement overwhelmingly called for less watching and more interaction, though they liked the role-play scenarios. PACE-talk – The Game will adapt the role-play videos to an interactive learning game with the expertise of Yale’s Center for Health and Learning Games. High school stakeholders will inform and test two theoretically grounded prototypes in 2018-19. A condensed video will precede the game, giving students the opportunity to interactively practice the skills in the role-play video simulation.

Kelly Park: The University of Texas at Dallas Semiotics of Pain

According to 2016 US census data, almost 40% of the US population speaks a language other than English at home – roughly two in every five American families. This language barrier complicates interactions within the healthcare system. Limited English Proficiency (LEP) patients experience significant disparities due to the language barrier: less health education, poorer interpersonal care, and lower patient satisfaction. To improve the experience of LEP patients, I propose to design a semiotic system of pain to assist communication between LEP patients and English-only speaking healthcare providers. This communication tool would be useful not only for LEP patients but also for anyone who struggles to describe pain precisely, without needing to know a specific language.

This paper presentation will explore the queer ecology of relationships between human artists/scientists and nonhuman matter in the feminist art laboratory. Exploring practices of care and maintenance enacted by bioartists, informed by interviews I have conducted with a number of bioart practitioners, my paper will address and celebrate the intimate and meaningful interactions occurring between artists and their semi-living subjects/muses/collaborators. Caring for semi-living entities – a term defined by Oron Catts and Ionat Zurr to describe transgenic beings that occupy an in-between status as living creatures that rely on outside forces to stay alive and are not self-sustaining – requires an excess of emotional, physical and intellectual labour on the part of the artist. The regimented masculine lab protocols are broken and reframed through the eyes of artists, such as Catts and Zurr, WhiteFeather Hunter, Nicole Clouston, Kathy High, and Tarsh Bates, who allow themselves to celebrate their maintenance activities, engage with nonhuman bodies and feel empathy towards their specimens. I will present on my own ethnographic and practice-based research in feminist bio-art laboratories in North America, illustrating the ways in which, through the maintenance and pedagogical practices of making and public display, feminist bioartists impart their knowledge of scientific processes and their caring empathy for nonhuman life.

Research on creative cognition reveals a fundamental disagreement about the nature of creative thought, specifically, whether it is primarily based on automatic, associative (Type-1) or executive, controlled (Type-2) processes. However, expertise must be considered to better understand cognition in artistic domains. Neuroscience and eminent artists’ accounts present converging evidence that Type-1 processes become dominant as one achieves mastery. We examined jazz improvisation across three studies, using explicit “be creative” instructions and transcranial direct-current stimulation to modulate musicians’ cognition. The results of the first two studies revealed a significant interaction between expertise and modulation technique, such that ramping up Type-2 processes significantly increased novices’ quality ratings and significantly decreased ratings of the most experienced musicians. In study three, musicians’ reports of Flow significantly predicted the quality ratings of their improvisations. We will present an overview of the results from these studies and discuss next steps and challenges when examining musical creativity.

Nina Sakhnini: University of Illinois at Chicago Walking the talk: Generating memory cues to help people with dementia in everyday conversations

People with mild to moderate dementia have several forms of cognitive impairment, including memory impairment, and often have difficulty finding the right words when engaged in conversation. To help people with dementia recall words in everyday conversations, our research is exploring how to design a proactive digital aid. We are designing an ambient speech-recognition system that listens to the user’s conversations, detects when a memory trigger is needed – for example, when the user forgets a name or a place – and offers context-based memory-refreshing cues. We are exploring different modalities for these triggers, such as visual and vocal content. Sensing (of speech) can be turned on or off on demand. Our system will help trigger memory cues for people with dementia and thereby may improve the quality of everyday life.
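A minimal sketch of the trigger-detection idea, under my own assumptions: a hypothetical personal knowledge base and a short list of hesitation markers stand in for the ambient speech-recognition system the project describes.

```python
# Words that suggest the speaker is stuck (illustrative list).
HESITATIONS = {"um", "uh", "er", "whatsisname", "whatshername"}

# Hypothetical personal knowledge base: topic word -> cue to surface.
CUE_BASE = {
    "doctor": "Your doctor is Dr. Lee; appointments are on Tuesdays.",
    "granddaughter": "Your granddaughter's name is Maya.",
}

def detect_trigger(utterance, cue_base=CUE_BASE):
    """Return a memory cue if the utterance shows hesitation near a known topic."""
    words = [w.strip(".,?!") for w in utterance.lower().split()]
    if not any(w in HESITATIONS for w in words):
        return None  # no sign the speaker is stuck
    for topic, cue in cue_base.items():
        if topic in words:
            return cue
    return None
```

For example, `detect_trigger("I saw the doctor, um, what was his name")` would surface the doctor cue, while a fluent sentence yields no trigger. The real system would work on recognized speech rather than typed text and would draw on far richer context.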

Nazmus Saquib: MIT Reimagining Early-Childhood Classrooms

Children often do not know how to express their needs and interests to teachers. In a busy classroom, teachers hardly have time to observe every child and learn about their individual needs, which makes personalizing the curriculum and experience for every child very difficult. I have developed minimally invasive social proximity sensors and swarm robots that capture valuable hidden insights in a classroom. Using the sensors, we reconstruct the daily social network, teacher-student time distribution, and learning time, which provides unique insights to teachers about their teaching style and the time they spend with each child. Additionally, swarm robots embedded in learning materials provide novel data about individual styles of learning and struggles. These technologies contributed significantly to the formation of Wildflower Schools in 2016, a group of experimental schools aimed towards reimagining the future of early-childhood classrooms. I co-led this research collaboration among educators, psychologists, computer scientists and design engineers.
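The reconstruction of a daily social network from proximity readings could, in its simplest form, look like the following. This is a sketch under my own assumptions about the sensor data format, not the deployed Wildflower system.

```python
from collections import Counter
from itertools import combinations

def daily_social_network(events):
    """Count co-presence ties between children from proximity-sensor events.

    `events` is a list of (time_window, place, child_id) readings; children
    sensed at the same place in the same window are counted as one tie.
    """
    by_window = {}
    for window, place, child in events:
        by_window.setdefault((window, place), set()).add(child)
    ties = Counter()
    for group in by_window.values():
        for a, b in combinations(sorted(group), 2):
            ties[(a, b)] += 1
    return ties

events = [
    (1, "blocks", "ana"), (1, "blocks", "ben"),
    (2, "blocks", "ana"), (2, "blocks", "ben"), (2, "blocks", "cai"),
    (3, "paint",  "cai"),
]
network = daily_social_network(events)
# ana and ben shared two windows; cai joined them for one.
```

Aggregated over a day, such tie counts give the social network; the same event stream also yields teacher-student time distributions by treating the teacher as one more tracked identity.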

Gabi Schaffzin: University of California, San Diego Quantified Self and multimodal interrogation

My project combines the benefits of multimodal interrogation: built upon critical theory, based on the use of quantified-self devices, and presented in a manner accessible to a general audience. A series of artworks incorporate data from quantified-self devices, reinterpreted into forms that highlight a critical property of that data (eg, its propriety and/or arbitrariness). Carey (1989) notes: “Things can become so familiar that we no longer perceive them at all. Art, however, can … wrench these ordinary phenomena out of the backdrop of existence and force them into the foreground of consideration.” Carey’s position drives this project as I reframe otherwise mundane data about a quantified-self into a means to raise questions about power, meaning, and identity not found in other QS-related discourse. This exercise is part of a larger project in which the nature of computable subjectivity is questioned and users are empowered to reclaim the self from the quantified-self.

The digital art landscape has rapidly expanded since the passage of the Visual Artists Rights Act of 1990 (Baron, 1996; CAA, 2013). With the recent advent of blockchain technologies, derived from Nakamoto’s Bitcoin currency, new possibilities have emerged for the way artistic materials can be exchanged and how communications can be conducted. This research examines emerging applications for decentralized blockchain technologies in community-based art projects and digital art startups. The work of three organizations – ConsenSys, Ethereum and Monegraph – is explored. Through the use of blockchain technologies, designers and digital artists can create a traceable and tradable record of their work, while generating a critical discourse around the reproducibility of media. In this research, I investigate the potential uses of design and art in the blockchain, and its educational value in visual arts education.
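The core idea behind a traceable record of an artwork's ownership can be sketched without any particular blockchain platform: each record commits, via its hash, to the entire history before it, so tampering with any past entry is detectable. Field names and the two-record example are illustrative only; ConsenSys, Ethereum and Monegraph each implement far richer, decentralized versions.

```python
import hashlib
import json

def add_record(chain, artwork, owner):
    """Append a provenance record whose hash commits to the whole history."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"artwork": artwork, "owner": owner, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; altering any past record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"artwork": rec["artwork"], "owner": rec["owner"], "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "print-001", "artist")
add_record(chain, "print-001", "collector-a")
```

Because each record's hash includes the previous record's hash, rewriting an earlier owner invalidates every later entry; a real blockchain adds distributed consensus so no single party can rewrite the chain at all.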

Akshita Sivakumar: University of California, San Diego Performative Experiment: Shadow as Boundary Object

This talk makes a case for the benefits of a spatial and embodied practice in fields that don’t traditionally consider themselves concerned with either. Such practices challenge unchecked knowledge-making regimes and disciplinary canons. I position myself at the intersection of architectural design education and cognitive science, to sketch out unique crossovers of material and methodology. Through two case studies, I demonstrate how including a spatial and embodied approach in visual perception experiments in the cognitive sciences can open up fruitful lines of questioning. Shadows and animation play out this position. Ultimately, I argue that to form truly interdisciplinary, expanded bodies of knowledge, we need to expose students to the benefits of being trained to frame questions in their own fields in spatial and aesthetic ways.

Nikki Stevens: Arizona State University Inclusivity in Data Lifecycles

Open Demographics is a project that is creating a “standard” way to ask empathetic, inclusive and expansive demographic questions and providing recommendations for UX, data analysis and data display. As traditional data structures influence the questions we ask, this project asks a different question: what would demographic data look like if we were not bound by booleans and integers or checkboxes and radio buttons? Open Demographics explores how we can collect, analyze and display data in ways that account for the variety of identities (gender, sexual orientation, ethnicity, dis/ability) that are meaningful for individuals. How can we use existing data structures to support the diversity of identity markers that are meaningful to people? Inclusive data is not messy; inclusive questions are not clumsy; and inclusive algorithms are possible. This project embeds inclusivity at every phase of the data life cycle and is a praxis-based approach to the consumption, analysis and display of human identity data. Is it possible that we can become more visible in aggregate?

Kira Street: University of Texas at Austin Crochet Forms and Architecture

Crochet is an old craft with a rich history and a global reach. As it grew in popularity in the west, it became associated with domesticity and femininity. Because of this, its forms and applications have been confined to the domestic world although, as projects in free-form crochet and architecture show, crochet has the potential to be applied beyond the domestic. As part of my research, I’m using generative and computational design techniques to explore crochet forms, experiment with materials, and propose new applications. I’ll show how crochet has mainly been constrained within feminine and domestic applications, and how it can be extended into the design world through an exploration of form. I’ll display crocheted physical artifacts made with the computer program I’ve written, as well as the program itself.
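One simple generative technique of the kind described (my illustration, not Street's actual program): growing each crochet row by a fixed increase ratio produces the stitch counts of a ruffled, hyperbolic-style surface, a staple of free-form crochet that escapes flat, domestic forms.

```python
import math

def hyperbolic_rows(start_stitches, ratio, n_rows):
    """Stitch counts per row for a constant-increase crochet surface.

    A constant increase ratio (e.g. 3 new stitches for every 2) gives the
    fabric more circumference than a flat disc can hold, so it buckles
    into a non-flat, ruffled surface.
    """
    rows = [start_stitches]
    for _ in range(n_rows - 1):
        rows.append(math.ceil(rows[-1] * ratio))
    return rows

# A 6-stitch ring grown at a 3:2 increase rate for five rows.
print(hyperbolic_rows(6, 1.5, 5))  # → [6, 9, 14, 21, 32]
```

Varying the ratio, or making it a function of the row number, is one way a program can enumerate candidate forms before any yarn is committed.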

My research focuses on the physical landscape’s capacity to store history, and its ability to act as a metaphor for the act of remembering – or forgetting. There are an estimated 50,000 abandoned coal mines in the United States. Their stories are not only of historical import; they speak of changes in industry, politics and the lives of those who labored there. Today, they pose hazards to humans and the environment. My current project attempts to convey the sheer number of these spaces using a sculptural map which projects a point of light for each abandoned mine in Appalachia. By creating poetic and immersive visualizations of data, I hope to compel viewers to contemplate, investigate, and think critically about our approaches to land use and remediation.

Loan Tran (with Kelly Park): University of Texas at Dallas Chromatic Structure and Family Resemblance in Large Art Collections – Exemplary Quantification and Visualizations

The proliferation of visual data has allowed researchers to perform quantitative analysis on large art datasets. Computer algorithms are able to identify duplicate photos in image archives, find artworks containing a given object, and detect architectural styles of buildings. What is missing is a rigorous reconciliation between state-of-the-art computer science techniques and established art historical standards based on trained observation and hermeneutic interpretation. We aim to address this gap in two ways. First, through visualizing the chromatic structure of paintings by consistently sorting color pixels, we uncover hidden color patterns of individual paintings, artist oeuvres, periods, and collections. Second, using deep learning and dimension reduction, we calculate visual family resemblance and generate visualizations. During two courses by Dr Maximilian Schich, we have produced a series of visualizations on chromatic structure using the Dallas Museum of Art dataset. The deep learning aspect has been extended through collaboration with DataLab at the University of Washington, Seattle. We are adding more data and performing further analysis.
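The "consistent sorting of color pixels" mentioned above can be illustrated with a minimal sketch. The hue-saturation-value sort key is my assumption; the actual project may sort on a different color ordering.

```python
import colorsys

def chromatic_sort(pixels):
    """Sort RGB pixels (0-255 per channel) by hue, then saturation, then value.

    Applied to every pixel of a painting, a consistent sort like this
    discards composition and reveals only the color distribution, making
    paintings, oeuvres and collections directly comparable.
    """
    def key(rgb):
        r, g, b = (c / 255.0 for c in rgb)
        return colorsys.rgb_to_hsv(r, g, b)
    return sorted(pixels, key=key)

pixels = [(200, 30, 30), (30, 30, 200), (30, 200, 30)]  # red, blue, green
print(chromatic_sort(pixels))  # → [(200, 30, 30), (30, 200, 30), (30, 30, 200)]
```

Rendering the sorted pixels as a strip or bar, one per painting, yields the kind of chromatic-structure visualization the abstract describes; the same sort applied across an entire collection exposes period- or collection-level palettes.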

Amy Wetsch: Maryland Institute College of Art (MICA) Sculpture and the body’s immune system

In my multidisciplinary practice, I explore why the body’s immune system attacks itself, as well as medical mysteries that defy explanation by modern science. My work is a visual representation of my personal experiences, research, and fascination regarding illness, specifically autoimmune disorders. Through a creative lens, I investigate the intentions and results of medical treatments and our dependency on them to repair our bodies. I hope to create a standalone installation that discusses these issues. I envision it to be colorful, created from unusual materials, and abstracted from sources such as cells, bacteria, and other organic forms. This will be an inviting and intriguing way for viewers to assess visual information and bring forth a larger conversation about things that are not commonly or openly discussed.

William Wiebe: School of the Art Institute of Chicago Sights Unseen: Computer Vision, Human Memory

With the support of an Arts, Science & Culture Initiative Graduate Collaboration Grant from the University of Chicago, I worked with John Santerre, a computer science PhD at UChicago, to train a deep neural network (DNN) on a publicly available dataset of satellite images. We then introduced slight pixel perturbations to the dataset to create adversarial images that were capable of fooling the DNN but whose differences were imperceptible to the human eye. We used this research as a means to explore the history of the indexical image in its relation to aerial surveillance, developing speculative camouflages for future use against artificial intelligence. This presentation will touch on our collaboration, its outcomes, and the enduring impact it has had on my artistic practice. I am also excited to exhibit a copy of our book (and its companion augmented reality application) in the creative exhibition.
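The mechanism behind such imperceptible perturbations can be shown on a toy linear model rather than a deep network (an illustrative sketch, not the collaborators' method): moving each input feature a small amount against the sign of the model's gradient flips the classification while keeping every individual change below a small threshold eps.

```python
def score(w, x):
    """Toy linear 'classifier': positive score -> class A, negative -> class B."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial(w, x, eps):
    """FGSM-style perturbation: nudge each feature by eps against the
    gradient's sign. For a linear model, the gradient of the score is w."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -1.0, 0.25]   # toy model weights
x = [1.0, 0.2, 0.8]     # clean input, classified as A (positive score)
x_adv = adversarial(w, x, eps=0.3)
# No feature moves by more than 0.3, yet the predicted class flips.
```

In the satellite-image setting, the same logic is applied to pixel values through the network's gradient, which is why perturbations can remain below the threshold of human perception while defeating the classifier.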

This talk situates the practice of designing mediation technologies as artistic tools to expand the human repertoire. Three art-science collaborations – Mandala, Imagining the Universe, and Resonance of the Heart – are elaborated as proof-of-concept case studies. Scientifically, our research examines the mappings from (bodily) action to (sound/visual) perception in technology-mediated performance art. Empirical data show evidence of identifying the audience’s degree of engagement, from synchronization to embodied attuning to empathy – the human connections. Theoretically, we synthesize media arts practices at the level of defining general design principles and post-human artistic identities. Realized by a group of multinational media artists, computer engineers and cognitive scientists, our work preserves, promotes, and further explores minority cultures with emerging technologies. This practice-based research, and the emergence of entirely new fields of scholarship and artistic creation, result in significant changes in how concepts are formulated in disciplines of the humanities.

Xinyi Zhu (with Kieran Murphy): Film, Video, New Media and Animation, School of the Art Institute of Chicago Randomness and Mind Games in Virtual Reality (Abstract can be found under the Kieran Murphy entry.)