Group 2: UniTongue Design Process

Design Process

World → Resistance/Revolution → Primary/Secondary Research → System → Refinement

We first designed a world where pollution has become so severe that it has almost completely eradicated safe, healthy verbal communication outdoors. Environmentalists and governments have made no effort toward solving this problem, leaving people who want to communicate openly (without face masks) exposed to serious health effects.

Once we had this world, we had to narrow down our resistance and revolution. This wasn’t an easy task for us, and it developed and matured along the way, especially with the help of guest critics. Overall, we decided that our system should be a preventive solution that revolutionizes communication (with an educational approach), ultimately resisting the environment, verbal speech, and trust in environmentalists and the government to solve our world’s problem.

During primary research we conducted ten interviews and sent out a survey that received five responses. From the interviews we received an overall positive/neutral rating and some common reservations: the fear of accidentally expressing private thoughts, movement glitches, and the inability to properly express emotions. These reservations inspired additional system functionality, including a “suggestion only” feature (as opposed to forcing users into making a gesture) and “emotional cues” (emitted into users’ ears to cue them when to make facial expressions as well). From the survey we received an overall negative/neutral rating and widespread confusion about what the system would actually be doing. From this we realized that we needed to narrow down our functionality and components, since they were not coming across well on their own. We needed to make it clear that we are not solving health issues; we are solving communication issues.

Secondary research helped us realize that if we broke our system down into smaller components, the functionality of those components was either currently achievable or close to it. Reading brainwaves was by far the furthest off (we estimate about 10 years away). Additionally, we looked at articles on current smog pollution levels in cities around the world and predictions for rising pollution (examples from six major cities are shown below). We determined that pollution will eradicate healthy speech within the next 20-50 years, which helped us establish our world timeline, and eventually our system timeline. Because we wanted this system to be preventative, and our world is 20-50 years in the future, we are shooting for a 2030 plan.

NEW DELHI, INDIA

DUBAI, UAE

LOS ANGELES, USA

NEW DELHI, INDIA

CANTERBURY, NEW ZEALAND

BEIJING, CHINA

Finally, for our prototype we designed a gesture assistant system. Along the way we considered multiple hardware units with varying functionality, but in the end we knew we had to narrow down the components and functionality to make the system more desirable and less overwhelming to potential users without distracting them from its actual purpose. We finalized our ideal system to consist of the following six components:

Camera

detects the gestures of other people

Speakers

emit the gesture’s translation into the wearer’s ear

Electrodes

control/stimulate the user’s upper extremities to create the desired gestures

CPU

the main unit controlling all functionality including translation and proximity detection

Wearable head unit

this will host all hardware components besides the electrodes

EEGs

these will be used to detect brainwaves/thoughts

we won’t showcase these on our prototype since they would be built into the wearable head unit
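The components above amount to a translate-and-emit pipeline: the camera feeds detected gestures to the CPU for translation to the speakers, while the EEGs feed intended phrases to the CPU for translation to the electrodes. As a hypothetical sketch only (the gesture vocabulary and function names here are illustrative, not part of the actual system), the CPU’s two translation directions might look like:

```python
# Hypothetical sketch of the CPU's two translation directions.
# The gesture vocabulary below is invented for illustration.

GESTURE_TO_PHRASE = {
    "open_palm_wave": "Hello",
    "crossed_arms": "No, thank you",
    "thumbs_up": "Sounds good",
}

# Reverse mapping: intended phrase -> gesture to suggest via the electrodes.
PHRASE_TO_GESTURE = {phrase: gesture for gesture, phrase in GESTURE_TO_PHRASE.items()}

def translate_incoming(gesture: str) -> str:
    """Camera detected another person's gesture; return text for the speakers."""
    return GESTURE_TO_PHRASE.get(gesture, "[unrecognized gesture]")

def suggest_outgoing(phrase: str) -> str:
    """EEGs detected an intended phrase; return the gesture to suggest."""
    return PHRASE_TO_GESTURE.get(phrase, "[no gesture available]")
```

The “suggestion only” feature from our primary research fits naturally here: the output of `suggest_outgoing` is a cue the user can accept or ignore, rather than a forced movement.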

With this design and approach, users would not be dependent on our system for the rest of their lives. They would eventually become proficient enough in this form of sign language to sign the appropriate gestures themselves, without system stimulation.

Smog examples, prototype sketches, storyboards, and primary/secondary research for our system, as presented to critics.

Further, with prompting from critics, we made system refinements and thought about some of the broader impacts of our system, since it is flexible enough to solve other issues. These include:

teaching sign language

assisting those with limited motor functions

universal communication (e.g. across cultures, deaf community, etc.)

Keeping all of this in mind, we went on to design how we wanted to prototype this in today’s world (not all functionality can be present). Additionally, we wanted to make sure we had a memorable name and a video to promote our system. These last three design steps are all detailed below.

Prototype Design → Branding → Video

Prototype Setup/Design

We first created a higher-fidelity prototype design, shown below, as opposed to the sketches we had been using. Then we worked out how we wanted to prototype this design.

Medium-fidelity design of our system prototype, officially known as UniTongue. This is the design we based our actual prototype on.

We went with a Wizard of Oz approach to achieve full “functionality” of the system. This allows us to showcase the desired functionality in an affordable manner. The materials we will be using for the prototype include:

From left to right: wireless earbuds, googly eyes, a TENS unit, extra electrodes, and headbands. These were all the materials we ordered, and they were used to create two system prototypes. Not pictured: two more earbuds and two more googly eyes, used to create two more system prototypes (for a total of four).

Headband

to be used as the wearable head unit

Bluetooth Headphone

user will place it in their ear and it will function as the speaker

will link phone calls so the “behind the scenes” CPU “unit” (actually a person) can instruct users how to sign the gestures

Electrodes

these will be placed on the user’s upper extremities

won’t actually be connected or functional

Googly Eyes

these will be attached to the headband to simulate the camera

no, these won’t be functional eyes

“Camera lens” made from googly eyes. We made four total.

TENS Unit

this won’t be a part of the “prototype” but will be used to demonstrate how it would feel to actually have the electrodes move your upper extremities

Cardboard CPU and power supply

this will not be functional, but will just be attached to the head unit to showcase what they might look like

The outline for our “CPU” and “power supply” before folding. These were made from the headband packaging. Additional ones were made from card stock. We made eight total.

We will have one person acting as the “CPU” unit, remaining behind the scenes. The “camera” and wireless earphones will be attached to the headband and function as the wearable unit. We will also fashion and attach a non-functional, placeholder CPU.

Once the user is wearing this, we will place placebo electrodes on their upper extremities.

The user will “think” what they want to say by choosing a phrase from a list we will have displayed and saying it aloud so the “CPU” can hear. The “CPU” will then “translate” this and describe what to do with their arms, hands, and/or fingers (simulating the electrode stimulation and control). Once this is complete, the other person’s “camera” will detect these gestures and the “CPU” will translate them by emitting the translation into that person’s ear from the wireless headphones. This process can then be repeated.

The connections between the “CPU” and the users will be established through a phone call.

Additionally, there will be a board of phrases to choose from, and a drawing of upper extremities with labeled positions. This will cut down on confusion when directing users how to position their upper extremities.
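The phrase board plus the labeled diagram effectively give the hidden “CPU” operator a lookup table: each phrase maps to a sequence of labeled positions to read aloud over the phone call. A minimal sketch of that table (the phrases and position labels here are invented for illustration, not taken from our actual board):

```python
# Hypothetical encoding of the phrase board used in our Wizard of Oz protocol.
# Position labels (A1, B3, ...) stand in for the labeled diagram of upper
# extremities; the phrases and labels below are illustrative only.

PHRASE_BOARD = {
    "Hello": ["raise right hand to A1", "wave fingers at B3"],
    "Thank you": ["bring flat right hand to C2", "move hand outward to C4"],
}

def cpu_instructions(phrase: str) -> str:
    """What the hidden 'CPU' operator reads to the user over the phone call."""
    steps = PHRASE_BOARD.get(phrase)
    if steps is None:
        return "Phrase not on the board; please choose another."
    return "; then ".join(steps)
```

Because both the user and the operator reference the same labels, the spoken directions stay short and unambiguous, which is exactly the confusion the board and diagram are meant to cut down on.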

And specifically for presenting, we will all be wearing the prototype (we made four versions), as shown below.

Each group member is wearing a UniTongue prototype.

Branding

We came up with branding ideas by brainstorming. Ideas included:

RosettaTongue, UniTongue, SoloTongue, or GestureTongue

Result: UniTongue

Next we made a logo:

Branding logo for our system. Each “u” is meant to look like a tongue.

We went one step further and coined a slogan:

“UniTongue: expression without speaking. Tongue not included.”

Video

We began designing our video by constructing storyboards of how we could showcase the prototype, deciding on a particular video style, working out a script and its scene timings, then filming and editing. Each step is further detailed below.

Storyboard

without the system to demonstrate negative health effects

Storyboard idea for demonstrating the negative health effects associated with not using our system. In the end, we decided not to use this in our video.

with the system to demonstrate functionality


Storyboard idea for demonstrating the functionality of our system. In the end, we decided to use this idea for our video.

Originally we thought about using both storyboards: one to demonstrate the negative health effects of not using our system, and one to demonstrate the positive results of using it. However, we decided it would be best to focus solely on demonstrating the functionality of the system, so that the health effects would not distract viewers from that functionality.

Furthermore, to make the video more relatable, we decided to photograph the latter scenario (prototype functionality), as shown below.


We later edited the images with thought bubbles and descriptions to convey the “functionality” before adding them to the video.

Style

We first considered a stop-motion video, then quickly pivoted to a live-action video that we would frequently pause to narrate the inner workings of our prototype with “speech bubbles.” Once we started looking up examples and resources for this style, another idea came to us. We began planning a “Because A, then B. Because B, then C. … Because Y, then Z.” type of video, which played into the dramatic effect we wanted; however, we quickly realized this wouldn’t be informative enough.

So, we took a step back and decided to design the script first. Once we did this the styling fell into place. We decided to have pictures matching the subject of the script. These consisted of polluted cities, sketches and designs of our prototype, actual pictures of the finished prototype, a storyboard to demonstrate functionality, and a clip of our prototype being worn. The style of the video will be mostly educational: it presents a problem in our world and showcases our system as the solution.

We decided to film a 360-degree view of our prototype. To do this we rotated the camera around someone wearing the prototype as they stood still. Additionally, we started from afar and zoomed in closer on the user for another shot. This allowed us to focus on the wearable head unit (where the majority of the components are) while still displaying the entire prototype, including the electrodes on the arms.

Further, we took pictures of each transition for the storyboard. Later, the thought bubbles and descriptions were edited in before adding them to the video.

Evaluation Plan

We started our evaluation plan by working out the main sections we wanted to learn more about. These include usability, aesthetics/design, and the plausibility of functionality. We created a survey with 13, 6, and 6 questions in each category, respectively.

For usability, we really wanted to learn whether people would use the system, whether they would want to use it, how they would use it, and why. For aesthetics/design, we wanted to gauge how well we did at combining the functionality into one unit. Does the design affect whether users would want this system? Would users potentially use it if it came in a different form? Lastly, for plausibility of functionality, we wanted to verify whether people believed we picked a feasible timeline, and which features might have been too ambitious or not ambitious enough.

The plan is to evaluate our system with previous interviewees, plus more individuals outside their majors and/or age ranges.

We also did some impromptu evaluating of peer reactions to the electrodes moving your fingers, as pictured below.

Using the TENS unit to evaluate peer reactions to having electrodes influence the behavior of their fingers. This is what we will use to demonstrate the “suggestive” movement that would be simulated by our UniTongue system to create the appropriate gestures.

Results

We were able to evaluate eight participants. The prototype was showcased to each of them and they were able to wear it or examine it for as long as they liked.

Analysis

Word cloud of the raw data (without the questions) from the evaluations.
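A word cloud like the one above is driven by word frequencies across the raw free-text responses. The counting step can be sketched in a few lines; the sample responses and stopword list below are invented for illustration, not our actual survey data:

```python
import re
from collections import Counter

# Small illustrative stopword list; a real one would be much longer.
STOPWORDS = frozenset({"the", "a", "of", "to", "and", "be", "would"})

def word_frequencies(responses):
    """Count words across free-text survey responses, ignoring stopwords."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words)

# Invented sample responses for illustration:
sample = [
    "I worry about my brain being hacked",
    "Encryption makes me trust the device",
]
freqs = word_frequencies(sample)
```

Word-cloud tools then size each word in proportion to its count in this frequency table, which is why recurring themes like distrust or encryption stand out visually.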

Initially, a large number of participants distrusted the system, with a continuing fear of their “brains being hacked”, but many said that knowing the data transfer would be encrypted made them feel much better about it. Also, while almost all participants believed the device accomplishes its purpose, many also believed it could serve other purposes that would be an even better use of the system. Many participants also thought the device might be a bit uncomfortable at first, but almost all believed that after a few uses they would become accustomed to it and be much more comfortable using it. A few people also expressed a fair amount of interest in different or aesthetically modifiable versions of the headset (such as a glasses version, or a different look for the electrodes).

Overall, users predominantly said they would use our system as a solution to this problem, and especially as a solution to some of the aforementioned broader impacts.