Meta

User Testing

User testing was a great start toward understanding the practicalities of the project. We are targeting a very specific set of users who regularly practice Indian classical singing, so the most time-consuming part was explaining the concepts of this art form to our fellow ITP students who acted as the users. The bustling, energetic class activity not only threw light on unforeseen problems (and unforeseen merits alike), but also made us aware of a couple of features we had unconsciously taken for granted. It even turned out that, in spite of working together, our own ideas about a few minor features were still quite dissimilar. Overall the testing process was a success, not least because we had anticipated that some things would fail.

We tried different approaches to user testing. Each test session involved two testers and lasted around 20 minutes. In some sessions we explained the project to the two testers together; in others we talked to them separately and then listened to their reactions as a group. In one session we explained the whole interaction verbally, while in most of the others we used essential oils as the response to the testers’ vocals. We documented most of the tests by video recording and transcribing the discussions.

Broadly there were two categories of inputs from the users.

1. The Awesome Users!

In my opinion these people understood the purpose of the exercise, and their role in it, really well. They acted purely as potential users of the system rather than as ITP students, until at some point they were asked to express their critiques of the experience and interactions offered by the system. Following are some of their great inputs:

“Is there a noticeable mapping between what I sing and the resultant smell?”

“Will the smell become more intense with time? Will it linger around for a while?”

“When I switch from singing a note to the next one, would there be a gradual change in the fragrance?”

“I might be immune to the smell in some time…”

“I breathe out slowly while I sing, and I might breathe in really quickly. So how much would the fragrance really affect me?”

“Where does the smell actually come from? Is it a device that I can see and touch and interact with?”

2. Designer Mode On!

I think getting rid of designer bias is virtually impossible. When I acted as a tester for others’ projects, I had to constantly remind myself of the rules of play. While testing our project, a few testers proceeded directly to proposing new features, interactions and enhancements. These inputs were just as important:

“I would prefer candles and vapors.”

“There should be a visual feedback, like a live waveform that shows my performance against an ideal one.”

“I would design a few buttons to control volume of electronic tanpura and intensity of smell.”

“This is so cool! Make an installation with eight helmet-like spheres hanging from the ceiling, one for each note in the octave. User wears the first one and sings the first note. It releases a smell in that enclosure only.”

“Should the users know beforehand that they will smell a fragrance?”

“Make a system for a concert. That would create olfactory vibe in the whole room for the audience as well.”

As a result, we understood first of all that our own ideas need to be extremely clear and that all team members need to be on the same page. We need to let go of a few ideas in the process, and that is okay. For example, controls that are useful but a little out of scope, such as the full range of pitch, timbre and scale selections available on an electronic tanpura, can be discarded; we will focus only on essential inputs such as an on-off switch and perhaps volume levels. We further decided that it needs to be a learning tool for beginners, not an experience enhancer for seasoned artists. Next, a user might need an explicit way to tell the system what note they are planning to sing. That is certainly an overhead from the user’s perspective, but it keeps the system much simpler. In the future we would like to add intelligence that determines which note the user is trying to sing and adjusts itself accordingly. Finally, as of now we are a little uncertain about the effects of smell: will it linger too long? Will the user become desensitized to a particular smell? Will a user’s breathing pattern affect the impact of the smell? These questions can be answered through iterative design and development. We plan to build a reasonably functional system and then test it with new users.
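To give a sense of what that note-detecting intelligence might involve, here is a minimal sketch of one piece of it: mapping an already-detected fundamental frequency to the nearest swara. Everything here is an assumption for illustration, not our actual implementation: the tonic (`sa_hz`) is an arbitrary example value, and equal temperament is a simplification of the just-intonation shrutis used in Indian classical music.

```python
import math

# Hypothetical 12-swara names for one equal-tempered octave above Sa.
# Lowercase = komal (flat) variants; "Ma" here stands for tivra Ma.
SWARAS = ["Sa", "re", "Re", "ga", "Ga", "ma", "Ma", "Pa", "dha", "Dha", "ni", "Ni"]

def nearest_swara(freq_hz, sa_hz=240.0):
    """Return the swara name closest to freq_hz, given the singer's tonic Sa in Hz."""
    # Semitones above (or below) the tonic, as a real number.
    semitones = 12 * math.log2(freq_hz / sa_hz)
    # Round to the nearest semitone and fold into a single octave.
    index = round(semitones) % 12
    return SWARAS[index]
```

For example, `nearest_swara(360.0)` would report Pa (a perfect fifth above a 240 Hz Sa). A real system would first need pitch detection on the microphone signal, which is a separate and harder problem.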