SoundsTo.me (http://soundsto.me)
Intersections of Sound Technology, Music Cognition, Sound Healing, and Music Therapy
Interview with Marco Buongiorno Nardelli at ICMC 2015
http://soundsto.me/interview-marco-buongiorno-nardelli-icmc-2015/
Sat, 13 Aug 2016

In the third and final interview from the 2015 International Computer Music Conference (ICMC 2015), we had the privilege of speaking with Dr. Marco Buongiorno Nardelli. Dr. Nardelli is a distinguished research professor of physics and chemistry at the University of North Texas. He has been able to successfully merge his career as a computational materials physicist and his passion as a composer through his materialssoundmusic project. He is a member of the Initiative for Advanced Research in Technology and the Arts, a fellow of the American Physical Society and the Institute of Physics, a founding member of the AFLOW Consortium, and a Parma Recordings artist.

Anderson: As a composer and a physicist, you have a passion for both music and physics. When did you first become interested in those fields?

I have many more years of professional life in physics than I have as a composer but the two things have always gone together.

Nardelli: I was a musician way before I became a physicist, actually, because I started studying music as a child, maybe 6 or 7 years old. At a certain point, at the end of high school, I wanted to get into a professional career in music. But the situation with the conservatories in Italy was a little bit of a mess at the time. So, between one thing and the other, I decided to go into physics. I went into physics, but I always kept music at least on the side. I never stopped that.

Anderson: So would you consider yourself more of a physicist?

Nardelli: Right now, I would like to see myself as both. Clearly I have many more years of professional life in physics than I have as a composer but the two things have always gone together.

Anderson: So you have managed to combine these two interests, and I’d like to hear about that. Before we broach that subject, how about telling us a little bit about what you do in physics.

Nardelli: Yes, so as you said, I’m a computational materials physicist, which means that I study materials from a theoretical and computational point of view. We use computers to simulate the properties of materials at the very microscopic level. We are talking about atoms, electrons, and protons. Basically on that level.

[O]ne of our goals in the consortium is to develop databases of material properties that would allow scientists and engineers to find…materials that might be suitable for applications that have not been considered before.

In that way we are able to predict properties of materials that might not have been synthesized yet, or properties of materials under conditions that would be difficult to realize in laboratory experiments.

In general we are able to get a lot of information that can be used to categorize the particular materials for a given application. When I talk about applications here, I mean any kind of electronics that you may carry in your hands or your purse or your bag. It’s made of advanced materials.

There is a need for materials that are able to push the technology farther, and so one of our goals in the consortium is to develop databases of material properties that would allow scientists and engineers to find – through the analysis of our data – materials that might be suitable for applications that have not been considered before.

So, right now, for instance, most of our technology is based on silicon. Silicon is a material that has been known for centuries, or millennia. We’ve been using it for almost a century, and many of the applications that we would like to have, or have already, are not possible just with silicon. You need more advanced materials. You need to design more advanced materials to go beyond the technology that you have right now. So, that’s kind of the framework.

These databases that we develop using our theories and computational techniques are massive. Our database right now has almost a million material entries. Actually, you can go online to www.aflowlib.org, where you have a periodic table, and you click on the elements that you want in your material. If the material has been calculated, then it pops up. It shows you the structure, the electronic properties, all the things that you need to know to decide if that composition is the right composition for your application.

So that’s kind of the work that we are doing, and this has been going on for a long time. We got a big push in 2011 with an initiative from the White House because the Obama Administration launched the Materials Genome Initiative. This is an initiative meant to accelerate material discovery and deployment from the research side to the application side in order to have this fast track into the commercial system as soon as possible. So, that’s where we fit in this picture.

Anderson: So, you have been able to take your background as a composer and musician and marry it with this aflowlib repository of materials properties in this project.

Nardelli: Yes. It’s something that I’ve been thinking about for quite some time. Actually, about a year ago I really started to work on this idea of taking this massive amount of data and using it to make music.

[A]bout a year ago I really started to work on this idea of taking this massive amount of data and using it to make music.

There are two aspects here. There is an aspect that is more scientific in a sense: the aspect of sonification. That is a way of rendering the data in a form that you can hear rather than see. Parallel to visualization for your eyes, you have sonification for your ears. One of the main uses of a database is data mining, because you want to look for particular elements.

One way of categorizing the elements is by visualizing some of the properties or, as I started to do, by associating sounds with some of the properties, so you can distinguish between two different materials not just by looking at them but by hearing the sound signatures of the materials. The scientific drive here is to find ways of data analysis that are not just based on the visual but based on the aural.

The other aspect, the more artistic one, is to use this flow of data as the building block, or starting point, for a composition process. The thinking is, or at least the way I see it, you can always look at music as a data stream that is modified in real time. Even if you think about your scale: your scale is a data flow. You modify your scale and you have melodies, which are modifications of the data flow. Now, instead of using scales or starting with a predefined set of elements, I take the data as they come out. I do some mapping to map the data onto pitches, for instance, or durations or intensities. Then I use that as the data stream that I manipulate to make a composition. So, that’s kind of the flow, and so far I’m having a lot of fun doing it.
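
To make the mapping idea concrete, here is a minimal sketch of turning a numeric data stream into pitches and durations. The scale, value ranges, and sample data below are illustrative assumptions, not Nardelli’s actual pipeline.

    # Minimal sketch: map a numeric data stream onto pitches and durations.
    # The scale, ranges, and sample data are illustrative assumptions,
    # not Nardelli's actual mapping.
    SCALE = [60, 62, 65, 67, 69, 72, 74, 77]  # MIDI note numbers to draw from

    def to_pitch(value, lo, hi):
        """Rescale a data value into an index on the scale."""
        frac = (value - lo) / (hi - lo)
        return SCALE[min(int(frac * len(SCALE)), len(SCALE) - 1)]

    def to_duration(value, lo, hi, shortest=0.125, longest=1.0):
        """Map the same value onto a note duration in beats."""
        frac = (value - lo) / (hi - lo)
        return shortest + frac * (longest - shortest)

    data = [0.12, 0.87, 0.44, 0.91, 0.05, 0.63]  # stand-in for a materials property
    lo, hi = min(data), max(data)
    events = [(to_pitch(v, lo, hi), to_duration(v, lo, hi)) for v in data]
    print(events)  # [(pitch, duration), ...]: raw material for a composition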

Anderson: So, you’re able to use data-driven audio engineering both as a tool for sonification – for perceiving materials’ properties – and as a tool for composition.

You can hear the difference between data distributions that look very similar to each other. It would be very difficult to distinguish one from the other just by looking at them while you can hear an immediate difference. It’s very distinct.

Nardelli: Yes. I see there is this initial step that is more kind of the mapping. You have this data and you want to map them into something that can be heard, into sound. The mapping that I choose is very simple. I just map the data onto a MIDI event that can then be read by software, basically.

I wrote all these interfaces that allow me to get the data, translate them into sound, and drive them through a chain of applications, mainly Max, the software that all the electronic musicians use, and then into a workstation. In this process, I developed an interface that I call the data player that, on one hand, allows me to do the sonification of the data, or at least some part of the data. On the other, it allows me to do the composition process.
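
His data player is custom software, so purely as an illustration, here is one plausible way to emit such a data-derived note stream as a standard MIDI file using the mido library; a DAW or Max patch could then play it. The note list is the hypothetical output of the mapping sketched above.

    # Illustration only: write a data-derived note stream to a MIDI file with
    # the mido library. This is not Nardelli's "data player", just one
    # plausible realization of the data-to-MIDI step he describes.
    import mido

    def write_midi(events, path="data_stream.mid", ticks_per_beat=480):
        mid = mido.MidiFile(ticks_per_beat=ticks_per_beat)
        track = mido.MidiTrack()
        mid.tracks.append(track)
        for pitch, dur_beats in events:
            ticks = int(dur_beats * ticks_per_beat)  # note length in MIDI ticks
            track.append(mido.Message("note_on", note=pitch, velocity=80, time=0))
            track.append(mido.Message("note_off", note=pitch, velocity=0, time=ticks))
        mid.save(path)

    write_midi([(60, 0.5), (65, 0.25), (67, 1.0)])  # pitches from the mapping step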

There are examples on my website www.materialsoundmusic.com. One of the menu items is Materials. If you click on Materials you go to four examples of sonification of data for scientific purposes. You can hear the difference between data distributions that look very similar to each other. It would be very difficult to distinguish one from the other just by looking at them while you can hear an immediate difference. It’s very distinct.

Anderson: On that note, would you say there are advantages to using the auditory system over the visual system for some data sets?

Nardelli: That is actually something that has been discussed extensively. Our hearing is, in a sense, much more sophisticated than our vision. We are able to distinguish many nuances by hearing that we couldn’t just by looking at things.

We have no universal standards. We have not yet defined the equivalent of the Cartesian axis. We can plot things on a Cartesian axis, and we know what it means. We don’t really have something similar in sonification.

Paradoxically, I think the best would be if we could map data into smells because I think we have more receptors in our noses than in any other part of our bodies. Smells would be interesting, but I don’t know of any kinds of applications in this sense.

But, I think there is value. There are many examples of sonifications that have been done over the years. The typical ones are the ones that involve visually impaired people, who need to be guided by hearing rather than vision.

There are examples of scientists who are blind but are able to analyze data because the data has been sonified. I read about an astrophysicist who was able to work scientifically with sonifications of data from telescopes.

There are not many examples, because we are so much more focused on visualization of data than on sonification. The problem is that we need to train ourselves to use this for everyday scientific applications. We have no universal standards. We have not yet defined the equivalent of the Cartesian axis. We can plot things on a Cartesian axis, and we know what it means. We don’t really have something similar in sonification.

Anderson: With that in mind do you see sonification not only as a way of understanding or perceiving data but also as a predictive tool?

Nardelli: It would be nice. I don’t have an example, but if you train yourself to look for sound signatures that are descriptors of some property, then that becomes a predictive tool, because by hearing you can distinguish between systems, and you can say this is the system we need to use because it has the right signature that the other ones don’t have.

Anderson: Do you think there is any reason why visual representations of data have dominated over audio ones?

We have a lot of data. There are data on anything and everything. I think visualizing things in just two dimensions is no longer enough. We need to find additional dimensions to represent the data so we can clearly navigate them.

Nardelli: One thing is that, so far, we are living in a time where data is becoming pervasive. We have a lot of data. There are data on anything and everything. I think visualizing things in just two dimensions is no longer enough. We need to find additional dimensions to represent the data so we can clearly navigate them.

There are examples of visualization caves, a kind of three-dimensional environment where you visualize the data in three dimensions by walking inside it. But, I think that without adding other components, and I am convinced that sound is essential, we don’t go very far. Sound has more dimensions than just a two-dimensional piece of paper. So, you can really explore different regions of the data that you wouldn’t be able to just by looking at a graph.

Anderson: As somebody who is just learning about sonification or data driven composition, do you have any recommended listening?

Nardelli: One of the problems with sonification is that, typically, when you listen to a sonification of something, it is not very interesting. Part of the problem is that most scientists are not composers and most composers are not scientists. So, it is very dry.

With what I’m trying to do, I have to admit, I am manipulating the data. We are moving away from sonification here, because I’m thinking more from an artistic and aesthetic perspective. So, from the point of view of the scientific part, I think we can make the sound more interesting than just the dry sonification that we are used to.

We are very familiar with the universe and the stars, black holes. We have sonifications of astrophysical data. We have representations of climate data in music, or in sculpture, or in art; but, we don’t have anything for materials.

But I think that the richness of the material, and the way in which we can use it for artistic developments, has other applications and advantages that are maybe not completely scientific, in the sense that they are not helping you to develop a model of the data. But they are compelling in the sense that they open up this huge space of research to people who are not scientists.

One of the things that I’m actively working on is to develop models of crystalline materials that can be used as installations, to make audiences appreciate the fact that materials are important. We are very familiar with the universe and the stars, black holes. We have sonifications of astrophysical data. We have representations of climate data in music, or in sculpture, or in art; but we don’t have anything for materials.

So, one of the motivations to start our project was: can we make something that we can then use as a tool to engage the public with the fact that anything we have and use would not be there without the materials used to make it? I think music in this respect is very useful. One of the projects that I hope will work out, if we can get the funding to do it, is an installation where a gigantic crystal structure is embedded in the sounds and music that come from the sonification and composition of that material’s data.

Anderson: Wow. I imagine you can cultivate a better intuition about the cells or atoms.

Nardelli: You are a physicist, so you know what a material is, but most people don’t know what a material is. They might have studied chemistry in school. They know that there are atoms, but what is the material that we use to make all the circuitry of your iPhone? Or the glass of your computer? Or the CPU? Or anything that you have in your kitchen or your car? I think that is a huge market that has not yet been touched.

Anderson: So, obviously, your background in music and composition has influenced how you want to engage the world with materials science. Is the flip side true? When you put on your composer hat and you’re just thinking about writing music, has working with materials changed the way you view music more generally?

Nardelli: Yes, I think it did, and it does, in this more general sense of data-driven composition. The fact that I start from a data flow does change the way that I approach the composition process. Now, the other advantage is that, knowing the data I am using, because I’m an insider, I can manipulate them in ways that maybe other people wouldn’t think of. I know that if I want to have a particular effect, maybe I should take data from here rather than from there. That is part of the freedom of the compositional process.

Anderson: Somebody kind of needs the background in both: first to understand the data they want to sonify, and then the compositional background to turn that into a piece of music.

Nardelli: Yes, and this is part of the continuous interdisciplinary effort between the arts and the sciences that is being pushed on many different levels but isn’t really happening. I mean, it’s happening, but on a very small scale. If you do a little research, you will find visual artists who are incorporating information from scientific research into their art. There are many examples in biology, for instance, or in climate or environmental sciences. But there is very little with sound.

Anderson: Looking forward, where do you see sound and science going in the next decade?

Nardelli: Well, one of the things that is essential, and is again part of the background of the work that we are doing on databases, is quantity. So, one of my goals scientifically is to take our database, which is a living database in a sense – as we speak, there are computers all over running calculations whose results are added to the database – and make the sound signatures of all these data available as part of the standard data that define the repository.

[S]onification as part of the datasets so that we will be able to run specific algorithmic approaches that might be predictive… That is something that I think has to be done and has value.

So, that is one thing that I think is needed, because when we talk about big data, the quality is in the quantity: when you have one entry you cannot do much with it. You cannot do any correlation or analysis in the big space. Here we have a big space. So, one of the things that I want to do is keep pushing this aspect of sonification as part of the datasets, so that we will be able to run specific algorithmic approaches that might be predictive, in the sense that using the data we can predict a new set of data. That is something that I think has to be done and has value; and, on the other hand, I will keep composing. That is more the outreach aspect of this application.

Anderson: Dr. Nardelli, it’s been a pleasure talking to you. Before we conclude, I wanted to ask you: is there anything that maybe we didn’t address that you’d like to talk about?

Nardelli: In the context of the International Computer Music Conference, there is a lot that can be applied to a project like mine. I think there is a lot of interest in this big data approach in music, not necessarily related to science, but a big data approach anyway. I think we are moving in the right direction. There are not many efforts yet, but I learned about some in the research here. It’s a good starting point.

Anderson: Dr. Nardelli, thank you so much.

Interview with Simon Lui at ICMC 2015
http://soundsto.me/interview-simon-lui-icmc-2015/
Mon, 08 Aug 2016

Please enjoy the second in a series of interviews from the 2015 International Computer Music Conference (ICMC 2015) in Denton, Texas. Each year, this conference brings together scientists and artists from around the world to share their latest projects in the field of computer music.

In this episode, we interview Dr. Simon Lui, an assistant professor in the Information Systems Technology and Design pillar at the Singapore University of Technology and Design. Lui received his PhD in computer science from the Hong Kong University of Science and Technology. He has extensive experience in mobile application development, especially for audio applications, and started his own business in mobile application development in 2009. Dr. Lui has inventions on the iPhone and iPad platforms, including number one best-selling apps. His work has been widely reported by international media, including CNN International and many other magazines, newspapers, and television programs. Dr. Lui is also a composer of computer music as well as an award-winning performer.

Tonks: Dr. Lui, thank you so much for speaking with us today. Just to start off, here are a few basic questions. What is computer music, and how did you first become interested in it?

I’m from the computer science background. I’m doing machine learning and artificial intelligence. So I’m using my skills to do something that I really like on musical sounds by using computer technology.

Lui: Okay, computer music is using a computer as a platform for music applications: for example, music performance, music analysis, or music classification. For something that we cannot do as humans, for example to produce some interesting sound or to deal with a large amount of music, we have to go to a computer, and that’s why we have computer music. As for me, I’m from a computer science background, doing machine learning and artificial intelligence. So I’m using my skills to do something that I really like with musical sounds by using computer technology.

Tonks: That’s very interesting. How does your work involve sonification?

Lui: Ah, sonification. My work is mostly using sound, using audio, to help people, and so that’s why I have to know what sonification is: what is the design, what are the principles of an audio signal. That’s why I’m doing sound analysis and sound applications, and then using them for stroke patients, for deaf patients, and for sports people, in order to use music to enhance the performance of some of their processes.

Tonks: That’s very interesting. Now, I saw one of your applications involving replication of sound, where you’re graded on that. It’s very interesting to me because I’m learning Welsh, and approximating the intonation and the way that you speak is very important in different languages. Do you see your work moving into language education?

[W]e are actually developing a new application for education so people can look… at the shape of all the different vowels so they can know what is the correct way of doing a British pronunciation, or American accents, or even Cantonese or Mandarin.

Lui: Yeah, that’s true. Because there are many kinds of representations of sound: for example, the audio representation, and we also have vibration. When you touch your lips you’ll find that they are vibrating. Or you can look at the sound by doing some conversions; for example, you can see the shape of the volume of your sound, or the contour of your sound. So we are making use of some applications (iPhone apps) to visualize sounds, so that you can look at them and then learn what the correct shape of your sound is. We are actually developing a new application for education so people can look at ‘a’, ‘e’, ‘i’, ‘o’, ‘u’, at the shape of all the different vowels, so they can know the correct way of doing a British pronunciation, or American accents, or even Cantonese or Mandarin. Yes, so we are actually making some small applications which can run efficiently to help people see the correct way of pronunciation.
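
Lui’s apps are not public code, so as a rough illustration of the “shape of your sound” idea, here is a minimal sketch that computes a volume (RMS) contour from a recording; the file name, frame size, and WAV format are assumptions, not his implementation.

    # Minimal sketch: compute the volume (RMS) contour of a recorded vowel,
    # the kind of "shape of your sound" a pronunciation app could display.
    # Assumes a 16-bit mono WAV; the file name and frame size are made up.
    import wave
    import numpy as np

    def rms_contour(path, frame_ms=20):
        with wave.open(path, "rb") as w:
            rate = w.getframerate()
            samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        frame = int(rate * frame_ms / 1000)
        n = len(samples) // frame
        chunks = samples[: n * frame].astype(np.float64).reshape(n, frame)
        return np.sqrt((chunks ** 2).mean(axis=1))  # one loudness value per frame

    contour = rms_contour("vowel_a.wav")  # hypothetical recording
    print(contour.round(1))
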
Tonks: That’s very interesting, and I really think that’s groundbreaking; the future of that could be monumental for you and your colleagues. Tell us about your work with biofeedback.

Lui: Yes. Music has emotion; usually music is expressive. So when people listen to music they have some response, and we call that emotion, and there are two kinds of emotion: induced emotion and perceived emotion. So how do we classify them? For example, when you listen to a certain kind of music, do you feel happy or do you feel sad? We have to evaluate it. How do we evaluate it? That’s what our project is doing. We put on some sensors: for example, we put EEG brain signal sensors here, some skin contact sensors on your fingers, and with the heart rate and respiration rate we try to capture all the biofeedback from your body when you listen to happy music and when you listen to sad music. We compare the biofeedback to see how your body’s response tells us your emotion when you listen to the music. We are using those signals to tell whether a person is happy or not, or healthy or not, or how they are progressing during the stroke rehabilitation process or in sports science. So we are doing experiments to figure out what kind of music can give us the best results in such a process.
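
As a toy illustration of the comparison Lui describes, here is a sketch that summarizes one physiological signal under two listening conditions; the numbers are invented placeholders, not data from his lab.

    # Toy sketch: compare a physiological signal (heart rate) recorded while
    # listening to "happy" vs. "sad" music. The values are invented
    # placeholders; a real study would use streams from the sensors described.
    import numpy as np

    hr_happy = np.array([72, 75, 78, 77, 74])  # beats/min during happy music
    hr_sad = np.array([68, 66, 67, 65, 66])    # beats/min during sad music

    for label, hr in (("happy", hr_happy), ("sad", hr_sad)):
        print(f"{label}: mean {hr.mean():.1f} bpm, variability {hr.std():.1f} bpm")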

Tonks: That’s just brilliant. You recently worked on a music therapy game designed for stroke rehabilitation. Can you tell us more about that?

[T]hey listen to the music and they have to do the exercise together with the music. In this case we find that certain kinds of music can help them to move faster, to move more efficiently, have better angles in their arm movements, and have better speed and better balance in the body when they’re walking.

Lui: Sure. There is a thing we call auditory motor synchronization, which is inherent in people and present from birth. When they listen to music with a strong beat, they will synchronize and move with the beat. Everyone can do that. Stroke patients have some problems with mobility; they cannot move their arm or their neck. So we wrote an application to help those patients by playing them some music. When we play the music, they listen to it and have to do the exercise together with the music. In this case we find that certain kinds of music can help them to move faster, to move more efficiently, to have better angles in their arm movements, and to have better speed and better balance in the body when they’re walking. Yeah, so that’s our project: we have to choose the best kind of music to enhance their movement in the exercise.

Tonks: I find it very interesting how this application is also related to sport. I myself play football, or soccer, and a lot of times there’s a running soundtrack in my mind setting the cadence. That’s very interesting.

So how has your work with sonification changed the way you think about music? In what way has your work changed your perception of music? Have you found music that has touched you personally and been able to maybe self-medicate, as personal therapy, not only through your research but through applying your research to yourself?

We find out the results are the same, but the participants tell us that they feel happier when using music itself instead of a metronome. So for us, music research is not just the power, how effective or how accurate it is, but it’s also about… your experience when you’re using such a medium to help you do or achieve something.

Lui: I think it has had a big impact on me. For example, in sports science we use something like a metronome; we can use a metronome to run faster. But at the same time we can also use a piece of music of similar tempo: you play the music, you follow the beat, and you can also run faster. We found the results are the same, but the participants tell us that they feel happier when using the music itself instead of a metronome. So for us, music research is not just about the power, how effective or how accurate it is, but also about how you feel, what your response is, and what you experience when you’re using such a medium to help you do or achieve something. So for me, music is something more than effectiveness, more than accuracy. It is about helping people to feel happier and enjoy their life. It is about enjoyment, about cheer.
Tonks: That’s beautiful. Now do you see data-driven audio engineering more as a tool for music therapy or a tool for composition?

Lui: Oh, I think both, because now it is the world of big data. People are using data-driven things: they have a library, a database. They get some knowledge from the database and then they use it for some other purpose. So actually, this kind of data-driven model, a Markov model, a machine learning model, we can use it for composition because there are some rules, something we can extract from it. We can use it for music production, but at the same time the tools can also have a lot of different kinds of applications. The goal for us as professors and scholars is to build a lot of tools, and then how to use the tools depends on the users. So we have to make the work as scalable as possible, so that the knowledge can transfer to people in different fields and different people can use the same tools.

Tonks: So it has to be somewhat subjective but in the general sense it has to apply to the population.

Lui: Yeah, exactly.

Tonks: Do you think our auditory system has any kind of advantage over our visual system in interpreting or finding meaning in data?

[M]usic is something that I think is at a higher level – a higher semantic level. Other than what you can hear, you can sense the emotion, you can sense the layers, and you can sense the progression in some music. It gives you more information other than what you can see or what you can listen to in the audio spectrum.

Lui: Yeah, there are two ways. The first is for disabled people, for example blind patients; actually, that’s one of the papers I have for ICMC. Blind people can identify direction by sound, so they turn left or turn right only by sound. On the other hand, for healthy persons, sound can give them additional information beyond the visual. Something visual is 2D: you see the pixels or the colors, whatever. But music is something that I think is at a higher level – a higher semantic level. Other than what you can hear, you can sense the emotion, you can sense the layers, and you can sense the progression in some music. It gives you more information other than what you can see or what you can listen to in the audio spectrum. So it is very interesting.

Tonks: So you would say that the auditory system is multi-dimensional, while the visual system is kind of two-dimensional?

Lui: I think so.

Tonks: Very interesting. Why do you think sound is a powerful therapeutic tool?

Lui: Because music induces emotion. When you listen to music, the auditory motor synchronization will induce some reaction inside your brain, in the alpha waves, and also in your heart rate and your overall state inside your body. It is something that you cannot control. So it is something that can really help you do something: it can enhance your motivation, change your mind, change your attitude, and give some encouragement to push you to do something better, in terms of your performance in many kinds of processes. That is the power of music.

Tonks: That’s brilliant. Now I know that you are involved in composing as well as performing music. What kind of music do you enjoy listening to?

Lui: Oh, for me, I like a wide range of music. But in particular I love a cappella. I love listening to classical choir as well as jazz music, because this is music that I loved to perform. When I was in school I used to listen to these kinds a lot and performed a lot of them. I have a special interest in this kind of music: jazz, classical, and a cappella.

Tonks: Now with a cappella that’s very interesting to me because there aren’t many dimensions; it’s very succinct. There is no accompaniment. That’s very interesting. Could you expound on that?

[U]sing the human voice we can do a bass, a soprano, also a guitar, many kinds of strings, interesting sounds. And we can combine them together to create an interesting experience. I love a challenge. I love limitation. And I love what I can achieve within those bounds.

Lui: Yeah, because I myself love to work within some limitations, having to be challenged. For example, in a cappella you can only use the human voice to achieve something which is very full, very complete. So you have to have some special skills in musical arrangement so that it can sound full. For me, I write a lot of a cappella music for my a cappella team in Hong Kong. We’ve done some competitions in Hong Kong, and in one competition we became champions. That was fun because, using only the human voice, we can do a bass, a soprano, also a guitar, many kinds of strings, interesting sounds. And we can combine them together to create an interesting experience. So I love a challenge. I love limitation. And I love what I can achieve within those bounds.

Tonks: It’s very interesting that you enjoy working with the restrictions of a very succinct form. A cappella is something much different from other kinds of music. In addition to casual listening, how do music and sound impact your daily life?

Lui: When I go to school, when I have to do revision, I always listen to music, because music helps me to concentrate, and actually that also informs some of my work. There are some elements in music that can help people to concentrate. So music is very important in my life. It gives me motivation and leisure, and it’s the best accompaniment when I’m bored and when I need something to enhance and motivate me. Music is very important in my life, actually.

Tonks: That’s very interesting. Now where do you see your work with sound and music therapy going into the next decade?

Lui: Now, people have very limited knowledge in this field. Most of the recent work is used in some kind of therapy, but most of the work lacks justification. That is, people know that this kind of process can help in music therapy, but they don’t know why. For example, what kinds of musical elements can help, what kind of music or songs, what kind of tempo, whatever can help a music therapy process become more effective. So I think in the next decade people have to find the justification, find the reasons, through carefully designed listening tests and clinical tests. After that we can use the knowledge to help more people through music therapy.

Tonks: So music has been defined in many different ways by many different people. How would you define music?

[T]he boundaries between all these three classes (music, sound, and noise) are becoming less clear, because in contemporary music people tend to use sound and even random noise in order to compose something as music.

Lui: Music, sound, and noise used to have a very clear separation. Music is something that has a certain format, for example a melody, a bass line, a chord progression, or whatever. Then we have something we call sound: any kind of audio that we can hear, some of it more structured. And on the other hand, we have something we call noise, which is totally random, like white noise or purple noise. I can say that now the boundaries between all these three classes (music, sound, and noise) are becoming less clear, because in contemporary music people tend to use sound and even random noise in order to compose something as music. The reason is they find this kind of audio composition interesting; they can infuse some emotion, and there are different kinds of interesting responses. People are trying to explore a new field. So I would say these three categories used to be separate, but now they are starting to blend together.

Tonks: So you would agree that one of the definitions of music is everything that one listens to with the intention of listening to music? With that definition, it seems as though those boundaries have been nullified. It’s been very interesting talking to you. Before we conclude the interview, I want to give you a chance to direct the conversation. So, on that note, is there anything that we didn’t discuss that you’d like to address?

Lui: Before we had more computer applications, people felt that music was something very difficult, very high level: only a musician can compose, only rich people can buy music or go to a concert. But I think that’s not true. I think music should be for everyone, even for laymen, for my mother. I think we should write applications such that even my mother can enjoy writing music, using tools that help them understand the basics. Even if they don’t understand music theory, they can use tools to write some simple music that they can enjoy. So I think that’s my job — to do something like that.

Tonks: So you see music as a compelling educational tool?

Lui: Yeah, an educational tool and for self-enjoyment, so that more people can enjoy music with fewer barriers.

Tonks: Very interesting. We really appreciate your time, thank you so much for what you do, and I wish you all the best in your future endeavors.

Interview with Mark Ballora at ICMC 2015
http://soundsto.me/interview-mark-ballora-icmc-2015/
Thu, 04 Aug 2016

Please enjoy this interview from the 2015 International Computer Music Conference (ICMC) in Denton, Texas. The conference brought together scientists and artists from all around the world to share their latest projects in the field of computer music.

In this interview, Jef Tonks speaks with Dr. Mark Ballora. Ballora holds joint appointments in the school of music and school of theater at Penn State University. He is the author of Essentials of Music Technology and The Science of Music. His compositions have been played at electro-acoustic music festivals around the world. He has also written articles describing uses of sonification in the areas of cardiology and computer network security. His sonifications have been used in a collaborative effort with musician Mickey Hart and cosmologist George Smoot in the film Rhythms of the Universe.

Tonks: To begin, can you tell us: what is computer music, and how did you first become interested in it?

Ballora: What is computer music? I mean, you could say that now everything is computer music if you wanted to, because just about all of the music that we hear is recorded, which means that it’s digitized. So you could make the argument that just about everything we hear is computer music.

But traditionally it’s meant to describe how people used the computer as a music-making machine. The term dates back to the fifties and sixties, when it was a pretty far-out thing to do. Computers were high-end calculators, so the idea of using one to make music was pretty science fiction. But it’s democratized quite a bit since then, so now you buy any laptop and it’s going to come with music-making capabilities. Multimedia on computers is pretty mainstream now, so computer music, like I said, can be just about anything. Everybody does it now.

At a conference like this, people are talking about developing new tools for the computer. Creating new kinds of software. Creating new approaches to creating music, new ways of understanding music or processing music or analyzing music through a computer. So the term is much broader than it used to be. The community is a lot more diverse than when this conference got started, which was, I think, in the late seventies. It was a pretty small group of people; now it really includes everybody from all walks of the arts.

Tonks: Very interesting. So, I’ve heard the word sonification being used at this conference. Now, what is sonification?

Ballora: Yeah, it’s actually Carla Scaletti, who is one of our keynote speakers here, who coined the term, I think, or at least she’s one of the people who did. She’s the one whose definition we use, which is a mapping of numerical relations onto sound.

It’s kind of nice – there’s the artistic side and there’s the informatics of it.

Sonification means representing information with sound. I tell people it’s just like visualization, really, where you map information into a form that your eyes can take in. With sonification you do the same thing, but for the ears. When you’re doing visualization, you’re mapping information to things like color or height or size; when you’re doing sonification, you’re mapping information to things like pitch or volume or stereo pan position or different types of timbre. So that’s what it is in broad strokes.

For someone like me it’s a compelling compositional / artistic pursuit: how can you make music out of scientific information?

Tonks: That’s very interesting. How does your recent work involve sonification?

Ballora: Well, I go around and I meet scientists and ask them if I can sonify their data. Those that let me give me data, and then I come up with a way of making sound out of it.

So, I think you mentioned in the introduction that I did some work with Mickey Hart and George Smoot. That was basically taking everything we could find having to do with the cosmos and translating it into sound. I guess I ought to be kind of careful about this. There are a number of ways of going about it, and I wouldn’t want you thinking that it’s literal, like recordings of things. It’s not like we went up to a galaxy and stuck a microphone in it and recorded it and said “here’s the sound of the galaxy”. It’s more that you can go online and get information about the spectra of different galaxies, and a spectrum is a bunch of numbers, and you can translate numbers into sound.

I meet scientists and I ask them if I can sonify their data. Those that let me give me data, and then I come up with a way of making sound out of it.

The spectrum lends itself well to sound because sound also has a spectrum. So you can easily translate a light spectrum to a sound spectrum, but it’s a matter of transposition. It’s a matter of coming up with a sound model using a synthesis program, synthesizing a sound that is intuitively suggestive of the phenomenon that you’re working with, then playing that instrument with the numbers of the data set and transposing the numbers in such a way that what you hear is informative to somebody who wishes to study it. It’s kind of nice – there’s the artistic side and there’s the informatics of it.

Take the galaxies. We did a bunch of galaxies for this film that I contributed to. I had this idea that galaxies are sort of like wind chimes up there in the sky, so I wanted a wind-chimey type of sound, and I came up with something that was kind of bell-like. Then I mapped the spectrum wavelength values to pitch values. So it was this sequence of pitches, and when I first played it, it sounded kind of like a machine gun. It was like, “oh, OK, that won’t do.” What I wanted was a nice bell sound, but it really sounded kind of dreadful. So what could I do about this? Then I thought, well, rather than play them at regular intervals in time, I could time it so that the timing of a particular bell depends on the difference in intensity value between this data point and the last data point.

So then I got a kind of irregular bell sound, and that had the wind-chime quality that I wanted. I thought it worked nicely. Not that we’ve used it to study the spectra, but it was important that we could if we wanted to. So there’s kind of a redundancy there. You could tell how the spectral data behaves by the succession of pitches, but also by the timing differences of it – the rhythm. So they reinforce each other, and that works nicely when you have more than one cue describing the same thing. I think that was about the first thing I did for them, and that was the first time Mickey [Hart] was like, “oh, this is great, this is great, thank you.”
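
Here is a minimal sketch of that bell mapping: wavelengths set the pitches, and the jump in intensity between neighboring data points sets the wait before each bell. The spectrum values and ranges below are invented placeholders, not the galaxy data used for the film.

    # Minimal sketch of the wind-chime mapping Ballora describes: spectrum
    # wavelengths become pitches, and the intensity difference between
    # neighboring points becomes the gap before each bell. All data here are
    # invented placeholders, not the galaxy spectra used for the film.
    import numpy as np

    wavelengths = np.linspace(400.0, 700.0, 12)       # placeholder spectrum (nm)
    intensities = np.abs(np.sin(wavelengths / 40.0))  # placeholder intensities

    lo_note, hi_note = 48, 84                         # MIDI pitch range to fill
    frac = (wavelengths - wavelengths.min()) / np.ptp(wavelengths)
    pitches = (lo_note + frac * (hi_note - lo_note)).astype(int)

    # Irregular timing: a bigger jump in intensity means a longer wait, which
    # gives the wind-chime quality instead of the machine-gun regularity.
    gaps = 0.1 + 0.9 * np.abs(np.diff(intensities, prepend=intensities[0]))

    for i, (pitch, gap) in enumerate(zip(pitches, gaps)):
        print(f"bell {i}: pitch {pitch}, wait {gap:.2f}s")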

Tonks: So how has your work with sonification changed the way you think about music?

Ballora: I don’t know. I don’t know that it’s changed the way I think about it, because I’m just following in the footsteps of other people who did this kind of thing. It goes back to composers like Iannis Xenakis, who was using probability theory in the 1950s to generate musical material. He called it stochastic music. As the number of events approaches infinity, what kind of behavior do you get? He would liken it to a swarm of insects or the sound of rain on a roof or something like that: what are the characteristics of the sound cloud that results when you’re generating a lot of events? So you’re not focused on the individual event; you’re focused on the overall sequence of a whole bunch of events occurring at about the same time.

It led to an approach to music called granular synthesis, where you use the computer to generate hundreds or thousands of really short sound events per second. So you’re not so much composing each event; you’re using the computer to calculate events that fall at random within a range of, let’s say, pitches or volumes or wave types.

So as a composer you’re dealing with the overall shape, and you’re letting the computer do the really fine-grained work of coming up with individual events. I knew about those things before I started doing sonification. Sonification just seemed like a new way into those approaches to composition, which I had already found pretty interesting.
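
A bare-bones sketch of that idea: scatter many short, enveloped sine “grains”, each with a random pitch and position inside ranges the composer sets. The rates and ranges below are arbitrary choices for illustration, not any particular composer’s settings.

    # Bare-bones granular synthesis sketch: scatter short sine "grains" with
    # random pitch and position inside composer-chosen ranges. The rates and
    # ranges are arbitrary illustrative choices.
    import numpy as np

    RATE = 44100
    duration, grain_len, grains_per_sec = 4.0, 0.03, 200
    out = np.zeros(int(RATE * duration))

    rng = np.random.default_rng(0)
    t = np.arange(int(RATE * grain_len)) / RATE
    env = np.hanning(len(t))                        # smooth each grain's edges

    for _ in range(int(duration * grains_per_sec)):
        freq = rng.uniform(200.0, 2000.0)           # random pitch in a range
        start = rng.integers(0, len(out) - len(t))  # random position in time
        out[start:start + len(t)] += 0.1 * env * np.sin(2 * np.pi * freq * t)

    out /= max(1.0, np.abs(out).max())  # normalize; write out with e.g. the wave module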

Tonks: Do you see sonification as a way to understand data in a new way or as a tool for music composition?

Ballora: Yeah. Both. I mean, there is an auditory display community. We’re here at the International Computer Music Conference with the computer music community; there’s also an International Conference on Auditory Display that meets each year, and they tend to be focused on understanding data in new ways through sound, which makes a lot of sense.

When people started using the term, and this was in the early nineties, they were beginning to recognize that this was, you know, the information age, and we’re putting up new sensors all the time giving us more information. We’re at a point where we have more information than we know what to do with. We get more data than we can meaningfully comprehend.

Some of my friends in the Information Science and Technology College back at Penn State call it “cogminutiae fragmentossa.” I think that’s the term they’ve coined for it. They say we get all this information and we just create digital landfills out of it, because we don’t know what to do with it once we get it. We don’t have time to go through all of it and try to make sense of it, so there’s more than we can handle.

We get all this information and we just create digital landfills out of it because we don’t know what to do with it once we get it. We don’t have time to go through all of it and try to make sense out of it, so there’s more than we can handle.

That’s what Gregory Kramer, the guy who organized the first auditory display conference, was writing about in the early nineties. His take on it was that the ear is also a pretty important sense. I mean, we’re used to visualizing data. In fact, we visualize it even when there’s no clear need for it; we keep coming up with new ways to visualize stuff just because we get our kicks out of it, or we think it’s fun, or something. But, you know, just as in life, if we’re healthy, we have both eyes and ears, and they serve complementary and supplementary functions as we navigate our way through our environment. If we’re getting more and more information from different sources, it seems only natural that we should rely on more than one sense to make sense out of it.

Tonks: Do you think our auditory system has any kind of advantage over the visual system for interpreting and finding meaning in data?

Ballora: Yeah, well, that’s interesting; that’s what perceptual psychologists get into. Why do we have ears and why do we have eyes? Why do we have both? They both give us a picture of our environment, and we tend to rely on the eyes. But there’s a lot of information that the ears get, and we tend to disregard it.

You see blind people who walk around and seem to know where they’re going. People have told me that if you blindfold yourself, you learn to see with your ears pretty quickly. You start relying on the information the ears are getting that we usually don’t pay much attention to, but it’s there. You can hear subtle differences in things. Right now I’m talking to you and I can hear my voice bouncing off that wall over there; I can hear it bouncing back at me. You can hear reflections off of things. It’s kind of like sonar, kind of like what dolphins and other underwater creatures do: they hear reflections off of things. We do that too. That’s how you can hear where the door is. That’s what a bat does.

Tonks: Echolocation.

Ballora: Echolocation, yeah. I was in a house not too long ago and a bat got stuck in there. It’s like, “just open the door, he’s going to find his way out pretty soon.” And he did, because he’s doing echolocation. He hears “over there, over there, it’s not bouncing back at me” and flies out that way. You can learn to do that. That’s what blind people are doing as they tap their canes: they’re listening for echoes, for reflections. You can hear where the staircases are, where the door is, pretty easily.

The eyes are good at giving us pretty static information about things, like size, color, texture, shape.

Tonks: Very rudimentary?

Ballora: Rudimentary, if you want, but mostly static things. That wall hasn’t changed much since we started talking; it’s pretty much the same as it was. But the ears are really good at dynamics, at things that change.

I just heard somebody on NPR a week or so ago saying that the ear processes information about 20 times faster than the eyes do. So the ears really set the stage for everything else that our senses bring to us. It’s all based on that impression we get with our ears.

My line for the past few years has been: if you don’t believe me, there’s a simple test you can do at home. Next time you rent a DVD, turn the sound down and turn on the subtitles so you can follow the plot. I guarantee you’re not going to enjoy the movie as much, because the soundtrack is really what embraces you. It’s really what brings you into the world of that movie. When you see the movie you may be more struck by the visuals and the special effects, but it’s really the sound that captures your attention and puts you in that environment.

If we’re getting more and more information from different sources, it seems only natural that we should rely on more than one sense to make sense of it.

So in sonification, that’s what we’re trying to do with data. We’re trying to create a sonic environment that you can live in, that you can interact with. Take the galaxy I was talking about a minute ago: I had redundant cues for its spectral components, so there was a pitch component and there was a rhythmic component. Our ears are really good at those things, at detecting rhythms, at detecting small changes in pitch. You don’t need any musical training to sense those things. We’re born with that.

It’s probably something that’s hardwired into us from evolution, and it probably goes back to the days when we had predators and had to choose fight or flight. Our ears developed into very sensitive sensors of change in the environment: “here’s something over there, let’s do something about it, let’s run or let’s fire an arrow.” The ears are really good at dynamic changes.

The ears also happen to be really good at following multiple patterns of information at the same time. That’s what we do when we listen to chamber music, to multi-part counterpoint, and hear all these simultaneous melodies.

So in sonification, that’s what we’re trying to do with data. We’re trying to create a sonic environment that you can live in, that you can interact with.

There have been tests that people have done; Gregory Kramer did one. They set up a simulated operating room with a virtual patient they were taking through surgery, and they had people playing different roles: you’re the nurse, you’re the doctor, you’re the anesthesiologist. They had to follow the vital signs of the patient. They ran it once with only visual cues and once with only auditory cues.

With the visual cues, there’s the fact that you’ve got to be watching something. You have to keep your eyes fixed on a screen, and that’s a disadvantage in an environment where there’s a lot going on, because staying fixed on the screen shuts you off from everything else.

But aside from that, they found that people reacted more quickly and more accurately to the sound cues when the patient was having a change in vital signs. So there’s evidence to show that we can follow multiple streams of information, and that we respond more quickly and accurately to multiple streams of sound information than to multiple streams of visual information.

So sonification seems like common sense when we’re talking about highly dimensional data. Data is hard to visualize above about four dimensions, so if you need to go higher than that, it’s hard to do it visually, and the ears are the natural sense to turn to. Turn it into a piece of chamber music with five instruments playing and you’ve got a better chance of following it.
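As a rough sketch of the redundant pitch-and-rhythm cues Ballora describes for the galaxy piece, the toy Python program below maps a single made-up data stream to both the pitch of a tone and its pulse rate. The data, the scalings, and the file name are all illustrative assumptions, not his actual mappings:

```python
import math
import random
import struct
import wave

SR = 44100
series = [random.random() for _ in range(40)]  # stand-in for one normalized data stream

samples = []
for v in series:                         # half a second of sound per data point
    freq = 220.0 * 2 ** (2.0 * v)        # pitch cue: 0..1 spans two octaves above A3
    period = int(SR / (2.0 + 8.0 * v))   # rhythm cue: 2..10 pulses per second
    blip = int(0.04 * SR)                # each pulse is a 40 ms blip
    for i in range(SR // 2):
        on = (i % period) < blip
        samples.append(0.3 * math.sin(2 * math.pi * freq * i / SR) if on else 0.0)

with wave.open("redundant_cues.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```

Because the two cues are redundant, a rise in the data can be caught either as a climbing pitch or as a quickening pulse, whichever the listener’s ear latches onto first.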

Tonks: You can’t get much more compelling than a sonification of the universe, but what are some of the most compelling sonifications that you’ve listened to?

Ballora: Well, I could tell you about some other things I’ve worked on that have been kind of fun. Last year I met somebody in meteorology at Penn State who studies hurricanes. She was going to an international hurricane workshop and conference in Korea last December. You can see satellite videos of hurricanes at the website of NOAA, the National Oceanic and Atmospheric Administration. Watching them is like watching a silent movie: you see this thing spinning, you see different colors, and that’s what they work from.

She was curious what it would sound like if I put a soundtrack to those. So she gave me some data sets with information about air pressure, latitude, longitude, and the symmetry of the storm system. I made the soundtracks and put them on the videos; she liked them and asked for a few more. I wound up doing 11 of these storms, and that’s what she took to Korea.

I asked, “How did it go over?” and she said, “You know, pretty good. They found it engaging.” I asked, “Did you get anything from the sound that you didn’t get from the visuals?” She said, “Yeah, actually. When the storm gets more symmetrical, it becomes more intense. It’s like when an ice skater brings her arms in and spins more quickly; it’s the same kind of phenomenon, and we can see that. But there were some times when the storm got more intense without getting more symmetrical. I wouldn’t have known that by watching the video, but by listening I could hear it. It was very interesting to hear the storm getting more intense, then watch the shape of the storm, notice it’s not symmetrical, and realize it’s more intense anyway.”

I was like, okay, good, there’s my golden ticket: we got something out of the sound that we couldn’t have gotten from just the visuals. That was pretty nice. I don’t know how useful it’s going to be for hurricane researchers. They may enjoy it; I don’t know if it’s going to teach them anything.

Maybe it will. That was a pretty nice result, that we could hear the intensity when we couldn’t see it. But I think she was most interested in it for the freshmen in her intro meteorology class. It’s the engagement thing I was talking about before: if they hear a soundtrack that they find compelling, they’ll find the material more interesting and study it a little more.

That was a fun one because it was multi-dimensional. I had these four data dimensions that I had to work into sound somehow. That was kind of fun.
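Ballora doesn’t spell out his hurricane mappings beyond naming the four variables, so the sketch below invents a toy storm track and one plausible assignment: pressure to pitch, symmetry to loudness, longitude to stereo position, and latitude to pulse rate. It is a guess at the shape of such a sonification, not a reconstruction of his:

```python
import math
import struct
import wave

SR = 44100
# Hypothetical storm track, one reading per time step:
# (air pressure in hPa, latitude in degrees, longitude in degrees, symmetry 0..1).
track = [(1000.0 - 3.0 * k, 15.0 + 0.5 * k, -60.0 - 0.8 * k, min(1.0, 0.05 * k))
         for k in range(30)]

frames = []
for press, lat, lon, sym in track:              # one second of sound per reading
    freq = 110.0 + 4.0 * (1010.0 - press)       # deeper pressure drop -> higher pitch
    amp = 0.1 + 0.25 * sym                      # more symmetric (more intense) -> louder
    pan = max(0.0, min(1.0, (lon + 100.0) / 60.0))  # westward drift pans left
    period = int(SR / (2.0 + 10.0 * (lat - 15.0) / 15.0))  # poleward -> faster pulses
    for i in range(SR):
        s = amp * math.sin(2 * math.pi * freq * i / SR)
        if (i % period) >= int(0.05 * SR):      # gate the tone into pulses
            s = 0.0
        frames.append((s * (1.0 - pan), s * pan))

with wave.open("storm.wav", "wb") as f:
    f.setnchannels(2)                           # stereo, so pan can carry longitude
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(b"".join(
        struct.pack("<hh", int(l * 32767), int(r * 32767)) for l, r in frames))
```

With a mapping like this, the “more intense but not more symmetrical” moments his collaborator noticed would surface as the pitch climbing while the loudness stays flat.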

[T]here’s evidence to show that we can follow multiple streams of information, and that we respond more quickly and accurately to multiple streams of [sound] information than to multiple streams of visual information.

I also met somebody in forestry who studies Arctic squirrels: squirrel body temperatures. I was like, “Really, you study squirrel body temperatures, do you?”

“Yes, we do. We look at this population of squirrels up by the Arctic Circle. In this environment, we know how many squirrels there are, we know where they live, and we know what they’re doing.”

They capture them and implant them with a little sensor, then let them run around for a year. A year later they capture as many of them as they can, take the sensor out, and then they’ve got a year’s worth of body temperature data for the squirrels. So they can hear when the squirrels give birth, when they go underground in preparation for hibernation, when they hibernate, and when they come out of hibernation.

This is interesting because of all the interlocking cycles that make up an ecosystem: when the squirrels breed affects the predators that eat them, and it affects whether they have the food supply they need to reproduce.

The example he gives in his paper is that birds go to a particular breeding ground each year to lay their eggs, and they go there when the temperatures are right. But say that because of climate change it gets warmer a little earlier in a particular year, so the birds arrive before the plants they need to eat have grown. They don’t have the food supply they need, which affects their population the next year, which in turn affects the population of the animals that feed on the birds. The field is phenology (not phrenology; I never get the term right): it has to do with the life cycles of animals and plants.

This is what this fellow goes around teaching. He focuses on the squirrels; he goes into schools and tells grade school kids about squirrels and their body temperature cycles. He sometimes found that they didn’t seem fully engaged by the graphs he showed them of a squirrel’s body temperature over the course of a year.

What I’m interested in is sonification as educational enhancement, as a way of introducing people to a field of study that is new to them. Making sound part of the doorway in. This is what gets them interested in it.

So we’re thinking: what if there were a kind of soundtrack, a kind of groove they could hear? You get a regular body temperature cycle, a daily cycle, as it gets light and dark and light and dark. Then they go underground. Depending on whether they’re male or female, they’re down there for a certain period of time before they go into torpor, when they’re in hibernation.

So they’re conscious, but they’re in sensory deprivation; they don’t know when it’s light and when it’s dark, and the body temperature starts to sound irregular. It is kind of interesting. Then when they go into hibernation, it plunges. You can hear that, but I have a feeling the interesting part is going to be the change from active to preparing-for-hibernation, because that’s when the cycle goes from fairly regular to drunken and irregular before it just plunges.
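As a minimal sketch of what such a squirrel-year soundtrack might be, the Python below compresses a synthetic year of hourly body-temperature readings (a stand-in, since the study’s real data isn’t given in this interview) into roughly half a minute of pitch contour: the daily cycle reads as a regular warble, the pre-hibernation phase as an irregular wobble, and hibernation as a plunge:

```python
import math
import random
import struct
import wave

SR = 44100

# Synthetic stand-in for a year of hourly body-temperature readings (deg C):
# a clean daily cycle while active, a ragged pre-hibernation phase, then the plunge.
temps = []
for day in range(365):
    for hour in range(24):
        if day < 230:                          # active: regular light/dark cycle
            temps.append(37.0 + 1.5 * math.sin(2 * math.pi * hour / 24))
        elif day < 260:                        # preparing to hibernate: cycle goes "drunk"
            temps.append(37.0 + random.uniform(-3.0, 3.0))
        else:                                  # hibernation: body temperature plunges
            temps.append(4.0)

samples = []
phase = 0.0
per_reading = int(0.004 * SR)                  # 4 ms of audio per hourly reading
for t in temps:
    freq = 80.0 + 25.0 * t                     # temperature -> pitch
    for _ in range(per_reading):
        phase += 2 * math.pi * freq / SR       # continuous phase avoids clicks
        samples.append(0.3 * math.sin(phase))

with wave.open("squirrel_year.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```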

So what I’m interested in is sonification as educational enhancement, as a way of introducing people to a field of study that is new to them. Making sound part of the doorway in. This is what gets them interested in it.

If we make this part of how kids study science now, we’ll get a generation of kids who grow up with the idea. I grew up with the idea that you look at science; I was making graphs in about the 4th grade, because that’s what they had us do. If you brought kids up with the idea that you also listen to science, then we’re going to have a different kind of scientist in the next generation. We’re gonna have a scientist who’s used to listening to stuff as well as looking at stuff.

It’s not going to be a sudden thing where somebody invents a sonification that changes everything overnight. That’s what I was hoping for for about 10 years, and I don’t see that happening. Maybe it will; that would be dandy. But I would be happy enough to plant some seeds and let the next generation develop scientific tools that incorporate sound in new ways, tools that help them understand the cosmos and the earth differently than I grew up doing it.

Tonks: So would you say that the most compelling use of sonification is education?

Ballora: Yeah, I would. At least, that’s what I’m making my focus right now. I go to the ICAD conference, and it’s mostly scientists and psychologists there; I’m usually the artsy music guy in the room. Sometimes there are a few others, but I’ve done all these weird and wacky things with George Smoot and Mickey Hart, and this meteorology person, and this forestry person. I think the range of things I’ve sonified is unusual.

What I liked about working with Mickey Hart was that he wanted it to have integrity, but I also knew it was going to have to grab him musically right away. If he didn’t like it right away, it wouldn’t matter how well I could explain why it was valid. If it didn’t sound good, if it didn’t grab him in the gut right away, he wasn’t going to be interested in using it. So it had to be musically compelling.

So there were two bottom lines I needed to address: it had to be musically compelling and it had to be scientifically informative. Even if we haven’t gotten around to studying it scientifically, there has to be that potential there.

It’s got to have the integrity so that if you wanted to, you could study it. Using music as a scientific education tool is, I think, a really compelling idea right now. That’s what I’m working toward with some colleagues back at Penn State. I’ve got a colleague in music education who wants to work with me on this, to help create some educational programs.

If you brought kids up with the idea that you also listen to science, then we’re going to have a different kind of scientist in the next generation. We’re gonna have a scientist who’s used to listening to stuff as well as looking at stuff.

I’ve gone around and talked to some people about the film, visualization people and science people. They come up to me and say, “So what assessment data do you have on this?” and I say, “I don’t have any. I mean, it’s a movie. What assessment data should I have?” I’m not trying to be snarky. If you’re making a movie or making an album, what assessment do you have?

People buy it or they don’t buy it. But how do you assess how good that is? I understand you have to do that, but I don’t know what the right way of doing it is. I could get together a focus group, have them listen to some things, and do a bunch of statistical gymnastics with it, I guess. But I’m more interested in putting a nice thing online, having teachers be able to get to it from all over the place, and having them tell me how it works out for them.

That, to me, would be a more interesting kind of assessment: more qualitative, not so quantitative. I think that’s the direction I see it going in now.

She’s done some work with Smithsonian Folkways. So the other day I got together with her and she was showing me their mariachi site. You can bring up a little animation of a mariachi band: click on this guy and you hear the trumpet, click on this guy and you hear the guitar, click on this guy and you hear the vocals. So you can hear the different components of the mariachi band. It’s a really nice, engaging introduction to that form of music. If we could do something analogous in an educational setting, I think it could be really valuable.

It had to be musically compelling and it had to be scientifically informative. Even if we haven’t gotten around to studying it scientifically, there has to be that potential there.

I think museum settings would be valuable. The Earth and Mineral Sciences Museum at Penn State was interested in these hurricane sonifications, so we’re just about ready to put a little exhibit together. People can go in, and there will be a touch screen where you pick your hurricane, listen to it, and watch it. Then you can go to another screen that breaks down the sonification: here’s how we did pressure, here’s how we did latitude, here’s how we did longitude. Then you can go back, listen to another hurricane, and hear all of these things at once. The director of the museum is interested in doing more of this. He’s interested in the squirrel data, and he’s interested in something I did two years ago with somebody working on Antarctic ice.

He had Antarctic ice data going back four hundred thousand years: the volume of the ice, the surface area of the ice, and the solar energy reaching the ice. I worked on it with a grad student, and people enjoyed it. They enjoyed hearing all of these things working in combination; there were quasi-cyclic patterns you could hear. He would like to have that in the museum also. A museum exhibit where people can go and explore these data sets in that way would be pretty nice.

If we also found a way to put it online, it wouldn’t be site-specific: you could listen to it anywhere, and then if you can get to the museum, there’s a bigger and better display with better loudspeakers. Just like the Smithsonian Folkways site is in association with the Smithsonian Museum: you can go there if you can get to DC, but you can also explore it online from anywhere.

If we could do that with sonification, it would be really interesting to see how people like it. How do people take to it? How does it help people approach studying squirrels or studying the Antarctic in different ways? It’s just a different lens through which to experience this kind of information.

Tonks: Dr. Ballora, it has been such an honor to talk to you.
