Maxine-Maxareddu Musicbots at the 12th Annual Outsound New Music Summit!!

One of my favorite parts of the Summit is the “meet the artists” intro. Composer Ritwik Banerji talked about his Max/MSP-based software musicbot, Maxine, referring to it as his “child” that he was teaching. I found that statement very intriguing. Ritwik has a background in teaching children and has a young child of his own. The other performer, Joe Lasqo, drew on his prior Artificial Intelligence work in expert systems and natural language/speech processing, which he applies to programming Maxareddu.

(Max/MSP is a very flexible visual programming environment used for composing electronic music and for many other music and video applications.)

Ritwik Banerji and Joe Lasqo used their “Improvising Agents”: “artificial-intelligence software entities that listen to, interpret, and then produce their own music in response.”

Appropriately, the set opened with a video projection of a chat between two animated characters. Their “conversation” was very funny. I assumed someone had scripted it, but it was completely “improvised” by the two characters. I don’t hear much humor in experimental music. Way too serious!!

The two human musicians joined the conversation on their acoustic instruments (Banerji: sax, Lasqo: piano). Since I’m an acoustic musician (percussion) myself, I really liked this part.

Maxine “appeared in early 2009 as a being, deeply inspired by Banerji’s work with children in Chicago. Like one would hope of a child, this project focuses on the creation of a social agent, finding ways through sound to make its presence known, while respecting and enhancing the presence of others. Recently this project has more strongly engaged the issue of astromusicology, or the real-time musical diplomacy between human sound makers and the spectral bodies of Maxine.”

Musicbots Maxine and Maxareddu used a microphone to “listen” and then improvised in response to the sounds of Banerji’s saxophone, Lasqo’s acoustic piano, and the ambient sounds in the room.
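Just to make the idea concrete: the actual musicbots are Max/MSP patches, which are visual rather than text-based, so the sketch below is only a toy illustration of the listen-and-respond loop, not anything from Banerji’s or Lasqo’s code. It uses the Python sounddevice and numpy libraries to record a couple of seconds from the microphone, pick out the loudest pitch, and answer with a tone a fifth above it.

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
LISTEN_SECONDS = 2.0

def dominant_pitch(samples, rate):
    """Estimate the strongest frequency in the buffer via an FFT peak."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    spectrum[freqs < 60] = 0  # ignore DC offset and low-frequency rumble
    return freqs[np.argmax(spectrum)]

def respond(freq, rate, seconds=1.0):
    """Answer with a sine tone a perfect fifth (3:2) above what was heard."""
    t = np.linspace(0, seconds, int(rate * seconds), endpoint=False)
    reply = 0.3 * np.sin(2 * np.pi * freq * 1.5 * t)
    sd.play(reply.astype(np.float32), rate)
    sd.wait()

for _ in range(8):  # a short "conversation" of eight exchanges
    # "Listen": record a short buffer from the default microphone
    buf = sd.rec(int(SAMPLE_RATE * LISTEN_SECONDS),
                 samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()
    heard = dominant_pitch(buf[:, 0], SAMPLE_RATE)
    if heard > 0:
        respond(heard, SAMPLE_RATE)
```

The real agents obviously do far more than echo a fifth back at you, but the basic cycle is the same: capture sound, analyze it, and answer with sound of your own.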

Warren Stringer very seldom performs, so it was a real treat to have him there doing video projections. Joe Lasqo, the curator, invited him. Per Joe, “I met Warren through the SF Electronic Music Meetup (SF EMM), which we both belong to. I gave him a ride to a show by the SLOrk (Stanford Laptop Orchestra) that various people from this group were attending; that got us talking, and one thing led to another…”

“He didn’t develop the chatbots; those were from the Cornell University Artificial Intelligence Lab. However, the original video of the chatbots from the AI lab was only about 90 seconds, so all the video transformations of their images after that were due to Warren’s software.”

“… his custom visual synthesis software/system can listen to the music and change the visuals accordingly on its own, and it can also operate under his command (i.e., both it and he can improvise the visual track along with the musicians).”
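I don’t know anything about the internals of Warren’s system beyond what Joe described (it’s his own custom C++ software), but the “listening” half of an audio-reactive visual is easy to picture. Here is a purely hypothetical Python sketch, again with sounddevice and numpy: a microphone callback measures the loudness of each incoming audio block and redraws a simple text “visual” to match.

```python
import numpy as np
import sounddevice as sd

def on_audio(indata, frames, time, status):
    """Map the loudness (RMS) of each audio block to a text 'visual'."""
    level = float(np.sqrt(np.mean(indata ** 2)))
    bar = "#" * min(60, int(level * 600))
    print(f"\r{bar:<60}", end="", flush=True)

# Stream from the default microphone; the callback redraws on every block
with sd.InputStream(channels=1, samplerate=44100,
                    blocksize=2048, callback=on_audio):
    input("Listening -- press Enter to stop\n")
```

A real system like Warren’s adds the other half too: the visuals can run autonomously off the music like this, or be steered live by the performer.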

Since I’m working on getting set up to play video projections behind my group, Ear Spray, I spent a little time chatting with Warren before the performance. He was using a webcam, an iPhone, an iPad, and a MacBook Pro, which is the setup I was planning on using, so I got the names of all his equipment to check out later. He was also using a touchpad controller of his own design, running software he wrote in C++.

I assumed he was just a ‘regular’ video person. I was really wrong!! Fortunately, Joe Lasqo told me about him.