San Diego, Calif., Oct. 29, 2013 — Michelle Daniels, a graduate student researcher in Sonic Arts Research & Development at the University of California, San Diego’s Qualcomm Institute, won a Gold Award Oct. 20 at the Audio Engineering Society’s Student Design Competition in New York for the streaming audio system she created for high-resolution display walls.

[Photo: Michelle Daniels]

The open-source middleware – called SAM after its central Streaming Audio Manager – adds audio support to the video streaming middleware known as SAGE (Scalable Adaptive Graphics Environment). SAGE makes it possible to simultaneously display multiple networked applications on high-resolution tiled displays. A SAGE user can, for example, stream multiple datasets or videos to the same display at the same time, and also simultaneously share those datasets and videos with another SAGE user.

“The developers who wrote the SAGE middleware originally designed it for high-resolution images,” explained Daniels, who is a graduate student in the UCSD Department of Music and is advised by Music Professor Shlomo Dubnov. “They had the facility for streaming video to a display wall from remote machines but could not listen to the corresponding audio.

“The challenge with a system like SAGE is that clients are added and removed dynamically, but we lacked a way to handle all of those incoming audio connections on the fly,” she continued. “There are audio streaming hardware devices and a number of point-to-point streaming audio software tools, but a hardware solution didn’t fit SAGE’s needs, and none of the existing software is both dynamically adaptive and capable of uncompressed streaming. That’s a problem because compression, which we’ve all experienced with tools like Skype, is necessary when network bandwidth is limited but typically increases latency and reduces audio quality, often significantly.”
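The dynamic behavior Daniels describes can be illustrated with a minimal Python sketch: streams register and leave at runtime, and the manager sums their uncompressed sample blocks with no codec anywhere in the path. The class and method names here are purely illustrative assumptions, not SAM's actual API.

```python
# Illustrative sketch (NOT SAM's real interface) of a streaming audio
# manager that admits and drops clients on the fly and mixes their
# uncompressed PCM sample blocks without any encode/decode step.

class AudioManager:
    def __init__(self, block_size=256):
        self.block_size = block_size
        self.clients = {}          # client_id -> most recent sample block
        self.next_id = 0

    def add_client(self):
        """Register a new audio stream at runtime; returns its id."""
        cid = self.next_id
        self.next_id += 1
        self.clients[cid] = [0.0] * self.block_size   # silence until samples arrive
        return cid

    def remove_client(self, cid):
        """Drop a stream when its application leaves the display wall."""
        self.clients.pop(cid, None)

    def push_samples(self, cid, block):
        """Receive one uncompressed block from a client (no decoding needed)."""
        self.clients[cid] = block

    def mix(self):
        """Sum all currently connected streams into one output block."""
        out = [0.0] * self.block_size
        for block in self.clients.values():
            for i, s in enumerate(block):
                out[i] += s
        return out
```

Because nothing is compressed, a client joining or leaving changes only the set of blocks being summed; there is no codec state to tear down, which is what keeps latency low on a high-bandwidth network.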

Funded in part by NTT, with support from Pacific Interface, SAM meets that challenge by taking advantage of the high-bandwidth networks used for SAGE and providing the necessary professional-quality audio, “which goes along with SAGE’s concept of streaming uncompressed pixels to a wall,” Daniels added. “In addition to uncompressed pixels, we can now provide uncompressed samples.”

Another plus: Daniels was able to design the system without requiring a complicated audio set-up. She noted that the researchers who typically use SAGE are engineers, biologists and data scientists who are focused primarily on visuals and might only have a simple two-speaker set-up at their disposal.

Such a system is robust enough to run SAM, but, Daniels said, “for more advanced audio setups, such as a very large display wall that can play many videos simultaneously, and for people who know how to hook up audio equipment, SAM provides an interface for adding third-party audio renderers into the system.” These audio rendering ‘plugins,’ such as those written by QI Sonic Arts Research & Development’s Zachary Seldess, can position an audio stream spatially to correspond with a video stream on the display wall or provide other modes of advanced audio rendering.
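A renderer interface of the kind the article describes might look like the following Python sketch: a base class that third-party plugins subclass, plus a default two-speaker renderer that pans a stream according to where its window sits on the wall. All names and the panning scheme are assumptions for illustration, not SAM's actual plugin API.

```python
import math

class AudioRenderer:
    """Hypothetical interface a third-party renderer would implement."""
    def render(self, block, position):
        """Map one mono sample block to speaker feeds for a stream at the
        given horizontal wall position (0.0 = far left, 1.0 = far right)."""
        raise NotImplementedError

class BasicStereoRenderer(AudioRenderer):
    """Default renderer for a simple two-speaker setup: constant-power
    panning driven by the video stream's position on the display wall."""
    def render(self, block, position):
        x = min(max(position, 0.0), 1.0)          # clamp to the wall's extent
        left_gain = math.cos(x * math.pi / 2)      # full left at x = 0
        right_gain = math.sin(x * math.pi / 2)     # full right at x = 1
        return ([s * left_gain for s in block],
                [s * right_gain for s in block])
```

A more advanced plugin, such as a multichannel spatializer for a large speaker array, would subclass the same interface and override `render`, which is what lets anyone experiment with a different algorithm without touching the rest of the system.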

Added Daniels: “The nice thing about SAM is that anybody can write their own renderer if they want to experiment with a different algorithm. SAM is also designed so that anybody can create their own user interface or use my client library to stream audio from their own software to SAM, so it can also be used for many applications outside of SAGE.”

Although some scientific visualization can be done without the need for corresponding audio, Daniels noted that audio often enhances research and collaboration in ways that a video-only system cannot provide.

“The higher quality picture and sound you get, the better interaction you get between people,” she said. “And in situations where researchers are collaborating remotely, for example, the audio is usually much more important than the visuals. If your video connection dies or isn’t that great, it’s not the end of the world. You can often still have a productive conversation using just the audio. It’s just a question of whether people have the necessary bandwidth available. Companies are using high-bandwidth networks for multimedia remote collaboration as part of their everyday workflow.”

Daniels competed with two other graduate student finalists for the top prize, and tied with Ph.D. candidate Nicholas Bryan of Stanford University for the Gold medal. Bryan’s research pertains to interactive source separation that can be used for audio forensics (which makes it possible to pick out individual sounds from a recording and attribute them to a source). A third finalist, Illia Balashov, a graduate student at New York University, won a Silver award for his system for upmixing stereo recordings to multichannel.

Sonic Arts Director Peter Otto, a technical director in the UCSD Department of Music who recruited Daniels to work with his group, called her work in the field of advanced audio networking “important and distinguished.”

“This work has implications for business, communications and the arts, including musical performance, filmmaking and remote collaboration,” said Otto. “We’re really proud of SAM, and it propels a lot of the other related work in networking, visualization and audio well into the future. Michelle’s work is indicative of the potential for collaboration between UCSD academic researchers and industry partners.”