Humans are far better at identifying changes in data patterns by ear than by eye in two dimensions, say researchers exploring a radical concept. They believe that servers full of big data would be far more understandable if the numbers were moved off computer screens and hardcopies and sonified, or converted into sound.

That's because when listening to music, nuances can jump out at you, such as a bad note. Researchers at Virginia Tech say the same may apply to number crunching: spotting anomalies in data sets, or comprehension overall, could be enhanced.

The team behind a project to prove this is testing the theory with a recently built 129-loudspeaker array installed in a giant immersive cube in Virginia Tech’s performance space/science lab, the school's Moss Arts Center.

How researchers are testing their big data theory

Data sets from the earth's upper atmosphere are the test subjects, with each piece of atmospheric data converted into a unique sound. The pieces of audio are varied through changes in amplitude, pitch, and volume.
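The article doesn't describe the actual mapping the team uses, but the basic idea of sonification, turning a stream of numbers into tones whose pitch and loudness track the data, can be sketched in a few lines. Everything here (the function name, the frequency range, the linear amplitude ramp) is a hypothetical illustration, not the Virginia Tech implementation:

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map each data point to a (frequency_hz, amplitude) pair.

    Values are normalized to [0, 1]; frequency is interpolated on a
    logarithmic scale, so equal data steps sound like equal musical
    intervals, and amplitude grows linearly with the normalized value.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                      # avoid divide-by-zero on flat data
    tones = []
    for v in values:
        x = (v - lo) / span                      # normalize to [0, 1]
        freq = f_min * (f_max / f_min) ** x      # log-scale pitch mapping
        amp = 0.2 + 0.8 * x                      # larger values play louder
        tones.append((freq, amp))
    return tones
```

Played back in sequence, a sudden spike in the data becomes a sudden jump in pitch and volume, which is exactly the kind of "bad note" a listener notices without staring at a chart.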

The school’s immersive Cube contains one of the biggest multichannel audio systems in the world, the university claims, and sounds are produced in a special 360-degree 3D format.

“Users experience spatial sound, which means they can hear everything around them,” the school says in a news article. “Sounds [are] actually placed in specific spots in the room.”

Each section of the globe’s atmosphere is assigned to one of the Cube’s 129 speakers, which are arranged to project audio in a half-dome-like pattern, thus replicating a hemisphere. Participants wander the Cube while operating an interface that lets them rewind the 3D sounds, zoom in, slow down the audio, and so on.
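The school hasn't published how atmosphere sections are assigned to speakers, but the half-dome arrangement suggests a mapping from latitude/longitude to concentric rings of speakers, denser near the horizon than at the zenith. The ring sizes and function below are assumptions chosen only so the counts sum to the Cube's 129 speakers:

```python
def speaker_for(lat_deg, lon_deg, ring_sizes=(1, 8, 24, 96)):
    """Pick a speaker index (0..128) for an upper-hemisphere direction.

    Hypothetical layout: 129 speakers in concentric rings, from a single
    speaker at the zenith down to 96 near the horizon, mirroring how a
    half-dome array could cover a hemisphere of atmosphere.
    """
    assert sum(ring_sizes) == 129
    n_rings = len(ring_sizes)
    lat = max(0.0, min(90.0, lat_deg))              # clamp to upper hemisphere
    ring = min(n_rings - 1, int((90.0 - lat) / 90.0 * n_rings))
    sector = int((lon_deg % 360.0) / 360.0 * ring_sizes[ring])
    sector = min(sector, ring_sizes[ring] - 1)      # guard the wrap-around edge
    return sum(ring_sizes[:ring]) + sector
```

With a table like this, a disturbance over a given patch of sky always sounds from the same spot in the room, which is what lets a listener localize where in the atmosphere something changed.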

The gesture-based interface they carry then captures the study's user data (which, amusingly, in turn needs to be analyzed).

“It makes sense that we would want to go beyond two-dimensional graphical models of information and make new discoveries using senses other than our eyes,” says Ivica Ico Bukvic in another article on the university's website. He is an associate professor of music composition and multimedia in Virginia Tech's College of Liberal Arts and Human Sciences and one of the collaborators, working with Greg Earle, an electrical and computer engineering professor.

Previous research in using sound to explore data

John Beckman, founder of Narro, a text-to-audio conversion website, alluded to the idea in a personal blog post. “It’s hard to miss a discordant note or change in volume, even when attention is elsewhere,” Beckman said in 2015.

Unrelated to SADIE and Merced, Beckman was asking even then why more data analysis isn't performed through sound.

He points out that sound and visuals are the two main ways people interact with electronics, yet visuals are currently the only way people analyze large data sets.

“It seems like our hearing is primed to pick up minute changes, just as much as our sight,” Beckman said then.
