Hoping to make audio more shareable, WNYC introduces “audiograms” for social media

If you think about the way you scroll through social media in a given day, you’ll realize that it includes a lot of sampling — clicking through and scanning an article, or watching a short video with the sound off.

It hasn’t been totally clear how audio fits into this. “I come from the TV world, where there’s a lot of video for us to use,” Delaney Simmons, WNYC’s social media director, told me. “[In radio], we have a unique problem in that our content isn’t necessarily shareable.” After all, how do you skim a podcast or listen to an audio clip on mute?

WNYC is working to solve this problem with a new tool that creates "audiograms": video files generated from a piece of audio. The result looks like an audio player, but it plays like a movie. Audiograms can be posted to Facebook, Twitter, and Instagram, and users can also embed them. But the best way to describe one is just to show it:
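The general technique — rendering an animated waveform over the audio into a standard video file that social platforms will autoplay — can be sketched with ffmpeg's `showwaves` filter. (This is a hypothetical recreation of the idea, not WNYC's actual tool; the filenames are made up for illustration.)

```shell
# Generate a 3-second test tone as a stand-in for a podcast clip.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=3" clip.wav

# Render the audio as a video: an animated waveform plus the original
# sound, packaged as an MP4 that social platforms treat as native video.
ffmpeg -y -i clip.wav \
  -filter_complex "[0:a]showwaves=s=640x360:mode=line:colors=white[v]" \
  -map "[v]" -map 0:a \
  -c:v libx264 -pix_fmt yuv420p -c:a aac audiogram.mp4
```

The key design point is the output container: because the result is an ordinary MP4 with a video track, the platforms' video players (and their algorithmic preference for video) apply to it automatically.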

The new podcast, There Goes the Neighborhood, is done in collaboration with The Nation, and the first full episode airs Wednesday, March 9. But in the meantime, WNYC created more than 10 pieces of audio content for social promotion. (Simmons came up with the concept for the tool, and it was built by Noah Veltman, a developer on the WNYC Data News team.)

The team has been experimenting with audio content on social platforms for a while. Two years ago, it was an early publisher to use Twitter's ability to embed audio in a tweet. In February, WNYC started experimenting with Anchor, an iOS app that lets users share short audio clips. But Anchor is more about user-generated content than about posting WNYC's own content on social platforms. Last December, the station published a full-length episode of Here's the Thing on Facebook. "From that experience, we learned there is an appetite for audio on social media," Simmons said. But many people didn't listen to the entire episode on Facebook; some did, but more dropped off after the first part.

“We think shorter, more snackable content is the way to go,” Simmons said. “People on social media are always scrolling through their news feeds looking for the next thing, and after we took a look at the backend analytics [for that episode], we decided the shorter, the better: Get people in, get people out, give people the best piece of content you can at the time, and hopefully they’re going to find it really interesting and engaging.” After that, they’ll be funneled to iTunes, the WNYC app, and other podcasting apps to download the full thing for, say, their commute.

How short is short? “We’re telling producers that under a minute feels about right, though I’d always recommend quality over cutting yourself off at the knees,” Simmons said. One recent successful experiment: WNYC has a Chris Christie–themed podcast called The Christie Tracker. In one episode, the hosts read aloud some of the most interesting and funny tweets from the Republican debates. “We took one of the tweets that the host read, put it in an audiogram, and tweeted it out,” Simmons said. “It was just an audio version of a tweet that had gone out the night before! But people loved hearing it.”

Hearing all of this made me wonder why Facebook hasn’t yet launched an audiogram-like audio initiative itself. Simmons cautioned that she can’t speak for the company, but said, “I think that they are interested in audio, and I know they’ve been doing some tests as well.” One of those tests is with podcast previews for Serial.

“On our side, we’ve seen social video take off, and [Facebook’s] algorithms prefer video content over links or photos, so we’re going to dive headfirst into creating as many audio videos as we can,” Simmons said. “If in six months, Facebook’s like, ‘here’s an audio player,’ we’ll test that, too.”

For now, WNYC plans to use only existing audio from podcast episodes in its audiograms. Down the line, though, "we are looking to create social-specific audio," Simmons said. That could include "clips left on the editing-room floor" or extra content that was cut for length. Also on the way: subtitles. A lot of users watch videos with the sound off, but that obviously doesn't work for audio. For now, at the very least, the audiograms show visible sound waves to make it clear that a user needs to enable sound or put on headphones to listen.

In addition, down the line, WNYC might license or open-source its audiogram tool to other publishers. WNYC may have taken the first step, but “that doesn’t mean it’s not affecting other people in the industry, and radio has always been a very inclusive environment,” Simmons said. “This is the first of many steps, and we hope it’s the beginning of solving the social audio problem.”

These are interview excerpts without production, context, or identification beyond the visual one clicks on. The examples don't seem like bits of sound interesting enough in themselves for people to want to share (maybe that's just my taste), but at least some moment of sonic tagging (the long piece's title, the name of the person speaking, the topic: something!) seems advisable.

The WNYC audiogram experiment sounds a lot like Clammr, a tool that lets anyone make a shareable 24-second audio clip from a published podcast. http://www.clammr.com/


Before adopting the word "audiogram," someone surely did a basic dictionary search on the term. A real "audiogram" is a graph used in medicine to record and display hearing thresholds and other data about hearing.