“Traditional neural networks assume that all inputs, and therefore outputs, are independent of each other. That didn’t work for predicting song lyrics because the output produced by traditional neural networks isn’t based on the original text input,” the company explains. “With a recurrent neural network (RNN), the output (i.e. the words generated) is dependent on the previous input. If we input Carrie Underwood lyrics, our RNN will ‘look back’ to those lyrics and the output will take them into account.”

In other words, by studying other songs written or performed by the Grammy-winning artist, US Direct’s AI will write a song that “sounds” like Carrie, learning her phrasing, style and word-choice patterns to create something that feels authentic to listeners.

“We took the lyrics from Carrie Underwood’s six studio albums, excluding verses sung by guest artists, and used them to train a recurrent neural network,” US Direct says. “The neural network produced hundreds of lines of music, which we then cleaned up and reformatted to make Carrie Underwood’s next song (as predicted by a bot).”
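For the curious, here's roughly what that "looking back" means in code. This is a minimal NumPy sketch of a character-level RNN's recurrence, not US Direct's actual model: the toy corpus, hidden size, and `sample()` helper are illustrative assumptions, and training is omitted entirely, so the untrained weights produce gibberish rather than Carrie-style lyrics.

```python
import numpy as np

# Illustrative toy corpus (one line from the bot's song); the real model
# was trained on six albums' worth of lyrics.
corpus = "every horizon holds a storm for sirens "
chars = sorted(set(corpus))
ix = {c: i for i, c in enumerate(chars)}
V, H = len(chars), 32  # vocabulary size, hidden-state size (assumed)

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (H, V))  # input-to-hidden weights
Whh = rng.normal(0, 0.01, (H, H))  # hidden-to-hidden ("look back") weights
Why = rng.normal(0, 0.01, (V, H))  # hidden-to-output weights
bh, by = np.zeros(H), np.zeros(V)

def step(h, x_ix):
    """One recurrent step: the new hidden state mixes the current
    character with the previous hidden state, so every output depends
    on all the input that came before it."""
    x = np.zeros(V)
    x[x_ix] = 1.0
    h = np.tanh(Wxh @ x + Whh @ h + bh)
    p = np.exp(Why @ h + by)
    return h, p / p.sum()  # softmax over next-character probabilities

def sample(seed_ix, n):
    """Generate n characters, feeding each output back in as input."""
    h, cur, out = np.zeros(H), seed_ix, []
    for _ in range(n):
        h, p = step(h, cur)
        cur = int(rng.choice(V, p=p))
        out.append(chars[cur])
    return "".join(out)

print(sample(ix["e"], 40))  # untrained weights: random-looking characters
```

After training (which this sketch skips), the `Whh` matrix is what lets the network carry context forward, which is why the generated words echo the phrasing of the lyrics it was fed.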

Among the song’s lyrics:

“Every horizon holds a storm for sirens”

“The days were whiskey.”

“I’m praying I’m fearless.”

“I need tears for miles.”

Considering this is a woman who counts among her first, uh, hits a song about beating the snot out of a cheating lover’s car, and another one in which she implores Jesus to steer her car through life’s storms, it’s fitting.

The CMAs are Wednesday night. If you watch the ceremony, let us know what you think of the song. Can you tell it wasn’t written by a person?
