To understand what drives creative performance, there have traditionally been two ways to break a creative execution down into its elemental pieces:

1. You can eyeball it. This creative featured a dog and performed better than that one, which featured a cat.

2. You can manually code the creative for the presence or absence of specific elements of interest—such as whether there was a voiceover, a product shot, or human presence, among other things.

The problem is, eyeballing it isn’t an exact science, nor is it very granular. And coding remains an extremely manual, often prohibitively expensive, process that requires you to:

(Step 1) Brainstorm

Identify all the creative elements you think might have had an effect on performance (don’t miss any!).

(Step 2) Create Data Structure

Construct a code frame to organize and capture the elements.

(Step 3) Manually Enter Data

Find someone with a lot of time on their hands to pore over every ad multiple times and fill out the code frame for each individual ad you’re curious about.

(Step 4) Review

And, finally, analyze performance and interpret the results.
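To make the process concrete, here is a minimal sketch of what Steps 2 through 4 look like in code. All element names, ad IDs, and engagement numbers are illustrative assumptions, not VidMob data, and the "lift" comparison is a deliberately simplistic stand-in for real performance analysis:

```python
from statistics import mean

# Step 2: the code frame — the creative elements to capture (hypothetical).
ELEMENTS = ["voiceover", "product_shot", "human_presence"]

# Step 3: hand-coded data — one row per ad, True/False per element,
# plus an engagement metric (made-up numbers for illustration).
coded_ads = [
    {"ad_id": "ad_1", "voiceover": True,  "product_shot": True,  "human_presence": False, "engagement": 0.042},
    {"ad_id": "ad_2", "voiceover": False, "product_shot": True,  "human_presence": True,  "engagement": 0.031},
    {"ad_id": "ad_3", "voiceover": True,  "product_shot": False, "human_presence": True,  "engagement": 0.055},
    {"ad_id": "ad_4", "voiceover": False, "product_shot": False, "human_presence": False, "engagement": 0.020},
]

def element_lift(ads, element):
    """Step 4: mean engagement with the element, minus mean without it."""
    with_el = [a["engagement"] for a in ads if a[element]]
    without = [a["engagement"] for a in ads if not a[element]]
    return mean(with_el) - mean(without)

for el in ELEMENTS:
    print(f"{el}: lift = {element_lift(coded_ads, el):+.4f}")
```

Even this toy version makes the pain obvious: every row in `coded_ads` represents a person watching an ad several times and filling out the frame by hand.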

Ain’t nobody got time for that.

But what if we could eliminate eyeballing altogether AND skip right to Step 4?

At VidMob, we’re building solutions that use the latest advances in computer vision, optical character recognition, and human-in-the-loop review to give marketers a near real-time understanding of how key creative characteristics drive audience engagement differently. With this insight from our Agile Creative Studio (currently in beta), marketers can make data-driven creative decisions and more rapidly put into market beautiful ads that will resonate with their audiences.