David Clevinger: The typical use case we've been seeing is media entities with large back catalogs of content that was originally created when they didn't have complex metadata tool sets, didn't necessarily have the right people applying metadata, and didn't think of all the use cases on the output side. Maybe it's historical content.

A very concrete example is work that we've done for the US Open. We actually took hundreds of thousands of video clips and photos and news articles and vocabulary terms and proper names and fed it to Watson, and helped Watson understand what tennis was about, so that it could do things like recognize that when it heard the word Ashe, it was capital-A-s-h-e, Arthur Ashe, as opposed to lowercase-a-s-h. So there was a lot of training around that. The output then became our ability to create clips based on what was happening with an event, but also to describe historical video as well.

That's critical for companies with large media back catalogs who want to optimize that content moving forward. You can apply it to live, of course, but that's a typical use case that we see.

It's a recursive learning system. So we took a cross-section of a set of video assets, described it to Watson, said, “This is what's going on. This is who this player is, this is what is being said.” We were then able to turn it loose on other unstructured assets, have it say what it thought it was finding, and then we were able to correct it.

So we were basically able to train it up to understand tennis specifically, and then we could turn it loose on a bunch of different kinds of outputs for the client. That was the idea.
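The train, predict, and correct loop Clevinger describes can be sketched in miniature. This is a purely illustrative toy, not Watson's actual pipeline: the vocabulary dictionary, `tag_clip`, and `correct` are all hypothetical names standing in for a far more involved training process.

```python
# Seed vocabulary built from the initial hand-described assets: term -> tag.
vocab = {"ashe": "Arthur Ashe", "serve": "gameplay"}

def tag_clip(transcript, vocab):
    """Return the set of tags the current model assigns to a clip transcript."""
    text = transcript.lower()
    return {tag for term, tag in vocab.items() if term in text}

def correct(truth, vocab):
    """Human review step: corrections are folded back in as new training data."""
    vocab.update(truth)
    return vocab

# First pass on an unstructured asset: the system says what it thinks it found.
clip = "Crowd roars as the match point is played at Arthur Ashe Stadium"
first_pass = tag_clip(clip, vocab)      # finds "Arthur Ashe", misses the moment

# An editor corrects it, and the next pass benefits from the correction.
vocab = correct({"match point": "key moment"}, vocab)
second_pass = tag_clip(clip, vocab)     # now also tags "key moment"
```

Each correction enlarges the training data, which is what makes the loop "recursive": every pass over new unstructured assets improves the next one.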

Nadine Krefetz: And the outputs are?

David Clevinger: Closed captioning, video clips, excitement scoring. I know you've got a video here that talks about the Masters and some work we've done there, but we were able to do things like listen for crowd noise and say, "This must be really exciting, because the crowd is making a lot of noise at this moment." So we were able to turn that into an excitement score. But we wouldn't be able to do that if we didn't help the algorithm understand what it was looking at and how it should be thinking about that body of work.
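The crowd-noise idea above amounts to scoring audio loudness over time. A minimal sketch, assuming nothing about IBM's actual implementation: compute RMS energy per window of audio samples and normalize against the loudest window to get a 0–100 excitement score. The function name and the synthetic sample data are invented for illustration.

```python
import math

def excitement_score(samples, window=4):
    """Score each window of audio samples 0-100 by RMS loudness,
    normalized against the loudest window in the clip."""
    rms = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        rms.append(math.sqrt(sum(s * s for s in chunk) / window))
    peak = max(rms) or 1.0  # avoid dividing by zero on silent clips
    return [round(100 * r / peak) for r in rms]

# Synthetic "crowd noise": two quiet stretches, then a roar.
audio = [0.1, 0.1, 0.2, 0.1,   # quiet
         0.2, 0.1, 0.1, 0.2,   # quiet
         0.9, 1.0, 0.8, 0.9]   # roar -> high excitement
scores = excitement_score(audio)  # the roar window scores 100
```

A production system would of course combine loudness with the trained understanding of the sport, so that a roar at match point scores differently than ambient noise between games, which is the point Clevinger makes about helping the algorithm understand what it is looking at.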

Nadine Krefetz: So you were training the algorithm to get smarter and smarter, and then you let it go?

David Clevinger: Exactly. We just turned it loose and let it go. And that's the idea: to get it to the point where you can turn it loose, let it run, and then move on to the next one.
