It seems that just as I post an article on the dawning of new forms of video search, along come, like the proverbial bus, two more at the same time. I have already mentioned the moves Google is making, but this just-out Wired article – The Super Network: Why Yahoo! will be the center of the million-channel universe – goes into some good detail on the personalized media space…

“A billion hours of programming is meaningless without an efficient way to search it. Think of trying to find a book in the Library of Congress with no database, no card catalog, no Dewey decimal system. Today’s prominent search engines work great for Web pages and OK for still images, which usually contain captions or other identifying information. But video is much harder to sort through.” Josh McHugh

This article, and a second looking at ESPN, talk about the whole long-tail thing and go a lot further in contextualizing collaborative filtering, psychographic profiling and social programming. They also echo my post from a couple of days ago about the importance of getting strong audio/visual data on board as soon as possible.

“Several companies are logging closed-captioned transcripts so that shows can be searched with traditional text-search methods, and San Francisco startup Blinkx recently began captioning videostreams with voice recognition software. But computers are still a long way from watching and understanding TV. The thousands of data-center blade servers inhaling and annotating programs around the clock for Yahoo!, Google, and Blinkx are no more able to extract meaning than an ATM is able to know you’re having an affair by analyzing your withdrawal patterns. “I know how far we are from true computer vision,” says Horowitz, leaning back in his chair in a conference room at Yahoo!’s Sunnyvale headquarters.”
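The “traditional text-search methods” McHugh mentions can be sketched very roughly. Here is a toy inverted index over closed-caption transcripts – all show names, timestamps and caption lines below are made up for illustration, and real systems obviously do far more (stemming, ranking, phrase queries):

```python
from collections import defaultdict

class CaptionIndex:
    """Toy inverted index mapping words to (show, timestamp) caption lines."""

    def __init__(self):
        self.index = defaultdict(set)   # word -> set of (show, timestamp)
        self.lines = {}                 # (show, timestamp) -> caption text

    def add_caption(self, show, ts, text):
        self.lines[(show, ts)] = text
        for word in text.lower().split():
            self.index[word.strip('.,!?"')].add((show, ts))

    def search(self, query):
        # Return caption locations containing every query word
        words = [w.lower() for w in query.split()]
        hits = set.intersection(*(self.index.get(w, set()) for w in words))
        return sorted(hits)

idx = CaptionIndex()
idx.add_caption("NewsHour", "00:12:05", "The future is already here")
idx.add_caption("TechWeek", "00:03:40", "Video search is the future")
print(idx.search("future already"))   # -> [('NewsHour', '00:12:05')]
```

The point of the quote stands, though: this finds where words were *said*, not what was actually on screen.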

Horowitz is ex-MIT and founded Virage, a leading video-analysis company (also mentioned in my previous post), which is now part of Autonomy. Horowitz was apparently inspired by Marvin Minsky’s project for his MIT class – “get a computer to ‘see’ what is actually in a photograph”.

The important thing about this topical wave of interest in video personalization is: what next? Just finding bits of film or TV using searches such as “the episode of Friends where Phoebe has flu”, “that film where frogs fall out of the sky” or “which films contain the phrase ‘the future is already here’” is only the first step. We need to get the “creatives”, you and I, thinking of the great cross-platform interactive services that are enabled by this – lots more, so much more on this to come 😉