Looking at the results on some video feeds, it clearly does not account for any camera movement, as it’s simply processing video and comparing changes in pixels. That being said, I have a feeling their algorithm is more complex than just looping through the pixels and checking each one for a change, since that technique has been around for years. It is probably doing something region-based and then drilling down to individual pixel changes.
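The “looping through the pixels and checking each one for a change” baseline the comment describes is simple frame differencing. A minimal sketch (my own illustration, not the patented algorithm):

```python
# Naive frame differencing: flag a pixel as "motion" when its
# grayscale intensity changes by more than a threshold between
# two consecutive frames.
def frame_diff(prev, curr, threshold=25):
    """prev and curr are 2-D lists of grayscale values (0-255)."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

prev = [[10, 10], [10, 10]]
curr = [[10, 200], [10, 10]]
mask = frame_diff(prev, curr)
# mask -> [[0, 1], [0, 0]]: only the pixel that jumped is flagged
```

Note that any camera motion shifts every pixel, so this baseline lights up the whole frame — which is exactly why it fails on non-static cameras.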

I agree. I don’t see how great this is unless it is able to actually detect movement while the camera is moving. They show no videos (that I found, anyway) on their site of anything other than a static camera.

Back in ’00 in graduate school, we used Bayesian algorithms to analyze aerial photos to distinguish between cars, carrion, etc., with the hope of measuring traffic flows. Seriously, there’s nothing new here. In fact, if you wanted to do this with moving cameras, I bet it would work there as well, as long as the camera wasn’t moving too fast.

PS: don’t ask me to try to remember the actual algorithm or code; it’s been way, way too long.

It saddens me that Sony won’t make a version of CHDK. Had I known about CHDK, I would have bought a Canon; my friends with Canons are too yellow to put CHDK on their cameras… I’m sure if you could find a serial connection on a Sony you could get the OS and tweak it… there’s a mission…

This CAN’T be patented; there is prior art:
The EffecTV program for Linux features an effect called “hologram” or something like that, which does exactly the same thing, but instead of a butt-ugly green overlay, it overlays a cool retro sci-fi hologram effect.

You know, nearly every video codec of the last few decades has used motion detection to compress video.
And this really is just comparing pixels; in short, it’s backwards, and I’m going to say ridiculous, to release this now as something so novel and clever. The claim to fame they can make is putting it on the Canon, but seeing as Canon cameras also have video compressors on chip, it could probably be done much better with that hardware assisting.
We live in a day and age where cheap consumer cameras can freaking detect when you are smiling, and this should impress? Come on now.
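The codec motion detection this comment refers to is usually block-matching motion estimation. A generic sketch (my own illustration of the technique, not any particular codec): for each block in the current frame, search nearby offsets in the previous frame and keep the offset with the lowest sum of absolute differences (SAD) — that offset is the motion vector the codec encodes instead of raw pixels.

```python
# Block-matching motion estimation via sum of absolute differences (SAD).
def sad(a, b):
    """SAD between two equal-sized 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(frame, top, left, size):
    """Extract a size x size block from a 2-D frame."""
    return [row[left:left + size] for row in frame[top:top + size]]

def best_motion_vector(prev, curr, top, left, size=2, search=1):
    """Find the (dy, dx) offset into prev that best matches the
    block of curr at (top, left), within a +/-search window."""
    target = block(curr, top, left, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if ty < 0 or tx < 0 or ty + size > len(prev) or tx + size > len(prev[0]):
                continue  # candidate block falls outside the frame
            cost = sad(block(prev, ty, tx, size), target)
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]
```

For example, a bright 2×2 patch that moves down-right by one pixel between frames yields the vector (-1, -1), i.e. “this block came from one pixel up and left.”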

Claiming there is nothing novel here is like claiming that there was nothing novel about quicksort because bubble sort already existed. It’s naive and foolish. I skimmed through the paper. The algorithm is new, and they are just showing off the efficiency by running it on CHDK.

I will agree though that it goes against the spirit of academic research to patent algorithms.

I quickly skimmed over the patent papers, and it seems the algorithm uses an interesting adaptive background subtraction technique. Putting the whole issue of software patents aside, I’m not sure whether this patent is justifiable at all in terms of originality.
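For readers who haven’t met the term: adaptive background subtraction typically means the background model updates over time rather than being a fixed reference frame. A common generic form — a running-average sketch of my own, not the patented technique — looks like this:

```python
# Adaptive background subtraction via an exponential running average.
# The background slowly absorbs scene changes, so lighting drift and
# newly parked objects eventually stop triggering the mask.
def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the background model (2-D lists)."""
    return [
        [(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
        for brow, frow in zip(bg, frame)
    ]

def foreground_mask(bg, frame, threshold=30):
    """Flag pixels that deviate from the background model."""
    return [
        [1 if abs(f - b) > threshold else 0 for b, f in zip(brow, frow)]
        for brow, frow in zip(bg, frame)
    ]
```

A pixel that suddenly changes is flagged as foreground, but after enough updates the background catches up and the flag clears — the adaptation that distinguishes this from naive static-reference subtraction.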

It IS interesting (I skimmed the algorithm part of the paper) but it seems more like an incremental improvement. It seems marginally better than naive background subtraction if you forget about the ‘history’ aspect of what this algorithm is doing. However, I cannot speak to whether or not it is a huge improvement in efficiency.

Yes, it goes against the spirit of academia. But Page and Brin patented their algorithms. GIF was patented. RSA has patents on encryption algorithms. I think even the SUSAN corner detection algorithm is patented. I’m for openness, but I’m also for people being able to profit from their hard work.

Am I the only person who recognizes that Photo Booth (in Mac OS) has been able to do this for years? It’s not that hard. So congrats to you U of L people for reinventing something that didn’t need reinventing. This would be acceptable to me if they had said they “ported the ability to filter motion from the background” instead of “developed an algorithm”.

You really bastardized the hell out of that write-up. The Slashdot article was much more informative. You made it sound like motion detection and background subtraction had never been done before. The cool thing here is that it’s being done directly on the camera.

The principle of “background subtraction” has existed for about 30 years now. Television too…
Why would this mean that you cannot innovate on it anymore?

On the author’s site, there is a sequence for a version of the algorithm that requires only 1 comparison per pixel and 1 byte of memory. To me, that seems to be the absolute floor in terms of computational resources. Not surprising, then, that you can embed it in a digital camera. Nice demo, anyway.
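One known way to hit a budget in that neighborhood is a sigma-delta-style background estimator — to be clear, this is a standard technique I’m naming for illustration, and the author’s actual method may differ. Each pixel keeps a single byte of background state that is nudged by ±1 toward every new frame:

```python
# Sigma-delta-style background estimation (illustrative sketch).
# Per pixel: one byte of background state, nudged +/-1 toward the
# incoming frame; pixels far from the state are flagged as motion.
def sigma_delta_step(bg, frame, threshold=20):
    """Update bg in place and return the motion mask (2-D lists)."""
    mask = []
    for i, row in enumerate(frame):
        mrow = []
        for j, px in enumerate(row):
            b = bg[i][j]
            if px > b:
                bg[i][j] = b + 1      # nudge background up by one
            elif px < b:
                bg[i][j] = b - 1      # nudge background down by one
            mrow.append(1 if abs(px - b) > threshold else 0)
        mask.append(mrow)
    return mask
```

Because the state moves only one gray level per frame, a transient object is flagged immediately, while slow illumination changes are tracked without ever triggering — and the whole model fits in one byte per pixel.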

It should also be noted that by releasing it in binary-only form, they’re (quite significantly, in my opinion) limiting it to x86-only applications; you can’t use one of the many embedded ARM systems with the ability to use a camera.

I quite like the way this algorithm keeps objects highlighted even after they have stopped moving. Does anyone know of an open-source algorithm that does that? I’m using AForge at the moment and I’m looking for a faster/better alternative.
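One common way to get that “stays highlighted after stopping” behavior is a motion-history buffer — this is my own guess at a mechanism, not how the paper or AForge implements it. Each pixel that sees motion resets a per-pixel age counter; counters decay every frame, so a stopped object fades out over several frames instead of vanishing instantly:

```python
# Motion-history buffer: per-pixel age counters that are reset by
# motion and decay by one each frame. A pixel counts as "highlighted"
# while its counter is nonzero.
def update_history(history, motion_mask, max_age=30):
    """Return the next history grid given this frame's motion mask."""
    return [
        [max_age if m else max(h - 1, 0) for h, m in zip(hrow, mrow)]
        for hrow, mrow in zip(history, motion_mask)
    ]
```

Feeding it the per-frame motion mask from any detector gives you max_age frames of lingering highlight for free, at the cost of one counter per pixel.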