Thanks, Bruce, for your kind perusal. But stereo vision would give me only one dimension, namely depth (if I am correct). What about the other dimension (the other axis), which I need at the same time? My purpose is to get the position of the boat in Cartesian coordinates (x, y) from a camera on the ground.
Thanks

Nadeem - Optical flow and stereo vision are two very different things. Optical flow is a class of algorithms that estimate motion, while stereo vision is the extraction of 3D information from a pair of cameras. If your cameras are on stable ground (not another boat) then you could use optical flow to find the boat and stereo vision to determine its distance from the cameras. If your cameras are on a boat, then you need an algorithm that is invariant to camera motion.
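The fixed-camera case described above could be sketched roughly as follows. This is a minimal illustration, not a tested implementation: the video file name, the flow-magnitude threshold, and the use of the largest blob as the boat are all assumptions you would tune for your own footage.

```matlab
% Sketch: locate a moving boat from a fixed ground camera using optical
% flow, then threshold flow magnitude and take the largest blob's centroid.
videoReader = VideoReader('boat.avi');            % assumed file name
flowEstimator = opticalFlowLK('NoiseThreshold', 0.01);

while hasFrame(videoReader)
    frame = rgb2gray(readFrame(videoReader));
    flow = estimateFlow(flowEstimator, frame);

    % Pixels with large flow magnitude likely belong to the moving boat
    mask = flow.Magnitude > 0.5;                  % threshold chosen by trial

    % Blob analysis gives the boat's centroid in image coordinates
    stats = regionprops(mask, 'Centroid', 'Area');
    if ~isempty(stats)
        [~, idx] = max([stats.Area]);             % keep the largest blob
        centroid = stats(idx).Centroid;           % [x, y] in pixels
    end
end
```

The centroid is only an image-plane position; a calibrated stereo pair would then be needed to triangulate the corresponding point and recover real-world coordinates.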

Andrei - I don't imagine Kalman filtering being particularly useful in smoke detection, which is a particularly hard challenge. You might want to keep blob detection, but use color or some other attribute as the basis for segmentation. Maybe vision.ForegroundDetector would work better, along with converting the frames to the HSV color space and finding the color plane that shows the greatest difference between smoke and non-smoke.
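That suggestion might be sketched like this. It is only an outline under stated assumptions: the file name is made up, and the choice of the saturation plane (smoke often looks gray and desaturated) is a guess you would need to verify against the value and hue planes on your own data.

```matlab
% Sketch: background subtraction on a single HSV plane to segment smoke.
videoReader = VideoReader('smoke.avi');            % assumed file name
detector = vision.ForegroundDetector('NumTrainingFrames', 50);

while hasFrame(videoReader)
    frame = readFrame(videoReader);
    hsv = rgb2hsv(frame);
    satPlane = hsv(:, :, 2);    % saturation plane; assumed most discriminative

    fgMask = step(detector, satPlane);             % foreground in that plane
    fgMask = imopen(fgMask, strel('disk', 3));     % remove small noise blobs
end
```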

Hi Bruce. I'm a student at Saratov State Technical University. I am developing a program that detects smoke in video. Based on the car-tracking program, I can detect motion using optical flow with the Lucas-Kanade method, but I can't distinguish between smoke and non-smoke. I tried to use blob analysis, but that also didn't give results. So my question to you: using a Kalman filter in object tracking, is it possible to distinguish smoke from a human?

I watched your video and downloaded the file. It's really nice work. I need to find accurate real-world x, y position data for a slow-moving boat (with a ball at its CG point) to incorporate into my research. In your opinion, which should I use: optical flow or stereo vision? I have two Fastcam MC2 cameras. Moreover, I have very little knowledge of image processing and computer vision. Kindly guide me. Thanks in anticipation.

The technique I used in this demo assumed a camera at a fixed location. For a moving camera, such as one mounted on a car, I suggest looking at this example: http://www.mathworks.com/help/vision/examples/tracking-pedestrians-from-a-moving-car.html

You will need to train an object detector to recognize cars. You can do this with the trainCascadeObjectDetector function and several hundred images of cars as seen from the camera position where you expect to mount it.
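A hedged sketch of that training step follows. The folder and file names are illustrative, and `positiveInstances` is assumed to be a table (or struct array) of image file names and car bounding boxes, typically built with the Training Image Labeler app; the stage count and false-alarm rate are starting points, not recommendations.

```matlab
% Sketch: train a cascade detector for cars, then run it on a test frame.
negativeFolder = fullfile('data', 'nonCarImages');   % assumed folder
trainCascadeObjectDetector('carDetector.xml', positiveInstances, ...
    negativeFolder, 'NumCascadeStages', 15, 'FalseAlarmRate', 0.2);

% Use the trained detector on a new image
detector = vision.CascadeObjectDetector('carDetector.xml');
img = imread('testFrame.jpg');            % assumed test image
bboxes = step(detector, img);             % [x y width height] per detection
annotated = insertShape(img, 'Rectangle', bboxes);
```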

vision.ForegroundDetector provides a binary mask of the pixels that belong to the foreground of a video.

vision.PeopleDetector requires at least grayscale images, if not color, to detect upright standing people. It wasn't designed to be used in combination with vision.ForegroundDetector; rather, it is an alternative.

If the camera is stationary, then vision.ForegroundDetector is much faster at detecting moving objects, including people. If the camera is moving, then that algorithm fails to detect people, but vision.PeopleDetector will still work.
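The two approaches can be contrasted in a minimal sketch like the one below. The video file name and the blob-area and training-frame parameters are assumptions for illustration; in practice you would run one approach or the other, not both per frame.

```matlab
% Sketch: two ways to find people in video.
videoReader = VideoReader('pedestrians.avi');      % assumed file name

% Approach 1 (stationary camera): background subtraction + blob analysis
fgDetector = vision.ForegroundDetector('NumTrainingFrames', 40);
blobAnalyzer = vision.BlobAnalysis('MinimumBlobArea', 200);

% Approach 2 (moving camera): sliding-window people detector on raw frames
peopleDetector = vision.PeopleDetector;

while hasFrame(videoReader)
    frame = readFrame(videoReader);

    % Fast, but assumes a fixed camera so the background model holds
    mask = step(fgDetector, rgb2gray(frame));
    [~, ~, bboxesFg] = step(blobAnalyzer, mask);

    % Slower, but keeps working when the camera itself moves
    bboxesPeople = step(peopleDetector, frame);
end
```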

Hi Bruce Tannenbaum, I have tried to use the foreground output with vision.PeopleDetector, but it seems that it does not support that function. Is it the case that when we use the people detector we cannot add the foreground step? The example you give for foreground detection uses blob analysis instead. Thank you.