Hello: I record animal behaviour on video. Specifically, individual rats are placed in a square box containing some stationary objects, and I would like to track the rat's movement. The only thing that changes from frame to frame is the location of the animal; nothing else moves. The recording is grayscale, and there is no sound.

What I want to do is divide the square video image into 16 or 25 smaller squares, and then count how many frames the rat spends in each square. These frequencies will then simply be mapped to a color spectrum, so that in the end each square is colored according to how long the rat visited it.

I have no clue whatsoever whether that is possible in Runtime Revolution (or LiveCode), or how to go about reading and analyzing these files. Any tips?
best,
Olli.

Could you give a little more information about your project?
How large are your movies, i.e. what are the frame rate and the recording time? Are you using time-lapse?
What is the background behind the rat like? Is it changing, or are the background and lighting constant?
How high is the rat/background contrast? I.e., will it be easy to filter out the rat, or will one have to do image subtraction?

We worked on a simple tracking system for a Barnes maze. I did not even attempt to automate detection; we just tracked the rats manually for the short period, and that worked quite well.

One can do a lot of things with frames of video in Rev. Actually, that is one of the reasons I use Rev. Just don't expect real-time performance: Rev has to do a lot of computing when you work with images. But if you don't have a ready-made solution for your problem, Rev can do a lot of things.
Let me give you an example. We track cell movement in video microscopy. These are time-lapse movies of about 30 to 50 seconds at a frame rate of 25, representing about 16 hours of real time: 750 to 1250 frames, each an individual image at a resolution of 768x576, or about 440,000 pixels. For Rev, a pixel is 4 bytes of information.
Now, at times the collagen matrix in which the cells migrate "moves" as well. These matrix shifts make the recordings useless, and since these assays take a lot of work and time, one shift can wipe out a whole day's work.
So I decided to let the user track a part of the matrix that does not migrate itself but reflects the shift of the matrix. From the coordinates of that track I take the individual images of the video and "deshift" them. That means shifting a lot of pixels, plus I copy a timestamp into each image and draw an outline of the "safe" area for later tracking. All of that takes about 3 minutes of computing time for about 1000 frames on an Intel iMac with a 2.16 GHz Core 2 Duo processor.
This is just to give you an idea of the time it takes to do calculations on images in Rev. I should add that I let QuickTime Player take the movies apart into individual frames and put them back together again (via AppleScript).

So if you consider Rev for this, you might well get what you want. It will take some learning of the language, of course.

I just read in another of your posts that you are familiar with Rev/LiveCode, and that you use a Mac.
So the above would be relatively easy to do.
You would have to take the imageData of an image and do your calculations on that.
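To make the idea concrete, here is a rough sketch of the kind of calculation, written in Python for readability (in Rev you would loop over the bytes of the imageData the same way). It assumes a dark rat on a lighter background; the threshold and grid size are made-up values you would have to tune for your movies:

```python
# Sketch: locate a dark rat in one grayscale frame and bin its position
# into an n x n grid. Assumes the rat is darker than the background;
# the threshold (80) and grid size (4) are illustrative, not tested values.

def rat_grid_cell(pixels, width, height, n=4, threshold=80):
    """pixels: flat list of grayscale values (0-255), row-major.
    Returns (row, col) of the grid cell containing the centroid of the
    dark pixels, or None if no pixel is darker than the threshold."""
    xs, ys = [], []
    for i, v in enumerate(pixels):
        if v < threshold:            # treat this as a "rat" pixel
            ys.append(i // width)
            xs.append(i % width)
    if not xs:
        return None
    cx = sum(xs) / len(xs)           # centroid of the dark pixels
    cy = sum(ys) / len(ys)
    col = min(int(cx * n / width), n - 1)
    row = min(int(cy * n / height), n - 1)
    return (row, col)

def dwell_map(frames, width, height, n=4):
    """Accumulate, over all frames of one movie, how many frames the
    rat spent in each grid cell."""
    counts = [[0] * n for _ in range(n)]
    for pixels in frames:
        cell = rat_grid_cell(pixels, width, height, n)
        if cell is not None:
            r, c = cell
            counts[r][c] += 1
    return counts
```

The resulting counts are exactly the frequencies you would then map onto a color spectrum. Whether a plain threshold is enough, or whether you need to subtract a background image first, depends on your contrast, which is why I asked about it above.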

Hello Bernd: Thank you for getting back to me regarding this issue. Indeed, I am very experienced with coding in Runtime Revolution, but I am rather inexperienced when it comes to analyzing image data, in any programming language.

Here are some technical data on the video files, taken from the QuickTime movie inspector panel:
Format: "WRAW", 240x240, 256 (I guess the 256 stands for bit depth?); the size is 240x250 pixels per frame
Frames per second: 4

As each video file is 5 minutes long, and I have about 100 of them per experiment, I planned to run the computations overnight; it is not time-critical.

We score our files by hand, because in my opinion automated tracking of the more precise exploration behaviors is pointless and will not work: too often the human evaluator needs to make a decision about a behavior that cannot easily be formalized into an algorithm a computer can use.

So do you think the plan to convert each video into an image series and then run computations on those images is doable, or is it bound to fail?
Any other tips are highly appreciated!
Best,
Olli.

I think it would be best to see one of your movies. Could you upload one somewhere?
I wonder whether tracking the movie manually wouldn't be the easier solution. We do manual tracking in our lab because the background is too noisy for automated tracking.
Since, as you say, you track the movement anyway, one could bring the movies up to 25 frames per second, and one track would then take about a minute or so. Once you have the coordinates, you also have the time spent in each of the areas of interest. This of course depends on what exactly you are after and what the movies look like. If you think of posting them, you could send me a link at niggemann at uni-wh dot de.
If you don't have a place to upload them, I could point you to one. We could discuss the specifics outside of this forum.
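To illustrate that last point: once you have one (x, y) coordinate per frame, turning them into time-per-square and then into colors is only a few lines. A sketch in Python (grid size, your 4 fps frame rate, and the blue-to-red color ramp are illustrative assumptions; the same loop is trivial in Rev):

```python
# Sketch: turn a list of tracked (x, y) positions, one per frame, into
# seconds spent per grid square, then map those times to a color ramp.
# n=5 gives the 25-square grid mentioned earlier; 4 fps means each
# frame represents 0.25 s.

def time_per_cell(track, width, height, n=5, seconds_per_frame=0.25):
    seconds = [[0.0] * n for _ in range(n)]
    for x, y in track:
        col = min(int(x * n / width), n - 1)
        row = min(int(y * n / height), n - 1)
        seconds[row][col] += seconds_per_frame
    return seconds

def to_color(value, max_value):
    """Map 0..max_value onto a simple RGB ramp from blue
    (rarely visited) to red (often visited)."""
    if max_value == 0:
        return (0, 0, 255)
    f = value / max_value
    return (int(255 * f), 0, int(255 * (1 - f)))
```

So whether the coordinates come from automated detection or from a manual track, the dwell-time heat map at the end is computed the same way.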