Further testing has shown that there is still a leak in the clustering code, but I can’t figure out where it is. It does seem that cv::Mat is somehow at the centre of this issue though. I had previously noticed that I could stop the memory increase just by commenting out the mergeImages() code (which averages the images for two percepts). This week I realized that I can also stop the memory increase if I don’t normalize the feature vector before clustering. This really does not make sense, since I rewrote all the feature-related code to change from a vector of floats to a cv::Mat so I could use the OpenCV normalization functions. There is no code overlap between the previous version of normalize and this new version, and yet if I run normalization and clustering together, memory increases (red line); if I run either independently (blue and green lines), there is no memory increase:

I can’t recall why I thought this memory increase was solved before. I may not have been running the normalization code in that test, i.e. I was testing clustering with only foreground objects, where normalization is not needed due to the perceptual distances in CIELuv.

After confirming there were no leaks in the segmentation code, I reran the clustering unit test. Over 1000 frames, this clustering code showed a significant leak: (The black line is a linear model fitted to the data.)

The leak was caused by the way I was replacing percepUnit instances with newly merged percepUnit instances. The solution was to change the mergePerceps() function from returning the merged percepUnit to modifying an existing percepUnit in place, using pass-by-reference. An overnight 10,000-frame test has shown that there is no longer a leak in the clustering code:

I did notice some strange percepUnits while debugging the memory leak. I need to confirm that the new background segmenter is producing reasonable results before moving on. After that, the next steps are to merge all these changes from the unit tests into the trunk code, and then I can begin rewriting the way threading and rendering is done, at which point I will have caught up with where I was at the start of the summer.

So what causes a non-human animal to empathize? I’ve been thinking a lot about how humans and non-human animals differ. I’ve come to the conclusion that there are two major differences:

I believe humans have an unparalleled ability to abstract, that is, to build hierarchies of mental representations in which details are thrown away to encapsulate larger concepts that can be broadly applied. This is what allows us to convince ourselves of untruths, to mistake our expectations for reality, to exploit others by defining away their suffering, and to imagine and build technologies that extend our cognition.

Many of us are not in a day-to-day struggle for survival. I presume that much of the morality, empathy and free will that we consider crucial to our human identity would melt away under constant threats to survival. Consider cannibalism and infanticide amongst chimps, who are genetically closer to us than they are to any other apes.

The seal in the video certainly has little empathy for the penguins, so why the empathy for the photographer? Perhaps all animals have lines that define “us” and “them”, where we choose to empathize or to exploit. I further expect that these lines arise from biological survival: if you are below me on the food chain, then you are “them”; if you are equal on the chain, then you are “us”.

Clearly this is a little more complex in humans, but I expect only because of our ability to abstract. We create a concept (say, race or gender) and then use that to move where our line of “us” and “them” is.

The authors discuss the development of self-organizing artworks. Context Machines are a family of site-specific, conceptual and generative artworks that capture photographic images from their environment in the construction of creative compositions. Resurfacing produces interactive temporal landscapes from images captured over time. Memory Association Machine’s free-associative process, modeled after Gabora’s theory of creativity, traverses a self-organized map of images collected from the environment. In the Dreaming Machine installations, these free associations are framed as dreams. The self-organizing map is applied to thousands of images in Self-Organized Landscapes: high-resolution collages intended for print reproduction. Context Machines invite us to reconsider what is essentially human and to look at ourselves, and our world, anew.

A test over this weekend showed that there is no memory leak or CPU time spike in any of the segmentation code (the y-axis is RSS in megabytes):

CPU time is constant over this test, and we can see here that although there are spikes in memory usage, there is no leak, as a linear model of the data (black line) shows a decrease in memory usage over the whole test. The next step is to write a unit test for the clustering code to confirm there is no leak there, and then we can move on to writing new threading/rendering code and adding ML to the system!

In the last post I talked about taking a different segmentation approach rather than trying to figure out why the floodFill() operation was using more and more time (eventually over 200 seconds per frame). A quick look at the Creativity and Cognition (C&C) version did not show any functional differences compared to the unitTest code. Of course, the C&C version was crashing after about 24 hours, which likely was not enough time to exhibit the problem. I took a few days to rewrite the backgroundSegmentation() function from scratch. In doing so I noticed a nice new function: meanShiftSegmentation(). The code is now much cleaner, but unfortunately it is not any faster than the old flood-fill version. This is partially because the segmentation is happening at the full 1080p resolution (not 1/4 of the pixels, as in the floodFill version). I should be able to get it under 1s with some optimization. The good news is that this segmenter works much better: there are no more “blind spots”, as the segmenter breaks up the whole image and joins small regions into larger ones rather than leaving unsegmented areas. Here is an example of the segmentation results (in improperly mapped HSV colourspace):