
memory usage profiling

I programmed a simulation model that uses lots (~1e8) of small objects, stored in std::map. The number of objects increases as the simulation progresses in time. The model naturally uses a lot of RAM, but according to my calculations it uses much more than expected, at least double. I tried to profile the program both with massif and by overloading the allocation functions. Both profiles give me the expected memory usage, but, as said, resident memory usage (Linux) is much higher than the total allocated memory according to the profiles.

- I checked for memory leaks, but according to the profiles, and to memcheck, there are none.
- I considered memory fragmentation and tried boost's pool allocator, which helps somewhat, but not a lot (about 10%). I tried using large pools (next size = 10 MB), which increases initial memory usage a lot, but final memory usage is somewhat less.
- I built an 'early garbage' collector which periodically erases from the maps objects that are, strictly speaking, no longer needed. Although this should reduce the number of objects by a factor of 5 or so, it reduces resident memory usage by only about 15%, and it increases computation time tremendously (10 times or so without the pool allocator; with the pool allocator it seems to be better).
- I tried doubling the size of the small objects by including dummy variables, and that does increase memory usage, but only by about 20%.

Right now I am a bit puzzled about where the large overhead comes from. Any suggestions for what it could be would be welcome. In particular, I would like to know how I could make it visible in the memory profiles.

Re: memory usage profiling

Hello jouke.postma
It's not surprising that your program takes up more memory than just your small objects. A map is typically implemented as a tree structure, so there is per-node overhead for the key and the pointers to other nodes. If your objects are really small, the overhead may be larger than the objects themselves.
Storing 1 million structs of 2 integers in a map uses 126,000K on my system; storing the same in a vector uses only 60,000K.

If you overloaded the allocation functions for your small objects, the map overhead would not show up there. Nor would it show as a memory leak.

Re: memory usage profiling

bertandernie,

Thanks for the reply. I considered that, but I think there is more happening. Let me give an example of a short run that I profiled. (My final runs run out of memory on a system with 64 GB of RAM.)
Resident memory use: 340 MB
Estimated number of objects: 0.8e6
Average size of the small objects: 32 bytes (about 3 doubles and a 64-bit pointer).
Map overhead: 3 node pointers, plus a double used as the sort key, makes 64 bytes per stored small object.
Estimated memory usage: 64 * 0.8e6 ~= 51 MB.
Sum of memory allocated with new[] and new ~= 35 MB, not far from the estimated 51 MB, but what is the other ~300 MB used for?
Similarly, a longer run which uses ~800 MB resident memory shows only 150 MB total in the profiling graph.

Anyway, my biggest question is why, when I erase a large part of the objects in the maps, memory usage goes down by only about 10%. That made me think that the data in the maps is only a small fraction of the memory usage, but I can't work out what else is using the memory.

Re: memory usage profiling

Well, I wouldn't expect resident memory usage to go down quickly just because you stopped using some of it. The program isn't using that memory at this instant, but the OS and the allocator don't know it won't be needed again, so a fair amount of it may stay reserved for the program.

Re: memory usage profiling

Sorry I cannot help you much more.
On Linux, with a simple test program, the VM size and RSS do not go down after the map is populated and then cleared. But on subsequent population and clearing, memory usage does not go up either: the program reserves the memory and reuses it (as Lindley suggested). With that test program the memory usage is about what you'd expect from your calculations of object size and map overhead, so without seeing your source code I don't know where the extra 300 MB your program uses comes from.

Re: memory usage profiling

Thank you both for your answers.
Lindley, I was aware of that, but when I delete a bunch of objects, resident memory shouldn't go up on subsequent generation of objects, or at least not as much. But it does. As far as I know, only severe memory fragmentation could cause that, or else what I deleted was really small compared to what was allocated, meaning that not the data but something else is using a lot of RAM.

I did a bit more testing. If I add a large array of zeros to one of the objects, resident memory use goes up about 17 MB, and the memory profiler shows the same increase of 17 MB. When I implement this array as a std::map, memory usage goes up 137 MB but the profiler shows an increase of only 103 MB. So there seems to be a 'hidden' cost in using map, but percentage-wise it is much smaller than the 'hidden' cost in the model. I did vary the map size, and the hidden cost is about 30% of the total size.