The memory *is* freed. But it is freed back to the process memory pool, not back to the OS.

This is normal. Asking the OS to allocate memory to a process is an expensive operation, so once a process has requested memory it prefers to hang onto it: when one part of the program has finished with a chunk, the chunk is kept so that it can be reused by a later part of the program.

Think of it as memory caching. If, every time you freed a small chunk, you gave it back to the OS, and then had to go back to the OS a split second later to request it again for another part of the code, your program would run very slowly.

There are exceptions. If your program requests a particularly large single chunk of memory, that may be allocated directly from the OS rather than from the process memory pool, and given back as a single chunk when your program is done with it.

But when you process an XML file into a nested data structure, the memory is not allocated in one chunk, but rather in lots of small pieces as the XML file is parsed. These come from the process memory pool, and when they are freed, they go back to that pool.
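
A quick way to watch this in action is to build and free a large nested structure while reading the process size. This is only a sketch: it assumes Linux's /proc/self/statm (so it won't run as-is on Windows), and the exact numbers will vary by platform and allocator.

```perl
#!/usr/bin/perl
# Sketch (Linux-only): watch the process size while building and
# freeing a nested structure made of many small allocations, as an
# XML parse would make.
use strict;
use warnings;

sub pages_used {
    open my $fh, '<', '/proc/self/statm' or die $!;
    my ($total) = split ' ', <$fh>;
    return $total;    # total program size, in pages
}

my $before = pages_used();

# Lots of small allocations, much like parsing XML into a tree.
my $tree = { map { "node$_" => [ ($_) x 10 ] } 1 .. 100_000 };

my $built = pages_used();
undef $tree;          # freed back to the pool, not (mostly) to the OS
my $after = pages_used();

printf "before: %d  built: %d  after undef: %d pages\n",
    $before, $built, $after;
# Expect 'after' to stay close to 'built': the pool keeps the memory.
```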

This is why you do not see any change in the size of the process's memory allocation when you free the XML::Simple object. But rest assured, that memory is available to the rest of the program should it require it.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.

"Science is about questioning the status quo. Questioning authority".

In the absence of evidence, opinion is indistinguishable from prejudice.

This is why you do not see any change in the size of the process's memory allocation when you free the XML::Simple object. But rest assured, that memory is available to the rest of the program should it require it.

The OP used (145908-35476)/4=27608 pages. Some of those pages HAVE to be returned to the OS and removed from the process, anything else is fragmentation leading to a DOS or a leak. All his memory could be speculative reserves for growth space for existing allocations by the memory allocator and therefore is not accessible.

Some of those pages HAVE to be returned to the OS and removed from the process, anything else is fragmentation leading to a DOS or a leak. All his memory could be speculative reserves for growth space for existing allocations by the memory allocator and therefore is not accessible.

After undefing it, 8MB is returned to the OS, leaving 24MB in the memory pool.

Why?

Because the AV for a 1e6-element array of scalars requires a single chunk of contiguous memory to hold the 1e6 SV*'s. On my 64-bit system, pointers are 8 bytes; hence the AV requires a single allocation of 8*1e6 == 8MB. Because this is greater than some predefined size -- I think 1MB, but that may vary between platforms or builds -- this part of the total memory required by the array is treated specially and is "allocated directly from the OS rather than the process memory pool".

However, the 1e6 SVs -- each 24 bytes -- are individually far smaller than that limit, and are allocated from the process memory pool.

And there you have it: 8MB for the AV (in one chunk) and 24MB for the scalars (in 1e6 x 24-byte chunks), giving 32MB total; of which the 8MB is returned to the OS when freed and the 24MB is not.
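
The arithmetic spelled out (the 8-byte pointer and 24-byte SV figures are for a 64-bit build and vary between builds):

```perl
#!/usr/bin/perl
# One AV body holding 1e6 pointers, plus 1e6 individual SVs.
use strict;
use warnings;

my $n        = 1e6;
my $av_bytes = 8 * $n;     # one contiguous block of SV* pointers
my $sv_bytes = 24 * $n;    # 1e6 separate pool allocations

printf "AV: %dMB  SVs: %dMB  total: %dMB\n",
    $av_bytes / 1e6, $sv_bytes / 1e6, ( $av_bytes + $sv_bytes ) / 1e6;
```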

Now to re-quote you:

Some of those pages HAVE to be returned to the OS and removed from the process, anything else is fragmentation leading to a DOS or a leak. All his memory could be speculative reserves for growth space for existing allocations by the memory allocator and therefore is not accessible.

Why "have to"? Where do you get that information from? What do you base that speculation on? What do those two sentences actually mean?

Because I'm fairly knowledgeable about what Perl (on my platform) does with memory. Hard-won knowledge; empirical evidence derived from practical, repeatable, demonstrable experiments. And I cannot make any real sense of those two sentences, nor make them gel with what I know.

If you have a better explanation, can cite some reference, or can provide some evidence in the form of a demonstration to support them, please share.

When building an array piecemeal this way, the base AV starts with 8 elements, and therefore requires 64 bytes of contiguous memory. When you add the 9th element, the AV has to be reallocated to accommodate it, so a new chunk of memory double the size (128 bytes) is allocated; the existing 8 elements are copied across and the new 9th element is added (leaving space for 7 more). Then the first AV is freed back to the memory pool.

Then when you go to add the 17th element, the size is doubled again (a 256-byte AV is allocated) and the 128 bytes are freed back to the pool. This process of doubling continues until the size of the AV required exceeds the pool limit, at which point, rather than being allocated from the pool, the newly doubled AV is allocated from the OS.
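
The doubling schedule above can be simulated directly. A sketch (8-byte pointers assumed; Perl's real growth policy may differ in detail):

```perl
#!/usr/bin/perl
# Simulate the doubling growth of an AV built one element at a time:
# each time capacity is exceeded, a block twice the size is allocated,
# the elements are copied, and the old block goes back to the pool.
use strict;
use warnings;

my $slots = 8;                      # initial AV capacity
my @freed;                          # block sizes returned to the pool

while ( $slots < 1e6 ) {
    push @freed, $slots * 8;        # old AV body, now free for reuse
    $slots *= 2;                    # reallocate at double the size
}

printf "final AV: %d slots (%d bytes); %d intermediate blocks freed\n",
    $slots, $slots * 8, scalar @freed;
```

Note how the intermediate blocks freed back to the pool are exactly the chunks that later get subdivided for other allocations, which is what makes the second build go back to the OS.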

During the process of creating the array the first time, the pool is expanded -- by requesting more memory from the OS -- to accommodate the AVs while they are still smaller than the size at which they get allocated directly from the OS. When they are done with, those allocations are returned to the pool and reused for other allocations, including the many 24-byte SVs. By the time the first array is freed, the once-contiguous chunks have been re-allocated as smaller subdivisions.

Hence, when the array is built a second time, the contiguous chunks originally allocated from the OS to accommodate the intermediate AVs are no longer available as contiguous chunks, so it is necessary to go back to the OS for new contiguous allocations, even though the final array will be the same size as the first.

Hashes go through similar processes as they expand.

As XML::Simple builds nested structures of hashes and arrays, it has similar requirements for contiguous chunks of memory that may need to be reallocated when rebuilding the same data structure a second time.

(Note for the pedants: The above may not be a totally accurate description of the process; but it is sufficient to explain the point I am trying to convey.)

You are really great :-)
By the way, do you know a good package for finding memory leaks under Windows XP and Perl 5.8.8? I have memory problems, although I always use strict.pm (working only with lexical vars, 'my $var') and have no global vars. The program has about 40000 lines in 100 packages. I would like to have something like Devel::Symdump produces: a list of all variables whose size has changed, or that are newly allocated, between two program positions:
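
One rough way to approximate that is to combine the CPAN modules Devel::Symdump and Devel::Size: snapshot the sizes of package variables at two points and report the growth. This is only a sketch under those assumptions -- both modules must be installed, and crucially it only sees package variables, not 'my' lexicals, so it will miss leaks held purely in lexical scope.

```perl
#!/usr/bin/perl
# Sketch: diff the sizes of a package's variables between two points.
# Assumes Devel::Symdump and Devel::Size from CPAN; only package
# (symbol-table) variables are visible, not lexicals.
use strict;
use warnings;
use Devel::Symdump;
use Devel::Size qw(total_size);

sub snapshot {
    my ($pkg) = @_;
    my %size;
    my $dump = Devel::Symdump->new($pkg);
    no strict 'refs';
    $size{"\$$_"} = total_size( \${$_} ) for $dump->scalars;
    $size{"\@$_"} = total_size( \@{$_} ) for $dump->arrays;
    $size{"\%$_"} = total_size( \%{$_} ) for $dump->hashes;
    return \%size;
}

our @growing;                             # stands in for a leaking var
my $first = snapshot('main');
push @growing, (1) x 10_000;              # simulate the leak
my $second = snapshot('main');

for my $var ( sort keys %$second ) {
    my $delta = $second->{$var} - ( $first->{$var} || 0 );
    printf "%s grew by %d bytes\n", $var, $delta if $delta > 0;
}
```

Calling snapshot() at two program positions and diffing, as above, gives roughly the "list of variables whose size has changed" asked for, within the limitation that lexicals are invisible to the symbol table.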