Mark,
thanks a lot for your helpful comments.
So, now I am somewhat more confused :-)
I've tried your code and, yes, I am able to allocate up to 3G of memory
in 124K chunks. Unfortunately this doesn't help me, because the memory
is needed by a large software package, written in Fortran, that makes
heavy use of all kinds of libraries (libc among others) over which I
have no control.
Also, if I change your code to try to allocate the available memory in
one chunk, I am obviously in the same situation as before. If I
understand you correctly, this is because small chunks of memory are
allocated with sbrk, large ones with mmap. I notice from the output of
your program that the allocated memory is also not in one contiguous
block. This must be because of Red Hat's prelinking of glibc to a fixed
address in memory, as noted by David Lombard.
What I don't understand at all, then, is why your second code example
(mmap) is able to return
2861 MB : 157286400
or even more memory upon changing size to 4.e9. Isn't this supposedly
simply overwriting the area glibc sits in? That confuses me now.
Will that prevent me from using stdio? Linking statically is no
problem for me; I am doing that for other reasons anyway.
Best regards and many thanks for your input.
Roland
--- Mark Hahn <hahn at physics.mcmaster.ca> wrote:
> yes. unless you are quite careful, your address space looks like this:
>
> 0 - 128M        zero page
> 128M + small    program text
>                 sbrk heap (grows up)
> 1GB             mmap arena (grows up)
> 3GB - small     stack base (grows down)
> 3GB - 4GB       kernel direct-mapped area
>
> your ~1GB is allocated in the sbrk heap (above text, below 1GB).
> the ~2GB is allocated in the mmap arena (glibc puts large allocations
> there, if possible, since you can munmap arbitrary pages, but heaps
> can only rarely shrink).
>
> interestingly, you can avoid the mmap arena entirely if you try
> (static linking, avoid even static stdio). that leaves nearly 3 GB
> available for the heap or stack. also interesting is that you can use
> mmap with MAP_FIXED to avoid the default mmap arena at 1GB. the
> following code demonstrates all of these. the last time I tried, you
> could also move around the default mmap base (TASK_UNMAPPED_BASE),
> and could squeeze the 3G barrier, too (TASK_SIZE). I've seen patches
> to make TASK_UNMAPPED_BASE a /proc setting, and to make the mmap
> arena grow down (which lets you start it at a little under 3G,
> leaving a few hundred MB for stack). finally, there is a patch which
> does away with the kernel's 1G chunk entirely (leaving 4G:4G, but
> necessitating some nastiness on context switches).
>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
> #include <sys/mman.h>
>
> void print(char *message) {
>     unsigned l = strlen(message);
>     write(1, message, l);
> }
>
> void printuint(unsigned u) {
>     char buf[20];
>     char *p = buf + sizeof(buf) - 1;
>     *p-- = 0;
>     do {
>         *p-- = "0123456789"[u % 10];
>         u /= 10;
>     } while (u);
>     print(p + 1);
> }
>
> int main() {
> #if 1
>     /* unsigned chunk = 128*1024; */
>     unsigned chunk = 124*1024;
>     unsigned total = 0;
>     void *p;
>     while ((p = malloc(chunk)) != NULL) {
>         total += chunk;
>         printuint(total >> 20);
>         print(" MB\t: ");
>         printuint((unsigned) p);
>         print("\n");
>     }
> #else
>     unsigned offset = 150*1024*1024;
>     unsigned size = (unsigned) 3e9;
>     void *p = mmap((void *) offset,
>                    size,
>                    PROT_READ|PROT_WRITE,
>                    MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS,
>                    -1, 0);
>     printuint(size >> 20);
>     print(" MB\t: ");
>     printuint((unsigned) p);
>     print("\n");
> #endif
>     return 0;
> }
>
> > Also, has someone experience with the various kernel patches for
> > large memory out there (Ingo's 4G/4G or IBM's 3.5G/0.5G hack)?
>
> there's nothing IBM-specific about 3.5/.5, that's for sure.
>
> as it happens, I'm going to be doing some measurements of performance
> soon.
>
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf