Static Memory allocation


Hello everybody.
I'm working on this physical simulation in C.
I need to allocate very large arrays, so I use a lot of memory on my PC... I'm not very familiar with this kind of topic, so I have some questions about static memory allocation. (Sorry, but I can't find the information I need on the net.)
(I'm not using dynamic allocation because it's not necessary for my purposes.)

I work on the university lab machines (more or less 0.5 GB of total memory when I type "top" in the shell) and on my laptop (more or less 3 GB...).
My question is: if I try to allocate too much static memory for my process, will I get a segfault?

I'm asking because my simulation works well with certain parameters but, when the arrays exceed some "critical dimensions", the program doesn't work and I get a segfault.
Besides, when I run it under gdb, the segfault occurs at the declaration of the first array, regardless of that array's dimensions.

Another problem is that I can run simulations with bigger arrays on the university machines than on my laptop.
I mean, when I work on the 0.5 GB PC, I can push the dimensions of the arrays further than when I work on the 3 GB machine...
Moreover, suppose I'm working a little below these "critical dimensions" on the 0.5 GB machine and I type "top" in the shell: I find that the memory is almost all occupied (as expected), while on the 3 GB PC I have much more than 2 GB of free memory.

The catch is that arrays declared inside a function can't freely use that 0.5 GB (or 3 GB) of RAM: they go on the stack, which is usually limited to somewhere between 1 MB and 4 MB (depending on your settings). If you want more, you'll have to allocate it yourself (dynamically).

It does sound like you are trying to allocate too much on the stack.
That would indeed crash the program, especially with overly large local arrays; it's a fairly common newbie mistake.
Still, as kermit says, it is best to show an example so we can tell you more instead of merely guessing.

What platforms are you working on? Given the references to top and the shell, your lab PC is presumably a Unix workstation, and I assume your laptop is a Windows machine. The difference in memory utilization between the PC and the laptop may very well be due to the different memory management schemes used by Unix and Windows.

Not sure how a process is treated under Windows, but on Unix it is divided into three logical sections: text, data, and stack. If your array is declared outside of any function, it falls into the data segment, whose size is controlled on most Unixes by the maxdsize (max data size) kernel parameter; if it is declared inside a function, it goes on the stack and is limited by the maxssize (max stack size) kernel parameter. If you want to test this, you can bump the relevant kernel parameter and see whether your array can then grow beyond "critical mass" without segfaulting.
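A sketch of where each kind of declaration ends up (the sizes are illustrative; maxdsize/maxssize are the kernel-parameter names used above and vary by Unix flavor):

```c
#include <stdlib.h>

double file_scope[100000];          /* file scope: data/BSS segment (maxdsize) */
static double with_static[100000];  /* 'static' keyword: same segment          */

double first_elements(void)
{
    double local[256];              /* automatic: stack (maxssize), keep small */
    double *heap = malloc(1000 * sizeof(double));  /* heap: limited by free RAM */
    if (heap == NULL)
        return -1.0;
    local[0] = 1.0;
    heap[0] = 2.0;
    /* objects with static storage duration are zero-initialized, so this is
       0.0 + 0.0 + 1.0 + 2.0 = 3.0                                            */
    double sum = file_scope[0] + with_static[0] + local[0] + heap[0];
    free(heap);
    return sum;
}
```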

Is the 0.5 Gb on your pc virtual or physical memory and do you have any swap space allocated on it?

Platform should not matter. If it does, then it will be platform specific and not portable which is bad.
The solution to the problem, if indeed the size is the problem, is to use dynamic memory. End of story.

Size will be a problem even with dynamic memory as you can run out of space in the heap.

Yes, naturally. But I was referring to hitting a limit with the static allocation.
Dynamic memory is the way to get access to all of the memory available.
Of course, if the problem is that it's allocating more memory than is available... then I'm afraid there is no solution but to get more RAM or consume less.

When I run the simulation with "smaller" arrays it works, and I get the results that I expect. The segfault comes out when I raise the dimensions of the variables. That's why I don't think there's a bug.
I have more than 1000 lines of code across several files, which is why I didn't post it. Here's the declaration of the arrays:

I am not quite following you here. Where do you get the 1MB from? (Not trying to nitpick, but rather trying to learn.)

From nowhere... The stack isn't guaranteed to be any particular size. For that matter, I'm not sure the C standard requires there to be a stack at all, so long as the language semantics work (i.e., the "stack" could be implemented as a linked list of activation records).

I was just wondering if I had missed something pertinent in one of the posts. As noted, the stack size varies, but whatever the case is not suitable for really large static arrays.

I suppose it does not matter much to the OP -- it looks like he is going to have to use dynamic memory allocation anyway -- but if he wanted to get an idea of how much stack memory (in kbytes) he is allowed per process, he could do

Code:

ulimit -s

on his machine. This setting differs from machine to machine, and this command may also vary (i.e., may not function as described). If it does exist, one can change the size with the same command. If the stack size is indeed the trouble for the OP, (as seems quite probable) then there is a very good chance that ulimit -s will yield different values on each machine he runs the program on.

Originally Posted by p3rry

When I run the simulation with "smaller" arrays it works, and I get the results that I expect. The segfault comes out when I raise the dimensions of the variables. That's why I don't think there's a bug.
I have more than 1000 lines of code across several files, which is why I didn't post it. Here's the declaration of the arrays:

That doesn't stem from a bug in your code but from the fact that the bigger the array, the more memory it consumes.

And then I have some others array which are created locally in some subroutines...

Considering that a double is 8 bytes on most machines, the storage requirement of those arrays would be (13824*3*3*3*2 + 576*12*2*1*9*9*2 + 576*2) times 8 bytes, which is about 23 MB, plus the size of the arrays defined in the other subroutines.

Originally Posted by p3rry

I work on Unix. Here's what I get on my laptop with "top" in the shell; when I say 0.5 GB I'm referring to Mem.