Some Unixes (Solaris, for example) do support more fine-grained control of resources on processes, process groups, project groups, etc. But I mention this only as a comment because you've tagged this 'linux' although you say 'in unix'.
–
jrgSep 2 '10 at 20:11

Your question is extremely vague about what you actually want to do. Do you want to limit the amount of address space the process can use? Do you want to limit how many physical pages of memory it can have resident?
–
David SchwartzOct 23 '11 at 6:01

I'm not very sure about this, but you could also use cgroups to limit memory usage. The advantage of cgroups is that you can control processes that are already running. By the way, systemd uses cgroups to control system services.

Unfortunately I've experimented a bit and they don't seem to work very well on my Fedora 13 system.
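A rough sketch of what that looks like with the cgroup v1 memory controller and the libcgroup utilities mentioned below (requires root; the group name "demo" and the PID are made up):

```shell
# Create a memory cgroup, cap it at 100 MiB, and run a command inside it.
cgcreate -g memory:/demo
cgset -r memory.limit_in_bytes=100M demo
cgexec -g memory:demo ./myprogram

# Or move an already running process (here PID 1234) into the group.
echo 1234 > /sys/fs/cgroup/memory/demo/tasks
```

This is a system-configuration sketch rather than a portable script; the mount point and controller layout vary by distribution.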

cgroups are the Right Way Forward for the future. Won't work in older distributions, but from basically now onward, it's the way forward. Look into libcgroup (sourceforge.net/projects/libcg) for utilities to control them.
–
mattdmNov 26 '10 at 15:09

1

The package is named libcgroup-tools on Fedora 17.
–
Cristian CiupituJul 13 '12 at 3:21

To set the limit when starting the program, use ulimit -v 400 (the value is in KiB), as indicated by polemon. This sets the limit for the shell and all its descendants, so in a script you might want to use something like (ulimit -v 400; myprogram) to limit the scope.
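A quick way to see that the subshell trick keeps the limit local (the 102400 KiB figure is just an example):

```shell
# The limit applies inside the subshell only; the parent shell keeps its own.
echo "parent limit: $(ulimit -v)"                      # typically "unlimited"
(ulimit -v 102400; echo "child limit: $(ulimit -v)")   # prints 102400
echo "parent limit after: $(ulimit -v)"                # unchanged
```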

If you need to change the limit for a running process, there's no utility for that. You have to get the process to execute the setrlimit system call. This can often be done with a debugger, although it doesn't always work reliably. Here's how you might do this with gdb (untested; 9 is the value of RLIMIT_AS on Linux):
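One way to do it (untested, as noted; assumes you can ptrace-attach to the process, and a 64-bit target where struct rlimit is two 8-byte words):

```
gdb -p 1234
(gdb) set $rl = (unsigned long long *) malloc(16)   # stand-in for struct rlimit
(gdb) set $rl[0] = 400*1024*1024                    # rlim_cur: 400 MiB
(gdb) set $rl[1] = 400*1024*1024                    # rlim_max: 400 MiB
(gdb) call (int) setrlimit(9, $rl)                  # returns 0 on success
(gdb) detach
(gdb) quit
```

Note that an unprivileged process can lower its hard limit but not raise it again, so pick the rlim_max value carefully.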

There is the setrlimit() function, which lets you configure a process's limits in C. Write a C program that calls setrlimit and then execs the command you want limited. setrlimit cannot change another process's limits.

Luckily someone already wrote something similar. It can be downloaded from freshmeat. I had a quick look at the source code and it seems to be fine. Use rlimit at your own discretion. Note that rlimit also cannot change other processes' limits.

Edit: Gilles proposed a nice hack with gdb: attach to the process with gdb, then make the process call setrlimit. This may solve the problem of limiting an already running process.

If you just want to test and measure your program's memory usage, look at time.
It reports several aspects of resource usage, including CPU time and memory. The following command will give you the memory usage and CPU time of myProgram:

/usr/bin/time myProgram

(Be sure to give the absolute path to distinguish it from the bash built-in time command.)
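If your /usr/bin/time is GNU time, a format string prints just the numbers you care about (%M is the peak resident set size in KiB; the program name is a placeholder):

```shell
# GNU time only: print peak RSS and CPU time in a custom format.
/usr/bin/time -f "max RSS: %M KiB, CPU: %Us user + %Ss sys" ./myProgram
```

BSD-derived versions of time don't support -f, so this is not portable beyond Linux.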

If you just want to limit the resources of your process, I recommend creating a test user for this specific task. Limit that user's resources according to your needs and run the process as that user. In the *nix world, per-user resource management seems much more mature than per-process resource management.

You can check /etc/security/limits.conf to limit the resources of a user. Or you can use ulimit after logging in as the user to be limited.
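For example, the limits.conf entries might look like this ("testuser" is a made-up account name; values are illustrative):

```
# /etc/security/limits.conf sketch
# <domain>  <type>  <item>  <value>
testuser    hard    as      409600   # address space limit, in KiB (~400 MiB)
testuser    hard    nproc   100      # maximum number of processes
```

These take effect at the next login through pam_limits.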

On some versions of Unix/Linux, this isn't going to be helpful. (The man page says this might be the case.) On a CentOS 5.5 box, I get "0" for maxresident every time. However, it does work on Fedora 14.
–
mattdmNov 26 '10 at 15:18