Tag Info

Take a look at ionice. From man ionice:
This program sets or gets the io scheduling class and priority for a program. If no arguments or just -p is given, ionice will query the current io scheduling class and priority for that process.
To run du with the "idle" I/O class, which is the lowest priority available, you can do something like this:
ionice ...
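For illustration, an invocation of that shape might be (du's arguments here are just an example):
ionice -c 3 du -sh /var    # class 3 is the idle class: du only gets disk time nothing else wants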

The reason is that the operating system needs memory to manage each open file, and memory is a limited resource, especially on embedded systems.
As root, you can change the maximum number of open files per process (via ulimit -n) and system-wide (e.g. echo 800000 > /proc/sys/fs/file-max).
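A minimal sketch of both knobs (the numbers are arbitrary examples):
ulimit -n 4096                         # per-process limit, affects the current shell and its children
echo 800000 > /proc/sys/fs/file-max    # system-wide limit, written as root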

ulimit is made for this.
You can setup defaults for ulimit on a per user or a per group basis in
/etc/security/limits.conf
ulimit -v KBYTES sets max virtual memory size. I don't think you can give a max amount of swap. It's just a limit on the amount of virtual memory the user can use.
So your limits.conf would have the line (to a maximum of 4G of ...
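For illustration, such a line might look like this (the user name is made up; 4194304 is 4G expressed in KB, the unit limits.conf uses for the as item):
someuser    hard    as    4194304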

Under Linux, execute the sched_setaffinity system call. The affinity of a process is the set of processors on which it can run. There's a standard shell wrapper: taskset. For example, to pin a process to CPU #0 (you need to choose a specific CPU):
taskset -c 0 mycommand --option # start a command with the given affinity
taskset -c -p 0 1234 # change the affinity of an existing process (here, PID 1234)

Have a look at trickle, a userspace bandwidth shaper. Just start your shell with trickle and specify the speed, e.g.:
trickle -d 100 zsh
which tries to limit the download speed to 100KB/s for all programs launched inside this shell.
As trickle uses LD_PRELOAD, this won't work with statically linked programs, but that isn't a problem for most programs.
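You can also wrap a single program instead of a whole shell (the wget command is just an example; -u limits upload speed the same way):
trickle -d 100 -u 20 wget http://example.com/big.iso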

It's not that difficult to decipher in fact.
This piece of code just defines a function named : which calls two instances of itself in a pipeline: :|:&. After the definition an instance of this function is started.
This leads to a rapidly increasing number of subshell processes. Unprotected systems (systems without a per-user process limit) will be ...
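Renamed for readability, the construct looks like this (shown purely to explain it; don't run it outside a disposable VM):
bomb() { bomb | bomb & }    # a function that pipes two copies of itself into the background
bomb                        # a single call starts the exponential cascade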

No, but you should close all active session windows; they still remember the old values. In other words, log out and back in.
Every new remote session or local login shell will then pick up the changed limits.

ulimit -v is a shell builtin, but it should do what you want.
I use that in init scripts sometimes:
ulimit -v $((128*1024))   # cap virtual memory at 128 MiB; ulimit -v counts KiB, and bash accepts no "k" suffix
command
ulimit -v unlimited       # restore the default afterwards
It seems, however, that you want a way of manipulating the maximum allocatable memory while the program is running, is that correct? Probably something like what renice does for priority.
There is, ...

That is certainly not a trivial task, and it can't be done in userspace. Fortunately, it is possible on Linux using the cgroup mechanism and its blkio controller.
Setting up cgroups is somewhat distribution-specific, as the hierarchy may already be mounted or even in use somewhere. Here's the general idea, however (assuming you have the proper kernel configuration):
mount tmpfs ...
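To give the flavour, a cgroup-v1 sketch might look like this (the group name is made up, and 8:0 must be replaced by your device's major:minor, visible in ls -l /dev/sda):
mkdir /sys/fs/cgroup/blkio/slowio
echo "8:0 1048576" > /sys/fs/cgroup/blkio/slowio/blkio.throttle.read_bps_device    # cap reads at 1 MB/s
echo $$ > /sys/fs/cgroup/blkio/slowio/tasks    # move the current shell into the group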

When logging in over SSH, you use a pseudo-terminal (a pty) allocated by the SSH daemon, not a real one (a tty). Pseudo-terminals are created and destroyed as needed. You can find the maximum number of ptys that may be allocated at one time in /proc/sys/kernel/pty/max, and this value can be modified via the kernel.pty.max sysctl variable. Assuming that no other ...
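Inspecting and raising the ceiling might look like this (4096 is an arbitrary example value):
cat /proc/sys/kernel/pty/max     # current maximum number of ptys
sysctl -w kernel.pty.max=4096    # raise it, as root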

If your program doesn't need to write any OTHER files that would be larger than this limit, you can inform the kernel of this limit using ulimit. Before you run your command, run this to set up a 200MB file size limit for all processes run in your current shell session:
ulimit -f $((200*1024))
This will protect your system, but it might be jarring for the ...
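To see the limit bite, something like this works (dd simply writes zeros until it hits the cap):
ulimit -f $((200*1024))                  # 200MB; bash counts -f in 1024-byte blocks
dd if=/dev/zero of=/tmp/testfile bs=1M   # aborts with 'File size limit exceeded' near 200MB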

I'm not very sure about this, but you could also use cgroups to limit memory usage. The advantage of cgroups is that you can control processes that are already running. By the way, systemd uses cgroups to control system services.
Unfortunately I've experimented a bit and they don't seem to work very well on my Fedora 13 system.

You can use pv to throttle the bandwidth of a pipe. Since your use case is strongly IO-bound, the added CPU overhead of going through a pipe shouldn't be noticeable, and you don't need to do any CPU throttling.
tar cf - mydata | pv -L 1m >/media/MYDISK/backup.tar    # -L 1m caps the pipe at 1 MiB/s

ionice from util-linux does something similar to what you want.
It doesn't set absolute I/O limits; it sets I/O priority and 'niceness', similar to what nice does for a process's CPU priority.
From the man page:
ionice - set or get process I/O scheduling class and priority
DESCRIPTION
This program sets or gets the I/O scheduling class and priority ...
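For example (class 2 is best-effort; -n takes a level from 0, highest, to 7, lowest; rsync is just a stand-in):
ionice -c 2 -n 7 rsync -a /src/ /dst/    # run rsync at the lowest best-effort I/O priority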

A process can change its limits via the setrlimit(2) system call. When you run ulimit -n you should see a number. That's the current limit on the number of open file descriptors (which includes files, sockets, pipes, etc.) for the process. The ulimit command executes the getrlimit(2) system call to find out the current value.
Here's the key point: a ...
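Meanwhile, from a shell you can inspect both values that getrlimit(2) reports:
ulimit -Sn    # soft limit on open file descriptors, the one actually enforced
ulimit -Hn    # hard limit, the ceiling up to which the soft limit may be raised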

It is important to know that there are two kinds of limits:
The hard limit can be changed only by root. It is the highest possible value (ceiling) for the soft limit.
The soft limit can be set by an ordinary user. It is the limit actually in effect.
Solution for a single session
In the shell set the soft limit:
ulimit -Sn 2048
This example will raise the actual ...

I would assume you are trying not to disrupt other activity. Recent versions of Linux include ionice which does allow you to control the scheduling of IO.
Besides allowing various priorities, there is an additional option to limit IO to times when the disk is otherwise idle. The command man ionice will display the documentation.
Try copying the file ...

While it can be an abuse for memory, it isn't for CPU: when a CPU is idle, a running process (by "running", I mean that the process isn't waiting for I/O or something else) will take 100% CPU time by default. And there's no reason to enforce a limit.
Now, you can set up priorities with nice. If you want them to apply to all processes for a given user, ...
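A sketch (nice values run from -20, highest priority, to 19, lowest; the PID is made up):
nice -n 19 tar czf backup.tgz /home    # start a job at the lowest CPU priority
renice -n 19 -p 1234                   # or demote a process that is already running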

From within a program, call setrlimit(RLIMIT_CPU, ...). From the shell, call ulimit -t 42 (this is not standard, but supported by most shells (including bash and ksh) on most Unix variants). This causes the current process to be killed once it has used up the specified number of seconds of CPU time. The limit is inherited by child processes. A common shell idiom is (ulimit -t ...
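Spelled out, that idiom might look like this (the subshell confines the limit so the rest of the script is unaffected):
( ulimit -t 42; ./long_computation )    # killed by SIGXCPU after 42 seconds of CPU time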

You can probably achieve something like that by using cgroups with the Memory resource controller.
I guess you'd put all your resource-consuming tasks in a limited (CPU & RAM) cgroup, and leave sshd "outside" so that it isn't restricted.
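A rough cgroup-v1 sketch of that idea (the group name and the 512MB figure are made up; on cgroup v2 the file is memory.max instead):
mkdir /sys/fs/cgroup/memory/hungry
echo $((512*1024*1024)) > /sys/fs/cgroup/memory/hungry/memory.limit_in_bytes    # 512MB cap
echo 1234 > /sys/fs/cgroup/memory/hungry/tasks    # move PID 1234 into the group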
(Adding more swap, even in the form of a swap file, might be a good option though.)

Here we see evidence of a problem:
tail: inotify resources exhausted
By default, Linux only allocates 8192 watches for inotify, which is ridiculously low. And when it runs out, the error is also "No space left on device", which may be confusing if you aren't explicitly looking for this issue.
Raise this value with the appropriate sysctl:
...
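The knob in question is fs.inotify.max_user_watches (524288 is a commonly chosen value, not a canonical one):
sysctl fs.inotify.max_user_watches                # shows the current limit, 8192 by default
sysctl -w fs.inotify.max_user_watches=524288      # raise it; add to /etc/sysctl.conf to persist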

If your application (i.e. run_program) does not support limiting the size of the log file, you can check the file size periodically in a loop with an external application or script.
You can also use logrotate(8) to rotate your logs; it has a size parameter which you can use for your purpose:
With this, the log file is rotated when the specified size ...
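A minimal stanza of that shape (the path and the numbers are placeholders):
/var/log/myapp.log {
    size 100M    # rotate once the file exceeds 100M
    rotate 5     # keep five old copies
    compress
}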

If you read the manpage for semget, in the Notes section you'll notice:
System wide maximum number of semaphore sets: policy dependent (on Linux, this limit can be read and modified via the fourth field of /proc/sys/kernel/sem).
On my system, cat /proc/sys/kernel/sem reports:
250 32000 32 128
So do that on your system, and then echo it back after ...
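Writing new values back follows the same pattern (here only the fourth field, the maximum number of semaphore sets, is raised; the other three are kept as read):
echo "250 32000 32 256" > /proc/sys/kernel/sem    # as root; raises SEMMNI from 128 to 256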

I don't have HP-UX available to me, and I've never been a big HP-UX fan.
It appears that Linux has a per-process or maybe per-user limit on how many child processes can exist. You can see it with the limit built-in in Zsh (which seems to be analogous to ulimit -u in bash):
1002 % limit
cputime unlimited
filesize unlimited
datasize unlimited
...
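The bash near-equivalent for the process count specifically (2048 is an example value):
ulimit -u         # show the maximum number of user processes
ulimit -u 2048    # lower it for this shell and its children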

Limits are process-specific. ulimit is a shell builtin and changes the limit only for that shell and the processes started from it. sudo ulimit wouldn't make any sense even if it worked, since the limit would only change for processes started under that sudo, and there are none.
In order to raise your limit above the hard limit you have to either ...

The pam_limits.so module can help you there.
It allows you to set limits on specific individual users and groups, or on wildcards and ranges of users and groups.
The limits you can set are typically ulimit settings but also on the number of concurrent login sessions, processes, CPU time, default priority and maximum priority (renice). Check the ...
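Enabling the module is usually a single line in the appropriate PAM service file (the file name differs between distributions, e.g. /etc/pam.d/common-session on Debian):
session    required    pam_limits.so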