Tag Info

Usually, sysstat, which provides a sar command, keeps logs in /var/log/sysstat/ or /var/log/sa/ with filenames such as /var/log/sysstat/sadd, where dd is the two-digit day of the month (starting at 01). By default, the file from the current day is used; however, you can change the file that is used with the -f command line switch. Thus for the 3rd ...
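Putting that together (paths vary by distribution: Debian-based systems use /var/log/sysstat/, Red Hat-based ones /var/log/sa/), a sketch of picking a day's file:

```shell
# build today's sysstat file name: the "sa" prefix plus the two-digit day of month
printf '/var/log/sa/sa%s\n' "$(date +%d)"
# to report on a past day explicitly, e.g. the 3rd (requires sysstat installed):
#   sar -f /var/log/sa/sa03
```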

This difference dates back to the original Berkeley Unix, and stems from the fact that the kernel can't actually keep a rolling average; it would need to retain a large number of past readings in order to do so, and especially in the old days there just wasn't memory to spare for it. The algorithm used instead has the advantage that all the kernel needs to ...
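The algorithm in question is an exponentially damped moving average: on each sampling tick the kernel multiplies the stored value by a decay constant and mixes in the current run-queue length, so only a single number per average needs to be kept. A rough simulation in awk (constants assumed here: 5-second ticks feeding the 1-minute average; the kernel actually uses fixed-point arithmetic):

```shell
awk 'BEGIN {
  e = exp(-5/60)            # per-tick decay factor for the 1-minute average
  load = 0
  for (i = 0; i < 24; i++)  # two minutes with 3 runnable tasks
    load = load * e + 3 * (1 - e)
  printf "after busy period: %.2f\n", load
  for (i = 0; i < 24; i++)  # two minutes idle: decays back toward 0
    load = load * e + 0 * (1 - e)
  printf "after idle period: %.2f\n", load
}'
```

With these made-up constants the figure converges toward 3 during the busy period (about 2.59 after two minutes) and then decays back (about 0.35 two minutes later), which is why the reported averages lag behind the instantaneous run-queue length.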

I found this thread on lkml that answers your question a little. (It seems even Linus himself was puzzled as to how to find out the origin of those threads.)
Basically, there are two ways of doing this:
$ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
$ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
(wait a few secs)
...

I suggest using atop. It's a daemon that gathers all 'top' information every 10 minutes by default, and you can simply go back in time to view these 'top' snapshots. Adjust the default interval to your needs (a shorter interval consumes more disk space).
Just yesterday, I answered a similar question, in which I included a very short how-to.

As a clarification point, load is not directly tied to CPU. This is one of the most common misconceptions about load. The fact that you mention disk seems to acknowledge that you're aware of this, but I just wanted to mention it as I see comments that indicate some believe otherwise.
Load is defined as the number of processes waiting on system resources. ...

1.0 is an average of one job waiting over the given time period, not 1 core at 100% utilisation.
An idle computer has a load number of 0 and each process using or waiting for CPU (the ready queue or run queue) increments the load number by 1. Most UNIX systems count only processes in the running (on CPU) or runnable (waiting for CPU) states. However, Linux ...

Judging by your mention of htop, I assume you're running Linux.
You can take a look at a utility called sar, which is frequently used on Solaris but which I've rarely seen in use on Linux. It is capable of recording system activity throughout the day and then reporting it at various intervals. You can also look at Orca, but the statistics are still per system.
...

You're asking the wrong question: you've got an overheating system which should be solved by cooling the system. Playing games with process load is going to yield an unsatisfying hack. And since you've got hardware running at its thermal limits, you can fairly expect that problem to worsen.
If you cannot remedy the hardware, see if you can slow the whole ...

Try the mod_qos Apache module. The current version has the following control mechanisms:
- The maximum number of concurrent requests to a location/resource (URL) or virtual host.
- Limitation of the bandwidth, such as the maximum allowed number of requests per second to a URL or the maximum/minimum of downloaded kbytes per second.
- Limits the number of request ...
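An illustrative configuration fragment (directive names as in the mod_qos documentation; the location and limits shown are made-up):

```apache
<IfModule mod_qos.c>
    # at most 50 concurrent requests to the /downloads location
    QS_LocRequestLimit       /downloads 50
    # throttle /downloads to roughly 500 kbytes per second in total
    QS_LocKBytesPerSecLimit  /downloads 500
</IfModule>
```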

You can't load a kernel module at a specific physical address. You can't load a kernel module at a specific virtual address. The kernel decides where it loads the module.
Inside the kernel, of course, you can do what you want. But I think arranging to load a driver at a specific address would require a lot of deep changes.
I fail to see what would require ...

Unless you set up a data collection tool, the answer is no: there is no built-in utility that will log the utilization of different resources.
On the other hand, nearly every Linux distribution offers the sar utility (from the sysstat package), which addresses exactly the subject matter you are talking about. I am not going to go into any detail of how you collect data and how you extract ...

The solution is: I don't know how to find the cause; nobody has told me so far.
But talking with the BTRFS developers revealed a bug in the btrfs driver when writing very many small files in a very short time period. This is an issue on kernels from 3.0 up to 3.1. Maybe it will be fixed in 3.2.
In the meantime I got a patch for the current kernel that solved the ...

You can write your own script that uses ps to list all processes in the run/runnable state whose nice value is not greater than 0. The exact syntax you need will differ based on your version of ps. Something like this may work:
ps -eo state,nice | awk 'BEGIN {c=0} $2<=0 && $1 ~ /R/ { c++ } END {print c-2}'
It runs ps collecting the ...
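To see what the filter does, you can feed it a hand-made sample of ps -eo state,nice output (the -2 in the original presumably discounts the ps pipeline's own processes, which this sample omits):

```shell
# six sample rows: header, then state/nice pairs
printf 'S  NI\nR   0\nS   0\nR  -5\nR  10\nD   0\n' |
  awk '$2 <= 0 && $1 ~ /R/ { c++ } END { print c }'
# -> 2  (the two R-state rows whose nice value is <= 0)
```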

Load average is usually described as the "average length of the run queue", so even a few CPU-consuming processes or threads can raise the LA above 1. There is no problem as long as the LA is less than the total number of CPU cores, but if it gets higher than the number of CPUs, it means some threads/processes will sit in the queue, ready to run but waiting for a free CPU.

Traditionally, Unix systems have considered processes "runnable" if they weren't sleeping, or were sleeping only for very short-term waits, e.g. page waits. To this list Linux has added uninterruptible I/O waits, which can inflate the load average values.
Also, nice'd processes might be runnable but because of their lower priority receive no actual CPU time ...

Changing the nice value will not directly reduce the system load. It can, however, be used to leave more resources available to the remaining processes, which I suspect is what you really want.
From http://linux.101hacks.com/monitoring-performance/hack-100-nice-command-examples/
Kernel decides how much processor time is required for a process based on the ...

Those are not "CPU load averages" but system "load averages". They don't necessarily mean that your CPU is busy, but that something in your system is. The values come from /proc/loadavg, which man proc explains in more detail:
/proc/loadavg
The first three fields in this file are load average figures giving the number of jobs in the run queue ...
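A quick way to pick those fields apart (the sample line in the comment is illustrative):

```shell
# /proc/loadavg looks like: 0.75 0.35 0.25 1/221 1187
# fields: 1-, 5- and 15-minute averages, running/total tasks, last PID
awk '{ printf "1min=%s 5min=%s 15min=%s tasks=%s\n", $1, $2, $3, $4 }' /proc/loadavg
```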

System load isn't really directly related to how much work the system is doing. You could have a load average of only 2.0 and be doing a lot more work than a system with a load average of 8.0.
All the load tells you is the average number of programs eligible to be run. If they are all waiting on your overloaded disk, your CPU won't be doing much of anything, but your ...

Changing the nice level of a process is unlikely to affect the system load value. The system load value is the average length of the run queue, which is basically the number of processes wanting to use the CPU.
If you are running a CPU-bound process (rsync isn't, but just for example), then it will always want to use CPU time whenever there is some ...

If you're using the default ELPA settings, the .el files will be installed in subdirectories of ~/.emacs.d/elpa. When you use require, Emacs doesn't recursively search the directories in your load-path. To get this effect, you can use the following snippet:
(let ((default-directory "~/.emacs.d/elpa"))
  (normal-top-level-add-subdirs-to-load-path))

The numbers used to calculate the load average are the tasks in the run or uninterruptible state, combined with the amount of work done within the time slices of the moving average. These tasks can be part of a multithreaded process. The figures become fuzzier the farther back in time they reach, due to the smoothing of the algorithm used.
A load of 1 is equal to 100% of one CPU's ...
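Dividing the load by the CPU count gives the per-CPU figure that the "100% of one CPU" intuition is really about. A sketch (nproc is assumed available, from coreutils):

```shell
# per-CPU load: the 1-minute average divided by the number of CPUs
awk -v ncpu="$(nproc)" '{ printf "load per CPU: %.2f\n", $1 / ncpu }' /proc/loadavg
```

On a 4-core machine a load of 2.0 would print 0.50, i.e. the machine is roughly half busy by this measure.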

In order to fully utilize the system, the kernel gives all services the same resources and tries to keep them all running at the same priority. You could set the nice level of the sshd process to the highest priority.
See here: http://serverfault.com/questions/355342/prioritise-ssh-logins-nice
That won't solve your issue with memory. You would ...

Occupying one CPU at 100% (minus overhead) is easy in the shell:
while true; do :; done
If you want to reduce the load, introduce sleeps.
i=0; while [ $i -ne 0 ] || sleep 0.001; do i=$(( (i+1) % 10000 )); done
Tune 10000 up or down to get the desired load.
The scheduling priority is set by nice. You'll need to be root to set a higher-than-default ...

With "nice" you can control the priority. For the highest priority (only available to root):
nice -n -20 yourprogram
And for the lowest:
nice -n 19 yourprogram
If you also need to control the I/O priority, use ionice. See "man nice" and "man ionice" for the documentation.
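The two compose, and since nice with no command operand prints the current niceness, you can check the effect directly (the output shown assumes a starting niceness of 0):

```shell
# run 'nice' itself at the lowest priority; it prints the inherited value
nice -n 19 nice
# -> 19
# combining CPU and I/O priority (ionice is from util-linux):
#   ionice -c 3 nice -n 19 yourprogram
```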

The amount of swap used suggests that swapping might be to blame. The output of vmstat would show this better during the problem scenario.
vmstat 1 30
However, neither top nor vmstat is well suited for diagnosing issues after the fact.
My general advice would be to install the sysstat package. This will enable system metrics to be saved periodically ...

There are two utilities to load modules: insmod and modprobe. insmod is a low-level utility: it loads a module file, given by its full path. modprobe is a high-level utility: you pass it a module name, and it looks up that module name in the module database, loads any necessary dependencies, then loads the module itself. If the module isn't recorded in the ...
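The practical difference looks like this (the module name and path are only an example; both commands need root):

```shell
# load by name: modprobe consults modules.dep and loads dependencies first
#   modprobe ip_tables
# load by path: insmod loads exactly this one file, with no dependency handling
#   insmod /lib/modules/$(uname -r)/kernel/net/ipv4/netfilter/ip_tables.ko
```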