On UNIX this requires a C compiler (e.g. gcc) to be installed. On Windows pip
will automatically retrieve a pre-compiled wheel version from the
PyPI repository.
Alternatively, see the more detailed
install instructions.

Return system CPU times as a named tuple.
Every attribute represents the seconds the CPU has spent in the given mode.
The availability of these attributes varies depending on the platform:

user: time spent by normal processes executing in user mode; on Linux
this also includes guest time

system: time spent by processes executing in kernel mode

idle: time spent doing nothing

Platform-specific fields:

nice(UNIX): time spent by niced (prioritized) processes executing in
user mode; on Linux this also includes guest_nice time

iowait(Linux): time spent waiting for I/O to complete

irq(Linux, BSD): time spent for servicing hardware interrupts

softirq(Linux): time spent for servicing software interrupts

steal(Linux 2.6.11+): time spent by other operating systems running
in a virtualized environment

guest(Linux 2.6.24+): time spent running a virtual CPU for guest
operating systems under the control of the Linux kernel

guest_nice(Linux 3.2.0+): time spent running a niced guest
(virtual CPU for guest operating systems under the control of the Linux
kernel)

interrupt(Windows): time spent for servicing hardware interrupts
(similar to “irq” on UNIX)

dpc(Windows): time spent servicing deferred procedure calls (DPCs);
DPCs are interrupts that run at a lower priority than standard interrupts.

When percpu is True return a list of named tuples for each logical CPU
on the system.
The first element of the list refers to the first CPU, the second element
to the second CPU, and so on.
The order of the list is consistent across calls.
Example output on Linux:
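A minimal sketch of querying these counters (the printed values are machine-dependent, and platform-specific fields may be absent):

```python
import psutil

# System-wide CPU times since boot, as a named tuple of seconds.
times = psutil.cpu_times()
print(times.user, times.system, times.idle)

# One named tuple per logical CPU, in a stable order across calls.
per_cpu = psutil.cpu_times(percpu=True)
print(len(per_cpu))
```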

Return a float representing the current system-wide CPU utilization as a
percentage. When interval is > 0.0 it compares system CPU times elapsed
before and after the interval (blocking).
When interval is 0.0 or None it compares system CPU times elapsed
since the last call or module import, returning immediately.
That means the first time this is called it will return a meaningless 0.0
value which you are supposed to ignore.
In this case it is recommended for accuracy that this function be called with
at least 0.1 seconds between calls.
When percpu is True it returns a list of floats representing the
utilization as a percentage for each CPU.
The first element of the list refers to the first CPU, the second element
to the second CPU, and so on. The order of the list is consistent across calls.
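A sketch of both modes of use (the printed percentages are obviously machine-dependent):

```python
import psutil

# Blocking mode: sample system-wide CPU usage over a 0.5 second interval.
overall = psutil.cpu_percent(interval=0.5)
print(overall)

# Non-blocking mode: the first call returns a meaningless 0.0;
# subsequent calls compare against the previous one.
psutil.cpu_percent(interval=None)

# Per-CPU utilization, one float per logical CPU.
per_cpu = psutil.cpu_percent(interval=0.5, percpu=True)
print(per_cpu)
```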

Same as cpu_percent() but provides utilization percentages for each
specific CPU time as is returned by
psutil.cpu_times(percpu=True).
interval and
percpu arguments have the same meaning as in cpu_percent().
On Linux “guest” and “guest_nice” percentages are not accounted in “user”
and “user_nice” percentages.

Warning

the first time this function is called with interval = 0.0 or
None it will return a meaningless 0.0 value which you are supposed
to ignore.

Changed in version 4.1.0: two new interrupt and dpc fields are returned on Windows.

Return the number of logical CPUs in the system (same as
os.cpu_count()
in Python 3.4) or None if undetermined.
If logical is False return the number of physical cores only (hyper-thread
CPUs are excluded) or None if undetermined.
On OpenBSD and NetBSD psutil.cpu_count(logical=False) always returns
None.
Example on a system having 2 physical hyper-thread CPU cores:
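On such a hypothetical 2-core / 4-thread machine the calls would look like this (the commented values are illustrative, not guaranteed):

```python
import psutil

logical = psutil.cpu_count()                # logical CPUs, e.g. 4
physical = psutil.cpu_count(logical=False)  # physical cores, e.g. 2
print(logical, physical)
```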

Note that this number is not equivalent to the number of CPUs the current
process can actually use.
That can vary if process CPU affinity has been changed, if Linux cgroups
are being used, or on Windows systems using processor groups or having more
than 64 CPUs.
The number of usable CPUs can be obtained with:
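One way to do that, assuming a platform where cpu_affinity() is implemented (e.g. Linux, Windows, FreeBSD):

```python
import psutil

# Number of CPUs the current process is actually allowed to run on.
usable = len(psutil.Process().cpu_affinity())
print(usable)
```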

Return CPU frequency as a named tuple including current, min and max
frequencies expressed in MHz.
On Linux the current frequency reports the real-time value; on all other
platforms it represents the nominal “fixed” value.
If percpu is True and the system supports per-CPU frequency
retrieval (Linux only) a list of frequencies is returned, one for each CPU;
if not, a list with a single element is returned.
If min and max cannot be determined they are set to 0.
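A small sketch; note that on some virtualized systems frequency information is unavailable and None may be returned:

```python
import psutil

freq = psutil.cpu_freq()
if freq is not None:
    # Frequencies in MHz; min and max are 0 if undetermined.
    print(freq.current, freq.min, freq.max)
```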

Return statistics about system memory usage as a named tuple including the
following fields, expressed in bytes. Main metrics:

total: total physical memory.

available: the memory that can be given instantly to processes without
the system going into swap.
This is calculated by summing different memory values depending on the
platform and it is supposed to be used to monitor actual memory usage in a
cross platform fashion.

Other metrics:

used: memory used, calculated differently depending on the platform and
designed for informational purposes only. total - free does not
necessarily match used.

free: memory not being used at all (zeroed) that is readily available;
note that this doesn’t reflect the actual memory available (use
available instead). total - used does not necessarily match
free.

active(UNIX): memory currently in use or very recently used, and so
it is in RAM.

inactive(UNIX): memory that is marked as not used.

buffers(Linux, BSD): cache for things like file system metadata.

cached(Linux, BSD): cache for various things.

shared(Linux, BSD): memory that may be simultaneously accessed by
multiple processes.

slab(Linux): in-kernel data structures cache.

wired(BSD, macOS): memory that is marked to always stay in RAM. It is
never moved to disk.

The sum of used and available does not necessarily equal total.
On Windows available and free are the same.
See the meminfo.py
script for an example of how to convert bytes into a human-readable form.
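A minimal usage sketch relying on the available field (the 100 MB threshold below is an arbitrary example value):

```python
import psutil

mem = psutil.virtual_memory()
print(mem.total, mem.available, mem.percent)

# Cross-platform low-memory check based on `available`.
if mem.available < 100 * 1024 * 1024:
    print("less than 100 MB of memory available")
```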

Note

if you just want to know how much physical memory is left in a
cross-platform fashion simply rely on the available field.

Changed in version 5.2.3: on Linux this function relies on /proc fs instead
of sysinfo() syscall so that it can be used in conjunction with
psutil.PROCFS_PATH in order to retrieve memory info about
Linux containers such as Docker and Heroku.

Return all mounted disk partitions as a list of named tuples including device,
mount point and filesystem type, similarly to the “df” command on UNIX. If the
all parameter is False it tries to distinguish and return physical devices
only (e.g. hard disks, CD-ROM drives, USB keys) and ignore all others
(e.g. memory partitions such as
/dev/shm).
Note that this may not be fully reliable on all systems (e.g. on BSD this
parameter is ignored).
Named tuple’s fstype field is a string which varies depending on the
platform.
On Linux it can be one of the values found in /proc/filesystems (e.g.
'ext3' for an ext3 hard drive or 'iso9660' for the CD-ROM drive).
On Windows it is determined via
GetDriveType
and can be either "removable", "fixed", "remote", "cdrom",
"unmounted" or "ramdisk". On macOS and BSD it is retrieved via
getfsstat(2). See the
disk_usage.py
script for an example usage.
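A quick sketch of iterating over the physical partitions:

```python
import psutil

parts = psutil.disk_partitions(all=False)
for part in parts:
    print(part.device, part.mountpoint, part.fstype)
```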

Return disk usage statistics about the partition which contains the given
path as a named tuple including total, used and free space
expressed in bytes, plus the percentage usage.
OSError is
raised if path does not exist.
Starting from Python 3.3 this is
also available as
shutil.disk_usage().
See the disk_usage.py script for an example usage.
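For example, for the partition containing the root directory (the numbers are machine-dependent):

```python
import psutil

usage = psutil.disk_usage("/")
print(usage.total, usage.used, usage.free, usage.percent)
```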

UNIX usually reserves 5% of the total disk space for the root user.
total and used fields on UNIX refer to the overall total and used
space, whereas free represents the space available for the user and
percent represents the user utilization (see
source code).
That is why the percent value may look 5% bigger than what you would expect
it to be.
Also note that all 4 values match the “df” cmdline utility.

If perdisk is True return the same information for every physical disk
installed on the system as a dictionary with partition names as the keys and
the named tuple described above as the values.
See iotop.py
for an example application.
On some systems such as Linux, on a very busy or long-lived system, the
numbers returned by the kernel may overflow and wrap (restart from zero).
If nowrap is True psutil will detect and adjust those numbers across
function calls and add “old value” to “new value” so that the returned
numbers will always be increasing or remain the same, but never decrease.
disk_io_counters.cache_clear() can be used to invalidate the nowrap
cache.
On Windows it may be necessary to issue the diskperf -y command from cmd.exe
first in order to enable IO counters.
On diskless machines this function will return None or {} if
perdisk is True.
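A sketch of both the system-wide and the per-disk form:

```python
import psutil

io = psutil.disk_io_counters()
if io is not None:  # None on diskless machines
    print(io.read_count, io.write_count, io.read_bytes, io.write_bytes)

per_disk = psutil.disk_io_counters(perdisk=True)
print(list(per_disk))  # disk / partition names
```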

Return system-wide network I/O statistics as a named tuple including the
following attributes:

bytes_sent: number of bytes sent

bytes_recv: number of bytes received

packets_sent: number of packets sent

packets_recv: number of packets received

errin: total number of errors while receiving

errout: total number of errors while sending

dropin: total number of incoming packets which were dropped

dropout: total number of outgoing packets which were dropped (always 0
on macOS and BSD)

If pernic is True return the same information for every network
interface installed on the system as a dictionary with network interface
names as the keys and the named tuple described above as the values.
On some systems such as Linux, on a very busy or long-lived system, the
numbers returned by the kernel may overflow and wrap (restart from zero).
If nowrap is True psutil will detect and adjust those numbers across
function calls and add “old value” to “new value” so that the returned
numbers will always be increasing or remain the same, but never decrease.
net_io_counters.cache_clear() can be used to invalidate the nowrap
cache.
On machines with no network interfaces this function will return None or
{} if pernic is True.
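A sketch of both forms (the counters are cumulative since boot):

```python
import psutil

io = psutil.net_io_counters()
if io is not None:  # None on machines with no network interfaces
    print(io.bytes_sent, io.bytes_recv)

per_nic = psutil.net_io_counters(pernic=True)
print(list(per_nic))  # interface names, e.g. ["lo", "eth0"]
```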

laddr: the local address as a (ip, port) named tuple or a path
in case of AF_UNIX sockets. For UNIX sockets see notes below.

raddr: the remote address as a (ip, port) named tuple or an
absolute path in case of UNIX sockets.
When the remote endpoint is not connected you’ll get an empty tuple
(AF_INET*) or "" (AF_UNIX). For UNIX sockets see notes below.

status: represents the status of a TCP connection. The return value
is one of the psutil.CONN_* constants
(a string).
For UDP and UNIX sockets this is always going to be
psutil.CONN_NONE.

pid: the PID of the process which opened the socket, if retrievable,
else None. On some platforms (e.g. Linux) the availability of this
field changes depending on process privileges (root is needed).

The kind parameter is a string which filters for connections matching the
following criteria:
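As an illustrative sketch, kind accepts values such as "inet", "inet4", "inet6", "tcp", "udp", "unix" and "all"; on some platforms this call may require elevated privileges:

```python
import psutil

# All IPv4/IPv6 sockets on the system.
conns = psutil.net_connections(kind="inet")
for conn in conns[:5]:
    print(conn.laddr, conn.status, conn.pid)
```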

Return the addresses associated with each NIC (network interface card)
installed on the system as a dictionary whose keys are the NIC names and
whose values are lists of named tuples, one for each address assigned to the
NIC. Each named tuple includes 5 fields:
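A sketch of iterating over the result (interface names and addresses are machine-specific):

```python
import psutil

addrs = psutil.net_if_addrs()
for nic, nic_addrs in addrs.items():
    for addr in nic_addrs:
        print(nic, addr.family, addr.address)
```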

Return hardware temperatures. Each entry is a named tuple representing a
certain hardware temperature sensor (it may be a CPU, a hard disk or
something else, depending on the OS and its configuration).
All temperatures are expressed in Celsius unless fahrenheit is set to
True.
If sensors are not supported by the OS an empty dict is returned.
Example:
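A sketch; on systems with no exposed sensors (e.g. most virtual machines) the dict is simply empty:

```python
import psutil

temps = psutil.sensors_temperatures()
for name, entries in temps.items():
    for entry in entries:
        print(name, entry.label, entry.current)
```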

Return hardware fans speed. Each entry is a named tuple representing a
certain hardware sensor fan.
Fan speed is expressed in RPM (revolutions per minute).
If sensors are not supported by the OS an empty dict is returned.
Example:
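Analogous to the temperatures example; the dict is empty when no fan sensors are exposed:

```python
import psutil

fans = psutil.sensors_fans()
for name, entries in fans.items():
    for entry in entries:
        print(name, entry.label, entry.current)
```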

Return an iterator yielding a Process class instance for all running
processes on the local machine.
Every instance is only created once and then cached into an internal table
which is updated every time an element is yielded.
Cached Process instances are checked for identity so that you’re
safe in case a PID has been reused by another process, in which case the
cached instance is updated.
This is preferred over psutil.pids() for iterating over processes.
Sorting order in which processes are returned is based on their PID.
attrs and ad_value have the same meaning as in Process.as_dict().
If attrs is specified Process.as_dict() is called internally and
the resulting dict is stored as an info attribute which is attached to the
returned Process instances.
If attrs is an empty list it will retrieve all process info (slow).
Example usage:
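For instance, collecting a couple of fields for every running process (the attrs names are regular as_dict() field names):

```python
import psutil

procs = []
for proc in psutil.process_iter(attrs=["pid", "name"]):
    # `info` is attached by process_iter() when `attrs` is given.
    procs.append(proc.info)
print(procs[:3])
```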

Convenience function which waits for a list of Process instances to
terminate. Return a (gone, alive) tuple indicating which processes are
gone and which ones are still alive. The gone ones will have a new
returncode attribute indicating the process exit status (it will be None for
processes which are not our children).
callback is a function which gets called when one of the processes being
waited on terminates; a Process instance is passed as the callback
argument.
This function will return as soon as all processes terminate or when
timeout (seconds) occurs.
Differently from Process.wait() it will not raise
TimeoutExpired if a timeout occurs.
A typical use case may be:
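A sketch that spawns a few throwaway children, terminates them and reaps any stragglers (the sleeping-Python workload is just an example):

```python
import sys

import psutil

def on_terminate(proc):
    print("process {} terminated with exit code {}".format(proc, proc.returncode))

# Spawn 3 disposable child processes.
procs = [psutil.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
         for _ in range(3)]
for p in procs:
    p.terminate()
gone, alive = psutil.wait_procs(procs, timeout=5, callback=on_terminate)
for p in alive:
    p.kill()
```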

Raised by Process class methods when no process with the given
pid is found in the current process list or when a process no longer
exists. name is the name the process had before disappearing
and gets set only if Process.name() was previously called.

This may be raised by Process class methods when querying a zombie
process on UNIX (Windows doesn’t have zombie processes). Depending on the
method called the OS may be able to succeed in retrieving the process
information or not.
Note: this is a subclass of NoSuchProcess so if you’re not
interested in retrieving zombies (e.g. when using process_iter())
you can ignore this exception and just catch NoSuchProcess.

Represents an OS process with the given pid.
If pid is omitted, the current process PID
(os.getpid()) is used.
Raise NoSuchProcess if pid does not exist.
On Linux pid can also refer to a thread ID (the id field returned by
threads() method).
When accessing methods of this class always be prepared to catch
NoSuchProcess, ZombieProcess and AccessDenied
exceptions.
The hash() builtin can
be used against instances of this class in order to identify a process
uniquely over time (the hash is determined by mixing process PID
and creation time). As such it can also be used with
set()s.

Note

In order to efficiently fetch more than one piece of information about the
process at the same time, make sure to use either as_dict() or the
oneshot() context manager.

Note

the way this class is bound to a process is uniquely via its PID.
That means that if the process terminates and the OS reuses its PID you may
end up interacting with another process.
The only methods for which process identity is preemptively checked
(via PID + creation time) are the following:
nice() (set),
ionice() (set),
cpu_affinity() (set),
rlimit() (set),
children(),
parent(),
suspend(), resume(),
send_signal(),
terminate(), kill().
To prevent this problem for all other methods you can use
is_running() before querying the process or
process_iter() in case you’re iterating over all processes.
It must be noted though that, unless you deal with very “old” (inactive)
Process instances, this will hardly ever be a problem.

Utility context manager which considerably speeds up the retrieval of
multiple pieces of process information at the same time.
Internally different process info (e.g. name(), ppid(),
uids(), create_time(), …) may be fetched by using the same
routine, but only one value is returned and the others are discarded.
When using this context manager the internal routine is executed once (in
the example below on name()), the value of interest is returned and
the others are cached.
The subsequent calls sharing the same internal routine will return the
cached value.
The cache is cleared when exiting the context manager block.
The advice is to use this every time you retrieve more than one piece of
information about the process. If you’re lucky, you’ll get a hell of a speedup.
Example:
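A minimal sketch; all the calls inside the with block share data fetched and cached on the first access:

```python
import psutil

p = psutil.Process()
with p.oneshot():
    # The first call fetches and caches; the rest hit the cache.
    name = p.name()
    ctime = p.create_time()
    status = p.status()
print(name, ctime, status)
```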

Here’s a list of methods which can take advantage of the speedup depending
on what platform you’re on.
In the table below horizontal empty rows indicate which process methods can
be efficiently grouped together internally.
The last column (speedup) shows an approximation of the speedup you can get
if you call all the methods together (best case scenario).

Get or set
process I/O niceness (priority).
On Linux ioclass is one of the
psutil.IOPRIO_CLASS_* constants.
value is a number which goes from 0 to 7. The higher the value,
the lower the I/O priority of the process. On Windows only ioclass is
used and it can be set to 2 (normal), 1 (low) or 0 (very low).
The example below sets IDLE priority class for the current process,
meaning it will only get I/O time when no other process needs the disk:
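A sketch of getting and then setting the priority; the set may raise AccessDenied in restricted environments, which is handled here purely for illustration:

```python
import psutil

p = psutil.Process()
before = p.ionice()  # current (ioclass, value) tuple on Linux
print(before)
try:
    # IDLE class: I/O time only when no other process needs the disk.
    p.ionice(psutil.IOPRIO_CLASS_IDLE)
except psutil.AccessDenied:
    pass
```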

read_count: the number of read operations performed (cumulative).
This is supposed to count the number of read-related syscalls such as
read() and pread() on UNIX.

write_count: the number of write operations performed (cumulative).
This is supposed to count the number of write-related syscalls such as
write() and pwrite() on UNIX.

read_bytes: the number of bytes read (cumulative).
Always -1 on BSD.

write_bytes: the number of bytes written (cumulative).
Always -1 on BSD.

Linux specific:

read_chars(Linux): the number of bytes which this process passed
to read() and pread() syscalls (cumulative).
Differently from read_bytes it doesn’t care whether or not actual
physical disk I/O occurred.

write_chars(Linux): the number of bytes which this process passed
to write() and pwrite() syscalls (cumulative).
Differently from write_bytes it doesn’t care whether or not actual
physical disk I/O occurred.

Windows specific:

other_count(Windows): the number of I/O operations performed
other than read and write operations.

other_bytes(Windows): the number of bytes transferred during
operations other than read and write operations.

Return a (user, system, children_user, children_system) named tuple
representing the accumulated process time, in seconds (see
explanation).
On Windows and macOS only user and system are filled, the others are
set to 0.
This is similar to
os.times()
but can be used for any process PID.

Changed in version 4.1.0: return two extra fields: children_user and children_system.

Return a float representing the process CPU utilization as a percentage
which can also be >100.0 in case of a process running multiple threads
on different CPUs.
When interval is > 0.0 compares process times to system CPU times
elapsed before and after the interval (blocking). When interval is 0.0
or None compares process times to system CPU times elapsed since last
call, returning immediately. That means the first time this is called it
will return a meaningless 0.0 value which you are supposed to ignore.
In this case it is recommended for accuracy that this function be called a
second time with at least 0.1 seconds between calls.
Example:
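A sketch of both the blocking and the non-blocking use for the current process:

```python
import psutil

p = psutil.Process()
# Blocking: measure this process over a 0.5 second interval.
pct = p.cpu_percent(interval=0.5)
print(pct)
# Non-blocking: the first call returns a meaningless 0.0.
p.cpu_percent(interval=None)
print(p.cpu_percent(interval=None))
```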

Note

the returned value can be > 100.0 in case of a process running multiple
threads on different CPU cores.

Note

the returned value is explicitly not split evenly between all available
CPUs (differently from psutil.cpu_percent()).
This means that a busy loop process running on a system with 2 logical
CPUs will be reported as having 100% CPU utilization instead of 50%.
This was done in order to be consistent with top UNIX utility
and also to make it easier to identify processes hogging CPU resources
independently from the number of CPUs.
It must be noted that taskmgr.exe on Windows does not behave like
this (it would report 50% usage instead).
To emulate Windows taskmgr.exe behavior you can do:
p.cpu_percent()/psutil.cpu_count().

Warning

the first time this method is called with interval = 0.0 or
None it will return a meaningless 0.0 value which you are
supposed to ignore.

Get or set
process current CPU affinity.
CPU affinity consists of telling the OS to run a process on a limited set
of CPUs only (on the Linux command line the taskset command is typically
used).
If no argument is passed it returns the current CPU affinity as a list
of integers.
If passed it must be a list of integers specifying the new CPU affinity.
If an empty list is passed all eligible CPUs are assumed (and set).
On some systems such as Linux this may not necessarily mean all available
logical CPUs as in list(range(psutil.cpu_count())).
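A sketch of getting, restricting and resetting affinity (it assumes CPU #0 is eligible for this process):

```python
import psutil

p = psutil.Process()
affinity = p.cpu_affinity()   # e.g. [0, 1, 2, 3]
print(affinity)
p.cpu_affinity([0])           # pin to CPU #0 only
p.cpu_affinity([])            # reset to all eligible CPUs
```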

Return what CPU this process is currently running on.
The returned number should be <= psutil.cpu_count().
On FreeBSD certain kernel processes may return -1.
It may be used in conjunction with psutil.cpu_percent(percpu=True) to
observe the system workload distributed across multiple CPUs as shown by
cpu_distribution.py example script.

Return a named tuple with variable fields depending on the platform
representing memory information about the process.
The “portable” fields available on all platforms are rss and vms.
All numbers are expressed in bytes.

Linux     macOS     BSD       Solaris   AIX       Windows
rss       rss       rss       rss       rss       rss (alias for wset)
vms       vms       vms       vms       vms       vms (alias for pagefile)
shared    pfaults   text                          num_page_faults
text      pageins   data                          peak_wset
lib                 stack                         wset
data                                              peak_paged_pool
dirty                                             paged_pool
                                                  peak_nonpaged_pool
                                                  nonpaged_pool
                                                  pagefile
                                                  peak_pagefile
                                                  private

rss: aka “Resident Set Size”, this is the non-swapped physical
memory a process has used.
On UNIX it matches top’s RES column
(see doc).
On Windows this is an alias for the wset field and it matches the
“Mem Usage” column of taskmgr.exe.

vms: aka “Virtual Memory Size”, this is the total amount of virtual
memory used by the process.
On UNIX it matches top’s VIRT column
(see doc).
On Windows this is an alias for the pagefile field and it matches the
“Mem Usage” / “VM Size” column of taskmgr.exe.

shared: (Linux)
memory that could be potentially shared with other processes.
This matches top’s SHR column
(see doc).

This method returns the same information as memory_info(), plus, on
some platforms (Linux, macOS, Windows), also provides additional metrics
(USS, PSS and swap).
The additional metrics provide a better representation of “effective”
process memory consumption (in the case of USS) as explained in detail in
this blog post.
It does so by passing through the whole process address space.
As such it usually requires higher user privileges than
memory_info() and is considerably slower.
On platforms where extra fields are not implemented this simply returns the
same metrics as memory_info().

uss(Linux, macOS, Windows):
aka “Unique Set Size”, this is the memory which is unique to a process
and which would be freed if the process was terminated right now.

pss(Linux): aka “Proportional Set Size”, is the amount of memory
shared with other processes, accounted in a way that the amount is
divided evenly between the processes that share it.
I.e. if a process has 10 MBs all to itself and 10 MBs shared with
another process its PSS will be 15 MBs.

swap(Linux): amount of memory that has been swapped out to disk.

Note

uss is probably the most representative metric for determining how
much memory is actually being used by a process.
It represents the amount of memory that would be freed if the process
was terminated right now.

Compare process memory to total physical system memory and calculate
process memory utilization as a percentage.
The memtype argument is a string that dictates what type of process memory
you want to compare against. You can choose between the named tuple field
names returned by memory_info() and memory_full_info()
(it defaults to "rss").
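For example (the percentages depend on the process and the machine):

```python
import psutil

p = psutil.Process()
rss_pct = p.memory_percent()              # defaults to "rss"
vms_pct = p.memory_percent(memtype="vms")
print(rss_pct, vms_pct)
```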

Return process’s mapped memory regions as a list of named tuples whose
fields vary depending on the platform.
This method is useful to obtain a detailed representation of process
memory usage as explained
here
(the most important value is “private” memory).
If grouped is True the mapped regions with the same path are
grouped together and the different memory fields are summed. If grouped
is False each mapped region is shown as a single entity and the
named tuple will also include the mapped region’s address space (addr)
and permission set (perms).
See pmap.py
for an example application.
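A sketch of the grouped form; the rss field printed below is one of the Linux-specific fields:

```python
import psutil

p = psutil.Process()
maps = p.memory_maps(grouped=True)
for m in maps[:3]:
    print(m.path, m.rss)
```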

Return regular files opened by process as a list of named tuples including
the following fields:

path: the absolute file name.

fd: the file descriptor number; on Windows this is always -1.

Linux only:

position (Linux): the file (offset) position.

mode (Linux): a string indicating how the file was opened, similarly
to open’s
mode argument. Possible values are 'r', 'w', 'a',
'r+' and 'a+'. There’s no distinction between files opened in
binary or text mode ("b" or "t").
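A sketch that opens a throwaway temporary file on purpose, so the listing has something to show:

```python
import tempfile

import psutil

p = psutil.Process()
tmp = tempfile.NamedTemporaryFile(mode="w")  # kept open on purpose
files = p.open_files()
print(files)
tmp.close()
```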

Warning

on Windows this method is not reliable due to some limitations of the
underlying Windows API which may hang when retrieving certain file
handles.
In order to work around that psutil spawns a thread for each handle and
kills it if it’s not responding after 100ms.
That implies that this method on Windows is not guaranteed to enumerate
all regular file handles (see
issue 597).
Also, it will only list files living in the C:\ drive (see
issue 1020).

Warning

on BSD this method can return files with a null path (“”) due to a
kernel bug, hence it’s not reliable
(see issue 595).

Changed in version 3.1.0: no longer hangs on Windows.

Changed in version 4.1.0: new position, mode and flags fields on Linux.

laddr: the local address as a (ip, port) named tuple or a path
in case of AF_UNIX sockets. For UNIX sockets see notes below.

raddr: the remote address as a (ip, port) named tuple or an
absolute path in case of UNIX sockets.
When the remote endpoint is not connected you’ll get an empty tuple
(AF_INET*) or "" (AF_UNIX). For UNIX sockets see notes below.

status: represents the status of a TCP connection. The return value
is one of the psutil.CONN_* constants.
For UDP and UNIX sockets this is always going to be
psutil.CONN_NONE.

The kind parameter is a string which filters for connections that fit the
following criteria:

Return whether the current process is running in the current process list.
This is reliable also in case the process is gone and its PID has been
reused by another process, therefore it must be preferred over doing
psutil.pid_exists(p.pid).

Note

this will return True also if the process is a zombie
(p.status()==psutil.STATUS_ZOMBIE).

Send a signal to process (see
signal module
constants) preemptively checking whether PID has been reused.
On UNIX this is the same as os.kill(pid, sig).
On Windows only SIGTERM, CTRL_C_EVENT and CTRL_BREAK_EVENT signals
are supported and SIGTERM is treated as an alias for kill().
See also how to kill a process tree and
terminate my children.

Changed in version 3.2.0: support for CTRL_C_EVENT and CTRL_BREAK_EVENT signals on Windows
was added.

Suspend process execution with SIGSTOP signal preemptively checking
whether PID has been reused.
On UNIX this is the same as os.kill(pid, signal.SIGSTOP).
On Windows this is done by suspending all process threads execution.

Resume process execution with SIGCONT signal preemptively checking
whether PID has been reused.
On UNIX this is the same as os.kill(pid, signal.SIGCONT).
On Windows this is done by resuming all process threads execution.

Terminate the process with SIGTERM signal preemptively checking
whether PID has been reused.
On UNIX this is the same as os.kill(pid, signal.SIGTERM).
On Windows this is an alias for kill().
See also how to kill a process tree and
terminate my children.

Wait for process termination and if the process is a child of the current
one also return the exit code, else None. On Windows there’s
no such limitation (exit code is always returned). If the process is
already terminated immediately return None instead of raising
NoSuchProcess.
timeout is expressed in seconds. If specified and the process is still
alive raise TimeoutExpired exception.
timeout=0 can be used in non-blocking apps: it will either return
immediately or raise TimeoutExpired.
To wait for multiple processes use psutil.wait_procs().

The path of the /proc filesystem on Linux, Solaris and AIX (defaults to
"/proc").
You may want to re-set this constant right after importing psutil in case
your /proc filesystem is mounted elsewhere or if you want to retrieve
information about Linux containers such as
Docker,
Heroku or
LXC (see
here
for more info).
It must be noted that this trick works only for APIs which rely on /proc
filesystem (e.g. memory APIs and most Process class methods).

A set of integers representing the I/O priority of a process on Linux. They
can be used in conjunction with psutil.Process.ionice() to get or set
process I/O priority.
IOPRIO_CLASS_NONE and IOPRIO_CLASS_BE (best effort) are the default for
any process that hasn’t set a specific I/O priority.
IOPRIO_CLASS_RT (real time) means the process is given first access to the
disk, regardless of what else is going on in the system.
IOPRIO_CLASS_IDLE means the process will get I/O time only when no one else
needs the disk.
For further information refer to manuals of
ionice
command line utility or
ioprio_get
system call.

Availability: Linux

Changed in version 3.0.0: on Python >= 3.4 these constants are
enums
instead of plain integers.

Constants which identify whether a NIC (network interface card) operates in
full or half duplex mode. NIC_DUPLEX_FULL means the NIC is able to send and
receive data simultaneously, while NIC_DUPLEX_HALF means the NIC can either
send or receive data at a time.
To be used in conjunction with psutil.net_if_stats().

import os
import signal

import psutil

def kill_proc_tree(pid, sig=signal.SIGTERM, include_parent=True,
                   timeout=None, on_terminate=None):
    """Kill a process tree (including grandchildren) with signal
    "sig" and return a (gone, still_alive) tuple.  "on_terminate",
    if specified, is a callback function which is called as soon as a
    child terminates.
    """
    if pid == os.getpid():
        raise RuntimeError("I refuse to kill myself")
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    if include_parent:
        children.append(parent)
    for p in children:
        p.send_signal(sig)
    gone, alive = psutil.wait_procs(children, timeout=timeout,
                                    callback=on_terminate)
    return (gone, alive)

Q: What Windows versions are supported?
A: From Windows Vista onwards, both 32 and 64 bit versions.
Latest binary (wheel / exe) release which supports Windows 2000, XP
and 2003 server is
psutil 3.4.2.
On such old systems psutil is no longer tested or maintained, but it can
still be compiled from sources (you’ll need Visual Studio)
and it should “work” (more or less).

Q: What Python versions are supported?

A: From 2.6 to 3.6, both 32 and 64 bit versions. Last version supporting
Python 2.4 and 2.5 is psutil 2.1.3.
PyPy is also known to work.

A: This may happen when you query processes owned by another user,
especially on macOS and
Windows.
Unfortunately there’s not much you can do about this except running the
Python process with higher privileges.
On Unix you may run the Python process as root or use the SUID bit
(this is the trick used by tools such as ps and netstat).
On Windows you may run the Python process as NT AUTHORITY\SYSTEM or install
the Python script as a Windows service (this is the trick used by tools
such as ProcessHacker).

Q: What about load average?

A: psutil does not expose any load average function as it’s already available
in Python as
os.getloadavg().