New Linux tool, curt, uses the perf command's Python scripting capabilities to analyze system utilization by process, by task, and by CPU.

Modern computers are ever increasing in performance and capacity. This matters little if that increasing capacity is not well utilized. Following is a description of the motivation and work behind "curt," a new tool for Linux systems for measuring and breaking down system utilization by process, by task, and by CPU using the perf command's Python scripting capabilities.

I had the privilege of presenting this topic at Texas Linux Fest 2018, and here I've gone a bit deeper into the details, included links to further information, and expanded the scope of my talk.

System utilization

In discussing computation, let's begin with some assertions:

Every computational system is equally fast at doing nothing.

Computational systems were created to do things.

A computational system is better at doing things when it is doing something than when it is doing nothing.

Modern computational systems have many streams of execution:

Often, very large systems are created by literally wiring together smaller systems. At IBM, these smaller systems are sometimes called CECs (short for Central Electronics Complexes and pronounced "keks").

There are multiple sockets for processor modules in each system.

There are sometimes multiple chips per socket (in the form of dual-chip modules—DCMs—or multi-chip modules—MCMs).

There are multiple cores per chip.

There are multiple threads per core.

In sum, there are potentially thousands of execution threads across a single computational system.

Ideally, all these execution streams are 100% busy doing useful work. One measure of utilization for an individual execution stream (CPU thread) is the percentage of time that thread has tasks scheduled and running. (Note that I didn't say "doing useful work." Creating a tool that measures useful work is left as an exercise for the reader.) By extension, system utilization is the overall percentage of time that all execution streams of a system have tasks scheduled and running. Similarly, utilization can be defined with respect to an individual task. Task utilization is the percentage of the lifetime of the task that was spent actively running on any CPU thread. By extension, process utilization is the collective utilization of its tasks.

Utilization measurement tools

There are tools that measure system utilization: uptime, vmstat, mpstat, nmon, etc. There are tools that measure individual process utilization: time. There are not many tools that measure system-wide per-process and per-task utilization. One such command is curt on AIX. According to IBM's Knowledge Center: "The curt command takes an AIX trace file as input and produces a number of statistics related to processor (CPU) utilization and process/thread/pthread activity."

This seems like a good model for a similar command for a Linux system.

Utilization data

Before starting to create any tools for utilization analysis, it is important to know what data is required. Since utilization is directly related to whether a task is actively running or not, related scheduling events are required: When is the task made to run, and when is it paused? Tracking on which CPU the task runs is important, so migration events are required for implicit migrations. There are also certain system calls that force explicit migrations. Creation and deletion of tasks are obviously important. Since we want to understand user time, system time, hypervisor time, and interrupt time, events that show the transitions between those task states are required.

The Linux kernel contains "tracepoints" for all those events. It is possible to enable tracing for those events directly in the kernel's debugfs filesystem, usually mounted at /sys/kernel/debug, in the tracing directory (/sys/kernel/debug/tracing).
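For example, a sketch of enabling the sched_switch tracepoint directly (as root), assuming the usual mount point:

$ echo 1 > /sys/kernel/debug/tracing/events/sched/sched_switch/enable
$ cat /sys/kernel/debug/tracing/trace_pipe
$ echo 0 > /sys/kernel/debug/tracing/events/sched/sched_switch/enable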

An easier way to record tracing data is with the Linux perf command.

The perf command

perf is a very powerful userspace command for tracing or counting both hardware and software events.

Software events are predefined in the kernel, can be predefined in userspace code, and can be dynamically created (as "probes") in kernel or userspace code.

perf can do much more than just trace and count, though.

perf stat

The stat subcommand of perf will run a command, count events that are commonly of interest, and produce a simple report:
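An illustrative run (the program and its counter values are placeholders):

$ perf stat ./command

The default report includes counters such as task-clock, context-switches, cpu-migrations, page-faults, cycles, instructions, branches, and branch-misses, along with the elapsed time.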

perf record and perf report

The record subcommand of perf records events from a command's execution into a perf.data file, which the report subcommand can then summarize by process and function. For example, a report might show a program that spends about 60% of its running time in the function main and about 20% each in the subfunctions sum_sub and sum_add. Note that the default event used by perf record is "cycles." Later examples will show how to use perf record with other events.

perf report can further report runtime statistics by source code line, if the compilation was performed with the -g flag to produce debug information:
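A sketch of the workflow (compiler and program are placeholders):

$ gcc -O3 -g -o command command.c
$ perf record ./command
$ perf report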

perf top

Similar to the top command, which displays (at a regular update interval) the processes using the most CPU time, perf top will display the functions using the most CPU time among all processes on the system, a nice leap in granularity.

perf list

The list subcommand of perf enumerates all the events available for tracing and counting. Events labeled as Hardware event, Hardware cache event, Kernel PMU event, and most (if not all) of the events under categories like cache, floating point, frontend, and memory are hardware events, counted by the hardware and triggered each time a certain count is reached. Once triggered, an entry is made into the kernel trace buffer with the current state of the associated task. Raw hardware event codes are alphanumeric encodings of the hardware events. These are mostly needed when the hardware is newer than the kernel, and the user needs to enable events that are new for that hardware. Users will rarely, if ever, need to use raw event codes.

Events labeled Tracepoint event are embedded in the kernel. These are triggered when that section of code is executed by the kernel. There are individual syscalls events for every system call supported by the kernel, while the two raw_syscalls events (sys_enter and sys_exit) are triggered for every system call. Since there is a limit to the number of events that can be actively traced, the raw_syscalls events may be more practical when a large number of system calls need to be traced.

Events labeled SDT event are for software-defined tracepoints (SDTs). These can be embedded in application or library code and enabled as needed. When enabled, they behave just like other events: When that section of code is executed (by any task being traced on the system), an entry is made in the kernel trace buffer with the current state of the associated task. This is a very powerful capability that can prove very useful.

perf buildid-cache and perf probe

Enabling SDTs is easy. First, make the SDTs for a certain library known to perf:
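A sketch of the steps, using libpthread as an example (the library path varies by distribution):

$ perf buildid-cache --add /lib64/libpthread.so.0
$ perf probe sdt_libpthread:pthread_create

Once the probe is created, the SDT event can be recorded like any other event:

$ perf record -e sdt_libpthread:pthread_create -a ./command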

Processing perf trace data

The preceding discussion and examples show that perf can collect the data required for system utilization analysis. However, how can that data be processed to produce the desired results?

perf eBPF

A relatively new and emerging technology with perf is called eBPF. BPF is an acronym for Berkeley Packet Filter, and it is a C-like language originally for, not surprisingly, network packet filtering in the kernel. eBPF is an acronym for extended BPF, a similar, but more robust C-like language based on BPF.

Recent versions of perf can be used to incorporate compiled eBPF code into the kernel to securely and intelligently handle events for any number of purposes, with some limitations.

The capability is very powerful and quite useful for real-time, continuous updates of event-related data and statistics.

However, as this capability is emerging, support is mixed on current releases of Linux distributions. It's a bit complicated (or, put differently, I have not figured it out yet). It's also only for online use; there is no offline capability. For these reasons, I won't cover it further here.

perf data file

perf record produces a perf.data file. The file is a structured binary file that is not particularly well documented, has no programming interface for access, and comes with no clear compatibility guarantees. For these reasons, I chose not to directly use the perf.data file.

perf script

The perf script subcommand walks through the perf.data file, emitting basic information about each record there. This is an appropriate model for what is needed to process the file, track the state changes, and compute the statistics required for system utilization analysis.
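For example, running perf script with no arguments prints one line per record; each line typically shows the command, task ID, CPU, timestamp, and event name, followed by the event's fields:

$ perf script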

perf script has several modes of operation, including several higher-level scripts that come with perf that produce statistics based on the trace data in a perf.data file.

For example, in one of those bundled scripts, the function raw_syscalls__sys_exit has parameters for all the data of the associated event. The body of the function merely increments a counter keyed by the command, process ID, and system call. The rest of the code doesn't do much; most of the complexity is in the function signature for the event-handling routine.
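A sketch of such a counting handler, assuming perf's bundled Util helpers (the generated scripts' usual sys.path setup for perf's modules is elided; the signature follows perf's generated template):

from Util import autodict

syscalls = autodict()

def raw_syscalls__sys_exit(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, id, ret):
    # count this system call for this command/task
    try:
        syscalls[common_comm][common_pid][id] += 1
    except TypeError:
        syscalls[common_comm][common_pid][id] = 1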

Fortunately, perf makes it easy to figure out the proper signatures for various tracepoint event-handling functions.

perf script --gen-script

For the raw_syscalls events, we can generate a trace containing just those events:
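A sketch of the recording and script-generation steps (the generated script lands in the current directory):

$ perf record -e 'raw_syscalls:sys_enter,raw_syscalls:sys_exit' -a sleep 1
$ perf script --gen-script python
generated Python script: perf-script.py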

Now we have a template on which to base writing a Python script to parse the events of interest for reporting system utilization.

perf scripting

The Python scripts generated by perf script --gen-script are not directly executable. They must be invoked by perf:

$ perf script -s ./perf-script.py

What's really going on here?

First, perf starts. The script subcommand's -s option indicates that an external script will be used.

perf establishes a Python runtime environment.

perf loads the specified script.

perf runs the script. The script can perform normal initialization and even handle command line arguments, although passing the arguments is slightly awkward, requiring a -- separator between the arguments for perf and for the script:

$ perf script -s ./perf-script.py -- --script-arg1 [...]

perf processes each record of the trace file, calling the appropriate event-handling function in the script. Those event-handling functions can do whatever they need to do.
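Beyond the per-event handlers, perf's scripting API recognizes a few well-known functions a script may define; a minimal skeleton (old-style signatures) might be:

def trace_begin():
    # called once, before any events are processed
    pass

def trace_end():
    # called once, after all events have been processed
    pass

def trace_unhandled(event_name, context, event_fields_dict):
    # called for any event without a matching handler function
    pass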

Utilization

It appears that perf scripting has sufficient capabilities for a workable solution. What sort of information is required to generate the statistics for system utilization?

Task creation (fork, pthread_create)

Task termination (exit)

Task replacement (exec)

Task migration, explicit or implicit, and current CPU

Task scheduling

System calls

Hypervisor calls

Interrupts

It can be helpful to understand what portion of time a task spends in various system calls, handling interrupts, or making explicit calls out to the hypervisor. Each of these categories of time can be considered a "state" for the task, and the methods of transitioning from one state to another need to be tracked:
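For example, some of the transitions and the tracepoint events that signal them (a simplified sketch; the powerpc events apply only to that architecture):

raw_syscalls:sys_enter: user -> system
raw_syscalls:sys_exit: system -> user
powerpc:hcall_entry: system -> hypervisor
powerpc:hcall_exit: hypervisor -> system
irq:irq_handler_entry: current state -> interrupt
irq:irq_handler_exit: interrupt -> previous state
sched:sched_switch: running -> idle (and back)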

The curt command for Linux

perf provides a suitable infrastructure with which to capture the necessary data for system utilization. There is a sufficient set of events available for tracing in the Linux kernel. The Python scripting capabilities permit a powerful and flexible means of processing the trace data. It's time to write the tool.

High-level design

In processing each event, the relevant state of the affected tasks must be updated:

New task? Create and initialize data structures to track the task's state

Command

Process ID

Task ID

Migration count (0)

Current CPU

New CPU for this task? Create and initialize data structures for CPU-specific data

User time (0)

System time (0)

Hypervisor time (0)

Interrupt time (0)

Idle time (0)

New transaction for this task? Create and initialize data structures for transaction-specific data
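A sketch of these data structures (class and field names are assumptions here, chosen to be consistent with the code fragments shown later):

class CPU:
    def __init__(self):
        self.user = 0      # time accumulated in user state
        self.system = 0    # time accumulated in system (kernel) state
        self.hv = 0        # time accumulated in hypervisor calls
        self.irq = 0       # time accumulated in interrupt handling
        self.idle = 0      # time accumulated idle

class Call:
    def __init__(self):
        self.count = 0     # number of completed calls
        self.elapsed = 0   # total elapsed time across completed calls
        self.pending = 0   # time that cannot be attributed to a complete call

class Task:
    def __init__(self, timestamp, command, mode, pid):
        self.timestamp = timestamp  # time of this task's last state change
        self.command = command
        self.mode = mode            # current state: 'user', 'sys', 'idle', ...
        self.pid = pid
        self.cpu = 'unknown'        # current CPU
        self.cpus = {}              # per-CPU accumulators, created on first use
        self.migrations = 0
        self.syscall = 0            # id of the system call currently in progress
        self.syscalls = {}          # per-system-call accumulators (Call objects)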

High-level example

As an example, consider how an event such as raw_syscalls:sys_enter might be processed:

If this task has not been seen before, allocate and initialize a new task data structure

If the CPU is new for this task, allocate and initialize a new CPU data structure

If this system call is new for this task, allocate and initialize a new call data structure

In the task data structure:

Accumulate the time since the last state change in a bucket for the current state ("user")

Set the new state ("system")

Save the current timestamp as the start of this time period for the new state

Edge cases

sys_exit as a task's first event

If the first event in the trace for a task is raw_syscalls:sys_exit:

There is no matching raw_syscalls:sys_enter with which to determine the start time of this system call.

The accumulated time since the start of the trace was all spent in the system call and needs to be added to the overall elapsed time spent in all calls to this system call.

The elapsed time of this system call is unknown.

It would be inaccurate to account for this elapsed time in the average, minimum, or maximum statistics for this system call.

In this case, the tool creates a separate bucket called "pending" for time spent in the system call that cannot be accounted for in the average, minimum, or maximum.

A "pending" bucket is required for all transactional events (system calls, hypervisor calls, and interrupts).

sys_enter as a task's last event

Similarly, if the last event in the trace for a task is raw_syscalls:sys_enter:

There is no matching raw_syscalls:sys_exit with which to determine the end time of this system call.

The accumulated time from the start of the system call to the end of the trace was all spent in the system call and needs to be added to the overall elapsed time spent in all calls to this system call.

The elapsed time of this system call is unknown.

It would be inaccurate to account for this elapsed time in the average, minimum, or maximum statistics for this system call.

This elapsed time is also accumulated in the "pending" bucket.

A "pending" bucket is required for all transactional events (system calls, hypervisor calls, and interrupts).

Since this condition can only be discovered at the end of the trace, a final "wrap-up" step is required in the tool where the statistics for all known tasks are completed based on their final states.

Indeterminable state

It is possible that, for a very busy task (or a short trace), no event will ever be seen from which the task's state can be determined. For example, if only sched:sched_switch or sched:sched_migrate_task events are seen for a task, it is impossible to determine that task's state. However, the task is known to exist and to be running.

Since the actual state cannot be determined, the runtime for the task is accumulated in a separate bucket, arbitrarily called "busy-unknown." For completeness, this time is also displayed in the final report.

Invisible tasks

For very, very busy tasks (or a short trace), it is possible that a task was actively running during the entire time the trace was being collected, but no events for that task appear in the trace. It was never migrated, paused, or forced to wait.

Such tasks cannot be known to exist by the tool and will not appear in the report.

In code, the high-level example above looks roughly like this. From the raw_syscalls:sys_enter handler:

# compute time spent in the previous state ('user')
delta = timestamp - task.timestamp

# accumulate 'user' time for this task/CPU
task.cpus[task.cpu].user += delta

if id not in task.syscalls:
    # new system call for this task!
    task.syscalls[id] = Call()

# change task's state
task.mode = 'sys'

# save the timestamp for the last event (this one) for this task
task.timestamp = timestamp

And from the raw_syscalls:sys_exit handler:

# compute time spent in the previous state ('sys')
delta = timestamp - task.timestamp

# accumulate time for this task's state on this CPU
task.cpus[common_cpu].system += delta

# change task's state
task.mode = 'user'

# save the timestamp for the last event (this one) for this task
task.timestamp = timestamp

Handling the edge cases

Following are some of the edge cases that are possible and must be handled.

sys_exit as first event

As a system-wide trace can be started at an arbitrary time, it is certainly possible that the first event for a task is raw_syscalls:sys_exit. This requires adding the same new-task discovery code used in the event-handling function for raw_syscalls:sys_enter to the handler for raw_syscalls:sys_exit.

Another issue is that it is impossible to properly accumulate the data for this system call since there is no timestamp for the start of the system call. The time from the start of the trace until this event has been spent by this task in the system call. It would be inaccurate to ignore this time. It would also be inaccurate to incorporate this time such that it is used to compute the average, minimum, or maximum. The only reasonable option is to accumulate this separately, calling it "pending" system time. To accurately compute this time, the timestamp of the first event of the trace must be known. Since any event could be the first event in the trace, every event must conditionally save its timestamp if it is the first event. A global variable is required:

start_timestamp = 0

And every event-handling function must conditionally save its timestamp:
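A minimal sketch of that conditional save, at the top of each handler:

global start_timestamp

if start_timestamp == 0:
    start_timestamp = timestamp

With start_timestamp available, the pending time for a sys_exit first event can be computed and accumulated: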

# compute elapsed time for this system call
delta = timestamp - start_timestamp

# accumulate time for this task/system call
task.syscalls[id].pending += delta

# accumulate time for this task's state on this CPU
task.cpus[common_cpu].system += delta

# change task's state
task.mode = 'user'

# save the timestamp for the last event (this one) for this task
task.timestamp = timestamp

sys_enter as last event

A similar issue to having sys_exit as the first event for a task is when sys_enter is the last event seen for a task. The time spent in the system call must be accumulated for completeness but can't accurately impact the average, minimum, or maximum. This time will also be accumulated in a separate "pending" state.

To accurately determine the elapsed time of the pending system call, from sys_enter to the end of the trace period, the timestamp of the final event in the trace file is required. Unfortunately, there is no way to know which event is the last event until that event has already been processed. So, all events must save their respective timestamps in a global variable.

It may be that many tasks are in the state where the last event seen for them was sys_enter. Thus, after the last event is processed, a final "wrap-up" step is required to complete the statistics for those tasks. Fortunately, perf calls a trace_end function, if the script defines one, after the final event has been processed.

def trace_end():
    for tid in tasks.keys():
        task = tasks[tid]
        # if this task ended while executing a system call
        if task.mode == 'sys':
            # compute the time from the entry to the system call
            # to the end of the trace period
            delta = curr_timestamp - task.timestamp
            # accumulate the elapsed time for this system call
            task.syscalls[task.syscall].pending += delta
            # accumulate the system time for this task/CPU
            task.cpus[task.cpu].system += delta

Migrations

A task migration is when a task running on one CPU is moved to another CPU. This can happen by either:

Explicit request (e.g., a call to sched_setaffinity), or

Implicitly by the kernel (e.g., load balancing or vacating a CPU being taken offline)

When detected:

The migration count for the task should be incremented

The statistics for the previous CPU should be updated

A new CPU data structure may need to be created and initialized if the CPU is new for the task

The task's current CPU is set to the new CPU

For accurate statistics, task migrations must be detected as soon as possible. The first case, explicit request, happens within a system call and can be detected in the sys_exit event for that system call. The second case has its own event, sched:sched_migrate_task, so it will need a new event-handling function.

# compute elapsed time for this system call
delta = timestamp - start_timestamp

# accumulate time for this task/system call
task.syscalls[id].pending += delta

if common_cpu != task.cpu:
    task.migrations += 1
    # divide the time spent in this syscall in half...
    delta /= 2
    # ...and give half to the previous CPU, here, and half to the new CPU, below
    task.cpus[task.cpu].system += delta

# accumulate time for this task's state on this CPU
task.cpus[common_cpu].system += delta

# change task's state
task.mode = 'user'

# save the timestamp for the last event (this one) for this task
task.timestamp = timestamp

Task creation

To accurately collect statistics for a task, it is essential to know when the task is created. Tasks can be created with fork(), which creates a new process, or pthread_create(), which creates a new task within the same process. Fortunately, both are manifested by a clone system call and made evident by a sched:sched_process_fork event. The lifetime of the task starts at the sched_process_fork event. The edge case that arises is that the first likely events for the new task are:

sched_switch when the new task starts running. The new task should be considered idle at creation until this event occurs

sys_exit for the clone system call. The initial state of the new task needs to be based on the state of the task that creates it, including being within the clone system call.

One edge case must be handled: if the creating task (parent) is not yet known, it must be created and initialized, with the presumption that it has been actively running since the start of the trace.
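A sketch of a sched:sched_process_fork handler covering these cases (the tracepoint's fields are parent_comm, parent_pid, child_comm, and child_pid; nsecs is a helper from perf's Util module; Task, tasks, and start_timestamp are the structures sketched earlier):

def sched__sched_process_fork(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, parent_comm, parent_pid, child_comm, child_pid):
    timestamp = nsecs(common_secs, common_nsecs)
    if parent_pid not in tasks:
        # edge case: unknown parent; presume busy since the start of the trace
        tasks[parent_pid] = Task(start_timestamp, parent_comm,
                                 'busy-unknown', parent_pid)
    parent = tasks[parent_pid]
    # the child's initial state is based on the parent's state,
    # including being within the clone system call
    tasks[child_pid] = Task(timestamp, child_comm, parent.mode, parent.pid)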

Task exit

Similarly, for complete and accurate task statistics, it is essential to know when a task has terminated. There's an event for that: sched:sched_process_exit. This one is pretty easy to handle, in that the effort is just to close out the statistics and set the mode appropriately, so any end-of-trace processing will not think the task is still active:
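A sketch (again with a signature per perf's generated template; prio is the tracepoint's priority field):

def sched__sched_process_exit(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, comm, pid, prio):
    timestamp = nsecs(common_secs, common_nsecs)
    if common_pid in tasks:
        task = tasks[common_pid]
        # close out the time accumulated in the task's current state
        # (a real implementation would pick the bucket matching task.mode)
        delta = timestamp - task.timestamp
        task.cpus[task.cpu].user += delta
        # mark the task exited so end-of-trace processing skips it
        task.mode = 'exit'
        task.timestamp = timestamp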

Output

The report displayed by curt is organized as follows. Each process is shown with its tasks; in one example trace, process 1497 has two tasks, 1497 and 1523. Each task has a per-CPU summary and a system-wide ("ALL" CPUs) summary. Each task's data is followed by the system call data for that task (if any), hypervisor call data (if any), and interrupt data (if any). After each process's respective tasks is a per-process summary. A task with a suffixed ID, such as 2857-0, is the previous task image before an exec() system call replaced the process image. After all processes is a system-wide summary. (The idle-time classification data is omitted here, as it makes the output very wide.)

Hurdles and issues

Following are some of the issues encountered in the development of curt.

Out-of-order events

One of the more challenging issues is the discovery that events in a perf.data file can be out of time order. For a program trying to monitor state transitions carefully, this is a serious issue. For example, a trace could include the following sequence of events, displayed as they appear in the trace file:
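A hypothetical illustration, with event details elided; note that the second event's timestamp precedes the first's:

time 14730.978331 cpu 1: sched:sched_switch [...]
time 14730.978305 cpu 0: raw_syscalls:sys_enter [...]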

Just blindly processing these events in the order they are presented to their respective event-handling functions (in the wrong time order) will result in incorrect statistics (or worse).

The most user-friendly ways to handle out-of-order events include:

Preventing traces from having out-of-order events in the first place, by changing the way perf record works

Providing a means to reorder events in a trace file, perhaps by enhancing perf inject

Modifying how perf script works to present the events to the event-handling functions in time order

But user-friendly is not the same as straightforward or easy. Also, none of these options is in the user's control.

I chose to implement a queue for incoming events that would be sufficiently deep to allow for proper reordering of all events. This required a significant redesign of the code, including implementation of classes for each event, and moving the event processing for each event type into a method in that event's class.

In the redesigned code, the actual event handlers' only job is to save the relevant data from the event into an instance of the event class, queue it, then process the top (oldest in time) event from the queue:
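A sketch of the queuing logic using Python's heapq (window_size and the event classes are assumptions; each handler constructs its event object and calls queue_event):

import heapq
from itertools import count

events = []        # min-heap ordered by (timestamp, arrival order)
arrival = count()  # tie-breaker so equal timestamps never compare event objects
window_size = 100  # configurable queue depth

def queue_event(event):
    heapq.heappush(events, (event.timestamp, next(arrival), event))
    if len(events) > window_size:
        _, _, oldest = heapq.heappop(events)
        oldest.process()  # dispatch to the event class's own processing method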

Note that the size of the queue is configurable, primarily for performance and to limit memory consumption. The script will report when the queue size is insufficient to eliminate out-of-order events. It is worth considering whether such a case should be treated as a catastrophic failure that terminates the program.

Implementing a class for each event type led to some consideration for refactoring, such that common code could coalesce into a base class:
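A sketch of such a base class (consistent with the data structures sketched earlier; curt's actual classes may differ):

class Event(object):
    def __init__(self, timestamp, cpu, tid, command, mode):
        self.timestamp = timestamp
        self.cpu = cpu
        self.tid = tid
        self.command = command
        self.mode = mode

    def process(self):
        # common code: find or create the task and its per-CPU data
        if self.tid not in tasks:
            tasks[self.tid] = Task(start_timestamp, self.command,
                                   self.mode, 'unknown')
        task = tasks[self.tid]
        if self.cpu not in task.cpus:
            task.cpus[self.cpu] = CPU()
            if task.cpu == 'unknown':
                task.cpu = self.cpu
        return task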

Further refactoring moved the common code that updates the relevant statistics on a task's state change, along with the state change itself, into a change_mode method of the Task class.

Start-of-trace timestamp

As mentioned above, for scripts that depend on elapsed time, there should be an easier way to get the first timestamp in the trace other than forcing every event-handling function to conditionally save its timestamp as the start-of-trace timestamp.

Awkward invocation

The syntax for invoking a perf Python script, including script parameters, is slightly awkward:
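As shown earlier, the script name must be passed with -s, and any script arguments must follow a -- separator:

$ perf script -s ./curt.py -- --script-arg [...]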

Simultaneous event registration required

Registering the sys_enter and sys_exit events one at a time, each with its own -e option on a perf record -a invocation, will result in a trace file that begins with the following series of events for a single task (the perf command itself):

sys_enter
sys_enter
sys_enter
…

This happens because perf registers the sys_enter event for every CPU on the system (because of the -a argument), then registers the sys_exit event for every CPU. In the latter case, since the sys_enter event has already been enabled for each CPU, that event shows up in the trace; but since the sys_exit event has not been enabled on each CPU until after the registration call returns, the sys_exit event does not show up in the trace. The reverse issue happens at the end of the trace file, with a series of sys_exit events in the trace because the sys_enter event has already been disabled.

The solution to this issue is to group the events, which is not well documented:
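A sketch of the grouping syntax, with braces around the comma-separated events:

$ perf record -e '{raw_syscalls:sys_enter,raw_syscalls:sys_exit}' ./command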

Relatedly, the awkward invocation described earlier can be avoided by having the script perform the record step itself, invoking perf on its own. A further enhancement is to proceed after the recording is complete and report the statistics from that recording:
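A sketch of that approach (option handling elided; the event list here is abbreviated and the overall flow is an assumption):

import subprocess
import sys

def record_and_report(command_args):
    events = '{raw_syscalls:sys_enter,raw_syscalls:sys_exit},sched:sched_switch'
    # record the requested command with the events curt needs
    subprocess.call(['perf', 'record', '--all-cpus',
                     '--event', events] + command_args)
    # then re-invoke perf to run this same script over the new perf.data
    subprocess.call(['perf', 'script', '-s', sys.argv[0]])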

Process IDs and perf API change

Process IDs are treated a bit cavalierly by perf scripting. Note well above that one of the common parameters for the generated event-handling functions is named common_pid. This is not the process ID, but the task ID. In fact, on many current Linux-based distributions, there is no way to determine a task's process ID from within a perf Python script. This presents a serious problem for a script that wants to compute statistics for a process.

Fortunately, in Linux kernel v4.14, an additional parameter was provided to each of the event-handling functions—perf_sample_dict—a dictionary from which the process ID could be extracted: (perf_sample_dict['sample']['pid']).

Unfortunately, current Linux distributions may not have that version of the Linux kernel. If the script is written to expect that extra parameter, the script will fail, with Python reporting a TypeError for the mismatched number of arguments.

Ideally, a means to automatically discover if the additional parameter is passed would be available to permit a script to easily run with both the old and new APIs and to take advantage of the new API if it is available. Unfortunately, such a means is not readily apparent.

Since there is clearly value in using the new API to determine process-wide statistics, curt provides a command line option to use the new API. curt then takes advantage of Python's lazy function binding to adjust, at run-time, which API to use:
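A sketch of this arrangement (handler bodies elided; params.api is an assumed holder for the parsed command line option):

def raw_syscalls__sys_enter_new(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, id, args, perf_sample_dict):
    pid = perf_sample_dict['sample']['pid']
    # ...common event handling, with the true process ID available...

def raw_syscalls__sys_enter_old(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, id, args):
    # reuse the new-API handler, faking the dictionary it expects
    raw_syscalls__sys_enter_new(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, id, args,
        {'sample': {'pid': 'unknown'}})

if params.api == 1:
    raw_syscalls__sys_enter = raw_syscalls__sys_enter_old
else:
    raw_syscalls__sys_enter = raw_syscalls__sys_enter_new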

Note that the event-handling function for the older API will make use of the function for the newer API, passing a statically defined dictionary containing just enough data such that accessing it as perf_sample_dict['sample']['pid'] will work (resulting in 'unknown').

Events reported on other CPUs

Not all events that refer to a task are reported from a CPU on which the task is running. This could result in an artificially high migration count and other incorrect statistics. For these types of events (the sched_stat family, for example), the event's CPU is ignored.

Explicit migrations (no sched_migrate event)

While there is conveniently an event for when the kernel decides to migrate a task from one CPU to another, there is no event for when the task requests a migration on its own. These are effected by system calls (sched_setaffinity), so the sys_exit event handler must compare the event CPU to the task's CPU, and if different, presume a migration has occurred. (This is described above, but repeated here in the "issues" section for completeness.)

Mapping system call IDs to names is architecture-specific

System calls are identified in events only as unique numeric identifiers. These identifiers are not readily interpreted by humans in the report. These numeric identifiers are not readily mapped to their mnemonics because they are architecture-specific, and new system calls can be added in newer kernels. Fortunately, perf provides a means to map system call numeric identifiers to system call names. A simple example follows:
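A simple example, using the syscall_name helper from perf's bundled Util module (the generated scripts' usual sys.path setup for perf's modules is elided):

from Util import syscall_name

def raw_syscalls__sys_enter(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, id, args):
    # map the numeric identifier to a human-readable name
    print("%s => %s" % (id, syscall_name(id)))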

Unfortunately, using syscall_name introduces a dependency on the audit Python bindings. This dependency is being removed in upstream versions of perf.

Mapping hypervisor call IDs to names is non-existent

Similar to system calls, hypervisor calls are also identified only with numeric identifiers. For IBM's POWER hypervisor, they are statically defined. Unfortunately, perf does not provide a means to map hypervisor call identifiers to mnemonics. curt includes a (hardcoded) function to do just that:
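An abbreviated sketch (opcode values per the Power Architecture hypervisor call specification; curt's actual table is much longer):

hcall_to_name = {
    '0x4': 'H_REMOVE',
    '0x8': 'H_ENTER',
    '0xc': 'H_READ',
    # [...many more entries...]
}

def hcall_name(opcode):
    try:
        return hcall_to_name[hex(opcode)]
    except KeyError:
        # fall back to the raw number for unknown calls
        return str(opcode)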

Command strings as bytearrays

perf stores command names and string arguments in Python bytearrays. Unfortunately, printing bytearrays in Python prints every character in the bytearray—even if the string is null-terminated. For example:
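A small helper to work around this, a sketch:

def null_terminated(s):
    # truncate a bytearray at the first NUL character, if any
    idx = s.find(b'\x00')
    return s if idx < 0 else s[:idx]

This can be applied to command names and string arguments before printing them or using them as keys.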

Dynamic mappings, like IRQ number to name

Dissimilar to system calls and hypervisor calls, interrupt numbers (IRQs) are dynamically assigned by the kernel on demand, so there can't be a static table mapping an IRQ number to a name. Fortunately, perf passes the name to the event's irq_handler_entry routine. This allows a script to create a dictionary that maps the IRQ number to a name:
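A sketch (the tracepoint provides irq and name fields; null_terminated is the helper above):

irq_to_name = {}

def irq__irq_handler_entry(event_name, context, common_cpu,
        common_secs, common_nsecs, common_pid, common_comm,
        common_callchain, irq, name):
    # remember the name for this IRQ number
    irq_to_name[irq] = null_terminated(name)
    # ...handle the state transition into interrupt processing...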

Somewhat oddly, perf does not pass the name to the irq_handler_exit routine, so it is possible that a trace sees only an irq_handler_exit for an IRQ, and the tool must tolerate that. Here, if no name has been seen, the IRQ number is returned as a string:
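A sketch of the lookup with that fallback:

def irq_name(irq):
    if irq in irq_to_name:
        return irq_to_name[irq]
    return str(irq)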

Task 0

Task 0 shows up everywhere. It's not a real task. It's a substitute for the "idle" state. It's the task ID given to the sched_switch event handler when the CPU is going to (or coming from) the "idle" state. It's often the task that is "interrupted" by interrupts. Tracking the statistics for task 0 as if it were a real task would not make sense. Currently, curt ignores task 0. However, this loses some information, like some time spent in interrupt processing. curt should, but currently doesn't, track interesting (non-idle) time for task 0.

Spurious sched_migrate_task events (same CPU)

Rarely, a sched_migrate_task event occurs in which the source and target CPUs are the same. In other words, the task is not migrated. To avoid artificially inflated migration counts, this case must be explicitly ignored:
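A sketch, inside the sched:sched_migrate_task handling (the tracepoint provides orig_cpu and dest_cpu fields):

if orig_cpu == dest_cpu:
    return  # not a real migration; do not count it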

exec

The semantics of the exec system call are that the image of the current process is replaced by a completely new process image without changing the process ID. This is awkward for tracking the statistics of a process (really, a task) based on the process (task) ID. The change is significant enough that the statistics for each task should be accumulated separately, so the current task's statistics need to be closed out and a new set of statistics should be initialized. The challenge is that both the old and new tasks have the same process (task) ID. curt addresses this by tagging the task's task ID with a numeric suffix:
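A sketch of the renaming (curt's exact scheme may differ):

suffix = 0
while True:
    old_tid = str(tid) + '-' + str(suffix)
    if old_tid not in tasks:
        break
    suffix += 1
# preserve the old image's statistics under the suffixed ID...
tasks[old_tid] = tasks[tid]
# ...and start fresh statistics for the new image under the original ID
del tasks[tid]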

This clearly separates the statistics for the different process images. For example, if the perf command (task "9614-0") exec'd exec.sh (task "9614-1"), which in turn exec'd itself (task "9614"), each image's statistics would be reported separately.

Distribution support

Surprisingly, there is currently no support for perf's Python bindings in Ubuntu; the saga is documented in a long-running distribution bug report.

Limit on number of traced events

As curt gets more sophisticated, more and more events may need to be included in the trace file. perf currently requires one file descriptor per event per CPU. This becomes a problem when the maximum number of open file descriptors is not a large multiple of the number of CPUs on the system; on systems with large numbers of CPUs, the limit is quickly reached. For example, the default maximum number of open file descriptors is often 1,024. An IBM POWER8 system with four sockets may have 12 cores per socket and eight threads (CPUs) per core. Such a system has 4 * 12 * 8 = 384 CPUs. In that case, perf could trace only about two events! A workaround is to (significantly) increase the maximum number of open file descriptors, using ulimit -n if the system administrator has configured the hard limits high enough, or by having the administrator set the limits higher in /etc/security/limits.conf for nofile.
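For example, lines like these in /etc/security/limits.conf raise the limit for all users (values illustrative):

*    soft    nofile    20000
*    hard    nofile    20000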

Summary

I hope this article shows the power of perf—and specifically the utility and flexibility of the Python scripting enabled with perf—to perform sophisticated processing of kernel trace data. Also, it shows some of the issues and edge cases that can be encountered when the boundaries of such technologies are tested.

Please feel free to download and make use of the curt tool described here, report problems, suggest improvements, or contribute code of your own on the curt GitHub page.
