The goal of this documentation is to comprehensively explain the Node.js
API, both from a reference as well as a conceptual point of view. Each
section describes a built-in module or high-level concept.

Where appropriate, property types, method arguments, and the arguments
provided to event handlers are detailed in a list underneath the topic
heading.

Every file is generated based on the corresponding .md file in the
doc/api/ folder in Node.js's source tree. The documentation is generated
using the tools/doc/generate.js program. An HTML template is located at
doc/template.html.

Throughout the documentation are indications of a section's
stability. The Node.js API is still somewhat changing, and as it
matures, certain parts are more reliable than others. Some are so
proven, and so relied upon, that they are unlikely to ever change at
all. Others are brand new and experimental, or known to be hazardous
and in the process of being redesigned.

Stability: 1 - Experimental. This feature is still under active development
and subject to non-backward compatible changes or removal in any future
version. Use of the feature is not recommended in production environments.
Experimental features are not subject to the Node.js Semantic Versioning
model.

Stability: 2 - Stable. Compatibility with the npm ecosystem is a high
priority.

Caution must be used when making use of Experimental features, particularly
within modules that may be used as dependencies (or dependencies of
dependencies) within a Node.js application. End users may not be aware that
experimental features are being used, and therefore may experience unexpected
failures or behavior changes when API modifications occur. To help avoid such
surprises, Experimental features may require a command-line flag to
explicitly enable them, or may cause a process warning to be emitted.
By default, such warnings are printed to stderr and may be handled by
attaching a listener to the 'warning' event.

Every .html document has a corresponding .json document presenting
the same information in a structured manner. This feature is
experimental, and added for the benefit of IDEs and other utilities that
wish to do programmatic things with the documentation.

System calls like open(2) and read(2) define the interface between user programs
and the underlying operating system. Node.js functions
which simply wrap a syscall,
like fs.open(), will document that. The docs link to the corresponding man
pages (short for manual pages) which describe how the syscalls work.

Most Unix syscalls have Windows equivalents, but behavior may differ on Windows
relative to Linux and macOS. For an example of the subtle ways in which it's
sometimes impossible to replace Unix syscall semantics on Windows, see Node.js
issue 4760.

An example of a web server written with Node.js which responds with
'Hello, World!':

Commands displayed in this document are shown starting with $ or >
to replicate how they would appear in a user's terminal.
Do not include the $ and > characters. They are there to
indicate the start of each command.

There are many tutorials and examples that follow this
convention: $ or > for commands run as a regular user, and #
for commands that should be executed as an administrator.

Lines that don’t start with a $ or > character typically show
the output of the previous command.

Firstly, make sure to have downloaded and installed Node.js.
See this guide for further install information.

Now, create an empty project folder called projects, then navigate into it.
The project folder can be named based on the user's current project title, but
this example will use projects as the project folder.

Linux and Mac:

$ mkdir ~/projects
$ cd ~/projects

Windows CMD:

> mkdir %USERPROFILE%\projects
> cd %USERPROFILE%\projects

Windows PowerShell:

> mkdir $env:USERPROFILE\projects
> cd $env:USERPROFILE\projects

Next, create a new source file in the projects
folder and call it hello-world.js.

In Node.js it is considered good style to use
hyphens (-) or underscores (_) to separate
multiple words in filenames.

Open hello-world.js in any preferred text editor and
paste in the following content:

message<string> If provided, the error message is going to be set to this
value.

actual<any> The actual property on the error instance is going to
contain this value. Internally used for the actual error input in case
e.g., assert.strictEqual() is used.

expected<any> The expected property on the error instance is going to
contain this value. Internally used for the expected error input in case
e.g., assert.strictEqual() is used.

operator<string> The operator property on the error instance is going
to contain this value. Internally used to indicate what operation was used
for comparison (or what assertion function triggered the error).

stackStartFn<Function> If provided, the generated stack trace is going to
remove all frames up to the provided function.

If the values are not equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the message
parameter is undefined, a default error message is assigned. If the message
parameter is an instance of an Error then it will be thrown instead of the
AssertionError.

If the values are not equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the message
parameter is undefined, a default error message is assigned. If the message
parameter is an instance of an Error then it will be thrown instead of the
AssertionError.

Awaits the asyncFn promise or, if asyncFn is a function, immediately
calls the function and awaits the returned promise to complete. It will then
check that the promise is not rejected.

If asyncFn is a function and it throws an error synchronously,
assert.doesNotReject() will return a rejected Promise with that error. If
the function does not return a promise, assert.doesNotReject() will return a
rejected Promise with an ERR_INVALID_RETURN_VALUE error. In both cases
the error handler is skipped.

Using assert.doesNotReject() is actually not useful because there is little
benefit in catching a rejection and then rejecting it again. Instead, consider
adding a comment next to the specific code path that should not reject and keep
error messages as expressive as possible.

Using assert.doesNotThrow() is actually not useful because there
is no benefit in catching an error and then rethrowing it. Instead, consider
adding a comment next to the specific code path that should not throw and keep
error messages as expressive as possible.

When assert.doesNotThrow() is called, it will immediately call the fn
function.

If an error is thrown and it is the same type as that specified by the error
parameter, then an AssertionError is thrown. If the error is of a different
type, or if the error parameter is undefined, the error is propagated back
to the caller.

If the values are not equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the message
parameter is undefined, a default error message is assigned. If the message
parameter is an instance of an Error then it will be thrown instead of the
AssertionError.

If message is falsy, the error message is set as the values of actual and
expected separated by the provided operator. If just the two actual and
expected arguments are provided, operator will default to '!='. If
message is provided as third argument it will be used as the error message and
the other arguments will be stored as properties on the thrown object. If
stackStartFn is provided, all stack frames above that function will be
removed from stacktrace (see Error.captureStackTrace). If no arguments are
given, the default message Failed will be used.

Throws value if value is not undefined or null. This is useful when
testing the error argument in callbacks. The stack trace contains all frames
from the error passed to ifError() including the potential new frames for
ifError() itself.

If the values are deeply equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the message
parameter is undefined, a default error message is assigned. If the message
parameter is an instance of an Error then it will be thrown instead of the
AssertionError.

If the values are deeply and strictly equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If the
message parameter is undefined, a default error message is assigned. If the
message parameter is an instance of an Error then it will be thrown
instead of the AssertionError.

If the values are equal, an AssertionError is thrown with a message property
set equal to the value of the message parameter. If the message parameter is
undefined, a default error message is assigned. If the message parameter is an
instance of an Error then it will be thrown instead of the
AssertionError.

If the values are strictly equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the message
parameter is undefined, a default error message is assigned. If the message
parameter is an instance of an Error then it will be thrown instead of the
AssertionError.

Tests if value is truthy. It is equivalent to
assert.equal(!!value, true, message).

If value is not truthy, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the message
parameter is undefined, a default error message is assigned. If the message
parameter is an instance of an Error then it will be thrown instead of the
AssertionError.
If no arguments are passed in at all message will be set to the string:
'No value argument passed to `assert.ok()`'.

Be aware that in the REPL the error message will be different from the one
thrown in a file! See below for further details.

Awaits the asyncFn promise or, if asyncFn is a function, immediately
calls the function and awaits the returned promise to complete. It will then
check that the promise is rejected.

If asyncFn is a function and it throws an error synchronously,
assert.rejects() will return a rejected Promise with that error. If the
function does not return a promise, assert.rejects() will return a rejected
Promise with an ERR_INVALID_RETURN_VALUE error. In both cases the error
handler is skipped.

Besides the async nature of awaiting the completion, it behaves identically to
assert.throws().

If specified, error can be a Class, RegExp, a validation function,
an object where each property will be tested for, or an instance of error where
each property will be tested for including the non-enumerable message and
name properties.

If specified, message will be the message provided by the AssertionError if
the asyncFn fails to reject.

Note that error cannot be a string. If a string is provided as the second
argument, then error is assumed to be omitted and the string will be used for
message instead. This can lead to easy-to-miss mistakes. Please read the
example in assert.throws() carefully if using a string as the second
argument is being considered.

If the values are not strictly equal, an AssertionError is thrown with a
message property set equal to the value of the message parameter. If the
message parameter is undefined, a default error message is assigned. If the
message parameter is an instance of an Error then it will be thrown
instead of the AssertionError.

If specified, error can be a Class, RegExp, a validation function,
a validation object where each property will be tested for strict deep equality,
or an instance of error where each property will be tested for strict deep
equality including the non-enumerable message and name properties. When
using an object, it is also possible to use a regular expression, when
validating against a string property. See below for examples.

If specified, message will be appended to the message provided by the
AssertionError if the fn call fails to throw or in case the error validation
fails.

Note that error cannot be a string. If a string is provided as the second
argument, then error is assumed to be omitted and the string will be used for
message instead. This can lead to easy-to-miss mistakes. Using the same
message as the thrown error message is going to result in an
ERR_AMBIGUOUS_ARGUMENT error. Please read the example below carefully if using
a string as the second argument is being considered:

function throwingFirst() {
  throw new Error('First');
}

function throwingSecond() {
  throw new Error('Second');
}

function notThrowing() {}

// The second argument is a string and the input function threw an Error.
// The first case will not throw as it does not match for the error message
// thrown by the input function!
assert.throws(throwingFirst, 'Second');

// In the next example the message has no benefit over the message from the
// error and since it is not clear if the user intended to actually match
// against the error message, Node.js throws an `ERR_AMBIGUOUS_ARGUMENT` error.
assert.throws(throwingSecond, 'Second');
// TypeError [ERR_AMBIGUOUS_ARGUMENT]

// The string is only used (as message) in case the function does not throw:
assert.throws(notThrowing, 'Second');
// AssertionError [ERR_ASSERTION]: Missing expected exception: Second

// If it was intended to match for the error message do this instead:
// It does not throw because the error messages match.
assert.throws(throwingSecond, /Second$/);

// If the error message does not match, the error from within the function is
// not caught.
assert.throws(throwingFirst, /Second$/);
// Error: First
//     at throwingFirst (repl:2:9)

Due to the confusing notation, it is recommended not to use a string as the
second argument, as this might lead to difficult-to-spot errors.

An asynchronous resource represents an object with an associated callback.
This callback may be called multiple times, for example, the 'connection'
event in net.createServer(), or just a single time like in fs.open().
A resource can also be closed before the callback is called. AsyncHook does
not explicitly distinguish between these different cases but will represent them
as the abstract concept that is a resource.

If Workers are used, each thread has an independent async_hooks
interface, and each thread will use a new set of async IDs.

const async_hooks = require('async_hooks');

// Return the ID of the current execution context.
const eid = async_hooks.executionAsyncId();

// Return the ID of the handle responsible for triggering the callback of the
// current execution scope.
const tid = async_hooks.triggerAsyncId();

// Create a new AsyncHook instance. All of these callbacks are optional.
const asyncHook =
  async_hooks.createHook({ init, before, after, destroy, promiseResolve });

// Allow callbacks of this AsyncHook instance to be called. This is not an
// implicit action after running the constructor, and must be explicitly run
// to begin executing callbacks.
asyncHook.enable();

// Disable listening for new asynchronous events.
asyncHook.disable();

//
// The following are the callbacks that can be passed to createHook().
//

// init() is called during object construction. The resource may not have
// completed construction when this callback runs, therefore all fields of the
// resource referenced by "asyncId" may not have been populated.
function init(asyncId, type, triggerAsyncId, resource) { }

// before() is called just before the resource's callback is called. It can be
// called 0-N times for handles (e.g. TCPWrap), and will be called exactly 1
// time for requests (e.g. FSReqCallback).
function before(asyncId) { }

// after() is called just after the resource's callback has finished.
function after(asyncId) { }

// destroy() is called when an AsyncWrap instance is destroyed.
function destroy(asyncId) { }

// promiseResolve() is called only for promise resources, when the
// `resolve` function passed to the `Promise` constructor is invoked
// (either directly or through other means of resolving a promise).
function promiseResolve(asyncId) { }

Registers functions to be called for different lifetime events of each async
operation.

The callbacks init()/before()/after()/destroy() are called for the
respective asynchronous event during a resource's lifetime.

All callbacks are optional. For example, if only resource cleanup needs to
be tracked, then only the destroy callback needs to be passed. The
specifics of all functions that can be passed to callbacks are detailed in the
Hook Callbacks section.

If any AsyncHook callbacks throw, the application will print the stack trace
and exit. The exit path does follow that of an uncaught exception, but
all 'uncaughtException' listeners are removed, thus forcing the process to
exit. The 'exit' callbacks will still be called unless the application is run
with --abort-on-uncaught-exception, in which case a stack trace will be
printed and the application exits, leaving a core file.

The reason for this error handling behavior is that these callbacks are running
at potentially volatile points in an object's lifetime, for example during
class construction and destruction. Because of this, it is deemed necessary to
bring down the process quickly in order to prevent an unintentional abort in the
future. This is subject to change in the future if a comprehensive analysis is
performed to ensure an exception can follow the normal control flow without
unintentional side effects.

Because printing to the console is an asynchronous operation, console.log()
will cause the AsyncHooks callbacks to be called. Using console.log() or
similar asynchronous operations inside an AsyncHooks callback function will thus
cause an infinite recursion. An easy solution to this when debugging is to use a
synchronous logging operation such as fs.writeFileSync(file, msg, flag).
This will print to the file and will not invoke AsyncHooks recursively because
it is synchronous.

If an asynchronous operation is needed for logging, it is possible to keep
track of what caused the asynchronous operation using the information
provided by AsyncHooks itself. The logging should then be skipped when
it was the logging itself that caused the AsyncHooks callback to be called. By
doing this, the otherwise infinite recursion is broken.

triggerAsyncId<number> The unique ID of the async resource in whose
execution context this async resource was created.

resource<Object> Reference to the resource representing the async
operation, needs to be released during destroy.

Called when a class is constructed that has the possibility to emit an
asynchronous event. This does not mean the instance must call
before/after before destroy is called, only that the possibility
exists.

This behavior can be observed by doing something like opening a resource then
closing it before the resource can be used. The following snippet demonstrates
this.

triggerAsyncId is the asyncId of the resource that caused (or "triggered")
the new resource to initialize and that caused init to be called. This is
different from async_hooks.executionAsyncId(), which only shows when a resource
was created, while triggerAsyncId shows why a resource was created.

The TCPWRAP is the new connection from the client. When a new
connection is made, the TCPWrap instance is immediately constructed. This
happens outside of any JavaScript stack. (An executionAsyncId() of 0 means
that it is being executed from C++ with no JavaScript stack above it.) With only
that information, it would be impossible to link resources together in
terms of what caused them to be created, so triggerAsyncId is given the task
of propagating what resource is responsible for the new resource's existence.

resource is an object that represents the actual async resource that has
been initialized. This can contain useful information that can vary based on
the value of type. For instance, for the GETADDRINFOREQWRAP resource type,
resource provides the hostname used when looking up the IP address for the
host in net.Server.listen(). The API for accessing this information is
currently not considered public, but using the Embedder API, users can provide
and document their own resource objects. For example, such a resource object
could contain the SQL query being executed.

In the case of Promises, the resource object will have a promise property
that refers to the Promise that is being initialized, and an
isChainedPromise property, set to true if the promise has a parent promise,
and false otherwise. For example, in the case of b = a.then(handler), a is
considered a parent Promise of b. Here, b is considered a chained promise.

In some cases the resource object is reused for performance reasons; it is
thus not safe to use it as a key in a WeakMap or add properties to it.

The following is an example with additional information about the calls to
init between the before and after calls, specifically what the
callback to listen() will look like. The output formatting is slightly more
elaborate to make calling context easier to see.

As illustrated in the example, executionAsyncId() and execution each specify
the value of the current execution context, which is delineated by calls to
before and after.

Only using execution to graph resource allocation results in the following:

Timeout(7) -> TickObject(6) -> root(1)

The TCPSERVERWRAP is not part of this graph, even though it was the reason for
console.log() being called. This is because binding to a port without a
hostname is a synchronous operation, but to maintain a completely asynchronous
API the user's callback is placed in a process.nextTick().

The graph only shows when a resource was created, not why, so to track
the why use triggerAsyncId.

When an asynchronous operation is initiated (such as a TCP server receiving a
new connection) or completes (such as writing data to disk) a callback is
called to notify the user. The before callback is called just before said
callback is executed. asyncId is the unique identifier assigned to the
resource about to execute the callback.

The before callback will be called 0 to N times. The before callback
will typically be called 0 times if the asynchronous operation was cancelled
or, for example, if no connections are received by a TCP server. Persistent
asynchronous resources like a TCP server will typically call the before
callback multiple times, while other operations like fs.open() will call
it only once.

Called after the resource corresponding to asyncId is destroyed. It is also
called asynchronously from the embedder API emitDestroy().

Some resources depend on garbage collection for cleanup, so if a reference is
made to the resource object passed to init it is possible that destroy
will never be called, causing a memory leak in the application. If the resource
does not depend on garbage collection, then this will not be an issue.

The ID returned from executionAsyncId() is related to execution timing, not
causality (which is covered by triggerAsyncId()):

const server = net.createServer((conn) => {
  // Returns the ID of the server, not of the new connection, because the
  // callback runs in the execution scope of the server's MakeCallback().
  async_hooks.executionAsyncId();
}).listen(port, () => {
  // Returns the ID of a TickObject (i.e. process.nextTick()) because all
  // callbacks passed to .listen() are wrapped in a nextTick().
  async_hooks.executionAsyncId();
});

Returns: <number> The ID of the resource responsible for calling the callback
that is currently being executed.

const server = net.createServer((conn) => {
  // The resource that caused (or triggered) this callback to be called
  // was that of the new connection. Thus the return value of triggerAsyncId()
  // is the asyncId of "conn".
  async_hooks.triggerAsyncId();
}).listen(port, () => {
  // Even though all callbacks passed to .listen() are wrapped in a nextTick()
  // the callback itself exists because the call to the server's .listen()
  // was made. So the return value would be the ID of the server.
  async_hooks.triggerAsyncId();
});

By default, promise executions are not assigned asyncIds due to the relatively
expensive nature of the promise introspection API provided by
V8. This means that programs using promises or async/await will not get
correct execution and trigger IDs for promise callback contexts by default.

Observe that the then() callback claims to have executed in the context of the
outer scope even though there was an asynchronous hop involved. Also note that
the triggerAsyncId value is 0, which means that we are missing context about
the resource that caused (triggered) the then() callback to be executed.

In this example, adding any actual hook function enabled the tracking of
promises. There are two promises in the example above: the promise created by
Promise.resolve() and the promise returned by the call to then(). In the
example above, the first promise got the asyncId 6 and the latter got
asyncId 7. During the execution of the then() callback, we are executing
in the context of promise with asyncId 7. This promise was triggered by
async resource 6.

Another subtlety with promises is that before and after callbacks are run
only on chained promises. That means promises not created by then()/catch()
will not have the before and after callbacks fired on them. For more details
see the details of the V8 PromiseHooks API.

Library developers that handle their own asynchronous resources performing tasks
like I/O, connection pooling, or managing callback queues may use the
AsyncWrap JavaScript API so that all the appropriate callbacks are called.

triggerAsyncId<number> The ID of the execution context that created this
async event. Default: executionAsyncId().

requireManualDestroy<boolean> Disables automatic emitDestroy when the
object is garbage collected. This usually does not need to be set (even if
emitDestroy is called manually), unless the resource's asyncId is
retrieved and the sensitive API's emitDestroy is called with it.
Default: false.

Call the provided function with the provided arguments in the execution context
of the async resource. This will establish the context, trigger the AsyncHooks
before callbacks, call the function, trigger the AsyncHooks after callbacks, and
then restore the original execution context.

Call all before callbacks to notify that a new asynchronous execution context
is being entered. If nested calls to emitBefore() are made, the stack of
asyncIds will be tracked and properly unwound.

before and after calls must be unwound in the same order that they
are called. Otherwise, an unrecoverable exception will occur and the process
will abort. For this reason, the emitBefore and emitAfter APIs are
considered deprecated. Please use runInAsyncScope, as it provides a much safer
alternative.

Call all after callbacks. If nested calls to emitBefore() were made, then
make sure the stack is unwound properly. Otherwise an error will be thrown.

If the user's callback throws an exception, emitAfter() will automatically be
called for all asyncIds on the stack if the error is handled by a domain or
'uncaughtException' handler.

before and after calls must be unwound in the same order that they
are called. Otherwise, an unrecoverable exception will occur and the process
will abort. For this reason, the emitBefore and emitAfter APIs are
considered deprecated. Please use runInAsyncScope, as it provides a much safer
alternative.

Call all destroy hooks. This should only ever be called once. An error will
be thrown if it is called more than once. This must be manually called. If
the resource is left to be collected by the GC then the destroy hooks will
never be called.

Prior to the introduction of TypedArray, the JavaScript language had no
mechanism for reading or manipulating streams of binary data. The Buffer class
was introduced as part of the Node.js API to enable interaction with octet
streams in TCP streams, file system operations, and other contexts.

With TypedArray now available, the Buffer class implements the
Uint8Array API in a manner that is more optimized and suitable for Node.js.

Instances of the Buffer class are similar to arrays of integers but
correspond to fixed-sized, raw memory allocations outside the V8 heap.
The size of the Buffer is established when it is created and cannot be
changed.

The Buffer class is within the global scope, making it unlikely that one
would need to ever use require('buffer').Buffer.

In versions of Node.js prior to 6.0.0, Buffer instances were created using the
Buffer constructor function, which allocates the returned Buffer
differently based on what arguments are provided:

Passing a number as the first argument to Buffer() (e.g. new Buffer(10))
allocates a new Buffer object of the specified size. Prior to Node.js 8.0.0,
the memory allocated for such Buffer instances is not initialized and
can contain sensitive data. Such Buffer instances must be subsequently
initialized by using either buf.fill(0) or by writing to the
entire Buffer. While this behavior is intentional to improve performance,
development experience has demonstrated that a more explicit distinction is
required between creating a fast-but-uninitialized Buffer versus creating a
slower-but-safer Buffer. Starting in Node.js 8.0.0, Buffer(num) and
new Buffer(num) will return a Buffer with initialized memory.

Passing a string, array, or Buffer as the first argument copies the
passed object's data into the Buffer.

Because the behavior of new Buffer() is different depending on the type of the
first argument, security and reliability issues can be inadvertently introduced
into applications when argument validation or Buffer initialization is not
performed.

To make the creation of Buffer instances more reliable and less error-prone,
the various forms of the new Buffer() constructor have been deprecated
and replaced by separate Buffer.from(), Buffer.alloc(), and
Buffer.allocUnsafe() methods.

Developers should migrate all existing uses of the new Buffer() constructors
to one of these new APIs.
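As a sketch, the migration maps as follows (variable names are illustrative):

```javascript
// Instead of new Buffer(10):
const zeroed = Buffer.alloc(10);      // zero-filled, safe
const fast = Buffer.allocUnsafe(10);  // uninitialized; overwrite before use
fast.fill(0);

// Instead of new Buffer('hello') or new Buffer([1, 2, 3]):
const fromString = Buffer.from('hello');
const fromArray = Buffer.from([1, 2, 3]);
```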

Node.js can be started using the --zero-fill-buffers command line option to
cause all newly allocated Buffer instances to be zero-filled upon creation by
default, including buffers returned by new Buffer(size),
Buffer.allocUnsafe(), Buffer.allocUnsafeSlow(), and new SlowBuffer(size).
Use of this flag can have a significant negative impact on
performance. Use of the --zero-fill-buffers option is recommended only when
necessary to enforce that newly allocated Buffer instances cannot contain old
data that is potentially sensitive.

What makes `Buffer.allocUnsafe()` and `Buffer.allocUnsafeSlow()` "unsafe"?

When calling Buffer.allocUnsafe() and Buffer.allocUnsafeSlow(), the
segment of allocated memory is uninitialized (it is not zeroed-out). While
this design makes the allocation of memory quite fast, the allocated segment of
memory might contain old data that is potentially sensitive. Using a Buffer
created by Buffer.allocUnsafe() without completely overwriting the memory
can allow this old data to be leaked when the Buffer memory is read.

While there are clear performance advantages to using Buffer.allocUnsafe(),
extra care must be taken in order to avoid introducing security
vulnerabilities into an application.

'base64' - Base64 encoding. When creating a Buffer from a string,
this encoding will also correctly accept "URL and Filename Safe Alphabet" as
specified in RFC4648, Section 5.

'latin1' - A way of encoding the Buffer into a one-byte encoded string
(as defined by the IANA in RFC1345,
page 63, to be the Latin-1 supplement block and C0/C1 control codes).

'binary' - Alias for 'latin1'.

'hex' - Encode each byte as two hexadecimal characters.
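For illustration, a short round-trip through two of these encodings:

```javascript
const buf = Buffer.from('hello');

console.log(buf.toString('hex'));    // '68656c6c6f'
console.log(buf.toString('base64')); // 'aGVsbG8='

// Decoding reverses the process:
console.log(Buffer.from('68656c6c6f', 'hex').toString()); // 'hello'
```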

Modern Web browsers follow the WHATWG Encoding Standard which aliases
both 'latin1' and 'ISO-8859-1' to 'win-1252'. This means that while doing
something like http.get(), if the returned charset is one of those listed in
the WHATWG specification it is possible that the server actually returned
'win-1252'-encoded data, and using 'latin1' encoding may incorrectly decode
the characters.

The Buffer object's memory is interpreted as an array of distinct
elements, and not as a byte array of the target type. That is,
new Uint32Array(Buffer.from([1, 2, 3, 4])) creates a 4-element Uint32Array
with elements [1, 2, 3, 4], not a Uint32Array with a single element
[0x1020304] or [0x4030201].
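
A small runnable sketch of this distinction (the byte values are arbitrary):

```javascript
// Each byte of the Buffer becomes one element of the Uint32Array;
// the bytes are not reinterpreted as a single 32-bit value.
const buf = Buffer.from([1, 2, 3, 4]);
const arr = new Uint32Array(buf);

console.log(arr.length); // 4
console.log(arr[3]);     // 4
```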

It is possible to create a new Buffer that shares the same allocated memory as
a TypedArray instance by using the TypedArray object's .buffer property.

Buffer.from() and TypedArray.from() have different signatures and
implementations. Specifically, the TypedArray variants accept a second
argument that is a mapping function that is invoked on every element of the
typed array:

TypedArray.from(source[, mapFn[, thisArg]])

The Buffer.from() method, however, does not support the use of a mapping
function:
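
For illustration, the difference can be sketched as follows (the values are
arbitrary):

```javascript
// TypedArray.from() applies the optional mapping function to each element.
const doubled = Uint8Array.from([1, 2, 3], (x) => x * 2);
// doubled contains 2, 4, 6

// Buffer.from() takes no mapping function; map the source array first.
const buf = Buffer.from([1, 2, 3].map((x) => x * 2));
// buf is <Buffer 02 04 06>
```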

length <integer> Number of bytes to expose.
Default: arrayBuffer.length - byteOffset.

This creates a view of the ArrayBuffer or SharedArrayBuffer without
copying the underlying memory. For example, when passed a reference to the
.buffer property of a TypedArray instance, the newly created Buffer will
share the same allocated memory as the TypedArray.

The optional byteOffset and length arguments specify a memory range within
the arrayBuffer that will be shared by the Buffer.

Prior to Node.js 8.0.0, the underlying memory for Buffer instances
created in this way was not initialized. The contents of a newly created
Buffer were unknown and could contain sensitive data. Use
Buffer.alloc(size) instead to initialize a Buffer
with zeroes.

Creates a new Buffer containing string. The encoding parameter identifies
the character encoding of string.

const buf1 = new Buffer('this is a tést');
const buf2 = new Buffer('7468697320697320612074c3a97374', 'hex');
console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('ascii'));
// Prints: this is a tC)st

The underlying memory for Buffer instances created in this way is not
initialized. The contents of the newly created Buffer are unknown and
may contain sensitive data. Use Buffer.alloc() instead to initialize
Buffer instances with zeroes.

Note that the Buffer module pre-allocates an internal Buffer instance of
size Buffer.poolSize that is used as a pool for the fast allocation of new
Buffer instances created using Buffer.allocUnsafe() and the deprecated
new Buffer(size) constructor only when size is less than or equal to
Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two).

Use of this pre-allocated internal memory pool is a key difference between
calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill).
Specifically, Buffer.alloc(size, fill) will never use the internal Buffer
pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal
Buffer pool if size is less than or equal to half Buffer.poolSize. The
difference is subtle but can be important when an application requires the
additional performance that Buffer.allocUnsafe() provides.
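
A small sketch of the two call patterns (the size and fill value are
arbitrary):

```javascript
// Both end up filled with 'x'; only the allocation strategy differs.
const a = Buffer.alloc(10, 'x');            // zero-filled path, never pooled
const b = Buffer.allocUnsafe(10).fill('x'); // may be sliced from the pool

console.log(a.equals(b)); // true
```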

The underlying memory for Buffer instances created in this way is not
initialized. The contents of the newly created Buffer are unknown and
may contain sensitive data. Use buf.fill(0) to initialize
such Buffer instances with zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances,
allocations under 4KB are sliced from a single pre-allocated Buffer. This
allows applications to avoid the garbage collection overhead of creating many
individually allocated Buffer instances. This approach improves both
performance and memory usage by eliminating the need to track and clean up as
many persistent objects.

However, in the case where a developer may need to retain a small chunk of
memory from a pool for an indeterminate amount of time, it may be appropriate
to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() and
then copying out the relevant bits.
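
The copy-out pattern can be sketched as follows (the sizes and contents are
illustrative):

```javascript
// Keep a small chunk for a long time: copy it out of the (possibly pooled)
// Buffer into an un-pooled allocation so the shared pool can be reclaimed.
const pooled = Buffer.allocUnsafe(10).fill('a'); // may come from the pool
const retained = Buffer.allocUnsafeSlow(10);     // never pooled
pooled.copy(retained);

console.log(retained.toString()); // 'aaaaaaaaaa'
```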

Returns the actual byte length of a string. This is not the same as
String.prototype.length since that returns the number of characters in
a string.

For 'base64' and 'hex', this function assumes valid input. For strings that
contain non-Base64/Hex-encoded data (e.g. whitespace), the return value might be
greater than the length of a Buffer created from the string.
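
For example:

```javascript
// One character is not necessarily one byte: '½', '¼' and '¾' each
// occupy two bytes in UTF-8.
const str = '\u00bd + \u00bc = \u00be'; // '½ + ¼ = ¾'

console.log(str.length);                     // 9 characters
console.log(Buffer.byteLength(str, 'utf8')); // 12 bytes
```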

Returns a new Buffer which is the result of concatenating all the Buffer
instances in the list together.

If the list has no items, or if the totalLength is 0, then a new zero-length
Buffer is returned.

If totalLength is not provided, it is calculated from the Buffer instances
in list. This however causes an additional loop to be executed in order to
calculate the totalLength, so it is faster to provide the length explicitly if
it is already known.

If totalLength is provided, it is coerced to an unsigned integer. If the
combined length of the Buffers in list exceeds totalLength, the result is
truncated to totalLength.
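
A short sketch of both behaviors (the lengths are arbitrary):

```javascript
const buf1 = Buffer.alloc(4, 'a');
const buf2 = Buffer.alloc(6, 'b');

// totalLength omitted: computed with an extra loop over the list.
const all = Buffer.concat([buf1, buf2]);
console.log(all.length); // 10

// totalLength provided and smaller than the combined length: truncated.
const truncated = Buffer.concat([buf1, buf2], 8);
console.log(truncated.toString()); // 'aaaabbbb'
```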

length <integer> Number of bytes to expose.
Default: arrayBuffer.length - byteOffset.

This creates a view of the ArrayBuffer without copying the underlying
memory. For example, when passed a reference to the .buffer property of a
TypedArray instance, the newly created Buffer will share the same
allocated memory as the TypedArray.
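
A short sketch of the no-copy behavior (the values are arbitrary):

```javascript
const arr16 = new Uint16Array(2);
arr16[0] = 5000;
arr16[1] = 4000;

// Shares memory with `arr16`; no bytes are copied.
const view = Buffer.from(arr16.buffer);
arr16[1] = 6000; // the change is visible through `view`

// byteOffset and length select a sub-range of the ArrayBuffer (bytes 2-3).
const slice = Buffer.from(arr16.buffer, 2, 2);
console.log(slice.length); // 2
```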

Creates a new Buffer containing string. The encoding parameter identifies
the character encoding of string.

const buf1 = Buffer.from('this is a tést');
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('ascii'));
// Prints: this is a tC)st

The index operator [index] can be used to get and set the octet at position
index in buf. The values refer to individual bytes, so the legal value
range is between 0x00 and 0xFF (hex) or 0 and 255 (decimal).

This operator is inherited from Uint8Array, so its behavior on out-of-bounds
access is the same as for Uint8Array: getting returns undefined and
setting does nothing.

<integer> The byteOffset on the underlying ArrayBuffer object based on
which this Buffer object is created.

When byteOffset is set in Buffer.from(ArrayBuffer, byteOffset, length),
or sometimes when allocating a Buffer smaller than Buffer.poolSize, the
buffer does not start from a zero offset on the underlying ArrayBuffer.

This can cause problems when accessing the underlying ArrayBuffer directly
using buf.buffer, as the first bytes in this ArrayBuffer may be unrelated
to the buf object itself.

A common issue arises when casting a Buffer object to a TypedArray object;
in this case the byteOffset must be specified correctly:
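
For example (the byte values are arbitrary):

```javascript
// A small Buffer is typically sliced from the shared pool, so its data
// does not start at byte 0 of the underlying ArrayBuffer.
const nodeBuffer = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7]);

// Wrong: new Int8Array(nodeBuffer.buffer) may expose unrelated pool bytes.
// Right: pass byteOffset and length explicitly.
const typedArray = new Int8Array(
  nodeBuffer.buffer, nodeBuffer.byteOffset, nodeBuffer.length);

console.log(typedArray.length); // 8
```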

While the length property is not immutable, changing the value of length
can result in undefined and inconsistent behavior. Applications that wish to
modify the length of a Buffer should therefore treat length as read-only and
use buf.slice() to create a new Buffer.

Writes string to buf at offset according to the character encoding in
encoding. The length parameter is the number of bytes to write. If buf did
not contain enough space to fit the entire string, only part of string will be
written. However, partially encoded characters will not be written.
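
A small sketch of the partial-write behavior (the string is arbitrary):

```javascript
const buf = Buffer.alloc(6);

// 'é' encodes to two bytes in UTF-8; limiting the write to 5 bytes leaves
// room for only one of them, so the character is dropped entirely.
const written = buf.write('abcdé', 0, 5, 'utf8');

console.log(written);                          // 4
console.log(buf.toString('utf8', 0, written)); // 'abcd'
```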

Writes value to buf at the specified offset with specified endian
format (writeInt16BE() writes big endian, writeInt16LE() writes little
endian). value should be a valid signed 16-bit integer. Behavior is
undefined when value is anything other than a signed 16-bit integer.

value is interpreted and written as a two's complement signed integer.

Writes value to buf at the specified offset with specified endian
format (writeInt32BE() writes big endian, writeInt32LE() writes little
endian). value should be a valid signed 32-bit integer. Behavior is
undefined when value is anything other than a signed 32-bit integer.

value is interpreted and written as a two's complement signed integer.
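
A short sketch of both byte orders (the values are arbitrary):

```javascript
// Writing 16-bit and 32-bit signed integers in both byte orders.
const buf = Buffer.alloc(8);

buf.writeInt16BE(0x0102, 0); // bytes 0-1: 01 02 (big endian)
buf.writeInt16LE(0x0304, 2); // bytes 2-3: 04 03 (little endian)
buf.writeInt32BE(-1, 4);     // bytes 4-7: ff ff ff ff (two's complement)

console.log(buf);
// Prints: <Buffer 01 02 04 03 ff ff ff ff>
```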

In order to avoid the garbage collection overhead of creating many individually
allocated Buffer instances, by default allocations under 4KB are sliced from a
single larger allocated object.

In the case where a developer may need to retain a small chunk of memory from a
pool for an indeterminate amount of time, it may be appropriate to create an
un-pooled Buffer instance using SlowBuffer then copy out the relevant bits.

The underlying memory for SlowBuffer instances is not initialized. The
contents of a newly created SlowBuffer are unknown and may contain sensitive
data. Use buf.fill(0) to initialize a SlowBuffer with
zeroes.

Node.js Addons are dynamically-linked shared objects, written in C++, that
can be loaded into Node.js using the require() function, and used
just as if they were an ordinary Node.js module. They are used primarily to
provide an interface between JavaScript running in Node.js and C/C++ libraries.

At the moment, the method for implementing Addons is rather complicated,
involving knowledge of several components and APIs:

V8: the C++ library Node.js currently uses to provide the
JavaScript implementation. V8 provides the mechanisms for creating objects,
calling functions, etc. V8's API is documented mostly in the
v8.h header file (deps/v8/include/v8.h in the Node.js source
tree), which is also available online.

libuv: The C library that implements the Node.js event loop, its worker
threads and all of the asynchronous behaviors of the platform. It also
serves as a cross-platform abstraction library, giving easy, POSIX-like
access across all major operating systems to many common system tasks, such
as interacting with the filesystem, sockets, timers, and system events. libuv
also provides a pthreads-like threading abstraction that may be used to
power more sophisticated asynchronous Addons that need to move beyond the
standard event loop. Addon authors are encouraged to think about how to
avoid blocking the event loop with I/O or other time-intensive tasks by
off-loading work via libuv to non-blocking system operations, worker threads
or a custom use of libuv's threads.

Internal Node.js libraries. Node.js itself exports a number of C++ APIs
that Addons can use — the most important of which is the
node::ObjectWrap class.

Node.js includes a number of other statically linked libraries including
OpenSSL. These other libraries are located in the deps/ directory in the
Node.js source tree. Only the libuv, OpenSSL, V8 and zlib symbols are
purposefully re-exported by Node.js and may be used to various extents by
Addons.
See Linking to Node.js' own dependencies for additional information.

All of the following examples are available for download and may
be used as the starting-point for an Addon.

There are environments in which Node.js addons may need to be loaded multiple
times in multiple contexts. For example, the Electron runtime runs multiple
instances of Node.js in a single process. Each instance will have its own
require() cache, and thus each instance will need a native addon to behave
correctly when loaded via require(). From the addon's perspective, this means
that it must support multiple initializations.

A context-aware addon can be constructed by using the macro
NODE_MODULE_INITIALIZER, which expands to the name of a function which Node.js
will expect to find when it loads an addon. An addon can thus be initialized as
in the following example:

Another option is to use the macro NODE_MODULE_INIT(), which will also
construct a context-aware addon. Unlike NODE_MODULE(), which is used to
construct an addon around a given addon initializer function,
NODE_MODULE_INIT() serves as the declaration of such an initializer to be
followed by a function body.

The following three variables may be used inside the function body following an
invocation of NODE_MODULE_INIT():

Local<Object> exports,

Local<Value> module, and

Local<Context> context

The choice to build a context-aware addon carries with it the responsibility of
carefully managing global static data. Since the addon may be loaded multiple
times, potentially even from different threads, any global static data stored
in the addon must be properly protected, and must not contain any persistent
references to JavaScript objects. The reason for this is that JavaScript
objects are only valid in one context, and will likely cause a crash when
accessed from the wrong context or from a different thread than the one on which
they were created.

The context-aware addon can be structured to avoid global static data by
performing the following steps:

defining a class which will hold per-addon-instance data. Such
a class should include a v8::Persistent<v8::Object> which will hold a weak
reference to the addon's exports object. The callback associated with the weak
reference will then destroy the instance of the class.

constructing an instance of this class in the addon initializer such that the
v8::Persistent<v8::Object> is set to the exports object.

storing the instance of the class in a v8::External, and

passing the v8::External to all methods exposed to JavaScript by passing it
to the v8::FunctionTemplate constructor which creates the native-backed
JavaScript functions. The v8::FunctionTemplate constructor's third parameter
accepts the v8::External.

This will ensure that the per-addon-instance data reaches each binding that can
be called from JavaScript. The per-addon-instance data must also be passed into
any asynchronous callbacks the addon may create.

The following example illustrates the implementation of a context-aware addon:

Once the source code has been written, it must be compiled into the binary
addon.node file. To do so, create a file called binding.gyp in the
top-level of the project describing the build configuration of the module
using a JSON-like format. This file is used by node-gyp — a tool written
specifically to compile Node.js Addons.

A version of the node-gyp utility is bundled and distributed with
Node.js as part of npm. This version is not made directly available for
developers to use and is intended only to support the ability to use the
npm install command to compile and install Addons. Developers who wish to
use node-gyp directly can install it using the command
npm install -g node-gyp. See the node-gyp installation instructions for
more information, including platform-specific requirements.

Once the binding.gyp file has been created, use node-gyp configure to
generate the appropriate project build files for the current platform. This
will generate either a Makefile (on Unix platforms) or a vcxproj file
(on Windows) in the build/ directory.

Next, invoke the node-gyp build command to generate the compiled addon.node
file. This will be put into the build/Release/ directory.

When using npm install to install a Node.js Addon, npm uses its own bundled
version of node-gyp to perform this same set of actions, generating a
compiled version of the Addon for the user's platform on demand.

Once built, the binary Addon can be used from within Node.js by pointing
require() to the built addon.node module:

Because the exact path to the compiled Addon binary can vary depending on how
it is compiled (i.e. sometimes it may be in ./build/Debug/), Addons can use
the bindings package to load the compiled module.

Note that while the bindings package implementation is more sophisticated
in how it locates Addon modules, it is essentially using a try-catch pattern
similar to:
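
A minimal sketch of that pattern (the paths are illustrative and assume
node-gyp's default output layout):

```javascript
// Try the Release build first, then fall back to the Debug build.
function loadAddon() {
  try {
    return require('./build/Release/addon.node');
  } catch (err) {
    return require('./build/Debug/addon.node');
  }
}
```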

Node.js uses a number of statically linked libraries such as V8, libuv and
OpenSSL. All Addons are required to link to V8 and may link to any of the
other dependencies as well. Typically, this is as simple as including
the appropriate #include <...> statements (e.g. #include <v8.h>) and
node-gyp will locate the appropriate headers automatically. However, there
are a few caveats to be aware of:

When node-gyp runs, it will detect the specific release version of Node.js
and download either the full source tarball or just the headers. If the full
source is downloaded, Addons will have complete access to the full set of
Node.js dependencies. However, if only the Node.js headers are downloaded, then
only the symbols exported by Node.js will be available.

node-gyp can be run using the --nodedir flag pointing at a local Node.js
source image. Using this option, the Addon will have access to the full set of
dependencies.

The filename extension of the compiled Addon binary is .node (as opposed
to .dll or .so). The require() function is written to look for
files with the .node file extension and initialize those as dynamically-linked
libraries.

When calling require(), the .node extension can usually be
omitted and Node.js will still find and initialize the Addon. One caveat,
however, is that Node.js will first attempt to locate and load modules or
JavaScript files that happen to share the same base name. For instance, if
there is a file addon.js in the same directory as the binary addon.node,
then require('addon') will give precedence to the addon.js file
and load it instead.

Each of the examples illustrated in this document make direct use of the
Node.js and V8 APIs for implementing Addons. It is important to understand
that the V8 API can, and has, changed dramatically from one V8 release to the
next (and one major Node.js release to the next). With each change, Addons may
need to be updated and recompiled in order to continue functioning. The Node.js
release schedule is designed to minimize the frequency and impact of such
changes but there is little that Node.js can do currently to ensure stability
of the V8 APIs.

The Native Abstractions for Node.js (or nan) provide a set of tools that
Addon developers are recommended to use to keep compatibility between past and
future releases of V8 and Node.js. See the nan examples for an
illustration of how it can be used.

N-API is an API for building native Addons. It is independent from
the underlying JavaScript runtime (e.g. V8) and is maintained as part of
Node.js itself. This API will be Application Binary Interface (ABI) stable
across versions of Node.js. It is intended to insulate Addons from
changes in the underlying JavaScript engine and allow modules
compiled for one version to run on later versions of Node.js without
recompilation. Addons are built/packaged with the same approach/tools
outlined in this document (node-gyp, etc.). The only difference is the
set of APIs that are used by the native code. Instead of using the V8
or Native Abstractions for Node.js APIs, the functions available
in the N-API are used.

Following are some example Addons intended to help developers get started. The
examples make use of the V8 APIs. Refer to the online V8 reference
for help with the various V8 calls, and V8's Embedder's Guide for an
explanation of several concepts used such as handles, scopes, function
templates, etc.

Addons will typically expose objects and functions that can be accessed from
JavaScript running within Node.js. When functions are invoked from JavaScript,
the input arguments and return value must be mapped to and from the C/C++
code.

The following example illustrates how to read function arguments passed from
JavaScript and how to return a result:

Note that this example uses a two-argument form of Init() that receives
the full module object as the second argument. This allows the Addon
to completely overwrite exports with a single function instead of
adding the function as a property of exports.

Addons can create and return new objects from within a C++ function as
illustrated in the following example. An object is created and returned with a
property msg that echoes the string passed to createObject():

The destructor for a wrapper object will run when the object is
garbage-collected. For destructor testing, there are command-line flags that
can be used to make it possible to force garbage collection. These flags are
provided by the underlying V8 JavaScript engine. They are subject to change
or removal at any time. They are not documented by Node.js or V8, and they
should never be used outside of testing.

In addition to wrapping and returning C++ objects, it is possible to pass
wrapped objects around by unwrapping them with the Node.js helper function
node::ObjectWrap::Unwrap. The following example shows a function add()
that can take two MyObject objects as input arguments:

An AtExit hook is a function that is invoked after the Node.js event loop
has ended but before the JavaScript VM is terminated and Node.js shuts down.
AtExit hooks are registered using the node::AtExit API.

N-API (pronounced N as in the letter, followed by API)
is an API for building native Addons. It is independent from
the underlying JavaScript runtime (e.g. V8) and is maintained as part of
Node.js itself. This API will be Application Binary Interface (ABI) stable
across versions of Node.js. It is intended to insulate Addons from
changes in the underlying JavaScript engine and allow modules
compiled for one major version to run on later major versions of Node.js without
recompilation. The ABI Stability guide provides a more in-depth explanation.

Addons are built/packaged with the same approach/tools
outlined in the section titled C++ Addons.
The only difference is the set of APIs that are used by the native code.
Instead of using the V8 or Native Abstractions for Node.js APIs,
the functions available in the N-API are used.

APIs exposed by N-API are generally used to create and manipulate
JavaScript values. Concepts and operations generally map to ideas specified
in the ECMA262 Language Specification. The APIs have the following
properties:

All N-API calls return a status code of type napi_status. This
status indicates whether the API call succeeded or failed.

The API's return value is passed via an out parameter.

All JavaScript values are abstracted behind an opaque type named
napi_value.

In case of an error status code, additional information can be obtained
using napi_get_last_error_info. More information can be found in the error
handling section Error Handling.

The N-API is a C API that ensures ABI stability across Node.js versions
and different compiler levels. A C++ API can be easier to use.
To support using C++, the project maintains a
C++ wrapper module called
node-addon-api.
This wrapper provides an inlineable C++ API. Binaries built
with node-addon-api will depend on the symbols for the N-API C-based
functions exported by Node.js. node-addon-api is a more
efficient way to write code that calls N-API. Take, for example, the
following node-addon-api code. The first section shows the
node-addon-api code and the second section shows what actually gets
used in the addon.

Although N-API provides an ABI stability guarantee, other parts of Node.js do
not, and any external libraries used from the addon may not. In particular,
none of the following APIs provide an ABI stability guarantee across major
versions:

In order to use the N-API functions, include the file
node_api.h
which is located in the src directory in the node development tree:

#include <node_api.h>

This will opt into the default NAPI_VERSION for the given release of Node.js.
In order to ensure compatibility with specific versions of N-API, the version
can be specified explicitly when including the header:

#define NAPI_VERSION 3
#include <node_api.h>

This restricts the N-API surface to just the functionality that was available in
the specified (and earlier) versions.

Some of the N-API surface is considered experimental and requires explicit
opt-in to access those APIs:

#define NAPI_EXPERIMENTAL
#include <node_api.h>

In this case the entire API surface, including any experimental APIs, will be
available to the module code.

The N-APIs associated strictly with accessing ECMAScript features from native
code can be found separately in js_native_api.h and js_native_api_types.h.
The APIs defined in these headers are included in node_api.h and
node_api_types.h. The headers are structured in this way in order to allow
implementations of N-API outside of Node.js. For those implementations the
Node.js specific APIs may not be applicable.

The Node.js-specific parts of an addon can be separated from the code that
exposes the actual functionality to the JavaScript environment so that the
latter may be used with multiple implementations of N-API. In the example below,
addon.c and addon.h refer only to js_native_api.h. This ensures that
addon.c can be reused to compile against either the Node.js implementation of
N-API or any implementation of N-API outside of Node.js.

addon_node.c is a separate file that contains the Node.js specific entry point
to the addon and which instantiates the addon by calling into addon.c when the
addon is loaded into a Node.js environment.

napi_env is used to represent a context that the underlying N-API
implementation can use to persist VM-specific state. This structure is passed
to native functions when they're invoked, and it must be passed back when
making N-API calls. Specifically, the same napi_env that was passed in when
the initial native function was called must be passed to any subsequent
nested N-API calls. Caching the napi_env for the purpose of general reuse is
not allowed.

A value to be given to napi_release_threadsafe_function() to indicate whether
the thread-safe function is to be closed immediately (napi_tsfn_abort) or
merely released (napi_tsfn_release) and thus available for subsequent use via
napi_acquire_threadsafe_function() and napi_call_threadsafe_function().

This is an abstraction used to control and modify the lifetime of objects
created within a particular scope. In general, N-API values are created within
the context of a handle scope. When a native method is called from
JavaScript, a default handle scope will exist. If the user does not explicitly
create a new handle scope, N-API values will be created in the default handle
scope. For any invocations of code outside the execution of a native method
(for instance, during a libuv callback invocation), the module is required to
create a scope before invoking any functions that can result in the creation
of JavaScript values.

Handle scopes are created using napi_open_handle_scope and are destroyed
using napi_close_handle_scope. Closing the scope can indicate to the GC
that all napi_values created during the lifetime of the handle scope are no
longer referenced from the current stack frame.

Function pointer type for add-on provided functions that allow the user to be
notified when externally-owned data is ready to be cleaned up because the
object with which it was associated has been garbage-collected. The user
must provide a function satisfying the following signature which would get
called upon the object's collection. Currently, napi_finalize can be used for
finding out when objects that have external data are collected.

Implementations of this type of function should avoid making any N-API calls
that could result in the execution of JavaScript or interaction with
JavaScript objects. Most often, any code that needs to make N-API
calls should be made in napi_async_complete_callback instead.

Function pointer used with asynchronous thread-safe function calls. The callback
will be called on the main thread. Its purpose is to use a data item arriving
via the queue from one of the secondary threads to construct the parameters
necessary for a call into JavaScript, usually via napi_call_function, and then
make the call into JavaScript.

The data arriving from the secondary thread via the queue is given in the data
parameter and the JavaScript function to call is given in the js_callback
parameter.

N-API sets up the environment prior to calling this callback, so it is
sufficient to call the JavaScript function via napi_call_function rather than
via napi_make_callback.

[in] env: The environment to use for API calls, or NULL if the thread-safe
function is being torn down and data may need to be freed.

[in] js_callback: The JavaScript function to call, or NULL if the
thread-safe function is being torn down and data may need to be freed.

[in] context: The optional data with which the thread-safe function was
created.

[in] data: Data created by the secondary thread. It is the responsibility of
the callback to convert this native data to JavaScript values (with N-API
functions) that can be passed as parameters when js_callback is invoked. This
pointer is managed entirely by the threads and this callback. Thus this callback
should free the data.

All of the N-API functions share the same error handling pattern. The
return type of all API functions is napi_status.

The return value will be napi_ok if the request was successful and
no uncaught JavaScript exception was thrown. If an error occurred AND
an exception was thrown, the napi_status value for the error
will be returned. If an exception was thrown, and no error occurred,
napi_pending_exception will be returned.

In cases where a return value other than napi_ok or
napi_pending_exception is returned, napi_is_exception_pending
must be called to check if an exception is pending.
See the section on exceptions for more details.

The full set of possible napi_status values is defined
in napi_api_types.h.

The napi_status return value provides a VM-independent representation of
the error which occurred. In some cases it is useful to be able to get
more detailed information, including a string representing the error as well as
VM (engine)-specific information.

In order to retrieve this information napi_get_last_error_info
is provided which returns a napi_extended_error_info structure.
The format of the napi_extended_error_info structure is as follows:

Any N-API function call may result in a pending JavaScript exception. This is
obviously the case for any function that may cause the execution of
JavaScript, but N-API specifies that an exception may be pending
on return from any of the API functions.

If the napi_status returned by a function is napi_ok then no
exception is pending and no additional action is required. If the
napi_status returned is anything other than napi_ok or
napi_pending_exception, in order to try to recover and continue
instead of simply returning immediately, napi_is_exception_pending
must be called in order to determine if an exception is pending or not.

When an exception is pending one of two approaches can be employed.

The first approach is to do any appropriate cleanup and then return so that
execution will return to JavaScript. As part of the transition back to
JavaScript the exception will be thrown at the point in the JavaScript
code where the native method was invoked. The behavior of most N-API calls
is unspecified while an exception is pending, and many will simply return
napi_pending_exception, so it is important to do as little as possible
and then return to JavaScript where the exception can be handled.

The second approach is to try to handle the exception. There will be cases
where the native code can catch the exception, take the appropriate action,
and then continue. This is only recommended in specific cases
where it is known that the exception can be safely handled. In these
cases napi_get_and_clear_last_exception can be used to get and
clear the exception. On success, result will contain the handle to
the last JavaScript Object thrown. If it is determined, after
retrieving the exception, that the exception cannot be handled after all,
it can be re-thrown with napi_throw where error is the
JavaScript Error object to be thrown.

The Node.js project is adding error codes to all of the errors
generated internally. The goal is for applications to use these
error codes for all error checking. The associated error messages
will remain, but will only be meant to be used for logging and
display with the expectation that the message can change without
SemVer applying. In order to support this model with N-API, both
in internal functionality and for module-specific functionality
(as it's good practice), the throw_ and create_ functions
take an optional code parameter, which is the string for the code
to be added to the error object. If the optional parameter is NULL
then no code will be associated with the error. If a code is provided,
the name associated with the error is also updated to be:

originalName [code]

where originalName is the original name associated with the error
and code is the code that was provided. For example, if the code
is 'ERR_ERROR_1' and a TypeError is being created, the name will be:

TypeError [ERR_ERROR_1]

As N-API calls are made, handles to objects in the heap for the underlying
VM may be returned as napi_values. These handles must hold the
objects 'live' until they are no longer required by the native code,
otherwise the objects could be collected before the native code was
finished using them.

As object handles are returned they are associated with a
'scope'. The lifespan for the default scope is tied to the lifespan
of the native method call. The result is that, by default, handles
remain valid and the objects associated with these handles will be
held live for the lifespan of the native method call.

In many cases, however, it is necessary that the handles remain valid for
either a shorter or longer lifespan than that of the native method.
The sections which follow describe the N-API functions that can be used
to change the handle lifespan from the default.

It is often necessary to make the lifespan of handles shorter than
the lifespan of a native method. For example, consider a native method
that has a loop which iterates through the elements in a large array:
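A sketch of such a loop, assuming env and a napi_value large_array are already in scope within the native method:

```c
// Each napi_get_element call creates a new handle in the current
// (default) scope. Without per-iteration scopes, all one million
// handles stay live until the native method returns.
for (int i = 0; i < 1000000; i++) {
  napi_value element;
  napi_status status = napi_get_element(env, large_array, i, &element);
  if (status != napi_ok)
    break;
  // ... do something with element ...
}
```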

This would result in a large number of handles being created, consuming
substantial resources. In addition, even though the native code could only
use the most recent handle, all of the associated objects would also be
kept alive since they all share the same scope.

To handle this case, N-API provides the ability to establish a new 'scope' to
which newly created handles will be associated. Once those handles
are no longer required, the scope can be 'closed' and any handles associated
with the scope are invalidated. The methods available to open/close scopes are
napi_open_handle_scope and napi_close_handle_scope.

N-API only supports a single nested hierarchy of scopes. There is only one
active scope at any time, and all new handles will be associated with that
scope while it is active. Scopes must be closed in the reverse order from
which they are opened. In addition, all scopes created within a native method
must be closed before returning from that method.
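The large-array loop above can be reworked with a per-iteration scope, so that each handle is invalidated as soon as its iteration completes (env and large_array are again assumed to be in scope):

```c
for (int i = 0; i < 1000000; i++) {
  napi_handle_scope scope;
  napi_status status = napi_open_handle_scope(env, &scope);
  if (status != napi_ok)
    break;

  napi_value element;
  status = napi_get_element(env, large_array, i, &element);
  // ... use element; its handle is valid only within this scope ...

  // Closing the scope invalidates handles created inside it, so the
  // VM is free to collect the associated objects.
  status = napi_close_handle_scope(env, scope);
  if (status != napi_ok)
    break;
}
```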

When nesting scopes, there are cases where a handle from an
inner scope needs to live beyond the lifespan of that scope. N-API supports an
'escapable scope' in order to support this case. An escapable scope
allows one handle to be 'promoted' so that it 'escapes' the
current scope and the lifespan of the handle changes from the current
scope to that of the outer scope.

[in] escapee: napi_value representing the JavaScript Object to be
escaped.

[out] result: napi_value representing the handle to the escaped
Object in the outer scope.

Returns napi_ok if the API succeeded.

This API promotes the handle to the JavaScript object so that it is valid
for the lifetime of the outer scope. It can only be called once per scope.
If it is called more than once an error will be returned.

This API can be called even if there is a pending JavaScript exception.
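A sketch of the escapable-scope pattern, using a hypothetical helper MakeGreeting that must return a handle valid in the caller's scope:

```c
#include <node_api.h>

// Create a value inside an escapable scope and promote it so the
// returned handle remains valid after the scope is closed.
static napi_value MakeGreeting(napi_env env) {
  napi_escapable_handle_scope scope;
  if (napi_open_escapable_handle_scope(env, &scope) != napi_ok)
    return NULL;

  napi_value inner;
  napi_create_string_utf8(env, "hello", NAPI_AUTO_LENGTH, &inner);

  napi_value escaped = NULL;
  // Promote 'inner' to the outer scope; allowed only once per scope.
  napi_escape_handle(env, scope, inner, &escaped);

  napi_close_escapable_handle_scope(env, scope);
  return escaped;  // still valid in the caller's scope
}
```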

References to objects with a lifespan longer than that of the native method#

In some cases an addon will need to be able to create and reference objects
with a lifespan longer than that of a single native method invocation. For
example, to create a constructor and later use that constructor
in a request to create instances, it must be possible to reference
the constructor object across many different instance creation requests. This
would not be possible with a normal handle returned as a napi_value as
described in the earlier section. The lifespan of a normal handle is
managed by scopes and all scopes must be closed before the end of a native
method.

N-API provides methods to create persistent references to an object.
Each persistent reference has an associated count with a value of 0
or higher. The count determines if the reference will keep
the corresponding object live. References with a count of 0 do not
prevent the object from being collected and are often called 'weak'
references. Any count greater than 0 will prevent the object
from being collected.

References can be created with an initial reference count. The count can
then be modified through napi_reference_ref and
napi_reference_unref. If an object is collected while the count
for a reference is 0, all subsequent calls to
napi_get_reference_value to get the object associated with the
reference will return NULL for the returned napi_value. An attempt to call
napi_reference_ref for a reference whose object has been collected
will result in an error.

References must be deleted once they are no longer required by the addon. When
a reference is deleted it will no longer prevent the corresponding object from
being collected. Failure to delete a persistent reference will result in
a 'memory leak' with both the native memory for the persistent reference and
the corresponding object on the heap being retained forever.

There can be multiple persistent references created which refer to the same
object, each of which will either keep the object live or not based on its
individual count.
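The typical lifecycle of a persistent reference can be sketched as follows. The constructor-saving use case and the addon-global saved_ref are hypothetical:

```c
#include <node_api.h>
#include <stddef.h>

// Hypothetical addon-global holding a reference to a constructor.
static napi_ref saved_ref = NULL;

static void SaveConstructor(napi_env env, napi_value ctor) {
  // An initial count of 1 keeps the constructor object live.
  napi_create_reference(env, ctor, 1, &saved_ref);
}

static napi_value GetConstructor(napi_env env) {
  napi_value ctor = NULL;
  // Returns NULL in ctor if the object was collected (possible only
  // when the reference count has dropped to 0).
  napi_get_reference_value(env, saved_ref, &ctor);
  return ctor;
}

static void ReleaseConstructor(napi_env env) {
  // Delete the reference once it is no longer required; failing to
  // do so leaks both the reference and the referenced object.
  napi_delete_reference(env, saved_ref);
  saved_ref = NULL;
}
```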

While a Node.js process typically releases all its resources when exiting,
embedders of Node.js, or future Worker support, may require addons to register
clean-up hooks that will be run once the current Node.js instance exits.

N-API provides functions for registering and un-registering such callbacks.
When those callbacks are run, all resources that are being held by the addon
should be freed up.

Registers fun as a function to be run with the arg parameter once the
current Node.js environment exits.

A function can safely be specified multiple times with different
arg values. In that case, it will be called multiple times as well.
Providing the same fun and arg values multiple times is not allowed
and will cause the process to abort.

The hooks will be called in reverse order, i.e. the most recently added one
will be called first.

Removing this hook can be done by using napi_remove_env_cleanup_hook.
Typically, that happens when the resource for which this hook was added
is being torn down anyway.
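A sketch of registering and removing such a hook, with a hypothetical addon-owned buffer as the resource:

```c
#include <node_api.h>
#include <stdlib.h>

// Cleanup hook: frees the buffer passed as 'arg' when the current
// Node.js environment exits.
static void FreeBuffer(void* arg) {
  free(arg);
}

static void SetupBuffer(napi_env env) {
  void* buffer = malloc(1024);
  napi_add_env_cleanup_hook(env, FreeBuffer, buffer);

  // If the buffer is torn down earlier, unregister the hook with the
  // same function/argument pair:
  // napi_remove_env_cleanup_hook(env, FreeBuffer, buffer);
}
```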

N-API modules are registered in a manner similar to other modules
except that instead of using the NODE_MODULE macro the following
is used:

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)

The next difference is the signature for the Init method. For an N-API
module it is as follows:

napi_value Init(napi_env env, napi_value exports);

The return value from Init is treated as the exports object for the module.
The Init method is passed an empty object via the exports parameter as a
convenience. If Init returns NULL, the parameter passed as exports is
exported by the module. N-API modules cannot modify the module object but can
specify anything as the exports property of the module.

To add the method hello as a function so that it can be called as a method
provided by the addon:
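A minimal sketch of such a module. The Hello implementation (returning the string "world") is hypothetical; the registration pattern via napi_define_properties and NAPI_MODULE follows the conventions described above:

```c
#include <node_api.h>

// Hypothetical native implementation of hello().
static napi_value Hello(napi_env env, napi_callback_info info) {
  napi_value greeting;
  napi_create_string_utf8(env, "world", NAPI_AUTO_LENGTH, &greeting);
  return greeting;
}

static napi_value Init(napi_env env, napi_value exports) {
  // Descriptor fields: utf8name, name, method, getter, setter, value,
  // attributes, data.
  napi_property_descriptor desc =
      {"hello", NULL, Hello, NULL, NULL, NULL, napi_default, NULL};
  if (napi_define_properties(env, exports, 1, &desc) != napi_ok)
    return NULL;
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```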

The NAPI_MODULE_INIT macro includes NAPI_MODULE, and declares an Init
function with a special name and with visibility beyond the addon. This will allow Node.js to
initialize the module even if it is loaded multiple times.

There are a few design considerations when declaring a module that may be loaded
multiple times. The documentation of context-aware addons provides more
details.

The variables env and exports will be available inside the function body
following the macro invocation.

Fundamentally, these APIs are used to do one of the following:
1. Create a new JavaScript object
2. Convert from a primitive C type to an N-API value
3. Convert from N-API value to a primitive C type
4. Get global instances including undefined and null

N-API values are represented by the type napi_value.
Any N-API call that requires a JavaScript value takes in a napi_value.
In some cases, the API does check the type of the napi_value up-front.
However, for better performance, the caller should make sure that
the napi_value in question is of the JavaScript type expected by the API.

Describes the type of a napi_value. This generally corresponds to the types
described in
Section 6.1 of
the ECMAScript Language Specification.
In addition to types in that section, napi_valuetype can also represent
Functions and Objects with external data.

A JavaScript value of type napi_external appears in JavaScript as a plain
object on which no properties can be set, and which has no prototype.

This API returns an N-API value corresponding to a JavaScript Array type.
The Array's length property is set to the passed-in length parameter.
However, the underlying buffer is not guaranteed to be pre-allocated by the VM
when the array is created - that behavior is left to the underlying VM
implementation.
If the buffer must be a contiguous block of memory that can be
directly read and/or written via C, consider using
napi_create_external_arraybuffer.

JavaScript arrays are described in
Section 22.1 of the ECMAScript Language Specification.

This API returns an N-API value corresponding to a JavaScript ArrayBuffer.
ArrayBuffers are used to represent fixed-length binary data buffers. They are
normally used as a backing-buffer for TypedArray objects.
The ArrayBuffer allocated will have an underlying byte buffer whose size is
determined by the length parameter that's passed in.
The underlying buffer is optionally returned back to the caller in case the
caller wants to directly manipulate the buffer. This buffer can only be
written to directly from native code. To write to this buffer from JavaScript,
a typed array or DataView object would need to be created.

JavaScript ArrayBuffer objects are described in
Section 24.1 of the ECMAScript Language Specification.

[in] size: Size in bytes of the input buffer (should be the same as the
size of the new buffer).

[in] data: Raw pointer to the underlying buffer to copy from.

[out] result_data: Pointer to the new Buffer's underlying data buffer.

[out] result: A napi_value representing a node::Buffer.

Returns napi_ok if the API succeeded.

This API allocates a node::Buffer object and initializes it with data copied
from the passed-in buffer. While this is still a fully-supported data
structure, in most cases using a TypedArray will suffice.

[in] finalize_cb: Optional callback to call when the external value
is being collected.

[in] finalize_hint: Optional hint to pass to the finalize callback
during collection.

[out] result: A napi_value representing an external value.

Returns napi_ok if the API succeeded.

This API allocates a JavaScript value with external data attached to it. This
is used to pass external data through JavaScript code, so it can be retrieved
later by native code. The API allows the caller to pass in a finalize callback,
in case the underlying native resource needs to be cleaned up when the external
JavaScript value gets collected.

The created value is not an object, and therefore does not support additional
properties. It is considered a distinct value type: calling napi_typeof() with
an external value yields napi_external.

[in] external_data: Pointer to the underlying byte buffer of the
ArrayBuffer.

[in] byte_length: The length in bytes of the underlying buffer.

[in] finalize_cb: Optional callback to call when the ArrayBuffer is
being collected.

[in] finalize_hint: Optional hint to pass to the finalize callback
during collection.

[out] result: A napi_value representing a JavaScript ArrayBuffer.

Returns napi_ok if the API succeeded.

This API returns an N-API value corresponding to a JavaScript ArrayBuffer.
The underlying byte buffer of the ArrayBuffer is externally allocated and
managed. The caller must ensure that the byte buffer remains valid until the
finalize callback is called.

JavaScript ArrayBuffers are described in
Section 24.1 of the ECMAScript Language Specification.

[in] length: Size in bytes of the input buffer (should be the same as
the size of the new buffer).

[in] data: Raw pointer to the underlying buffer to copy from.

[in] finalize_cb: Optional callback to call when the ArrayBuffer is
being collected.

[in] finalize_hint: Optional hint to pass to the finalize callback
during collection.

[out] result: A napi_value representing a node::Buffer.

Returns napi_ok if the API succeeded.

This API allocates a node::Buffer object and initializes it with data
backed by the passed in buffer. While this is still a fully-supported data
structure, in most cases using a TypedArray will suffice.

[in] byte_offset: The byte offset within the ArrayBuffer from which to
start projecting the TypedArray.

[out] result: A napi_value representing a JavaScript TypedArray.

Returns napi_ok if the API succeeded.

This API creates a JavaScript TypedArray object over an existing
ArrayBuffer. TypedArray objects provide an array-like view over an
underlying data buffer where each element has the same underlying binary scalar
datatype.

It is required that (length * size_of_element) + byte_offset be
<= the size in bytes of the array passed in. If not, a RangeError exception
is raised.

JavaScript TypedArray objects are described in
Section 22.2 of the ECMAScript Language Specification.

[in] byte_offset: The byte offset within the ArrayBuffer from which to
start projecting the DataView.

[out] result: A napi_value representing a JavaScript DataView.

Returns napi_ok if the API succeeded.

This API creates a JavaScript DataView object over an existing ArrayBuffer.
DataView objects provide an array-like view over an underlying data buffer,
but one which allows items of different size and type in the ArrayBuffer.

It is required that byte_length + byte_offset is less than or equal to the
size in bytes of the array passed in. If not, a RangeError exception is
raised.

JavaScript DataView objects are described in
Section 24.3 of the ECMAScript Language Specification.

This API is used to convert from the C int64_t type to the JavaScript
Number type.

The JavaScript Number type is described in Section 6.1.6
of the ECMAScript Language Specification. Note that the complete range of
int64_t cannot be represented with full precision in JavaScript. Integer
values outside the range of Number.MIN_SAFE_INTEGER (-(2^53 - 1)) to
Number.MAX_SAFE_INTEGER (2^53 - 1) will lose precision.

This API is used to retrieve the underlying data buffer of an ArrayBuffer and
its length.

WARNING: Use caution while using this API. The lifetime of the underlying data
buffer is managed by the ArrayBuffer even after it's returned. A
possible safe way to use this API is in conjunction with
napi_create_reference, which can be used to guarantee control over the
lifetime of the ArrayBuffer. It's also safe to use the returned data buffer
within the same callback as long as there are no calls to other APIs that might
trigger a GC.

[out] data: The data buffer underlying the TypedArray adjusted by
the byte_offset value so that it points to the first element in the
TypedArray.

[out] arraybuffer: The ArrayBuffer underlying the TypedArray.

[out] byte_offset: The byte offset within the underlying native array
at which the first element of the arrays is located. The value for the data
parameter has already been adjusted so that data points to the first element
in the array. Therefore, the first byte of the native array would be at
data - byte_offset.

Returns napi_ok if the API succeeded.

This API returns various properties of a typed array.

Warning: Use caution while using this API since the underlying data buffer
is managed by the VM.

[out] sign_bit: Integer representing if the JavaScript BigInt is positive
or negative.

[in/out] word_count: Must be initialized to the length of the words
array. Upon return, it will be set to the actual number of words that
would be needed to store this BigInt.

[out] words: Pointer to a pre-allocated 64-bit word array.

Returns napi_ok if the API succeeded.

This API converts a single BigInt value into a sign bit, 64-bit little-endian
array, and the number of elements in the array. sign_bit and words may
both be set to NULL, in order to get only word_count.

Returns napi_ok if the API succeeded. If a non-number napi_value
is passed in, it returns napi_number_expected.

This API returns the C int32 primitive equivalent
of the given JavaScript Number.

If the number exceeds the range of the 32-bit integer, then the result is
truncated to the equivalent of the bottom 32 bits. This can result in a large
positive number becoming a negative number if the value is > 2^31 - 1.

Non-finite number values (NaN, +Infinity, or -Infinity) set the
result to zero.

These APIs support doing one of the following:
1. Coerce JavaScript values to specific JavaScript types (such as Number or
String).
2. Check the type of a JavaScript value.
3. Check for equality between two JavaScript values.

Returns napi_invalid_arg if the type of value is not a known ECMAScript
type and value is not an External value.

This API represents behavior similar to invoking the typeof Operator on
the object as defined in Section 12.5.5 of the ECMAScript Language
Specification. However, it has support for detecting an External value.
If value has a type that is invalid, an error is returned.

Properties in JavaScript are represented as a tuple of a key and a value.
Fundamentally, all property keys in N-API can be represented in one of the
following forms:

Named: a simple UTF8-encoded string

Integer-Indexed: an index value represented by uint32_t

JavaScript value: these are represented in N-API by napi_value. This can
be a napi_value representing a String, Number, or Symbol.

N-API values are represented by the type napi_value.
Any N-API call that requires a JavaScript value takes in a napi_value.
However, it's the caller's responsibility to make sure that the
napi_value in question is of the JavaScript type expected by the API.

The APIs documented in this section provide a simple interface to
get and set properties on arbitrary JavaScript objects represented by
napi_value.

For instance, consider the following JavaScript code snippet:

const obj = {};
obj.myProp = 123;

The equivalent can be done using N-API values with the following snippet:
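A sketch of the equivalent, wrapped in a hypothetical helper so the status checks have somewhere to return to:

```c
#include <node_api.h>

static void SetMyProp(napi_env env) {
  napi_status status;

  // const obj = {};
  napi_value object;
  status = napi_create_object(env, &object);
  if (status != napi_ok) return;

  // 123
  napi_value value;
  status = napi_create_int32(env, 123, &value);
  if (status != napi_ok) return;

  // obj.myProp = 123;
  status = napi_set_named_property(env, object, "myProp", value);
  if (status != napi_ok) return;
}
```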

napi_property_attributes are flags used to control the behavior of properties
set on a JavaScript object. Other than napi_static they correspond to the
attributes listed in Section 6.1.7.1
of the ECMAScript Language Specification.
They can be one or more of the following bitflags:

napi_default - Used to indicate that no explicit attributes are set on the
given property. By default, a property is read-only, not enumerable, and not
configurable.

napi_writable - Used to indicate that a given property is writable.

napi_enumerable - Used to indicate that a given property is enumerable.

napi_static - Used to indicate that the property will be defined as
a static property on a class as opposed to an instance property, which is the
default. This is used only by napi_define_class. It is ignored by
napi_define_properties.

utf8name: Optional String describing the key for the property,
encoded as UTF8. One of utf8name or name must be provided for the
property.

name: Optional napi_value that points to a JavaScript string or symbol
to be used as the key for the property. One of utf8name or name must
be provided for the property.

value: The value that's retrieved by a get access of the property if the
property is a data property. If this is passed in, set getter, setter,
method and data to NULL (since these members won't be used).

getter: A function to call when a get access of the property is performed.
If this is passed in, set value and method to NULL (since these members
won't be used). The given function is called implicitly by the runtime when the
property is accessed from JavaScript code (or if a get on the property is
performed using an N-API call).

setter: A function to call when a set access of the property is performed.
If this is passed in, set value and method to NULL (since these members
won't be used). The given function is called implicitly by the runtime when the
property is set from JavaScript code (or if a set on the property is
performed using an N-API call).

method: Set this to make the property descriptor object's value
property a JavaScript function represented by method. If this is
passed in, set value, getter and setter to NULL (since these members
won't be used).

[out] result: A napi_value representing an array of JavaScript values
that represent the property names of the object. The API can be used to
iterate over result using napi_get_array_length
and napi_get_element.

Returns napi_ok if the API succeeded.

This API returns the names of the enumerable properties of object as an array
of strings. The properties of object whose key is a symbol will not be
included.

This method allows the efficient definition of multiple properties on a given
object. The properties are defined using property descriptors (see
napi_property_descriptor). Given an array of such property descriptors,
this API will set the properties on the object one at a time, as defined by
DefineOwnProperty() (described in Section 9.1.6 of the ECMA262
specification).

N-API provides a set of APIs that allow JavaScript code to
call back into native code. N-API APIs that support calling back
into native code take in callback functions represented by
the napi_callback type. When the JavaScript VM calls back to
native code, the napi_callback function provided is invoked. The APIs
documented in this section allow the callback function to do the
following:

Get information about the context in which the callback was invoked.

Get the arguments passed into the callback.

Return a napi_value back from the callback.

Additionally, N-API provides a set of functions which allow calling
JavaScript functions from native code. One can either call a function
like a regular JavaScript function call, or as a constructor
function.

Any non-NULL data which is passed to this API via the data field of the
napi_property_descriptor items can be associated with object and freed
whenever object is garbage-collected by passing both object and the data to
napi_add_finalizer.

[in] func: napi_value representing the JavaScript function
to be invoked.

[in] argc: The count of elements in the argv array.

[in] argv: Array of napi_values representing JavaScript values passed
in as arguments to the function.

[out] result: napi_value representing the JavaScript object returned.

Returns napi_ok if the API succeeded.

This method allows a JavaScript function object to be called from a native
add-on. This is the primary mechanism of calling back from the add-on's
native code into JavaScript. For the special case of calling into JavaScript
after an async operation, see napi_make_callback.

A sample use case might look as follows. Consider the following JavaScript
snippet:

function AddTwo(num) {
  return num + 2;
}

Then, the above function can be invoked from a native add-on using the
following code:
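A sketch of the native side, assuming add_two is a napi_value holding the AddTwo function and global its receiver, both obtained earlier in the same callback:

```c
// Invoke the JavaScript AddTwo function from native code.
napi_value arg;
napi_create_int32(env, 1337, &arg);

napi_value argv[] = { arg };
size_t argc = 1;

napi_value return_val;
napi_status status =
    napi_call_function(env, global, add_two, argc, argv, &return_val);
if (status != napi_ok) {
  // An exception may be pending; clean up and return.
}

// Convert the result back to a native integer.
int32_t result;
napi_get_value_int32(env, return_val, &result);
```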

[in] utf8Name: The name of the function encoded as UTF8. This is visible
within JavaScript as the new function object's name property.

[in] length: The length of the utf8name in bytes, or
NAPI_AUTO_LENGTH if it is null-terminated.

[in] cb: The native function which should be called when this function
object is invoked.

[in] data: User-provided data context. This will be passed back into the
function when invoked later.

[out] result: napi_value representing the JavaScript function object for
the newly created function.

Returns napi_ok if the API succeeded.

This API allows an add-on author to create a function object in native code.
This is the primary mechanism to allow calling into the add-on's native code
from JavaScript.

The newly created function is not automatically visible from script after this
call. Instead, a property must be explicitly set on any object that is visible
to JavaScript, in order for the function to be accessible from script.

In order to expose a function as part of the
add-on's module exports, set the newly created function on the exports
object. A sample module might look as follows:
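A sketch of such a module, with a hypothetical SayHello implementation that simply prints a greeting:

```c
#include <node_api.h>
#include <stdio.h>

// Hypothetical native function exposed to JavaScript as sayHello().
static napi_value SayHello(napi_env env, napi_callback_info info) {
  printf("Hello\n");
  return NULL;
}

static napi_value Init(napi_env env, napi_value exports) {
  napi_value fn;
  // Create an anonymous function object backed by SayHello.
  if (napi_create_function(env, NULL, 0, SayHello, NULL, &fn) != napi_ok)
    return NULL;
  // Expose it as a property on the module's exports object.
  if (napi_set_named_property(env, exports, "sayHello", fn) != napi_ok)
    return NULL;
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```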

Given the above code, the add-on can be used from JavaScript as follows:

const myaddon = require('./addon');
myaddon.sayHello();

The string passed to require() is the name of the target in binding.gyp
responsible for creating the .node file.

Any non-NULL data which is passed to this API via the data parameter can
be associated with the resulting JavaScript function (which is returned in the
result parameter) and freed whenever the function is garbage-collected by
passing both the JavaScript function and the data to napi_add_finalizer.

JavaScript Functions are described in
Section 19.2
of the ECMAScript Language Specification.

[in-out] argc: Specifies the size of the provided argv array
and receives the actual count of arguments.

[out] argv: Buffer to which the napi_value representing the
arguments are copied. If there are more arguments than the provided
count, only the requested number of arguments are copied. If there are fewer
arguments provided than claimed, the rest of argv is filled with napi_value
values that represent undefined.

[out] this: Receives the JavaScript this argument for the call.

[out] data: Receives the data pointer for the callback.

Returns napi_ok if the API succeeded.

This method is used within a callback function to retrieve details about the
call, such as the arguments and the this pointer, from a given callback info.
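A sketch of a native callback using napi_get_cb_info to retrieve up to two arguments (MyCallback is hypothetical; the retrieval pattern is standard):

```c
#include <node_api.h>

static napi_value MyCallback(napi_env env, napi_callback_info info) {
  size_t argc = 2;        // in: size of argv; out: actual argument count
  napi_value argv[2];
  napi_value this_arg;
  void* data;

  napi_status status =
      napi_get_cb_info(env, info, &argc, argv, &this_arg, &data);
  if (status != napi_ok)
    return NULL;

  // argv[0] and argv[1] now hold the arguments (undefined if fewer
  // were passed); 'data' is whatever was supplied when the function
  // was created.
  return argv[0];
}
```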

N-API offers a way to "wrap" C++ classes and instances so that the class
constructor and methods can be called from JavaScript.

The napi_define_class API defines a JavaScript class with constructor,
static properties and methods, and instance properties and methods that
correspond to the C++ class.

When JavaScript code invokes the constructor, the constructor callback
uses napi_wrap to wrap a new C++ instance in a JavaScript object,
then returns the wrapper object.

When JavaScript code invokes a method or property accessor on the class,
the corresponding napi_callback C++ function is invoked. For an instance
callback, napi_unwrap obtains the C++ instance that is the target of
the call.

For wrapped objects it may be difficult to distinguish between a function
called on a class prototype and a function called on an instance of a class.
A common pattern used to address this problem is to save a persistent
reference to the class constructor for later instanceof checks.

[in] utf8name: Name of the JavaScript constructor function; this is
not required to be the same as the C++ class name, though it is recommended
for clarity.

[in] length: The length of the utf8name in bytes, or NAPI_AUTO_LENGTH
if it is null-terminated.

[in] constructor: Callback function that handles constructing instances
of the class. (This should be a static method on the class, not an actual
C++ constructor function.)

[in] data: Optional data to be passed to the constructor callback as
the data property of the callback info.

[in] property_count: Number of items in the properties array argument.

[in] properties: Array of property descriptors describing static and
instance data properties, accessors, and methods on the class.
See napi_property_descriptor.

[out] result: A napi_value representing the constructor function for
the class.

Returns napi_ok if the API succeeded.

Defines a JavaScript class that corresponds to a C++ class, including:

A JavaScript constructor function that has the class name and invokes the
provided C++ constructor callback.

Properties on the constructor function corresponding to static data
properties, accessors, and methods of the C++ class (defined by
property descriptors with the napi_static attribute).

Properties on the constructor function's prototype object corresponding to
non-static data properties, accessors, and methods of the C++ class
(defined by property descriptors without the napi_static attribute).

The C++ constructor callback should be a static method on the class that calls
the actual class constructor, then wraps the new C++ instance in a JavaScript
object, and returns the wrapper object. See napi_wrap() for details.

The JavaScript constructor function returned from napi_define_class is
often saved and used later, to construct new instances of the class from native
code, and/or check whether provided values are instances of the class. In that
case, to prevent the function value from being garbage-collected, create a
persistent reference to it using napi_create_reference and ensure the
reference count is kept >= 1.

Any non-NULL data which is passed to this API via the data parameter or via
the data field of the napi_property_descriptor array items can be associated
with the resulting JavaScript constructor (which is returned in the result
parameter) and freed whenever the class is garbage-collected by passing both
the JavaScript function and the data to napi_add_finalizer.

[in] js_object: The JavaScript object that will be the wrapper for the
native object.

[in] native_object: The native instance that will be wrapped in the
JavaScript object.

[in] finalize_cb: Optional native callback that can be used to free the
native instance when the JavaScript object is ready for garbage-collection.

[in] finalize_hint: Optional contextual hint that is passed to the
finalize callback.

[out] result: Optional reference to the wrapped object.

Returns napi_ok if the API succeeded.

Wraps a native instance in a JavaScript object. The native instance can be
retrieved later using napi_unwrap().

When JavaScript code invokes a constructor for a class that was defined using
napi_define_class(), the napi_callback for the constructor is invoked.
After constructing an instance of the native class, the callback must then call
napi_wrap() to wrap the newly constructed instance in the already-created
JavaScript object that is the this argument to the constructor callback.
(The this object was created from the constructor function's prototype,
so it already has definitions of all the instance properties and methods.)

Typically when wrapping a class instance, a finalize callback should be
provided that simply deletes the native instance that is received as the data
argument to the finalize callback.
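A sketch of such a constructor callback, with a hypothetical native type MyObject. The finalize callback simply frees the instance, and no reference is requested from napi_wrap:

```c
#include <node_api.h>
#include <stdlib.h>

// Hypothetical native type wrapped by the JavaScript class.
typedef struct { int value; } MyObject;

// Finalizer: deletes the native instance when the wrapper is collected.
static void FinalizeMyObject(napi_env env, void* data, void* hint) {
  free(data);
}

static napi_value Constructor(napi_env env, napi_callback_info info) {
  napi_value this_arg;
  if (napi_get_cb_info(env, info, NULL, NULL, &this_arg, NULL) != napi_ok)
    return NULL;

  MyObject* obj = malloc(sizeof(MyObject));
  obj->value = 0;

  // Wrap the native instance in the 'this' object created from the
  // constructor's prototype. The last argument is NULL, so no
  // reference is returned.
  if (napi_wrap(env, this_arg, obj, FinalizeMyObject, NULL, NULL)
      != napi_ok) {
    free(obj);
    return NULL;
  }
  return this_arg;
}
```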

The optional returned reference is initially a weak reference, meaning it
has a reference count of 0. Typically this reference count would be incremented
temporarily during async operations that require the instance to remain valid.

Caution: The optional returned reference (if obtained) should be deleted via
napi_delete_reference ONLY in response to the finalize callback
invocation. If it is deleted beforehand, then the finalize callback may never
be invoked. Therefore, when obtaining a reference a finalize callback is also
required in order to enable correct disposal of the reference.

Calling napi_wrap() a second time on an object will return an error. To
associate another native instance with the object, use napi_remove_wrap()
first.

Retrieves a native instance that was previously wrapped in a JavaScript
object using napi_wrap().

When JavaScript code invokes a method or property accessor on the class, the
corresponding napi_callback is invoked. If the callback is for an instance
method or accessor, then the this argument to the callback is the wrapper
object; the wrapped C++ instance that is the target of the call can be obtained
then by calling napi_unwrap() on the wrapper object.

Retrieves a native instance that was previously wrapped in the JavaScript
object js_object using napi_wrap() and removes the wrapping. If a finalize
callback was associated with the wrapping, it will no longer be called when the
JavaScript object becomes garbage-collected.

[in] js_object: The JavaScript object to which the native data will be
attached.

[in] native_object: The native data that will be attached to the JavaScript
object.

[in] finalize_cb: Native callback that will be used to free the
native data when the JavaScript object is ready for garbage-collection.

[in] finalize_hint: Optional contextual hint that is passed to the
finalize callback.

[out] result: Optional reference to the JavaScript object.

Returns napi_ok if the API succeeded.

Adds a napi_finalize callback which will be called when the JavaScript object
in js_object is ready for garbage collection. This API is similar to
napi_wrap() except that

the native data cannot be retrieved later using napi_unwrap(),

nor can it be removed later using napi_remove_wrap(), and

the API can be called multiple times with different data items in order to
attach each of them to the JavaScript object.

Caution: The optional returned reference (if obtained) should be deleted via
napi_delete_reference ONLY in response to the finalize callback
invocation. If it is deleted before then, then the finalize callback may never
be invoked. Therefore, when obtaining a reference a finalize callback is also
required in order to enable correct disposal of the reference.

Addon modules often need to leverage async helpers from libuv as part of their
implementation. This allows them to schedule work to be executed asynchronously
so that their methods can return in advance of the work being completed. This
is important in order to allow them to avoid blocking overall execution
of the Node.js application.

N-API provides an ABI-stable interface for these
supporting functions which covers the most common asynchronous use cases.

The execute and complete callbacks are functions that will be
invoked when the executor is ready to execute and when it completes its
task respectively.

The execute function should avoid making any N-API calls
that could result in the execution of JavaScript or interaction with
JavaScript objects. Most often, any code that needs to make N-API
calls should do so in the complete callback instead.

After calling napi_cancel_async_work, the complete callback
will be invoked with a status value of napi_cancelled.
The work should not be deleted before the complete
callback invocation, even when it was cancelled.

[in] async_resource: An optional object associated with the async work
that will be passed to possible async_hooks init hooks.

[in] async_resource_name: Identifier for the kind of resource that is
being provided for diagnostic information exposed by the async_hooks API.

[in] execute: The native function which should be called to execute
the logic asynchronously. The given function is called from a worker pool
thread and can execute in parallel with the main event loop thread.

[in] complete: The native function which will be called when the
asynchronous logic is completed or is cancelled. The given function is called
from the main event loop thread.

[in] data: User-provided data context. This will be passed back into the
execute and complete functions.

[out] result: napi_async_work* which is the handle to the newly created
async work.

Returns napi_ok if the API succeeded.

This API allocates a work object that is used to execute logic asynchronously.
It should be freed using napi_delete_async_work once the work is no longer
required.

async_resource_name should be a null-terminated, UTF-8-encoded string.

The async_resource_name identifier is provided by the user and should be
representative of the type of async work being performed. It is also recommended
to apply namespacing to the identifier, e.g. by including the module name. See
the async_hooks documentation for more information.
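A sketch of creating and queueing async work (the Work struct, names, and the
doubling computation are hypothetical; error handling elided):

```c
#include <node_api.h>
#include <stdlib.h>

typedef struct {
  int input;
  int result;
  napi_async_work work;  // handle stored so Complete can delete it
} Work;

// Runs on a worker-pool thread: no N-API calls that touch JavaScript here.
static void Execute(napi_env env, void* data) {
  Work* w = data;
  w->result = w->input * 2;  // placeholder for real work
}

// Runs on the main event loop thread once Execute finishes (or the work
// was cancelled, in which case status is napi_cancelled).
static void Complete(napi_env env, napi_status status, void* data) {
  Work* w = data;
  // ...call back into JavaScript with w->result here...
  napi_delete_async_work(env, w->work);
  free(w);
}

static void Schedule(napi_env env) {
  Work* w = malloc(sizeof(Work));
  w->input = 21;

  napi_value name;
  napi_create_string_utf8(env, "example:double", NAPI_AUTO_LENGTH, &name);

  napi_create_async_work(env, NULL, name, Execute, Complete, w, &w->work);
  napi_queue_async_work(env, w->work);
}
```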

This API cancels queued work if it has not yet
been started. If it has already started executing, it cannot be
cancelled and napi_generic_failure will be returned. If successful,
the complete callback will be invoked with a status value of
napi_cancelled. The work should not be deleted before the complete
callback invocation, even if it has been successfully cancelled.

This API can be called even if there is a pending JavaScript exception.

The simple asynchronous work APIs above may not be appropriate for every
scenario. When using any other asynchronous mechanism, the following APIs
are necessary to ensure an asynchronous operation is properly tracked by
the runtime.

[in] async_context: Context for the async operation that is
invoking the callback. This should normally be a value previously
obtained from napi_async_init. However NULL is also allowed,
which indicates the current async context (if any) is to be used
for the callback.

[in] recv: The this object passed to the called function.

[in] func: napi_value representing the JavaScript function
to be invoked.

[in] argc: The count of elements in the argv array.

[in] argv: Array of JavaScript values as napi_value
representing the arguments to the function.

[out] result: napi_value representing the JavaScript object returned.

Returns napi_ok if the API succeeded.

This method allows a JavaScript function object to be called from a native
add-on. This API is similar to napi_call_function. However, it is used to call
from native code back into JavaScript after returning from an async
operation (when there is no other script on the stack). It is a fairly simple
wrapper around node::MakeCallback.

Note it is not necessary to use napi_make_callback from within a
napi_async_complete_callback; in that situation the callback's async
context has already been set up, so a direct call to napi_call_function
is sufficient and appropriate. Use of the napi_make_callback function
may be required when implementing custom async behavior that does not use
napi_create_async_work.

[in] resource_object: An optional object associated with the async work
that will be passed to possible async_hooks init hooks.

[in] context: Context for the async operation that is
invoking the callback. This should be a value previously obtained
from napi_async_init.

[out] result: The newly created scope.

There are cases (for example, resolving promises) where it is
necessary to have the equivalent of the scope associated with a callback
in place when making certain N-API calls. If there is no other script on
the stack the napi_open_callback_scope and
napi_close_callback_scope functions can be used to open/close
the required scope.

This API returns the highest N-API version supported by the
Node.js runtime. N-API is planned to be additive such that
newer releases of Node.js may support additional API functions.
In order to allow an addon to use a newer function when running with
versions of Node.js that support it, while providing
fallback behavior when running with Node.js versions that don't
support it:

Call napi_get_version() to determine if the API is available.

If available, dynamically load a pointer to the function using uv_dlsym().

Use the dynamically loaded pointer to invoke the function.

If the function is not available, provide an alternate implementation
that does not use the function.

[in] change_in_bytes: The change in externally allocated memory that is
kept alive by JavaScript objects.

[out] result: The adjusted value.

Returns napi_ok if the API succeeded.

This function gives V8 an indication of the amount of externally allocated
memory that is kept alive by JavaScript objects (i.e. a JavaScript object
that points to its own memory allocated by a native module). Registering
externally allocated memory will trigger global garbage collections more
often than it would otherwise.

N-API provides facilities for creating Promise objects as described in
Section 25.4 of the ECMA specification. It implements promises as a pair of
objects. When a promise is created by napi_create_promise(), a "deferred"
object is created and returned alongside the Promise. The deferred object is
bound to the created Promise and is the only means to resolve or reject the
Promise using napi_resolve_deferred() or napi_reject_deferred(). The
deferred object that is created by napi_create_promise() is freed by
napi_resolve_deferred() or napi_reject_deferred(). The Promise object may
be returned to JavaScript where it can be used in the usual fashion.

For example, to create a promise and pass it to an asynchronous worker:
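A sketch of that flow (do_something_asynchronous is a hypothetical helper;
error handling abbreviated):

```c
#include <node_api.h>

static napi_value CreatePromise(napi_env env, napi_callback_info info) {
  napi_deferred deferred;
  napi_value promise;

  // Create the promise; retain `deferred` so a worker can settle it later.
  if (napi_create_promise(env, &deferred, &promise) != napi_ok) return NULL;

  // Hand `deferred` off to the asynchronous worker (hypothetical helper):
  // do_something_asynchronous(deferred);

  // Later, on the main thread, settle the promise exactly once with either
  //   napi_resolve_deferred(env, deferred, resolution);
  // or
  //   napi_reject_deferred(env, deferred, rejection);
  // Either call frees the deferred object.

  return promise;
}
```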

This API resolves a JavaScript promise by way of the deferred object
with which it is associated. Thus, it can only be used to resolve JavaScript
promises for which the corresponding deferred object is available. This
effectively means that the promise must have been created using
napi_create_promise() and the deferred object returned from that call must
have been retained in order to be passed to this API.

This API rejects a JavaScript promise by way of the deferred object
with which it is associated. Thus, it can only be used to reject JavaScript
promises for which the corresponding deferred object is available. This
effectively means that the promise must have been created using
napi_create_promise() and the deferred object returned from that call must
have been retained in order to be passed to this API.

JavaScript functions can normally only be called from a native addon's main
thread. If an addon creates additional threads, then N-API functions that
require a napi_env, napi_value, or napi_ref must not be called from those
threads.

When an addon has additional threads and JavaScript functions need to be invoked
based on the processing completed by those threads, those threads must
communicate with the addon's main thread so that the main thread can invoke the
JavaScript function on their behalf. The thread-safe function APIs provide an
easy way to do this.

These APIs provide the type napi_threadsafe_function as well as APIs to
create, destroy, and call objects of this type.
napi_create_threadsafe_function() creates a persistent reference to a
napi_value that holds a JavaScript function which can be called from multiple
threads. The calls happen asynchronously. This means that values with which the
JavaScript callback is to be called will be placed in a queue, and, for each
value in the queue, a call will eventually be made to the JavaScript function.

Upon creation of a napi_threadsafe_function a napi_finalize callback can be
provided. This callback will be invoked on the main thread when the thread-safe
function is about to be destroyed. It receives the context and the finalize data
given during construction, and provides an opportunity for cleaning up after the
threads e.g. by calling uv_thread_join(). It is important that, aside from
the main loop thread, there be no threads left using the thread-safe function
after the finalize callback completes.

The context given during the call to napi_create_threadsafe_function() can
be retrieved from any thread with a call to
napi_get_threadsafe_function_context().

napi_call_threadsafe_function() can then be used for initiating a call into
JavaScript. napi_call_threadsafe_function() accepts a parameter which controls
whether the API behaves blockingly. If set to napi_tsfn_nonblocking, the API
behaves non-blockingly, returning napi_queue_full if the queue was full,
preventing data from being successfully added to the queue. If set to
napi_tsfn_blocking, the API blocks until space becomes available in the queue.
napi_call_threadsafe_function() never blocks if the thread-safe function was
created with a maximum queue size of 0.

The actual call into JavaScript is controlled by the callback given via the
call_js_cb parameter. call_js_cb is invoked on the main thread once for each
value that was placed into the queue by a successful call to
napi_call_threadsafe_function(). If such a callback is not given, a default
callback will be used, and the resulting JavaScript call will have no arguments.
The call_js_cb callback receives the JavaScript function to call as a
napi_value in its parameters, as well as the void* context pointer used when
creating the napi_threadsafe_function, and the next data pointer that was
created by one of the secondary threads. The callback can then use an API such
as napi_call_function() to call into JavaScript.

The callback may also be invoked with env and call_js_cb both set to NULL
to indicate that calls into JavaScript are no longer possible, while items
remain in the queue that may need to be freed. This normally occurs when the
Node.js process exits while there is a thread-safe function still active.

It is not necessary to call into JavaScript via napi_make_callback() because
N-API runs call_js_cb in a context appropriate for callbacks.

Threads can be added to and removed from a napi_threadsafe_function object
during its existence. Thus, in addition to specifying an initial number of
threads upon creation, napi_acquire_threadsafe_function can be called to
indicate that a new thread will start making use of the thread-safe function.
Similarly, napi_release_threadsafe_function can be called to indicate that an
existing thread will stop making use of the thread-safe function.

napi_threadsafe_function objects are destroyed when every thread which uses
the object has called napi_release_threadsafe_function() or has received a
return status of napi_closing in response to a call to
napi_call_threadsafe_function. The queue is emptied before the
napi_threadsafe_function is destroyed. It is important that
napi_release_threadsafe_function() be the last API call made in conjunction
with a given napi_threadsafe_function, because after the call completes, there
is no guarantee that the napi_threadsafe_function is still allocated. For the
same reason it is also important that no more use be made of a thread-safe
function after receiving a return value of napi_closing in response to a call
to napi_call_threadsafe_function. Data associated with the
napi_threadsafe_function can be freed in its napi_finalize callback which
was passed to napi_create_threadsafe_function().

Once the number of threads making use of a napi_threadsafe_function reaches
zero, no further threads can start making use of it by calling
napi_acquire_threadsafe_function(). In fact, all subsequent API calls
associated with it, except napi_release_threadsafe_function(), will return an
error value of napi_closing.

The thread-safe function can be "aborted" by giving a value of napi_tsfn_abort
to napi_release_threadsafe_function(). This will cause all subsequent APIs
associated with the thread-safe function except
napi_release_threadsafe_function() to return napi_closing even before its
reference count reaches zero. In particular, napi_call_threadsafe_function()
will return napi_closing, thus informing the threads that it is no longer
possible to make asynchronous calls to the thread-safe function. This can be
used as a criterion for terminating the thread. Upon receiving a return value
of napi_closing from napi_call_threadsafe_function() a thread must make no
further use of the thread-safe function because it is no longer guaranteed to
be allocated.

Similarly to libuv handles, thread-safe functions can be "referenced" and
"unreferenced". A "referenced" thread-safe function will cause the event loop on
the thread on which it is created to remain alive until the thread-safe function
is destroyed. In contrast, an "unreferenced" thread-safe function will not
prevent the event loop from exiting. The APIs napi_ref_threadsafe_function and
napi_unref_threadsafe_function exist for this purpose.
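The lifecycle above can be sketched as follows (the call_js_cb body and the
int payload are hypothetical; creation and the secondary-thread calls are
shown as comments since they span several functions):

```c
#include <node_api.h>
#include <stdlib.h>

// call_js_cb: runs on the main thread once for each queued item.
static void CallJs(napi_env env, napi_value js_cb, void* context, void* data) {
  // env and js_cb may be NULL if calls into JavaScript are no longer
  // possible; the item must still be freed.
  if (env != NULL) {
    napi_value undefined, arg;
    napi_get_undefined(env, &undefined);
    napi_create_int32(env, *(int*)data, &arg);
    napi_call_function(env, undefined, js_cb, 1, &arg, NULL);
  }
  free(data);
}

// Creation on the main thread (js_func is the JavaScript callback):
//   napi_threadsafe_function tsfn;
//   napi_create_threadsafe_function(env, js_func, NULL, resource_name,
//       0 /* unlimited queue */, 1 /* initial thread count */,
//       NULL, NULL, NULL, CallJs, &tsfn);
//
// From a secondary thread:
//   int* item = malloc(sizeof(int));
//   *item = 42;
//   napi_call_threadsafe_function(tsfn, item, napi_tsfn_blocking);
//
// When that thread is done with the function:
//   napi_release_threadsafe_function(tsfn, napi_tsfn_release);
```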

[in] async_resource: An optional object associated with the async work that
will be passed to possible async_hooks init hooks.

[in] async_resource_name: A JavaScript string to provide an identifier for
the kind of resource that is being provided for diagnostic information exposed
by the async_hooks API.

[in] max_queue_size: Maximum size of the queue. 0 for no limit.

[in] initial_thread_count: The initial number of threads, including the main
thread, which will be making use of this function.

[in] thread_finalize_data: Optional data to be passed to thread_finalize_cb.

[in] thread_finalize_cb: Optional function to call when the
napi_threadsafe_function is being destroyed.

[in] context: Optional data to attach to the resulting
napi_threadsafe_function.

[in] call_js_cb: Optional callback which calls the JavaScript function in
response to a call on a different thread. This callback will be called on the
main thread. If not given, the JavaScript function will be called with no
parameters and with undefined as its this value.

[in] data: Data to send into JavaScript via the callback call_js_cb
provided during the creation of the thread-safe JavaScript function.

[in] is_blocking: Flag whose value can be either napi_tsfn_blocking to
indicate that the call should block if the queue is full or
napi_tsfn_nonblocking to indicate that the call should return immediately with
a status of napi_queue_full whenever the queue is full.

This API will return napi_closing if napi_release_threadsafe_function() was
called with abort set to napi_tsfn_abort from any thread. The value is only
added to the queue if the API returns napi_ok.

[in] func: The asynchronous thread-safe JavaScript function to start making
use of.

A thread should call this API before passing func to any other thread-safe
function APIs to indicate that it will be making use of func. This prevents
func from being destroyed when all other threads have stopped making use of
it.

This API may be called from any thread which will start making use of func.

[in] mode: Flag whose value can be either napi_tsfn_release to indicate
that the current thread will make no further calls to the thread-safe function,
or napi_tsfn_abort to indicate that in addition to the current thread, no
other thread should make any further calls to the thread-safe function. If set
to napi_tsfn_abort, further calls to napi_call_threadsafe_function() will
return napi_closing, and no further values will be placed in the queue.

A thread should call this API when it stops making use of func. Passing func
to any thread-safe APIs after having called this API has undefined results, as
func may have been destroyed.

This API may be called from any thread which will stop making use of func.

The child_process module provides the ability to spawn child processes in
a manner that is similar, but not identical, to popen(3). This capability
is primarily provided by the child_process.spawn() function:

By default, pipes for stdin, stdout, and stderr are established between
the parent Node.js process and the spawned child. These pipes have
limited (and platform-specific) capacity. If the child process writes to
stdout in excess of that limit without the output being captured, the child
process will block waiting for the pipe buffer to accept more data. This is
identical to the behavior of pipes in the shell. Use the { stdio: 'ignore' }
option if the output will not be consumed.

The child_process.spawn() method spawns the child process asynchronously,
without blocking the Node.js event loop. The child_process.spawnSync()
function provides equivalent functionality in a synchronous manner that blocks
the event loop until the spawned process either exits or is terminated.

For certain use cases, such as automating shell scripts, the
synchronous counterparts may be more convenient. In many cases, however,
the synchronous methods can have significant impact on performance due to
stalling the event loop while spawned processes complete.

Each of the methods returns a ChildProcess instance. These objects
implement the Node.js EventEmitter API, allowing the parent process to
register listener functions that are called when certain events occur during
the life cycle of the child process.

The importance of the distinction between child_process.exec() and
child_process.execFile() can vary based on platform. On Unix-type operating
systems (Unix, Linux, macOS) child_process.execFile() can be more efficient
because it does not spawn a shell by default. On Windows, however, .bat and .cmd
files are not executable on their own without a terminal, and therefore cannot
be launched using child_process.execFile(). When running on Windows, .bat
and .cmd files can be invoked using child_process.spawn() with the shell
option set, with child_process.exec(), or by spawning cmd.exe and passing
the .bat or .cmd file as an argument (which is what the shell option and
child_process.exec() do). In any case, if the script filename contains
spaces it needs to be quoted.

Spawns a shell then executes the command within that shell, buffering any
generated output. The command string passed to the exec function is processed
directly by the shell, and special characters (which vary based on the shell)
need to be dealt with accordingly:

const { exec } = require('child_process');

exec('"/path/to/test file/test.sh" arg1 arg2');
// Double quotes are used so that the space in the path is not interpreted as
// a delimiter of multiple arguments.
exec('echo "The \\$HOME variable is $HOME"');
// The $HOME variable is escaped in the first instance, but not in the second.

Never pass unsanitized user input to this function. Any input containing shell
metacharacters may be used to trigger arbitrary command execution.

If a callback function is provided, it is called with the arguments
(error, stdout, stderr). On success, error will be null. On error,
error will be an instance of Error. The error.code property will be
the exit code of the child process while error.signal will be set to the
signal that terminated the process. Any exit code other than 0 is considered
to be an error.

The stdout and stderr arguments passed to the callback will contain the
stdout and stderr output of the child process. By default, Node.js will decode
the output as UTF-8 and pass strings to the callback. The encoding option
can be used to specify the character encoding used to decode the stdout and
stderr output. If encoding is 'buffer', or an unrecognized character
encoding, Buffer objects will be passed to the callback instead.

If timeout is greater than 0, the parent will send the signal
identified by the killSignal property (the default is 'SIGTERM') if the
child runs longer than timeout milliseconds.

Unlike the exec(3) POSIX system call, child_process.exec() does not replace
the existing process and uses a shell to execute the command.

If this method is invoked as its util.promisify()ed version, it returns
a Promise for an Object with stdout and stderr properties. In case of an
error (including any error resulting in an exit code other than 0), a rejected
promise is returned, with the same error object given in the callback, but
with an additional two properties stdout and stderr.

The child_process.execFile() function is similar to child_process.exec()
except that it does not spawn a shell by default. Rather, the specified executable file
is spawned directly as a new process making it slightly more efficient than
child_process.exec().

The same options as child_process.exec() are supported. Since a shell is not
spawned, behaviors such as I/O redirection and file globbing are not supported.

The stdout and stderr arguments passed to the callback will contain the
stdout and stderr output of the child process. By default, Node.js will decode
the output as UTF-8 and pass strings to the callback. The encoding option
can be used to specify the character encoding used to decode the stdout and
stderr output. If encoding is 'buffer', or an unrecognized character
encoding, Buffer objects will be passed to the callback instead.

If this method is invoked as its util.promisify()ed version, it returns
a Promise for an Object with stdout and stderr properties. In case of an
error (including any error resulting in an exit code other than 0), a rejected
promise is returned, with the same error object given in the
callback, but with an additional two properties stdout and stderr.

execArgv <string[]> List of string arguments passed to the executable.
Default: process.execArgv.

silent <boolean> If true, stdin, stdout, and stderr of the child will be
piped to the parent, otherwise they will be inherited from the parent, see
the 'pipe' and 'inherit' options for child_process.spawn()'s
stdio for more details. Default: false.

stdio <Array> | <string> See child_process.spawn()'s stdio.
When this option is provided, it overrides silent. If the array variant
is used, it must contain exactly one item with value 'ipc' or an error
will be thrown. For instance [0, 1, 2, 'ipc'].

windowsVerbatimArguments <boolean> No quoting or escaping of arguments is
done on Windows. Ignored on Unix. Default: false.

It is important to keep in mind that spawned Node.js child processes are
independent of the parent with exception of the IPC communication channel
that is established between the two. Each process has its own memory, with
their own V8 instances. Because of the additional resource allocations
required, spawning a large number of child Node.js processes is not
recommended.

By default, child_process.fork() will spawn new Node.js instances using the
process.execPath of the parent process. The execPath property in the
options object allows for an alternative execution path to be used.

Node.js processes launched with a custom execPath will communicate with the
parent process using the file descriptor (fd) identified using the
environment variable NODE_CHANNEL_FD on the child process.

Unlike the fork(2) POSIX system call, child_process.fork() does not clone the
current process.

The shell option available in child_process.spawn() is not supported by
child_process.fork() and will be ignored if set.

Certain platforms (macOS, Linux) will use the value of argv[0] for the process
title while others (Windows, SunOS) will use command.

Node.js currently overwrites argv[0] with process.execPath on startup, so
process.argv[0] in a Node.js child process will not match the argv0
parameter passed to spawn from the parent. Retrieve it with the
process.argv0 property instead.

On Windows, setting options.detached to true makes it possible for the
child process to continue running after the parent exits. The child will have
its own console window. Once enabled for a child process, it cannot be
disabled.

On non-Windows platforms, if options.detached is set to true, the child
process will be made the leader of a new process group and session. Note that
child processes may continue running after the parent exits regardless of
whether they are detached or not. See setsid(2) for more information.

By default, the parent will wait for the detached child to exit. To prevent the
parent from waiting for a given subprocess to exit, use the
subprocess.unref() method. Doing so will cause the parent's event loop to not
include the child in its reference count, allowing the parent to exit
independently of the child, unless there is an established IPC channel between
the child and the parent.

When using the detached option to start a long-running process, the process
will not stay running in the background after the parent exits unless it is
provided with a stdio configuration that is not connected to the parent.
If the parent's stdio is inherited, the child will remain attached to the
controlling terminal.

Example of a long-running process, by detaching and also ignoring its parent
stdio file descriptors, in order to ignore the parent's termination:

The options.stdio option is used to configure the pipes that are established
between the parent and child process. By default, the child's stdin, stdout,
and stderr are redirected to corresponding subprocess.stdin,
subprocess.stdout, and subprocess.stderr streams on the
ChildProcess object. This is equivalent to setting the options.stdio
equal to ['pipe', 'pipe', 'pipe'].

Otherwise, the value of options.stdio is an array where each index corresponds
to an fd in the child. The fds 0, 1, and 2 correspond to stdin, stdout,
and stderr, respectively. Additional fds can be specified to create additional
pipes between the parent and child. The value is one of the following:

Accessing the IPC channel fd in any way other than process.send()
or using the IPC channel with a child process that is not a Node.js instance
is not supported.

'ignore' - Instructs Node.js to ignore the fd in the child. While Node.js
will always open fds 0 - 2 for the processes it spawns, setting the fd to
'ignore' will cause Node.js to open /dev/null and attach it to the
child's fd.

'inherit' - Pass through the corresponding stdio stream to/from the
parent process. In the first three positions, this is equivalent to
process.stdin, process.stdout, and process.stderr, respectively. In
any other position, equivalent to 'ignore'.

<Stream> object - Share a readable or writable stream that refers to a tty,
file, socket, or a pipe with the child process. The stream's underlying
file descriptor is duplicated in the child process to the fd that
corresponds to the index in the stdio array. Note that the stream must
have an underlying descriptor (file streams do not until the 'open'
event has occurred).

Positive integer - The integer value is interpreted as a file descriptor
that is currently open in the parent process. It is shared with the child
process, similar to how <Stream> objects can be shared.

null, undefined - Use default value. For stdio fds 0, 1, and 2 (in other
words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the
default is 'ignore'.

It is worth noting that when an IPC channel is established between the
parent and child processes, and the child is a Node.js process, the child
is launched with the IPC channel unreferenced (using unref()) until the
child registers an event handler for the 'disconnect' event
or the 'message' event. This allows the child to exit
normally without the process being held open by the open IPC channel.

On UNIX-like operating systems, the child_process.spawn() method
performs memory operations synchronously before decoupling the event loop
from the child. Applications with a large memory footprint may find frequent
child_process.spawn() calls to be a bottleneck. For more information,
see V8 issue 7381.

The child_process.execFileSync() method is generally identical to
child_process.execFile() with the exception that the method will not return
until the child process has fully closed. When a timeout has been encountered
and killSignal is sent, the method won't return until the process has
completely exited.

If the child process intercepts and handles the SIGTERM signal and
does not exit, the parent process will still wait until the child process has
exited.

If the process times out or has a non-zero exit code, this method will
throw an Error that will include the full result of the underlying
child_process.spawnSync().

If the shell option is enabled, do not pass unsanitized user input to this
function. Any input containing shell metacharacters may be used to trigger
arbitrary command execution.

The child_process.execSync() method is generally identical to
child_process.exec() with the exception that the method will not return until
the child process has fully closed. When a timeout has been encountered and
killSignal is sent, the method won't return until the process has completely
exited. Note that if the child process intercepts and handles the SIGTERM
signal and doesn't exit, the parent process will wait until the child
process has exited.

If the process times out or has a non-zero exit code, this method will
throw. The Error object will contain the entire result from
child_process.spawnSync().

Never pass unsanitized user input to this function. Any input containing shell
metacharacters may be used to trigger arbitrary command execution.

error<Error> The error object if the child process failed or timed out.

The child_process.spawnSync() method is generally identical to
child_process.spawn() with the exception that the function will not return
until the child process has fully closed. When a timeout has been encountered
and killSignal is sent, the method won't return until the process has
completely exited. Note that if the process intercepts and handles the
SIGTERM signal and doesn't exit, the parent process will wait until the child
process has exited.

If the shell option is enabled, do not pass unsanitized user input to this
function. Any input containing shell metacharacters may be used to trigger
arbitrary command execution.

The 'exit' event may or may not fire after an error has occurred. When
listening to both the 'exit' and 'error' events, it is important to guard
against accidentally invoking handler functions multiple times.

The 'exit' event is emitted after the child process ends. If the process
exited, code is the final exit code of the process, otherwise null. If the
process terminated due to receipt of a signal, signal is the string name of
the signal, otherwise null. One of the two will always be non-null.

Note that when the 'exit' event is triggered, child process stdio streams
might still be open.

Also, note that Node.js establishes signal handlers for SIGINT and
SIGTERM and Node.js processes will not terminate immediately due to receipt
of those signals. Rather, Node.js will perform a sequence of cleanup actions
and then will re-raise the handled signal.

The subprocess.connected property indicates whether it is still possible to
send and receive messages from a child process. When subprocess.connected is
false, it is no longer possible to send or receive messages.

Closes the IPC channel between parent and child, allowing the child to exit
gracefully once there are no other connections keeping it alive. After calling
this method the subprocess.connected and process.connected properties in
both the parent and child (respectively) will be set to false, and it will
no longer be possible to pass messages between the processes.

The 'disconnect' event will be emitted when there are no messages in the
process of being received. This will most often be triggered immediately after
calling subprocess.disconnect().

Note that when the child process is a Node.js instance (e.g. spawned using
child_process.fork()), the process.disconnect() method can be invoked
within the child process to close the IPC channel as well.

The ChildProcess object may emit an 'error' event if the signal cannot be
delivered. Sending a signal to a child process that has already exited is not
an error but may have unforeseen consequences. Specifically, if the process
identifier (PID) has been reassigned to another process, the signal will be
delivered to that process instead which can have unexpected results.

Note that while the function is called kill, the signal delivered to the
child process may not actually terminate the process.

On Linux, child processes of child processes will not be terminated
when attempting to kill their parent. This is likely to happen when running a
new process in a shell or with the use of the shell option of ChildProcess.

<boolean> Set to true after subprocess.kill() is used to successfully
send a signal to the child process.

The subprocess.killed property indicates whether the child process
successfully received a signal from subprocess.kill(). The killed property
does not indicate that the child process has been terminated.

Calling subprocess.ref() after making a call to subprocess.unref() will
restore the removed reference count for the child process, forcing the parent
to wait for the child to exit before exiting itself.

When an IPC channel has been established between the parent and child
(i.e. when using child_process.fork()), the subprocess.send() method can
be used to send messages to the child process. When the child process is a
Node.js instance, these messages can be received via the 'message' event.

The message goes through serialization and parsing. The resulting
message might not be the same as what is originally sent.

Child Node.js processes will have a process.send() method of their own that
allows the child to send messages back to the parent.

There is a special case when sending a {cmd: 'NODE_foo'} message. Messages
containing a NODE_ prefix in the cmd property are reserved for use within
Node.js core and will not be emitted in the child's 'message'
event. Rather, such messages are emitted using the
'internalMessage' event and are consumed internally by Node.js.
Applications should avoid using such messages or listening for
'internalMessage' events as it is subject to change without notice.

The optional sendHandle argument that may be passed to subprocess.send() is
for passing a TCP server or socket object to the child process. The child will
receive the object as the second argument passed to the callback function
registered on the 'message' event. Any data that is received
and buffered in the socket will not be sent to the child.

The optional callback is a function that is invoked after the message is
sent but before the child may have received it. The function is called with a
single argument: null on success, or an Error object on failure.

If no callback function is provided and the message cannot be sent, an
'error' event will be emitted by the ChildProcess object. This can happen,
for instance, when the child process has already exited.

subprocess.send() will return false if the channel has closed or when the
backlog of unsent messages exceeds a threshold that makes it unwise to send
more. Otherwise, the method returns true. The callback function can be
used to implement flow control.

Once the server is shared between the parent and child, some connections
can be handled by the parent and some by the child.

While the example above uses a server created using the net module, dgram
module servers use exactly the same workflow with the exceptions of listening on
a 'message' event instead of 'connection' and using server.bind() instead of
server.listen(). This is, however, currently only supported on UNIX platforms.

Similarly, the sendHandle argument can be used to pass the handle of a
socket to the child process. The example below spawns two children that each
handle connections with "normal" or "special" priority:

const { fork } = require('child_process');
const normal = fork('subprocess.js', ['normal']);
const special = fork('subprocess.js', ['special']);

// Open up the server and send sockets to child. Use pauseOnConnect to prevent
// the sockets from being read before they are sent to the child process.
const server = require('net').createServer({ pauseOnConnect: true });
server.on('connection', (socket) => {
  // If this is special priority...
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // This is normal priority.
  normal.send('socket', socket);
});
server.listen(1337);

The subprocess.js would receive the socket handle as the second argument
passed to the event callback function:

process.on('message', (m, socket) => {
  if (m === 'socket') {
    if (socket) {
      // Check that the client socket exists.
      // It is possible for the socket to be closed between the time it is
      // sent and the time it is received in the child process.
      socket.end(`Request handled with ${process.argv[2]} priority`);
    }
  }
});

Once a socket has been passed to a child, the parent is no longer capable of
tracking when the socket is destroyed. To indicate this, the .connections
property becomes null. It is recommended not to use .maxConnections when
this occurs.

It is also recommended that any 'message' handlers in the child process
verify that socket exists, as the connection may have been closed during the
time it takes to send the connection to the child.

A sparse array of pipes to the child process, corresponding with positions in
the stdio option passed to child_process.spawn() that have been set
to the value 'pipe'. Note that subprocess.stdio[0], subprocess.stdio[1],
and subprocess.stdio[2] are also available as subprocess.stdin,
subprocess.stdout, and subprocess.stderr, respectively.

In the following example, only the child's fd 1 (stdout) is configured as a
pipe, so only the parent's subprocess.stdio[1] is a stream; all other values
in the array are null.

By default, the parent will wait for the detached child to exit. To prevent the
parent from waiting for a given subprocess to exit, use the
subprocess.unref() method. Doing so will cause the parent's event loop to not
include the child in its reference count, allowing the parent to exit
independently of the child, unless there is an established IPC channel between
the child and the parent.

The maxBuffer option specifies the largest number of bytes allowed on stdout
or stderr. If this value is exceeded, then the child process is terminated.
This impacts output that includes multibyte character encodings such as UTF-8 or
UTF-16. For instance, console.log('中文测试') will send 13 UTF-8 encoded bytes
to stdout although there are only 4 characters.
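The byte count can be verified directly:

```javascript
// '中文测试' is 4 characters but occupies 12 bytes in UTF-8; console.log()
// appends a newline, for a total of 13 bytes written to stdout.
console.log('中文测试'.length);                     // 4
console.log(Buffer.byteLength('中文测试', 'utf8')); // 12
```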

Although Microsoft specifies %COMSPEC% must contain the path to
'cmd.exe' in the root environment, child processes are not always subject to
the same requirement. Thus, in child_process functions where a shell can be
spawned, 'cmd.exe' is used as a fallback if process.env.ComSpec is
unavailable.

The worker processes are spawned using the child_process.fork() method,
so that they can communicate with the parent via IPC and pass server
handles back and forth.

The cluster module supports two methods of distributing incoming
connections.

The first one (and the default one on all platforms except Windows),
is the round-robin approach, where the master process listens on a
port, accepts new connections and distributes them across the workers
in a round-robin fashion, with some built-in smarts to avoid
overloading a worker process.

The second approach is where the master process creates the listen
socket and sends it to interested workers. The workers then accept
incoming connections directly.

The second approach should, in theory, give the best performance.
In practice however, distribution tends to be very unbalanced due
to operating system scheduler vagaries. Loads have been observed
where over 70% of all connections ended up in just two processes,
out of a total of eight.

Because server.listen() hands off most of the work to the master
process, there are three cases where the behavior between a normal
Node.js process and a cluster worker differs:

server.listen({fd: 7}) Because the message is passed to the master,
file descriptor 7 in the parent will be listened on, and the
handle passed to the worker, rather than listening to the worker's
idea of what the number 7 file descriptor references.

server.listen(handle) Listening on handles explicitly will cause
the worker to use the supplied handle, rather than talk to the master
process.

server.listen(0) Normally, this will cause servers to listen on a
random port. However, in a cluster, each worker will receive the
same "random" port each time they do listen(0). In essence, the
port is random the first time, but predictable thereafter. To listen
on a unique port, generate a port number based on the cluster worker ID.

Node.js does not provide routing logic. It is therefore important to design an
application such that it does not rely too heavily on in-memory data objects for
things like sessions and login.

Because workers are all separate processes, they can be killed or
re-spawned depending on a program's needs, without affecting other
workers. As long as there are some workers still alive, the server will
continue to accept connections. If no workers are alive, existing connections
will be dropped and new connections will be refused. Node.js does not
automatically manage the number of workers, however. It is the application's
responsibility to manage the worker pool based on its own needs.

Although a primary use case for the cluster module is networking, it can
also be used for other use cases requiring worker processes.

In a worker, this function will close all servers, wait for the 'close' event
on those servers, and then disconnect the IPC channel.

In the master, an internal message is sent to the worker causing it to call
.disconnect() on itself.

Causes .exitedAfterDisconnect to be set.

Note that after a server is closed, it will no longer accept new connections,
but connections may be accepted by any other listening worker. Existing
connections will be allowed to close as usual. When no more connections exist
(see server.close()), the IPC channel to the worker will close, allowing it
to die gracefully.

The above applies only to server connections; client connections are not
automatically closed by workers, and disconnect does not wait for them to close
before exiting.

Note that in a worker, process.disconnect exists, but it is not this function;
it is disconnect().

Because long living server connections may block workers from disconnecting, it
may be useful to send a message, so application specific actions may be taken to
close them. It also may be useful to implement a timeout, killing a worker if
the 'disconnect' event has not been emitted after some time.

This function returns true if the worker is connected to its master via its
IPC channel, false otherwise. A worker is connected to its master after it
has been created. It is disconnected after the 'disconnect' event is emitted.

This function will kill the worker. In the master, it does this by disconnecting
the worker.process, and once disconnected, killing with signal. In the
worker, it does it by disconnecting the channel, and then exiting with code 0.

Because kill() attempts to gracefully disconnect the worker process, it is
susceptible to waiting indefinitely for the disconnect to complete. For example,
if the worker enters an infinite loop, a graceful disconnect will never occur.
If the graceful disconnect behavior is not needed, use worker.process.kill().

Causes .exitedAfterDisconnect to be set.

This method is aliased as worker.destroy() for backwards compatibility.

Note that in a worker, process.kill() exists, but it is not this function;
it is kill().

After calling listen() from a worker, when the 'listening' event is emitted
on the server a 'listening' event will also be emitted on cluster in the
master.

The event handler is executed with two arguments, the worker contains the
worker object and the address object contains the following connection
properties: address, port and addressType. This is very useful if the
worker is listening on more than one address.

After forking a new worker, the worker should respond with an online message.
When the master receives an online message it will emit this event.
The difference between 'fork' and 'online' is that fork is emitted when the
master forks a worker, and 'online' is emitted when the worker is running.

cluster.on('online', (worker) => {
  console.log('Yay, the worker responded after it was forked');
});

The scheduling policy, either cluster.SCHED_RR for round-robin or
cluster.SCHED_NONE to leave it to the operating system. This is a
global setting and effectively frozen once either the first worker is spawned,
or cluster.setupMaster() is called, whichever comes first.

SCHED_RR is the default on all operating systems except Windows.
Windows will change to SCHED_RR once libuv is able to effectively
distribute IOCP handles without incurring a large performance hit.

cluster.schedulingPolicy can also be set through the
NODE_CLUSTER_SCHED_POLICY environment variable. Valid
values are 'rr' and 'none'.

cwd<string> Current working directory of the worker process. Default: undefined (inherits from parent process).

silent<boolean> Whether or not to send output to parent's stdio.
Default: false.

stdio<Array> Configures the stdio of forked processes. Because the
cluster module relies on IPC to function, this configuration must contain an
'ipc' entry. When this option is provided, it overrides silent.

inspectPort<number> | <Function> Sets inspector port of worker.
This can be a number, or a function that takes no arguments and returns a
number. By default each worker gets its own port, incremented from the
master's process.debugPort.

windowsHide<boolean> Hide the forked processes' console window that would
normally be created on Windows systems. Default: false.

After calling .setupMaster() (or .fork()) this settings object will contain
the settings, including the default values.

A hash that stores the active worker objects, keyed by id field. Makes it
easy to loop through all the workers. It is only available in the master
process.

A worker is removed from cluster.workers after the worker has disconnected
and exited. The order between these two events cannot be determined in
advance. However, it is guaranteed that the removal from the cluster.workers
list happens before the last 'disconnect' or 'exit' event is emitted.

Indicate the end of node options. Pass the rest of the arguments to the script.
If no script filename or eval/print script is supplied prior to this, then
the next argument will be used as a script filename.

V8 inspector integration allows tools such as Chrome DevTools and IDEs to debug
and profile Node.js instances. The tools attach to Node.js instances via a
tcp port and communicate using the Chrome DevTools Protocol.

Warning: binding inspector to a public IP:port combination is insecure

Binding the inspector to a public IP (including 0.0.0.0) with an open port is
insecure, as it allows external hosts to connect to the inspector and perform
a remote code execution attack.

If you specify a host, make sure that at least one of the following is true:
either the host is not public, or the port is properly firewalled to disallow
unwanted connections.

More specifically, --inspect=0.0.0.0 is insecure if the port (9229 by
default) is not firewall-protected.

Pending deprecations are generally identical to a runtime deprecation with the
notable exception that they are turned off by default and will not be emitted
unless either the --pending-deprecation command line flag, or the
NODE_PENDING_DEPRECATION=1 environment variable, is set. Pending deprecations
are used to provide a kind of selective "early warning" mechanism that
developers may leverage to detect deprecated API usage.

Instructs the module loader to preserve symbolic links when resolving and
caching modules.

By default, when Node.js loads a module from a path that is symbolically linked
to a different on-disk location, Node.js will dereference the link and use the
actual on-disk "real path" of the module as both an identifier and as a root
path to locate other dependency modules. In most cases, this default behavior
is acceptable. However, when using symbolically linked peer dependencies, the
default behavior causes an exception to be thrown if moduleA attempts to
require moduleB as a peer dependency.

The --preserve-symlinks command line flag instructs Node.js to use the
symlink path for modules as opposed to the real path, allowing symbolically
linked peer dependencies to be found.

Note, however, that using --preserve-symlinks can have other side effects.
Specifically, symbolically linked native modules can fail to load if those
are linked from more than one location in the dependency tree (Node.js would
see those as two separate modules and would attempt to load the module multiple
times, causing an exception to be thrown).

The --preserve-symlinks flag does not apply to the main module, which allows
node --preserve-symlinks node_module/.bin/<foo> to work. To apply the same
behavior for the main module, also use --preserve-symlinks-main.

Instructs the module loader to preserve symbolic links when resolving and
caching the main module (require.main).

This flag exists so that the main module can be opted-in to the same behavior
that --preserve-symlinks gives to all other imports; they are separate flags,
however, for backward compatibility with older Node.js versions.

Note that --preserve-symlinks-main does not imply --preserve-symlinks; it
is expected that --preserve-symlinks-main will be used in addition to
--preserve-symlinks when it is not desirable to follow symlinks before
resolving relative paths.

Write process warnings to the given file instead of printing to stderr. The
file will be created if it does not exist, and will be appended to if it does.
If an error occurs while attempting to write the warning to the file, the
warning will be written to stderr instead.

Use bundled Mozilla CA store as supplied by current Node.js version
or use OpenSSL's default CA store. The default store is selectable
at build-time.

The bundled CA store, as supplied by Node.js, is a snapshot of Mozilla CA store
that is fixed at release time. It is identical on all supported platforms.

Using OpenSSL store allows for external modifications of the store. For most
Linux and BSD distributions, this store is maintained by the distribution
maintainers and system administrators. OpenSSL CA store location is dependent on
configuration of the OpenSSL library but this can be altered at runtime using
environment variables.

When set, the well known "root" CAs (like VeriSign) will be extended with the
extra certificates in file. The file should consist of one or more trusted
certificates in PEM format. A message will be emitted (once) with
process.emitWarning() if the file is missing or
malformed, but any errors are otherwise ignored.

Note that neither the well known nor extra certificates are used when the ca
options property is explicitly specified for a TLS or HTTPS client or server.

This environment variable is ignored when node runs as setuid root or
has Linux file capabilities set.

A space-separated list of command line options. options... are interpreted as
if they had been specified on the command line before the actual command line
(so they can be overridden). Node.js will exit with an error if an option
that is not allowed in the environment is used, such as -p or a script file.

Pending deprecations are generally identical to a runtime deprecation with the
notable exception that they are turned off by default and will not be emitted
unless either the --pending-deprecation command line flag, or the
NODE_PENDING_DEPRECATION=1 environment variable, is set. Pending deprecations
are used to provide a kind of selective "early warning" mechanism that
developers may leverage to detect deprecated API usage.

When set, process warnings will be emitted to the given file instead of
printing to stderr. The file will be created if it does not exist, and will be
appended to if it does. If an error occurs while attempting to write the
warning to the file, the warning will be written to stderr instead. This is
equivalent to using the --redirect-warnings=file command-line flag.

Path to the file used to store the persistent REPL history. The default path is
~/.node_repl_history, which is overridden by this variable. Setting the value
to an empty string ('' or ' ') disables persistent REPL history.

NODE_V8_COVERAGE will automatically propagate to subprocesses, making it
easier to instrument applications that call the child_process.spawn() family
of functions. NODE_V8_COVERAGE can be set to an empty string, to prevent
propagation.

At this time coverage is only collected in the main thread and will not be
output for code executed by worker threads.

If --use-openssl-ca is enabled, this overrides and sets OpenSSL's directory
containing trusted certificates.

Be aware that unless the child environment is explicitly set, this environment
variable will be inherited by any child processes, and if they use OpenSSL, it
may cause them to trust the same CAs as node.

If --use-openssl-ca is enabled, this overrides and sets OpenSSL's file
containing trusted certificates.

Be aware that unless the child environment is explicitly set, this environment
variable will be inherited by any child processes, and if they use OpenSSL, it
may cause them to trust the same CAs as node.

Asynchronous system APIs are used by Node.js whenever possible, but where they
do not exist, libuv's threadpool is used to create asynchronous node APIs based
on synchronous system APIs. Node.js APIs that use the threadpool are:

all fs APIs, other than the file watcher APIs and those that are explicitly
synchronous

crypto.pbkdf2()

crypto.randomBytes(), unless it is used without a callback

crypto.randomFill()

dns.lookup()

all zlib APIs, other than those that are explicitly synchronous

Because libuv's threadpool has a fixed size, if for whatever reason any of
these APIs takes a long time, other (seemingly unrelated) APIs that run in
libuv's threadpool will experience degraded performance. To mitigate this
issue, one potential solution is to increase the size of libuv's
threadpool by setting the 'UV_THREADPOOL_SIZE' environment variable to a value
greater than 4 (its current default value). For more information, see the
libuv threadpool documentation.

The console module provides a simple debugging console that is similar to the
JavaScript console mechanism provided by web browsers.

The module exports two specific components:

A Console class with methods such as console.log(), console.error() and
console.warn() that can be used to write to any Node.js stream.

A global console instance configured to write to process.stdout and
process.stderr. The global console can be used without calling
require('console').

Warning: The global console object's methods are neither consistently
synchronous like the browser APIs they resemble, nor are they consistently
asynchronous like all other Node.js streams. See the note on process I/O for
more information.

Errors that occur while writing to the underlying streams are ignored by default.

The Console class can be used to create a simple logger with configurable
output streams and can be accessed using either require('console').Console
or console.Console (or their destructured counterparts).

ignoreErrors<boolean> Ignore errors when writing to the underlying
streams. Default: true.

colorMode<boolean> | <string> Set color support for this Console instance.
Setting to true enables coloring while inspecting values, setting to
'auto' will make color support depend on the value of the isTTY property
and the value returned by getColorDepth() on the respective stream.
Default: 'auto'.

Creates a new Console with one or two writable stream instances. stdout is a
writable stream to print log or info output. stderr is used for warning or
error output. If stderr is not provided, stdout is used for stderr.

...message<any> All arguments besides value are used as the error message.

A simple assertion test that verifies whether value is truthy. If it is not,
Assertion failed is logged. If provided, the error message is formatted
using util.format() by passing along all message arguments. The output is
used as the error message.
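For example (note that in modern Node.js versions a failed assertion only logs; it does not throw):

```javascript
// A passing assertion prints nothing; a failing one logs "Assertion failed"
// (with the formatted message) to stderr.
console.assert(2 + 2 === 4, 'math is broken');          // no output
console.assert(1 === 2, '%d is not equal to %d', 1, 2);
// stderr: Assertion failed: 1 is not equal to 2
```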

When stdout is a TTY, calling console.clear() will attempt to clear the
TTY. When stdout is not a TTY, this method does nothing.

The specific operation of console.clear() can vary across operating systems
and terminal types. For most Linux operating systems, console.clear()
operates similarly to the clear shell command. On Windows, console.clear()
will clear only the output in the current terminal viewport for the Node.js
binary.

showHidden<boolean> If true then the object's non-enumerable and symbol
properties will be shown too. Default: false.

depth<number> Tells util.inspect() how many times to recurse while
formatting the object. This is useful for inspecting large complicated
objects. To make it recurse indefinitely, pass null. Default: 2.

Prints to stderr with newline. Multiple arguments can be passed, with the
first used as the primary message and all additional used as substitution
values similar to printf(3) (the arguments are all passed to
util.format()).

If formatting elements (e.g. %d) are not found in the first string then
util.inspect() is called on each argument and the resulting string
values are concatenated. See util.format() for more information.

Prints to stdout with newline. Multiple arguments can be passed, with the
first used as the primary message and all additional used as substitution
values similar to printf(3) (the arguments are all passed to
util.format()).

Try to construct a table with the columns of the properties of tabularData
(or use properties) and rows of tabularData and log it. Falls back to just
logging the argument if it can’t be parsed as tabular.

Starts a timer that can be used to compute the duration of an operation. Timers
are identified by a unique label. Use the same label when calling
console.timeEnd() to stop the timer and output the elapsed time in
milliseconds to stdout. Timer durations are accurate to the sub-millisecond.
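A minimal sketch (the label and the measured loop are arbitrary):

```javascript
console.time('100k-iterations');
for (let i = 0; i < 1e5; i++) {
  // work being measured
}
console.timeEnd('100k-iterations'); // e.g. "100k-iterations: 1.23ms"
```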

This method does not display anything unless used in the inspector. The
console.profile() method starts a JavaScript CPU profile with an optional
label until console.profileEnd() is called. The profile is then added to
the Profiles panel of the inspector.

console.profile('MyLabel');
// Some code
console.profileEnd('MyLabel');
// Adds the profile 'MyLabel' to the Profiles panel of the inspector.

This method does not display anything unless used in the inspector. Stops the
current JavaScript CPU profiling session if one has been started and prints
the report to the Profiles panel of the inspector. See
console.profile() for an example.

If this method is called without a label, the most recently started profile is
stopped.

SPKAC is a Certificate Signing Request mechanism originally implemented by
Netscape and was specified formally as part of HTML5's keygen element.

Note that <keygen> is deprecated since HTML 5.2 and new projects
should not use this element anymore.

The crypto module provides the Certificate class for working with SPKAC
data. The most common usage is handling output generated by the HTML5
<keygen> element. Node.js uses OpenSSL's SPKAC implementation internally.

Returns: <Buffer> When using an authenticated encryption mode (GCM, CCM
and OCB are currently supported), the cipher.getAuthTag() method returns a
Buffer containing the authentication tag that has been computed from
the given data.

The cipher.getAuthTag() method should only be called after encryption has
been completed using the cipher.final() method.

When using block encryption algorithms, the Cipher class will automatically
add padding to the input data to the appropriate block size. To disable the
default padding call cipher.setAutoPadding(false).

When autoPadding is false, the length of the entire input data must be a
multiple of the cipher's block size or cipher.final() will throw an error.
Disabling automatic padding is useful for non-standard padding, for instance
using 0x0 instead of PKCS padding.

The cipher.setAutoPadding() method must be called before
cipher.final().

Updates the cipher with data. If the inputEncoding argument is given,
the data
argument is a string using the specified encoding. If the inputEncoding
argument is not given, data must be a Buffer, TypedArray, or
DataView. If data is a Buffer, TypedArray, or DataView, then
inputEncoding is ignored.

The outputEncoding specifies the output format of the enciphered
data. If the outputEncoding
is specified, a string using the specified encoding is returned. If no
outputEncoding is provided, a Buffer is returned.

The cipher.update() method can be called multiple times with new data until
cipher.final() is called. Calling cipher.update() after
cipher.final() will result in an error being thrown.

When using an authenticated encryption mode (GCM, CCM and OCB are
currently supported), the decipher.setAuthTag() method is used to pass in the
received authentication tag. If no tag is provided, or if the cipher text
has been tampered with, decipher.final() will throw, indicating that the
cipher text should be discarded due to failed authentication. If the tag length
is invalid according to NIST SP 800-38D or does not match the value of the
authTagLength option, decipher.setAuthTag() will throw an error.

The decipher.setAuthTag() method must be called before
decipher.final() and can only be called once.

Updates the decipher with data. If the inputEncoding argument is given,
the data
argument is a string using the specified encoding. If the inputEncoding
argument is not given, data must be a Buffer. If data is a
Buffer then inputEncoding is ignored.

The outputEncoding specifies the output format of the deciphered
data. If the outputEncoding
is specified, a string using the specified encoding is returned. If no
outputEncoding is provided, a Buffer is returned.

The decipher.update() method can be called multiple times with new data until
decipher.final() is called. Calling decipher.update() after
decipher.final() will result in an error being thrown.

Computes the shared secret using otherPublicKey as the other
party's public key and returns the computed shared secret. The supplied
key is interpreted using the specified inputEncoding, and secret is
encoded using specified outputEncoding.
If the inputEncoding is not
provided, otherPublicKey is expected to be a Buffer,
TypedArray, or DataView.

If outputEncoding is given, a string is returned; otherwise, a
Buffer is returned.

Generates private and public Diffie-Hellman key values, and returns
the public key in the specified encoding. This key should be
transferred to the other party.
If encoding is provided, a string is returned; otherwise a
Buffer is returned.

Sets the Diffie-Hellman private key. If the encoding argument is provided,
privateKey is expected
to be a string. If no encoding is provided, privateKey is expected
to be a Buffer, TypedArray, or DataView.

Sets the Diffie-Hellman public key. If the encoding argument is provided,
publicKey is expected
to be a string. If no encoding is provided, publicKey is expected
to be a Buffer, TypedArray, or DataView.

Converts the EC Diffie-Hellman public key specified by key and curve to the
format specified by format. The format argument specifies point encoding
and can be 'compressed', 'uncompressed' or 'hybrid'. The supplied key is
interpreted using the specified inputEncoding, and the returned key is encoded
using the specified outputEncoding.

Use crypto.getCurves() to obtain a list of available curve names.
On recent OpenSSL releases, openssl ecparam -list_curves will also display
the name and description of each available elliptic curve.

If format is not specified the point will be returned in 'uncompressed'
format.

If the inputEncoding is not provided, key is expected to be a Buffer,
TypedArray, or DataView.

Computes the shared secret using otherPublicKey as the other
party's public key and returns the computed shared secret. The supplied
key is interpreted using the specified inputEncoding, and the returned secret
is encoded using the specified outputEncoding.
If the inputEncoding is not
provided, otherPublicKey is expected to be a Buffer, TypedArray, or
DataView.

If outputEncoding is given a string will be returned; otherwise a
Buffer is returned.

ecdh.computeSecret will throw an
ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY error when otherPublicKey
lies outside the elliptic curve. Since otherPublicKey is
usually supplied from a remote user over an insecure network,
it is recommended that developers handle this exception accordingly.

Sets the EC Diffie-Hellman private key.
If encoding is provided, privateKey is expected
to be a string; otherwise privateKey is expected to be a Buffer,
TypedArray, or DataView.

If privateKey is not valid for the curve specified when the ECDH object was
created, an error is thrown. Upon setting the private key, the associated
public point (key) is also generated and set in the ECDH object.

Sets the EC Diffie-Hellman public key.
If encoding is provided, publicKey is expected to
be a string; otherwise a Buffer, TypedArray, or DataView is expected.

Note that there is not normally a reason to call this method because ECDH
only requires a private key and the other party's public key to compute the
shared secret. Typically either ecdh.generateKeys() or
ecdh.setPrivateKey() will be called. The ecdh.setPrivateKey() method
attempts to generate the public point/key associated with the private key being
set.

Updates the hash content with the given data, the encoding of which
is given in inputEncoding.
If inputEncoding is not provided, and the data is a string, an
encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or
DataView, then inputEncoding is ignored.

Updates the Hmac content with the given data, the encoding of which
is given in inputEncoding.
If inputEncoding is not provided, and the data is a string, an
encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or
DataView, then inputEncoding is ignored.

In some cases, a Sign instance can also be created by passing in a signature
algorithm name, such as 'RSA-SHA256'. This will use the corresponding digest
algorithm. This does not work for all signature algorithms, such as
'ecdsa-with-SHA256'. Use digest names instead.

The privateKey argument can be an object or a string. If privateKey is a
string, it is treated as a raw key with no passphrase. If privateKey is an
object, it must contain one or more of the following properties:

padding: <integer> - Optional padding value for RSA, one of the following:

crypto.constants.RSA_PKCS1_PADDING (default)

crypto.constants.RSA_PKCS1_PSS_PADDING

Note that RSA_PKCS1_PSS_PADDING will use MGF1 with the same hash function
used to sign the message as specified in section 3.1 of RFC 4055.

saltLength: <integer> - salt length for when padding is
RSA_PKCS1_PSS_PADDING. The special value
crypto.constants.RSA_PSS_SALTLEN_DIGEST sets the salt length to the digest
size, crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN (default) sets it to the
maximum permissible value.

If outputEncoding is provided, a string is returned; otherwise a Buffer
is returned.

The Sign object cannot be used again after the sign.sign() method has been
called. Multiple calls to sign.sign() will result in an error being thrown.

Updates the Sign content with the given data, the encoding of which
is given in inputEncoding.
If inputEncoding is not provided, and the data is a string, an
encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or
DataView, then inputEncoding is ignored.

Updates the Verify content with the given data, the encoding of which
is given in inputEncoding.
If inputEncoding is not provided, and the data is a string, an
encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or
DataView, then inputEncoding is ignored.

Returns: <boolean> true or false depending on the validity of the
signature for the data and public key.

Verifies the provided data using the given object and signature.
The object argument can be either a string containing a PEM encoded object,
which can be an RSA public key, a DSA public key, or an X.509 certificate,
or an object with one or more of the following properties: