You may not use the identified files except in compliance with the Apache
License, Version 2.0 (the “License”).

You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and
limitations under the License.

The node-oracledb API is a generic Oracle Database access layer.
Almost all the functionality described here is common across all
current Oracle Databases. However, the documentation may describe some
database features that are only in specific Oracle Database versions or
editions, or that require additional database options or packs.

1.1 Getting Started with Node-oracledb

Download node-oracledb examples or create a script like the one
below. As well as callbacks, node-oracledb can also use
Promises and Async/Await
functions.

Locate your Oracle Database username and password, and the database
connection string. The connection string is
commonly of the format hostname/servicename, using the hostname
where the database is running and the Oracle Database service name of
the database instance.

Substitute your username, password and connection string in the code.
For downloaded examples, put these in dbconfig.js.

2. Errors

The last parameter of each method is a callback, unless
Promises are being used. The first parameter of
the callback is an Error object that contains error information if
the call fails. If the call succeeds, then the error parameter is null.

When using Promises, the catch() callback’s error object will
contain error information when the Promise chain fails.

If an invalid value is set for a property, then an error occurs. The
same is true for invalid operations on read-only or write-only
properties. If an unrecognized property name is used, it will be
ignored.
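The error-first callback convention above can be sketched as follows. This is a stand-in, not a live node-oracledb call: fakeExecute() and its NJS-005 error text are illustrative only.

```javascript
// Stand-in for a node-oracledb method, showing the error-first callback
// convention: on failure the first parameter is an Error, on success it
// is null.
function fakeExecute(sql, callback) {
  if (typeof sql !== 'string') {
    callback(new Error('NJS-005: invalid value for parameter 1'));
    return;
  }
  callback(null, { rows: [] });  // success: the error parameter is null
}

fakeExecute(42, function (err, result) {
  if (err) {
    console.log('Failed: ' + err.message);
    return;
  }
  console.log('Fetched ' + result.rows.length + ' rows');
});
```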

2.1 Error Properties

The Error object contains errorNum, message and offset properties.

2.1.1 errorNum

Number errorNum

The Oracle error number. This value is undefined for non-Oracle
errors and for messages prefixed with NJS or DPI.

2.1.2 message

String message

The text of the error message.

The error may be a standard Oracle message with a prefix like ORA or
PLS. Alternatively it may be a node-oracledb specific error prefixed with
NJS or DPI.

2.1.3 offset

Number offset

The character offset into the SQL text that resulted in the Oracle
error. The value may be 0 in non-SQL contexts. This value is
undefined for non-Oracle errors and for messages prefixed with NJS or
DPI.

3. Oracledb Class

The Oracledb object is the factory class for Pool and Connection objects.

The Oracledb object is instantiated by loading node-oracledb:

var oracledb = require("oracledb");

Internally, the add-on creates the Oracledb object as a singleton.
Reloading it in the same Node.js process creates a new pointer to the
same object.

3.1 Oracledb Constants

These constants are defined in the oracledb module. Usage is
described later in this document.

The numeric values for the constants are shown to aid debugging. They
may change in future, so use the constant names in applications.

oracledb.BLOB // (2007) Bind a BLOB to a Node.js Stream or create a temporary BLOB, or for fetchAsBuffer and fetchInfo
oracledb.BUFFER // (2005) Bind a RAW, LONG RAW or BLOB to a Node.js Buffer
oracledb.CLOB // (2006) Bind a CLOB to a Node.js Stream, create a temporary CLOB, or for fetchAsString and fetchInfo
oracledb.CURSOR // (2004) Bind a REF CURSOR to a node-oracledb ResultSet class
oracledb.DATE // (2003) Bind as JavaScript date type. Can also be used for fetchAsString and fetchInfo
oracledb.DEFAULT // (0) Used with fetchInfo to reset the fetch type to the database type
oracledb.NUMBER // (2002) Bind as JavaScript number type. Can also be used for fetchAsString and fetchInfo
oracledb.STRING // (2001) Bind as JavaScript String type. Can be used for most database types.

3.1.3 Oracle Database Type Constants

The values of these types are shown
in extended metadata for queries and REF
CURSORS. They indicate the Oracle Database type.

3.2 Oracledb Properties

The properties of the Oracledb object are used for setting up
configuration parameters for deployment.

If required, these properties can be overridden for the Pool or
Connection objects.

These properties may be read or modified. If a property is modified,
only subsequent invocations of the createPool() or getConnection()
methods will be affected. Objects that exist before a property is
modified are not altered.

Invalid values, or combinations of values, for pool configuration
properties can result in the error ORA-24413: Invalid number of
sessions specified.

Each of the configuration properties is described below.

3.2.1 oracledb.autoCommit

Boolean autoCommit

If this property is true, then the transaction in the current
connection is automatically committed at the end of statement
execution.

3.2.2 oracledb.connectionClass

String connectionClass

The user-chosen Connection class value defines a logical name for
connections.

For example, where two different kinds of users share one pool, you
might set connectionClass to ‘HRPOOL’ for connections that access a
Human Resources system, and it might be set to ‘OEPOOL’ for users of an
Order Entry system. Users will only be given sessions of the
appropriate class, allowing maximal reuse of resources in each case,
and preventing any session information leaking between the two systems.

3.2.4 oracledb.externalAuth

Boolean externalAuth

If this property is true, connections are established using external
authentication.

Note: prior to node-oracledb 0.5 this property was called
isExternalAuth.

Example

var oracledb = require('oracledb');
oracledb.externalAuth = false;

3.2.5 oracledb.fetchArraySize

Number fetchArraySize

This property sets the size of an internal buffer used for fetching
query rows from Oracle Database. Changing it may affect query
performance but does not affect how many rows are returned to the
application.

Increasing this value reduces the number of round trips to the
database but increases memory usage for each data fetch. For queries
that return a large number of rows, higher values of fetchArraySize
may give better performance. For queries that only return a few rows,
reduce the value of fetchArraySize to minimize the amount of memory
management during data fetches. JavaScript memory fragmentation may
occur in some cases; see Fetching Rows with Direct
Fetches.

For direct fetches (those using execute() option resultSet:
false), the internal buffer size will be based
on the lesser of maxRows and fetchArraySize.
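The buffer-sizing rule above can be sketched as a small helper. This function is illustrative only, not part of the node-oracledb API, and it assumes that a maxRows of 0 (no limit) leaves the buffer at fetchArraySize.

```javascript
// Illustrative sketch of the internal buffer size used for a direct
// fetch: the lesser of maxRows and fetchArraySize, with maxRows = 0
// meaning "no row limit" (assumption).
function directFetchBufferSize(maxRows, fetchArraySize) {
  if (maxRows === 0) {
    return fetchArraySize;                   // unlimited rows: use fetchArraySize
  }
  return Math.min(maxRows, fetchArraySize);  // the lesser of the two
}

console.log(directFetchBufferSize(10, 100));  // 10
console.log(directFetchBufferSize(0, 100));   // 100
```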

The property was introduced in node-oracledb version 2. It replaces
prefetchRows.

Example

var oracledb = require('oracledb');
oracledb.fetchArraySize = 100;

3.2.6 oracledb.fetchAsBuffer

Array fetchAsBuffer

An array of node-oracledb types. Currently the only valid type
is oracledb.BLOB. When a BLOB column is
queried with execute()
or queryStream(), the column data is returned as a
Buffer instead of the default representation.

By default in node-oracledb, all columns are returned as native types
or as Lob instances, in the case of CLOB and BLOB types.

Example

var oracledb = require('oracledb');
oracledb.fetchAsBuffer = [ oracledb.BLOB ];

3.2.8 oracledb.lobPrefetchSize

Number lobPrefetchSize

This attribute is temporarily disabled. Setting it has no effect.

Node-oracledb internally uses Oracle LOB Locators to manipulate long
object (LOB) data. LOB Prefetching allows LOB data to be returned
early to node-oracledb when these locators are first returned.
This is similar to the way row prefetching allows
for efficient use of resources and round-trips between node-oracledb
and the database.

Prefetching of LOBs is mostly useful for small LOBs.

The default size is 16384.

Example

var oracledb = require('oracledb');
oracledb.lobPrefetchSize = 16384;

3.2.9 oracledb.maxRows

Number maxRows

The maximum number of rows that are fetched by a query with
connection.execute() when not using a
ResultSet. Rows beyond this limit are not fetched
from the database. A value of 0 means there is no limit.

To improve database efficiency, SQL queries should use a row limiting
clause like OFFSET / FETCH or equivalent. The maxRows
property can be used to stop badly coded queries from returning
unexpectedly large numbers of rows.

When the number of query rows is relatively big, or can’t be
predicted, it is recommended to use a ResultSet or
queryStream(). This allows applications to process
rows in smaller chunks or individually, preventing the Node.js memory
limit being exceeded or query results being unexpectedly truncated by
a maxRows limit.

3.2.11 oracledb.outFormat

Number outFormat

The format of query rows fetched when using connection.execute().
This can be either of the Oracledb constants oracledb.ARRAY or
oracledb.OBJECT. The default value is oracledb.ARRAY, which is more
efficient.

If specified as oracledb.ARRAY, each row is fetched as an array of column
values.

If specified as oracledb.OBJECT, each row is fetched as a JavaScript object.
The object has a property for each column name, with the property
value set to the respective column value. The property name follows
Oracle’s standard name-casing rules. It will commonly be uppercase,
since most applications create tables using unquoted, case-insensitive
names.
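The two row formats can be illustrated side by side. The table and column names here are hypothetical, and the values are hard-coded rather than fetched, purely to show the shapes:

```javascript
// The same row from a hypothetical "SELECT empno, ename FROM emp",
// shown in both outFormat modes (values hard-coded for illustration).
const rowAsArray  = [7839, 'KING'];                  // oracledb.ARRAY (default)
const rowAsObject = { EMPNO: 7839, ENAME: 'KING' };  // oracledb.OBJECT

// Array rows are indexed by select-list position:
console.log(rowAsArray[1]);       // 'KING'
// Object rows are indexed by (typically uppercase) column name:
console.log(rowAsObject.ENAME);   // 'KING'
```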

3.2.13 oracledb.poolMax

Number poolMax

The maximum number of connections to which a connection pool can grow.

See Connections and Number of Threads for why you
should not increase this value beyond 128. Importantly, if you
increase poolMax you should also increase the number of threads
available to node-oracledb.

3.2.14 oracledb.poolMin

Number poolMin

The minimum number of connections a connection pool maintains, even
when there is no activity to the target database.

Example

var oracledb = require('oracledb');
oracledb.poolMin = 0;

3.2.15 oracledb.poolPingInterval

Number poolPingInterval

When a pool getConnection() is called and the
connection has been idle in the pool for at least poolPingInterval
seconds, an internal “ping” will be performed first to check the
aliveness of the connection. At the cost of some overhead for
infrequently accessed connection pools, connection pinging improves
the chance a pooled connection is valid when it is used, because
identified unusable connections will not be returned to the
application by getConnection().

Note when node-oracledb is built with version 12.2 of the Oracle
client library, the value of poolPingInterval is ignored. Oracle
client 12.2 has a lightweight, always-enabled connection check that
replaces explicit pinging.

With Oracle client 12.1 or earlier, unless poolPingInterval is 0,
it is possible for unusable connections to be returned by a pool
getConnection() call. Since it is also possible for network outages
to occur after getConnection() is called, applications should
implement appropriate statement execution error checking.

Example

var oracledb = require('oracledb');
oracledb.poolPingInterval = 60;

3.2.16 oracledb.poolTimeout

Number poolTimeout

The number of seconds after which idle connections (unused in the
pool) are terminated. Idle connections are terminated only when the
pool is accessed. If the poolTimeout is set to 0, then idle
connections are never terminated.

Example

var oracledb = require('oracledb');
oracledb.poolTimeout = 60;

3.2.19 oracledb.queueRequests

Boolean queueRequests

If this property is true and the number of connections “checked out”
from the pool has reached the number specified by
poolMax, then new requests for connections are
queued until in-use connections are released.

If this property is false and a request for a connection is made
from a pool where the number of “checked out” connections has reached
poolMax, then an ORA-24418 error indicating that further sessions
cannot be opened will be returned.
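The queueing behaviour can be sketched with a minimal stand-in pool. This is not the node-oracledb implementation; makePool() and its callbacks exist only to illustrate that requests beyond poolMax wait until a connection is released:

```javascript
// Minimal stand-in pool: when all connections are checked out,
// getConnection() callbacks wait in a queue until release() is called.
function makePool(poolMax) {
  let inUse = 0;
  const queue = [];

  function release() {
    if (queue.length > 0) {
      queue.shift()(null, { release });  // hand the slot to the next waiter
    } else {
      inUse--;
    }
  }

  return {
    getConnection(cb) {
      if (inUse < poolMax) {
        inUse++;
        cb(null, { release });
      } else {
        queue.push(cb);  // queued until an in-use connection is released
      }
    }
  };
}

const pool = makePool(1);
pool.getConnection(function (err, conn) {
  pool.getConnection(function () {
    console.log('second caller got a connection');
  });
  conn.release();  // the queued caller now runs
});
```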

3.3.1 oracledb.createPool()

Description

The default properties may be overridden by specifying new properties
in the poolAttrs parameter.

It is possible to add pools to the pool cache when calling createPool().
This allows pools to later be accessed by name, removing the need to
pass the pool object through code.
See Connection Pool Cache for more details.

A pool should be terminated with the pool.close()
call, but only after all connections have been released.

3.3.1.1 createPool(): Parameters and Attributes

Object poolAttrs

The poolAttrs parameter object provides connection credentials and
pool-specific configuration properties, such as the maximum or minimum
number of connections for the pool, or the statement cache size for
the connections.

The properties provided in the poolAttrs parameter override the
default pooling properties of the Oracledb object. If an attribute
is not set, or is null, the value of the related Oracledb property
will be used.

Note that the poolAttrs parameter may have configuration
properties that are not used by the createPool() method. These are
ignored.

The properties of poolAttrs are described below.

3.3.1.1.1 user

String user

The database user name. Can be a simple user name or a proxy of the
form alison[fred]. See the Client Access Through a Proxy
section in the Oracle Call Interface manual for more details about
proxy authentication.

3.3.1.1.2 password

String password

The password of the database user. A password is also necessary if a
proxy user is specified.

3.3.1.1.3 connectString, connectionString

String connectString
String connectionString

The two properties are aliases for each other. Use only one of the properties.

The Oracle database instance to connect to. The string can be an Easy
Connect string, or a Net Service Name from a tnsnames.ora file, or the
name of a local Oracle database instance. See
Connection Strings for examples.

3.3.1.1.5 stmtCacheSize

3.3.1.1.6 poolAlias

String poolAlias

The poolAlias is an optional property that is used to explicitly add pools to the
connection pool cache. If a pool alias is provided, then the new pool will be added
to the connection pool cache and the poolAlias value can then be used with methods
that utilize the connection pool cache, such as oracledb.getPool() and
oracledb.getConnection().

3.3.1.1.8 poolMax

See Connections and Number of Threads for why you
should not increase this value beyond 128. Importantly, if you
increase poolMax you should also increase the number of threads
available to node-oracledb.

3.3.1.1.10 poolPingInterval

Number poolPingInterval

When a pool getConnection() is called and the
connection has been idle in the pool for at least poolPingInterval
seconds, an internal “ping” will be performed first to check the
aliveness of the connection.

Note this attribute is ignored when node-oracledb is built with Oracle
client 12.2, since this has its own lightweight, always-enabled
connection check.

3.3.2 oracledb.getConnection()

Prototype

Description

Obtains a connection from a pool in the connection pool
cache or creates a new, standalone, non-pooled
connection.

For situations where connections are used infrequently, creating a
standalone connection may be more efficient than creating and managing
a connection pool. However, in most cases, Oracle recommends getting
connections from a connection pool.

The following table shows the various signatures that can be used when invoking
getConnection and describes how the function will behave as a result.

Signature

Description

oracledb.getConnection()

Gets a connection from the previously created default pool. Returns a promise.

oracledb.getConnection(callback)

Gets a connection from the previously created default pool. Invokes the callback.

oracledb.getConnection(poolAlias)

Gets a connection from the previously created pool with the specified poolAlias. Returns a promise.

oracledb.getConnection(poolAlias, callback)

Gets a connection from the previously created pool with the specified poolAlias. Invokes the callback.

3.3.2.1.2 Connection Attributes

Any connAttrs properties that are not used by the getConnection()
method are ignored.

The properties of the connAttrs object are described below.

3.3.2.1.2.1 user

String user

The database user name. Can be a simple user name or a proxy of the
form alison[fred]. See the Client Access Through a Proxy
section in the Oracle Call Interface manual for more details about
proxy authentication.

3.3.2.1.2.2 password

String password

The password of the database user. A password is also necessary if a
proxy user is specified.

3.3.2.1.2.3 connectString, connectionString

String connectString
String connectionString

The two properties are aliases for each other. Use only one of the properties.

The Oracle database instance to connect to. The string can be an Easy Connect string, or a
Net Service Name from a tnsnames.ora file, or the name of a local
Oracle database instance. See
Connection Strings for examples.

4.1.5 connection.stmtCacheSize

The number of statements to be cached in the
statement cache of the connection. The default value is
the stmtCacheSize property in effect in the Pool object when the
connection is created in the pool.

4.2 Connection Methods

4.2.1 connection.break()

Prototype

Callback:

break(function(Error error){});

Promise:

promise = break();

Description

This call stops the currently running operation on the connection.

If there is no operation in progress or the operation has completed by
the time the break is issued, the break() is effectively a no-op.

If the running asynchronous operation is interrupted, its callback
will return an error.

Parameters

function(Error error)

The parameters of the callback function are:

Callback function parameter

Description

Error error

If break() succeeds, error is NULL. If an error occurs, then error contains the error message.

4.2.2 connection.close()

Prototype

Callback:

close(function(Error error){});

Promise:

promise = close();

Description

Releases a connection. If the connection was obtained from the pool,
the connection is returned to the pool and is available for reuse.

Calling close() as soon as a connection is no longer required is
strongly encouraged. Releasing early can improve system efficiency.
Calling close() for pooled connections is required to prevent the
pool running out of connections.

When a connection is released, any ongoing transaction on the
connection is rolled back.

After releasing a connection to a pool, there is no
guarantee a subsequent getConnection() call gets back the same
database connection. The application must redo any ALTER SESSION
statements on the new connection object, as required.

If an error occurs on a pooled connection and that error is known to
make the connection unusable, then close() will drop that connection
from the connection pool. So a future pooled getConnection() call
that grows the pool will create a new, valid connection.

Parameters

function(Error error)

The parameters of the callback function are:

Callback function parameter

Description

Error error

If close() succeeds, error is NULL. If an error occurs, then error contains the error message.

4.2.3 connection.commit()

Prototype

Callback:

commit(function(Error error){});

Promise:

promise = commit();

Description

This call commits the current transaction in progress on the connection.

Parameters

function(Error error)

The parameters of the callback function are:

Callback function parameter

Description

Error error

If commit() succeeds, error is NULL. If an error occurs, then error contains the error message.

4.2.4 connection.createLob()

Prototype

Callback:

createLob(Number type, function(Error error, Lob lob){});

Promise:

promise = createLob(Number type);

Description

Creates a Lob as an Oracle temporary LOB. The LOB
is initially empty. Data can be streamed to the LOB, which can then
be passed into PL/SQL blocks, or inserted into the database.

When no longer required, Lobs created with createLob() should be
closed with lob.close() because Oracle Database
resources are held open if temporary LOBs are not closed.

4.2.5.1 execute(): SQL Statement

String sql

The SQL or PL/SQL statement that execute() executes. The statement
may contain bind variables.

4.2.5.2 execute(): Bind Parameters

Object bindParams

The execute() function bindParams parameter is needed if there are
bind variables in the statement, or if options
are used. It can be either an object that associates values or
JavaScript variables to the statement’s bind variables by name, or an
array of values or JavaScript variables that associate to the
statement’s bind variables by their relative positions.
See Bind Parameters for Prepared Statements for more details
on binding.

If a bind value is an object it may have the following properties:

Bind Property

Description

dir

The direction of the bind. One of the Oracledb Constants oracledb.BIND_IN, oracledb.BIND_INOUT, or oracledb.BIND_OUT. The default is oracledb.BIND_IN.

maxSize

The maximum number of bytes that an OUT or IN OUT bind variable of type oracledb.STRING or oracledb.BUFFER can use to get data. The default value is 200. The maximum limit depends on the database type, see below. When binding IN OUT, then maxSize refers to the size of the returned value: the input value can be smaller or bigger. For IN binds, maxSize is ignored.

type

The node-oracledb or JavaScript data type to be bound. One of the Oracledb Constants oracledb.BLOB, oracledb.BUFFER, oracledb.CLOB, oracledb.CURSOR, oracledb.DATE, oracledb.NUMBER, or oracledb.STRING. With IN or IN OUT binds the type can be explicitly set with type or it will default to the type of the input data value. With OUT binds, the type defaults to oracledb.STRING whenever type is not specified.

val

The input value or variable to be used for an IN or IN OUT bind variable.

The limit for maxSize when binding as oracledb.BUFFER is 2000
bytes, and as oracledb.STRING is 4000 bytes, unless you are using
Oracle Database 12c with the database initialization parameter
MAX_STRING_SIZE set to EXTENDED. In that case the limit is 32767
bytes.

When binding Oracle LOBs as oracledb.STRING or oracledb.BUFFER,
the value of maxSize can be much larger, see the limits
in LOB Bind Parameters.

When binding to get a UROWID value from the database, note that
UROWIDs can take up to 5267 bytes when fetched from the database so
maxSize should be set to at least this value.

Note oracledb.CURSOR bind variables can be used only for PL/SQL OUT binds.
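A bindParams object combining these properties might look as follows. The SQL statement, table, and bind names are hypothetical, and the constants object is a stand-in with illustrative numeric values so the sketch runs without the driver; real code should use the constants from require('oracledb'):

```javascript
// Stand-in for the oracledb module's constants (illustrative values only).
const oracledb = { STRING: 2001, NUMBER: 2002, BIND_IN: 3001, BIND_OUT: 3003 };

// Named binds for a hypothetical statement:
//   INSERT INTO countries VALUES (:country_id, :country_name)
//   RETURNING ROWID INTO :rid
const bindParams = {
  country_id:   90,        // IN bind; dir and type are inferred from the value
  country_name: 'Tonga',   // IN bind
  rid: { dir: oracledb.BIND_OUT, type: oracledb.STRING, maxSize: 18 }
};

console.log(bindParams.rid.maxSize);  // 18
```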

4.2.5.3 execute(): Options

Object options

This is an optional parameter to execute() that may be used to
control statement execution.

If there are no bind variables in the SQL statement, then an empty
bindParams, for example {}, must be specified before options,
otherwise you will get an error like ORA-01036: Illegal variable
name/number or NJS-012: encountered invalid bind data type in parameter.

The following properties can be set or overridden for the execution of
a statement.

The fetchInfo property type can be set to oracledb.STRING for number and
date columns in a query to indicate they should be returned as Strings
instead of their native format. CLOB column data can also be returned
as Strings instead of Lob instances.

When fetchInfo is set to oracledb.BUFFER for a BLOB column, each BLOB item
will be returned as a Buffer instead of a Lob instance.

Using oracledb.DEFAULT overrides any global mapping given
by fetchAsString
or fetchAsBuffer. The column data is
returned in native format.

Each column is specified by name, using Oracle’s standard naming
convention.

The maximum length of a string created by type mapping non-CLOB
columns is 200 bytes. If a database column that is already being
fetched as type oracledb.STRING is specified in fetchInfo, then the actual
database metadata will be used to determine the maximum length.

Strings and Buffers created for LOB columns will generally be limited
by Node.js and V8 memory restrictions.

Columns fetched from REF CURSORS are not mapped by fetchInfo
settings in the execute() call. Use the
global fetchAsString
or fetchAsBuffer settings instead.

To get query metadata without fetching rows, use a
ResultSet. Access
resultset.metaData and then close the ResultSet. Do
not call getRow() or getRows(). Preferably use a query clause
such as WHERE 1 = 0 so the database does minimal work.

4.2.5.4.2 outBinds

Array/object outBinds

This contains the output values of OUT and IN OUT binds.
If bindParams is passed as an array, then
outBinds is returned as an array. If bindParams is passed as an
object, then outBinds is returned as an object.

4.2.5.4.3 resultSet

When using this option, resultSet.close() must be called
when the ResultSet is no longer needed. This is true whether or not
rows have been fetched from the ResultSet.

4.2.5.4.4 rows

Array rows

For SELECT statements using direct fetches, rows
contains an array of fetched rows. It will be NULL if there is an
error or the SQL statement was not a SELECT statement. By default,
the rows are in an array of column value arrays, but this can be
changed to arrays of objects by setting
outFormat to oracledb.OBJECT. If a single row
is fetched, then rows is an array that contains one single row.

The number of rows returned is limited by
oracledb.maxRows or the
maxRows option in an execute() call. If
maxRows is 0, then the number of rows is limited by Node.js memory
constraints.

4.2.5.4.5 rowsAffected

Number rowsAffected

For DML statements (including SELECT FOR UPDATE) this contains
the number of rows affected, for example the number of rows
inserted. For non-DML statements such as queries, or if no rows are
affected, then rowsAffected will appear as undefined.

5.1.3 lob.pieceSize

For efficiency, it is recommended that pieceSize be a multiple of
chunkSize.

The property should not be reset in the middle of streaming since data
will be lost when internal buffers are resized.

The maximum value for pieceSize is limited to the value of UINT_MAX.

5.1.4 lob.type

readonly Number type

This read-only attribute shows the type of Lob being used. It will
have the value of one of the constants
oracledb.BLOB or
oracledb.CLOB. The value is derived from the
bind type when using LOB bind variables, or from the column type when
a LOB is returned by a query.

5.2 Lob Methods

5.2.1 lob.close()

Prototype

Callback:

close(function(Error error){});

Promise:

promise = close();

Description

Explicitly closes a Lob.

Lobs created with createLob() should be
explicitly closed with lob.close() when no longer
needed. This frees resources in node-oracledb and in Oracle Database.

Persistent or temporary Lobs returned from the database may also be
closed with lob.close() as long as streaming is not currently
happening. Note these Lobs are automatically closed when streamed to
completion or used as the source for an IN OUT bind. If you try to
close a Lob being used for streaming you will get the error NJS-023:
concurrent operations on a Lob are not allowed.

The lob.close() method emits the Node.js Stream ‘close’ event
unless the Lob has already been explicitly or automatically closed.

The connection must be open when calling lob.close() on a temporary
LOB, such as those created by createLob().

6.1.6 pool.poolMin

6.1.7 pool.poolPingInterval

readonly Number poolPingInterval

The maximum number of seconds that a connection can remain idle in a
connection pool (not “checked out” to the application by
getConnection()) before node-oracledb pings the database prior to
returning that connection to the application.

6.2 Pool Methods

6.2.1 pool.close()

Parameters

If close() succeeds, error is NULL. If an error occurs, then error contains the error message.

6.2.2 pool.getConnection()

Prototype

Callback:

getConnection(function(Error error, Connection conn){});

Promise:

promise = getConnection();

Description

This method obtains a connection from the connection pool.

If a previously opened connection is available in the pool, that
connection is returned. If all connections in the pool are in use, a
new connection is created and returned to the caller, as long as the
number of connections does not exceed the specified maximum for the
pool. If the pool is at its maximum limit, the getConnection() call
results in an error, such as ORA-24418: Cannot open further sessions.

6.2.3 pool.terminate()

7. ResultSet Class

ResultSets allow query results to be fetched from the database one row
at a time, or in groups of rows. They can also be converted to Readable
Streams. ResultSets enable applications to process very large data
sets.

ResultSets should also be used where the number of query rows cannot
be predicted and may be larger than Node.js can handle in a single
array.

A ResultSet object is obtained by setting resultSet: true in the
options parameter of the Connection execute() method
when executing a query. A ResultSet is also returned to
node-oracledb when binding as type oracledb.CURSOR to a
PL/SQL REF CURSOR bind parameter.

7.2.4 resultset.toQueryStream()

Prototype

Return Value

Description

This synchronous method converts a ResultSet into a stream.

It can be used to make ResultSets from top-level queries or from REF
CURSOR bind variables streamable. To make top-level queries
streamable, the alternative connection.queryStream()
method may be easier to use.

8. Connection Handling

var oracledb = require('oracledb');

oracledb.getConnection(
  {
    user          : "hr",
    password      : "welcome",
    connectString : "localhost/XE"
  },
  function (err, connection) {
    if (err) {
      console.error(err.message);
      return;
    }
    ...  // use connection
    connection.close(function (err) {
      if (err) {
        console.error(err.message);
      }
    });
  });

Applications which are heavy users of connections should create and
use a Connection Pool.

8.2 Connections and Number of Threads

If you open more than four connections, such as via
increasing poolMax, you should increase the
number of worker threads available to node-oracledb. The thread pool
size should be at least equal to the maximum number of connections.
If the application does database and non-database work concurrently,
extra threads could also be required for optimal throughput.

Increase the thread pool size by setting the environment variable
UV_THREADPOOL_SIZE before starting Node. For example, in a
Linux terminal, the number of Node.js worker threads can be increased
to 10 by using the following command:

$ UV_THREADPOOL_SIZE=10 node myapp.js

If the value is set inside the application with
process.env.UV_THREADPOOL_SIZE ensure it is set prior to any
asynchronous call that uses the thread pool otherwise the default size
of 4 will still be used.
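Setting the value inside the application can be sketched as below; per the note above, the assignment must run before the first asynchronous call that uses the thread pool, or the default of 4 threads remains in effect:

```javascript
// Set the libuv thread pool size before any thread-pool work starts.
// Environment variable values are stored as strings.
process.env.UV_THREADPOOL_SIZE = 10;

console.log(process.env.UV_THREADPOOL_SIZE);  // '10'
```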

Note the ‘libuv’ library used by Node.js limits the number of
threads to 128. This implies the maximum number of connections
opened, i.e. poolMax, should be less than 128.

Connections can handle one database operation at a time. Node.js
worker threads executing database statements on a connection will wait
until round-trips between node-oracledb and the database are complete.
When an application handles a sustained number of user requests, and
database operations take some time to execute or the network is slow,
then all available threads may be held in use. This prevents other
connections from beginning work and stops Node.js from handling more
user load. Increasing the number of worker threads may improve
throughput and prevent deadlocks.

As well as correctly setting the thread pool size, structure your code
to avoid starting parallel operations on a connection. For example,
instead of using async.parallel or async.each() which call each
of their items in parallel, use async.series or async.eachSeries().
When you use parallel calls on a connection, the queuing ends up being
done in the C layer via a mutex. However, libuv is not aware that a
connection can only do one thing at a time: it only knows when it has
background threads available, and so it sends off the work to be done.
If your application runs operations in parallel on a connection, it
could use more than one background thread (perhaps all of them), and
each could be waiting on the one before it to finish its “execute”. Of
course, other users or transactions cannot use the threads at
that time either. When you use methods like async.series or
async.eachSeries(), the queuing is instead done in the main
JavaScript thread.
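The serial pattern can be sketched without the async module using async/await. Here fakeQuery() is a stand-in for connection.execute() so the example runs without a database; the key point is that each statement starts only after the previous one finishes:

```javascript
// Stand-in for connection.execute(): resolves asynchronously with a
// dummy result (the statement's length).
function fakeQuery(sql) {
  return new Promise(function (resolve) {
    setImmediate(function () { resolve(sql.length); });
  });
}

// Run statements one at a time on a single "connection".
// Never use Promise.all() here: that would issue them in parallel.
async function runSeries(statements) {
  const results = [];
  for (const sql of statements) {
    results.push(await fakeQuery(sql));  // waits before starting the next
  }
  return results;
}

runSeries(['SELECT 1', 'SELECT 22']).then(function (r) {
  console.log(r);  // [8, 9]
});
```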

8.3 Connection Pooling

When applications use a lot of connections for short periods, Oracle
recommends using a connection pool for efficiency. Each pool can
contain one or more connections. A pool can grow or shrink, as
needed. Each node-oracledb process can use one or more local pools of
connections.

Pool expansion happens when the following are all true:
(i) getConnection() is called and (ii) all the
currently established connections in the pool are “checked out” by
previous getConnection() calls and are in-use by the application,
and (iii) the number of those connections is less than the pool’s
poolMax setting.

The Oracle Real-World Performance Group’s general recommendation for
client connection pools is for the minimum and maximum number of
connections to be the same. This avoids connection storms which can
decrease throughput. They also recommend sizing connection pools so
that the sum of all connections from all applications accessing a
database gives 1-10 connections per database server CPU core. See
About Optimizing Real-World Performance with Static Connection
Pools.

8.3.1 Connection Pool Cache

Node-oracledb has an internal connection pool cache which can be used
to facilitate sharing pools across modules and simplify getting
connections. At creation time, a pool can be given a named alias.
The alias can later be used to retrieve the related pool object for
use.

Note that pool.close() automatically removes the pool from the cache if needed.

Pools are added to the cache if
a poolAlias property is provided in
the poolAttrs object when invoking
oracledb.createPool(). There can be multiple pools in the cache if
each pool is created with a unique alias.

If a pool is created without providing a pool alias, and a pool with
an alias of ‘default’ is not in the cache already, this pool will be
cached using the alias ‘default’. This pool is used by default in
methods that utilize the connection pool cache. If subsequent pools
are created without explicit aliases, they will not be stored in the
pool cache.

Example using a pool alias

Assuming a pool has been created with the alias 'hrpool', a connection
can be retrieved from it by passing the alias to oracledb.getConnection():

var oracledb = require('oracledb');

oracledb.getConnection('hrpool', function (err, conn) {   // or 'shpool'
  ...
  // Use connection from the pool and then release it
});

8.3.2 Connection Pool Queue

If the application has called getConnection() so that all
connections in the pool are in use, and
further pool.getConnection() requests
(or oracledb.getConnection() calls that use a
pool) are made, then each new request will be queued until an in-use
connection is released back to the pool
with connection.close(). If poolMax has not
been reached, then connection requests can be immediately satisfied and
are not queued.

The pool queue can be disabled by setting the pool property
queueRequests to false. When the queue is
disabled, getConnection() requests to a pool that cannot immediately be
satisfied will return an error.

The amount of time that a queued request will wait for a free
connection can be configured with queueTimeout.
When a connection request times out of the queue, the error
NJS-040: connection request timeout is returned to the application.

Internally the queue is implemented in node-oracledb’s JavaScript top
level. A queued connection request is dequeued and passed down to
node-oracledb’s underlying C++ connection pool when an active
connection is released, and the number of
connections in use drops below the value of
poolMax.

8.3.3 Connection Pool Monitoring and Throughput

Connection pool usage should be monitored to choose the appropriate
connection pool settings for your workload.

When using a pool queue, further statistics
can be enabled by setting the createPool() poolAttrs parameter _enableStats to true. Statistics
can be output to the console by calling the pool._logStats()
method. The underscore prefixes indicate that these are private
attributes and methods. This interface may be altered or
enhanced in the future.

Statistics

total connection requests

Number of getConnection() requests made by the application to this pool.

total requests enqueued

Number of getConnection() requests that could not be immediately satisfied because every connection in this pool was already being used, and so they had to be queued waiting for the application to return an in-use connection to the pool.

total requests dequeued

Number of getConnection() requests that were dequeued when a connection in this pool became available for use.

total requests failed

Number of getConnection() requests that invoked the underlying C++ getConnection() callback with an error state. Does not include queue request timeout errors.

total request timeouts

Number of queued getConnection() requests that were timed out after they had spent queueTimeout or longer in this pool’s queue.

max queue length

Maximum number of getConnection() requests that were ever waiting at one time.

sum of time in queue

The sum of the time (milliseconds) that dequeued requests spent in the queue.

min time in queue

The minimum time (milliseconds) that any dequeued request spent in the queue.

max time in queue

The maximum time (milliseconds) that any dequeued request spent in the queue.

avg time in queue

The average time (milliseconds) that dequeued requests spent in the queue.

pool connections in use

The number of connections from this pool that getConnection() returned successfully to the application and have not yet been released back to the pool.

pool connections open

The number of connections in this pool that have been established to the database.

Note that for efficiency, the minimum, maximum, average, and sum of
times in the queue are calculated when requests are removed from the
queue. They do not take into account times for connection requests
still waiting in the queue.

Attribute Values

The _logStats() method also shows attribute values in effect for the pool:

Related Environment Variables

UV_THREADPOOL_SIZE

The number of worker threads for this process. Note this shows the value of the variable; however, if the variable was set after the thread pool started, the thread pool will actually be the default size of 4.

8.3.4 Connection Pool Pinging

Node-oracledb can ‘ping’ connections returned from pooled
getConnection() calls to check for their aliveness. The frequency
of pinging can be controlled with
the oracledb.poolPingInterval property or
during pool creation. The default ping interval is
60 seconds.

Without pinging, when connections are idle in a connection pool, there
is the possibility that a network or database instance failure makes
those connections unusable. A getConnection() call will happily
return a connection from the pool but an error will occur when the
application later uses the connection.

Note that explicit pinging is unnecessary and is not performed when
node-oracledb is built with version 12.2 of the underlying Oracle
client library. This has its own lightweight, always-enabled
connection check. It will return a valid connection to the
node-oracledb driver, which in turn returns it via getConnection().
The value of poolPingInterval is ignored.

With Oracle client 12.1 and earlier, when a
pool getConnection() is called and the
connection has been idle in the pool (not “checked out” to the
application by getConnection()) for the specified poolPingInterval
then an internal “ping” will be performed first. At the cost of some
overhead for infrequently accessed connection pools, connection
pinging improves the chance a pooled connection is valid when it is
first used because identified unusable connections will not be
returned to the application by getConnection(). For active
applications that are getting and releasing connections rapidly, the
connections will generally not have been idle longer than
poolPingInterval so no pings will be performed and there will be no
overhead.

If a ping detects the connection is invalid, for example if the
network had disconnected, then node-oracledb internally drops the
unusable connection and obtains another from the pool. This second
connection may also need a ping. This ping-and-release process may be
repeated until:

an existing connection that doesn’t qualify for pinging is obtained. The getConnection() call returns this to the application. Note it is not guaranteed to be usable

a new, usable connection is opened. This is returned to the application

a number of unsuccessful attempts to find a valid connection have been made, after which an error is returned to the application

Applications should continue to do appropriate error checking when
using connections in case they have become invalid in the time since
getConnection() was called. This error checking will also protect
against cases where there was a network outage but a connection was
idle in the pool for less than poolPingInterval seconds and so
getConnection() did not ping.

In all cases, when a bad connection is released
back to the pool, the connection is automatically destroyed. This
allows a valid connection to be opened by some subsequent
getConnection() call.

You can tune poolPingInterval to meet your quality of service
requirements.

8.4 Database Resident Connection Pooling (DRCP)

Database Resident Connection Pooling (DRCP) enables database
resource sharing for applications that run in multiple client
processes or run on multiple middle-tier application servers. DRCP
reduces the overall number of connections that a database must handle.

DRCP is useful for applications which share the same database credentials, have
similar session settings (for example date format settings and PL/SQL
package state), and where the application gets a database connection,
works on it for a relatively short duration, and then releases it.

To use DRCP in node-oracledb:

The DRCP pool must be started in the database: SQL> execute dbms_connection_pool.start_pool();

The connectionClass should be set by the node-oracledb application. If it is not set, the pooled server session memory will not be reused optimally, and the statistic views will record large values for NUM_MISSES.

The oracledb.createPool() or oracledb.getConnection() property connectString (or its alias connectionString) must specify to use a pooled server, either by the Easy Connect syntax like myhost/sales:POOLED, or by using a tnsnames.ora alias for a connection that contains (SERVER=POOLED).

For efficiency, it is recommended that DRCP connections should be used
with node-oracledb’s local connection pool.

The DRCP ‘Purity’ is SELF for DRCP connections. This allows reuse of
both the pooled server process and session memory, giving maximum
benefit from DRCP. See the Oracle documentation on benefiting from
scalability.

There are a number of Oracle Database V$ views that can be used to
monitor DRCP. These are discussed in the Oracle documentation and in
the Oracle white paper PHP Scalability and High Availability.
This paper also gives more detail on configuring DRCP.

8.5 External Authentication

External Authentication allows applications to use an external
password store (such as Oracle Wallet), the Secure Socket
Layer (SSL), or the operating system to validate user
access. One of the benefits is that database credentials do not need
to be hard coded in the application.

When externalAuth is set, any subsequent connections obtained using
the oracledb.getConnection() or
pool.getConnection() calls will use external
authentication. Setting this property does not affect the operation
of existing connections or pools.

Using externalAuth in the connAttrs parameter of a
pool.getConnection() call is not possible. The connections from a Pool
object are always obtained in the manner in which the pool was
initially created.

For pools created with external authentication, the number of
connections initially created is zero even if a larger value is
specified for poolMin. The pool increment is
always 1, regardless of the value of
poolIncrement. Once the number
of open connections exceeds poolMin and connections are idle for
more than the poolTimeout seconds, then the
number of open connections does not fall below poolMin.

8.6 Privileged Connections

Administrative privileges can allow access to a database instance even
when the database is not open. Control of these privileges is totally
outside of the database itself. Care must be taken with
authentication to ensure security. See the Database Administrators
Guide for information.

8.7 Securely Encrypting Network Traffic to Oracle Database

Data transferred between Oracle Database and the Oracle client
libraries used by node-oracledb can be encrypted so that
unauthorized parties are not able to view plain text data as it passes
over the network. The easiest configuration is Oracle’s native
network encryption. The standard SSL protocol can also be used if you
have a PKI, but setup is necessarily more involved.

With native network encryption, the client and database server
negotiate a key using Diffie-Hellman key exchange. There is
protection against man-in-the-middle attacks.

Native network encryption can be configured by editing Oracle Net’s
optional sqlnet.ora configuration files, on either the database server
and/or on each node-oracledb ‘client’. Parameters control whether
data integrity checking and encryption is required or just allowed,
and which algorithms the client and server should consider for use.

As an example, to ensure all connections to the database are checked
for integrity and are also encrypted, create or edit the Oracle
Database $ORACLE_HOME/network/admin/sqlnet.ora file. Set the
checksum negotiation to always validate a checksum and set the
checksum type to your desired value. The network encryption settings
can similarly be set. For example, to use the SHA512 checksum and
AES256 encryption use:
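A sketch of server-side sqlnet.ora settings matching the SHA512 and
AES256 example above; check the parameter names and the algorithms
available in your Oracle version before using them:

```
SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA512)
SQLNET.ENCRYPTION_SERVER = required
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
```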

If you definitely know that the database server enforces integrity and
encryption, then you do not need to configure Node.js separately.
However you can also, or alternatively, do so depending on your
business needs. Create a sqlnet.ora and locate it with other
Optional Client Configuration Files:
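A sketch of the equivalent client-side sqlnet.ora entries; these are
example values only and should be reviewed against your security
requirements:

```
SQLNET.CRYPTO_CHECKSUM_CLIENT = required
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA512)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
```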

The client and server sides can negotiate the protocols used if the
settings indicate more than one value is accepted.

Note these are example settings only. You must review your security
requirements and read the documentation for your Oracle version. In
particular review the available algorithms for security and
performance.

The NETWORK_SERVICE_BANNER column of the database view
V$SESSION_CONNECT_INFO can be used to verify the encryption
status of a connection.

For more information about Oracle Data Network Encryption and
Integrity, and for information about configuring SSL network
encryption, refer to the Oracle Database Security Guide. This
manual also contains information about other important security
features that Oracle Database provides, such as Transparent Data
Encryption of data-at-rest in the database.

8.8 Connections and High Availability

For applications that need to be highly available, you may want to
configure your OS network settings and Oracle Net (which handles
communication between node-oracledb and the database).

8.9 Optional Client Configuration Files

Optional Oracle Client configuration files are read when node-oracledb
is loaded. These files affect connections and applications. Common
files include tnsnames.ora, sqlnet.ora, ldap.ora, and
oraaccess.xml.

Default locations for these files include:

/opt/oracle/instantclient_12_2/network/admin if Instant Client is in /opt/oracle/instantclient_12_2.

/usr/lib/oracle/12.2/client64/lib/network/admin if Oracle 12.2 Instant Client RPMs are used on Linux.

$ORACLE_HOME/network/admin if node-oracledb is using libraries from the database installation.

Alternatively, Oracle Client configuration files can be put in
another, accessible directory. Then set the environment variable
TNS_ADMIN to that directory name. For example, if the file
/etc/my-oracle-config/tnsnames.ora is being used, set TNS_ADMIN to
/etc/my-oracle-config.
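The example above can be sketched as:

```shell
# Point Oracle Client at a custom configuration directory.
# /etc/my-oracle-config is the directory from the example above.
export TNS_ADMIN=/etc/my-oracle-config
```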

9. SQL Execution

A single SQL or PL/SQL statement may be executed using the Connection
execute() method. The callback style shown below, or promises, or
Async/Await may be used.

For queries that return a large number of rows, the network traffic
for fetching data from Oracle Database can be optimized by increasing
oracledb.fetchArraySize. For queries that
are known to return a small set of rows, reduce
fetchArraySize to avoid unnecessary memory
allocation. The execute() option
fetchArraySize can be used to override
the global property for individual queries.

Connections can handle one database operation at a time. Other
database operations will block. Structure your code to avoid starting
parallel operations on a connection. For example, instead of using
async.parallel or async.each() which calls each of its items in
parallel, use async.series or async.eachSeries(). Also
see Connections and Number of Threads.

After all database calls on the connection complete, the application
should use the connection.close() call to
release the connection.

9.1 SELECT Statements

9.1.1 Fetching Rows with Direct Fetches

By default, queries are handled as ‘direct fetches’, meaning all
results are returned in the callback result.rows
property:

connection.execute(
  `SELECT department_id, department_name
   FROM departments
   WHERE department_id = :did`,
  [180],
  { maxRows: 10 },  // a maximum of 10 rows will be returned
  function (err, result) {
    if (err) {
      console.error(err.message);
      return;
    }
    console.log(result.rows);  // print all returned rows
  });

Any rows beyond the maxRows limit are not returned. If maxRows is
0, then the number of rows is only limited by Node.js memory.

To improve database efficiency, SQL queries should use a row limiting
clause like OFFSET / FETCH or equivalent. The maxRows
property can be used to stop badly coded queries from returning
unexpectedly large numbers of rows.

Internally, rows are fetched from Oracle Database in batches. The
internal batch size is based on the lesser of fetchArraySize and
maxRows. Each batch is concatenated into the array returned to the
application.

For queries expected to return a small number of rows, reduce
maxRows or fetchArraySize to reduce
internal memory overhead by node-oracledb.

For direct fetches, JavaScript memory can become a limitation in two
cases:

the absolute amount of data returned is simply too large for
JavaScript to hold in a single array.

the JavaScript heap can be exceeded, or become fragmented, due to
concatenation of the buffers of records fetched from the database.
To minimize this, use a fetchArraySize value determined by tuning.

9.1.2 Working with Result Sets

When the number of query rows is relatively big, or can’t be
predicted, it is recommended to use a ResultSet
with callbacks, as described in this section, or via query streaming,
as described later. This prevents query results
being unexpectedly truncated by the maxRows limit,
or exceeding Node.js memory constraints. Otherwise, for queries that
return a known small number of rows, non-ResultSet queries may have
less overhead.

A ResultSet is created when the execute() option property
resultSet is true. ResultSet rows can be
fetched using getRow() or getRows() on the
execute() callback function’s result.resultSet property.

For ResultSets, the maxRows limit is ignored. All
rows can be fetched.

When all rows have been fetched, or the application does not want to
continue getting more rows, then the ResultSet should be freed using
close(). The ResultSet should also be explicitly closed
in the cases where no rows will be fetched from it.

REF CURSORS returned from PL/SQL blocks via oracledb.CURSOR OUT
binds are also available as ResultSets. See
REF CURSOR Bind Parameters.

The format of each row will be an array or object, depending on the
value of outFormat.

connection.execute(
  "SELECT employee_id, last_name FROM employees ORDER BY employee_id",
  [],                  // no bind variables
  { resultSet: true }, // return a ResultSet.  Default is false
  function (err, result) {
    if (err) { ... }
    fetchOneRowFromRS(connection, result.resultSet);
  });

function fetchOneRowFromRS(connection, resultSet) {
  resultSet.getRow(  // get one row
    function (err, row) {
      if (err) {
        ...          // close the ResultSet and release the connection
      } else if (!row) {  // no rows, or no more rows
        ...          // close the ResultSet and release the connection
      } else {
        console.log(row);
        fetchOneRowFromRS(connection, resultSet);  // get next row
      }
    });
}

To fetch multiple rows at a time, use getRows():

var numRows = 10;  // number of rows to return from each call to getRows()

connection.execute(
  "SELECT employee_id, last_name FROM employees ORDER BY employee_id",
  [],                  // no bind variables
  { resultSet: true }, // return a ResultSet.  Default is false
  function (err, result) {
    if (err) { ... }
    fetchRowsFromRS(connection, result.resultSet, numRows);
  });

function fetchRowsFromRS(connection, resultSet, numRows) {
  resultSet.getRows(  // get numRows rows
    numRows,
    function (err, rows) {
      if (err) {
        ...           // close the ResultSet and release the connection
      } else if (rows.length > 0) {    // got some rows
        console.log(rows);             // process rows
        if (rows.length === numRows)   // might be more rows
          fetchRowsFromRS(connection, resultSet, numRows);
        else                           // got fewer rows than requested so must be at end
          ...                          // close the ResultSet and release the connection
      } else {                         // else no rows
        ...                            // close the ResultSet and release the connection
      }
    });
}

9.1.3 Query Streaming

Streaming of query results allows data to be piped to other streams, for
example when dealing with HTTP responses.

With streaming, each row is returned as a data event. Query
metadata is available via a metadata event. The end event
indicates the end of the query results.

Query results should be fetched to completion to avoid resource leaks,
or (from Node.js 8 onwards) the Stream destroy() method can be
used to terminate a stream early. For older Node.js versions use a
ResultSet with callbacks if you need to stop a
query before retrieving all data. Note the previous, experimental
_close() method no longer emits a ‘close’ event.

When rows are fetched as objects, each row is a JavaScript object that
specifies column names and their respective values. Note the property
names follow Oracle’s standard name-casing rules. They will commonly
be uppercase, since most applications create tables using unquoted,
case-insensitive names.

9.1.5 Query Column Metadata

The column names of a query are returned in the execute() callback’s
result.metaData attribute:

connection.execute(
  `SELECT department_id, department_name
   FROM departments
   WHERE manager_id < :id`,
  [110],  // bind value for :id
  function (err, result) {
    if (err) {
      console.error(err.message);
      return;
    }
    console.log(result.metaData);  // show the metadata
  });

9.1.6 Query Result Type Mapping

Supported Oracle number, date, character, ROWID, UROWID, LONG and LONG
RAW column types are selected as Numbers, Dates, Strings, or Buffers.
BLOBs and CLOBs are selected into Lobs.

The default mapping for some types can be changed
using fetchAsBuffer,
or fetchAsString.
The fetchInfo property can also be used to
change the default mapping, or override a global mapping, for
individual columns.

Data types in SELECT statements that are unsupported give an error
NJS-010: unsupported data type in select list. These include
INTERVAL, BFILE and XMLType types.

Details are in the following sections.

9.1.6.1 Fetching CHAR, VARCHAR2, NCHAR and NVARCHAR

Columns of database type CHAR, VARCHAR2, NCHAR and NVARCHAR are
returned from queries as JavaScript strings.

Note that binding NCHAR and NVARCHAR for DML is not supported
and may cause unexpected character set translation, see Bind Data
Type Notes.

9.1.6.2 Fetching Numbers

By default all numeric columns are mapped to JavaScript numbers.
Node.js uses double floating point numbers as its native number type.

When numbers are fetched from the database, conversion to JavaScript’s
less precise binary number format can result in “unexpected”
representations. For example:

Similar issues can occur with binary floating-point arithmetic
purely in Node.js, for example:

console.log(0.2 + 0.7);  // gives 0.8999999999999999

Node.js can also only safely represent integers up to 2^53,
which is 9007199254740992. Larger numbers will lose precision.
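Both effects can be seen in plain Node.js:

```javascript
// Double-precision artifacts that motivate fetching numbers as strings.
var sum = 0.2 + 0.7;             // binary floating point cannot store 0.9 exactly
console.log(sum);                // 0.8999999999999999

var limit = 9007199254740992;    // 2^53
console.log(limit + 1 === limit);  // true: the increment is lost
```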

The primary recommendation for number handling is to use Oracle SQL or
PL/SQL for mathematical operations, particularly for currency
calculations.

To reliably work with numbers in Node.js, use fetchAsString or
fetchInfo (see below) to fetch numbers in
string format, and then use one of the available third-party
JavaScript number libraries that handles large values and more
precision.

9.1.6.3 Fetching Dates and Timestamps

By default, date and timestamp columns are mapped to JavaScript Date
objects. Internally, DATE, TIMESTAMP, TIMESTAMP WITH LOCAL TIME
ZONE, and TIMESTAMP WITH TIME ZONE columns are fetched as
TIMESTAMP WITH LOCAL TIME ZONE using the session time zone.
Oracle INTERVAL types are not supported.

Note that JavaScript Date has millisecond precision therefore
timestamps will lose any sub-millisecond fractional part when fetched.

To make applications more portable, it is recommended to always set
the session time zone to a pre-determined value, such as UTC. This
can be done by setting the environment variable ORA_SDTZ
before starting Node.js, for example:

$ export ORA_SDTZ='UTC'
$ node myapp.js

The session time zone can also be changed at runtime for each connection by
executing:
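For example, with a statement like the following, where the time zone
value shown is illustrative:

```
ALTER SESSION SET TIME_ZONE = 'UTC'
```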

9.1.6.4 Fetching Numbers and Dates as String

The global fetchAsString property can be
used to force all number or date columns
(and CLOB columns) queried by an application to be
fetched as strings instead of in native format. Allowing data to be
fetched as strings helps avoid situations where using JavaScript types
can lead to numeric precision loss, or where date conversion is
unwanted.

For example, to force all dates and numbers used by queries in an
application to be fetched as strings:

For dates and numbers, the maximum length of a string created can be
200 bytes.

Individual queries can use the execute() option
fetchInfo to map individual number or date columns
to strings without affecting other columns or other queries. Any
global fetchAsString setting can be overridden to allow specific
columns to have data returned in native format:

var oracledb = require('oracledb');

oracledb.fetchAsString = [ oracledb.NUMBER ];  // any number queried will be returned as a string

oracledb.getConnection(
  {
    user: "hr",
    password: "welcome",
    connectString: "localhost/XE"
  },
  function (err, connection) {
    if (err) {
      console.error(err.message);
      return;
    }
    connection.execute(
      "SELECT last_name, hire_date, salary, commission_pct FROM employees WHERE employee_id = :id",
      [178],
      {
        fetchInfo:
        {
          "HIRE_DATE":      { type: oracledb.STRING },  // return the date as a string
          "COMMISSION_PCT": { type: oracledb.DEFAULT }  // override oracledb.fetchAsString and fetch as native type
        }
      },
      function (err, result) {
        if (err) {
          console.error(err.message);
          return;
        }
        console.log(result.rows);
      });
  });

The output is:

[ [ 'Grant', '24-MAY-07', '7000', 0.15 ] ]

The date and salary columns are returned as strings, but the
commission is a number. The date is mapped using the current session
date format, which was DD-MON-YY in this example. The default date
format can be set, for example, with the environment variable
NLS_DATE_FORMAT. Note this variable will only be read if NLS_LANG
is also set.

Without the mapping capabilities provided by fetchAsString and
fetchInfo the hire date would have been a JavaScript date in the
local time zone, and both numeric columns would have been
represented as numbers:

[ [ 'Grant', Thu May 24 2007 00:00:00 GMT+1000 (AEST), 7000, 0.15 ] ]

To map columns returned from REF CURSORS, use fetchAsString. The
fetchInfo settings do not apply.

When using fetchAsString or fetchInfo for numbers, you may need to
explicitly use NLS_NUMERIC_CHARACTERS to override your NLS settings
and force the decimal separator to be a period. This can be done for
each connection by executing the statement:
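A sketch of such a statement, setting the decimal separator to a period
and the group separator to a comma:

```
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,'
```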

Note that binding NCLOB for DML is not supported and may cause
unexpected character set translation, see Bind Data Type
Notes.

9.1.6.6 Fetching LONG and LONG RAW

LONG columns in queries will be fetched as Strings. LONG RAW columns
will be fetched as Buffers.

Unlike for LOBs, there is no support for streaming LONG types. Oracle
Database allows values 2 GB in length, but Node.js and V8 memory
limitations typically only allow memory chunks in the order of tens of
megabytes. This means the complete data may not be able to be fetched from
the database. The SQL function TO_LOB can be used to migrate
data to LOB columns which can be streamed to node-oracledb, however
TO_LOB cannot be used directly in a SELECT.

9.1.6.7 Fetching ROWID and UROWID

Queries will return ROWID and UROWID columns as Strings.

9.1.6.8 Fetching XMLType

9.1.6.9 Fetching RAW

Queries will return RAW columns as Node.js Buffers.

9.1.6.10 Mapping Custom Types

Data types such as an Oracle Locator SDO_GEOMETRY, or your own custom
types, cannot be fetched directly in node-oracledb. Instead, utilize
techniques such as using an intermediary PL/SQL procedure to map the
type components to scalar values, or use a pipelined table.

For example, consider a CUSTOMERS table having a CUST_GEO_LOCATION
column of type SDO_GEOMETRY, as created in this example
schema:

Instead of attempting to get CUST_GEO_LOCATION by directly calling a
PL/SQL procedure that returns an SDO_GEOMETRY parameter, you could
instead get the scalar coordinates by using an intermediary PL/SQL
block that decomposes the geometry:

Note the JavaScript precision difference. In this particular example,
you may want to bind using type: oracledb.STRING. Output would be:

{ x: '-71.48923', y: '42.72347' }

9.1.7 Limiting Rows and Creating Paged Datasets

Query data is commonly broken into small sets for two reasons:

‘Web pagination’ that allows moving from one set of rows to a next,
or previous, set.

Fetching of consecutive small sets of data for processing. This
happens because the number of records is too large for Node.js to
handle at the same time.

The latter can be handled by ResultSets or
queryStream() with one execution of the SQL query, as
discussed in those links.

How to do ‘web pagination’ is discussed in this section. For each
‘page’ of results, a SQL query is executed to get the appropriate set
of rows from a table. Since the query will be executed more than
once, make sure to use bind variables for row numbers and row limits.

Oracle Database 12c SQL has an OFFSET / FETCH clause, which is
similar to the LIMIT keyword of MySQL.

You can use a basic execute() or a
ResultSet, or queryStream() with
your query. For basic execute() fetches, make sure that
oracledb.maxRows is greater than the value bound to :maxnumrows,
or set to 0 (meaning unlimited).
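A sketch of the 12c OFFSET / FETCH syntax with bind variables, using the
:maxnumrows name mentioned above (:offset is an illustrative name):

```
SELECT last_name
FROM employees
ORDER BY last_name
OFFSET :offset ROWS FETCH NEXT :maxnumrows ROWS ONLY
```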

In applications where the SQL query is not known in advance, this
method sometimes involves appending the OFFSET clause to the ‘real’
user query. Be very careful to avoid SQL injection security issues.

As an anti-example, another way to limit the number of rows returned
involves setting maxRows. However it is more
efficient to let Oracle Database do the row selection in the SQL query
and only return the exact number of rows required to node-oracledb.

For Oracle Database 11g and earlier there are several alternative ways
to limit the number of rows returned. Refer to Oracle Magazine
for details.

The old, canonical paging query is:

SELECT *
FROM (SELECT a.*, ROWNUM AS rnum
FROM (YOUR_QUERY_GOES_HERE -- including the order by) a
WHERE ROWNUM <= MAX_ROW)
WHERE rnum >= MIN_ROW

Here, MIN_ROW is the row number of first row and MAX_ROW is the
row number of the last row to return. For example:

SELECT *
FROM (SELECT a.*, ROWNUM AS rnum
FROM (SELECT last_name FROM employees ORDER BY last_name) a
WHERE ROWNUM <= 20)
WHERE rnum >= 1

This always has an ‘extra’ column, here called RNUM.

An alternative and preferred query syntax for Oracle Database 11g uses
the analytic ROW_NUMBER() function. For example to get the 1st to
20th names the query is:

SELECT last_name FROM
(SELECT last_name,
ROW_NUMBER() OVER (ORDER BY last_name) AS myr
FROM employees)
WHERE myr BETWEEN 1 and 20

9.1.8 Auto-Increment Columns

In Oracle Database 12c you can create tables with auto-incremented
values. This is useful to generate unique primary keys for your data
when ROWID or UROWID are not preferred.

Prior to Oracle Database 12c, auto-increment columns in Oracle
Database can be created using a sequence generator and a trigger.

Sequence generators are defined in the database and return Oracle
numbers. Sequence numbers are generated independently of tables.
Therefore, the same sequence generator can be used for more than one
table or anywhere that you want to use a unique number. You can get a
new value from a sequence generator using the NEXTVAL operator in a
SQL statement. This gives the next available number and increments
the generator. The similar CURRVAL operator returns the current value
of a sequence without incrementing the generator.

A trigger is a PL/SQL procedure that is automatically invoked at a
predetermined point. In this example a trigger is invoked whenever an
insert is made to a table.
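As a sketch of the pre-12c pattern (object names assumed), a sequence and trigger pair might look like:

```sql
CREATE SEQUENCE myseq;

CREATE OR REPLACE TRIGGER mytrigger
  BEFORE INSERT ON mytable
  FOR EACH ROW
BEGIN
  :new.id := myseq.NEXTVAL;
END;
/
```

Direct assignment of NEXTVAL is available from Oracle Database 11g; in earlier releases the trigger body would instead use SELECT myseq.NEXTVAL INTO :new.id FROM dual.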

9.2 Cursor Management

Developers starting out with Node have to get to grips with the
‘different’ programming style of JavaScript that seems to cause
methods to be called when least expected! While you are still in the
initial hacking-around-with-node-oracledb phase you may sometimes
encounter the error ORA-01000: maximum open cursors exceeded. A
cursor is a “handle for the session-specific private SQL area that
holds a parsed SQL statement and other processing information”.

Here are things to do when you see an ORA-1000:

Avoid having too many incompletely processed statements open at one time:

Make sure your application is handling connections and statements
in the order you expect.

If cursors are opened with DBMS_SQL.OPEN_CURSOR() in a PL/SQL
block, close them before the block returns - except for REF
CURSORs being passed back to node-oracledb.

Choose the appropriate Statement Cache size. Node-oracledb has a
statement cache per connection. When node-oracledb internally
releases a statement it will be put into the statement cache of that
connection, and its cursor will remain open. This makes statement
re-execution very efficient.

The cache size is settable with the
oracledb.stmtCacheSize attribute.
The size you choose will depend on your knowledge of the
locality of the statements, and of the resources available to the
application. Are statements re-executed? Will they still be in the
cache when they get executed? How many statements do you want to be
cached? In rare cases when statements are not re-executed, or are
likely not to be in the cache, you might even want to disable the
cache to eliminate its management overheads.

Use bind variables otherwise each variant of the statement will have
its own statement cache entry and cursor. With appropriate binding
only one entry and cursor will be needed.

Set the database’s open_cursors parameter appropriately.
This parameter specifies the maximum number of cursors that each
“session” (that is, each node-oracledb connection) can use. When a
connection exceeds the value, the ORA-1000 error is thrown.

Along with a cursor per entry in the connection’s statement cache,
any new statements that a connection is currently executing, or
ResultSets that haven’t been released (in neither situation are
these yet cached), will also consume a cursor. Make sure that
open_cursors is large enough to accommodate the maximum open
cursors any connection may have. The upper bound required is
stmtCacheSize + the maximum number of executing statements in a
connection.

Remember this is all per connection. Also cache management happens
when statements are internally released. The majority of your
connections may use less than open_cursors cursors, but if one
connection is at the limit and it then tries to execute a new
statement, that connection will get ORA-1000: maximum open cursors
exceeded.

10. PL/SQL Execution

PL/SQL stored procedures, functions and anonymous blocks can be called
from node-oracledb using execute().

Note the error property of the callback is not set when PL/SQL
“success with info” warnings such as compilation warnings occur.

10.4 Using DBMS_OUTPUT

The DBMS_OUTPUT package is the standard way to “print” output
from PL/SQL. DBMS_OUTPUT works like a buffer. Your
Node.js application code must first turn on DBMS_OUTPUT buffering for
the current connection by calling the PL/SQL procedure
DBMS_OUTPUT.ENABLE(NULL). Then any PL/SQL executed by the
connection can put text into the buffer using
DBMS_OUTPUT.PUT_LINE(). Finally DBMS_OUTPUT.GET_LINE() is used to
fetch from that buffer. Note, any PL/SQL code that uses DBMS_OUTPUT
runs to completion before any output is available to the user. Also,
other database connections cannot access your buffer.

A basic way to fetch DBMS_OUTPUT with node-oracledb is to bind an
output string when calling the PL/SQL DBMS_OUTPUT.GET_LINE()
procedure, print the string, and then repeat until there is no more
data. The following snippet is based on the example
dbmsoutputgetline.js:

11. Working with CLOB and BLOB Data

Oracle Database uses LOB data types to store long objects. The CLOB
type is used for character data and the BLOB type is used for binary
data. In node-oracledb, LOBs can be represented by instances of
the Lob class or as Strings and Buffers.

11.1 Simple Insertion of LOBs

Node.js String or Buffer types can be passed into PL/SQL blocks or
inserted into the database by binding to LOB columns or PL/SQL
parameters.

If the data is larger than can be handled as a String or Buffer in
Node.js or node-oracledb, it will need to be streamed to
a Lob, as discussed in Streams and Lobs.
See LOB Bind Parameters for size considerations regarding
LOB binds.

Given the table:

CREATE TABLE mylobs (id NUMBER, c CLOB, b BLOB);

an INSERT example is:

var fs = require('fs');

var str = fs.readFileSync('example.txt', 'utf8');
. . .

conn.execute(
  "INSERT INTO mylobs (id, myclobcol) VALUES (:idbv, :cbv)",
  { idbv: 1, cbv: str },  // type and direction are optional for IN binds
  function(err, result) {
    if (err)
      console.error(err.message);
    else
      console.log('CLOB inserted from example.txt');
  . . .

Getting LOBs as String or Buffer from PL/SQL

PL/SQL LOB OUT parameters can be bound as oracledb.STRING or
oracledb.BUFFER. See LOB Bind Parameters for size
considerations regarding LOB binds.

conn.execute(
  "BEGIN lobs_out(:id, :c, :b); END;",
  { id: 20,
    c: { type: oracledb.STRING, dir: oracledb.BIND_OUT, maxSize: 50000 },
    b: { type: oracledb.BUFFER, dir: oracledb.BIND_OUT, maxSize: 50000 } },
  function(err, result) {
    if (err) { return cb(err, conn); }

    var str = result.outBinds.c;  // a String
    var buf = result.outBinds.b;  // a Buffer
    return cb(null, str, buf);    // do something with str and buf
  });

The fetched String and Buffer can be used directly in Node.js.

If data to be bound is larger than can be handled as a String or
Buffer in Node.js or node-oracledb, it will need to be explicitly
streamed to a Lob, as discussed
in Streams and Lobs.
See LOB Bind Parameters for size considerations regarding
LOB binds.

11.3 Streams and Lobs

The Lob Class in node-oracledb implements the Node.js
Stream interface to provide streaming access to CLOB and BLOB
database columns and to PL/SQL bind parameters.

Node-oracledb Lobs can represent persistent LOBs (those permanently
stored in the database) or temporary LOBs (such as those created
with connection.createLob(), or returned
from some SQL or PL/SQL).

If multiple LOBs are streamed concurrently, worker threads will
effectively be serialized on the connection.

It is the application’s responsibility to make sure the connection
remains open while a Stream operation such as pipe() is in progress.

Readable Lobs

Being a Stream object, a Lob being read from the database has two
modes of operation: “flowing mode” and “paused mode”. In flowing mode,
data is piped to another stream, or events are posted as data is read.
In paused mode the application must explicitly call read() to get
data.

The read(size) unit is in characters for CLOBs and in bytes for BLOBs.

When reading a LOB from the database, resources are automatically
released at completion of the readable stream or if there is a LOB
error. The lob.close() method can also be used to close persistent
LOBs that have not been streamed to completion.

A Readable Lob object starts out in paused mode. If a ‘data’ event
handler is added, or the Lob is piped to a Writeable stream, then the
Lob switches to flowing mode.

For unpiped Readable Lobs operating in flowing mode where the Lob is
read through event handlers, the Lob object can be switched to paused
mode by calling pause(). Once the Lob is in paused mode, it stops
emitting data events.

Similarly, a Readable Lob operating in the paused mode can be switched
to flowing mode by calling resume(). It will then start emitting
‘data’ events again.

Writeable Lobs

Lobs are written to with pipe(). Alternatively the write() method
can be called successively, with the last piece being written by the
end() method. The end() method must be called because it frees
resources. If the Lob is being piped into, then the write() and
end() methods are automatically called.

Writeable Lobs also have events, see the Node.js Stream
documentation.

At the conclusion of streaming into a Writeable Lob, the close event
will occur. It is recommended to put logic such as committing and
releasing connections in this event (or after it occurs). See
lobinsert2.js. It is also recommended that persistent LOBs not
use the finish event handler for cleanup.

11.4 Using RETURNING INTO to Insert into LOBs

If Strings or Buffers are too large to be directly inserted into the
database (see Simple Insertion of LOBs), use a
RETURNING INTO clause to retrieve a Lob for a table
item. Data can then be streamed into the Lob and committed directly
to the table:

connection.execute(
  "INSERT INTO mylobs (id, c) VALUES (:id, EMPTY_CLOB()) RETURNING c INTO :lobbv",
  { id: 4, lobbv: { type: oracledb.CLOB, dir: oracledb.BIND_OUT } },
  { autoCommit: false },  // a transaction needs to span the INSERT and pipe()
  function(err, result) {
    if (err) { console.error(err.message); return; }
    if (result.rowsAffected != 1 || result.outBinds.lobbv.length != 1) {
      console.error('Error getting a LOB locator');
      return;
    }

    var lob = result.outBinds.lobbv[0];
    lob.on('close', function() {
      connection.commit(  // all data is loaded so we can commit it
        function(err) {
          if (err)
            console.error(err.message);
          connection.close(function(err) {
            if (err)
              console.error(err);
          });
        });
    });
    lob.on('error', function(err) {
      console.error(err);
      connection.close(function(err) {
        if (err)
          console.error(err.message);
      });
    });

    var inStream = fs.createReadStream('example.txt');  // open the file to read from
    inStream.on('error', function(err) {
      if (err)
        console.error(err);
    });

    inStream.pipe(lob);  // copies the text to the LOB
  });

This example streams from a file into the table. When the data has
been completely streamed, the Lob is automatically closed and the
‘close’ event triggered. At this point the data can be committed.

Returned Lobs can be used as Readable Streams. Data can be
streamed from each Lob, for example to a file. At the conclusion of
the stream, persistent LOBs are automatically closed.

Lobs returned from the database that are not streamed can be passed
back to the database as IN binds for PL/SQL blocks, for INSERT, or
for UPDATE statements. The Lobs should then be closed
with lob.close(). If they are passed as IN OUT binds,
they will be automatically closed and the
execution outBinds property will contain the
updated Lob.

PL/SQL LOB Parameter Fetch Example

conn.execute(
  "BEGIN lobs_out(:id, :c, :b); END;",
  { id: 1,
    c: { type: oracledb.CLOB, dir: oracledb.BIND_OUT },
    b: { type: oracledb.BLOB, dir: oracledb.BIND_OUT } },
  function(err, result) {
    if (err) { return cb(err, conn); }

    var clob = result.outBinds.c;
    var blob = result.outBinds.b;
    cb(null, clob, blob);  // do something with the Lobs
  });

Streaming Out a Lob

Once a Lob is obtained from a query or PL/SQL OUT bind, it can be
streamed out:

if (lob === null) {
  // . . . do special handling such as create an empty file or throw an error
}

if (lob.type === oracledb.CLOB) {
  lob.setEncoding('utf8');  // set the encoding so we get a 'string' not a 'buffer'
}

lob.on('error', function(err) { cb(err); });
lob.on('close', function() { cb(null); });  // all done.  The Lob is automatically closed.

var outStream = fs.createWriteStream('myoutput.txt');
outStream.on('error', function(err) { cb(err); });

// switch into flowing mode and push the LOB to myoutput.txt
lob.pipe(outStream);

Note the Lob is automatically closed at the end of the stream.

An alternative to the lob.pipe() call is to have a data event on
the Lob Stream which processes each chunk of LOB data separately.
Either a String or Buffer can be built up or, if the LOB is big, each
chunk can be written to another Stream or to a file:

if (lob === null) {
  // . . . do special handling such as create an empty file or throw an error
}

var str = "";

lob.setEncoding('utf8');  // set the encoding so we get a 'string' not a 'buffer'
lob.on('error', function(err) { cb(err); });
lob.on('close', function() { cb(null); });  // all done.  The Lob is automatically closed.
lob.on('data', function(chunk) {
  str += chunk;  // or use Buffer.concat() for BLOBs
});
lob.on('end', function() {
  fs.writeFile(..., str, ...);
});
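The comment in the snippet above mentions Buffer.concat() for BLOBs. A standalone sketch of that accumulation pattern, with hardcoded chunks standing in for the Lob ‘data’ events:

```javascript
// Sketch: collect BLOB chunks as Buffers, then join them at the end.
var chunks = [];

function onData(chunk) {
  chunks.push(chunk);  // each 'data' event delivers one Buffer chunk
}

// simulate three 'data' events
onData(Buffer.from([0x01, 0x02]));
onData(Buffer.from([0x03]));
onData(Buffer.from([0x04, 0x05]));

// at the 'end' event, join the chunks into a single Buffer
var blobData = Buffer.concat(chunks);
console.log(blobData.length);  // 5
```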

Node-oracledb’s lob.pieceSize can be used to
control the number of bytes retrieved for each readable ‘data’ event.
This sets the number of bytes (for BLOBs) or characters (for CLOBs).
The default is lob.chunkSize. The
recommendation is for it to be a multiple of chunkSize.

11.6 Using createLob() for PL/SQL IN Binds

Node-oracledb applications can create Oracle ‘temporary LOBs’ by
calling connection.createLob(). These are
instances of the Lob class. They can be populated with
data and passed to PL/SQL blocks. This is useful if the data is
larger than feasible for direct binding
(see Simple Insertion of LOBs). These Lobs can
also be used for SQL statement IN binds, however the RETURNING INTO
method shown above will be more efficient.

Lobs from createLob() will use space in the temporary tablespace
until lob.close() is called. Database Administrators
can track this usage by querying V$TEMPORARY_LOBS.

Passing a Lob Into PL/SQL

The following insertion example is based on lobplsqltemp.js. It
creates an empty LOB, populates it, and then passes it to a PL/SQL
procedure.

Once created, data can be inserted into it. For example to read a
text file:

templob.on('error', function(err) { somecallback(err); });

// The data was loaded into the temporary LOB, so use it
templob.on('finish', function() {
  somecallback(null, templob);
});

// copies the text from 'example.txt' to the temporary LOB
var inStream = fs.createReadStream('example.txt');
inStream.on('error', function(err) { ... });
inStream.pipe(templob);

Now that the LOB has been populated, it can be bound in somecallback()
to a PL/SQL IN parameter:

// For PROCEDURE lobs_in (p_id IN NUMBER, c_in IN CLOB, b_in IN BLOB)
conn.execute(
  "BEGIN lobs_in(:id, :c, null); END;",
  { id: 3, c: templob },  // type and direction are optional for IN binds
  function(err) {
    if (err) { return cb(err); }
    console.log("Call completed");
    return cb(null, conn, templob);
  });

Persistent or temporary Lobs returned from the database should be
closed with lob.close() unless they have been automatically closed.
Automatic closing of returned Lobs occurs when:

streaming has completed

a stream error occurs

the Lob was used as the source for an IN OUT bind

If you try to close a Lob being used for streaming you will get the
error NJS-023: concurrent operations on a Lob are not allowed.

The connection must be open when calling lob.close() on a temporary
LOB.

The lob.close() method emits the Node.js Stream ‘close’ event
unless the Lob has already been closed explicitly or automatically.

12. Oracle Database 12c JSON Data type

Oracle Database 12.1.0.2 introduced native support for JSON data. You
can use JSON with relational database features, including
transactions, indexing, declarative querying, and views. You can
project JSON data relationally, making it available for relational
processes and tools.

JSON data in the database is stored as BLOB, CLOB or VARCHAR2 data.
This means that node-oracledb can easily insert and query it.

As an example, the following table has a PO_DOCUMENT column that is
enforced to be JSON:

Queries can access JSON with Oracle JSON path expressions. These
expressions are matched by Oracle SQL functions and conditions to
select portions of the JSON data. Path expressions can use wildcards
and array ranges. An example is $.friends which is the value of
JSON field friends.

Oracle provides SQL functions and conditions to create, query, and
operate on JSON data stored in the database. An example is the Oracle
SQL Function JSON_TABLE which projects JSON data to a relational
format effectively making it usable like an inline relational view.
Another example is JSON_EXISTS which tests for the existence of a
particular value within some JSON data:
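A sketch of JSON_EXISTS (table, column, and field names assumed):

```sql
SELECT po.po_document
FROM j_purchaseorder po
WHERE JSON_EXISTS (po.po_document, '$.location');
```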

14. Bind Parameters for Prepared Statements

SQL and PL/SQL statements may contain bind parameters, indicated by
colon-prefixed identifiers or numerals. These indicate where
separately specified values are substituted in a statement when it is
executed, or where values are to be returned after execution.

IN binds are values passed into the database. OUT binds are used to
retrieve data. IN OUT binds are passed in, and may return a different
value after the statement executes.

Using bind parameters is recommended in preference to constructing SQL
or PL/SQL statements by string concatenation or template literals.
This is for performance and security.

Inserted data that is bound is passed to the database separately from
the statement text. It can never be executed directly. This means
there is no need to escape bound data inserted into the database.

If a statement is executed more than once with different values for
the bind parameters, then Oracle can re-use context from the initial
execution, generally improving performance. However, if similar
statements contain hard coded values instead of bind parameters,
Oracle sees the statement text is different and will be less
efficient.

Bind parameters can be used to substitute data but not the text of the
statement.

Bind variables cannot be used in DDL statements, for example
CREATE TABLE or ALTER commands.

14.1 IN Bind Parameters

For IN binds, a data value is passed into the database and substituted
into the statement during execution of SQL or PL/SQL.

The direction dir is oracledb.BIND_IN, which is the default when dir
is not specified.

The val attribute may be a constant or a JavaScript variable.

If type is omitted, it is inferred from the bind data value. If
type is set, it can be oracledb.STRING, oracledb.NUMBER,
oracledb.DATE or oracledb.BUFFER matching the standard Node.js
type of the data being passed into the database. Use a bind type of
oracledb.BLOB or oracledb.CLOB to pass in Lob
instances. The type oracledb.BUFFER can bind a Node.js Buffer to
an Oracle Database RAW, LONG RAW or BLOB type.

Since dir and type have defaults, these attributes are sometimes
omitted for IN binds. Binds can be like:
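As a sketch, the following IN bind forms are equivalent; a stand-in object replaces the real oracledb module’s constants so the snippet is self-contained:

```javascript
// Stand-in for the real module's constants, for illustration only
var oracledb = { STRING: 'STRING', BIND_IN: 'BIND_IN' };

// Fully specified bind variable object
var countryFull = { dir: oracledb.BIND_IN, val: "Brazil", type: oracledb.STRING };

// dir and type omitted: they default to BIND_IN and the inferred type
var countryShort = { val: "Brazil" };

// The bare value can also be passed directly
var countryBare = "Brazil";
```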

The position of the array values corresponds to the position of the
SQL bind parameters as they occur in the statement, regardless of their
names. This is still true even if the bind parameters are named like
:0, :1, etc. The following snippet will fail because the country
name needs to be the second entry of the array so it becomes the
second value in the INSERT statement.
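A sketch of the ordering rule (statement, table, and values hypothetical): in SQL, array position, not the numeric name, determines which placeholder a value is bound to:

```javascript
// Placeholders occur in the statement in the order :1 then :0
var sql = "INSERT INTO countries (country_id, country_name) VALUES (:1, :0)";

// WRONG: "Tonga" is first in the array, so it would be bound to the
// first placeholder in the statement (:1), i.e. the country_id column
var failingBinds = ["Tonga", 90];

// RIGHT: the first array entry goes to the first placeholder (:1),
// and the country name is the second entry, matching the second placeholder
var workingBinds = [90, "Tonga"];
```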

In the context of SQL statements, the input array position ‘n’
indicates the bind parameter at the n’th position in the
statement. However, in the context of PL/SQL statements the position
‘n’ in the bind call indicates a binding for the n’th unique parameter
name in the statement when scanned left to right.

If a bind parameter name is repeated in the SQL string
then bind by name syntax should be used.

Bind Data Type Notes

When binding a JavaScript Date value in an INSERT statement, it is
inserted as if it represented a TIMESTAMP WITH LOCAL TIME ZONE value.
In the database, TIMESTAMP WITH LOCAL TIME ZONE dates are normalized
to the database time zone, or to the time zone specified for TIMESTAMP
WITH TIME ZONE columns. If later queried, they are returned in the
session time zone. See Fetching Date and Timestamps
for more information.

The type oracledb.CURSOR cannot be used with IN binds.

Binding NCHAR, NVARCHAR or NCLOB for DML may result in incorrect
character mapping, depending on the database character set and the
database national character set. It may work in the case where the
database character set can safely convert to the database national
character set.

14.2 OUT and IN OUT Bind Parameters

OUT binds are used to retrieve data from the database. IN OUT binds
are passed in, and may return a different value after the statement
executes. IN OUT binds can be used for PL/SQL calls, but not for SQL.

For each OUT and IN OUT bind parameter
in bindParams, a bind variable object
containing
dir,
val, type,
and maxSize properties is used:

The dir attribute should be oracledb.BIND_OUT or
oracledb.BIND_INOUT, depending on whether data is only to be
returned from the database or additionally passed into the
database.

The val attribute is needed when binding IN OUT to pass a value
into the database. It is not used for OUT binds.

For oracledb.BIND_INOUT parameters, the type attribute is
inferred from the input data type. Alternatively it can be
explicitly set to oracledb.STRING, oracledb.NUMBER,
oracledb.DATE, oracledb.BLOB, oracledb.CLOB or
oracledb.BUFFER, matching the data type of the Node.js value or
variable. The output data type will always be the same as the
input data type.

For oracledb.BIND_OUT parameters the type attribute will be the
node-oracledb or Node.js data type that data will be returned as.
It should be oracledb.STRING, oracledb.NUMBER, oracledb.DATE,
oracledb.BUFFER, oracledb.CURSOR, oracledb.BLOB, or
oracledb.CLOB. If type is not specified for OUT binds then
oracledb.STRING is assumed.

Oracle Database RAW, LONG RAW or BLOB data can be bound with a
type of oracledb.BUFFER to return a Node.js Buffer.

Oracle Database LONG, ROWID and UROWID data can be bound with a
type of oracledb.STRING to return a JavaScript String.

A maxSize attribute should be set for oracledb.STRING or
oracledb.BUFFER OUT or IN OUT binds. This is the maximum number
of bytes the bind parameter will return. If the output value does
not fit in maxSize bytes, then an error such as ORA-06502: PL/SQL:
numeric or value error: character string buffer too small or
NJS-016: buffer is too small for OUT binds occurs.

A default value of 200 bytes is used when maxSize is not provided
for OUT binds of type oracledb.STRING or oracledb.BUFFER.

A string representing a UROWID may be up to 5267 bytes long in
node-oracledb.

Accessing OUT Bind Values

The results parameter of the execute()
callback contains an outBinds property with the
returned OUT and IN OUT bind values.
If bindParams was passed as an array, then
outBinds is returned as an array, with the same order as the binds
in the statement. If bindParams was passed as an object, then
outBinds is returned as an object.

14.3 DML RETURNING Bind Parameters

Bind parameters from “DML RETURNING” statements (such as INSERT
... RETURNING ... INTO ...) can use oracledb.BLOB, oracledb.CLOB, oracledb.STRING,
oracledb.NUMBER or oracledb.DATE for the OUT type.

For oracledb.STRING types, an error occurs if maxSize
is not large enough to hold a returned value.

Note each DML RETURNING bind OUT parameter is returned as an array
containing zero or more elements. Application code that is designed
to expect only one value could be made more robust if it confirms the
returned array length is not greater than one. This will help identify
invalid data or an incorrect WHERE clause that causes more results
to be returned.
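A sketch of such a robustness check (the helper name is hypothetical):

```javascript
// Sketch: guard against a DML RETURNING OUT bind containing more values
// than expected, e.g. because a WHERE clause matched multiple rows.
function getSingleReturnedValue(outArray) {
  if (outArray.length > 1) {
    throw new Error('Expected at most 1 returned value, got ' + outArray.length);
  }
  return outArray.length === 1 ? outArray[0] : undefined;
}

console.log(getSingleReturnedValue([42]));  // 42
```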

Oracle Database DATE, TIMESTAMP, TIMESTAMP WITH LOCAL TIME ZONE
and TIMESTAMP WITH TIME ZONE types can be bound as oracledb.DATE for DML
RETURNING. These types can also be bound as oracledb.STRING, if desired.
ROWID and UROWID data to be returned can be bound as oracledb.STRING.
Note that a string representing a UROWID may be up to 5267 bytes
long.

No duplicate binds are allowed in a DML statement with a RETURNING
clause, and no duplication is allowed between bind parameters in the
DML section and the RETURNING section of the statement.

14.4 REF CURSOR Bind Parameters

Oracle REF CURSORS can be fetched in node-oracledb by binding an
oracledb.CURSOR to a PL/SQL call. The resulting bind variable becomes a
ResultSet, allowing rows to be fetched using
getRow() or getRows(). The ResultSet can
also be converted to a Readable Stream by using
toQueryStream().

If using getRow() or getRows() the ResultSet must be freed using
close() when all rows have been fetched, or when the
application does not want to continue getting more rows. If the REF
CURSOR is set to NULL or is not set in the PL/SQL procedure then the
returned ResultSet is invalid and methods like getRows() will
return an error when invoked.

var oracledb = require('oracledb');

var numRows = 10;  // number of rows to return from each call to getRows()

var plsql = "BEGIN get_emp_rs(:sal, :cursor); END;";
var bindvars = {
  sal: 6000,
  cursor: { type: oracledb.CURSOR, dir: oracledb.BIND_OUT }
};

connection.execute(
  plsql,
  bindvars,
  function(err, result) {
    if (err) { ... }
    fetchRowsFromRS(connection, result.outBinds.cursor, numRows);
  });

function fetchRowsFromRS(connection, resultSet, numRows) {
  resultSet.getRows(  // get numRows rows
    numRows,
    function(err, rows) {
      if (err) {
        ...                          // close the ResultSet and release the connection
      } else if (rows.length > 0) {  // got some rows
        console.log(rows);           // process rows
        if (rows.length === numRows) // might be more rows
          fetchRowsFromRS(connection, resultSet, numRows);
        else                         // got fewer rows than requested so must be at end
          ...                        // close the ResultSet and release the connection
      } else {                       // else no rows
        ...                          // close the ResultSet and release the connection
      }
    });
}

The connection must remain open until the stream is completely read.
Query results must be fetched to completion to avoid resource leaks.
The ResultSet close() call for streaming query results will be
executed internally when all data has been fetched.

14.5 LOB Bind Parameters

Database CLOBs can be bound with type set
to oracledb.CLOB. Database BLOBs can be bound
as oracledb.BLOB. These binds accept, or
return, node-oracledb Lob instances, which implement the
Node.js Stream interface.

Lobs may represent Oracle Database persistent LOBs (those stored in
tables) or temporary LOBs (such as those created
with createLob() or returned by some SQL and
PL/SQL operations).

LOBs can be bound with direction oracledb.BIND_IN, oracledb.BIND_OUT or
oracledb.BIND_INOUT, depending on context.

Note that any PL/SQL OUT LOB parameter should be initialized in the
PL/SQL block - even just to NULL - before the PL/SQL code
completes. Make sure to do this in all PL/SQL code paths including in
error handlers. This prevents node-oracledb throwing the error
DPI-007: invalid OCI handle or descriptor.

In many cases it will be easier to work with JavaScript Strings and
Buffers instead of Lobs. These types can be bound
directly for SQL IN binds to insert into, or update, LOB columns.
They can also be bound to PL/SQL LOB parameters. Use the bind
type oracledb.STRING for CLOBs
and oracledb.BUFFER for BLOBs. The default
size used for these binds in the OUT direction is 200, so set
maxSize appropriately.

Size Limits for Binding LOBs to Strings and Buffers

When CLOBs are bound as oracledb.STRING, or BLOBs are bound as
oracledb.BUFFER, the limitation on binding is the memory available
to Node.js and the V8 engine. For data larger than several megabytes,
it is recommended to bind as oracledb.CLOB or oracledb.BLOB and
use Lob streaming. If you try to create large
Strings or Buffers in Node.js you will see errors like JavaScript
heap out of memory, or other space related messages.

Internally, temporary LOBs are used when binding Strings and Buffers
larger than 32 KB for PL/SQL calls. Freeing of the temporary LOB is
handled automatically. For SQL calls no temporary LOBs are used.

14.6 PL/SQL Collection Associative Array (Index-by) Bind Parameters

Arrays of strings and numbers can be bound to PL/SQL IN, IN OUT, and
OUT parameters of PL/SQL INDEX BY associative array type. This type
was formerly called PL/SQL tables or index-by tables. This method of
binding can be a very efficient way of transferring small data sets.
Note PL/SQL’s VARRAY and nested table collection types cannot be
bound.

For OUT and IN OUT binds, the maxArraySize
bind property must be set. Its value is the maximum number of
elements that can be returned in an array. An error will occur if the
PL/SQL block attempts to insert data beyond this limit. If the PL/SQL
code returns fewer items, the JavaScript array will have the actual
number of data elements and will not contain null entries. Setting
maxArraySize larger than needed will cause unnecessary memory
allocation.

For IN OUT binds, maxArraySize can be greater than the number of
elements in the input array. This allows more values to be returned
than are passed in.

For IN binds, maxArraySize is ignored, as also is maxSize.

For oracledb.STRING IN OUT or OUT binds, the string length
maxSize property may be set. If it is not set
the memory allocated per string will default to 200 bytes. If the
value is not large enough to hold the longest string data item in the
collection a runtime error occurs. To avoid unnecessary memory
allocation, do not let the size be larger than needed.

The next example fetches an array of values from a table. First,
insert these values:

14.7 Binding Multiple Values to a SQL WHERE IN Clause

A single value is easily bound in a WHERE IN clause:

sql = 'SELECT last_name FROM employees WHERE first_name IN (:bv)';
binds = ['Christopher'];
connection.execute(sql, binds, function(...));

But a common use case for a query WHERE IN clause is for multiple
values, for example when a web user selects multiple check-box options
and the query should match all chosen values.

Trying to associate multiple data values with a single bind parameter
won’t work. To use a fixed, small number of values in a WHERE IN
clause, the SQL query should have individual bind parameters, for
example:
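A sketch with four individual bind parameters (names assumed; the execute() call is commented out as it needs a connection):

```javascript
var sql = 'SELECT last_name FROM employees ' +
          'WHERE first_name IN (:bv1, :bv2, :bv3, :bv4)';
var binds = ['Alyssa', 'Christopher', 'Hazel', 'Samuel'];

// connection.execute(sql, binds, function(err, result) { ... });

// the number of placeholders matches the number of bound values
var placeholders = sql.match(/:bv\d/g) || [];
console.log(placeholders.length === binds.length);  // true
```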

If you sometimes execute the query with a smaller number of items, a
null can be bound for the ‘missing’ values:

binds = ['Alyssa', 'Christopher', 'Hazel', null];

When the exact same statement text is re-executed many times
regardless of the number of user supplied values, you get performance
and scaling benefits from not having multiple, unique SQL statements
being run.

Another solution when the number of data items is only known at
runtime is to build up an exact SQL string like:

binds = ['Christopher', 'Hazel', 'Samuel'];

sql = "SELECT first_name, last_name FROM employees WHERE first_name IN (";
for (var i = 0; i < binds.length; i++)
  sql += (i > 0) ? ", :" + i : ":" + i;
sql += ")";

Binds are still used for security. But, depending how often this
query is executed, and how changeable the number of bind values is,
you can end up with lots of ‘unique’ query strings being executed.
You might not get the statement caching benefits that re-executing a
fixed SQL statement would have.

Another solution for a larger number of values is to construct a SQL
statement like:

SELECT ... WHERE col IN ( <something that returns a list of rows> )

The easiest way to do the <something that returns a list of rows>
will depend on how the data is initially represented and the number of
items. You might look at using CONNECT BY or nested tables. Or,
for really large numbers of items, you might prefer to use a global
temporary table. Some solutions are given in On Cursors, SQL, and
Analytics and in this StackOverflow answer.

15. Transaction Management

The node-oracledb add-on implements commit() and
rollback() methods that can be used to explicitly
control transactions.

If the autoCommit flag is set to true,
then a commit occurs at the end of each execute() call. Unlike an
explicit commit(), this does not require an additional round-trip to
the database. For maximum efficiency, set autoCommit to true for the
last execute() call of a transaction in preference to using an
additional, explicit commit() call.
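As a sketch, assuming `conn` is an open node-oracledb connection (the transfer logic and table names are hypothetical):

```javascript
// Hypothetical funds transfer: two DML statements in one transaction.
async function transfer(conn, fromId, toId, amount) {
  await conn.execute(
    `UPDATE accounts SET balance = balance - :amt WHERE id = :id`,
    { amt: amount, id: fromId });

  // Setting autoCommit on the final execute() commits both updates
  // during that round-trip, avoiding a separate conn.commit() call.
  await conn.execute(
    `UPDATE accounts SET balance = balance + :amt WHERE id = :id`,
    { amt: amount, id: toId },
    { autoCommit: true });
}
```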

When a connection is released, any ongoing transaction will be rolled
back. Therefore if a released, pooled connection is re-used by a
subsequent pool.getConnection() call
(or oracledb.getConnection() call that uses a
pool), then any DML statements performed on the obtained connection are
always in a new transaction.

When an application ends, any uncommitted transaction on a connection
will be rolled back.

Note: Oracle Database will implicitly commit when a DDL
statement is executed irrespective of the value of autoCommit.

16. Statement Caching

Node-oracledb’s execute() and
queryStream() methods use the Oracle Call Interface
statement cache to make re-execution of statements efficient.
This cache removes the need for the separate ‘prepare’ or ‘parse’
method which is sometimes seen in other Oracle APIs: there is no
separate method in node-oracledb.

Each non-pooled connection and each session in the connection pool has
its own cache of statements with a default size of 30. Statement
caching lets cursors be used without re-parsing the statement.
Statement caching also reduces metadata transfer costs between
node-oracledb and the database. Performance and scalability are
improved.

In general, set the statement cache to the size of the working set of
statements being executed by the application.

Statement caching can be disabled by setting the size to 0. Disabling
the cache may be beneficial when the quantity or order of statements
causes cache entries to be flushed before they get a chance to be
reused, for example when there are more distinct statements than cache
slots and the order of statement execution causes older statements to
be flushed from the cache before they are re-executed.

The statement cache size can be set globally with the
stmtCacheSize property, and the value can also be overridden in the
poolAttrs parameter to the createPool() method.
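For example, a pool might be created with a larger per-session cache. The credentials and connect string below are placeholders:

```javascript
// stmtCacheSize overrides the default cache of 30 statements per session.
const poolAttrs = {
  user          : "hr",              // placeholder credentials
  password      : "welcome",
  connectString : "localhost/XE",
  stmtCacheSize : 40                 // cache size for this pool's sessions
};
// oracledb.createPool(poolAttrs, function(err, pool) { /* use pool */ });
```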

With Oracle Database 12c, the statement cache size can be
automatically tuned with the External Configuration oraaccess.xml
file.

To manually tune the statement cache size, monitor general application
load and the AWR “bytes sent via SQL*Net to client” values. The
latter statistic should benefit from not shipping statement metadata
to node-oracledb. Adjust the statement cache size to your
satisfaction.

Prefetching is the number of additional rows the underlying Oracle
client library fetches whenever node-oracledb requests query data from
the database. Prefetching is a tuning option to maximize data
transfer efficiency and minimize round-trips to the database. The
prefetch size does not affect when, or how many, rows are returned by
node-oracledb to the application. The cache management is
transparently handled by the Oracle client libraries. Note, standard
node-oracledb fetch tuning is via
fetchArraySize, but changing the prefetch
value can be useful in some cases such as when modifying the
application is not feasible.

18. Globalization and National Language Support (NLS)

Node-oracledb always uses Oracle’s AL32UTF8 character set internally.
Data will be converted between AL32UTF8 and the
database character set when it is inserted into, or queried from, the
database. The environment variable NLS_LANG can be used to
configure the Oracle client language and territory only.

19. End-to-end Tracing, Mid-tier Authentication, and Auditing

The Connection properties action,
module, and clientId set
metadata for end-to-end tracing. The values can be tracked in
database views, shown in audit trails, and seen in tools such as
Enterprise Manager.

The clientId property can also be used by applications that do their
own mid-tier authentication but connect to the database using a single
database schema. By setting clientId to the application’s
authenticated username, the database is aware of who the actual end
user is. This can, for example, be used by Oracle Virtual Private
Database policies to automatically restrict data access by that
user.

Applications should set the properties because they can greatly help
to identify and resolve unnecessary database resource usage, or
improper access.

The attributes are set on a connection object and
sent to the database on the next ‘round-trip’ from node-oracledb, for
example, with execute():
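A minimal sketch, where `connection` stands in for an open node-oracledb connection and the attribute values are illustrative:

```javascript
// Tag the connection; the values reach the database on the next
// round-trip, such as the connection's next execute() call.
function setTraceAttributes(connection, endUser) {
  connection.module   = "reports";           // illustrative module name
  connection.action   = "monthly-summary";   // illustrative action name
  connection.clientId = endUser;             // the authenticated end user
}
```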

In general, applications should be consistent about how, and when,
they set the end-to-end tracing attributes so that current values are
recorded by the database.

Idle connections released back to a connection pool will retain the
previous attribute values of that connection. This avoids the overhead
of a round-trip to reset the values. The Oracle design assumption is
that pools are actively used and have few idle connections. After
getting a connection from a pool, an application that uses end-to-end
tracing should set new values appropriately.

When a Connection object is displayed, such as with console.log(),
the end-to-end tracing attributes will show as null even if values
have been set and are being sent to the database. This is for
architectural, efficiency and consistency reasons. When an already
established connection is retrieved from a local pool, node-oracledb
is not able to efficiently retrieve values previously established in
the connection. The same occurs if the values are set by a call to
PL/SQL code - there is no efficient way for node-oracledb to know the
values have changed.

The attribute values are commonly useful to DBAs. However, if knowing
the current values is useful in an application, the application should
save the values as part of its application state whenever the
node-oracledb attributes are set. Applications can also find the
current values by querying the Oracle data dictionary or using PL/SQL
procedures such as DBMS_APPLICATION_INFO.READ_MODULE() with the
understanding that these require round-trips to the database.

The Add-on Name

The Oracle Database V$SESSION_CONNECT_INFO view shows the version of
node-oracledb in use. This allows DBAs to verify that applications
are using the desired add-on version. For example, a DBA might see:
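One way to retrieve this information is a query along the following lines (a sketch, assuming an open connection with SELECT access to the view):

```javascript
// Return the client driver recorded for the current session.
async function getClientDriver(conn) {
  const result = await conn.execute(
    `SELECT UNIQUE client_driver
     FROM v$session_connect_info
     WHERE sid = SYS_CONTEXT('USERENV', 'SID')`);
  return result.rows;   // for node-oracledb, names the add-on and version
}
```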

20. Promises in node-oracledb

Notice there are two promise “chains”: one to get a connection and the
other to use it. This is required because it is only possible to
refer to the connection within the function to which it was passed.

When invoking asynchronous methods, it is possible to accidentally
get a Promise by forgetting to pass a callback function:

Since the returned promise will not have a catch block, as the
developer intended to use the callback programming style, any
rejections that occur will go unnoticed. Node.js 4.0 added the
unhandledRejection event to prevent such rejections from going
unnoticed:
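A handler along these lines logs otherwise-silent rejections (a sketch; terminating the process on rejection is one common policy, not a requirement):

```javascript
// Report any Promise rejection that has no catch handler attached.
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled rejection at:', promise, 'reason:', reason);
  // Optionally terminate so the failure is not silently swallowed:
  // process.exit(1);
});
```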

Because node-oracledb Promises support is not enabled by default when
using Node.js 0.10, the callback API is expected. The error stack trace
indicates that line 10 of mypromiseapp.js forgot to pass the
callback. Either install your own Promise library or use the callback
programming style.

21. Async/Await and node-oracledb

Node.js 7.6 supports async functions, also known as Async/Await. These
can be used with node-oracledb. For example:

const oracledb = require('oracledb');

function getEmployee(empid) {
  return new Promise(async function(resolve, reject) {
    let conn;

    try {
      conn = await oracledb.getConnection({
        user          : "hr",
        password      : "welcome",
        connectString : "localhost/XE"
      });

      let result = await conn.execute(
        'SELECT * FROM employees WHERE employee_id = :bv',
        [empid]
      );
      resolve(result.rows);
    } catch (err) {   // catches errors in getConnection and the query
      reject(err);
    } finally {
      if (conn) {     // the conn assignment worked, must release
        try {
          await conn.release();
        } catch (e) {
          console.error(e);
        }
      }
    }
  });
}

async function run() {
  try {
    let res = await getEmployee(101);
    console.log(res);
  } catch (err) {
    console.error(err);
  }
}

run();

If you are using Lob instances for LOB data instead of
working with the data directly as Strings or Buffers, then the Lobs
must be streamed since there is no Promisified interface for them.
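For example, a CLOB's stream can be wrapped in a Promise so it fits an async/await flow (a sketch; `lob` stands in for a Lob instance returned in a query row):

```javascript
// Resolve with the full CLOB contents once the Lob stream ends.
function readLob(lob) {
  return new Promise((resolve, reject) => {
    let data = '';
    lob.setEncoding('utf8');                  // receive Strings, not Buffers
    lob.on('data', (chunk) => data += chunk);
    lob.on('error', reject);
    lob.on('end', () => resolve(data));
  });
}
// Usage inside an async function: const text = await readLob(row.someClob);
```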

22. Tracing SQL and PL/SQL Statements

Applications that set the End-to-end Tracing
attributes, such as action and module,
make it easier to identify SQL statement execution in database
monitoring tools.

In node-oracledb itself, the ODPI-C tracing capability can be
used to log executed statements to the standard error stream. Before
executing Node.js, set the environment variable DPI_DEBUG_LEVEL
to 16. At a Windows command prompt, this could be done with set
DPI_DEBUG_LEVEL=16. On Linux, you might use:

export DPI_DEBUG_LEVEL=16
node myapp.js 2> log.txt

For an application that does a single query, the log file might
contain a tracing line consisting of the prefix ‘ODPI’, a thread
identifier, a timestamp, and the SQL statement executed:

23. Migrating from Previous node-oracledb Releases

23.1 Migrating from node-oracledb 1.13 to node-oracledb 2.0

Installation has changed. Pre-built binaries are available for
common platforms. To build from source code, change your
package.json dependency to install from GitHub. Refer to
INSTALL.

Users of Instant Client RPMs must now always have the Instant Client
libraries in the library search path. Refer to INSTALL.

Users of macOS must now always have the Instant Client
libraries in ~/lib or /usr/local/lib. Refer to INSTALL.

For queries and REF CURSORS, the internal buffer sizing and tuning
of round-trips to Oracle Database is now done with
fetchArraySize. This replaces
prefetchRows, which is no longer used. It
also replaces the overloaded use of maxRows for
queryStream(). To upgrade scripts:

Replace the property prefetchRows with fetchArraySize and make
sure all values are greater than 0.

For direct fetches, optionally replace enormously
over-sized maxRows values with 0, meaning an unlimited number of
rows can be returned.

For direct fetches that relied on the version 1
default value of maxRows to limit the number of
returned rows to 100, it is recommended to use an OFFSET /
FETCH query clause. Alternatively, explicitly set
maxRows to 100.
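Such a clause looks like the following (a sketch using an illustrative table):

```javascript
// Limit the result in the SQL itself rather than with maxRows:
const sql = `SELECT last_name
             FROM employees
             ORDER BY last_name
             OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY`;
```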

Review and update code that checks for specific NJS-XXX or
DPI-XXX error messages.

Ensure that all ResultSets and LOBs
are closed prior to calling
connection.close(). Otherwise you will get the
error DPI-1054: connection cannot be closed when open statements or
LOBs exist. (Note: this limitation was removed in node-oracledb 2.1)

Test applications to check if changes such as the improved property
validation uncover latent problems in your code.

23.2 Migrating from node-oracledb 2.0 to node-oracledb 2.1

When upgrading from node-oracledb version 2.0 to version 2.1:

If using the experimental _close method with Query
Streaming in Node 8 or later: