To use DECnet-Plus for OpenVMS or DECnet Phase IV software, you need
the appropriate software licenses. Table 1-3 lists the two basic
licenses and three license keys for OpenVMS VAX and Alpha systems, for
DECnet Phase IV and DECnet-Plus, respectively.

Footnotes to Table 1-3:

1 Application programming interface.
2 The DVNETRTG license is required on one node in the cluster to enable the cluster alias.
3 Host-based routing is available on DECnet-Plus (Version 7.1); it was not available on DECnet Phase V (DECnet/OSI) systems before Version 7.1.
4 Routing is not supported on DECnet Phase IV OpenVMS Alpha systems.

For mapping between node names and node addresses, you need at least
one of the following three name services:

Local namespace

DECdns

DNS/BIND

A DECnet-Plus network can use one name service exclusively, or it can
have a mixture of systems using one or more of the name services. While
configuring DECnet-Plus, you specify one or more of the three available
name services to use on the node. To determine which name service(s) to
use, check which name services are already being used by other nodes in
your network. For example, if the other nodes in your network are
already using DECdns, you will most likely want to use DECdns and join
the existing namespace. The following sections describe additional
criteria and dependencies for each name service.

Choose the Local namespace if you have a small network and do not wish
to use a distributed namespace. The Local namespace is similar to the
permanent node database (NETNODE_REMOTE.DAT), used on DECnet Phase IV
systems. With the Local namespace, name-to-address mapping information
has to be administered separately on each node. To use the Local
namespace, no additional software is required.
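As an illustration, registering a remote node in the Local namespace
might look like the following decnet_register session. This is a sketch
only: the node name, synonym, and address shown are hypothetical, and
the exact invocation and command qualifiers vary by DECnet-Plus
version, so verify the syntax against the decnet_register
documentation.

```
$ RUN SYS$SYSTEM:DECNET_REGISTER
DECNET_REGISTER> register node LOCAL:.RemNode address 1.5 -
_DECNET_REGISTER> synonym REMNOD
DECNET_REGISTER> show node LOCAL:.RemNode
DECNET_REGISTER> exit
```

Because the Local namespace is not distributed, an equivalent
registration must be repeated on every node that needs to reach
RemNode.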

With DECdns, all node names in the network can be administered from one
location. The mapping information is stored on two or more DECdns
servers and kept up-to-date networkwide automatically. DECnet-Plus
requires at least two DECdns servers in the network. DECdns server
software must be installed and configured on these systems (the server
software is optional software included with the DECnet-Plus for OpenVMS
software kit). The DECnet-Plus Planning Guide describes planning
considerations; the DECnet-Plus for OpenVMS Applications Installation
and Advanced Configuration guide and the DECnet-Plus DECdns Management
guide include installation and configuration instructions.

DNS/BIND, the distributed name service for TCP/IP, supports the storage
of IP addresses and the use of node synonyms. Node synonyms allow for
backward compatibility with older applications that cannot use long
domain names. (Note that DECnet-Plus also allows for node synonyms to
provide backward compatibility with DECnet Phase IV node names.)
DNS/BIND is needed if you want DECnet-Plus to run applications over
TCP/IP. To use the DNS/BIND name service, DECnet-Plus requires one or
more DNS/BIND servers in the network. DNS/BIND must be selected as one
of the name services if you plan to use the DECnet over TCP/IP or OSI
over TCP/IP features. See the appropriate TCP/IP documentation for more
information on DNS/BIND.
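To see which name services a node actually consults, and to check that
the BIND resolver can translate a name, commands along the following
lines can be used. This is a hedged sketch: the host name is
hypothetical, the TCPIP command assumes the TCP/IP Services for OpenVMS
product, and attribute names may differ between versions.

```
$ ! Display the name services this node is configured to use
$ MCR NCL SHOW SESSION CONTROL NAMING SEARCH PATH
$ ! Verify that the BIND resolver can translate a name
$ TCPIP SHOW HOST remnode.example.com
```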

The DIGITAL Distributed Time Service (DECdts) synchronizes the system
clocks in computers connected by a network. The DECnet-Plus for OpenVMS
configuration procedure autoconfigures the DECdts clerk. If your
network uses multiple DECdns servers, or if you need network clock
synchronization, DIGITAL recommends that you install at least three
DECdts servers on each LAN. See the DECnet-Plus DECdts Management guide for more
information.

In large networks and networks requiring high throughput, one or more
dedicated routers are recommended. DIGITAL recommends host-based
routers as replacements for DECnet Phase IV host-based routers, or for
environments that do not require high throughput.

The DECnet-Plus for OpenVMS VAX systems license includes the right to
use the X.25 Access software (formerly known as VAX P.S.I. Access). The
X.25 Native mode software (formerly known as VAX P.S.I.) requires an
additional license.

The X.25 software in DECnet-Plus for OpenVMS is backward compatible
with systems running the older VAX P.S.I. products. For further
information on X.25, refer to Chapters 2 and 4.

On DECnet-Plus for OpenVMS Alpha systems, the following licenses are
required:

To run CONS over LLC2 or CLNS over DIGITAL HDLC, a DECnet-Plus for
OpenVMS Alpha license is required.

To use LAPB to a WAN and to use any of the X.25 APIs or utilities
over either LAN or WAN, an X.25 for OpenVMS Alpha systems license is
required.

To develop OSI applications, you use the OSI application development
interfaces (APIs) installed with the base system. These
tools allow you to build network applications that adhere to the OSI
standards defined by the International Organization for Standardization
(ISO).

You can use the following components in building OSI applications:

An application programming interface (API) to the File Transfer,
Access and Management (FTAM) services within the Application layer

The OSI Applications Kernel (OSAK) API, which provides access to:

Presentation layer services

The Association Control Service Element (ACSE) of the Application
layer

The Remote Operations Service Element (ROSE) of the Application
layer

Where ISO standards exist, the APIs conform to these standards.

The following table shows the components available on DIGITAL UNIX and
OpenVMS platforms.

A DECnet-Plus network can be viewed as a distributed processing system.
The major functions of the network, such as network management and
routing, are not centralized in a single system. Each system can manage
both itself and remote systems. Adaptive routing eliminates the need to
set up network data paths.

Some of the primary features of the DECnet-Plus distributed network are:

Optional use of distributed system services for networkwide names
and synchronized time

Networkwide capabilities include the optional use of distributed system
services designed to make the network as transparent as possible to
users and applications. The DECnet-Plus distributed computing
environment provides the following services:

In the DECnet-Plus distributed processing environment, you can
physically distribute multiple resources or tasks that perform various
functions between systems on the network.

A distributed application is a collection of processes
that use resources, such as processing elements, databases, and
physical devices, located on other systems in the network. Although the
application functions as a single logical unit, its elements, or tasks,
are physically divided. A task is a modular component of work that the
application programmer defines within an application. The work of a
distributed application is divided among tasks that can communicate
with each other.

On an OpenVMS operating system, a task executes within the context
of a process. The process context defines the
environment in which the task executes. OpenVMS software controls
access to, and allocation of, the resources required by the task, based
on the process context.

In a distributed application, each task is distinct and can be placed
in different locations in the network. The system interface for the
application allows you to run the application locally or remotely
without any apparent difference.

You can distribute an application so that each task is assigned to a
system with appropriate resources. For instance, one task computes on a
powerful processor while another stores the information in a database
on a system with extensive disk storage capabilities. A common example
of a distributed application is an implementation of the client/server
model, such as DECdns, in which the client task (on the DECdns clerk
system) requests service from the server task on a different system
(the DECdns server).

Interprocess communication is the movement of data and
control from one task running within a process to another task in the
network. Interprocess communication allows the various tasks on the
different processes to cooperate and communicate with each other,
exchanging message packets over the connection established between the
tasks.
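Task-to-task communication of this kind can be sketched at the DCL
level, where DECnet itself establishes the logical link between the
tasks. In this hypothetical example, the node name REMNOD and the
SERVER task object are assumptions; by convention, a TASK= object name
maps to a command procedure (here, SERVER.COM) in the remote account's
default directory.

```
$ ! Connect to the task object SERVER on remote node REMNOD
$ OPEN/READ/WRITE link REMNOD::"TASK=SERVER"
$ WRITE link "request message"    ! send a message to the remote task
$ READ link reply                 ! wait for the remote task's response
$ WRITE SYS$OUTPUT reply
$ CLOSE link                      ! tear down the logical link
```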

Distributed applications can increase availability. You can design a
distributed application to avoid a single point of failure by moving
tasks to other processors if a processor fails. Other benefits of
distributed applications include:

An OpenVMS cluster configuration is an organization of OpenVMS
operating systems that communicate over a high-speed communications
path and share processor resources as well as disk storage. DECnet
connections are required for certain OpenVMS cluster tools and
configurations. DECnet-Plus for OpenVMS software provides support for
OpenVMS cluster systems.

DECnet-Plus for OpenVMS supports the use of multiple cluster aliases.
An alias node identifier (a node name or node address) is common to
some or all nodes in the cluster and permits users to address the
cluster as though it were a single node.

For management purposes, the cluster alias is viewed by the DECnet-Plus
software as a separate Node entity that is manageable through NCL
commands. The Alias entity differs from a regular Node entity in some
characteristics; for example, the Alias entity does not support a
Circuit entity. The cluster alias appears to the network as a
multicircuit end node, which is an end node with several active
circuits. In an OpenVMS cluster system that consists of DECnet-Plus for
OpenVMS systems on a LAN, the alias node appears as an end node with
multiple points for attachment to the LAN.

DECnet-Plus for OpenVMS supports the ability to access nodes in an
OpenVMS cluster using a separate alias node address, while retaining
the ability to address each node in the cluster individually. Not all
network objects may be accessed using this mechanism. The maximum
number of nodes supported for a cluster alias is 96. The maximum number
of cluster aliases for a single node is three.
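As a sketch, defining and enabling a cluster alias with NCL on a
participating node might look like the following. The alias name
COMPANY:.site.cluster1, the address 1.100, and the selection weight are
hypothetical, and exact attribute names and defaults vary by version,
so check the commands against the NCL reference documentation.

```
NCL> create alias
NCL> create alias port COMPANY:.site.cluster1 node id 1.100
NCL> set alias port COMPANY:.site.cluster1 selection weight 10
NCL> enable alias port COMPANY:.site.cluster1
```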

The cluster_config.com command procedure for performing an
OpenVMS cluster configuration invokes the DECnet-Plus for OpenVMS
net$configure.com command procedure to perform any required
modifications to NCL initialization scripts. Use
cluster_config.com to create a configuration for each
satellite node in the cluster.
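The flow described above can be sketched as follows; the file names are
those conventionally shipped in SYS$MANAGER:, and both procedures
prompt interactively, so this is an outline rather than a complete
session.

```
$ ! Add a satellite node; cluster_config.com prompts for node details
$ @SYS$MANAGER:CLUSTER_CONFIG.COM
$ ! cluster_config.com invokes net$configure.com as needed; it can
$ ! also be run directly to modify the NCL initialization scripts:
$ @SYS$MANAGER:NET$CONFIGURE.COM
```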