Part II Technical Requirements

Technical requirements analysis begins with the business requirements
documents that are created during the business analysis phase of the solution
life cycle. Using the business analysis, you perform a usage analysis. This
analysis helps you to determine expected load conditions and to create use
cases that model typical user interaction with the system. The analysis also
helps when creating a set of quality of service requirements. These requirements
define how a deployed solution must perform in areas such as response time,
availability, and security.

This part describes the technical requirements that must be defined
for a Directory Server Enterprise Edition deployment. It is divided into the following chapters:

Chapter 3 Usage Analysis for Directory Server Enterprise Edition

Usage analysis involves identifying the users of your system and determining
the usage patterns for those users. In doing so, a usage analysis enables
you to determine expected load conditions on your directory service.

Usage Analysis Factors

Your reasons for offering Sun Java System Directory Server Enterprise Edition as an identity management
solution have a direct effect on how you deploy the server.

During usage analysis, interview users whenever possible. Research existing
data on usage patterns, and interview builders and administrators of previous
systems. A usage analysis should provide you with the data that enables you
to determine the service requirements that are described in Chapter 5, Defining Service Level Agreements.

The information that should come out of a usage analysis includes the
following:

Number and type of client applications. Identify
how many client applications your deployment must support, and categorize
those applications, if necessary.

Administrative users. Identify
users who access the directory to monitor, update, and support its deployment.
Determine any specific administrative usage patterns that might affect technical
requirements, for example, administration of the deployment from outside the
firewall.

Usage patterns. Identify
how various types of applications access the system, and provide targets for
expected usage.

For example, answer questions such as the following:

Are there times when usage spikes?

What are usual business hours?

Are client applications distributed globally?

What is the expected duration of application connectivity?

Client application growth. Determine
if the number of client applications is fixed or expected to grow. If you
anticipate additional applications, try to create reasonable projections of
the growth.

Application transactions. Identify
the types of transactions that must be supported.

These transactions can be categorized into use cases, for example:

What tasks are performed by the applications?

When applications bind to the directory, do they remain bound,
or do they typically perform a few tasks and unbind?

Studies and statistical data. Use
preexisting studies and other sources to determine patterns of application
behavior. Often, enterprises or industry organizations have research studies
from which you can extract useful information about users and client applications.
Log files for existing applications might contain statistical data that is
useful for making estimates for a system.

For more information about usage analysis, see the Sun Java Enterprise System Deployment Planning Guide.

Chapter 4 Defining Data Characteristics

The type of data in your directory determines how you structure the
directory, who can access the data, and how access is granted. Data types
can include, among others, user names, email addresses, telephone numbers,
and information about groups to which users belong.

This chapter explains how to locate, categorize, structure, and organize
data. It also explains how to map data to the Directory Server schema.

When categorizing data, determine how centralizing each piece of data
affects the management of that data.

Centralized data management might require
new tools and new processes. Issues can arise when centralization requires
increasing staff in some organizations and decreasing staff in others.

Determining Data Ownership

Data ownership refers to the person or organization
that is responsible for ensuring that data is up-to-date. During the data
design phase, decide who can write data to the directory. Common strategies
for determining data ownership include the following:

Allow read-only access to the directory for everyone except
a small group of directory content managers.

Allow individual users to manage strategic subsets of information.

These subsets of information might include their passwords, descriptive
information about themselves, and their role within the organization.

Allow a person’s manager to write to some strategic
subset of that person’s information, such as contact information or
job title.

Allow an organization’s administrator to create and
manage entries for that organization.

Create roles that give groups of people read or write access
privileges.

For example, you might create roles for human resources,
finance, or accounting. Allow each of these roles to have read access, write
access, or both to the data needed by the group. This data might include salary
information, government identification number, and home phone numbers and
address.

As you determine who can write to the data, you might find that multiple
individuals require write access to the same information. For example, an
information systems or directory management group should have write access
to employee passwords. You might also want all employees to have write access
to their own passwords. While you might need to give multiple people write
access to the same information, try to keep this group small and easy to identify.
Small groups help to ensure your data’s integrity.

Distinguishing Between User and Configuration Data

Keep the data that is used to configure Directory Server and
other Java Enterprise System servers separate from the user data stored in the
directory. Separating configuration data from user data enables you to do the following:

Provide different backup strategies for user and configuration
data.

Provide different high availability standards for user and
configuration data.

Shut down, restore, and power up configuration servers quickly.

Keep configuration servers up while performing maintenance
on other Directory Server instances.

Identifying Data From Disparate Data Sources

When determining data sources, ensure that you include data from other
data sources, including legacy data sources. This data might not be stored
in the directory. However, Directory Server might need to have some knowledge
of, or control over, the data.

Directory Proxy Server provides a virtual directory feature
that aggregates information, in real-time, from multiple data repositories.
These repositories include LDAP directories, data that complies with the JDBC
specification, and LDIF flat files.

The virtual directory supports complex filters that handle attributes
from different data sources. It also supports modifications that combine attributes
from different data sources.

During the data analysis phase, you might find that the same data is
required by several applications, but in a different format. Instead of duplicating
this information, it is preferable to have the applications transform it for
their requirements.

Structuring Data With the Directory Information Tree

The directory information tree (DIT) provides
a way to structure directory data so that the data can be referred to by client
applications. The DIT interacts closely with other design decisions, including
how you distribute, replicate, or control access to directory data.

DIT Terminology

A well-designed DIT provides the following:

Simplified directory data maintenance

Flexibility in creating replication policies and access controls

Support for the applications that use the directory

Simplified directory navigation for users

The DIT structure follows the hierarchical LDAP model. The DIT organizes
data, for example, by group, by people, or by geographical location. It also
determines how data is partitioned across multiple servers.

DIT design has an impact on replication configuration and on how you
use Directory Proxy Server to distribute data. If you want to replicate or distribute
certain portions of a DIT, consider replication and the requirements of Directory Proxy Server at
design time. Also, decide at design time whether you require access controls
on branch points.

A DIT is defined in terms of suffixes, subsuffixes, and chained suffixes.
A suffix is a branch or subtree whose entire contents
are treated as a unit for administrative tasks. Indexing is defined for an
entire suffix, and an entire suffix can be initialized in a single operation.
A suffix is also usually the unit of replication. Data that you want to access
and manage in the same way should be located in the same suffix. A suffix
can be located at the root of the directory tree, where it is called a root suffix.

Because data can only be partitioned at the suffix level, an appropriate
directory tree structure is required to spread data across multiple servers.

The following figure shows a directory with two root suffixes. Each
suffix represents a separate corporate entity.

Figure 4–1 Two Root Suffixes in a Single Directory Server

A suffix might also be a branch of another suffix, in which case it
is called a subsuffix. The parent suffix does not include
the contents of the subsuffix for administrative operations. The subsuffix
is managed independently of its parent. Because LDAP operation results contain
no information about suffixes, directory clients are unaware of whether entries
are part of root suffixes or subsuffixes.

The following figure shows a directory with a single root suffix and
multiple subsuffixes for a large corporate entity.

Figure 4–2 One Root Suffix With Multiple Subsuffixes

A suffix corresponds to an individual database within the server. However,
databases and their files are managed internally by the server and database
terminology is not used.

Chained suffixes create a virtual DIT by referencing suffixes on other
servers. With chained suffixes, Directory Server performs the operation
on the remote suffix. The directory then returns the result as if the operation
had been performed locally. The location of the data is transparent. The client
is unaware that the suffix is chained and that the data is retrieved from
a remote server. A root suffix on one server can have subsuffixes that are
chained to another server. In this scenario, the client still sees a single
tree structure.

In the special case of cascading chaining, the chained suffix might
reference another chained suffix on the remote server, and so on. Each server
forwards the operation and eventually returns the result to the server that
handles the client’s request.

Designing the DIT

DIT design involves choosing a suffix to contain your data, determining
the hierarchical relationship between data entries, and naming the entries
in the DIT hierarchy. The following sections describe the design process in
more detail.

Choosing a Suffix

The suffix is the name of the entry at the root of the DIT. If you have
two or more DITs that do not have a natural common root, you can use multiple
suffixes. The default Directory Server installation contains multiple suffixes.
One suffix is used to store user data. The other suffixes are for data that
is needed by internal directory operations, such as configuration information
and directory schema.

All directory entries must be located below a common base entry, the
suffix. Each suffix name should be:

Globally unique

Static, so that the name rarely changes

Short, so that entries beneath the suffix are easier to read
online

Easy for a person to type and remember

It is generally considered best practice to map your enterprise domain
name to a Distinguished Name (DN). For example, an enterprise with the domain
name example.com would use a DN of dc=example,dc=com.
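
For illustration, the following is a minimal sketch of creating such a suffix and
its base entry. The dsconf create-suffix invocation assumes an instance that
accepts connections on port 1389; the port is a placeholder.

dsconf create-suffix -p 1389 dc=example,dc=com

The base entry for the suffix, expressed in LDIF, looks like the following:

dn: dc=example,dc=com
objectclass: top
objectclass: domain
dc: example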

Creating the DIT Structure and Naming Entries

The structure of a DIT can be flat or hierarchical. Although a flat
tree is easier to manage, a degree of hierarchy might be required for data
partitioning, replication management, and access control.

Branch Points and Naming Considerations

A branch
point is a point at which you define a new subdivision within the
DIT. When deciding on branch points, avoid potential problematic name changes.
The likelihood of a name changing is proportional to the number of components
in the name that can potentially change. The more hierarchical the DIT, the
more components in the names, and the more likely the names are to change.

Use the following guidelines when defining and naming branch points:

Branch your tree to represent only the largest organizational
subdivisions in your enterprise.

Limit branch points to divisions,
such as Corporate Information Services, Customer Support, Sales, and Professional
Services. Make sure that your divisions are stable. Do not perform this kind
of branching if your enterprise reorganizes frequently.

Use functional or generic names rather than actual organizational
names.

Names change and you do not want to have to change your
DIT every time your enterprise renames its divisions. Instead, use generic
names that represent the function of the organization. For example, use Engineering instead of Widget Research and Development.

If you have multiple organizations that perform similar functions,
create a single branch point for that function instead of branching based
on divisional lines.

For example, even if you have multiple marketing
organizations that are responsible for a specific product line, create a single
Marketing subtree. All marketing entries then belong to that tree.

Try to use only the traditional branch point attributes that
are shown in the following table.

Traditional attributes increase
the likelihood of retaining compatibility with third-party LDAP client applications.
In addition, traditional attributes are known to the default directory schema,
which simplifies the construction of entries for the branch distinguished
name (DN).

Branch according to the type of data stored in the directory.

For example, you might create a separate branch for people, groups,
services, and devices.

Table 4–1 Traditional DN Branch Point
Attributes

c: A country name.

o: An organization name. This attribute is typically used to represent
a large divisional branching. The branching might include a corporate division,
academic discipline, subsidiary, or other major branching within the enterprise.
You should also use this attribute to represent a domain name.

ou: An organizational unit. This attribute is typically used to represent
a smaller divisional branching of your enterprise than an organization. Organizational
units are generally subordinate to the preceding organization.

st: A state or province name.

l: A locality, such as a city, county, office, or facility name.

dc: A domain component.

Be consistent when choosing attributes for branch points. Some LDAP
client applications might fail if the DN format is inconsistent across your
DIT. If l (localityName) is subordinate to o (organizationName) in one part of your DIT, ensure that l is subordinate
to o in all other parts of your directory.

Access Control Considerations

A DIT hierarchy can enable certain types of access control. As with
replication, it is easier to group similar entries and to administer the entries
from a single branch.

A hierarchical DIT also enables distributed administration. For example,
you can use the DIT to give an administrator from the marketing department
access to marketing entries, and an administrator from the sales department
access to sales entries.

You can also set access controls based on directory content, rather
than on the DIT. Use the ACI filtered target mechanism to define a single access
control rule that grants a user access to all entries that contain a particular
attribute value. For example, you can set an ACI filter that gives the sales
administrator access to all entries that contain the attribute value ou=Sales.
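
As a hedged sketch only, an ACI of this kind placed on the root suffix entry might
look like the following. The administrator DN and the rights granted are
illustrative, not prescriptive.

aci: (targetattr="*")(targetfilter="(ou=Sales)")(version 3.0; acl "Sales admin access";
  allow (read,search,write) userdn="ldap:///uid=salesadmin,ou=People,dc=example,dc=com";)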

However, ACI filters can be difficult to manage. You must decide which
method of access control is best suited to your directory: organizational
branching in the DIT hierarchy, ACI filters, or a combination of the two.

Designing a Directory Schema

The directory schema describes the types of data that can be stored
in a directory. During schema design, each data element is mapped to an LDAP
attribute. Related elements are gathered into LDAP object classes. A well-designed
schema helps maintain data integrity by imposing constraints on the size,
range, and format of data values. You decide what types of entries your directory
contains and the attributes that are available to each entry.

The predefined schema that is included with Directory Server contains
the Internet Engineering Task Force (IETF) standard LDAP schema. The schema
contains additional application-specific schema to support the features of
the server. It also contains Directory Server-specific schema extensions.
While this schema meets most directory requirements, you might need to extend
the schema with new object classes and attributes that are specific to your
directory.

Schema Design Process

Schema design involves doing the following:

Mapping your data to the default schema.

To map
existing data to the default schema, identify the type of object that each
data element describes, and then select a similar object class from the default
schema. Use common object classes, such as groups, people, and organizations.
Then select the attribute from the matching object class that best matches
the data element.

Identifying unmatched data.

Extending the default schema to define new elements to meet
your remaining needs.

Where possible, use the existing schema elements that are defined in
the default Directory Server schema. Standard schema elements help to ensure
compatibility with directory-enabled applications. Because the schema is based
on the LDAP standard, it has been reviewed and agreed to by a large number
of directory users.
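
When you do need to extend the default schema, a schema extension applied to the
cn=schema entry with the ldapmodify command might look like the following LDIF
sketch. The attribute name, object class name, and OIDs are purely illustrative;
in a real deployment, allocate OIDs from an arc that your organization controls.

dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.4.1.32473.1.1 NAME 'exampleDeptCode'
  DESC 'Department code used by example.com applications'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'user defined' )
-
add: objectClasses
objectClasses: ( 1.3.6.1.4.1.32473.1.2 NAME 'examplePerson'
  SUP inetOrgPerson STRUCTURAL MAY ( exampleDeptCode ) X-ORIGIN 'user defined' )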

Maintaining Data Consistency

Consistent data assists LDAP client applications in locating directory
entries. For each type of information that is stored in the directory, select
the required object classes and attributes to support that information. Always
use the same object classes and attributes. If you use schema objects inconsistently,
it is difficult to locate information.

You can maintain schema consistency in the following ways:

Use schema checking to ensure that attributes and object classes
conform to the schema rules.

The
LDAP schema allows you to store any data in any attribute value. However,
you should store data consistently in the DIT by selecting a format appropriate
for your LDAP client applications and directory users. With the LDAP protocol
and Directory Server, you must represent data by using the data formats specified
in RFC 4517.

Other Directory Data Resources

For more information about the standard LDAP schema, and about designing
a DIT, see the standard LDAP specifications and related directory design documentation.

Chapter 5 Defining Service Level Agreements

Service level agreements are technical specifications that determine
how the system must perform under certain conditions. This chapter describes
the service requirements that are specific to Directory Server Enterprise Edition. The chapter includes
questions that you need to ask during the planning phase to ensure that your
deployment meets these requirements.

Identifying System Qualities

To identify system qualities, specify the minimum requirements that
your directory service must provide. The following system qualities typically
form a basis for quality of service requirements:

Performance. The measurement
of response time and throughput with respect to user load conditions.

Availability. A measure
of how often a system's resources and services are accessible to end users,
often expressed as the uptime of a system.

Scalability. The ability
to add capacity and users to a deployed system over time. Scalability typically
involves adding resources to the system without changing the deployment architecture.

Security. A complex combination
of factors that describe the integrity of a system and its users. Security
includes authentication and authorization of users, security of data, and
secure access to a deployed system.

Latent capacity. The ability
of a system to handle unusual peak loads without additional resources. Latent
capacity is a factor in availability, performance, and scalability.

Serviceability. The ease
by which a deployed system can be maintained, including monitoring the system,
fixing problems that arise, and upgrading hardware and software components.

Defining Performance Requirements

Performance requirements should be based on typical models of
directory usage. In all directory deployments, Directory Server supports
one or more client applications, and the requirements of these applications
must be assessed. Estimating how much information your directory contains,
and how often that information is accessed, involves identifying these applications
and determining how they use Directory Server.

Identifying Client Applications

The applications that access your directory and the data needs of these
applications have a significant impact on performance requirements. When identifying
client applications, consider the following:

What types of client applications are accessing Directory Server?

How many users access each of these applications?

What kind of operations do these applications perform?

What are the usage patterns for these operations?

Common applications that might use your directory include the following:

Browser applications, such as white
pages. Applications of this type generally access information such
as email addresses, telephone numbers, and employee names.

Messaging applications, especially
email servers. All email servers require email addresses, user
names, and routing information. Some also require more advanced information such
as the place on disk where a user’s mailbox is stored, vacation notification
information, and protocol information.

Directory-enabled human resources
applications. These applications require more personal information
such as government identification numbers, home addresses, home telephone
numbers, and salary details.

When you have identified the information used by each application, you
might see that some types of data are used by more than one application. Performing
this kind of exercise during the planning stage can help you to avoid data
redundancy.

An unindexed
search means that the database is searched directly, instead of using an index
file. Unindexed searches occur when the All IDs Threshold is reached within
the index file used for the search, when no index file exists, or when the
index file is not configured in the way that is required by the search.

Unindexed
searches are generally more time consuming than indexed searches.
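
One hedged way to spot unindexed searches, assuming the default access log
location and the conventional notes=U flag that Directory Server writes on the
RESULT line of an unindexed search, is to count those lines:

grep -c "notes=U" instance-path/logs/access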

Are searches concentrated in a particular data center or geographic
region?

If one data center receives proportionally more search traffic
than other data centers, it might be worth placing additional replicated
servers in this data center to balance the load.

Estimating the Acceptable Response Time

For each client application, determine the maximum response time that
is acceptable. The acceptable response time might differ for various geographical
locations, and for different kinds of operations.

Estimating the Acceptable Replication Latency

Estimate the level of synchronicity that is required between master
replicas and consumer replicas. The Directory Server replication model
is loosely consistent, that is, updates are accepted on a master without requiring
communication with the other replicas in a topology. At any given time, the
contents of each replica might be different. Over time, the replicas converge
until each replica has an identical copy of the data. As part of performance
planning, determine the maximum acceptable time that replicas have to converge.

Defining Availability Requirements

Availability implies an agreed minimum up time and
level of performance for your directory service. Failure, in this context,
is defined as anything that prevents the directory service from providing
this minimum level of service.

In assessing availability requirements, consider the following:

Is your directory service accessed only at particular times
of the day?

Do you have different availability requirements for read and
write operations?

Does the service span multiple geographical sites, and if
so, do these sites have different access time requirements?

Defining Scalability Requirements

As your directory evolves, the service levels that must be supported
might change. Raising the level of service after a system has been deployed
can be difficult. Thus, the initial design must take future requirements into
account.

When defining scalability requirements, consider the following:

Is there an anticipated increase in entry volume?

How many new users are expected within the next few years?

What is the expected growth rate, over the next few years,
in terms of data, users, and client applications?

Are any new business processes expected?

Increase CPU estimates to make sure that your deployment does not have
to be scaled prematurely. Look at the anticipated milestones for scaling and
projected load increase over time to make sure that you allow enough latent
capacity to reach the milestones.

Defining Serviceability Requirements

Chapter 6 Tuning System Characteristics and Hardware Sizing

A Directory Server Enterprise Edition deployment requires that certain system characteristics
be defined at the outset. This chapter describes the system information that
you need to address in the planning phase of your deployment.

Host System Characteristics

When identifying the host systems that will be used in your deployment,
consider the following:

Will the system be dedicated to a single server?

Will the system be running other applications, and if so,
what will the other applications be?

What percentage of the system's resources will these applications
require?

When the host systems have been identified, select a host name for each
host in the topology. Make sure that each host system has a static IP address.

Restrict physical access to the host system. Although Directory Server Enterprise Edition includes
many security features, directory security is compromised if physical access
to the host system is not controlled.

If the Directory Server instances do not provide a naming service
for the network, or if the deployment involves remote administration, a naming
service and the domain name for the host must be properly configured.

Port Numbers

At design time, select port numbers for each Directory Server and Directory Proxy Server instance.
If possible, do not change port numbers after your directory service is deployed
in a production environment.

Separate port numbers must be allocated for various services and components.

Specify the port number for accepting LDAP connections. The standard
port for LDAP communication is 389, although other ports can be used. For
example, if you must be able to start the server as a regular user, use an
unprivileged port, by default 1389. Port numbers less than 1024 require privileged
access. If you use a port number that is less than 1024, certain LDAP commands
must be run as root.

Specify the port number for accepting SSL-based connections. The standard
port for SSL-based LDAP (LDAPS) communication is 636, although other ports
can be used, such as the default 1636 when running as a regular user. For
example, an unprivileged port might be required so that the server can be
started as a regular user.

If you specify a non-privileged port and a server instance is installed
on a system to which other users have access, you might expose the port to
a hijack risk by another application. In other words, another application
can bind to the same address/port pair. The rogue application might then be
able to process requests that are intended for the server. The application
could also be used to capture passwords used in the authentication process,
to alter client requests or server responses, or to mount a denial of service
attack.

Both Directory Server and Directory Proxy Server allow you to restrict
the list of IP addresses on which the server listens. Directory Server has
configuration attributes nsslapd-listenhost and nsslapd-securelistenhost. Directory Proxy Server has listen-address properties
on ldap-listener and ldaps-listener configuration
objects. When you specify the list of interfaces on which to listen, other
programs are prevented from using the same port numbers as your server.
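
The following sketches show one way to set these values. The Directory Server
change assumes that you modify cn=config over LDAP; the Directory Proxy Server
change assumes the dpconf set-ldap-listener-prop subcommand. The address
192.0.2.10 and the ports are placeholders.

For Directory Server, an LDIF modification such as:

dn: cn=config
changetype: modify
replace: nsslapd-listenhost
nsslapd-listenhost: 192.0.2.10

For Directory Proxy Server:

dpconf set-ldap-listener-prop -p 1390 listen-address:192.0.2.10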

Directory Server DSML Port Numbers

In addition to processing requests in LDAP, Directory Server also
responds to requests sent in the Directory Service Markup Language v2 (DSML).
DSML is another way for a client to encode directory operations. Directory Server processes
DSML as any other request, with the same access control and security features.

If your topology includes DSML access, identify the following:

A standard HTTP port for receiving DSML requests. The default
port is 80.

If SSL is activated, an encrypted (HTTPS) port for receiving
encrypted DSML requests. The default port is 443.

A relative URL that, when appended to the host and port, determines
the complete URL that clients must use to send DSML requests.

Directory Service Control Center and Common Agent Container Port Numbers

Directory Service Control Center, DSCC, is a web application for Sun Java Web Console
that enables you to administer Directory Server and Directory Proxy Server instances
through a web browser. For a server to be recognized by DSCC, the
server must be registered with DSCC. Unregistered servers can still
be managed using command-line utilities.

DSCC communicates with DSCC agents located on the
systems where servers are installed. The DSCC agents run inside a
common agent container, which routes network traffic to them and provides
them a framework in which to run.

If you plan to use DSCC to administer servers in your topology,
identify the following port numbers:

The encrypted HTTPS port for accessing DSCC through
Sun Java Web Console on the system where DSCC is installed. The default
port is 6789.

The port that DSCC uses for management traffic to its agents through
the common agent container on the systems where the server instances are installed.
The default port is 11162.

The port numbers for the DSCC Registry instance,
if you plan to replicate the configuration information. See dsccsetup(1M) for
details.

Even if all components are installed on the same system, DSCC still
communicates with its agents through these network ports.

Identity Synchronization for Windows Port Numbers

If your deployment includes identity synchronization with Microsoft
Active Directory, an available port is required for the Message Queue instance.
This port must be available on each Directory Server instance that participates
in the synchronization. The default non-secure port for Message Queue is 80,
and the default secure port is 443.

Hardware Sizing For Directory Service Control Center

DSCC runs as a web application inside Sun Java Web Console,
which runs inside a web application container. DSCC also runs its
own local instance of Directory Server to store configuration data.

The minimum requirement to run DSCC is 256 megabytes of memory
and 100 megabytes of free disk space. However, for optimum performance, run DSCC on
a system with at least one gigabyte of memory devoted to DSCC and
a couple of gigabytes of free disk space.

Hardware Sizing For Directory Proxy Server

Directory Proxy Server runs as a multithreaded Java program, and is built
to scale across multiple processors. In general, the more processing power
available the better, though you might find that in practice adding memory,
faster disks, or faster network connections can enhance performance more than
additional processors.

Configuring Virtual Memory

Directory Proxy Server uses memory mainly to hold information that is being
processed. Complex aggregations for processing some virtual directory requests
against multiple data sources may temporarily use extra memory. If one of
your data sources is an LDIF file, Directory Proxy Server constructs a representation
of that data source in memory. However, unless you use large LDIF data sources
(not a recommended deployment practice), a couple of gigabytes of memory devoted
to Directory Proxy Server should suffice. You might want to increase the Java
virtual machine heap size when starting Directory Proxy Server if enough memory
is available. For example, to set the Java virtual machine heap size to 1000
megabytes, use the following command.
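
A sketch of such a command, assuming the dpadm set-flags subcommand and a
Directory Proxy Server instance located under /local/dps (the exact usage and
the instance path are assumptions, not verbatim from this guide):

dpadm set-flags /local/dps jvm-args="-Xms1000M -Xmx1000M -XX:NewRatio=1"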

This command uses the -XX:NewRatio option, which is
specific to the Sun Java virtual machine. The default heap size is 250 megabytes.

Configuring Worker Threads and Backend
Connections

Directory Proxy Server allows you to configure how many threads the server
maintains to process requests. You configure this using the server property number-of-worker-threads, described in number-of-worker-threads(5dpconf). As a rule of thumb, try setting this number to 50
threads plus 20 threads for each data source used. To gauge whether the number
is sufficient, monitor the status of the Directory Proxy Server work queue on cn=Work Queue,cn=System Resource,cn=instance-path,cn=Application System,cn=DPS6.0,cn=Installed Product,cn=monitor.
If you find that the operationalStatus for the work queue
is STRESSED, this can mean thread-starved connection handlers
are unable to handle new client requests. Increasing number-of-worker-threads may help if more system resources are available for Directory Proxy Server.
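
For example, a hedged sketch that raises the thread count for a proxy with three
data sources (50 + 3 x 20 = 110 threads) and then checks the work queue status.
The port, credentials, and instance path are placeholders.

dpconf set-server-prop -p 1390 number-of-worker-threads:110

ldapsearch -p 1390 -D "cn=Proxy Manager" -w password \
  -b "cn=Work Queue,cn=System Resource,cn=instance-path,cn=Application System,cn=DPS6.0,cn=Installed Product,cn=monitor" \
  -s base "(objectclass=*)" operationalStatus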

The number of worker threads should also be appropriate for the number
of backend connections. If there are too many worker threads for
the number of backend connections, incoming connections are accepted but cannot
be transmitted to the backend connections. Such a situation is generally problematic
for client applications.

To determine whether this situation has arisen, check the log files
for error messages of the following type: "Unable to get backend
connections". Alternatively, look at the cn=monitor entry
for load balancing. If the totalBindConnectionsRefused attribute
in that entry is not null, the proxy was unable to process certain operations
because there were not enough backend connections. To solve this issue, increase
the maximum number of backend connections. You can configure the number of
backend connections for each data source by using the num-bind-limit, num-read-limit and num-write-limit properties
of the data source. If you have already reached the limit for backend connections,
reduce the number of worker threads.
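
For example, a sketch that raises the connection limits on a data source named
ds-east. The data source name, port, and values are illustrative, and the dpconf
subcommand name is an assumption; check the dpconf man page for the exact syntax.

dpconf set-ldap-data-source-prop -p 1390 ds-east num-bind-limit:40 num-read-limit:40 num-write-limit:40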

If there are not enough worker threads for the
number of backend connections, so much work can pile up in the server's queue
that no new connections can be handled. Client connections can then be refused
at the TCP/IP level, with no LDAP error returned. To determine if this situation
has arisen, look at the statistics in the cn=monitor entry
for the work queue. In particular, readConnectionsRefused and writeConnectionsRefused should remain low. Also, the value of the maxNormalPriorityPeak attribute should remain low.

Disk Space for Directory Proxy Server

By default Directory Proxy Server requires up to one gigabyte of local disk
space for access logging, and another gigabyte of local
disk space for errors logging. Given the quantity of access log messages Directory Proxy Server writes when handling client
application requests, logging can be a performance bottleneck. Typically,
however, you must leave logging on in a production environment. For best performance,
therefore, put Directory Proxy Server logs on a fast, dedicated disk subsystem.
See Configuring Directory Proxy Server Logs in Sun Java System Directory Server Enterprise Edition 6.1 Administration Guide for
instructions on adjusting log settings.

Network Connections for Directory Proxy Server

Directory Proxy Server is a network-intensive application. For each client
application request, Directory Proxy Server may send multiple operations to different
data sources. Make sure the network connections between Directory Proxy Server and
your data sources are fast, with plenty of bandwidth and low latency. Also
make sure the connections between Directory Proxy Server and client applications
can handle the amount of traffic you expect.

Hardware Sizing For Directory Server

Getting the right hardware for a medium to large Directory Server deployment
involves some testing with data similar to the data you expect to serve in
production, and access patterns similar to those you expect from client applications.
When optimizing for particular systems, make sure you understand how system
buses, peripheral buses, I/O devices, and supported file systems work. This
knowledge helps you take advantage of I/O subsystem features when tuning these
features to support Directory Server. Sun Services can
help you make the right deployment decisions, including sizing the hardware
to your requirements.

This section looks at how to approach hardware sizing for Directory Server.
It covers what to consider when deciding how many processors, how much memory,
how much disk space, and what type of network connections to dedicate to Directory Server in
your deployment.

Unless indicated otherwise, the server properties described
in the following sections can be set with the dsconf command.
For more information about using dsconf, see dsconf(1M).

The Tuning Process

Tuning performance implies modifying the default configuration
to reflect specific deployment requirements. The following list of process
phases covers the key things to think about when tuning Directory Server.

Define goals

Define specific, measurable objectives for tuning, based on
deployment requirements.

Consider the following questions.

Which applications use Directory Server?

Can you dedicate the entire system to Directory Server?

Does the system run other applications?

If so, which other applications run on the system?

How many entries are handled by the deployment?

How large are the entries?

How many searches per second must Directory Server support?

What types of searches are expected?

How many updates per second must Directory Server support?

What types of updates are expected?

What sort of peak update and search rates are expected?

What average rates are expected?

Does the deployment call for repeated bulk import initialization
on this system?

If so, how often do you expect to import data?
How many entries are imported?

What types of entries?

Must initialization be performed online with the server running?

This list of questions is not exhaustive. Ensure that your own list of goals covers everything that matters for your deployment.

Select methods

Determine how you plan to implement optimizations. Also, determine
how you plan to measure and analyze optimizations.

Consider the following questions.

Can you change the hardware configuration of the system?

Are you limited to using hardware that you already have, tuning
only the underlying operating system, and Directory Server?

How can you simulate other applications?

How should you generate representative data samples for testing?

How should you measure results?

How should you analyze results?

Perform tests

Carry out the tests that you planned. For large, complex deployments,
this phase can take considerable time.

Verify results

Check whether the potential optimizations tested reach the
goals defined at the outset of the process.

If the optimizations reach the goals, document the results.

If the optimizations do not reach the goals, profile and monitor Directory Server.

Profile and monitor

Profile and monitor the behavior of Directory Server after
applying the potential modifications.

Collect measurements of all relevant behavior.

Plot and analyze

Plot and analyze the behavior that you observed while profiling
and monitoring. Attempt to find evidence and to discover patterns that suggest
further tests.

You might need to go back to the profiling and monitoring phase to collect
more data.

Tweak and tune

Apply further potential optimizations suggested by your analysis
of measurements.

Return to the phase of performing tests.

Document results

When the optimizations applied reach the goals defined at
the outset of the process, document the optimizations well so that they
can be easily reproduced.

Making Sample Directory Data

How much disk and memory space you devote to Directory Server depends
on your directory data. If you already have representative data in LDIF, use
that data when sizing hardware for your deployment. Representative data here
means sample data that corresponds to the data you expect to use in deployment,
but not actual data you use in deployment. Real data
comes with real privacy concerns, can be multiple orders of magnitude larger
than the specifications needed to generate representative data, and may not
help you exercise all the cases you want to test. Representative data includes
entries whose average size is close to the size you expect to see in deployment,
whose attributes have values similar to those you expect to see in deployment,
and whose numbers are present in proportions similar to those you expect to
see in deployment.

Take anticipated growth into account when you are deciding on representative
data. It is advisable to include an overhead on current data for capacity
planning.

If you do not have representative data readily available, you can use
the makeldif(1) command
to generate sample LDIF, which you can then import into Directory Server. Chapter 4, Defining Data Characteristics can
help you figure out what representative data would be for your deployment.
The makeldif command is one of the Directory Server Resource Kit tools.
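
For example, a sketch of generating and importing 10,000 sample entries. The
template name is a placeholder, and the makeldif and dsconf options shown are
assumptions; check the makeldif(1) and dsconf(1M) man pages for the exact syntax.

makeldif -t example-10k.template -o sample-10k.ldif
dsconf import -p 1389 sample-10k.ldif dc=example,dc=com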

For deployments expected to serve millions of entries in production,
ideally you would load millions of entries for testing. Yet loading millions
of entries may not be practical for a first estimate. Start by creating a
few sets of representative data, for example 10,000 entries, 100,000 entries,
and 1,000,000 entries, import those, and extrapolate from the results you
observe to estimate the hardware required for further testing. When you are
estimating hardware requirements, make provision for data that will be replicated
to multiple servers.

Notice that when you import directory data from LDIF into Directory Server, the
resulting database files (including indexes) are larger than the LDIF representation.
The database files, by default, are located under the instance-path/db/ directory.

What to Configure and Why

Directory Server default configuration settings are defined for typical
small deployments and to make it easy to install and evaluate the product.
This section examines some key configuration settings to adjust for medium
to large deployments. In medium to large deployments you can often improve
performance significantly by adapting configuration settings to your particular
deployment.

Directory Server Database Page Size

When Directory Server reads or writes data, it works with fixed blocks
of data, called pages. By increasing the page size you increase the size of
the block that is read or written in one disk operation.

The page size is related to the size of entries and is a critical element
of performance. If you know that the average size of your entries is greater
than 4 kilobytes, you must increase the database page size. The database page
size should also match the file system disk block size.

Directory Server Cache Sizes

Directory Server is designed to respond quickly to client application
requests. In order to avoid waiting for directory data to be read from disk, Directory Server caches
data in memory. You can configure how much memory is devoted to cache for
database files, for directory entries, and for importing directory data from
LDIF.

Ideally the hardware on which you run Directory Server allows you
to devote enough space to cache all directory data in physical memory. The
data should fit comfortably, such that the system has enough physical memory
for operation, and the file system has plenty of physical memory for its caching
and operation. Once the data are cached, Directory Server has to read data
from and write data to disk only when a directory entry changes.

Directory Server supports 64–bit memory addressing, and so
can handle total cache sizes as large as a 64–bit processor can address.
For small to medium deployments it is often possible to provide enough memory
that all directory data can be held in cache. For large deployments, however,
caching everything may not be practical or cost effective.

For large deployments, caching everything in memory can cause side effects.
Tools such as the pmap command, which traverse the process
memory map to gather data, can freeze the server process for a noticeable
time. Core files can become so large that writing them to disk during a crash
can take several minutes. Startup times can be slow if the server is shut
down abruptly and then restarted. Directory Server can also pause and stop
responding temporarily when it reaches a checkpoint and has to flush dirty
cached pages to disk. When the cache is very large, the pauses can become
so long that monitoring software assumes Directory Server is down.

I/O buffers at the operating system level can provide better performance.
Very large buffers can compensate for smaller database caches.

Directory Server Indexes

Directory Server indexes directory entry attribute values to speed
searches for those values. You can configure attributes to be indexed in various
ways. For example, indexes can help Directory Server determine quickly
whether an attribute has a value, whether it has a value equal to a given
value, and whether it has a value containing a given substring.

Indexes can add to search performance, but they can also impact write
performance. When an attribute is indexed, Directory Server has to update
the index as values of the attribute change.

Directory Server saves index data to files. The more indexes you
configure, the more disk space required. Directory Server indexes and data
files are found, by default, under the instance-path/db/ directory.

Directory Server Administration Files

Some Directory Server administration files can potentially become
very large. These files include the LDIF files containing directory data,
backups, core files, and log files.

Depending on your deployment, you may use LDIF both to import Directory Server data,
and to serve as auxiliary backup. A standard text format, LDIF allows you
to export binary data as well as strings. LDIF can occupy significant disk
space in large deployments. For example, a directory containing 10 million
entries having an average size of 2 kilobytes, would in LDIF representation
occupy 20 gigabytes on disk. You might maintain multiple LDIF files of that
size if you use the format for auxiliary backup.

Binary backup files also occupy space on disk, at least until you move
them somewhere else for safekeeping. Backup files produced with Directory Server utilities
consist of binary copies of the directory database files. Alternatively for
large deployments you can put Directory Server in frozen mode and take
a snapshot of the file system. Either way, you must have disk space available
for the backup.

By default Directory Server writes log messages to instance-path/logs/access and instance-path/logs/errors. By default Directory Server requires
one gigabyte of local disk space for access logging,
and another 200 megabytes of local disk space for errors logging.

Directory Server Replication

Directory Server lets you replicate directory data for availability
and load balancing between the servers in your deployment. Directory Server allows
you to have multiple read-write (master) replicas deployed together.

Internally, the server makes this possible by keeping track of changes
to directory data. When the same data are modified on more than one read-write
replica, Directory Server can resolve the changes correctly on all replicas.
The data that is used to track these changes must be retained until it is no
longer needed for replication. By default, changes are retained for seven days. If
your directory data undergoes much modification, especially of large multi-valued
attributes, this data can grow quite large.

Directory Server Threads and File Descriptors

Directory Server runs as a multithreaded process, and is designed
to scale on multiprocessor systems. You can configure the number of threads Directory Server creates
at startup to process operations. By default Directory Server creates 30
threads. The value is set using the dsconf(1M) command to adjust the server
property thread-count.

The trick is to keep the threads as busy as possible without incurring
undue overhead from having to handle many threads. As long as all directory
data fits in cache, better performance is often seen when thread-count is
set to twice the number of processors plus the expected number of simultaneous
update operations. If only a fraction of a large directory data set fits in
cache, Directory Server threads may often have to wait for data being read
from disk. In that case you may find performance improves with a much higher
thread count, up to 16 times the number of available processors.

Directory Server uses file descriptors to hold data related to open
client application connections. By default Directory Server uses a maximum
of 1024 file descriptors. The value is set using the dsconf command
to adjust the server property file-descriptor-count. If
you see a message in the errors log stating too
many fds open, you may observe better performance by increasing file-descriptor-count, presuming your system allows Directory Server to
open additional file descriptors.

The file-descriptor-count property does not apply
on Windows.
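
For example, a minimal sketch, assuming an instance that accepts dsconf
connections on port 1389; the port and the values are illustrative only.

dsconf set-server-prop -p 1389 thread-count:64
dsconf set-server-prop -p 1389 file-descriptor-count:8192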

Directory Server Growth

Once in deployment Directory Server use is likely to grow. Planning
for growth is key for a successful deployment, in which you continue to provide
a consistently high level of service. Plan for larger, more powerful systems
than you need today, basing your requirements in part on the growth you expect
tomorrow.

Sometimes directory services must grow rapidly, even suddenly. This
is the case for example when a directory service sized for one organization
is merged with that of another organization. By preparing for growth in advance
and by explicitly identifying your expectations, you are better equipped to
deal with rapid and sudden growth, because you know in advance whether the
expected increase outstrips the capacity you planned.

Top Tuning Tips

Basic recommendations follow. These recommendations apply in most situations.
Although the recommendations presented here are in general valid, avoid the
temptation to apply the recommendations without understanding the impact on
the deployment at hand. This section is intended as a checklist, not a cheat
sheet.

Adjust cache sizes.

Ideally, the server has enough
available physical memory to hold all caches used by Directory Server.
Furthermore, an appropriate amount of extra physical memory is available to
account for future growth. When plenty of physical memory is available, set
the entry cache size large enough to hold all entries in the directory. Use
the entry-cache-size suffix property. Set the database
cache size large enough to hold all indexes with the db-cache-size property.
Use the dn-cache-size or dn-cache-count properties
to control the size of the DN cache.
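
For example, a sketch with a single suffix dc=example,dc=com and illustrative
sizes; adjust the values to the memory actually available, and treat the port
as a placeholder.

dsconf set-suffix-prop -p 1389 dc=example,dc=com entry-cache-size:2G
dsconf set-server-prop -p 1389 db-cache-size:1G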

From time to time, you can add additional indexes
that support requests from new applications. You can add, remove, or modify
indexes while Directory Server is running. Use for example the dsconf
create-index and dsconf delete-index commands.

Especially for deployments
that support large numbers of updates, Directory Server can be extremely
disk I/O intensive. If possible, consider spreading the load across multiple
disks with separate controllers.

Disable unnecessary logging.

Disk access is slower
than memory access. Heavy logging can therefore have a negative impact on
performance. Reduce disk load by leaving audit logging off when not required,
such as on a read-only server instance. Leave error logging at a minimal level
when not using the error log to troubleshoot problems. You can also reduce
the impact of logging by putting log files on a dedicated disk, or on a less
heavily used disk, such as the disk used for the replication changelog.

When replicating large numbers of updates, consider adjusting
the appropriate replication agreement properties.

The properties
are transport-compression, transport-group-size,
and transport-window-size.

On Solaris systems, move the database home directory to a tmpfs file system.

With the database cache backing files on a tmpfs file
system, the system does not repeatedly flush the database cache backing files
to disk. You therefore avoid a performance bottleneck for updates. In some
cases, you also avoid the performance bottleneck for searches. The database
cache memory is mapped to the Directory Server process space. The system
essentially shares cache memory and memory used to hold the backing files
in the tmpfs file system. You therefore gain performance
at essentially no cost in terms of memory space needed.

The primary cost associated with this optimization is that the database
cache must be rebuilt after a restart of the host machine. However, if you expect
a restart to happen only after a software or hardware failure, this cost is
probably unavoidable. After such a failure, the database cache must be rebuilt
anyway.

Enable transaction batches if you can afford to lose updates
during a software or hardware failure.

You enable transaction
batches by setting the server property db-batched-transaction-count.

Each update to the transaction log is followed by a sync operation to
ensure that update data is not lost. By enabling transaction batches, updates
are grouped together before being written to the transaction log. Sync operations
only take place when the whole batch is written to the transaction log. Transaction
batches can therefore significantly increase update performance. The improvement
comes with a trade-off: during a crash, you lose any update data that has not yet
been written to the transaction log.

Note –

With transaction batches enabled, you lose up to db-batched-transaction-count
- 1 updates during a software or hardware failure. The loss happens
because Directory Server waits for the batch to fill, or for 1 second,
whichever is sooner, before flushing content to the transaction log and thus
to disk.
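
For example, a sketch that groups up to 20 updates per transaction log write;
the port and the batch size are illustrative.

dsconf set-server-prop -p 1389 db-batched-transaction-count:20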

The referential integrity plug-in ensures that when entries
are modified or deleted from the directory, all references to those entries
are updated. By default, the processing is performed synchronously, before
the response for the delete operation is returned to the client. You can configure
the plug-in to have the updates performed asynchronously. Use the ref-integrity-check-delay server property.

Simulating Client Application Load

To measure Directory Server performance, you prepare the server,
then subject it to the kind of client application traffic you expect in production.
The better you reproduce the access patterns that client applications generate
in production, the better job you can do of sizing the hardware and configuring Directory Server appropriately.

Directory Server Resource Kit provides the authrate(1), modrate(1), and searchrate(1) commands
you can use for basic tests. These commands let you measure the rate of binds,
modifications, and searches your directory service can support.
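
For example, a rough sketch of a basic search-rate test against a test instance.
The option names shown here are assumptions based on common usage; check the
searchrate(1) man page for the exact syntax before running it.

searchrate -h localhost -p 1389 -b "dc=example,dc=com" -f "(uid=*)"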

You can also simulate, measure, and graph complex, realistic client
access using SLAMD. The SLAMD Distributed Load Generation Engine (SLAMD) is
a Java application that is designed to stress test and analyze the performance
of network-based applications. It was originally developed by Sun Microsystems,
Inc. to benchmark and analyze the performance of LDAP Directory Servers.
SLAMD is available as an open source application under the Sun Public License,
an OSI-approved open source license. To obtain information about SLAMD, go
to http://www.slamd.com/. SLAMD is also available
as a java.net project. See https://slamd.dev.java.net/.

Directory Server and Processors

As a multithreaded process built to work on systems with multiple processors, Directory Server performance
scales linearly in most cases as you devote more processors to it. When running Directory Server on
a system with many processors, consider using the dsconf command
to adjust the server property thread-count, which is the
number of threads Directory Server starts to process server operations.

In specific directory deployments, however, adding more processors might
not significantly impact performance. When handling demanding performance
requirements for searching, indexing, and replication, consider load balancing
and directory proxy technologies as part of the solution.

Directory Server and Memory

The following factors significantly affect the amount of memory needed:

Overhead for the operating system, other applications running
on the system, and system administration activity

To estimate the memory size required to run Directory Server, estimate
the memory needed for a specific Directory Server configuration on a system
loaded as in production, including application load generated, for example,
with the Directory Server Resource Kit commands or SLAMD.

Before you measure Directory Server process size, give the server
some time after startup to fill entry caches as they would be filled during
normal or peak operation. If everything fits in cache memory, you can shorten
this warm-up period by reading every entry in the directory to fill the entry
caches. If everything does not fit in cache memory, simulate client access
for some time until the caches fill as they would under a pattern of normal
or peak operation.
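For example, assuming a suffix dc=example,dc=com on the local host, you might
prime the entry cache by reading every entry once. The host, port, bind DN, and
suffix are placeholders for your own values, and the -w - option, which prompts
for the bind password, assumes the Directory Server Enterprise Edition ldapsearch
command.

$ ldapsearch -h localhost -p 389 -D "cn=Directory Manager" -w - \
  -b dc=example,dc=com "(objectclass=*)" dn > /dev/null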

With the server in an equilibrium state, you can use utilities such
as pmap on Solaris or Linux, or the Windows Task Manager
to measure memory used by the Directory Server process, ns-slapd on
UNIX systems, slapd.exe on Windows systems. For more information,
see the pmap(1) man page. Measure process size both during
normal operation and peak operation before deciding how much memory to use.
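For example, on a Solaris or Linux system, you might check the resident set size
of the server process as follows. This sketch assumes a single ns-slapd process
on the host; the final line of pmap -x output summarizes total memory use.

$ pmap -x $(pgrep -x ns-slapd) | tail -1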

Make sure to add to your estimates the amount of memory needed for system
administration, and for the system itself. Operating system memory requirements
can vary widely depending on the system configuration. Therefore, estimating
the memory needed to run the underlying operating system must be done empirically.
After tuning the system, monitor memory use against your estimate. You can use
utilities such as the Solaris vmstat and sar commands,
or the Task Manager on Windows, to measure memory use.

At a minimum, provide enough memory so that running Directory Server does
not cause constant page swapping, which negatively affects performance. Utilities
such as MemTool, unsupported and available separately for
Solaris systems, can be useful in monitoring how memory is used by and allocated
to running applications.

If the system cannot accommodate additional memory, yet you continue
to observe constant page swapping, reduce the size of the database and entry
caches. Although you can throttle memory use with the heap-high-threshold-size and heap-low-threshold-size server settings,
consider the heap threshold mechanism as a last resort. Performance suffers
when Directory Server must delay other operations to free heap memory.

Directory Server and Local Disk Space

Disk use and I/O capabilities can have great impact on performance.
The disk subsystem can become an I/O bottleneck, especially for a deployment
that supports large numbers of modifications. This section recommends ways
to estimate overall disk capacity for a Directory Server instance.

Note –

Do not install Directory Server or any
data it accesses on network disks.

Directory Server software
does not support the use of network-attached storage through NFS, AFS, or
SMB. All configuration, database, and index files must reside on local storage
at all times, even after installation. Log files can be stored on network
disks.

The following factors significantly affect the amount of local disk
space needed:

When you have set up indexes, adjusted the database page size, and imported
directory data, you can estimate the disk capacity required for the instance
by reading the size of the instance-path/ contents,
and adding the size of expected LDIF, backups, logs, and core files. Also
estimate how much the sizes you measure are expected to grow, particularly
during peak operation. Make sure you leave a couple of gigabytes of extra
space for the errors log in case you need to increase
the log level and size for debugging purposes.

In some cases, you can estimate the disk space required for directory data
by extrapolation. If it is not practical to load Directory Server with
as much data as you expect in production, extrapolate from smaller sets of
sample data as suggested in Making Sample Directory Data. When the amount of directory data you use is
smaller than in production, you must extrapolate for other measurements, too.

The following factors determine how fast the local disk must be:

Level of updates sustained, including the volume of replication
traffic

Whether directory data are mainly in cache or on disk

Log levels used for access and error logging, and whether
the audit log is enabled

Disks used should not be saturated under normal operating circumstances.
You can use tools such as the Solaris iostat command to
isolate potential I/O bottlenecks.
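For example, the following Solaris command reports extended device statistics
every 10 seconds. While the server is under simulated load, look for devices
that remain consistently busy or show long service times.

$ iostat -xn 10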

To increase disk throughput, distribute files across disk subsystems.
Consider providing dedicated disk subsystems for transaction logs (dsconf
set-server-prop db-log-path:/transaction/log/path),
databases (dsconf create-suffix --db-path /suffix/database/path suffix-name), and log
files (dsconf set-log-prop path:/log/file/path).
In addition, consider putting database cache files on a memory-based file system
such as a Solaris tmpfs file system, where files are swapped
to disk only if available memory is exhausted (for example, dsconf
set-server-prop db-env-path:/tmp). If you put database cache files
on a memory-based file system, make sure the system does not run out of the
space needed to keep that entire file system in memory.

To further increase throughput, use multiple disks in a RAID configuration.
Large, nonvolatile I/O buffers and high-performance disk subsystems such
as those offered in Sun StorEdge™ products can greatly
enhance Directory Server performance and uptime. On Solaris 10 systems,
using ZFS can also improve performance.

Directory Server and Network Connectivity

Directory Server is a network-intensive application. You can estimate
theoretical maximum throughput using the following formula. Notice that this
formula does not account for replication traffic.

max. throughput = max. entries returned/second x average entry size

Imagine that a Directory Server must respond to a peak of 5000 searches
per second and that the server returns one entry per search. The entries have
an average size of 2000 bytes. The theoretical maximum throughput would be
10 megabytes per second, or 80 megabits per second, not counting replication.
80 megabits per second is likely
to be more than a single 100-megabit Ethernet adapter can provide. To improve
network availability for a Directory Server instance, equip the system
with a faster connection, or with multiple network interfaces. Directory Server can
listen on multiple network interfaces within the same process.

Note –

The preceding example assumes that the client application requests all attributes when reading or searching the directory. Generally,
you should design client applications so that they request only the required attributes.

If you intend to cluster Directory Servers on the same network for
load balancing purposes, make sure the network infrastructure can support
the additional load generated for replication. If you plan multi-master replication
over a wide area network, test your configuration to make sure the connection
provides sufficient throughput with minimum latency and near-zero packet loss.
High latency and packet loss both slow replication. In addition, avoid a topology
where replication traffic goes through a load balancer.

Limiting Directory Server Resources Available
to Clients

The default configuration of Directory Server can allow client applications
to use more Directory Server resources than are required.

The following uses of resources can hurt directory performance:

Opening many connections then leaving them idle or unused

Launching costly and unnecessary unindexed searches

Storing enormous, unplanned-for binary attribute values

In some deployment situations, you should not modify the default configuration.
For deployments where you cannot tune Directory Server, use Directory Proxy Server to
limit resources, and to protect against denial of service attacks.

In some deployment situations, one instance of Directory Server must
support client applications, such as messaging servers, and directory clients
such as user mail applications. In such situations, consider using bind DN
based resource limits to raise individual limits for directory intensive applications.
The limits for an individual account can be adjusted by setting the attributes nsSizeLimit, nsTimeLimit, nsLookThroughLimit, and nsIdleTimeout on the individual entry.
For information about how to control resource limits for individual accounts,
see Setting Resource Limits For Each Client Account in Sun Java System Directory Server Enterprise Edition 6.1 Administration Guide.
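As a hedged illustration, the following ldapmodify operation raises the limits
on a single application account. The entry DN and the limit values are
placeholders, and the -w - option, which prompts for the bind password, assumes
the Directory Server Enterprise Edition ldapmodify command.

$ ldapmodify -h localhost -p 389 -D "cn=Directory Manager" -w -
dn: uid=mail-app,ou=Applications,dc=example,dc=com
changetype: modify
replace: nsLookThroughLimit
nsLookThroughLimit: 50000
-
replace: nsSizeLimit
nsSizeLimit: 10000
-
replace: nsTimeLimit
nsTimeLimit: 600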

Table 6–1 describes the parameters
that set the global values for resource limits. The limits in Table 6–1 do not apply to the Directory Manager
user. Therefore, ensure that client applications do not connect as the Directory
Manager user.

Sets the time in seconds after which Directory Server closes an idle
client connection. Here idle means that the connection
remains open, yet no operations are requested. By default, no time limit is
set.

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may open a pool of connections
that remain idle when traffic is low, but that should not be closed. Ideally,
you might dedicate a replica to support the application in this case. If that
is not possible, consider bind DN based individual limits.

In any case, set this value high enough not to close connections that
other applications expect to remain open, but set it low enough that connections
cannot be left idle abusively. Consider setting it to 7200 seconds, which
is 2 hours, for example.

Attribute

nsslapd-ioblocktimeout on dn: cn=config

Sets the time in milliseconds after which Directory Server closes
a stalled client connection. Here stalled means that
the server is blocked either sending output to the client or reading input
from the client.

You set this attribute with the ldapmodify command.

For Directory Server instances particularly exposed to denial of
service attacks, consider lowering this value from the default of 1,800,000
milliseconds, which is 30 minutes.
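For example, the following operation lowers the timeout to 600,000 milliseconds,
which is 10 minutes. The value is illustrative only; the connection options are
placeholders, and -w - prompts for the bind password.

$ ldapmodify -h localhost -p 389 -D "cn=Directory Manager" -w -
dn: cn=config
changetype: modify
replace: nsslapd-ioblocktimeout
nsslapd-ioblocktimeout: 600000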

Server property

look-through-limit

Sets the maximum number of candidate entries checked for matches during
a search.

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may need to search the
entire directory. Ideally, you might dedicate a replica to support the application
in this case. If that is not possible, consider bind DN based, individual
limits.

In any case, consider lowering this value from the default of 5000 entries,
but not below the threshold value of search-size-limit.

Attribute

nsslapd-maxbersize on dn: cn=config

Sets the maximum size in bytes for an incoming ASN.1 message encoded
according to Basic Encoding Rules, BER. Directory Server rejects requests
to add entries larger than this limit.

You set this attribute with the ldapmodify command.

If you are confident you can accurately anticipate maximum entry size
for your directory data, consider changing this value from the default of
2097152, which is 2 MB, to the size of the largest expected directory entry.

The next largest size limit for an update is the size of the transaction
log file, nsslapd-db-logfile-size, which by default is
10 MB.

Server property

max-threads-per-connection-count

Sets the maximum number of threads per client connection.

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may open a pool of connections
and may issue many requests on each connection. Ideally, you might dedicate
a replica to support the application in this case. If that is not possible,
consider bind DN based, individual limits.

If you anticipate that some applications may perform many requests per
connection, consider increasing this value from the default of 5. Typically,
do not specify more than 10 threads per connection.

Server property

search-size-limit

Sets the maximum number of entries Directory Server returns in response
to a search request.

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may need to search the
entire directory. Ideally, you might dedicate a replica to support the application
in this case. If that is not possible, consider bind DN based, individual
limits.

In any case, consider lowering this value from the default of 2000 entries.

Server property

search-time-limit

Sets the maximum number of seconds Directory Server allows for handling
a search request.

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may need to perform very
large searches. Ideally, you might dedicate a replica to support the application
in this case. If that is not possible, consider bind DN based, individual
limits.

In any case, set this value as low as you can and still meet deployment
requirements. The default value of 3600 seconds, which is 1 hour, is larger
than necessary for many deployments. Consider using 600 seconds, which is
10 minutes, as a starting point for optimization tests.

Limiting System Resources Used By Directory Server

Table 6–2 describes the parameters
that can be used to tune how a Directory Server instance uses system and
network resources.

Table 6–2 Tuning Recommendations
For System Resources

Tuning Parameter

Description

Attribute

nsslapd-listenhost on dn: cn=config

Sets the hostname for the IP interface on which Directory Server listens.
This attribute is multivalued.

You set this attribute with the ldapmodify command.

Default behavior is to listen on all interfaces. The default behavior
is well suited to high-volume deployments that use redundant network interfaces
for availability and throughput.

Consider setting this value when deploying on a multihomed system, or
when listening only for IPv4 or IPv6 traffic on a system supporting each protocol
through a separate interface. Consider setting nsslapd-securelistenhost when
using SSL.

Server property

file-descriptor-count

Sets the maximum number of file descriptors Directory Server attempts
to use.

You set this server property with the dsconf set-server-prop command.

The default value is the maximum number of file descriptors allowed
for a process on the system at the time when the Directory Server instance
is created. The maximum value corresponds to the maximum number of file descriptors
allowed for a process on the system. Refer to your operating system documentation
for details.

Directory Server uses file descriptors to handle client connections,
and to maintain files internally. If the error log indicates Directory Server sometimes
stops listening for new connections because not enough file descriptors are
available, increasing the value of this attribute may increase the number
of client connections Directory Server can handle simultaneously.

If you have increased the number of file descriptors available on the
system, set the value of this attribute accordingly. The value of this property
should be less than or equal to the maximum number of file descriptors available
on the system.

Attribute

nsslapd-nagle on dn: cn=config

Sets whether to delay sending of TCP packets at the socket level.

You set this attribute with the ldapmodify command.

Consider setting this to on if you need to reduce
network traffic.

Attribute

nsslapd-reservedescriptors on dn: cn=config

Sets the number of file descriptors Directory Server maintains to
manage indexing, replication and other internal processing. Such file descriptors become unavailable to handle client connections.

You set this attribute with the ldapmodify command.

Consider increasing the value of this attribute from the default of
64 if all of the following are true.

Directory Server replicates to more than 10 consumers or Directory Server maintains
more than 30 index files.

Directory Server handles a large number of client connections.

Messages in the error log suggest Directory Server is running
out of file descriptors for operations not related to
client connections.

Notice that as the number of reserved file descriptors increases, the
number of file descriptors available to handle client connections decreases.
If you increase the value of this attribute, consider increasing the number
of file descriptors available on the system, and increasing the value of file-descriptor-count.

If you decide to change this attribute, for a first estimate of the
number of file descriptors to reserve, try setting the value of nsslapd-reservedescriptors according to the following formula.

Here, ReplDescriptors is the number of supplier
replicas, plus 8, if replication is used. PTADescriptors is
3 if the Pass Through Authentication (PTA) plug-in is enabled, and 0 otherwise. SSLDescriptors is 5 if SSL is used, and 0 otherwise.

The number of databases is the same as the number of suffixes for the
instance, unless the instance is configured to use more than one database
per suffix. Verify estimates through empirical testing.

Attribute

nsslapd-securelistenhost on dn: cn=config

Sets the hostname for the IP interface on which Directory Server listens
for SSL connections. This attribute is multivalued.

You set this attribute with the ldapmodify command.

Default behavior is to listen on all interfaces. Consider this attribute
in the same way as nsslapd-listenhost.

Server property

max-thread-count

Sets the number of threads Directory Server uses.

You set this server property with the dsconf set-server-prop command.

Consider adjusting the value of this property if any of the following
are true.

Client applications perform many simultaneous, time-consuming
operations such as updates or complex searches.

Directory Server supports many simultaneous client connections.

Multiprocessor systems can sustain larger thread pools than single processor
systems. As a first estimate when optimizing the value of this attribute,
use two times the number of processors or 20 plus the number of simultaneous
updates.

Consider also adjusting the maximum number of threads per client connection, max-threads-per-connection-count. The maximum number of these threads
handling client connections cannot exceed the maximum number of file descriptors
available on the system. In some cases, it may prove useful to reduce, rather than increase, the value of this attribute.

Verify estimates through empirical testing. Results depend not only
on the particular deployment situation but also on the underlying system.

Basic Directory Server Sizing Example:
Disk and Memory Requirements

This section provides an example that shows initial steps in sizing Directory Server disk
and memory requirements for deployment. The system used for this example was
selected by chance and because it had sufficient processing power and memory
to complete the sizing tasks quickly. It does not necessarily represent a
recommended system for production use. You can use it, however, to gain insight
into how much memory and disk space might be required for production systems.

System Characteristics

The following system information was observed using the Solaris Management
Console (smc).

2 AMD64 CPUs (2.2 gigahertz)

Solaris 10 Operating System

4 gigabytes physical memory

40 gigabytes swap

Physical memory in use before Directory Server installation:
700 megabytes

Observing memory size with the default cache settings, and nothing loaded
from the suffix into entry cache yet, the server process occupies approximately
170 megabytes of memory with a heap size of about 56 megabytes.

The small default entry cache was no doubt filled completely after priming,
even with only 10,000 entries. To see the size for a full entry cache, set
a large entry cache size, import the data again, and prime the cache.

Populating the Suffix With 100,000 Sample Directory Entries

As you move to 100,000 entries, you have more directory data to fit
into database and entry caches. Initially, import 100,000 entries and examine
the size required on disk for this volume of directory data.

Directory data contained in the database for our example suffix, dc=example,dc=com, now occupy about 142 megabytes.

$ du -hs /local/ds/db/example/
142M /local/ds/db/example

You can increase the size of the database cache to hold this content.
If you expect the volume of directory data to grow over time, you can set
the database cache larger than currently necessary. You can also set the entry
cache size larger than necessary. Entry cache grows as the server responds
to client requests, unlike the database cache, which is allocated at startup.

After additional indexes are configured, however, the database is somewhat
larger. The indexes increased the size of the database from 142 megabytes to 163 megabytes.

$ du -hs /local/ds/db/example/
163M /local/ds/db/example

Populating the Suffix With 1,000,000 Sample Directory Entries

As you move from 100,000 entries to 1,000,000 entries, you no longer
have enough space on a system with 4 gigabytes of physical memory to include
all entries in the entry cache. You can begin by importing the data and examining
the size it occupies on disk.

Given a database cache this large and only 4 gigabytes of physical memory,
you cannot fit more than a fraction of entries into the entry cache for the
suffix. Here, set entry cache size to one gigabyte, and then prime the cache
to see the change in the process heap size.
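As a hedged sketch, you might set a one-gigabyte entry cache for the example
suffix as follows, then prime the cache and measure the process size again.
The property name and the value format follow dsconf set-suffix-prop
conventions; verify both against dsconf(1M) for your version.

$ dsconf set-suffix-prop dc=example,dc=com entry-cache-size:1G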

Basic Security Checks

The recommendations in this section do not eliminate all risk. Instead,
the recommendations are intended as a short checklist to help you limit typical
security risks.

Isolate and firewall the system. If
at all possible, isolate the system where Directory Server runs from the
public Internet with a network firewall.

Do not allow dual boot. Do
not run other operating systems on the system that runs a production Directory Server.
Other systems can permit access to files, which you should not allow.

Use strong passwords. Use
a root password at least eight characters long. The password should include
punctuation or other non-alphabetic characters.

You can use the
Strong Password Check server plug-in to refuse weak passwords. The dsconf server property pwd-strong-check-enabled can
be used to turn the plug-in on.
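For example, you might turn the plug-in on as follows. The on value is assumed
to be the boolean form that dsconf accepts.

$ dsconf set-server-prop pwd-strong-check-enabled:on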

If you choose to use longer operating system passwords, you might have
to configure the way passwords are handled by the system. See your operating
system documentation for instructions.

Use a safe user and group ID for the server. For security
reasons, do not run Directory Server with super user privileges.

You
can, for example, use the UNIX commands groupadd and useradd to create a user and group without login privileges. You
can then run the server as this user and group.

For example, to add a group that is named servers, do the following.

# groupadd servers

To add a user named server1 as a member of the group servers, use the following command.

# useradd -g servers -s /bin/false -c "server1" server1

A particular deployment can call for sharing Directory Server files
with other servers, such as a messaging server. In such a deployment, consider
running the servers with the same user and group ID.

Use the core facility. To
facilitate debugging, you can allow processes running with this user and group
ID to dump core. Use a utility such as the Solaris command coreadm.
For example, you can enable Directory Server to generate core files by
allowing setuid processes to do so, and by updating the coreadm configuration:

# coreadm -e proc-setid
# coreadm -u

When scripting server startup, you can add the following line to your
startup script. The line allows Directory Server to generate core files
of the form core.ns-slapd.pid,
where pid is the process ID.

coreadm -p core.%f.%p $$

Disable unnecessary services. For
top performance with less risk, dedicate the system to Directory Server.
As explained elsewhere in this guide, do not run Directory Service Control Center on the same
system. When you run additional services, especially network services, you
negatively affect server performance and scalability. Running additional
services can also increase security risks.

As with many network services, telnet and ftp pose
security risks. These two services are particularly dangerous, because the
commands transmit user passwords in clear text over the network. Work around
the need for telnet and ftp by using
clients such as Secure Shell, ssh, and Secure FTP, sftp, instead. See your operating system documentation for details on disabling network services.

If the Directory Server instance does not provide the naming service
for the network, consider enabling a naming service for the system. Directory Server then
uses the naming service, for example, when resolving ACIs.

Accurate System Clock Time

Ensure the system clock is reasonably in sync with other systems. Good
clock synchronization facilitates replication. Good synchronization also facilitates
correlation of date and time stamps in log files between systems. Consider
using a Network Time Protocol, NTP, client to set the correct system time.

Restart When System Reboots

You can enable a server instance to restart at system boot time by using
the dsadm command. For example, use the dsadm
enable-service command on Solaris 10 and Windows systems. On other
systems, use the dsadm autostart command. If you did not
install from native packages, refer to your operating system documentation
for help ensuring the server starts at system boot time.
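As a hedged example, assuming a Directory Server instance located at
/local/ds on a system that does not use SMF, you might enable automatic
startup as follows. See dsadm(1M) for the exact operands on your platform.

# dsadm autostart /local/ds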

When possible, stop Directory Server with the dsadm command,
or from DSCC. If the Directory Server is stopped abruptly during
system shutdown, there is no way to guarantee that all data has been written
to disk correctly. When Directory Server restarts, it must therefore verify
the database integrity. This process can take some time.

Furthermore, consider using a logging option with your file system.
File system logging generally both improves write performance and decreases
the time required to perform a file system check. The system must check the
file system when it is not cleanly unmounted, as after a crash.
Also, consider using RAID for storage.

System-Specific Tuning With The idsktune Command

The idsktune(1M) utility
that is provided with the product can help you diagnose basic system configuration
shortcomings. The utility offers recommendations for tuning the system to
support high performance directory services. The utility does not actually
implement any of the recommendations. The recommendations should be implemented
by a qualified system administrator.

When you run the utility as root, idsktune gathers
information about the system. The utility displays notices, warnings, and
errors with recommended corrective actions. The idsktune command
checks the following.

Operating system and kernel versions are supported for this
release.

Available memory, and available disk space meet minimum requirements
for typical use.

System resource limits meet minimum requirements for typical
use.

Required patches are installed.

Tip –

Fix at minimum all ERROR conditions before installing Directory Server software
on a system intended for production use.

Individual deployment requirements can exceed minimum requirements.
You can provide more resources than the resources identified as minimum system
requirements by the idsktune utility.

Consider local network conditions and other applications before implementing
specific recommendations. Refer to the operating system documentation for
additional tips on tuning network settings.

File Descriptor Settings

Directory Server uses file descriptors when handling concurrent client
connections. A low maximum number of file descriptors that are available for
the server process can thus limit the number of concurrent connections. Recommendations
that concern the number of file descriptors therefore relate to the number
of concurrent connections Directory Server can handle.

On Solaris systems, the number of file descriptors available is configured
through the rlim_fd_max parameter. Refer to the operating
system documentation for further instructions on modifying the number of available
file descriptors.
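For example, on a Solaris system you might raise the limits by adding lines
such as the following to /etc/system and rebooting. The values shown are
illustrative only.

* Raise file descriptor limits (illustrative values)
set rlim_fd_max=65536
set rlim_fd_cur=8192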

Transmission Control Protocol (TCP) Settings

Specific network settings depend on the platform. On some systems, you
can enhance Directory Server performance by modifying TCP settings.

Note –

First deploy your directory service, then consider tuning these
parameters, if necessary.

This section discusses the reasoning behind idsktune recommendations
that concern TCP settings, and provides a method for tuning these settings
on Solaris 10 systems.

Inactive Connections

Some systems allow you to configure the interval between transmission
of keepalive packets. This setting can determine how long
a TCP connection is maintained while inactive and potentially disconnected.
When set too high, the keepalive interval can cause the
system to use unnecessary resources to keep connections for clients that have
become disconnected. For most deployments, set this parameter to a value of
600 seconds. This value, which is 600,000 milliseconds, or 10 minutes, allows
more concurrent connections to Directory Server.

When set too low, however, the keepalive interval
can cause the system to drop connections during transient network outages.

On Solaris systems, this time interval is configured through the tcp_keepalive_interval parameter.
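For example, the following command applies the 600,000 millisecond value on a
Solaris system. Settings made with ndd do not persist across reboots, which is
one reason to use the SMF-based approach described in Tuning TCP Settings on
Solaris 10 Systems.

# ndd -set /dev/tcp tcp_keepalive_interval 600000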

Outgoing Connections

Some systems allow you to configure how long a system waits for an outgoing
connection to be established. When this value is set too high, outgoing
connections to destination servers that are not responding quickly, such as
replicas, can cause long delays. For intranet deployments on fast, reliable
networks, you can set this parameter to a value of 10 seconds to improve
performance. Do not, however, use such a low value on slow, unreliable, or
wide area network connections.

On Solaris systems, this time interval is configured through the tcp_ip_abort_cinterval parameter.
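For example, on a fast intranet you might apply the 10 second value, expressed
in milliseconds, as follows.

# ndd -set /dev/tcp tcp_ip_abort_cinterval 10000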

Retransmission Timeout

Some systems allow you to configure the initial time interval between
retransmission of packets. This setting affects the wait before retransmission
of an unacknowledged packet. When set too high, clients can be kept waiting
on lost packets. For intranet deployments on fast, reliable networks, you
can set this parameter to a value of 500 milliseconds to improve performance.
Do not, however, use such a low value on networks with round trip times of
more than 250 milliseconds.

On Solaris systems, this time interval is configured through the tcp_rexmit_interval_initial parameter.
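For example, the following command applies the 500 millisecond value on a
Solaris system.

# ndd -set /dev/tcp tcp_rexmit_interval_initial 500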

Sequence Numbers

Some systems allow you to configure how the system handles initial sequence
number generation. For extranet and Internet deployments, set this parameter
so initial sequence number generation is based on RFC 1948 to prevent sequence
number attacks. In such environments, other TCP tuning settings mentioned
here are not useful.

On Solaris systems, this behavior is configured through the tcp_strong_iss parameter.
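For example, the following command selects initial sequence number generation
based on RFC 1948 on a Solaris system, where the value 2 corresponds to that
behavior.

# ndd -set /dev/tcp tcp_strong_iss 2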

Tuning TCP Settings on Solaris 10 Systems

On Solaris 10 systems, the simplest way to tune TCP settings is to create
a simple SMF service as follows:

Create an SMF profile for Directory Server tuning.

Edit the following xml file according to your environment
and save the file as /var/svc/manifest/site/ndd-nettune.xml.

Run svcadm to enable nettune (for
more information, see the svcadm(1M) man page).

Run svcs -x (for more information see the svcs(1) man page).

Chapter 7 Identifying Security Requirements

How you secure data in Directory Server Enterprise Edition has an impact on all other areas of
design. This chapter describes how to analyze your security needs and explains
how to design your directory service to meet those needs.

Tampering. Information
in transit is changed or replaced and then sent on to the recipient. For example,
someone could alter an order for goods or change a person’s resume.

This threat includes unauthorized modification of data or configuration
information. If your directory cannot detect tampering, an attacker might
alter a client’s request to the server. The attacker might also cancel
the request or change the server’s response to the client. The Secure
Socket Layer (SSL) protocol and similar technologies can solve this problem
by signing information at either end of the connection.

Impersonation. Information
passes to a person who poses as the intended recipient.

Impersonation can take two forms: spoofing and misrepresentation.

Spoofing. A person or computer
impersonates someone else. For example, a person can pretend to have the
mail address jdoe@example.com, or a computer can identify
itself as a site called www.example.com when it is not.

Misrepresentation. A person
or organization misrepresents itself. For example, suppose the site www.example.com pretends to be a furniture store when it is really just a site
that takes credit-card payments but never sends any goods.

Denial of service. An attacker
uses the system resources to prevent these resources from being used by legitimate
users.

In a denial of service attack, the attacker’s goal
is to prevent the directory from providing service to its clients. Directory Server Enterprise Edition provides
a way of preventing denial of service attacks by setting limits on the resources
that are allocated to a particular bind DN.

Overview of Security Methods

A security policy must be able to prevent sensitive information
from being modified or retrieved by unauthorized users, yet must remain easy
enough to administer.

Authentication. Provides
a means for one party to verify another’s identity. For example, a client
gives a password to Directory Server during an LDAP bind operation. As
part of the authentication process, password policies define
the criteria that a password must satisfy to be considered valid, for example,
age, length, and syntax. Account inactivation disables
a user account, group of accounts, or an entire domain so that all authentication
attempts are automatically rejected.

Encryption. Protects the
privacy of information. When data is encrypted, the data is scrambled in a
way that only the recipient can decode. The Secure Sockets Layer (SSL)
maintains data integrity by encrypting information in transit. If encryption
and message digests are applied to the information being sent, the recipient
can determine that the information was not tampered with during transit. Attribute encryption maintains data integrity by encrypting stored
information.

Access control. Tailors
the access rights that are granted to different directory users, and provides
a means of specifying required credentials or bind attributes.

Auditing. Enables you to
determine if the security of your directory has been compromised. For example,
you can audit the log files maintained by your directory.

These security tools can be used in combination in your security design.
You can also use other features of the directory, such as replication and
data distribution, to support your security design.

Anonymous Access

Anonymous access is the simplest form of directory access. Anonymous
access makes data available to any directory user, regardless of whether the
user has authenticated.

Anonymous access does not allow you to track who is performing searches
or what kinds of searches are being performed, only that someone is performing
searches. When you allow anonymous access, anyone who connects to your directory
can access the data. If you allow anonymous access to data, and attempt to
block a user or group from that data, the user can access the data by binding
to the directory anonymously.

You can restrict the privileges of anonymous access. Usually, directory
administrators allow anonymous access only for read, search, and compare
privileges. You can also limit access to a subset of attributes that contain
general information such as names, telephone numbers, and email addresses.
Do not allow anonymous access to sensitive data, such as government identification
numbers, home telephone numbers and addresses, and salary information.
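As a hedged illustration, an ACI such as the following grants anonymous read,
search, and compare access to a small set of general attributes. The attribute
list and the ACI name are placeholders for your own choices.

aci: (targetattr="cn || sn || telephoneNumber || mail")
 (version 3.0; acl "Anonymous read of general attributes";
 allow (read,search,compare) userdn="ldap:///anyone";)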

Anonymous access to the root DSE (base DN "")
is required. Access to the root DSE enables applications to discover the capabilities
of the server, the supported security mechanisms, and the supported suffixes.

Simple Password Authentication

If anonymous access is not set up, a client must authenticate
to Directory Server to access the directory contents. With simple password
authentication, a client authenticates to the server by providing a simple,
reusable password.

The client authenticates to Directory Server through a bind operation
in which the client provides a distinguished name and credentials. The server
locates the entry that corresponds to the client DN, then checks whether the
client's password matches the value stored with the entry. If the password
matches, the server authenticates the client. If the password does not match,
the authentication operation fails and the client receives an error message.

Note –

The drawback of simple password authentication is that the password
is transmitted in clear text, which can compromise security. If a rogue user
is listening, that user can impersonate an authorized user.

Simple password authentication offers an easy way of authenticating
users. However, you need to restrict the use of simple password authentication
to your organization’s intranet. This kind of authentication does not
offer the level of security that is required for transmissions between business
partners over an extranet or for transmissions with customers on the Internet.

Simple Password Authentication Over a Secure Connection

A secure connection uses encryption to make data unreadable to third
parties while the data is sent over the network between Directory Server and
its clients. Clients can establish secure connections in either of the following
ways:

Binding to the secure port by using the Secure Socket Layer
(SSL)

Binding to an insecure port with anonymous access, then sending
the Start TLS control to begin using Transport Layer Security (TLS)

In either case, the server must have a security certificate, and the
client must be configured to trust this certificate. The server sends its
certificate to the client to perform server authentication,
using public-key cryptography. This results in the client knowing that it
is connected to the intended server and that the server is not being tampered
with.

Then, for privacy, the client and server encrypt all data transmitted
through the connection. The client sends the bind DN and password on the encrypted
connection to authenticate the user. All further operations are performed
with the identity of the user. The operations might also be performed with
a proxy identity if the bind DN has proxy rights to other user identities.
In all cases, the results of operations are encrypted when these results are
returned to the client.

Certificate-Based Client Authentication

When establishing encrypted connections over SSL or TLS, you can
also configure the server to require client authentication.
The client must send its credentials to the server to confirm the identity
of the user. The user's certificate, not the DN, is used to determine the
bind DN. Client authentication protects against user impersonation and is
the most secure type of connection.

Certificate-based client authentication operates at the SSL or TLS layer
only. To use a certificate-based authentication ID with LDAP, you must use
SASL EXTERNAL authentication after establishing the SSL connection.

You can configure certificate-based client authentication by using the dsconf set-server-prop command. See dsconf(1M) for more information.

SASL-Based Client Authentication

Client authentication during an SSL or TLS connection can also
use the Simple Authentication and Security Layer (SASL), a generic security
interface, to establish the identity of the client. Directory Server Enterprise Edition supports the
following mechanisms through SASL:

DIGEST-MD5. This mechanism
authenticates clients by comparing a hashed value sent by the client with
a hash of the user's password. However, because the mechanism must read user
passwords, all users wanting to be authenticated through DIGEST-MD5 must have {CLEAR} passwords in the directory.

GSSAPI. The General Security
Services API (GSSAPI) is available only on the Solaris Operating System.
It allows Directory Server to interact with the Kerberos V5 security system
to identify a user. The client application must present its credentials to
the Kerberos system, which in turn validates the user's identity to Directory Server.

EXTERNAL. This mechanism
authenticates a user in LDAP based on the credentials specified by an external
security layer, such as SSL or TLS.

Preventing Authentication by Using Global Account
Lockout

In this version of Directory Server, authentication failures
with a password are monitored and replicated. This enables rapid, global account
lockout after a specified number of authentication attempts with an invalid
password. Global account lockout is supported in any of the following cases:

Client applications bind to a single server in the topology
only

The topology does not include any read-only consumers

Directory Proxy Server is used to control all bind traffic

Imagine a situation where all bind attempts are not directed to the
same server, and the client application performs bind attempts on multiple
servers faster than lockout data can be replicated. In the worst-case scenario,
the client would be allowed the specified number of attempts on each server
where the client attempted to bind. This situation would be unlikely if the
client application were driven by input from a human user. However, an automated
client built to attack a topology could exploit this deployment choice.

To retain a strictly local lockout policy in a replicated topology,
you must maintain compatibility with the 5.2 password policy. In this situation,
the policy in effect must not be the default password policy. Local lockout
can also be achieved by restricting binds to read-only consumers.

External Authentication Mappings and Services

Directory Server provides user account host mapping, which associates
a network user account with a Directory Server user account. This feature
simplifies the management of both user accounts. Host mapping is required
for users who are externally authenticated.

Proxy Authorization

Proxy authorization is a special form of access control. With proxy authorization,
or proxy authentication, an application is required to use a specific
username and password combination to gain access to the server.

With proxy authorization, an administrator can request access to Directory Server by
assuming the identity of a regular user. The administrator binds to the directory
with his own credentials and is granted the rights of the regular user. This
assumed identity is called the proxy user. The DN of
that user is called the proxy DN.
The proxy user is evaluated as a regular user. Access is denied if the proxy
user entry is locked or inactivated or if the password has expired.

An advantage of the proxy mechanism is that you can enable an LDAP application
to use a single bind to service multiple users who are accessing Directory Server.
Instead of each user having to bind and authenticate, the client application
binds to Directory Server and uses proxy rights.

Password Policy Options

A default password policy is applied. The parameters of this
default policy can be changed.

An additional, specialized password policy can be applied
to a particular user.

An additional, specialized password policy can be applied
to a set of users by using the CoS and Roles functionality. Password policies
cannot be applied to static groups.

Password Policies in a Replicated Environment

Configuration information for the default password policy is not replicated.
Instead, it is part of the server instance configuration. If you modify the
default password policy, the same modifications must be made on each server
in the topology. If you need a password policy that is replicated,
you must define a specialized password policy under a part of the directory
tree that is replicated.

All password information that is stored in the user entry is replicated.
This information includes the current password, password history, password
expiration dates and so forth.

Consider the following impact of password policies in a replicated environment:

A user with an impending password expiration receives a warning
from every replica to which the user binds before changing his password.

When a user changes his password, the new password might take
a while to be updated on all replicas. A situation could arise where a user
changes his password and then immediately rebinds to one of the consumer replicas
with the new password. In this case, the bind could fail until the replica
receives the updated password. This situation can be alleviated using prioritized
replication to force password changes to be replicated first.

Determining Encryption Methods

Securing Connections With SSL

Security design involves
more than an authentication scheme for identifying users and an access control
scheme for protecting information. You must also protect the integrity of
information between servers and client applications while it is being sent
over the network.

To provide secure communications over the network, you can use both
the LDAP and DSML protocols over the Secure Sockets Layer (SSL). When SSL
is configured and activated, clients connect to a dedicated secure port where
all communications are encrypted after the SSL connection is established. Directory Server and
Directory Proxy Server also support the Start Transport Layer Security (Start
TLS) control. Start TLS allows the client to initiate an encrypted connection
over the standard LDAP port.

What Is Attribute Encryption?

Directory Server Enterprise Edition provides various features to protect data at the access level,
including password authentication, certificate-based authentication, SSL,
and proxy authorization. However, the data stored in database files, backup
files, and LDIF files must also be protected. The attribute encryption feature
prevents users from accessing sensitive data while the data is stored.

Attribute encryption enables certain attributes to be stored in encrypted
form. Attribute encryption is configured at the database level. Thus, after
an attribute is encrypted, the attribute is encrypted in every entry in the
database. Because attribute encryption occurs at the attribute level (not
the entry level), the only way to encrypt an entire entry is to encrypt all
of its attributes.

Attribute encryption also enables you to export data to another database
in an encrypted format. The purpose of attribute encryption is to protect
sensitive data only when the data is being stored or exported. Therefore,
the encryption is always reversible. Encrypted attributes are decrypted when
returned through search requests.

The following figure shows a user entry being added to the database,
where attribute encryption has been configured to encrypt the salary attribute.

Figure 7–1 Attribute Encryption Logic

Attribute Encryption Implementation

The attribute encryption feature supports a wide range of encryption
algorithms. Portability across different platforms is ensured. As an additional
security measure, attribute encryption uses the private key of the server’s
SSL certificate to generate its own key. This key is then used to perform
the encryption and decryption operations. To be able to encrypt attributes,
a server must be running over SSL. The SSL certificate and its private key
are stored securely in the database and protected by a password. This key
database password is required to authenticate to the server. The server assumes
that whoever has access to this key database password is authorized to export
decrypted data.

Note that attribute encryption only protects stored attributes. If you
are replicating these attributes, replication must be configured over SSL
to protect the attributes during transport.

If you use attribute encryption, you cannot use the binary copy feature
to initialize one server from another server.

Attribute Encryption and Performance

Sensitive data can be accessed directly through index files. Thus, you
must encrypt the index keys corresponding to the encrypted attributes, to
ensure that the attributes are fully protected. Indexing already has a performance
impact, without the added cost of encrypting index keys. Therefore, configure
attribute encryption before data is imported or added
to the database for the first time. This procedure ensures that encrypted
attributes are indexed as such from the outset.

Designing Access Control With ACIs

Access control enables you to specify that certain clients have access
to particular information, while other clients do not. You implement access
control using one or more access control lists (ACLs). ACLs consist of a series
of access control instructions (ACIs) that either allow or deny permissions
to specified entries and their attributes. Permissions include the ability
to read, write, search, proxy, add, delete, compare, import and export.

By using an ACL, you can set permissions for the following:

The entire directory

A particular subtree of the directory

Specific entries in the directory

A specific set of entry attributes

Any entry that matches a given LDAP search filter

In addition, you can set permissions for a specific user, for all users
that belong to a group, or for all users of the directory. You can also define
access for a network location, such as an IP address or a DNS name.

ACI Scope

Directory Server 6.1 includes two major changes to
ACI scope.

Ability to specify the scope of an
ACI. In previous versions of Directory Server, you could not
specify the scope of an ACI. ACIs automatically applied to the entry that
contained the ACI and all of its subtree. Therefore, it was necessary to use deny ACIs in several cases. Deny ACIs can be difficult to manage,
particularly when a deny ACI and an allow ACI apply to a single entry.

Root ACIs now apply to the root entry
and its entire subtree. In previous versions of Directory Server,
ACIs located in the root DSE applied to the root entry only and not its
children. ACIs placed in any other entry applied to the entry that contained
the ACI and all of its subtree. In Directory Server Enterprise Edition, ACIs placed in the root entry
are treated like ACIs placed anywhere else.

The new root ACIs
have an obvious security advantage. Administrators no longer have to bind
as the Directory Manager to perform certain operations. Administrators can
now be forced to bind by using strong authentication such as SSL. When configuring
ACIs that are intended to apply only to the root entry,
the scope of the ACI must now specifically be set to base.

Obtaining Effective Rights Information

The access
control model provided by Directory Server 6.1 can grant
access to users through many different mechanisms. However, this flexibility
can make your security policy fairly complex. Several parameters can define
the security context of a user, including IP address, machine name, time
of day, and authentication method.

Tips on Using ACIs

The following tips can simplify your directory security model and improve
directory performance:

Minimize the number of ACIs in your directory, and use macro
ACIs where possible.

Although Directory Server can evaluate
over 50,000 ACIs, managing a large number of ACI statements can be difficult.
Excessive ACIs can also have a negative impact on memory consumption.

Balance allow and deny permissions.

The default
rule is to deny access to any user who has not been specifically granted access.
However, you can reduce the number of ACIs by using one ACI that allows access
close to the root of the tree and using a small number of deny ACIs close
to the leaf entries. This approach can prevent excessive allow ACIs close
to the leaf entries.

Identify the smallest set of attributes on any given ACI.

If you allow or deny access to a subset of attributes on an object,
determine whether the smallest list is the set of attributes that are allowed
or the set of attributes that are denied. Then express your ACI so that you
are managing the smallest list.

For example, the person object class contains dozens of attributes.
To allow a user to update just a few attributes, write your ACI so that it
allows write access for just those few attributes. To allow a user to update
all but one or two attributes, create the ACI so that it denies write access
for those one or two attributes.
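For example, a hedged ACI like the following allows users to update only a
few of their own attributes. The attribute names and the ACI name are
illustrative.

aci: (targetattr="telephoneNumber || mobile || roomNumber")
 (version 3.0; acl "Self write to contact attributes";
 allow (write) userdn="ldap:///self";)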

Use LDAP search filters cautiously.

Search filters
do not directly name the object for which you are managing access. Search
filters can therefore yield unexpected results, especially as your directory
becomes more complex. If you use search filters in ACIs, run an ldapsearch operation with the same filter. This action will ensure that you
know what the results of the changes mean to your directory.

Do not duplicate ACIs in different parts of your directory
tree.

Look for overlapping ACIs. Imagine that you have an ACI
at your directory root point that allows a group write access to the commonName and givenName attributes. Imagine also that
you have another ACI that allows the same group write access to just the commonName attribute. In this scenario, consider reworking your
ACIs so that a single ACI grants write access for the group.

As your directory grows more complicated, accidental overlapping of
ACIs becomes increasingly common. If you avoid ACI overlap, security management
becomes easier and the total number of ACIs in your directory is reduced.

Limit
ACI placement to your directory root point and to major directory branch points.
If you organize ACIs into groups, the total list of ACIs is easier to manage
and the total number of ACIs can be kept to a minimum.

Avoid using double negatives, such as deny write if the bind
DN is not equal to cn=Joe.

Although this syntax
is acceptable to the server, the syntax can be confusing for an administrator.

Designing Access Control With Connection
Rules

Connection rules enable you to prevent selected clients from establishing
connections to Directory Server. The purpose of connection rules is to
prevent a denial-of-service attack caused by malicious or poorly designed
clients that connect to Directory Server and flood the server with requests.

Designing Access Control With Directory Proxy Server

Directory Proxy Server connection
handlers provide a method of access control that enables you to classify
incoming client connections. In this way, you can restrict the operations
that can be performed based on how the connection has been classified.

You can use this functionality, for example, to restrict access to clients
that connect from a specified IP address only. The following figure shows
how you can use Directory Proxy Server connection handlers to deny write operations
from specific IP addresses.

Figure 7–2 Directory Proxy Server Connection Handler Logic

How Connection Handlers Work

A connection handler consists of a list of criteria and a list of policies.
Directory Proxy Server determines a connection's class membership by matching
the origination attributes of the connection with the criteria of the class.
When the connection has been matched to a class, Directory Proxy Server applies
the policies that are contained in that class to the connection.

Connection handler criteria can include the following:

Client physical address

Domain name or host name

Client DN pattern

Authentication method

SSL

The following policies can be associated with a connection handler:

Administrative limits policy. Enables
you to set certain limits on, for example, the number of open connections
from clients of a specific class.

Content adaptation policy. Enables
you to restrict the kind of operations a connection can perform, for example,
attribute renaming.

Data distribution policy. Enables
you to use a specific distribution scheme for a connection.

Using CoS Securely

Access control for reading applies to both the real attributes and the
virtual attributes of an entry. A virtual attribute generated by the Class
of Service (CoS) mechanism is read like a normal attribute. Virtual attributes
should therefore be given read protection in the same way. However, to make
the CoS value secure, you must protect all of the sources of information the
CoS value uses: the definition entries, the template entries, and the target
entries. The same is true for update operations. Write access to each source
of information must be controlled to protect the value that is generated
from these sources. For more information, see Chapter 9, Directory Server Class
of Service, in Sun Java System Directory
Server Enterprise Edition 6.1 Reference.

Using Firewalls

Firewall technology
is typically used to filter or block network traffic to and from an internal
network. If LDAP requests are coming from outside a perimeter firewall, you
need to specify what ports and protocols are allowed to pass through the firewall.

The ports and protocols that you specify depend on your directory architecture.
As a general rule, the firewall must be configured to allow TCP and UDP connections
on ports 389 (LDAP) and 636 (LDAP over SSL).

Host-based firewalls can be installed on the same server that is running Directory Server.
The rules for host-based firewalls are similar to the rules for perimeter
defense firewalls.
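
As a minimal sketch of a host-based rule set, the following commands allow LDAP
and LDAPS traffic from an internal network and drop other inbound connections to
those ports. The iptables tool and the 10.0.0.0/8 network are assumptions for
illustration; on Solaris systems an equivalent rule set can be written for IP Filter.

# Accept LDAP (389) and LDAP over SSL (636) from the internal network only.
iptables -A INPUT -p tcp -s 10.0.0.0/8 --dport 389 -j ACCEPT
iptables -A INPUT -p tcp -s 10.0.0.0/8 --dport 636 -j ACCEPT
# Drop connections to those ports from anywhere else.
iptables -A INPUT -p tcp --dport 389 -j DROP
iptables -A INPUT -p tcp --dport 636 -j DROP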

Running as Non-Root

You can create and run server instances as a non-root user. By running
server instances as a non-root user, you limit any potential
damage that an exploit could cause. Furthermore, servers running as non-root users are subject to access control mechanisms on the operating
system.

Other Security Resources

For more information about designing a secure directory, see the following
resources:

Directory Server Enterprise Edition Administration Model

Directory Server Enterprise Edition gives
the administrator more control over instance creation and administration.
This control is achieved by using two new commands, dsadm and dsconf. These commands provide all the functionality
previously supplied by the directoryserver command plus
additional functionality.

The dsadm command enables the administrator to create,
start, and stop a Directory Server instance. This command combines all
operations that require file system access to the Directory Server instance.
The command must be run on the machine that hosts the instance. It does
not perform any operation that requires LDAP access to the instance or access
to an agent.

In the new administration model, a Directory Server instance is no
longer tied to a ServerRoot. Each Directory Server instance
is a standalone directory that can be manipulated in the same manner as an
ordinary standalone directory.

The dsconf command combines the administration operations
that require write access to cn=config. The dsconf
command is an LDAP client. It can only be executed on an active Directory Server instance.
The command can be run remotely, enabling administrators to configure multiple
instances from a single remote machine.
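
As an illustration of how the two commands divide the work, the following sketch
creates and starts an instance locally with dsadm, then configures a suffix
remotely with dsconf. The instance path, ports, host name, and suffix DN are
placeholders; confirm the option names against the dsadm(1M) and dsconf(1M)
man pages.

# Run on the host that stores the instance files (file-system access required).
dsadm create -p 1389 -P 1636 /local/dsInst
dsadm start /local/dsInst

# Run from any administration host (LDAP access to cn=config).
dsconf create-suffix -h ds1.example.com -p 1389 dc=example,dc=com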

Directory Proxy Server provides two comparable commands, dpadm and dpconf. The dpadm command
enables the administrator to create, start, and stop a Directory Proxy Server instance.
The dpconf command enables the administrator to configure Directory Proxy Server by
using LDAP and to access the Directory Server configuration through Directory Proxy Server.

In addition to these command-line utilities, Directory Server Enterprise Edition is integrated
into the Java Web Console. The Console enables Directory Server Enterprise Edition and other Sun products
to be managed from a centralized user interface. Directory Service Control Center (DSCC)
is a service of the Java Web Console that is specifically for managing Directory Servers
and Directory Proxy Servers. DSCC provides the same functionality as
the command-line utilities, as well as wizards that enable you to configure
several servers simultaneously. In addition, DSCC provides a replication
topology drawing tool that enables you to monitor replication topologies graphically.
This tool simplifies replication monitoring by providing a real-time view
of individual masters, hubs, and consumers, and the replication agreements
between them.

Remote Administration

The Directory Server Enterprise Edition administration model, described in the previous
section, also enables remote administration of any Directory Server or Directory Proxy Server in
the topology. Servers can be administered remotely using both the command-line
utilities and the Java Web Console.

The dsadm and dpadm utilities
cannot be run remotely. These utilities must be installed and run on the same
machine as the server instance that is being administered. For details of
the functionality provided with dsadm and dpadm,
see the dsadm(1M) and dpadm(1M) man pages.

The dsconf and dpconf utilities
can be run remotely. For details of the functionality provided with dsconf and dpconf, see the dsconf(1M) and dpconf(1M) man pages.

The following figure illustrates how the new administration model facilitates
remote administration. This illustration shows that the console and configuration
commands can be installed and run remotely from the Directory Server and Directory Proxy Server instances.
The administration commands must be run locally to the instances.

Build automation around backup and recovery tools, and ensure
that automatic scripts are maintained.

This strategy avoids unnecessary
delays if you have to restore from a backup in an emergency.

Determine a retention and rotation strategy.

This
strategy includes how often you perform backups and how long you keep them.
When determining retention and rotation of backups, be aware of the purge
delay and its impact on backups in a replicated topology. As modifications
occur on a supplier, changes are recorded in the change log. Without a method
of emptying the change log, its size would continue to increase until the
change log consumed all available disk space. By default, changes are purged
every seven days. This period is known as the purge delay. When a change
has been purged, the change can no longer be replicated. For this reason,
make sure that databases are backed up at least as often as the purge delay.

Use the backup and recovery tools provided with Directory Server Enterprise Edition rather
than merely performing a system backup and recovery.

Choosing a Backup Method

Directory Server Enterprise Edition provides two methods of backing up data: binary backup and
backup to an LDIF file. Both of these methods have advantages and limitations,
and knowing how to use each method will assist you in planning an effective
backup strategy.

Binary Backup

Binary backup produces a copy of the database files, and is performed
at the file-system level. The output of a binary backup is a set of binary
files containing all entries, indexes, the change log, and the transaction
log. A binary backup does not contain configuration data.

Binary backup is performed using one of the following commands:

dsadm backup must be run offline, that
is, when the Directory Server instance is stopped. The command must be
run on the local server containing the Directory Server instance.

dsconf backup can be run online and remote
to the Directory Server instance.
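
For example, a binary backup might be taken as follows. The instance path, host
name, and archive directory are placeholders; see the dsadm(1M) and dsconf(1M)
man pages for the exact options.

# Offline backup, run on the host that stores the instance.
dsadm stop /local/dsInst
dsadm backup /local/dsInst /local/backups/20070630
dsadm start /local/dsInst

# Online backup, run remotely against a running instance.
# The archive directory refers to a path on the server host.
dsconf backup -h ds1.example.com -p 1389 /local/backups/20070630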

Binary backup has the following advantages:

All suffixes can be backed up at the same time.

Binary backup is significantly faster than a backup to LDIF.

The replication change log is backed up.

Binary backup has one limitation. Restoration from a binary backup can
be performed only on a server with an identical configuration.

This limitation implies the following:

Both machines must use the same hardware and the same operating
system, including any service packs or patches.

Both machines must have the same version of Directory Server installed,
including the binary format (32-bit or 64-bit), service packs, and patch levels.

Both servers must have the same directory tree that is divided
into the same suffixes. The database files for all suffixes must be copied
together. Individual suffixes cannot be copied separately.

Each suffix must have the same indexes configured on both
servers, including virtual list view (VLV) indexes. The database files for
the suffixes must have the same name.

Each server must have the same suffixes configured as replicas.
If fractional replication is configured, fractional replication must be configured
identically on all master servers.

Attribute encryption must not be used on either server.

At a minimum, you need to perform a regular binary backup on each set
of coherent machines. Coherent machines are machines that have an identical
configuration, as defined previously.

Note –

Because restoration from a local backup is easier, perform a binary
backup on each server.

These abbreviations are used in the remaining diagrams in this chapter:

M = master replica

RA = replication agreement

The following figure assumes that M1 and M2 have an identical configuration
and that M3 and M4 have an identical configuration. In this scenario, a binary
backup would be performed on M1 and on M3. In the case of failure, M1 or M2
could be restored from the binary backup of M1 (db1). M3 or M4 could be restored
from the binary backup of M3 (db2). M1 and M2 could not be restored from the
binary backup of M3. M3 and M4 could not be restored from the binary backup
of M1.

Figure 8–2 Offline Binary Backup

Backup to LDIF

Backup to LDIF is performed at the suffix level. The output of a backup
to LDIF is a formatted LDIF file, which is a copy of the data contained in
the suffix. As such, this process takes longer than a binary backup.

Backup to LDIF is performed using one of the following commands:

dsadm export must be run offline, that
is, when the Directory Server instance is stopped. This command must be
run on the local server containing the Directory Server instance.

dsconf export can be run online and remote
to the Directory Server instance.
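
A backup to LDIF for a single replicated suffix might look like the following
sketch. The paths, host name, and suffix DN are placeholders, and the argument
order shown here is illustrative; verify it against the dsadm(1M) and
dsconf(1M) man pages.

# Offline export, run locally with the instance stopped.
dsadm export /local/dsInst dc=example,dc=com /local/ldif/example.ldif

# Online export, run remotely against a running instance.
dsconf export -h ds1.example.com -p 1389 dc=example,dc=com /local/ldif/example.ldif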

Note –

Replication information is backed up unless you use the -Q option
when running these commands.

The dse.ldif configuration
file is not backed up in a backup to LDIF. To enable you to restore a previous
configuration, back this file up manually.

Backup to LDIF has the following advantages:

Backup to LDIF can be performed from any server, regardless
of its configuration.

Restoration from an LDIF backup can be performed on any server,
regardless of its configuration.

Backup to LDIF has one limitation. In situations where rapid backup
and restoration are required, backup to LDIF might take too long to be viable.

You need to perform a regular backup by using backup to LDIF for each
replicated suffix, on a single master in your topology.

In the following figure, dsadm export is performed
for each replicated suffix, on one master only (M1).

Figure 8–3 Offline Backup to LDIF

Choosing a Restoration Method

Directory Server Enterprise Edition provides two methods of restoring data: binary restore and
restoration from an LDIF file. As with the backup methods, both of these methods
have advantages and limitations.

Binary Restore

Binary restore copies data at the database level. Binary restore
is performed using one of the following commands:

dsadm restore must be run offline, that
is, when the Directory Server instance is stopped. This command must be
run on the local server containing the Directory Server instance.

dsconf restore can be run online and remote
to the Directory Server instance.

Binary restore has the following advantages:

All suffixes can be restored at the same time.

The replication change log is restored.

Binary restore is significantly faster than restoring from
an LDIF file.

Binary restore has one limitation. Because binary backup creates an exact
copy of the database, if you are not aware that your database was corrupt when
you performed the binary backup, you risk restoring a corrupt database.

Binary restore is the preferred restoration method if the machines have
an identical configuration and time is a major consideration.

The following figure assumes that M1 and M2 have an identical configuration
and that M3 and M4 have an identical configuration. In this scenario, M1 or
M2 can be restored from the binary backup of M1 (db1). M3 or M4 can be restored
from the binary backup of M3 (db2).

Figure 8–4 Offline Binary Restore

Restoration From LDIF

Restoration from an LDIF file is performed at the suffix level.
As such, this process takes longer than a binary restore. Restoration from
LDIF can be performed using one of the following commands:

dsadm import must be run offline, that
is, when the Directory Server instance is stopped. This command must be
run on the local server containing the Directory Server instance.

dsconf import can be run online and remote
to the Directory Server instance.
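
A restoration from LDIF on a single master might look like the following sketch.
As before, the paths and suffix DN are placeholders and the argument order is
illustrative; verify it against the dsadm(1M) and dsconf(1M) man pages.

# Offline import, run locally with the instance stopped.
dsadm import /local/dsInst /local/ldif/example.ldif dc=example,dc=com

# Online import, run remotely against a running instance.
dsconf import -h ds1.example.com -p 1389 /local/ldif/example.ldif dc=example,dc=com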

Restoration from an LDIF file has the following advantages:

This command can be performed on any server, regardless of
its configuration.

A single LDIF file can be used to deploy an entire directory
service, regardless of its replication topology. This functionality is particularly
useful for the dynamic expansion and contraction of a directory service according
to anticipated business needs.

In the following figure, dsadm import is performed
for each replicated suffix, on one master only (M1).

Figure 8–5 Offline Restoration From LDIF

Designing a Logging Strategy

Logging is managed and configured at the individual server level. While
logging is enabled by default, it can be reconfigured or disabled according
to the requirements of your deployment. Designing a logging strategy assists
with planning hardware requirements. For more information, see Hardware Sizing For Directory Server.

Defining Logging Policies

Each Directory Server in a topology stores logging information in
three files:

Access log. Lists the clients
that connect to the server and the operations requested.

Error log. Provides information
about server errors.

Audit log. Gives details about
modifications to suffixes and to the configuration.

Each Directory Proxy Server in a topology stores logging information in
two files:

Access log. Lists the clients
that connect to Directory Proxy Server and the operations requested.

Error log. Contains server
error messages.

You can manage the log files for both Directory Server and Directory Proxy Server in
these ways:

Defining log file creation policies

Defining log file deletion policies

Manually creating and deleting log files

Defining log file permissions

Defining Log File Creation Policies

A log file creation policy enables you to periodically archive
the current log and start a new log file. Log file creation policies can be
defined for Directory Server and Directory Proxy Server from the Directory Service
Control Center or using the command-line utilities.

When defining a log file creation policy, consider the following:

How many logs do you want to keep?

When this number
of logs is reached, the oldest log file in the folder is deleted before a
new log is created. If this value is set to 1, the logs
are not rotated and grow indefinitely.

What is the maximum size, in megabytes, for each log file?

When a log file reaches this maximum size or the maximum age defined
in the next item, the file is archived. A new log file is started.

How often should the current log file be archived?

The
default is every day.

At what time of day should log files be rotated?

Time-based
rotation makes operations like log analysis and trending easier, because
each log file covers the same time period.

Log file rotation can also be based on a combination of criteria. For
example, you can specify that logs be rotated at 23h30 only if
the file size is greater than 10 megabytes.

Defining Log File Deletion Policies

A log file deletion policy enables you to automatically delete
old archived logs. Log file deletion policies can be defined for Directory Server and
Directory Proxy Server from the Directory Service Control Center or using the command-line utilities.
A log file deletion policy is not applied unless you have defined a log file
creation policy. Log file deletion will not work if you have just one log
file. The server evaluates and applies the log file deletion policy at the
time of log rotation.

When defining a log file deletion policy, consider the following:

What is the maximum size of the combined archived logs?

When the maximum size is reached, the oldest archived log is automatically
deleted.

What is the minimum free disk space that should be available?

When the free disk space reaches this minimum value, the oldest archived
log is automatically deleted.

What is the maximum age of log files?

When a log
file reaches this maximum age, the log file is automatically deleted.

Manually Creating and Deleting Log Files

If you do not want to define automatic creation and deletion policies
for Directory Server, you can create and delete log files manually. In
addition, Directory Server provides a task that enables you to rotate
any log immediately, regardless of the defined creation policy. This functionality
might be useful if, for example, an event occurs that needs to be examined
in more detail. The immediate rotation function causes the server to create
a new log file. The previous file can therefore be examined without the server
appending logs to this file.

Designing a Monitoring Strategy

An effective monitoring
and event management strategy is crucial to a successful deployment. Such
a strategy defines which events should be monitored, which tools to use, and
what action to take should an event occur. If you have a plan for commonplace
events, you can prevent possible outages and reduced levels of service. Such
planning improves the availability and quality of service of your directory.
The following tools can be used to monitor a deployment:

Command-line tools. Include
operating system-specific tools to monitor performance, such as disk usage;
LDAP tools such as ldapsearch to collect server statistics stored in the
directory; third-party tools; or custom shell or Perl scripts. An example
ldapsearch against the monitoring entry is shown after this list.

Directory Server and Directory Proxy Server logs. Include the access, audit, and error logs. These logs can be monitored
manually or parsed using custom scripts to extract monitoring information
that is relevant to your deployment. The Directory Server Resource Kit provides
a log analyzer tool, logconv, that enables you to analyze
the access logs. The log analyzer tool extracts usage statistics and counts
the occurrences of significant events. For more information about this tool,
see logconv(1).
For information about viewing and configuring log files, see Chapter 14, Directory Server Logging, in Sun Java System Directory Server Enterprise Edition 6.1 Administration Guide.

Directory Service Control Center (DSCC). Is
a graphical user interface that enables you to monitor directory operations
in real time. DSCC provides general server information, including
a resource summary, current resource usage, connection status, and global
database cache information. It also provides general database information,
such as the database type, status, and entry cache statistics. Cache information
and information relative to each index file within the database is also provided.
In addition, DSCC provides information relative to the connections
and the operations performed on each chained suffix.
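
As an example of the command-line approach listed above, the statistics that
Directory Server publishes under cn=monitor can be collected with a plain
ldapsearch. The host, port, and bind credentials are placeholders.

# Read current counters (connections, operations, cache statistics)
# from the server's monitoring entry.
ldapsearch -h ds1.example.com -p 1389 -D "cn=Directory Manager" -w password \
 -b "cn=monitor" -s base "(objectclass=*)"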

Data Administration With Directory Editor

The Directory Editor component of Directory Server Enterprise Edition is a Java web
application that enables you to manage directory data by using a web browser. Directory Editor
provides all users with remote access to directory data without having to
install any client software.

Directory Editor offers the following functionality:

Enables administrators and end users to create and edit directory
users, groups, and containers.

Supports several concurrent users, depending on the application
server and underlying hardware.

Supports large enterprise directory installations.

Enables customization, branding, and embedding of the interface.

Customization dynamically adapts to the Directory Server schema.

Enables customization through the configuration of forms,
rather than by direct programming.

Supports SSL-encrypted transmissions between the client browser
and Directory Server.

Limits access to menus and functions, based on roles.

Roles are scanned to match group names. Roles have access to certain
capabilities, which are high-level actions such as Browse,
Configure, Debug, Edit, Create, and Search.

Limits access to the data based on the existing ACIs in Directory Server.
It is not necessary to define ACIs that are specific to Directory Editor.

Enables paged display of large volumes of data, based on the
virtual list view (VLV) index.

Grouping Directory Entries and Managing Attributes

The directory information
tree organizes entries hierarchically. This hierarchy is a type of grouping
mechanism. The hierarchy is not well suited for associations between dispersed
entries, for organizations that change frequently, or for data that is repeated
in many entries. Directory Server groups and roles offer more flexible
associations between entries. The class of service (CoS) mechanism enables
you to manage attributes so that the attributes are shared between entries.
This sharing is done in a way that is invisible to applications.

Static and Dynamic Groups

Static groups. Explicitly
list each member, typically in the member or uniqueMember attribute of the
group entry. These groups are static because membership changes only when
the list of members is modified.

Dynamic groups. Specify
a filter, and all entries that match the filter are members of the group. These
groups are dynamic because membership is determined each time the filter is
evaluated. Example entries for both group types are shown after this list.
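
The following entries sketch the two group types in LDIF. The DNs and attribute
values are placeholders, and the groupOfUniqueNames and groupOfURLs object
classes are one common way to model static and dynamic groups; adapt them to
the schema in use in your deployment.

# Static group: members are listed explicitly.
dn: cn=HR Managers,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfUniqueNames
cn: HR Managers
uniqueMember: uid=asmith,ou=People,dc=example,dc=com
uniqueMember: uid=bjones,ou=People,dc=example,dc=com

# Dynamic group: membership is defined by an LDAP URL filter.
dn: cn=Engineering,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfURLs
cn: Engineering
memberURL: ldap:///ou=People,dc=example,dc=com??sub?(departmentNumber=42)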

Managed, Filtered, and Nested Roles

Roles are an entry grouping
mechanism. Roles enable you to determine role membership as soon as an entry
is retrieved from the directory. Each role has members,
or entries that possess the role. As with groups, you can specify role members
explicitly or dynamically.

Directory Server supports the following three types of roles:

Managed roles. Explicitly
assign a role to member entries.

Filtered roles. Automatically
make entries members if the entries match a specified LDAP filter. In this
way, the role depends on the attributes contained in each entry.

Nested roles. Enable you
to create roles that contain other roles.
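
The following LDIF sketch shows a managed role definition, the assignment of
that role to a member entry, and a filtered role. The DNs and filter are
placeholders; the object class names follow the Directory Server role schema,
but verify them against your server's schema before use.

# Managed role definition.
dn: cn=Marketing,ou=People,dc=example,dc=com
objectClass: top
objectClass: LDAPsubentry
objectClass: nsRoleDefinition
objectClass: nsSimpleRoleDefinition
objectClass: nsManagedRoleDefinition
cn: Marketing

# Assign the managed role to a member entry by writing nsRoleDN;
# the server then computes the virtual nsRole attribute on that entry.
dn: uid=asmith,ou=People,dc=example,dc=com
changetype: modify
add: nsRoleDN
nsRoleDN: cn=Marketing,ou=People,dc=example,dc=com

# Filtered role: membership follows an LDAP filter.
dn: cn=Contractors,ou=People,dc=example,dc=com
objectClass: top
objectClass: LDAPsubentry
objectClass: nsRoleDefinition
objectClass: nsComplexRoleDefinition
objectClass: nsFilteredRoleDefinition
cn: Contractors
nsRoleFilter: (employeeType=contractor)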

Deciding Between Groups and Roles

The functionality of the groups and roles mechanisms overlap somewhat.
Both mechanisms have advantages and disadvantages. Generally, the roles mechanism
is designed to provide frequently required functionality more efficiently.
Because the choice of a grouping mechanism influences server complexity and
determines how clients process membership information, you must plan your
grouping mechanism carefully. To decide which mechanism is more suitable,
you need to understand the typical membership queries and management operations
that are performed.

Advantages of the Groups Mechanism

Groups have the following advantages:

Static groups are the only standards-based grouping mechanism.
Static groups are therefore interoperable with most client applications and
LDAP servers.

Static groups are preferable to roles for enumerating members.

If you only need to enumerate members of a given
set, static groups are less costly. Enumerating members of a static group
by retrieving the member attribute is easier than recovering
all entries that share a role. In Directory Server 6.1,
significant performance improvements have been made for large multi-valued
attributes. Equality matching and modify operations on these attributes are
greatly improved, specifically in relation to static groups. Membership testing
for group entries has also been improved. These improvements remove some of
the previous restrictions on static groups, specifically the restriction on
group size.

Static groups are preferable to roles for management operations
such as assigning and removing members.

Static groups are the
simplest mechanism for assigning a user to a set or removing a user from a
set. Special access rights are not required to add the user to the group.

The right to create the group entry automatically gives you the right
to assign members to that group. This is not the case for managed and filtered
roles. In these roles, the administrator must also have the right to write
the nsroledn attribute to the user entry. The same access
right restrictions also apply indirectly to nested roles. The ability to create
a nested role implies the ability to pull together other roles that have already
been defined.

Dynamic groups are preferable to roles for use in filter-based
ACIs.

If you only need to find all members
based on a filter, such as for designating bind rules in ACIs, use dynamic
groups. Although filtered roles are similar to dynamic groups, filtered roles
trigger the roles mechanism and generate the virtual nsRole attribute.
If your client does not need the nsRole value, use dynamic
groups to avoid the overhead of this computation.

Groups are preferable to roles for adding or removing sets
into or from existing sets.

If you want to add a set to an existing
set, or remove a set from an existing set, the groups mechanism is simplest.
The groups mechanism presents no nesting restrictions. The roles mechanism
only allows nested roles to receive other roles.

Groups are preferable to roles if flexibility of scope for
grouping entries is critical.

Groups are flexible in terms of
scope because the scope for possible members is the entire directory, regardless
of where the group definition entries are located. Although roles can also
extend their scope beyond a given subtree, they can only do so by adding the
scope-extending attribute nsRoleScopeDN to a nested role.

Advantages of the Roles Mechanism

Roles have the following advantages:

Roles are preferable to dynamic groups if you want to enumerate
members of a set and find all sets of which a given entry
is a member. Static groups also provide this functionality with the isMemberOf attribute.

Roles push membership information out to
the user entry where this information can be cached to make subsequent membership
tests more efficient. The server performs all computations, and the client
only needs to read the values of the nsRole attribute.
In addition, all types of roles appear in this attribute, allowing the client
to process all roles uniformly. Roles can perform both operations more efficiently
and with simpler clients than is possible with dynamic groups.

Roles are preferable to groups if you want to integrate your
grouping mechanism with existing Directory Server functionality such as
CoS, Password Policy, Account Inactivation, and ACIs.

If you want
to use the membership of a set “naturally” in the server, roles
are a better option. This implies that you use the membership computations
that the server does automatically. Roles can be used in resource-oriented
ACIs, as a basis for CoS, as part of more complex search filters, and with
Password Policy, Account Inactivation, and so forth. Groups do not allow
this kind of integration.

Restricting Permissions on Roles

Be aware of the following issues when using roles:

The nsRole attribute can only be assigned by
the roles mechanism. While this attribute cannot be assigned or modified by
any directory user, it is potentially readable by any
directory user. Define access controls to keep this attribute from being
read by unauthorized users.

The nsRoleDN attribute defines managed
role membership. You need to decide whether users can add or remove themselves
from the role. To prevent users from modifying their own roles, you must define
an ACI to that effect, as shown in the example after this list.

Filtered roles determine membership through filters that are
based on the existence or the values of attributes in user entries. Carefully
restrict who can write these attributes, because anyone who can modify them
can change membership in the filtered role.
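
For example, an ACI like the following sketch prevents users from adding or
removing their own managed roles. The ACI name is a placeholder, and the ACI
should be placed at an appropriate branch point for your tree.

aci: (targetattr = "nsRoleDN")
 (version 3.0; acl "Users may not modify their own roles";
 deny (write) userdn = "ldap:///self";)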

Managing Attributes With Class of Service

The Class of
Service (CoS) mechanism allows attributes to be shared between entries. Like
the role mechanism, CoS generates virtual attributes on the entries as the
entries are retrieved. CoS does not define membership, but it does allow related
entries to share data for coherency and space considerations. CoS values are
calculated dynamically when the values are requested. CoS functionality and
the various types of CoS are described in detail in the Sun Java System Directory Server Enterprise Edition 6.1 Reference.

The following sections examine the ways in which you can use the CoS
functionality as intended, while avoiding performance pitfalls:

CoS generation always impacts performance. Client applications
that search for more attributes than they actually need can compound the problem.

If you can influence how client applications are written, remind
developers that client applications perform much better when looking up only
those attribute values that they actually need.

Using CoS When Many Entries Share the Same Value

CoS provides substantial benefits for relatively low cost when you need
the same attribute value to appear on numerous entries in a subtree.

Imagine, for example, a directory for MyCompany, Inc. in which every
user entry under ou=People has a companyName attribute.
Contractors have real values for companyName attributes
on their entries, but all regular employees have a single CoS-generated value, MyCompany, Inc., for companyName. The following
figure demonstrates this example with pointer CoS. Notice that CoS generates companyName values for all permanent employees without overriding
the real (not CoS-generated) companyName values stored on
contractor entries. The company name is generated only for those entries
for which companyName is an allowed attribute.

Figure 8–6 Generating CompanyName With Pointer
CoS

In cases where many entries share the same value, pointer CoS works
particularly well. The ease of maintaining companyName for
permanent employees offsets the additional processing cost of generating attribute
values. Deep directory information trees (DITs) tend to bring together entries
that share common characteristics. Pointer CoS can be used in deep DITs to
generate common attribute values by placing CoS definitions at appropriate
branches in the tree.
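
A pointer CoS arrangement like the one in the preceding example can be sketched
with two LDIF entries, a definition and a shared template. The DNs are
placeholders, and the companyName attribute follows the example above; the
template carries extensibleObject here only so that it can hold that attribute.

# CoS definition entry: points at a single template entry.
dn: cn=companyNameCoS,ou=People,dc=example,dc=com
objectClass: top
objectClass: LDAPsubentry
objectClass: cosSuperDefinition
objectClass: cosPointerDefinition
cosTemplateDn: cn=companyNameTemplate,ou=People,dc=example,dc=com
cosAttribute: companyName

# Template entry: holds the shared value that is generated on target entries.
dn: cn=companyNameTemplate,ou=People,dc=example,dc=com
objectClass: top
objectClass: LDAPsubentry
objectClass: cosTemplate
objectClass: extensibleObject
cn: companyNameTemplate
companyName: MyCompany, Inc.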

Using CoS When Entries Have Natural Relationships

Consider an enterprise directory in which every employee has a manager.
Every employee shares a mail stop and fax number with the nearest administrative
assistant. Figure 8–7 demonstrates
the use of indirect CoS to retrieve the department number from the manager
entry. In Figure 8–8, the mail stop
and fax number are retrieved from the administrative assistant entry.

Figure 8–7 Generating DepartmentNumber With
Indirect CoS

In this implementation, the manager’s entry has a real value for departmentNumber, and this real value overrides any generated value.
Directory Server does not generate attribute values from CoS-generated
attribute values. Thus, in the Figure 8–7 example,
the department number attribute value needs to be managed only on the manager's
entry. Likewise, for the example shown in Figure 8–8, mail stop and fax number attributes need to be managed
only on the administrative assistant’s entry.

Figure 8–8 Generating Mail Stop and Fax Number With Indirect
CoS

A single CoS definition entry can be used to exploit relationships such
as these for many different entries in the directory.
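
An indirect CoS definition that pulls departmentNumber from the entry named by
each user's manager attribute, as in the example above, might be sketched as
follows. The DN is a placeholder.

# Indirect CoS definition: for each target entry, follow the DN stored in
# its manager attribute and read departmentNumber from that entry.
dn: cn=departmentCoS,ou=People,dc=example,dc=com
objectClass: top
objectClass: LDAPsubentry
objectClass: cosSuperDefinition
objectClass: cosIndirectDefinition
cosIndirectSpecifier: manager
cosAttribute: departmentNumber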

Another natural relationship is service level. Consider an Internet
service provider that offers customers standard, silver, gold, and platinum
packages. A customer’s disk quota, number of mailboxes, and rights
to prepaid support levels depend on the service level purchased. The following
figure demonstrates how a classic CoS scheme enables this functionality.

Figure 8–9 Generating Service-Level Data With Classic CoS

Avoiding Excessive CoS Definitions

Directory Server optimizes CoS when one classic CoS definition entry
is associated with multiple CoS template entries. Directory Server does
not optimize CoS if many CoS definitions potentially apply. Instead, Directory Server checks
each CoS definition to determine whether the definition applies. This behavior
leads to performance problems if you have thousands of CoS definitions.

This situation can arise in a modified version of the example shown
in Figure 8–9. Consider an Internet
service provider that offers customers delegated administration of their customers’
service level. Each customer provides definition entries for standard, silver,
gold, and platinum service levels. Ramping up to 1000 customers means creating
1000 classic CoS definitions. Directory Server performance would be affected
as it runs through the list of 1000 CoS definitions to determine which apply.
If you must use CoS in this sort of situation, consider indirect CoS. In indirect
CoS, customers’ entries identify the entries that define their class
of service allotments.

When you approach the point of having a different CoS scheme for every one
or two target entries, you are better off updating the real attribute values
instead. You then achieve better performance by reading real values rather
than CoS-generated values.