OpenLDAP Everywhere Reloaded, Part I

Directory services are among the most interesting and crucial
parts of computing today. They provide our account management, basic
authentication, address books and a back-end repository for the
configuration of many other important applications.

In this multipart series, I cover how to engineer an OpenLDAP directory
service to create a unified login for heterogeneous environments. With
current software and a modern approach to server design, the aim is to reduce
the number of single points of failure for the directory.

In this article, I describe how to configure two Linux servers to host core network services
required for clients to query the directory service. I configure these
core services to be highly available through the use of failover pools
and/or replication.

Figure 1. An Overall View of Core Network Services, Including LDAP
(Note: the hard disk icon in this figure was taken from the Open Icon
Library Project: http://openiconlibrary.sourceforge.net.)

Assumptions and Prerequisites

Certain approaches were taken in this design with small-to-medium
enterprises (SMEs) in mind. You may wish to custom-tailor the design if
you are a small-to-medium business (SMB) or large-scale enterprise.

The servers discussed in this article were installed with the latest
stable version of Debian GNU/Linux. At the time of this writing, this
was Debian 6.0.2.1 (Squeeze). Although this has not been tested on Ubuntu,
Ubuntu users should be able to log in as root (run sudo su -) and follow
along with few problems.

As per Figure 1, the fictional local domain name is
example.com. Four fictitious subnetworks exist: 192.168.1.0/24,
192.168.2.0/24, 192.168.3.0/24 and 192.168.4.0/24. Routing between
the four subnets is assumed to be working correctly. Where appropriate,
please substitute applicable values for your domain name, IP addresses,
netmask addresses and so on.

LDAP users are assumed to have home directories in /export/home rather
than /home. This allows LDAP credentials to be compatible with operating
systems other than Linux. Many proprietary UNIXes, for example, use
/export/home as the default home directory, and /home on Solaris is
reserved for other purposes (automount directories).

The design assumes that /export/home is actually a shared disk.

This is typically implemented as a mountpoint to an NFS server on a host
or NAS; however, the design makes no determination about how to achieve
the shared disk. That is beyond the scope of this article, so I'm leaving
it to the reader to decide how to implement it.
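As a hedged sketch only (the NFS server name nas01.example.com and the
mount options are assumptions for illustration, not part of the design),
a shared /export/home mounted from an NFS server could look like this
in /etc/fstab on each Linux server:

```
# /etc/fstab -- hypothetical entry mounting a shared /export/home over NFS
# nas01.example.com is an assumed NFS server name; substitute your own
nas01.example.com:/export/home  /export/home  nfs  rw,hard,intr  0  0
```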

You can opt not to implement the shared disk, but there are some serious
drawbacks if you don't. The administrator will need to create each LDAP
user's $HOME directory manually on every server that user wishes to log
in to, before the first login. Also, the files a user creates on one
server will not be available on other servers unless the user copies them
over manually. This is a major inconvenience for users and wastes server
disk space (and backup tape space) because of the duplication of data.

All example passwords are set to "linuxjournal", and it's
assumed you'll replace these with your own sensible values.

Install Packages

On both linux01.example.com and linux02.example.com, use your preferred
package manager to install the ntp, bind9, bind9utils, dnsutils,
isc-dhcp-server, slapd and ldap-utils packages.
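On Debian, for example, this can be done as root with apt-get (a sketch
only; your preferred package manager or front end will work equally well):

```shell
# Install all required packages on linux01 and linux02 (run as root)
apt-get update
apt-get install ntp bind9 bind9utils dnsutils \
        isc-dhcp-server slapd ldap-utils
```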

Start with Accurate Timekeeping (NTP)

Accurate timekeeping between the two Linux servers is a requirement for
DHCP failover.
There are additional benefits in having accurate time, namely:

It's required if you intend to implement (or already have implemented) secure
authentication with Kerberos.

It's required if you intend to have some form of Linux integration with
Microsoft Active Directory.

It's required if you intend to use N-Way Multi-Master replication in OpenLDAP.

It greatly assists in troubleshooting, eliminating the guesswork when comparing logfile timestamps between servers, networking equipment and client devices.

Once ntp is installed on both
linux01.example.com and linux02.example.com,
you are practically finished. The Debian NTP team provides very sensible
defaults for ntp.conf(5). Time sources such as 0.debian.pool.ntp.org
and 1.debian.pool.ntp.org will work adequately for most scenarios.

If you prefer to use your own time sources, you can modify the lines
beginning with server in /etc/ntp.conf. Replace the address with that
of your preferred time source(s).
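For example (a sketch; ntp1.example.com and ntp2.example.com are
placeholder names for your own time sources, not values from the design):

```
# /etc/ntp.conf -- replace the default Debian pool entries with your own
server ntp1.example.com iburst
server ntp2.example.com iburst
```

Restart the ntp service after editing the file so the changes take effect.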

You can check on both servers whether your NTP configuration is correct
with the ntpq(1) command.
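For example (the peer names and figures below are illustrative only;
your output will differ):

```shell
ntpq -p
#      remote           refid      st t when poll reach   delay   offset  jitter
# ==============================================================================
# *0.debian.pool.n .GPS.            1 u   52   64  377   12.345    0.678   0.123
# +1.debian.pool.n 192.0.2.10       2 u   47   64  377   18.910   -1.112   0.456
```

An asterisk marks the peer currently selected for synchronization, and a
reach value of 377 (octal) means the last eight polls all succeeded.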

Don't be concerned if your ntpq output shows a different set of
servers. The *.pool.ntp.org addresses are DNS round-robin records that
spread queries across hundreds of different NTP servers. The
important thing is to check that ntp can contact upstream NTP servers.

Name Resolution (DNS)

If LDAP clients can't resolve the hostnames of the Linux servers
that run OpenLDAP, they can't connect to the directory services those
servers provide. This can include the inability to retrieve basic UNIX
account information for authentication, which will prevent user logins.

As such, configure ISC BIND to provide DNS zones in a
master/slave combination between the two Linux servers. The example workstations
will be configured (through DHCP) to query DNS on linux01.example.com first,
then linux02.example.com if the first query fails.

Note: /etc/bind/named.conf is normally replaced by the package manager
when the bind9 package is upgraded. For this reason, Debian's default
named.conf has an include /etc/bind/named.conf.local statement, so that
site-local zone configurations added there survive package upgrades.
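As a hedged illustration only (the zone file paths and server IP
addresses are assumptions for the example network, not values taken from
the design), the site-local zone definitions might look like this:

```
// /etc/bind/named.conf.local on linux01.example.com (master)
zone "example.com" {
        type master;
        file "/etc/bind/db.example.com";
        allow-transfer { 192.168.1.2; };   // assumed IP of linux02 (slave)
};

// /etc/bind/named.conf.local on linux02.example.com (slave)
zone "example.com" {
        type slave;
        file "/var/cache/bind/db.example.com";
        masters { 192.168.1.1; };          // assumed IP of linux01 (master)
};
```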

Stewart Walters is a Solutions Architect with more than 15 years' experience
in the Information Technology industry. Amongst other industry
certifications, he is a Senior Level Linux Professional (LPIC-3).
