Streaming Replication

From PostgreSQL wiki

Streaming Replication (SR) provides the capability to continuously ship WAL (XLOG) records to some number of standby servers and apply them there, in order to keep the standbys current.

This feature was added in PostgreSQL 9.0. The discussion below is developer-oriented and contains some out-of-date information. Users of this feature should consult the documentation for the feature or a setup tutorial instead:

Developer and historical details on the project

Usage

Users Overview

Log-shipping

XLOG records generated in the primary are periodically shipped to the standby via the network.

In the existing warm standby, only records in a filled file are shipped, which is referred to as file-based log-shipping. In SR, XLOG records in a partially filled XLOG file are shipped too, implementing record-based log-shipping. This means the window for data loss in SR is usually smaller than in warm standby, unless the warm standby is also configured for record-based shipping (which is complicated to set up).

The contents of the XLOG files written to the standby are exactly the same as those on the primary, so the shipped XLOG files can be used for a normal recovery and PITR.

Multiple standbys

More than one standby can establish a connection to the primary for SR, and XLOG records are shipped concurrently to all of them. The delay or death of one standby does not harm log-shipping to the others.

The maximum number of standbys can be specified as a GUC variable.

Continuous recovery

The standby continuously replays XLOG records shipped without using pg_standby.

XLOG records shipped are replayed as soon as possible, without waiting for the XLOG file to be filled. The combination of Hot Standby and SR makes the latest data inserted into the primary visible on the standby almost immediately.

The standby periodically removes old XLOG files which are no longer needed for recovery, to prevent excessive disk usage.

Setup

The start of log-shipping does not interfere with any query processing on the primary.

The standby can be started under various conditions.

If there are XLOG files in the archive directory and a restore_command is supplied, those files are replayed first. Then the standby requests from the primary the XLOG records following the last applied one. This prevents XLOG files already present on the standby from being shipped again. Similarly, XLOG files in pg_xlog are replayed before log-shipping starts.

If there are no XLOG files on the standby, it requests the XLOG records following the starting XLOG location of recovery (the redo starting location).

Connection settings and authentication

A user can configure the same settings for an SR connection as for a normal connection (e.g., keepalive, pg_hba.conf).

Activation

The standby can keep waiting for activation for as long as the user likes. This prevents the standby from being brought up automatically by a recovery failure or a network outage.

Progress report

The primary and the standby report the progress of log-shipping in their ps display.

Graceful shutdown

When a smart or fast shutdown is requested, the primary waits to exit until its XLOG records, up to the shutdown checkpoint record, have been sent to the standby.

Restrictions

Synchronous log-shipping

By default, SR operates in an asynchronous manner, so a commit command might return "success" to the client before the corresponding XLOG records have been shipped to the standby. To enable synchronous replication, see Synchronous Replication.

Replication beyond timeline

SR cannot follow a timeline switch, so a user has to take a fresh base backup whenever making an old standby (or a former primary) catch up again after failover.

Clustering

Postgres doesn't provide any clustering feature.

How to Use

1. Install postgres in the primary and standby server as usual. This requires only configure, make and make install.

2. Create the initial database cluster in the primary server as usual, using initdb.

3. Set up connections and authentication so that the standby server can successfully connect to the replication pseudo-database on the primary.
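For example, the following pg_hba.conf entry on the primary allows a standby to connect to the replication pseudo-database. The standby address (192.168.0.20) and the trust method are placeholders for illustration; use an authentication method appropriate for your environment:

```
$ $EDITOR pg_hba.conf
# TYPE  DATABASE     USER      CIDR-ADDRESS        METHOD
host    replication  postgres  192.168.0.20/32     trust
```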

4. Set up the streaming replication related parameters on the primary server.

$ $EDITOR postgresql.conf
# To enable read-only queries on a standby server, wal_level must be set to
# "hot_standby". But you can choose "archive" if you never connect to the
# server in standby mode.
wal_level = hot_standby
# Set the maximum number of concurrent connections from the standby servers.
max_wal_senders = 5
# To prevent the primary server from removing the WAL segments required for
# the standby server before shipping them, set the minimum number of segments
# retained in the pg_xlog directory. wal_keep_segments should be at least as
# large as the number of segments generated between the beginning of the
# online backup and the startup of streaming replication. If you enable WAL
# archiving to an archive directory accessible from the standby, this may
# not be necessary.
wal_keep_segments = 32
# Enable WAL archiving on the primary to an archive directory accessible from
# the standby. If wal_keep_segments is a high enough number to retain the WAL
# segments required for the standby server, this is not necessary.
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'
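Note that the plain cp above will silently overwrite an existing archive file. A slightly safer variant, along the lines of what the PostgreSQL documentation suggests, makes archiving fail instead of overwriting:

```
archive_command = 'test ! -f /path_to/archive/%f && cp %p /path_to/archive/%f'
```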

5. Start postgres on the primary server.

6. Make a base backup by copying the primary server's data directory to the standby server.
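One way to take the base backup without stopping the primary is the pg_start_backup/pg_stop_backup pair; the backup label, the standby host name, and the destination path below are assumptions for illustration:

```
$ psql -c "SELECT pg_start_backup('label', true)"
$ rsync -a ${PGDATA}/ standby:/srv/pgsql/standby/ --exclude postmaster.pid
$ psql -c "SELECT pg_stop_backup()"
```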

7. Set up replication-related parameters, connections, and authentication on the standby server as on the primary, so that the standby can work as a primary after failover.

8. Enable read-only queries on the standby server. But if wal_level is archive on the primary, leave hot_standby unchanged (i.e., off).

$ $EDITOR postgresql.conf
hot_standby = on

9. Create a recovery command file in the standby server; the following parameters are required for streaming replication.

$ $EDITOR recovery.conf
# Note that recovery.conf must be in $PGDATA directory.
# Specifies whether to start the server as a standby. In streaming replication,
# this parameter must be set to on.
standby_mode = 'on'
# Specifies a connection string which is used for the standby server to connect
# with the primary.
primary_conninfo = 'host=192.168.0.10 port=5432 user=postgres'
# Specifies a trigger file whose presence should cause streaming replication to
# end (i.e., failover).
trigger_file = '/path_to/trigger'
# Specifies a command to load archive segments from the WAL archive. If
# wal_keep_segments is a high enough number to retain the WAL segments
# required for the standby server, this may not be necessary. But
# a large workload can cause segments to be recycled before the standby
# is fully synchronized, requiring you to start again from a new base backup.
restore_command = 'cp /path_to/archive/%f "%p"'

10. Start postgres in the standby server. It will start streaming replication.

11. You can calculate the replication lag by comparing the current WAL write location on the primary with the last WAL location received/replayed by the standby. They can be retrieved using pg_current_xlog_location on the primary and the pg_last_xlog_receive_location/pg_last_xlog_replay_location on the standby, respectively.
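As a sketch, the lag in bytes can be computed from the two location strings. The helper names below are hypothetical, and the arithmetic assumes the location can be treated as a plain 64-bit position (high 32 bits / low 32 bits), as in later PostgreSQL releases; older servers wrap the low word before 2^32, so treat this as an approximation:

```python
# Illustrative sketch: compute streaming-replication lag in bytes from two
# XLOG location strings such as '0/3000000'. Function names are hypothetical.

def xlog_to_bytes(location: str) -> int:
    """Convert an XLOG location like '0/3000000' into a byte offset."""
    high, low = location.split('/')
    return (int(high, 16) << 32) + int(low, 16)

def replication_lag_bytes(primary: str, standby: str) -> int:
    """Bytes between the primary's write location and the standby's location."""
    return xlog_to_bytes(primary) - xlog_to_bytes(standby)

print(replication_lag_bytes('0/3000000', '0/2000000'))  # prints 16777216 (16 MB)
```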

Repeat the operations from step 6: take a fresh base backup, make some configuration changes, and start the original primary as a standby. The primary server doesn't need to be stopped during these operations.

How to restart streaming replication after the standby fails

Restart postgres in the standby server after eliminating the cause of failure.

How to disconnect the standby from the primary

Create the trigger file on the standby while the primary is running. The standby will then be brought up as a stand-alone server.
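For example, with the recovery.conf shown earlier, touching the configured trigger file ends recovery on the standby:

```
$ touch /path_to/trigger
```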

How to re-synchronize the stand-alone standby after isolation

Shut down the standby as usual, then repeat the operations from step 6.

If you have more than one slave, promoting one will break the other(s). Update their recovery.conf settings to point to the new master, set recovery_target_timeline to 'latest', scp/rsync the pg_xlog directory, and restart the slave.

Future release

Synchronization capability

Introduce a synchronization mode that controls how long a transaction commit waits for replication before returning "success" to the client. The valid modes are async, recv, fsync, and apply.

Add new parameter (replication_timeout_action) to specify the reaction to replication_timeout.

Monitoring

Provide the capability to check the progress and gap of streaming replication via one query. A collaboration of HS and SR is necessary to provide that capability on the standby side.

Provide the capability to check via a query whether the specified replication is in progress. More detailed status information might also be necessary, e.g., whether the standby is still catching up, has already gotten into sync, and so on.

Change the stats collector to collect the statistics information about replication, e.g., average delay of replication time.

Develop the tool to calculate the latest XLOG position from XLOG files. This is necessary to check the gap of replication after the server fails.

Also develop the tool to extract the user-readable contents from XLOG files. This is necessary to see the contents of the gap, and manually restore them.

Easy to Use

Introduce the parameters like:

replication_halt_timeout - replication will halt if no data has been sent for this much time.

replication_halt_segments - replication will halt if number of WAL files in pg_xlog exceeds this threshold.

These parameters allow us to avoid disk overflow.

Add a new feature which also transfers the base backup via the direct connection between the primary and the standby.

Add new hooks like walsender_hook and walreceiver_hook to cooperate with the add-on program for compression like pglesslog.

Provide a graceful termination of replication via a query on the primary. On the standby, a trigger file mechanism already provides that capability.

Support replication beyond timeline. The timeline history files need to be shipped from the primary to the standby.

Robustness

Support keepalive in libpq. This is useful for a client and the standby to detect a failure of the primary immediately.