Execute a slonik script containing the
command update functions (id = [whatever]); for
each node in the cluster.

Note: Remember that your slonik upgrade script, like all other
slonik scripts, must contain the proper preamble commands in order to function.
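As an illustration, such a script might look like the following sketch for a two-node cluster; the cluster name, node IDs, and conninfo strings are hypothetical and must match your own configuration:

```
cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';

update functions (id = 1);
update functions (id = 2);
```

The first three lines are the preamble referred to above; the UPDATE FUNCTIONS commands then load the new version of the stored functions on each node in turn.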

Start all slons.

The overall operation is relatively safe: If there is any
mismatch between component versions, the slon will refuse to start
up, which provides protection against corruption.

You need to be sure that the C library containing SPI trigger
functions has been copied into place in the PostgreSQL build. There
are multiple possible approaches to this:

The trickiest part of this is ensuring that the C library
containing SPI functions is copied into place in the PostgreSQL build;
the easiest and safest way to handle this is to have two separate
PostgreSQL builds, one for each Slony-I version, where the postmaster
is shut down and then restarted against the "new" build;
that approach requires a brief database outage on each node.

While that approach has been found to be easier and safer,
nothing prevents one from carefully copying Slony-I components for
the new version into place to overwrite the old version as
the "install" step. That might not
work on Windows™ if it locks library files that
are in use. It is also important to make sure that any connections
to the database are restarted after the new binary is installed.

Run make install to install the new
Slony-I components on top of the old ones.

If you build Slony-I from sources on the same system on which it
is to be deployed, overwriting the old version with
the new is as easy as make install. There is no
need to restart a database backend; you need only stop the slon processes,
run the UPDATE FUNCTIONS script, and start new
slon processes.
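That sequence might look like the following sketch; the source directory, database names, and the mechanism used to stop and start the slons are all site-specific and shown here only for illustration:

```shell
# Stop the slon daemons (use your site's service mechanism if you have one).
pkill slon

# Overwrite the old Slony-I components with the new ones.
cd /usr/local/src/slony1-2.x    # hypothetical source directory
make install

# Load the new stored functions, then restart the slons.
slonik update_functions.slonik  # hypothetical script as described above
slon testcluster 'dbname=mydb host=node1 user=slony' &
slon testcluster 'dbname=mydb host=node2 user=slony' &
```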

Unfortunately, this approach requires having a build
environment on the same host as the deployment. That may not be
consistent with efforts to use common PostgreSQL and Slony-I binaries
across a set of nodes.

Create a new PostgreSQL and Slony-I build

With this approach, the old PostgreSQL build with the old
Slony-I components persists after switching to a new PostgreSQL build
with the new Slony-I components. In order to switch to the new Slony-I
build, you need to restart the PostgreSQL postmaster, thereby
interrupting applications, so that it becomes aware of the location of the
new components.
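For instance, assuming the new build is installed under /opt/pgsql-new and the old one under /opt/pgsql-old (hypothetical paths), the switch-over on each node might look like:

```shell
# Stop the postmaster running from the old build.
/opt/pgsql-old/bin/pg_ctl -D /var/lib/pgsql/data stop

# Restart it from the new build, which carries the new Slony-I components.
/opt/pgsql-new/bin/pg_ctl -D /var/lib/pgsql/data start
```

This is the brief database outage mentioned above; applications reconnect once the postmaster from the new build is running.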

The TABLE ADD KEY slonik command has been removed in version 2.0.
This means that all tables must have a set of columns that form a
unique key for the table.
If you are upgrading from a previous Slony-I version and are using a
Slony-I-created primary key, then you will need to modify your table
to have its own primary key before installing Slony-I version 2.0.

New in 2.0.5 (but not older versions of 2.0.x) is that
SLONIK SUBSCRIBE SET commands are submitted by
slonik to the set origin, not the provider. This means that you
only need to issue SLONIK WAIT FOR EVENT on the set origin
to wait for the subscription process to complete.
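A sketch of that pattern, with the preamble omitted and with illustrative set and node IDs (node 1 as the set origin and provider, node 2 as the new subscriber):

```
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
wait for event (origin = 1, confirmed = all, wait on = 1);
```

Because the SUBSCRIBE SET event is submitted to the set origin, waiting on the origin (node 1) suffices to know the subscription has propagated.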

The version 2 branch is substantially
different from earlier releases, dropping support for versions of
PostgreSQL prior to 8.3, as in version 8.3, support for a
"session replication role" was added, thereby eliminating
the need for system catalog hacks as well as the
not-entirely-well-supported xxid data type.

As a result of the replacement of the xxid type
with a (native-to-8.3) PostgreSQL transaction XID type, the slonik
command SLONIK UPDATE FUNCTIONS is quite inadequate to
the process of upgrading earlier versions of Slony-I to version
2.

In version 2.0.2, we added a new option to SLONIK SUBSCRIBE SET, OMIT COPY, which
allows an alternative approach to the upgrade, amounting to:

Uninstall old version of Slony-I

When Slony-I uninstalls itself, the system catalog modifications it made are reverted.

Install Slony-I version 2

Resubscribe, with OMIT COPY
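The resubscription step might look like the following slonik fragment (preamble omitted; set and node IDs are illustrative). OMIT COPY tells the subscriber to trust that the existing data is already in sync rather than re-copying it:

```
subscribe set (id = 1, provider = 1, receiver = 2, forward = no, omit copy = true);
```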

Warning

There is a large "foot gun" here: during
part of the process, Slony-I is not installed in any form, and if an
application updates one or another of the databases, the
resubscription, which omits copying the data, will leave the data
out of sync.

The administrator must take care; Slony-I
has no way to help ensure the integrity of the data during this
process.

The following process is suggested to help make the upgrade
process as safe as possible, given the above risks.

Trigger handling has
fundamentally changed between Slony-I 1.2 and 2.0.

Generally speaking, what needs to happen is to query
sl_table on each node and, for any user triggers found on the
replicated tables, set up a
script specifying either ENABLE REPLICA TRIGGER or
ENABLE ALWAYS TRIGGER for each of those triggers.

This step may be done before the application outage.
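One way to find the affected triggers is a query along these lines, run on each node. This is a sketch: it assumes the cluster schema is named _testcluster, it excludes Slony-I's own triggers by name prefix, it generates ENABLE ALWAYS TRIGGER for everything, and the output should be reviewed by hand to decide which triggers instead warrant ENABLE REPLICA TRIGGER before anything is applied:

```sql
-- Generate an ALTER TABLE statement for each user trigger on a replicated table.
SELECT 'ALTER TABLE ' || quote_ident(t.tab_nspname) || '.'
       || quote_ident(t.tab_relname)
       || ' ENABLE ALWAYS TRIGGER ' || quote_ident(tg.tgname) || ';'
  FROM _testcluster.sl_table t
  JOIN pg_catalog.pg_trigger tg ON tg.tgrelid = t.tab_reloid
 WHERE tg.tgname NOT LIKE '\_testcluster%';   -- skip Slony-I's own triggers
```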

Begin an application outage during which updates should no longer be applied to the database.

To ensure that applications cease to make changes, it would be appropriate to lock them out via modifications to pg_hba.conf.
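For example, the following hypothetical pg_hba.conf fragment rejects ordinary application connections while still admitting the replication role; entries are matched top-down, so the permissive line must precede the reject line, and the postmaster must reload its configuration for the change to take effect:

```
# Temporary lock-out during the Slony-I upgrade.
host  mydb  slony  192.0.2.10/32  md5      # admin / slon connections
host  mydb  all    0.0.0.0/0      reject   # everyone else
```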

Ensure replication is entirely caught up, via examination of the sl_status view, and any application data that may seem appropriate.
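Replication can be considered caught up when every row of sl_status shows no outstanding events, for example (assuming the cluster schema is _testcluster):

```sql
-- st_lag_num_events should be 0, and st_lag_time small, on every row.
SELECT st_origin, st_received, st_lag_num_events, st_lag_time
  FROM _testcluster.sl_status;
```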