A CASE expression appearing within the
test value subexpression of another CASE
could become confused about whether its own test value was null or
not. Also, inlining of a SQL function implementing the equality
operator used by a CASE expression could
result in passing the wrong test value to functions called within a
CASE expression in the SQL function's
body. If the test values were of different data types, a crash
might result; moreover, such situations could be abused to allow
disclosure of portions of server memory. (CVE-2016-5423)
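
As an illustrative sketch of the affected construct (the table and
column names here are hypothetical):

    -- An inner CASE nested inside the test-value subexpression of
    -- an outer CASE
    SELECT CASE (CASE WHEN val IS NULL THEN 0 ELSE val END)
               WHEN 0 THEN 'unset'
               ELSE 'set'
           END
    FROM example_table;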

Numerous places in vacuumdb and
other client programs could become confused by database and role
names containing double quotes or backslashes. Tighten up quoting
rules to make that safe. Also, ensure that when a conninfo string
is used as a database name parameter to these programs, it is
correctly treated as such throughout.
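
For example (with a hypothetical database name), a name containing a
double quote should now be handled safely:

    # shell session; the database name is made up for illustration
    createdb 'maint"db'
    vacuumdb --analyze 'maint"db'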

Fix handling of paired double quotes in psql's \connect and
\password commands to match the
documentation.

Introduce a new -reuse-previous option
in psql's \connect command to allow explicit control of
whether to re-use connection parameters from a previous connection.
(Without this, the choice is based on whether the database name
looks like a conninfo string, as before.) This allows secure
handling of database names containing special characters in
pg_dumpall scripts.
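
A sketch of the new syntax (the database name is hypothetical; the
doubled quotes escape an embedded quote, per the \connect fix above):

    -- within a psql session
    \connect -reuse-previous=on "my ""odd"" database"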

pg_dumpall now refuses to deal
with database and role names containing carriage returns or
newlines, as it seems impractical to quote those characters safely
on Windows. In future we may reject such names on the server side,
but that step has not been taken yet.

These are considered security fixes because crafted object names
containing special characters could have been used to execute
commands with superuser privileges the next time a superuser
executes pg_dumpall or other
routine maintenance operations. (CVE-2016-5424)

The SQL standard specifies that IS NULL
should return TRUE for a row of all null values (thus ROW(NULL,NULL) IS NULL yields TRUE), but this is not
meant to apply recursively (thus ROW(NULL,
ROW(NULL,NULL)) IS NULL yields FALSE). The core executor got
this right, but certain planner optimizations treated the test as
recursive (thus producing TRUE in both cases), and contrib/postgres_fdw could produce remote queries
that misbehaved similarly.
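
The two cases, as described above:

    SELECT ROW(NULL, NULL) IS NULL;             -- true: every field is null
    SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;  -- false: the test does not recurse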

In several cases the to_number()
function would read one more character than it should from the
input string. There is a small chance of a crash if the input
happens to be adjacent to the end of memory.
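
For reference, a typical call to the affected function looks like
this (the format string follows the documentation's examples):

    SELECT to_number('12,454.8-', '99G999D9S');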

Do not run the planner on the query contained in CREATE MATERIALIZED VIEW or CREATE TABLE AS when WITH NO
DATA is specified (Michael Paquier, Tom Lane)

This avoids some unnecessary failure conditions, for example if
a stable function invoked by the materialized view depends on a
table that doesn't exist yet.
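
A minimal sketch (the object and function names are hypothetical):

    -- With WITH NO DATA the query is no longer planned at creation
    -- time, so my_stable_func's dependencies need not exist yet
    CREATE MATERIALIZED VIEW mv AS
        SELECT my_stable_func(id) FROM source_table
    WITH NO DATA;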

Previously, certain operations locked the target tuple (by setting its
XMAX) but did not WAL-log that action, thus risking data integrity
problems if the page were spilled to disk and then a database crash
occurred before the tuple update could be completed.

The statistics collector failed to update the statistics file
for shared catalogs after a request from a regular backend. This
problem was partially masked because the autovacuum launcher
regularly makes requests that do cause such updates; however, the
problem became obvious when autovacuum was disabled.

Some cases in VACUUM unnecessarily
caused an XID to be assigned to the current transaction. Normally
this is negligible, but if one is up against the XID wraparound
limit, consuming more XIDs during anti-wraparound vacuums is a very
bad thing.
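
One common way to gauge how close a database is to the wraparound
limit (an ordinary catalog query, not specific to this fix):

    SELECT datname, age(datfrozenxid)
    FROM pg_database
    ORDER BY 2 DESC;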

If we're only analyzing some columns, we should not prevent
routine auto-analyze from happening for the other columns.
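
For example (table and column names hypothetical):

    -- Analyzing one column should no longer suppress auto-analyze
    -- of the table's other columns
    ANALYZE example_table (hot_column);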

Fix ANALYZE's overestimation of
n_distinct for a unique or nearly-unique
column with many null entries (Tom Lane)

The nulls could get counted as though they were themselves
distinct values, leading to serious planner misestimates in some
types of queries.
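
The estimates in question can be inspected after a fresh ANALYZE
(the table name is hypothetical):

    SELECT attname, null_frac, n_distinct
    FROM pg_stats
    WHERE tablename = 'example_table';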

Prevent autovacuum from starting multiple workers for the same
shared catalog (Álvaro Herrera)

Normally this isn't much of a problem because the vacuum doesn't
take long anyway; but in the case of a severely bloated catalog, it
could result in all but one worker uselessly waiting instead of
doing useful work on other tables.

Make sure that the worker processes will exit promptly, and also
arrange to send query-cancel requests to the connected backends, in
case they are doing something long-running such as a CREATE INDEX.

Fix error reporting in parallel pg_dump and pg_restore (Tom Lane)

Previously, errors reported by pg_dump or pg_restore worker processes might never make
it to the user's console, because the messages went through the
master process, and there were various deadlock scenarios that
would prevent the master process from passing on the messages.
Instead, just print everything to stderr.
In some cases this will result in duplicate messages (for instance,
if all the workers report a server shutdown), but that seems better
than no message.

Ensure that parallel pg_dump or
pg_restore on Windows will shut
down properly after an error (Kyotaro Horiguchi)

Previously, it would report the error, but then just sit until
manually stopped by the user.

Make pg_dump behave better when
built without zlib support (Kyotaro Horiguchi)

It didn't work right for parallel dumps, and emitted some rather
pointless warnings in other cases.
