Renaming varchar_transform to varchar_support had a side effect
I hadn't foreseen: the core regression tests leave around a
transform object that relies on that function, so the name
change breaks cross-version upgrade tests, because the name
used in the older branches doesn't match.
Since the dependency on varchar_transform was chosen with the
aid of a dartboard anyway (it would surely not work as a
language transform support function), fix by just choosing
a different random builtin function with the right signature.
Also add some comments explaining why this isn't horribly unsafe.
I chose to make the same substitution in a couple of other
copied-and-pasted test cases, for consistency, though those
aren't directly contributing to the testing problem.
Per buildfarm. Back-patch, else it doesn't fix the problem.

warn_or_exit_horribly() was blithely passing a potentially-NULL
string pointer to a %s format specifier. That works (at least
to the extent of not crashing) on some platforms, but not all,
and since we switched to our own snprintf.c it doesn't work
for us anywhere.
Of the three string fields being handled this way here, I think
that only "owner" is supposed to be nullable ... but considering
that this is error-reporting code, it has very little business
assuming anything, so put in defenses for all three.
Per a crash observed on buildfarm member crake and then
reproduced here. Because of the portability aspect,
back-patch to all supported versions.
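
As an illustration of the defense pattern (a standalone sketch; the
function and message text are made up, not the actual code):

    #include <stdio.h>

    /*
     * Sketch only: guard each possibly-NULL string before handing it to %s,
     * since our snprintf.c does not tolerate NULL string arguments.
     */
    static void
    report_item(const char *desc, const char *namespace, const char *owner)
    {
        fprintf(stderr, "processing %s (namespace: %s, owner: %s)\n",
                desc ? desc : "(null)",
                namespace ? namespace : "(null)",
                owner ? owner : "(null)");
    }

    int
    main(void)
    {
        report_item("TABLE DATA foo", NULL, NULL);      /* must not crash */
        return 0;
    }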

While this isn't really supposed to happen, it can occur in OOM
situations and perhaps others. Instead of crashing, substitute
"(no message provided)".
I didn't worry about localizing this text, since we aren't
localizing anything else here; besides, if we're on the edge of
OOM, it's unlikely gettext() would work.
Report and fix by Sergio Conde Gómez in bug #15624.
Discussion: https://postgr.es/m/15624-4dea54091a2864e6@postgresql.org

Also clean up some discussion that had been left in a very confused
state thanks to half-hearted adjustments for the change to
standard_conforming_strings being the default.
Discussion: https://postgr.es/m/154954987367.1297.4358910045409218@wrigleys.postgresql.org

As reported in bug #15613 from Srinivasan S A, file_fdw and postgres_fdw
neglected to mark plain baserel foreign paths as parameterized when the
relation has lateral_relids. Other FDWs have surely copied this mistake,
so rather than just patching those two modules, install a band-aid fix
in create_foreignscan_path to rectify the mistake centrally.
Although the band-aid is enough to fix the visible symptom, correct
the calls in file_fdw and postgres_fdw anyway, so that they are valid
examples for external FDWs.
Also, since the band-aid isn't enough to make this work for parameterized
foreign joins, throw an elog(ERROR) if such a case is passed to
create_foreignscan_path. This shouldn't pose much of a problem for
existing external FDWs, since it's likely they aren't trying to make such
paths anyway (though some of them may need a defense against joins with
lateral_relids, similar to the one this patch installs into postgres_fdw).
Add some assertions in relnode.c to catch future occurrences of the same
error --- in particular, as backstop against core-code mistakes like the
one fixed by commit bdd9a99aa.
Discussion: https://postgr.es/m/15613-092be1be9576c728@postgresql.org

The modules RewindTest.pm and ServerSetup.pm are really only useful for
TAP tests, so they belong in the TAP test directories. In
addition, ServerSetup.pm is renamed to SSLServer.pm.
The test scripts have their own directories added to the search path so
that the relocated modules will be found, regardless of where the tests
are run from, even on modern perl where "." is no longer in the
search path.
Discussion: https://postgr.es/m/e4b0f366-269c-73c3-9c90-d9cb0f4db1f9@2ndQuadrant.com
Backpatch as appropriate to 9.5

create_lateral_join_info() computes a bunch of information about lateral
references between base relations, and then attempts to propagate those
markings to appendrel children of the original base relations. But the
original coding neglected the possibility of indirect descendants
(grandchildren etc). During v11 development we noticed that this was
wrong for partitioned-table cases, but failed to realize that it was just
as wrong for any appendrel. While the case can't arise for appendrels
derived from traditional table inheritance (because we make a flat
appendrel for that), nested appendrels can arise from nested UNION ALL
subqueries. Failure to mark the lower-level relations as having lateral
references leads to confusion in add_paths_to_append_rel about whether
unparameterized paths can be built. It's not very clear whether that
leads to any user-visible misbehavior; the lack of field reports suggests
that it may cause nothing worse than minor cost misestimation. Still,
it's a bug, and it leads to failures of Asserts that I intend to add
later.
To fix, we need to propagate information from all appendrel parents,
not just those that are RELOPT_BASERELs. We can still do it in one
pass, if we rely on the append_rel_list to be ordered with ancestor
relationships before descendant ones; add assertions checking that.
While fixing this, we can make a small performance improvement by
traversing the append_rel_list just once instead of separately for
each appendrel parent relation.
Noted while investigating bug #15613, though this patch does not fix
that (which is why I'm not committing the related Asserts yet).
Discussion: https://postgr.es/m/3951.1549403812@sss.pgh.pa.us

Contrary to the comment on 772d4b76, only paths starting with "./" or
"../" are considered relative to the current working directory by perl's
"do" function. So this patch converts all the relevant cases to use "./"
paths. This only affects MSVC.
Backpatch to all live branches.

Use "do" instead of "require" for included files, as it doesn't look for
them in the search path but relative to the current working directory.
These changes have already been made to REL_10_STABLE and later, to
satisfy the demands of perlcritic, but need backporting now to earlier
branches.

DST law changes in Kazakhstan, Metlakatla, and São Tomé and Príncipe.
Kazakhstan's Qyzylorda zone is split in two, creating a new zone
Asia/Qostanay, as some areas did not change UTC offset.
Historical corrections for Hong Kong and numerous Pacific islands.

Historically we've had each release branch include all prior branches'
notes, including minor-release changes, back to the beginning of the
project. That's basically an O(N^2) proposition, and it was starting to
catch up with us: as of HEAD the back-branch release notes alone accounted
for nearly 30% of the documentation. While there's certainly some value
in easy access to back-branch notes, this is getting out of hand.
Hence, switch over to the rule that each branch contains only its own
release notes. So as to not make older notes too hard to find, each
branch will provide URLs for the immediately preceding branches'
release notes on the project website.
There might be value in providing aggregated notes across all branches
somewhere on the website, but that's a task for another day.
Discussion: https://postgr.es/m/cbd4aeb5-2d9c-8b84-e968-9e09393d4c83@postgresql.org

Commit 62215de29 turns out to have been not quite on-the-mark.
When we are forced to postpone dumping of a materialized view into
the dump's post-data section (because it depends on a unique index
that isn't created till that section), we may also have to postpone
dumping other matviews that depend on said matview. The previous fix
didn't reliably work for such cases: it'd break the dependency loops
properly, producing a workable object ordering, but it didn't
necessarily mark all the matviews as "postponed_def". This led to
harmless bleating about "archive items not in correct section order",
as reported by Tom Cassidy in bug #15602. Less harmlessly,
selective-restore options such as --section might misbehave due to
the matview dump objects not being properly labeled.
The right way to fix it is to consider that each pre-data dependency
we break amounts to moving the no-longer-dependent object into
post-data, and hence we should mark that object if it's a matview.
Back-patch to all supported versions, since the issue's been there
since matviews were introduced.
Discussion: https://postgr.es/m/15602-e895445f73dc450b@postgresql.org

Rather than define ld_library_path_ver with a big nested $(if), just
put the overriding values in the makefiles for the relevant ports.
Also add a variable for port makefiles to append their own stuff to
with_temp_install, and use it to set LD_LIBRARY_PATH_RPATH=1 on
FreeBSD which is needed to make LD_LIBRARY_PATH override DT_RPATH
if DT_RUNPATH is not set (which seems to depend in unpredictable ways
on the choice of compiler, at least on my system).
Backpatch for the benefit of anyone doing regression tests on FreeBSD.
(For other platforms there should be no functional change.)

Add PG_CFLAGS, PG_CXXFLAGS, and PG_LDFLAGS variables to pgxs.mk which
will be appended or prepended to the corresponding make variables.
Notably, there was previously no way to pass custom CXXFLAGS to third
party extension module builds, COPT and PROFILE supporting only CFLAGS
and LDFLAGS.
Backpatch all the way down to ease integration with existing
extensions.
Author: Christoph Berg
Reviewed-by: Andres Freund, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20181113104005.GA32154@msg.credativ.de
Backpatch-through: 9.4

To avoid deadlock, a backend acquires locks on heap pages in block
number order. In certain cases the locks on heap pages are dropped and
reacquired, in order to read in the corresponding visibility map
page(s). The issue is that we reacquired the locks in buffer-ID order,
whereas the intention was to acquire them in block-number order.
This commit ensures that we always acquire locks on heap pages in
block-number order.
Reported-by: Nishant Fnu
Author: Nishant Fnu
Reviewed-by: Amit Kapila and Robert Haas
Backpatch-through: 9.4
Discussion: https://postgr.es/m/5883C831-2ED1-47C8-BFAC-2D5BAE5A8CAE@amazon.com
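
To illustrate the ordering rule in isolation (a generic sketch using
pthread mutexes; the real code in hio.c works with shared buffer locks):

    #include <pthread.h>
    #include <stdio.h>

    typedef struct Page
    {
        unsigned        blkno;          /* block number on disk */
        pthread_mutex_t lock;
    } Page;

    /*
     * Always take the lock on the lower block number first.  Ordering by
     * anything else (such as the buffer ID a page happens to occupy) can
     * deadlock against a backend following the block-number rule.
     */
    static void
    lock_two_pages(Page *a, Page *b)
    {
        if (a->blkno < b->blkno)
        {
            pthread_mutex_lock(&a->lock);
            pthread_mutex_lock(&b->lock);
        }
        else
        {
            pthread_mutex_lock(&b->lock);
            pthread_mutex_lock(&a->lock);
        }
    }

    int
    main(void)
    {
        static Page p1 = {42, PTHREAD_MUTEX_INITIALIZER};
        static Page p2 = {7, PTHREAD_MUTEX_INITIALIZER};

        lock_two_pages(&p1, &p2);       /* locks block 7, then block 42 */
        printf("locked blocks %u and %u in block-number order\n",
               p2.blkno, p1.blkno);
        pthread_mutex_unlock(&p1.lock);
        pthread_mutex_unlock(&p2.lock);
        return 0;
    }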

Fix use of dangling pointer in heap_delete() when logging replica identity

When logging the replica identity of a deleted tuple, XLOG_HEAP_DELETE
records include references to the old tuple. Its data is stored in an
intermediate variable used to register this information for the WAL
record, but this variable had already gone out of scope (off the stack)
by the time the record was actually inserted.
Spotted by clang's AddressSanitizer.
Author: Stas Kelvich
Discussion: https://postgr.es/m/085C8825-AD86-4E93-AF80-E26CDF03D1EA@postgrespro.ru
Backpatch-through: 9.4

The bug was that determining which columns are part of the replica
identity index using RelationGetIndexAttrBitmap() would run
eval_const_expressions() on index expressions and predicates across
all indexes of the table, which in turn might require a snapshot, but
there wasn't one set, so it crashed. There were actually two separate
bugs, one on the publisher and one on the subscriber.
To trigger the bug, a table that is part of a publication or
subscription needs to have an index with a predicate or expression
that lends itself to constant expressions simplification.
The fix is to avoid the constant expressions simplification in
RelationGetIndexAttrBitmap(), so that it becomes safe to call in these
contexts. The constant expressions simplification comes from the
calls to RelationGetIndexExpressions()/RelationGetIndexPredicate() via
BuildIndexInfo(). But RelationGetIndexAttrBitmap() calling
BuildIndexInfo() is overkill. The latter just takes pg_index catalog
information, packs it into the IndexInfo structure, which the former
then just unpacks again and throws away. We can do this directly with
less overhead and skip the troublesome calls to
eval_const_expressions(). This also removes the awkward
cross-dependency between relcache.c and index.c.
Bug: #15114
Reported-by: Петър Славов <pet.slavov@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/152110589574.1223.17983600132321618383@wrigleys.postgresql.org/

Previously, \g would successfully execute the COPY command, but
the target specification, if any, was ignored, so that the data was
always dumped to the regular query output target. This seems like
a clear bug, so let's not just fix it but back-patch it.
While at it, adjust the documentation for \copy to recommend
"COPY ... TO STDOUT \g foo" as a plausible alternative.
Back-patch to 9.5. The problem exists much further back, but the
code associated with \g was refactored enough in 9.5 that we'd
need a significantly different patch for 9.4, and it doesn't
seem worth the trouble.
Daniel Vérité, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/15dadc39-e050-4d46-956b-dcc4ed098753@manitou-mail.org

Since LISTEN is (still) disallowed, UNLISTEN must be a no-op in a
hot-standby session, and so there's no harm in allowing it. This
change allows client code to not worry about whether it's connected
to a primary or standby server when performing session-state-reset
type activities. (Note that DISCARD ALL, which includes UNLISTEN,
was already allowed, making it inconsistent to reject UNLISTEN.)
Per discussion, back-patch to all supported versions.
Shay Rojansky, reviewed by Mi Tar
Discussion: https://postgr.es/m/CADT4RqCf2gA_TJtPAjnGzkC3ZiexfBZiLmA-mV66e4UyuVv8bA@mail.gmail.com

A report from Andrew Dunstan showed that an ecpglib breakage that
causes repeated query failures could lead to infinite loops in some
ecpg test scripts, because they contain "while(1)" loops with no
exit condition other than successful test completion. That might
be all right for manual testing, but it seems entirely unacceptable
for automated test environments such as our buildfarm. We don't
want buildfarm owners to have to intervene manually when a test
goes wrong.
To fix, just change all those while(1) loops to exit after at most
100 iterations (which is more than any of them expect to iterate).
This seems sufficient since we'd see discrepancies in the test output
if any loop executed the wrong number of times.
I tested this by dint of intentionally breaking ecpg_do_prologue
to always fail, and verifying that the tests still got to completion.
Back-patch to all supported branches, since the whole point of this
exercise is to protect the buildfarm against future mistakes.
Discussion: https://postgr.es/m/18693.1548302004@sss.pgh.pa.us
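
A standalone sketch of the loop change (the test-step function here is
invented; the real ecpg test programs loop over query results):

    #include <stdio.h>

    /* Invented stand-in for one step of an ecpg test loop. */
    static int
    do_one_test_step(int i)
    {
        return (i < 5) ? 0 : -1;    /* pretend the data runs out after 5 rows */
    }

    int
    main(void)
    {
        int     i;

        /*
         * Formerly while(1): cap the iterations so that a broken ecpglib
         * cannot hang an automated run.  100 is far more than any test
         * expects to need, and a wrong iteration count still shows up as a
         * diff against the expected output.
         */
        for (i = 0; i < 100; i++)
        {
            if (do_one_test_step(i) < 0)
                break;
        }
        printf("stopped after %d iterations\n", i);
        return 0;
    }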

Apparently, some builds of MinGW contain a version of
_configthreadlocale() that always returns -1, indicating failure.
Rather than treating that as a curl-up-and-die condition, soldier on
as though the function didn't exist. This leaves us without thread
safety on such MinGW versions, but we didn't have it anyway.
Discussion: https://postgr.es/m/d06a16bc-52d6-9f0d-2379-21242d7dbe81@2ndQuadrant.com

ecpglib attempts to force the LC_NUMERIC locale to "C" while reading
server output, to avoid problems with strtod() and related functions.
Historically it's just issued setlocale() calls to do that, but that
has major problems if we're in a threaded application. setlocale()
itself is not required by POSIX to be thread-safe (and indeed is not,
on recent OpenBSD). Moreover, its effects are process-wide, so that
we could cause unexpected results in other threads, or another thread
could change our setting.
On platforms having uselocale(), which is required by POSIX:2008,
we can avoid these problems by using uselocale() instead. Windows
goes its own way as usual, but we can make it safe by using
_configthreadlocale(). Platforms having neither continue to use the
old code, but that should be pretty much nobody among current systems.
(Subsequent buildfarm results show that recent NetBSD versions still
lack uselocale(), but it's not a big problem because they also do not
support non-"C" settings for LC_NUMERIC.)
Back-patch of commits 8eb4a9312 and ee27584c4.
Michael Meskes and Tom Lane; thanks also to Takayuki Tsunakawa.
Discussion: https://postgr.es/m/31420.1547783697@sss.pgh.pa.us
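
A minimal sketch of the uselocale() approach used on POSIX:2008
platforms (error handling abbreviated; ecpglib's real code also covers
the Windows and legacy-setlocale paths):

    #define _POSIX_C_SOURCE 200809L

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        locale_t    clocale;
        locale_t    oldlocale;
        double      val;

        /*
         * Build a "C" locale object once; unlike setlocale(), switching to
         * it with uselocale() affects only the calling thread and does not
         * disturb other threads' settings.
         */
        clocale = newlocale(LC_NUMERIC_MASK, "C", (locale_t) 0);
        if (clocale == (locale_t) 0)
            return 1;

        oldlocale = uselocale(clocale);     /* switch this thread to "C" */
        val = strtod("3.14", NULL);         /* parse server output safely */
        uselocale(oldlocale);               /* restore the caller's locale */

        freelocale(clocale);
        printf("%g\n", val);
        return 0;
    }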

This reverts commit f57b94d9d0bbf42a18690d820f884dc0fc6bf79e.
Per discussion, it's not desirable to add valgrind suppressions for code
outside our own code base (e.g. glibc in this case), especially when
the suppressions may be platform-specific. There are better ways to
deal with that, e.g. by providing local suppressions.
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com

Recent OpenBSD (at least 5.9 and up) has a version of getopt(3)
that will not cope with the "-:" spec we use to accept double-dash
options in postgres.c and postmaster.c. Admittedly, that's a hack
because POSIX only requires getopt() to allow alphanumeric option
characters. I have no desire to find another way, however, so
let's just do what we were already doing on Solaris: force use
of our own src/port/getopt.c implementation.
In passing, improve some of the comments around said implementation.
Per buildfarm and local testing. Back-patch to all supported branches.
Discussion: https://postgr.es/m/30197.1547835700@sss.pgh.pa.us
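
For reference, a standalone sketch of the "-:" trick being discussed:
with a getopt() that tolerates a non-alphanumeric option character,
"--name=value" comes back as option '-' with "name=value" in optarg,
which is exactly what the affected OpenBSD getopt refuses to do.

    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
        int     c;

        /*
         * "D:" is an ordinary option taking an argument; "-:" asks getopt()
         * to treat '-' itself as an option with an argument, which is how
         * "--name=value" ends up as optarg "name=value".  POSIX only
         * requires alphanumeric option characters, so not every getopt()
         * accepts this.
         */
        while ((c = getopt(argc, argv, "D:-:")) != -1)
        {
            switch (c)
            {
                case 'D':
                    printf("data directory: %s\n", optarg);
                    break;
                case '-':
                    printf("long option: %s\n", optarg);
                    break;
                default:
                    return 1;
            }
        }
        return 0;
    }

Invoked as "demo -D /tmp --work_mem=64MB", a cooperative getopt()
reports the last argument as option '-' with optarg "work_mem=64MB".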

Previously, parseCheckAggregates was run before
assign_query_collations, but this causes problems if any expression
has already had a collation assigned by some transform function (e.g.
transformCaseExpr) before parseCheckAggregates runs. The differing
collations would cause expressions not to be recognized as equal to
the ones in the GROUP BY clause, leading to spurious errors about
unaggregated column references.
The result was that CASE expr WHEN val ... would fail when "expr"
contained a GROUPING() expression or matched one of the group by
expressions, and where collatable types were involved; whereas the
supposedly identical CASE WHEN expr = val ... would succeed.
Backpatch all the way; this appears to have been wrong ever since
collations were introduced.
Per report from Guillaume Lelarge, analysis and patch by me.
Discussion: https://postgr.es/m/CAECtzeVSO_US8C2Khgfv54ZMUOBR4sWq+6_bLrETnWExHT=rFg@mail.gmail.com
Discussion: https://postgr.es/m/87muo0k0c7.fsf@news-spur.riddles.org.uk

This will enable things like the buildfarm client to discover more
reliably if certain libraries have been installed.
Discussion: https://postgr.es/m/859e7c91-7ef4-d4b4-2ca2-8046e0cbee09@2ndQuadrant.com
Backpatch to all live branches.

runtime.sgml said that you couldn't change SysV IPC parameters on OpenBSD
except by rebuilding the kernel. That's definitely wrong in OpenBSD 6.x,
and excavation in their man pages says it changed in OpenBSD 3.3.
Update NetBSD and OpenBSD sections to recommend adjustment of the SEMMNI
and SEMMNS settings, which are painfully small by default on those
platforms. (The discussion thread contemplated recommending that
people select POSIX semaphores instead, but the performance consequences
of that aren't really clear, so I'll refrain.)
Remove pointless discussion of SEMMNU and SEMMAP from the FreeBSD
section. Minor other wordsmithing.
Discussion: https://postgr.es/m/27582.1546928073@sss.pgh.pa.us

This patch changes the rule for whether or not a tuple seen by ANALYZE
should be included in its sample.
When we last touched this logic, in commit 51e1445f1, we weren't
thinking very hard about tuples being UPDATEd by a long-running
concurrent transaction. In such a case, we might see the pre-image as
either LIVE or DELETE_IN_PROGRESS depending on timing; and we might see
the post-image not at all, or as INSERT_IN_PROGRESS. Since the existing
code will not sample either DELETE_IN_PROGRESS or INSERT_IN_PROGRESS
tuples, this leads to concurrently-updated rows being omitted from the
sample entirely. That's not very helpful, and it's especially the wrong
thing if the concurrent transaction ends up rolling back.
The right thing seems to be to sample DELETE_IN_PROGRESS rows just as if
they were live. This makes the "sample it" and "count it" decisions the
same, which seems good for consistency. It's clearly the right thing
if the concurrent transaction ends up rolling back; in effect, we are
sampling as though IN_PROGRESS transactions haven't happened yet.
Also, this combination of choices ensures maximum robustness against
the different combinations of whether and in which state we might see the
pre- and post-images of an update.
It's slightly annoying that we end up recording immediately-out-of-date
stats in the case where the transaction does commit, but on the other
hand the stats are fine for columns that didn't change in the update.
And the alternative of sampling INSERT_IN_PROGRESS rows instead seems
like a bad idea, because then the sampling would be inconsistent with
the way rows are counted for the stats report.
Per report from Mark Chambers; thanks to Jeff Janes for diagnosing
what was happening. Back-patch to all supported versions.
Discussion: https://postgr.es/m/CAFh58O_Myr6G3tcH3gcGrF-=OExB08PJdWZcSBcEcovaiPsrHA@mail.gmail.com

Debian testing and newer now require that RSA and DHE keys are at
least 2048 bits long and no longer allow SHA-1 for signatures in
certificates. This is currently causing the ssl tests to fail there
because the test certificates and keys have been created in violation
of those conditions.
Update the parameters to create the test files and create a new set of
test files.
Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20180917131340.GE31460%40paquier.xyz

MinMaxExpr invokes the btree comparison function for its input datatype,
so it's only leakproof if that function is. Many such functions are
indeed leakproof, but others are not, and we should not just assume that
they are. Hence, adjust contain_leaked_vars to verify the leakproofness
of the referenced function explicitly.
I didn't add a regression test because it would need to depend on
some particular comparison function being leaky, and that's a moving
target, per discussion.
This has been wrong all along, so back-patch to supported branches.
Discussion: https://postgr.es/m/31042.1546194242@sss.pgh.pa.us

Detect it the way pg_ctl's wait_for_postmaster() does. When pg_regress
spawned a postmaster that failed startup, we were detecting that only
with "pg_regress: postmaster did not respond within 60 seconds".
Back-patch to 9.4 (all supported versions).
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/20181231172922.GA199150@gust.leadboat.com

Previously, it would only remove temp tables created in the same
session; but if the session uses the BackendId of a previously crashed
backend that left temp tables around, those would not get removed.
Since autovacuum would not drop them either (because it sees that the
BackendId is in use by the current session) these can cause annoying
xid-wraparound warnings.
Apply to branches 9.4 to 10. This is not a problem since version 11,
because commit 943576bddcb5 added state tracking that makes autovacuum
realize that those temp tables are not ours, so it removes them.
This is useful to handle in DISCARD, because even though it does not
handle all situations, it does handle the common one where a connection
pooler keeps the same session open for an indefinitely long time.
Discussion: https://postgr.es/m/20181226190834.wsk2wzott5yzrjiq@alvherre.pgsql
Reviewed-by: Takayuki Tsunakawa, Michaël Paquier

When temp tables are in danger of XID wraparound, autovacuum drops them;
however, it preserves those that are owned by a working session. This
is desirable, except when the session is connected to a different
database (because the temp tables cannot be from that session), so make
it keep the temp tables only if the backend is in the same database
as the temp tables.
This is not bulletproof: it fails to detect temp tables left by a
session whose backend ID is reused in the same database but the new
session does not use temp tables. Commit 943576bddcb5 fixes that case
too, for branches 11 and up (which is why we don't apply this fix to
those branches), but back-patching that one is not universally agreed
on.
Discussion: https://postgr.es/m/20181214162843.37g6h3txto43akrb@alvherre.pgsql
Reviewed-by: Takayuki Tsunakawa, Michaël Paquier

Inheritance trees can include temporary tables if the parent is
permanent, which makes possible the presence of multiple temporary
children from different sessions. Trying to issue a TRUNCATE on the
parent in this scenario causes a failure, so, as with any other query,
just ignore such cases, which makes TRUNCATE work transparently.
This makes truncation behave like any other DML query working on the
parent table that needs to be issued on its children. A
set of isolation tests is added to cover basic cases.
Reported-by: Zhou Digoal
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/15565-ce67a48d0244436a@postgresql.org
Backpatch-through: 9.4

I made a frontend fprintf() format use %m, forgetting that that's only
safe in HEAD not the back branches; prior to 96bf88d52 and d6c55de1f,
it would work on glibc platforms but not elsewhere. Revert to using
%s ... strerror(errno) as the code did before.
We could have left HEAD as-is, but for code consistency across branches,
I chose to apply this patch there too.
Per Coverity and a few buildfarm members.

At the end of recovery, the post-promotion process creates a new history
file followed by the last partial segment of the previous timeline.
Depending on the timing, the archiver could try to archive the last
partial segment before the history file. This can delay detection that a
new timeline has been taken, particularly when the last partial segment
takes a long time to transfer, since that pushes back the moment at which
the history file of the new timeline gets archived. If multiple instances
look at archives at the same location, this can allow a promoted standby
to pick a timeline that has already been taken.
This commit changes the order of archiving so that history files are
archived ahead of other file types. That reduces the likelihood of the
same timeline being taken twice (though it does not close the window
entirely), and it makes the archiver behave more consistently with the
startup process doing its post-promotion business.
Author: David Steele
Reviewed-by: Michael Paquier, Kyotaro Horiguchi
Discussion: https://postgr.es/m/929068cf-69e1-bba2-9dc0-e05986aed471@pgmasters.net
Backpatch-through: 9.5

"$user" in a search_path string is replaced by CURRENT_USER not
SESSION_USER. (It actually was SESSION_USER in the initial implementation,
but we changed it shortly later, and evidently forgot to fix the docs to
match.)
Noted by antonov@stdpr.ru
Discussion: https://postgr.es/m/159151fb45d490c8d31ea9707e9ba99d@stdpr.ru

"rescanratio" was computed as 1 + rescanned-tuples / total-inner-tuples,
which is sensible if it's to be multiplied by total-inner-tuples or a cost
value corresponding to scanning all the inner tuples. But in reality it
was (mostly) multiplied by inner_rows or a related cost, numbers that take
into account the possibility of stopping short of scanning the whole inner
relation thanks to a limited key range in the outer relation. This'd
still make sense if we could expect that stopping short would result in a
proportional decrease in the number of tuples that have to be rescanned.
It does not, however. The argument that establishes the validity of our
estimate for that number is independent of whether we scan all of the inner
relation or stop short, and experimentation also shows that stopping short
doesn't reduce the number of rescanned tuples. So the correct calculation
is 1 + rescanned-tuples / inner_rows, and we should be sure to multiply
that by inner_rows or a corresponding cost value.
Most of the time this doesn't make much difference, but if we have
both a high rescan rate (due to lots of duplicate values) and an outer
key range much smaller than the inner key range, then the error can
be significant, leading to a large underestimate of the cost associated
with rescanning.
Per report from Vijaykumar Jain. This thinko appears to go all the way
back to the introduction of the rescan estimation logic in commit
70fba7043, so back-patch to all supported branches.
Discussion: https://postgr.es/m/CAE7uO5hMb_TZYJcZmLAgO6iD68AkEK6qCe7i=vZUkCpoKns+EQ@mail.gmail.com

This is an oversight from recent commit b13fd344. While at it, tweak
the previous test with a better name for the renamed primary key.
Detected by buildfarm member prion which forces relation cache release
with -DRELCACHE_FORCE_RELEASE. Back-patch down to 9.4 as the previous
commit.

When a constraint gets renamed, it may have associated with it a target
relation (for example domain constraints don't have one). Not
invalidating the target relation cache when issuing the renaming can
result in issues with subsequent commands that refer to the old
constraint name using the relation cache, causing various failures. One
pattern spotted was using CREATE TABLE LIKE after a constraint
renaming.
Reported-by: Stuart <sfbarbee@gmail.com>
Author: Amit Langote
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/2047094.V130LYfLq4@station53.ousa.org

reap_child() basically ignored the possibility of either an error in
waitpid() itself or a child process failure on signal. We don't really
need to do more than report and crash hard, but proceeding as though
nothing is wrong is definitely Not Acceptable. The error report for
nonzero child exit status was pretty off-point, as well.
Noted while fooling around with child-process failure detection
logic elsewhere. It's been like this a long time, so back-patch to
all supported branches.

When GIN vacuum deletes a posting tree page, it assumes that no concurrent
searchers can access it, thanks to ginStepRight() locking two pages at once.
However, since 9.4 searches can skip parts of posting trees while descending
from the root. That leads to the risk that a page is deleted and reclaimed
before a concurrent search can access it.
This commit prevents that risk by waiting for every transaction that might
still reference the page to finish. For binary compatibility reasons we can't
change GinPageOpaqueData to store the corresponding transaction id. Instead we
reuse the page header's pd_prune_xid field, which is unused in index pages.
Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Andrey Borodin, Alexander Korotkov
Reviewed-by: Alexander Korotkov
Backpatch-through: 9.4

On standby, ginRedoDeletePage() can work concurrently with read-only queries.
Those queries can traverse the posting tree in two ways.
1) Using rightlinks by ginStepRight(), which locks the next page before
unlocking its left sibling.
2) Using downlinks by ginFindLeafPage(), which locks at most one page at a time.
The original lock order was: page, parent, left sibling. That lock order can
deadlock with ginStepRight(). In order to prevent deadlock, this commit changes
the lock order to: left sibling, page, parent. Note that the position of the
parent in the locking order seems insignificant, because we only lock one page
at a time while traversing downlinks.
Reported-by: Chen Huajun
Diagnosed-by: Chen Huajun, Peter Geoghegan, Andrey Borodin
Discussion: https://postgr.es/m/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com
Author: Alexander Korotkov
Backpatch-through: 9.4

Our support for multiple-set-clauses in UPDATE assumes that the Params
referencing a MULTIEXPR_SUBLINK SubPlan will appear before that SubPlan
in the targetlist of the plan node that calculates the updated row.
(Yeah, it's a hack...) In some PG branches it's possible that a Result
node gets inserted between the primary calculation of the update tlist
and the ModifyTable node. setrefs.c did the wrong thing in this case
and left the upper-level Params as Params, causing a crash at runtime.
What it should do is replace them with "outer" Vars referencing the child
plan node's output. That's a result of careless ordering of operations
in fix_upper_expr_mutator, so we can fix it just by reordering the code.
Fix fix_join_expr_mutator similarly for consistency, even though join
nodes could never appear in such a context. (In general, it seems
likely to be a bit cheaper to use Vars than Params in such situations
anyway, so this patch might offer a tiny performance improvement.)
The hazard extends back to 9.5 where the MULTIEXPR_SUBLINK stuff
was introduced, so back-patch that far. However, this may be a live
bug only in 9.6.x and 10.x, as the other branches don't seem to want
to calculate the final tlist below the Result node. (That plan shape
change between branches might be a mini-bug in itself, but I'm not
really interested in digging into the reasons for that right now.
Still, add a regression test memorializing what we expect there,
so we'll notice if it changes again.)
Per bug report from Eduards Bezverhijs.
Discussion: https://postgr.es/m/b6cd572a-3e44-8785-75e9-c512a5a17a73@tieto.com

This module overlooked this necessary fixup step on the results of
transformWhereClause(). It accidentally worked anyway, because the
constructed expression involved type "name" which is not collatable,
but it fell over while I was experimenting with changing "name" to
be collatable.
Back-patch, not because there's any live bug here in back branches,
but because somebody might use this code as a model for some real
application and then not understand why it doesn't work.

Unlike other ALTER ref pages, this one neglected to mention that
ALTER OWNER requires being a member of the new owning role.
Per bug #15546 from Stefan Kadow.
Discussion: https://postgr.es/m/15546-0558c75fd2025e7c@postgresql.org

Although copyfuncs.c has a check_stack_depth call in its recursion,
equalfuncs.c, outfuncs.c, and readfuncs.c lacked one. This seems
unwise.
Likewise fix planstate_tree_walker(), in branches where that exists.
Discussion: https://postgr.es/m/30253.1544286631@sss.pgh.pa.us

The stanza of ExecuteTruncate[Guts] that truncates a target table's toast
relation re-used the loop local variable "rel" to reference the toast rel.
This was safe enough when written, but commit d42358efb added code below
that that supposed "rel" still pointed to the parent table. Therefore,
the stats counter update was applied to the wrong relcache entry (the
toast rel not the user rel); and if we were unlucky and that relcache
entry had been flushed during reindex_relation, very bad things could
ensue.
(I'm surprised that CLOBBER_CACHE_ALWAYS testing hasn't found this.
I'm even more surprised that the problem wasn't detected during the
development of d42358efb; it must not have been tested in any case
with a toast table, as the incorrect stats counts are very obvious.)
To fix, replace use of "rel" in that code branch with a more local
variable. Adjust test cases added by d42358efb so that some of them
use tables with toast tables.
Per bug #15540 from Pan Bian. Back-patch to 9.5 where d42358efb came in.
Discussion: https://postgr.es/m/15540-01078812338195c0@postgresql.org

Places that are testing for *printf failure ought to include the format
string in their error reports, since bad-format-string is one of the
more likely causes of such failure. This both makes it easier to find
and repair the mistake, and provides at least some useful info to the
user who stumbles across such a problem.
Also, tighten snprintf.c to report EINVAL for an invalid flag or
final character in a format %-spec (including the case where the
%-spec is missing a final character altogether). This seems like
better project policy, and it also allows removing an instruction
or two from the hot code path.
Back-patch the error reporting change in pvsnprintf, since it should be
harmless and may be helpful; but not the snprintf.c change.
Per discussion of bug #15511 from Ertuğrul Kahveci, which reported an
invalid translated format string. These changes don't fix that error,
but they should improve matters next time we make such a mistake.
Discussion: https://postgr.es/m/15511-1d8b6a0bc874112f@postgresql.org

The source code comments documented this, but the user-facing docs, not
so much. Add a section to Appendix B that discusses it.
In passing, improve a couple other things in Appendix B --- notably,
a long-obsolete claim that time zone abbreviations are looked up in
a fixed table.
Per bug #15527 from Michael Davidson.
Discussion: https://postgr.es/m/15527-f1be0b4dc99ebbe7@postgresql.org

Ensure static libraries have correct mod time even if ranlib messes it up.

In at least Apple's version of ranlib, the output file is updated to have
a mod time equal to the max of the timestamps of its components, and that
data only has seconds precision. On a filesystem with sub-second file
timestamp precision --- say, APFS --- this can result in the finished
static library appearing older than its input files, which causes useless
rebuilds and possible outright failures in parallel makes.
We've only seen this reported in the field from people using Apple's
ranlib with a non-Apple make, because Apple's make doesn't know about
sub-second timestamps either so it doesn't decide rebuilds are needed.
But Apple's ranlib presumably shares code with at least some BSDen,
so it's not that unlikely that the same problem could arise elsewhere.
To fix, just "touch" the output file after ranlib finishes.
We seem to need this in only one place. There are other calls of
ranlib in our makefiles, but they are working on intermediate files
whose timestamps are not actually important, or else on an installed
static library for which sub-second timestamp precision is unlikely
to matter either. (Also, so far as I can tell, Apple's ranlib doesn't
mess up the file timestamp in the latter usage anyhow.)
In passing, change "ranlib" to "$(RANLIB)" in one place that was
bypassing the make macro for no good reason.
Per bug #15525 from Jack Kelly (via Alyssa Ross).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/15525-a30da084f17a1faa@postgresql.org

This fixes an oversight from c6c3334 which forgot that if a subset of
WAL senders are stopping and in a sync state, other WAL senders could
still be waiting for a WAL position to be synced while committing a
transaction. However the subset of stopping senders would not release
waiters, potentially breaking synchronous replication guarantees. This
commit makes sure that even stopping WAL senders are able to release
waiters and are tracked properly.
On 9.4, this can also trigger an assertion failure when, for example,
max_wal_senders is set to 1 and a WAL sender is not able to find itself
in a synchronous state when the instance stops.
Reported-by: Paul Guo
Author: Paul Guo, Michael Paquier
Discussion: https://postgr.es/m/CAEET0ZEv8VFqT3C-cQm6byOB4r4VYWcef1J21dOX-gcVhCSpmA@mail.gmail.com
Backpatch-through: 9.4

During table rewrites (VACUUM FULL and CLUSTER), the main heap is logged
using XLOG / FPI records, and thus (correctly) ignored in decoding.
But the associated TOAST table is WAL-logged as plain INSERT records,
and so was logically decoded and passed to reorder buffer.
That has severe consequences with TOAST tables of non-trivial size.
Firstly, the reorder buffer has to keep all those changes, possibly spilling
them to a file, incurring I/O costs and disk space usage.
Secondly, ReorderBufferCommit() was stashing all those TOAST chunks into
a hash table, which got discarded only after processing the row from the
main heap. But as the main heap is not decoded for rewrites, this never
happened, so all the TOAST data accumulated in memory, resulting either
in excessive memory consumption or OOM.
The fix is simple, as commit e9edc1ba already introduced infrastructure
(namely HEAP_INSERT_NO_LOGICAL flag) to skip logical decoding of TOAST
tables, but it only applied it to system tables. So simply use it for
all TOAST data in raw_heap_insert().
That would however solve only the memory consumption issue - the TOAST
changes would still be decoded and added to the reorder buffer, and
spilled to disk (although without TOAST tuple data, so much smaller).
But we can solve that by tweaking DecodeInsert() to just ignore such
INSERT records altogether, using XLH_INSERT_CONTAINS_NEW_TUPLE flag,
instead of skipping them later in ReorderBufferCommit().
Review: Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/1a17c643-e9af-3dba-486b-fbe31bc1823a%402ndquadrant.com
Backpatch: 9.4-, where logical decoding was introduced

Unfortunately ac218aa4f6 missed the fact that a reference to
'pg_catalog.regnamespace'::regclass wouldn't work before that type is
known. Fix that, by replacing the regtype usage with a join to
pg_type.
Reported-By: Tom Lane
Author: Andres Freund
Discussion: https://postgr.es/m/8863.1543297423@sss.pgh.pa.us
Backpatch: 9.5-, like ac218aa4f6

When the regrole (0c90f6769) and regnamespace (cb9fa802b) types were
added in 9.5, pg_upgrade's check for reg* types wasn't updated. While
regrole currently is safe, regnamespace is not.
It seems unlikely that anybody uses regnamespace inside catalog tables
across a pg_upgrade, but the tests should be correct nevertheless.
While at it, reorder the types checked in the query to be
alphabetical. Otherwise it's annoying to compare the existing types
against the ones being tested for.
Author: Andres Freund
Discussion: https://postgr.es/m/037e152a-cb25-3bcb-4f35-bdc9988f8204@2ndQuadrant.com
Backpatch: 9.5-, as regrole/regnamespace

latex_escaped_print() mistranslated \ and failed to provide any translation
for # ^ and ~, all of which would typically lead to LaTeX document syntax
errors. In addition it didn't translate < > and |, which would typically
render as unexpected characters.
To some extent this represents shortcomings in ancient versions of LaTeX,
which if memory serves had no easy way to render these control characters
as ASCII text. But that's been fixed for, um, decades. In any case there
is no value in emitting guaranteed-to-fail output for these characters.
Noted while fooling with test cases added by commit 9a98984f4. Back-patch
the code change to all supported versions.
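
An illustrative sketch of the per-character translation involved (the
replacement spellings below are plausible LaTeX escapes, not necessarily
the exact strings psql emits):

    #include <stdio.h>

    /*
     * Escape a string for inclusion in LaTeX text.  The replacements here
     * are one reasonable choice; the real latex_escaped_print() may differ.
     */
    static void
    latex_escape(const char *s, FILE *out)
    {
        for (; *s; s++)
        {
            switch (*s)
            {
                case '\\': fputs("\\textbackslash{}", out); break;
                case '#':  fputs("\\#", out); break;
                case '^':  fputs("\\^{}", out); break;
                case '~':  fputs("\\~{}", out); break;
                case '<':  fputs("\\textless{}", out); break;
                case '>':  fputs("\\textgreater{}", out); break;
                case '|':  fputs("\\textbar{}", out); break;
                case '&':  fputs("\\&", out); break;
                case '%':  fputs("\\%", out); break;
                case '$':  fputs("\\$", out); break;
                case '_':  fputs("\\_", out); break;
                case '{':  fputs("\\{", out); break;
                case '}':  fputs("\\}", out); break;
                default:   fputc(*s, out); break;
            }
        }
    }

    int
    main(void)
    {
        latex_escape("100% of #1 <values> ~ |x|^2 \\ $_", stdout);
        fputc('\n', stdout);
        return 0;
    }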

Hstore data generated on pg 8.4 and pg_upgraded to current versions
remains in its original on-disk format unless modified. The same goes
for values generated by the addon hstore-new module on pre-9.0
versions. (The hstoreUpgrade function converts old values on the fly
when read in, but the on-disk value is not modified by this.)
Since old-format empty hstores (and hstore-new hstores) have
representations compatible with the new format, hstoreUpgrade thought
it could get away without modifying such values; but this breaks
hstore_hash (and the new hstore_hash_extended) which assumes
bit-perfect matching between semantically identical hstore values.
Only one bit actually differs (the "new version" flag in the count
field) but that of course is enough to break the hash.
Fix by making hstoreUpgrade unconditionally convert all old values to
new format.
Backpatch all the way, even though this changes a hash value in some
cases, because in those cases the hash value is already failing - for
example, a hash join between old- and new-format empty hstores will be
failing to match, or a hash index on an hstore column containing an
old-format empty value will be failing to find the value since it will
be searching for a hash derived from a new-format datum. (There are no
known field reports of this happening, probably because hashing of
hstores has only been useful in limited circumstances and there
probably isn't much upgraded data being used this way.)
Per concerns arising from discussion of commit eb6f29141be. Original
bug is my fault.
Discussion: https://postgr.es/m/60b1fd3b-7332-40f0-7e7f-f2f04f777747%402ndquadrant.com

ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain. That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.
Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.
Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN. Also remove some random cosmetic discrepancies
between these six functions.
This back-patches commits cbdb8b4c0 and 452b637d4. In the 9.4 branch,
also back-patch the portion of 62e2a8dc2 that added PG_INTnn_MIN and
related constants to c.h, so that these functions can rely on them.
Per bug #15519 from Victor Petrovykh.
Patch by me; thanks to Andrew Gierth for analysis and discussion.
Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
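
A simplified standalone version of the corrected pattern (using
<stdint.h> limits rather than the PG_INT32_MIN constant mentioned
above):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Round first, then compare against bounds that coerce to float
     * exactly: (float) INT32_MIN is exact because INT32_MIN is a power of
     * two, and the upper bound is written as -(float) INT32_MIN (i.e. 2^31)
     * and tested with >=, since INT32_MAX itself is not exactly
     * representable as a float.
     */
    static int32_t
    float4_to_int4(float num)
    {
        num = rintf(num);

        if (isnan(num) ||
            num < (float) INT32_MIN ||
            num >= -((float) INT32_MIN))
        {
            fprintf(stderr, "integer out of range\n");
            exit(1);
        }
        return (int32_t) num;
    }

    int
    main(void)
    {
        printf("%d\n", float4_to_int4(2147483520.0f)); /* near the top: OK */
        printf("%d\n", float4_to_int4(-0.4f));         /* rounds to 0 */
        return 0;
    }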

1. Integer overflow in internal_size could result in memory corruption
in decompression since a zero-length array would be allocated and then
written to. This leads to crashes or corruption when traversing an
index which has been populated with sufficiently sparse values. Fix by
using int64 for computations and checking for overflow.
2. Integer overflow in g_int_compress could cause pessimal merge
choices, resulting in unnecessarily large ranges (which would in turn
trigger issue 1 above). Fix by using int64 again.
3. Even without overflow, array sizes could become large enough to
cause unexplained memory allocation errors. Fix by capping the sizes
to a safe limit and reporting actual errors that point at
gist__intbig_ops as needed.
4. Large inputs to the compression function always consist of large
runs of consecutive integers, and the compression loop was processing
these one at a time in an O(N^2) manner with a lot of overhead. The
expected runtime of this function could easily exceed 6 months for a
single call as a result. Fix by performing a linear-time first pass,
which reduces the worst case to something on the order of seconds.
Backpatch all the way, since this has been wrong forever.
Per bug #15518 from IRC user "dymk"; analysis and patch by
me.
Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org
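
A sketch of the overflow-safe size computation described in points 1-3
(illustrative only; the real internal_size/g_int_compress code is more
involved):

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Total number of integers covered by an array of [lo,hi] ranges,
     * computed in int64 so that adversarially sparse input cannot wrap
     * around to a small (or zero) allocation size.
     */
    static int64_t
    ranges_total_size(const int32_t *bounds, int nranges)
    {
        int64_t total = 0;
        int     i;

        for (i = 0; i < nranges; i++)
        {
            int64_t lo = bounds[2 * i];
            int64_t hi = bounds[2 * i + 1];

            total += hi - lo + 1;   /* safe: done in 64-bit arithmetic */
            if (total > INT32_MAX)
                return -1;          /* capped: caller reports an error */
        }
        return total;
    }

    int
    main(void)
    {
        /* two sparse ranges whose total would wrap around in 32-bit math */
        int32_t bounds[] = {INT32_MIN, -2, 2, INT32_MAX};
        int64_t sz = ranges_total_size(bounds, 2);

        if (sz < 0)
            printf("too large: refuse instead of corrupting memory\n");
        else
            printf("total size: %lld\n", (long long) sz);
        return 0;
    }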

This code was trying to be paranoid, but it wasn't paranoid enough.
It only ensured that the selected port is in 0..65535, while most
Unix systems will refuse unprivileged attempts to use TCP port numbers
below 1024.
Change it to allow specification of ports 1024..65535, while if the
port is outside that range, map it into 49152..65535 which is the
port range used by our later branches.
The main reason we've not noticed this up to now is that it's not
important when testing over Unix-socket connections, only TCP,
and most of our test code deliberately prevents the postmaster from
opening any TCP ports. However, the SSL tests do open up a TCP port,
and I believe this explains why buildfarm member chipmunk has been
failing the SSL tests in 9.5: it's picking a reserved port number.
Patch in 9.5 and 9.4. Later branches do not use this code.
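
The remapping described above might look roughly like this (a sketch;
the exact folding used by the test harness may differ):

    #include <stdio.h>

    /*
     * Accept 1024..65535 as-is; otherwise fold the value into the
     * dynamic/private range 49152..65535 used by later branches.  The
     * modulus here is just one way to do the folding.
     */
    static int
    sanitize_port(int port)
    {
        if (port >= 1024 && port <= 65535)
            return port;
        return 49152 + (unsigned int) port % (65535 - 49152 + 1);
    }

    int
    main(void)
    {
        printf("%d %d %d\n",
               sanitize_port(5432),     /* kept as-is */
               sanitize_port(80),       /* privileged: remapped */
               sanitize_port(70000));   /* out of range: remapped */
        return 0;
    }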

This commit back-patches these 9.6-era commits into 9.4 and 9.5:
e97af6c8b Replace our hacked version of ax_pthread.m4 with latest upstream version.
3b14a17c8 Move pthread-tests earlier in the autoconf script.
01051a987 Use AS_IF rather than plain shell "if" in pthread-check.
a2932283c Update ax_pthread.m4 to an experimental draft version from upstream.
The net result is to sync configure's checks for threading-related
flags and libraries with the version we've been using since 9.6.
The motivation for doing so now is that it seems the older code does
not work correctly on very recent RHEL7/ppc64, as evidenced by
buildfarm member quokka. The newer code is pretty battle-hardened
by now, so this seems like a low-risk fix.
Discussion: https://postgr.es/m/3320.1542647565@sss.pgh.pa.us

The test case that Autoconf uses to discover whether a function has
been declared doesn't work reliably with clang, because clang reports
a warning not an error if the name is a known built-in function.
On some platforms, this results in a lot of compile-time warnings about
strlcpy and related functions not having been declared.
There is a fix for this (by Noah Misch) in the upstream Autoconf sources,
but since they've not made a release in years and show no indication of
doing so anytime soon, let's just absorb their fix directly. We can
revert this when and if we update to a newer Autoconf release.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/26819.1542515567@sss.pgh.pa.us

On some operating systems, it doesn't make sense to retry fsync(),
because dirty data cached by the kernel may have been dropped on
write-back failure. In that case the only remaining copy of the
data is in the WAL. A subsequent fsync() could appear to succeed,
but not have flushed the data. That means that a future checkpoint
could apparently complete successfully but have lost data.
Therefore, violently prevent any future checkpoint attempts by
panicking on the first fsync() failure. Note that we already
did the same for WAL data; this change extends that behavior to
non-temporary data files.
Provide a GUC data_sync_retry to control this new behavior, for
users of operating systems that don't eject dirty data, and possibly
forensic/testing uses. If it is set to on and the write-back error
was transient, a later checkpoint might genuinely succeed (on a
system that does not throw away buffers on failure); if the error is
permanent, later checkpoints will continue to fail. The GUC defaults
to off, meaning that we panic.
Back-patch to all supported releases.
There is still a narrow window for error-loss on some operating
systems: if the file is closed and later reopened and a write-back
error occurs in the intervening time, but the inode has the bad
luck to be evicted due to memory pressure before we reopen, we could
miss the error. A later patch will address that with a scheme
for keeping files with dirty data open at all times, but we judge
that to be too complicated to back-patch.
Author: Craig Ringer, with some adjustments by Thomas Munro
Reported-by: Craig Ringer
Reviewed-by: Robert Haas, Thomas Munro, Andres Freund
Discussion: https://postgr.es/m/20180427222842.in2e4mibx45zdth5%40alap3.anarazel.de
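
A condensed sketch of the resulting policy (plain C with fprintf/abort
standing in for ereport; data_sync_retry is the new GUC):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static bool data_sync_retry = false;    /* the new GUC, default off */

    /*
     * On fsync() failure, either leave it to the caller to retry later (if
     * the administrator has promised the OS keeps dirty data around) or
     * PANIC, so that crash recovery replays the data from WAL instead of
     * trusting a later, possibly lying, fsync().
     */
    static void
    fsync_or_panic(int fd, const char *path)
    {
        if (fsync(fd) != 0)
        {
            if (data_sync_retry)
                fprintf(stderr, "ERROR: could not fsync file \"%s\": %s\n",
                        path, strerror(errno));
            else
            {
                fprintf(stderr, "PANIC: could not fsync file \"%s\": %s\n",
                        path, strerror(errno));
                abort();                /* force crash recovery */
            }
        }
    }

    int
    main(void)
    {
        int     fd = open("/tmp/fsync_demo", O_CREAT | O_WRONLY, 0600);

        if (fd >= 0)
        {
            fsync_or_panic(fd, "/tmp/fsync_demo");
            close(fd);
        }
        return 0;
    }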

wcsrtombs (called through wchar2char from common functions like lower,
upper, etc.) uses various optimizations that may look like access to
uninitialized data, triggering valgrind reports.
For example AVX2 instructions load data in 256-bit chunks, and gconv
does something similar with 32-bit chunks. This is faster than accessing
the bytes one by one, and the uninitialized part of the buffer is not
actually used. So suppress the bogus reports.
The exact stack depends on possible optimizations - it might be AVX, SSE
(as in the report by Aleksander Alekseev) or something else. Hence the
last frame is wildcarded, to deal with this.
Backpatch all the way back to 9.4.
Author: Tomas Vondra
Discussion: https://www.postgresql.org/message-id/flat/90ac0452-e907-e7a4-b3c8-15bd33780e62%402ndquadrant.com
Discussion: https://www.postgresql.org/message-id/20180220150838.GD18315@e733.localdomain

Doc: remove claim that all \pset format options are unique in 1 letter.

This hasn't been correct since 9.3 added "latex-longtable".
I left the phraseology "Unique abbreviations are allowed" alone.
It's correct as far as it goes, and we are studiously refraining
from specifying exactly what happens if you give a non-unique
abbreviation. (The answer in the back branches is "you get a
backwards-compatible choice", and the answer in HEAD will shortly
be "you get an error", but there seems no need to mention such
details here.)
Daniel Vérité
Discussion: https://postgr.es/m/cb7e1caf-3ea6-450d-af28-f524903a030c@manitou-mail.org

In commit ecfd55795, I removed sqlda.c's checks for ndigits != 0 on the
grounds that we should duplicate the state of the numeric value's digit
buffer even when all the digits are zeroes. However, that still isn't
quite right, because another possible state of the digit buffer is
buf == digits == NULL (this occurs for a NaN). As the code now stands,
it'll invoke memcpy with a NULL source address and zero bytecount,
which we know a few platforms crash on. Hence, reinstate the no-copy
short-circuit, but make it test specifically for buf != NULL rather than
some other condition. In hindsight, the ndigits test (added by commit
f2ae9f9c3) was almost certainly meant to fix the NaN case not the
all-zeroes case as the associated thread alleged.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24
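
A standalone sketch of the guard (hypothetical helper name; the real
sqlda.c code copies into a SQLDA structure):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Hypothetical helper: duplicate a numeric's digit buffer.  Skip the
     * copy entirely when there is no buffer (the NaN case), since passing
     * NULL to memcpy() is undefined even with a zero byte count and does
     * crash on some platforms.
     */
    static char *
    copy_digit_buffer(const char *buf, size_t len)
    {
        char   *copy;

        if (buf == NULL)                /* NaN: nothing to duplicate */
            return NULL;

        copy = malloc(len > 0 ? len : 1);
        if (copy == NULL)
            return NULL;
        memcpy(copy, buf, len);
        return copy;
    }

    int
    main(void)
    {
        char   *c1 = copy_digit_buffer("0042", 4);
        char   *c2 = copy_digit_buffer(NULL, 0);    /* NaN-style value: OK */

        printf("%.4s %s\n", c1 ? c1 : "(none)", c2 ? c2 : "(none)");
        free(c1);
        return 0;
    }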

If a failure happens when a transaction is starting between the moment
the transaction status is changed from TRANS_DEFAULT to TRANS_START and
the moment the current user ID and security context flags are fetched
via GetUserIdAndSecContext(), or before initializing its basic fields,
then those may get reset to incorrect values when the transaction
aborts, leaving the session in an inconsistent state.
One problem reported is that failing a starting transaction at the first
query of a session could cause several kinds of system crashes on the
follow-up queries.
In order to solve that, move the initialization of the transaction state
fields and the call of GetUserIdAndSecContext() in charge of fetching
the current user ID close to the point where the transaction status is
switched to TRANS_START, where there cannot be any error triggered
in-between, per an idea of Tom Lane. This properly ensures that the
current user ID, the security context flags and that the basic fields of
TransactionState remain consistent even if the transaction fails while
starting.
Reported-by: Richard Guo
Diagnosed-By: Richard Guo
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAN_9JTxECSb=pEPcb0a8d+6J+bDcOZ4=DgRo_B7Y5gRHJUM=Rw@mail.gmail.com
Backpatch-through: 9.4

Numeric values with leading zeroes were incorrectly copied into a
SQLDA (SQL Descriptor Area), leading to wrong results in ECPG programs.
Report and patch by Daisuke Higuchi. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/1803D792815FC24D871C00D17AE95905C71161@g01jpexmbkw24

classify_index_clause_usage() is O(N^2) in the number of distinct index
qual clauses it considers, because of its use of a simple search list to
store them. For nearly all queries, that's fine because only a few clauses
will be considered. But Alexander Kuzmenkov reported a machine-generated
query with 80000 (!) index qual clauses, which caused this code to take
forever. Somewhat remarkably, this is the only O(N^2) behavior we now
have for such a query, so let's fix it.
We can get rid of the O(N^2) runtime for cases like this without much
damage to the functionality of choose_bitmap_and() by separating out
paths with "too many" qual or pred clauses, and deeming them to always
be nonredundant with other paths. Then their clauses needn't go into
the search list, so it doesn't get too long, but we don't lose the
ability to consider bitmap AND plans altogether. I set the threshold
for "too many" to be 100 clauses per path, which should be plenty to
ensure no change in planning behavior for normal queries.
There are other things we could do to make this go faster, but it's not
clear that it's worth any additional effort. 80000 qual clauses require
a whole lot of work in many other places, too.
The code's been like this for a long time, so back-patch to all supported
branches. The troublesome query only works back to 9.5 (in 9.4 it fails
with stack overflow in the parser); so I'm not sure that fixing this in
9.4 has any real-world benefit, but perhaps it does.
Discussion: https://postgr.es/m/90c5bdfa-d633-dabe-9889-3cf3e1acd443@postgrespro.ru

This patch fixes several related cases in which pg_shdepend entries were
never made, or were lost, for references to roles appearing in the ACLs of
schemas and/or types. While that did no immediate harm, if a referenced
role were later dropped, the drop would be allowed and would leave a
dangling reference in the object's ACL. That still wasn't a big problem
for normal database usage, but it would cause obscure failures in
subsequent dump/reload or pg_upgrade attempts, taking the form of
attempts to grant privileges to all-numeric role names. (I think I've
seen field reports matching that symptom, but can't find any right now.)
Several cases are fixed here:
1. ALTER DOMAIN SET/DROP DEFAULT would lose the dependencies for any
existing ACL entries for the domain. This case is ancient, dating
back as far as we've had pg_shdepend tracking at all.
2. If a default type privilege applies, CREATE TYPE recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for types in 9.2.
3. If a default schema privilege applies, CREATE SCHEMA recorded the
ACL properly but forgot to install dependency entries for it.
This dates to the addition of default privileges for schemas in v10
(commit ab89e465c).
Another somewhat-related problem is that when creating a relation
rowtype or implicit array type, TypeCreate would apply any available
default type privileges to that type, which we don't really want
since such an object isn't supposed to have privileges of its own.
(You can't, for example, drop such privileges once they've been added
to an array type.)
ab89e465c is also to blame for a race condition in the regression tests:
privileges.sql transiently installed globally-applicable default
privileges on schemas, which sometimes got absorbed into the ACLs of
schemas created by concurrent test scripts. This should have resulted
in failures when privileges.sql tried to drop the role holding such
privileges; but thanks to the bug fixed here, it instead led to dangling
ACLs in the final state of the regression database. We'd managed not to
notice that, but it became obvious in the wake of commit da906766c, which
allowed the race condition to occur in pg_upgrade tests.
To fix, add a function recordDependencyOnNewAcl to encapsulate what
callers of get_user_default_acl need to do; while the original call
sites got that right via ad-hoc code, none of the later-added ones
have. Also change GenerateTypeDependencies to generate these
dependencies, which requires adding the typacl to its parameter list.
(That might be annoying if there are any extensions calling that
function directly; but if there are, they're most likely buggy in the
same way as the core callers were, so they need work anyway.) While
I was at it, I changed GenerateTypeDependencies to accept most of its
parameters in the form of a Form_pg_type pointer, making its parameter
list a bit less unwieldy and mistake-prone.
The test race condition is fixed just by wrapping the addition and
removal of default privileges into a single transaction, so that that
state is never visible externally. We might eventually prefer to
separate out tests of default privileges into a script that runs by
itself, but that would be a bigger change and would make the tests
run slower overall.
Back-patch relevant parts to all supported branches.
Discussion: https://postgr.es/m/15719.1541725287@sss.pgh.pa.us

Previously it was possible to set client_min_messages to FATAL or PANIC,
which had the effect of suppressing transmission of regular ERROR messages
to the client. Perhaps that seemed like a useful option in the past, but
the trouble with it is that it breaks guarantees that are explicitly made
in our FE/BE protocol spec about how a query cycle can end. While libpq
and psql manage to cope with the omission, that's mostly because they
are not very bright; client libraries that have more semantic knowledge
are likely to get confused. Notably, pgODBC doesn't behave very sanely.
Let's fix this by getting rid of the ability to set client_min_messages
above ERROR.
In HEAD, just remove the FATAL and PANIC options from the set of allowed
enum values for client_min_messages. (This change also affects
trace_recovery_messages, but that's OK since these aren't useful values
for that variable either.)
In the back branches, there was concern that rejecting these values might
break applications that are explicitly setting things that way. I'm
pretty skeptical of that argument, but accommodate it by accepting these
values and then internally setting the variable to ERROR anyway.
In all branches, this allows a couple of tiny simplifications in the
logic in elog.c, so do that.
Also respond to the point that was made that client_min_messages has
exactly nothing to do with the server's logging behavior, and therefore
does not belong in the "When To Log" subsection of the documentation.
The "Statement Behavior" subsection is a better match, so move it there.
Jonah Harris and Tom Lane
Discussion: https://postgr.es/m/7809.1541521180@sss.pgh.pa.us
Discussion: https://postgr.es/m/15479-ef0f4cc2fd995ca2@postgresql.org