PostgreSQL devises a
query plan for each query it receives.
Choosing the right plan to match the query structure and the
properties of the data is absolutely critical for good
performance, so the system includes a complex planner that tries to choose good plans. You can
use the EXPLAIN command to see
what query plan the planner creates for any query. Plan-reading
is an art that requires some experience to master, but this
section attempts to cover the basics.

Examples in this section are drawn from the regression test
database after doing a VACUUM ANALYZE,
using 9.2 development sources. You should be able to get similar
results if you try the examples yourself, but your estimated
costs and row counts might vary slightly because ANALYZE's statistics are random samples rather
than exact, and because costs are inherently somewhat
platform-dependent.

The examples use EXPLAIN's default
"text" output format, which is compact
and convenient for humans to read. If you want to feed EXPLAIN's output to a program for further
analysis, you should use one of its machine-readable output
formats (XML, JSON, or YAML) instead.

The structure of a query plan is a tree of plan nodes. Nodes at the bottom level of the
tree are scan nodes: they return raw rows from a table. There
are different types of scan nodes for different table access
methods: sequential scans, index scans, and bitmap index scans.
There are also non-table row sources, such as VALUES clauses and set-returning functions in
FROM, which have their own scan node
types. If the query requires joining, aggregation, sorting, or
other operations on the raw rows, then there will be additional
nodes above the scan nodes to perform these operations. Again,
there is usually more than one possible way to do these
operations, so different node types can appear here too. The
output of EXPLAIN has one line for
each node in the plan tree, showing the basic node type plus
the cost estimates that the planner made for the execution of
that plan node. Additional lines might appear, indented from
the node's summary line, to show additional properties of the
node. The very first line (the summary line for the topmost
node) has the estimated total execution cost for the plan; it
is this number that the planner seeks to minimize.
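As a trivial example, a query with no qualification against the regression database's tenk1 table might produce a plan like this (the output shown is a representative reconstruction; as noted above, your exact cost numbers will differ):

```sql
EXPLAIN SELECT * FROM tenk1;

--                          QUERY PLAN
-- -------------------------------------------------------------
--  Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244)
```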

Since this query has no WHERE
clause, it must scan all the rows of the table, so the planner
has chosen to use a simple sequential scan plan. The numbers
that are quoted in parentheses are (left to right):

- Estimated start-up cost. This is the time expended before the output phase can begin, e.g., time to do the sorting in a sort node.

- Estimated total cost. This is stated on the assumption that the plan node is run to completion, i.e., all available rows are retrieved. In practice a node's parent node might stop short of reading all available rows (see the LIMIT example below).

- Estimated number of rows output by this plan node. Again, the node is assumed to be run to completion.

- Estimated average width of rows output by this plan node (in bytes).

The costs are measured in arbitrary units determined by the
planner's cost parameters (see Section
18.7.2). Traditional practice is to measure the costs in
units of disk page fetches; that is, seq_page_cost
is conventionally set to 1.0 and the
other cost parameters are set relative to that. The examples in
this section are run with the default cost parameters.

It's important to understand that the cost of an upper-level
node includes the cost of all its child nodes. It's also
important to realize that the cost only reflects things that
the planner cares about. In particular, the cost does not
consider the time spent transmitting result rows to the client,
which could be an important factor in the real elapsed time;
but the planner ignores it because it cannot change it by
altering the plan. (Every correct plan will output the same row
set, we trust.)

The rows value is a little tricky
because it is not the number of rows processed or scanned by
the plan node, but rather the number emitted by the node. This
is often less than the number scanned, as a result of filtering
by any WHERE-clause conditions that
are being applied at the node. Ideally the top-level rows
estimate will approximate the number of rows actually returned,
updated, or deleted by the query.
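To see filtering in action, consider restricting tenk1 with a WHERE clause (output is a representative reconstruction; exact estimates vary):

```sql
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000;

--                          QUERY PLAN
-- ------------------------------------------------------------
--  Seq Scan on tenk1  (cost=0.00..483.00 rows=7001 width=244)
--    Filter: (unique1 < 7000)
```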

Notice that the EXPLAIN output
shows the WHERE clause being applied
as a "filter" condition attached to
the Seq Scan plan node. This means that the plan node checks
the condition for each row it scans, and outputs only the ones
that pass the condition. The estimate of output rows has been
reduced because of the WHERE clause.
However, the scan will still have to visit all 10000 rows, so
the cost hasn't decreased; in fact it has gone up a bit (by
10000 * cpu_operator_cost,
to be exact) to reflect the extra CPU time spent checking the
WHERE condition.

The actual number of rows this query would select is 7000,
but the rows estimate is only
approximate. If you try to duplicate this experiment, you will
probably get a slightly different estimate; moreover, it can
change after each ANALYZE command,
because the statistics produced by ANALYZE are taken from a randomized sample of
the table.
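Tightening the condition enough makes an index worthwhile. A representative plan (reconstructed for illustration; costs vary) is:

```sql
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100;

--  Bitmap Heap Scan on tenk1  (cost=5.07..229.20 rows=101 width=244)
--    Recheck Cond: (unique1 < 100)
--    ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0)
--          Index Cond: (unique1 < 100)
```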

Here the planner has decided to use a two-step plan: the
child plan node visits an index to find the locations of rows
matching the index condition, and then the upper plan node
actually fetches those rows from the table itself. Fetching
rows separately is much more expensive than reading them
sequentially, but because not all the pages of the table have
to be visited, this is still cheaper than a sequential scan.
(The reason for using two plan levels is that the upper plan
node sorts the row locations identified by the index into
physical order before reading them, to minimize the cost of
separate fetches. The "bitmap"
mentioned in the node names is the mechanism that does the
sorting.)
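Adding a second, non-indexed condition to the same query illustrates filtering on top of an index fetch (representative output; costs vary):

```sql
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND stringu1 = 'xxx';

--  Bitmap Heap Scan on tenk1  (cost=5.07..229.43 rows=1 width=244)
--    Recheck Cond: (unique1 < 100)
--    Filter: (stringu1 = 'xxx'::name)
--    ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0)
--          Index Cond: (unique1 < 100)
```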

The added condition stringu1 =
'xxx' reduces the output row count estimate, but not the
cost because we still have to visit the same set of rows.
Notice that the stringu1 clause cannot
be applied as an index condition, since this index is only on
the unique1 column. Instead it is
applied as a filter on the rows retrieved by the index. Thus
the cost has actually gone up slightly to reflect this extra
checking.
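With an equality condition on the indexed column, a simple index scan is typically chosen instead (representative output):

```sql
EXPLAIN SELECT * FROM tenk1 WHERE unique1 = 42;

--  Index Scan using tenk1_unique1 on tenk1  (cost=0.00..8.27 rows=1 width=244)
--    Index Cond: (unique1 = 42)
```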

In this type of plan the table rows are fetched in index
order, which makes them even more expensive to read, but there
are so few that the extra cost of sorting the row locations is
not worth it. You'll most often see this plan type for queries
that fetch just a single row. It's also often used for queries
that have an ORDER BY condition that
matches the index order, because then no extra sort step is
needed to satisfy the ORDER BY.

If there are indexes on several columns referenced in
WHERE, the planner might choose to use
an AND or OR combination of the indexes:
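A representative plan of this kind (reconstructed for illustration; costs vary):

```sql
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000;

--  Bitmap Heap Scan on tenk1  (cost=25.08..60.21 rows=10 width=244)
--    Recheck Cond: ((unique1 < 100) AND (unique2 > 9000))
--    ->  BitmapAnd  (cost=25.08..25.08 rows=10 width=0)
--          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0)
--                Index Cond: (unique1 < 100)
--          ->  Bitmap Index Scan on tenk1_unique2  (cost=0.00..19.78 rows=999 width=0)
--                Index Cond: (unique2 > 9000)
```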

But this requires visiting both indexes, so it's not
necessarily a win compared to using just one index and treating
the other condition as a filter. If you vary the ranges
involved you'll see the plan change accordingly.
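Adding a LIMIT to that two-column query changes the chosen plan (representative output; costs vary):

```sql
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2;

--  Limit  (cost=0.29..14.48 rows=2 width=244)
--    ->  Index Scan using tenk1_unique2 on tenk1  (cost=0.29..71.27 rows=10 width=244)
--          Index Cond: (unique2 > 9000)
--          Filter: (unique1 < 100)
```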

This is the same query as above, but we added a LIMIT so that not all the rows need be
retrieved, and the planner changed its mind about what to do.
Notice that the total cost and row count of the Index Scan node
are shown as if it were run to completion. However, the Limit
node is expected to stop after retrieving only a fifth of those
rows, so its total cost is only a fifth as much, and that's the
actual estimated cost of the query. This plan is preferred over
adding a Limit node to the previous plan because the Limit
could not avoid paying the startup cost of the bitmap scan, so
the total cost would be something over 25 units with that
approach.

Let's try joining two tables, using the columns we have been
discussing:
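A representative plan (reconstructed for illustration; costs vary):

```sql
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2;

--  Nested Loop  (cost=4.65..118.62 rows=10 width=488)
--    ->  Bitmap Heap Scan on tenk1 t1  (cost=4.65..39.65 rows=10 width=244)
--          Recheck Cond: (unique1 < 10)
--          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.64 rows=10 width=0)
--                Index Cond: (unique1 < 10)
--    ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.00..7.87 rows=1 width=244)
--          Index Cond: (unique2 = t1.unique2)
```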

In this plan, we have a nested-loop join node with two table
scans as inputs, or children. The indentation of the node
summary lines reflects the plan tree structure. The join's
first, or "outer", child is a bitmap
scan similar to those we saw before. Its cost and row count are
the same as we'd get from SELECT ... WHERE
unique1 < 10 because we are applying the WHERE clause unique1 <
10 at that node. The t1.unique2 =
t2.unique2 clause is not relevant yet, so it doesn't
affect the row count of the outer scan. The nested-loop join
node will run its second, or "inner"
child once for each row obtained from the outer child. Column
values from the current outer row can be plugged into the inner
scan; here, the t1.unique2 value from
the outer row is available, so we get a plan and costs similar
to what we saw above for a simple SELECT
... WHERE t2.unique2 = constant case. (The estimated cost
is actually a bit lower than what was seen above, as a result
of caching that's expected to occur during the repeated index
scans on t2.) The costs of the loop
node are then set on the basis of the cost of the outer scan,
plus one repetition of the inner scan for each outer row (10 *
7.87, here), plus a little CPU time for join processing.

In this example the join's output row count is the same as
the product of the two scans' row counts, but that's not true
in all cases because there can be additional WHERE clauses that mention both tables and so
can only be applied at the join point, not to either input
scan. For example, if we add one more condition:
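A representative plan with the extra condition (reconstructed; costs vary, and note that the input scans are unchanged):

```sql
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2
  AND t1.hundred < t2.hundred;

--  Nested Loop  (cost=4.65..118.65 rows=3 width=488)
--    Join Filter: (t1.hundred < t2.hundred)
--    ->  Bitmap Heap Scan on tenk1 t1  (cost=4.65..39.65 rows=10 width=244)
--          Recheck Cond: (unique1 < 10)
--          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.64 rows=10 width=0)
--                Index Cond: (unique1 < 10)
--    ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.00..7.87 rows=1 width=244)
--          Index Cond: (unique2 = t1.unique2)
```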

The extra condition t1.hundred <
t2.hundred can't be tested in the tenk2_unique2 index, so it's applied at the join
node. This reduces the estimated output row count of the join
node, but does not change either input scan.

When dealing with outer joins, you might see join plan nodes
with both "Join Filter" and plain
"Filter" conditions attached. Join
Filter conditions come from the outer join's ON clause, so a row that fails the Join Filter
condition could still get emitted as a null-extended row. But a
plain Filter condition is applied after the outer-join rules
and so acts to remove rows unconditionally. In an inner join
there is no semantic difference between these types of
filters.

If we change the query's selectivity a bit, we might get a
very different join plan:
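For example, widening the restriction on tenk1 might produce a plan like this (representative output; costs vary):

```sql
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;

--  Hash Join  (cost=230.47..713.98 rows=101 width=488)
--    Hash Cond: (t2.unique2 = t1.unique2)
--    ->  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244)
--    ->  Hash  (cost=229.20..229.20 rows=101 width=244)
--          ->  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244)
--                Recheck Cond: (unique1 < 100)
--                ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0)
--                      Index Cond: (unique1 < 100)
```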

Here, the planner has chosen to use a hash join, in which
rows of one table are entered into an in-memory hash table,
after which the other table is scanned and the hash table is
probed for matches to each row. Again note how the indentation
reflects the plan structure: the bitmap scan on tenk1 is the input to the Hash node, which
constructs the hash table. That's then returned to the Hash
Join node, which reads rows from its outer child plan and
searches the hash table for each one.
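Another possibility is a merge join, for example when joining tenk1 to the smaller onek table (representative output; costs vary):

```sql
EXPLAIN SELECT * FROM tenk1 t1, onek t2
WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;

--  Merge Join  (cost=198.11..268.19 rows=10 width=488)
--    Merge Cond: (t1.unique2 = t2.unique2)
--    ->  Index Scan using tenk1_unique2 on tenk1 t1  (cost=0.29..656.28 rows=101 width=244)
--          Filter: (unique1 < 100)
--    ->  Sort  (cost=197.83..200.33 rows=1000 width=244)
--          Sort Key: t2.unique2
--          ->  Seq Scan on onek t2  (cost=0.00..148.00 rows=1000 width=244)
```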

Merge join requires its input data to be sorted on the join
keys. In this plan the tenk1 data is
sorted by using an index scan to visit the rows in the correct
order, but a sequential scan and sort is preferred for
onek, because there are many more rows
to be visited in that table. (Sequential-scan-and-sort
frequently beats an index scan for sorting many rows, because
of the nonsequential disk access required by the index
scan.)

One way to look at variant plans is to force the planner to
disregard whatever strategy it thought was the cheapest, using
the enable/disable flags described in Section
18.7.1. (This is a crude tool, but useful. See also
Section 14.3.) For example,
if we're unconvinced that sequential-scan-and-sort is the best
way to deal with table onek in the
previous example, we could try
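disabling sorting for the session (the output is representative; costs vary):

```sql
SET enable_sort = off;

EXPLAIN SELECT * FROM tenk1 t1, onek t2
WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;

--  Merge Join  (cost=0.60..292.35 rows=10 width=488)
--    Merge Cond: (t1.unique2 = t2.unique2)
--    ->  Index Scan using tenk1_unique2 on tenk1 t1  (cost=0.29..656.28 rows=101 width=244)
--          Filter: (unique1 < 100)
--    ->  Index Scan using onek_unique2 on onek t2  (cost=0.28..224.79 rows=1000 width=244)

RESET enable_sort;
```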

which shows that the planner thinks that sorting onek by index-scanning is about 12% more
expensive than sequential-scan-and-sort. Of course, the next
question is whether it's right about that. We can investigate
that using EXPLAIN ANALYZE, as
discussed below.

It is possible to check the accuracy of the planner's
estimates by using EXPLAIN's
ANALYZE option. With this option,
EXPLAIN actually executes the query,
and then displays the true row counts and true run time
accumulated within each plan node, along with the same
estimates that a plain EXPLAIN shows.
For example, we might get a result like this:
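Here is a representative reconstruction for the nested-loop join discussed earlier (cost figures vary between installations, and the timings are machine-dependent):

```sql
EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2;

--  Nested Loop  (cost=4.65..118.62 rows=10 width=488) (actual time=0.128..0.649 rows=10 loops=1)
--    ->  Bitmap Heap Scan on tenk1 t1  (cost=4.65..39.65 rows=10 width=244) (actual time=0.057..0.121 rows=10 loops=1)
--          Recheck Cond: (unique1 < 10)
--          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.64 rows=10 width=0) (actual time=0.043..0.043 rows=10 loops=1)
--                Index Cond: (unique1 < 10)
--    ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.00..7.87 rows=1 width=244) (actual time=0.042..0.048 rows=1 loops=10)
--          Index Cond: (unique2 = t1.unique2)
--  Total runtime: 0.750 ms
```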

Note that the "actual time"
values are in milliseconds of real time, whereas the cost estimates are expressed in arbitrary units;
so they are unlikely to match up. The thing that's usually most
important to look for is whether the estimated row counts are
reasonably close to reality. In this example the estimates were
all dead-on, but that's quite unusual in practice.

In some query plans, it is possible for a subplan node to be
executed more than once. For example, the inner index scan will
be executed once per outer row in the above nested-loop plan.
In such cases, the loops value reports
the total number of executions of the node, and the actual time
and rows values shown are averages per-execution. This is done
to make the numbers comparable with the way that the cost
estimates are shown. Multiply by the loops value to get the total time actually spent
in the node. In the above example, we spent a total of 0.480
milliseconds executing the index scans on tenk2.

The Sort node shows the sort method used (in particular,
whether the sort was in-memory or on-disk) and the amount of
memory or disk space needed. The Hash node shows the number of
hash buckets and batches as well as the peak amount of memory
used for the hash table. (If the number of batches exceeds one,
there will also be disk space usage involved, but that is not
shown.)

Another type of extra information is the number of rows
removed by a filter condition:
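For example, a sequential scan whose filter rejects part of the table reports how many rows it threw away (representative output; timings vary):

```sql
EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE ten < 7;

--  Seq Scan on tenk1  (cost=0.00..483.00 rows=7000 width=244) (actual time=0.030..5.425 rows=7000 loops=1)
--    Filter: (ten < 7)
--    Rows Removed by Filter: 3000
--  Total runtime: 10.231 ms
```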

These counts can be particularly valuable for filter
conditions applied at join nodes. The "Rows
Removed" line only appears when at least one scanned
row, or potential join pair in the case of a join node, is
rejected by the filter condition.

A case similar to filter conditions occurs with "lossy" index scans. For example, consider this
search for polygons containing a specific point:
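Using the regression database's polygon_tbl, a representative run looks like this (timings vary):

```sql
EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @> polygon '(0.5,2.0)';

--  Seq Scan on polygon_tbl  (cost=0.00..1.05 rows=1 width=32) (actual time=0.023..0.023 rows=0 loops=1)
--    Filter: (f1 @> '((0.5,2))'::polygon)
--    Rows Removed by Filter: 4
--  Total runtime: 0.039 ms
```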

The planner thinks (quite correctly) that this sample table
is too small to bother with an index scan, so we have a plain
sequential scan in which all the rows got rejected by the
filter condition. But if we force an index scan to be used, we
see:
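(The index name gpolygonind is the one defined on polygon_tbl in the regression database; the output is representative and timings vary.)

```sql
SET enable_seqscan TO off;

EXPLAIN ANALYZE SELECT * FROM polygon_tbl WHERE f1 @> polygon '(0.5,2.0)';

--  Index Scan using gpolygonind on polygon_tbl  (cost=0.13..8.15 rows=1 width=32) (actual time=0.075..0.075 rows=0 loops=1)
--    Index Cond: (f1 @> '((0.5,2))'::polygon)
--    Rows Removed by Index Recheck: 1
--  Total runtime: 0.098 ms

RESET enable_seqscan;
```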

Here we can see that the index returned one candidate row,
which was then rejected by a recheck of the index condition.
This happens because a GiST index is "lossy" for polygon containment tests: it
actually returns the rows with polygons that overlap the
target, and then we have to do the exact containment test on
those rows.

EXPLAIN has a BUFFERS option that can be used with ANALYZE to get even more runtime statistics:
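A representative example (buffer counts depend heavily on cache state, and all figures here are illustrative):

```sql
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tenk1
WHERE unique1 < 100 AND unique2 > 9000;

--  Bitmap Heap Scan on tenk1  (cost=25.08..60.21 rows=10 width=244) (actual time=0.323..0.342 rows=10 loops=1)
--    Recheck Cond: ((unique1 < 100) AND (unique2 > 9000))
--    Buffers: shared hit=15
--    ->  BitmapAnd  (cost=25.08..25.08 rows=10 width=0) (actual time=0.309..0.309 rows=0 loops=1)
--          Buffers: shared hit=7
--          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.043..0.043 rows=100 loops=1)
--                Index Cond: (unique1 < 100)
--                Buffers: shared hit=2
--          ->  Bitmap Index Scan on tenk1_unique2  (cost=0.00..19.78 rows=999 width=0) (actual time=0.227..0.227 rows=999 loops=1)
--                Index Cond: (unique2 > 9000)
--                Buffers: shared hit=5
--  Total runtime: 0.423 ms
```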

The numbers provided by BUFFERS
help to identify which parts of the query are the most
I/O-intensive.

Keep in mind that because EXPLAIN
ANALYZE actually runs the query, any side-effects will
happen as usual, even though whatever results the query might
output are discarded in favor of printing the EXPLAIN data. If you want to analyze a
data-modifying query without changing your tables, you can roll
the command back afterwards, for example:
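Wrapping the statement in a transaction lets you discard its effects (the output is a representative reconstruction; timings vary):

```sql
BEGIN;

EXPLAIN ANALYZE UPDATE tenk1 SET hundred = hundred + 1 WHERE unique1 < 100;

--  Update on tenk1  (cost=5.07..229.46 rows=101 width=250) (actual time=14.628..14.628 rows=0 loops=1)
--    ->  Bitmap Heap Scan on tenk1  (cost=5.07..229.46 rows=101 width=250) (actual time=0.101..0.439 rows=100 loops=1)
--          Recheck Cond: (unique1 < 100)
--          ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.043..0.043 rows=100 loops=1)
--                Index Cond: (unique1 < 100)
--  Total runtime: 14.727 ms

ROLLBACK;
```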

As seen in this example, when the query is an INSERT, UPDATE, or
DELETE command, the actual work of
applying the table changes is done by a top-level Insert,
Update, or Delete plan node. The plan nodes underneath this
node perform the work of locating the old rows and/or computing
the new data. So above, we see the same sort of bitmap table
scan we've seen already, and its output is fed to an Update
node that stores the updated rows. It's worth noting that
although the data-modifying node can take a considerable amount
of runtime (here, it's consuming the lion's share of the time),
the planner does not currently add anything to the cost
estimates to account for that work. That's because the work to
be done is the same for every correct query plan, so it doesn't
affect planning decisions.

The Total runtime shown by
EXPLAIN ANALYZE includes executor
start-up and shut-down time, as well as the time to run any
triggers that are fired, but it does not include parsing,
rewriting, or planning time. Time spent executing BEFORE triggers, if any, is included in the time
for the related Insert, Update, or Delete node; but time spent
executing AFTER triggers is not
counted there because AFTER triggers
are fired after completion of the whole plan. The total time
spent in each trigger (either BEFORE
or AFTER) is also shown separately.
Note that deferred constraint triggers will not be executed
until end of transaction and are thus not shown at all by
EXPLAIN ANALYZE.

There are two significant ways in which run times measured
by EXPLAIN ANALYZE can deviate from
normal execution of the same query. First, since no output rows
are delivered to the client, network transmission costs and I/O
conversion costs are not included. Second, the measurement
overhead added by EXPLAIN ANALYZE can
be significant, especially on machines with slow gettimeofday() operating-system calls. You
can use the pg_test_timing tool to measure the
overhead of timing on your system.

EXPLAIN results should not be
extrapolated to situations much different from the one you are
actually testing; for example, results on a toy-sized table
cannot be assumed to apply to large tables. The planner's cost
estimates are not linear and so it might choose a different
plan for a larger or smaller table. An extreme example is that
on a table that only occupies one disk page, you'll nearly
always get a sequential scan plan whether indexes are available
or not. The planner realizes that it's going to take one disk
page read to process the table in any case, so there's no value
in expending additional page reads to look at an index. (We saw
this happening in the polygon_tbl
example above.)

There are cases in which the actual and estimated values
won't match up well, but nothing is really wrong. One such case
occurs when plan node execution is stopped short by a
LIMIT or similar effect. For example, consider the LIMIT query we used before, rerun with EXPLAIN ANALYZE:
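(Representative output; timings vary.)

```sql
EXPLAIN ANALYZE SELECT * FROM tenk1
WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2;

--  Limit  (cost=0.29..14.48 rows=2 width=244) (actual time=0.177..0.249 rows=2 loops=1)
--    ->  Index Scan using tenk1_unique2 on tenk1  (cost=0.29..71.27 rows=10 width=244) (actual time=0.174..0.244 rows=2 loops=1)
--          Index Cond: (unique2 > 9000)
--          Filter: (unique1 < 100)
--  Total runtime: 0.336 ms
```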

the estimated cost and row count for the Index Scan node are
shown as though it were run to completion. But in reality the
Limit node stopped requesting rows after it got two, so the
actual row count is only 2 and the runtime is less than the
cost estimate would suggest. This is not an estimation error,
only a discrepancy in the way the estimates and true values are
displayed.

Merge joins also have measurement artifacts that can confuse
the unwary. A merge join will stop reading one input if it's
exhausted the other input and the next key value in the one
input is greater than the last key value of the other input; in
such a case there can be no more matches and so no need to scan
the rest of the first input. This results in not reading all of
one child, with results like those mentioned for LIMIT. Also, if the outer (first) child contains
rows with duplicate key values, the inner (second) child is
backed up and rescanned for the portion of its rows matching
that key value. EXPLAIN ANALYZE counts
these repeated emissions of the same inner rows as if they were
real additional rows. When there are many outer duplicates, the
reported actual row count for the inner child plan node can be
significantly larger than the number of rows that are actually
in the inner relation.

BitmapAnd and BitmapOr nodes always report their actual row
counts as zero, due to implementation limitations.
