18 November 2014

One common source of query problems in PostgreSQL is an unexpectedly bad query plan when a LIMIT clause is included in a query. The typical symptom is that PostgreSQL picks an index-based plan that actually takes much, much longer than if a different index, or no index at all, had been used.

Here’s an example. First, we create a simple table and an index on it:
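(The original snippets didn't survive the page conversion; here's a minimal reconstruction that reproduces the behavior described below. The table name `t`, the column names `i` and `f`, and the 0.001 cutoff are all assumptions.)

```sql
-- Hypothetical reconstruction: a million rows with a random float f,
-- where i = 1 only for the very smallest values of f. Thus all of the
-- i = 1 rows sort to the beginning of an index on f.
CREATE TABLE t AS
  SELECT f,
         CASE WHEN f < 0.001 THEN 1 ELSE 0 END AS i
  FROM (SELECT random() AS f FROM generate_series(1, 1000000)) r;

CREATE INDEX t_f_idx ON t (f);
ANALYZE t;

-- The problematic query: the planner walks t_f_idx backwards (largest f
-- first), expecting to find i = 1 rows quickly. It doesn't.
EXPLAIN ANALYZE
SELECT * FROM t WHERE i = 1 ORDER BY f DESC LIMIT 10;
```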

PostgreSQL doesn’t keep correlated statistics about columns; each column’s statistics are kept independently. Thus, PostgreSQL made an assumption about the distribution of values of i in the table: they were scattered more or less evenly throughout. Thus, walking the index backwards meant that, to get 10 “hits,” it would have to scan about 100 index entries… and the index scan would be a big win.

It was wrong, however, because all of the i=1 values were clustered right at the beginning. If we reverse the order of the scan, we can see that this is a much more efficient plan:
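Using the reconstructed table sketched above (names are assumptions), reversing the sort means the backward scan becomes a forward scan, which hits the clustered i=1 rows immediately:

```sql
-- Scanning the same index forward finds the i = 1 rows right away,
-- since they all have the smallest values of f:
EXPLAIN ANALYZE
SELECT * FROM t WHERE i = 1 ORDER BY f ASC LIMIT 10;
```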

A CTE is an “optimization fence”: The planner is prohibited from pushing the ORDER BY or LIMIT down into the CTE. In this case, that means that it is also prohibited from picking the index scan, and we’re back to the sequential scan.
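A sketch of the CTE workaround, again using the assumed names from the reconstruction above (note that this fencing behavior changed in PostgreSQL 12, where non-recursive CTEs can be inlined unless marked `MATERIALIZED`; at the time this was written, all CTEs were fences):

```sql
-- The CTE is evaluated on its own; the planner cannot push the
-- ORDER BY / LIMIT down into it, so it can't choose the bad index scan.
WITH hits AS (
  SELECT * FROM t WHERE i = 1
)
SELECT * FROM hits ORDER BY f DESC LIMIT 10;
```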

So, when you see a query come completely apart, and it has a LIMIT clause, check to see if PostgreSQL is guessing wrong about the distribution of data. If the total number of hits before the LIMIT is relatively small, you can often use a CTE to isolate that part, and only apply the LIMIT thereafter. (Of course, you might be better off just doing the LIMIT operation in your application!)

Could you explain *why* all of the i=1 values would be “clustered right at the beginning”?

The data is constructed that way in the example. We set i to 1 for anything lower than a certain value of f, and then retrieve in reverse order based on f.

In real-life data, there could be a correlation between the field in the ORDER BY and the field in the predicate, or just random distribution that worked out badly in the particular case of the query you’re working on.

This statement is absolutely correct; it may be worth pointing out, though, that one big reason Postgres doesn’t do this is that finding a way to do so efficiently is a Very Hard Problem. Other lesser concerns include figuring out which columns’ correlations are worth tracking.