* Ivan Voras (ivoras(at)freebsd(dot)org) wrote:
> Total runtime: 0.507 ms
[...]
> Total runtime: 118.689 ms
>
> See in the first query where I have a simple LIMIT, it fetches random 10
> rows quickly, but in the second one, as soon as I give it to execute and
> calculate the entire result set before I limit it, the performance is
> horrible.
What you've shown is that it takes 0.5 ms to fetch 10 rows and 118 ms to
fetch ~8500 rows. Now, maybe I've missed something, but that's about
0.05 ms per row for the first query and about 0.014 ms per row for the
second, and the second plan also has a sort in it. The per-row cost of
going through the data actually improves, since you're doing more work
in bulk.
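Just to spell the arithmetic out (same totals as your EXPLAIN ANALYZE
output, assuming ~8500 rows in the second result set):

```sql
SELECT 0.507 / 10     AS ms_per_row_limited,  -- ~0.05 ms/row
       118.689 / 8500 AS ms_per_row_full;     -- ~0.014 ms/row
```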
Since you're ordering by 'id', PG has to fetch and sort every row
returned by the index scan before it can emit the first one, so the
LIMIT can't cut that work short. That's not free.
Regarding the statistics: it's entirely possible that the index is *not*
the fastest way to pull this data (it's nearly 10% of the table). If the
stats were better, the planner might use a seq scan instead; I'm not
sure how expensive the filter itself would be in that plan.
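As a sketch of how to test that (table, column, and filter names here
are hypothetical stand-ins, not taken from your query), you could raise
the statistics target, re-analyze, and compare plans with the index scan
temporarily discouraged:

```sql
-- Hypothetical names: replace "t", "id", and the filter with your own.
ALTER TABLE t ALTER COLUMN id SET STATISTICS 1000;  -- collect finer-grained stats
ANALYZE t;                                          -- refresh the planner's estimates

EXPLAIN ANALYZE SELECT * FROM t WHERE flag = 42 ORDER BY id;

SET enable_indexscan = off;   -- steer the planner away from the index scan
EXPLAIN ANALYZE SELECT * FROM t WHERE flag = 42 ORDER BY id;
RESET enable_indexscan;
```

If the seq-scan plan comes out faster, the stats (or the cost settings)
are pushing the planner in the wrong direction.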
Thanks,
Stephen