Discussion:
Avoiding Recheck Cond when using Select Distinct
jackrg
2013-02-22 16:36:16 UTC
Permalink
The following query produces a Recheck Cond and a costly Bitmap Heap Scan
even though I have a composite index that covers both columns being filtered
and selected. I believe this is because the initial bitmap scan produces
2912 rows, which is too many for the available bitmap space. I've tried
rewriting the command as "Select ... group by" but it still uses the BHS. Is
there a way to rewrite this command that would improve performance by
avoiding the costly Bitmap Heap Scan?


SELECT distinct store_id, book_id FROM "sales_points" WHERE
"sales_points"."store_id" IN (1, 2, 3, 4, 5, 6, 199, 201, 202) AND
"sales_points"."book_id" IN (421, 422, 419, 420)

Here is the explain/analyze output:


"HashAggregate (cost=5938.72..5939.01 rows=97 width=8) (actual
time=10.837..10.854 rows=32 loops=1)"
" -> Bitmap Heap Scan on sales_points (cost=47.03..5936.53 rows=2191
width=8) (actual time=0.547..5.296 rows=4233 loops=1)"
" Recheck Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))"
" Filter: (store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[]))"
" -> Bitmap Index Scan on index_sales_points_on_book_id
(cost=0.00..46.92 rows=4430 width=0) (actual time=0.469..0.469 rows=4233
loops=1)"
" Index Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))"
"Total runtime: 10.935 ms"


Actual runtime is more like 15ms when tested against a development database
(which gave est. total runtime of 6ms). Under load in production, the
command takes 10,158 ms. Tuning Postgres is not an option, as the instance
is provided by Heroku and, as far as I know, cannot be tuned by me.
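
For reference, the "Select ... group by" rewrite I mention above is along
these lines (same filters, GROUP BY instead of DISTINCT; it produced the
same bitmap plan):

SELECT store_id, book_id FROM "sales_points" WHERE
"sales_points"."store_id" IN (1, 2, 3, 4, 5, 6, 199, 201, 202) AND
"sales_points"."book_id" IN (421, 422, 419, 420)
GROUP BY store_id, book_id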



Jeff Janes
2013-02-22 17:19:27 UTC
Permalink
Post by jackrg
The following query produces a Recheck Cond and a costly Bitmap Heap Scan
even though I have a composite index that covers both columns being filtered
and selected.
Can you show us the definition of that index?
Post by jackrg
I believe this is because the initial bitmap scan produces
2912 rows, which is too many for the available bitmap space. I've tried
rewriting the command as "Select ... group by" but it still uses the BHS. Is
there a way to rewrite this command that would improve performance by
avoiding the costly Bitmap Heap Scan?
How do you know that the bitmap heap scan is costly, since you haven't
gotten it to use an alternative to compare it to? As a temporary
experimental measure, you could at least set enable_bitmapscan TO off, to
see what happens.
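Something along these lines, in a scratch session (the setting only affects
that session and is easy to put back):

SET enable_bitmapscan TO off;
EXPLAIN ANALYZE
SELECT DISTINCT store_id, book_id FROM sales_points
WHERE store_id IN (1, 2, 3, 4, 5, 6, 199, 201, 202)
  AND book_id IN (421, 422, 419, 420);
RESET enable_bitmapscan;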
Post by jackrg
SELECT distinct store_id, book_id FROM "sales_points" WHERE
"sales_points"."store_id" IN (1, 2, 3, 4, 5, 6, 199, 201, 202) AND
"sales_points"."book_id" IN (421, 422, 419, 420)
"HashAggregate (cost=5938.72..5939.01 rows=97 width=8) (actual
time=10.837..10.854 rows=32 loops=1)"
" -> Bitmap Heap Scan on sales_points (cost=47.03..5936.53 rows=2191
width=8) (actual time=0.547..5.296 rows=4233 loops=1)"
" Recheck Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))"
" Filter: (store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[]))"
" -> Bitmap Index Scan on index_sales_points_on_book_id
(cost=0.00..46.92 rows=4430 width=0) (actual time=0.469..0.469 rows=4233
loops=1)"
" Index Cond: (book_id = ANY
('{421,422,419,420}'::integer[]))"
"Total runtime: 10.935 ms"
Actual runtime is more like 15ms when tested against a development database
(which gave est. total runtime of 6ms).
I don't understand the parenthetical. In the explain plan you show, where
is 6ms coming from?
Post by jackrg
Under load in production, the
command takes 10,158 ms.
Do you mean 10.158 ms rather than 10,158 ms? If the production environment
really takes 1000 times longer than the environment in which you gathered
the EXPLAIN, then I would seriously doubt how useful that EXPLAIN could
possibly be.

Cheers,

Jeff
Jeff Janes
2013-02-23 23:53:01 UTC
Permalink
On Fri, Feb 22, 2013 at 9:45 AM, Jack Royal-Gordon <
Post by Jeff Janes
Post by jackrg
The following query produces a Recheck Cond and a costly Bitmap Heap Scan
even though I have a composite index that covers both columns being filtered
and selected.
Can you show us the definition of that index?
CREATE INDEX test ON sales_points USING btree (book_id, store_id);
When I made up some random data that I thought was plausible, the planner
did choose to use this index. You might have some skew in your data that
we would need to know more about in order to reproduce this (but that is
probably not worthwhile; see below).
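(For what it's worth, my made-up data was generated with something like the
following -- the table, column, and index names are taken from your query,
while the row count and value distribution are pure guesswork:)

CREATE TABLE sales_points (store_id integer, book_id integer);
INSERT INTO sales_points (store_id, book_id)
SELECT (random() * 500)::int, (random() * 2000)::int
FROM generate_series(1, 1000000);   -- 1M rows, ~500 stores, ~2000 books
CREATE INDEX test ON sales_points USING btree (book_id, store_id);
ANALYZE sales_points;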
Post by Jeff Janes
Post by jackrg
I believe this is because the initial bitmap scan produces
2912 rows, which is too many for the available bitmap space. I've tried
rewriting the command as "Select ... group by" but it still uses the BHS. Is
there a way to rewrite this command that would improve performance by
avoiding the costly Bitmap Heap Scan?
How do you know that the bitmap heap scan is costly, since you haven't
gotten it to use an alternative to compare it to? As a temporary
experimental measure, you could at least set enable_bitmapscan TO off, to
see what happens.
"HashAggregate (cost=8202.02..8203.33 rows=131 width=8) (actual
time=4.275..4.280 rows=32 loops=1)"
" -> Index Only Scan using test on sales_points (cost=0.01..8187.46
rows=2912 width=8) (actual time=0.083..3.268 rows=4233 loops=1)"
" Index Cond: ((book_id = ANY ('{421,422,419,420}'::integer[])) AND
(store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[])))"
" Heap Fetches: 4233"
"Total runtime: 4.331 ms"
While the total runtime reported is actually less, the costs reported are
50% higher.
I suspect this is because the planner assumes that IO is rather expensive,
and the bitmap plan saves on IO slightly while taking more CPU. But since
all your data is in memory in this case, the IO savings are not real, while
the extra CPU time is real. However, this is probably not relevant.
Shaving 5ms off of a 10ms execution time looks impressive, but shaving 5ms
off of your problematic case of 10s isn't going to do much for you.
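(For what it's worth, the cost assumptions in play here can be inspected,
and overridden per-session, with SHOW/SET, e.g.:)

SHOW seq_page_cost;          -- 1.0 by default
SHOW random_page_cost;       -- 4.0 by default; the planner assumes random I/O is expensive
SHOW effective_cache_size;   -- how much data the planner assumes is already cached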

It could be that the time savings extrapolated to the problem case will be
multiplicative rather than additive, but my gut tells me it won't be. In
fact it will probably go the other way: if the bitmap truly is more
IO-efficient, and your problematic case is due to poor IO performance, then
getting it off the bitmap scan could easily make things worse.


Post by Jeff Janes
Post by jackrg
Under load in production, the command takes 10,158 ms.
Do you mean 10.158 ms rather than 10,158 ms? If the production
environment really takes 1000 times longer than the environment in which
you gathered the EXPLAIN, then I would seriously doubt how useful that
EXPLAIN could possibly be.
That's right, 10K ms. That's why I started trying to tune the performance
of this query -- I had a warning in my log about a long-running query. Two
thoughts on why these numbers might be so different: 1) in production, but
not in the standalone test, many jobs are hitting the database
simultaneously, so perhaps there was a great deal of I/O wait time involved;
If the server is so overwhelmed that it takes 1000 times longer, I think it
would be setting off alarm bells all over the place, rather than just with
this one query. Unless all that other activity is driving your data out of
the cache, so this query needs to read it back from disk, while on the dev
system it doesn't. But that means you need to find a way to simulate the
actual cache hit rate on your dev system, or you won't be able to do
realistic testing.

It would really help to have "explain (analyze, buffers)". Especially if
you turn on track_io_timing (although that part probably can't be done on
Heroku, as it requires superuser access).
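That is, something like this -- the BUFFERS part works for any user, the
SET only for a superuser:

SET track_io_timing = on;    -- superuser-only
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT store_id, book_id FROM sales_points
WHERE store_id IN (1, 2, 3, 4, 5, 6, 199, 201, 202)
  AND book_id IN (421, 422, 419, 420);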

It would help even more if you can get that information while the server is
misbehaving, rather than when it is not. Does Heroku support auto_explain?
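
(For reference, auto_explain normally needs something like this in
postgresql.conf, which is exactly the kind of access a hosted service may
not give you; the values here are only illustrative:)

shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '1s'   # log the plan of any query slower than 1s
auto_explain.log_analyze = on          # include actual times and row counts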
Post by jackrg
2) production and the standalone test were separated by about 8 hours, and
it's possible that some sort of auto-vacuum operation had taken place to
make the execution more efficient.
More likely would be that some operation caused it to switch to a different
plan that is inherently inefficient, rather than changing the efficiency of
a given plan. That is where auto_explain would help. Also, if the bad
execution is coming through your app while the good execution comes through
psql, then maybe your app is secretly changing some of the planner settings.
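A quick way to check for that, from a connection opened the same way the
app opens it, is something like:

SELECT name, setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override');

Anything the app or its driver has changed shows up with source = 'session'.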

Cheers,

Jeff
Maciek Sakrejda
2013-02-25 18:11:11 UTC
Permalink
Post by Jeff Janes
It would really help to have "explain (analyze, buffers)". Especially if
you turn on track_io_timing, (although that part probably can't be done on
Heroku, as it requires superuser access.)
Right, that's not supported right now, although given that the superuser
restriction is primarily there for performance considerations (right?),
perhaps we should find some way of exposing this.
Post by Jeff Janes
It would help even more if you can get that information while the server is
misbehaving, rather than when it is not. Does Heroku support auto_explain?
We used to have some support for it, but given that all the tunables
there are superuser-only, we found it wasn't very useful in its
current form and took it out. Same as above: we probably should find a
way of exposing the tunables here to a less-trusted user. It'd be
great to have more granularity in some of these permissions.
Jeff Janes
2013-02-25 18:48:51 UTC
Permalink
Post by Maciek Sakrejda
Post by Jeff Janes
It would really help to have "explain (analyze, buffers)". Especially if
you turn on track_io_timing (although that part probably can't be done on
Heroku, as it requires superuser access).
Right, that's not supported right now, although given that the
superuser is primarily for performance considerations (right?),
perhaps we should find some way of exposing this.
I don't think it is SUSET for performance reasons, as the ordinary user
already has plenty of ways to shoot themselves (and fellow users) in the
foot performance-wise. I think it was based on the idea that tracking tools
the administrator has turned on should not be turned off by ordinary users.
I think the other way around would be fine (if it is off for the server, the
user can still turn it on for their session -- and presumably also turn it
off again if it is on only because they set it that way, not because the
administrator set it that way), but I think that behavior is not trivial to
implement. I looked in the archives, but the SUSET nature of this doesn't
seem to have been discussed.
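(Concretely, on a stock install an unprivileged role that tries it gets
something like:

SET track_io_timing = on;
ERROR:  permission denied to set parameter "track_io_timing"

while a superuser can turn it on globally or per-role.)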

Cheers,

Jeff

Vitalii Tymchyshyn
2013-02-22 17:59:40 UTC
Permalink
Post by jackrg
Tuning Postgres is not an option, as the instance
is provided by Heroku and, as far as I know, cannot be tuned by me.
Most tuning parameters can be set on a per-query or per-database basis, so
you can issue
alter database set param=value
to have the same effect as if it were set through postgresql.conf.
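For example (the database name and the value are placeholders):

ALTER DATABASE mydb SET random_page_cost = 2;
-- or, for just the current session:
SET random_page_cost = 2;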
Maciek Sakrejda
2013-02-22 19:06:08 UTC
Permalink
Post by Vitalii Tymchyshyn
Post by jackrg
Tuning Postgres is not an option, as the instance
is provided by Heroku and, as far as I know, cannot be tuned by me.
Most tuning parameters can be set on a per-query or per-database basis, so
you can issue
alter database set param=value
to have the same effect as if it were set through postgresql.conf.
Jack,

Jeff brought up some great points, and what Vitalii suggested should let
you tweak most knobs, but if you're running into limitations of the
platform or you find default settings that seem outright incorrect, please
file a support ticket and we'll be happy to work with you.

Thanks,
Maciek
Heroku Postgres