Discussion: Slow queries after vacuum analyze
Kevin Grittner
2012-12-13 20:42:44 UTC
There is a VACUUM ANALYZE scheduled to run during the night.
In the morning, one day out of two, there are some extremely slow
queries. Those queries last more than 5 minutes (I never waited
longer and cancelled them), whereas when everything is OK they
take less than 300 ms.
In order to resolve this I have to execute at least one ANALYZE,
sometimes more.
Windows
PostgreSQL 8.4.8
default_statistics_target = 100
In addition to increasing shared_buffers, work_mem, ..., I
changed default_statistics_target to 500, with no effect.
It was even worse, as I never managed to get rid of the slow
queries even after running ANALYZE several times.
So I fell back to default_statistics_target = 100 in order to get
rid of those slow queries.
You probably need to adjust your cost factors to more accurately
reflect the actual costs of various activities on your system. What
is probably happening is that there are two plans which are very
close together in estimated costs using the current values, while
the actual costs are very different.  The particular random sample
chosen can push the balance one way or the other.
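
As a quick, hedged sketch (not from the original thread; the table
name and the WHERE clause below are placeholders), one way to test
whether different cost factors flip the planner onto the better plan
is to override them for a single transaction and compare the plans:

-- Lower page costs for this transaction only, then inspect the plan.
-- "orders" and the filter are hypothetical.
BEGIN;
SET LOCAL seq_page_cost = 0.1;
SET LOCAL random_page_cost = 0.1;
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
ROLLBACK;  -- the SET LOCAL overrides are discarded with the transaction

Running the same EXPLAIN ANALYZE with and without the overrides shows
whether two plans that are close in estimated cost really do diverge
in actual cost.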

Please show the results from running the query on this page:

http://wiki.postgresql.org/wiki/Server_Configuration
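
(For reference, the query on that page is essentially the one below,
which lists every setting that has been changed from its default; if
the wiki page has since changed, treat this as an approximation:)

SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');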

Also, a description of the run environment would help.

Other information listed on this page would help, although cores,
RAM, and storage system information would probably be most
important.

http://wiki.postgresql.org/wiki/Server_Configuration

-Kevin
Kevin Grittner
2012-12-18 20:09:54 UTC
Memory: 4 GB in use, 15 GB free, 5 GB cached.
If the active portion of your database is actually small enough
that it fits in the OS cache, I recommend:

seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.05
shared_buffers = 512MB
temp_buffers = 16MB
work_mem = 32MB
wal_buffers = 16MB
checkpoint_segments = 32
effective_cache_size = 2560MB
default_statistics_target = 500
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.025
You could probably go a little higher on work_mem and
effective_cache_size. I would leave default_statistics_target alone
unless you see a lot of estimates which are off by more than an
order of magnitude. Even then, it is often better to set a higher
value for a few individual columns than for everything. Remember
that this setting has no effect until you reload the configuration
and then ANALYZE.
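
As a sketch of the per-column alternative (the table and column names
are made up for illustration), raising the target for a single column
and re-gathering its statistics looks like this:

-- Hypothetical table/column: raise the statistics target for one column
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
-- The planner only sees the finer-grained statistics after the next ANALYZE
ANALYZE orders;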

-Kevin
Kevin Grittner
2012-12-21 15:34:13 UTC
I would leave default_statistics_target alone unless you see a lot of
estimates which are off by more than an order of magnitude. Even then, it
is often better to set a higher value for a few individual columns than for
everything.
We had an issue with a customer where we had to increase the statistics
parameter for a primary key.
So I'd like to know if there is a way to identify which columns
need their statistics target changed.
I don't know a better way than to investigate queries which seem to
be running longer than you would expect, and look for cases where
EXPLAIN ANALYZE shows an estimated row count which is off from
actual by enough to cause a problem. Sometimes this is caused by
correlations between values in different columns, in which case a
higher target is not likely to help; but sometimes the problem is
an uneven distribution among values not included in
the "most common values", in which case boosting the target to
store more values and finer-grained information on ranges will be
exactly what you need.
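
As a hedged example of what to look for (the tables, columns, and the
plan line quoted in the comment are hypothetical), the tell-tale sign
in EXPLAIN ANALYZE output is a node whose estimated row count differs
from the actual count by an order of magnitude or more; pg_stats then
shows what the planner currently knows about the suspect column:

-- Look for nodes where estimated and actual rows diverge badly, e.g.:
--   Index Scan ... (cost=... rows=12 ...) (actual time=... rows=48700 ...)
EXPLAIN ANALYZE
SELECT o.*
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.region = 'EU';

-- Current statistics for the suspect column (hypothetical names)
SELECT null_frac, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'orders' AND attname = 'customer_id';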

-Kevin