Discussion:
SMP on a heavily loaded database
nobody nowhere
2013-01-03 23:45:08 UTC
CentOS 5.x, kernel 2.6.18-274
pgsql-9.1 from pgdg-91-centos.repo
Relatively small database, about 3.2 GB.
Lots of inserts, updates, and deletes.

I see unbalanced _user_ CPU usage on CPU 14, the CPU exclusively assigned to the hardware RAID controller.
What am I doing wrong, and is it possible to fix this somehow?

Thanks in advance.

Andrew.

# top -d 10.00 -b -n 2 -U postgres -c

top - 23:18:19 up 453 days, 57 min, 3 users, load average: 0.55, 0.47, 0.42
Tasks: 453 total, 1 running, 452 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.6%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 1.2%us, 0.1%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 2.6%us, 0.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu5 : 0.8%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 5.4%us, 0.2%sy, 0.0%ni, 94.2%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 3.3%us, 0.4%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu8 : 1.4%us, 0.3%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu9 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 1.6%us, 0.6%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st
Cpu12 : 0.5%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu13 : 1.4%us, 0.2%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu14 : 24.2%us, 0.8%sy, 0.0%ni, 74.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu15 : 0.7%us, 0.1%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st
Mem: 16426540k total, 16356772k used, 69768k free, 215764k buffers
Swap: 4194232k total, 145280k used, 4048952k free, 14434356k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6513 postgres 16 0 4329m 235m 225m S 3.1 1.5 0:02.24 postgres: XXXX_DB [local] idle
6891 postgres 16 0 4331m 223m 213m S 1.7 1.4 0:01.44 postgres: XXXX_DB [local] idle
6829 postgres 16 0 4329m 219m 210m S 1.6 1.4 0:01.56 postgres: XXXX_DB [local] idle
6539 postgres 16 0 4330m 319m 308m S 1.5 2.0 0:03.64 postgres: XXXX_DB [local] idle
6487 postgres 16 0 4329m 234m 224m S 1.2 1.5 0:02.95 postgres: XXXX_DB [local] idle
6818 postgres 16 0 4328m 224m 215m S 1.2 1.4 0:02.00 postgres: XXXX_DB [local] idle
6831 postgres 16 0 4328m 215m 206m S 1.2 1.3 0:01.41 postgres: XXXX_DB [local] idle
6868 postgres 16 0 4330m 223m 213m S 1.2 1.4 0:01.46 postgres: XXXX_DB [local] idle
6899 postgres 15 0 4328m 220m 211m S 1.2 1.4 0:01.61 postgres: XXXX_DB [local] idle
6515 postgres 15 0 4331m 233m 223m S 1.0 1.5 0:02.66 postgres: XXXX_DB [local] idle
6890 postgres 16 0 4331m 279m 268m S 1.0 1.7 0:02.01 postgres: XXXX_DB [local] idle
7083 postgres 15 0 4328m 207m 199m S 1.0 1.3 0:00.77 postgres: XXXX_DB [local] idle
6374 postgres 16 0 4329m 245m 235m S 0.9 1.5 0:04.30 postgres: XXXX_DB [local] idle
6481 postgres 15 0 4328m 293m 285m S 0.9 1.8 0:03.17 postgres: XXXX_DB [local] idle
6484 postgres 16 0 4329m 236m 226m S 0.9 1.5 0:02.82 postgres: XXXX_DB [local] idle
6509 postgres 16 0 4332m 237m 225m S 0.9 1.5 0:02.90 postgres: XXXX_DB [local] idle
6522 postgres 15 0 4330m 238m 228m S 0.9 1.5 0:02.35 postgres: XXXX_DB [local] idle
6812 postgres 16 0 4329m 283m 274m S 0.9 1.8 0:02.19 postgres: XXXX_DB [local] idle
7086 postgres 15 0 4328m 202m 194m S 0.9 1.3 0:00.70 postgres: XXXX_DB [local] idle
6494 postgres 15 0 4329m 317m 306m S 0.8 2.0 0:03.98 postgres: XXXX_DB [local] idle
6542 postgres 16 0 4330m 309m 299m S 0.8 1.9 0:02.79 postgres: XXXX_DB [local] idle
6550 postgres 15 0 4329m 287m 277m S 0.8 1.8 0:02.80 postgres: XXXX_DB [local] idle
6777 postgres 16 0 4329m 229m 219m S 0.8 1.4 0:02.13 postgres: XXXX_DB [local] idle
6816 postgres 16 0 4329m 230m 220m S 0.8 1.4 0:01.61 postgres: XXXX_DB [local] idle
6822 postgres 15 0 4329m 305m 295m S 0.8 1.9 0:02.09 postgres: XXXX_DB [local] idle
6897 postgres 15 0 4328m 219m 210m S 0.8 1.4 0:01.69 postgres: XXXX_DB [local] idle
6926 postgres 16 0 4328m 209m 200m S 0.8 1.3 0:00.81 postgres: XXXX_DB [local] idle
6473 postgres 16 0 4329m 236m 226m S 0.7 1.5 0:02.81 postgres: XXXX_DB [local] idle
6826 postgres 16 0 4330m 226m 216m S 0.7 1.4 0:02.14 postgres: XXXX_DB [local] idle
6834 postgres 16 0 4331m 282m 271m S 0.7 1.8 0:03.06 postgres: XXXX_DB [local] idle
6882 postgres 15 0 4330m 222m 212m S 0.7 1.4 0:01.83 postgres: XXXX_DB [local] idle
6885 postgres 16 0 4328m 104m 96m S 0.6 0.7 0:00.94 postgres: XXXX_DB [local] idle
6878 postgres 15 0 4319m 2992 1472 S 0.4 0.0 40:20.10 postgres: wal sender process postgres 555.555.555.555(47880) streaming 21B/2BFE82F8
6519 postgres 16 0 4330m 249m 240m S 0.3 1.6 0:03.14 postgres: XXXX_DB [local] idle
6477 postgres 16 0 4331m 239m 228m S 0.2 1.5 0:02.75 postgres: XXXX_DB [local] idle
6500 postgres 16 0 4328m 227m 219m S 0.2 1.4 0:01.84 postgres: XXXX_DB [local] idle
6576 postgres 16 0 4331m 289m 278m S 0.2 1.8 0:03.01 postgres: XXXX_DB [local] idle
6637 postgres 16 0 4330m 230m 220m S 0.2 1.4 0:02.13 postgres: XXXX_DB [local] idle
6773 postgres 16 0 4330m 225m 214m S 0.2 1.4 0:02.98 postgres: XXXX_DB [local] idle
6838 postgres 16 0 4329m 224m 215m S 0.2 1.4 0:01.30 postgres: XXXX_DB [local] idle
7283 postgres 16 0 4326m 24m 18m S 0.2 0.2 0:00.08 postgres: XXXX_DB [local] idle
6378 postgres 16 0 4329m 267m 258m S 0.1 1.7 0:03.74 postgres: XXXX_DB [local] idle
6439 postgres 15 0 4330m 256m 244m S 0.1 1.6 0:03.62 postgres: XXXX_DB [local] idle
6535 postgres 15 0 4330m 289m 279m S 0.1 1.8 0:03.14 postgres: XXXX_DB [local] idle
6538 postgres 15 0 4330m 231m 221m S 0.1 1.4 0:02.17 postgres: XXXX_DB [local] idle
6544 postgres 15 0 4329m 226m 216m S 0.1 1.4 0:01.86 postgres: XXXX_DB [local] idle
6546 postgres 15 0 4329m 229m 219m S 0.1 1.4 0:02.40 postgres: XXXX_DB [local] idle
6552 postgres 16 0 4330m 246m 236m S 0.1 1.5 0:02.49 postgres: XXXX_DB [local] idle
6555 postgres 15 0 4328m 226m 217m S 0.1 1.4 0:02.05 postgres: XXXX_DB [local] idle
6558 postgres 16 0 4329m 233m 223m S 0.1 1.5 0:02.59 postgres: XXXX_DB [local] idle
6572 postgres 16 0 4328m 227m 218m S 0.1 1.4 0:01.69 postgres: XXXX_DB [local] idle
6580 postgres 16 0 4329m 229m 220m S 0.1 1.4 0:02.34 postgres: XXXX_DB [local] idle
6724 postgres 16 0 4331m 231m 220m S 0.1 1.4 0:01.80 postgres: XXXX_DB [local] idle
6804 postgres 16 0 4328m 115m 106m S 0.1 0.7 0:01.48 postgres: XXXX_DB [local] idle
6811 postgres 15 0 4329m 223m 214m S 0.1 1.4 0:01.51 postgres: XXXX_DB [local] idle
6821 postgres 16 0 4331m 306m 295m S 0.1 1.9 0:02.19 postgres: XXXX_DB [local] idle
6836 postgres 16 0 4329m 226m 216m S 0.1 1.4 0:01.72 postgres: XXXX_DB [local] idle
6879 postgres 16 0 4330m 222m 212m S 0.1 1.4 0:01.84 postgres: XXXX_DB [local] idle
6888 postgres 16 0 4328m 216m 208m S 0.1 1.4 0:01.32 postgres: XXXX_DB [local] idle
6896 postgres 16 0 4328m 213m 206m S 0.1 1.3 0:01.07 postgres: XXXX_DB [local] idle
14999 postgres 15 0 115m 1840 808 S 0.1 0.0 29:59.16 postgres: stats collector process
830 postgres 15 0 4319m 8396 6420 S 0.0 0.1 0:00.06 postgres: XXXX_DB 192.168.0.1(42974) idle
6808 postgres 15 0 4328m 222m 214m S 0.0 1.4 0:01.80 postgres: XXXX_DB [local] idle
6873 postgres 15 0 4329m 222m 213m S 0.0 1.4 0:01.92 postgres: XXXX_DB [local] idle
6875 postgres 16 0 4329m 228m 219m S 0.0 1.4 0:02.46 postgres: XXXX_DB [local] idle
6906 postgres 16 0 4328m 216m 208m S 0.0 1.4 0:00.83 postgres: XXXX_DB [local] idle
7274 postgres 15 0 4344m 534m 531m S 0.0 3.3 0:00.37 postgres: autovacuum worker process XXXX_DB
7818 postgres 15 0 4319m 6640 4680 S 0.0 0.0 0:00.06 postgres: postgres XXXX_DB 193.8.246.6(1032) idle
10553 postgres 15 0 4319m 6940 5000 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(35402) idle
10600 postgres 15 0 4319m 6780 4848 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(35612) idle
11146 postgres 15 0 4319m 7692 5744 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(39366) idle
12291 postgres 15 0 4319m 6716 4784 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(49540) idle
12711 postgres 15 0 4319m 8048 5984 S 0.0 0.0 0:00.02 postgres: XXXX_DB 192.168.0.1(51440) idle
12717 postgres 15 0 4319m 6768 4836 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(51616) idle
12815 postgres 15 0 4319m 6540 4608 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(52989) idle
13140 postgres 15 0 4319m 7736 5660 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(55225) idle
14378 postgres 15 0 4320m 7324 4928 S 0.0 0.0 0:00.03 postgres: postgres postgres 222.222.222.222(1030) idle
14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:46.80 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
14981 postgres 15 0 112m 1368 728 S 0.0 0.0 0:00.06 postgres: logger process
14995 postgres 15 0 4320m 2.0g 2.0g S 0.0 12.7 4:44.31 postgres: writer process
14996 postgres 15 0 4318m 17m 16m S 0.0 0.1 0:12.76 postgres: wal writer process
14997 postgres 15 0 4319m 3312 1568 S 0.0 0.0 0:10.14 postgres: autovacuum launcher process
14998 postgres 15 0 114m 1444 756 S 0.0 0.0 0:13.06 postgres: archiver process last was 000000010000021B0000002A
15027 postgres 15 0 4319m 80m 77m S 0.0 0.5 31:35.48 postgres: monitor XXXX_DB 10.0.0.0 (55433) idle
15070 postgres 15 0 4319m 82m 80m S 0.0 0.5 28:39.80 postgres: monitor XXXX_DB 10.10.0.1 (59360) idle
15808 postgres 15 0 4324m 15m 10m S 0.0 0.1 0:00.27 postgres: postgres XXXX_DB 222.222.222.222 (1031) idle
18787 postgres 15 0 4319m 8004 5932 S 0.0 0.0 0:00.02 postgres: XXXX_DB 192.168.0.1(46831) idle
18850 postgres 15 0 4319m 7364 5304 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(48843) idle
20331 postgres 15 0 4319m 6592 4660 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(60573) idle
26950 postgres 15 0 4319m 8172 6136 S 0.0 0.0 0:00.03 postgres: XXXX_DB 192.168.0.1(47890) idle
27599 postgres 15 0 4319m 8220 6200 S 0.0 0.1 0:00.04 postgres: XXXX_DB 192.168.0.1(49566) idle
28039 postgres 15 0 4319m 6644 4696 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(38329) idle
30450 postgres 15 0 4319m 8412 6316 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(49490) idle
31327 postgres 15 0 4319m 8508 6412 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(57064) idle
31363 postgres 15 0 4319m 8428 6364 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(58128) idle
32624 postgres 15 0 4319m 7356 5340 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(38002) idle
32651 postgres 15 0 4319m 8540 6572 S 0.0 0.1 0:00.07 postgres: XXXX_DB 192.168.0.1(38544) idle
Scott Marlowe
2013-01-04 07:42:47 UTC
Post by nobody nowhere
CentOS 5.x, kernel 2.6.18-274
pgsql-9.1 from pgdg-91-centos.repo
Relatively small database, about 3.2 GB.
Lots of inserts, updates, and deletes.
I see unbalanced _user_ CPU usage on CPU 14, the CPU exclusively assigned to the hardware RAID controller.
What am I doing wrong, and is it possible to fix this somehow?
Thanks in advance.
Andrew.
So how many concurrent users are accessing this DB? PostgreSQL uses one
process per connection, and each process runs on a single core, so to speak.
It can't spread the load for one connection across all cores.
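
For reference, a quick way to see how many backends exist and how many are actually doing work at a given instant is a query against pg_stat_activity. This is only a sketch for 9.1, where idle backends show current_query = '<IDLE>' (later releases replaced that with the state column); XXXX_DB stands in for the real database name, as in the top output above.

psql -U postgres -d XXXX_DB -c "
SELECT count(*) AS total_backends,
       sum(CASE WHEN current_query NOT IN ('<IDLE>', '<IDLE> in transaction')
                THEN 1 ELSE 0 END) AS active_backends
FROM pg_stat_activity;"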
nobody nowhere
2013-01-04 14:41:25 UTC
Post by Scott Marlowe
So how many concurrent users are accessing this DB? PostgreSQL uses one
process per connection, and each process runs on a single core, so to speak.
It can't spread the load for one connection across all cores.
64 PHP FastCGI processes over the Unix socket, and about 20-30 over TCP.
Charles Gomes
2013-01-04 14:47:51 UTC
So how many concurrent users are accessing this DB? PostgreSQL uses one
process per connection, and each process runs on a single core, so to speak.
It can't spread the load for one connection across all cores.
64 PHP FastCGI processes over the Unix socket, and about 20-30 over TCP.
Are you running irqbalance? The OS can pin a process to the CPU that handles the relevant IRQ.
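
Two quick checks, sketched under the assumption of a stock CentOS 5 setup (the controller driver name used here, megasas, is only an example and has to be matched to the actual hardware):

service irqbalance status          # is irqbalance running at all?
grep -i megasas /proc/interrupts   # which IRQ belongs to the RAID controller,
                                   # plus the per-CPU interrupt counts for it
cat /proc/irq/<IRQ>/smp_affinity   # current CPU mask for that IRQ (substitute the number found above)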
nobody nowhere
2013-01-04 15:56:28 UTC
Post by Charles Gomes
Are you running irqbalance? The OS can pin a process to the CPU that handles the relevant IRQ.
I switch irqbalance off on any heavily loaded server and statically assign IRQs to processors using

echo 000X > /proc/irq/XX/smp_affinity

irqbalance does not help to fix this.
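
Spelled out as a concrete example (the IRQ number 98 and the CPU choice are invented; they have to come from /proc/interrupts and the box's own CPU numbering):

# say /proc/interrupts shows the RAID controller on IRQ 98
# and it should be serviced by CPU 2 only (bit 2 -> hex mask 4)
echo 4 > /proc/irq/98/smp_affinity
cat /proc/irq/98/smp_affinity      # verify the new mask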
Claudio Freire
2013-01-04 14:52:10 UTC
So how many concurrent users are accessing this DB? PostgreSQL uses one
process per connection, and each process runs on a single core, so to speak.
It can't spread the load for one connection across all cores.
64 PHP FastCGI processes over the Unix socket, and about 20-30 over TCP.
I guess that means the server isn't dedicated to postgres...

...have you checked which PID is using that core? Is it postgres-related?
nobody nowhere
2013-01-04 16:23:46 UTC
Post by Claudio Freire
I guess that means the server isn't dedicated to postgres...
...have you checked which PID is using that core? Is it postgres-related?
How do I find that out?

Only Postgres on this server makes heavy use of the RAID controller; PHP is completely in XCache. Tonight I'll try changing the socket to TCP, maybe that will help.
Claudio Freire
2013-01-04 16:43:16 UTC
Post by Claudio Freire
...have you checked which PID is using that core? Is it postgres-related?
How do I find that out?
An unfiltered top or ps might give you a clue. You could also try
iotop; PHP does hit the filesystem (sessions stored on disk), and if
that's on the same partition as Postgres, Postgres's fsyncs might cause
it to flush to disk quite heavily.
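
If iotop is installed, something along these lines shows only the processes actually doing I/O, averaged over two 10-second samples, plus where PHP keeps its session files; treat it as a sketch:

iotop -b -o -d 10 -n 2              # batch mode, only tasks currently doing I/O
php -i | grep session.save_path     # are PHP sessions on the same filesystem as the data directory?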
Claudio Freire
2013-01-04 19:04:00 UTC
Post by Claudio Freire
An unfiltered top or ps might give you a clue. You could also try
Look at the letter at the start of the topic.
It's filtered by -u postgres, so you can't see apache there.
Post by Claudio Freire
iotop, php does hit the filesystem (sessions stored in disk), and if
it's on the same partition as postgres, postgres' fsyncs might cause
it to flush to disk quite heavily.
The question was "which PID is using that core?"
Can you answer that question with certainty using top or iotop? I can't.
If you see some process hogging CPU or I/O in a way that's consistent with
CPU 14, then you have a candidate. I don't see much in that iotop output,
except the 640k/s writes in pg's writer, which isn't much at all
unless you have a seriously underpowered or broken system. If all else fails,
you can look for processes with high accumulated CPU time, like the
"monitor" ones in the first top (though that doesn't say much,
since that top output is incomplete), or the WAL sender. Without the ability
to compare against all other processes, none of that means much - but
once you do, you can inspect those processes more closely.

Oh... and you can also tell top to show the "last used processor". I
guess I should have said this first ;-)
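
For the archive: in the procps top of that era the column is added through the interactive field selection (press f and pick the "Last used CPU (SMP)" field, shown as P). The same information is available non-interactively from ps, where psr is the processor the task last ran on:

ps -eo pid,psr,pcpu,comm --sort=-pcpu | head -20   # busiest tasks and the CPU they sit on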
nobody nowhere
2013-01-04 21:07:50 UTC
Post by Claudio Freire
Oh... and you can also tell top to show the "last used processor". I
guess I should have said this first ;-)
Even if it doesn't fix anything, I'll have learned a new top feature :)
It's definitely CPU 14.


top - 21:54:38 up 453 days, 23:34, 1 user, load average: 0.56, 0.55, 0.48
Tasks: 429 total, 1 running, 428 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.2%us, 0.1%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.1%us, 0.1%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.7%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 1.5%us, 0.4%sy, 0.0%ni, 98.0%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 2.1%us, 0.2%sy, 0.0%ni, 97.4%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu7 : 2.4%us, 0.4%sy, 0.0%ni, 97.0%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Cpu8 : 1.4%us, 0.4%sy, 0.0%ni, 98.1%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu9 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 1.2%us, 0.5%sy, 0.0%ni, 97.9%id, 0.0%wa, 0.0%hi, 0.5%si, 0.0%st
Cpu12 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu13 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu14 : 20.5%us, 0.9%sy, 0.0%ni, 78.1%id, 0.4%wa, 0.0%hi, 0.1%si, 0.0%st
Cpu15 : 1.2%us, 0.1%sy, 0.0%ni, 98.5%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 16426540k total, 16173980k used, 252560k free, 219348k buffers
Swap: 4194232k total, 147296k used, 4046936k free, 14482096k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND
47 root RT -5 0 0 0 S 0.0 0.0 0:34.84 15 [migration/15]
48 root 34 19 0 0 0 S 0.0 0.0 0:01.42 15 [ksoftirqd/15]
49 root RT -5 0 0 0 S 0.0 0.0 0:00.00 15 [watchdog/15]
65 root 10 -5 0 0 0 S 0.0 0.0 0:00.03 15 [events/15]
238 root 10 -5 0 0 0 S 0.0 0.0 0:03.76 15 [kblockd/15]
406 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 15 [cqueue/15]
601 root 15 0 0 0 0 S 0.0 0.0 88:52.30 15 [pdflush]
620 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 15 [aio/15]
964 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 15 [ata/15]
2684 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 15 [kmpathd/15]
2914 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 15 [rpciod/15]
3270 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 15 [ib_cm/15]
5906 rpc 15 0 8072 688 552 S 0.0 0.0 0:00.00 15 portmap
14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:54.39 15 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
44 root RT -5 0 0 0 S 0.0 0.0 0:40.50 14 [migration/14]
45 root 34 19 0 0 0 S 0.0 0.0 0:03.51 14 [ksoftirqd/14]
46 root RT -5 0 0 0 S 0.0 0.0 0:00.00 14 [watchdog/14]
64 root 10 -5 0 0 0 S 0.0 0.0 0:00.04 14 [events/14]
237 root 10 -5 0 0 0 S 0.0 0.0 9:51.44 14 [kblockd/14]
405 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 14 [cqueue/14]
619 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 14 [aio/14]
963 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 14 [ata/14]
1092 root 10 -5 0 0 0 S 0.0 0.0 52:21.12 14 [kjournald]
2683 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [kmpathd/14]
2724 root 10 -5 0 0 0 S 0.0 0.0 2:15.40 14 [kjournald]
2726 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [kjournald]
2913 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [rpciod/14]
3269 root 18 -5 0 0 0 S 0.0 0.0 0:00.00 14 [ib_cm/14]
8970 postgres 16 0 4327m 205m 197m S 0.2 1.3 0:01.33 14 postgres: user user_db [local] idle
8973 postgres 15 0 4327m 199m 191m S 0.1 1.2 0:00.37 14 postgres: user user_db [local] idle
8977 postgres 16 0 4328m 48m 40m S 0.7 0.3 0:00.76 14 postgres: user user_db [local] idle
8980 postgres 16 0 4328m 51m 43m S 0.1 0.3 0:00.50 14 postgres: user user_db [local] idle
8981 postgres 15 0 4327m 203m 195m S 0.0 1.3 0:00.72 14 postgres: user user_db [local] idle
8985 postgres 15 0 4327m 43m 36m S 0.1 0.3 0:00.29 14 postgres: user user_db [local] idle
8988 postgres 16 0 4328m 205m 196m S 0.0 1.3 0:00.91 14 postgres: user user_db [local] idle
8991 postgres 15 0 4327m 205m 197m S 0.1 1.3 0:00.79 14 postgres: user user_db [local] idle
8993 postgres 15 0 4328m 207m 199m S 1.9 1.3 0:00.99 14 postgres: user user_db [local] idle
8996 postgres 15 0 4328m 205m 196m S 1.1 1.3 0:00.93 14 postgres: user user_db [local] idle
9000 postgres 16 0 4328m 207m 199m S 0.7 1.3 0:00.82 14 postgres: user user_db [local] idle
9004 postgres 16 0 4329m 204m 194m S 0.1 1.3 0:00.69 14 postgres: user user_db [local] idle
9005 postgres 15 0 4327m 200m 193m S 0.7 1.2 0:00.63 14 postgres: user user_db [local] idle
9007 postgres 15 0 4327m 199m 192m S 0.1 1.2 0:00.49 14 postgres: user user_db [local] idle
9010 postgres 15 0 4327m 202m 195m S 0.2 1.3 0:00.65 14 postgres: user user_db [local] idle
9016 postgres 15 0 4326m 34m 28m S 0.1 0.2 0:00.15 14 postgres: user user_db [local] idle
9018 postgres 16 0 4327m 203m 195m S 1.0 1.3 0:00.72 14 postgres: user user_db [local] idle
9020 postgres 15 0 4327m 45m 37m S 0.1 0.3 0:00.49 14 postgres: user user_db [local] idle
9022 postgres 15 0 4327m 42m 35m S 0.1 0.3 0:00.20 14 postgres: user user_db [local] idle
9025 postgres 16 0 4328m 201m 193m S 0.3 1.3 0:00.75 14 postgres: user user_db [local] idle
9026 postgres 16 0 4327m 47m 40m S 0.1 0.3 0:00.49 14 postgres: user user_db [local] idle
9038 postgres 16 0 4327m 201m 193m S 0.1 1.3 0:00.70 14 postgres: user user_db [local] idle
9042 postgres 15 0 4327m 201m 193m S 1.8 1.3 0:00.71 14 postgres: user user_db [local] idle
9046 postgres 15 0 4327m 201m 193m S 0.1 1.3 0:00.65 14 postgres: user user_db [local] idle
9048 postgres 15 0 4327m 200m 193m S 1.4 1.2 0:00.52 14 postgres: user user_db [local] idle
9049 postgres 15 0 4328m 200m 192m S 0.1 1.2 0:00.50 14 postgres: user user_db [local] idle
9053 postgres 15 0 4327m 44m 37m S 0.1 0.3 0:00.34 14 postgres: user user_db [local] idle
9054 postgres 16 0 4327m 46m 40m S 0.1 0.3 0:00.43 14 postgres: user user_db [local] idle
9055 postgres 16 0 4328m 200m 192m S 0.0 1.3 0:00.39 14 postgres: user user_db [local] idle
9056 postgres 16 0 4328m 201m 192m S 0.7 1.3 0:00.75 14 postgres: user user_db [local] idle
9057 postgres 16 0 4327m 200m 192m S 0.2 1.3 0:00.72 14 postgres: user user_db [local] idle
9061 postgres 15 0 4328m 200m 192m S 0.0 1.2 0:00.49 14 postgres: user user_db [local] idle
9065 postgres 15 0 4328m 204m 196m S 0.3 1.3 0:00.80 14 postgres: user user_db [local] idle
9067 postgres 15 0 4327m 43m 35m S 0.0 0.3 0:00.30 14 postgres: user user_db [local] idle
9071 postgres 15 0 4327m 48m 40m S 0.1 0.3 0:00.53 14 postgres: user user_db [local] idle
9076 postgres 15 0 4326m 43m 36m S 0.0 0.3 0:00.61 14 postgres: user user_db [local] idle
9078 postgres 15 0 4328m 206m 198m S 0.0 1.3 0:00.64 14 postgres: user user_db [local] idle
9079 postgres 15 0 4327m 45m 38m S 0.0 0.3 0:00.37 14 postgres: user user_db [local] idle
9080 postgres 16 0 4327m 200m 193m S 0.0 1.3 0:00.62 14 postgres: user user_db [local] idle
9082 postgres 16 0 4328m 202m 193m S 1.5 1.3 0:00.84 14 postgres: user user_db [local] idle
9084 postgres 15 0 4327m 46m 38m S 0.0 0.3 0:00.54 14 postgres: user user_db [local] idle
9086 postgres 15 0 4328m 203m 194m S 0.0 1.3 0:00.38 14 postgres: user user_db [local] idle
9087 postgres 16 0 4327m 199m 192m S 1.0 1.2 0:00.63 14 postgres: user user_db [local] idle
9089 postgres 16 0 4328m 205m 196m S 0.2 1.3 0:00.87 14 postgres: user user_db [local] idle
9091 postgres 15 0 4327m 45m 38m S 0.1 0.3 0:00.41 14 postgres: user user_db [local] idle
9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle
9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle
9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle
13629 root 18 0 65288 280 140 S 0.0 0.0 0:00.00 14 rpc.rquotad
Claudio Freire
2013-01-04 21:20:17 UTC
Post by nobody nowhere
9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle
9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle
9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle
That looks like pg has been pinned to CPU 14. I don't think it's pg's
doing. All I can think of is: check scheduler tweaks, NUMA, and pg's
initscript, just in case it's being pinned explicitly.
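
One direct way to rule explicit pinning in or out is taskset; 14979 is the postmaster PID from the earlier listing, 8993 one of the backends sitting on CPU 14, and the initscript path is the one the PGDG packages normally install (adjust if it differs):

taskset -cp 14979                      # affinity of the postmaster; unpinned on 16 cores -> "0-15"
taskset -cp 8993                       # same check for one of the backends
grep -E 'taskset|numactl|cpuset' /etc/init.d/postgresql-9.1   # any explicit pinning in the initscript?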
Claudio Freire
2013-01-04 21:53:17 UTC
Post by Claudio Freire
That looks like pg has been pinned to CPU 14. I don't think it's pg's
doing. All I can think of is: check scheduler tweaks, NUMA, and pg's
initscript, just in case it's being pinned explicitly.
Not pinned.
Forks with TCP connections use other CPUs. I just added a connection pool and
changed the socket to TCP.
How interesting. It must be a peculiarity of unix sockets. I know unix
sockets have close to no buffering, task-switching to the consumer
instead of buffering. Perhaps what you're experiencing here is this
"optimization" effect. It's probably not harmful at all. The OS will
switch to another CPU if the need arises.

Have you done any stress testing? Is there any actual performance impact?
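
If you do want numbers, pgbench (shipped in the contrib package) makes the socket-vs-TCP comparison easy. This is only a sketch, with arbitrary scale, client count, and duration, and it should be pointed at a throwaway database because pgbench -i creates and loads its own tables:

createdb pgbench_test
pgbench -i -s 10 pgbench_test                      # initialize the test tables once
pgbench -c 16 -j 4 -T 60 pgbench_test              # over the Unix socket
pgbench -h 127.0.0.1 -c 16 -j 4 -T 60 pgbench_test # same load over TCP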
nobody nowhere
2013-01-05 08:37:49 UTC
Post by Claudio Freire
How interesting. It must be a peculiarity of unix sockets. I know unix
sockets have close to no buffering, task-switching to the consumer
instead of buffering. Perhaps what you're experiencing here is this
"optimization" effect. It's probably not harmful at all. The OS will
switch to another CPU if the need arises.
It's not a socket problem. I get the same result when I change the PHP FastCGI connection to TCP.
The remote clients over TCP only do inserts and deletes; just data collection, nothing more.
Locally, PHP calls a lot of PL data-processing functions. It's a PL problem!
Post by Claudio Freire
Have you done any stress testing? Is there any actual performance impact?
In my experience, stress-test results and real production performance are usually completely different. :)
Application development has gone hand in hand with business growth; we just add functionality to the system step by step.
Over the last couple of months we have grown quickly, so I decided to check performance :(
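
Coming back to the "PL problem": on 9.1 you can confirm which functions burn the time by enabling function tracking and then reading pg_stat_user_functions (self_time excludes time spent in functions they call). A sketch, again with XXXX_DB standing in for the real database name:

# in postgresql.conf:  track_functions = pl
pg_ctl -D /var/lib/pgsql/9.1/data reload           # a reload is enough, no restart
psql -U postgres -d XXXX_DB -c "
SELECT schemaname, funcname, calls, total_time, self_time
FROM pg_stat_user_functions
ORDER BY self_time DESC
LIMIT 10;"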
Tom Lane
2013-01-04 23:01:56 UTC
[ all postgres processes seem to be pinned to CPU 14 ]
I wonder whether this is a "benefit" of sched_autogroup_enabled?

http://archives.postgresql.org/message-id/***@optionshouse.com

regards, tom lane
nobody nowhere
2013-01-05 08:53:22 UTC
Post by Tom Lane
[ all postgres processes seem to be pinned to CPU 14 ]
I wonder whether this is a "benefit" of sched_autogroup_enabled?
regards, tom lane
Thanks, Tom.

This is RHEL 5.x, though (kernel 2.6.18), so sched_autogroup_enabled isn't available. :(
nobody nowhere
2013-01-07 18:10:17 UTC
Fixed by

synchronous_commit = off
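
For anyone who lands here later: synchronous_commit = off lets COMMIT return before the WAL is flushed to disk, so backends stop waiting on the RAID controller for every small transaction. A crash can lose the last fraction of a second of acknowledged commits (up to about three times wal_writer_delay), but it cannot corrupt the database. It can be applied globally or only to the busy role; the role name below is made up:

# in postgresql.conf:  synchronous_commit = off
pg_ctl -D /var/lib/pgsql/9.1/data reload                      # no restart needed
psql -c "ALTER ROLE app_user SET synchronous_commit = off;"   # or narrower, per role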