
Documentation: https://www.postgresql.org/docs/12/runtime-config-resource.html

How to Make PostgreSQL Configuration Changes Take Effect

1. Check a parameter's default unit

select name, unit from pg_settings;

2. Check a parameter's allowed enum values

select name, enumvals from pg_settings;

Setting parameters

  • Option 1: set them in postgresql.conf
  • Option 2: pass them at server startup: postgres -c log_connections=yes -c log_destination='syslog'

2.1 Viewing and setting parameters in psql

  • View: show <parameter name from postgresql.conf>;
  • Set: set <parameter name> TO <value> or set <parameter name> = <value>

When configuring a service on Linux, the usual routine is to edit a *.conf file and restart the service. Many services also support reload, but for any given setting it is often hard to say whether a restart is actually required for the change to take effect.

PostgreSQL makes this explicit by recording it in a system view:

postgres=# select distinct(context) from pg_settings;
      context
-------------------
 backend
 user
 internal
 postmaster
 superuser
 sighup
 superuser-backend
(7 rows)

internal: set at compile time; can only be changed by recompiling.

postmaster: takes effect only after a server restart.

sighup: sending the server a HUP signal makes it reload postgresql.conf; the change takes effect immediately.

backend: like sighup, but running sessions are unaffected; the change applies only to new sessions.

superuser: can only be changed by a superuser (e.g. postgres); takes effect without reloading the whole configuration.

user: any session user can change it at any time; it affects only that session.
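The rules above amount to a small lookup table: given a parameter's context, what action makes a change take effect. A minimal sketch in Python (the function name `action_for` is just for illustration; the mapping follows the contexts listed above, with `superuser-backend` treated like `backend` plus a privilege requirement):

```python
# Map each pg_settings context to the action needed for a change to apply.
ACTION_BY_CONTEXT = {
    "internal": "recompile",
    "postmaster": "restart server",
    "sighup": "reload config (SIGHUP / pg_reload_conf)",
    "superuser-backend": "new session (superuser only)",
    "backend": "new session",
    "superuser": "SET (superuser only)",
    "user": "SET in session",
}

def action_for(context: str) -> str:
    """Return what is needed for a parameter change in this context to apply."""
    return ACTION_BY_CONTEXT[context]
```

For example, `action_for("postmaster")` tells you a restart is unavoidable, while `action_for("sighup")` means a reload is enough.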

2.2 Three ways to reload the database configuration

Run as a superuser:

postgres=# SELECT pg_reload_conf();

Send a HUP signal manually with UNIX kill:

$kill -HUP PID

Use pg_ctl to trigger SIGHUP:

$pg_ctl reload

3. How parameters take effect

set

This kind of parameter can be set at the session level:

sql
postgres=# set work_mem=262144;
SET
postgres=# show work_mem;
 work_mem
----------
 256MB
(1 row)

#after logging back in
$ psql -U postgres
Type "help" for help.

postgres=# show work_mem;
 work_mem
----------
 4MB
(1 row)
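The session above sets work_mem=262144 and `show` reports 256MB because work_mem is measured in kB when no unit is given. A quick sanity check of that arithmetic (the helper name is just for illustration, and it is a simplification of how `show` actually renders values):

```python
def kb_to_display(kb: int) -> str:
    """Render a kB-denominated setting roughly the way `show` would
    (simplified: only promotes values on exact MB/GB boundaries)."""
    if kb % (1024 * 1024) == 0:
        return f"{kb // (1024 * 1024)}GB"
    if kb % 1024 == 0:
        return f"{kb // 1024}MB"
    return f"{kb}kB"

print(kb_to_display(262144))  # the value set above -> 256MB
print(kb_to_display(4096))    # the default work_mem -> 4MB
```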

pg_reload_conf function

After changing a sighup-class parameter, reload directly with:

postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

pg_ctl

pg_ctl reload re-reads the configuration files:

[postgres@localhost ~]$ pg_ctl reload 
server signaled

Restart
pg_ctl restart restarts the database. This applies every parameter that can be changed, but note that session-level settings may be lost as a result.

Examples of parameters in each context:

sql
postgres=# select name, context from pg_settings;
            name            |  context
----------------------------+------------
 archive_command            | sighup
 archive_mode               | postmaster
 block_size                 | internal
 log_connections            | backend
 log_min_duration_statement | superuser
 search_path                | user


sql
postgres@postgres=>select name,context from pg_settings where name like 'wal_buffers';
    name     |  context   
-------------+------------
 wal_buffers | postmaster
(1 row)

--- if context is postmaster, a database restart is required

There are three ways to reload the database configuration:

  1. Run as a superuser

postgres=# SELECT pg_reload_conf();

  2. Send a HUP signal manually with UNIX kill

$kill -HUP PID

  3. Use pg_ctl to trigger SIGHUP

$pg_ctl reload

4. Check the configuration file locations

sql
postgres=# select name,setting from pg_settings where category='File Locations';
       name        |                 setting                  
-------------------+------------------------------------------
 config_file       | /var/lib/postgresql/data/postgresql.conf
 data_directory    | /var/lib/postgresql/data
 external_pid_file | 
 hba_file          | /var/lib/postgresql/data/pg_hba.conf
 ident_file        | /var/lib/postgresql/data/pg_ident.conf
(5 rows)

#view parameters (note: 'max_connection' below is misspelled, which is why only 5 rows come back)
select name,context,unit,setting,boot_val,reset_val
from pg_settings
where name in
( 'listen_addresses','max_connection','shared_buffers','effective_cache_size','work_mem','maintenance_work_mem'
)
order by context,name ;

         name         |  context   | unit | setting | boot_val  | reset_val 
----------------------+------------+------+---------+-----------+-----------
 listen_addresses     | postmaster |      | *       | localhost | *
 shared_buffers       | postmaster | 8kB  | 65536   | 1024      | 65536
 effective_cache_size | user       | 8kB  | 262144  | 524288    | 262144
 maintenance_work_mem | user       | kB   | 131072  | 65536     | 131072
 work_mem             | user       | kB   | 3121152 | 4096      | 3121152
(5 rows)

4.1 View key parameters

sql
postgres=# select name,context,unit,setting,boot_val,reset_val from pg_settings where name in('listen_addresses','max_connections','shared_buffers','effective_cache_size','work_mem','maintenance_work_mem') order by context,name;
         name         |  context   | unit | setting | boot_val  | reset_val 
----------------------+------------+------+---------+-----------+-----------
 listen_addresses     | postmaster |      | *       | localhost | *
 max_connections      | postmaster |      | 100     | 100       | 100
 shared_buffers       | postmaster | 8kB  | 16384   | 1024      | 16384
 effective_cache_size | user       | 8kB  | 524288  | 524288    | 524288
 maintenance_work_mem | user       | kB   | 65536   | 65536     | 65536
 work_mem             | user       | kB   | 4096    | 4096      | 4096
(6 rows)

If context is postmaster, the parameter requires a PostgreSQL service restart to take effect;

if it is user, a single reload is enough to apply it globally. Restarting the service terminates active connections; reloading does not.

The unit column gives the unit each setting is measured in.

setting is the current value; boot_val is the compiled-in default; reset_val is the value that will apply after a server restart or configuration reload.

After changing a value in postgresql.conf, always check that setting and reset_val match; if they differ, the change has not taken effect yet and a restart or reload is needed.
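Because setting is reported in the parameter's unit (8kB pages for shared_buffers, plain kB for work_mem), converting to bytes by hand is a common step. A small helper, using the memory-unit strings that appear in pg_settings.unit (the function name is just for illustration):

```python
# Multipliers for the memory units seen in pg_settings.unit.
UNIT_BYTES = {"B": 1, "kB": 1024, "8kB": 8 * 1024, "MB": 1024**2, "GB": 1024**3}

def setting_bytes(setting: str, unit: str) -> int:
    """Convert a pg_settings (setting, unit) pair to bytes."""
    return int(setting) * UNIT_BYTES[unit]

# shared_buffers from the output above: 16384 pages of 8kB = 128MB
print(setting_bytes("16384", "8kB") // 1024**2, "MB")
```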

4.2 postgresql.auto.conf vs. postgresql.conf

From version 9.4 onward, postgresql.auto.conf takes precedence over postgresql.conf: if the same parameter appears in both files, the value from postgresql.auto.conf wins.

Note that postgresql.auto.conf is meant to be modified from psql with ALTER SYSTEM, while postgresql.conf can be edited directly in a text editor.

postgresql.auto.conf is a plain-text file, but it is managed by the server and should not be edited by hand.

ALTER SYSTEM SET only changes postgresql.auto.conf. Setting a parameter back to DEFAULT removes its entry from postgresql.auto.conf, so the value from postgresql.conf applies again.
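That precedence rule can be modeled in a few lines: read postgresql.conf first, then let any entries from postgresql.auto.conf overwrite them; ALTER SYSTEM ... SET ... TO DEFAULT corresponds to the auto.conf entry disappearing. A sketch (the parser below is deliberately minimal and ignores quoting and include-directive edge cases):

```python
def parse_conf(text: str) -> dict:
    """Tiny parser for `name = value` lines; skips comments and blanks."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            name, value = line.split("=", 1)
            settings[name.strip()] = value.strip().strip("'")
    return settings

def effective(conf_text: str, auto_conf_text: str) -> dict:
    """postgresql.auto.conf entries override postgresql.conf entries."""
    merged = parse_conf(conf_text)
    merged.update(parse_conf(auto_conf_text))
    return merged

conf = "work_mem = '4MB'\nshared_buffers = '128MB'\n"
auto = "work_mem = '256MB'\n"  # as written by ALTER SYSTEM SET work_mem = '256MB'
print(effective(conf, auto)["work_mem"])  # auto.conf wins
print(effective(conf, "")["work_mem"])    # after ALTER SYSTEM SET work_mem TO DEFAULT
```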

4.3 postgresql.conf

Changing these values requires a database service restart.

listen_addresses is usually set to localhost, but many people set it to *, meaning PostgreSQL accepts connections on any local IP address.

port defaults to 5432.

max_connections

5. Enabling archiving

5.1 Check

sql

postgres=# \c
You are now connected to database "postgres" as user "postgres".

#check whether archiving is enabled
postgres=# show archive_mode;
-[ RECORD 1 ]+---
archive_mode | on

on  -> enabled
off -> disabled


5.2 Enable

The PostgreSQL parameter file:

sql
[postgres@pg01 ~]$ cd /var/lib/pgsql/12/data
[postgres@pg01 data]$ ll postgresql.conf 
-rw-------. 1 postgres postgres 26612 Mar  1 16:03 postgresql.conf

Edit postgresql.conf with vi as follows:

sql
wal_level = replica                     # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'test ! -f /var/lib/pgsql/12/arch/%f && cp %p  /var/lib/pgsql/12/arch/%f'
## 其中: 
%p表示wal日志文件的路径,
%f表示wal日志文件名称

wal_level :
minimal 记录的WAL日志信息最少, 除了记录数据库异常关闭需要恢复时的WAL外, 其他操作信息都不记录;
replica 记录的WAL信息比minimal信息多, 会记录支持WAL归档、复制和备库中启用只读查询等操作所需的WAL信息;
logical 记录的最多, 包含了支持逻辑解析所需的WAL。 并包含了minimal和replica记录的信息

archive_command:
	可以将WAL归档到本级目录, 也可以归档到远程其他主机上
	
max_wal_senders:
控制主库上的最大WAL发送进程数, 通过pg_basebackup命令在主库上做基准备份时也会消耗WAL进程。此参数不能大于max_connections, 默认值10

A database restart is required.

Switching the WAL segment manually

Before PG 10:

postgres=# select pg_switch_xlog();

PG 10 and later:

postgres=# select pg_switch_wal();

 pg_switch_wal
---------------
 0/130005E0
(1 row)
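The value pg_switch_wal() returns (0/130005E0 above) is a write-ahead-log location (LSN) written as two hex halves, high/low; the byte position it denotes is high * 2^32 + low. A sketch of the conversion (the function name is just for illustration):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert an LSN like '0/130005E0' to an absolute byte position."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) + int(low, 16)

print(lsn_to_bytes("0/130005E0"))  # 318768608
```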

5.3 Cleaning up archived WAL

Archive directory: /data/pgdata/archivedir

wal_level = replica 
archive_mode = on 
#archive_command = 'test ! -f /data/pgdata/archivedir/%f && cp %p /data/pgdata/archivedir/%f' 
archive_command = 'arch.sh %f %p'
archive_timeout = 60s

#vi arch.sh
[postgres@pg01 data]$ pwd
/data/pgdata/data
[postgres@pg01 data]$ cat arch.sh 
test ! -f /data/pgdata/archivedir/$1 && cp --preserve=timestamps $2 /data/pgdata/archivedir/$1 ; find /data/pgdata/archivedir/ -type f -mtime +7 -exec rm -f {} \;
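arch.sh above does two things: copy the WAL file into the archive unless it is already there (preserving timestamps), and delete archived files older than seven days. The same logic, sketched in Python; the function name and paths are placeholders for illustration, not a drop-in replacement for the script:

```python
import os
import shutil
import time

def archive_wal(src: str, archive_dir: str, keep_days: int = 7) -> None:
    """Copy src into archive_dir unless already present (copy2 preserves
    timestamps, like cp --preserve=timestamps), then prune files whose
    mtime is older than keep_days."""
    dest = os.path.join(archive_dir, os.path.basename(src))
    if not os.path.exists(dest):
        shutil.copy2(src, dest)
    cutoff = time.time() - keep_days * 86400
    for name in os.listdir(archive_dir):
        path = os.path.join(archive_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
```

Like the shell version, this must return success only when the WAL file is safely archived; PostgreSQL will not recycle the segment otherwise.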

Restart the PostgreSQL database

[postgres@pg01 pgdata]$ ps uax|grep archiver
postgres  6745  0.0  0.2 140540  4684 ?  Ss  03:05  0:00 postgres: archiver

https://zhangeamon.top/postgres/params/

6. The meaning of postmaster.pid

cat postmaster.pid  
  
2551  
/pgdata/digoal/1921/data02/pg_root  
1331803654  
1921  
/pgdata/digoal/1921/data02/pg_root  
0.0.0.0  
  1921001   6127619

1. 2551 — the PID of the postgres main process

ps -ewf|grep 2551  
postgres  2551     1  0 17:27 ?        00:00:00 /opt/pgsql/bin/postgres  
postgres  2552  2551  0 17:27 ?        00:00:00 postgres: logger process     
postgres  2554  2551  0 17:27 ?        00:00:00 postgres: writer process     
postgres  2555  2551  0 17:27 ?        00:00:00 postgres: wal writer process     
postgres  2556  2551  0 17:27 ?        00:00:00 postgres: autovacuum launcher process     
postgres  2557  2551  0 17:27 ?        00:00:00 postgres: archiver process     
postgres  2558  2551  0 17:27 ?        00:00:00 postgres: stats collector process

2. /pgdata/digoal/1921/data02/pg_root

The data directory.

3. 1331803654

This line is the file's creation time converted to an epoch value. I initially assumed it was the time returned by pg_postmaster_start_time(); if the number looks wrong, check the creation time of postmaster.pid itself.

digoal=# select pg_postmaster_start_time();  
   pg_postmaster_start_time      
-------------------------------  
 2012-03-15 17:27:34.416047+08  
(1 row)  
  
digoal=# select extract(epoch from '2012-03-15 17:27:34'::timestamp);    
date_part     
------------    
1331803654   
(1 row)

Here the postmaster start time happens to match the file's creation time, but the value really is the file creation time.
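The same conversion the SQL above performs can be checked in Python: 1331803654 seconds after the epoch is 2012-03-15 17:27:34 in the server's +08:00 zone.

```python
from datetime import datetime, timezone, timedelta

# The third line of postmaster.pid, as seen above.
start_epoch = 1331803654

cst = timezone(timedelta(hours=8))  # the server's +08:00 zone
started = datetime.fromtimestamp(start_epoch, tz=cst)
print(started.strftime("%Y-%m-%d %H:%M:%S"))  # 2012-03-15 17:27:34
```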

stat postmaster.pid
  File: `postmaster.pid'
  Size: 119             Blocks: 8          IO Block: 4096   regular file
Device: fd03h/64771d    Inode: 1572868     Links: 1
Access: (0600/-rw-------)  Uid: (  500/postgres)   Gid: (  500/postgres)
Access: 2012-03-15 17:41:10.416810701 +0800
Modify: 2012-03-15 17:27:34.364810701 +0800
Change: 2012-03-15 17:27:34.364810701 +0800

4. 1921 — the port the database listens on; corresponds to port = 1921 in postgresql.conf.

5. /pgdata/digoal/1921/data02/pg_root — the Unix socket directory; corresponds to unix_socket_directory = '/pgdata/digoal/1921/data02/pg_root' in postgresql.conf.

6. 0.0.0.0 — the listen address; corresponds to listen_addresses = '0.0.0.0' in postgresql.conf.

7. The last line holds the shared memory identifiers (the key and shmid from the shared memory segments).

  1921001   6127619  
ipcs    
------ Shared Memory Segments --------   
key        shmid      owner      perms      bytes      nattch     status         
0x001d4fe9 6127619    postgres  600        617250816  4

postmaster.pid shows the key converted to decimal.
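The ipcs key is printed in hex while postmaster.pid stores it in decimal; the two numbers above are the same value:

```python
# Key as shown by ipcs (hex) and as stored in postmaster.pid (decimal).
ipcs_key = 0x001d4fe9
print(ipcs_key)        # 1921001, the decimal form in postmaster.pid
print(hex(1921001))    # 0x1d4fe9, back to the ipcs form
```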

7. Parameters

7.1 Memory parameters

work_mem

work_mem (integer)

Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The value defaults to four megabytes (4MB). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.


Example

  • Generate one million rows
shell
[postgres@sht-sgmhadoopdn-04 ~]$ perl -e '@c=("a".."z","A".."Z",0..9); print join("",map{$c[rand@c]}10..20+rand(40))."\n" for 1..1000000' > /tmp/random_strings
[postgres@sht-sgmhadoopdn-04 ~]$ ls -lh /tmp/random_strings 
-rw-r--r-- 1 postgres dba 31M Nov 21 22:44 /tmp/random_strings
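For readers who prefer Python, an equivalent sketch of the Perl one-liner: it emits random alphanumeric strings whose lengths vary between roughly 11 and 51 characters, like the original (the function name and sample size are illustrative; the original writes 1,000,000 lines to /tmp/random_strings):

```python
import random
import string

def random_strings(n: int, min_len: int = 11, max_len: int = 51):
    """Yield n random alphanumeric strings, mimicking the Perl one-liner."""
    chars = string.ascii_letters + string.digits
    for _ in range(n):
        length = random.randint(min_len, max_len)
        yield "".join(random.choice(chars) for _ in range(length))

# Print a small sample.
for s in random_strings(5):
    print(s)
```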
  • Create the table and load the data
sql
edbstore=# CREATE TABLE test (id serial PRIMARY KEY, random_text text );
CREATE TABLE

edbstore=# \d test
                            Table "public.test"
   Column    |  Type   |                     Modifiers                     
-------------+---------+---------------------------------------------------
 id          | integer | not null default nextval('test_id_seq'::regclass)
 random_text | text    | 
Indexes:
    "test_pkey" PRIMARY KEY, btree (id)

edbstore=# \d
             List of relations
 Schema |    Name     |   Type   |  Owner   
--------+-------------+----------+----------
 public | tb1         | table    | postgres
 public | test        | table    | postgres
 public | test_id_seq | sequence | postgres
(3 rows)

edbstore=# copy test (random_text) FROM '/tmp/random_strings';
COPY 1000000

edbstore=# select * from test limit 10;
 id |                   random_text                   
----+-------------------------------------------------
  1 | CKQyHTYH5VjeHRUC6YYLF8H5S
  2 | G22uBhFmrlA17wTUzf
  3 | ey6kX7I6etknzhEFCL
  4 | 8LB6navSS8VyoIeqbJBx9RqB3O4AI8GIFExnM7s
  5 | bvYt4dKGSiAun6yA5Q7owlKWJGEgD0nlxoBRZm8B
  6 | qk1RfhXHwo2PNpbI4
  7 | rnPterTw1a3Z3DoL8rhzlltUKb5
  8 | l2TrrbDsBkAa5V5ZBKFE59k4T7sDKA58yrS0mJNssl7CJnF
  9 | xM9HPgq6QMRsx1aOTqM0LPRQRYkQy50uV
 10 | viSJ4p1i3O0dY8tKei3x
(10 rows)
  • Fetch different row counts and observe how the plan reported by explain changes
sql
edbstore=# show work_mem;
 work_mem 
----------
 1MB
(1 row)

edbstore=# EXPLAIN analyze SELECT * FROM test WHERE id <= 10 ORDER BY random_text ASC;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=8.73..8.75 rows=9 width=35) (actual time=0.188..0.202 rows=10 loops=1)
   Sort Key: random_text
   Sort Method: quicksort  Memory: 25kB
   ->  Index Scan using test_pkey on test  (cost=0.42..8.58 rows=9 width=35) (actual time=0.018..0.037 rows=10 loops=1)
         Index Cond: (id <= 10)
 Planning time: 1.435 ms
 Execution time: 0.294 ms
(7 rows)

edbstore=# EXPLAIN analyze SELECT * FROM test WHERE id <= 100 ORDER BY random_text ASC;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=13.50..13.75 rows=100 width=35) (actual time=0.870..1.027 rows=100 loops=1)
   Sort Key: random_text
   Sort Method: quicksort  Memory: 34kB
   ->  Index Scan using test_pkey on test  (cost=0.42..10.18 rows=100 width=35) (actual time=0.022..0.218 rows=100 loops=1)
         Index Cond: (id <= 100)
 Planning time: 0.286 ms
 Execution time: 1.248 ms
(7 rows)

edbstore=# EXPLAIN analyze SELECT * FROM test WHERE id <= 1000 ORDER BY random_text ASC;
                                                          QUERY PLAN                                                          
------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=92.57..95.10 rows=1011 width=35) (actual time=8.846..10.251 rows=1000 loops=1)
   Sort Key: random_text
   Sort Method: quicksort  Memory: 112kB
   ->  Index Scan using test_pkey on test  (cost=0.42..42.12 rows=1011 width=35) (actual time=0.027..2.474 rows=1000 loops=1)
         Index Cond: (id <= 1000)
 Planning time: 0.286 ms
 Execution time: 11.584 ms
(7 rows)

edbstore=# EXPLAIN analyze SELECT * FROM test WHERE id <= 10000 ORDER BY random_text ASC;
                                                            QUERY PLAN                                                            
----------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=1049.39..1074.68 rows=10116 width=35) (actual time=144.963..160.943 rows=10000 loops=1)
   Sort Key: random_text
   Sort Method: external merge  Disk: 448kB
   ->  Index Scan using test_pkey on test  (cost=0.42..376.45 rows=10116 width=35) (actual time=0.063..22.225 rows=10000 loops=1)
         Index Cond: (id <= 10000)
 Planning time: 0.149 ms
 Execution time: 173.841 ms
(7 rows)

edbstore=# EXPLAIN analyze SELECT * FROM test WHERE id <= 100000 ORDER BY random_text ASC;
                                                              QUERY PLAN                                                              
--------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=17477.39..17727.70 rows=100122 width=35) (actual time=1325.789..1706.516 rows=100000 loops=1)
   Sort Key: random_text
   Sort Method: external merge  Disk: 4440kB
   ->  Index Scan using test_pkey on test  (cost=0.42..3680.56 rows=100122 width=35) (actual time=0.088..214.490 rows=100000 loops=1)
         Index Cond: (id <= 100000)
 Planning time: 0.147 ms
 Execution time: 1822.008 ms
(7 rows)

edbstore=# EXPLAIN analyze SELECT * FROM test WHERE id <= 1000000 ORDER BY random_text ASC;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=202426.34..204926.34 rows=1000000 width=35) (actual time=8703.143..10160.421 rows=1000000 loops=1)
   Sort Key: random_text
   Sort Method: external merge  Disk: 44504kB
   ->  Seq Scan on test  (cost=0.00..20732.00 rows=1000000 width=35) (actual time=0.024..1021.491 rows=1000000 loops=1)
         Filter: (id <= 1000000)
 Planning time: 0.316 ms
 Execution time: 10577.464 ms
(7 rows)
   rows  | Sort Method                   | Execution time
---------+-------------------------------+----------------
      10 | quicksort  Memory: 25kB       | 0.294 ms
     100 | quicksort  Memory: 34kB       | 1.248 ms
    1000 | quicksort  Memory: 112kB      | 11.584 ms
   10000 | external merge  Disk: 448kB   | 173.841 ms
  100000 | external merge  Disk: 4440kB  | 1822.008 ms
 1000000 | external merge  Disk: 44504kB | 10577.464 ms

The table above shows that once the sort exceeds about ten thousand rows, explain reports the sort method changing from an in-memory quicksort to an external merge on disk: work_mem is no longer large enough to hold the sort and hash state in memory. So we increase work_mem:

psql
edbstore=# set work_mem="500MB";
SET
edbstore=# EXPLAIN analyze SELECT * FROM test WHERE id <= 1000000 ORDER BY random_text ASC;
                                                      QUERY PLAN                                                       
-----------------------------------------------------------------------------------------------------------------------
 Sort  (cost=120389.84..122889.84 rows=1000000 width=35) (actual time=6232.270..6884.121 rows=1000000 loops=1)
   Sort Key: random_text
   Sort Method: quicksort  Memory: 112847kB
   ->  Seq Scan on test  (cost=0.00..20732.00 rows=1000000 width=35) (actual time=0.015..659.035 rows=1000000 loops=1)
         Filter: (id <= 1000000)
 Planning time: 0.125 ms
 Execution time: 7302.621 ms
(7 rows)
   rows  | Sort Method                 | Execution time
---------+-----------------------------+----------------
 1000000 | quicksort  Memory: 112847kB | 6887.851 ms

The sort method changes from external merge on disk back to in-memory quicksort.

maintenance_work_mem

The memory governed by maintenance_work_mem mainly affects VACUUM, ANALYZE, CREATE INDEX, and REINDEX. These commands run infrequently but consume significant resources, so it is worth allocating more than the usual 64MB default to let them finish faster. On a system with 32GB of RAM, 512MB-1024MB is reasonable.

effective_cache_size

effective_cache_size is the planner's assumption about the maximum memory available for caching a query's data (PostgreSQL shared buffers plus the OS cache). It does not allocate memory and is unrelated to shared_buffers; it is only a hint used when generating plans. Setting it to 25%-50% of free memory is recommended; larger values make the planner favor index scans over sequential scans.

wal_buffers