
1. Single-node snapshot backup

If the etcdctl tool is not installed, download it first:

bash
[root@master01 ~]# ETCD_VER=v3.5.0
[root@master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz
[root@master01 ~]# tar xf etcd-v3.5.0-linux-amd64.tar.gz 
[root@master01 ~]# cp etcd-v3.5.0-linux-amd64/etcdctl /usr/local/bin/
[root@master01 ~]# cp etcd-v3.5.0-linux-amd64/etcdutl /usr/local/bin/


[root@master01 ~]# etcdctl version
etcdctl version: 3.5.0
API version: 3.5
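Each etcd release page also publishes a checksum file (named SHA256SUMS) next to the tarballs, so the download can be verified before unpacking. A minimal sketch of the check, demonstrated on a throwaway stand-in file rather than a real download:

```shell
# Demonstrate sha256sum -c on a stand-in file; for a real release,
# fetch SHA256SUMS from the same GitHub release page and run the same
# check against the downloaded etcd tarball.
tmp=$(mktemp -d)
cd "$tmp"
echo 'stand-in tarball contents' > etcd-demo.tar.gz
sha256sum etcd-demo.tar.gz > SHA256SUMS
sha256sum -c SHA256SUMS      # prints: etcd-demo.tar.gz: OK
```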

1. Backup

If TLS certificates are configured, add the three parameters (cacert, cert, key); if not, simply omit them.

For example:

bash
[root@master01 ~]# export ETCDCTL_API=3
etcdctl \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--endpoints=https://127.0.0.1:2379 \
endpoint status --write-out=table

+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+-------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS|
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+-------+
| https://127.0.0.1:2379 | e8b350c59aaca55a |   3.5.9 |  3.9 MB |     false |      false |        27 |     802333 |             802333 |       |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+-------+

Meaning of each column in the etcd output:

ENDPOINT            The etcd access address; clients connect to etcd through this address.
ID                  The unique identifier of the etcd node.
VERSION             The etcd server version.
DB SIZE             The size of the etcd database.
IS LEADER           Whether this node is the leader of the Raft cluster. In Raft, the leader handles all client interaction and log replication, and the other nodes (followers) take instructions from the leader.
IS LEARNER          Whether this node is a learner. In an etcd cluster a node can join as a learner, which means it receives log entries from the leader but does not take part in voting.
RAFT TERM           In Raft, a "term" is a logical unit of time; within each term a new leader is elected.
RAFT INDEX          The index of the latest Raft log entry.
RAFT APPLIED INDEX  The index of the latest Raft log entry that has been applied to the state machine.
ERRORS              Errors the etcd node encountered while processing requests.
bash
DATE=$(date +%Y-%m-%d)
# NOTE: ETH0 must be set to the interface name (e.g. ETH0=eth0) before this line runs.
BIND_IP=`/sbin/ifconfig ${ETH0}| grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1' | head -1`


[root@localhost ~]# etcdctl --endpoints localhost:2379 snapshot save $BIND_IP-snapshot-$DATE.db

(the sample output below was captured from a run that saved to uu.db)
{"level":"info","ts":1623915233.1032534,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"uu.db.part"}
{"level":"info","ts":"2021-06-17T03:33:53.105-0400","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1623915233.1055887,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"localhost:2379"}
{"level":"info","ts":"2021-06-17T03:33:53.108-0400","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1623915233.1104572,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"localhost:2379","size":"20 kB","took":0.007136312}
{"level":"info","ts":1623915233.1106055,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"uu.db"}
Snapshot saved at uu.db
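The `ifconfig | grep` pipeline above depends on an `ETH0` variable being set elsewhere, but the address extraction itself can be sanity-checked against a canned `ifconfig` sample (the addresses below are made up for illustration):

```shell
# Sketch: the same inet-address extraction as the backup script above,
# fed a canned ifconfig sample so the pipeline can be checked without a
# live network interface.
sample='inet 192.168.122.247  netmask 255.255.255.0
inet 127.0.0.1  netmask 255.0.0.0'
BIND_IP=$(printf '%s\n' "$sample" \
  | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' \
  | grep -Eo '([0-9]*\.){3}[0-9]*' \
  | grep -v '127.0.0.1' | head -1)
DATE=$(date +%Y-%m-%d)
echo "${BIND_IP}-snapshot-${DATE}.db"   # 192.168.122.247-snapshot-<today>.db
```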
  • View snapshot information
[root@localhost ~]# etcdctl snapshot status uu.db --write-out=table
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 6d1803a9 |      110 |        146 |      37 kB |
+----------+----------+------------+------------+

2. Restore

Stop the etcd service and wait until the etcd container has stopped before performing the restore.

bash
Delete etcd's current data directory first; otherwise the restore fails because the directory already exists.


Steps:
 export ETCDCTL_API=3
 cd /data/etcd_data/
[root@localhost etcd_data]# etcdctl snapshot restore /root/192.168.122.247-snapshot-2021-06-17.db --data-dir=./data/etcd_data/


{"level":"info","ts":1623915905.37449,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/root/192.168.122.247-snapshot-2021-06-17.db","wal-dir":"data/etcd_data/member/wal","data-dir":"./data/etcd_data/","snap-dir":"data/etcd_data/member/snap"}
{"level":"info","ts":1623915905.382676,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1623915905.391657,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/root/192.168.122.247-snapshot-2021-06-17.db","wal-dir":"data/etcd_data/member/wal","data-dir":"./data/etcd_data/","snap-dir":"data/etcd_data/member/snap"}


#After the restore completes, change the owner and group of the data directory regenerated from the backup to match the owner and group configured for the etcd system service.

[root@localhost etcd_data]# ls
data  default.etcd  etcd.conf  etcd.yml  etcds  member  wal
[root@localhost etcd_data]# ls data/
etcd_data
 At this point a new data directory has been created under /data/etcd/, which changes the storage path, so the data directory must be moved back into place:
 mv default.etcd default.etcd_bak
 mv /data/etcd/data/etcd/default.etcd /data/etcd/
 rm -rf /data/etcd/data

Restart etcd:
# systemctl stop etcd
# systemctl start etcd
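The `mv`/`rm` sequence above is easy to get wrong on a live node; the same directory shuffle can be rehearsed in a throwaway temp directory first (paths shortened here, directory names taken from the listing above):

```shell
# Rehearse the post-restore data-directory shuffle safely in a temp dir:
# back up the old default.etcd, promote the restored copy, drop the
# leftover intermediate tree.
root=$(mktemp -d)
mkdir -p "$root/default.etcd" "$root/data/etcd/default.etcd/member"
mv "$root/default.etcd" "$root/default.etcd_bak"   # keep old data as a backup
mv "$root/data/etcd/default.etcd" "$root/"         # promote the restored dir
rm -rf "$root/data"                                # remove the leftover tree
ls "$root"                                         # leaves: default.etcd, default.etcd_bak
```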

3. Backup over port 2380

ETCDCTL_API=3 etcdctl --endpoints=http://192.168.122.249:2380 snapshot save snapshot.db

#Restore
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db

#Restart the failed etcd member; because it rejoins an existing cluster, set
initial-cluster-state to existing
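Where initial-cluster-state is set depends on how etcd is launched; with a systemd environment file (the path /etc/etcd/etcd.conf is an assumption, adjust to your deployment) the change looks like:

```shell
# /etc/etcd/etcd.conf (assumed path): tell the restarted member it is
# rejoining an existing cluster rather than bootstrapping a new one.
ETCD_INITIAL_CLUSTER_STATE="existing"
```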

2. Cluster snapshot backup

1. Backup

bash
#!/bin/bash
DATE=$(date +%Y-%m-%d-%H)
BIND_IP=`/sbin/ifconfig ${ETH0}| grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1' | head -1`

backup_path="/data/aw-etcdbak/"
cd $backup_path
export ETCDCTL_API=3
ENDPOINTS='10.16.1.110:12379,10.16.1.111:12379,10.16.1.112:12379'
etcdctl --endpoints=$ENDPOINTS snapshot save  $BIND_IP-snapshot-$DATE.db

find ./ -type f -name  "*.db"  -mtime +7 |xargs rm -f


#View the backup snapshot information
[root@localhost ~]#  etcdctl --write-out=table snapshot status 192.168.122.247-snapshot-2021-06-17.db
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| cefed78f |        0 |          3 |      20 kB |
+----------+----------+------------+------------+
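The `find ... -mtime +7 | xargs rm -f` retention line in the script can likewise be verified against back-dated dummy files before it runs against real backups (GNU `touch -d` and `xargs -r` are assumed):

```shell
# Verify the 7-day retention rule on dummy snapshots in a temp dir.
bak=$(mktemp -d)
cd "$bak"
touch -d '10 days ago' old-snapshot.db   # older than 7 days: should be removed
touch fresh-snapshot.db                  # brand new: should survive
find ./ -type f -name "*.db" -mtime +7 | xargs -r rm -f   # -r: skip rm when the list is empty
ls
```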

2. Data restore

bash
After obtaining the etcd backup file 172.24.119.41-snapshot-2021-06-17.db, distribute it to each of the three nodes.
Stop etcd on all three nodes:


systemctl stop etcd
After confirming all three nodes are stopped, run the restore on each node in turn.


Restore on node1
mv /data/etcd/node1.etcd /data/etcd/node1.etcd_bak
cd /data/etcd/
etcdctl --data-dir=/var/lib/etcd snapshot restore /data/172.24.119.41-snapshot-2019-09-24.db --name node1 --initial-cluster node1=http://172.25.102.10:2380,node2=http://172.25.102.39:2380,node3=http://172.25.102.17:2380 --initial-advertise-peer-urls http://172.25.102.10:2380


Restore on node2
mv /data/etcd/node2.etcd /data/etcd/node2.etcd_bak
cd /data/etcd/
etcdctl --data-dir=/var/lib/etcd snapshot restore /data/172.24.119.41-snapshot-2019-09-24.db --name node2 --initial-cluster node1=http://172.25.102.10:2380,node2=http://172.25.102.39:2380,node3=http://172.25.102.17:2380 --initial-advertise-peer-urls http://172.25.102.39:2380


Restore on node3

mv /data/etcd/node3.etcd /data/etcd/node3.etcd_bak
cd /data/etcd/
etcdctl --data-dir=/var/lib/etcd snapshot restore /data/172.24.119.41-snapshot-2019-09-24.db --name node3 --initial-cluster node1=http://172.25.102.10:2380,node2=http://172.25.102.39:2380,node3=http://172.25.102.17:2380 --initial-advertise-peer-urls http://172.25.102.17:2380


Start etcd on node1, node2 and node3 in turn:
systemctl start etcd

etcdctl get /ad/media   # check that the restored data is intact
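The three per-node restore commands above differ only in --name and the advertised peer URL, so they can be generated from a single node table (names and IPs taken from the example) to avoid copy/paste drift between nodes:

```shell
# Print one restore command per node from a single name:ip table.
SNAP=/data/172.24.119.41-snapshot-2019-09-24.db
CLUSTER='node1=http://172.25.102.10:2380,node2=http://172.25.102.39:2380,node3=http://172.25.102.17:2380'
for node in node1:172.25.102.10 node2:172.25.102.39 node3:172.25.102.17; do
  name=${node%%:*}   # part before the colon
  ip=${node#*:}      # part after the colon
  echo "etcdctl --data-dir=/var/lib/etcd snapshot restore $SNAP --name $name --initial-cluster $CLUSTER --initial-advertise-peer-urls http://$ip:2380"
done
```

Each printed line is then run on the matching node (not all on one machine).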

3. Backing up the data directory

#v3.3
etcdctl --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem backup --data-dir /var/lib/etcd --backup-dir /tmp/etcd

--data-dir: location of the data directory
--backup-dir: location to write the backup