
I. Plugins

1. Krew

1.1 Introduction

Krew is a tool that makes it easy to use kubectl plugins. Krew helps you discover plugins, install and manage them on your machine. It is similar to tools like apt, dnf or brew. Today, over 130 kubectl plugins are available on Krew.

  • For kubectl users: Krew helps you find, install and manage kubectl plugins in a consistent way.
  • For plugin developers: Krew helps you package and distribute your plugins on multiple platforms and makes them discoverable.
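Under the hood, a kubectl plugin is nothing more than an executable on PATH whose name starts with `kubectl-`; running `kubectl hello` dispatches to a file named `kubectl-hello`. A minimal sketch (the plugin name `hello` is made up for illustration):

```shell
#!/usr/bin/env bash
# A kubectl plugin is just an executable named kubectl-<name> on PATH.
set -euo pipefail

dir=$(mktemp -d)
cat > "${dir}/kubectl-hello" <<'EOF'
#!/usr/bin/env bash
echo "hello from a kubectl plugin"
EOF
chmod +x "${dir}/kubectl-hello"

# If ${dir} were added to PATH, `kubectl hello` would run this file.
# Invoke it directly to show it is an ordinary executable:
"${dir}/kubectl-hello"

rm -rf "${dir}"
```

Krew automates exactly this: it downloads the plugin binary and places it under `~/.krew/bin` with the `kubectl-<name>` naming convention.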

1.2 Deploying Krew


  • Download

```bash
wget https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_amd64.tar.gz

wget https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew.yaml
```
  • Install

```bash
$ tempDir=$(mktemp -d)
$ tar zxvf krew-linux_amd64.tar.gz -C ${tempDir}
[root@kube-master-01 ~]# ${tempDir}/krew-linux_amd64 install --manifest=krew.yaml --archive=krew-linux_amd64.tar.gz
Installing plugin: krew
Installed plugin: krew
\
 | Use this plugin:
 | 	kubectl krew
 | Documentation:
 | 	https://krew.sigs.k8s.io/
 | Caveats:
 | \
 |  | krew is now installed! To start using kubectl plugins, you need to add
 |  | krew's installation directory to your PATH:
 |  |
 |  |   * macOS/Linux:
 |  |     - Add the following to your ~/.bashrc or ~/.zshrc:
 |  |         export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
 |  |     - Restart your shell.
 |  |
 |  |   * Windows: Add %USERPROFILE%\.krew\bin to your PATH environment variable
 |  |
 |  | To list krew commands and to get help, run:
 |  |   $ kubectl krew
 |  | For a full list of available plugins, run:
 |  |   $ kubectl krew search
 |  |
 |  | You can find documentation at
 |  |   https://krew.sigs.k8s.io/docs/user-guide/quickstart/.
 | /
/
```
  • Clean up the temporary directory

```bash
rm -rf ${tempDir}
```
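As a hardening tweak (my own sketch, not part of the original steps), the temp-dir workflow can register a trap so the directory is removed even if an intermediate command fails:

```shell
#!/usr/bin/env bash
set -euo pipefail

tempDir=$(mktemp -d)
# The trap fires on any exit path, so cleanup never gets skipped.
trap 'rm -rf "${tempDir}"' EXIT

# work inside ${tempDir}, e.g.:
#   tar zxvf krew-linux_amd64.tar.gz -C "${tempDir}"
echo "working in ${tempDir}"
```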
  • Set the PATH environment variable

```bash
# Temporary (current shell only)
$ export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

# Permanent
$ echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
```
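The `${KREW_ROOT:-$HOME/.krew}` form is shell default-value expansion: it yields `$KREW_ROOT` when that variable is set and non-empty, and falls back to `$HOME/.krew` otherwise, so the same export line works whether or not you relocate the krew root. A quick demonstration:

```shell
#!/usr/bin/env bash

unset KREW_ROOT
echo "${KREW_ROOT:-$HOME/.krew}/bin"   # falls back to $HOME/.krew/bin

export KREW_ROOT=/opt/krew
echo "${KREW_ROOT:-$HOME/.krew}/bin"   # prints /opt/krew/bin
```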
  • Verify

```bash
[root@kube-master-01 ~]# kubectl krew version
OPTION            VALUE
GitTag            v0.4.4
GitCommit         343e657
IndexURI          https://github.com/kubernetes-sigs/krew-index.git
BasePath          /root/.krew
IndexPath         /root/.krew/index/default
InstallPath       /root/.krew/store
BinPath           /root/.krew/bin
DetectedPlatform  linux/amd64
```

II. Common Plugins

https://kubernetes.io/zh-cn/docs/tasks/extend-kubectl/kubectl-plugins/

1. tree

This plugin was developed by an engineer at Google. It uses ownerReferences to discover how Kubernetes objects relate to one another and renders the relationships as a tree, making resource hierarchies clear at a glance.

Install

```bash
$ kubectl krew install tree
```

Usage

```bash
$ kubectl tree --help

[root@kube-master-01 containerd]# kubectl tree deployment demoapp
NAMESPACE  NAME                                READY  REASON  AGE
default    Deployment/demoapp                  -              6d4h
default    └─ReplicaSet/demoapp-7cc687d895   -              6d4h
default      ├─Pod/demoapp-7cc687d895-8xltm  True           6d4h
default      └─Pod/demoapp-7cc687d895-ptr24  True           6d4h
```
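The ownerReferences field that tree walks lives in each object's metadata. For a Pod managed by the ReplicaSet above, it looks roughly like this (a hand-written sketch; the uid is elided):

```yaml
metadata:
  name: demoapp-7cc687d895-8xltm
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: demoapp-7cc687d895
    uid: ...            # set by the API server
    controller: true
    blockOwnerDeletion: true
```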

2. status

The kubectl-status plugin streamlines the get and describe workflows, using colors, arrows, and other visual cues to present the lifecycle and status of Kubernetes resources. It can inspect a single resource or every resource of that kind in a namespace, which greatly shortens troubleshooting time and cuts down the number of steps involved.

Install

```bash
kubectl krew install status
```

Usage

```bash
kubectl status --help
```
  • View pods

```bash
# inspect a specific pod
[root@kube-master-01 containerd]# kubectl status pod demoapp-7cc687d895-8xltm

Pod/demoapp-7cc687d895-8xltm -n default, created 6d ago by ReplicaSet/demoapp-7cc687d895 Running BestEffort
  Current: Pod is Ready
  Managed by demoapp application
  PodScheduled -> Initialized -> ContainersReady -> Ready for 4h
  Containers:
    demoapp (registry.cn-zhangjiakou.aliyuncs.com/hsuing/demoapp:v1) Running for 4h and Ready, restarted 4 times
      usage cpu usage:0.007/[no-req, no-lim], mem usage:28.9MB/[no-req, no-lim]
      previously: Started 1d ago and Unknown after 1d with exit code exit with 255
  Known/recorded manage events:
    6d ago Updated by kube-controller-manager (metadata, spec)
    4h ago Updated by calico (metadata)
    4h ago Updated by kubelet (status)
  Services matching this pod:
    Service/demoapp-svc -n default, created 6d ago, last endpoint change was 4h ago
      Current: Service is ready
      Managed by demoapp-svc application
      Ready: Pod/demoapp-7cc687d895-ptr24 -n default on Node/kube-node-01, 172.18.10.106:80/TCP (80-80)
      Ready: Pod/demoapp-7cc687d895-8xltm -n default on Node/kube-node-02, 172.25.241.15:80/TCP (80-80)
      Known/recorded manage events:
        6d ago Updated by kubectl-client-side-apply (metadata, spec)
      Ingresses matching this Service:
        Ingress/demoapp -n default, created 6d ago, gen:1
          Current: Resource is current LoadBalancer:10.103.236.70
          Service/demoapp-svc:80 has 2 endpoints.
          Known/recorded manage events:
            6d ago Updated by kubectl-client-side-apply (metadata, spec)
            6d ago Updated by nginx-ingress-controller (status)
```
```bash
# list all pods
kubectl status pod
```

3. view-allocations

kubectl-view-allocations makes it easy to see how CPU, memory, GPU, and other resources are allocated, with breakdowns by namespace, node, pod, and other groupings.

Install

```bash
$ kubectl krew install view-allocations

$ kubectl-view_allocations --help
```

Usage

  • CPU by node

```bash
[root@kube-master-01 containerd]# kubectl-view_allocations -g node -r cpu
 Resource               Requested         Limit  Allocatable    Free
  cpu                   (25%) 1.4   (7%) 400.0m          5.5     4.2
  ├─ kube-master-01  (37%) 550.0m   (7%) 100.0m          1.5  950.0m
  ├─ kube-node-01    (18%) 350.0m            __          2.0     1.6
  └─ kube-node-02    (22%) 450.0m  (15%) 300.0m          2.0     1.6
```
  • CPU by pod

```bash
[root@kube-master-01 containerd]# kubectl-view_allocations -g pod -r cpu
 Resource                                       Requested        Limit  Allocatable  Free
  cpu                                           (25%) 1.4  (7%) 400.0m          5.5   4.2
  +- calico-node-2nc7p                             250.0m           __           __    __
  +- calico-node-8qw5w                             250.0m           __           __    __
  +- calico-node-sn9xm                             250.0m           __           __    __
  +- coredns-9867fb84c-257q7                       100.0m           __           __    __
  +- grafana-core-596b4d8c57-p8xgj                 100.0m       300.0m           __    __
  +- ingress-nginx-controller-5b88878bf4-swz6f     100.0m           __           __    __
  +- ingress-nginx-controller-5b88878bf4-z2spz     100.0m           __           __    __
  +- metrics-server-5559744ff8-cns4t               100.0m           __           __    __
  +- openelb-manager-75cd76c58b-7q6lm              100.0m       100.0m           __    __
```
  • By namespace

```bash
[root@kube-master-01 containerd]# kubectl-view_allocations -g namespace
 Resource              Requested        Limit  Allocatable   Free
  cpu                  (25%) 1.4  (7%) 400.0m          5.5    4.2
  ├─ ingress-nginx        200.0m           __           __     __
  ├─ kube-system          950.0m           __           __     __
  ├─ monitor              100.0m       300.0m           __     __
  └─ openelb-system       100.0m       100.0m           __     __
  ephemeral-storage           __           __       102.0G     __
  memory             (11%) 1.0Gi  (16%) 1.5Gi        9.1Gi  7.6Gi
  ├─ ingress-nginx       180.0Mi           __           __     __
  ├─ kube-system         270.0Mi      170.0Mi           __     __
  ├─ monitor             500.0Mi        1.0Gi           __     __
  └─ openelb-system      100.0Mi      300.0Mi           __     __
  pods                 (5%) 17.0    (5%) 17.0        330.0  313.0
  ├─ default                 4.0          4.0           __     __
  ├─ ingress-nginx           2.0          2.0           __     __
  ├─ kube-system             6.0          6.0           __     __
  ├─ monitor                 1.0          1.0           __     __
  └─ openelb-system          4.0          4.0           __     __
```

4. images

kubectl-images shows the images currently in use in the cluster and provides a simple per-namespace summary. It makes it very convenient to see which images a namespace is running, which is especially useful when troubleshooting requires checking image versions.

Install

```bash
$ kubectl krew install images

$ kubectl images --help
```

Usage

```bash
[root@kube-master-01 containerd]# kubectl images -n monitor
[Summary]: 1 namespaces, 1 pods, 1 containers and 1 different images
+-------------------------------+--------------+---------------------------------------------------------+
|              Pod              |  Container   |                          Image                          |
+-------------------------------+--------------+---------------------------------------------------------+
| grafana-core-596b4d8c57-p8xgj | grafana-core | registry.cn-zhangjiakou.aliyuncs.com/hsuing/grafana:v11 |
+-------------------------------+--------------+---------------------------------------------------------+
```