
1. Deploying Ingress with Helm

Deploy ingress-nginx with two replicas and use pod anti-affinity to schedule the two pods onto different nodes.

Alternatively, deploy it as a DaemonSet.

  • Environment

kubernetes-1.29.x

containerd-1.7

ingress-nginx-4.11.2

1.0 Three nginx-ingress deployment modes

Deployment+LoadBalancer

nginx-ingress-controller is deployed as a Deployment, and a Service of type: LoadBalancer is created to front that group of controller pods. This is typically used on a public cloud, where a load balancer is provisioned and bound to a public IP; point DNS records at that address and the cluster's services become reachable from outside.

Deployment+NodePort

nginx-ingress-controller is deployed as a Deployment, and a Service of type: NodePort is created to front that group of controller pods. The ingress is exposed on a specific port of each cluster node's IP. Because the NodePort is a randomly assigned port, a load balancer is usually placed in front to forward requests. This approach is generally used when the hosts are relatively fixed and their IP addresses do not change.

DaemonSet+HostNetwork

ingress-controller is deployed as a DaemonSet combined with a nodeSelector so it lands on specific nodes, and hostNetwork puts the pod directly on the host node's network, so the service is reachable on the host's ports 80/443. This gives the simplest request path and better performance than the NodePort mode. The drawback is that, because it uses the host's network and ports directly, only one ingress-controller pod can run per node. It is well suited to high-concurrency production environments.
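As a rough illustration of the DaemonSet+HostNetwork mode, a minimal sketch of the chart values might look like the following (the ingress: "true" node label is a hypothetical placeholder for whatever label marks your edge nodes):

yaml
controller:
  kind: DaemonSet                     # one controller pod per selected node
  hostNetwork: true                   # bind directly to the host's 80/443
  dnsPolicy: ClusterFirstWithHostNet  # still resolve cluster-internal names
  nodeSelector:
    ingress: "true"                   # hypothetical label on the edge nodes
  service:
    enabled: false                    # no Service is strictly needed with hostNetwork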

1.1 部署ingress

1. Download the chart package

Version compatibility table: https://github.com/kubernetes/ingress-nginx?tab=readme-ov-file#supported-versions-table

bash
# Add the official repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Add the Microsoft (Azure China) mirror (inside China, either of the two mirrors below works)
helm repo add stable http://mirror.azure.cn/kubernetes/charts

# Add the Alibaba Cloud mirror
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts



# Update repos
helm repo update

# List configured repos
helm repo list

# Search for the chart
helm search repo ingress-nginx/ingress-nginx

# Show chart info
helm show chart ingress-nginx/ingress-nginx --version 4.11.3

# Pull a specific version
helm pull ingress-nginx/ingress-nginx --untar --untardir /root/helm_project/ingress-nginx --version 4.11.3


# Online install
helm install ingress-nginx ingress-nginx/ingress-nginx --version 4.11.3 -f ingress-nginx-value.yaml -n ingress-nginx
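Before writing an override file, it can help to dump the chart's default values so you know which keys are available (the output file name here is just an example):

bash
# Dump the chart's default values for reference
helm show values ingress-nginx/ingress-nginx --version 4.11.3 > /root/helm_project/ingress-nginx/default-values.yaml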

2. Create custom values

yaml
vim /root/helm_project/ingress-nginx/ingress-nginx-value.yaml
controller:
  kind: Deployment # workload type
  # replica count
  replicaCount: 2
  # enable SSL passthrough
  extraArgs:
    enable-ssl-passthrough: "true"

  hostNetwork: true # use the host network
  dnsPolicy: ClusterFirstWithHostNet  # prefer in-cluster DNS resolution


  image:
    registry: registry.cn-zhangjiakou.aliyuncs.com
    image: hsuing/ingress-nginx
    tag: "v1.11.3"
    digest: ''
    pullPolicy: IfNotPresent

  # pod anti-affinity: keep controller pods on different nodes
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname

  # node selector
  nodeSelector:
    ingress-nginx: ingress-controller

  # Service configuration
  service:
    type: LoadBalancer
    externalTrafficPolicy: Cluster

  # metrics collection
  metrics:
    enabled: true
    port: 10254

  # graceful shutdown
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown

  admissionWebhooks:
    enabled: true
    patch:
      enabled: true
      image:
        registry: registry.cn-zhangjiakou.aliyuncs.com
        image: hsuing/kube-webhook-certgen
        tag: 'v1.4.4'
        digest: ''
        pullPolicy: IfNotPresent
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists
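Before installing, you can optionally lint the chart and render the templates locally to catch indentation or key errors in the values file (adjust the chart path to wherever helm pull untarred it):

bash
# Validate the chart plus values file without touching the cluster
helm lint /root/helm_project/ingress-nginx -f /root/helm_project/ingress-nginx/ingress-nginx-value.yaml
helm template ingress-nginx /root/helm_project/ingress-nginx -f /root/helm_project/ingress-nginx/ingress-nginx-value.yaml -n ingress-nginx | less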

Explanation:

https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/

1. controller

  • kind: Deployment
    • Sets the Ingress NGINX Controller workload type to Deployment, which manages and upgrades the controller Pods
  • replicaCount: 2
    • Runs 2 controller Pod replicas for high availability; the cluster runs 2 Ingress NGINX Controller instances spread across different nodes so the service stays available

2. affinity

  • podAntiAffinity
    • Ensures that Ingress NGINX Controller Pods are not scheduled onto the same node, giving better fault tolerance
    • requiredDuringSchedulingIgnoredDuringExecution
      • A hard scheduling rule that must be satisfied at scheduling time; it guarantees the Pods do not end up on the same node
    • labelSelector
      • Selects Pods carrying a specific label; here it matches app.kubernetes.io/component=controller, so controller Pods with that label are never co-scheduled onto one node
    • topologyKey: "kubernetes.io/hostname"
      • Evaluates the anti-affinity per hostname, i.e. the Pods will not be placed on the same host, keeping the instances spread across nodes

3. service

  • type: LoadBalancer
    • Sets the Ingress NGINX Service type to LoadBalancer; the cluster provisions an external load balancer for the controller so that external traffic can reach services inside the cluster

4. admissionWebhooks

  • enabled: true
    • Enables Admission Webhooks, a Kubernetes mechanism that validates and mutates resources (such as Pods) as they are created, updated, or deleted; the Ingress NGINX Controller webhooks mainly automate certificate and configuration management
    • patch
      • The settings under this field configure scheduling for the Admission Webhooks patch Pod
      • tolerations
        • key: node-role.kubernetes.io/control-plane tolerates the control-plane taint, so the Pod can still be scheduled onto control-plane nodes despite their NoSchedule taint
      • affinity
        • requiredDuringSchedulingIgnoredDuringExecution
          • A hard scheduling rule requiring the Admission Webhooks patch Pod to run on nodes labeled node-role.kubernetes.io/control-plane

This values file aims for high availability and stability of the Ingress NGINX Controller through multiple replicas, anti-affinity scheduling, and a cloud load balancer; the Admission Webhooks configuration automates the management of sensitive resources in the cluster.

3. Label the nodes

bash
 kubectl label node kube-node-01 ingress-nginx=ingress-controller
 
 kubectl label node kube-node-02 ingress-nginx=ingress-controller
 
 kubectl label node kube-master-01 node-role.kubernetes.io/control-plane=ingress-nginx
 # Verify the labels
 kubectl get nodes --show-labels
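To confirm the labels landed on the intended nodes, filter by the label:

bash
kubectl get nodes -l ingress-nginx=ingress-controller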

4. Deploy

bash
# Create the namespace
kubectl create ns ingress-nginx

helm install  ingress-nginx -f /root/helm_project/ingress-nginx/ingress-nginx-value.yaml /root/helm_project/ingress-nginx -ningress-nginx


# If you change any values later, upgrade the release
helm upgrade  ingress-nginx -f /root/helm_project/ingress-nginx/ingress-nginx-value.yaml /root/helm_project/ingress-nginx -ningress-nginx
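After the install or upgrade, the release itself can be checked with:

bash
helm list -n ingress-nginx
helm status ingress-nginx -n ingress-nginx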
  • View the values
bash
helm get values ingress-nginx -a -ningress-nginx

Note: openelb 0.6 had installation problems; nginx could not obtain an IP.

  • Check the ingress status
bash
[root@kube-master-01 helm_project]# kg all -ningress-nginx
NAME                                            READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-5b88878bf4-swz6f   1/1     Running   0          35m
pod/ingress-nginx-controller-5b88878bf4-z2spz   1/1     Running   0          35m

NAME                                         TYPE           CLUSTER-IP        EXTERNAL-IP     PORT(S)     AGE
service/ingress-nginx-controller             LoadBalancer   192.168.142.60    10.103.236.70   80:32057/TCP,443:58699/TCP   35m
service/ingress-nginx-controller-admission   ClusterIP      192.168.105.131   <none>          443/TCP     35m
service/ingress-nginx-controller-metrics     ClusterIP      192.168.12.140    <none>          10254/TCP     35m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   2/2     2            2           35m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-5b88878bf4   2         2         2       35m
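To confirm the anti-affinity did what was intended, list the controller pods with their nodes; the two replicas should be running on different nodes:

bash
kubectl get pods -n ingress-nginx -o wide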

1.2 Verify ingress

1. Create the YAML file

bash
kubectl create deployment demoapp --image=registry.cn-zhangjiakou.aliyuncs.com/hsuing/demoapp:v1 --replicas=2  --dry-run=client -oyaml > 1.demoapp-deployment.yaml

kubectl apply -f  1.demoapp-deployment.yaml

[root@kube-master-01 helm_project]#  kubectl get pods  -o wide --show-labels -l app=demoapp
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE           NOMINATED NODE   READINESS GATES   LABELS
demoapp-7cc687d895-8xltm   1/1     Running   0          113s   172.25.241.5   kube-node-02   <none>           <none>          app=demoapp,pod-template-hash=7cc687d895
demoapp-7cc687d895-ptr24   1/1     Running   0          113s   172.18.10.94   kube-node-01   <none>           <none>          app=demoapp,pod-template-hash=7cc687d895

2. Create the Service object

bash
kubectl create service clusterip demoapp-svc --tcp=80:80 --dry-run=client -oyaml > 2.demoapp-service.yaml

kubectl apply -f 2.demoapp-service.yaml

# View the svc
[root@kube-master-01 helm_project]# kg svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
demoapp-svc   ClusterIP   192.168.1.234    <none>        80/TCP    45s

# Access the service
[root@kube-master-01 helm_project]# curl 192.168.1.234
hsuing demoapp v1.1 !! ClientIP: 172.18.201.64, PodName: demoapp-7cc687d895-8xltm, PodIP: 172.25.241.5!
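One thing to watch: kubectl create service clusterip demoapp-svc generates selector "app: demoapp-svc", which does not match the Deployment's app=demoapp pod label, so the selector in 2.demoapp-service.yaml may need to be edited to "app: demoapp" before applying. Whether the wiring is correct can be confirmed from the endpoints:

bash
# An empty ENDPOINTS column means the Service selector matches no pods
kubectl get endpoints demoapp-svc
kubectl describe svc demoapp-svc | grep -i selector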

3. Create the ingress

Create an ingress for demoapp-svc.

bash
kubectl create ingress demoapp --rule='demoapp.han.net/*'=demoapp-svc:80 --class=nginx --dry-run=client -o yaml > demoapp-ingress.yaml

# Create it
kubectl apply -f demoapp-ingress.yaml


# View the ingress; after a few seconds it automatically picks up an IP from OpenELB
[root@kube-master-01 helm_project]# kubectl get ingress
NAME      CLASS   HOSTS             ADDRESS         PORTS   AGE
demoapp   nginx   demoapp.han.net   10.103.236.70   80      9s
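To test the ingress before real DNS exists, either add a hosts entry or pass the Host header against the EXTERNAL-IP shown above:

bash
# Option 1: local hosts entry
echo "10.103.236.70 demoapp.han.net" >> /etc/hosts
curl http://demoapp.han.net/

# Option 2: no hosts change, send the Host header directly
curl -H "Host: demoapp.han.net" http://10.103.236.70/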
From the old Pod failing to the new Pod running again took about 6 minutes.
How long this takes depends on the node health-check period, the node-failure confirmation time (--node-monitor-grace-period),
and the Pod eviction timeout (--pod-eviction-timeout).
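If faster failover is needed, one knob (a sketch, not part of the chart values above; the 60-second values are illustrative) is to shorten the default 300-second not-ready/unreachable tolerations in the workload's Pod template, so Pods on a dead node are evicted sooner:

yaml
# Pod-template tolerations that shorten eviction from a failed node
tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60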

1.3 Modify the ingress-nginx configuration

https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap

http://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile

http://nginx.org/en/docs/ngx_core_module.html#worker_processes

yaml
vim /root/helm_project/ingress-nginx/ingress-nginx-value.yaml
controller:
  kind: Deployment # workload type
  config:
    # allow snippet annotations
    allow-snippet-annotations: "true"
    keep-alive-requests: "10000"
    upstream-keepalive-connections: "200"
    max-worker-connections: "65536"
    max-worker-open-files: "655350"
    worker-cpu-affinity: "auto"
    worker-processes: "auto"
    server-tokens: "false"
    # globally disable the 308 redirect
    ssl-redirect: "false"
    server-name-hash-bucket-size: "128"
    client-body-timeout: "60"
    client-header-buffer-size: "8k"
    large-client-header-buffers: "4 32k"
    proxy-body-size: "256m"
    client-body-buffer-size: "128k"
    http2-max-concurrent-streams: "64"
    http2-max-field-size: "16k"
    keep-alive: "30"
    proxy-connect-timeout: "15"
    proxy-send-timeout: "60"
    proxy-read-timeout: "60"
    proxy-buffer-size: "128k"
    proxy-buffers-number: "32"
    proxy-headers-hash-max-size: "51200"
    proxy-headers-hash-bucket-size: "6400"
    proxy-next-upstream: "error timeout http_500 http_502 http_503 http_504"
    use-gzip: "true"
    gzip-min-length: "1000"
    gzip-level: "1"
    gzip-types: "text/plain application/x-javascript text/css application/xml application/json"
    ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2"
    ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:AES256+EDH"
    ssl-buffer-size: "16k"
    ssl-session-cache: "true"
    ssl-session-timeout: "5m"
    ssl-dh-param: "/etc/nginx/pem/dhparam.pem"
    log-format-upstream:
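These config entries end up in the controller's ConfigMap. After changing them, run helm upgrade and confirm the settings reached the rendered nginx.conf (replace <controller-pod-name> with one of the pods from kubectl get pods -n ingress-nginx):

bash
# Apply the updated values
helm upgrade ingress-nginx -f /root/helm_project/ingress-nginx/ingress-nginx-value.yaml /root/helm_project/ingress-nginx -n ingress-nginx

# Inspect the generated ConfigMap and the rendered nginx.conf
kubectl get configmap ingress-nginx-controller -n ingress-nginx -o yaml
kubectl exec -n ingress-nginx <controller-pod-name> -- grep -E 'worker_processes|worker_connections' /etc/nginx/nginx.conf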