
Installing a single-master Kubernetes cluster with kubeadm

This article explains how to install a single-master Kubernetes cluster with kubeadm, walking through the detailed steps with concrete commands and related tips.


Contents

    • Installing a single-master Kubernetes cluster with kubeadm
      • 1. Requirements
      • 2. Check CentOS / hostname
      • 3. Check the network
      • 4. Install Docker and kubelet
        • 4.1. Configure a Docker registry mirror
        • 4.2. Create the component install script
      • 5. Initialize the master node
        • 5.1. Install the Calico network plugin
        • 5.2. Create the master init script
      • 6. Initialize worker nodes
        • 6.1. Get the join command
        • 6.2. Initialize workers
        • 6.3. Common failure causes
        • 6.4. Remove a worker node and retry
        • 6.5. Check the result
      • 7. Install an Ingress Controller

Installing a single-master Kubernetes cluster with kubeadm

1. Requirements

For Kubernetes beginners building a K8S cluster, the following configuration on Alibaba Cloud or Tencent Cloud is recommended (your own VMs, a private cloud, or whatever Linux environment is easiest to obtain also works):

At least two servers with 2 CPU cores and 4 GB of memory
CentOS 7.6
The software versions installed will be:

Kubernetes v1.18.x
calico 3.13.1
nginx-ingress 1.5.5
Docker 19.03.8
The resulting topology:
[Topology diagram from the original post: one master node plus worker nodes]

2. Check CentOS / hostname

# Run on both master and worker nodes
cat /etc/redhat-release

# The hostname output below will be this machine's node name in the Kubernetes cluster
# localhost must not be used as a node name
hostname

# Use lscpu to verify the CPU information
# Architecture: x86_64    this guide does not support the arm architecture
# CPU(s):       2         the number of CPU cores must not be lower than 2
lscpu
  • OS compatibility
    [OS compatibility table from the original post; CentOS 7.6, 7.7, and 7.8 are supported]
  • Modify the hostname
    If you need to change the hostname, run the following:
# Change the hostname
hostnamectl set-hostname your-new-host-name
# Check the result
hostnamectl status

# Add entries like these to the /etc/hosts file
172.16.106.200 k8s-master
172.16.106.227 k8s-node-1
172.16.106.226 k8s-node-2

# Quickly copy the file to the other hosts
scp /etc/hosts root@172.16.106.227:/etc/
scp /etc/hosts root@172.16.106.226:/etc/
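
After updating /etc/hosts, it is worth confirming that every node can resolve and reach the other nodes by name. A minimal check, assuming the three hostnames above:

# Run on each node: every name should resolve to the expected IP and answer a ping
for h in k8s-master k8s-node-1 k8s-node-2; do
  getent hosts $h
  ping -c 1 -W 2 $h
done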

3. Check the network

  • Check that all nodes can reach each other over the network
[root@k8s-master ~]$ ip route show
default via 172.16.106.200 dev eth0 proto dhcp metric 100
169.254.169.254 via 172.16.106.230 dev eth0 proto dhcp metric 100
172.16.106.0/24 dev eth0 proto kernel scope link src 172.16.106.217 metric 100
192.168.124.0/24 dev virbr0 proto kernel scope link src 192.168.124.1

$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:48:21:8a brd ff:ff:ff:ff:ff:ff
    inet 172.16.106.200/24 brd 172.16.106.255 scope global noprefixroute dynamic eth0
       valid_lft 79399sec preferred_lft 79399sec
    inet6 fe80::f816:3eff:fe48:218a/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:e2:1c:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:e2:1c:cc brd ff:ff:ff:ff:ff:ff

The IP address used by kubelet:

In the output of ip route show, you can identify the machine's default network interface, usually eth0, as in default via 172.21.0.23 dev eth0
In the output of ip address, you can find the IP address bound to that default interface; Kubernetes will use this address to communicate with the other nodes in the cluster, e.g. 172.17.216.80
The IP addresses Kubernetes uses on all nodes must be mutually reachable (no NAT mapping, and no security-group or firewall isolation)
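
To see exactly which address this is on a given node, the interface from the default route can be fed into ip address; a small sketch, assuming a single default route (as the checklist in section 4 requires):

# Identify the default interface (e.g. eth0)
DEFAULT_IF=$(ip route show default | awk '{print $5; exit}')
# Print its primary IPv4 address (with prefix length); this is the IP kubelet will use
ip -4 address show dev ${DEFAULT_IF} | awk '/inet / {print $2; exit}'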

4. Install Docker and kubelet

Note: perform the following steps on both master and worker nodes.

  • Carefully verify the following items; all 7 must be satisfied before the installation can proceed normally. (A script that spot-checks the first three is sketched after this list.)
  1. Every node runs CentOS 7.6, 7.7, or 7.8
  2. Every node has at least 2 CPU cores and at least 4 GB of memory
  3. No node's hostname is localhost, and none contains underscores, dots, or uppercase letters
  4. Every node has a fixed private IP address
  5. Every node has exactly one network interface; if you need more for special purposes, add them after the K8S installation is complete
  6. The IP addresses kubelet uses on all nodes are mutually reachable (no NAT mapping required) and are not isolated by firewalls or security groups
  7. No node will run containers directly with docker run or docker-compose
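
The following sketch spot-checks the first three items mechanically; the exact thresholds (such as the memory value, which allows for kernel-reserved memory) are illustrative assumptions, not part of the original guide:

# Spot-check OS release, CPU count, memory, and hostname on a node
cat /etc/redhat-release
[ "$(nproc)" -ge 2 ] && echo "CPU cores: OK" || echo "CPU cores: need at least 2"
# MemTotal is reported in kB; ~3.5 GB accounts for memory reserved by the kernel
[ "$(awk '/MemTotal/ {print $2}' /proc/meminfo)" -ge 3500000 ] && echo "memory: OK" || echo "memory: need at least 4G"
h="$(hostname)"
if [ "$h" != "localhost" ] && echo "$h" | grep -qE '^[a-z0-9-]+$'; then echo "hostname: OK"; else echo "hostname: violates item 3"; fi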

4.1. Configure a Docker registry mirror

Run the following as root on all nodes to install the software:

  1. docker
  2. nfs-utils
  3. kubectl / kubeadm / kubelet

kubelet is the core component that runs on every node in the cluster; it performs operations such as starting pods and containers.
kubeadm is the command-line tool that bootstraps a k8s cluster; it is used to initialize the cluster.
kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.

  • Pick whichever docker hub mirror suits your network, and run the code below manually
  • Change the docker registry mirror on both master and worker nodes
# Run on both master and worker nodes
# Tencent Cloud docker hub mirror
# export REGISTRY_MIRROR="https://mirror.ccs.tencentyun.com"
# DaoCloud mirror
# export REGISTRY_MIRROR="http://f1361db2.m.daocloud.io"
# Huawei Cloud mirror
# export REGISTRY_MIRROR="https://05f073ad3c0010ea0f4bc00b7105ec20.mirror.swr.myhuaweicloud.com"
# Alibaba Cloud docker hub mirror
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
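
Note that the install script in the next section does not itself consume REGISTRY_MIRROR. A minimal way to apply the mirror, assuming Docker's standard /etc/docker/daemon.json mechanism, is the following, run after Docker has been installed:

# Write the registry mirror into Docker's daemon config, then restart Docker
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["${REGISTRY_MIRROR}"]
}
EOF
systemctl daemon-reload
systemctl restart docker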

4.2. Create the component install script

  • Create a script file named install_kubelet.sh and copy the content below into it
#!/bin/bash

# Run on both master and worker nodes

# Install docker
# Reference documentation:
# https://docs.docker.com/install/linux/docker-ce/centos/
# https://docs.docker.com/install/linux/linux-postinstall/

# Remove old versions
yum remove -y docker docker-client docker-client-latest docker-ce-cli docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine

# Set up the yum repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start docker
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8 containerd.io
systemctl enable docker
systemctl start docker

# Install nfs-utils
# nfs-utils must be installed before NFS network storage can be mounted
yum install -y nfs-utils
yum install -y wget

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab

# Modify /etc/sysctl.conf
# If the settings already exist, update them
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
# If they do not exist yet, append them
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
# Apply the settings
sysctl -p

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, kubectl
# ${1} is replaced with the Kubernetes version number, e.g. 1.17.2
yum install -y kubelet-${1} kubeadm-${1} kubectl-${1}

# Change the docker Cgroup Driver to systemd
# i.e. change this line in the /usr/lib/systemd/system/docker.service file:
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# to:
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, adding a worker node may fail with:
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Restart docker and start kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version
  • Run the install_kubelet.sh script to install the components
    Run it on both master and worker nodes:
# The final argument, 1.18.5, specifies the Kubernetes version; any 1.18.x version is supported
sh install_kubelet.sh 1.18.5
  • Verify the installation output:
Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:27:04 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:25:42 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
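
The script ends by printing only the Docker version; to confirm the Kubernetes components landed at the expected version as well, a quick check (not part of the original script) is:

# Verify the installed component versions
kubelet --version
kubeadm version -o short
kubectl version --client --short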

5. Initialize the master node

  • Note: initialize the master only on the master node

  • About the environment variables used during initialization:

    1. APISERVER_NAME must not be the master's hostname
    2. APISERVER_NAME may contain only lowercase letters, digits, and dots; it must not contain hyphens
    3. The POD_SUBNET network segment must not overlap with the segment the master/worker nodes are on. The value is a CIDR; if you are not yet familiar with the CIDR concept, run export POD_SUBNET=10.100.0.1/16 as-is, without modification
# Replace x.x.x.x with the master node's actual IP (use the private IP)
# export only takes effect in the current shell session; if you open a new shell
# window to continue the installation, re-run these export commands
export MASTER_IP=x.x.x.x
# Replace apiserver.demo with the dnsName you want
export APISERVER_NAME=apiserver.7d.com
# The network segment for Kubernetes pods; it is created by Kubernetes during
# installation and does not need to exist in your physical network beforehand
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

5.1. Install the Calico network plugin

For a Kubernetes cluster to work, a pod network must be installed; otherwise pods cannot communicate with each other.
Kubernetes supports several network solutions; here we use Calico.

  • There are two ways to obtain the Calico manifest: download it from the network drive, or run the command below to create the file.
  • Option 1: network drive download

Link: https://pan.baidu.com/s/1w4ShwVvW3Ixl5ebtfxfybA   Extraction code: sqru

  • Option 2: create it with a script

  • Create the calico-3.13.1.yaml file with cat

cat > calico-3.13.1.yaml <<EOF
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset

---
---
# Source: calico/templates/rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.13.1
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.13.1
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.13.1
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.13.1
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within \`--cluster-cidr\`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so \`kubectl logs\` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml

# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.13.1
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml
EOF
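
After the init script in section 5.2 applies this manifest, you can watch the Calico components come up; a simple check on the master, using the labels defined in the manifest above:

# Run on the master node once kubeadm init has completed
kubectl get daemonset calico-node -n kube-system
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
kubectl get deployment calico-kube-controllers -n kube-system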

5.2. Create the master init script

  • Create an init_master.sh script and copy the content below into it
#!/bin/bash

# Run only on the master node

# Abort the script on error
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mMake sure the environment variables POD_SUBNET and APISERVER_NAME are set\033[0m"
  echo current POD_SUBNET=$POD_SUBNET
  echo current APISERVER_NAME=$APISERVER_NAME
  exit 1
fi

# Full configuration options: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v${1}
imageRepository: registry.aliyuncs.com/k8sxio
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on your network speed, this takes 3-10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Installing calico-3.13.1"
kubectl apply -f calico-3.13.1.yaml
  • Run the init script:
sh init_master.sh 1.18.5

If initialization fails:

  • Make sure your environment satisfies every checklist item in section 4, Install Docker and kubelet

  • Make sure you run the initialization command as the root user

  • The Kubernetes docker images cannot be downloaded

    • This guide uses Alibaba Cloud's docker registry by default; occasionally that registry fails
    • If images cannot be downloaded, try initializing manually and change the manual init script to use:
      imageRepository: gcr.azk8s.cn/google-containers
  • Check the environment variables by running the following command:
    echo MASTER_IP=${MASTER_IP} && echo APISERVER_NAME=${APISERVER_NAME} && echo POD_SUBNET=${POD_SUBNET}

  • Verify the following:

    • The MASTER_IP environment variable should be the master node's private IP; if it is not, re-export it
    • APISERVER_NAME must not be the master's hostname
    • APISERVER_NAME may contain only lowercase letters, digits, and dots; it must not contain hyphens
    • The POD_SUBNET network segment must not overlap with the segment the master/worker nodes are on. The value is a CIDR; if CIDR notation is unfamiliar, run export POD_SUBNET=10.100.0.1/16 as-is, without modification
  • Before re-initializing the master node, run kubeadm reset -f first

  • Check the master initialization result

# Run only on the master node

# Run the following command and wait 3-10 minutes, until every pod is in the Running state
watch kubectl get pod -n kube-system -o wide

Every 2.0s: kubectl get pod -n kube-system -o wide                            Sat Jul 18 13:26:12 2020

NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-5b8b769fcd-xcgdd   1/1     Running   0          40m   10.100.41.2      k8s-master   <none>           <none>
calico-node-pz7w5                          1/1     Running   0          40m   172.16.106.200   k8s-master   <none>           <none>
coredns-66db54ff7f-dw9n9                   1/1     Running   0          40m   10.100.41.3      k8s-master   <none>           <none>
coredns-66db54ff7f-tb2lb                   1/1     Running   0          40m   10.100.41.1      k8s-master   <none>           <none>
etcd-k8s-master.novalocal                  1/1     Running   0          40m   172.16.106.200   k8s-master   <none>           <none>
kube-apiserver-k8s-master.novalocal        1/1     Running   0          40m   172.16.106.200   k8s-master   <none>           <none>
kube-controller-manager-k8s-master         1/1     Running   0          40m   172.16.106.200   k8s-master   <none>           <none>
kube-proxy-44dm6                           1/1     Running   0          40m   172.16.106.200   k8s-master   <none>           <none>
kube-scheduler-k8s-master                  1/1     Running   0          40m   172.16.106.200   k8s-master   <none>           <none>

# Check the master node initialization result
kubectl get nodes -o wide

NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    master   40m   v1.18.5   172.16.106.200   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.8

6. Initialize worker nodes

6.1. Get the join command

  • Run the following on the master node to obtain the kubeadm join command and its arguments
# Run only on the master node
kubeadm token create --print-join-command

This prints the kubeadm join command and its arguments, as shown below:

# Output of the kubeadm token create command
kubeadm join apiserver.7d.com:6443 --token f7uewc.h0z9b3e5gxzij9hl --discovery-token-ca-cert-hash sha256:829e2813e91fe228afb7273c33dd7ad4f643a94b5635da56d097cd9399833662

Validity period

The token is valid for 2 hours; within those 2 hours, you can use it to initialize any number of worker nodes.
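
If you are unsure whether a token is still valid, kubeadm can list the outstanding tokens; it can also mint a non-expiring one. Both commands below are standard kubeadm (the --ttl flag is not specific to this guide):

# Run only on the master node
# List existing bootstrap tokens and their expiry times
kubeadm token list
# Create a token that never expires (use with care)
kubeadm token create --ttl 0 --print-join-command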

6.2. Initialize workers

Run on every worker node:

# Run only on worker nodes
# Replace x.x.x.x with the master node's private IP
export MASTER_IP=x.x.x.x
# Replace apiserver.demo with the APISERVER_NAME used when initializing the master
export APISERVER_NAME=apiserver.7d.com
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace with the output of the kubeadm token create command on the master node
kubeadm join apiserver.7d.com:6443 --token f7uewc.h0z9b3e5gxzij9hl --discovery-token-ca-cert-hash sha256:829e2813e91fe228afb7273c33dd7ad4f643a94b5635da56d097cd9399833662

A Kubernetes worker node is almost identical to the master node: both run the kubelet component. The only difference is that during kubeadm init, after kubelet starts, the master node also runs the kube-apiserver, kube-scheduler, and kube-controller-manager system pods.
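
One cosmetic consequence is that a freshly joined worker shows ROLES as <none> in kubectl get nodes. If you want workers labeled, an optional step on the master (assuming the node name k8s-node-1 from earlier) is:

# Optional: label a joined worker so kubectl get nodes shows a role
kubectl label node k8s-node-1 node-role.kubernetes.io/worker=worker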

6.3. Common failure causes

  • The worker node cannot reach the apiserver
    Run the following on the worker node to verify whether it can reach the apiserver
# Replace the URL with the domain name you entered when initializing the master
curl -ik https://apiserver.7d.com:6443

If it cannot, verify on the master node:

curl -ik https://localhost:6443 

The normal output looks like this:

HTTP/1.1 403 Forbidden
Cache-Control: no-cache, private
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Sat, 18 Jul 2020 05:47:43 GMT
Content-Length: 233

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}

If the master node can reach the apiserver but the worker node cannot, check your network settings:
Is /etc/hosts configured correctly?
Are security groups or firewalls interfering?
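
A few quick commands on the worker node cover both questions; this is a minimal sketch of the checks just described:

# Confirm the apiserver name resolves to the master's private IP
grep apiserver /etc/hosts
getent hosts apiserver.7d.com
# Confirm no local firewall is interfering (the install script should have disabled firewalld)
systemctl is-active firewalld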

6.4. Remove a worker node and retry

Normally you do not need to remove worker nodes; however, if joining the cluster fails, you can remove the worker node and then try adding it again

  • Run on the worker node you intend to remove
# Run only on the worker node

# Uninstall
kubeadm reset -f

# Clean up
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
  • Run on the master node:
# Run only on the master node
kubectl get nodes -o wide
  • If the node you want to remove is not in the list, skip the next step
# Run only on the master node
# Replace k8s-node-x with the name of the worker node to remove
# Worker node names can be obtained by running kubectl get nodes on the k8s-master node
kubectl delete node k8s-node-x

6.5. Check the result

# Run only on the master node
kubectl get nodes -o wide
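
If you prefer a check that blocks until the cluster settles, kubectl wait (standard in v1.18) can poll node readiness:

# Run only on the master node; exits once every node reports Ready, or after 5 minutes
kubectl wait --for=condition=Ready node --all --timeout=300s
kubectl get nodes -o wide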

7. Install an Ingress Controller

Run on the master node.

  • Create a file named nginx-ingress.yaml and copy the content below into it
# For production use, see
# https://github.com/nginxinc/kubernetes-ingress/blob/v1.5.5/docs/installation.md
# and customize further for your own environment

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress

---
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA0aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  server-names-hash-bucket-size: "1024"

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - k8s.nginx.org
  resources:
  - virtualservers
  - virtualserverroutes
  verbs:
  - list
  - watch
  - get

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9113"
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:1.5.5
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: prometheus
          containerPort: 9113
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
         #- -v=3 # Enables extensive logging. Useful for troubleshooting.
         #- -report-ingress-status
         #- -external-service=nginx-ingress
         #- -enable-leader-election
          - -enable-prometheus-metrics
         #- -enable-custom-resources
  • Run on the master node
# Run only on the master node
kubectl apply -f nginx-ingress.yaml

Uninstall only if you decide to use a different Ingress Controller:

# Run only on the master node
kubectl delete -f nginx-ingress.yaml
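
To confirm the controller is actually serving after kubectl apply, check the DaemonSet pods and probe hostPort 80 on any node; replace <any-node-ip> with one of your node IPs (a 404 from nginx is the expected default response):

# Run on the master node
kubectl get pods -n nginx-ingress -o wide
# Probe any node directly
curl -i http://<any-node-ip>/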

Original article: https://blog.csdn.net/zuozewei/article/details/107419381
Article source code: https://github.com/zuozewei/blog-example/tree/master/Kubernetes/k8s-instal-script
