07 | Deploy the Master Node Components


A Kubernetes master node runs the following components:

  • kube-apiserver;
  • kube-scheduler;
  • kube-controller-manager.

kube-apiserver, kube-scheduler, and kube-controller-manager all run as multiple instances:

  1. kube-scheduler and kube-controller-manager automatically elect one leader instance; the other instances block on the election. When the leader fails, a new leader is elected, keeping the service available;
  2. kube-apiserver is stateless, so it can be accessed through the kube-nginx proxy, keeping the service available.
graph TB
    subgraph "Master Node 1 (k8s-01)"
        api1["kube-apiserver (6443)"]
        scheduler1["kube-scheduler (Leader)"]
        controller1["kube-controller-manager (Leader)"]
    end
    subgraph "Master Node 2 (k8s-02)"
        api2["kube-apiserver (6443)"]
        scheduler2["kube-scheduler (Standby)"]
        controller2["kube-controller-manager (Standby)"]
    end
    subgraph "Storage Layer"
        etcd1["etcd Node 1"]
        etcd2["etcd Node 2"]
        etcd3["etcd Node 3"]
    end
    subgraph "Worker Node"
        nginx["kube-nginx (127.0.0.1:8443)"]
        kubelet["kubelet"]
        kubeproxy["kube-proxy"]
    end
    kubectl["kubectl (ops tool)"]

    %% API servers read and write etcd
    api1 -->|"read/write"| etcd1
    api1 -->|"read/write"| etcd2
    api1 -->|"read/write"| etcd3
    api2 -->|"read/write"| etcd1
    api2 -->|"read/write"| etcd2
    api2 -->|"read/write"| etcd3
    %% Scheduler and controller-manager talk to the local API server
    scheduler1 -->|"local apiserver"| api1
    controller1 -->|"local apiserver"| api1
    scheduler2 -->|"local apiserver"| api2
    controller2 -->|"local apiserver"| api2
    %% Leader election
    scheduler1 <-.->|"leader election"| scheduler2
    controller1 <-.->|"leader election"| controller2
    %% Worker node access to the masters
    nginx -->|"load balancing"| api1
    nginx -->|"load balancing"| api2
    kubelet -->|"via local proxy"| nginx
    kubeproxy -->|"watch resources"| nginx
    %% kubectl talks to the API servers directly
    kubectl -->|"direct access"| api1
    kubectl -->|"direct access"| api2
    style api1 fill:#e3f2fd
    style api2 fill:#e3f2fd
    style scheduler1 fill:#fff3e0
    style controller1 fill:#e8f5e9
    style scheduler2 fill:#fff8e1
    style controller2 fill:#f1f8e9
    style etcd1 fill:#fce4ec
    style etcd2 fill:#fce4ec
    style etcd3 fill:#fce4ec
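In the diagram above, kube-nginx on each worker node is simply an nginx instance that load-balances TCP connections to the apiservers. Below is a minimal sketch of such a stream-proxy configuration; the file name and upstream IPs are assumptions for illustration, so adapt them to your deployment:

cat > kube-nginx.conf <<EOF
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream apiserver {
        hash \$remote_addr consistent;
        server 10.37.91.93:6443 max_fails=3 fail_timeout=30s;
        server 10.37.43.62:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass apiserver;
    }
}
EOF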

Note: unless stated otherwise, all operations in this document are executed on the k8s-01 node.

Download the latest release binaries

Download the binary tarball from the CHANGELOG-1.31 page and unpack it:

cd /opt/k8s/work
wget https://dl.k8s.io/v1.31.1/kubernetes-server-linux-amd64.tar.gz  # arrange your own proxy if the download is blocked in your region
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz

Copy the binaries to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_ip}:/opt/k8s/bin/
  ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done

Deploy the kube-apiserver component

This section walks through deploying a 2-instance kube-apiserver cluster.

Note: unless stated otherwise, all operations in this section are executed on the k8s-01 node.

Create the kubernetes-master certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes-master",
  "hosts": [
    "127.0.0.1",
    "10.37.91.93",
    "10.37.43.62",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "kubernetes.default.svc.${CLUSTER_DNS_DOMAIN}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "superproj"
    }
  ]
}
EOF
  • The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs plus the IP and domain names of the kubernetes service;

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*pem
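To double-check that the hosts list above actually landed in the certificate, you can inspect its SANs with openssl (a quick sanity check, not part of the original steps):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'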

Copy the generated certificate and private key files to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
  scp kubernetes*.pem root@${node_ip}:/etc/kubernetes/cert/
done

Create the encryption config file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > encryption-config.yaml <<EOF
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Copy the encryption config file to the /etc/kubernetes directory on all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
done
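Once kube-apiserver is running (it is started later in this section), you can verify that secrets really are encrypted at rest: read one straight out of etcd and look for the k8s:enc:aescbc:v1:key1 prefix instead of plaintext. A sketch, assuming the etcd endpoint and the same client certificate that kube-apiserver uses for etcd:

kubectl create secret generic test-secret --from-literal=foo=bar

ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.37.91.93:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/kubernetes.pem \
  --key=/etc/kubernetes/cert/kubernetes-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head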

Create the audit policy file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large; don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data, so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io

  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

Distribute the audit policy file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp audit-policy.yaml root@${node_ip}:/etc/kubernetes/audit-policy.yaml
done
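After kube-apiserver starts, you can spot-check that the policy took effect by tailing the audit log; each line is a JSON event whose level field should match the rules above (jq is assumed to be installed):

source /opt/k8s/bin/environment.sh
tail -n 5 ${K8S_DIR}/kube-apiserver/audit.log | jq '{level, verb, user: .user.username, uri: .requestURI}'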

Create the certificate used later to access metrics-server or kube-prometheus

Create the certificate signing request:

cd /opt/k8s/work
cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "superproj"
    }
  ]
}
EOF
  • The CN must be listed in kube-apiserver's --requestheader-allowed-names parameter; otherwise later metrics requests are rejected as unauthorized.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
ls proxy-client*.pem
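A quick way to confirm that the CN matches --requestheader-allowed-names (the printed subject should end in CN=aggregator):

openssl x509 -in proxy-client.pem -noout -subject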

Copy the generated certificate and private key files to all master nodes:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp proxy-client*.pem root@${node_ip}:/etc/kubernetes/cert/
done

Create /etc/kubernetes/pki/sa.key

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
mkdir -p /etc/kubernetes/pki
chmod 700 /etc/kubernetes/pki
openssl genrsa -out sa.key 2048

for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/pki"
done

for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp sa.key root@${node_ip}:/etc/kubernetes/pki/sa.key
done
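All master nodes must hold an identical sa.key; a quick consistency check in the same style as the scripts above:

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "md5sum /etc/kubernetes/pki/sa.key"
done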

Create the kube-apiserver systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --advertise-address=##NODE_IP## \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --bind-address=##NODE_IP## \\
  --secure-port=6443 \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --audit-log-maxage=15 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-truncate-enabled \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --enable-bootstrap-token-auth \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --service-account-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --api-audiences=https://kubernetes.default.svc.cluster.local \\
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --advertise-address: the IP address the apiserver advertises to the cluster (the endpoint IP of the kubernetes service);
  • --default-*-toleration-seconds: thresholds for tolerating node problems;
  • --max-*-requests-inflight: upper limits on in-flight requests;
  • --etcd-*: the certificates for accessing etcd, and the etcd server addresses;
  • --bind-address: the IP the https endpoint listens on; it must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
  • --secure-port: the https listening port;
  • the insecure http port (8080) no longer exists; the --insecure-port flag was removed in Kubernetes v1.24, so only the secure port is served;
  • --tls-*-file: the certificate, private key, and CA files used by the apiserver;
  • --audit-*: parameters for the audit policy and the audit log files;
  • --client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and so on);
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
  • --requestheader-*: configuration for kube-apiserver's aggregation layer, required by the proxy client and HPA;
  • --requestheader-client-ca-file: the CA used to sign the certificates specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
  • --requestheader-allowed-names: must not be empty; a comma-separated list of CNs of the --proxy-client-cert-file certificates, set to "aggregator" here;
  • --service-account-key-file: the public key file used to verify ServiceAccount tokens; it pairs with the private key file passed to kube-controller-manager via --service-account-private-key-file;
  • --runtime-config=api/all=true: enables all API versions, such as autoscaling/v2alpha1;
  • --authorization-mode=Node,RBAC and --anonymous-auth=false: enable Node and RBAC authorization and reject unauthorized requests;
  • --enable-admission-plugins: enables plugins that are disabled by default;
  • --allow-privileged: allows running privileged containers;
  • --apiserver-count=3: the number of apiserver instances;
  • --event-ttl: how long events are retained;
  • --kubelet-*: when specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the user of the client certificate (the kubernetes.pem certificate above has CN kubernetes-master), otherwise kubelet API calls are rejected as unauthorized;
  • --proxy-client-*: the certificate the apiserver uses to access metrics-server;
  • --service-cluster-ip-range: the Service cluster IP range;
  • --service-node-port-range: the NodePort port range;

If the kube-apiserver machines do not run kube-proxy, you also need to add the --enable-aggregator-routing=true parameter.

For more on the --requestheader-* parameters, see the Kubernetes documentation on the API aggregation layer.

Note:

  1. The CA certificate specified by --requestheader-client-ca-file must support both client auth and server auth;
  2. If --requestheader-allowed-names is not empty and the CN of the --proxy-client-cert-file certificate is not in the allowed names, subsequent queries of node or pod metrics fail with:
$ kubectl top nodes
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope

Create and distribute the kube-apiserver systemd unit files for each node

Substitute the template variables to generate a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 2; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" \
kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service
done
ls kube-apiserver*.service
  • NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and the corresponding IPs; a sketch of the assumed environment.sh follows below.
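For reference, this is roughly what the scripts in this series assume environment.sh defines; the values below are placeholders consistent with this article's IPs, not the literal file from the series:

#!/usr/bin/env bash
# /opt/k8s/bin/environment.sh (sketch; adjust to your deployment)
NODE_IPS=(10.37.91.93 10.37.43.62)        # master node IPs
NODE_NAMES=(k8s-01 k8s-02)                # matching node names
ETCD_ENDPOINTS="https://10.37.91.93:2379,https://10.37.43.62:2379"
K8S_DIR="/data/k8s/k8s"                   # working directory for component data and logs
SERVICE_CIDR="10.254.0.0/16"              # Service cluster IP range
NODE_PORT_RANGE="30000-32767"             # NodePort range
CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"    # first IP of SERVICE_CIDR
CLUSTER_DNS_DOMAIN="cluster.local"
# Generate once and keep fixed; changing it breaks decryption of existing secrets
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)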

Distribute the generated systemd unit files:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
done

Start the kube-apiserver service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
done

Check the kube-apiserver status

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u kube-apiserver

Check the cluster status

$ kubectl cluster-info
Kubernetes control plane is running at https://10.37.91.93:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   3m53s

$ kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager   Unhealthy   Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
etcd-0               Healthy     ok

Because kube-scheduler and kube-controller-manager have not been installed yet, those two components report Unhealthy, which is expected at this stage.
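Also, since the v1 componentstatus API is deprecated, the apiserver's own verbose health endpoints are a more future-proof check:

kubectl get --raw='/readyz?verbose' | head
kubectl get --raw='/livez?verbose' | head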

Check the ports kube-apiserver listens on

$ sudo netstat -lnpt|grep kube
tcp        0      0 10.37.91.93:6443        0.0.0.0:*               LISTEN      401359/kube-apiserv
  • 6443: the secure https port; all requests are authenticated and authorized;
  • since the insecure port is disabled, nothing listens on 8080.

Deploy the kube-controller-manager component

This section describes how to deploy a highly available kube-controller-manager cluster.

The cluster has 2 nodes. After startup, they elect a leader through a competitive election while the other nodes block; when the leader becomes unavailable, the blocked nodes run another election to produce a new leader, keeping the service available.

To secure communications, this section first generates an x509 certificate and private key. kube-controller-manager uses this certificate in the following two cases:

  1. communicating with kube-apiserver's secure port;
  2. serving Prometheus-format metrics on the secure port (https, 10252);

Note: unless stated otherwise, all operations in this section are executed on the k8s-01 node.

Create the kube-controller-manager certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.37.91.93",
    "10.37.43.62"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "superproj"
    }
  ]
}
EOF
  • The hosts list contains the IPs of all kube-controller-manager nodes;
  • CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*.pem

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
done

Create and distribute the kubeconfig file

kube-controller-manager uses a kubeconfig file to access the apiserver. The file provides the apiserver address, the embedded CA certificate, the kube-controller-manager certificate, and related information:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
  • kube-controller-manager is co-located with kube-apiserver, so it accesses kube-apiserver directly via the node IP; the generated file can be inspected as shown below.
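Before distributing the file, a quick sanity check (kubectl redacts the embedded certificate data in this view):

kubectl config view --kubeconfig=kube-controller-manager.kubeconfig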

Distribute the kubeconfig to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  sed -e "s/##NODE_IP##/${node_ip}/" kube-controller-manager.kubeconfig > kube-controller-manager-${node_ip}.kubeconfig
  scp kube-controller-manager-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-controller-manager.kubeconfig
done

Create the kube-controller-manager systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials \\
  --concurrent-service-syncs=2 \\
  --bind-address=##NODE_IP## \\
  --secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --secure-port=10252 and --bind-address=##NODE_IP##: serve https /metrics requests on port 10252 of the node IP;
  • --kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
  • --authentication-kubeconfig and --authorization-kubeconfig: kube-controller-manager uses them to connect to the apiserver and to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to verify the certificates of clients requesting the https metrics endpoint. If these two kubeconfig parameters are not configured, client connections to kube-controller-manager's https port are rejected (with a permission error).
  • --cluster-signing-*-file: sign the certificates created by TLS bootstrap;
  • --root-ca-file: the CA certificate placed into containers' ServiceAccounts, used to verify kube-apiserver's certificate;
  • --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key file specified by kube-apiserver's --service-account-key-file;
  • --service-cluster-ip-range: the Service cluster IP range; must match the same parameter on kube-apiserver;
  • --leader-elect=true: cluster mode; enables leader election. The node elected leader does the work while the others block;
  • --controllers=*,bootstrapsigner,tokencleaner: the controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics parameters; supports autoscaling/v2alpha1;
  • --tls-cert-file and --tls-private-key-file: the server certificate and private key used when serving metrics over https;
  • --use-service-account-credentials=true: each controller inside kube-controller-manager accesses kube-apiserver with its own service account;

Create and distribute the kube-controller-manager systemd unit files for each node

Substitute the template variables to create a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 2; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" \
kube-controller-manager.service.template > kube-controller-manager-${NODE_IPS[i]}.service
done
ls kube-controller-manager*.service

Distribute them to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
done

Start the kube-controller-manager service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done

Check the service status

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u kube-controller-manager

kube-controller-manager listens on port 10252 for https requests:

$ sudo netstat -lnpt | grep kube-cont
tcp        0      0 10.37.91.93:10252       0.0.0.0:*               LISTEN      421990/kube-control

Inspect the exported metrics

Note: run the following command on a kube-controller-manager node.

$ curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://10.37.43.62:10252/metrics |head
# HELP aggregator_discovery_aggregation_count_total [ALPHA] Counter of number of times discovery was aggregated
# TYPE aggregator_discovery_aggregation_count_total counter
aggregator_discovery_aggregation_count_total 0
# HELP apiextensions_apiserver_validation_ratcheting_seconds [ALPHA] Time for comparison of old to new for the purposes of CRDValidationRatcheting during an UPDATE in seconds.
# TYPE apiextensions_apiserver_validation_ratcheting_seconds histogram
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="1e-05"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="4e-05"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.00016"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.00064"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.00256"} 0

Check the current leader

$ kubectl -n kube-system get leases kube-controller-manager
NAME                      HOLDER                                  AGE
kube-controller-manager   k8s-01_5d243d35-b0c0-49cd-8bca-43c77a94ad5a   21m

As shown, the current leader is the kube-controller-manager instance running on the k8s-01 node.

Test kube-controller-manager cluster high availability

Stop the kube-controller-manager service on one of the nodes and watch the logs on the other nodes to see whether one of them acquires leadership.

Run the following to stop the kube-controller-manager service on the k8s-01 node, then check the current leader again:

$ sudo systemctl stop kube-controller-manager.service
$ kubectl -n kube-system get leases kube-controller-manager
NAME                      HOLDER                                  AGE
kube-controller-manager   k8s-02_45a2acb9-b710-47b6-a138-3ac57f61861b   23m

As shown, the kube-controller-manager leader has switched to the instance on the k8s-02 node.
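Remember to start the stopped instance again so the cluster returns to full strength:

$ sudo systemctl start kube-controller-manager.service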

References

  1. On controller permissions and the use-service-account-credentials parameter: https://github.com/kubernetes/kubernetes/issues/48208
  2. kubelet authentication and authorization: https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization

Deploy the kube-scheduler component

This section describes how to deploy a highly available kube-scheduler cluster.

The cluster has 2 nodes. After startup, they elect a leader through a competitive election while the other nodes block; when the leader becomes unavailable, the remaining nodes run another election to produce a new leader, keeping the service available.

To secure communications, this document first generates an x509 certificate and private key. kube-scheduler uses this certificate in the following two cases:

  1. communicating with kube-apiserver's secure port;
  2. serving Prometheus-format metrics on the secure port (https, 10259);

Note: unless stated otherwise, all operations in this section are executed on the k8s-01 node.

Create the kube-scheduler certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.37.91.93",
    "10.37.43.62"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "superproj"
    }
  ]
}
EOF
  • The hosts list contains the IPs of all kube-scheduler nodes;
  • CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to work;

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
ls kube-scheduler*.pem

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-scheduler*.pem root@${node_ip}:/etc/kubernetes/cert/
done

Create and distribute the kubeconfig file

kube-scheduler uses a kubeconfig file to access the apiserver. The file provides the apiserver address, the embedded CA certificate, and the kube-scheduler certificate:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://##NODE_IP##:6443" \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  sed -e "s/##NODE_IP##/${node_ip}/" kube-scheduler.kubeconfig > kube-scheduler-${node_ip}.kubeconfig
  scp kube-scheduler-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/kube-scheduler.kubeconfig
done

Create the kube-scheduler configuration file

cd /opt/k8s/work
cat >kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
leaderElection:
  leaderElect: true
EOF
  • clientConnection.kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • leaderElection.leaderElect=true: cluster mode; enables leader election. The node elected leader does the work while the others block;

Substitute the template variables:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 2; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" \
kube-scheduler.yaml.template > kube-scheduler-${NODE_IPS[i]}.yaml
done
ls kube-scheduler*.yaml
  • NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and the corresponding IPs;

Distribute the kube-scheduler configuration file to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-scheduler-${node_ip}.yaml root@${node_ip}:/etc/kubernetes/kube-scheduler.yaml
done
  • the file is renamed to kube-scheduler.yaml on the destination;

Create the kube-scheduler systemd unit template file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --config=/etc/kubernetes/kube-scheduler.yaml \\
  --bind-address=##NODE_IP## \\
  --secure-port=10259 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

Create and distribute the kube-scheduler systemd unit files for each node

Substitute the template variables to create a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 2; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" \
kube-scheduler.service.template > kube-scheduler-${NODE_IPS[i]}.service
done
ls kube-scheduler*.service

Distribute the systemd unit files to all master nodes:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-scheduler-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-scheduler.service
done

Start the kube-scheduler service

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done

Check the service status

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-scheduler|grep Active"
done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

journalctl -u kube-scheduler

Inspect the exported metrics

Note: run the following commands on a kube-scheduler node.

kube-scheduler listens on port 10259 for https requests and serves /metrics and /healthz on it. Check with:

$ sudo netstat -lnpt |grep kube-sch
tcp        0      0 10.37.91.93:10259       0.0.0.0:*               LISTEN      433307/kube-schedul

Access the /metrics endpoint:

$ curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://10.37.91.93:10259/metrics |head
# HELP aggregator_discovery_aggregation_count_total [ALPHA] Counter of number of times discovery was aggregated
# TYPE aggregator_discovery_aggregation_count_total counter
aggregator_discovery_aggregation_count_total 0
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.

Check the current leader

$ kubectl -n kube-system get leases kube-scheduler
NAME             HOLDER                                  AGE
kube-scheduler   k8s-01_00dc5c29-22d9-4354-bd9d-025bae5ebc11   6m53s

As shown, the current leader is the kube-scheduler instance running on the k8s-01 node.

Test kube-scheduler cluster high availability

Stop the kube-scheduler service on one of the nodes and watch the logs on the other nodes to see whether one of them acquires leadership.

Run the following to stop the kube-scheduler service on the k8s-01 node, then check the current leader again:

$ sudo systemctl stop kube-scheduler.service
$ kubectl -n kube-system get leases kube-scheduler
NAME             HOLDER                                  AGE
kube-scheduler   k8s-02_42ee9af7-637e-4d17-9285-5da67483a916   8m3s

As shown, the kube-scheduler leader has switched to the instance on the k8s-02 node.
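As before, start the stopped instance again afterwards:

$ sudo systemctl start kube-scheduler.service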
