Master nodes (master)
Creating the etcd certificates
- In the overall architecture, machines 12/21/22 all run the etcd service. First, create certificates for etcd on machine 200. In the /opt/certs directory, create a ca-config.json file with the following content:

{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

Tips: expiry sets a validity period of 20 years. profiles.server is the profile for certificates a server presents at startup; profiles.client is for certificates a client presents when connecting to a server; profiles.peer is the mutual (two-way) profile, used when server and client must each verify the other.
- Next, create an etcd-peer-csr.json file with the following content:

{
    "CN": "k8s-etcd",
    "hosts": [
        "10.4.7.11",
        "10.4.7.12",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

Tips: hosts lists every IP on which etcd might be deployed.
- Generate the certificate:

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
2021/12/14 23:00:14 [INFO] generate received request
2021/12/14 23:00:14 [INFO] received CSR
2021/12/14 23:00:14 [INFO] generating key: rsa-2048
2021/12/14 23:00:14 [INFO] encoded CSR
2021/12/14 23:00:14 [INFO] signed certificate with serial number 350088718244052801917258856164037020016035615703
[root@hdss7-200 certs]# ll
total 36
-rw-r--r-- 1 root root  836 Dec 14 22:52 ca-config.json
-rw-r--r-- 1 root root 1041 Dec 14 17:10 ca.csr
-rw-r--r-- 1 root root  328 Dec 14 17:10 ca-csr.json
-rw------- 1 root root 1675 Dec 14 17:10 ca-key.pem
-rw-r--r-- 1 root root 1298 Dec 14 17:10 ca.pem
-rw-r--r-- 1 root root 1062 Dec 14 23:00 etcd-peer.csr
-rw-r--r-- 1 root root  363 Dec 14 22:55 etcd-peer-csr.json
-rw------- 1 root root 1675 Dec 14 23:00 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Dec 14 23:00 etcd-peer.pem
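To confirm that the signed certificate actually carries the peer profile's usages and the four etcd IPs, it can be inspected with openssl — an optional check, assuming openssl is installed on machine 200; this step is not in the original procedure:

[root@hdss7-200 certs]# openssl x509 -in etcd-peer.pem -noout -text | grep -A1 -E "Key Usage|Subject Alternative Name"

The extended key usage should list both TLS Web Server Authentication and TLS Web Client Authentication, and the SAN entries should match the hosts field of etcd-peer-csr.json.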
Deploying the etcd service
- Install etcd on machines 12/21/22. First create the source directory and an etcd user, then download the etcd v3.1.20 package:

[root@hdss7-21 ~]# mkdir /opt/src
[root@hdss7-21 ~]# cd /opt/src
[root@hdss7-21 src]# useradd -s /sbin/nologin -M etcd
[root@hdss7-21 src]# id etcd
uid=1000(etcd) gid=1000(etcd) groups=1000(etcd)
[root@hdss7-21 src]# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
[root@hdss7-12 src]# tar zxf etcd-v3.1.20-linux-amd64.tar.gz -C /opt
[root@hdss7-12 src]# cd /opt
[root@hdss7-12 src]# mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
[root@hdss7-12 src]# ln -s /opt/etcd-v3.1.20/ /opt/etcd
[root@hdss7-12 src]# cd etcd
- Deploy the certificates on machines 12/21/22 and set up the startup script:

[root@hdss7-21 etcd]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@hdss7-21 etcd]# cd certs
[root@hdss7-21 certs]# scp hdss7-200:/opt/certs/ca.pem .
[root@hdss7-21 certs]# scp hdss7-200:/opt/certs/etcd-peer.pem .
[root@hdss7-21 certs]# scp hdss7-200:/opt/certs/etcd-peer-key.pem .
[root@hdss7-21 certs]# cd ..
In the /opt/etcd/ directory on 12/21/22, create the etcd-server-startup.sh script. The initial-cluster setting must list every machine running etcd, while --name (e.g. etcd-server-7-21) and the IPs in listen-peer-urls, listen-client-urls, initial-advertise-peer-urls, and advertise-client-urls must each be changed to match the local machine (a sed helper for deriving the per-host variants appears after the three scripts). The contents are as follows:

10.4.7.12

#!/bin/bash
./etcd --name etcd-server-7-12 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.12:2380 \
       --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.12:2380 \
       --advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
10.4.7.21
#!/bin/bash
./etcd --name etcd-server-7-21 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.21:2380 \
       --listen-client-urls https://10.4.7.21:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.21:2380 \
       --advertise-client-urls https://10.4.7.21:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
10.4.7.22
#!/bin/bash
./etcd --name etcd-server-7-22 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.22:2380 \
       --listen-client-urls https://10.4.7.22:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.22:2380 \
       --advertise-client-urls https://10.4.7.22:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
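Since the three startup scripts differ only in the local member name and IPs (the --initial-cluster line is identical on every node), the 7-21 and 7-22 variants can be derived from the 7-12 script. A hypothetical sed sketch, assuming the script layout above — verify the generated file before using it:

[root@hdss7-12 etcd]# sed -e '/--initial-cluster /!s/10\.4\.7\.12/10.4.7.21/g' \
    -e '/--name/s/etcd-server-7-12/etcd-server-7-21/' \
    etcd-server-startup.sh > etcd-server-startup-7-21.sh

The address /--initial-cluster /! restricts the IP substitution to every line except the member list, which must stay the same on all three machines.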
Fix ownership and permissions:

[root@hdss7-12 etcd]# chmod +x etcd-server-startup.sh
[root@hdss7-12 etcd]# chown -R etcd.etcd /opt/etcd-v3.1.20 /data/etcd /data/logs/etcd-server/
[root@hdss7-12 etcd]# ll
total 30072
drwxr-xr-x  2 etcd etcd       66 Dec 15 20:25 certs
drwxr-xr-x 11 etcd etcd     4096 Oct 11  2018 Documentation
-rwxr-xr-x  1 etcd etcd 16406432 Oct 11  2018 etcd
-rwxr-xr-x  1 etcd etcd 14327712 Oct 11  2018 etcdctl
-rwxr-xr-x  1 etcd etcd      982 Dec 15 20:44 etcd-server-startup.sh
-rw-r--r--  1 etcd etcd    32632 Oct 11  2018 README-etcdctl.md
-rw-r--r--  1 etcd etcd     5878 Oct 11  2018 README.md
-rw-r--r--  1 etcd etcd     7892 Oct 11  2018 READMEv2-etcdctl.md
- Install supervisor on machines 12/21/22 to manage the etcd process:

[root@hdss7-21 etcd]# yum install -y supervisor
[root@hdss7-21 etcd]# systemctl start supervisord && systemctl enable supervisord
Create the supervisor daemon configuration file /etc/supervisord.d/etcd-server.ini for etcd; every occurrence of 7-12 in it must be changed to match the local machine:

[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh
numprocs=1
directory=/opt/etcd
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=etcd
redirect_stderr=true
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
Reload supervisor and check whether etcd started; startup has only succeeded once both ports 2379 and 2380 are listening:

[root@hdss7-21 etcd]# supervisorctl update
etcd-server-7-21: added process group
[root@hdss7-21 etcd]# supervisorctl status
etcd-server-7-21 RUNNING pid 9978, uptime 0:00:51
[root@hdss7-21 etcd]# netstat -tlnp|grep etcd
tcp 0 0 10.4.7.21:2379 0.0.0.0:* LISTEN 9979/./etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 9979/./etcd
tcp 0 0 10.4.7.21:2380 0.0.0.0:* LISTEN 9979/./etcd
# Check cluster health from any node (12/21/22):
[root@hdss7-21 etcd]# ./etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
[root@hdss7-21 etcd]# ./etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=true
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=false
# Whichever machine's etcd process came up first is the leader
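The health checks above go through the plaintext 127.0.0.1:2379 listener. To also exercise the TLS client path that the apiserver will use later, etcdctl can be pointed at an HTTPS endpoint with the peer certificate — an optional check, not part of the original transcript:

[root@hdss7-21 etcd]# ./etcdctl --ca-file ./certs/ca.pem \
    --cert-file ./certs/etcd-peer.pem \
    --key-file ./certs/etcd-peer-key.pem \
    --endpoints https://10.4.7.21:2379 cluster-health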
API server cluster deployment
- Per the architecture design, the apiserver runs on machines 21 and 22. First download and unpack the k8s v1.15.12 package:

[root@hdss7-22 etcd]# cd /opt/src
[root@hdss7-22 src]# wget https://github.com/kubernetes/kubernetes/releases/download/v1.15.12/kubernetes.tar.gz
[root@hdss7-21 src]# tar zxf kubernetes.tar.gz
[root@hdss7-21 opt]# mv kubernetes kubernetes-down
[root@hdss7-21 opt]# cd kubernetes-down/cluster/
# Download the kubernetes server binaries
[root@hdss7-21 cluster]# ./get-kube-binaries.sh
[root@hdss7-21 cluster]# cd /opt/src/kubernetes-down/server/
[root@hdss7-21 server]# tar zxf kubernetes-server-linux-amd64.tar.gz -C /opt/
[root@hdss7-21 server]# cd /opt/
[root@hdss7-21 opt]# mv kubernetes kubernetes-v1.15.12
[root@hdss7-21 opt]# ln -s /opt/kubernetes-v1.15.12/ /opt/kubernetes
# Remove the files we don't need
[root@hdss7-21 opt]# cd /opt/kubernetes
[root@hdss7-21 kubernetes]# ls
addons kubernetes-src.tar.gz LICENSES server
[root@hdss7-21 kubernetes]# rm -rf kubernetes-src.tar.gz
[root@hdss7-21 kubernetes]# cd server/bin
[root@hdss7-21 bin]# rm -rf *.tar *_tag
[root@hdss7-21 bin]# ll
total 885640
-rwxr-xr-x 1 root root  43555296 May  6  2020 apiextensions-apiserver
-rwxr-xr-x 1 root root 100655136 May  6  2020 cloud-controller-manager
-rwxr-xr-x 1 root root 200894096 May  6  2020 hyperkube
-rwxr-xr-x 1 root root  40198592 May  6  2020 kubeadm
-rwxr-xr-x 1 root root 164682144 May  6  2020 kube-apiserver
-rwxr-xr-x 1 root root 116610080 May  6  2020 kube-controller-manager
-rwxr-xr-x 1 root root  43059232 May  6  2020 kubectl
-rwxr-xr-x 1 root root 119772208 May  6  2020 kubelet
-rwxr-xr-x 1 root root  36995680 May  6  2020 kube-proxy
-rwxr-xr-x 1 root root  38794336 May  6  2020 kube-scheduler
-rwxr-xr-x 1 root root   1648224 May  6  2020 mounter
- On machine 200, in the /opt/certs directory, sign a client certificate. First create the client-csr.json file:

{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Run the command to generate the certificate:

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
2021/12/16 21:22:24 [INFO] generate received request
2021/12/16 21:22:24 [INFO] received CSR
2021/12/16 21:22:24 [INFO] generating key: rsa-2048
2021/12/16 21:22:24 [INFO] encoded CSR
2021/12/16 21:22:24 [INFO] signed certificate with serial number 127722226776894493816930732521678586220347426393
2021/12/16 21:22:24 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@hdss7-200 certs]# ll
total 52
-rw-r--r-- 1 root root  836 Dec 14 22:52 ca-config.json
-rw-r--r-- 1 root root 1041 Dec 14 17:10 ca.csr
-rw-r--r-- 1 root root  328 Dec 14 17:10 ca-csr.json
-rw------- 1 root root 1675 Dec 14 17:10 ca-key.pem
-rw-r--r-- 1 root root 1298 Dec 14 17:10 ca.pem
-rw-r--r-- 1 root root  993 Dec 16 21:22 client.csr
-rw-r--r-- 1 root root  280 Dec 16 21:22 client-csr.json
-rw------- 1 root root 1679 Dec 16 21:22 client-key.pem
-rw-r--r-- 1 root root 1363 Dec 16 21:22 client.pem
-rw-r--r-- 1 root root 1062 Dec 14 23:00 etcd-peer.csr
-rw-r--r-- 1 root root  363 Dec 14 22:55 etcd-peer-csr.json
-rw------- 1 root root 1675 Dec 14 23:00 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Dec 14 23:00 etcd-peer.pem
- Still on machine 200, create the API server certificate. Create apiserver-csr.json:

{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster.local",
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Generate the certificate:

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
2021/12/16 21:29:16 [INFO] generate received request
2021/12/16 21:29:16 [INFO] received CSR
2021/12/16 21:29:16 [INFO] generating key: rsa-2048
2021/12/16 21:29:16 [INFO] encoded CSR
2021/12/16 21:29:16 [INFO] signed certificate with serial number 323198776840425302765737130982725969133467364555
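Because clients will reach the apiserver through the VIP (10.4.7.10) as well as the node IPs, it is worth confirming those addresses landed in the certificate's SANs. As with the etcd certificate, openssl can show them (an optional check, not part of the original steps):

[root@hdss7-200 certs]# openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"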
- On machines 21/22, copy the certificates over:

[root@hdss7-21 bin]# cd /opt/kubernetes/server/bin
[root@hdss7-21 bin]# mkdir cert
[root@hdss7-21 bin]# cd cert
# scp can copy several files at once when they are quoted as a single "filepath" argument
[root@hdss7-21 cert]# scp hdss7-200:"/opt/certs/ca.pem /opt/certs/ca-key.pem /opt/certs/client-key.pem /opt/certs/client.pem /opt/certs/apiserver.pem /opt/certs/apiserver-key.pem" .
[root@hdss7-21 cert]# ll
total 24
-rw------- 1 root root 1675 Dec 16 21:35 apiserver-key.pem
-rw-r--r-- 1 root root 1549 Dec 16 21:35 apiserver.pem
-rw------- 1 root root 1675 Dec 16 21:35 ca-key.pem
-rw-r--r-- 1 root root 1298 Dec 16 21:31 ca.pem
-rw------- 1 root root 1679 Dec 16 21:35 client-key.pem
-rw-r--r-- 1 root root 1363 Dec 16 21:35 client.pem
- On 21/22, in the /opt/kubernetes/server/bin directory, first create a conf directory, then create the configuration file audit.yaml with the following content:

apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
- On 21/22, create the apiserver startup script. In the /opt/kubernetes/server/bin directory, create the kube-apiserver.sh script with the following content:

#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2
[root@hdss7-21 bin]# chmod +x kube-apiserver.sh
[root@hdss7-21 bin]# vi /etc/supervisord.d/kube-apiserver.ini
Create the apiserver supervisor configuration file kube-apiserver.ini:

[program:kube-apiserver-7-21]
command=/opt/kubernetes/server/bin/kube-apiserver.sh  ; the program (relative uses PATH, can take args)
numprocs=1                                            ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                  ; directory to cwd to before exec (def no cwd)
autostart=true                                        ; start at supervisord start (default: true)
autorestart=true                                      ; restart at unexpected quit (default: true)
startsecs=30                                          ; number of secs prog must stay running (def. 1)
startretries=3                                        ; max # of serial start failures (default 3)
exitcodes=0,2                                         ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                       ; signal used to kill process (default TERM)
stopwaitsecs=10                                       ; max num secs to wait b4 SIGKILL (default 10)
user=root                                             ; setuid to this UNIX account to run the program
redirect_stderr=true                                  ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log  ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                              ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                           ; emit events on stdout writes (default false)
Create the log directory and start the apiserver with supervisor:

[root@hdss7-22 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
[root@hdss7-22 bin]# supervisorctl update
[root@hdss7-22 bin]# supervisorctl status
etcd-server-7-22 RUNNING pid 6955, uptime 1 day, 0:44:31
kube-apiserver-7-22 RUNNING pid 8895, uptime 0:00:32
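With the flags above, kube-apiserver in v1.15 also serves the legacy insecure port on 127.0.0.1:8080 by default, which the controller-manager and scheduler configured later rely on. A quick local probe — an optional check, assuming that default has not been changed:

[root@hdss7-21 bin]# netstat -tlnp | grep kube-api
[root@hdss7-21 bin]# curl http://127.0.0.1:8080/healthz
ok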
For a walkthrough of the k8s YAML file format, refer to the separate article on that topic.
L4 reverse proxy service
In our architecture plan, machines 11/12 handle the reverse proxy service, using keepalived to achieve high availability.
Installing nginx and keepalived
- On machines 11/12, install nginx with yum install -y nginx nginx-mod-stream, append the following to the end of the /etc/nginx/nginx.conf configuration file, then start nginx (see the sketch after this block):

stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
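The start-up commands are not spelled out above; a minimal sketch:

[root@hdss7-11 ~]# nginx -t
[root@hdss7-11 ~]# systemctl start nginx && systemctl enable nginx
[root@hdss7-11 ~]# ss -lnt | grep 7443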
- On machines 11 and 12, install keepalived with yum install -y keepalived, then create and edit the /etc/keepalived/check_port.sh script with the following content:

#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi
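The script exits 1 when nothing listens on the given port, which keepalived's vrrp_script below uses to trigger a failover. A quick manual sanity check (hypothetical usage):

[root@hdss7-11 ~]# bash /etc/keepalived/check_port.sh 7443; echo $?   # 0 while nginx listens on 7443, 1 otherwise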
- Make check_port.sh executable, then empty the keepalived.conf configuration file and, on machine 11 (master), write the following configuration:

! Configuration File for keepalived
global_defs {
    router_id 10.4.7.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}
- On machine 12 (backup), empty keepalived.conf and write the following:

! Configuration File for keepalived
global_defs {
    router_id 10.4.7.12
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 251
    mcast_src_ip 10.4.7.12
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}
- Start keepalived and check the machine's IPs. If one machine or its nginx fails, the VIP automatically moves to the other machine; once the failed machine is repaired, restart its keepalived and the VIP is rescheduled:

[root@hdss7-11 keepalived]# systemctl start keepalived
[root@hdss7-11 keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@hdss7-11 keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b6:de:20 brd ff:ff:ff:ff:ff:ff
    inet 10.4.7.11/24 brd 10.4.7.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 10.4.7.10/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::984d:a015:19cd:5b67/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::44d4:1853:c84:4437/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
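To watch the failover itself, one simple drill (hypothetical commands, not part of the original transcript):

# On hdss7-11, stop nginx so check_port.sh starts failing:
[root@hdss7-11 ~]# systemctl stop nginx
# A few seconds later the VIP should show up on hdss7-12:
[root@hdss7-12 ~]# ip addr show ens33 | grep 10.4.7.10
# Repair the "fault" and restart keepalived on 11 to reschedule the VIP:
[root@hdss7-11 ~]# systemctl start nginx && systemctl restart keepalived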
Worker nodes (node)
Installing controller-manager (node controller / scheduler services)
In the architecture plan, machines 21/22 act as controller nodes. The controller-manager service is installed as follows:
- On machines 21/22, go to the /opt/kubernetes/server/bin directory, create the kube-controller-manager.sh script file, and make it executable:

#!/bin/bash
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2
[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager
- Create the supervisor configuration /etc/supervisord.d/kube-controller-manager.ini; the 7-21 in the configuration must be changed to match the machine's IP:

[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
- Create the /opt/kubernetes/server/bin/kube-scheduler.sh script file:

#!/bin/bash
./kube-scheduler \
  --leader-elect \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler
- Create the scheduler's supervisor configuration file /etc/supervisord.d/kube-scheduler.ini:

[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
- Run the commands below to update supervisor, then check that the processes started:

[root@hdss7-21 bin]# supervisorctl status
etcd-server-7-21 RUNNING pid 1040, uptime 12 days, 17:37:05
kube-apiserver-7-21 RUNNING pid 1377, uptime 12 days, 17:36:14
[root@hdss7-21 bin]# supervisorctl update
kube-controller-manager-7-21: added process group
kube-scheduler-7-21: added process group
[root@hdss7-21 bin]# supervisorctl status
etcd-server-7-21 RUNNING pid 1040, uptime 12 days, 17:37:40
kube-apiserver-7-21 RUNNING pid 1377, uptime 12 days, 17:36:49
kube-controller-manager-7-21 RUNNING pid 93563, uptime 0:00:31
kube-scheduler-7-21 RUNNING pid 93565, uptime 0:00:31
- Create a symlink for kubectl, then check cluster health on 21 and 22:

[root@hdss7-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@hdss7-21 bin]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
Deploying the kubelet compute nodes
- On machine 200, sign the certificate. Go to /opt/certs and create the kubelet-csr.json file:

{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23",
        "10.4.7.24",
        "10.4.7.25",
        "10.4.7.26",
        "10.4.7.27",
        "10.4.7.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
- Generate the certificate:

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json |cfssl-json -bare kubelet
2022/01/26 15:49:04 [INFO] generate received request
2022/01/26 15:49:04 [INFO] received CSR
2022/01/26 15:49:04 [INFO] generating key: rsa-2048
2022/01/26 15:49:04 [INFO] encoded CSR
2022/01/26 15:49:04 [INFO] signed certificate with serial number 56506595734708795427625005266808091338962053100
[root@hdss7-200 certs]# ll
total 84
-rw-r--r-- 1 root root 1204 Dec 16 21:29 apiserver.csr
-rw-r--r-- 1 root root  524 Dec 16 21:28 apiserver-csr.json
-rw------- 1 root root 1675 Dec 16 21:29 apiserver-key.pem
-rw-r--r-- 1 root root 1549 Dec 16 21:29 apiserver.pem
-rw-r--r-- 1 root root  836 Dec 14 22:52 ca-config.json
-rw-r--r-- 1 root root 1041 Dec 14 17:10 ca.csr
-rw-r--r-- 1 root root  328 Dec 14 17:10 ca-csr.json
-rw------- 1 root root 1675 Dec 14 17:10 ca-key.pem
-rw-r--r-- 1 root root 1298 Dec 14 17:10 ca.pem
-rw-r--r-- 1 root root  993 Dec 16 21:22 client.csr
-rw-r--r-- 1 root root  280 Dec 16 21:22 client-csr.json
-rw------- 1 root root 1679 Dec 16 21:22 client-key.pem
-rw-r--r-- 1 root root 1363 Dec 16 21:22 client.pem
-rw-r--r-- 1 root root 1062 Dec 14 23:00 etcd-peer.csr
-rw-r--r-- 1 root root  363 Dec 14 22:55 etcd-peer-csr.json
-rw------- 1 root root 1675 Dec 14 23:00 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Dec 14 23:00 etcd-peer.pem
-rw-r--r-- 1 root root 1115 Jan 26 15:49 kubelet.csr
-rw-r--r-- 1 root root  492 Jan 26 15:47 kubelet-csr.json
-rw------- 1 root root 1679 Jan 26 15:49 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jan 26 15:49 kubelet.pem
- Distribute the certificate to machines 21 and 22:

[root@hdss7-22 bin]# cd /opt/kubernetes/server/bin/cert/
[root@hdss7-22 cert]# scp hdss7-200:/opt/certs/kubelet.pem .
[root@hdss7-22 cert]# scp hdss7-200:/opt/certs/kubelet-key.pem .
- Use kubectl to generate the cluster config file:

[root@hdss7-21 conf]# cd /opt/kubernetes/server/bin/conf
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
    --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
    --embed-certs=true \
    --server=https://10.4.7.10:7443 \
    --kubeconfig=kubelet.kubeconfig
[root@hdss7-21 conf]# kubectl config set-credentials k8s-node \
    --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
    --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
    --embed-certs=true \
    --kubeconfig=kubelet.kubeconfig
[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
    --cluster=myk8s \
    --user=k8s-node \
    --kubeconfig=kubelet.kubeconfig
- Switch to the context generated above:

[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
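To double-check what was written, kubectl can print the file back (embedded certificate data is shown redacted) — an optional verification:

[root@hdss7-21 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig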
- Authorization setup: in the conf directory, create a k8s-node.yaml file as follows:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
[root@hdss7-21 conf]# kubectl create -f k8s-node.yaml
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
- Then on machine 22, copy over the kubelet.kubeconfig generated on machine 21:

[root@hdss7-22 conf]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
[root@hdss7-22 conf]# ls
audit.yaml kubelet.kubeconfig
- Next, prepare the pause base image. On machine 200, run the following commands:

[root@hdss7-200 ~]# docker pull kubernetes/pause
[root@hdss7-200 ~]# docker images |grep pause
[root@hdss7-200 ~]# docker tag f9d5de079539 harbor.od.com/public/pause:latest
[root@hdss7-200 ~]# docker push harbor.od.com/public/pause:latest
- On machines 21/22, create the kubelet startup script and supervisor configuration file; adjust the IP-related settings to the local machine:

/opt/kubernetes/server/bin/kubelet.sh

#!/bin/bash
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override hdss7-21.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet
/etc/supervisord.d/kube-kubelet.ini

[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh         ; the program (relative uses PATH, can take args)
numprocs=1                                            ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                  ; directory to cwd to before exec (def no cwd)
autostart=true                                        ; start at supervisord start (default: true)
autorestart=true                                      ; restart at unexpected quit (default: true)
startsecs=30                                          ; number of secs prog must stay running (def. 1)
startretries=3                                        ; max # of serial start failures (default 3)
exitcodes=0,2                                         ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                       ; signal used to kill process (default TERM)
stopwaitsecs=10                                       ; max num secs to wait b4 SIGKILL (default 10)
user=root                                             ; setuid to this UNIX account to run the program
redirect_stderr=true                                  ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log  ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                              ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                           ; emit events on stdout writes (default false)
- Create the directories, fix file permissions, and start kubelet:

[root@hdss7-21 kubernetes]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
[root@hdss7-21 kubernetes]# chmod +x /opt/kubernetes/server/bin/kubelet.sh
[root@hdss7-21 kubernetes]# supervisorctl update
kube-kubelet-7-21: added process group
[root@hdss7-21 kubernetes]# supervisorctl status
etcd-server-7-21 RUNNING pid 1040, uptime 24 days, 23:10:34
kube-apiserver-7-21 RUNNING pid 1377, uptime 24 days, 23:09:43
kube-controller-manager-7-21 RUNNING pid 106847, uptime 1:54:18
kube-kubelet-7-21 RUNNING pid 108509, uptime 0:00:31
kube-scheduler-7-21 RUNNING pid 99912, uptime 5:26:19
- Give machines 21/22 the master and node labels:

[root@hdss7-21 bin]# kubectl get node
NAME                STATUS   ROLES    AGE     VERSION
hdss7-21.host.com   Ready    <none>   4m11s   v1.15.12
hdss7-22.host.com   Ready    <none>   4m3s    v1.15.12
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
node/hdss7-21.host.com labeled
[root@hdss7-21 bin]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
node/hdss7-21.host.com labeled
[root@hdss7-22 supervisord.d]# kubectl get nodes
NAME                STATUS   ROLES         AGE     VERSION
hdss7-21.host.com   Ready    master,node   5m12s   v1.15.12
hdss7-22.host.com   Ready    <none>        5m4s    v1.15.12
[root@hdss7-22 supervisord.d]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/master=
node/hdss7-22.host.com labeled
[root@hdss7-22 supervisord.d]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/node=
node/hdss7-22.host.com labeled
[root@hdss7-22 supervisord.d]# kubectl get node
NAME                STATUS   ROLES         AGE     VERSION
hdss7-21.host.com   Ready    master,node   6m41s   v1.15.12
hdss7-22.host.com   Ready    master,node   6m33s   v1.15.12
Deploying kube-proxy on the nodes
- On machine 200, sign the certificate. Edit /opt/certs/kube-proxy-csr.json:

{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
2022/02/07 21:09:43 [INFO] generate received request
2022/02/07 21:09:43 [INFO] received CSR
2022/02/07 21:09:43 [INFO] generating key: rsa-2048
2022/02/07 21:09:44 [INFO] encoded CSR
2022/02/07 21:09:44 [INFO] signed certificate with serial number 118897100823525204507602500752997026162852927684
2022/02/07 21:09:44 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@hdss7-200 certs]# ll
total 100
-rw-r--r-- 1 root root 1204 Dec 16 21:29 apiserver.csr
-rw-r--r-- 1 root root  524 Dec 16 21:28 apiserver-csr.json
-rw------- 1 root root 1675 Dec 16 21:29 apiserver-key.pem
-rw-r--r-- 1 root root 1549 Dec 16 21:29 apiserver.pem
-rw-r--r-- 1 root root  836 Dec 14 22:52 ca-config.json
-rw-r--r-- 1 root root 1041 Dec 14 17:10 ca.csr
-rw-r--r-- 1 root root  328 Dec 14 17:10 ca-csr.json
-rw------- 1 root root 1675 Dec 14 17:10 ca-key.pem
-rw-r--r-- 1 root root 1298 Dec 14 17:10 ca.pem
-rw-r--r-- 1 root root  993 Dec 16 21:22 client.csr
-rw-r--r-- 1 root root  280 Dec 16 21:22 client-csr.json
-rw------- 1 root root 1679 Dec 16 21:22 client-key.pem
-rw-r--r-- 1 root root 1363 Dec 16 21:22 client.pem
-rw-r--r-- 1 root root 1062 Dec 14 23:00 etcd-peer.csr
-rw-r--r-- 1 root root  363 Dec 14 22:55 etcd-peer-csr.json
-rw------- 1 root root 1675 Dec 14 23:00 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Dec 14 23:00 etcd-peer.pem
-rw-r--r-- 1 root root 1115 Jan 26 15:49 kubelet.csr
-rw-r--r-- 1 root root  492 Jan 26 15:47 kubelet-csr.json
-rw------- 1 root root 1679 Jan 26 15:49 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jan 26 15:49 kubelet.pem
-rw-r--r-- 1 root root 1005 Feb  7 21:09 kube-proxy-client.csr
-rw------- 1 root root 1679 Feb  7 21:09 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Feb  7 21:09 kube-proxy-client.pem
-rw-r--r-- 1 root root  267 Feb  7 21:07 kube-proxy-csr.json
- Distribute the certificates:

# machine 21
[root@hdss7-21 cert]# pwd
/opt/kubernetes/server/bin/cert
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kube-proxy-client.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kube-proxy-client-key.pem .
# machine 22
[root@hdss7-22 supervisord.d]# cd /opt/kubernetes/server/bin/cert/
[root@hdss7-22 cert]# scp hdss7-200:/opt/certs/kube-proxy-client.pem .
[root@hdss7-22 cert]# scp hdss7-200.host.com:/opt/certs/kube-proxy-client-key.pem .
- Generate the kube-proxy config file:

# machine 21
[root@hdss7-21 cert]# cd ../conf/
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
    --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
    --embed-certs=true \
    --server=https://10.4.7.10:7443 \
    --kubeconfig=kube-proxy.kubeconfig
[root@hdss7-21 conf]# kubectl config set-credentials kube-proxy \
    --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
    --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
    --cluster=myk8s \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
# machine 22
[root@hdss7-22 cert]# cd ../conf/
[root@hdss7-22 conf]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
- Load the ipvs kernel modules on machines 21/22. Create an ipvs.sh file in the home directory:

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*");do
    /sbin/modinfo -F filename $i &>/dev/null
    if [ $? -eq 0 ];then
        /sbin/modprobe $i
    fi
done
[root@hdss7-21 ~]# chmod +x ipvs.sh
[root@hdss7-21 ~]# ./ipvs.sh
[root@hdss7-21 ~]# lsmod |grep ip_vs
ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
ip_vs_sh 12688 0
ip_vs_sed 12519 0
ip_vs_rr 12600 0
ip_vs_pe_sip 12740 0
nf_conntrack_sip 33860 1 ip_vs_pe_sip
ip_vs_nq 12516 0
ip_vs_lc 12516 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_ftp 13079 0
ip_vs_dh 12688 0
ip_vs 141432 24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat 26787 3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack 133053 8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
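Modules loaded with modprobe do not survive a reboot. One way to reload them at boot on CentOS 7 — an assumption about the environment, adjust to your own init setup:

[root@hdss7-21 ~]# chmod +x /etc/rc.d/rc.local
[root@hdss7-21 ~]# echo '/root/ipvs.sh' >> /etc/rc.d/rc.local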
- Edit the kube-proxy startup script and supervisor daemon configuration; adjust the contents to the local machine:

/opt/kubernetes/server/bin/kube-proxy.sh

#!/bin/bash
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override hdss7-21.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
/etc/supervisord.d/kube-proxy.ini

[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
- Finish the setup and start the process:

[root@hdss7-21 supervisord.d]# chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
[root@hdss7-21 supervisord.d]# mkdir -p /data/logs/kubernetes/kube-proxy/
[root@hdss7-21 supervisord.d]# supervisorctl update
# Install ipvsadm, used to set up, maintain, and inspect the virtual server table in the Linux kernel
[root@hdss7-22 supervisord.d]# yum install ipvsadm -y
[root@hdss7-22 supervisord.d]# ipvsadm -Ln
[root@hdss7-22 supervisord.d]# kubectl get svc
- Cluster verification: on either machine 21 or 22, create a pod configuration file nginx-ds.yaml:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.21
        ports:
        - containerPort: 80
[root@hdss7-21 ~]# kubectl create -f nginx-ds.yaml
[root@hdss7-22 supervisord.d]# kubectl get pod -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx-ds-bw2k9   1/1     Running   0          49m   172.7.21.2   hdss7-21.host.com   <none>           <none>
nginx-ds-x2j7p   1/1     Running   0          49m   172.7.22.2   hdss7-22.host.com   <none>           <none>
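As a final smoke test, curl a pod IP from the node it runs on; cross-node pod traffic would need a CNI plugin such as flannel, which this section does not cover (hypothetical check):

[root@hdss7-21 ~]# curl -I 172.7.21.2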
At this point, the K8S cluster has been fully built.