1. Base Environment
$ uname -a
Linux jp-master01 6.1.0-28-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.119-1 (2024-11-22) x86_64 GNU/Linux
$ uname -a
Linux tw-master02 6.1.0-28-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.119-1 (2024-11-22) x86_64 GNU/Linux
$ uname -a
Linux hk-master03 6.1.0-28-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.119-1 (2024-11-22) x86_64 GNU/Linux
$ uname -a
Linux singapore-worker 6.1.0-28-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.119-1 (2024-11-22) x86_64 GNU/Linux

The whole environment runs on GCP's three-month free trial!
2. Architecture Topology
3. Setup Process
3.1 Netmaker Setup (based on Docker Compose)
1. Point the domains api.jp.t4x.org, dashboard.jp.t4x.org, and broker.jp.t4x.org at the server IP # e.g. the IP used in this experiment: 35.213.98.66
2. Open TCP 80, 443, 51821 and UDP 51821 # ports that may also need to be reachable include TCP 53, 8085, 1883, 8883, 8083, 18083 and UDP 53 (see the gcloud sketch below)
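The GCP side of this isn't shown in the post; since the hosts run on GCP, a minimal sketch of opening the Netmaker ports with the gcloud CLI follows. The rule names netmaker-web and netmaker-wireguard are hypothetical, and the default VPC network is assumed:

gcloud compute firewall-rules create netmaker-web \
    --direction=INGRESS --network=default \
    --allow=tcp:80,tcp:443 --source-ranges=0.0.0.0/0
gcloud compute firewall-rules create netmaker-wireguard \
    --direction=INGRESS --network=default \
    --allow=tcp:51821,udp:51821 --source-ranges=0.0.0.0/0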
3.2 Netmaker Connectivity Check
$ wg
interface: netmaker
  public key: OY41ScnlfuQvEfD9kA1h0niE5dzaCobiDrs0kJl6jAE=
  private key: (hidden)
  listening port: 51821

peer: YLtM904xd2g4kfdTc+Y1R+5fPurlrC3zLX3k7zIweC8=
  endpoint: 35.215.182.242:51821
  allowed ips: 100.64.0.2/32
  latest handshake: 17 seconds ago
  transfer: 3.93 KiB received, 1.61 KiB sent
  persistent keepalive: every 20 seconds

peer: G4caUqhVSNCJAvdR+JPBQC3KOUi6K4P58IfvkEAYyg8=
  endpoint: 35.206.246.217:51821
  allowed ips: 100.64.0.1/32
  latest handshake: 19 seconds ago
  transfer: 3.90 KiB received, 1.90 KiB sent
  persistent keepalive: every 20 seconds

peer: 7zdhkmamEJMB3nbDs2cUxtdeeWMU7GUGoWhBniD85CY=
  endpoint: 34.2.17.129:51821
  allowed ips: 100.64.0.3/32
  latest handshake: 35 seconds ago
  transfer: 1.23 KiB received, 4.13 KiB sent
  persistent keepalive: every 20 seconds

$ ping 100.64.0.1
PING 100.64.0.1 (100.64.0.1) 56(84) bytes of data.
64 bytes from 100.64.0.1: icmp_seq=1 ttl=64 time=34.2 ms
$ ping 100.64.0.2
PING 100.64.0.2 (100.64.0.2) 56(84) bytes of data.
64 bytes from 100.64.0.2: icmp_seq=1 ttl=64 time=50.1 ms
$ ping 100.64.0.3
PING 100.64.0.3 (100.64.0.3) 56(84) bytes of data.
64 bytes from 100.64.0.3: icmp_seq=1 ttl=64 time=75.2 ms
$ ping 100.64.0.4
PING 100.64.0.4 (100.64.0.4) 56(84) bytes of data.
64 bytes from 100.64.0.4: icmp_seq=1 ttl=64 time=0.042 ms
3.3 Firewall Configuration
apt install firewalld -y
firewall-cmd --permanent --new-zone=personal
systemctl restart firewalld
firewall-cmd --zone=personal --add-masquerade
firewall-cmd --zone=personal --add-forward
firewall-cmd --zone=personal --add-rich-rule='rule family="ipv4" source address="YOUR_IP_ADDRESS" accept'
firewall-cmd --set-default-zone=personal
for i in `route -n | egrep -v "Kernel|Destination|\*" | awk '{print $NF}' | sort | uniq -c | awk '{print $2}'`; do firewall-cmd --zone=personal --add-interface=${i}; done
firewall-cmd --zone=personal --add-rich-rule='rule family="ipv4" source address="100.64.0.0/16" accept'
firewall-cmd --zone=personal --add-rich-rule='rule family="ipv4" source address="10.42.0.0/16" accept'
firewall-cmd --zone=personal --add-rich-rule='rule family="ipv4" source address="10.43.0.0/16" accept'
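Note that everything after the --permanent --new-zone line above is runtime-only configuration and would be lost on a firewalld reload or reboot. If the rules should persist, the standard firewalld follow-up is:

firewall-cmd --runtime-to-permanent   # copy the current runtime rules into the permanent config
firewall-cmd --reload                 # confirm the permanent config loads cleanly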
3.4 k3s Installation
$ wget https://github.com/k3s-io/k3s/releases/download/v1.31.2%2Bk3s1/k3s   #--no-check-certificate
$ chmod +x k3s && mv k3s /usr/local/bin/
$ wget https://dl.k8s.io/v1.31.2/kubernetes-client-linux-amd64.tar.gz   # after extracting, place the binaries in /usr/local/bin
$ tar zxf kubernetes-client-linux-amd64.tar.gz
$ chmod +x kubernetes/client/bin/*
$ mv kubernetes/client/bin/* /usr/local/bin/
$ wget https://github.com/k3s-io/helm-controller/releases/download/v0.16.5/helm-controller-amd64
$ chmod +x helm-controller-amd64
$ mv helm-controller-amd64 /usr/local/bin/helm   # note: this is the k3s helm-controller binary, not the Helm CLI
$ wget https://github.com/etcd-io/etcd/releases/download/v3.5.13/etcd-v3.5.13-linux-amd64.tar.gz
$ tar -zxvf etcd-v3.5.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.13-linux-amd64/etcd{,ctl}
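A quick sanity check that the binaries landed on PATH and report the expected versions (output will vary):

$ k3s --version
$ kubectl version --client
$ etcdctl version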
3.4.0 Master
Section 3.4.0 was added later from a local test; its IPs may not match the ones used earlier, adjust as needed.
3.4.1 Master01
cat > /etc/systemd/system/k3s.service << BYRD
# k3s service config start
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
Environment="CATTLE_NEW_SIGNED_CERT_EXPIRATION_DAYS=36500"
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server \
    --cluster-init \
    --bind-address '0.0.0.0' \
    --write-kubeconfig-mode '0600' \
    --write-kubeconfig '/root/.kube/config' \
    --data-dir '/data/k3s' \
    --cluster-cidr 10.42.0.0/16 \
    --service-cidr 10.43.0.0/16 \
    --service-node-port-range '30000-42767' \
    --tls-san 'k3s.t4x.org' \
    --node-ip '100.64.0.4' \
    --advertise-address '100.64.0.4' \
    --node-external-ip '100.64.0.4' \
    --node-label 'nodetype=server' \
    --secrets-encryption \
    --disable-network-policy \
    --flannel-iface 'netmaker' \
    --flannel-backend 'none' \
    --disable 'servicelb' \
    --disable 'traefik' \
    --log '/var/log/k3s.log' \
    --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
    --kube-proxy-arg "metrics-bind-address=0.0.0.0" \
    --datastore-endpoint 'https://100.64.0.4:2379,https://100.64.0.1:2379,https://100.64.0.2:2379' \
    --datastore-cafile '/data/k3s/server/tls/etcd/server-ca.crt' \
    --datastore-certfile '/data/k3s/server/tls/etcd/server-client.crt' \
    --datastore-keyfile '/data/k3s/server/tls/etcd/server-client.key'
# k3s service config end
BYRD
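After writing the unit file the service still has to be loaded and started; the same standard systemd steps apply on each master:

systemctl daemon-reload
systemctl enable --now k3s
tail -f /var/log/k3s.log   # the unit logs here via --log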
3.4.2 Master02
cat > /etc/systemd/system/k3s.service << BYRD
# k3s service config start
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server \
    --server https://100.64.0.4:6443 \
    --bind-address '0.0.0.0' \
    --write-kubeconfig-mode '0600' \
    --write-kubeconfig '/root/.kube/config' \
    --data-dir '/data/k3s' \
    --cluster-cidr 10.42.0.0/16 \
    --service-cidr 10.43.0.0/16 \
    --service-node-port-range '30000-42767' \
    --tls-san 'k3s.t4x.org' \
    --node-ip '100.64.0.1' \
    --advertise-address '100.64.0.1' \
    --node-external-ip '100.64.0.1' \
    --node-label 'nodetype=server' \
    --secrets-encryption \
    --disable-network-policy \
    --flannel-iface 'netmaker' \
    --flannel-backend 'none' \
    --disable 'servicelb' \
    --disable 'traefik' \
    --log '/var/log/k3s.log' \
    --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
    --kube-proxy-arg "metrics-bind-address=0.0.0.0" \
    --token 'K104884f06ec7d4525be21542fd9ceb8644b865fd28dcf761bf5c9ec222cd384997::server:c70783c06dfb97351e2c07b988bf09c4'
# k3s service config end
BYRD
3.4.3 Master03
cat > /etc/systemd/system/k3s.service << BYRD
# k3s service config start
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server \
    --server https://100.64.0.4:6443 \
    --bind-address '0.0.0.0' \
    --write-kubeconfig-mode '0600' \
    --write-kubeconfig '/root/.kube/config' \
    --data-dir '/data/k3s' \
    --cluster-cidr 10.42.0.0/16 \
    --service-cidr 10.43.0.0/16 \
    --service-node-port-range '30000-42767' \
    --tls-san 'k3s.t4x.org' \
    --node-ip '100.64.0.2' \
    --advertise-address '100.64.0.2' \
    --node-external-ip '100.64.0.2' \
    --node-label 'nodetype=server' \
    --secrets-encryption \
    --disable-network-policy \
    --flannel-iface 'netmaker' \
    --flannel-backend 'none' \
    --disable 'servicelb' \
    --disable 'traefik' \
    --log '/var/log/k3s.log' \
    --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
    --kube-proxy-arg "metrics-bind-address=0.0.0.0" \
    --token 'K104884f06ec7d4525be21542fd9ceb8644b865fd28dcf761bf5c9ec222cd384997::server:c70783c06dfb97351e2c07b988bf09c4'
BYRD
3.4.4 Worker Node
$ cat /etc/systemd/system/k3s-agent.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=exec
EnvironmentFile=-/etc/systemd/system/k3s-agent.service.env
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s agent \
    --node-label 'nodetype=worker' \
    --node-external-ip '100.64.0.3' \
    --node-ip '100.64.0.3' \
    --data-dir '/data/k3s' \
    --token 'c70783c06dfb97351e2c07b988bf09c4' \
    --log '/var/log/k3s_agent.log' \
    --server 'https://100.64.0.4:6443' \
    --flannel-iface 'netmaker' \
    --flannel-backend 'none' \
    --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
    --kube-proxy-arg "metrics-bind-address=0.0.0.0"
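And the same systemd steps for the agent:

systemctl daemon-reload
systemctl enable --now k3s-agent
tail -f /var/log/k3s_agent.log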
3.5 Cluster Status
$ kubectl get node
NAME               STATUS     ROLES                       AGE     VERSION
hk-master03        NotReady   control-plane,etcd,master   9m45s   v1.31.2+k3s1
jp-master01        NotReady   control-plane,etcd,master   13h     v1.31.2+k3s1
singapore-worker   NotReady   <none>                      56s     v1.31.2+k3s1
tw-master02        NotReady   control-plane,etcd,master   78m     v1.31.2+k3s1

$ etcdctl --endpoints="100.64.0.4:2379,100.64.0.1:2379,100.64.0.2:2379" --cacert=/data/k3s/server/tls/etcd/server-ca.crt --cert=/data/k3s/server/tls/etcd/client.crt --key=/data/k3s/server/tls/etcd/client.key endpoint status --write-out=table
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 100.64.0.4:2379 | 12f94331ab325023 | 3.5.13  | 2.6 MB  | true      | false      |         3 |     148671 |             148671 |        |
| 100.64.0.1:2379 | 48b04a37c8bb4824 | 3.5.13  | 2.2 MB  | false     | false      |         3 |     148671 |             148671 |        |
| 100.64.0.2:2379 | b6dfc381547c6    | 3.5.13  | 2.2 MB  | false     | false      |         3 |     148671 |             148671 |        |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

$ etcdctl --endpoints="100.64.0.4:2379,100.64.0.1:2379,100.64.0.2:2379" --cacert=/data/k3s/server/tls/etcd/server-ca.crt --cert=/data/k3s/server/tls/etcd/client.crt --key=/data/k3s/server/tls/etcd/client.key member list
b6dfc381547c6, started, hk-master03-7c98aa06, https://100.64.0.2:2380, https://100.64.0.2:2379, false
12f94331ab325023, started, jp-master01-e0e287f1, https://100.64.0.4:2380, https://100.64.0.4:2379, false
48b04a37c8bb4824, started, tw-master02-ceeeb2b2, https://100.64.0.1:2380, https://100.64.0.1:2379, false

$ export ETCDCTL_API=3
$ export ETCDCTL_CERT=/data/k3s/server/tls/etcd/client.crt
$ export ETCDCTL_KEY=/data/k3s/server/tls/etcd/client.key
$ export ETCDCTL_CACERT=/data/k3s/server/tls/etcd/server-ca.crt
root@jp-master01:/data/k3s/server/tls/etcd# etcdctl member list
b6dfc381547c6, started, hk-master03-7c98aa06, https://100.64.0.2:2380, https://100.64.0.2:2379, false
12f94331ab325023, started, jp-master01-e0e287f1, https://100.64.0.4:2380, https://100.64.0.4:2379, false
48b04a37c8bb4824, started, tw-master02-ceeeb2b2, https://100.64.0.1:2380, https://100.64.0.1:2379, false

$ etcdctl endpoint status --cluster
https://100.64.0.2:2379, b6dfc381547c6, 3.5.13, 2.3 MB, false, false, 3, 150454, 150454,
https://100.64.0.4:2379, 12f94331ab325023, 3.5.13, 2.6 MB, true, false, 3, 150454, 150454,
https://100.64.0.1:2379, 48b04a37c8bb4824, 3.5.13, 2.4 MB, false, false, 3, 150454, 150454,

$ etcdctl --endpoints https://100.64.0.1:2379 put my-key "my-value"
$ etcdctl get my-key
my-key
my-value
$ etcdctl --endpoints https://100.64.0.2:2379 get my-key
my-key
my-value

$ kubectl get node,pod --all-namespaces   # status before Calico is installed
NAME                    STATUS     ROLES                       AGE     VERSION
node/vm-0-2-debian-vg   NotReady   control-plane,etcd,master   15m     v1.31.2+k3s1
node/vm-4-3-debian-sg   NotReady   control-plane,etcd,master   5m37s   v1.31.2+k3s1
node/vm-4-6-debian-kr   NotReady   control-plane,etcd,master   7m58s   v1.31.2+k3s1

NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-56f6fc8fd7-7r4vb                  0/1     Pending   0          15m
kube-system   pod/local-path-provisioner-5cf85fd84d-qt44r   0/1     Pending   0          15m
kube-system   pod/metrics-server-5985cbc9d7-2fc7b           0/1     Pending   0          15m
3.6 Calico Configuration
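The post doesn't include the Calico install commands themselves. A minimal sketch following the Tigera k3s quickstart (reference 5) — the operator manifests for Calico v3.29.0 are an assumption, and the IPPool CIDR in custom-resources.yaml must be edited to match the --cluster-cidr 10.42.0.0/16 used above:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/custom-resources.yaml
# edit custom-resources.yaml so the IPPool cidr is 10.42.0.0/16, then:
kubectl create -f custom-resources.yaml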
3.7 k3s Status
$ kubectl get node,svc,pod --all-namespaces -owide
NAME                    STATUS   ROLES                       AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION         CONTAINER-RUNTIME
node/hk-master03        Ready    control-plane,etcd,master   12h   v1.31.2+k3s1   100.64.0.2    100.64.0.2    Debian GNU/Linux 12 (bookworm)   6.1.0-28-cloud-amd64   containerd://1.7.22-k3s1
node/jp-master01        Ready    control-plane,etcd,master   26h   v1.31.2+k3s1   100.64.0.4    100.64.0.4    Debian GNU/Linux 12 (bookworm)   6.1.0-28-cloud-amd64   containerd://1.7.22-k3s1
node/singapore-worker   Ready    <none>                      12h   v1.31.2+k3s1   100.64.0.3    100.64.0.3    Debian GNU/Linux 12 (bookworm)   6.1.0-28-cloud-amd64   containerd://1.7.22-k3s1
node/tw-master02        Ready    control-plane,etcd,master   13h   v1.31.2+k3s1   100.64.0.1    100.64.0.1    Debian GNU/Linux 12 (bookworm)   6.1.0-28-cloud-amd64   containerd://1.7.22-k3s1

NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  26h   <none>
kube-system   service/kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   26h   k8s-app=kube-dns
kube-system   service/metrics-server   ClusterIP   10.43.146.92   <none>        443/TCP                  26h   k8s-app=metrics-server

NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE     IP              NODE               NOMINATED NODE   READINESS GATES
kube-system   pod/calico-kube-controllers-5d7d9cdfd8-dwkzv   1/1     Running   0          26m     10.42.144.129   hk-master03        <none>           <none>
kube-system   pod/calico-node-2tvpc                          1/1     Running   0          26m     100.64.0.1      tw-master02        <none>           <none>
kube-system   pod/calico-node-42wcv                          1/1     Running   0          26m     100.64.0.2      hk-master03        <none>           <none>
kube-system   pod/calico-node-8wsd7                          1/1     Running   0          5m53s   100.64.0.3      singapore-worker   <none>           <none>
kube-system   pod/calico-node-smvd6                          1/1     Running   0          26m     100.64.0.4      jp-master01        <none>           <none>
kube-system   pod/coredns-56f6fc8fd7-grnr4                   1/1     Running   0          19s     10.42.144.131   hk-master03        <none>           <none>
kube-system   pod/local-path-provisioner-5cf85fd84d-wv24x    1/1     Running   0          64m     10.42.210.66    tw-master02        <none>           <none>
kube-system   pod/metrics-server-5985cbc9d7-jdc2w            1/1     Running   0          57s     10.42.210.67    tw-master02        <none>           <none>
3.8 Cluster Verification
$ curl https://10.43.0.1 -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
$ curl https://10.43.146.92 -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
$ telnet 10.43.0.10 53
Trying 10.43.0.10...
Connected to 10.43.0.10.
Escape character is '^]'.
$ telnet 10.43.0.10 9153
Trying 10.43.0.10...
Connected to 10.43.0.10.
Escape character is '^]'.
$ apt install ncat -y
$ nc -uzv 10.43.0.10 53
Ncat: Version 7.93 ( https://nmap.org/ncat )
Ncat: Connected to 10.43.0.10:53.
Ncat: UDP packet sent successfully
Ncat: 1 bytes sent, 0 bytes received in 2.06 seconds.
$ ping 10.42.144.129
PING 10.42.144.129 (10.42.144.129) 56(84) bytes of data.
64 bytes from 10.42.144.129: icmp_seq=1 ttl=63 time=49.3 ms
$ ping 10.42.210.66
PING 10.42.210.66 (10.42.210.66) 56(84) bytes of data.
64 bytes from 10.42.210.66: icmp_seq=1 ttl=63 time=33.4 ms
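The TCP/UDP checks above only prove the kube-dns port is reachable; an actual query through CoreDNS confirms DNS works end to end. A sketch (the busybox image tag and pod name are arbitrary choices):

$ kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default.svc.cluster.local 10.43.0.10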
4. Service Configuration
4.0 MySQL
4.1 MySQL Master
$ echo -n 'admin123' | base64
YWRtaW4xMjM=
$ cat mysql-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: YWRtaW4xMjM=
$ kubectl create -f mysql-secret.yaml
$ cat mysql-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-pxc-storage
  labels:
    app: mysql
spec:
  capacity:                               # capacity settings
    storage: 1Gi
  volumeMode: Filesystem                  # volume mode: Filesystem, or Block for block devices
  accessModes:                            # with ReadWriteOnce this PV can only be mounted by one node at a time
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  storageClassName: tt-test               # the PV's storage class
  hostPath:                               # PV type; this example mounts a local directory
    path: /var/lib/mysql                  # path on the host
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pxc-pvc
spec:
  storageClassName: tt-test
  accessModes:
    - ReadWriteOnce          # must match the PV
  resources:
    requests:
      storage: 1Gi           # must be less than or equal to the PV size
$ chown -R 1001:root /var/lib/mysql
$ kubectl create -f mysql-storage.yaml
$ cat mysql-master.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        kubernetes.io/hostname: jp-master01
      volumes:
        - name: pxc-storange
          persistentVolumeClaim:
            claimName: pxc-pvc
      #securityContext:
      #  fsGroup: 1001        # set the filesystem group
      #  runAsUser: 1001
      #  runAsGroup: 1001
      containers:
        - name: mysql
          #image: percona/percona-xtradb-cluster:latest
          image: percona/percona-xtradb-cluster:5.7.44
          #image: bitnami/openresty:1.27.1-1
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_PASSWORD
            - name: CLUSTER_NAME
              value: "PXC"
          ports:
            - containerPort: 3306
            - containerPort: 4444
            - containerPort: 4567
            - containerPort: 4568
          volumeMounts:
            - name: pxc-storange
              mountPath: /var/lib/mysql
      tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          tolerationSeconds: 10
          operator: Exists
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 10
        - key: nodetype
          operator: Equal
          value: "master"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - mysql
              namespaces:
                - default
              topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None   # headless Service
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: pxc-cluster
spec:
  type: ClusterIP
  selector:
    app: mysql
  ports:
    - name: pxc-0
      protocol: TCP
      port: 4567
      targetPort: 4567
    - name: pxc-1
      protocol: TCP
      port: 4568
      targetPort: 4568
    - name: pxc-2
      protocol: TCP
      port: 4444
      targetPort: 4444
4.2 MySQL Slave
$ kubectl create -f mysql-master.yaml
$ cat mysql-slave.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: slave
spec:
  serviceName: mysql
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
        - name: pxc-storange
          persistentVolumeClaim:
            claimName: pxc-pvc
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - tw-master02
                      - hk-master03
      containers:
        - name: mysql
          image: percona/percona-xtradb-cluster:5.7.44
          #image: bitnami/openresty:1.27.1-1
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_PASSWORD
            - name: CLUSTER_NAME
              value: "PXC"
            - name: CLUSTER_JOIN
              value: "mysql-0.mysql.default.svc.cluster.local"
          ports:
            - containerPort: 3306
            - containerPort: 4444
            - containerPort: 4567
            - containerPort: 4568
          volumeMounts:
            - name: pxc-storange
              mountPath: /var/lib/mysql
      tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          tolerationSeconds: 10
          operator: Exists
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 10
        - key: nodetype
          operator: Equal
          value: "master"
$ kubectl create -f mysql-slave.yaml
4.3 MySQL Verification
$ kubectl exec -ti mysql-0 -- mysql -uroot -p'admin123' -e "show global status like 'wsrep_cluster_size';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
$ kubectl exec -ti mysql-0 -- mysql -uroot -p'admin123' -e "show status like 'wsrep_incoming_addresses';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name            | Value                                                                                                                                 |
+--------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| wsrep_incoming_addresses | mysql-0.mysql.default.svc.cluster.local:3306,slave-1.mysql.default.svc.cluster.local:3306,slave-0.mysql.default.svc.cluster.local:3306 |
+--------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
$ kubectl exec -ti mysql-0 -- bash
bash-4.4$ mysql -uroot -p
Enter password:
mysql> create database ttt;
$ kubectl exec -ti slave-0 -- bash
bash-4.4$ mysql -uroot -p
Enter password:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| ttt                |
+--------------------+
4.4 Monitoring Deployment
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus/manifests/
$ kubectl create -f setup/
$ kubectl create -f .
$ kubectl apply --server-side -f setup/
$ kubectl wait \
    --for condition=Established \
    --all CustomResourceDefinition \
    --namespace=monitoring
$ kubectl apply -f .
$ kubectl edit svc grafana -n monitoring
$ kubectl edit svc prometheus-k8s -n monitoring
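The two kubectl edit calls switch the Grafana and Prometheus Services to NodePort so they are reachable from outside (as the service list in section 5 shows). The same change can be made non-interactively:

kubectl -n monitoring patch svc grafana -p '{"spec":{"type":"NodePort"}}'
kubectl -n monitoring patch svc prometheus-k8s -p '{"spec":{"type":"NodePort"}}'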
4.5 Website Configuration
Note: here a nodeSelector pins the pod to one node to simulate the web tier's shared directory, and nginx stands in for a load balancer; no Ingress is configured.
WordPress was installed by visiting http://jp.t4x.org:8080/. The site may fail to write wp-config.php due to a permissions problem, which can be resolved by adjusting the permissions on the web directory.
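A hedged fix for the wp-config.php write error, assuming the bitnami/php-fpm container runs as UID 1001 (check first with kubectl exec into the php container and running id):

# on singapore-worker, where /web/wordpress is the hostPath volume
chown -R 1001:root /web/wordpress
chmod -R u+rwX /web/wordpress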
$ cat nginx.yaml
apiVersion: apps/v1             # API version
kind: Deployment                # resource type: Deployment / StatefulSet / Service
metadata:
  name: blog                    # Deployment name
  namespace: default            # namespace
spec:                           # Deployment spec
  replicas: 1                   # number of Pod replicas
  selector:                     # selector that determines which Pods this Deployment manages
    matchLabels:
      app: blog                 # label selector; Pods must carry these labels to be selected
      environment: prod
      version: v1.0
      role: internal
  template:                     # Pod template
    metadata:
      labels:
        app: blog               # Pod labels; must match the selector above
        environment: prod
        role: internal
        version: v1.0
    spec:
      nodeSelector:
        kubernetes.io/hostname: singapore-worker
      containers:
        - name: nginx                           # container name
          image: bitnami/openresty:1.27.1-1     # container image
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80                 # container port
            - containerPort: 443                # container port
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:                         # volume mounts
            - mountPath: /opt/bitnami/openresty/nginx/conf/nginx.conf   # path inside the container
              name: web-nginx-config            # must match a volume name below
            - mountPath: /app                   # path inside the container
              name: web-files                   # must match a volume name below
        - name: php                             # container name
          image: bitnami/php-fpm:7.4.33         # container image
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000               # container port
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 200m
              memory: 200Mi
          volumeMounts:                         # volume mounts
            - mountPath: /app                   # path inside the container
              name: web-files                   # must match a volume name below
      volumes:                                  # volume list
        - name: web-nginx-config
          hostPath:                             # hostPath volume
            path: /etc/nginx/blog/conf/nginx.conf   # path on the host
            type: File                          # the file must already exist on the host
            #type: DirectoryOrCreate            # create the path if it does not exist
        - name: web-files                       # volume name
          hostPath:                             # hostPath volume
            path: /web/wordpress                # path on the host
            type: DirectoryOrCreate             # create the path if it does not exist
---
apiVersion: v1                  # Kubernetes API version
kind: Service                   # we are creating a Service
metadata:                       # Service metadata
  name: blog-internal           # Service name
  namespace: default            # namespace
spec:                           # Service spec
  type: NodePort                # Service type: NodePort
  selector:                     # label selector: which Pods receive traffic
    app: blog                   # must match the Pod labels above
    environment: prod
    role: internal
    version: v1.0
  ports:                        # port definitions
    - name: http
      protocol: TCP             # protocol
      port: 8080                # port exposed by the Service
      targetPort: 80            # port on the Pod
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
    - name: php
      protocol: TCP
      port: 9000
      targetPort: 9000
$ kubectl create -f /root/k3s/nginx/nginx.yaml
nginx configuration:
$ cat /etc/nginx/blog/conf/nginx.conf   # nginx config inside the container
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        location / {
            root /app;
            index index.php;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
        location ~ \.php {
            root /app;
            try_files $uri =404;
            fastcgi_pass blog-internal:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi.conf;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
$ cat /usr/local/openresty/nginx/conf/extra/pro.conf   # simulated LB config
server {
    listen 8080;
    server_name jp.t4x.org;
    location / {
        # note: a proxied backend URL would need a trailing slash to rewrite the path
        client_max_body_size 500m;
        #proxy_pass http://100.64.0.4:42012/;
        proxy_pass http://100.64.0.4:42027;   # 42027 is the port mapped by the NodePort Service
        proxy_set_header Host $host:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
WordPress database configuration:
$ kubectl exec -ti mysql-0 -- bash
mysql> CREATE USER 'wordpress'@'%' IDENTIFIED BY 'admin';
mysql> CREATE DATABASE wordpress CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%' WITH GRANT OPTION;
mysql> ALTER USER 'wordpress'@'%' IDENTIFIED WITH mysql_native_password BY 'admin';

/** Database username */
define( 'DB_USER', 'wordpress' );

/** Database password */
define( 'DB_PASSWORD', 'admin' );

/** Database hostname */
define( 'DB_HOST', 'mysql' );

/** Database charset to use in creating database tables. */
define( 'DB_CHARSET', 'utf8mb4' );

/** The database collate type. Don't change this if in doubt. */
define( 'DB_COLLATE', '' );
5. Server Resources
root@jp-master01:~/k3s/nginx# free -m
               total        used        free      shared  buff/cache   available
Mem:           16002        3143        6261           3        6930       12858
Swap:              0           0           0
root@tw-master02:~# free -m
               total        used        free      shared  buff/cache   available
Mem:           16002        2260        9829           2        4243       13742
Swap:              0           0           0
root@hk-master03:~# free -m
               total        used        free      shared  buff/cache   available
Mem:           16002        2393        9589           1        4350       13608
Swap:              0
root@singapore-worker:/web/wordpress# free -m
               total        used        free      shared  buff/cache   available
Mem:            3924         972        2212          26        1035        2952
Swap:              0           0           0
root@jp-master01:~/k3s/nginx# kubectl top node
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
hk-master03        88m          2%     2781Mi          17%
jp-master01        213m         5%     4183Mi          26%
singapore-worker   77m          3%     1103Mi          28%
tw-master02        87m          2%     2737Mi          17%
root@jp-master01:~/k3s/nginx# kubectl get nodes,pods,services,deployments,replicasets,statefulsets,daemonsets,jobs,cronjobs,configmaps,secrets,ingress,pv,pvc --all-namespaces
NAME                    STATUS   ROLES                       AGE     VERSION
node/hk-master03        Ready    control-plane,etcd,master   4d13h   v1.31.2+k3s1
node/jp-master01        Ready    control-plane,etcd,master   5d3h    v1.31.2+k3s1
node/singapore-worker   Ready    <none>                      4d13h   v1.31.2+k3s1
node/tw-master02        Ready    control-plane,etcd,master   4d14h   v1.31.2+k3s1

NAMESPACE     NAME                                           READY   STATUS    RESTARTS      AGE
default       pod/blog-648664bcdd-2ttv9                      2/2     Running   0             37m
default       pod/mysql-0                                    1/1     Running   0             21h
default       pod/slave-0                                    1/1     Running   2 (21h ago)   21h
default       pod/slave-1                                    1/1     Running   0             21h
kube-system   pod/calico-kube-controllers-5d7d9cdfd8-dwkzv   1/1     Running   0             4d1h
kube-system   pod/calico-node-2tvpc                          1/1     Running   0             4d1h
kube-system   pod/calico-node-42wcv                          1/1     Running   0             4d1h
kube-system   pod/calico-node-8wsd7                          1/1     Running   0             4d1h
kube-system   pod/calico-node-smvd6                          1/1     Running   0             4d1h
kube-system   pod/coredns-56f6fc8fd7-grnr4                   1/1     Running   0             4d
kube-system   pod/local-path-provisioner-5cf85fd84d-wv24x    1/1     Running   0             4d1h
kube-system   pod/metrics-server-5985cbc9d7-jdc2w            1/1     Running   0             4d
monitoring    pod/alertmanager-main-0                        2/2     Running   0             20h
monitoring    pod/alertmanager-main-1                        2/2     Running   0             20h
monitoring    pod/alertmanager-main-2                        2/2     Running   0             20h
monitoring    pod/blackbox-exporter-d7779b7d4-m4lvs          3/3     Running   0             20h
monitoring    pod/grafana-778555f685-x2nxc                   1/1     Running   0             20h
monitoring    pod/kube-state-metrics-74f55cf6d9-xktd5        3/3     Running   0             20h
monitoring    pod/node-exporter-59lqd                        2/2     Running   0             20h
monitoring    pod/node-exporter-f52ht                        2/2     Running   0             20h
monitoring    pod/node-exporter-ht822                        2/2     Running   0             20h
monitoring    pod/node-exporter-j2t5m                        2/2     Running   0             20h
monitoring    pod/prometheus-adapter-784f566c54-54mj2        1/1     Running   0             20h
monitoring    pod/prometheus-adapter-784f566c54-dmftk        1/1     Running   0             20h
monitoring    pod/prometheus-k8s-0                           2/2     Running   0             20h
monitoring    pod/prometheus-operator-6c55f986bc-gx7h2       2/2     Running   0             20h

NAMESPACE     NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                       AGE
default       service/blog-internal           NodePort    10.43.4.237     <none>        8080:42027/TCP,443:30248/TCP,9000:34153/TCP   37m
default       service/kubernetes              ClusterIP   10.43.0.1       <none>        443/TCP                                       5d3h
default       service/mysql                   ClusterIP   None            <none>        3306/TCP                                      21h
default       service/pxc-cluster             ClusterIP   10.43.162.9     <none>        4567/TCP,4568/TCP,4444/TCP                    21h
kube-system   service/kube-dns                ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP                        5d3h
kube-system   service/kubelet                 ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP                  20h
kube-system   service/metrics-server          ClusterIP   10.43.146.92    <none>        443/TCP                                       5d3h
monitoring    service/alertmanager-main       ClusterIP   10.43.146.202   <none>        9093/TCP,8080/TCP                             20h
monitoring    service/alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP                    20h
monitoring    service/blackbox-exporter       ClusterIP   10.43.219.155   <none>        9115/TCP,19115/TCP                            20h
monitoring    service/grafana                 NodePort    10.43.31.46     <none>        3000:35790/TCP                                20h
monitoring    service/kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP                             20h
monitoring    service/node-exporter           ClusterIP   None            <none>        9100/TCP                                      20h
monitoring    service/prometheus-adapter      ClusterIP   10.43.80.176    <none>        443/TCP                                       20h
monitoring    service/prometheus-k8s          NodePort    10.43.251.45    <none>        9090:42012/TCP,8080:37098/TCP                 20h
monitoring    service/prometheus-operated     ClusterIP   None            <none>        9090/TCP                                      20h
monitoring    service/prometheus-operator     ClusterIP   None            <none>        8443/TCP                                      20h

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/blog                      1/1     1            1           37m
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           4d1h
kube-system   deployment.apps/coredns                   1/1     1            1           5d3h
kube-system   deployment.apps/local-path-provisioner    1/1     1            1           5d3h
kube-system   deployment.apps/metrics-server            1/1     1            1           5d3h
monitoring    deployment.apps/blackbox-exporter         1/1     1            1           20h
monitoring    deployment.apps/grafana                   1/1     1            1           20h
monitoring    deployment.apps/kube-state-metrics        1/1     1            1           20h
monitoring    deployment.apps/prometheus-adapter        2/2     2            2           20h
monitoring    deployment.apps/prometheus-operator       1/1     1            1           20h

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
default       replicaset.apps/blog-648664bcdd                      1         1         1       37m
kube-system   replicaset.apps/calico-kube-controllers-5d7d9cdfd8   1         1         1       4d1h
kube-system   replicaset.apps/coredns-56f6fc8fd7                   1         1         1       5d3h
kube-system   replicaset.apps/local-path-provisioner-5cf85fd84d    1         1         1       5d3h
kube-system   replicaset.apps/metrics-server-5985cbc9d7            1         1         1       5d3h
monitoring    replicaset.apps/blackbox-exporter-d7779b7d4          1         1         1       20h
monitoring    replicaset.apps/grafana-778555f685                   1         1         1       20h
monitoring    replicaset.apps/kube-state-metrics-74f55cf6d9        1         1         1       20h
monitoring    replicaset.apps/prometheus-adapter-784f566c54        2         2         2       20h
monitoring    replicaset.apps/prometheus-operator-6c55f986bc       1         1         1       20h

NAMESPACE    NAME                                 READY   AGE
default      statefulset.apps/mysql               1/1     21h
default      statefulset.apps/slave               2/2     21h
monitoring   statefulset.apps/alertmanager-main   3/3     20h
monitoring   statefulset.apps/prometheus-k8s      1/1     20h

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node     4         4         4       4            4           kubernetes.io/os=linux   4d1h
monitoring    daemonset.apps/node-exporter   4         4         4       4            4           kubernetes.io/os=linux   20h

NAMESPACE         NAME                                                             DATA   AGE
default           configmap/kube-root-ca.crt                                       1      5d3h
kube-node-lease   configmap/kube-root-ca.crt                                       1      5d3h
kube-public       configmap/kube-root-ca.crt                                       1      5d3h
kube-system       configmap/calico-config                                          4      4d1h
kube-system       configmap/cluster-dns                                            2      5d3h
kube-system       configmap/coredns                                                2      5d3h
kube-system       configmap/extension-apiserver-authentication                     6      5d3h
kube-system       configmap/kube-apiserver-legacy-service-account-token-tracking   1      5d3h
kube-system       configmap/kube-root-ca.crt                                       1      5d3h
kube-system       configmap/local-path-config                                      4      5d3h
monitoring        configmap/adapter-config                                         1      20h
monitoring        configmap/blackbox-exporter-configuration                        1      20h
monitoring        configmap/grafana-dashboard-alertmanager-overview                1      20h
monitoring        configmap/grafana-dashboard-apiserver                            1      20h
monitoring        configmap/grafana-dashboard-cluster-total                        1      20h
monitoring        configmap/grafana-dashboard-controller-manager                   1      20h
monitoring        configmap/grafana-dashboard-grafana-overview                     1      20h
monitoring        configmap/grafana-dashboard-k8s-resources-cluster                1      20h
monitoring        configmap/grafana-dashboard-k8s-resources-multicluster           1      20h
monitoring        configmap/grafana-dashboard-k8s-resources-namespace              1      20h
monitoring        configmap/grafana-dashboard-k8s-resources-node                   1      20h
monitoring        configmap/grafana-dashboard-k8s-resources-pod                    1      20h
monitoring        configmap/grafana-dashboard-k8s-resources-workload               1      20h
monitoring        configmap/grafana-dashboard-k8s-resources-workloads-namespace    1      20h
monitoring        configmap/grafana-dashboard-kubelet                              1      20h
monitoring        configmap/grafana-dashboard-namespace-by-pod                     1      20h
monitoring        configmap/grafana-dashboard-namespace-by-workload                1      20h
monitoring        configmap/grafana-dashboard-node-cluster-rsrc-use                1      20h
monitoring        configmap/grafana-dashboard-node-rsrc-use                        1      20h
monitoring        configmap/grafana-dashboard-nodes                                1      20h
monitoring        configmap/grafana-dashboard-nodes-aix                            1      20h
monitoring        configmap/grafana-dashboard-nodes-darwin                         1      20h
monitoring        configmap/grafana-dashboard-persistentvolumesusage               1      20h
monitoring        configmap/grafana-dashboard-pod-total                            1      20h
monitoring        configmap/grafana-dashboard-prometheus                           1      20h
monitoring        configmap/grafana-dashboard-prometheus-remote-write              1      20h
monitoring        configmap/grafana-dashboard-proxy                                1      20h
monitoring        configmap/grafana-dashboard-scheduler                            1      20h
monitoring        configmap/grafana-dashboard-workload-total                       1      20h
monitoring        configmap/grafana-dashboards                                     1      20h
monitoring        configmap/kube-root-ca.crt                                       1      20h
monitoring        configmap/prometheus-k8s-rulefiles-0                             8      20h

NAMESPACE     NAME                                                 TYPE                DATA   AGE
default       secret/mysql-secret                                  Opaque              1      4d
kube-system   secret/hk-master03.node-password.k3s                 Opaque              1      4d13h
kube-system   secret/jp-master01.node-password.k3s                 Opaque              1      5d3h
kube-system   secret/k3s-serving                                   kubernetes.io/tls   2      5d3h
kube-system   secret/singapore-worker.node-password.k3s            Opaque              1      4d13h
kube-system   secret/tw-master02.node-password.k3s                 Opaque              1      4d14h
monitoring    secret/alertmanager-main                             Opaque              1      20h
monitoring    secret/alertmanager-main-generated                   Opaque              1      20h
monitoring    secret/alertmanager-main-tls-assets-0                Opaque              0      20h
monitoring    secret/alertmanager-main-web-config                  Opaque              1      20h
monitoring    secret/grafana-config                                Opaque              1      20h
monitoring    secret/grafana-datasources                           Opaque              1      20h
monitoring    secret/prometheus-k8s                                Opaque              1      20h
monitoring    secret/prometheus-k8s-thanos-prometheus-http-client-file   Opaque        1      20h
monitoring    secret/prometheus-k8s-tls-assets-0                   Opaque              0      20h
monitoring    secret/prometheus-k8s-web-config                     Opaque              1      20h

NAMESPACE   NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
default     persistentvolume/pv-pxc-storage   1Gi        RWO            Retain           Bound    default/pxc-pvc   tt-test        <unset>                          11m

NAMESPACE   NAME                            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
default     persistentvolumeclaim/pxc-pvc   Bound    pv-pxc-storage   1Gi        RWO            tt-test        <unset>                 11m
6. Certificate Issues
Certificate script:
$ cat 1.sh
#!/usr/bin/env bash
# Example K3s CA certificate generation script.
#
# This script will generate files sufficient to bootstrap K3s cluster certificate
# authorities. By default, the script will create the required files under
# /var/lib/rancher/k3s/server/tls, where they will be found and used by K3s during initial
# cluster startup. Note that these files MUST be present before K3s is started the first
# time; certificate data SHOULD NOT be changed once the cluster has been initialized.
#
# The output path may be overridden with the DATA_DIR environment variable.
#
# This script will also auto-generate certificates and keys for both root and intermediate
# certificate authorities if none are found.
# If you have existing certs, you must place them in `DATA_DIR/server/tls`.
# If you have only an existing root CA, provide:
#   root-ca.pem
#   root-ca.key
# If you have an existing root and intermediate CA, provide:
#   root-ca.pem
#   intermediate-ca.pem
#   intermediate-ca.key

set -e
umask 027

TIMESTAMP=$(date +%s)
PRODUCT="${PRODUCT:-k3s}"
DATA_DIR="${DATA_DIR:-/data/${PRODUCT}}"

if type -t openssl-3 &>/dev/null; then
  OPENSSL=openssl-3
else
  OPENSSL=openssl
fi

echo "Using $(type -p ${OPENSSL}): $(${OPENSSL} version)"

if ! ${OPENSSL} ecparam -name prime256v1 -genkey -noout -out /dev/null &>/dev/null; then
  echo "openssl not found or missing Elliptic Curve (ecparam) support."
  exit 1
fi

${OPENSSL} version | grep -qF 'OpenSSL 3' && OPENSSL_GENRSA_FLAGS=-traditional

mkdir -p "${DATA_DIR}/server/tls/etcd"
cd "${DATA_DIR}/server/tls"

# Set up temporary openssl configuration
mkdir -p ".ca/certs"
trap "rm -rf .ca" EXIT
touch .ca/index
openssl rand -hex 8 > .ca/serial
cat >.ca/config <<'EOF'
[ca]
default_ca = ca_default
[ca_default]
dir = ./.ca
database = $dir/index
serial = $dir/serial
new_certs_dir = $dir/certs
default_md = sha256
policy = policy_anything
[policy_anything]
commonName = supplied
[req]
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_ca]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, keyEncipherment, keyCertSign
EOF

# Don't overwrite the service account issuer key; we pass the key into both the controller-manager
# and the apiserver instead of passing a cert list into the apiserver, so there's no facility for
# rotation and things will get very angry if all the SA keys are invalidated.
if [[ -e service.key ]]; then
  echo "Generating additional Kubernetes service account issuer RSA key"
  OLD_SERVICE_KEY="$(cat service.key)"
else
  echo "Generating Kubernetes service account issuer RSA key"
fi
${OPENSSL} genrsa ${OPENSSL_GENRSA_FLAGS:-} -out service.key 2048
echo "${OLD_SERVICE_KEY}" >> service.key

# Use existing root CA if present
if [[ -e root-ca.pem ]]; then
  echo "Using existing root certificate"
else
  echo "Generating root certificate authority RSA key and certificate"
  ${OPENSSL} genrsa ${OPENSSL_GENRSA_FLAGS:-} -out root-ca.key 4096
  ${OPENSSL} req -x509 -new -nodes -sha256 -days 36500 \
      -subj "/CN=${PRODUCT}-root-ca@${TIMESTAMP}" \
      -key root-ca.key \
      -out root-ca.pem \
      -config .ca/config \
      -extensions v3_ca
fi
cat root-ca.pem > root-ca.crt

# Use existing intermediate CA if present
if [[ -e intermediate-ca.pem ]]; then
  echo "Using existing intermediate certificate"
else
  if [[ ! -e root-ca.key ]]; then
    echo "Cannot generate intermediate certificate without root certificate private key"
    exit 1
  fi
  echo "Generating intermediate certificate authority RSA key and certificate"
  ${OPENSSL} genrsa ${OPENSSL_GENRSA_FLAGS:-} -out intermediate-ca.key 4096
  ${OPENSSL} req -new -nodes \
      -subj "/CN=${PRODUCT}-intermediate-ca@${TIMESTAMP}" \
      -key intermediate-ca.key |
  ${OPENSSL} ca -batch -notext -days 36500 \
      -in /dev/stdin \
      -out intermediate-ca.pem \
      -keyfile root-ca.key \
      -cert root-ca.pem \
      -config .ca/config \
      -extensions v3_ca
fi
cat intermediate-ca.pem root-ca.pem > intermediate-ca.crt

if [[ ! -e intermediate-ca.key ]]; then
  echo "Cannot generate leaf certificates without intermediate certificate private key"
  exit 1
fi

# Generate new leaf CAs for all the control-plane and etcd components
for TYPE in client server request-header etcd/peer etcd/server; do
  CERT_NAME="${PRODUCT}-$(echo ${TYPE} | tr / -)-ca"
  echo "Generating ${CERT_NAME} leaf certificate authority EC key and certificate"
  ${OPENSSL} ecparam -name prime256v1 -genkey -noout -out ${TYPE}-ca.key
  ${OPENSSL} req -new -nodes \
      -subj "/CN=${CERT_NAME}@${TIMESTAMP}" \
      -key ${TYPE}-ca.key |
  ${OPENSSL} ca -batch -notext -days 36500 \
      -in /dev/stdin \
      -out ${TYPE}-ca.pem \
      -keyfile intermediate-ca.key \
      -cert intermediate-ca.pem \
      -config .ca/config \
      -extensions v3_ca
  cat ${TYPE}-ca.pem \
      intermediate-ca.pem \
      root-ca.pem > ${TYPE}-ca.crt
done

echo
echo "CA certificate generation complete. Required files are now present in: ${DATA_DIR}/server/tls"
echo "For security purposes, you should make a secure copy of the following files and remove them from cluster members:"
ls ${DATA_DIR}/server/tls/root-ca.* ${DATA_DIR}/server/tls/intermediate-ca.* | xargs -n1 echo -e "\t"
if [ "${DATA_DIR}" != "/var/lib/rancher/${PRODUCT}" ]; then
  echo
  echo "To update certificates on an existing cluster, you may now run:"
  echo "    k3s certificate rotate-ca --path=${DATA_DIR}/server"
fi
Validity:
$ for i in `find /data/k3s/server/tls/ -name "*.crt"`; do echo $i; openssl x509 -enddate -noout -in $i; done
/data/k3s/server/tls/etcd/peer-ca.crt
notAfter=Dec 24 14:11:12 2124 GMT
/data/k3s/server/tls/etcd/server-ca.crt
notAfter=Dec 24 14:11:12 2124 GMT
/data/k3s/server/tls/etcd/client.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/etcd/peer-server-client.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/etcd/server-client.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/root-ca.crt
notAfter=Dec 24 14:11:09 2124 GMT
/data/k3s/server/tls/intermediate-ca.crt
notAfter=Dec 24 14:11:11 2124 GMT
/data/k3s/server/tls/client-ca.crt
notAfter=Dec 24 14:11:12 2124 GMT
/data/k3s/server/tls/server-ca.crt
notAfter=Dec 24 14:11:12 2124 GMT
/data/k3s/server/tls/request-header-ca.crt
notAfter=Dec 24 14:11:12 2124 GMT
/data/k3s/server/tls/client-ca.nochain.crt
notAfter=Dec 24 14:11:12 2124 GMT
/data/k3s/server/tls/client-admin.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-supervisor.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-controller.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-scheduler.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-kube-apiserver.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-kube-proxy.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-k3s-controller.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-k3s-cloud-controller.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/server-ca.nochain.crt
notAfter=Dec 24 14:11:12 2124 GMT
/data/k3s/server/tls/serving-kube-apiserver.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/client-auth-proxy.crt
notAfter=Dec 24 14:19:43 2124 GMT
/data/k3s/server/tls/temporary-certs/apiserver-loopback-client__.crt
notAfter=Dec 24 13:19:44 2124 GMT
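Beyond the expiry dates, the chain itself can be checked with openssl verify, e.g.:

cd /data/k3s/server/tls
openssl verify -CAfile root-ca.crt intermediate-ca.pem                               # expects "intermediate-ca.pem: OK"
openssl verify -CAfile root-ca.crt -untrusted intermediate-ca.pem server-ca.pem      # leaf CA against the full chain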
Certificates:
MasterNodes='k3s-master02 k3s-master03'
for NODE in $MasterNodes; do
    ssh $NODE "mkdir -p /data/k3s/server/tls"   # the target directory must exist before scp
    for FILE in *; do
        scp -rp /data/k3s/server/tls/${FILE} $NODE:/data/k3s/server/tls/${FILE}
    done
done
7. local-path
$ kubectl get storageclass -n kube-system   # the default reclaim policy is Delete; change it to Retain to keep the data
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  25h
$ kubectl get sc -A
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Retain          WaitForFirstConsumer   false                  58s
$ kubectl delete -f mysql-storage.yaml
$ cat mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pxc-data
  namespace: db
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
$ kubectl create -f mysql-pvc.yaml
$ kubectl get pv,pvc -A   # Pending; kubectl describe pvc pxc-data -ndb reports "waiting for first consumer to be created before binding"
NAMESPACE   NAME                             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
db          persistentvolumeclaim/pxc-data   Pending
$ cat mysql-master1.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: db
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        kubernetes.io/hostname: k3s-master01
      volumes:
        - name: pxc-storange
          persistentVolumeClaim:
            claimName: pxc-data
      containers:
        - name: mysql
          image: percona/percona-xtradb-cluster:5.7.44
          #image: bitnami/openresty:1.27.1-1
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_ROOT_PASSWORD
            - name: CLUSTER_NAME
              value: "PXC"
          ports:
            - containerPort: 3306
            - containerPort: 4444
            - containerPort: 4567
            - containerPort: 4568
          volumeMounts:
            - name: pxc-storange
              mountPath: /var/lib/mysql
      tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          tolerationSeconds: 10
          operator: Exists
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 10
        - key: nodetype
          operator: Equal
          value: "master"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - mysql
              namespaces:
                - db
              topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: db
spec:
  type: ClusterIP
  #clusterIP: None   # headless Service
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: pxc-cluster
  namespace: db
spec:
  type: ClusterIP
  selector:
    app: mysql
  ports:
    - name: pxc-0
      protocol: TCP
      port: 4567
      targetPort: 4567
    - name: pxc-1
      protocol: TCP
      port: 4568
      targetPort: 4568
    - name: pxc-2
      protocol: TCP
      port: 4444
      targetPort: 4444
$ kubectl get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-f522f581-b575-4568-a9ff-1ba85d9c4fa2   1Gi        RWO            Retain           Bound    db/pxc-data   local-path     <unset>                          93s

NAMESPACE   NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
db          persistentvolumeclaim/pxc-data   Bound    pvc-f522f581-b575-4568-a9ff-1ba85d9c4fa2   1Gi        RWO            local-path     <unset>                 3m49s
$ kubectl get pod -ndb
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          3m19s
$ kubectl exec -ti mysql-0 -ndb -- mysql -uroot -p'admin123'
mysql>
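The output at the top of this section shows the reclaim policy switching from Delete to Retain but not the command behind it. reclaimPolicy is immutable on an existing StorageClass, so one way (a sketch, mirroring the stock k3s local-path class) is to delete and recreate it:

kubectl delete storageclass local-path
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF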
8. Network
$ kubectl get nodes,pods,services,deployments,replicasets,statefulsets,daemonsets,jobs,cronjobs,configmaps,secrets,ingress,networkpolicy,storageclass,pv,pvc --all-namespaces   # based on --flannel-backend 'vxlan'
NAME                STATUS   ROLES                       AGE   VERSION
node/k3s-master01   Ready    control-plane,etcd,master   32h   v1.31.2+k3s1
node/k3s-master02   Ready    control-plane,etcd,master   32h   v1.31.2+k3s1
node/k3s-master03   Ready    control-plane,etcd,master   32h   v1.31.2+k3s1
node/k3s-node01     Ready    worker                      32h   v1.31.2+k3s1
node/k3s-node02     Ready    worker                      32h   v1.31.2+k3s1

NAMESPACE     NAME                                          READY   STATUS    RESTARTS        AGE
db            pod/mysql-0                                   1/1     Running   1 (9m55s ago)   172m
kube-system   pod/coredns-56f6fc8fd7-9pqh9                  1/1     Running   4 (9m55s ago)   32h
kube-system   pod/local-path-provisioner-5cf85fd84d-2dfg4   1/1     Running   1 (9m13s ago)   3h57m
kube-system   pod/metrics-server-5985cbc9d7-wwhb2           1/1     Running   4 (9m55s ago)   32h
monitoring    pod/alertmanager-main-0                       2/2     Running   0               4m27s
monitoring    pod/alertmanager-main-1                       2/2     Running   0               4m26s
monitoring    pod/alertmanager-main-2                       2/2     Running   0               4m26s
monitoring    pod/blackbox-exporter-d7779b7d4-rjdk2         3/3     Running   0               7m32s
monitoring    pod/grafana-778555f685-gmkrz                  1/1     Running   0               7m3s
monitoring    pod/kube-state-metrics-74f55cf6d9-mqpcg       3/3     Running   0               6m53s
monitoring    pod/node-exporter-2gx8h                       2/2     Running   0               6m36s
monitoring    pod/node-exporter-hdsnn                       2/2     Running   0               6m36s
monitoring    pod/node-exporter-htjtd                       2/2     Running   0               6m35s
monitoring    pod/node-exporter-m8tpv                       2/2     Running   0               6m36s
monitoring    pod/node-exporter-qxkhw                       2/2     Running   0               6m35s
monitoring    pod/prometheus-adapter-784f566c54-dcdf5       1/1     Running   0               5m29s
monitoring    pod/prometheus-adapter-784f566c54-z7cz2       1/1     Running   0               5m29s
monitoring    pod/prometheus-k8s-0                          2/2     Running   0               4m27s
monitoring    pod/prometheus-k8s-1                          2/2     Running   0               4m26s
monitoring    pod/prometheus-operator-6c55f986bc-cfqd8      2/2     Running   0               5m16s

NAMESPACE     NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
db            service/mysql                   ClusterIP   10.243.187.137   <none>        3306/TCP,4567/TCP,4568/TCP,4444/TCP   172m
default       service/kubernetes              ClusterIP   10.243.0.1       <none>        443/TCP                               32h
kube-system   service/kube-dns                ClusterIP   10.243.0.10      <none>        53/UDP,53/TCP,9153/TCP                32h
kube-system   service/kubelet                 ClusterIP   None             <none>        10250/TCP,10255/TCP,4194/TCP          30h
kube-system   service/metrics-server          ClusterIP   10.243.170.109   <none>        443/TCP                               32h
monitoring    service/alertmanager-main       ClusterIP   10.243.2.40      <none>        9093/TCP,8080/TCP                     7m38s
monitoring    service/alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP            4m29s
monitoring    service/blackbox-exporter       ClusterIP   10.243.110.71    <none>        9115/TCP,19115/TCP                    7m33s
monitoring    service/grafana                 NodePort    10.243.56.184    <none>        3000:37722/TCP                        7m4s
monitoring    service/kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP                     6m54s
monitoring    service/node-exporter           ClusterIP   None             <none>        9100/TCP                              6m38s
monitoring    service/prometheus-adapter      ClusterIP   10.243.188.73    <none>        443/TCP                               5m36s
monitoring    service/prometheus-k8s          NodePort    10.243.127.141   <none>        9090:35230/TCP,8080:31893/TCP         5m58s
monitoring    service/prometheus-operated     ClusterIP   None             <none>        9090/TCP                              4m28s
monitoring    service/prometheus-operator     ClusterIP   None             <none>        8443/TCP                              5m20s

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                  1/1     1            1           32h
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           3h57m
kube-system   deployment.apps/metrics-server           1/1     1            1           32h
monitoring    deployment.apps/blackbox-exporter        1/1     1            1           7m36s
monitoring    deployment.apps/grafana                  1/1     1            1           7m9s
monitoring    deployment.apps/kube-state-metrics       1/1     1            1           6m59s
monitoring    deployment.apps/prometheus-adapter       2/2     2            2           5m45s
monitoring    deployment.apps/prometheus-operator      1/1     1            1           5m28s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-56f6fc8fd7                  1         1         1       32h
kube-system   replicaset.apps/local-path-provisioner-5cf85fd84d   1         1         1       3h57m
kube-system   replicaset.apps/metrics-server-5985cbc9d7           1         1         1       32h
monitoring    replicaset.apps/blackbox-exporter-d7779b7d4         1         1         1       7m36s
monitoring    replicaset.apps/grafana-778555f685                  1         1         1       7m8s
monitoring    replicaset.apps/kube-state-metrics-74f55cf6d9       1         1         1       6m59s
monitoring    replicaset.apps/prometheus-adapter-784f566c54       2         2         2       5m44s
monitoring    replicaset.apps/prometheus-operator-6c55f986bc      1         1         1       5m27s

NAMESPACE    NAME                                 READY   AGE
db           statefulset.apps/mysql               1/1     172m
monitoring   statefulset.apps/alertmanager-main   3/3     4m30s
monitoring   statefulset.apps/prometheus-k8s      2/2     4m29s

NAMESPACE    NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
monitoring   daemonset.apps/node-exporter   5         5         5       5            5           kubernetes.io/os=linux   6m44s

NAMESPACE         NAME                                                             DATA   AGE
db                configmap/kube-root-ca.crt                                       1      28h
default           configmap/kube-root-ca.crt                                       1      32h
kube-node-lease   configmap/kube-root-ca.crt                                       1      32h
kube-public       configmap/kube-root-ca.crt                                       1      32h
kube-system       configmap/cluster-dns                                            2      32h
kube-system       configmap/coredns                                                2      32h
kube-system       configmap/extension-apiserver-authentication                     6      32h
kube-system       configmap/kube-apiserver-legacy-service-account-token-tracking   1      32h
kube-system       configmap/kube-root-ca.crt                                       1      32h
kube-system       configmap/local-path-config                                      4      3h57m
monitoring        configmap/adapter-config                                         1      5m46s
monitoring        configmap/blackbox-exporter-configuration                        1      7m36s
monitoring        configmap/grafana-dashboard-alertmanager-overview                1      7m31s
monitoring        configmap/grafana-dashboard-apiserver                            1      7m30s
monitoring        configmap/grafana-dashboard-cluster-total                        1      7m30s
monitoring        configmap/grafana-dashboard-controller-manager                   1      7m29s
monitoring        configmap/grafana-dashboard-grafana-overview                     1      7m28s
monitoring        configmap/grafana-dashboard-k8s-resources-cluster                1      7m27s
monitoring        configmap/grafana-dashboard-k8s-resources-multicluster           1      7m26s
monitoring        configmap/grafana-dashboard-k8s-resources-namespace              1      7m26s
monitoring        configmap/grafana-dashboard-k8s-resources-node                   1      7m25s
monitoring        configmap/grafana-dashboard-k8s-resources-pod                    1      7m25s
monitoring        configmap/grafana-dashboard-k8s-resources-workload               1      7m24s
monitoring        configmap/grafana-dashboard-k8s-resources-workloads-namespace    1      7m23s
monitoring        configmap/grafana-dashboard-kubelet                              1      7m23s
monitoring        configmap/grafana-dashboard-namespace-by-pod                     1      7m22s
monitoring        configmap/grafana-dashboard-namespace-by-workload                1      7m21s
monitoring        configmap/grafana-dashboard-node-cluster-rsrc-use                1      7m21s
monitoring        configmap/grafana-dashboard-node-rsrc-use                        1      7m20s
monitoring        configmap/grafana-dashboard-nodes                                1      7m17s
monitoring        configmap/grafana-dashboard-nodes-aix                            1      7m19s
monitoring        configmap/grafana-dashboard-nodes-darwin                         1      7m18s
monitoring        configmap/grafana-dashboard-persistentvolumesusage               1      7m16s
monitoring        configmap/grafana-dashboard-pod-total                            1      7m15s
monitoring        configmap/grafana-dashboard-prometheus                           1      7m14s
monitoring        configmap/grafana-dashboard-prometheus-remote-write              1      7m15s
monitoring        configmap/grafana-dashboard-proxy                                1      7m12s
monitoring        configmap/grafana-dashboard-scheduler                            1      7m11s
monitoring        configmap/grafana-dashboard-workload-total                       1      7m11s
monitoring        configmap/grafana-dashboards                                     1      7m10s
monitoring        configmap/kube-root-ca.crt                                       1      8m13s
monitoring        configmap/prometheus-k8s-rulefiles-0                             8      4m35s

NAMESPACE     NAME                                                       TYPE                DATA   AGE
db            secret/mysql-secret                                        Opaque              1      26h
kube-system   secret/k3s-master01.node-password.k3s                      Opaque              1      32h
kube-system   secret/k3s-master02.node-password.k3s                      Opaque              1      32h
kube-system   secret/k3s-master03.node-password.k3s                      Opaque              1      32h
kube-system   secret/k3s-node01.node-password.k3s                        Opaque              1      32h
kube-system   secret/k3s-node02.node-password.k3s                        Opaque              1      32h
kube-system   secret/k3s-serving                                         kubernetes.io/tls   2      32h
monitoring    secret/alertmanager-main                                   Opaque              1      7m39s
monitoring    secret/alertmanager-main-generated                         Opaque              1      4m35s
monitoring    secret/alertmanager-main-tls-assets-0                      Opaque              0      4m33s
monitoring    secret/alertmanager-main-web-config                        Opaque              1      4m32s
monitoring    secret/grafana-config                                      Opaque              1      7m31s
monitoring    secret/grafana-datasources                                 Opaque              1      7m31s
monitoring    secret/prometheus-k8s                                      Opaque              1      4m33s
monitoring    secret/prometheus-k8s-thanos-prometheus-http-client-file   Opaque              1      4m30s
monitoring    secret/prometheus-k8s-tls-assets-0                         Opaque              0      4m32s
monitoring    secret/prometheus-k8s-web-config                           Opaque              1      4m30s

NAMESPACE    NAME                                                POD-SELECTOR                                                                                                                                               AGE
monitoring   networkpolicy.networking.k8s.io/alertmanager-main   app.kubernetes.io/component=alert-router,app.kubernetes.io/instance=main,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=kube-prometheus    7m42s
monitoring   networkpolicy.networking.k8s.io/blackbox-exporter   app.kubernetes.io/component=exporter,app.kubernetes.io/name=blackbox-exporter,app.kubernetes.io/part-of=kube-prometheus                                   7m36s
monitoring   networkpolicy.networking.k8s.io/grafana             app.kubernetes.io/component=grafana,app.kubernetes.io/name=grafana,app.kubernetes.io/part-of=kube-prometheus                                              7m9s
monitoring   networkpolicy.networking.k8s.io/kube-state-metrics  app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-prometheus                                  6m59s
monitoring   networkpolicy.networking.k8s.io/node-exporter       app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=kube-prometheus                                       6m44s
monitoring   networkpolicy.networking.k8s.io/prometheus-adapter  app.kubernetes.io/component=metrics-adapter,app.kubernetes.io/name=prometheus-adapter,app.kubernetes.io/part-of=kube-prometheus                           5m43s
monitoring   networkpolicy.networking.k8s.io/prometheus-k8s      app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=kube-prometheus         6m34s
monitoring   networkpolicy.networking.k8s.io/prometheus-operator app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus                               5m26s

NAMESPACE   NAME                                               PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
            storageclass.storage.k8s.io/local-path (default)   rancher.io/local-path   Retain          WaitForFirstConsumer   false                  3h57m

NAMESPACE   NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM         STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
            persistentvolume/pvc-4a232f33-602b-490f-a14d-4fa894ed6bda   1Gi        RWO            Retain           Bound      db/pxc-data   local-path     <unset>                          177m
            persistentvolume/pvc-c9c60b30-06f5-4eb8-b0a8-73539a358f7c   1Gi        RWO            Retain           Released   db/pxc-data   local-path     <unset>                          3h22m
            persistentvolume/pvc-f522f581-b575-4568-a9ff-1ba85d9c4fa2   1Gi        RWO            Retain           Released   db/pxc-data   local-path     <unset>                          3h43m

NAMESPACE   NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
db          persistentvolumeclaim/pxc-data   Bound    pvc-4a232f33-602b-490f-a14d-4fa894ed6bda   1Gi        RWO            local-path     <unset>                 179m

$ cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "k3s-master01",
      "mtu": 0,
      "ipam": {
        "type": "calico-ipam"
      },
      "container_settings": {
        "allow_ip_forwarding": true
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "bandwidth",
      "capabilities": {
        "bandwidth": true
      }
    }
  ]
}
9. Other Issues
$ cat 1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
  storageClassName: "fast-storage"          # this storageClassName belongs to the PV
  persistentVolumeReclaimPolicy: Retain     # retain the data instead of releasing it
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "slow-storage"          # this storageClassName belongs to the PVC
$ kubectl get -f 1.yaml
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/my-pv   1Gi        RWO            Retain           Available           fast-storage   <unset>                          5m31s

NAME                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/my-pvc    Pending                                      slow-storage   <unset>                 5m30s
$ kubectl bind-pv --persistentvolume=my-pv --persistentvolumeclaim=my-pvc   # applicable before Kubernetes 1.16; since 1.16 a PV and PVC with different storageClassName values cannot be bound!!!
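On current Kubernetes there is no such bind command; the usual way to force a specific PV/PVC pairing is to give both objects the same storageClassName and pre-bind the claim via spec.volumeName. A sketch of the corrected claim for the example above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "fast-storage"   # must match the PV's class
  volumeName: my-pv                  # pre-bind directly to the PV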
References:
1:https://github.com/percona-lab/percona-docker/
2:https://hub.docker.com/r/percona/percona-xtradb-cluster
3:https://github.com/prometheus-operator/kube-prometheus/
4:https://mp.weixin.qq.com/s/R88DraaaS3bpm3PurzpP9g
5:https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart
6:https://docs.k3s.io/zh/cli/certificate
7:https://docs.rancher.cn/docs/k3s/storage/_index