1. Basic Environment
$ uname -a
Linux k3s-master01 6.1.0-28-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.119-1 (2024-11-22) x86_64 GNU/Linux
$ uname -a
Linux k3s-master02 6.1.0-28-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.119-1 (2024-11-22) x86_64 GNU/Linux
$ uname -a
Linux k3s-master03 6.8.0-47-generic #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
2. K3s Server Installation and Configuration
2.1 Prerequisites
$ wget https://github.com/k3s-io/k3s/releases/download/v1.31.2%2Bk3s1/k3s #--no-check-certificate
$ chmod +x k3s && mv k3s /usr/local/bin/
$ wget https://dl.k8s.io/v1.31.2/kubernetes-client-linux-amd64.tar.gz # extract and place the binaries in /usr/local/bin
$ tar zxf kubernetes-client-linux-amd64.tar.gz
$ chmod +x kubernetes/client/bin/*
$ mv kubernetes/client/bin/* /usr/local/bin/
$ wget https://github.com/k3s-io/helm-controller/releases/download/v0.16.5/helm-controller-amd64
$ chmod +x helm-controller-amd64
$ mv helm-controller-amd64 /usr/local/bin/helm
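A quick sanity check that the binaries landed on PATH (not from the original post; the expected versions simply follow from the downloads above):

$ command -v k3s kubectl        # both should resolve under /usr/local/bin
$ k3s --version                 # expect v1.31.2+k3s1
$ kubectl version --client      # expect v1.31.2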
2.2 k3s-master01 systemd service file
$ cat /etc/systemd/system/k3s.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server \
    --bind-address '0.0.0.0' \
    --write-kubeconfig-mode '0600' \
    --write-kubeconfig '/root/.kube/config' \
    --data-dir '/data/k3s' \
    --cluster-cidr 10.42.0.0/16 \
    --service-cidr 10.43.0.0/16 \
    --service-node-port-range '30000-42767' \
    --tls-san 'k3s.t4x.org' \
    --node-ip 'WireGuard IP' \    # the cluster runs over WireGuard, so the WireGuard address is used here
    --advertise-address 'WireGuard IP' \
    --node-external-ip 'WireGuard IP' \
    --node-label 'nodetype=server' \
    --secrets-encryption \
    #--flannel-backend 'vxlan' \
    --disable-network-policy \
    --flannel-iface 'netmaker' \    # the WireGuard network is already built with Netmaker, so its interface is used directly
    --flannel-backend 'none' \
    --disable 'servicelb' \
    --disable 'traefik' \
    --log '/var/log/k3s.log' \
    --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
    --kube-proxy-arg "metrics-bind-address=0.0.0.0" \
    --datastore-endpoint 'mysql://user:password@tcp(1.1.1.1:3306)/dbname'
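After writing the unit file, reload systemd and start the server; tailing the log file configured above is the quickest way to confirm it came up (standard systemd commands, not copied from the original post):

$ systemctl daemon-reload
$ systemctl enable --now k3s
$ systemctl status k3s --no-pager
$ tail -f /var/log/k3s.log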
2.3 k3s-masterN systemd service file (only difference: master01's token is added)
$ cat /etc/systemd/system/k3s.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server \
    --bind-address '0.0.0.0' \
    --write-kubeconfig-mode '0600' \
    --write-kubeconfig '/root/.kube/config' \
    --data-dir '/data/k3s' \
    --cluster-cidr 10.42.0.0/16 \
    --service-cidr 10.43.0.0/16 \
    --service-node-port-range '30000-42767' \
    --tls-san 'k3s.t4x.org' \
    --node-ip 'WireGuard IP' \
    --node-external-ip 'public IP' \
    --advertise-address 'WireGuard IP' \
    --node-label 'nodetype=worker' \
    --secrets-encryption \
    --flannel-iface 'netmaker' \
    --flannel-backend 'none' \
    --disable 'servicelb' \
    --disable 'traefik' \
    --log '/var/log/k3s.log' \
    --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
    --kube-proxy-arg "metrics-bind-address=0.0.0.0" \
    --token 'K***::server:***' \
    --datastore-endpoint 'mysql://user:password@tcp(1.1.1.1:3306)/dbname'
Roughly equivalent to: INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh --node-external-ip 82.157.xx.xx --advertise-address 82.157.xx.xx --node-ip 10.10.10.1 --flannel-iface wg0 --kube-proxy-arg "proxy-mode=ipvs"
Roughly equivalent to: INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh --node-external-ip 82.157.xx.xx --advertise-address 82.157.xx.xx --flannel-backend wireguard --kube-proxy-arg "proxy-mode=ipvs"
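The --token value comes from the first server. With --data-dir set to /data/k3s it should be readable on master01 at the path below (an assumption based on the data-dir above; the default location would be /var/lib/rancher/k3s/server/token):

root@k3s-master01:~# cat /data/k3s/server/token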
2.4 Status after deployment
root@k3s-master01:~# kubectl get node
NAME           STATUS     ROLES                  AGE     VERSION
k3s-master01   NotReady   control-plane,master   4m18s   v1.31.2+k3s1
root@k3s-master01:~# kubectl get node -owide
NAME           STATUS     ROLES                  AGE     VERSION        INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
k3s-master01   NotReady   control-plane,master   4m22s   v1.31.2+k3s1   192.168.31.241   192.168.31.241   Debian GNU/Linux 12 (bookworm)   6.1.0-28-amd64   containerd://1.7.22-k3s1
root@k3s-master01:~# kubectl get pods -n kube-system -owide
NAME                                      READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
coredns-56f6fc8fd7-qr9bp                  0/1     Pending   0          4m31s   <none>   <none>   <none>           <none>
local-path-provisioner-5cf85fd84d-7fc6d   0/1     Pending   0          4m31s   <none>   <none>   <none>           <none>
metrics-server-5985cbc9d7-xbktq           0/1     Pending   0          4m31s   <none>   <none>   <none>           <none>
root@k3s-master01:~# kubectl apply -f calico.yaml # NotReady is expected until the Calico network add-on is deployed
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
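The post does not show how calico.yaml was prepared. A minimal sketch, assuming the stock Calico manifest (the version in the URL is an example); the two env settings align the IP pool with --cluster-cidr and pin IP autodetection to the Netmaker/WireGuard interface, per the IP autodetection reference below:

$ curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
# In the calico-node DaemonSet env section, uncomment/set:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.42.0.0/16"
#   - name: IP_AUTODETECTION_METHOD
#     value: "interface=netmaker"
$ kubectl apply -f calico.yaml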
3. K3s Server Configuration Verification
3.1 Status of the two masters
$ kubectl get nodes,services,pods --all-namespaces -o wide
NAME                   STATUS   ROLES                  AGE   VERSION        INTERNAL-IP    EXTERNAL-IP    OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
node/k3s-master03      Ready    control-plane,master   42m   v1.31.2+k3s1   WireGuard IP   public IP      Ubuntu 24.04.1 LTS               6.8.0-47-generic   containerd://1.7.22-k3s1
node/master-internal   Ready    control-plane,master   73m   v1.31.2+k3s1   WireGuard IP   WireGuard IP   Debian GNU/Linux 12 (bookworm)   6.1.0-28-amd64     containerd://1.7.22-k3s1

NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  73m   <none>
kube-system   service/kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   73m   k8s-app=kube-dns
kube-system   service/metrics-server   ClusterIP   10.43.172.21   <none>        443/TCP                  73m   k8s-app=metrics-server

NAMESPACE     NAME                                           READY   STATUS    RESTARTS      AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   pod/calico-kube-controllers-5d7d9cdfd8-nsf4h   1/1     Running   0             52m   10.42.48.193   master-internal   <none>           <none>
kube-system   pod/calico-node-575n2                          1/1     Running   0             42m   WireGuard IP   k3s-master03      <none>           <none>
kube-system   pod/calico-node-phbzv                          1/1     Running   0             52m   WireGuard IP   master-internal   <none>           <none>
kube-system   pod/coredns-56f6fc8fd7-9vgjh                   1/1     Running   1 (61m ago)   73m   10.42.48.197   master-internal   <none>           <none>
kube-system   pod/local-path-provisioner-5cf85fd84d-5l9tm    1/1     Running   1 (61m ago)   73m   10.42.48.200   master-internal   <none>           <none>
kube-system   pod/metrics-server-5985cbc9d7-zfh7f            1/1     Running   1 (61m ago)   73m   10.42.48.198   master-internal   <none>           <none>
3.2 Routes and current server IPs
$ route -n # on master-internal
Kernel IP routing table
Destination     Gateway             Genmask           Flags Metric Ref    Use Iface
10.42.48.192    0.0.0.0             255.255.255.192   U     0      0        0 *
10.42.48.193    0.0.0.0             255.255.255.255   UH    0      0        0 califf56c4d7311
10.42.48.197    0.0.0.0             255.255.255.255   UH    0      0        0 cali1592a4fd865
10.42.48.198    0.0.0.0             255.255.255.255   UH    0      0        0 cali3839105c1fb
10.42.48.200    0.0.0.0             255.255.255.255   UH    0      0        0 calicc726754017
10.42.66.128    peer WireGuard IP   255.255.255.192   UG    0      0        0 tunl0
$ ip a # on master-internal
5: netmaker: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet x.x.x.x/16 brd x.x.255.255 scope global netmaker
       valid_lft forever preferred_lft forever
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 12:bb:a4:29:a3:92 brd ff:ff:ff:ff:ff:ff
    inet 10.43.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.43.172.21/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.43.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
7: cali1592a4fd865@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-c8f849e9-6c89-064e-59b6-cf991adb1a03
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
8: cali3839105c1fb@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-c0cc0621-9940-9940-a2bc-f2143acc8a6a
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
10: calicc726754017@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-999cbcb6-e620-a305-bff8-1e6d03d0ccaf
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
11: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.42.48.192/32 scope global tunl0
       valid_lft forever preferred_lft forever
16: califf56c4d7311@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-7707ae19-c28a-a507-8c20-202b6a7cfa64
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
$ route -n # on k3s-master03
Kernel IP routing table
Destination     Gateway             Genmask           Flags Metric Ref    Use Iface
10.42.48.192    peer WireGuard IP   255.255.255.192   UG    0      0        0 tunl0
10.42.66.128    0.0.0.0             255.255.255.192   U     0      0        0 *
$ ip a # on k3s-master03
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.42.66.128/32 scope global tunl0
       valid_lft forever preferred_lft forever
5: netmaker: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet x.x.x.x/16 brd x.x.255.255 scope global netmaker
       valid_lft forever preferred_lft forever
6: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether aa:10:27:4e:b4:79 brd ff:ff:ff:ff:ff:ff
    inet 10.43.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.43.172.21/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.43.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
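Since kube-proxy runs in IPVS mode, the Service addresses bound to kube-ipvs0 should also appear as IPVS virtual servers. A quick check, assuming ipvsadm is installed and that the /proxyMode query hits kube-proxy's metrics port (which the units above bind to 0.0.0.0):

$ curl -s 127.0.0.1:10249/proxyMode   # should print: ipvs
$ ipvsadm -Ln | head                  # virtual servers for 10.43.0.1:443, 10.43.0.10:53, ...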
3.3 Connectivity tests
$ curl 'https://10.42.48.198:10250' -k # run from k3s-master03
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
$ telnet 10.43.0.1 443
Trying 10.43.0.1...
Connected to 10.43.0.1.
Escape character is '^]'.
$ telnet 10.43.0.10 53
Trying 10.43.0.10...
Connected to 10.43.0.10.
Escape character is '^]'.
$ ping 10.42.48.197
PING 10.42.48.197 (10.42.48.197) 56(84) bytes of data.
64 bytes from 10.42.48.197: icmp_seq=1 ttl=63 time=29.5 ms
$ telnet 10.43.172.21 443 # on master-internal
Trying 10.43.172.21...
Connected to 10.43.172.21.
Escape character is '^]'.
telnet> quit
Connection closed.
$ telnet 10.43.0.10 53 # on master-internal
Trying 10.43.0.10...
Connected to 10.43.0.10.
Escape character is '^]'.
Note: the calico-node-xxxx pods use the nodes' WireGuard IPs; since the WireGuard mesh already guarantees connectivity between those addresses, there is no need to test them separately.
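A cluster-DNS check from a throwaway pod rounds out the verification (not in the original post; the busybox tag and pod name are arbitrary):

$ kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default
# the lookup should be answered by the kube-dns Service at 10.43.0.10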
4. K3s Agent Installation and Configuration
$ cat /etc/systemd/system/k3s-agent.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=exec
EnvironmentFile=-/etc/systemd/system/k3s-agent.service.env
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s agent \
    --node-label 'nodetype=worker' \
    --node-external-ip 'public IP' \
    --node-ip 'WireGuard IP' \
    --data-dir '/data/k3s' \
    --token 'XXX' \
    --log '/var/log/k3s_agent.log' \
    --server 'https://X.X.X.X:6443' \
    --disable-network-policy \
    --flannel-iface 'netmaker' \
    --flannel-backend 'none' \
    --node-name 'blog' \
    --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
    --kube-proxy-arg "metrics-bind-address=0.0.0.0"
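Bring the agent up the same way as the servers (standard systemd commands, not from the original post):

$ systemctl daemon-reload
$ systemctl enable --now k3s-agent
$ tail -f /var/log/k3s_agent.log   # watch for the node registering with the server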
5. Final K3s cluster state
$ kubectl label node hz-blog node-role.kubernetes.io/worker=worker
$ kubectl label node bj-proxy node-role.kubernetes.io/worker=worker
$ kubectl label node sh-git node-role.kubernetes.io/worker=worker
$ kubectl get node,pods,svc --all-namespaces -owide
NAME                   STATUS     ROLES                  AGE     VERSION        INTERNAL-IP   EXTERNAL-IP                   OS-IMAGE                         KERNEL-VERSION          CONTAINER-RUNTIME
node/bj-proxy          Ready      worker                 5m19s   v1.31.2+k3s1   node-ip(wg)   public IP                     CentOS Stream 9                  5.14.0-444.el9.x86_64   containerd://1.7.22-k3s1
node/hz-blog           Ready      worker                 15h     v1.31.2+k3s1   node-ip(wg)   public IP                     CentOS Stream 9                  5.14.0-407.el9.x86_64   containerd://1.7.22-k3s1
node/k3s-master03      Ready      control-plane,master   37h     v1.31.2+k3s1   node-ip(wg)   public IP                     Ubuntu 24.04.1 LTS               6.8.0-47-generic        containerd://1.7.22-k3s1
node/master-internal   Ready      control-plane,master   38h     v1.31.2+k3s1   node-ip(wg)   WireGuard IP (no public IP)   Debian GNU/Linux 12 (bookworm)   6.1.0-28-amd64          containerd://1.7.22-k3s1
node/sh-git            NotReady   worker                 3m9s    v1.31.2+k3s1   node-ip(wg)   public IP                     CentOS Stream 9                  5.14.0-407.el9.x86_64   containerd://1.7.22-k3s1

NAMESPACE     NAME                                           READY   STATUS    RESTARTS      AGE     IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   pod/calico-kube-controllers-5d7d9cdfd8-nsf4h   1/1     Running   0             37h     10.42.48.193   master-internal   <none>           <none>
kube-system   pod/calico-node-575n2                          1/1     Running   0             37h     node-ip(wg)    k3s-master03      <none>           <none>
kube-system   pod/calico-node-b87g9                          1/1     Running   0             5m18s   node-ip(wg)    bj-proxy          <none>           <none>
kube-system   pod/calico-node-dv76v                          1/1     Running   0             3m9s    node-ip(wg)    sh-git            <none>           <none>
kube-system   pod/calico-node-phbzv                          1/1     Running   0             37h     node-ip(wg)    master-internal   <none>           <none>
kube-system   pod/calico-node-wkw9g                          1/1     Running   0             15h     node-ip(wg)    hz-blog           <none>           <none>
kube-system   pod/coredns-56f6fc8fd7-9vgjh                   1/1     Running   1 (37h ago)   38h     10.42.48.197   master-internal   <none>           <none>
kube-system   pod/local-path-provisioner-5cf85fd84d-5l9tm    1/1     Running   1 (37h ago)   38h     10.42.48.200   master-internal   <none>           <none>
kube-system   pod/metrics-server-5985cbc9d7-zfh7f            1/1     Running   1 (37h ago)   38h     10.42.48.198   master-internal   <none>           <none>

NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  38h   <none>
kube-system   service/kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   38h   k8s-app=kube-dns
kube-system   service/metrics-server   ClusterIP   10.43.172.21   <none>        443/TCP                  38h   k8s-app=metrics-server

$ kubectl top nodes
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
bj-proxy          98m          4%     2192Mi          59%
hz-blog           71m          3%     2451Mi          68%
k3s-master03      139m         6%     1134Mi          68%
master-internal   429m         10%    1748Mi          44%
sh-git            82m          4%     1110Mi          66%
6. K3s worker node Pod connectivity tests
$ kubectl taint node master-internal nodetype=master:NoSchedule
$ cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4                  # run 4 Pod replicas for the cross-node connectivity test
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest    # use the latest Nginx image
        ports:
        - containerPort: 80    # port Nginx listens on
$ kubectl create -f nginx.yaml
deployment.apps/nginx-deployment created
$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES
nginx-deployment-54b9c68f67-8rbts   1/1     Running   0          103s   10.42.222.129   sh-git         <none>           <none>
nginx-deployment-54b9c68f67-d4tmq   1/1     Running   0          102s   10.42.66.147    k3s-master03   <none>           <none>
nginx-deployment-54b9c68f67-d62t2   1/1     Running   0          103s   10.42.174.205   hz-blog        <none>           <none>
nginx-deployment-54b9c68f67-ndbk6   1/1     Running   0          103s   10.42.201.1     bj-proxy       <none>           <none>
$ ping 10.42.222.129 # ping tests from master-internal
PING 10.42.222.129 (10.42.222.129) 56(84) bytes of data.
64 bytes from 10.42.222.129: icmp_seq=1 ttl=63 time=31.5 ms
^C
$ ping 10.42.66.147
PING 10.42.66.147 (10.42.66.147) 56(84) bytes of data.
64 bytes from 10.42.66.147: icmp_seq=1 ttl=63 time=29.8 ms
^C
$ ping 10.42.174.205
PING 10.42.174.205 (10.42.174.205) 56(84) bytes of data.
64 bytes from 10.42.174.205: icmp_seq=1 ttl=63 time=32.6 ms
64 bytes from 10.42.174.205: icmp_seq=2 ttl=63 time=32.6 ms
^C
$ ping 10.42.201.1
PING 10.42.201.1 (10.42.201.1) 56(84) bytes of data.
64 bytes from 10.42.201.1: icmp_seq=1 ttl=63 time=26.2 ms
^C
$ ping 10.42.222.129 # ping tests from sh-git
PING 10.42.222.129 (10.42.222.129) 56(84) bytes of data.
64 bytes from 10.42.222.129: icmp_seq=1 ttl=64 time=0.245 ms
^C
$ ping 10.42.66.147
PING 10.42.66.147 (10.42.66.147) 56(84) bytes of data.
64 bytes from 10.42.66.147: icmp_seq=1 ttl=63 time=27.0 ms
^C
$ ping 10.42.174.205
PING 10.42.174.205 (10.42.174.205) 56(84) bytes of data.
64 bytes from 10.42.174.205: icmp_seq=1 ttl=63 time=9.47 ms
^C
$ ping 10.42.201.1
PING 10.42.201.1 (10.42.201.1) 56(84) bytes of data.
64 bytes from 10.42.201.1: icmp_seq=1 ttl=63 time=34.9 ms
^C
$ kubectl get services -o wide --all-namespaces
NAMESPACE     NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  38h   <none>
kube-system   kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   38h   k8s-app=kube-dns
kube-system   metrics-server   ClusterIP   10.43.172.21   <none>        443/TCP                  38h   k8s-app=metrics-server
$ telnet 10.43.0.1 443 # telnet tests from bj-proxy to the Services
Trying 10.43.0.1...
Connected to 10.43.0.1.
Escape character is '^]'.
$ telnet 10.43.0.10 53
Trying 10.43.0.10...
Connected to 10.43.0.10.
Escape character is '^]'.
$ telnet 10.43.172.21 443
Trying 10.43.172.21...
Connected to 10.43.172.21.
Escape character is '^]'.
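To also exercise the Service path end-to-end (not just the built-in Services above), the test Deployment can be exposed and curled from any node. A minimal sketch; the Service name nginx-test is arbitrary and not part of the original setup:

$ kubectl expose deployment nginx-deployment --name=nginx-test --port=80 --target-port=80
$ kubectl get svc nginx-test                  # note the assigned ClusterIP
$ curl -sI http://<nginx-test ClusterIP>/     # expect an HTTP 200 from any node
$ kubectl delete svc nginx-test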
7. System logs
Entries like the following in /var/log/k3s.log are routine: with an external MySQL datastore, k3s periodically compacts old revisions via kine and logs the revision range each time.

time="2024-12-07T12:05:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 422907 => 423732"
time="2024-12-07T12:10:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 423732 => 424557"
time="2024-12-07T12:15:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 424557 => 425382"
time="2024-12-07T12:20:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 425382 => 426207"
time="2024-12-07T12:25:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 426207 => 427031"
time="2024-12-07T12:30:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 427031 => 427856"
time="2024-12-07T12:35:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 427856 => 428686"
time="2024-12-07T12:40:43+08:00" level=info msg="COMPACT compact revision changed since last iteration: 428686 => 429550"
9. Troubleshooting
Q1:
$ kubectl get cs
E1130 17:57:55.274901   11722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1130 17:57:55.279022   11722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1130 17:57:55.281379   11722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1130 17:57:55.283712   11722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1130 17:57:55.286483   11722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
A1:
kubectl falls back to localhost:8080 when it cannot find a kubeconfig, so point KUBECONFIG at the file k3s writes (or at whatever path was passed to --write-kubeconfig), then re-login or source the file:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
EOF
References:
1:https://docs.k3s.io/zh/
2:https://docs.tigera.io/calico/latest/getting-started/kubernetes/k3s/quickstart
3:https://github.com/containerd/nerdctl
4:https://note.t4x.org/basic/cross-wireguard-config/
5:https://note.t4x.org/basic/netmaker-manager-wireguard/
6:https://docs.tigera.io/archive/v3.18/networking/ip-autodetection
7:https://github.com/chobits/ngx_http_proxy_connect_module
8:https://blog.csdn.net/wq1205750492/article/details/124883196
9:https://developer.aliyun.com/mirror/docker-ce
10:https://baijiahao.baidu.com/s?id=1685684533566281761
11:https://github.com/k3s-io/k3s/
12:https://docs.rancher.cn/docs/k3s/architecture/_index
13:https://ranchermanager.docs.rancher.com/zh/v2.6/reference-guides/kubernetes-concepts