Basic Environment Configuration

Item           Value
OS version     Rocky Linux 8.7
Pod CIDR       10.0.0.0/8
Service CIDR   172.16.0.0/12

Runtime Configuration

First remove any Docker packages that are already installed:

dnf remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine docker-ce containerd -y

If the server does not have a Docker repository configured, add one first:

dnf -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install containerd

dnf install containerd.io -y

Configure the kernel modules and sysctl parameters required by containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
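
Optionally, verify that the modules are loaded and the parameters have taken effect (all three values should be 1):

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward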

Create the containerd configuration file

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g'  /etc/containerd/config.toml
sed -i 's#registry.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g'  /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g'  /etc/containerd/config.toml
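
Optionally confirm the edits: SystemdCgroup should now be true and sandbox_image should point at the Aliyun registry.

grep -n 'SystemdCgroup' /etc/containerd/config.toml
grep -n 'sandbox_image' /etc/containerd/config.toml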

Start containerd

systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd

List the containerd plugins

ctr plugin ls
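
The io.containerd.grpc.v1.cri plugin should be listed with status ok; a quick filter:

ctr plugin ls | grep cri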

Kubeadm & Kubelet

Add the repositories

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

First, check on the Master01 node which Kubernetes versions are available:

yum list kubeadm.x86_64 --showduplicates | sort -r

Install the latest 1.28 versions of kubeadm, kubelet, and kubectl on all nodes:

dnf install kubeadm-1.28* kubelet-1.28* kubectl-1.28* -y
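
Optionally, confirm the installed versions before continuing:

kubeadm version
kubelet --version
kubectl version --client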

If containerd is used as the runtime, kubelet must be configured to use the containerd socket as its runtime endpoint:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_KUBEADM_ARGS="--runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

The --container-runtime=remote flag has been deprecated, so only the runtime endpoint is specified.

Enable kubelet to start on boot (the cluster has not been initialized yet, so kubelet has no configuration file and cannot start; this can be ignored for now):

systemctl daemon-reload
systemctl enable --now kubelet

At this point kubelet will not start and its log will show errors; this is expected and does not affect anything.

Cluster Initialization

Disable swap

swapoff -a
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
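
Confirm that swap is now disabled (swapon --show should print nothing and free should report 0 swap):

swapon --show
free -h | grep -i swap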

Pull the images on all nodes

kubeadm config images pull \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version 1.28.2
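
To see exactly which images this pulls, kubeadm can list them first:

kubeadm config images list \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version 1.28.2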

Initialize the cluster on the master node:

kubeadm init \
--apiserver-advertise-address 192.168.4.5 \
--pod-network-cidr=10.0.0.0/8 \
--service-cidr=172.16.0.0/12 \
--kubernetes-version 1.28.2 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--cri-socket "unix:///var/run/containerd/containerd.sock"

Configure the kubectl environment on the master01 node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
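
kubectl should now be able to reach the API server; the node will stay NotReady until a CNI plugin (Calico, installed below) is deployed:

kubectl get nodes
kubectl get pods -n kube-system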

Join all worker nodes to the cluster using the join command printed on the master node:

kubeadm join 192.168.4.101:6443 --token oirspz.795zshkqbt5lfe6e \
	--discovery-token-ca-cert-hash sha256:d6f081f10ced13df052b0708223e2f00b2f1c1530b3025909f8b420d52d820d9
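
The token printed by kubeadm init is only valid for 24 hours; if it has expired, generate a fresh join command on the master node:

kubeadm token create --print-join-command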

Addon Installation

Installation references for each addon (see the download sketch after this list):

calico: https://docs.tigera.io/calico/latest/about

  1. Install Calico
  2. Kubernetes
  3. Self-managed on-premises
  4. Install Calico networking and network policy for on-premises deployments
     https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml

metrics-server: https://github.com/kubernetes-sigs/metrics-server
https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

dashboard: https://kubernetes.io/zh-cn/docs/tasks/access-application-cluster/web-ui-dashboard/
https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
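
The manifests above can be downloaded ahead of time so they can be reviewed and edited locally (the file names are simply the defaults from the URLs):

curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml
curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
curl -LO https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml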

If the metrics-server deployment needs to scrape the kubelets over HTTPS (kubeadm's self-signed certificates), the following changes can be added to its deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        volumeMounts:
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      volumes:
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
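
These changes can be made by editing the downloaded components.yaml before applying it, or on a running cluster with kubectl edit (one possible approach):

kubectl -n kube-system edit deployment metrics-server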

Remove the taint from the control-plane node:

kubectl taint node localhost.localdomain  node-role.kubernetes.io/control-plane:NoSchedule-
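
The node name localhost.localdomain comes from this environment; substitute your own node name. To confirm the taint is gone:

kubectl describe node localhost.localdomain | grep -i taint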

Auto-completion

source <(kubectl completion bash) # set up autocompletion for the current bash shell; the bash-completion package must be installed first
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocompletion to your bash shell permanently

Apply calico.yaml:

kubectl create -f calico.yaml

Apply the metrics-server manifest, components.yaml:

kubectl create -f components.yaml

After creation, use kubectl top pod -n kube-system to view pod CPU and memory usage, and kubectl top nodes to view node CPU and memory usage.

Deploy the dashboard:

kubectl create -f recommended.yaml
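
Wait for the dashboard pods to reach Running before logging in (the default manifest uses the kubernetes-dashboard namespace):

kubectl get pods -n kubernetes-dashboard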

Create dashboard-user.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1 
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Create the resources:

kubectl create -f dashboard-user.yaml

Run kubectl create token admin-user -n kube-system to create a login token:

# kubectl create token admin-user -n kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6Ijd4VkZxVFNVRk1xTnJaZzZtZzBpdDNtYW9qUlkxbUxQNGp0SFdCTk1VUW8ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzA0OTkzNTQyLCJpYXQiOjE3MDQ5ODk5NDIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYjdiNzE3MGItNzNkYy00MzIzLWE3ZmQtOTg0MThiMjYwODQ3In19LCJuYmYiOjE3MDQ5ODk5NDIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.U8hzvaeL-HEJJh7HyNyUsfyeGrSK6wCFsEatgtwZ8ZrYlux1KERey10GvA2NQOinXfO-fQ-2EO1cPIyRTWyyoa5EkvMcYqEY4vsPiZzI1_T29nR32ozalyC-Auwls5apuHkJQPLIrofI2u6mfCAl4c9zp-UmquJ038jJV08Q3igxVAiJ9XJZjIPLxK50B-w0ndcNl2vYErbCRxAUJiT3qnr8DbXNZUmVg1vvzXfOL6xlqasG5udK6Qe-YD9ljY3ZFx5um_VGWP3ndU8x98_ym19dbuA2o7zVYvSZa-ep17LkRLVXIWbNRGvoLTd7A6KeS9lJaxWtoJNbI-0h0TgcmA

Check the dashboard Service and its exposed port with kubectl get svc -n kubernetes-dashboard:

# kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.20.8.197     <none>        8000/TCP        11m
kubernetes-dashboard        NodePort    172.25.215.253   <none>        443:32388/TCP   11m

If the Service is not of type NodePort, set it explicitly:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
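
Instead of editing the manifest, the type can also be switched on the live Service with a patch (an alternative, equivalent approach):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'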

Access the dashboard in a browser at https://<nodeIP>:32388.