Deploying Kubernetes 1.28 on CentOS 7

Minimum system requirements

At least 4 GB of RAM and at least 2 CPUs per host.

IP               Memory(GB)  CPUs  Hostname
192.168.231.120  4           4     K1
192.168.231.121  4           4     K2
192.168.231.122  4           4     K3

Basic preparation (all three hosts)

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld   # keep it off across reboots

Disable swap

swapoff -a       # turn swap off immediately
vim /etc/fstab   # comment out the swap line so it stays off after reboot

Set the hostname

hostnamectl set-hostname k1   # use k2 / k3 on the other hosts

Kernel parameter settings

vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

sysctl --system   # applies /etc/sysctl.d/*.conf; plain sysctl -p would only read /etc/sysctl.conf
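Note: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded. A minimal sketch, following the upstream kubeadm setup docs, to load the modules now and on every boot (run this before applying the sysctls):

# load immediately
modprobe overlay
modprobe br_netfilter

# load automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF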

Configure time synchronization

yum install ntpdate
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
systemctl enable ntpdate
systemctl start ntpdate

systemctl status ntpdate
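ntpdate only does one-shot syncs; if you prefer a daemon that keeps the clock adjusted continuously, CentOS 7 also ships chrony (package and service names as in the stock distribution):

yum install -y chrony
systemctl enable chronyd --now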

Install the containerd service

yum install -y containerd.io-1.6.27
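Note that containerd.io comes from the Docker CE repository, so this assumes that repo is already configured (yum install -y yum-utils && yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo). The config.toml shipped with the package disables the CRI plugin, which is exactly what causes Error 1 at the bottom of this page when nodes try to join. A minimal sketch to regenerate a full default config and switch the runc cgroup driver to systemd (the driver kubeadm configures the kubelet for by default since 1.22):

containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl enable containerd --now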

Install Docker

yum install docker-ce -y

# Start Docker now and enable it at boot

systemctl enable docker --now
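In 1.28 the kubelet talks to containerd directly, so Docker here is mainly a convenience for building and inspecting images. If you want Docker's cgroup driver to match the systemd driver used elsewhere in this setup, a commonly used (optional) daemon.json:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker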

  

Kubernetes cluster installation

Configure the Kubernetes yum repository

vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni

Install kubelet, kubeadm, and kubectl

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes   # needed because the repo file excludes these packages
systemctl enable kubelet --now

Initialize the cluster

Run on the master (K1):

kubeadm init --apiserver-advertise-address=192.168.231.120
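Since the hosts themselves sit on 192.168.231.0/24, which overlaps Calico's default pool (192.168.0.0/16), it is safer to also pass an explicit pod CIDR at init time; 10.244.0.0/16 below is an arbitrary choice, not something mandated by this setup:

kubeadm init \
  --apiserver-advertise-address=192.168.231.120 \
  --pod-network-cidr=10.244.0.0/16

If you do this, uncomment and set the CALICO_IPV4POOL_CIDR variable in calico.yaml to the same range before applying it.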

After initialization succeeds, run the following on K1 so kubectl can reach the cluster (kubeadm prints these exact commands at the end of a successful init):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Install the Calico network plugin on the master

wget --no-check-certificate https://docs.projectcalico.org/manifests/calico.yaml

kubectl apply -f calico.yaml
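The nodes only flip to Ready once the calico-node pods are Running; you can watch them come up with:

kubectl get pods -n kube-system -w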

Verify the installation by checking the node status:

kubectl get nodes

Join worker nodes to the cluster

That is, add the K2 and K3 hosts to the Kubernetes cluster.

Generate a token on the K1 master

# Run on the K1 master
kubeadm token create --print-join-command

# Create a token that never expires
kubeadm token create --ttl 0 --print-join-command
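To inspect the tokens that already exist, including their TTLs:

kubeadm token list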



# Run the command below on the K2 and K3 worker nodes to join the cluster.
The generated join command (with the non-expiring token) looks like this:

kubeadm join 192.168.231.120:6443 --token 5ajtxi.sx49u7jyygnmw0c4 --discovery-token-ca-cert-hash sha256:ada6bf229e93d346c4af69f953c96040c12c30b1f2b10eb2993052fbfaa48651

The Kubernetes cluster is now set up.

kubectl get nodes   # run on the master

Install Helm

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash


# Check the Helm version
helm version

Add repositories (required)

# Add a chart repository (Brigade, as an example)
helm repo add brigade https://brigadecore.github.io/charts

# Add the ingress-nginx repository
helm repo add ng https://kubernetes.github.io/ingress-nginx


# Remove a repository (here: stable)
helm repo remove stable


# List configured repositories
helm repo list


# Search the repositories for ingress charts

helm search repo ingress
NAME            	CHART VERSION	APP VERSION	DESCRIPTION                                       
ng/ingress-nginx	4.9.0        	1.9.5      	Ingress controller for Kubernetes using NGINX a...


# Pull the chart; it is saved locally as a .tgz archive
helm pull ng/ingress-nginx
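helm pull saves the chart as a local archive (ingress-nginx-4.9.0.tgz for the version shown above). A usage sketch for installing it; the release name my-ingress and the target namespace are arbitrary choices here:

# install straight from the repo
helm install my-ingress ng/ingress-nginx -n ingress-nginx --create-namespace

# or unpack the pulled archive and install from the local directory
tar -xzf ingress-nginx-4.9.0.tgz
helm install my-ingress ./ingress-nginx -n ingress-nginx --create-namespace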

Common Kubernetes commands

Delete a node's record from the master (see "Delete a node from the cluster" below)

Pod commands

Delete a pod

# Delete a pod in the given namespace
kubectl delete pod ingress-nginx-admission-patch-j4s7z -n ingress-nginx

List all namespaces

kubectl get namespaces


# List all pods in a namespace (here: kube-system)

kubectl get pods -n kube-system

List everything (pods, services, deployments, ...) in a given namespace, here ingress-nginx:

kubectl get all -n ingress-nginx

Print the command for joining a node

kubeadm token create --print-join-command

# Generate a token that never expires
kubeadm token create --ttl 0 --print-join-command

Delete a node from the cluster

kubectl  delete nodes k3
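Standard practice is to drain the node first so its workloads are evicted cleanly before you delete it (flags as documented for kubectl drain):

kubectl drain k3 --ignore-daemonsets --delete-emptydir-data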

Wipe the node's data (run on the node itself)

kubeadm reset
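kubeadm reset itself warns that it does not clean up CNI configuration or iptables rules; to wipe those as well:

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X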

Schedule a pod onto a specific node

kubectl label node k1 ingress=true
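The label by itself does not move any pods; a pod opts in with a matching nodeSelector. A minimal sketch (the pod name and image are placeholders, only the ingress: "true" selector ties it to the label above):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-k1
spec:
  nodeSelector:
    ingress: "true"
  containers:
  - name: demo
    image: nginx
EOF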

ERROR:

Error 1:

The K2 node fails to join:

[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: time="2024-01-18T10:41:00-05:00" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:

On the master, restart containerd:

systemctl restart containerd

On the node, delete the packaged config, then restart containerd:

rm /etc/containerd/config.toml
systemctl restart containerd

If the node still does not show Ready after joining, restart these services on the master side as well.
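The root cause is that the config.toml shipped with the containerd.io package contains disabled_plugins = ["cri"], so the CRI service the kubelet needs is never registered. Instead of just deleting the file, you can also regenerate a complete default config:

containerd config default > /etc/containerd/config.toml
systemctl restart containerd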

Error 2:

[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
kubeadm reset       # run on the node to wipe its data

Then rejoin the cluster with the join command.