This article sets up a flannel network so that Docker containers on different hosts can communicate with one another. This is the networking foundation of the Kubernetes cluster, and we also add an HA (high-availability) layer. The servers used for deployment are:

master1  192.168.206.31
master2  192.168.206.32
master3  192.168.206.33
node1    192.168.206.41
node2    192.168.206.42
node3    192.168.206.43
ha1      192.168.206.36
ha2      192.168.206.37
VIP:     192.168.206.30
I. Generate the TLS certificate for the Flannel network
Flannel is installed on every cluster node; the operations below are performed on k8s-master1.

1. Create the certificate signing request
cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

This certificate is used only as a client certificate (flanneld presents it when connecting to etcd), so the hosts field is left empty.
2. Generate the certificate and private key:
cfssl gencert -ca=/data/ssl/ca.pem \
  -ca-key=/data/ssl/ca-key.pem \
  -config=/data/ssl/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

Create a directory to hold the certificates:

mkdir -p /opt/kubernetes/ssl/flannel

Copy the certificates to all 3 masters and 3 nodes:

cp flanneld*.pem /opt/kubernetes/ssl/flannel

Note that the flanneld unit file in step 3 below also references the CA certificate at /opt/kubernetes/ssl/flannel/ca.pem, so copy that as well:

cp /data/ssl/ca.pem /opt/kubernetes/ssl/flannel
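Before distributing the certificate, you can sanity-check what cfssl generated; a quick inspection with openssl (not part of the original steps) prints the subject, issuer, and validity window:

openssl x509 -in flanneld.pem -noout -subject -issuer -dates
# subject should show CN=flanneld and O=k8s, and the notBefore/notAfter dates should cover today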
II. Deploy Flannel

1. Download and install Flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz
cp {flanneld,mk-docker-opts.sh} /opt/kubernetes/bin/
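To confirm the binary landed in the right place and is executable, you can have it print its version (this assumes the v0.10.0 build supports the -version flag):

/opt/kubernetes/bin/flanneld -version
# expected output: v0.10.0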
2. Write the network range into etcd

The following two commands need to be run only once, on any one node of the etcd cluster; they create the flannel network range from which subnets are allocated to docker.
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem \
  mkdir /opt/kubernetes/network

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem \
  mk /opt/kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
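To verify the key was written, read it back with etcdctl's get subcommand (same TLS flags as above):

etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem \
  get /opt/kubernetes/network/config
# should echo back the JSON written above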
3. Create the systemd unit file
cat > /etc/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/kubernetes/bin/flanneld \
  -etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
  -etcd-certfile=/opt/kubernetes/ssl/flannel/flanneld.pem \
  -etcd-keyfile=/opt/kubernetes/ssl/flannel/flanneld-key.pem \
  -etcd-endpoints=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
  -etcd-prefix=/opt/kubernetes/network
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later, it uses the parameter values from that file to configure the docker0 bridge. flanneld communicates with other nodes over the interface that carries the system default route; on machines with multiple interfaces (e.g., an internal and a public one), the -iface=enpxx option can pin the communication interface, as shown below.
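For example, to pin flanneld to this cluster's ens33 interface (the interface name is specific to this environment; substitute your own), the ExecStart line would become:

ExecStart=/opt/kubernetes/bin/flanneld \
  -etcd-cafile=/opt/kubernetes/ssl/flannel/ca.pem \
  -etcd-certfile=/opt/kubernetes/ssl/flannel/flanneld.pem \
  -etcd-keyfile=/opt/kubernetes/ssl/flannel/flanneld-key.pem \
  -etcd-endpoints=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
  -etcd-prefix=/opt/kubernetes/network \
  -iface=ens33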
4. Start flannel and enable it at boot
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
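If flanneld fails to come up (wrong certificate paths or unreachable etcd endpoints are the usual causes), the unit status and journal show why:

systemctl status flanneld
journalctl -u flanneld --no-pager | tail -n 20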
5. Check the subnet information allocated by flannel
[root@k8s-master1 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.30.94.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.94.1/24 --ip-masq=true --mtu=1450"

[root@k8s-master1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.94.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false

/run/flannel/docker holds the subnet options that flannel hands to docker; /run/flannel/subnet.env contains flannel's overall network range as well as the subnet assigned to this node.
6. Verify that the flannel network is up
[root@k8s-master1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6e:7f:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.206.31/24 brd 192.168.206.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::bbd4:6d75:22b1:e631/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::129b:129d:71ca:5d94/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::1b37:c32:6cc4:be75/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether de:b1:04:6f:d6:57 brd ff:ff:ff:ff:ff:ff
    inet 172.30.65.0/32 brd 172.30.65.0 scope global flannel.1
       valid_lft forever preferred_lft forever

A flannel.1 interface carrying an address inside 172.30.0.0/16 confirms the overlay is active on this node.
III. Install docker and configure it to use the flannel network

1. Install docker on all nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install a pinned version; here 18.06
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker && systemctl enable docker
2. Configure docker to use the flannel network; do this on every docker node
[root@k8s-master1 ~]# vi /etc/systemd/system/multi-user.target.wants/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exist and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Compared with the stock docker unit, the two relevant changes are the EnvironmentFile=/run/flannel/docker line and the $DOCKER_NETWORK_OPTIONS argument on the ExecStart line, which together make dockerd pick up the bridge options flannel generated.
3. Restart docker to apply the configuration
systemctl daemon-reload
systemctl restart docker
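After the restart, the docker0 bridge should carry the address from DOCKER_OPT_BIP; a quick check on each node:

ip addr show docker0
# the inet address should match the --bip value from /run/flannel/docker, e.g. 172.30.94.1/24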
4. Check the subnet allocation across all cluster hosts
etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem \
  --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem \
  --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem \
  ls /opt/kubernetes/network/subnets
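Every node running flanneld registers one key here, named after its subnet (e.g. 172.30.65.0-24). Assuming that naming scheme, a small loop can ping each node's flannel.1 address to confirm cross-host reachability:

for key in $(etcdctl --ca-file=/opt/kubernetes/ssl/etcd/ca.pem \
    --cert-file=/opt/kubernetes/ssl/etcd/etcd.pem \
    --key-file=/opt/kubernetes/ssl/etcd/etcd-key.pem \
    ls /opt/kubernetes/network/subnets); do
  ip=$(basename "$key" | cut -d- -f1)   # e.g. 172.30.65.0, the flannel.1 address on that node
  ping -c 1 -W 1 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done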
IV. Deploy keepalived + haproxy for high availability

Deployment servers:
ha1 192.168.206.36
ha2 192.168.206.37

1. Install haproxy on both haproxy nodes
yum install -y haproxy

cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                tcp
    log                 global
    retries             3
    timeout connect     10s
    timeout client      1m
    timeout server      1m

frontend k8s-api
    bind *:6443
    bind *:443
    mode tcp
    option tcplog
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master1 192.168.206.31:6443 check
    server master2 192.168.206.32:6443 check
    server master3 192.168.206.33:6443 check
EOF
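Before starting the service, the file can be validated with haproxy's built-in configuration check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg
# prints "Configuration file is valid" on success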
2. Start haproxy on both nodes
systemctl start haproxy
systemctl status haproxy
systemctl enable haproxy
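To confirm both frontend ports (6443 and 443) are actually listening:

ss -lntp | grep haproxy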
3. Install keepalived on both haproxy nodes
yum install -y keepalived

cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_K8S
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.206.30/24
    }
}
EOF

The configuration above is for ha1; on ha2 use state BACKUP and priority 50, so that ha1 normally holds the VIP (192.168.206.30) and ha2 takes it over on failure.
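As configured, the VIP only moves when the whole node fails. A common refinement, not part of the original steps, is a vrrp_script health check that tracks the haproxy process, so the VIP also fails over when haproxy itself dies. A sketch (place the block before vrrp_instance VI_1 and add track_script { chk_haproxy } inside the instance):

vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"   # exits 0 while a haproxy process is running
    interval 2
    weight -30                             # on failure, drop priority below the peer's so the VIP moves
}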
4. Start keepalived on both haproxy nodes
systemctl restart keepalived
systemctl status keepalived
systemctl enable keepalived
Once everything is started, check which node holds the VIP, or shut down the primary ha node and verify that the VIP fails over to the backup.
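A concrete failover test, using the interface and VIP from the configuration above:

# on ha1: the VIP should be bound here first
ip addr show ens33 | grep 192.168.206.30
# simulate a failure of the primary
systemctl stop keepalived
# on ha2: within a few seconds the VIP should appear
ip addr show ens33 | grep 192.168.206.30
# restore ha1 afterwards
systemctl start keepalived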