Installing a Highly Available Kubernetes 1.20 Cluster with kubeadm

Building a highly available k8s cluster using kubeadm

I. Environment Planning and Configuration

Kubernetes official docs: https://kubernetes.io/docs/setup/
Latest HA installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

If you are installing on VMware virtual machines, prepare the VMs first (see the separate VMware VM configuration guide).

For production, a binary installation is recommended. Avoid servers installed with a Chinese locale and avoid cloned VMs (both can cause bugs).

1. HA cluster layout
 Hostname          IP                       Description
 master01-03       192.168.0.100-102        master*3
 node01-02         192.168.0.104-105        node*2
 k8s-master-lb     192.168.0.236            keepalived virtual IP (VIP)

Server configuration:
OS version:      CentOS 7.9  (older releases have bugs)
Docker version:  19.03.x
Pod CIDR:        172.168.0.0/12
Service CIDR:    10.96.0.0/12

Note: for the load balancer you can use Tencent Cloud's internal ELB; Alibaba Cloud SLB does not work because of its loopback restriction (a backend cannot reach itself through the SLB). The VIP must be an internal IP that is not already in use.

2. Configure hostnames and /etc/hosts
All hosts get the same /etc/hosts:
[root@k8s-master01 ~]# cat /etc/hosts
192.168.0.100 k8s-master01
192.168.0.101 k8s-master02
192.168.0.102 k8s-master03
192.168.0.236 k8s-master-lb    # if this is not an HA cluster, use Master01's IP here
192.168.0.104 k8s-node01
192.168.0.105 k8s-node02
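
If you'd rather not edit the file on every machine by hand, you can push it from master01. A minimal sketch, not part of the original steps; it assumes you can ssh to each node as root (passwordless ssh is only configured in step 5 below, so you may be prompted for passwords here):

[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /etc/hosts $i:/etc/hosts; done
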
3. Configure the yum repositories
Set up the yum repositories on all hosts:
[root@k8s-master01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo  # download the Aliyun base repo
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2    # install common tools
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   # add the Aliyun Docker repo
[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo    # configure the Aliyun Kubernetes repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1    # if yum repolist later fails, change this to 0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master01 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo  # strip Aliyun-internal mirror entries
[root@k8s-master01 ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y # install the remaining required tools
4. Disable swap and synchronize time on all nodes
Run on all hosts:
[root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0   # disable swap
[root@k8s-master01 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab   # comment out swap so it is not mounted on boot
[root@k8s-master01 ~]# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm  && yum install ntpdate -y  # install ntpdate
Synchronize time on all nodes. Time sync is configured as follows:
[root@k8s-master01 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime  # set the timezone
[root@k8s-master01 ~]# echo 'Asia/Shanghai' >/etc/timezone
[root@k8s-master01 ~]# ntpdate time2.aliyun.com  # sync against Aliyun's time server
[root@k8s-master01 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com  > /dev/null 2>&1   # resync every 5 minutes so clock drift cannot break the cluster
5. Tune the system and upgrade packages

Raise the limits on all nodes, set up passwordless ssh from master01 (only) to every node, and upgrade the OS on all nodes.

[root@k8s-master01 ~]# ulimit -SHn 65535  # raise the limit for the current session
[root@k8s-master01 ~]# vim /etc/security/limits.conf
# append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
[root@k8s-master01 ~]# ssh-keygen -t rsa  # generate the key pair on master01 only
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
[root@k8s-master01 ~]# yum update -y --exclude=kernel* && reboot  # upgrade all nodes to 7.9; this downloads a lot, and the Aliyun mirror is much faster
[root@k8s-master01 ~]# cat /etc/redhat-release   # after the reboot, verify the release is 7.9

II. Kernel Upgrade

1. Upgrade the kernel

Overview: all nodes need a 4.19+ kernel; 4.19 is recommended as the more stable choice.

[root@k8s-master01 ~]# uname -r  # check the current kernel version
For 4.19: download the kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm and kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm packages.
For the latest 5.x:
[root@k8s-master01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8s-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm  # enable the ELRepo repository
[root@k8s-master01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available  # list all available kernel packages
[root@k8s-master01 ~]# yum --enablerepo=elrepo-kernel install kernel-ml  # install the latest mainline kernel
[root@k8s-master01 ~]# uname -sr 

After installation the system still boots into the old kernel by default, so change the boot order (all nodes):

[root@k8s-master01 ~]# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
[root@k8s-master01 ~]# grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"   # change the kernel boot order on all nodes
[root@k8s-master01 ~]# grubby --default-kernel   # confirm the default kernel is 4.19+
[root@k8s-master01 ~]# reboot     # reboot
[root@k8s-master01 ~]# uname -a   # after rebooting all nodes, confirm the kernel is 4.19+
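
To confirm every node actually boots into the new kernel, you can check them all from master01 in one pass. A small sketch that relies on the passwordless ssh configured in step 5 above:

[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do echo -n "$i: "; ssh $i uname -r; done
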
2. Install ipvsadm

Newer versions recommend the more capable ipvs mode for kube-proxy.
Ipvs: watches the master for Service and Endpoint additions and deletions and calls the Netlink interface to create matching IPVS rules, which forward traffic to the corresponding Pods.
Iptables: watches for the same events, but creates an iptables rule for each Service that proxies the Service's ClusterIP to the backend Pods.

Configure the ipvs modules on all nodes:
[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf 
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service # enable the service; --now also starts it immediately
3. Tune the kernel parameters for Kubernetes (all nodes)
[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
[root@k8s-master01 ~]# sysctl --system
[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack # after the reboot, the modules loaded earlier should be listed

III. Installing the Base Components

1. Install Docker

This section installs the components used by the cluster, such as Docker CE and the Kubernetes components themselves.

Install Docker on all nodes:
yum install docker-ce-19.03.* -y

Newer kubelet versions expect systemd, so change Docker's CgroupDriver to systemd:

[root@k8s-master01 ~]# mkdir /etc/docker
[root@k8s-master01 ~]# vim /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
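
The daemon.json change only takes effect once Docker is started, and Docker still needs to be enabled on boot on every node. A short follow-up sketch, with a check that the cgroup driver took effect (the grep pattern is only illustrative):

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now docker
[root@k8s-master01 ~]# docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd
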
2. Install the Kubernetes components

List the available versions:

[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r

Install the latest kubeadm on all nodes:

[root@k8s-master01 ~]# yum install kubeadm -y

The default pause image lives on gcr.io, which may be unreachable from China, so point kubelet at Aliyun's pause image instead:

[root@k8s-master01 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"

Enable kubelet on boot:

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kubelet

IV. Installing the HA Components

1. Install HAProxy

Note: skip this section if you are not building an HA cluster.
Install HAProxy and Keepalived via yum on all master nodes:

yum install keepalived haproxy -y

Configure HAProxy on all master nodes (the configuration is identical on every master):

[root@k8s-master01 etc]# mkdir /etc/haproxy
[root@k8s-master01 etc]# vim /etc/haproxy/haproxy.cfg 
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01	192.168.0.100:6443  check
  server k8s-master02	192.168.0.101:6443  check
  server k8s-master03	192.168.0.102:6443  check
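
Before starting HAProxy (done in subsection 2 below), you can ask it to validate the file; once it is running, the monitor-in frontend answers a plain health URL. A quick sanity-check sketch:

[root@k8s-master01 etc]# haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax check; prints "Configuration file is valid"
[root@k8s-master01 etc]# curl http://127.0.0.1:33305/monitor      # only works once HAProxy has been started
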
2. Install Keepalived

Configure Keepalived on all master nodes. The configuration differs per node, so pay attention to each node's IP and NIC name (the interface parameter) in [root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf.

Master01's configuration:
[root@k8s-master01 etc]# mkdir /etc/keepalived

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.0.100
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
       chk_apiserver
    }
}
Master02's configuration:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.0.101
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
       chk_apiserver
    }
}
Master03's configuration:
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.0.102
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.236
    }
    track_script {
       chk_apiserver
    }
}

Configure the Keepalived health-check script on all master nodes:

[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash
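# If haproxy is not running after 3 consecutive checks, stop keepalived
# so the VIP fails over to another master.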

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Start haproxy and keepalived:

[root@k8s-master01 keepalived]# chmod +x /etc/keepalived/check_apiserver.sh
[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy   # --now starts the service immediately
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
3. Test the HA setup

Test the VIP:

[root@k8s-master01 ~]# ping 192.168.0.236 -c 4    # ping test
PING 192.168.0.236 (192.168.0.236) 56(84) bytes of data.
64 bytes from 192.168.0.236: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.0.236: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.0.236: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.0.236: icmp_seq=4 ttl=64 time=0.063 ms
--- 192.168.0.236 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms

[root@k8s-master01 ~]# telnet 192.168.0.236 16443   # port test
Trying 192.168.0.236...
Connected to 192.168.0.236.
Escape character is '^]'.
Connection closed by foreign host.

If the VIP does not answer pings, or telnet never shows the ']' escape prompt, the VIP is not usable and you must not continue. Troubleshoot Keepalived first: the firewall and SELinux, the haproxy and keepalived service states, the listening ports, and so on.
On all nodes, firewalld must be disabled and inactive: systemctl status firewalld
On all nodes, SELinux must be disabled: getenforce
On the masters, check haproxy and keepalived: systemctl status keepalived haproxy
On the masters, check the listening ports: netstat -lntp ; ss -ltunp
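
Beyond ping and telnet, you can exercise failover itself. A minimal sketch, not part of the original steps: stop keepalived on the master currently holding the VIP and watch the address move to a backup (interface name as configured above):

[root@k8s-master01 ~]# ip addr show ens33 | grep 192.168.0.236   # the VIP currently lives here
[root@k8s-master01 ~]# systemctl stop keepalived                 # simulate a failure
[root@k8s-master02 ~]# ip addr show ens33 | grep 192.168.0.236   # the VIP should appear on a backup within seconds
[root@k8s-master01 ~]# systemctl start keepalived                # restore
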

V. Cluster Initialization

Official initialization documentation:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
Create the kubeadm-config.yaml file on all nodes.
(Note: if this is not an HA cluster, change 192.168.0.236:16443 to Master01's address and 16443 to the apiserver port, 6443 by default. Also change v1.20.0 to match the kubeadm version on your servers: kubeadm version)

[root@k8s-master01 ~]# cd /root
[root@k8s-master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.0.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.0.236:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Migrate the kubeadm config: this guide targets 1.20.0, but the latest kubeadm may be newer, so convert the file into new.yaml:

[root@k8s-master01 ~]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

Pre-pull the images on all master nodes to save initialization time:

[root@k8s-master01 ~]# kubeadm config images pull --config /root/new.yaml

Enable kubelet on boot on all nodes:

[root@k8s-master01 ~]# systemctl enable --now kubelet   # if it fails to start now, ignore it; it will come up once initialization succeeds (--now starts it immediately)

Initialize on Master01 only. Initialization generates the certificates and config files under /etc/kubernetes; afterwards the other masters simply join Master01:

[root@k8s-master01 ~]# kubeadm init --config /root/new.yaml  --upload-certs

If initialization fails, reset and initialize again:

kubeadm reset -f ; ipvsadm --clear  ; rm -rf ~/.kube

On success, the output contains the token that other nodes use to join, so record it:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908 \
    --control-plane --certificate-key ac2854de93aaabdf6dc440322d4846fc230b290c818c32d6ea2e500fc930b0aa

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908

Configure the environment variable on Master01 for accessing the Kubernetes cluster:

[root@k8s-master01 ~]# vim /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master01 ~]# source /root/.bashrc
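
A small optional convenience, not part of the original steps: enable kubectl tab completion on Master01 as well:

[root@k8s-master01 ~]# yum install bash-completion -y
[root@k8s-master01 ~]# echo 'source <(kubectl completion bash)' >> /root/.bashrc
[root@k8s-master01 ~]# source /root/.bashrc
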

Check the node status:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   74s   v1.20.0

With a kubeadm-initialized install, all system components run as containers in the kube-system namespace. Check the Pods:

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-777d78ff6f-kstsz               0/1       Pending   0          14m       <none>          <none>
coredns-777d78ff6f-rlfr5               0/1       Pending   0          14m       <none>          <none>
etcd-k8s-master01                      1/1       Running   0          14m       192.168.0.100   k8s-master01
kube-apiserver-k8s-master01            1/1       Running   0          13m       192.168.0.100   k8s-master01
kube-controller-manager-k8s-master01   1/1       Running   0          13m       192.168.0.100   k8s-master01
kube-proxy-8d4qc                       1/1       Running   0          14m       192.168.0.100   k8s-master01
kube-scheduler-k8s-master01            1/1       Running   0          13m       192.168.0.100   k8s-master01

VI. HA Master Configuration

Generate a new token after the old one expires:

[root@k8s-master01 ~]# kubeadm token create --print-join-command

Masters also need a new --certificate-key:

[root@k8s-master01 ~]# kubeadm init phase upload-certs  --upload-certs
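
The first command prints a ready-made worker join command; to turn it into a control-plane join, append --control-plane and the key printed by the second command. A sketch with placeholder values (substitute your real token, hash, and key):

kubeadm join 192.168.0.236:16443 --token <token-from-token-create> \
    --discovery-token-ca-cert-hash sha256:<hash-from-token-create> \
    --control-plane --certificate-key <key-from-upload-certs>
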

Join the other masters to the cluster:

kubeadm join 192.168.0.236:16443 --token fgtxr1.bz6dw1tci1kbj977     --discovery-token-ca-cert-hash sha256:06ebf46458a41922ff1f5b3bc49365cf3dd938f1a7e3e4a8c8049b5ec5a3aaa5 \
    --control-plane --certificate-key 03f99fb57e8d5906e4b18ce4b737ce1a055de1d144ab94d3cdcf351dfcd72a8b

VII. Node Configuration

Worker nodes carry the actual business workloads. In production, do not schedule anything except system components on the masters; in test environments you can allow Pods on the masters to save resources.

kubeadm join 192.168.0.236:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:8c92ecb336be2b9372851a9af2c7ca1f7f60c12c68f6ffe1eb513791a1b8a908

Once every node has joined, check the cluster status (nodes stay NotReady until the CNI plugin is installed in the next section):

[root@k8s-master01]# kubectl  get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m53s   v1.20.0
k8s-master02   NotReady   control-plane,master   2m25s   v1.20.0
k8s-master03   NotReady   control-plane,master   31s     v1.20.0
k8s-node01     NotReady   <none>                 32s     v1.20.0
k8s-node02     NotReady   <none>                 88s     v1.20.0

八.Calico组件的安装

简介:以下步骤只在master01执行,符合CNI标准的网络插件,给每个Pod生成一个唯一的IP地址,并且把每个节点当做一个路由器。
组件通过yaml的方式安装,源码我已经放在码云上了,首先下载源码。

[root@k8s-master01]# cd /root && git clone https://gitee.com/yjexample/k8s-ha-install.git

Switch to the right branch:

[root@k8s-master01]# cd /root/k8s-ha-install && git checkout manual-installation-v1.20.x && cd calico/

Edit the following places in calico-etcd.yaml: the cluster IPs, certificates, and related parameters.

sed -i 's#etcd_endpoints: "http://:"#etcd_endpoints: "https://192.168.0.100:2379,https://192.168.0.101:2379,https://192.168.0.102:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml


sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

# Note: the next step changes CALICO_IPV4POOL_CIDR in calico-etcd.yaml to your own Pod CIDR, i.e. replaces 192.168.x.x/16 with your cluster's Pod subnet, and uncomments those lines.

Make sure this CIDR was not clobbered by the earlier global replacements; if it was, change it back:

[root@k8s-master01]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
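
Before applying, you can confirm the substitution landed; a quick check (the value should be your Pod subnet, 172.168.0.0/12 in this guide, and the exact indentation will match your manifest):

[root@k8s-master01]# grep -A1 "CALICO_IPV4POOL_CIDR" calico-etcd.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "172.168.0.0/12"
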

Once edited, apply it:

[root@k8s-master01]# kubectl apply -f calico-etcd.yaml

Check the container status:

[root@k8s-master01]#  kubectl  get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-pwvnb   1/1     Running   0          3m29s
calico-node-5lz9m                          1/1     Running   0          3m29s
calico-node-8z4bg                          1/1     Running   0          3m29s
calico-node-lmzvf                          1/1     Running   0          3m29s
calico-node-mpngv                          1/1     Running   0          3m29s
calico-node-vmqsl                          1/1     Running   0          3m29s
coredns-54d67798b7-8525g                   1/1     Running   0          39m
coredns-54d67798b7-fxs72                   1/1     Running   0          39m
etcd-k8s-master01                          1/1     Running   0          39m
etcd-k8s-master02                          1/1     Running   0          33m
etcd-k8s-master03                          1/1     Running   0          31m
kube-apiserver-k8s-master01                1/1     Running   0          39m
kube-apiserver-k8s-master02                1/1     Running   0          33m
kube-apiserver-k8s-master03                1/1     Running   0          30m
kube-controller-manager-k8s-master01       1/1     Running   1          39m
kube-controller-manager-k8s-master02       1/1     Running   0          33m
kube-controller-manager-k8s-master03       1/1     Running   0          31m
kube-proxy-hnkmj                           1/1     Running   0          39m
kube-proxy-jk4dm                           1/1     Running   0          32m
kube-proxy-nbcg2                           1/1     Running   0          32m
kube-proxy-qv9k7                           1/1     Running   0          32m
kube-proxy-x6xdc                           1/1     Running   0          33m
kube-scheduler-k8s-master01                1/1     Running   1          39m
kube-scheduler-k8s-master02                1/1     Running   0          33m
kube-scheduler-k8s-master03                1/1     Running   0          30m

IX. Deploying Metrics Server

Recent Kubernetes versions collect system resource metrics through metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.
Copy Master01's front-proxy-ca.crt to all worker nodes:

[root@k8s-master01]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
[root@k8s-master01]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt

Install metrics-server:

[root@k8s-master01]# cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/
[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl  create -f comp.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Wait a moment for it to come up, then check:

[root@k8s-master01 metrics-server-0.4.x-kubeadm]# kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   109m         2%     1296Mi          33%       
k8s-master02   99m          2%     1124Mi          29%       
k8s-master03   104m         2%     1082Mi          28%       
k8s-node01     55m          1%     761Mi           19%       
k8s-node02     53m          1%     663Mi           17%

X. Deploying the Dashboard

The Dashboard visualizes the cluster's resources; it can also tail Pod logs in real time and run commands inside containers.
Return to the cloned repo and install the pinned version; the manifests use the official upstream images, unmodified.

cd /root/k8s-ha-install/dashboard/
1. Install
[root@k8s-master01 dashboard]# kubectl  create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

You can also install the latest version yourself; the official GitHub is https://github.com/kubernetes/dashboard,
where the newest release is listed:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

Create an admin user (vim admin.yaml):

[root@k8s-master01 dashboard]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
[root@k8s-master01 dashboard]# kubectl apply -f admin.yaml -n kube-system
2. Log in to the Dashboard

Change the Dashboard Service to NodePort:

[root@k8s-master01 dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change "type: ClusterIP" to "type: NodePort".

Check the port number:

[root@k8s-master01 dashboard]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
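
To extract just the NodePort non-interactively, a jsonpath query also works; a small convenience sketch, not part of the original steps:

[root@k8s-master01 dashboard]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
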


Using your instance's port number, the Dashboard is reachable through any host that runs kube-proxy, or through the VIP, via IP + port.
Open the Dashboard at https://192.168.0.103:18282 (replace 18282 with your own port) and choose the Token login method.

Retrieve the token:

[root@k8s-master01 1.1.1]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w

Paste the token into the Token field and click Sign in to access the Dashboard.

3. Required tweaks

Switch kube-proxy to ipvs mode; the ipvs configuration was left commented out when the cluster was initialized, so edit it now.
Run on master01:

kubectl edit cm kube-proxy -n kube-system
mode: "ipvs"

Roll the kube-proxy Pods:

kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

After a moment, verify the kube-proxy mode:

[root@k8s-master01 1.1.1]# curl 127.0.0.1:10249/proxyMode
ipvs
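
With ipvs active, the generated virtual servers can be inspected directly; a quick look (your Service list will differ):

[root@k8s-master01 1.1.1]# ipvsadm -Ln   # lists one virtual server per Service IP with its backend Pods
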

XI. Notes

Note: in a kubeadm-installed cluster, certificates are valid for one year by default. The master's kube-apiserver, kube-scheduler, kube-controller-manager, and etcd all run as containers; see them with kubectl get po -n kube-system.
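Given the one-year default, it is worth checking certificate expiry periodically; kubeadm 1.20 provides a subcommand for this (renewal is done with kubeadm certs renew):

[root@k8s-master01 ~]# kubeadm certs check-expiration
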
Startup also differs from a binary installation:
kubelet's configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml;
the other components' manifests live under /etc/kubernetes/manifests, e.g. kube-apiserver.yaml. When such a manifest changes, kubelet automatically reloads it, i.e. restarts the Pod.
After a kubeadm install, the masters do not allow regular Pods by default; this can be opened up as follows.
Check the taints:

[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule

Remove the taint:

[root@k8s-master01 ~]# kubectl  taint node  -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted
[root@k8s-master01 ~]# kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             
Taints:             
Taints: