
Kubernetes (k8s for short) is an open-source container management system, started by Google and donated to the Cloud Native Computing Foundation (CNCF); it has since become the de facto standard for container orchestration. Kubernetes provides automated deployment, scaling, and management of containerized applications across a cluster of hosts, using declarative configuration to handle application lifecycle management, resource scheduling, and service discovery.

When learning Kubernetes, the following core concepts and topics give a useful overview:

Basic concepts

  • Pods: the smallest deployable unit in Kubernetes, representing a running container environment; a Pod can contain one or more tightly coupled containers.
  • Nodes: the worker machines in the cluster; each node runs components such as kubelet and kube-proxy to manage containers and networking.
  • Services: provide a stable network identity for a group of Pods with the same function, enabling service discovery and load balancing.
  • Deployments: define and manage sets of Pod replicas, with support for rolling updates, rollbacks, and more.
  • Volumes: persistent storage, allowing data to survive Pod recreation or migration.
  • Namespaces: logical isolation units used to organize resources for different environments or teams.
  • Resource objects: Kubernetes uses a resource model to represent everything in the cluster, such as Pods, Services, Deployments, StatefulSets, Jobs, ConfigMaps, and Secrets, all managed uniformly through the API.

  • Controllers: the core mechanism that keeps the actual state consistent with the desired state, for example ReplicaSet, DaemonSet, and the Job controller; they continuously adjust the system to match the target state expressed in user-defined YAML or JSON resource files.
  • Networking: the Kubernetes network model ensures that Pods in the cluster can communicate with each other and that Pods can reach external networks.
  • Scheduling: the Kubernetes scheduler places Pods on suitable Nodes based on resource requirements and affinity/anti-affinity policies.
  • Scalability and resilience: automatic horizontal scaling and self-healing are key Kubernetes features; the number of Pods can grow or shrink automatically with load.
  • Security: Kubernetes offers RBAC (role-based access control), network policies, Pod security policies, and other security mechanisms.

    Learning Kubernetes typically involves installing and configuring a cluster, writing YAML resource manifests, understanding and operating the various resource objects, setting up networking, applying security policies, and using tools such as Helm to manage application releases. As cloud-native technology evolves, it may also touch on service meshes (such as Istio), continuous integration/continuous delivery (CI/CD), logging and monitoring, storage management, and advanced scheduling strategies.
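
    To make these objects concrete, here is a minimal, illustrative manifest (a sketch only; the names and image are hypothetical and it is not part of the cluster built below): a Deployment keeps two replicas of an nginx Pod running, and a Service gives them a stable identity inside the cluster.

    # nginx-demo.yaml -- illustrative sketch; apply with: kubectl apply -f nginx-demo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-demo
    spec:
      replicas: 2                  # the Deployment controller keeps 2 Pod replicas running
      selector:
        matchLabels:
          app: nginx-demo
      template:
        metadata:
          labels:
            app: nginx-demo
        spec:
          containers:
          - name: nginx
            image: nginx:latest    # any web server image works here
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-demo
    spec:
      selector:
        app: nginx-demo            # traffic is load-balanced across Pods with this label
      ports:
      - port: 80
        targetPort: 80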

    Supporting files and environment for this document

    Control node 1: IP 192.168.40.180, hostname master1
        Components: apiserver / controller-manager / scheduler / kubelet / etcd / docker / kube-proxy / keepalived / nginx / calico
    Control node 2: IP 192.168.40.181, hostname master2
        Components: apiserver / controller-manager / scheduler / kubelet / etcd / docker / kube-proxy / keepalived / nginx / calico
    Worker node 1: IP 192.168.40.182, hostname node1
        Components: kubelet / kube-proxy / docker / calico / coredns
    VIP: 192.168.40.199

    Operating system: CentOS 7.6

    Per-VM resources: 4 GB memory, 6 vCPUs, 100 GB disk

    Install MobaXterm

  • Double-click the installer MobaXterm_installer_20.3.msi
  • Click Next
  • Choose the install location and click Next
  • MobaXterm is now installed

    Install VMware

  • Double-click the installer
  • Click Next
  • Check "I accept the terms in the license agreement" and click Next
  • Choose the install location and click Next
  • Uncheck "Check for product updates on startup" and "Join the VMware Customer Experience Improvement Program", then click Next
  • Click Next
  • Click Install
  • Click Finish to complete the installation
  • Open VMware
  • VMware is now installed

    Create the virtual machines

    This document only walks through creating one virtual machine (master1); the other machines in the plan are created the same way.

    Create a virtual machine

  • Choose "New Virtual Machine" from the "File" menu
  • Click Next
  • Select "I will install the operating system later" and click Next
  • Select the guest OS version and click Next
  • Enter the virtual machine name and location
  • Enter the disk size and click Next
  • Click "Customize Hardware"
  • Adjust the hardware settings:
  • Memory: 4 GB
  • Processors: 2 processors, 4 cores per processor
  • Click Finish to create the virtual machine
  • Install the CentOS operating system

  • Click "Edit virtual machine settings"
  • Select the ISO image file and click OK
  • Click "Power on this virtual machine" and wait for the graphical screen
  • Choose Install directly
  • Choose the language (English by default)
  • Edit the network configuration file /etc/sysconfig/network-scripts/ifcfg-ens33

    vi /etc/sysconfig/network-scripts/ifcfg-ens33
    
    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=static
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=ens33
    UUID=85d7d5a2-9596-49df-a44f-2c5deb691b40
    DEVICE=ens33
    ONBOOT=yes
    IPADDR=192.168.40.180
    NETMASK=255.255.255.0
    GATEWAY=192.168.40.2
    DNS1=192.168.40.2
    

    **Note: two settings are modified from the defaults**

  • ONBOOT=yes
  • BOOTPROTO=static

    Restart the network service to apply the changes:

    service network restart
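
    A quick, optional sanity check that the static address is in effect (the addresses are the ones configured above):

    # The configured address should appear on ens33
    ip addr show ens33
    # The gateway configured above should answer
    ping -c 3 192.168.40.2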
    

    Set the hostname

    # Set the hostname
    [root@localhost ~]# hostnamectl set-hostname master1
    # Start a new shell so the new hostname shows in the prompt
    [root@localhost ~]# bash
    # The prompt now shows master1
    [root@master1 ~]# 
    

    Configure hostname resolution

    Edit /etc/hosts

    vi /etc/hosts
    
    192.168.40.180 master1
    192.168.40.181 master2
    192.168.40.182 node1
    

    Set up passwordless SSH login

  • Generate the key pair
  • [root@master1 ~]# ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:4/iikaKcz5nnc8QUPsHdJ/LlpUvqo221v8kaOjfJOkg root@master1
    The key's randomart image is:
    +---[RSA 2048]----+
    |      . . .      |
    |       + o o o . |
    |      . o o = o  |
    |       +   . +   |
    |      o S   o .  |
    |     . = E . o   |
    |  . o o o o o.o  |
    |..o.o+.o ..*.*o .|
    |.o.=+oo...o+*.+=.|
    +----[SHA256]-----+
    
  • Two files are generated under /root/.ssh
  • id_rsa: the private key
  • id_rsa.pub: the public key
  • [root@master1 ~]# ls .ssh
    id_rsa  id_rsa.pub
    
  • Copy the public key to the other machines
  • # Copy to master2
    [root@master1 ~]# ssh-copy-id master2
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'master2 (192.168.40.181)' can't be established.
    ECDSA key fingerprint is SHA256:3v/gb0XBMGsy1J4NRjWRBp/G7rVbvmOE/fstQSq/iCs.
    ECDSA key fingerprint is MD5:ea:94:d6:c1:b7:83:87:ab:8b:7a:00:fd:18:ff:43:52.
    Are you sure you want to continue connecting (yes/no)? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@master2's password:
    Number of key(s) added: 1
    Now try logging into the machine, with:   "ssh 'master2'"
    and check to make sure that only the key(s) you wanted were added.
    # Copy to node1
    [root@master1 ~]# ssh-copy-id node1
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'node1 (192.168.40.182)' can't be established.
    ECDSA key fingerprint is SHA256:JHE9EQMEeGdWNc5bJTdzDoRhQ2q81sM8KRkJaSqJUvo.
    ECDSA key fingerprint is MD5:c9:5a:75:d4:49:25:e6:6b:a8:6b:e8:ac:c7:b6:1f:b5.
    Are you sure you want to continue connecting (yes/no)? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node1's password:
    Number of key(s) added: 1
    Now try logging into the machine, with:   "ssh 'node1'"
    and check to make sure that only the key(s) you wanted were added.
    

    Repeat the same steps on the other two virtual machines.
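
    An optional check that passwordless login works from master1 (this also confirms that the /etc/hosts entries resolve):

    # Each command should print the remote hostname without asking for a password
    ssh master2 hostname
    ssh node1 hostname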

    Firewall and yum repository setup

    [root@master1 ~]# sudo yum install vim
    

    Disable the swap partition

    Edit /etc/fstab

    [root@master1 ~]# vim /etc/fstab
    
  • Delete the UUID line
  • Comment out the swap line
  • # /etc/fstab
    # Created by anaconda on Sat Feb 24 07:49:38 2024
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    /dev/mapper/centos-root /                       xfs     defaults        0 0
    /dev/mapper/centos-home /home                   xfs     defaults        0 0
    #/dev/mapper/centos-swap swap                   swap    defaults        0 0

    Disable the firewall

    systemctl stop firewalld && systemctl disable firewalld
    yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
    

    Disable SELinux (the first command also turns swap off immediately)

    [root@master1 ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    [root@master1 ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
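
    Optional checks that swap and SELinux are really off (run on each node):

    # Swap totals should all be 0
    free -m
    # Should print Permissive now, and Disabled after the next reboot
    getenforce
    # Should print SELINUX=disabled
    grep '^SELINUX=' /etc/selinux/config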
    

    Back up the repo file

    mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
    

    Configure the repo files

  • Upload CentOS-Base.repo and docker-ce.repo from the attachments to /etc/yum.repos.d
  • Copy them to the other two virtual machines
  • # Copy CentOS-Base.repo to the other two virtual machines
    [root@master1 yum.repos.d]# scp /etc/yum.repos.d/CentOS-Base.repo node1:/etc/yum.repos.d
    CentOS-Base.repo                                                                                                                               100% 2523     1.4MB/s   00:00
    [root@master1 yum.repos.d]# scp /etc/yum.repos.d/CentOS-Base.repo master2:/etc/yum.repos.d
    CentOS-Base.repo                                                                                                                               100% 2523     1.6MB/s   00:00
    # Copy docker-ce.repo to the other two virtual machines
    [root@master1 yum.repos.d]# scp /etc/yum.repos.d/docker-ce.repo node1:/etc/yum.repos.d
    docker-ce.repo                                                                                                                                 100% 2640     1.8MB/s   00:00
    [root@master1 yum.repos.d]# scp /etc/yum.repos.d/docker-ce.repo master2:/etc/yum.repos.d
    docker-ce.repo  
    
  • Configure the Aliyun repo needed to install the Kubernetes components: edit /etc/yum.repos.d/kubernetes.repo
  • vim /etc/yum.repos.d/kubernetes.repo
    
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    

    Copy kubernetes.repo to the other two virtual machines

    [root@master1 yum.repos.d]# scp /etc/yum.repos.d/kubernetes.repo node1:/etc/yum.repos.d
    kubernetes.repo                                                                                                                                100%  129   100.8KB/s   00:00
    [root@master1 yum.repos.d]# scp /etc/yum.repos.d/kubernetes.repo master2:/etc/yum.repos.d
    kubernetes.repo  
    

    Configure time synchronization

    # Install the ntpdate command
    [root@master1 ~]# yum install ntpdate -y
    # Synchronize with a network time server
    [root@master1 ~]# ntpdate cn.pool.ntp.org
    # Turn the time sync into a cron job
    [root@master1 ~]# crontab -e
    * */1 * * * /usr/sbin/ntpdate   cn.pool.ntp.org
    # Restart the crond service
    [root@master1 ~]# service crond restart
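
    Note that the schedule `* */1 * * *` actually runs ntpdate every minute, because the minute field is `*`. If an hourly sync is intended, a crontab entry like the following could be used instead (a suggested alternative, not part of the original steps):

    # Sync the clock once per hour, at minute 0
    0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org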
    

    Docker

    Install Docker CE

    yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
    systemctl start docker && systemctl enable docker && systemctl status docker
    
    [root@master1 ~]# docker version
    Client: Docker Engine - Community
     Version:           20.10.6
     API version:       1.41
     Go version:        go1.13.15
     Git commit:        370c289
     Built:             Fri Apr  9 22:45:33 2021
     OS/Arch:           linux/amd64
     Context:           default
     Experimental:      true
    Server: Docker Engine - Community
     Engine:
      Version:          20.10.6
      API version:      1.41 (minimum version 1.12)
      Go version:       go1.13.15
      Git commit:       8728dd2
      Built:            Fri Apr  9 22:43:57 2021
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          1.6.28
      GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
     runc:
      Version:          1.1.12
      GitCommit:        v1.1.12-0-g51d5e94
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0
    

    Verify pulling an image

    [root@master1 ~]# docker pull nginx
    Using default tag: latest
    latest: Pulling from library/nginx
    e1caac4eb9d2: Pull complete
    88f6f236f401: Pull complete
    c3ea3344e711: Pull complete
    cc1bb4345a3a: Pull complete
    da8fa4352481: Pull complete
    c7f80e9cdab2: Pull complete
    18a869624cb6: Pull complete
    Digest: sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
    Status: Downloaded newer image for nginx:latest
    docker.io/library/nginx:latest
    

    View the pulled image

    [root@master1 ~]# docker images
    REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
    nginx        latest    e4720093a3c1   10 days ago   187MB
    

    Configure registry mirrors

    vim /etc/docker/daemon.json

    {
      "registry-mirrors":["https://w7zktxk4.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    

    Note: this changes the Docker cgroup driver to systemd (the default is cgroupfs); kubelet uses systemd by default, and the two must match. Then restart the Docker service:

    systemctl daemon-reload && systemctl restart docker && systemctl enable docker
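
    After the restart, the driver change can be verified (an optional check):

    # Should report: Cgroup Driver: systemd
    docker info | grep -i 'cgroup driver'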
    

    Install kubeadm, kubelet, and kubectl

  • kubeadm: the official tool for installing Kubernetes (kubeadm init, kubeadm join)
  • kubelet: the node agent that starts and stops Pods and the services they need
  • kubectl: the CLI for working with Kubernetes resources (create, delete, modify)
    yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
    # Enable at boot and start
    systemctl enable kubelet && systemctl start kubelet
    # Check the status
    [root@master1 docker]# systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /usr/lib/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: activating (auto-restart) (Result: exit-code) since Sun 2024-02-25 08:41:09 EST; 5s ago
         Docs: https://kubernetes.io/docs/
      Process: 23854 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
     Main PID: 23854 (code=exited, status=255)
    Feb 25 08:41:09 master1 systemd[1]: Unit kubelet.service entered failed state.
    Feb 25 08:41:09 master1 kubelet[23854]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4a86e78, 0x12a05f200, 0x0, 0xc000416101, 0xc00010e0c0)
    Feb 25 08:41:09 master1 kubelet[23854]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
    Feb 25 08:41:09 master1 kubelet[23854]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
    Feb 25 08:41:09 master1 kubelet[23854]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
    Feb 25 08:41:09 master1 kubelet[23854]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4a86e78, 0x12a05f200)
    Feb 25 08:41:09 master1 kubelet[23854]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
    Feb 25 08:41:09 master1 kubelet[23854]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
    Feb 25 08:41:09 master1 kubelet[23854]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
    Feb 25 08:41:09 master1 systemd[1]: kubelet.service failed.

    Note: kubelet staying in this activating (auto-restart) failure loop is expected at this stage; it has no configuration to run with until kubeadm init is executed later.

    This article uses keepalived + nginx to provide high availability for the apiserver.

    Install nginx and keepalived (on both the primary and the backup node)

    # Refresh the local package metadata cache
    yum makecache
    # Extra packages (EPEL repository)
    yum install epel-release
    # Install nginx and keepalived
    yum install nginx keepalived -y
    

    Modify the nginx configuration file (same on both primary and backup)

    # Edit the configuration file
    vim /etc/nginx/nginx.conf
    
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    include /usr/share/nginx/modules/*.conf;
    events {
        worker_connections 1024;
    }
    # Layer-4 load balancing for the apiserver components on the two master nodes
    stream {
        log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
        access_log  /var/log/nginx/k8s-access.log  main;
        upstream k8s-apiserver {
           server 192.168.40.180:6443;   # Master1 APISERVER IP:PORT
           server 192.168.40.181:6443;   # Master2 APISERVER IP:PORT
        }
        server {
           listen 16443; # nginx runs on the master nodes themselves, so this port cannot be 6443 or it would conflict with the apiserver
           proxy_pass k8s-apiserver;
        }
    }
    http {
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
        server {
            listen       80 default_server;
            server_name  _;
            location / {
            }
        }
    }

    Configure keepalived

  • Modify the keepalived configuration file on the primary node (master1)
  • # Edit the configuration file
    vim /etc/keepalived/keepalived.conf
    
    global_defs {
       notification_email {
         [email protected]
         [email protected]
         [email protected]
       }
       notification_email_from [email protected]
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id NGINX_MASTER
    }
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }
    vrrp_instance VI_1 {
        state MASTER
        interface ens33  # change to the actual NIC name
        virtual_router_id 51 # VRRP router ID; must be unique per instance
        priority 100    # priority; set 90 on the backup server
        advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        # Virtual IP
        virtual_ipaddress {
            192.168.40.199/24
        }
        track_script {
            check_nginx
        }
    }
    
  • Create the file /etc/keepalived/check_nginx.sh
  • vim /etc/keepalived/check_nginx.sh
    
    #!/bin/bash
    count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    
    chmod +x  /etc/keepalived/check_nginx.sh
    
    systemctl daemon-reload
    
  • Modify the keepalived configuration file on the backup node (master2)
  • vim /etc/keepalived/keepalived.conf
    
    global_defs {
       notification_email {
         [email protected]
         [email protected]
         [email protected]
       }
       notification_email_from [email protected]
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id NGINX_BACKUP
    }
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface ens33
        virtual_router_id 51 # VRRP router ID; must be unique per instance
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.40.199/24
        }
        track_script {
            check_nginx
        }
    }
    
  • Create the file /etc/keepalived/check_nginx.sh
  • vim /etc/keepalived/check_nginx.sh
    
    #!/bin/bash
    count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    
    chmod +x  /etc/keepalived/check_nginx.sh
    
    systemctl daemon-reload
    

    Start nginx

    Note: install nginx-all-modules.noarch (it provides the stream module used above), then run nginx -t to verify the configuration.

    yum -y install nginx-all-modules.noarch
    nginx -t
    
    # Start nginx
    systemctl start nginx
    # Check the nginx status
    systemctl status nginx
    # The output looks like this
    [root@master1 ~]# systemctl status nginx
    ● nginx.service - The nginx HTTP and reverse proxy server
       Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
       Active: active (running) since Mon 2024-02-26 08:20:50 EST; 17min ago
      Process: 9212 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
      Process: 9088 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
      Process: 9077 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
     Main PID: 9216 (nginx)
        Tasks: 9
       Memory: 14.9M
       CGroup: /system.slice/nginx.service
               ├─9216 nginx: master process /usr/sbin/nginx
               ├─9217 nginx: worker process
               ├─9218 nginx: worker process
               ├─9219 nginx: worker process
               ├─9220 nginx: worker process
               ├─9221 nginx: worker process
               ├─9222 nginx: worker process
               ├─9223 nginx: worker process
               └─9224 nginx: worker process
    Feb 26 08:20:49 master1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
    Feb 26 08:20:50 master1 nginx[9088]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    Feb 26 08:20:50 master1 nginx[9088]: nginx: configuration file /etc/nginx/nginx.conf test is successful
    Feb 26 08:20:50 master1 systemd[1]: Started The nginx HTTP and reverse proxy server.
    

    Start keepalived

    # Start keepalived
    systemctl start keepalived
    # Check the keepalived status
    systemctl status keepalived
    # The output looks like this
    [root@master1 ~]# systemctl status keepalived
    ● keepalived.service - LVS and VRRP High Availability Monitor
       Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
       Active: active (running) since Mon 2024-02-26 08:20:49 EST; 18min ago
      Process: 9075 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
     Main PID: 9103 (keepalived)
        Tasks: 3
       Memory: 5.4M
       CGroup: /system.slice/keepalived.service
               ├─9103 /usr/sbin/keepalived -D
               ├─9105 /usr/sbin/keepalived -D
               └─9106 /usr/sbin/keepalived -D
    Feb 26 08:20:51 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:51 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:51 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:51 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:56 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:56 master1 Keepalived_vrrp[9106]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.40.199
    Feb 26 08:20:56 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:56 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:56 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    Feb 26 08:20:56 master1 Keepalived_vrrp[9106]: Sending gratuitous ARP on ens33 for 192.168.40.199
    

    Check that port 16443 is listening

    [root@master1 ~]# ss -antulp | grep :16443
    tcp    LISTEN     0      128       *:16443                 *:*                   users:(("nginx",pid=9224,fd=7),("nginx",pid=9223,fd=7),("nginx",pid=9222,fd=7),("nginx",pid=9221,fd=7),("nginx",pid=9220,fd=7),("nginx",pid=9219,fd=7),("nginx",pid=9218,fd=7),("nginx",pid=9217,fd=7),("nginx",pid=9216,fd=7))
    You have new mail in /var/spool/mail/root
    

    Enable the services on boot

    # Start nginx on boot
    systemctl enable nginx
    # Start keepalived on boot
    systemctl enable keepalived
    

    Check the network configuration

    [root@master1 ~]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:21:f7:35 brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.180/24 brd 192.168.40.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet 192.168.40.199/24 scope global secondary ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::8d:bfe1:44dd:2833/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:c5:6b:4f:5f brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    You can see the virtual IP 192.168.40.199 bound to ens33 on master1 (the load balancer listens on port 16443). Now check master2:

    [root@master2 ~]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:ba:ef:3e brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.181/24 brd 192.168.40.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::1b48:3b77:4af0:7f06/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:e8:b7:f6:e7 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    The virtual IP does not appear here because this is the backup node.

    Test whether the VIP can fail over

    Test whether the VIP moves to the backup node when nginx on the primary node is stopped.

    Stop nginx on the primary node

    [root@master1 ~]# systemctl stop nginx
    [root@master1 ~]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:21:f7:35 brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.180/24 brd 192.168.40.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::8d:bfe1:44dd:2833/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:c5:6b:4f:5f brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    The VIP has been removed from master1.

    Check whether the backup node now holds the VIP

    [root@master2 ~]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:ba:ef:3e brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.181/24 brd 192.168.40.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet 192.168.40.199/24 scope global secondary ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::1b48:3b77:4af0:7f06/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:13:e1:95:b4 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    Initialize the cluster

    Create the kubeadm-config.yaml file

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.20.6
    controlPlaneEndpoint: 192.168.40.199:16443
    imageRepository: registry.aliyuncs.com/google_containers
    apiServer:
      certSANs:
      - 192.168.40.180
      - 192.168.40.181
      - 192.168.40.182
      - 192.168.40.199
    networking:
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.10.0.0/16
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
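
    Before running kubeadm init, the images this configuration requires can be listed (optional, but useful here since the images come from an offline package rather than being pulled):

    # Lists the control-plane images expected for kubernetesVersion v1.20.6
    kubeadm config images list --config kubeadm-config.yaml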
    

    Upload the offline image package

    scp k8simage-1-20-6.tar.gz master2:/root/huo-l
    scp k8simage-1-20-6.tar.gz node1:/root/huo-l
    

    Note: the huo-l directory must be created on the target machines first.

    Load the image archive into Docker

    [root@master1 huo-l]# docker load -i k8simage-1-20-6.tar.gz
    [root@master1 huo-l]# docker images
    REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
    registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   2 years ago   118MB
    registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   2 years ago   47.3MB
    registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   2 years ago   116MB
    registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   2 years ago   122MB
    calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   2 years ago   21.7MB
    calico/node                                                       v3.18.0    5a7c4970fbc2   3 years ago   172MB
    calico/cni                                                        v3.18.0    727de170e4ce   3 years ago   131MB
    calico/kube-controllers                                           v3.18.0    9a154323fbf7   3 years ago   53.4MB
    registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   3 years ago   253MB
    registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   3 years ago   45.2MB
    registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   4 years ago   683kB
    

    Run kubeadm init

    # Initialize the cluster with kubeadm
    kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification
    # The log looks like this
    [init] Using Kubernetes version: v1.20.6
    [preflight] Running pre-flight checks
    [preflight] The system verification failed. Printing the output from the verification:
    KERNEL_VERSION: 3.10.0-957.el7.x86_64
    DOCKER_VERSION: 20.10.6
    OS: Linux
    CGROUPS_CPU: enabled
    CGROUPS_CPUACCT: enabled
    CGROUPS_CPUSET: enabled
    CGROUPS_DEVICES: enabled
    CGROUPS_FREEZER: enabled
    CGROUPS_MEMORY: enabled
    CGROUPS_PIDS: enabled
    CGROUPS_HUGETLB: enabled
            [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
            [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found.\n", err: exit status 1
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.10.0.1 192.168.40.180 192.168.40.199 192.168.40.181 192.168.40.182]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.40.180 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.40.180 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [apiclient] All control plane components are healthy after 57.016991 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
    [mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: wkg8yl.8foqs30soh38bll9
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [addons] Applied essential addon: kube-proxy
    Your Kubernetes control-plane has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    Alternatively, if you are the root user, you can run:
      export KUBECONFIG=/etc/kubernetes/admin.conf
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
      kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
        --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499 \
        --control-plane
    Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
        --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499
    

    Key information from the log

    # First, run the following commands as prompted
    To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Command for joining additional control-plane nodes
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
      kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
        --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499 \
        --control-plane
    # Command for joining worker nodes
    Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
        --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499
    
    [root@master1 huo-l]# kubectl get node
    NAME      STATUS     ROLES                  AGE   VERSION
    master1   NotReady   control-plane,master   10m   v1.20.6
    

    Note: the node is NotReady because the network plugin has not been installed yet.

    Add the second master node

    Run on master2

    cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
    

    Run on master1

    # Copy the certificates to master2
    scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
    

    Run the join command on master2

    # Join as a control-plane node, using the command generated by kubeadm init earlier
      kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
        --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499 \
        --control-plane
    

    Note: the --control-plane flag is included.

    [root@k8s-hlmaster2 etcd]# kubectl get node
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    # This happens because the following has not been run yet
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # After running it
    [root@k8s-hlmaster2 etcd]# kubectl get node
    NAME            STATUS     ROLES                  AGE    VERSION
    k8s-hlmaster1   NotReady   control-plane,master   22h    v1.20.6
    k8s-hlmaster2   NotReady   control-plane,master   3m3s   v1.20.6
    

    Add the worker node

    Run on the worker node (node1)

    cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
    

    Run the join command to add the worker node to the cluster

    kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
        --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499
    

    You may see an error:

    [root@node1 ~]# kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
    >     --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499
    [preflight] Running pre-flight checks
    [preflight] The system verification failed. Printing the output from the verification:
    KERNEL_VERSION: 3.10.0-957.el7.x86_64
    DOCKER_VERSION: 20.10.6
    OS: Linux
    CGROUPS_CPU: enabled
    CGROUPS_CPUACCT: enabled
    CGROUPS_CPUSET: enabled
    CGROUPS_DEVICES: enabled
    CGROUPS_FREEZER: enabled
    CGROUPS_MEMORY: enabled
    CGROUPS_PIDS: enabled
    CGROUPS_HUGETLB: enabled
            [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found.\n", err: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
    

    Solution: add the --ignore-preflight-errors=SystemVerification flag

    [root@node1 ~]# kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
    >     --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499 --ignore-preflight-errors=SystemVerification
    

    Note: when joining an additional node later, you may also hit this error:

    [root@node2 ~]# kubeadm join 192.168.40.199:16443 --token wkg8yl.8foqs30soh38bll9 \
    >      --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499 --ignore-preflight-errors=SystemVerification
    [preflight] Running pre-flight checks
    [preflight] The system verification failed. Printing the output from the verification:
    KERNEL_VERSION: 3.10.0-957.el7.x86_64
    DOCKER_VERSION: 20.10.6
    OS: Linux
    CGROUPS_CPU: enabled
    CGROUPS_CPUACCT: enabled
    CGROUPS_CPUSET: enabled
    CGROUPS_DEVICES: enabled
    CGROUPS_FREEZER: enabled
    CGROUPS_MEMORY: enabled
    CGROUPS_PIDS: enabled
    CGROUPS_HUGETLB: enabled
            [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
            [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found.\n", err: exit status 1
    error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "wkg8yl"
    To see the stack trace of this error execute with --v=5 or higher
    

    This error means the bootstrap token has expired; regenerate a token with kubeadm:

    # Generate a token
    [root@master1 huo-l]# kubeadm token generate
    elj674.w31ltnzw0x89vxlf
    # Use the generated token to print a new join command
    [root@master1 huo-l]# kubeadm token create elj674.w31ltnzw0x89vxlf --print-join-command --ttl=0
    kubeadm join 192.168.40.199:16443 --token elj674.w31ltnzw0x89vxlf     --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499
    # Join the cluster again
    [root@node2 ~]# kubeadm join 192.168.40.199:16443 --token elj674.w31ltnzw0x89vxlf     --discovery-token-ca-cert-hash sha256:df96eb54128358c1c29ae037eb74f2b9fb2266ec46fe9b64c885150a2440f499 --ignore-preflight-errors=SystemVerification
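
    Existing tokens and their expiry can be inspected on master1 at any time; `--ttl=0` above creates a token that does not expire:

    # Show bootstrap tokens with their TTL and expiration time
    kubeadm token list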
    

    Label node1 as a worker

    [root@master1 pki]# kubectl label node node1 node-role.kubernetes.io/worker=worker
    node/node1 labeled
    
    [root@master1 pki]# kubectl get node
    NAME      STATUS     ROLES                  AGE    VERSION
    master1   NotReady   control-plane,master   23h    v1.20.6
    master2   NotReady   control-plane,master   38m    v1.20.6
    node1     NotReady   worker                 5m4s   v1.20.6
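
    An optional check that the label was applied:

    # node1 should show node-role.kubernetes.io/worker=worker among its labels
    kubectl get node --show-labels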
    

    View the Pods in the kube-system namespace

    [root@master1 pki]# kubectl get pod -n kube-system
    NAME                              READY   STATUS    RESTARTS   AGE
    coredns-7f89b7bc75-87l8p          0/1     Pending   0          23h
    coredns-7f89b7bc75-tp68c          0/1     Pending   0          23h
    etcd-master1                      1/1     Running   1          23h
    etcd-master2                      1/1     Running   0          39m
    kube-apiserver-master1            1/1     Running   2          23h
    kube-apiserver-master2            1/1     Running   2          39m
    kube-controller-manager-master1   1/1     Running   2          23h
    kube-controller-manager-master2   1/1     Running   0          38m
    kube-proxy-497qc                  1/1     Running   1          23h
    kube-proxy-cjtct                  1/1     Running   0          38m
    kube-proxy-z8mt5                  1/1     Running   0          6m6s
    kube-scheduler-master1            1/1     Running   2          23h
    kube-scheduler-master2            1/1     Running   0          38m
    

    The coredns Pods are Pending because the network plugin has not been installed yet.

    Create the calico.yaml file

    # See the attachment for the file contents

    Apply the calico.yaml file

    [root@master1 huo-l]# kubectl apply -f calico.yaml
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node created
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    poddisruptionbudget.policy/calico-kube-controllers created
    

    Wait about a minute, then check again:

    [root@master1 huo-l]# kubectl get pod -n kube-system
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-6949477b58-lcxnt   1/1     Running   0          71s
    calico-node-6qn6c                          1/1     Running   0          70s
    calico-node-7bvmq                          1/1     Running   0          70s
    calico-node-d9r9r                          1/1     Running   0          70s
    coredns-7f89b7bc75-87l8p                   0/1     Running   0          23h
    coredns-7f89b7bc75-tp68c                   1/1     Running   0          23h
    etcd-master1                               1/1     Running   1          23h
    etcd-master2                               1/1     Running   0          45m
    kube-apiserver-master1                     1/1     Running   2          23h
    kube-apiserver-master2                     1/1     Running   2          45m
    kube-controller-manager-master1            1/1     Running   2          23h
    kube-controller-manager-master2            1/1     Running   0          44m
    kube-proxy-497qc                           1/1     Running   1          23h
    kube-proxy-cjtct                           1/1     Running   0          44m
    kube-proxy-z8mt5                           1/1     Running   0          12m
    kube-scheduler-master1                     1/1     Running   2          23h
    kube-scheduler-master2                     1/1     Running   0          44m
    # Check the nodes
    [root@master1 huo-l]# kubectl get node
    NAME      STATUS   ROLES                  AGE   VERSION
    master1   Ready    control-plane,master   23h   v1.20.6
    master2   Ready    control-plane,master   46m   v1.20.6
    node1     Ready    worker                 12m   v1.20.6
    

    All nodes are now in the Ready state.

  • Upload the image archive busybox-1-28.tar.gz to node1 and load it:
  • [root@node1 huo-l]# docker load -i busybox-1-28.tar.gz
    432b65032b94: Loading layer [==================================================>]   1.36MB/1.36MB
    Loaded image: busybox:1.28
    

    Start a test Pod

    [root@master1 huo-l]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
    If you don't see a command prompt, try pressing enter.
    / # ping www.baidu.com
    PING www.baidu.com (39.156.66.18): 56 data bytes
    64 bytes from 39.156.66.18: seq=0 ttl=127 time=30.573 ms
    64 bytes from 39.156.66.18: seq=1 ttl=127 time=30.152 ms
    64 bytes from 39.156.66.18: seq=2 ttl=127 time=30.355 ms
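
    The same busybox shell can also be used to confirm that cluster DNS (coredns) works, as an optional check; a successful lookup resolves the name to the kubernetes Service ClusterIP, 10.10.0.1 in this cluster:

    # Run inside the busybox shell started above
    / # nslookup kubernetes.default.svc.cluster.local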
    

    Note: run this on a control node.

    Deploy a Pod

    Upload tomcat.tar.gz

    Upload it to the worker node (node1) and load it:

    [root@node1 huo-l]# docker load -i tomcat.tar.gz
    f1b5933fe4b5: Loading layer [==================================================>]  5.796MB/5.796MB
    9b9b7f3d56a0: Loading layer [==================================================>]  3.584kB/3.584kB
    edd61588d126: Loading layer [==================================================>]  80.28MB/80.28MB
    48988bb7b861: Loading layer [==================================================>]   2.56kB/2.56kB
    8e0feedfd296: Loading layer [==================================================>]  24.06MB/24.06MB
    aac21c2169ae: Loading layer [==================================================>]  2.048kB/2.048kB
    Loaded image: tomcat:8.5-jre8-alpine
    

    Upload the two tomcat YAML files from the attachments.

    Run on the control node:

    # Create the Pod
    [root@master1 huo-l]# kubectl apply -f tomcat.yaml
    pod/demo-pod created
    # Create the Service
    [root@master1 huo-l]# kubectl apply -f tomcat-service.yaml
    service/tomcat created
    You have new mail in /var/spool/mail/root
    # Check the Pod
    [root@master1 huo-l]# kubectl get pod -o wide
    NAME       READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
    demo-pod   1/1     Running   0          27s   10.244.166.133   node1   <none>           <none>
    # Check the Service
    [root@master1 huo-l]# kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    kubernetes   ClusterIP   10.10.0.1       <none>        443/TCP          23h
    tomcat       NodePort    10.10.172.129   <none>        8080:30080/TCP   20s
    
    http://192.168.40.180:30080
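
    The NodePort can also be checked from the command line on any node (an optional check; 30080 is the nodePort shown above):

    # Should return an HTTP response from Tomcat (e.g. HTTP/1.1 200)
    curl -I http://192.168.40.180:30080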
    

    Opening http://192.168.40.180:30080 in a browser shows the Tomcat page, which confirms the deployment succeeded.

    With that, the Kubernetes cluster deployment is complete.
