chmod +x ./easzup
# download the kubernetes images
./easzup -D
After it finishes, all the required files have been downloaded under the /etc/ansible directory.
If Ansible is not installed yet, install it following the commands in boot.sh.
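For reference, a typical sequence on an Ubuntu host looks like this; treat it as a sketch only, since boot.sh is authoritative and the apt/pip package names here are assumptions about a Debian-based system:

sudo apt-get update
sudo apt-get install -y python3-pip
sudo pip3 install ansible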
Configure the VM information
cp example/hosts.multi-node hosts
The hosts file describes the VMs and the Kubernetes-related settings; you can adjust it based on your Vagrant configuration and your own needs. My hosts file is as follows:
10.184.0.131 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/etc/vagrant/.vagrant/machines/n1/virtualbox/private_key
10.184.0.132 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=/etc/vagrant/.vagrant/machines/n2/virtualbox/private_key
10.184.0.133 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2202 ansible_ssh_private_key_file=/etc/vagrant/.vagrant/machines/n3/virtualbox/private_key
[local]
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
10.184.0.131 NODE_NAME=etcd1
10.184.0.132 NODE_NAME=etcd2
10.184.0.133 NODE_NAME=etcd3
# master node(s)
[kube-master]
10.184.0.131
10.184.0.132
# worker node(s)
[kube-node]
10.184.0.133
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with an existing one
# 'SELF_SIGNED_CERT': if 'no', you need to put certificate files named harbor.pem and harbor-key.pem in the 'down' directory
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes
# [optional] load balancer for accessing k8s from outside
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=10.184.0.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=10.184.0.250 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
#10.184.0.131
[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="iptables"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"
# NodePort Range
NODE_PORT_RANGE="20000-40000"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
# Calico
IP_AUTODETECTION_METHOD="interface=enp0s8"
The SSH connection settings can be obtained by running vagrant ssh-config in the Vagrant directory.
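The output of vagrant ssh-config looks roughly like this (trimmed; host names, ports, and key paths depend on your Vagrantfile):

Host n1
  HostName 127.0.0.1
  User vagrant
  Port 2200
  IdentityFile /etc/vagrant/.vagrant/machines/n1/virtualbox/private_key

HostName, Port, and IdentityFile map to ansible_ssh_host, ansible_ssh_port, and ansible_ssh_private_key_file in the hosts file above.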
Modify the network plugin configuration. Since I use calico, I changed the following settings in the hosts file:

CLUSTER_NETWORK="calico"
IP_AUTODETECTION_METHOD="interface=enp0s8"

If IP_AUTODETECTION_METHOD is not set, it defaults to can-reach xxx, which very likely leaves the cluster nodes unable to reach each other. The Vagrant private-network NIC is named enp0s8.
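If you are not sure what the private-network NIC is called in your boxes (it depends on the base box), you can check from the host machine; n1 here is an assumed machine name from the Vagrantfile:

vagrant ssh n1 -c "ip -o -4 addr show"

The interface that carries the 10.184.0.x address is the one to pass to IP_AUTODETECTION_METHOD.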
Modify the ansible.cfg configuration file
Add the following setting so that Ansible connects as the vagrant user:
remote_user = vagrant
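Note that remote_user belongs in the [defaults] section of ansible.cfg; a minimal sketch:

[defaults]
remote_user = vagrant
host_key_checking = False

host_key_checking = False is an optional extra assumed here; it suppresses the interactive host-key prompts for freshly created VMs.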
Add the become: true setting to the following Ansible playbook YAML files:
01.prepare.yml
02.etcd.yml
03.docker.yml
04.kube-master.yml
05.kube-node.yml
06.network.yml
07.cluster-addon.yml
The reason for this change is that without become: true, many operations that require root privileges, such as apt updates, will fail and the run cannot continue. The change looks roughly like the sketch below.
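A minimal sketch of the edit, using 01.prepare.yml as an example; the actual hosts pattern and role list differ between kubeasz versions, so only the become: true line is the point here:

- hosts: all
  become: true
  roles:
  - prepare

After editing all seven playbooks, run them in order: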
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml
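Once all playbooks complete, you can sanity-check the cluster from a master node; kubeasz places kubectl under the bin_dir configured above (/opt/kube/bin):

kubectl get nodes

All nodes should eventually report Ready once calico is up.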