
This is a freshly installed CentOS 7.7 system.

./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0
+-----------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name                  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-----------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| localhost.localdomain | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 14:47:29 |
+-----------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[14:47:55 CST] Downloading Installation Files               
INFO[14:47:55 CST] Downloading kubeadm ...                      
INFO[14:48:32 CST] Downloading kubelet ...                      
INFO[14:50:17 CST] Downloading kubectl ...                      
INFO[14:50:57 CST] Downloading helm ...                         
INFO[14:51:35 CST] Downloading kubecni ...                      
INFO[14:52:08 CST] Configurating operating system ...           
[localhost.localdomain 192.168.0.231] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
no crontab for root
INFO[14:52:09 CST] Installing docker ...                        
INFO[14:52:09 CST] Start to download images on all nodes        
[localhost.localdomain] Downloading image: kubesphere/etcd:v3.3.12
[localhost.localdomain] Downloading image: kubesphere/pause:3.1
[localhost.localdomain] Downloading image: kubesphere/kube-apiserver:v1.17.9
[localhost.localdomain] Downloading image: kubesphere/kube-controller-manager:v1.17.9
[localhost.localdomain] Downloading image: kubesphere/kube-scheduler:v1.17.9
[localhost.localdomain] Downloading image: kubesphere/kube-proxy:v1.17.9
[localhost.localdomain] Downloading image: coredns/coredns:1.6.9
[localhost.localdomain] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[localhost.localdomain] Downloading image: calico/kube-controllers:v3.15.1
[localhost.localdomain] Downloading image: calico/cni:v3.15.1
[localhost.localdomain] Downloading image: calico/node:v3.15.1
[localhost.localdomain] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[14:57:09 CST] Generating etcd certs                        
INFO[14:57:09 CST] Synchronizing etcd certs                     
INFO[14:57:09 CST] Creating etcd service                        
[localhost.localdomain 192.168.0.231] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
INFO[14:57:11 CST] Starting etcd cluster                        
[localhost.localdomain 192.168.0.231] MSG:
Configuration file will be created
INFO[14:57:11 CST] Refreshing etcd configuration                
Waiting for etcd to start
INFO[14:57:16 CST] Backup etcd data regularly                   
INFO[14:57:16 CST] Get cluster status                           
[localhost.localdomain 192.168.0.231] MSG:
Cluster will be created.
INFO[14:57:16 CST] Installing kube binaries                     
Push /root/kubekey/v1.17.9/amd64/kubeadm to 192.168.0.231:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.17.9/amd64/kubelet to 192.168.0.231:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.17.9/amd64/kubectl to 192.168.0.231:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.17.9/amd64/helm to 192.168.0.231:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.0.231:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
ERRO[14:57:49 CST] Failed to enable kubelet service: Failed to exec command: sudo -E /bin/sh -c "systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet" 
Failed to execute operation: File exists: Process exited with status 1  node=192.168.0.231
WARN[14:57:49 CST] Task failed ...                              
WARN[14:57:49 CST] error: interrupted by error                  
Error: Failed to install kube binaries: interrupted by error
Usage:
  kk create cluster [flags]
Flags:
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                      Skip pre-check of the installation
Global Flags:
      --debug   Print detailed information (default true)
Failed to install kube binaries: interrupted by error

How can I fix this?
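
Since "ln -snf" silently replaces an existing link, the "File exists" almost certainly comes from the "systemctl enable kubelet" part of the failing command. As a first check it is worth seeing which kubelet-related files already exist on the node. A minimal diagnostic sketch (the paths are the defaults used by KubeKey and by the kubelet RPM, not taken from this log):

systemctl is-enabled kubelet                                                         # is a kubelet unit already enabled?
ls -l /etc/systemd/system/kubelet.service /usr/lib/systemd/system/kubelet.service    # which unit files are present
ls -l /etc/systemd/system/multi-user.target.wants/kubelet.service                    # the enable symlink systemctl would create
ls -l /usr/bin/kubelet /usr/local/bin/kubelet                                        # the link and target of the ln -snf step
rpm -qa | grep -E 'kubelet|kubeadm'                                                  # any kubelet/kubeadm packages installed earlier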

Forest-L

[root@localhost ~]# ./kk delete cluster
Are you sure to delete this cluster? [yes/no]: yes
INFO[10:15:48 CST] Resetting kubernetes cluster ...             
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
[preflight] Running pre-flight checks
W0107 10:15:48.419158  122699 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
sudo -E /bin/sh -c "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && ip link del kube-ipvs0 && ip link del nodelocaldns"
INFO[10:15:50 CST] Successful.
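
The reset output above also lists what is not cleaned up automatically. On a single test node that is about to be reinstalled anyway, a minimal manual cleanup sketch for those leftovers (only if nothing else on the machine needs them; the iptables flush was already issued by kk, as shown in the MSG above):

rm -rf /etc/cni/net.d        # CNI configuration left over from the previous attempt
rm -f  $HOME/.kube/config    # stale kubeconfig mentioned by the reset output
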
[root@localhost ~]# systemctl stop kubelet
[root@localhost ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1011/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1330/master
tcp6       0      0 :::22                   :::*                    LISTEN      1011/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1330/master
[root@localhost ~]# ./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0
+-------------------------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name                                | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-------------------------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| localhost.localdomain.cluster.local | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 10:16:20 |
+-------------------------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[10:16:25 CST] Downloading Installation Files
INFO[10:16:25 CST] Downloading kubeadm ...
INFO[10:16:25 CST] Downloading kubelet ...
INFO[10:16:25 CST] Downloading kubectl ...
INFO[10:16:25 CST] Downloading helm ...
INFO[10:16:25 CST] Downloading kubecni ...
INFO[10:16:25 CST] Configurating operating system ...
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[10:16:26 CST] Installing docker ...
INFO[10:16:26 CST] Start to download images on all nodes
[localhost.localdomain.cluster.local] Downloading image: kubesphere/etcd:v3.3.12
[localhost.localdomain.cluster.local] Downloading image: kubesphere/pause:3.1
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-apiserver:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-controller-manager:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-scheduler:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-proxy:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: coredns/coredns:1.6.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[localhost.localdomain.cluster.local] Downloading image: calico/kube-controllers:v3.15.1
[localhost.localdomain.cluster.local] Downloading image: calico/cni:v3.15.1
[localhost.localdomain.cluster.local] Downloading image: calico/node:v3.15.1
[localhost.localdomain.cluster.local] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[10:18:27 CST] Generating etcd certs
INFO[10:18:28 CST] Synchronizing etcd certs
INFO[10:18:28 CST] Creating etcd service
INFO[10:18:29 CST] Starting etcd cluster
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
Configuration file will be created
INFO[10:18:29 CST] Refreshing etcd configuration
Waiting for etcd to start
INFO[10:18:34 CST] Backup etcd data regularly
INFO[10:18:34 CST] Get cluster status
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
Cluster will be created.
INFO[10:18:34 CST] Installing kube binaries
Push /root/kubekey/v1.17.9/amd64/kubeadm to 192.168.0.231:/tmp/kubekey/kubeadm   Done
Push /root/kubekey/v1.17.9/amd64/kubelet to 192.168.0.231:/tmp/kubekey/kubelet   Done
Push /root/kubekey/v1.17.9/amd64/kubectl to 192.168.0.231:/tmp/kubekey/kubectl   Done
Push /root/kubekey/v1.17.9/amd64/helm to 192.168.0.231:/tmp/kubekey/helm   Done
Push /root/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.0.231:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
ERRO[10:19:06 CST] Failed to enable kubelet service: Failed to exec command: sudo -E /bin/sh -c "systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
Failed to execute operation: File exists: Process exited with status 1  node=192.168.0.231
WARN[10:19:06 CST] Task failed ...
WARN[10:19:06 CST] error: interrupted by error
Error: Failed to install kube binaries: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                      Skip pre-check of the installation

Global Flags:
      --debug   Print detailed information (default true)

Failed to install kube binaries: interrupted by error

I followed your suggestion, but it still fails in the same way. The OS on this machine is freshly installed; the only thing I installed manually was Docker, nothing else.

[root@localhost ~]# journalctl -u kubelet -f
-- Logs begin at 二 2021-01-05 14:15:20 CST. --
1月 07 10:15:37 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:15:37 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service holdoff time over, scheduling restart.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:15:47 localhost.localdomain.cluster.local kubelet[122677]: F0107 10:15:47.398417 122677 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:15:48 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
[root@localhost ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead) (Result: exit-code) since 四 2021-01-07 10:15:48 CST; 21min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 122677 (code=exited, status=255)

1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:15:48 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
[root@localhost ~]# ls /var/lib/kubelet/config.yaml
ls: cannot access /var/lib/kubelet/config.yaml: No such file or directory

The config.yaml file doesn't exist.
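
/var/lib/kubelet/config.yaml is normally written by kubeadm when the cluster is actually initialized, a step kk only reaches after the kube binaries are installed, so the file being missing at this point is expected rather than the root cause; the crash loop is just the already-installed kubelet unit restarting with nothing to read. To quiet it while debugging (assuming the unit shown in the systemctl status output above), something like:

systemctl stop kubelet    # stop the restart loop; kk/kubeadm will start kubelet again during a successful install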

[root@localhost ~]# journalctl -u kubelet -f
-- Logs begin at 二 2021-01-05 14:15:20 CST. --
1月 07 10:15:37 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:15:37 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service holdoff time over, scheduling restart.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:15:47 localhost.localdomain.cluster.local kubelet[122677]: F0107 10:15:47.398417 122677 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:15:47 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:15:48 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:50:19 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:50:19 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=203/EXEC
1月 07 10:50:19 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:50:19 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:50:29 localhost.localdomain.cluster.local systemd[1]: kubelet.service holdoff time over, scheduling restart.
1月 07 10:50:29 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:50:29 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:50:29 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=203/EXEC
1月 07 10:50:29 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:50:29 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:50:33 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:50:33 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:50:33 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=203/EXEC
1月 07 10:50:33 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:50:33 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:50:43 localhost.localdomain.cluster.local systemd[1]: kubelet.service holdoff time over, scheduling restart.
1月 07 10:50:43 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:50:43 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:50:43 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=203/EXEC
1月 07 10:50:43 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:50:43 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:50:53 localhost.localdomain.cluster.local systemd[1]: kubelet.service holdoff time over, scheduling restart.
1月 07 10:50:53 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:50:53 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:50:53 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=203/EXEC
1月 07 10:50:53 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:50:53 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.
1月 07 10:51:03 localhost.localdomain.cluster.local systemd[1]: kubelet.service holdoff time over, scheduling restart.
1月 07 10:51:03 localhost.localdomain.cluster.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
1月 07 10:51:03 localhost.localdomain.cluster.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
1月 07 10:51:03 localhost.localdomain.cluster.local systemd[1]: kubelet.service: main process exited, code=exited, status=203/EXEC
1月 07 10:51:03 localhost.localdomain.cluster.local systemd[1]: Unit kubelet.service entered failed state.
1月 07 10:51:03 localhost.localdomain.cluster.local systemd[1]: kubelet.service failed.

It just keeps printing this over and over.
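
status=203/EXEC means systemd could not execute the kubelet binary at all, i.e. the path in ExecStart is missing or not executable (unlike the earlier 255 exits, where the binary ran but aborted on the missing config.yaml). A quick way to see what the unit is actually trying to run:

systemctl cat kubelet                            # prints the unit file plus drop-ins, including ExecStart
ls -l /usr/bin/kubelet /usr/local/bin/kubelet    # check whether the ExecStart path really exists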

Cauchy

[root@localhost ~]# sudo -E /bin/sh -c "systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
Failed to execute operation: File exists

[root@localhost ~]# rm -fr /usr/local/bin/kubelet
[root@localhost ~]# which kubelet
/usr/bin/which: no kubelet in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
[root@localhost ~]# ./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0
+-------------------------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name                                | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-------------------------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| localhost.localdomain.cluster.local | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 11:09:40 |
+-------------------------------------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[11:09:42 CST] Downloading Installation Files
INFO[11:09:42 CST] Downloading kubeadm ...
INFO[11:09:42 CST] Downloading kubelet ...
INFO[11:09:43 CST] Downloading kubectl ...
INFO[11:09:43 CST] Downloading helm ...
INFO[11:09:43 CST] Downloading kubecni ...
INFO[11:09:43 CST] Configurating operating system ...
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
INFO[11:09:44 CST] Installing docker ...
INFO[11:09:44 CST] Start to download images on all nodes
[localhost.localdomain.cluster.local] Downloading image: kubesphere/etcd:v3.3.12
[localhost.localdomain.cluster.local] Downloading image: kubesphere/pause:3.1
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-apiserver:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-controller-manager:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-scheduler:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/kube-proxy:v1.17.9
[localhost.localdomain.cluster.local] Downloading image: coredns/coredns:1.6.9
[localhost.localdomain.cluster.local] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[localhost.localdomain.cluster.local] Downloading image: calico/kube-controllers:v3.15.1
[localhost.localdomain.cluster.local] Downloading image: calico/cni:v3.15.1
[localhost.localdomain.cluster.local] Downloading image: calico/node:v3.15.1
[localhost.localdomain.cluster.local] Downloading image: calico/pod2daemon-flexvol:v3.15.1
INFO[11:13:12 CST] Generating etcd certs
INFO[11:13:12 CST] Synchronizing etcd certs
INFO[11:13:12 CST] Creating etcd service
INFO[11:13:14 CST] Starting etcd cluster
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
Configuration file already exists
Waiting for etcd to start
INFO[11:13:22 CST] Refreshing etcd configuration
INFO[11:13:22 CST] Backup etcd data regularly
INFO[11:13:23 CST] Get cluster status
[localhost.localdomain.cluster.local 192.168.0.231] MSG:
Cluster will be created.
INFO[11:13:23 CST] Installing kube binaries
Push /root/kubekey/v1.17.9/amd64/kubeadm to 192.168.0.231:/tmp/kubekey/kubeadm Done
Push /root/kubekey/v1.17.9/amd64/kubelet to 192.168.0.231:/tmp/kubekey/kubelet Done
Push /root/kubekey/v1.17.9/amd64/kubectl to 192.168.0.231:/tmp/kubekey/kubectl Done
Push /root/kubekey/v1.17.9/amd64/helm to 192.168.0.231:/tmp/kubekey/helm Done
Push /root/kubekey/v1.17.9/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 192.168.0.231:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
ERRO[11:13:54 CST] Failed to enable kubelet service: Failed to exec command: sudo -E /bin/sh -c "systemctl enable kubelet && ln -snf /usr/local/bin/kubelet /usr/bin/kubelet"
Failed to execute operation: File exists: Process exited with status 1  node=192.168.0.231
WARN[11:13:54 CST] Task failed ...
WARN[11:13:54 CST] error: interrupted by error
Error: Failed to install kube binaries: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)
  -y, --yes                      Skip pre-check of the installation

Global Flags:
      --debug   Print detailed information (default true)

Failed to install kube binaries: interrupted by error
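
The error is unchanged even after /usr/local/bin/kubelet was removed, which again points at the systemctl enable step rather than the ln step (ln -snf overwrites an existing link without complaint). On CentOS 7, "systemctl enable" can fail with "Failed to execute operation: File exists" when an enable symlink for the unit already exists but points at a different unit file, for example one installed by the kubelet/kubeadm RPMs (the Drop-In under /usr/lib/systemd/system/kubelet.service.d/ in the systemctl status output above normally comes from those packages, while kk writes its unit to /etc/systemd/system/kubelet.service). A cleanup sketch under that assumption, before re-running kk:

systemctl stop kubelet
systemctl disable kubelet                                           # remove the existing enable symlink, whichever unit it points at
rm -f /etc/systemd/system/multi-user.target.wants/kubelet.service   # in case a dangling link is left behind
systemctl daemon-reload
./kk create cluster --with-kubernetes v1.17.9 --with-kubesphere v3.0.0

If rpm -qa shows a previously installed kubelet or kubeadm package, removing those packages first (yum remove kubelet kubeadm) is another option, since kk installs its own binaries.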