Error during k8s installation: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kub

This error shows up while running kubeadm init. I had always assumed the kubelet had to start successfully before initialization could proceed; it turned out to be the other way around: the kubelet only starts successfully after the init has completed.
Both problems ultimately came down to the init configuration: the advertised IP address must be an address that actually exists on the local host. Even a virtual IP is fine, as long as it has already floated over to this host. I am only offering this as a troubleshooting direction. I spent a long time on it, fiddled with the kubelet and Docker cgroup drivers, and tried many suggestions found online, none of which helped. One of my hosts succeeded while the other two kept hitting this error, because when I synced the config files over I still had to change the IP address to each host's own address. I also ran into things like a 40s timeout; all of it went away once I fixed the init configuration.
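
A minimal checklist I would run before retrying kubeadm init (a sketch, not from the original post; it assumes Docker is the container runtime, and the daemon.json edit is the generic way to switch Docker to the systemd cgroup driver that the preflight warning below recommends):

# 1. Confirm the address you will pass as --apiserver-advertise-address exists on this host
ip addr show | grep 'inet '
# 2. Check which cgroup driver Docker is using (the init log below warns when it is "cgroupfs")
docker info 2>/dev/null | grep -i 'cgroup driver'
# 3. Assumed fix: switch Docker to the systemd driver (merge by hand if daemon.json already has content)
cat <<'EOF' > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker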

[root@master ~]# systemctl status kubelet.service 
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: inactive (dead) (Result: exit-code) since 四 2022-07-28 17:47:56 CST; 2h 2min ago
     Docs: https://kubernetes.io/docs/
  Process: 20968 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 20968 (code=exited, status=1/FAILURE)
7月 28 17:47:56 master systemd[1]: Unit kubelet.service entered failed state.
7月 28 17:47:56 master systemd[1]: kubelet.service failed.
7月 28 17:47:56 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
 

[root@master ~]# kubeadm init
I0728 19:51:30.106467   24188 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.21
[init] Using Kubernetes version: v1.21.14
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
 

[root@master ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0728 19:53:03.440809   24432 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.161.16:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.161.16:6443: connect: no route to host
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0728 19:53:54.046924   24432 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0728 19:53:54.058244   24432 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.161.11 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.21.3 --service-cidr=10.125.0.0/16 --pod-network-cidr=10.150.0.0/16
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.125.0.1 192.168.161.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.161.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.161.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.503026 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: x1dgfp.f58qfvz3w7htcf8g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.161.11:6443 --token x1dgfp.f58qfvz3w7htcf8g \
	--discovery-token-ca-cert-hash sha256:9f799adc1da18f3ac27c3ba2d813f74ef05dbf8ad75ecd651e7927477e5c8c85 
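
For reference, the follow-up steps on the master are taken straight from the init output above; only the pod-network manifest is a placeholder you replace with the add-on of your choice (e.g. Flannel or Calico, with the manifest URL taken from that project's documentation):

[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master ~]# kubectl apply -f [podnetwork].yaml
[root@master ~]# kubectl get nodes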

Error when joining the cluster

[root@node01 ~]# kubeadm join 192.168.161.11:6443 --token x1dgfp.f58qfvz3w7htcf8g \
> --discovery-token-ca-cert-hash sha256:9f799adc1da18f3ac27c3ba2d813f74ef05dbf8ad75ecd651e7927477e5c8c85 
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
 

[root@node01 ~]# rm -rf /etc/kubernetes/manifests /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt
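
An alternative to deleting these files by hand (my assumption, based on the kubeadm reset output shown earlier, which removes exactly these manifests, kubelet.conf and pki files) is to reset the worker node before retrying the join:

[root@node01 ~]# kubeadm reset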

[root@node01 ~]# kubeadm join 192.168.161.11:6443 --token x1dgfp.f58qfvz3w7htcf8g --discovery-token-ca-cert-hash sha256:9f799adc1da18f3ac27c3ba2d813f74ef05dbf8ad75ecd651e7927477e5c8c85 
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
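
A quick verification from the control-plane (not part of the original log; node01 may report NotReady until the pod network add-on mentioned in the init output is deployed):

[root@master ~]# kubectl get nodes
[root@master ~]# kubectl get pods -n kube-system -o wide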