
hi folks

after updating my server, I'm not able to restart Kubernetes (master & nodes).
the server was set up with kubeadm.

Feb  6 10:34:25 chgvascldp99 agent: 2019-02-06 10:34:25 CET | ERROR | (domain_forwarder.go:106 in retryTransactions) | Dropped 2 transactions in this retry attempt: 2 for exceeding the retry queue size limit of 30, 0 because the workers are too busy
Feb  6 10:34:25 chgvascldp99 agent: 2019-02-06 10:34:25 CET | ERROR | (config_poller.go:121 in collect) | Unable to collect configurations from provider kubernetes: permanent failure in kubeutil: retry number exceeded
Feb  6 10:34:26 chgvascldp99 systemd: kubelet.service holdoff time over, scheduling restart.
Feb  6 10:34:26 chgvascldp99 systemd: Stopped kubelet: The Kubernetes Node Agent.
Feb  6 10:34:26 chgvascldp99 systemd: Started kubelet: The Kubernetes Node Agent.
Feb  6 10:34:26 chgvascldp99 kubelet: F0206 10:34:26.662744   27634 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Feb  6 10:34:26 chgvascldp99 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Feb  6 10:34:26 chgvascldp99 systemd: Unit kubelet.service entered failed state.
Feb  6 10:34:26 chgvascldp99 systemd: kubelet.service failed.
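for reference, a quick sketch for confirming this state on a node (standard tools, nothing kubeadm-specific):

# check whether the kubelet config file is actually present
ls -l /var/lib/kubelet/config.yaml
# inspect the most recent kubelet logs for the failure reason
sudo journalctl -u kubelet -n 50 --no-pager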

I've read on another issue on GitHub (the third post on that link) that someone said to run

kubeadm init per the setup instructions to create the required configuration.

But in his case it was after a fresh install. I don't know if I have to do the same?

Do you know why the config.yaml file disappeared?

server kernel: 3.10.0-957.5.1.el7.x86_64
kubectl: Major:"1", Minor:"13", GitVersion:"v1.13.3", GoVersion:"go1.11.5"
Kubernetes: v1.13.3

sig/error_kubelet_config_file

hi, what do you mean by updating the server?
did you update the underlying OS or only the kubeadm / kubelet binaries - e.g. following our upgrade instructions in the user manual?

the file should not be deleted unless a third party tool deleted it or if you've reset the k8s node.

/priority awaiting-more-evidence
/area kubeadm
/sig cluster-lifecycle

by updating the server I mean the server OS.

I didn't update Kubernetes, kubeadm, or the other tools for the moment; that was the second step.
I didn't reset any nodes.
On all of my nodes and the master, the file is missing after the server update and reboot.

A question: if I run kubeadm init, will I lose everything, or will it just recreate the file?

looks like your OS update nuked the file.
kubeadm reset will delete the cluster and kubeadm init will create a new one, but maybe you don't want that.

try writing this file to the /var/lib/kubelet/config.yaml location.
it's the default kubelet v1beta1 config for kubeadm.

address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
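once saved, a minimal sketch of putting the file in place and restarting the kubelet (the local filename config.yaml below is just an example):

# copy the saved config into the path the kubelet expects
sudo cp config.yaml /var/lib/kubelet/config.yaml
# restart the kubelet and confirm it stays up
sudo systemctl restart kubelet
sudo systemctl status kubelet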

But for the nodes, when I try to run the first command I executed at the beginning when I set them up:

kubeadm join 10.109.0.80:6443 --ignore-preflight-errors --token 3i2jzo.gicrm3jjbz0y64zg --discovery-token-ca-cert-hash sha256:5b20e87a257ea5551d8f5b3e1d502de099b485011d6b0e6062ad571fa97f5acb                                                                                                                                                                  
W0208 15:48:09.373634   14719 join.go:185] [join] WARNING: More than one API server endpoint supplied on command line [10.109.0.80:6443 3i2jzo.gicrm3jjbz0y64zg]. Using the first one.        
[discovery.bootstrapToken.token: Invalid value: "": the bootstrap token is invalid, discovery.tlsBootstrapToken: Invalid value: "": the bootstrap token is invalid]   

So I tried to recreate the token:

kubeadm token create
failed to load admin kubeconfig: open /root/.kube/config: no such file or directory

There is no file at that path :(

the correct way to use kubeadm is with sudo ... and not from inside a root terminal.

also kubeadm token create does not need sudo or root.
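if the admin kubeconfig still exists under /etc/kubernetes, kubeadm can also be pointed at it explicitly; a sketch (needs read access to that file):

# list existing bootstrap tokens
kubeadm token list --kubeconfig /etc/kubernetes/admin.conf
# create a fresh token and print the full join command for the nodes
kubeadm token create --print-join-command --kubeconfig /etc/kubernetes/admin.conf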

for what user did you install the kubeconfig file after kubeadm init finished?
try looking for the config file in that user's home directory.
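for reference, these are the standard steps kubeadm init prints at the end for installing the kubeconfig for a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config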

--ignore-preflight-errors needs a value. try this.

kubeadm join 10.109.0.80:6443 --ignore-preflight-errors=all --token 3i2jzo.gicrm3jjbz0y64zg --discovery-token-ca-cert-hash sha256:5b20e87a257ea5551d8f5b3e1d502de099b485011d6b0e6062ad571fa97f5acb

but it seems to me it would be better to start from scratch.

i'm going to have to close this issue as it's not a kubeadm bug but we can continue the discussion.
/close


@neolit123 I see the kind is KubeletConfiguration.

I am trying to use a YAML with kind MasterConfiguration. Will this replace that, or do I also need a kind InitConfiguration?

KubeletConfiguration is a completely different type and you cannot use MasterConfiguration in place of it.
MasterConfiguration is an old kubeadm type that is now deprecated.
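for context, in the v1beta1 kubeadm API it was split into InitConfiguration and ClusterConfiguration; a minimal sketch (kubernetesVersion taken from this thread):

apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.3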

I fixed the first issue when I recreated the Role and the RoleBinding.

create Role:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kube-system
  name: kubeadm:kubeadm-config
rules:
- apiGroups:
  - ""
  resourceNames:
  - kubeadm-config
  resources:
  - configmaps
  verbs:
  - get
EOF

Create RoleBinding:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: kubeadm:kubeadm-config
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:kubeadm-config
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:kubeadm:default-node-token
EOF
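to confirm both objects were created, something like:

kubectl -n kube-system get role kubeadm:kubeadm-config -o yaml
kubectl -n kube-system get rolebinding kubeadm:kubeadm-config -o yaml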

Unfortunately, when I run the command on my node to join the master, I now get:

[discovery] Successfully established connection with API Server "10.109.0.80:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: unexpected error when reading kubeadm-config ConfigMap: ClusterConfiguration key value pair missing

Do you have an idea how I can fix it?
Or do you think it's much better to start a new installation from scratch?
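for what it's worth, the ConfigMap named in the log hint can be inspected directly, and the missing key edited back in by hand; a sketch:

# dump the kubeadm-config ConfigMap; its data section should contain a ClusterConfiguration key
kubectl -n kube-system get cm kubeadm-config -o yaml
# if the key is missing, it can be added back manually
kubectl -n kube-system edit cm kubeadm-config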

/reopen

installing 1.16.2 has this issue too:

Oct 10 15:21:43 ky001 kubelet[19129]: F1010 07:21:43.650488   19129 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config fil
Oct 10 15:21:43 ky001 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Oct 10 15:21:43 ky001 systemd[1]: kubelet.service: Failed with result 'exit-code'.

I got the same error messages even though the file /var/lib/kubelet/config.yaml did exist. After running kubeadm init I found out that swap was re-enabled after a restart.

So doing a swapoff -a fixed it for me.
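to keep swap off across reboots, the swap entries in /etc/fstab have to be disabled as well; a sketch (sed -i.bak keeps a backup of the original file):

# turn swap off immediately
sudo swapoff -a
# comment out any swap lines in /etc/fstab so it stays off after reboot
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab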