
Hello, I hope this is the right place to post my issue. Forgive me if it isn't, and please redirect me to the right place.

I am trying to install a cluster with one master (server-1) and one minion (server-2), running on Ubuntu, using flannel for networking and kubeadm to install the master and minion. I am trying to run the dashboard from the minion server-2, as discussed here. I am very new to Kubernetes and not an expert in Linux networking, so any help would be appreciated. The dashboard is not working, and after some investigation it seems to be a DNS issue.

kubectl and kubeadm: 1.6.6
Docker: 17.03.1-ce

My DNS service is up and exposing endpoints:

ubuntu@server-1:~$ kubectl get svc --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1       <none>        443/TCP         20h
kube-system   kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   20h
kube-system   kubernetes-dashboard   10.97.135.242   <none>        80/TCP          3h
ubuntu@server-1:~$ kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                     AGE
kube-dns   10.244.0.4:53,10.244.0.4:53   17h

I created a busybox pod, and when I run nslookup from it I get the following errors. Note that the command hangs for some time before returning the error.

ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup kubernetes.local
Server:    10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.local'
ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes'
ubuntu@server-1:~$ kubectl exec -ti busybox -- nslookup 10.96.0.1
Server:    10.96.0.10
Address 1: 10.96.0.10
Name:      10.96.0.1
Address 1: 10.96.0.1

resolv.conf seems to be properly configured:

ubuntu@server-1:~$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local local
options ndots:5
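For context on why `kubernetes.default` is expected to resolve here: with `ndots:5`, any name with fewer than five dots has the `search` domains appended before the literal name is tried. Below is a simplified model of that candidate-name expansion (a sketch of standard resolver behavior, not actual glibc/musl code):

```python
# Simplified model of how a resolver expands a name using the search
# list and ndots from the resolv.conf shown above.

def candidate_names(name,
                    search=("default.svc.cluster.local",
                            "svc.cluster.local",
                            "cluster.local",
                            "local"),
                    ndots=5):
    """Return the fully qualified names tried, in order."""
    if name.endswith("."):          # absolute name: tried as-is, nothing else
        return [name]
    candidates = []
    if name.count(".") >= ndots:    # "dotty" enough: literal name tried first
        candidates.append(name + ".")
    candidates += [f"{name}.{suffix}." for suffix in search]
    if name.count(".") < ndots:     # otherwise the literal name is tried last
        candidates.append(name + ".")
    return candidates

# "kubernetes.default" has only 1 dot (< 5), so the search domains are
# tried first; the second candidate is the one that should succeed:
print(candidate_names("kubernetes.default")[1])
# -> kubernetes.default.svc.cluster.local.
```

So a working kube-dns should answer for `kubernetes.default.svc.cluster.local.`; the failures above mean the queries never get a useful answer at all, not that the name is wrong.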

DNS pod is running

ubuntu@server-1:~$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                       READY     STATUS    RESTARTS   AGE
kube-dns-692378583-5zj21   3/3       Running   0          17h

Here are the iptables rules from server-1:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            
Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-ISOLATION  all  --  anywhere             anywhere            
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            
Chain DOCKER (1 references)
target     prot opt source               destination         
Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            
Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
REJECT     tcp  --  anywhere             10.103.141.154       /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable

Here are the iptables rules from server-2:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            
Chain FORWARD (policy DROP)
target     prot opt source               destination         
DOCKER-ISOLATION  all  --  anywhere             anywhere            
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            
Chain DOCKER (1 references)
target     prot opt source               destination         
Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            
Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
REJECT     tcp  --  anywhere             10.103.141.154       /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
          

Can you check the connectivity between a pod and the DNS server?

kubectl run -it alpine --image=alpine -- sh
$ nc 10.96.0.10 53

It should say something like 10.96.0.10 (10.96.0.10:53) open if it's able to connect.
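The same reachability check can be done without nc. A minimal sketch of what `nc 10.96.0.10 53` tests, as a Python function (the IP is this cluster's kube-dns service address; note this only proves the TCP path through kube-proxy, while DNS lookups normally go over UDP first):

```python
# Minimal TCP reachability probe, roughly equivalent to `nc <host> <port>`.
import socket

def tcp_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:                 # refused, timed out, unreachable, ...
        return False

# Inside a pod, tcp_open("10.96.0.10", 53) should return True
# when the service path to kube-dns is working.
```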

Then you can try dig:

kubectl run -it alpine --image=alpine -- sh
$ apk update && apk add bind-tools
$ dig +trace @10.96.0.10 google.com
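If installing bind-tools is not an option, a raw DNS query can be sent with just the Python standard library. A sketch (the server IP is assumed to be the kube-dns service address; the packet layout follows RFC 1035):

```python
# Build and send a minimal DNS A query without any DNS library.
import socket
import struct

def build_query(name, qtype=1, qid=0x1234):
    """Build a DNS query packet for an A record (RFC 1035 wire format)."""
    # Header: id, flags (RD=1), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)  # class IN

def query(server, name, timeout=3.0):
    """Send the query over UDP and return (answer_count, rcode)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(name), (server, 53))
        data, _ = s.recvfrom(512)
    _, flags, _, ancount, _, _ = struct.unpack(">HHHHHH", data[:12])
    return ancount, flags & 0x000F  # rcode: 0 = NOERROR, 3 = NXDOMAIN

# Inside a pod, something like:
#   query("10.96.0.10", "kubernetes.default.svc.cluster.local")
# should return an answer count >= 1 with rcode 0 when kube-dns works;
# a socket.timeout instead matches the "hangs then fails" symptom above.
```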
          

Same problem here. All exposed services (Nexus, Dashboard) just stopped being available for an unknown reason last Friday evening. After digging into the issue, I found that kube-dns is no longer routing anything and is not accessible by pods. Has anyone come up with a solution to this problem yet?

@cmluciano The logs were actually fine, and I found the root cause of my issue: a faulty flannel configuration. I had to add the following to my cloud-config, and everything worked again (all pods could successfully reach pods on other nodes):

- path: "/etc/systemd/system/flanneld.service.d/50-network-config.conf"
  permissions: "0644"
  owner: "root"
  content: |
    [Service]
    ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.10.0.0/16" }'
          

I've run into the same issue as the OP. My setup is similar, using CoreOS and flannel; I followed the getting started guide here.

I'm trying to get the kubernetes-dashboard working on a node other than the master, but I receive errors saying it cannot reach the apiserver.

2018/01/11 16:25:36 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ

When testing the connection from alpine I do get a connection, but nslookup returns nothing useful:
/ # nc 10.96.0.10 53 -v
10.96.0.10 (10.96.0.10:53) open
^Cpunt!

/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve

Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # nslookup kubernetes.local
nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'kubernetes.local': Try again
/ # nslookup kubernetes
nslookup: can't resolve '(null)': Name does not resolve

Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
I'm sure this is a network issue but don't know where to look; any help would be greatly appreciated.

Check that the output of **** includes --cluster_dns=1.2.3.4.

Then, in *dns.yaml, the clusterIP and the cluster domain should match, e.g.
clusterIP: 1.2.3.4, with cluster domain cluster.local and service CIDR 1.2.0.0/16.

After applying these settings, create a new pod and you will see:

[root@centos-d4fc98684-xtl6q /]# cat /etc/resolv.conf
nameserver 1.2.3.4
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:3

And (in centos, alpine, busybox, and so on):
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve

Name: kubernetes.default
Address 1: 1.2.0.1 kubernetes.default.svc.cluster.local

DNS is OK.

@cloudusers I'm seeing that issue on EKS, with any container:

/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve
Name: kubernetes.default
Address 1: 10.100.0.1 kubernetes.default.svc.cluster.local

I don't understand the "nslookup: can't resolve '(null)': Name does not resolve" bit. Where is it getting (null) from?