```yaml
- path: "/etc/systemd/system/flanneld.service.d/50-network-config.conf"
  permissions: "0644"
  owner: "root"
  content: |
    [Service]
    ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.10.0.0/16" }'
```
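One common failure mode with this drop-in is a malformed JSON payload, which makes flanneld refuse to start. A small sketch to sanity-check it first (the `python3 -m json.tool` validation is my addition; the etcdctl line mirrors the unit above and is commented out since it needs a live etcd, which flannel reads via the v2 keyspace by default):

```shell
# Validate the flannel network config JSON before writing it to etcd.
CONFIG='{ "Network": "10.10.0.0/16" }'

# Fail early if the payload is not valid JSON.
echo "$CONFIG" | python3 -m json.tool >/dev/null || { echo "invalid JSON" >&2; exit 1; }
echo "config OK"

# On a node with a reachable etcd (flannel reads the v2 keyspace by default):
# etcdctl set /coreos.com/network/config "$CONFIG"
```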
I've run into the same issue as the OP. My setup is similar, using CoreOS and flannel, and I followed the getting started guide here.
I'm trying to get kubernetes-dashboard working on a node other than the master, but it fails with errors saying it cannot reach the apiserver:
```
2018/01/11 16:25:36 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
```
When testing the connection from an alpine container, the DNS port is open, but nslookup does not return anything useful:
```
/ # nc 10.96.0.10 53 -v
10.96.0.10 (10.96.0.10:53) open
^Cpunt!
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # nslookup kubernetes.local
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'kubernetes.local': Try again
/ # nslookup kubernetes
nslookup: can't resolve '(null)': Name does not resolve
Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```
I'm sure this is a network issue, but I don't know where to look. Any help would be greatly appreciated.
The kubelet output shows: **** --cluster_dns=1.2.3.4
Then, in *dns.yaml, the clusterIP and cluster.local settings would be something like:
clusterIP: 1.2.3.4, with cluster domain cluster.local and service CIDR 1.2.0.0/16
After setting this, create a new pod and you will see:
```
[root@centos-d4fc98684-xtl6q /]# cat /etc/resolv.conf
nameserver 1.2.3.4
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:3
```
And in any image (centos, alpine, busybox, ...):
```
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve
Name:      kubernetes.default
Address 1: 1.2.0.1 kubernetes.default.svc.cluster.local
```
DNS is OK.
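For reference, the clusterIP setting described above lives in the kube-dns Service manifest. A rough sketch (field names follow a standard kube-dns Service; the IP values follow the example above, not defaults, and the clusterIP must fall inside the service CIDR, 1.2.0.0/16 here):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: 1.2.3.4   # inside the service CIDR 1.2.0.0/16
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
```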
@cloudusers I'm seeing that issue on EKS, with any container:
```
/ # nslookup kubernetes.default
nslookup: can't resolve '(null)': Name does not resolve
Name:      kubernetes.default
Address 1: 10.100.0.1 kubernetes.default.svc.cluster.local
```
I don't understand the `nslookup: can't resolve '(null)': Name does not resolve` bit. Where is it getting `(null)` from?
My guess is that the '(null)' response is for the AAAA request. Some resolvers send the A and AAAA requests simultaneously. Try `nslookup -type=A kubernetes.default`.
That said, the (null) is a bogus response either way, and I'm not sure why it is there.
On Mon, Aug 27, 2018 at 3:06 AM jazoom ***@***.***> wrote:
> I'm seeing the same thing. Where does it get null from, and why does it then proceed to return the correct IP address? It obviously *did* resolve.
Hello, have you found the answer? I have the same question.
I was trying to set a specific IP for resolvers on the worker nodes. Once I removed that, the problem stopped occurring.
Gotcha. Do you know the relation between the problem and your setting? Would you mind giving more detail? Maybe I made the same mistake.
@junsionzhang My nodes have the Consul agent installed with the DNS interface enabled. I was bootstrapping the kubelets with `--cluster-dns` pointing to the Consul DNS interface IP (I created a dummy interface with a static IP of 169.254.1.1, very similar to this article: https://medium.com/zendesk-engineering/making-docker-and-consul-get-along-5fceda1d52b9). This prevented the pods from resolving records internal to the Kubernetes cluster. By leaving `--cluster-dns` out of the bootstrap command, the pods can resolve internal addresses and still rely on the host's fallback of 169.254.1.1.
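For concreteness, a sketch of the change as a kubelet systemd drop-in (the unit path and the rest of the ExecStart line are illustrative; the `--cluster-dns` flag is the only part taken from my setup):

```
# /etc/systemd/system/kubelet.service.d/20-dns.conf  (path illustrative)
[Service]
ExecStart=
# Broken: pods' resolv.conf pointed only at the Consul interface IP,
# so cluster-internal names never reached the cluster DNS:
#ExecStart=/usr/bin/kubelet --cluster-dns=169.254.1.1 ...
# Working: leave --cluster-dns out; pods fall back to the host's
# /etc/resolv.conf, which still lists 169.254.1.1:
ExecStart=/usr/bin/kubelet ...
```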