$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
If your output looks like that, then no contexts are configured.
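In that case it helps to check which kubeconfig file kubectl is actually reading. A minimal sketch (the paths are just the usual defaults, not something specific to this thread):
echo "KUBECONFIG=$KUBECONFIG"        # empty means kubectl falls back to the default path
ls -l ~/.kube/config                 # default location kubectl looks at
ls -l /etc/kubernetes/admin.conf     # written by kubeadm on the control-plane node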
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
This seems like a different way to set up the cluster than the one described in the post (https://www.techrepublic.com/article/how-to-quickly-install-kubernetes-on-ubuntu/). Or does it use the same steps?
Getting kubectl to run really depends on how you installed it. Basically, if you install it and have a proper config file, it should always work. So either an old file from a previous installation is still there, or something similarly silly (although these things are usually difficult to spot).
Also, make sure the commands don't fail (some people on that post mentioned that the step to copy the kubectl config failed). That step is what authorizes you to the cluster, so kubectl will never work if it doesn't succeed.
If I were you, I'd remove everything from the previous run, start from scratch, and make sure nothing fails. If something does fail, try to fix that instead of continuing with the next steps. And if you can't fix it, please report back with the steps you ran, the error you got, and what you tried that didn't work.
That way it will be easier to solve.
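For the "start from scratch" part, a rough sketch of cleaning up a previous run, assuming the cluster was created with kubeadm (adjust if you used a different installer):
sudo kubeadm reset            # tears down what a previous 'kubeadm init' / 'kubeadm join' set up
rm -rf $HOME/.kube            # remove the old kubeconfig copied for your user
sudo rm -rf /etc/cni/net.d    # leftover CNI config from the previous pod network, if any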
Run these commands to fix it:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
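After that, kubectl should talk to the API server on port 6443 instead of the localhost:8080 default. A quick sanity check (generic commands, nothing specific to this thread):
kubectl config current-context   # should no longer be empty
kubectl cluster-info             # should print the control plane URL instead of "connection refused"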
Good morning,
I'm beginning my studies of Kubernetes, following the tutorial (Install and Set Up kubectl - Kubernetes), and when I type "kubectl cluster-info" I receive the message "To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?".
When I run kubectl cluster-info dump I receive the message "
The connection to the server localhost:8080 was refused - did you specify the right host or port?"
The documentation I found says to look for the file admin.conf in the folder /etc/kubernetes, but when I look, the folder doesn't exist.
What can I do?
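admin.conf is only written once a control plane has been initialized on that machine. A hedged way to check, assuming you intend a kubeadm-based cluster like the one in the quoted tutorial:
ls /etc/kubernetes/ 2>/dev/null || echo "no kubeadm control plane on this machine"
# If the directory is missing, the cluster was never initialized here; on the node
# that should be the control plane, run:
sudo kubeadm init
# and then copy /etc/kubernetes/admin.conf to ~/.kube/config as shown above.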
I have recently started working on this as well and I am running into the same brick wall.
I am running two nodes: a master and a host (worker).
Both are VMs running CentOS in Oracle Virtual Manager.
I got Docker installed on the host and kubectl installed on the master.
I can ssh from the master to the host and vice versa, and I receive ping replies as well, but I cannot telnet into either machine from my physical Windows machine.
When I run "kubectl get nodes" I receive the error: The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
~]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://x.x.x.x:6443
  name: local
contexts:
- context:
    cluster: local
    user: kube-admin-local
  name: local
current-context: local
kind: Config
preferences: {}
users:
- name: kube-admin-local
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I have added the IP address and port to iptables and tried again, and I also stopped firewalld, but I still get the same error.
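When port 6443 refuses connections even with the firewall out of the way, it usually means the API server itself is not running on the master. A rough checklist (generic commands, not taken from this thread):
sudo systemctl status kubelet            # the kubelet must be running to start the control-plane pods
sudo ss -tlnp | grep 6443                # is anything actually listening on 6443?
sudo crictl ps | grep kube-apiserver     # or 'docker ps', depending on your container runtime
# If you do want firewalld running, open the control-plane ports instead of stopping it:
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=10250/tcp
sudo firewall-cmd --reload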
Yes, I had to be sure I used the "right user". For me that was a user that had the admin.conf file copied to ~/.kube/config.
You can see the difference between the working "pi" user and the "root" user, who does not have the config file:
pi@node0:~ $ sudo kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?
pi@node0:~ $ kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-jjsjr         1/1     Running   0          11h
kube-system   coredns-f9fd979d6-xl5f2         1/1     Running   0          11h
kube-system   etcd-node0                      1/1     Running   0          11h
kube-system   kube-apiserver-node0            1/1     Running   0          11h
kube-system   kube-controller-manager-node0   1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-4zq4g       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-dcprj       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-fwzkl       1/1     Running   0          11h
kube-system   kube-flannel-ds-arm-q8t5k       1/1     Running   0          11h
kube-system   kube-proxy-dlhc7                1/1     Running   0          11h
kube-system   kube-proxy-glh92                1/1     Running   0          11h
kube-system   kube-proxy-jh26p                1/1     Running   0          11h
kube-system   kube-proxy-qflcw                1/1     Running   0          11h
kube-system   kube-scheduler-node0            1/1     Running   0          11h
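If you do need cluster access as root (or any other user), the same kubeconfig works. A small sketch based on the commands earlier in this thread:
# as root, either point KUBECONFIG at the admin kubeconfig...
export KUBECONFIG=/etc/kubernetes/admin.conf
# ...or copy it into root's home, the same way it was done for the "pi" user:
mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config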
(base) jiribenes:~$ sudo kubeadm init
[sudo] password for jiribenes:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local rurhelena1920] and IPs [10.96.0.1 172.30.147.238]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] Generating “etcd/ca” certificate and key
[certs] Generating “etcd/server” certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating “etcd/peer” certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost rurhelena1920] and IPs [172.30.147.238 127.0.0.1 ::1]
[certs] Generating “etcd/healthcheck-client” certificate and key
[certs] Generating “apiserver-etcd-client” certificate and key
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.503678 seconds
[upload-config] Storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet] Creating a ConfigMap “kubelet-config-1.20” in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the labels “node-role.kubernetes.io/master=’’” and “node-role.kubernetes.io/control-plane=’’ (deprecated)”
[mark-control-plane] Marking the node rurhelena1920 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r1cctv.zgkk2aore4luh7wo
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the “cluster-info” ConfigMap in the “kube-public” namespace
[kubelet-finalize] Updating “/etc/kubernetes/kubelet.conf” to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
Installing Addons | Kubernetes
Then you can join any number of worker nodes by running the following on each as root:
devasim:
$HOME/.kube/config - kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Mostly as a comment for others: that command also installs weave-net as the pod network, so it should not just be copied and pasted blindly.
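If you do want to use it, one cautious approach (a sketch; the URL is the one quoted above) is to download and review the manifest before applying it:
curl -L "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" -o weave-net.yaml
less weave-net.yaml        # check what the pod network addon will actually create
kubectl apply -f weave-net.yaml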