Running Kubernetes Locally via Docker - `kubectl get nodes` returns `The connection to the server localhost:8080 was refused - did you specify the right host or port?`

Going through this guide to set up Kubernetes locally via Docker, I end up with the error message stated above.

Steps taken:

- export K8S_VERSION='1.3.0-alpha.1' (tried 1.2.0 as well)
- copy-paste the docker run command
- download the appropriate kubectl binary and put it on PATH (`which kubectl` works; see the sketch below)
- (optionally) set up the cluster
- run `kubectl get nodes`
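For reference, a minimal sketch of the kubectl download step, assuming the standard release bucket URL and linux/amd64 (adjust the platform and version as needed):

curl -LO "https://storage.googleapis.com/kubernetes-release/release/v${K8S_VERSION}/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
which kubectl   # confirm it resolves on PATH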

In short, no magic. I am running this locally on Ubuntu 14.04 with Docker 1.10.3. If you need more information, let me know.

Asked Oct 29 '21 at 15:10 by xificurC

9 Answers:

Hello, I'm getting the following error on CentOS 7. How can I solve this issue?

[root@ip-172-31-11-12 system]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Answered Oct 28 '16 at 21:43 by rahmanusta

Running on macOS High Sierra, I solved this by enabling the Kubernetes support built into Docker itself:

(screenshots omitted: Docker → Preferences → Kubernetes → Enable Kubernetes)
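After enabling it, switching kubectl to the Docker-provided context should make things work. A minimal sketch (the context name is `docker-desktop` on recent Docker versions, `docker-for-desktop` on older ones):

kubectl config use-context docker-desktop
kubectl get nodes   # should now list the single Docker Desktop node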

Answered Oct 16 '18 at 04:48 by dreadnautxbuddha

If this happens on GCP, the command below will most likely resolve the issue:

gcloud container clusters get-credentials your-cluster --zone your-zone --project your-project
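Afterwards, a quick check that the credentials were actually written (the cluster, zone, and project names above are placeholders for your own):

kubectl config current-context   # should now point at the GKE cluster
kubectl get nodes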

Answered Apr 29 '17 at 03:29 by nestoru

Similar to @sumitkau, I solved my problem by pointing kubectl at the kubeconfig location explicitly:

kubectl --kubeconfig /etc/kubernetes/admin.conf get no

You can also copy /etc/kubernetes/admin.conf to ~/.kube/config and it works, though I'm not sure whether that's good practice or not!
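A sketch of the copy-to-default-location variant; this mirrors what kubeadm itself recommends after init, so it should be safe:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # no --kubeconfig flag needed anymore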

Answered Apr 09 '17 at 10:06 by mamirkhani

I was trying to get the status of a remote cluster using Ansible and was facing the same issue. This worked:

kubectl --kubeconfig ./admin.conf get pods --all-namespaces -o wide
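If kubectl runs on a different machine than the master, admin.conf has to be fetched first; a sketch, assuming SSH access to the master (the hostname is a placeholder):

scp root@k8s-master:/etc/kubernetes/admin.conf ./admin.conf
kubectl --kubeconfig ./admin.conf get pods --all-namespaces -o wide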

Answered Apr 05 '17 at 13:11 by quasarenergy

I had this issue. This solution worked for me:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

If you don't have admin.conf, please install kubeadm first. Then remove ~/.kube/cache:

rm -rf ~/.kube/cache
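Note that the export only affects the current shell; to persist it across logins, something like:

echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc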
Answered Oct 24 '17 at 07:17 by thovt93

Thanks to @mamirkhani, I solved this error. I then found exactly this in the "kubeadm init" output:

Your Kubernetes master has initialized successfully! To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

I think this is the recommended solution.

Answered May 05 '17 at 09:08 by yueawang

I had this issue. This solution worked for me:

export KUBECONFIG=/etc/kubernetes/admin.conf
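Keep in mind /etc/kubernetes/admin.conf is normally readable only by root, so this variant works best in a root shell; as a regular user you would need read access to the file first, e.g.:

sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes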

Answered May 15 '18 at 07:52 by mapsic

Try using --server to specify your master:

kubectl --server=16.187.189.90:8080 get pod -o wide
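To avoid passing --server on every call, the address can be persisted in a kubeconfig context; a sketch using the example IP above (the insecure 8080 port only works if the API server still exposes it):

kubectl config set-cluster local --server=http://16.187.189.90:8080
kubectl config set-context local --cluster=local
kubectl config use-context local
kubectl get pod -o wide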

Answered Aug 08 '16 at 07:37 by SylarChen