Running Kubernetes Locally via Docker: `kubectl get nodes` returns `The connection to the server localhost:8080 was refused - did you specify the right host or port?`
Going through this guide to set up Kubernetes locally via Docker, I end up with the error message stated above.
Steps taken:
- `export K8S_VERSION='1.3.0-alpha.1'` (tried 1.2.0 as well)
- copy-paste the `docker run` command
- download the appropriate `kubectl` binary and put it on `PATH` (`which kubectl` works)
- (optionally) set up the cluster
- run `kubectl get nodes`
In short, no magic. I am running this locally on Ubuntu 14.04 with Docker 1.10.3. If you need more information, let me know.
9 Answers:
Hello, I'm getting the following error on CentOS 7; how can I solve this issue?
[root@ip-172-31-11-12 system]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
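This message usually just means that kubectl found no kubeconfig and fell back to its default of localhost:8080; it says nothing about the cluster itself. A minimal sketch of that lookup order (the paths are kubectl's defaults; the script is illustrative, not part of kubectl):

```shell
#!/bin/sh
# Illustrative only: mimic kubectl's config lookup. kubectl reads
# $KUBECONFIG first, then $HOME/.kube/config; with neither present,
# it falls back to the API server at localhost:8080.
check_kubeconfig() {
    cfg="${KUBECONFIG:-$HOME/.kube/config}"
    if [ -r "$cfg" ]; then
        echo "kubeconfig present: $cfg"
    else
        echo "no kubeconfig at $cfg - kubectl will try localhost:8080"
    fi
}
check_kubeconfig
```

So the first thing to check is whether `~/.kube/config` exists or `KUBECONFIG` points at a valid file.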
Running on macOS High Sierra, I solved this by enabling the Kubernetes support built into Docker itself.
If this happens on GCP, the following will most likely resolve the issue:
gcloud container clusters get-credentials your-cluster --zone your-zone --project your-project
Similar to @sumitkau, I solved my problem by pointing kubectl at a different kubeconfig location:
kubectl --kubeconfig /etc/kubernetes/admin.conf get no
You can also copy /etc/kubernetes/admin.conf to ~/.kube/config and it works, though I don't know whether that is good practice or not.
I was trying to get the status of a remote system using Ansible and was facing the same issue. This worked for me:
kubectl --kubeconfig ./admin.conf get pods --all-namespaces -o wide
I had this issue. This solution worked for me:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
If you don't have admin.conf, please install kubeadm.
Then remove ~/.kube/cache:
rm -rf ~/.kube/cache
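The copy-and-chown steps above can also be wrapped in a small helper that puts the file where plain `kubectl` looks by default. A sketch, assuming the kubeadm-generated /etc/kubernetes/admin.conf as the source (`install_kubeconfig` is a hypothetical name, not an existing tool):

```shell
#!/bin/sh
# Copy an admin kubeconfig to the per-user default location so plain
# `kubectl` picks it up without --kubeconfig or KUBECONFIG. On a kubeadm
# master the source would be /etc/kubernetes/admin.conf (reading it may
# require sudo).
install_kubeconfig() {
    src="$1"
    dest="${2:-$HOME/.kube/config}"
    mkdir -p "$(dirname "$dest")"
    cp "$src" "$dest"
    chmod 600 "$dest"   # the kubeconfig holds cluster credentials
    echo "kubectl will now read $dest"
}
```

Usage would then be `install_kubeconfig /etc/kubernetes/admin.conf` (run with sudo if needed, then chown the result to your user).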
Thanks to @mamirkhani, I solved this error. I found the following in the "kubeadm init" output:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
I think this is the recommended solution.
I had this issue. This solution worked for me:
export KUBECONFIG=/etc/kubernetes/admin.conf
Try using --server to specify your master:
kubectl --server=16.187.189.90:8080 get pod -o wide
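If you don't want to repeat --server on every call, the address can be stored in a kubeconfig context instead. A sketch using `kubectl config` (the name "local" is just a placeholder, and the address is the one from the answer above; adjust scheme and port to your setup):

```shell
# Persist the API server address instead of passing --server each time.
kubectl config set-cluster local --server=http://16.187.189.90:8080
kubectl config set-context local --cluster=local
kubectl config use-context local
# Subsequent commands now use the stored address:
kubectl get pod -o wide
```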