
I have a local cluster (no cloud provider) made up of 3 VirtualBox VMs: one master and two worker nodes. I created an NFS-backed volume so that the data can be reused if a pod dies and is rescheduled on another node (a rough sketch of the manifests is included after the node list below), but I think some component is not working correctly. To create the cluster I simply followed this guide: kubernetes guide. This is the current state of the cluster:

    master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get pod --all-namespaces
    [sudo] password for master: 
    NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
    default       mysqlnfs3                                   1/1       Running   0          27m
    kube-system   etcd-master-virtualbox                      1/1       Running   0          46m
    kube-system   kube-apiserver-master-virtualbox            1/1       Running   0          46m
    kube-system   kube-controller-manager-master-virtualbox   1/1       Running   0          46m
    kube-system   kube-dns-86f4d74b45-f6hpf                   3/3       Running   0          47m
    kube-system   kube-flannel-ds-nffv6                       1/1       Running   0          38m
    kube-system   kube-flannel-ds-rqw9v                       1/1       Running   0          39m
    kube-system   kube-flannel-ds-s5wzn                       1/1       Running   0          44m
    kube-system   kube-proxy-6j7p8                            1/1       Running   0          38m
    kube-system   kube-proxy-7pj8d                            1/1       Running   0          39m
    kube-system   kube-proxy-jqshs                            1/1       Running   0          47m
    kube-system   kube-scheduler-master-virtualbox            1/1       Running   0          46m


    master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get node
    NAME                STATUS    ROLES     AGE       VERSION
    host1-virtualbox    Ready     <none>    39m       v1.10.2
    host2-virtualbox    Ready     <none>    40m       v1.10.2
    master-virtualbox   Ready     master    48m       v1.10.2
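
For reference, this is roughly how the NFS volume and the pod are wired together. It is a simplified sketch, not my actual manifests: the names (mysql-nfs-pv, mysql-nfs-pvc), the NFS server address, the export path and the MySQL image/env values are placeholders.

    cat <<EOF | sudo kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-nfs-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: 192.168.56.1      # placeholder: the VM exporting the NFS share
        path: /srv/nfs/mysql      # placeholder: the exported directory
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-nfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: mysqlnfs3
    spec:
      containers:
        - name: mysql
          image: mysql:5.7              # placeholder image
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password           # placeholder
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-nfs-pvc
    EOF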

and this is the pod:

    master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl get pod
    NAME        READY     STATUS    RESTARTS   AGE
    mysqlnfs3   1/1       Running   0          29m

The pod is scheduled on host2, and if I open a shell on host2 and use docker exec directly, the container works fine: the data are stored and retrieved. Roughly, this is what I do on host2 (<container-id> is just whatever docker ps prints for this pod):
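
    # on host2: find the container that belongs to the pod, then exec into it
    sudo docker ps | grep mysqlnfs3
    sudo docker exec -it <container-id> /bin/bash

But when I try to do the same thing through kubectl exec from the master, it does not work: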

    master@master-VirtualBox:~/Documents/KubeT/nfs$ sudo kubectl exec -it -n default mysqlnfs3 -- /bin/bash
    error: unable to upgrade connection: pod does not exist
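
To add some more context, these are the checks I can run from the master to see where the pod is scheduled and which addresses the nodes advertise (plain kubectl commands, nothing specific to my setup):

    # where is the pod scheduled, and with which pod IP?
    sudo kubectl get pod mysqlnfs3 -o wide

    # which internal IPs do the nodes advertise? (on VirtualBox, if the kubelet
    # picked up the NAT interface, several nodes may report the same 10.0.2.15)
    sudo kubectl get nodes -o wide

    # addresses of host2 as seen by the API server
    sudo kubectl describe node host2-virtualbox | grep -iA5 addresses

As far as I understand, exec and logs traffic is proxied by the API server to the kubelet on the node, so a wrong advertised address on host2 could explain the error, but I am not sure how to confirm this.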
  • Pod is present and the syntax is ok, so this looks like a network issue... Are you sure all API ports between node2 and the master are not taken by something else (some other service conflicting)? Also, you have other nodes to test: if you reschedule that pod to another node, is the behavior the same (i.e. is it node-specific or not)? – Const May 04 '18 at 08:47
  • How can I check whether the API ports are taken? When I run kubeadm init or join, I see there is a check on the ports during the preflight checks; how can I verify this myself? – Cristian Monti May 04 '18 at 08:54
  • It would help if you attach log files and do some debugging from the command line; I suggest you provide some details about your configuration. Can you please: (1) log on to the master node and ping the suspected node by hostname; do you get a response? (2) dump the state of the node and post it here: kubectl get nodes host2-virtualbox -o yaml; (3) check that you use the recommended Docker version on the node and that there are no firewall issues; (4) provide the output of kubectl -v=9 exec -it mysqlnfs3 -- /bin/bash and kubectl -v=9 logs mysqlnfs3. – d0bry May 04 '18 at 16:29
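
For anyone following the debugging suggestions in the comments above, this is how the suggested checks could be written out, assuming the default kubeadm ports (6443 for the API server on the master, 10250 for the kubelet on each node); <host2-ip> is a placeholder for the node's address:

    # are the expected ports in use, and by the expected processes?
    sudo ss -tlnp | grep 6443      # on the master: API server
    sudo ss -tlnp | grep 10250     # on each node: kubelet (used by exec/logs)

    # can the master reach host2 at all?
    ping host2-virtualbox
    nc -vz <host2-ip> 10250

    # node state as seen by the API server
    sudo kubectl get nodes host2-virtualbox -o yaml

    # verbose client-side trace of the failing command, plus the pod logs
    sudo kubectl -v=9 exec -it mysqlnfs3 -- /bin/bash
    sudo kubectl -v=9 logs mysqlnfs3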

0 Answers