
I installed GlusterFS in Kubernetes, and when GlusterFS asks Kubernetes to create a volume, the log shows: `10.10.66.1:443, timeout`.

I also checked the kube-apiserver log, which shows:

```
Feb 11 20:30:50 localhost.localdomain kube-apiserver[5336]: E0211 20:30:50.547101 5336 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.16.5.150:51058->172.16.5.150:10250: write: broken pipe
Feb 11 20:42:50 localhost.localdomain kube-apiserver[5336]: E0211 20:42:50.683623 5336 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.16.5.150:45196->172.16.5.152:10250: write: connection reset by peer
Feb 11 20:44:50 localhost.localdomain kube-apiserver[5336]: E0211 20:44:50.871272 5336 upgradeaware.go:357] Error proxying data from client to backend: tls: use of closed connection
Feb 11 21:58:50 localhost.localdomain kube-apiserver[5336]: E0211 21:58:50.692714 5336 upgradeaware.go:357] Error proxying data from client to backend: tls: use of closed connection
Feb 11 21:58:50 localhost.localdomain kube-apiserver[5336]: E0211 21:58:50.829030 5
```
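The `dial tcp 10.10.66.1:443: i/o timeout` symptom usually points to basic connectivity (often firewall rules) rather than the apiserver itself. A quick reachability probe can be sketched like this; `port_reachable` is a hypothetical helper, and the target hosts/ports below are just the addresses from this question plus a GlusterFS port from the setup guide:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Assumed targets: the in-cluster service IP from the error message,
    # and glusterd's port 24007 on a storage node (per the setup guide).
    for host, port in [("10.10.66.1", 443), ("172.16.5.151", 24007)]:
        print(f"{host}:{port} reachable: {port_reachable(host, port)}")
```

Running this from each node (and from inside a pod, for the service IP) narrows down whether the timeout is a node-level or CNI-level problem.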
asked by Esc
  • Did you install glusterfs following [this](https://github.com/gluster/gluster-kubernetes) documentation? Have you tried [this](https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md) example to verify that it works at all? What about the required ports that need to be open on your kubernetes nodes, described in the [setup guide](https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md)? – mario Feb 11 '20 at 17:38
  • @mario Yes, I followed the [setup guide](https://github.com/gluster/gluster-kubernetes/blob/master/docs/se.tup-guide.md) to install glusterfs in my kubernetes with gk-deploy and the -g option. When I ran the script, I got this error: ` heketi topology loaded. Saving /tmp/heketi-storage.json secret/heketi-storage-secret created endpoints/heketi-storage-endpoints created service/heketi-storage-endpoints created job.batch/heketi-storage-copy-job created Error waiting for job 'heketi-storage-copy-job' to complete.` So I checked the pods and found one pod stuck in ContainerCreating status. – Esc Feb 12 '20 at 06:49
  • Following on from the previous comment: I ran kubectl describe on the pod and got output like this: Warning FailedCreatePodSandBox 4m13s (x20 over 14m) kubelet, 172.16.5.151 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c9c98f0a510a5857dc541f0ad11cdbda987e7fe7f03b5a3508ec23819519b63c" network for pod "heketi-storage-copy-job-qn82b": networkPlugin cni failed to set up pod "heketi-storage-copy-job-qn82b_runsdata" network: Get https://[10.10.66.1]:443/api/v1/namespaces/runsdata: dial tcp 10.10.66.1:443: i/o timeout. – Esc Feb 12 '20 at 07:01
  • Following on from the previous comment: from the log above I suspected something was wrong with the apiserver, so I ran systemctl status kube-apiserver and got this: Feb 12 14:22:03 localhost.localdomain kube-apiserver[25854]: E0212 14:22:03.855961 25854 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.16.5.150:58706->172.16.5.150:10250: write: broken pipe. Now I don't know what to do next. – Esc Feb 12 '20 at 07:09
  • And sometimes an error like this appears: Feb 12 21:08:50 localhost.localdomain kube-apiserver[29994]: E0212 21:08:50.234241 29994 upgradeaware.go:371] Error proxying data from backend to client: write tcp 172.16.5.150:6443->192.168.102.135:38104: write: broken pipe Feb 12 21:16:50 localhost.localdomain kube-apiserver[29994]: E0212 21:16:50.496817 29994 upgradeaware.go:357] Error proxying data from client to backend: tls: use of closed connection – Esc Feb 12 '20 at 13:51
  • Hello @Esc, one piece of advice: please try to add new information related to the question, such as `logs`, `command output`, etc. by **editing the question itself** rather than in the comments. It's much more readable for others, code can be properly formatted, so it can significantly increase your chances of getting help. – mario Feb 17 '20 at 14:04
  • @mario OK, thank you – Esc Feb 19 '20 at 16:00
  • Have you managed to resolve this issue ? The link to setup guide you posted seems to be broken. Did you use the instruction I provided in my first comment ? What about **ports** that need to be open on your **kubernetes nodes** ? Have you double checked they are open and this is not the cause of the issue ? – mario Feb 28 '20 at 10:50
  • @mario Yes, I solved it. Thank you for your patience. The problem was caused by the ports: I used iptables to open them but wrote the rules after the "deny all" rule, which made all my rules ineffective. I just moved the rules before it, and now it works. – Esc Feb 29 '20 at 08:50
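The fix Esc describes comes down to iptables rule ordering: rules are evaluated top-down and the first match wins, so anything appended after a catch-all DROP is dead. A minimal sketch of the difference, with ports taken from the gluster-kubernetes setup guide (the exact ports and chain are assumptions about this setup):

```shell
# Appending (-A) puts a rule AFTER the existing catch-all, so it never matches:
#   iptables -A INPUT -j DROP                          # existing "deny all"
#   iptables -A INPUT -p tcp --dport 24007 -j ACCEPT   # dead: never reached

# Inserting (-I) puts the rule at the top of the chain, BEFORE the catch-all:
iptables -I INPUT -p tcp --dport 2222 -j ACCEPT          # sshd in gluster pods
iptables -I INPUT -p tcp --dport 24007 -j ACCEPT         # glusterd
iptables -I INPUT -p tcp --dport 24008 -j ACCEPT         # glusterd management
iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT   # brick ports
```

`iptables -L INPUT --line-numbers` shows the resulting order, which is worth checking after any edit like this.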

0 Answers