
kubectl logs command intermittently fails with "getsockopt: no route to host" error.

# kubectl logs -f mypod-5c46d5c75d-2Cbtj

Error from server: Get https://X.X.X.X:10250/containerLogs/default/mypod-5c46d5c75d-2Cbtj/metaservichart?follow=true: dial tcp X.X.X.X:10250: getsockopt: no route to host

If I run the same command 5 or 6 times, it eventually works. I am not sure why this is happening. Any help would be really appreciated.

manish

4 Answers


Just FYI: I tried using another VPC (172.18.X.X) on EKS, and all kubectl commands work fine.

I also noticed that kops switched Docker's internal CIDR to 172.18.X.X when I was using a 172.17.X.X VPC, so I speculate that kops changes Docker's default CIDR so it doesn't collide with the cluster's IP range. I hope we will be able to configure Docker's CIDR when EKS worker nodes are created, perhaps through the CloudFormation template.
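If you control the worker nodes' bootstrap, one way to approximate what kops does is to move Docker's bridge off 172.17.0.0/16 yourself before pods start. A minimal sketch, assuming you can inject a script via the worker-node userdata; `bip` is dockerd's option for the docker0 bridge address, and the 192.168.254.1/24 range below is an arbitrary example, not anything from this thread:

```shell
# Sketch: relocate Docker's default bridge (docker0) off 172.17.0.0/16
# so its local route cannot shadow the VPC route to the kubelet.
# Run on the worker node (e.g. from userdata) before kubelet starts.
cat <<'EOF' >/etc/docker/daemon.json
{
  "bip": "192.168.254.1/24"
}
EOF
systemctl restart docker
```

After the restart, `ip route` should show docker0 on the new range instead of 172.17.0.0/16.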

SnoU

I had a chance to talk with an AWS EKS engineer in person. The official answer is that EKS currently doesn't support 172.17.0.0/16 because that CIDR overlaps with Docker's IP range. They apparently have an internal ticket to fix the issue, but no ETA.

SnoU

I have exactly the same issue with a private IP in 172.17.X.X:

Error from server: Get https://172.17.X.X:10250/containerLogs/******: dial tcp 172.17.X.X:10250: getsockopt: no route to host

I am using the EKS-Optimized AMI v24.

A similar issue is discussed here: https://github.com/aws/amazon-vpc-cni-k8s/issues/137. I suspect a private IP starting with 172.17.X.X is the problem, since it collides with Docker's default internal CIDR, but I didn't have this issue when I was using kops.
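The suspected collision is easy to check: Docker's default bridge claims all of 172.17.0.0/16 via a local route on docker0, so any node IP in that range becomes unreachable from inside the host. A quick prefix check (a trivial illustration, not anything from this thread; a /16 boundary lines up with the second dotted octet, so a plain prefix match suffices):

```shell
# Does an IP fall inside Docker's default bridge network 172.17.0.0/16?
in_docker_default_cidr() {
  case "$1" in
    172.17.*) echo "collides with docker0 (172.17.0.0/16)" ;;
    *)        echo "no collision" ;;
  esac
}

in_docker_default_cidr 172.17.4.9    # a node in the problematic range
in_docker_default_cidr 172.18.4.9    # the VPC range that worked
```

On a live worker node, `ip route | grep docker0` shows the actual route that swallows this traffic.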

SnoU
  • True, I never faced this issue with kops either. I have raised this issue with AWS technical support but haven't heard anything from them; it has been 15 days now :( GKE is much better; I am planning to move my services from EKS to GKE. – manish Nov 08 '18 at 07:05

Depending on the AMI, I get the error "getsockopt: no route to host".

I use "kubectl logs my-pod-id" to access the pod's logs.

  • I am running EKS v1.10 in AWS (yes, I need to upgrade to v1.11 soon).
  • I am using the 10.0.0.0 IP range for my VPC and subnets, with 2 public and 2 private subnets.

It works (and also does not work) with the EXACT same routing, security groups, VPC, etc. Only the AMI changes.

Works: ami-73a6e20b (Used when I first setup my cluster back in Oct 2018)

Does not work: ami-0e7ee8863c8536cce (and is the recommended Amazon EKS-optimized AMI as of today for us-west-2 Oregon - https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html)

My point is, it may not be your routing/security-group setup.

Sagan
  • I had the same issue. The cluster was set up using an older version of the CloudFormation template, back in August 2018. Almost everything worked after upgrading the CloudFormation template and AMI, except the logs. – Asrail Jul 05 '19 at 04:04