I've created an ACS instance:

```shell
az acs create --orchestrator-type=kubernetes \
    --resource-group $group \
    --name $k8s_name \
    --dns-prefix $kubernetes_server \
    --generate-ssh-keys

az acs kubernetes get-credentials --resource-group $group --name $k8s_name
```
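For anyone reproducing this, the basic wiring can be sanity-checked with the usual commands (nothing ACS-specific here):

```shell
# Quick check that the kubeconfig works and the nodes have registered
kubectl cluster-info
kubectl get nodes
```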
Then I ran `helm init`, which provisioned the Tiller pod fine. I then ran `helm install stable/redis` and got a Redis deployment up and running (seemingly).
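For anyone following along, this is roughly how I'm checking what the chart created (the release name is whatever Helm auto-generated, so yours will differ):

```shell
# Inspect what the chart deployed
kubectl get pods -o wide     # the redis pod, with its pod IP
kubectl get svc              # the redis service and its cluster IP
kubectl get endpoints        # confirms the service actually targets the pod
```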
I can `kubectl exec -it` into the Redis pod and see that it's binding on 0.0.0.0, and I can log in with `redis-cli -h localhost` and `redis-cli -h <pod_ip>`, but not with `redis-cli -h <service_ip>` (the cluster IP from `kubectl get svc`).
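Concretely, the tests look like this (`<redis_pod>`, `<pod_ip>`, and `<service_ip>` are placeholders for my cluster's values):

```shell
# From inside the redis pod — these both answer PONG
kubectl exec -it <redis_pod> -- redis-cli -h localhost ping
kubectl exec -it <redis_pod> -- redis-cli -h <pod_ip> ping

# Against the service's cluster IP — this one hangs
kubectl exec -it <redis_pod> -- redis-cli -h <service_ip> ping
```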
If I spin up another pod (which is how I ran into this issue), I can `ping redis.default` and the DNS resolves to the correct service IP, but I get no response. When I `telnet <service_ip> 6379` or `redis-cli -h <service_ip>`, it hangs indefinitely.
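The scratch pod I used for that is nothing special, roughly:

```shell
# Throwaway pod to test DNS and TCP against the redis service
kubectl run debug --rm -it --restart=Never --image=ubuntu -- bash
# inside the pod:
#   apt-get update && apt-get install -y dnsutils telnet redis-tools
#   nslookup redis.default        # resolves to the service IP
#   telnet <service_ip> 6379      # hangs
#   redis-cli -h <service_ip>     # hangs
```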
I'm at a bit of a loss as to how to debug this further; I can't SSH into the node to see what Docker is doing.
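The best idea I've had for peeking at a node without SSH is a privileged `hostNetwork` pod (untested sketch; `nodeshell` is just a name I made up):

```shell
# Sketch: debug pod that shares the node's network namespace
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeshell
spec:
  hostNetwork: true
  containers:
  - name: shell
    image: ubuntu
    command: ["sleep", "86400"]
    securityContext:
      privileged: true    # needed to read the node's iptables rules
EOF
kubectl exec -it nodeshell -- bash
# then: apt-get update && apt-get install -y iptables
#       iptables-save | grep <service_ip>    # inspect kube-proxy's rules
```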
Also, I'd initially tried this with a standard Alpine-based Redis image, so the Helm chart was a fallback. When I tried it yesterday the Helm one worked but the manual one didn't; today (on a freshly built ACS cluster) neither works.
I'm going to spin up the cluster again to see whether it reproduces consistently, but I'm pretty confident something fishy is going on.
PS - I have a VNet with an overlapping subnet (10.0.0.0/16) in a different region, and when I open the address-range settings I do get a warning about the clash. Could that be affecting this?
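For reference, the clashing ranges are visible from the CLI as well (default JSON output, since the prefixes are lists):

```shell
# List every VNet's address space to spot the overlapping 10.0.0.0/16 ranges
az network vnet list --query '[].{name:name, location:location, prefixes:addressSpace.addressPrefixes}'
```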
<EDIT>
Some new insight... it's something to do with Alpine-based images (which we've been aiming to use). If I `kubectl run a --image=nginx` (which is Debian-based), I can shell in, install telnet, and connect to the Redis service. But if I `kubectl run c --image=rlesouef/alpine-redis` and shell in, telnet to the same Redis service doesn't work. The side-by-side commands are sketched below.
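(`<pod_a>`/`<pod_c>` are the pod names generated by the two `kubectl run` commands.)

```shell
# Debian-based image: connecting to the service works
kubectl run a --image=nginx
kubectl exec -it <pod_a> -- bash -c 'apt-get update && apt-get install -y telnet'
kubectl exec -it <pod_a> -- telnet <service_ip> 6379    # connects

# Alpine-based image: the identical test hangs
kubectl run c --image=rlesouef/alpine-redis
kubectl exec -it <pod_c> -- telnet <service_ip> 6379    # hangs
```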
</EDIT>