
I want to expose a Redis HA service running in Kubernetes to clients running outside the cloud. For this, I'm trying to set up Envoy, which supports Redis. I'm using Ambassador, which is a Kubernetes wrapper around Envoy, and followed this doc for the initial setup. I'm new to Envoy and Kubernetes.

How can I configure ambassador to act as proxy for my Redis service?

I'm guessing there is someplace to specify the address of the Redis service in the proxy, but I'm finding it hard to get this info. This page in the Envoy documentation covers the Redis proxy, but I don't follow where to make the changes.

Also, for my use case I'm interested only in the edge proxy feature of Envoy, not the service proxy feature.

rainhacker

1 Answer


I'd focus on your first sentence rather than the conclusions that follow it.

You want to expose Redis to the public network.
How you ended up with Envoy is beyond me; you probably only need a Kubernetes service with type set to LoadBalancer.
This is a terrible idea, though: Redis is unauthenticated by default and the connection is in clear text. Don't say you haven't been warned ;-)

As for Envoy: sure, it does support Redis, but Ambassador has nothing to do with it, and if I understand your requirement correctly, it is complete overkill that seems to be distracting you from getting the job done rather than helping.
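For reference, if you did want raw Envoy in front of Redis, the configuration lives in a listener with the `envoy.filters.network.redis_proxy` network filter pointing at an upstream cluster. A minimal sketch (v3 API; the service DNS name `redis.default.svc.cluster.local` and port are assumptions you'd replace with your own):

```yaml
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 6379 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: redis
          settings:
            op_timeout: 5s          # per-command timeout
          prefix_routes:
            catch_all_route:
              cluster: redis_cluster  # route all commands to the cluster below
  clusters:
  - name: redis_cluster
    connect_timeout: 1s
    type: STRICT_DNS                # resolve the Kubernetes service DNS name
    lb_policy: MAGLEV               # consistent hashing, recommended for Redis
    load_assignment:
      cluster_name: redis_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: redis.default.svc.cluster.local  # hypothetical service name
                port_value: 6379
```

Note that this proxies individual commands; it doesn't make Envoy sentinel- or replication-aware.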

https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
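Following that tutorial, the whole thing boils down to one manifest. A sketch, assuming your Redis pods carry an `app: redis` label (adjust the selector and names to match your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-external   # hypothetical name
spec:
  type: LoadBalancer     # cloud provider provisions an external IP
  selector:
    app: redis           # must match your Redis pod labels
  ports:
  - port: 6379           # port exposed on the load balancer
    targetPort: 6379     # port the Redis container listens on
```

Once the external IP is provisioned (`kubectl get svc redis-external`), clients on your network connect to it directly.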

samhain1138
  • Exposing Redis on a public network is a terrible idea indeed. But I'm not doing that. I want to access Redis running in Kubernetes from clients running within the company network but outside k8. The reason I was trying out a proxy is that I'm running Redis in HA mode, meaning it has a master, slaves, and sentinels. The slaves are enabled to serve read requests. Clients outside k8 connect to the sentinel service in k8, which gives the internal pod addresses of the master and slaves to the clients. These addresses cannot be used by clients to talk to Redis nodes in k8.... – rainhacker Sep 28 '18 at 14:34
  • ....This is where a proxy comes into the picture. I want to distribute read requests round-robin to all Redis nodes. A LoadBalancer service cannot distinguish whether a Redis node in k8 is a master or a slave, so it cannot route a 'set' request to the master. Also, I later found out that Envoy doesn't support Redis replication, so I guess that's not a feasible option for this. – rainhacker Sep 28 '18 at 14:36
  • @rainhacker alright, forget about Envoy then (although I'm pretty sure your conclusion is incorrect). But you're off regarding how to distribute connections across an HA Redis setup. Obviously, you can't expect clients connecting to a distributed system to keep track of all IPs of all the slaves, etc. That simply doesn't work in the real world. Long story short: if you installed redis-ha from Helm, great. Otherwise, `helm install stable/redis-ha`, because I can't guarantee your setup otherwise, and I wouldn't be so confident about it (no offense meant, I'd screw it up too probably)... – samhain1138 Sep 29 '18 at 16:11
  • @rainhacker ...Then, you'll have a service called `-redis-ha-sentinel`. Connecting to this service will load-balance your connections like you described, minus the unnecessary complexity of connecting to different IP addresses (or directly to pods, bad idea, what if the pod dies, etc.?) – samhain1138 Sep 29 '18 at 16:11
  • I think you lack understanding of how Redis clients work. Redis client libraries maintain a topology of Redis nodes at their end and refresh it from time to time. Yes, clients connecting to Redis do keep track of all the IPs. -redis-ha-sentinel won't work because clients will get the IP of the master/slave from the sentinels running inside K8 via this service, which will tell them the IPs of the master/slave pods. These addresses cannot be used by external clients to query Redis inside K8. – rainhacker Oct 02 '18 at 13:30
  • Also, I tried stable/redis-ha last week. It has some serious issues, like race conditions at startup. A PR is open with the fixes but not yet merged. I may be able to solve my problem by querying the master and slave services directly instead. Will give it a try once the PR is merged. – rainhacker Oct 02 '18 at 13:32