
We have a setup like this:

```
elasticsearch <----> istio-proxy sidecar | elasticsearch-exporter (es-exporter) | istio-proxy sidecar <------> prometheus
```

All services are running within an EKS cluster. Istio version: 1.4.10.

Since a huge amount of data is present in Elasticsearch, es-exporter takes a while (around 50s) to collect it. Prometheus scrapes es-exporter every 60s. Because the default timeout of the istio-proxy sidecar container (es-exporter's sidecar) is 15s, the Prometheus targets show as down with `server returned HTTP status 504 Gateway Timeout`.

Any idea how to overcome this issue? Increasing istio-proxy's timeout looks like a potential solution, but I'm not sure how exactly this can be done.
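
For reference, the scrape configuration is roughly the following (job name and pod label are placeholders, not the exact config); note that `scrape_timeout` also has to cover the ~50s collection time, since its default is only 10s:

```yaml
scrape_configs:
  - job_name: es-exporter            # placeholder job name
    scrape_interval: 60s             # Prometheus scrapes es-exporter every 60s
    scrape_timeout: 55s              # must cover the ~50s collection time (default is only 10s)
    kubernetes_sd_configs:
      - role: pod                    # es-exporter pods are scraped by pod IP
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: es-exporter           # placeholder pod label
        action: keep
```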

  • Changing the default sidecar timeout is a good idea for starters. You might also want to look into the es-exporter timeout (the `es.timeout` argument), which is 5s by default; a sketch of how that flag is passed follows these comments. –  Jan 24 '22 at 13:01
  • How would you change the istio-proxy sidecar timeout? I'm new to Istio and service meshes. `es.timeout` is fine. – Sharat Naik Jan 25 '22 at 05:32
  • You have to set it in the *VirtualService* YAML. Add the `spec.http.timeout` field (see the sketch after these comments). You can read more [here](https://istio.io/latest/docs/tasks/traffic-management/request-timeouts/). Give us an update if it helped. –  Jan 25 '22 at 09:37
  • Just tried this. It didn't help, probably because Prometheus scrapes the pod IP of es-exporter instead of the service name, and the timeout specified in the VirtualService only affects requests addressed to the service. – Sharat Naik Jan 31 '22 at 10:48
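
For reference, the `es.timeout` setting mentioned above is a command-line flag of the exporter. A minimal sketch of raising it in the es-exporter Deployment (image tag, URI and values are illustrative, not the exact manifest):

```yaml
# Container excerpt from an es-exporter Deployment; names and values are placeholders.
containers:
  - name: es-exporter
    image: quay.io/prometheuscommunity/elasticsearch-exporter:v1.3.0
    args:
      - "--es.uri=http://elasticsearch:9200"   # placeholder Elasticsearch endpoint
      - "--es.timeout=60s"                     # raised from the 5s default to cover slow collections
    ports:
      - containerPort: 9114                    # exporter's default metrics port
```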
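
And this is roughly the VirtualService timeout I tried per the suggestion above (service/host names are placeholders):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: es-exporter
spec:
  hosts:
    - es-exporter                # placeholder Kubernetes service name
  http:
    - timeout: 90s               # request timeout raised above the ~50s collection time
      route:
        - destination:
            host: es-exporter    # placeholder service host
            port:
              number: 9114
```

As noted in the last comment, this did not help, presumably because it only applies to requests addressed to the service, while Prometheus scrapes the pod IP directly.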

0 Answers