
I have Kubernetes 1.17.5 and Istio 1.6.8 installed with the demo profile.

Here is my test setup: [nginx-ingress-controller] -> [proxy<->ServiceA] -> [proxy<->ServiceB]

  • Proxies for ServiceA and ServiceB are auto-injected by Istio (istio-injection=enabled)
  • The nginx ingress controller does not have tracing enabled and has no Envoy proxy sidecar
  • ServiceA passes the tracing headers down to ServiceB
  • I'm trying to trace calls from ServiceA to ServiceB and don't care about the Ingress->ServiceA span at the moment

When I send requests to the ingress controller, I can see that ServiceA receives all the required tracing headers from the proxy:

x-b3-traceid: d9bab9b4cdc8d0a7772e27bb7d15332f
x-request-id: 60e82827a270070cfbda38c6f30f478a
x-envoy-internal: true
x-b3-spanid: 772e27bb7d15332f
x-b3-sampled: 0
x-forwarded-proto: http

The problem is that x-b3-sampled is always set to 0 and no spans/traces are pushed to Jaeger.

A few things I've tried:

  1. I've added a Gateway and VirtualService for ServiceA to expose it through the Istio ingressgateway (see the sketch after this list). When I send traffic through the ingressgateway everything works as expected: I can see the traces [ingress-gateway]->[ServiceA]->[ServiceB] in the Jaeger UI.
  2. I've also tried installing Istio with a custom config and playing with the tracing-related parameters, with no luck.
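
For step 1, the Gateway and VirtualService looked roughly like this (a minimal sketch; the resource names, host, and port below are illustrative, not the exact manifests):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: service-a-gateway
spec:
  selector:
    istio: ingressgateway   # default Istio ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - "*"
  gateways:
  - service-a-gateway
  http:
  - route:
    - destination:
        host: service-a     # ServiceA's Kubernetes service (name assumed)
        port:
          number: 80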

Here is the IstioOperator config I tried for step 2:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100
  addonComponents:
    tracing:
      enabled: true
    grafana:
      enabled: false
    istiocoredns:
      enabled: false
    kiali:
      enabled: false
    prometheus:
      enabled: false
  values:
    tracing:
      enabled: true
    pilot:
      traceSampling: 100
arkadi4
  • `I've added Gateway and VirtualService to ServiceA to expose it through Istio ingressgateway. When I send traffic through ingressgateway everything works as expected.` I would say it works as expected. As mentioned in the Istio [documentation](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/): `Using the Istio Gateway, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.` So I assume that is one of those features. – Jakub Aug 21 '20 at 07:05
  • Yes, sending requests through the ingressgateway works as expected. But we're serving tons of traffic through the ingress controller and it works for us; replacing it with the Istio ingressgateway just to work around the tracing issue would be too big of a change at the moment. – arkadi4 Aug 21 '20 at 17:58
  • When I replace the ingress controller with any other service and initiate the request to ServiceA from inside the cluster, it works fine. So I think it has something to do with the fact that requests are being forwarded from outside the cluster. I'm trying to reconfigure the ingress controller to remove all the X-Forwarded-* headers from the request to upstream services, to trick Envoy into "thinking" that the request is local. Will see if it fixes the issue. – arkadi4 Aug 21 '20 at 18:00

1 Answer


After a few days of digging I've figured it out. The problem is the format of the x-request-id header that the nginx ingress controller uses.

The Envoy proxy expects it to be a UUID (e.g. x-request-id: 3e21578f-cd04-9246-aa50-67188d790051), but the ingress controller passes it as an unformatted random string (x-request-id: 60e82827a270070cfbda38c6f30f478a). When I pass a properly formatted x-request-id header in the request to the ingress controller, it gets passed down to the Envoy proxy and the request is sampled as expected. I also tried removing the x-request-id header from the request between the ingress controller and ServiceA with a simple EnvoyFilter, and that works as well: the Envoy proxy generates a new x-request-id and the request gets traced.
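
For reference, the EnvoyFilter was roughly along these lines (a sketch applied to ServiceA's sidecar; the namespace, workload labels, and Lua-based header removal are illustrative assumptions, not the exact filter):

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: strip-x-request-id
  namespace: default              # namespace of ServiceA (assumed)
spec:
  workloadSelector:
    labels:
      app: service-a              # ServiceA's pod label (assumed)
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: envoy.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            -- Drop the nginx-generated, non-UUID x-request-id so the
            -- sidecar generates its own and the request gets sampled.
            function envoy_on_request(request_handle)
              request_handle:headers():remove("x-request-id")
            end

With the nginx-generated header stripped on the inbound side, the sidecar falls back to generating its own UUID-formatted x-request-id and the request is sampled.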

arkadi4
  • I have a similar issue, but on ingress-nginx: I have an Envoy sidecar in order to use mTLS between the ingress controller pod and other services. The x-request-id header that comes out of ingress-nginx is in this format: '4668081de0d9e63cae60680710a23cfd', but isn't that created by Envoy, since ingress-nginx doesn't create that header (or the other b3 headers) by itself? So I'm wondering why Envoy doesn't format the request id in the proper UUID form. – Phi Van Ngoc Oct 29 '20 at 10:04
  • Can you share how you made sure the "x-request-id" is formatted properly? Any doc I can look into? – Dhannanjai Jan 28 '22 at 11:04