
EDIT: I am rewriting this question to narrow the scope as suggested in comments.

Under Deploying the application, the documentation says,

To run the sample with Istio requires no changes to the application itself. Instead, you simply need to configure and run the services in an Istio-enabled environment, with Envoy sidecars injected alongside each service.

I have a Node.js back-end API that writes logs with the winston package. I would presume that the application will have to be changed so that the logs from the winston package can participate in distributed tracing. Is this correct?

cogitoergosum
  • I think this is far out of scope for a single question, but a few short notes: gRPC runs over HTTP; Istio does provide automatic TLS between mesh nodes; it would have no impact on a service that only talks to Kafka; and rather than changing app code, you repoint all remote HTTP endpoints through the sidecar proxy. – coderanger Nov 08 '19 at 04:35

2 Answers


Distributed tracing systems in general require adding headers to outbound requests to tell the tracing system and downstream tasks which trace a given request belongs to. While this isn't Istio-specific, Istio does document a list of OpenTracing headers that need to be passed along. If you don't do this, then each call between services will show up as a separate trace instead of them being stitched together into a single unified end-to-end trace.
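
As a hedged illustration of that header propagation (not from the answer itself), here is a minimal sketch for a Node.js service, assuming Express and axios; the header names are the ones Istio's distributed-tracing docs list for the Bookinfo sample, while the route and the downstream URL are made up:

```javascript
// Sketch: copy the Istio/B3 tracing headers from the inbound request onto the
// outbound call so the sidecars can stitch both hops into one trace.
const express = require('express');
const axios = require('axios');

const app = express();

// Headers Istio's tracing docs ask you to forward (x-request-id plus the B3 set).
const TRACE_HEADERS = [
  'x-request-id',
  'x-b3-traceid',
  'x-b3-spanid',
  'x-b3-parentspanid',
  'x-b3-sampled',
  'x-b3-flags',
  'x-ot-span-context',
];

// Collect whichever tracing headers arrived on the inbound request.
function tracingHeaders(req) {
  const headers = {};
  for (const name of TRACE_HEADERS) {
    if (req.headers[name]) headers[name] = req.headers[name];
  }
  return headers;
}

app.get('/orders/:id', async (req, res) => {
  // Hypothetical downstream service; replace with your real endpoint.
  const details = await axios.get(
    `http://inventory.default.svc.cluster.local/items/${req.params.id}`,
    { headers: tracingHeaders(req) }
  );
  res.json(details.data);
});

app.listen(3000);
```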

This is separate from your logging system. Unless you're shipping logs over HTTP to something like Logstash or directly into Elasticsearch, logs won't show up in traces at all. The flip side is that you don't need to change anything in your logging setup to "work with" Istio, mostly because the two don't interact much directly.

David Maze
  • So, basically, I have to undo all my `winston` logs and replace them with something that Istio can understand - correct? – cogitoergosum Nov 08 '19 at 13:20
  • You should absolutely keep your `winston` logs; they have valuable data and it's complementary to what's in the tracing. But consider setting up something like a cluster-wide `fluentd` collector to get the logs into one place, and consider including a value like the OpenTracing `X-Request-Id:` header in the log content to be able to pair things up (see the sketch after this comment thread). – David Maze Nov 08 '19 at 13:48
  • If I have `X-Request-Id` set up (I already do, actually) and an ELK/EFK cluster set up outside of OpenShift, then what value does OpenTracing add? – cogitoergosum Nov 08 '19 at 14:16
  • Seeing the visual graph of which services called which other services and how long it spent there is really useful. If an inbound request takes 10s and it should take 1s, but it traversed six different services in its lifetime, the trace graph can pretty clearly tell you which service to start debugging. – David Maze Nov 08 '19 at 14:30
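
A hedged sketch of the suggestion from the comments above, assuming winston 3.x and an Express middleware; the logger configuration, route, and field names are illustrative, not anything Istio requires:

```javascript
// Sketch: include the inbound x-request-id in every winston log entry so log
// lines can later be paired with the matching trace in Jaeger/Zipkin.
const express = require('express');
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [new winston.transports.Console()],
});

const app = express();

// Stash the request id once per request so handlers can log with it.
app.use((req, res, next) => {
  req.requestId = req.headers['x-request-id'] || 'unknown';
  next();
});

app.get('/orders/:id', (req, res) => {
  logger.info('fetching order', { requestId: req.requestId, orderId: req.params.id });
  res.json({ id: req.params.id });
});

app.listen(3000);
```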

No, your assumption is not correct. Istio tracing has nothing to do with logs. It is all about custom headers managed by Istio and modified automatically by the sidecars, which lets each sidecar that processes the traffic record a timestamp when traffic enters (the request) and leaves (the response). This gives you a reasonably useful picture of the actual delays between the containers participating in a network call.

On top of that, you are free to modify your app's code to add even more detailed, method-level tracing using an OpenTracing-compatible library for your app's language. Basically, you add a few lines alongside your winston logging to record checkpoints of your code's execution pipeline as well. While you could parse your logs and derive the same numbers from log timestamps, that is far more work than what OpenTracing already gives you.
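
For example, a minimal sketch of what that extra instrumentation might look like in Node.js, assuming the `opentracing` npm package with a concrete tracer (for example one from `jaeger-client`) already registered as the global tracer; the function, operation, and tag names are made up:

```javascript
// Sketch: wrap a unit of work in an application-level span, alongside the
// existing winston logging. Assumes a concrete tracer has already been
// registered via opentracing.initGlobalTracer(...); otherwise this is a no-op.
const opentracing = require('opentracing');
const winston = require('winston');

const tracer = opentracing.globalTracer();
const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});

async function processOrder(orderId) {
  const span = tracer.startSpan('processOrder');
  span.setTag('order.id', orderId);
  try {
    logger.info('processing order', { orderId });
    // ... actual business logic goes here ...
    span.log({ event: 'order_processed' });
  } finally {
    span.finish(); // reports this checkpoint to the tracing backend
  }
}

processOrder('1234');
```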