
We are using Kafka internally for communication between some of our microservices, and Zipkin for distributed tracing. Could you suggest how to bring Kafka traces into the Zipkin server for better debuggability?

I came across brave-kafka-interceptor, but I could not work out how to use it with Kafka from the minimal example provided. Is there another example around, or is an altogether different library normally used for this?

v78

2 Answers


The easiest way to get this working is to use the Micrometer library and configure it to send the data to the Zipkin server.

Enabling this with Micrometer is very simple: you just need to add the micrometer-core and spring-cloud-starter-zipkin libraries.

See this tutorial for details about the configuration and code: https://www.baeldung.com/tracing-services-with-zipkin

Micrometer will then report the producer/consumer data to Zipkin.
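
For reference, here is a minimal sketch of the Spring Boot configuration this approach typically needs. The property names assume Spring Cloud Sleuth's Zipkin support (the mechanism used in the linked tutorial), and the service name is just a placeholder; adjust both to your versions and setup.

application.properties

# Placeholder service name that will show up in Zipkin
spring.application.name=my-service
# Where the Zipkin server is listening (the default is http://localhost:9411)
spring.zipkin.base-url=http://localhost:9411
# Sample every request; older Sleuth versions use spring.sleuth.sampler.percentage instead
spring.sleuth.sampler.probability=1.0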

sonus21

Add the configuration mentioned in https://github.com/openzipkin-contrib/brave-kafka-interceptor#configuration to both the producer and the consumer to enable tracing.

Once we have the traces, we need to flush them to the Zipkin server. Normally we would call the flush() method on the AsyncZipkinSpanHandler object to push traces to Zipkin, but when using the Brave Kafka interceptors we don't have access to that object.

So we need to leave some idle time in our application for the traces to be flushed. From what I have observed, if there is any idle time in the program, the reporter flushes traces to Zipkin even when flush() is not called explicitly. (I am not sure this is exactly how the flushing works; it is based purely on my observations.)
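
As a side note, if you wire Brave manually with brave-instrumentation-kafka-clients and zipkin-reporter-brave instead of using the interceptors, you do own the AsyncZipkinSpanHandler and can flush and close it explicitly. Below is a rough sketch of that alternative (not part of the interceptor setup; it also assumes the zipkin-sender-urlconnection dependency, and the class name is made up). With the interceptors we instead rely on idle time, as in the examples that follow.

ManualFlushTracing.java

import brave.Tracing;
import brave.kafka.clients.KafkaTracing;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import zipkin2.reporter.brave.AsyncZipkinSpanHandler;
import zipkin2.reporter.urlconnection.URLConnectionSender;

import java.util.Properties;

public class ManualFlushTracing {
    public static void main(String[] args) throws Exception {
        // Build the sender and span handler ourselves so we can flush them explicitly.
        URLConnectionSender sender = URLConnectionSender.create("http://127.0.0.1:9411/api/v2/spans");
        AsyncZipkinSpanHandler spanHandler = AsyncZipkinSpanHandler.create(sender);
        Tracing tracing = Tracing.newBuilder()
                .localServiceName("producer")
                .addSpanHandler(spanHandler)
                .build();

        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // Wrap the plain producer instead of registering an interceptor.
        Producer<String, String> producer = KafkaTracing.create(tracing).producer(new KafkaProducer<>(properties));
        producer.send(new ProducerRecord<>("topic", "key", "value"));
        producer.flush();

        // Because we own the span handler here, we can flush and close everything deterministically.
        spanHandler.flush();
        producer.close();
        tracing.close();
        spanHandler.close();
        sender.close();
    }
}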

ProducerTracing.java

import brave.kafka.interceptor.TracingProducerInterceptor;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerTracing {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // Register the Brave tracing interceptor for the producer and configure how its spans are reported to Zipkin.
        properties.setProperty(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName());
        properties.setProperty("zipkin.http.endpoint", "http://127.0.0.1:9411/api/v2/spans");
        properties.setProperty("zipkin.sender.type", "HTTP");
        properties.setProperty("zipkin.encoding", "JSON");
        properties.setProperty("zipkin.remote.service.name", "kafka");
        properties.setProperty("zipkin.local.service.name", "producer");
        properties.setProperty("zipkin.trace.id.128bit.enabled", "true");
        properties.setProperty("zipkin.sampler.rate", "1.0F");

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        // The interceptor creates and reports a span for each record sent.
        ProducerRecord<String, String> record = new ProducerRecord<>("topic", "key", "value");
        producer.send(record);

        try {
            // Idle time so the interceptor's reporter can flush the span to Zipkin before the JVM exits.
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

On the consumer side, however, we don't need to sleep to create idle time. When the consumer calls poll(), there is some waiting while records are fetched from the Kafka broker, and in that time the consumer's reporter can flush traces to Zipkin.

ConsumerTracing.java

import brave.kafka.interceptor.TracingConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerTracing {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "group");

        // Register the Brave tracing interceptor for the consumer and configure how its spans are reported to Zipkin.
        properties.setProperty(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());
        properties.setProperty("zipkin.http.endpoint", "http://127.0.0.1:9411/api/v2/spans");
        properties.setProperty("zipkin.sender.type", "HTTP");
        properties.setProperty("zipkin.encoding", "JSON");
        properties.setProperty("zipkin.remote.service.name", "kafka");
        properties.setProperty("zipkin.local.service.name", "consumer");
        properties.setProperty("zipkin.trace.id.128bit.enabled", "true");
        properties.setProperty("zipkin.sampler.rate", "1.0F");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

        consumer.subscribe(Collections.singleton("topic"));

        while (true) {
            // The wait inside poll() gives the tracing reporter a chance to flush spans to Zipkin.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.key() + " " + record.value());
            }
        }
    }
}

Now we can observe the traces in the Zipkin UI.

prasuna_16