
I am new to Druid and am trying to ingest from SSL-enabled Kafka into SSL-enabled Druid. Druid is running on HTTPS.

Kafka version: 2.2.2, Druid version: 0.18.1

Kafka SSL works, and I can verify it using the console producer and consumer scripts:

bin/kafka-console-producer.sh --broker-list kafka01:9093 --topic testssl --producer.config config/client.properties
bin/kafka-console-consumer.sh --bootstrap-server kafka01:9093 --topic testssl --consumer.config config/client.properties --from-beginning

Both commands work, so I can confirm that Kafka SSL is set up correctly.
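For reference, the `client.properties` passed to those console clients looks roughly like the following (paths and passwords are placeholders, not my real values):

    security.protocol=SSL
    ssl.truststore.location=/datadrive/<location>.jks
    ssl.truststore.password=<password>
    ssl.keystore.location=/datadrive/<location>.jks
    ssl.keystore.password=<password>
    ssl.key.password=<password>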

Druid SSL configuration:


    druid.enablePlaintextPort=false
    druid.enableTlsPort=true
    druid.server.https.keyStoreType=jks
    druid.server.https.keyStorePath=.jks
    druid.server.https.keyStorePassword=
    druid.server.https.certAlias=
    druid.client.https.protocol=TLSv1.2
    druid.client.https.trustStoreType=jks
    druid.client.https.trustStorePath=.jks
    druid.client.https.trustStorePassword=

Kafka SSL configuration:

ssl.truststore.location=<location>.jks --- the same truststore file is used for Druid
ssl.truststore.password=<password>
ssl.keystore.location=<location>.jks  --- the same keystore file is used for Druid
ssl.keystore.password=<password>
ssl.key.password=<password>
ssl.enabled.protocols=TLSv1.2
ssl.client.auth=none
ssl.endpoint.identification.algorithm=
security.protocol=SSL

My `consumerProperties` spec looks like this:

"consumerProperties": {
      "bootstrap.servers" : "kafka01:9093",
      "security.protocol": "SSL",
      "ssl.enabled.protocols" : "TLSv1.2",
      "ssl.endpoint.identification.algorithm": "",
      "group.id" : "<grouop_name>",
      "ssl.keystore.type": "JKS",
      "ssl.keystore.location" : "/datadrive/<location>.jks",
      "ssl.keystore.password" : "<password>",
      "ssl.key.password" : "<password>",
      "ssl.truststore.location" : "/datadrive/<location>.jks",
      "ssl.truststore.password" : "<password>",
      "ssl.truststore.type": "JKS"
    }
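For context, that `consumerProperties` block sits inside a Kafka supervisor spec roughly like the sketch below (the datasource name, timestamp column, and input format here are illustrative placeholders, not my exact spec):

    {
      "type": "kafka",
      "spec": {
        "dataSchema": {
          "dataSource": "testssl",
          "timestampSpec": { "column": "timestamp", "format": "auto" },
          "dimensionsSpec": { "dimensions": [] }
        },
        "ioConfig": {
          "topic": "testssl",
          "inputFormat": { "type": "json" },
          "consumerProperties": { "...": "as shown above" },
          "useEarliestOffset": true
        },
        "tuningConfig": { "type": "kafka" }
      }
    }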

After ingestion, the datasource gets created and the segments are created as well, but with 0 rows.

And after some time I continuously get the following in the Druid logs:

[task-runner-0-priority-0] org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=kafka-supervisor-llhigfpg] Sending READ_COMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(testssl-0)) to broker kafka01:9093 (id: 0 rack: null)

And after some time, in `coordinator-overlord.log`, I get:

2020-08-03T16:51:42,881 DEBUG [JettyScheduler] org.eclipse.jetty.io.WriteFlusher - ignored: WriteFlusher@278a176a{IDLE}->null java.util.concurrent.TimeoutException: Idle timeout expired: 300001/300000 ms

I am not sure what has gone wrong, and I could not find much online about this issue. Any help would be appreciated.

NOTE: When Druid is non-HTTPS and Kafka is not SSL-enabled, everything works fine.

Amit Mundu