
I am trying to publish data from NiFi 1.7.1 to Kafka 0.10 via SASL_PLAINTEXT. We have already verified that the Kafka brokers are available and receiving on our topic via the command line on the Kafka server itself. Still, the PublishKafka_0_10 processor fails with the following logs:

2018-09-12 10:37:46,648 INFO [NiFi Web Server-365] o.a.n.c.s.StandardProcessScheduler Starting PublishKafka_0_10[id=ccfbf7e8-0165-1000-528f-6771c455e664]
2018-09-12 10:37:46,648 INFO [Timer-Driven Process Thread-9] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled PublishKafka_0_10[id=ccfbf7e8-0165-1000-528f-6771c455e664] to run with 1 threads
2018-09-12 10:37:46,658 INFO [Timer-Driven Process Thread-9] o.a.k.clients.producer.ProducerConfig ProducerConfig values: 
    acks = 1
    batch.size = 16384
    block.on.buffer.full = false
    bootstrap.servers = [ourkafkaserver:9092]
    buffer.memory = 33554432
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 20000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 6
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = kafka
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = SASL_PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer

2018-09-12 10:37:46,675 INFO [Timer-Driven Process Thread-9] o.a.k.c.s.authenticator.AbstractLogin Successfully logged in.
2018-09-12 10:37:46,675 INFO [kafka-kerberos-refresh-thread-our@nifiprincipal] o.a.k.c.security.kerberos.KerberosLogin [Principal=our@nifiprincipal]: TGT refresh thread started.
2018-09-12 10:37:46,675 INFO [kafka-kerberos-refresh-thread-our@nifiprincipal] o.a.k.c.security.kerberos.KerberosLogin [Principal=our@nifiprincipal]: TGT valid starting at: Wed Sep 12 10:37:46 UTC 2018
2018-09-12 10:37:46,676 INFO [kafka-kerberos-refresh-thread-our@nifiprincipal] o.a.k.c.security.kerberos.KerberosLogin [Principal=our@nifiprincipal]: TGT expires: Thu Sep 13 11:37:46 UTC 2018
2018-09-12 10:37:46,676 INFO [kafka-kerberos-refresh-thread-our@nifiprincipal] o.a.k.c.security.kerberos.KerberosLogin [Principal=our@nifiprincipal]: TGT refresh sleeping until: Thu Sep 13 06:45:43 UTC 2018
2018-09-12 10:37:46,676 INFO [Timer-Driven Process Thread-9] o.a.kafka.common.utils.AppInfoParser Kafka version : 0.10.2.1
2018-09-12 10:37:46,676 INFO [Timer-Driven Process Thread-9] o.a.kafka.common.utils.AppInfoParser Kafka commitId : e89bffd6b2eff799
2018-09-12 10:38:26,678 ERROR [Timer-Driven Process Thread-9] o.a.n.p.kafka.pubsub.PublishKafka_0_10 PublishKafka_0_10[id=ccfbf7e8-0165-1000-528f-6771c455e664] Failed to send all message for StandardFlowFileRecord[uuid=b2470c67-4c6e-4dd6-a969-f46e1da5673f,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1536744161212-1, container=default, section=1], offset=429, length=39],offset=0,name=10269008232495292,size=39] to Kafka; routing to failure due to org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 20000 ms.: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 20000 ms.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 20000 ms.
2018-09-12 10:38:26,679 ERROR [Timer-Driven Process Thread-9] o.a.n.p.kafka.pubsub.PublishKafka_0_10 PublishKafka_0_10[id=ccfbf7e8-0165-1000-528f-6771c455e664] Failed to send all message for StandardFlowFileRecord[uuid=5c24d2ec-9f09-44e4-91ea-237f2bfedefa,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1536744161212-1, container=default, section=1], offset=468, length=39],offset=0,name=10269023234631434,size=39] to Kafka; routing to failure due to org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 20000 ms.: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 20000 ms.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 20000 ms.
2018-09-12 10:38:26,679 INFO [Timer-Driven Process Thread-9] o.a.kafka.clients.producer.KafkaProducer Closing the Kafka producer with timeoutMillis = 20000 ms.
2018-09-12 10:38:26,679 WARN [kafka-kerberos-refresh-thread-our@nifiprincipal] o.a.k.c.security.kerberos.KerberosLogin [Principal=our@nifiprincipal]: TGT renewal thread has been interrupted and will exit.

I noticed the parameter sasl.kerberos.kinit.cmd = /usr/bin/kinit. Is it necessary to have kinit at that location, or will NiFi use Java to obtain the Kerberos ticket?

Any other hints as to why this could fail? We provide the jaas.conf file at startup with the line

java.arg.50=-Djava.security.auth.login.config=/path/to/our/kerberos/jaas.conf

in the bootstrap.conf file, and the jaas.conf contains the following:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/our/kerberos/nifi.keytab"
  serviceName="kafka"
  principal="our@nifiprincipal";
};
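For reference, this is the kind of minimal reachability check one could run from the NiFi host to rule out plain network problems before digging into SASL (a sketch; the broker host and port are taken from the bootstrap.servers value in the logs above, and any additional advertised broker hosts would need to be added):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts alike.
        return False

# Check the bootstrap broker, and ideally every advertised broker host,
# since the metadata response can point the producer at other hosts.
for host, port in [("ourkafkaserver", 9092)]:
    status = "reachable" if can_reach(host, port) else "NOT reachable"
    print(f"{host}:{port} is {status}")
```

A "Failed to update metadata" timeout is consistent with the producer never completing a connection to a broker, which a check like this would surface quickly.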
jugi
    I don't think this has to do with SASL, I think there is a networking issue where the server where NiFi is running can't reach one of the hosts where Kafka is running. See this post for some ideas - https://stackoverflow.com/questions/30880811/kafka-quickstart-advertised-host-name-gives-kafka-common-leadernotavailableexce – Bryan Bende Sep 12 '18 at 13:15
  • Indeed some of the Firewalls were locked and it was therefor not working. Thanks anyways – jugi Oct 04 '18 at 13:20

0 Answers