I'm encountering the following errors while configuring Kafka with Kerberos authentication.

Can somebody please let me know what could be going wrong here and how I can fix it? I have tried various options, but nothing seems to work for me.

I can see that the ZooKeeper connection is established, but the SASL authentication step that follows fails:

[2019-10-09 05:06:07,942] INFO Initiating client connection, connectString=kafka-d1.example.com:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6adbc9d (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:07,945] DEBUG zookeeper.disableAutoWatchReset is false (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:07,959] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:07,961] DEBUG JAAS loginContext is: Client (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,252] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,253] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,254] DEBUG Client principal is "kafka/kafka-d1.example.com@EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,261] DEBUG Server principal is "krbtgt/EXAMPLE.COM@EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT valid starting at:        Wed Oct 09 05:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT expires:                  Wed Oct 09 15:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT refresh sleeping until: Wed Oct 09 13:06:47 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,265] INFO Client will use GSSAPI as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,265] DEBUG creating sasl client: Client=kafka/kafka-d1.example.com@EXAMPLE.COM;service=zookeeper;serviceHostname=kafka-d1.example.com (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,272] INFO Opening socket connection to server kafka-d1.example.com/10.14.61.17:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,277] INFO Socket connection established to kafka-d1.example.com/10.14.61.17:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,278] DEBUG Session establishment request sent on kafka-d1.example.com/10.14.61.17:2181 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,286] INFO Session establishment complete on server kafka-d1.example.com/10.14.61.17:2181, sessionid = 0x16dafa306f20009, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,288] DEBUG ClientCnxn:sendSaslPacket:length=0 (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] DEBUG saslClient.evaluateChallenge(len=0) (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,300] ERROR An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's  received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,300] ERROR SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's  received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,300] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,350] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
    at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:546)
    at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1559)
    at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1480)
    at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
    at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1472)
    at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:373)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:202)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:75)
    at kafka.Kafka.main(Kafka.scala)
[2019-10-09 05:06:08,354] INFO shutting down (kafka.server.KafkaServer)
[2019-10-09 05:06:08,356] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,357] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:08,359] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,361] INFO shut down completed (kafka.server.KafkaServer)
[2019-10-09 05:06:08,361] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-10-09 05:06:08,364] INFO shutting down (kafka.server.KafkaServer)

ZooKeeper JAAS configuration (Server section):

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab=/etc/keytabs/zookeeper.keytab
  storeKey=true
  useTicketCache=false
  principal=zookeeper/kafka-d1.EXAMPLE.COM@EXAMPLE.COM;
};
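
As a sanity check, the keytab can be inspected and tested against the KDC. This is only a sketch using the keytab path from the config above; note the principal there uses the host part kafka-d1.EXAMPLE.COM, while service principals are usually registered with the lowercase FQDN (kafka-d1.example.com), so it is worth checking which form is actually present:

# List the principals stored in the ZooKeeper keytab
klist -kt /etc/keytabs/zookeeper.keytab

# Try to obtain a ticket with the exact principal; this fails if the
# principal is not registered in the KDC or the keytab keys do not match
kinit -kt /etc/keytabs/zookeeper.keytab zookeeper/kafka-d1.example.com@EXAMPLE.COM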

cat /etc/kafka/jaas.conf 
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/keytabs/kafka-d1.keytab"
  principal="kafka/kafka-d1.EXAMPLE.COM@EXAMPLE.COM";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/keytabs/kafka-d1.keytab"
  principal="kafka/kafka-d1.EXAMPLE.COM@EXAMPLE.COM";
};
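
For context, the broker only picks up this JAAS file if the JVM is pointed at it. A minimal sketch, using the path from this post and assuming a standard Kafka distribution layout:

# Pass the JAAS file to the broker JVM before starting Kafka
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/jaas.conf"
bin/kafka-server-start.sh config/server.properties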

/etc/krb5.conf
[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts
default_tkt_enctypes = aes256-cts
permitted_enctypes = aes256-cts
udp_preference_limit = 1
kdc_timeout = 3000
ignore_acceptor_hostname = true
[realms]
EXAMPLE.COM = {
kdc = srv-kerb.example.com
admin_server = srv-kerb.example.com

kdc = srv-kerb.example.com

}
[domain_realm]
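
The [domain_realm] section above is empty; for reference, a populated one would typically map the hosts' DNS domain to the realm. An illustrative sketch for the example.com domain used in this post:

[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM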

I also see the following related error:

Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and socketChannel.socket().getInetAddress().getHostName() must match the hostname in principal/hostname@realm Kafka Client will go to AUTHENTICATION_FAILED state.
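
This message itself points at hostname resolution. A quick sketch of the checks and the JVM flag it suggests, using the hostname and IP from the log above:

# Verify forward and reverse DNS for the broker host
nslookup kafka-d1.example.com
nslookup 10.14.61.17

# JVM flag suggested by the error message, e.g. via the broker's KAFKA_OPTS
export KAFKA_OPTS="$KAFKA_OPTS -Dsun.net.spi.nameservice.provider.1=dns,sun"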

  • I referred to this [kafka-ACL](https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/) guide. – Ahshan Md Oct 09 '19 at 11:06
  • Can you check DNS resolution for `kafka-d1.EXAMPLE.COM`, forward and reverse? If it's working fine, please add the content of `krb5.conf` to your post. – mazaneicha Oct 09 '19 at 18:12
  • @mazaneicha, my DNS resolution works fine both forward and reverse, and I have shared the /etc/krb5.conf above. – Ahshan Md Oct 10 '19 at 18:01
  • Can having the Kafka broker and ZooKeeper running on the same host cause any issue with this? – Ahshan Md Oct 10 '19 at 18:04
  • Why is there nothing in the `[domain_realm]` section of krb5.conf? Having Kafka and ZK on the same host should not be a problem. – mazaneicha Oct 10 '19 at 18:56
  • @mazaneicha, I have been re-using the same config from an existing Cloudera cluster with HDFS, YARN, and other Kerberized services, and have not run into any issues there. – Ahshan Md Oct 10 '19 at 19:15

1 Answer


I had the same problem. Changing the ZooKeeper host value from the IP address to the FQDN (hostname), and also adding the hostname to /etc/hosts, fixed the problem for me.
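
Concretely, that amounts to something like the following sketch, using the hostname and IP from the question (adjust for your environment):

# server.properties: reference ZooKeeper by FQDN rather than IP
zookeeper.connect=kafka-d1.example.com:2181

# /etc/hosts: make sure that FQDN resolves on the broker host
10.14.61.17   kafka-d1.example.com   kafka-d1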

  • I have the same problem. Could you tell me how to change the ZooKeeper host value, please? – Henry Bai Mar 23 '22 at 14:42
  • @HenryBai It's through the Kerberos server configuration. I had been using the IP address of the machine as my Kerberos realm. I reconfigured the server with the desired machine hostname and it was resolved. Hope this helps. :) – The UMA Mar 28 '22 at 10:20
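
For anyone hitting the same error, it is also worth confirming that the expected service principals actually exist in the KDC. A sketch assuming an MIT KDC and the hostname/realm from the question:

# On the KDC host, list registered principals and look for the service entries
kadmin.local -q "listprincs" | grep -Ei 'zookeeper|kafka'

# Expected entries (lowercase FQDN in the host part):
#   zookeeper/kafka-d1.example.com@EXAMPLE.COM
#   kafka/kafka-d1.example.com@EXAMPLE.COM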