
On setting the Kafka producer property `enable.idempotence` to `true`:

kafkaProps.put("enable.idempotence" , "true");

I am getting the below error:

2021-04-18 16:43:53.584 ERROR 15524 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender  : [Producer clientId=producer-1] Aborting producer batches due to fatal error

org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.

2021-04-18 16:43:53.585 ERROR 15524 --- [  restartedMain] c.a.c.g.kafkaclient.PricerProducer       : sending above record failed. java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.

Does the cluster have to support/enable this feature? If so, what is the minimum version of Kafka the cluster should be on?


From the Kafka docs:

enable.idempotence

When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5, retries to be greater than 0 and acks must be 'all'. If these values are not explicitly set by the user, suitable values will be chosen. If incompatible values are set, a ConfigException will be thrown.

Type: boolean
Default: false
Importance: low

I have set max.in.flight.requests.per.connection=1 and acks is unset, so it is automatically set to -1 (all). So my configuration looks fine, but even if it were not, that should result in a ConfigException, not a ClusterAuthorizationException.
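For reference, a minimal sketch of the producer setup described above (the bootstrap server, serializers, and SASL/JAAS details are placeholders, not the actual values from this application):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerExample {
    public static void main(String[] args) {
        Properties kafkaProps = new Properties();
        // Placeholder connection settings -- the real application authenticates via SASL/JAAS
        kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // The settings discussed in the question
        kafkaProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        kafkaProps.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
        // acks is left unset; with idempotence enabled the producer requires acks=all

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(kafkaProps)) {
            // send records here
        }
    }
}
```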

  • What versions _are_ you using? And idempotence has nothing to do with authorization, so are you trying to use a SASL/SSL connection? – OneCricketeer Apr 18 '21 at 13:58
  • I am sure our brokers are upwards of 0.11. `Kafka 0.11.0 includes support for idempotent and transactional capabilities in the producer.` We are using SASL/JAAS. – joven Apr 18 '21 at 14:12
  • If I remove `kafkaProps.put("enable.idempotence" , "true");` it works normally; no ClusterAuthorizationException is thrown. – joven Apr 18 '21 at 14:13
  • I remember we used to get some type of error when `log.message.format.version` on the brokers was less than 0.11... There's also an `IdempotentWrite` ACL - https://docs.confluent.io/5.3.0/kafka/authorization.html#enabling-authorization-for-idempotent-and-transactional-apis – OneCricketeer Apr 18 '21 at 14:18
  • ++thanks. This should be the most probable reason - `Enabling Authorization for Idempotent and Transactional APIs` – joven Apr 18 '21 at 14:34
  • `enable.idempotence` is `true` by default! I had to explicitly set it to `false` to get a Kafka client connecting to my company hosted instances. Thanks so much - I was stuck on this for over 4 hours this morning. Shame on the maintainers of Kafka for defaulting to a client configuration that may not be allowed by a server, and especially for having the server return such a vague error when the client's configuration is wrong like this. – ArtOfWarfare May 16 '23 at 18:01

1 Answer


According to the Kafka docs, it's an authorization issue. Under "Authorization and ACLs" you'll find operations like "IdempotentWrite". It looks like you must be authorized to perform "IdempotentWrite", so you need to add this privilege via an ACL, for example: operation "IdempotentWrite" for resource "Cluster".

An idempotent produce action requires this privilege.

As described here: https://kafka.apache.org/documentation/#operations_resources_and_protocols
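For illustration, here is a sketch of how that privilege could be granted programmatically with the Kafka AdminClient; the bootstrap server and principal name are placeholders, and in practice the same ACL is usually added with the kafka-acls command-line tool shown in the linked documentation:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantIdempotentWrite {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: connect as a user that is allowed to manage ACLs
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (Admin admin = Admin.create(props)) {
            // Allow the producer's principal to perform IdempotentWrite on the Cluster resource
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.CLUSTER, "kafka-cluster", PatternType.LITERAL),
                    new AccessControlEntry("User:my-producer-user", "*",
                            AclOperation.IDEMPOTENT_WRITE, AclPermissionType.ALLOW));

            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}
```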
