I am trying to make my Kafka producer transactional. I am sending 10 messages. If any error occurs, no message should be sent to Kafka, i.e. none or all.

I am using Spring Boot KafkaTemplate.

@Configuration
@EnableKafka
public class KafkaConfiguration {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();

        // bootstrapServers, acks, retryBackOffMsConfig and retries are assumed
        // to be injected from application properties (field declarations omitted).
        // config.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // config.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, appProps.getJksLocation());
        // config.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, appProps.getJksPassword());
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.ACKS_CONFIG, acks);
        config.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, retryBackOffMsConfig);
        config.put(ProducerConfig.RETRIES_CONFIG, retries);
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-99");

        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean(name = "ktm")
    public KafkaTransactionManager kafkaTransactionManager() {
        KafkaTransactionManager ktm = new KafkaTransactionManager(producerFactory());
        ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
        return ktm;
    }

}

I am sending 10 messages as shown below, following the documentation. 9 messages should be sent, and 1 message is over 1 MB in size, which gets rejected by the Kafka broker with a RecordTooLargeException.

https://docs.spring.io/spring-kafka/reference/html/#using-kafkatransactionmanager
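For context, the test data might be built like the following sketch (the payload sizes are assumptions; both the producer's max.request.size and the broker's message.max.bytes default to roughly 1 MB):

List<String> toSend = new ArrayList<>();
for (int i = 0; i < 9; i++) {
    toSend.add("message-" + i);          // small, valid records
}
char[] big = new char[2 * 1024 * 1024];  // ~2 MB payload
Arrays.fill(big, 'x');
toSend.add(new String(big));             // rejected with RecordTooLargeException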

@Component
@EnableTransactionManagement
class Sender {

    @Autowired
    private KafkaTemplate<String, String> template;

    private static final Logger LOG = LoggerFactory.getLogger(Sender.class);

    @Transactional("ktm")
    public void sendThem(List<String> toSend) throws InterruptedException {
        List<ListenableFuture<SendResult<String, String>>> futures = new ArrayList<>();
        CountDownLatch latch = new CountDownLatch(toSend.size());
        ListenableFutureCallback<SendResult<String, String>> callback = new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOG.info(" message sucess : " + result.getProducerRecord().value());
                latch.countDown();
            }

            @Override
            public void onFailure(Throwable ex) {
                LOG.error("Message Failed ");
                latch.countDown();
            }
        };

        toSend.forEach(str -> {
            ListenableFuture<SendResult<String, String>> future = template.send("t_101", str);
            futures.add(future); // track the future so unfinished sends can be reported below
            future.addCallback(callback);
        });

        if (latch.await(12, TimeUnit.MINUTES)) {
            LOG.info("All sent ok");
        } else {
            for (int i = 0; i < toSend.size(); i++) {
                if (!futures.get(i).isDone()) {
                    LOG.error("No send result for " + toSend.get(i));
                }
            }
        }
    }
}

But when I look at the topic t_101, 9 messages are there. My expectation was to see 0 messages, as my producer is transactional. How can I achieve this?

I am getting the following logs:

2020-04-30 18:04:36.036 ERROR 18688 --- [   scheduling-1] o.s.k.core.DefaultKafkaProducerFactory   : commitTransaction failed: CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@1eb5a312, txId=prod-990]

org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
    at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:923) ~[kafka-clients-2.4.1.jar:na]
    at org.apache.kafka.clients.producer.internals.TransactionManager.lambda$beginCommit$2(TransactionManager.java:297) ~[kafka-clients-2.4.1.jar:na]
    at org.apache.kafka.clients.producer.internals.TransactionManager.handleCachedTransactionRequestResult(TransactionManager.java:1013) ~[kafka-clients-2.4.1.jar:na]
    at org.apache.kafka.clients.producer.internals.TransactionManager.beginCommit(TransactionManager.java:296) ~[kafka-clients-2.4.1.jar:na]
    at org.apache.kafka.clients.producer.KafkaProducer.commitTransaction(KafkaProducer.java:713) ~[kafka-clients-2.4.1.jar:na]
    at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.commitTransaction(DefaultKafkaProducerFactory.java

Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.

2020-04-30 18:04:36.037  WARN 18688 --- [   scheduling-1] o.s.k.core.DefaultKafkaProducerFactory   : Error during transactional operation; producer removed from cache; possible cause: broker restarted during transaction: CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@1eb5a312, txId=prod-990]
2020-04-30 18:04:36.038  INFO 18688 --- [   scheduling-1] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-prod-990, transactionalId=prod-990] Closing the Kafka producer with timeoutMillis = 5000 ms.
2020-04-30 18:04:36.038  INFO 18688 --- [oducer-prod-990] o.a.k.clients.producer.internals.Sender  : [Producer clientId=producer-prod-990, transactionalId=prod-990] Aborting incomplete transaction due to shutdown
– Pale Blue Dot

2 Answers


Uncommitted records are written to the log; when a transaction commits or rolls back, an extra record is written to the log with the state of the transaction.

Consumers, by default, see all records, including the uncommitted records (but not the special commit/abort record).

For the console consumer, you need to set the isolation level to read_committed. See the help:

--isolation-level <String>           Set to read_committed in order to      
                                       filter out transactional messages    
                                       which are not committed. Set to      
                                       read_uncommitted to read all          
                                       messages. (default: read_uncommitted)
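The same applies to a plain Java consumer; a minimal sketch (topic, bootstrap server and group id assumed from the question) that sets the isolation level via ConsumerConfig:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "tx-check");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
// hide records from transactions that were aborted or are still open
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("t_101"));
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
    records.forEach(r -> System.out.println("committed record: " + r.value()));
}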
– Gary Russell
  • Russell, I am using the below command to see the messages in the topic: `kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic t_101 --from-beginning --property isolation.level=read_committed`. Despite providing the isolation level I was able to read 9 messages, but I should not be able to see any messages. If you see my error log it says `Aborting incomplete transaction due to shutdown`. Is the issue something to do with this? – Pale Blue Dot Apr 30 '20 at 16:31
  • And I can't see any issue with my Kafka producer properties or my use of the @Transactional annotation. – Pale Blue Dot Apr 30 '20 at 16:31
  • Since you are using `--from-beginning`, perhaps you are seeing some old records? Try running the producer again while the consumer is running. – Gary Russell Apr 30 '20 at 16:34
  • It's not `--property isolation.level`; it's just `--isolation-level`. – Gary Russell Apr 30 '20 at 16:40
  • Yes, it works, thanks a lot. `kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic t_101 --from-beginning --isolation-level=read_committed` works; I don't see uncommitted messages. Just one more doubt: why am I getting the error `Aborting incomplete transaction due to shutdown`? Is this something to worry about? – Pale Blue Dot Apr 30 '20 at 16:53
  • No; it's nothing to worry about. It's just a side-effect of closing the producer after the failed commit. – Gary Russell Apr 30 '20 at 17:53
  • Just one last doubt: suppose I have 10 messages, 9 messages were sent and 1 was not sent due to an error. Can I commit those 9 messages to the Kafka topic? The above code will make all 10 messages uncommitted. – Pale Blue Dot Apr 30 '20 at 19:23
  • No; that's the point about transactions - all stored or none stored. – Gary Russell Apr 30 '20 at 19:35
  • In the above example, do we need to create the KafkaTransactionManager @Bean(name = "ktm")? If we provide all producer-related information including the transaction settings, as below, will I need to create custom beans for the producer factory, template and transaction manager? – user3575226 Mar 21 '22 at 02:59
  • You don't need a transaction manager for sender-initiated transactions. Don't ask new questions on old answers; if there is something you don't understand, ask a new question with much more detail and your code/config. – Gary Russell Mar 21 '22 at 12:51
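For reference, a minimal sketch of such a sender-initiated transaction, using KafkaTemplate.executeInTransaction with the template and topic from the question (error handling elided):

// Every send in the callback belongs to one Kafka transaction; if the commit
// fails (e.g. one record hits RecordTooLargeException), the whole transaction
// is aborted and read_committed consumers see none of the records.
template.executeInTransaction(ops -> {
    toSend.forEach(msg -> ops.send("t_101", msg));
    return null;
});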

If I provide the below configuration in the yml file, will I still need to create the factory, template and transaction-manager beans as given in the example code?

For the given transaction example, if I use a simple consumer (Java code) or Kafka Tool, will I be able to view any records? Hopefully not - am I correct, as per the transaction example?

spring:
  profiles: local
  kafka:
    producer:
      client-id: book-event-producer-client
      bootstrap-servers: localhost:9092,localhost:9093,localhost:9094
      key-serializer: org.apache.kafka.common.serialization.IntegerSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      transaction-id-prefix: tx-${random.uuid}
      properties:
        enable.idempotence: true
        acks: all
        retries: 2        
        metadata.max.idle.ms: 10000
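For what it's worth, once spring.kafka.producer.transaction-id-prefix is set, Spring Boot auto-configures a transactional KafkaTemplate and a KafkaTransactionManager, so no custom factory, template or transaction-manager beans should be needed. A minimal sketch, assuming the auto-configured template and a hypothetical book-events topic:

import java.util.List;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
class BookEventSender {

    // auto-configured by Spring Boot because transaction-id-prefix is set
    private final KafkaTemplate<Integer, String> template;

    BookEventSender(KafkaTemplate<Integer, String> template) {
        this.template = template;
    }

    public void send(List<String> events) {
        // all sends commit or abort together; "book-events" is a hypothetical topic
        template.executeInTransaction(ops -> {
            events.forEach(e -> ops.send("book-events", e));
            return null;
        });
    }
}

And, as the accepted answer notes, a consumer (or Kafka Tool) only hides the records of an aborted transaction if it reads with isolation.level=read_committed; the default is read_uncommitted.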
– user3575226