
Using Spring Kafka's ChainedKafkaTransactionManager, I cannot see any point in implementing the transactional outbox pattern in a Spring Boot microservices context.

Putting the message producer (i.e. KafkaTemplate's send method) and the DB operation in the same transactional block solves exactly the problem the outbox pattern is meant to solve: if any exception is raised in the transactional code, the DB operation is not committed and the message is not read on the consumer side (configured with read_committed).

This way I don't need an additional table or any kind of CDC code. In summary, Spring Kafka's transaction synchronization seems much easier to use and implement to me than any implementation of the transactional outbox pattern.

Am I missing anything?

    @Bean
    public ChainedKafkaTransactionManager<Object, Object> chainedTransactionManager(
            JpaTransactionManager transactionManager,
            KafkaTransactionManager<Object, Object> kafkaTransactionManager) {
        // Transactions are started in this order and committed in reverse order,
        // i.e. Kafka first, then the database.
        return new ChainedKafkaTransactionManager<>(transactionManager,
                                                    kafkaTransactionManager);
    }

    @Bean
    @Primary
    public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }

    @Bean
    public KafkaTransactionManager<Object, Object> kafkaTransactionManager(
            ProducerFactory<Object, Object> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }


    // Service method: the DB write and the Kafka send run under the chained transaction manager.
    @Transactional(value = "chainedTransactionManager")
    public Customer createCustomer(Customer customer) {
        customer = customerRepository.save(customer);
        kafkaTemplate.send("customer-created-topic", "Customer created");
        return customer;
    }
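
For reference, the consumer side mentioned above ("configured with read_committed") is assumed to look roughly like the following sketch; the broker address and group id are placeholders, not values taken from the question.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "customer-consumer");         // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Only read records from committed Kafka transactions.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        return new DefaultKafkaConsumerFactory<>(props);
    }
}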
Marco
  • Review: Whoa, what a block of text! Adding some punctuation/paragraphs might make it more readable. Also adding some (pseudo) source code might attract people who know the answer. – H.Hasenack Jan 02 '21 at 15:38
  • @H.Hasenack Thanks for your feedback. – Marco Jan 03 '21 at 08:00
  • Kafka is new to me too, but in the producer-only case is a chained TM really needed? See https://docs.spring.io/spring-kafka/docs/2.6.4/reference/html/#transaction-synchronization (a sketch of that approach is below). – Pawel Jan 05 '21 at 14:06
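
A minimal sketch of the synchronization approach mentioned in the last comment, assuming the producer factory is transactional (e.g. spring.kafka.producer.transaction-id-prefix is set) so that the KafkaTemplate synchronizes a local Kafka transaction with the JPA one; the CustomerService class itself is illustrative. No ChainedKafkaTransactionManager is declared: the @Primary JpaTransactionManager drives the transaction, and the Kafka transaction is committed right after the DB commit.

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerService {

    private final CustomerRepository customerRepository;
    private final KafkaTemplate<Object, Object> kafkaTemplate;

    public CustomerService(CustomerRepository customerRepository,
                           KafkaTemplate<Object, Object> kafkaTemplate) {
        this.customerRepository = customerRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    // Only the primary JpaTransactionManager is referenced here; the transactional
    // KafkaTemplate synchronizes its own Kafka transaction with this one and
    // commits it immediately after the DB commit.
    @Transactional
    public Customer createCustomer(Customer customer) {
        customer = customerRepository.save(customer);
        kafkaTemplate.send("customer-created-topic", "Customer created");
        return customer;
    }
}

Note that this is still only "best effort" synchronization, not an atomic commit: a failure between the two commits leaves the database committed but the event unpublished (or, with the chained variant above, the event published but the DB rolled back), which is exactly the gap the answers below point at.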

2 Answers


I think it doesn't give you the same level of safety: the chained transaction manager only commits the two transactions one after the other (best effort), not atomically. What if something fails between the Kafka commit and the DB commit? The event is already published, but the database change is rolled back.

https://medium.com/dev-genius/transactional-integration-kafka-with-database-7eb5fc270bdc
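
For contrast, a rough sketch of what the outbox variant could look like; the OutboxEvent entity, OutboxEventRepository with its findBySentFalse query, and the scheduled relay are made-up names for illustration (a real setup often uses CDC such as Debezium instead of polling). The event row is written in the same local DB transaction as the business data, so either both are committed or neither is, and Kafka is only touched by the relay afterwards.

import java.util.List;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerOutboxService {

    private final CustomerRepository customerRepository;
    private final OutboxEventRepository outboxEventRepository;   // hypothetical JPA repository
    private final KafkaTemplate<Object, Object> kafkaTemplate;

    public CustomerOutboxService(CustomerRepository customerRepository,
                                 OutboxEventRepository outboxEventRepository,
                                 KafkaTemplate<Object, Object> kafkaTemplate) {
        this.customerRepository = customerRepository;
        this.outboxEventRepository = outboxEventRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    // Single local DB transaction: the business row and the outbox row are
    // committed (or rolled back) together; Kafka is not touched here.
    @Transactional
    public Customer createCustomer(Customer customer) {
        customer = customerRepository.save(customer);
        outboxEventRepository.save(new OutboxEvent("customer-created-topic", "Customer created"));
        return customer;
    }

    // Relay (requires @EnableScheduling): polls unsent outbox rows and publishes them.
    // If the app crashes after the send but before the row is marked sent, the event
    // is published again, so consumers must be idempotent (at-least-once delivery).
    @Scheduled(fixedDelay = 1000)
    @Transactional
    public void publishPendingEvents() {
        List<OutboxEvent> pending = outboxEventRepository.findBySentFalse();  // hypothetical query method
        for (OutboxEvent event : pending) {
            kafkaTemplate.send(event.getTopic(), event.getPayload());
            event.setSent(true);   // flushed in the same DB transaction as the poll
        }
    }
}

The price is the extra table and the relay, plus at-least-once delivery, but there is no window in which the event and the database state can diverge.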

Pawel

You get a weaker guarantee if the data you are trying to update is external to Kafka. From the Confluent post linked below:

Note that exactly-once semantics is guaranteed within the scope of Kafka Streams’ internal processing only; for example, if the event streaming app written in Streams makes an RPC call to update some remote stores, or if it uses a customized client to directly read or write to a Kafka topic, the resulting side effects would not be guaranteed exactly once.

https://www.confluent.fr/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/

benaich