
Some solutions reported on Stack Overflow suggest increasing MAX_REQUEST_SIZE_CONFIG. In my case that does not help, because the broker cannot be changed: it is configured to accept messages of at most 1 MB.

Is there any way to send a larger message? We tried several compression types, but it did not work. Below are our configuration and some code snippets.
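For context on why compression may or may not get you under the limit: the broker checks the size of the batch as sent, so a highly repetitive JSON payload can compress well below 1 MB, while random or already-compressed data will barely shrink. A minimal, Kafka-free sketch using plain `java.util.zip` (class and method names here are mine, not from the question):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

public class GzipSizeDemo {

    // Compress a byte array with gzip and return the compressed bytes.
    public static byte[] gzip(byte[] input) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(input);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Repetitive JSON-like payload: roughly 2 MB raw, compresses very well.
        byte[] repetitive = "{\"info\":\"abc\",\"payLoad\":\"xyz\"},"
                .repeat(70_000).getBytes();
        System.out.printf("raw=%d bytes, gzip=%d bytes%n",
                repetitive.length, gzip(repetitive).length);
    }
}
```

If your payload compresses like this, `compression.type=gzip` on the producer may be enough; if it does not, compression alone cannot beat the broker-side limit.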

ProducerConfig

props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
props.put(ProducerConfig.CLIENT_ID_CONFIG, clientID);
props.put(ProducerConfig.ACKS_CONFIG, acks);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
props.put(ProducerConfig.RETRIES_CONFIG, retries);
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, compression);
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, requestTimeout);
props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, transactionTimeout);
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, maxBlock);

props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);

We send a message in the following format:

public class Test {
    private Class1 info;
    private Class3 payLoad;
}

I also tried this library: li-apache-kafka-clients.
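As I understand it, li-apache-kafka-clients splits a large record into segments on the producer side and reassembles them on the consumer side, so both ends must use the Li wrappers; a plain KafkaConsumer (or a Spring Kafka listener built on one) will only see the individual segments. A hedged sketch of the producer-side setup based on the library's README (the property names, such as `large.message.enabled` and `max.message.segment.bytes`, and the class names should be verified against the version you use):

```java
// Assumed package paths -- check them against your li-apache-kafka-clients version:
import com.linkedin.kafka.clients.largemessage.DefaultSegmentSerializer;
import com.linkedin.kafka.clients.producer.LiKafkaProducer;
import com.linkedin.kafka.clients.producer.LiKafkaProducerImpl;
import java.util.Properties;

Properties props = new Properties();
props.put("bootstrap.servers", servers);
// Assumed large-message settings:
props.put("large.message.enabled", "true");              // enable segmentation
props.put("max.message.segment.bytes", 900 * 1024);      // keep each segment under the broker's 1 MB cap
props.put("segment.serializer", DefaultSegmentSerializer.class.getName());

// LiKafkaProducer is expected to split oversized records into segments transparently.
LiKafkaProducer<String, Test> producer = new LiKafkaProducerImpl<>(props);
```

The consumer side would need the matching LiKafkaConsumer (or equivalent deserialization) to reassemble the segments, which a standard Spring Kafka container factory does not do on its own.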

The producer sends the data correctly, but the listener does not receive it, because I use a batch listener:

ConcurrentKafkaListenerContainerFactory<String, Test> factory = new ConcurrentKafkaListenerContainerFactory<>();

factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckOnError(false);
factory.setBatchListener(true);
factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
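One thing worth checking with this configuration: `AckMode.MANUAL_IMMEDIATE` means the listener itself must acknowledge each batch, otherwise offsets are never committed and records can look as if they were never received after a restart. A sketch of what the matching batch listener could look like in Spring Kafka (the topic name and bean wiring are assumptions):

```java
@KafkaListener(topics = "test-topic", containerFactory = "factory")
public void listen(List<ConsumerRecord<String, Test>> records, Acknowledgment ack) {
    for (ConsumerRecord<String, Test> record : records) {
        // process record.value() ...
    }
    ack.acknowledge(); // required with AckMode.MANUAL_IMMEDIATE
}
```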

Any ideas on how to fix this?
