
Say I have the configuration below for my KafkaConsumer; my Kafka flow is using a PublishSubscribeChannel with a task executor.

    @Bean
    @InboundChannelAdapter(channel = "someInputChannel", poller = @Poller(fixedDelay = "5000", taskExecutor = "taskexecutor"))
    public KafkaMessageSource getKafkaMessageSource() {
        KafkaMessageSource kafkaMessageSource = new KafkaMessageSource(consumerFactory, new ConsumerProperties("topic"));
        kafkaMessageSource.getConsumerProperties().setClientId("listner");
        kafkaMessageSource.setMessageConverter(messageConverter());
        kafkaMessageSource.setPayloadType(CutsomRequest.class);
        return kafkaMessageSource;
    }

ThreadPoolTaskExecutor


    @Bean(name = "taskexecutor")
    public ThreadPoolTaskExecutor queryRequestTaskExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setCorePoolSize(poolSize);
        threadPoolTaskExecutor.setMaxPoolSize(maxPoolSize);
        threadPoolTaskExecutor.setThreadNamePrefix("Request-");
        threadPoolTaskExecutor.setWaitForTasksToCompleteOnShutdown(true);
        return threadPoolTaskExecutor;
    }
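Worth noting: `ThreadPoolTaskExecutor` defaults to an effectively unbounded queue (`queueCapacity = Integer.MAX_VALUE`), so handed-off messages can accumulate in memory faster than they are processed. A hedged variant that bounds the queue (the capacity value here is illustrative, not a recommendation):

```java
@Bean(name = "taskexecutor")
public ThreadPoolTaskExecutor boundedTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(poolSize);
    executor.setMaxPoolSize(maxPoolSize);
    // Bound the queue so a backlog cannot grow without limit; once the queue
    // and max pool are full, further submissions are rejected instead of
    // silently accumulating in memory.
    executor.setQueueCapacity(100); // illustrative value
    executor.setThreadNamePrefix("Request-");
    executor.setWaitForTasksToCompleteOnShutdown(true);
    return executor;
}
```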

The doRequest channel

    @Bean
    public MessageChannel doRequest() {
        return new PublishSubscribeChannel(taskexecutor);
    }
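For completeness, subscribers attach to this channel like any other; because the channel is constructed with an `Executor`, each subscriber receives the message on an executor thread rather than the publisher's thread. A minimal hypothetical subscriber (the handler name and logging are assumptions, not from my actual code):

```java
@ServiceActivator(inputChannel = "doRequest")
public void handleRequest(CutsomRequest request) {
    // Runs on a "Request-" executor thread, since the PublishSubscribeChannel
    // was constructed with the task executor above.
    // ... process the request here ...
}
```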

The issue I am facing is `java.lang.OutOfMemoryError: Direct buffer memory`.

Below is the log stack trace for the issue:

2022-04-08 09:25:06.859 ERROR 9 --- [scheduling-1] o.s.i.c.MessagePublishingErrorHandler    : failure occurred in messaging task

java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:695) ~[?:1.8.0_311]
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[?:1.8.0_311]
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_311]
        at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241) ~[?:1.8.0_311]
        at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[?:1.8.0_311]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:378) ~[?:1.8.0_311]
        at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:103) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:118) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:481) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:164) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:257) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:480) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1261) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1230) ~[kafka-clients-2.8.1.jar!/:?]
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210) ~[kafka-clients-2.8.1.jar!/:?]
        at org.springframework.integration.kafka.inbound.KafkaMessageSource.doReceive(KafkaMessageSource.java:441) ~[spring-integration-kafka-3.3.1.RELEASE.jar!/:3.3.1.RELEASE]
        at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:184) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:212) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:407) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.integration.endpoint.AbstractPollingEndpoint.pollForMessage(AbstractPollingEndpoint.java:376) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$null$3(AbstractPollingEndpoint.java:323) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) ~[spring-core-5.2.19.RELEASE.jar!/:5.2.19.RELEASE]
        at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$4(AbstractPollingEndpoint.java:320) ~[spring-integration-core-5.3.2.RELEASE.jar!/:5.3.2.RELEASE]
        at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) [spring-context-5.2.9.RELEASE.jar!/:5.2.9.RELEASE]
        at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93) [spring-context-5.2.9.RELEASE.jar!/:5.2.9.RELEASE]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_311]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_311]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_311]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_311]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_311]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_311]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_311]

Can someone help me out with this issue? Thanks.

  • Why do you think the publish-subscribe channel is somehow related to the issue? Moreover, the one in your question is not involved with the Kafka channel adapter. You probably just have to remove that executor from the poller… – Artem Bilan Apr 12 '22 at 10:52
  • [Request-1] o.s.i.c.PublishSubscribeChannel : postSend (sent=true) on channel 'bean 'errorChannel'', message: ErrorMessage [payload=java.lang.OutOfMemoryError: Java heap space, headers={id=58e24ace-6913-32b0-8846-11015f0841dd, timestamp=1649761567429}] –  Apr 12 '22 at 11:07
  • @ArtemBilan So you think the taskexecutor is not needed? –  Apr 12 '22 at 11:08
  • Your log mentions an `errorChannel`, not your channel, so it is misleading for us. I don't know why it fails with this error, but for Kafka it is recommended to handle data in the order it was sent; therefore a task executor is wrong in that config. – Artem Bilan Apr 12 '22 at 11:12
  • Is this PublishSubscribeChannel with the taskexecutor needed or not to publish the Kafka message? –  Apr 12 '22 at 11:36
  • Please learn why we need a publish-subscribe channel: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel. Your task executor has nothing to do with this channel: it is configured on the poller, not the channel. Both situations are valid in some use cases, but yours doesn't look related. Try handling records from Kafka in a single thread to confirm that the memory consumption depends on the number of threads you allocate to the poller. – Artem Bilan Apr 12 '22 at 11:47
  • Please do a Google search for your memory problem. It looks like this one is exactly what you have: https://issues.apache.org/jira/browse/KAFKA-5814 – Artem Bilan Apr 12 '22 at 13:10
  • Just asking: what if I want to use PublishSubscribeChannel with a task executor for asynchronous processing? What code changes do I need to make? –  Apr 12 '22 at 14:52
  • The `PublishSubscribeChannel` must be supplied with an `Executor`. The one you have on the poller is just handing off the message to the separate thread. The `PublishSubscribeChannel` is synchronous by default, so all its subscribers are going to have a message sequentially, not in parallel. – Artem Bilan Apr 12 '22 at 14:55
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/243831/discussion-between-vaibhav-katwate-and-artem-bilan). –  Apr 12 '22 at 17:33
  • In my case I have to hand my message over to a separate thread, so I have set the taskexecutor on the poller, and that is what is throwing the OutOfMemoryError. –  Apr 12 '22 at 17:37
  • OK. Consider then decreasing `max.poll.records` in the consumer properties. It is `500` by default. And since you hand your messages off to other threads, all the newly pulled records pile up in memory. – Artem Bilan Apr 12 '22 at 17:45
  • `props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 60000); props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);` — max.poll.records is already set to 10 in my consumer config. –  Apr 12 '22 at 17:49
  • See also this thread: https://stackoverflow.com/questions/42900473/kafka-consumers-throwing-java-lang-outofmemoryerror-direct-buffer-memory – Artem Bilan Apr 12 '22 at 17:52
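Pulling together the tuning knobs mentioned in the comments, here is a hedged sketch of consumer properties that reduce how much data each `poll()` buffers (all values are illustrative, not recommendations; the keys are standard Kafka consumer property names). Raising `-XX:MaxDirectMemorySize` on the JVM is the other lever discussed in the linked thread:

```java
import java.util.Properties;

public class ConsumerTuning {

    // Hypothetical helper collecting the tuning knobs discussed above.
    public static Properties tunedConsumerProps() {
        Properties props = new Properties();
        // Fewer records handed back per poll() (default is 500).
        props.put("max.poll.records", "10");
        // Cap the size of a single fetch response (default 52428800 = 50 MiB);
        // fetch responses are read into direct byte buffers.
        props.put("fetch.max.bytes", "1048576");
        // Socket receive buffer size (default 65536 = 64 KiB); smaller values
        // further reduce direct-memory pressure per connection.
        props.put("receive.buffer.bytes", "32768");
        return props;
    }

    public static void main(String[] args) {
        tunedConsumerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```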
