I am new to Kafka in Spring Boot. While working on a Spring Boot application, I encountered the following error from Kafka:
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369296129 larger than 104857600)
And the corresponding error in my application:
Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry';
I figured out that the error occurs only when I set something in spring.profiles.active
in my application.properties file. If I remove all usages of the Spring profile, the application starts perfectly. What is happening here, and how can I fix it?
My application.properties file:
server.port=${PORT}
server.shutdown.grace-period=${SHUTDOWN_GRACE_PERIOD:5s}
# PostgreSQL
spring.datasource.hikari.maximum-pool-size=${DB_POOL_MAX_SIZE:2}
spring.datasource.hikari.minimum-idle=${DB_POOL_MIN_IDLE:1}
spring.datasource.hikari.idle-timeout=${DB_POOL_IDLE_TIMEOUT_IN_MS:30000}
spring.datasource.url=${DATASOURCE_URL}
spring.datasource.username=${DATASOURCE_USERNAME}
spring.datasource.password=${DATASOURCE_PASSWORD}
spring.jpa.hibernate.ddl-auto=none
spring.datasource.initialization-mode=${DATA_INITIALIZATION_MODE:always}
service.name=integrations_service
# FeignClient configuration
feign.client.config.default.loggerLevel=${FEIGN_LOG_LEVEL:full}
feign.client.config.default.connectTimeout=${FEIGN_CONNECTION_TIMEOUT:30000}
feign.client.config.default.readTimeout=${FEIGN_READ_TIMEOUT:30000}
# kafka config and topics
kafka.servers=${KAFKA_SERVER}
kafka.topic1=${KAFKA_TOPIC_1}
kafka.topic2=${KAFKA_TOPIC_2}
kafka.topic3=${KAFKA_TOPIC_3}
# access log config
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.directory=/dev
server.tomcat.accesslog.prefix=stdout
server.tomcat.accesslog.buffered=false
server.tomcat.accesslog.suffix=
server.tomcat.accesslog.file-date-format=
server.tomcat.accesslog.pattern=[ACCESS_LOG] %t %F %D %B %v %r %s %{X-Correlation-Id}
# thread config
thread.api.core.pool.size=${API_CORE_THREAD_POOL_SIZE}
thread.api.max.pool.size=${API_MAX_THREAD_POOL_SIZE}
# environment config
spring.profiles.active=${ENVIRONMENT}
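For what it's worth, most placeholders in this file carry a ${VAR:default} fallback, while spring.profiles.active=${ENVIRONMENT} has none, so I would expect it to fail only when ENVIRONMENT is entirely unset. A variant with a fallback (the `local` default here is just an example) would look like:

```properties
# with a fallback, a missing ENVIRONMENT would not break startup
spring.profiles.active=${ENVIRONMENT:local}
```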
My local.env file:
ENVIRONMENT=local
PORT=8086
KAFKA_SERVER=localhost:9092
DATASOURCE_URL=jdbc:postgresql://localhost:5432/integrations_service
DATASOURCE_USERNAME=username
DATASOURCE_PASSWORD=password
DB_POOL_MAX_SIZE=2
DB_POOL_MIN_IDLE=1
DB_POOL_IDLE_TIMEOUT_IN_MS=30000
DATA_INITIALIZATION_MODE=always
FEIGN_LOG_LEVEL=full
FEIGN_CONNECTION_TIMEOUT=30000
FEIGN_READ_TIMEOUT=30000
API_CORE_THREAD_POOL_SIZE=5
API_MAX_THREAD_POOL_SIZE=10
KAFAK_TOPIC_1=topic1
KAFAK_TOPIC_2=topic2
KAFAK_TOPIC_3=topic3
My error stack trace in the Spring Boot application:
"error.stack_trace": "org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry';
nested exception is java.lang.IllegalStateException: Topic(s) [topic1] is/are not present and missingTopicsFatal is true
org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185)
org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:894)
org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:162)
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:553)
org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747)
org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
org.springframework.boot.SpringApplication.run(SpringApplication.java:1215)
com.integrations.IntegrationsServiceApplication.main(IntegrationsServiceApplication.java:21)
Caused by: java.lang.IllegalStateException: Topic(s) [topic1] is/are not present and missingTopicsFatal is true
org.springframework.kafka.listener.AbstractMessageListenerContainer.checkTopics(AbstractMessageListenerContainer.java:366)
org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:136)
org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:323)
org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:309)
org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:256)
org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
... 14 more"
My usage of spring.profiles.active (sample class):
public class SampleClass {
    ...
    @Value("${spring.profiles.active}")
    private String environment;
    // use environment further
    ...
}
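To make sure I understand the placeholder syntax correctly, here is a minimal sketch I wrote (my own mimic, not Spring's actual resolver — the class name `PlaceholderDemo` is made up) of how I believe ${name:default} resolution behaves:

```java
import java.util.Map;

// Sketch (not Spring's real implementation): a default after ':' is used
// when the property is unset, while a plain ${name} has nothing to fall
// back on and resolution fails.
public class PlaceholderDemo {

    public static String resolve(String placeholder, Map<String, String> props) {
        // strip the surrounding "${" and "}"
        String body = placeholder.substring(2, placeholder.length() - 1);
        int colon = body.indexOf(':');
        String name = colon >= 0 ? body.substring(0, colon) : body;
        String fallback = colon >= 0 ? body.substring(colon + 1) : null;

        String value = props.get(name);
        if (value != null) {
            return value;
        }
        if (fallback != null) {
            return fallback;
        }
        throw new IllegalArgumentException("Could not resolve placeholder '" + name + "'");
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("ENVIRONMENT", "local");
        System.out.println(resolve("${ENVIRONMENT}", env));              // set: uses the value
        System.out.println(resolve("${SHUTDOWN_GRACE_PERIOD:5s}", env)); // unset: uses the default
        // resolve("${MISSING}", env) would throw, the way Spring fails at startup
    }
}
```

So my expectation is that @Value("${spring.profiles.active}") should only blow up when ENVIRONMENT is missing entirely, which is not the case in my local.env.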
My docker-compose.yml file for Kafka and ZooKeeper:
version: "2"
services:
zookeeper:
image: docker.io/bitnami/zookeeper:3.7
ports:
- "2181:2181"
volumes:
- "zookeeper_data:/bitnami"
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: docker.io/bitnami/kafka:3
ports:
- "9092:9092"
volumes:
- "kafka_data:/bitnami"
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- AUTO_CREATE_TOPICS_ENABLE=yes
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
depends_on:
- zookeeper
volumes:
zookeeper_data:
driver: local
kafka_data:
driver: local