
I'll try to keep this brief. I have a Kafka broker running in Docker; this is the docker-compose.yaml:

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092,NGROK://6.tcp.eu.ngrok.io:15124
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092,NGROK://0.0.0.0:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,NGROK:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

I set up a Kubernetes cluster using kind on my MacBook. I created a simple Spring Boot application that creates a Kafka consumer; here is the KafkaConfig:

@EnableKafka
@Configuration
@Slf4j
@SuppressWarnings("squid:S2068")
public class KafkaConfig {

    @Autowired
    private ApplicationProperties applicationProperties;

    @Bean
    public KafkaAdmin adminClient() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, applicationProperties.getKafkaClusterURL());
        return new KafkaAdmin(configs);
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        log.info("Creating consumer factory");
        log.info(applicationProperties.getKafkaClusterURL());
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, applicationProperties.getKafkaClusterURL());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, applicationProperties.getKafkaConsumerGroupName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String>
    kafkaListenerContainerFactory() {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}

I use the application's image in the serving.yaml file for my Kubernetes cluster:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
    name: knative-poc-service
    namespace: knative-poc-app

spec:
    template:
        spec:
            containers:
              - image: alnicole0103/knative-poc:1.0
                env:
                  - name: KAFKA_CLUSTER_URL
                    value: 6.tcp.eu.ngrok.io:15124
                  - name: KAFKA_CONSUMER_GROUP_NAME
                    value: knative-poc-group-1
                  - name: KAFKA_INPUT_TOPIC
                    value: knative-poc-topic

After trying many different values for the KAFKA_CLUSTER_URL, including the Docker bridge gateway IP (172.17.0.1) and host.docker.internal, I decided to just set up a TCP tunnel using ngrok on port 29092 (which is the URL you can see in the service YAML). Using the ngrok address I can successfully test the connection with Offset Explorer, but the Knative service still can't connect.
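For context on why the ngrok bootstrap can succeed while the consumer still fails: a Kafka client first connects to the bootstrap address, then reconnects to whatever address the broker advertises for the listener it landed on. A minimal sketch of that name-to-address pairing (the ListenerMap class and its parse helper are hypothetical illustrations, not Kafka's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: splits a listener string such as
// "PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092" into a
// name -> address map, mirroring how the broker pairs each listener
// name with the address it advertises back to clients in metadata.
public class ListenerMap {
    static Map<String, String> parse(String listeners) {
        Map<String, String> map = new HashMap<>();
        for (String entry : listeners.split(",")) {
            String[] parts = entry.split("://", 2);
            map.put(parts[0], parts[1]);
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> advertised = parse(
            "PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092,NGROK://6.tcp.eu.ngrok.io:15124");
        // The ngrok tunnel forwards to host port 29092, i.e. the
        // PLAINTEXT_HOST listener, so the metadata response points the
        // client at this address, which is unreachable from inside the pod:
        System.out.println(advertised.get("PLAINTEXT_HOST")); // localhost:29092
    }
}
```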

The logs are

2023-09-02T13:38:03.043Z  WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-knative-poc-group-1-1, groupId=knative-poc-group-1] Connection to node 1 (localhost/127.0.0.1:29092) could not be established. Broker may not be available.
2023-09-02T13:38:04.115Z  INFO 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-knative-poc-group-1-1, groupId=knative-poc-group-1] Node 1 disconnected.

I can see that the ngrok address is the bootstrap address being used to connect, since I am printing the address in the KafkaConfig.

It surely can't be this hard to connect to my kafka broker from my kind cluster can it? :D

Thanks

I have tried everything above and different setups for the advertised listeners, but nothing works. The closest I think I got was when I used the ngrok address but didn't have the PLAINTEXT_HOST://localhost:29092 line in my advertised listeners, and had no KAFKA_LISTENERS defined. The error I was getting was

org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-knative-poc-group-1, groupId=knative-poc-group] Cancelled in-flight API_VERSIONS request with correlation id 53 due to node -1 being disconnected (elapsed time since creation: 164ms, elapsed time since send: 164ms, request timeout: 30000ms)
2023-09-02T12:16:02.944Z  WARN 1 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-knative-poc-group-1, groupId=knative-poc-group] Bootstrap broker 6.tcp.eu.ngrok.io:15124 (id: -1 rack: null) disconnected
  • Just use Strimzi to run Kafka in Kubernetes... Don't try to use Compose and Kubernetes together. Ngrok isn't necessary when everything is running on the same machine – OneCricketeer Sep 02 '23 at 22:30
  • @OneCricketeer thanks, I'll try out Strimzi. I know ngrok shouldn't be necessary; I just spent so much time fiddling with the networking inside Docker and Kubernetes that I assumed it would be easier to create an ngrok address that can be accessed from anywhere. – Alex Brown Sep 02 '23 at 22:38

1 Answer

ports:
    - 29092:29092

That's the host port you're connected to (and what the ngrok tunnel forwards to).

This is the advertised listener that matches that port, and therefore what the broker returns: PLAINTEXT_HOST://localhost:29092

Your client log says the connection to localhost/127.0.0.1:29092 could not be established. That means the bootstrap through ngrok worked, but the advertised listener the broker handed back doesn't use the same address. Inside the pod, localhost refers to the pod running your app, not to Kafka or to your host machine where the Docker port is forwarded, so the reconnect fails.
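One possible way to keep the ngrok route working (a sketch against the compose file in the question, not a guaranteed fix): point the ngrok TCP tunnel at the NGROK listener's port, 9093, so the address advertised back through the tunnel is the ngrok endpoint itself rather than localhost:29092:

```yaml
# Sketch: run `ngrok tcp 9093` instead of `ngrok tcp 29092`, and expose
# the NGROK listener's port on the host. A client bootstrapping through
# the tunnel then lands on the NGROK listener, whose advertised address
# (6.tcp.eu.ngrok.io:15124) is reachable from inside the kind pod.
kafka:
  ports:
    - 29092:29092
    - 9093:9093   # host port for the NGROK listener
```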

Ultimately, you shouldn't try to access host services from kind (related issue). Instead, run Kafka in Kubernetes.

OneCricketeer