
Good day everyone!

The main problem: I want to connect from my local machine to Kafka, which is running on a cluster node (DNS name node03.st) in a Kubernetes container deployed from my own manifests.

The ZooKeeper Deployment manifest is here (image: confluentinc/cp-zookeeper:6.2.4):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: aptmess
  name: zookeeper-aptmess-deployment
  labels:
    name: zookeeper-service-filter
spec:
  selector:
    matchLabels:
      app: zookeeper-label
  template:
    metadata:
      labels:
        app: zookeeper-label
    spec:
      containers:
        - name: zookeeper
          image: confluentinc/cp-zookeeper:6.2.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2181 # ZK client
              name: client
            - containerPort: 2888 # Follower
              name: follower
            - containerPort: 3888 # Election
              name: election
            - containerPort: 8080 # AdminServer
              name: admin-server
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zookeeper
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_TICK_TIME
              value: "2000"
---
apiVersion: v1
kind: Service
metadata:
  namespace: aptmess
  name: zookeeper-service-aptmess
  labels:
    name: zookeeper-service-filter
spec:
  type: NodePort
  ports:
    - port: 2181
      protocol: TCP
      name: client
    - name: follower
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
    - port: 8080
      protocol: TCP
      name: admin-server
  selector:
    app: zookeeper-label

My Kafka StatefulSet manifest (image: confluentinc/cp-kafka:6.2.4):


---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: aptmess
  name: kafka-stateful-set-aptmess
  labels:
    name: kafka-service-filter
spec:
  serviceName: kafka-broker
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: kafka-label
  template:
    metadata:
      labels:
        app: kafka-label
    spec:
      volumes:
        - name: config
          emptyDir: {}
        - name: extensions
          emptyDir: {}
        - name: kafka-storage
          persistentVolumeClaim:
            claimName: kafka-data-claim
      terminationGracePeriodSeconds: 300
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:6.2.4
          imagePullPolicy: Always
          ports:
            - containerPort: 9092
          resources:
            requests:
              memory: "2Gi"
              cpu: "1"
          command:
            - bash
            - -c
            - unset KAFKA_PORT; /etc/confluent/docker/run
          env:
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-broker
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-service-aptmess:2181
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: "PLAINTEXT:PLAINTEXT,CONNECTIONS_FROM_HOST:PLAINTEXT"
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: "PLAINTEXT"
            - name: KAFKA_LISTENERS
              value: "PLAINTEXT://0.0.0.0:9092"
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://kafka-broker.aptmess.svc.cluster.local:9092"

          volumeMounts:
            - name: config
              mountPath: /etc/kafka
            - name: extensions
              mountPath: /opt/kafka/libs/extensions
            - name: kafka-storage
              mountPath: /var/lib/kafka/
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: aptmess
  name: kafka-broker
  labels:
    name: kafka-service-filter
spec:
  type: NodePort
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP

  selector:
    app: kafka-label

NodePort for port 9092 is 30000.

When I try to connect from localhost, I get an error:

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=['node03.st:30000']
)
>> Error connecting to node kafka-broker.aptmess.svc.cluster.local:9092 (id: 1 rack: null)

I spent a long time changing the internal and external listeners, but it didn't help. What should I do to send a message from my localhost to the remote Kafka broker?

Thanks in advance!

P.S.: I have searched these links to find results:

— aptmess
1 Answer


NodePort for port 9092 is 30000

Then you need to define that node's hostname and port as part of KAFKA_ADVERTISED_LISTENERS, as mentioned in many of the linked posts. You have only defined one listener, and it is internal to Kubernetes. Keep in mind, though, that this is a fragile solution unless you force the broker pod to run on that one host and use that one port.
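A minimal sketch of what that could look like, assuming a second listener named CONNECTIONS_FROM_HOST (which your KAFKA_LISTENER_SECURITY_PROTOCOL_MAP already maps to PLAINTEXT) on a hypothetical container port 19092; node03.st and NodePort 30000 come from your question:

```yaml
# Sketch only: a second listener for traffic arriving from outside the cluster.
# Container port 19092 is an assumption; pick any free port and also add it
# to the container's `ports:` list.
env:
  - name: KAFKA_LISTENERS
    value: "PLAINTEXT://0.0.0.0:9092,CONNECTIONS_FROM_HOST://0.0.0.0:19092"
  - name: KAFKA_ADVERTISED_LISTENERS
    value: "PLAINTEXT://kafka-broker.aptmess.svc.cluster.local:9092,CONNECTIONS_FROM_HOST://node03.st:30000"
---
# The NodePort Service must then route external port 30000 to the new
# container port, while the in-cluster port 9092 stays as it was.
apiVersion: v1
kind: Service
metadata:
  namespace: aptmess
  name: kafka-broker
spec:
  type: NodePort
  ports:
    - name: kafka-port
      port: 9092
      protocol: TCP
    - name: kafka-external
      port: 19092
      targetPort: 19092
      nodePort: 30000
      protocol: TCP
  selector:
    app: kafka-label
```

With this split, in-cluster clients keep resolving the advertised service DNS name on 9092, while external clients bootstrapping against node03.st:30000 get back an address they can actually reach.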

Alternatively, replace your setup with the Strimzi operator and read how you can use Ingress resources (ideally) to access the Kafka cluster; NodePort is also supported - https://strimzi.io/blog/2019/04/17/accessing-kafka-part-1/ (cross-reference with the latest documentation, since that is an old post).

Ingresses would be ideal because the Ingress controller can dynamically route requests to the broker pods while exposing a fixed external address; otherwise you will constantly need to query the Kubernetes API to describe the broker pods and get their current port information.
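For the NodePort route, that lookup can be scripted; a sketch using the Service and pod names from your manifests (the StatefulSet's first pod is kafka-stateful-set-aptmess-0):

```shell
# Which NodePort is currently assigned to the kafka-port entry?
kubectl -n aptmess get service kafka-broker \
  -o jsonpath='{.spec.ports[?(@.name=="kafka-port")].nodePort}'

# Which node is the broker pod currently scheduled on?
kubectl -n aptmess get pod kafka-stateful-set-aptmess-0 \
  -o jsonpath='{.spec.nodeName}'
```

This is exactly the moving-parts problem an Ingress (or Strimzi's external listeners) would hide behind one stable address.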

— OneCricketeer