
Following on the discussion here, I used the steps below to enable an external client (based on kafkajs) to connect to Strimzi on OpenShift. These steps are from here.

Enable external route

The kafka-persistent-single.yaml is edited as shown below to add an external listener of type route.

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.3.0
    replicas: 1
    listeners:
      plain: {}
      tls: {}
      external:
          type: route
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.3"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 5Gi
        deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
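After editing, the custom resource can be applied and the bootstrap route host looked up. This is a sketch: the route name follows Strimzi's default `<cluster>-kafka-bootstrap` convention, and the `messaging-os` namespace is an assumption inferred from the broker hostname later in this post.

```shell
# Apply the edited Kafka custom resource (namespace is an assumption).
oc apply -f kafka-persistent-single.yaml -n messaging-os

# Look up the host of the bootstrap route Strimzi creates for the external listener.
oc get route my-cluster-kafka-bootstrap -n messaging-os \
  -o jsonpath='{.status.ingress[0].host}'
```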

Extract certificate

To extract the cluster CA certificate and use it in the client, I ran the following command:

kubectl get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -D > ca.crt

Note that I had to use `base64 -D` on macOS rather than `base64 -d` as shown in the documentation.
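Before wiring the certificate into a client, it can help to verify the TLS handshake against the route directly. A sketch, assuming the route host used later in this post; `-servername` sends SNI, which the OpenShift router needs to select the right backend:

```shell
# Verify the TLS handshake against the bootstrap route (host is an example).
openssl s_client \
  -connect my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:443 \
  -servername my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io \
  -CAfile ca.crt </dev/null
```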

Kafkajs client

This is the client, adapted from the kafkajs npm page and documentation.

const fs = require('fs')
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io'],
  ssl : { rejectUnauthorized: false,
    ca : [fs.readFileSync('ca.crt', 'utf-8')]
  }
})

const producer = kafka.producer()
const consumer = kafka.consumer({ groupId: 'test-group' })

const run = async () => {
  // Producing
  await producer.connect()
  await producer.send({
    topic: 'test-topic',
    messages: [
      { value: 'Hello KafkaJS user!' },
    ],
  })

  // Consuming
  await consumer.connect()
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        partition,
        offset: message.offset,
        value: message.value.toString(),
      })
    },
  })
}

run().catch(console.error)

Question

When I run node sample.js from the folder having ca.crt, I get a connection refused message.

{"level":"ERROR","timestamp":"2019-10-05T03:22:40.491Z","logger":"kafkajs","message":"[Connection] Connection error: connect ECONNREFUSED 192.168.99.100:9094","broker":"my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:9094","clientId":"my-app","stack":"Error: connect ECONNREFUSED 192.168.99.100:9094\n    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)"}

What am I missing?

cogitoergosum

2 Answers


I guess the problem is that you are missing the right port, 443, in the broker address, so you have to use

brokers: ['my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:443']

otherwise it is trying to connect to the default port 80 on the OpenShift route.
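One way to confirm what the route actually exposes (the route name is assumed from Strimzi's default `<cluster>-kafka-bootstrap` naming): Strimzi creates a passthrough TLS route for the external listener, so the router accepts connections on 443 and forwards raw TLS to the broker.

```shell
# Check the TLS termination mode and host of the bootstrap route.
oc get route my-cluster-kafka-bootstrap -o jsonpath='{.spec.tls.termination}'
oc get route my-cluster-kafka-bootstrap -o jsonpath='{.status.ingress[0].host}'
```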

ppatierno
  • Ok, I was trying with `8443`. With `443`, I get this message, `{"level":"ERROR","timestamp":"2019-10-05T05:21:44.986Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Closed connection","retryCount":0,"retryTime":325}` – cogitoergosum Oct 05 '19 at 05:23
  • It seems to come from NodeJS and its configuration. I would try to disable host name verification to see if it works. Then just using a raw openssl client with same cert and route address to see that all is working from that point of view. If possible doing the same using the raw Kafka console client. Take also a look at this blog post https://strimzi.io/2019/04/30/accessing-kafka-part-3.html – ppatierno Oct 05 '19 at 05:30
  • 1
    I will be off for a little bit, se you later let me know ;) – ppatierno Oct 05 '19 at 05:30
  • To disable host verification, I tried `checkServerIdentity: () => undefined` as described here - https://stackoverflow.com/a/47957605/919480 No luck. – cogitoergosum Oct 05 '19 at 05:44
  • have you tried to test the certificate with openssl client or if a kafka console consumer is able to establish a connection. We need to check if it's a NodeJS problem or not, because imho the Strimzi configuration seems to be fine. – ppatierno Oct 05 '19 at 09:53
  • For the console clients, I set-up the `client-ssl.properties` file as described here (I didn't pass the key password though) - https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/configuring-wire-encryption/content/configuring_kafka_producer_and_kafka_consumer.html. I get this error - `[2019-10-05 17:29:16,135] ERROR Error when sending message to topic my-topic with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback) org.apache.kafka.common.errors.TimeoutException: Topic my-topic not present in metadata after 60000 ms.` – cogitoergosum Oct 05 '19 at 12:00
  • `openssl s_client -connect` responds with a presence of a self-signed certificate - not more to look at; unless, you could suggest! – cogitoergosum Oct 05 '19 at 12:01
  • what's the certificate you are getting? get you give me the output of commands `oc get pods`, `oc get service`, `oc get route` and `oc get secret`? – ppatierno Oct 05 '19 at 12:03
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/200439/discussion-between-cogitoergosum-and-ppatierno). – cogitoergosum Oct 05 '19 at 12:25

After an extended discussion with @ppatierno, I feel that the Strimzi cluster works well with the Kafka console clients. The kafkajs package, on the other hand, keeps failing with NOT_LEADER_FOR_PARTITION.
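For reference, a sketch of how the console clients were pointed at the route. The truststore name and password are examples, the broker host is the one used earlier in this post, and the keytool/properties steps mirror the standard Kafka SSL client setup:

```shell
# Import the extracted cluster CA into a Java truststore (alias/password are examples).
keytool -importcert -alias strimzi-ca -file ca.crt \
  -keystore truststore.jks -storepass changeit -noprompt

# Minimal SSL configuration for the console clients.
cat > client-ssl.properties <<EOF
security.protocol=SSL
ssl.truststore.location=truststore.jks
ssl.truststore.password=changeit
EOF

# Produce through the OpenShift route on port 443.
bin/kafka-console-producer.sh \
  --broker-list my-cluster-kafka-bootstrap-messaging-os.192.168.99.100.nip.io:443 \
  --topic test-topic --producer.config client-ssl.properties
```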

UPDATE The Python client seems to work without a fuss; so, I am abandoning kafkajs.

cogitoergosum