
I am new to Kafka.

I have Kafka, ZooKeeper and Schema Registry all installed on a RHEL 7 machine (hostname: kafka-confluent). It is not a cluster setup, so there is only one broker.

Now I would like to configure SSL encryption for my setup. I have created the SSL keys and certificates according to the docs.
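Roughly the flow from the docs that I followed (the aliases, validity and file names below are just the examples from the docs; my actual values differ):

# broker keystore with a new key pair
keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA
# my own CA (certificate + private key)
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
# trust the CA in the server truststore
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
# sign the broker certificate with the CA, then import CA cert + signed cert into the keystore
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed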

Then I configured the properties files.

My (Confluent Platform install dir)/etc/kafka/server.properties:

ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
security.inter.broker.protocol=SSL
ssl.client.auth=required
listeners=PLAINTEXT://:9092,SSL://:9093

My (Confluent Platform install dir)/etc/schema-registry/schema-registry.properties:

listeners=http://0.0.0.0:8081,https://0.0.0.0:8082
ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.client.auth=true

I already have a topic test created. When I publish a message on the server, it fails:

[kafka@kafka-confluent ~]$ echo "Hello, World" | /home/kafka/confluent-5.4.0/bin/kafka-console-producer --broker-list localhost:9093 --topic test > /dev/null

[2020-02-20 18:45:12,193] ERROR Error when sending message to topic test with key: null, value: 13 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Topic test not present in metadata after 60000 ms.

Then I checked the server.log, and it shows a failed authentication:

[2020-02-20 18:45:47,754] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
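
(For reference, one way to check that the SSL listener itself answers, independent of any client configuration, is openssl s_client against the port above, for example:)

openssl s_client -connect localhost:9093 -tls1_2 < /dev/null

If the listener is configured correctly, this should print the broker's certificate chain.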

1 Answer

It is worth checking ssl.endpoint.identification.algorithm:

The endpoint identification algorithm used by clients to validate server host name. The default value is https. Clients including client connections created by the broker for inter-broker communication verify that the broker host name matches the host name in the broker’s certificate. Disable server host name verification by setting ssl.endpoint.identification.algorithm to an empty string

Therefore, setting ssl.endpoint.identification.algorithm to an empty string should do the trick:

ssl.endpoint.identification.algorithm=

Note that this config helps prevent man-in-the-middle attacks, so consider the security implications before disabling it.
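
For the console producer in the question, the same setting belongs in the client-side properties file passed to the tool via --producer.config, not only in server.properties. A minimal sketch (the file name client-ssl.properties is just an example, and it reuses the truststore/keystore paths and passwords from the question; the keystore entries are needed here because the broker sets ssl.client.auth=required):

security.protocol=SSL
ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.endpoint.identification.algorithm=

echo "Hello, World" | kafka-console-producer --broker-list localhost:9093 --topic test --producer.config /home/kafka/client-ssl.properties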

Giorgos Myrianthous
  • Hi, I have added `ssl.endpoint.identification.algorithm=` to server.properties and restarted Kafka; unfortunately, it is still showing the same error. – Kevin Lee Feb 21 '20 at 05:37