
I am running a 3-node Kafka cluster which is secured with SSL encryption. Now I am trying to use Kafka Connect to build a data pipeline to a source DB (MongoDB, Cassandra). As part of this process, I first tried to integrate Kafka Connect with the Kafka broker using the configuration below in the connect-distributed.properties file:

bootstrap.servers=167.67.45.142:30056
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
offset.storage.partitions=3
config.storage.topic=connect-configs
config.storage.replication.factor=3
config.storage.partitions=3
status.storage.topic=connect-statuses
status.storage.replication.factor=3
status.storage.partitions=3
offset.flush.interval.ms=10000
rest.host.name=connectcluster-0
rest.port=8083
security.protocol=SSL
ssl.truststore.location=/tmp/kafka.truststore.jks
ssl.truststore.password=password

producer.security.protocol=SSL
producer.ssl.truststore.location=/tmp/kafka.truststore.jks
producer.ssl.truststore.password=password

consumer.security.protocol=SSL
consumer.ssl.truststore.location=/tmp/kafka.truststore.jks
consumer.ssl.truststore.password=password

ssl.endpoint.identification.algorithm=
plugin.path=/var/lib/plugin-connectors/

With the above configuration parameters, I started the Connect service and it is working fine.
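For completeness, this is how the REST endpoint can be checked at this stage (it is still plain HTTP on rest.port=8083 here; hostname resolution from the machine running curl is assumed):

curl http://connectcluster-0:8083/                    # returns worker version/commit JSON when healthy
curl http://connectcluster-0:8083/connector-plugins   # lists plugins picked up from plugin.path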

As the next step, to secure the Connect cluster itself with SSL, I have added further configuration changes in the connect-distributed.properties file, given below:

listeners=https://connectcluster-0:8443
rest.advertised.listener=https
rest.advertised.host.name=connectcluster-0
rest.advertised.port=8083

ssl.keystore.location=/tmp/kafka.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
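From what I understand, the Connect REST server can also take listener-scoped overrides prefixed with listeners.https. instead of the top-level ssl.* keys. I have not set these; the sketch below just mirrors my values (paths and passwords are placeholders):

# optional listener-scoped SSL settings for the HTTPS REST listener (placeholder paths/passwords)
listeners.https.ssl.keystore.location=/tmp/kafka.keystore.jks
listeners.https.ssl.keystore.password=password
listeners.https.ssl.key.password=password
listeners.https.ssl.truststore.location=/tmp/kafka.truststore.jks
listeners.https.ssl.truststore.password=password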

I was following the Confluent documentation for Kafka Connect, specifically the section on Encryption with SSL:

https://www.docs.confluent.io/platform/current/kafka/encryption#encryption-ssl-connect

After that, I restarted the Kafka Connect process and it started successfully.

When I try to access the Connect REST service with curl https://connectcluster-0:8443/,

I get curl: (60) SSL certificate problem: self signed certificate in certificate chain...
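To see what certificate the HTTPS listener is actually serving (and whether it carries a SAN for connectcluster-0), this is a check that can be run, assuming openssl is available on the host running it:

# print the served certificate's SAN extension, if any
openssl s_client -connect connectcluster-0:8443 -servername connectcluster-0 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"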

I also tried passing the cert to curl - curl --cacert client.cer.pem https://connectcluster-0:8443/

and get curl: (50) SSL: no alternative certificate subject name matches target hostname 'connectcluster-0'
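That error suggests the certificate in kafka.keystore.jks does not carry a Subject Alternative Name matching connectcluster-0 (curl does not fall back to the CN). A sketch of how the keystore could be regenerated with a matching SAN, assuming a self-signed key pair is acceptable for testing (alias, validity, and passwords are placeholders):

# generate a key pair whose certificate carries SAN=DNS:connectcluster-0
keytool -genkeypair -alias connect-rest -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=connectcluster-0" -ext "SAN=DNS:connectcluster-0" \
  -keystore /tmp/kafka.keystore.jks -storepass password -keypass password

# confirm the SAN is present in the generated certificate
keytool -list -v -keystore /tmp/kafka.keystore.jks -storepass password | grep -A1 SubjectAlternativeName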

Please let me know if I am missing something here in the required kafka-connect configuration parameters.

How can I secure my kafka-connect endpoint so that it is accessible only over https://?

  • Have you looked at the configuration options for the `listeners=HTTPS`? – OneCricketeer May 23 '21 at 13:35
  • Yes, will it require a separate set of SSL certificates for Kafka Connect as well? – andy May 23 '21 at 14:45
  • The producers, consumers, and Schema Registry clients all can use separate certificates than the Connect workers, yes – OneCricketeer May 23 '21 at 15:45
  • Please let me know what specific configuration parameters we can pass in connect-distributed.properties so that the Connect endpoint works with HTTPS. I guess for this I again have to create a truststore and keystore? – andy May 23 '21 at 16:38
  • The documentation tells you what properties you need. You can also see this question that seems to be trying to set up the same thing in Docker https://stackoverflow.com/q/67653340/2308683 – OneCricketeer May 25 '21 at 11:38
  • Or https://stackoverflow.com/questions/45250575/securing-access-to-rest-api-of-kafka-connect and https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=74682562#content/view/74682562 and https://docs.confluent.io/platform/current/connect/security.html – OneCricketeer May 25 '21 at 11:45
  • @OneCricketeer, I tried all the approaches; none of them worked for me. One thing to highlight here: the server CN/DNS name I used for certificate generation cannot be made to match the server name in /etc/hosts, due to restrictions from the certificate provider, which does not accept the server name specified in /etc/hosts. This whole deployment is in a Kubernetes cluster, and the services are exposed through a LoadBalancer. – andy May 30 '21 at 12:54
  • One trick I have tried: changing the /etc/hosts file on the kafka-connect host to map the kafka-broker LB IP to the certificate DNS/CN name. With that I am able to create connectors, and it works fine from another cluster, but from within the cluster I am not able to access the service. It looks more like a networking or host endpoint-verification issue. Any idea? Please comment. – andy May 30 '21 at 17:09
  • I personally don't run Kafka or its components in k8s, but I feel like modifying host files is the wrong solution – OneCricketeer May 30 '21 at 18:13

0 Answers