
Hello, I'm trying to hot-reload an SSL keystore file with the Kafka client library (producer and consumer).

That means I would like the producer or consumer to switch to a new keystore without me having to detect the change myself and close and re-open the producer or consumer.

(The reason is that in production another process will edit or replace the keystore file.)

The consumer and producer talk to the brokers fine using the file at ssl.keystore.location, but when I edit or replace that file, nothing is ever logged (not even an error if I put an invalid keystore file in its place).

// kafkaLibVersion = '2.7.0'
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class ProducerView {
    public static void main(String[] args) {

        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(ProducerConfig.LINGER_MS_CONFIG, "0");
        properties.put(ProducerConfig.ACKS_CONFIG, "1");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("security.protocol", "SSL");
        properties.put("ssl.key.password", "toto42sh");
        properties.put("ssl.keystore.password", "toto42sh");
        properties.put("ssl.keystore.location", "conf/toto.jks");

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

        while (true) {
            ProducerRecord<String, String> record1 = new ProducerRecord<>("test_topic", "a", "b");
            try {
                producer.send(record1).get(5L, TimeUnit.SECONDS);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

It never logs this line -> https://github.com/apache/kafka/blob/976e78e405d57943b989ac487b7f49119b0f4af4/clients/src/main/java/org/apache/kafka/common/security/ssl/SslFactory.java#L125

or this one if I delete the file -> https://github.com/apache/kafka/blob/976e78e405d57943b989ac487b7f49119b0f4af4/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L385

If I kill the process (the Kafka producer) and launch it again, then it detects the edited file, or fails if I have deleted the file.

Also, this line is never called in the source code:

https://github.com/apache/kafka/blob/fe1804370680b965a68fdd2978e2afa450daafe4/clients/src/main/java/org/apache/kafka/common/network/SslChannelBuilder.java#L91

Thank you

raphaelauv
  • I would assume it only needs to re-read the files when checking expiration/renewal, not periodically and certainly not for every event. At the very least, you'd have to close the producer object and re-instantiate it, which could be done on your own with a FileWatcher to detect file modification (a sketch of this approach follows these comments) – OneCricketeer Apr 04 '21 at 15:37
  • 1
    I find it also a big issue. Yes the ongoing session isn't affected by expired certs. However once an event occurs - like kafka is down - then after the event is resolved the client tries a new handshake WITHOUT using the new jks. Which leads to failure. – beatrice Aug 31 '21 at 13:17
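
Below is a minimal sketch of the workaround OneCricketeer describes, not a reload mechanism built into the client: watch the keystore file with java.nio.file.WatchService and, when it changes, close the old producer and build a new one so the new keystore is read. The class name ReloadingProducer, the helper newProducer, and the reuse of the question's topic and SSL properties are assumptions for illustration.

// Assumes kafka-clients on the classpath and the same broker/topic/SSL settings as in the question.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class ReloadingProducer {

    // Builds a fresh producer; ssl.keystore.location is read from disk at construction time.
    static KafkaProducer<String, String> newProducer(Properties properties) {
        return new KafkaProducer<>(properties);
    }

    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("security.protocol", "SSL");
        properties.put("ssl.key.password", "toto42sh");
        properties.put("ssl.keystore.password", "toto42sh");
        properties.put("ssl.keystore.location", "conf/toto.jks");

        Path keystore = Paths.get("conf/toto.jks").toAbsolutePath();
        WatchService watcher = FileSystems.getDefault().newWatchService();
        // Watch the directory containing the keystore for edits or replacements.
        keystore.getParent().register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);

        KafkaProducer<String, String> producer = newProducer(properties);
        while (true) {
            try {
                producer.send(new ProducerRecord<>("test_topic", "a", "b")).get(5L, TimeUnit.SECONDS);
            } catch (Exception e) {
                e.printStackTrace();
            }

            // Non-blocking check for file-system events since the last iteration.
            WatchKey key = watcher.poll();
            if (key != null) {
                boolean keystoreChanged = key.pollEvents().stream()
                        .anyMatch(event -> keystore.getFileName().equals(event.context()));
                key.reset();
                if (keystoreChanged) {
                    // The client never re-reads the keystore on its own, so swap the producer.
                    producer.close();
                    producer = newProducer(properties);
                }
            }
        }
    }
}

Note that if the replacement keystore is invalid, constructing the new producer will fail (as the question observes happens on restart), so in practice you would build the replacement first and only close the old producer once the new one is created successfully.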
