I want to transform a Spark pipeline into a Vector.dev pipeline so that I can benchmark the efficiency of both agents.
With Spark I can connect to Kafka using SASL/SSL with the code below, but I can't find the equivalent options in Vector.dev.
PySpark configuration:
df.writeStream \
    .format("kafka") \
    .outputMode("append") \
    .option("kafka.sasl.jaas.config", 'org.apache.kafka.common.security.plain.PlainLoginModule required serviceName="kafka" username="access_user" password="password";') \
    .option("kafka.bootstrap.servers", kwargs["bootstrap_servers"]) \
    .option("topic", kwargs["topic"]) \
    .option("checkpointLocation", str(new_directory_location)) \
    .option("kafka.sasl.mechanism", "PLAIN") \
    .option("kafka.security.protocol", "SASL_SSL") \
    .option("kafka.ssl.truststore.location", "path_to_jks.jks") \
    .option("kafka.ssl.truststore.password", "pass2") \
    .option("kafka.ssl.truststore.type", "JKS") \
    .option("kafka.ssl.endpoint.identification.algorithm", "")
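For reference, this is roughly what I would expect the native Vector options to look like — a sketch only, based on my reading of the kafka sink's `sasl.*` and `tls.*` settings; `ca.pem` is a placeholder for a truststore that has already been converted from JKS to PEM:

```toml
[sinks.my_sink_6]
type = "kafka"
inputs = ["my_source_id"]
bootstrap_servers = "IP"
topic = "default2"
encoding.codec = "raw_message"

# SASL/PLAIN credentials (same role as the kafka.sasl.* Spark options)
sasl.enabled = true
sasl.mechanism = "PLAIN"
sasl.username = "access_user"
sasl.password = "password"

# TLS: Vector's kafka sink (librdkafka/OpenSSL) takes PEM files, not JKS;
# "ca.pem" below is a placeholder for the converted truststore
tls.enabled = true
tls.ca_file = "ca.pem"
# rough equivalent of kafka.ssl.endpoint.identification.algorithm = ""
tls.verify_hostname = false
```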
I tried the following Vector configuration and got an error with the JKS file:
[sinks.my_sink_6]
type = "kafka"
bootstrap_servers = "IP"
topic = "default2"
inputs = ["my_source_id"]
encoding.codec = "raw_message"

[sinks.my_sink_6.librdkafka_options]
"security.protocol" = "sasl_ssl"
"sasl.mechanism" = "PLAIN"
"sasl.username" = "username"
"sasl.password" = "password"
"ssl.endpoint.identification.algorithm" = "none"
"ssl.key.location" = "path_to_jks.jks"
"ssl.key.password" = "pass2"
I got the following error:

ERROR vector::topology: Configuration error. error=Sink "my_sink_6": creating kafka producer failed: Client creation error: ssl.key.location failed: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib
Is there a way to change the SSL key type to JKS?
I generated a PEM from the JKS with this command:

keytool -exportcert -alias alias -keystore ca.jks -storepass dms@kafka -rfc -file certificate.pem

but I am still facing the same problem.
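One possible explanation, as far as I can tell: `keytool -exportcert` only writes a certificate, never a private key, so pointing `ssl.key.location` at that PEM would still fail — a certificate exported from a truststore would instead belong under librdkafka's `ssl.ca.location`. If a client private key really is needed, a hedged conversion route (aliases, file names, and passwords below are placeholders reusing the ones from my commands) is JKS → PKCS#12 → PEM:

```shell
# 1. Convert the JKS keystore to PKCS#12 (alias/paths/passwords are placeholders)
keytool -importkeystore \
  -srckeystore ca.jks -srcstoretype JKS -srcstorepass dms@kafka \
  -destkeystore ca.p12 -deststoretype PKCS12 -deststorepass dms@kafka

# 2. Extract the private key as PEM, for ssl.key.location
openssl pkcs12 -in ca.p12 -passin pass:dms@kafka \
  -nocerts -nodes -out key.pem

# 3. Extract the certificate(s) as PEM, for ssl.certificate.location / ssl.ca.location
openssl pkcs12 -in ca.p12 -passin pass:dms@kafka \
  -nokeys -out cert.pem
```

This assumes the JKS actually contains a private-key entry; if it is only a truststore of CA certificates, the `-exportcert` PEM should just be referenced as `ssl.ca.location` instead of `ssl.key.location`.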