I'm fairly new to working with Kafka and MSK in AWS. I'm using kafkajs to write from a Lambda function to an MSK cluster. My records are being written to the cluster successfully, but the client is also logging connection timeout errors to CloudWatch, and I'm wondering whether I could be doing something differently in my code to avoid those error logs.
This is my producer code:
const { Kafka } = require("kafkajs");

const client = new Kafka({
  clientId: "client-id",
  brokers: ["broker1:9092", "broker2:9092"], // example brokers used here
});

const producer = client.producer({
  idempotent: true
});

const record = {
  topic: "topic1",
  messages: [
    { value: JSON.stringify("message") }
  ]
};

await producer
  .connect()
  .then(() => producer.send(record))
  .then(() => producer.disconnect())
  .catch(err => {
    throw new Error(JSON.stringify(err));
  });
And here is an example of the error output:
{
  "level": "ERROR",
  "timestamp": "2022-12-05T20:44:06.637Z",
  "logger": "kafkajs",
  "message": "[Connection] Connection timeout",
  "broker": "[some-broker]:9092",
  "clientId": "[some-client-id]"
}
I'm not sure whether I just need to increase the connection timeout in the client config (sketched below) or whether I'm missing something in the initialization. Like I said, the records still make it into the cluster, but I'd like to clean up the logs so I don't see this error so often. Has anyone run into this and solved it? Or is this just normal to see when working with MSK and kafkajs?
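For reference, this is roughly what I mean by increasing the connection timeout: passing connectionTimeout (and requestTimeout) to the Kafka constructor. The values below are arbitrary guesses on my part, not something I've confirmed actually makes the log entries go away.

// Same client setup as above, but with explicit timeouts.
// 10000 / 30000 are placeholder values, not verified fixes.
const { Kafka } = require("kafkajs");

const client = new Kafka({
  clientId: "client-id",
  brokers: ["broker1:9092", "broker2:9092"], // example brokers
  connectionTimeout: 10000, // kafkajs default is 1000 ms; assuming a higher value might avoid the timeout logs
  requestTimeout: 30000,    // kafkajs default
});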