
I'm using WebSockets as producers that are connected through Kafka (using the confluent_kafka library) to a PostgreSQL database. I have 4 parallel WebSockets running in different scripts, each connected to a different topic that outputs to a different table in the database.

It turns out that one of those WebSockets is quite demanding and can return 300 entries within a second, or at worst 10,000 entries within a few seconds. After a while, I get this error:

ERROR: Local: Queue Full

I've tried adding `linger.ms=100` to `confluent-7.3.1/etc/kafka/producer.properties`, but I still get the same issue. What would be a good approach to solving this problem? Should I raise the linger value even higher, or would that incur some downside to my pipeline? Are there any other parameters I should consider?

I'm using a local Confluent setup (for now), and I'm using JDBC connectors to sink the topic data to the database. Is this problem just an artifact of a local setup, and would migrating to a more production-level setup solve it?
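For context, each topic is sunk to Postgres with a standard JDBC sink connector. The config looks roughly like this (the connector name, topic, and connection details are placeholders from my local setup):

```json
{
  "name": "demanding-topic-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "demanding_topic",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "connection.user": "postgres",
    "connection.password": "********",
    "insert.mode": "insert",
    "auto.create": "true"
  }
}
```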

I'll gladly share specific code or any parameters if necessary. Since there are so many things to tweak, I'm not really sure what would be helpful.
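To give an idea, here is a simplified sketch of the demanding producer script (the WebSocket URL, topic name, and message parsing are placeholders; the real script has more error handling):

```python
import json

import websocket  # websocket-client
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_message(ws, message):
    # A single WebSocket message can contain hundreds of entries at once.
    for entry in json.loads(message):
        producer.produce("demanding_topic", value=json.dumps(entry))

ws = websocket.WebSocketApp("wss://example.com/stream", on_message=on_message)
ws.run_forever()
```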

  • `producer.properties` isn't used by anything – OneCricketeer Jan 19 '23 at 19:22
  • Can you share your Python producer code? How frequently do you call `producer.flush()` to clear out the queue/buffer? Do you get a similar error when using `kafka-python` library instead of one based on a C library? – OneCricketeer Jan 19 '23 at 19:24

0 Answers