As far as I understand, the workload that needs to be executed by KSQL is stored in a meta topic (the Command Topic), to which all KSQL Server nodes are subscribed as Kafka consumers. Incoming new workload, in the form of a query or, more granularly, the individual tasks of a complex query, is written into that topic, and all the consumers are notified. But how do the KSQL Servers elect the "worker" for a specific task?
I found the following KSQL Server Elastic Scaling in Kubernetes SO answer, as well as this Confluent deep dive on the topic, but both imply that all KSQL Servers take the task, not just one of them. So how does KSQL ensure the same data is not processed twice, from both a data-consistency and a load-efficiency perspective?
My guess would be that all of the KSQL server nodes are in the same Kafka consumer group, so the same Kafka message is not interpreted twice, while each KSQL server node is responsible for one partition of that topic, which leads to an effective distribution of load. Is my assumption right? Is this the same way multiple Kafka Connect instances behave?
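To make my assumption concrete, here is a toy sketch of the mental model I have in mind. This is not KSQL code; the function and the node names are invented for illustration. It mimics a round-robin partition assignment, in the spirit of Kafka's consumer-group rebalancing, where each partition is owned by exactly one group member, so no message is handled twice within the group:

```python
def assign_partitions(partitions, members):
    """Round-robin assignment, loosely modeled on Kafka's
    RoundRobinAssignor: every partition goes to exactly one member,
    so no record is processed twice within the consumer group."""
    members = sorted(members)
    assignment = {m: [] for m in members}
    for i, p in enumerate(sorted(partitions)):
        assignment[members[i % len(members)]].append(p)
    return assignment

# Three hypothetical KSQL server nodes consuming a 6-partition topic:
print(assign_partitions(range(6), ["ksql-0", "ksql-1", "ksql-2"]))
# → {'ksql-0': [0, 3], 'ksql-1': [1, 4], 'ksql-2': [2, 5]}
```

If this model is right, adding a fourth node would trigger a rebalance and shrink each node's share, which is the elasticity behavior I would expect.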