
I am trying to test the Kafka consumer by consuming data from a topic on a remote Kafka cluster. I get the following error when I use kafka-console-consumer.sh:

 ERROR Error processing message, terminating consumer process:  (kafka.tools.ConsoleConsumer$)
java.lang.IllegalStateException: No entry found for connection 2147475658
    at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
    at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:885)
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:276)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:548)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:655)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:635)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:575)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:389)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
    at kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:436)
    at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
    at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:76)
    at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
    at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Processed a total of 0 messages

Here is the command that I use:

./bin/kafka-console-consumer.sh --bootstrap-server SSL://{IP}:{PORT},SSL://{IP}:{PORT},SSL://{IP}:{PORT} --consumer.config ./config/consumer.properties --topic MYTOPIC --group MYGROUP

Here is the ./config/consumer.properties file:

bootstrap.servers=SSL://{IP}:{PORT},SSL://{IP}:{PORT},SSL://{IP}:{PORT}

# consumer group id
group.id=MYGROUP

# What to do when there is no initial offset in Kafka or if the current
# offset does not exist any more on the server: latest, earliest, none
auto.offset.reset=earliest

#### Security
security.protocol=SSL
ssl.key.password=test1234
ssl.keystore.location=/opt/kafka/config/certs/keystore.jks
ssl.keystore.password=test1234
ssl.truststore.location=/opt/kafka/config/certs/truststore.jks
ssl.truststore.password=test1234

Do you have any idea what the problem is?

Nooshin

5 Answers


I have found the problem. In the end it was a DNS issue. I was reaching the Kafka brokers by their IP addresses, but the brokers reply with their DNS names. After configuring the DNS names on the consumer side, it started working.
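If you cannot change anything on the broker side, one common workaround is to map the brokers' advertised host names to their IP addresses locally. A minimal sketch of /etc/hosts entries on the consumer machine (the host names and IPs below are hypothetical placeholders, not from the original question):

```
# /etc/hosts on the consumer machine (hypothetical names and addresses)
10.0.0.11  kafka-broker-1.example.internal
10.0.0.12  kafka-broker-2.example.internal
10.0.0.13  kafka-broker-3.example.internal
```

With SSL, using the advertised host names in bootstrap.servers as well helps hostname verification against the broker certificates succeed.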

Nooshin
    Could you explain more about how you did this? I'm having a similar issue. – Abigail Fox May 09 '19 at 13:13
    You can set up local DNS resolution in the hosts file. For example, on macOS I would add a line in /etc/hosts: "1.2.3.4 your_kafka_host_name". It is the same for other operating systems. – Alan42 Jun 20 '19 at 07:50

I had this problem (with consumers and producers) when running Kafka and Zookeeper as Docker containers.

The solution was to set advertised.listeners in the config/server.properties file of the Kafka brokers, so that it contains the IP address of the container, e.g.

advertised.listeners=PLAINTEXT://172.15.0.8:9092

See https://github.com/maxant/kafkaplayground/blob/master/start-kafka.sh for an example of a script used to start Kafka inside the container after setting up the properties file correctly.

Ant Kutschera
    You can do this if you have access to the Kafka cluster itself, but if you are only a consumer or producer and do not have access to the cluster, it's better to set it in your DNS. – Nooshin Mar 04 '19 at 07:42
    Similarly, you can use the container name instead of the IP address, then put an alias to the container name in your hosts file. This has the same effect but won't break when the container gets a different IP address. – ThetaSinner Apr 05 '19 at 09:16

It seems the listeners property is not configured in the Kafka cluster's server.properties.

On the remote Kafka cluster, this property should be uncommented and set with the proper host name:

listeners=PLAINTEXT://0.0.0.0:9092
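For a broker that must be reachable from other machines, listeners and advertised.listeners usually go together: the former is the bind address, the latter is what clients are told to connect to. A hedged server.properties sketch (the host name below is a hypothetical example):

```
# server.properties sketch; kafka1.example.com is a placeholder
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka1.example.com:9092
```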
Nishu Tayal
    The Kafka cluster was OK; people could use it from the same machine. The problem was that I couldn't get the response back. – Nooshin Feb 01 '19 at 12:07

In my case I was receiving this error while trying to connect to my Kafka container; I had to pass the following:

-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092

-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
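For reference, the same two settings in a docker-compose sketch (the image name and port mapping are assumptions, following the common KAFKA_* environment-variable convention used by images such as wurstmeister/kafka):

```yaml
# docker-compose.yml sketch; image choice is a hypothetical example
services:
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
```

Note that advertising localhost only works for clients on the same host as the container.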

Hope it helps someone.

yeralin

Are you sure the remote Kafka is running? I would suggest running nmap -p PORT HOST to verify the port is open (unless configured differently, the port should be 9092). If that is OK, then you can use kafkacat, which makes things easier. Create a consumer by running:

kafkacat -b HOST:PORT -t YOUR_TOPIC -C -o beginning

or create a producer by running:

kafkacat -b HOST:PORT -t YOUR_TOPIC -P

Rodrigo Loza