
I have been trying to run a consumer on my local machine that connects to a Kafka server running inside GCP.

Kafka and Zookeeper are running on the same GCP VM instance.

Step 1: Start Zookeeper

bin/zookeeper-server-start.sh config/zookeeper.properties

Step 2: Start Kafka

bin/kafka-server-start.sh config/server.properties

If I run a consumer inside the GCP VM instance it works fine:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
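From my local machine, the equivalent command points at the VM's external address instead (roughly the following, with a placeholder for the IP), and it does not work:

bin/kafka-console-consumer.sh --bootstrap-server <vm-external-ip>:9092 --topic test --from-beginning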

I verified the firewall rules, and from my local machine I can reach both the public IP and the port the Kafka server is running on.

I tested many options by changing Kafka's server.properties, for example:

advertised.host.name=public-ip

or

advertised.listeners=public-ip

I followed the answer on connecting-kafka-running-on-ec2-machine-from-my-local-machine, without success.


1 Answer

From the official documentation:

advertised.listeners

Listeners to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners it is not valid to advertise the 0.0.0.0 meta-address.
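Note that advertised.listeners expects one or more entries of the form listener-name://host:port, not a bare IP, which is likely why setting it to just public-ip as in the question did not work (advertised.host.name is also deprecated in newer Kafka versions). A minimal single-listener sketch, with a placeholder public IP:

advertised.listeners=PLAINTEXT://your.public.ip:9092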

After testing many different options, this solution worked for me:

Set up two listeners: an EXTERNAL one with the public IP (or a DNS name pointing to it), and an INTERNAL one with the private IP:

# Configure protocol map
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT

# Use plaintext for inter-broker communication
inter.broker.listener.name=INTERNAL

# Specify that Kafka listeners should bind to all local interfaces
listeners=INTERNAL://0.0.0.0:9027,EXTERNAL://0.0.0.0:9037

# Separately, specify the externally visible addresses (the ports must match the bound ports above)
advertised.listeners=INTERNAL://localhost:9027,EXTERNAL://kafkabroker-n.mydomain.com:9037
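With this configuration and a matching firewall rule, an external client connects through the EXTERNAL listener. For example, the console consumer from the question would be pointed at the advertised external address (replace the placeholder hostname and port with your own external IP or DNS name and port):

bin/kafka-console-consumer.sh --bootstrap-server kafkabroker-n.mydomain.com:9037 --topic test --from-beginning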

Explanation:

In many scenarios, such as when deploying on a public cloud like GCP or AWS, the addresses that the Kafka brokers advertise externally differ from the internal network interfaces that Kafka binds to.
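A quick way to check what the broker actually advertises is to request cluster metadata from the outside, for example with kafkacat (kcat) if you have it installed; the hostname and port below are the placeholders from the config above:

kafkacat -b kafkabroker-n.mydomain.com:9037 -L

The broker addresses returned in that metadata are exactly what the client will use for all subsequent connections, so they must be reachable from your local machine.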

Also remember to set up your firewall rule to expose the port of the EXTERNAL listener in order to connect to it from an external machine.
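On GCP, such a rule could be created with gcloud; this is only a sketch, assuming the VM carries a network tag named kafka, and the rule name, port, and source range are placeholders to adapt:

gcloud compute firewall-rules create allow-kafka-external --allow=tcp:9037 --target-tags=kafka --source-ranges=<your-client-ip>/32

Restricting --source-ranges to your own client IP, rather than 0.0.0.0/0, keeps the external listener from being open to everyone.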

Note: It's important to restrict access to authorized clients only; you can use network firewall rules for this. This guidance applies whether you use RFC 1918 (private) or public IP addresses; however, with public IP addresses it is even more important to secure your Kafka endpoint, because anyone can reach it.

Taken from Google solutions.

  • This blog also explains the concept: https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc – Robin Moffatt Jun 19 '20 at 08:28
  • when you say listeners=INTERNAL://0.0.0.0:9027,EXTERNAL://0.0.0.0:9037, am I supposed to replace the word INTERNAL with my internal IP and External with my external IP, for e.g. listeners = 10.153.0.3://0.0.0.0:9027, 31.83.170.127://0.0.0.0:9037 ? – Sajeed Sep 16 '20 at 13:54
  • No, just the IP addresses AFTER the INTERNAL and EXTERNAL words. In advertised.listeners=INTERNAL://localhost:9027,EXTERNAL://kafkabroker-n.mydomain.com:9037: localhost:9027 -> you will probably keep this as localhost unless you are using some DNS resolver, and you might need to change the port; kafkabroker-n.mydomain.com:9037 -> this will most likely be replaced with your own external IP or DNS name. – mesmacosta Sep 17 '20 at 14:00