
I am using kafka-python (`pip install kafka-python`) in a Flask application to send messages to a Kafka cluster (running version 0.11). The application is deployed to AWS Elastic Beanstalk via Docker. However, I see no messages reaching Kafka (verified with a console consumer).

I don't know much about Docker except how to connect to a running container, so that's what I did: I logged into the Beanstalk instance, connected to the Docker container, and ran the following commands in Python 3.

>>> from kafka import KafkaProducer
>>> p = KafkaProducer(bootstrap_servers='my_kafka_servers:9092', compression_type='gzip')
>>> r = p.send(topic='my_kafka_topic', value='message from docker', key='docker1')
>>> r.succeeded()
False
>>> p.flush()
>>> r.succeeded()
False
>>> p.close()
>>> r.succeeded()
False

All this while, I had a console consumer running, listening to that topic, but I saw no messages come through.
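(For reference, one way I could have surfaced the actual send error, rather than polling `succeeded()`, is to call `get()` on the returned future, which blocks until delivery and raises the underlying `KafkaError` on failure. The serializer setup below is my own sketch, not what I originally ran:)

```python
import json

# Sketch only: serializers so plain strings/dicts become bytes on the wire
# (kafka-python sends raw bytes; without serializers, non-bytes keys/values fail).
def serialize_key(key):
    return key.encode("utf-8")

def serialize_value(value):
    return json.dumps(value).encode("utf-8")

# Intended usage (needs a reachable broker; 'my_kafka_servers' as in the question):
# from kafka import KafkaProducer
# p = KafkaProducer(bootstrap_servers='my_kafka_servers:9092',
#                   compression_type='gzip',
#                   key_serializer=serialize_key,
#                   value_serializer=serialize_value)
# future = p.send('my_kafka_topic', value={'msg': 'message from docker'}, key='docker1')
# metadata = future.get(timeout=10)  # raises a KafkaError with the real cause on failure
```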

I did the same exercise "outside" the Docker container (i.e., directly on the Beanstalk instance). I first installed kafka-python using pip, then ran the following in Python 3.

>>> from kafka import KafkaProducer
>>> p = KafkaProducer(bootstrap_servers='my_kafka_servers:9092', compression_type='gzip')
>>> r = p.send(topic='my_kafka_topic', value='message outside the docker', key='instance1')
>>> r.succeeded()
False
# waited a second or two
>>> r.succeeded()
True

This time, I did see the message come through on the console consumer.

So, my questions are:

  1. Why is Docker blocking the Kafka producer's sends?
  2. How can I fix this?

Is this something for which I need to post the Docker configuration? I didn't set it up, so I don't have that info.

EDIT: I found some Docker-specific configuration in the project:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<NAME>:<VERSION>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Logging": "/var/eb_log"
}
Nik

2 Answers


You will have to bind your Docker container's ports to the host machine. This can be done with `docker run`, binding to the loopback interface (reachable only from the host itself):

docker run --rm -p 127.0.0.1:2181:2181 -p 127.0.0.1:9092:9092 -p 127.0.0.1:8081:8081 ...

Alternatively, you can bind to all interfaces:

docker run --rm -p 0.0.0.0:2181:2181 -p 0.0.0.0:9092:9092 -p 0.0.0.0:8081:8081 ...

If you want to make the container routable on your network, you can bind to the host's private IP:

docker run --rm -p <private-IP>:2181:2181 -p <private-IP>:9092:9092 -p <private-IP>:8081:8081 ...

Or, finally, you can skip containerizing the network interface altogether with host networking (the `-p` flags are ignored in this mode, so they can be dropped):

docker run --rm --net host ...

Tim Givois
  • Could you take a look at the edits I posted to the question? Does that config help in understanding why this issue arises? Is there a way I can provide the bindings using this file? – Nik Feb 15 '18 at 23:08
  • Sure, I will edit the answer and add the config you should add to your docker to open the ports :) – Tim Givois Feb 15 '18 at 23:11

If you want to bind ports on Elastic Beanstalk with Docker, you'll need to use version 2 of the Dockerrun.aws.json format, which only works with multi-container environments. I am having the same issue as above and am curious whether the fix above works.
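For illustration, a version 2 Dockerrun.aws.json with explicit port mappings would look roughly like this (the container name and memory value are placeholders, and this sketch is untested):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "flask-app",
      "image": "<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<NAME>:<VERSION>",
      "essential": true,
      "memory": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 },
        { "hostPort": 9092, "containerPort": 9092 }
      ]
    }
  ]
}
```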

islandsound