
I have Kafka installed on localhost, and it is running:

● kafka.service - Apache Kafka Server
     Loaded: loaded (/etc/systemd/system/kafka.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-06-11 03:36:36 EEST; 23min ago
       Docs: http://kafka.apache.org/documentation.html
   Main PID: 103920 (java)
      Tasks: 71 (limit: 9353)
     Memory: 367.0M
     CGroup: /system.slice/kafka.service
             └─103920 /usr/lib/jvm/java-11-openjdk-amd64/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHe>

июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,214] INFO [/config/changes-event-process-thread]: Starting (kafka>
июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,292] INFO [SocketServer brokerId=0] Starting socket server accept>
июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,300] INFO [SocketServer brokerId=0] Started data-plane acceptor a>
июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,300] INFO [SocketServer brokerId=0] Started socket server accepto>
июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,307] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.App>
июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,307] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.comm>
июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,307] INFO Kafka startTimeMs: 1623371801301 (org.apache.kafka.comm>
июн 11 03:36:41 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:36:41,317] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
июн 11 03:57:24 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:57:24,797] INFO Creating topic PairsUpdated with configuration {} and initial>
июн 11 03:57:25 dmytruk kafka-server-start.sh[103920]: [2021-06-11 03:57:25,022] INFO [KafkaApi-0] Auto creation of topic PairsUpdated with 1 parti>

I am using kafka-python as a client. When I try to send a message, I get the following error.

main.py

import json
from kafka import KafkaProducer

def send_kafka(topic: str, data: dict):
    kafka_producer = KafkaProducer(
        bootstrap_servers=["localhost:9092"],
        value_serializer=lambda m: json.dumps(m).encode("utf-8"),
    )
    kafka_producer.send("PairsUpdated", b"")
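(A side note, unrelated to the metadata timeout: the producer above configures a JSON `value_serializer` but passes raw bytes to `send`. Even once the broker issue is fixed, that call would fail, because the serializer is applied to every value. A minimal sketch of that behavior, using only the serializer lambda from the question:)

```python
import json

# The value_serializer configured on the producer above:
serialize = lambda m: json.dumps(m).encode("utf-8")

# A dict serializes fine:
print(serialize({"pair": "BTC/USDT"}))  # b'{"pair": "BTC/USDT"}'

# Raw bytes are not JSON-serializable, so send("PairsUpdated", b"")
# would raise once the value reaches the serializer:
try:
    serialize(b"")
except TypeError as e:
    print(type(e).__name__)  # TypeError
```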

kafka.log

DEBUG:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=localhost:9092 <connected> [IPv4 ('127.0.0.1', 9092)]> Request 68: MetadataRequest_v1(topics=['PairsUpdated'])
DEBUG:kafka.protocol.parser:Received correlation id: 68
DEBUG:kafka.protocol.parser:Processing response MetadataResponse_v1
DEBUG:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=localhost:9092 <connected> [IPv4 ('127.0.0.1', 9092)]> Response 68 (3.993988037109375 ms): MetadataResponse_v1(brokers=[], controller_id=-1, topics=[(error_code=5, topic='PairsUpdated', is_internal=False, partitions=[])])
DEBUG:kafka.producer.kafka:_wait_on_metadata woke after 6.909250736236572 secs.
DEBUG:kafka.producer.kafka:Requesting metadata update for topic PairsUpdated
DEBUG:kafka.client:Sending metadata request MetadataRequest_v1(topics=['PairsUpdated']) to node bootstrap-0
DEBUG:kafka.protocol.parser:Sending request MetadataRequest_v1(topics=['PairsUpdated'])

python.log

Internal Server Error: /api/v1/pairs/
Traceback (most recent call last):
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
    response = get_response(request)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
    return view_func(*args, **kwargs)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/rest_framework/viewsets.py", line 125, in view
    return self.dispatch(request, *args, **kwargs)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch
    response = self.handle_exception(exc)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception
    self.raise_uncaught_exception(exc)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
    raise exc
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch
    response = handler(request, *args, **kwargs)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/backoffice/controllers.py", line 78, in create
    send_kafka("PairsUpdated", b"")
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/backoffice/main.py", line 33, in send_kafka
    kafka_producer.send("PairsUpdated", b"")
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/kafka/producer/kafka.py", line 576, in send
    self._wait_on_metadata(topic, self.config['max_block_ms'] / 1000.0)
  File "/home/m0nte-cr1st0/work_projects/trading_bot/backoffice/venv/lib/python3.8/site-packages/kafka/producer/kafka.py", line 702, in _wait_on_metadata
    raise Errors.KafkaTimeoutError(
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs.

telnet

>>> telnet 127.0.0.1 9092
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
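(The same reachability check that telnet performs can be scripted; `port_open` below is a hypothetical helper, not part of kafka-python, that just attempts a TCP connection:)

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("127.0.0.1", 9092))  # True while the broker is listening
```

Note that a successful TCP connect (here, or via telnet) only proves the port is open; the client can still time out fetching metadata if the broker itself is unhealthy.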

Ubuntu==20.04

Python==3.8.5

kafka-python==2.0.2

kafka.service

[Unit]
Description=Apache Kafka Server
Documentation=http://kafka.apache.org/documentation.html
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
Environment="JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64"
ExecStart=/usr/local/kafka-server/bin/kafka-server-start.sh /usr/local/kafka-server/config/server.properties
ExecStop=/usr/local/kafka-server/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
OneCricketeer
Dima
  • take a look at [this](https://stackoverflow.com/a/62201617). Have you tried increasing Kafka memory? – Achyut Vyas Jun 11 '21 at 05:50
  • `KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"` ? @AchyutVyas – Dima Jun 11 '21 at 11:19
  • Yes @Dima, I have the same Kafka configuration as yours (`Environment="KAFKA_HEAP_OPTS=-Xmx1G -Xms1G"`), but my Kafka service is consuming around 1.6G of memory. I'm confused by the memory consumption of your Kafka service; can you post the Kafka service file in the question? – Achyut Vyas Jun 11 '21 at 14:40
  • Where can I view this file? – Dima Jun 11 '21 at 15:29
  • See the second line: the Kafka service file location is **/etc/systemd/system/kafka.service** – Achyut Vyas Jun 11 '21 at 15:31
  • @AchyutVyas I added `kafka.service` file to question – Dima Jun 11 '21 at 15:36
  • 1
    Can you add & change this configuration [`Environmet="KAFKA_HEAP_OPTS=-Xmx1G -Xms1G"`] in service? – Achyut Vyas Jun 11 '21 at 15:40
  • @AchyutVyas Hm, that helped me :) – Dima Jun 11 '21 at 15:47
  • One more question: have you tried pushing records in topics using **console-producer** when memory was low? – Achyut Vyas Jun 11 '21 at 16:23
  • 1
    console-producer is also a JVM process that allocates about somewhere between 128Mb and 512Mb for itself. Stick to `kafkacat` or your Python script if you're memory bound by running Java processes – OneCricketeer Jun 11 '21 at 17:56
  • Seems that there could be a problem at the broker side: `MetadataResponse_v1(brokers=[], controller_id=-1, topics=[(error_code=5, topic='PairsUpdated', is_internal=False, partitions=[])])` Error code 5 is for LEADER_NOT_AVAILABLE – Ignacio Acuña Frías Jun 13 '21 at 16:22
  • Could you provide more of kafka.log? None of those requests in the logs are related to the Python producer's request; they are Kafka-internal. Also, could you add debug logging on your client side? See (https://github.com/dpkp/kafka-python/issues/230) – Ignacio Acuña Frías Jun 13 '21 at 16:34
  • Wondering if a Fetch Metadata Request failing to connect to the cluster gives the timeout – Ignacio Acuña Frías Jun 13 '21 at 16:44

1 Answer

Looking at the Kafka service status, it's clear that your Kafka service is running low on memory (367.0 M).


You can verify the reason behind the timeout in the kafka-python documentation for `KafkaProducer.send`:

Raises:
KafkaTimeoutError – if unable to fetch topic metadata, or unable to obtain memory buffer prior to configured max_block_ms


You can provide memory externally by updating the Kafka service file:

Environment="KAFKA_HEAP_OPTS=-Xmx1G -Xms1G"

and finally restart the Kafka service.
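(Concretely, the change goes in the `[Service]` section of `/etc/systemd/system/kafka.service` shown in the question — paths as in the original unit file:)

```ini
[Service]
Type=simple
Environment="JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64"
# Explicit heap bounds for the broker JVM:
Environment="KAFKA_HEAP_OPTS=-Xmx1G -Xms1G"
ExecStart=/usr/local/kafka-server/bin/kafka-server-start.sh /usr/local/kafka-server/config/server.properties
ExecStop=/usr/local/kafka-server/bin/kafka-server-stop.sh
Restart=on-abnormal
```

After editing, run `sudo systemctl daemon-reload && sudo systemctl restart kafka` so systemd picks up the new environment.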

OneCricketeer
Achyut Vyas
  • Note: The default is `-Xmx1G -Xms1G` already – OneCricketeer Jun 11 '21 at 17:54
  • @OneCricketeer yes, the default value is `-Xmx1G -Xms1G`, but what might be the possible reasons behind low memory? (And is 367 M really very low for a Kafka service?) – Achyut Vyas Jun 12 '21 at 05:07
  • 3
    Maybe 367Mb is the used memory seen by the OS, not the actual JVM heap boundary. The TimeoutError is on the client side, not the broker memory, by the way. – OneCricketeer Jun 12 '21 at 14:24