I've been trying to use the python kafka library (kafka-python) for a bit now and can't get a producer to work.
After a bit of research I've found out that the messages my consumer receives carry an additional 5-byte header (one zero "magic" byte, then a 4-byte big-endian integer holding the schema-registry schema id), and I'm guessing the consuming side expects it as well. I've managed to get a consumer working by simply stripping those first 5 bytes.
Am I supposed to prepend a similar header when writing a producer?
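Something like this, I'm guessing (just a sketch; frame and schema_id are my own placeholder names, and I'm assuming the schema is already registered so its id is known):

import struct

def frame(avro_payload, schema_id):
    # 1 zero "magic" byte + 4-byte big-endian schema id + Avro-encoded bytes,
    # mirroring the 5 bytes my consumer strips off
    return struct.pack(">bI", 0, schema_id) + avro_payload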
Below is the exception that comes out:
[2016-09-14 13:32:48,684] ERROR Task hdfs-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
I'm using the latest stable releases of both kafka and kafka-python.
EDIT
Consumer
from kafka import KafkaConsumer
import avro.io
import avro.schema
import io
import requests
import struct

# To consume messages
consumer = KafkaConsumer('hadoop_00',
                         group_id='my_group',
                         bootstrap_servers=['hadoop-master:9092'])

for msg in consumer:
    value = bytearray(msg.value)
    # Byte 0 is the magic byte; bytes 1-4 are the big-endian schema id
    schema_id = struct.unpack(">L", value[1:5])[0]
    # Fetch the writer's schema from the schema registry
    response = requests.get("http://hadoop-master:8081/schemas/ids/" + str(schema_id))
    schema = avro.schema.parse(response.json()["schema"])
    # Decode the Avro payload, skipping the 5-byte header
    bytes_reader = io.BytesIO(value[5:])
    # bytes_reader = io.BytesIO(msg.value)  # fails without stripping the header
    decoder = avro.io.BinaryDecoder(bytes_reader)
    reader = avro.io.DatumReader(schema)
    temp = reader.read(decoder)
    print(temp)
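For reference, the registry's /schemas/ids/<id> endpoint returns the writer schema as an escaped JSON string, which is why I parse response.json()["schema"] above. Assuming f1.avsc defines a record with a single string field f1, the response looks roughly like:

{"schema": "{\"type\": \"record\", \"name\": \"f1\", \"fields\": [{\"name\": \"f1\", \"type\": \"string\"}]}"}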
Producer
from kafka import KafkaProducer
import avro.schema
import io
from avro.io import DatumWriter

producer = KafkaProducer(bootstrap_servers="hadoop-master")

# Kafka topic
topic = "hadoop_00"

# Path to f1.avsc avro schema
schema_path = "resources/f1.avsc"
schema = avro.schema.parse(open(schema_path).read())  # parsed, but not actually used below

for i in range(1, 11):
    # This sends plain JSON bytes: no Avro encoding, no magic byte, no schema id
    producer.send(topic, ('{"f1": "value_' + str(i) + '"}').encode('utf-8'))
producer.flush()
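And this is what I think the producer should be doing instead, putting the question above into code (only a sketch: SCHEMA_ID is a placeholder for whatever id the registry assigned to f1.avsc, and I'm reusing the avro DatumWriter/BinaryEncoder API from the consumer side):

from kafka import KafkaProducer
from avro.io import BinaryEncoder, DatumWriter
import avro.schema
import io
import struct

SCHEMA_ID = 1  # placeholder: the id the registry returned for f1.avsc
schema = avro.schema.parse(open("resources/f1.avsc").read())
writer = DatumWriter(schema)
producer = KafkaProducer(bootstrap_servers="hadoop-master")

for i in range(1, 11):
    buf = io.BytesIO()
    # 5-byte header: magic byte 0 + 4-byte big-endian schema id
    buf.write(struct.pack(">bI", 0, SCHEMA_ID))
    # Avro-encode the record right after the header
    writer.write({"f1": "value_" + str(i)}, BinaryEncoder(buf))
    producer.send("hadoop_00", buf.getvalue())
producer.flush()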