The Kafka connector starts up OK and reads the data stream from the topic; however, it doesn't write any data, and when I stop the connector I get an error.
I have ensured that the topics and logs directories are created on the HDFS filesystem.
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=rstest
#hdfs.url=adl:///
hdfs.url=hdfs://headnodehost
flush.size=3
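For reference, a fuller version of this standalone sink config might look like the sketch below. The converter settings, the Schema Registry URL, and the explicit directory paths are illustrative assumptions, not values from my actual setup (the writer is producing `.avro` temp files, so I assume Avro converters backed by a Schema Registry would be the usual pairing):

```
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=rstest
hdfs.url=hdfs://headnodehost
flush.size=3
# Explicit output directories (these are the connector defaults; placeholders here)
topics.dir=/topics
logs.dir=/logs
# Avro converters with a placeholder Schema Registry URL -- an assumption,
# not taken from my actual worker configuration
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
```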
Here is the output from the console:
[2019-05-08 12:08:20,963] INFO Opening record writer for: hdfs://headnodehost/topics//+tmp/rstest/partition=1/4ef1de96-bd7c-437c-b210-2ef6e62d3ead_tmp.avro (io.confluent.connect.hdfs.avro.AvroRecordWriterProvider:66)
^C[2019-05-08 12:08:32,911] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2019-05-08 12:08:32,913] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:211)
[2019-05-08 12:08:32,918] INFO Stopped http_8083@6c03fb16{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2019-05-08 12:08:32,958] INFO Stopped o.e.j.s.ServletContextHandler@4e31c3ec{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
[2019-05-08 12:08:32,960] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:222)
[2019-05-08 12:08:32,960] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:77)
[2019-05-08 12:08:32,960] INFO Stopping task hdfs-sink-0 (org.apache.kafka.connect.runtime.Worker:478)
[2019-05-08 12:08:33,113] ERROR Error discarding temp file hdfs://headnodehost/topics//+tmp/rstest/partition=1/4ef1de96-bd7c-437c-b210-2ef6e62d3ead_tmp.avro for rstest-1 partition=1 when closing TopicPartitionWriter: (io.confluent.connect.hdfs.TopicPartitionWriter:451)
org.apache.kafka.connect.errors.DataException: java.nio.channels.ClosedChannelException
    at io.confluent.connect.hdfs.avro.AvroRecordWriterProvider$1.close(AvroRecordWriterProvider.java:97)