I have a Kafka instance running on an AWS EC2 machine, acting as a producer to my AWS MSK cluster. To write the data to an S3 bucket, I have created an AWS MSK Connector with the following configuration:
connector.class=io.confluent.connect.s3.S3SinkConnector
partition.duration.ms=86400000
s3.region=us-east-1
topics.dir=prod/raw/bettopics
flush.size=20000
schema.compatibility=NONE
s3.part.size=5242880
tasks.max=1
timezone=UTC
topics=MSKTutorialTopic
locale=en-US
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
storage.class=io.confluent.connect.s3.storage.S3Storage
path.format='dt'=YYYY-MM-dd
s3.bucket.name=pvs2-prod-msk
timestamp.extractor=Wallclock
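For reference, here is a minimal sketch (with a hypothetical timestamp, purely for illustration) of the S3 key prefix I would expect the TimeBasedPartitioner to produce from the `topics.dir` and `path.format` settings above, with `timestamp.extractor=Wallclock` taking the time of write rather than the record timestamp:

```python
from datetime import datetime, timezone

# Values taken from the connector config above.
topics_dir = "prod/raw/bettopics"
topic = "MSKTutorialTopic"

def partition_prefix(ts: datetime) -> str:
    # path.format='dt'=YYYY-MM-dd: 'dt'= is a literal, YYYY-MM-dd is
    # the (UTC) date of the wallclock timestamp.
    encoded = f"dt={ts:%Y-%m-%d}"
    return f"{topics_dir}/{topic}/{encoded}"

# Hypothetical timestamp for illustration only.
print(partition_prefix(datetime(2023, 1, 15, tzinfo=timezone.utc)))
# prod/raw/bettopics/MSKTutorialTopic/dt=2023-01-15
```

So I would expect objects to appear under a `dt=YYYY-MM-dd` prefix per day, but nothing is written at all.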
I am able to generate files in the S3 bucket using the DefaultPartitioner partitioner class with the configuration below, but unable to generate any files using the configuration above:
connector.class=io.confluent.connect.s3.S3SinkConnector
s3.region=us-east-1
schema.compatibility=NONE
tasks.max=1
topics=MSKTutorialTopic
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
storage.class=io.confluent.connect.s3.storage.S3Storage
s3.bucket.name=pvs2-prod-msk
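With this working configuration, objects land under a per-Kafka-partition prefix. A sketch of the prefix I see (assuming `topics.dir` falls back to its default of `topics`, since it is not set here):

```python
topic = "MSKTutorialTopic"

def default_partition_prefix(kafka_partition: int) -> str:
    # DefaultPartitioner groups objects by Kafka partition number;
    # "topics" is the default topics.dir when none is configured.
    return f"topics/{topic}/partition={kafka_partition}"

print(default_partition_prefix(0))
# topics/MSKTutorialTopic/partition=0
```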
Am I missing any MSK Connector configuration details, or have I entered anything incorrectly? Any help would be much appreciated, thanks!