
I am trying to set up a Kafka sink connector for writing to an Exasol database.

I have followed this article: https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/

Since I could not find a dedicated sink connector class for Exasol, I tried to use the jar from https://github.com/exasol/kafka-connect-jdbc-exasol/tree/master/kafka-connect-exasol/jars (I copied this jar into $confluent_dir/share/java/kafka-connect-jdbc) and set the dialect class inside it as the connector class name in my config JSON file below.
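
Roughly, these are the installation steps I followed (the paths reflect my setup; the exact jar names are whatever ships in that directory):

# copy the Exasol jars next to the Confluent JDBC connector jars
cp kafka-connect-exasol/jars/*.jar $confluent_dir/share/java/kafka-connect-jdbc/

# restart Connect so the new jars are picked up
./bin/confluent stop connect
./bin/confluent start connect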

I have created a JSON configuration file as below:

{
        "name": "jdbc_sink_mysql_dev_02",
        "config": {
                "_comment": "The JDBC connector class. Don't change this if you want to use the JDBC Source.",
                "connector.class": "com.exasol.connect.jdbc.dailect.ExasolDatabaseDialect",

                "_comment": "How to serialise the value of keys - here use the Confluent Avro serialiser. Note that the JDBC Source Connector always returns null for the key ",
                "key.converter": "io.confluent.connect.avro.AvroConverter",

                "_comment": "Since we're using Avro serialisation, we need to specify the Confluent schema registry at which the created schema is to be stored. NB Schema Registry and Avro serialiser are both part of Confluent Platform.",
                "key.converter.schema.registry.url": "http://localhost:8081",

                "_comment": "As above, but for the value of the message. Note that these key/value serialisation settings can be set globally for Connect and thus omitted for individual connector configs to make them shorter and clearer",
                "value.converter": "io.confluent.connect.avro.AvroConverter",
                "value.converter.schema.registry.url": "http://localhost:8081",


                "_comment": " --- JDBC-specific configuration below here  --- ",
                "_comment": "JDBC connection URL. This will vary by RDBMS. Consult your manufacturer's handbook for more information",
                "connection.url": "jdbc:exa:<myhost>:<myport> <myuser>/<mypassword>",

                "_comment": "Which table(s) to include",
                "table.whitelist": "<my_table_name>",

                "_comment": "Pull all rows based on an timestamp column. You can also do bulk or incrementing column-based extracts. For more information, see http://docs.confluent.io/current/connect/connect-jdbc/docs/source_config_options.html#mode",
                "mode": "timestamp",

                "_comment": "Which column has the timestamp value to use?  ",
                "timestamp.column.name": "update_ts",

                "_comment": "If the column is not defined as NOT NULL, tell the connector to ignore this  ",
                "validate.non.null": "false",

                "_comment": "The Kafka topic will be made up of this prefix, plus the table name  ",
                "topic.prefix": "mysql-"
        }
}

I am trying to load this connector with the command below:

./bin/confluent load jdbc_sink_mysql_dev_02 -d <my_configuration_json_file_path>

P.S. My Confluent version is 5.1.0.
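
For reference, one way to verify whether the connector actually loaded and is running is the Kafka Connect REST API (a sketch, assuming the default Connect REST port 8083):

# list all loaded connectors
curl -s http://localhost:8083/connectors

# show the state of this connector and its tasks, including any task errors
curl -s http://localhost:8083/connectors/jdbc_sink_mysql_dev_02/status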

In a similar fashion I have created a MySQL source connector for reading data from MySQL, and it is working well. My use case demands writing that data to the Exasol database using a sink connector.

Although I am not getting any exceptions, Kafka is not reading any messages.
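
To confirm that records from the source side are actually on the topic, they can be inspected with the Avro console consumer (a sketch, assuming the default ports and that the topic name is the topic.prefix plus the table name):

./bin/kafka-avro-console-consumer \
  --bootstrap-server localhost:9092 \
  --property schema.registry.url=http://localhost:8081 \
  --topic mysql-<my_table_name> \
  --from-beginning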

Any pointers or help to configure such a sink connector to write to the Exasol database would be appreciated.
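
For comparison, based on the generic Confluent JDBC sink documentation and the Exasol dialect repository, I would expect a sink configuration to look roughly like the sketch below (the connector class, insert.mode, and placeholder values are my assumptions, not a verified working config):

{
        "name": "jdbc_sink_exasol_dev_01",
        "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",

                "key.converter": "io.confluent.connect.avro.AvroConverter",
                "key.converter.schema.registry.url": "http://localhost:8081",
                "value.converter": "io.confluent.connect.avro.AvroConverter",
                "value.converter.schema.registry.url": "http://localhost:8081",

                "connection.url": "jdbc:exa:<myhost>:<myport>",
                "connection.user": "<myuser>",
                "connection.password": "<mypassword>",

                "topics": "mysql-<my_table_name>",
                "insert.mode": "insert",
                "auto.create": "true"
        }
}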
