I am trying to export data from Kafka to an Oracle DB. I've searched related questions and the web, but I couldn't work out whether we need a platform (Confluent, etc.) or not. I've read the link below, but it's not clear enough.
https://docs.confluent.io/3.2.2/connect/connect-jdbc/docs/sink_connector.html
So, what do we actually need to export the data without a 3rd-party platform?
Thanks in advance.

- I don't know Kafka. But, if it is a one-time job and you can export data from Kafka to a CSV file, then you could "import" it into Oracle using SQL*Loader or by having the CSV file as an external table. No 3rd party involved. – Littlefoot Dec 30 '19 at 13:15
- @Littlefoot no, we will be using it constantly, and there will be huge data – aspilond Dec 30 '19 at 13:19
- There is a series of blog posts covering Kafka Connect usage in more detail; try [Simplest Useful Kafka Connect Data Pipeline](https://www.confluent.io/blog/simplest-useful-kafka-connect-data-pipeline-world-thereabouts-part-1/) or, for more depth, [Kafka Connect Deep Dive – JDBC Source Connector](https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/), and [here in this git](https://github.com/confluentinc/demo-scene/tree/master/connect-jdbc) you can also find an example Oracle config you can play with. – matanz Jan 01 '20 at 22:37
1 Answer
It's not clear what you mean by "third-party" here.
What you linked to is Kafka Connect, which is Apache 2.0 Licensed and open source.
Kafka Connect is a plugin ecosystem: you install connectors individually, written by anyone, or write your own, just like any other Java dependency (i.e., a third party).
The JDBC connector just happens to be maintained by Confluent, and you can use the Confluent Hub CLI to install it within any Kafka Connect distribution (or use Kafka Connect Docker images from Confluent).
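As a rough sketch, a JDBC sink connector configuration targeting Oracle, submitted to the Kafka Connect REST API, might look like the following (the connector name, topic, host, and credentials are illustrative placeholders, not values from the question):

```json
{
  "name": "oracle-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "connection.url": "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1",
    "connection.user": "app_user",
    "connection.password": "app_password",
    "insert.mode": "insert",
    "auto.create": "true"
  }
}
```

With `auto.create` enabled, the connector attempts to create the target table from the record schema; in production you would typically pre-create the table and tune `insert.mode` (e.g. `upsert`) to fit your keys.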
Alternatively, you can use Apache Spark, Flink, NiFi, or many other Kafka consumer libraries to read the data and then start an Oracle transaction per record batch.
Or you can explore non-JVM Kafka libraries as well and use a language you're more familiar with for the Oracle operations.
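For instance, a plain-consumer pipeline in Python can be sketched as below. The consumer and connection setup is shown only in comments because it assumes third-party packages (`kafka-python`, `python-oracledb`) and a running broker and database; the table and topic names are illustrative:

```python
def build_insert(table, record):
    """Build a parameterized INSERT statement for one record (a dict).

    Columns are sorted so the statement text is stable across records
    with the same schema.
    """
    cols = sorted(record.keys())
    placeholders = ", ".join(f":{c}" for c in cols)
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"


def write_batch(conn, table, records):
    """Write a batch of records inside a single Oracle transaction."""
    with conn.cursor() as cur:
        for rec in records:
            cur.execute(build_insert(table, rec), rec)
    conn.commit()  # one commit per batch, as described above


# Hypothetical wiring (requires kafka-python and python-oracledb):
# import json
# import oracledb
# from kafka import KafkaConsumer
#
# consumer = KafkaConsumer("orders", bootstrap_servers="localhost:9092",
#                          value_deserializer=lambda v: json.loads(v))
# conn = oracledb.connect(user="app", password="...", dsn="db-host/ORCLPDB1")
# for msg in consumer:
#     write_batch(conn, "ORDERS", [msg.value])
```

Batching more than one record per commit (e.g. via the consumer's `poll()`) amortizes transaction overhead, which matters for the "huge data" volume mentioned in the comments.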

- I am not sure how many will write their own connectors. We are on a project opting for Spark–Kafka integration. – thebluephantom Dec 31 '19 at 08:00
- Not everyone has a Spark cluster. Why should writing a connector be any harder than writing Spark code? You get a list of records and you map them into a JDBC request, and that's really it; it doesn't have to be as generic as Confluent's is @theblue – OneCricketeer Dec 31 '19 at 12:57