The file need not be in HDFS first; it can be taken from an edge node or a local machine. Kudu is similar to HBase: it is a real-time store that supports key-indexed record lookup and mutation, but it can't store a text file directly the way HDFS can. For Kudu to store the contents of a text file, the file needs to be parsed and tokenised. For that, you need Spark (or the Java API) along with NiFi (or Apache Gobblin) to perform the processing and then store the result in a Kudu table; a minimal sketch of that approach is below.
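For illustration, here is a minimal Spark sketch of that first approach, assuming Spark 2.x with the kudu-spark2 connector on the classpath. The master address (kudu-master:7051), input path, table name (lines_table), and schema are all hypothetical; the Kudu table is assumed to already exist.

```scala
import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.SparkSession

object TextFileToKudu {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TextFileToKudu").getOrCreate()
    import spark.implicits._

    // Parse/tokenise the raw text: here each line simply becomes one row,
    // keyed by its line number, since Kudu requires a primary key.
    val df = spark.read.textFile("hdfs:///user/me/input.txt") // or a file:// path on the edge node
      .rdd
      .zipWithIndex()
      .map { case (line, idx) => (idx, line) }
      .toDF("id", "line")

    // KuduContext exposes simple row operations (insertRows, upsertRows, ...)
    // against an existing Kudu table. Note that tables created through Impala
    // are typically named "impala::db.table" on the Kudu side.
    val kuduContext = new KuduContext("kudu-master:7051", spark.sparkContext)
    kuduContext.insertRows(df, "lines_table")

    spark.stop()
  }
}
```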
Or
You can integrate it with Impala, allowing you to insert, query, update, and delete data in Kudu tablets using Impala's SQL syntax, as an alternative to building a custom Kudu application with the Kudu APIs. Below are the steps:
- Import the file into HDFS.
- Create an external Impala table.
- Insert the data into the table (if the external table's LOCATION already points at the file's directory, this step is not needed).
- Create a Kudu table using the keywords STORED AS KUDU and AS SELECT to copy the contents from the Impala table into Kudu (a sketch of the full flow follows this list).
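Putting those steps together, here is a hedged sketch that drives the same Impala statements from JVM code over JDBC (you could equally type the SQL into impala-shell). The host, port, database, table names, delimiter, and HDFS path are all assumptions, and the Hive JDBC driver is assumed to be on the classpath; 21050 is Impala's default HiveServer2-compatible JDBC port.

```scala
import java.sql.DriverManager

object ImpalaToKudu {
  def main(args: Array[String]): Unit = {
    // Hypothetical Impala daemon; an unsecured cluster is assumed (auth=noSasl).
    val conn = DriverManager.getConnection("jdbc:hive2://impala-host:21050/default;auth=noSasl")
    val stmt = conn.createStatement()
    try {
      // Step 2: external table over the file already copied to HDFS
      // (step 1 would be: hdfs dfs -put input.txt /user/me/staging/).
      stmt.execute(
        """CREATE EXTERNAL TABLE IF NOT EXISTS staging_text (id BIGINT, line STRING)
          |ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
          |LOCATION '/user/me/staging'""".stripMargin)

      // Final step: create the Kudu table and copy the rows in a single
      // CREATE TABLE ... STORED AS KUDU ... AS SELECT statement.
      stmt.execute(
        """CREATE TABLE my_kudu_table
          |PRIMARY KEY (id)
          |PARTITION BY HASH (id) PARTITIONS 4
          |STORED AS KUDU
          |AS SELECT id, line FROM staging_text""".stripMargin)
    } finally {
      stmt.close()
      conn.close()
    }
  }
}
```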
You can refer to this link for more info: https://kudu.apache.org/docs/quickstart.html