The jaydebeapi executemany() method is not working when writing a big CSV file to a Hadoop table.
Can someone please give an example of writing CSV data to a Hive table?
"big csv file writing to hadoop"

It's unclear why you're trying to use JDBC for this: pushing a large CSV through per-row INSERT statements is about the slowest way to load data into Hadoop.
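If you do need to stay on jaydebeapi, the usual workaround is to stream the file in fixed-size chunks instead of handing every row to a single executemany() call. Below is a minimal sketch, assuming a HiveServer2 endpoint, a hypothetical three-column table my_table, and a local Hive JDBC jar; the host, credentials, jar path, and chunk size are all placeholders:

import csv

import jaydebeapi

# Placeholder connection details; adjust host, database, credentials,
# and the path to your Hive JDBC standalone jar
conn = jaydebeapi.connect(
    "org.apache.hive.jdbc.HiveDriver",
    "jdbc:hive2://example-host:10000/my_db",
    ["user", "password"],
    "/path/to/hive-jdbc-standalone.jar",
)
cursor = conn.cursor()

CHUNK_SIZE = 10_000  # tune to what the server tolerates

with open("file.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    chunk = []
    for row in reader:
        chunk.append(row)  # note: csv.reader yields all values as strings
        if len(chunk) >= CHUNK_SIZE:
            # Insert one bounded batch at a time instead of the whole file
            cursor.executemany("INSERT INTO my_table VALUES (?, ?, ?)", chunk)
            chunk = []
    if chunk:  # flush the final partial batch
        cursor.executemany("INSERT INTO my_table VALUES (?, ?, ?)", chunk)

cursor.close()
conn.close()

Even batched, JDBC inserts into Hive are slow, so for a big file it's better to skip JDBC entirely and let Spark write the data: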
pip install pyspark
from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session
spark = SparkSession.builder.getOrCreate()

# Read the CSV; header=True keeps the first row as column names
df = spark.read.csv("file.csv", header=True)

# Write the data to HDFS as Parquet
df.write.parquet("hdfs:///tmp/upload")
Alternatively, if you're writing to an Apache Hive table, see https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html
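As a minimal sketch of that route, assuming your Spark build has Hive support and can find your hive-site.xml (the database and table names below are placeholders):

from pyspark.sql import SparkSession

# Enable the Hive metastore integration described in the linked docs
spark = (
    SparkSession.builder
    .appName("csv-to-hive")
    .enableHiveSupport()
    .getOrCreate()
)

# header/inferSchema avoid loading every column as a string named _c0, _c1, ...
df = spark.read.csv("file.csv", header=True, inferSchema=True)

# Writes a managed Hive table; my_db.my_table is a placeholder name
df.write.mode("overwrite").saveAsTable("my_db.my_table")

saveAsTable() creates a managed table owned by Hive; if you want the data to stay at your own HDFS path, write the files as above and define an external table over that location instead.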