
I created a new table from a CSV file with the following code:

%sql

SET spark.databricks.delta.schema.autoMerge.enabled = true;

create table if not exists catlog.schema.tablename;

COPY INTO catlog.schema.tablename
  FROM (SELECT *  FROM 's3://bucket/test.csv') 
    FILEFORMAT = CSV 
    FORMAT_OPTIONS ('mergeSchema' = 'true', 'header' = 'true')

But now I have a new file with additional data. How can I load it into the same table? Please guide.

Thanks.

I need to load the new data file into the Delta table.

Alex Ott
patdev

1 Answer


I tried to reproduce the same in my environment and got the results below.

Make sure the table schema and the CSV file's data types match; otherwise you will get an error.

Please follow the syntax below to insert data from a CSV file:

%sql

COPY INTO <catalog>.<schema>.<table_name>
  FROM "<file_location>/file_3.csv"
  FILEFORMAT = CSV
  FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'true');
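
To load the new file into the existing table, note that COPY INTO keeps track of files it has already ingested, so re-running the same statement against the source location loads only files it has not seen before. A sketch, assuming the new file is uploaded to the same S3 prefix as the original (the path here is a placeholder):

%sql

COPY INTO catlog.schema.tablename
  FROM 's3://bucket/'
  FILEFORMAT = CSV
  FORMAT_OPTIONS ('header' = 'true', 'mergeSchema' = 'true')
  COPY_OPTIONS ('mergeSchema' = 'true');

With `'mergeSchema' = 'true'` in COPY_OPTIONS, new columns in the incoming file are added to the table schema instead of causing an error.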


B. B. Naga Sai Vamsi
  • Thank you for the update, but will it take only the updated data, or will it take all the rows from the file? – patdev Jan 12 '23 at 15:51
  • I also tried this, but I previously ran code that converted the CSV to a Parquet file and created the Delta table (where all columns became string/varchar). Now when I run a COPY command like yours, it fails with a data type mismatch: "failed to merge incompatible data types string and integer". – patdev Jan 12 '23 at 17:00
  • The schema should match; otherwise you will get an error with this approach. – B. B. Naga Sai Vamsi Jan 13 '23 at 03:22
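
One way around the string/integer mismatch raised in the comments is to cast the incoming columns in the SELECT so they match the existing all-string table schema. A sketch only; the column names (`id`, `amount`) and file name are hypothetical and should be replaced with your own:

%sql

COPY INTO catlog.schema.tablename
  FROM (
    -- Cast each column to STRING to match the existing table, which was
    -- created with all string/varchar columns.
    SELECT CAST(id AS STRING) AS id,
           CAST(amount AS STRING) AS amount
    FROM 's3://bucket/test_new.csv'
  )
  FILEFORMAT = CSV
  FORMAT_OPTIONS ('header' = 'true');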