
I have a DataFrame with 100,000 records that we produced after a series of transformations. Now I have to load all of this data into the COUNTRY_TABLE table in a Synapse dedicated SQL pool. How can I achieve this in Synapse?

A few other queries:

  1. Is it compulsory to create a schema for our DataFrame columns in the dedicated pool table?
  2. How can we overwrite the data in the dedicated pool every time from a query in a Spark notebook? If new data arrives, I want to overwrite the old data with the new data every time.

I have also created a schema for my destination table in the dedicated pool, with all the column names we have in our Spark DataFrame.


1 Answer


You can use the Azure Synapse Dedicated SQL Pool Connector for Apache Spark to load the data from your Spark DataFrame into the dedicated SQL pool, since the number of records is relatively low; a sketch follows below. Another option is to use the COPY statement.
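A minimal sketch of the connector write, assuming an Azure Synapse Spark pool (runtime 3.x) where the Dedicated SQL Pool Connector ships with the runtime; the server and database names are placeholders you must replace:

```python
# Imports for the Dedicated SQL Pool Connector (available in Synapse Spark runtimes).
import com.microsoft.spark.sqlanalytics
from com.microsoft.spark.sqlanalytics.Constants import Constants

# `df` is the 100,000-record DataFrame produced by your transformations.
(df.write
   # Optional: the dedicated SQL endpoint. If omitted, the connector infers it
   # from the workspace and the three-part table name.
   .option(Constants.SERVER, "<workspace-name>.sql.azuresynapse.net")
   # "overwrite" replaces the existing contents of the table on every run,
   # which addresses query 2; use "append" to add to the existing data instead.
   .mode("overwrite")
   # Three-part name of the destination (internal) table in the dedicated pool.
   .synapsesql("<database_name>.dbo.COUNTRY_TABLE"))
```

On query 1: when writing to an internal table, the connector can create the destination table for you from the DataFrame's schema, so pre-creating it is not strictly compulsory; if you have already created it, as you mention, the DataFrame column names and types should match the table definition.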
