
I am trying to ingest 9,000,000 rows into an elastic pool database with 6 vCores. Data ingestion is done with Python (pyodbc).

Since the data is large, I am ingesting it in chunks, roughly as sketched below.
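For reference, a minimal sketch of my ingestion loop (connection details, file, table, and column names are placeholders):

```python
import pyodbc
import pandas as pd

CHUNK_SIZE = 100_000  # rows per chunk

# Connection details are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydb;UID=myuser;PWD=mypassword"
)
cursor = conn.cursor()
cursor.fast_executemany = True  # send each parameter batch in bulk

# Stream the source file and commit one chunk at a time.
for i, chunk in enumerate(pd.read_csv("data.csv", chunksize=CHUNK_SIZE), start=1):
    rows = list(chunk.itertuples(index=False, name=None))
    cursor.executemany(
        "INSERT INTO dbo.MyTable (col1, col2, col3) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
    print(f"chunk {i}: {len(rows)} rows committed")
```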

I am getting weird behaviour after the 9th chunk of the ingestion: the process disappears and then randomly reappears after an hour.

Is there any solution for this?


1 Answer


My suggestion is to use a non-durable memory-optimized table to speed up data ingestion, while managing the In-Memory OLTP storage footprint by offloading historical data to a disk-based columnstore table. Use a scheduled job to regularly batch-offload data from the memory-optimized table to the columnstore table, as sketched below.
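Here is a minimal sketch of that setup from Python, with illustrative table and column names (`dbo.IngestStaging`, `dbo.IngestHistory`). Note that In-Memory OLTP is only available in the Premium / Business Critical tiers of Azure SQL Database and elastic pools:

```python
import pyodbc

# Connection details are placeholders. In-Memory OLTP requires the
# Premium / Business Critical tier.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydb;UID=myuser;PWD=mypassword",
    autocommit=True,
)
cursor = conn.cursor()

# Non-durable (SCHEMA_ONLY) memory-optimized staging table: inserts are
# latch- and log-free, so ingestion is very fast, but its contents are
# lost on restart/failover, so offload them promptly.
cursor.execute("""
IF OBJECT_ID('dbo.IngestStaging') IS NULL
CREATE TABLE dbo.IngestStaging (
    Id   BIGINT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    Col1 INT NOT NULL,
    Col2 NVARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
""")

# Disk-based clustered columnstore table holding the historical data.
cursor.execute("""
IF OBJECT_ID('dbo.IngestHistory') IS NULL
CREATE TABLE dbo.IngestHistory (
    Id   BIGINT NOT NULL,
    Col1 INT NOT NULL,
    Col2 NVARCHAR(100) NOT NULL,
    INDEX cci CLUSTERED COLUMNSTORE
);
""")

def offload_batch():
    """Move the staged rows into the columnstore table and free the
    In-Memory OLTP storage. Call this from a scheduled job."""
    cursor.execute("""
    BEGIN TRANSACTION;
    INSERT INTO dbo.IngestHistory (Id, Col1, Col2)
        SELECT Id, Col1, Col2 FROM dbo.IngestStaging WITH (SNAPSHOT);
    DELETE FROM dbo.IngestStaging WITH (SNAPSHOT);
    COMMIT;
    """)
```

Your application keeps inserting into the staging table exactly as before, while a scheduled job (for example an Azure Elastic Job) calls the offload batch every few minutes to keep the in-memory footprint bounded.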

With that adjustment, Microsoft has demonstrated ingestion of 1.4 million sustained rows per second using In-Memory OLTP combined with a columnstore index.

Alberto Morillo