
I'm trying to load data into a Spark DataFrame from a MSSQL/Postgres database that sits behind a firewall.

When I use pipelines and datasets, I can use a Linked service that connects via an integration runtime. How can I do the same from a notebook with a DataFrame? Is there a way to use a Linked service as a source/destination? That would be ideal, similar to how you connect to Cosmos DB.
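
For reference, this is the kind of Linked-service-based read I mean, sketched for Cosmos DB in a Synapse notebook (the linked service and container names are placeholders for your own):

```python
# `spark` is the SparkSession pre-defined in a Synapse notebook.
# "CosmosDbLinkedService" and "MyContainer" are placeholder names.
df = (
    spark.read.format("cosmos.olap")
    .option("spark.synapse.linkedService", "CosmosDbLinkedService")
    .option("spark.cosmos.container", "MyContainer")
    .load()
)
```

I'm hoping something equivalent exists for a SQL source behind an integration runtime.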

Today I load my data via a pipeline, where the source is a Linked service with an integration runtime and the destination is a parquet file in Azure Data Lake Gen2. After that, I load the data from the parquet files into a Spark DataFrame.
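
The second step of that workaround looks roughly like this (the storage account, container, and path are placeholders for my actual Data Lake location):

```python
# `spark` is the SparkSession pre-defined in a Synapse notebook.
# Read the parquet files staged by the pipeline into a DataFrame.
df = spark.read.parquet(
    "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/staging/mytable"
)
```

I'd like to skip the staging step and read from the source database directly in the notebook.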
