
I am trying to import a table that contains 81,462 rows into a DataFrame using the following code:

import pandas as pd
import pyodbc

sql_conn = pyodbc.connect('DRIVER={SQL Server}; SERVER=server.database.windows.net; DATABASE=server_dev; uid=user; pwd=pw')
query = "select * from Product inner join Brand on Product.BrandId = Brand.BrandId"
df = pd.read_sql(query, sql_conn)

The whole process takes a very long time; it has been running for about 30 minutes and is still processing. I assume this is not normal, so how else should I import the data to make it faster?


1 Answer


Thanks to @RomanPerekhrest. Paging the query with OFFSET ... FETCH NEXT imported everything within 1-2 minutes:

SELECT product.Name, brand.Name AS BrandName, description, size
FROM Product
INNER JOIN Brand ON Product.BrandId = Brand.BrandId
ORDER BY Name
OFFSET 1 ROWS FETCH NEXT 80000 ROWS ONLY
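
For completeness, here is a minimal sketch of how this paging approach could be driven from Python, reading the result set in batches with pd.read_sql and concatenating them into one DataFrame. The connection string and column names are taken from the snippets above; the page size of 10,000 and the loop structure are assumptions you may need to adjust.

import pandas as pd
import pyodbc

sql_conn = pyodbc.connect('DRIVER={SQL Server}; SERVER=server.database.windows.net; DATABASE=server_dev; uid=user; pwd=pw')

page_size = 10000  # assumed batch size; tune to your data
chunks = []
offset = 0
while True:
    # ORDER BY is required for OFFSET/FETCH to return stable, non-overlapping pages
    paged_query = (
        "SELECT product.Name, brand.Name AS BrandName, description, size "
        "FROM Product INNER JOIN Brand ON Product.BrandId = Brand.BrandId "
        f"ORDER BY Name OFFSET {offset} ROWS FETCH NEXT {page_size} ROWS ONLY"
    )
    chunk = pd.read_sql(paged_query, sql_conn)
    if chunk.empty:
        break  # no more rows to fetch
    chunks.append(chunk)
    offset += page_size

df = pd.concat(chunks, ignore_index=True)

An alternative that avoids writing the paging SQL yourself is passing the chunksize parameter to pd.read_sql, which streams the result set in batches and returns an iterator of DataFrames.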