
I am trying to do some data exploration on a dataset I have. The table I want to import has 11 million rows. Here is the script and the output:

# Load the BigQuery client library (query_exec() comes from bigrquery)
library(bigrquery)

# Creating a variable for our BQ project space
project_id <- 'project space'

# Query
Step1 <-
"
   insertquery
"

# Executing the query from the variable above
Step1_df <- query_exec(Step1, project = project_id, use_legacy_sql = FALSE, max_pages = Inf, page_size = 99000)

Error:

Error in curl::curl_fetch_memory(url, handle = handle) : 
  Operation was aborted by an application callback

Is there a different BigQuery library I can use? I'm also looking to speed up the time it takes to load the data.
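
One alternative I'm considering is the newer bigrquery interface, which splits the work into running the query and downloading the result. Roughly something like this untested sketch, with the same placeholder project ID and query as above:

library(bigrquery)

# Authenticate interactively (assumes OAuth; a service-account key also works)
bq_auth()

project_id <- 'project space'   # placeholder from above

Step1 <- "
   insertquery
"

# Run the query server-side; this returns a reference to the result table, not the data
job_tbl <- bq_project_query(project_id, Step1)

# Download the result into a tibble; the page size is chosen automatically
Step1_df <- bq_table_download(job_tbl)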

Jaskeil
  • BigQuery is fast. The point is to enable fast queries on large datasets. So why not just explore by querying directly? Importing a table this size is probably counterproductive. – matt_black Oct 03 '20 at 08:48
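
A hypothetical illustration of that approach, keeping the aggregation in BigQuery and downloading only the summary (the table and column names below are invented placeholders):

library(bigrquery)

project_id <- 'project space'   # placeholder project ID from the question

# Aggregate server-side; only the grouped summary comes back to R
summary_sql <- "
  SELECT some_category,               -- hypothetical column
         COUNT(*)        AS n_rows,
         AVG(some_value) AS avg_value -- hypothetical column
  FROM `my-project.my_dataset.my_table`  -- hypothetical table reference
  GROUP BY some_category
"

summary_df <- bq_table_download(bq_project_query(project_id, summary_sql))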

0 Answers