
I am using QBApiWrap.QuickbaseApiWrapper.DoQuery in C# to get the records from my table. It was working fine previously, but now that the number of records has increased to 30,000 I am getting the following error:

Errcode : 75

Errtext : Report too large

Errdetails : Maximum number of bytes in report exceeded

How can I get all the records in a single dataset?

I am querying in the following manner:

     DataSet dsEmployee = My_Integration.QBApiWrap.QuickbaseApiWrapper.DoQuery(dbIdEmployee, ticket, "","3.6.7.8.21.15");
Ananth

3 Answers


I just ran into this problem as well with a report that has only 4 fields but over 120K records. You can use API_DoQueryCount to get the total number of records your query will return, in combination with the API_DoQuery options "num-n" (n is the number of records to return) and "skp-n" (n is the number of records to skip), to paginate your way through the records like the pseudocode below:

maxRecords = {API_DoQueryCount}
retrievedRecords = 0
recordChunk = 1000
myCSV = path\to\csv
tempRecords = ""
while (retrievedRecords < maxRecords)
  tempRecords = API_DoQuery&query="yourQuery"&options=num-{recordChunk}.skp-{retrievedRecords}
  myCSV += tempRecords
  retrievedRecords += recordChunk

While the example above is oversimplified and leaves out the step of converting the XML to CSV, I think it gives you a starting idea of how it can be done. More information on the DoQuery options can be found in the Quickbase API documentation.
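
If your wrapper doesn't expose the options parameter (I haven't verified QBApiWrap's overloads), you can call the raw XML gateway yourself. Here is a rough C# sketch of the same loop, with the realm URL, ticket, and field list as placeholders:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Xml.Linq;

    class DoQueryPager
    {
        static async Task Main()
        {
            var http = new HttpClient();
            string realm = "https://yourcompany.quickbase.com"; // placeholder
            string dbid = "your_dbid", ticket = "your_ticket";  // placeholders

            // API_DoQueryCount returns the total match count in <numMatches>.
            var countXml = XDocument.Parse(await http.GetStringAsync(
                $"{realm}/db/{dbid}?a=API_DoQueryCount&ticket={ticket}&query="));
            int maxRecords = (int)countXml.Root.Element("numMatches");

            int retrieved = 0, chunk = 1000;
            while (retrieved < maxRecords)
            {
                // num-n limits the page size; skp-n skips the rows already fetched.
                string xml = await http.GetStringAsync(
                    $"{realm}/db/{dbid}?a=API_DoQuery&ticket={ticket}" +
                    $"&clist=3.6.7.8.21.15&options=num-{chunk}.skp-{retrieved}");
                // Parse the <record> elements out of xml and append them here
                // (e.g. load into a DataSet or convert to CSV rows).
                retrieved += chunk;
            }
        }
    }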

Also, our report stopped returning records sometime after it grew beyond 110,000 records, so it seems they may have expanded the limit they had before. I would also experiment with how many records you return at a time and use the largest safe number you can find, to reduce how many requests your app has to make so that it performs faster.

crumdev

Quickbase has a limit on the size of a single response, which your query is exceeding. To avoid this error, you need to reduce the number of records returned per query. If you aren't actually using all of the returned records in your program, structure your query so that it excludes the records you aren't using. If you do, in fact, need all of the records in that table, you'll need to download them in batches and then merge the datasets.

Unfortunately, I've never programmed in C#, so I can't help you with the syntax, but to download records in batches you can use API_GetSchema on your table's DBID to get the next record ID (table.original.next_record_id). This is the record ID that the next created record will receive, so you know that all of your existing record IDs are less than that number. Using a lower bound of 0 and an upper bound of 20,000, query for records whose record ID falls between the two bounds, merge the results into an existing dataset, increase both bounds by 20,000, and repeat until the lower bound exceeds the next record ID. At the end you should have a single dataset, and code that should scale with your Quickbase table.
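
In C#, that loop could look roughly like the sketch below. The DoQuery call reuses the signature from the question; how you fetch nextRecordId from API_GetSchema depends on your wrapper, so it is assumed here, and field 3 is the built-in Record ID# field:

    // nextRecordId is assumed to come from API_GetSchema
    // (table.original.next_record_id).
    DataSet dsEmployee = new DataSet();
    int batchSize = 20000;
    for (int lower = 0; lower < nextRecordId; lower += batchSize)
    {
        // Fetch one window of record IDs at a time: [lower, lower + batchSize).
        string query = "{3.GTE." + lower + "}AND{3.LT." + (lower + batchSize) + "}";
        DataSet dsBatch = My_Integration.QBApiWrap.QuickbaseApiWrapper.DoQuery(
            dbIdEmployee, ticket, query, "3.6.7.8.21.15");
        dsEmployee.Merge(dsBatch); // combine the batches into a single DataSet
    }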

Nathan Hawe

Unfortunately, Quickbase has a limit on the size of a response. One way around this is to perform the query in chunks of records.

Here is code in Python based on the new Quickbase JSON API:

import pyqb
import requests       # used for the JSON API calls below
import pandas as pd   # used to collect the returned records

qbc = pyqb.Client(url='https://yourcompany.quickbase.com', user_token='')
max_records = qbc.getnumrecords(database='DBID')
retrieved = 0
chunks = []

headers = {
    'QB-Realm-Hostname': 'yourcompany.quickbase.com',
    'User-Agent': '{User-Agent}',
    'Authorization': 'token'  # e.g. 'QB-USER-TOKEN your_token'
}

while retrieved < int(max_records):

    body = {
        "from": "DBID",
        "select": [  # select all the fields you want
            109,
            15,
            1,
            6,
        ],
        "options": {
            "skip": retrieved,
            "top": 25000
        }
    }

    r = requests.post(
        'https://api.quickbase.com/v1/records/query',
        headers=headers,
        json=body)

    data = r.json()
    chunks.append(pd.json_normalize(data['data']))  # save this chunk of records

    retrieved += data['metadata']['numRecords']

df = pd.concat(chunks, ignore_index=True)  # one DataFrame with all records
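
Since the question is in C#, here is a rough equivalent of the same skip/top pagination using HttpClient against the documented JSON endpoint; the realm hostname, user token, table ID, and field IDs are placeholders:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    class JsonApiPager
    {
        static async Task Main()
        {
            var http = new HttpClient();
            http.DefaultRequestHeaders.Add("QB-Realm-Hostname", "yourcompany.quickbase.com");
            http.DefaultRequestHeaders.Add("Authorization", "QB-USER-TOKEN your_token");

            int skip = 0, total = int.MaxValue;
            while (skip < total)
            {
                string body = JsonSerializer.Serialize(new
                {
                    from = "DBID",
                    select = new[] { 3, 6, 7, 8, 21, 15 },
                    options = new { skip, top = 25000 }
                });
                var resp = await http.PostAsync(
                    "https://api.quickbase.com/v1/records/query",
                    new StringContent(body, Encoding.UTF8, "application/json"));
                using var doc = JsonDocument.Parse(await resp.Content.ReadAsStringAsync());
                var meta = doc.RootElement.GetProperty("metadata");
                total = meta.GetProperty("totalRecords").GetInt32();
                skip += meta.GetProperty("numRecords").GetInt32();
                // Append doc.RootElement.GetProperty("data") rows to your DataSet here.
            }
        }
    }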