
I am using an Oracle Exadata cloud environment. I have a table that stores raw data imported from Excel through an application; all of its columns are VARCHAR2(100 CHAR). A simple SELECT statement returns all the records that failed validation, and in the application I use OracleDataReader to fetch them. The query takes about 15 minutes to return 308K records. I ran it with plan statistics, shown below. There is a composite index on the Batch_Id and User_Id columns. What can be done to improve the performance? This is a very simple SELECT statement without any joins.

SELECT /*+ gather_plan_statistics */
       ExcelRowNumber,
       Program_Number,
       Program_Number_Source,
       Invoice_Number,
       Invoice_Amount,
       Unit_Number,
       Customer_Number,
       ErrorText
  FROM MKTG.STG_UNIT_INVC_CALC
 WHERE BATCH_ID = 2038326851 AND USER_Id = 'JAY' AND ErrorText IS NOT NULL                    

(screenshot: execution plan with row-source statistics)

  • If the issue is transferring that data across the network, then maybe you need to adjust [fetchSize](https://docs.oracle.com/database/121/ODPNT/OracleDataReaderClass.htm#i1004048)? – Alex Poole Feb 28 '20 at 12:03
  • I tried the query in SQL Plus & Toad with same result. I will still try to learn about fetchSize – Jay Feb 28 '20 at 12:09
  • 1
    Running locally on the database server, or also remotely? How long do they take if you do `select count(*) from ( – Alex Poole Feb 28 '20 at 12:14
  • 1
    *" tried the query in SQL Plus & Toad with same result"* Those are clients too, so whatever network issues you have will be an issue for all of them. The plan says it took less than a second in the database. We would expect Exadata to be pretty fast with Full Table Scan, as it's literally engineered to crunch through large volumes of data. It seems unlikely using an index will improve things, especially as the plan suggests your query is returning all the records in the query. – APC Feb 28 '20 at 12:19
  • I tried `select count(*)` & it returned 354484 records in 651 msecs. I have tried the query with & without index. couple of minutes faster with index, but not much. stats are up to date. – Jay Feb 28 '20 at 13:28
  • 1
    Were the stats the same for the count version? Assuming Oracle didn't optimise that to do far less work somehow, that indicates the problem is getting the data across the network from the DB to your clients. If all the columns are really 100 chars (even the ones that look like numbers?) that should only be about 240Mb, so that looks like a fairly severe network issue; but doesn't appear to be a DB problem. – Alex Poole Feb 28 '20 at 13:37
  • Thanks Alex. I will try to diagnose in that direction. – Jay Feb 28 '20 at 15:10
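Since the plan shows the database finishing in under a second, the time is almost certainly spent fetching 308K rows across the network in many small round trips. In ODP.NET, `OracleDataReader.FetchSize` (which defaults to 128 KB) controls how many bytes are fetched per round trip, and `RowSize` gives the size of one row for the current result set. A minimal sketch of raising it (the connection string is a placeholder, and the 10,000-rows-per-fetch figure is an illustrative assumption to tune for your network):

```csharp
using System;
using Oracle.ManagedDataAccess.Client;  // ODP.NET managed driver

class FetchSizeDemo
{
    static void Main()
    {
        // Placeholder connection string -- substitute your own.
        using (var conn = new OracleConnection("User Id=app;Password=***;Data Source=exadata"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "SELECT ExcelRowNumber, Program_Number, Program_Number_Source, " +
                    "Invoice_Number, Invoice_Amount, Unit_Number, Customer_Number, ErrorText " +
                    "FROM MKTG.STG_UNIT_INVC_CALC " +
                    "WHERE Batch_Id = :batchId AND User_Id = :userId AND ErrorText IS NOT NULL";
                cmd.Parameters.Add("batchId", OracleDbType.Int64).Value = 2038326851L;
                cmd.Parameters.Add("userId", OracleDbType.Varchar2).Value = "JAY";

                using (var reader = cmd.ExecuteReader())
                {
                    // RowSize is the byte size of one row in this result set.
                    // Fetching ~10,000 rows per round trip instead of the 128 KB
                    // default reduces 308K rows to a few dozen round trips.
                    reader.FetchSize = reader.RowSize * 10000;
                    while (reader.Read())
                    {
                        // process the row...
                    }
                }
            }
        }
    }
}
```

Set `FetchSize` before the first `Read()` so it applies to every fetch; the right multiplier depends on row width and available client memory, so it is worth measuring a few values rather than assuming 10,000 is optimal.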

0 Answers