
My Oracle 11.2.0.3 full database Data Pump export is very slow. When I query V$SESSION_LONGOPS:

SELECT USERNAME, OPNAME, TARGET_DESC, SOFAR, TOTALWORK, MESSAGE, SYSDATE,
       ROUND(100 * SOFAR / TOTALWORK, 2) || '%' AS COMPLETED
FROM   V$SESSION_LONGOPS
WHERE  SOFAR / TOTALWORK != 1

it shows me 2 records: one with OPNAME SYS_EXPORT_FULL_XX, and another with "Rowid Range Scan". The message for the latter is

Rowid Range Scan : MY_SCHEMA.BIG_TABLE: 28118329 out of 30250532 Blocks done, and it takes hours and hours. For reference: MY_SCHEMA.BIG_TABLE is 220 GB in size and has 2 CLOB columns.
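To see how much of those 220 GB is actually CLOB data (the LOB segments are stored separately from the table segment), a query along these lines can be used. This is a sketch; it assumes access to the DBA_SEGMENTS and DBA_LOBS views:

```sql
-- Sketch: total up the table segment and its LOB segments.
-- Assumes DBA-view access; adjust owner/table names as needed.
SELECT s.segment_name, s.segment_type,
       ROUND(SUM(s.bytes) / 1024 / 1024 / 1024, 1) AS gb
FROM   dba_segments s
WHERE  s.owner = 'MY_SCHEMA'
AND    (s.segment_name = 'BIG_TABLE'
        OR s.segment_name IN (SELECT l.segment_name
                              FROM   dba_lobs l
                              WHERE  l.owner = 'MY_SCHEMA'
                              AND    l.table_name = 'BIG_TABLE'))
GROUP  BY s.segment_name, s.segment_type;
```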

Hedi Fourati

2 Answers


If the table has CLOBs, the export will take a long time because LOB data won't parallelize. Exactly which phase are you stuck in? Could you paste the last lines from the log file, or get a status from Data Pump?
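To get that status, you can attach to the running job from another session and ask for it interactively. A sketch; the job name SYS_EXPORT_FULL_XX is taken from V$SESSION_LONGOPS in the question, and the credentials are placeholders:

```
expdp system attach=SYS_EXPORT_FULL_XX

Export> STATUS
Export> EXIT_CLIENT
```

EXIT_CLIENT detaches your session but leaves the export job running in the background.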

There are some best practices that you could try out:

  • SecureFile LOBs can be faster to export than BasicFile LOBs. That is yet another reason for moving to SecureFile LOBs.

  • You could try increasing STREAMS_POOL_SIZE to at least 256 MB, although I don't think that is the cause here.

  • Use the PARALLEL option and set it to 2 x CPU cores. Never export statistics; it is better to either transfer them using DBMS_STATS or regather them at the target database.
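Put together, a Data Pump parameter file for the full export might look like the following. This is a sketch: the directory object, file names, and parallel degree are assumptions to adapt to your environment (and note that LOB data still goes through a single worker regardless of PARALLEL):

```
# full_export.par -- hedged example, adjust to your environment
FULL=Y
DIRECTORY=DATA_PUMP_DIR
# %U gives each parallel worker its own dump file
DUMPFILE=full_%U.dmp
LOGFILE=full_export.log
# e.g. 2 x CPU cores per the advice above
PARALLEL=8
# regather or transfer statistics with DBMS_STATS instead
EXCLUDE=STATISTICS
```

Run it with something like: expdp system parfile=full_export.par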

Regards, Daniel

  • Worker process 2 status: process name: DW01, status: EXECUTING, object schema: MY_SCHEMA, object name: BIG_TABLE, object type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA, completed objects: 1, total objects: 11,474, completed rows: 141,580,469, worker parallelism: 1. For the 3 other worker processes, the status is WORK WAITING – Hedi Fourati Mar 24 '20 at 12:49
  • Is it possible to increase worker process parallelism? Could it improve Data Pump performance? The PARALLEL option is already enabled and set to 4; my production server has 16 CPUs (cpu_count = 16, parallel_threads_per_cpu = 2). – Hedi Fourati Mar 24 '20 at 12:49
  • Daniel, regarding "Use the PARALLEL option and set it to 2 x CPU cores. Never export statistics - it is better to either transfer them using DBMS_STATS or regather them at the target database.": do you think that increasing this parameter from 4 to, say, 8, 12 or even 16 will improve Data Pump performance and reduce the total export duration, which currently takes over 10 hours? – Hedi Fourati Mar 24 '20 at 12:49
  • If I understand the log extract correctly, the Data Pump operation is still exporting table data, which means you are probably waiting for the LOBs. There is not much more you can do, not even with the parallel option: the LOBs go out in a single process. – Daniel Overby Hansen Mar 25 '20 at 19:41
  • Thank you for your reply. My BIG_TABLE has 150+ million records. If I purge the contents of the CLOB columns by setting them to NULL for, say, 25% or 30% of the rows, do you think that will reduce the size of the table, free up space in the tablespace, and reduce the export duration? – Hedi Fourati Mar 26 '20 at 12:29
  • I am not sure the table itself will necessarily get smaller, but the export should be faster. Otherwise, partitioning the table will help, I am quite sure. – Daniel Overby Hansen Mar 27 '20 at 19:57
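On the purge idea: setting the CLOB columns to NULL does not, by itself, return the space to the tablespace; the LOB segment has to be shrunk (or the table moved) afterwards. A sketch, with a hypothetical column name MY_CLOB_COL and a placeholder filter:

```sql
-- Hypothetical column name and filter; repeat for each CLOB column purged.
UPDATE my_schema.big_table
SET    my_clob_col = NULL
WHERE  <your 25-30% criteria>;
COMMIT;

-- Reclaim the freed LOB space (requires an ASSM tablespace):
ALTER TABLE my_schema.big_table MODIFY LOB (my_clob_col) (SHRINK SPACE);
```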

Well, for 11g and 12cR1 the Streams AQ Enqueue wait is a common culprit for this as well. If the issue is indeed the very common Streams AQ Enqueue wait, running ALTER SYSTEM SET EVENTS 'IMMEDIATE TRACE NAME MMAN_CREATE_DEF_REQUEST LEVEL 6' can help.