I need to import into my RAC database (Oracle 18c, 3 physical nodes, RHEL 7) with impdp, and I would like to monitor the resource consumption (CPU, RAM and disk) of the 3 nodes, leaving something like a report in a file during the import process. Could you recommend a method to do this? Thank you
-
Are you talking about Data Pump import on RAC? – Roberto Hernandez Aug 08 '20 at 08:06
-
Yes, it is an 18c RAC – miguel ramires Aug 08 '20 at 23:39
1 Answer
I would start by using a simple query to monitor the process:
select x.inst_id, x.job_name, b.state, b.job_mode, b.degree
     , x.owner_name, z.sql_text, p.message
     , p.totalwork, p.sofar
     , round((p.sofar/p.totalwork)*100, 2) done
     , p.time_remaining
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join gv$session y on (y.saddr = x.saddr and y.inst_id = x.inst_id)
left join gv$sql z on (y.sql_id = z.sql_id and y.inst_id = z.inst_id)
left join gv$session_longops p on (p.sql_id = y.sql_id and p.inst_id = y.inst_id)
where y.module = 'Data Pump Worker'
  and p.time_remaining > 0;
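Since you asked for a report in a file, you can wrap that query in a SQL*Plus script that spools its output and run it on a schedule, e.g. from cron on one of the nodes. A minimal sketch, assuming you saved the query above as dp_monitor.sql (the file names and paths are placeholders):

set pagesize 200 linesize 250 trimspool on
column sql_text format a60
spool /tmp/dp_monitor.log append
select to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') sample_time from dual;
@dp_monitor.sql
spool off
exit

Each run appends a timestamped sample to /tmp/dp_monitor.log, which gives you the file-based report you asked about.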
Regarding your point about measuring CPU, memory or disk, I don't know exactly what you want to measure here. I don't know your environment, but it is probably virtualized to some degree, and once you have layers and layers of virtualisation, the measures that were meaningful on physical servers become less so: still important, of course, but not as much as before.
Anyway, if you still want to measure CPU, you can join the query above with gv$sesstat and gv$statname to pull the 'CPU used by this session' statistic for each worker:
select x.inst_id, x.job_name, b.state, b.job_mode, b.degree
     , x.owner_name, z.sql_text, p.message
     , p.totalwork, p.sofar
     , round((p.sofar/p.totalwork)*100, 2) done
     , p.time_remaining
     , n.name, t.value cpu_centiseconds   -- the CPU statistic itself
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join gv$session y on (y.saddr = x.saddr and y.inst_id = x.inst_id)
inner join gv$sesstat t on (t.sid = y.sid and t.inst_id = y.inst_id)
inner join gv$statname n on (t.statistic# = n.statistic# and t.inst_id = n.inst_id)
left join gv$sql z on (y.sql_id = z.sql_id and y.inst_id = z.inst_id)
left join gv$session_longops p on (p.sql_id = y.sql_id and p.inst_id = y.inst_id)
where y.module = 'Data Pump Worker'
  and n.name = 'CPU used by this session'
  and p.time_remaining > 0;
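For host-level CPU and memory per node, you can also sample gv$osstat from inside the database. A minimal sketch; note that BUSY_TIME and IDLE_TIME are cumulative centiseconds since instance startup (diff two samples to derive utilisation), and FREE_MEMORY_BYTES may not be populated on every platform:

-- one row per instance and statistic: CPU count, load and memory
select inst_id, stat_name, value
from gv$osstat
where stat_name in ('NUM_CPUS', 'LOAD', 'BUSY_TIME', 'IDLE_TIME',
                    'PHYSICAL_MEMORY_BYTES', 'FREE_MEMORY_BYTES')
order by inst_id, stat_name;

For per-file disk I/O you can sample gv$iostat_file in the same way.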
If you are seeing performance degradation in Data Pump import operations, there are some tricks to make the import faster, and you always have the option to enable the TRACE parameter during the import operation to check what is happening behind the scenes.
OPTIONS TO IMPROVE PERFORMANCE
You can change the way rows are imported. For example, you might want to try access_method=direct_path (see the sketch after the checklist below).
Take this into consideration when using direct path load. The structure of a table allows a direct path load only if:
- A global index does not exist on a multipartition table during a single-partition load. This includes object tables that are partitioned.
- A domain index does not exist for a LOB column.
- The table is not in a cluster.
- The table does not have BFILE columns or columns of opaque types.
- The table does not have VARRAY columns with an embedded opaque type.
- The table does not have encrypted columns.
- Supplemental logging is not enabled, or it is enabled but the table has no LOB column.
- The table into which data is being imported is a pre-existing table and there is no active trigger, the table is not partitioned, fine-grained access control is not enabled for INSERT mode, no constraint other than a table check constraint exists, and no unique index exists.
- The QUERY and REMAP_DATA parameters were not used for the specified table in the Import Data Pump job.
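If the table qualifies, the access method can be requested explicitly. A sketch using the same placeholder convention as the trace example further down; note that when you force DIRECT_PATH and it is not possible for a table, Data Pump raises an error for that table rather than silently falling back:

impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
LOGFILE=<LOG_NAME>.log ACCESS_METHOD=DIRECT_PATH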
Allow Data Pump operations to run in NOLOGGING with TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y (12c onwards), as shown in the combined example below.
If the Data Pump export was done in PARALLEL and several dump files were created, you should use the same degree of parallelism in the import operation.
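Combined, the two options above might look like this (same placeholders; PARALLEL=4 and the %U dump file template assume the export was taken with four workers, and note that DISABLE_ARCHIVE_LOGGING is ignored if the database runs in FORCE LOGGING mode):

impdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>_%U.dmp \
LOGFILE=<LOG_NAME>.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y PARALLEL=4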
TRACING ORACLE DATAPUMP
Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of Export Data Pump (expdp) or Import Data Pump (impdp). The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually 0300. Any leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive.
You should start with 480300; with that value the Master Control Process (MCP) and the Worker process(es) will be traced. When creating a Data Pump trace file, do not use a different value unless a different trace level is requested by Oracle Support. An example of how to enable trace:
expdp <LOGIN>/<PASSWORD> DIRECTORY=<DIRECTORY_NAME> DUMPFILE=<DUMP_NAME>.dmp \
LOGFILE=<LOG_NAME>.log TABLES=<SCHEMA_NAME>.<TABLE_NAME> TRACE=480300
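The resulting trace files (one for the Master Control Process, *dm*, and one per worker, *dw*) are written to the diagnostic trace directory of each instance, which you can locate with:

select inst_id, value from gv$diag_info where name = 'Diag Trace';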
Remember that user/password must be a user with enough privileges.
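In practice that means a role such as DATAPUMP_IMP_FULL_DATABASE (or DBA) plus read/write on the directory object. A minimal sketch, with app_admin as a placeholder user:

GRANT DATAPUMP_IMP_FULL_DATABASE TO app_admin;
GRANT READ, WRITE ON DIRECTORY <DIRECTORY_NAME> TO app_admin;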
