
We have a job that loads a number of tables every night from our source DB to our target DB; many of them are partitioned by range or list. Before loading a table we truncate it first, and for some reason this step takes a very long time for particular tables.

For instance, TABLE A has 62 million rows and is list-partitioned on column BRANCH_CODE, with 213 partitions. Truncating this table took 20 seconds.

TABLE B has 17 million rows and is interval (monthly) range-partitioned on the DAY column; every partition has 213 list subpartitions on column BRANCH_CODE. So in this case there are 60 partitions and 12,780 subpartitions. Truncating this table took 15 minutes.

Is the long truncate caused by having too many partitions? Or have we missed some table specification, or should we set specific storage parameters for the table?

Sherzodbek
  • Did you test insert into a temp table, drop purge, then create? – Ali Fidanli Nov 11 '21 at 11:11
  • Do you truncate the whole table, or do you truncate partitions or subpartitions? It looks like a strange design. When you partition or subpartition, you gain the ability to work with those segments independently. Why have so many partitions and subpartitions if you are truncating the whole table? – Roberto Hernandez Nov 11 '21 at 11:11
  • @AliFidanli, do you mean testing it by inserting the table contents into another, non-partitioned temporary table, dropping the target table and then recreating it as a CREATE TABLE AS SELECT? – Sherzodbek Nov 11 '21 at 11:48
  • @RobertoHernandez, I am truncating the whole table by executing a 'truncate table table_name' command. The design was chosen by the table owner on the source side and copied to the target table, but I can change the design on the target DB. My guess is that on the source DB they wanted to benefit from partition pruning, since the table is populated incrementally. We also wanted to use partition pruning on the target DB, but I guess you're right: maybe it's faster to drop the last month's partition and load only that partition? – Sherzodbek Nov 11 '21 at 11:53
  • 1
    if that is the case, and the queries are using partition pruning, then it makes sense the design, but not the population. Perhaps it would be much better to truncate only the partitions of the last month, a kind of incremental approach – Roberto Hernandez Nov 11 '21 at 12:08
  • @RobertoHernandez, yes, after this discussion it makes sense to apply that kind of incremental approach, thanks. But one concern I have is that when I truncate one partition of a table, all global indexes on the table become unusable, so I would have to rebuild them later, and I have no idea how long that might take. – Sherzodbek Nov 11 '21 at 12:21
  • Do you have global or local indexes on these tables? You can always use parallel DDL to improve performance and speed up the whole operation. Let me know if you need an example, but in that case I can only do it in an answer. – Roberto Hernandez Nov 11 '21 at 12:40
  • @RobertoHernandez, do you mean using parallel DDL for rebuilding the indexes? If so, our team has already used parallel DDL with something like 'alter index rebuild parallel c_parall_level nologging'. If you mean another case, then let's see that too. – Sherzodbek Nov 11 '21 at 12:43
  • 1
    @Sherzodbek, it is exactly what I meant. you can always use the clause update indexes of the alter table truncate partition statement. Read the doc about it – Roberto Hernandez Nov 11 '21 at 13:40
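
Following up on the comments above, here is a minimal sketch of that incremental approach: truncate only the monthly partition being reloaded and keep the global indexes usable. The table name TABLE_B, partition name P_2021_10, index name TABLE_B_GIDX and parallel degree 8 are placeholders, not names taken from the question:

-- Truncate only the partition being reloaded; UPDATE INDEXES keeps
-- global indexes usable instead of marking them UNUSABLE.
alter table table_b truncate partition p_2021_10 update indexes;

-- If a global index is still left unusable, rebuild it in parallel
-- and reset the parallel degree afterwards.
alter index table_b_gidx rebuild parallel 8 nologging;
alter index table_b_gidx noparallel;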

1 Answer


Manually gathering fixed object and data dictionary statistics may improve the performance of metadata queries needed to support truncating 12,780 objects:

begin
    -- Gather stats on the X$ fixed tables that back many dictionary views.
    dbms_stats.gather_fixed_objects_stats;
    -- Gather stats on the data dictionary (SYS-owned) tables.
    dbms_stats.gather_dictionary_stats;
end;
/

The above block may take many minutes to complete, but you generally only need to run it once after a significant change to the number of objects in your system. Adding 12,780 subpartitions can cause weird issues like this. (While you're investigating these issues, you might also want to check the space overhead associated with so many subpartitions. It's easy to waste many gigabytes of space when creating that many segments.)
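
For a rough look at that overhead, here is a sketch that sums segment sizes from the standard DBA_SEGMENTS view; the table names are taken from the question and used only as examples:

-- Segment count and total size per table, including all
-- partition and subpartition segments.
select segment_name,
       count(*)                        as segment_count,
       round(sum(bytes) / 1024 / 1024) as total_mb
from   dba_segments
where  segment_name in ('TABLE_A', 'TABLE_B')
group  by segment_name
order  by total_mb desc;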

Jon Heller