3

I'm loading big files via Oracle SQL*Loader over VPN from home, and they're taking a lot of time. They were a lot faster to load when I loaded them from work. The files I'm loading are already on my work server.

So my thinking is that the slowdown is caused by the "Commit point reached - logical record count" message that is printed for each row. It must be slow because each message has to be sent over the network. I googled but can't find any way to print fewer of them. I tried adding rows=5000 as a parameter, but I still get the message for each row.

How can I print fewer of the "Commit point reached - logical record count" messages?

Thanks

Superdooperhero

4 Answers

8

You can use the keyword silent, which is available in the options clause. You can silence the following things:

  • HEADER - Suppresses the SQL*Loader header messages that normally appear on the screen. Header messages still appear in the log file.
  • FEEDBACK - Suppresses the "commit point reached" feedback messages that normally appear on the screen.
  • ERRORS - Suppresses the data error messages in the log file that occur when a record generates an Oracle error that causes it to be
    written to the bad file. A count of rejected records still appears.
  • DISCARDS - Suppresses the messages in the log file for each record written to the discard file.
  • PARTITIONS - Disables writing the per-partition statistics to the log file during a direct load of a partitioned table.
  • ALL - Implements all of the suppression values: HEADER, FEEDBACK, ERRORS, DISCARDS, and PARTITIONS.

You would want to suppress feedback.

You can use it either on the command line, for instance:

sqlldr schema/pw@db silent=(feedback, header)

Or in the options clause of the control file, for instance:

options (bindsize=100000, silent=(feedback, errors) )
Barbaros Özhan
Ben
  • Thanks, no more display. Sadly it's still taking the same time to load the data :-( – Superdooperhero Aug 10 '12 at 13:42
  • If you don't have any indexes in your table and you don't mind doing a direct path load try using the `direct=true` option, which should help as would increasing the commit interval (far more than using stdout). Up the commit interval to 20k and it will go quicker. – Ben Aug 10 '12 at 13:50
  • so far so good(+1) thanks, but I think the statement should be without parentheses such as `sqlldr schema/pw@db silent=feedback, header`. – Barbaros Özhan Jan 25 '21 at 10:56
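Putting Ben's comment into a single invocation, a direct-path load with a larger interval might look something like this (the connection string, control file name, and row count are placeholders; note that with direct=true the rows parameter controls the data-save interval rather than a conventional commit):

```
sqlldr schema/pw@db control=mycontrolfile.ctl direct=true rows=20000 silent=(feedback)
```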
0

Try redirecting output and error to a file.

DCookie
0

Ben's answer is very good; prefer it over using a shell program to suppress STDOUT. I had used the following:

sqlldr schema/pw@db mycontrolfile.ctl | sed '/Commit point reached/d'

One of the problems with this is that if you run the command from a script, the exit status of sqlldr is replaced by sed's. To put it another way: if sqlldr fails with an exit status of 1, sed will immediately overwrite it with its own exit status of 0, so the error state is lost.
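A minimal sketch of the effect, with a `( exit 1 )` subshell standing in for a failing sqlldr run:

```shell
# A pipeline's exit status is that of its last command, so sed's
# success (0) masks the failure of the command feeding it.
( exit 1 ) | sed '/Commit point reached/d'
echo "without pipefail: $?"    # prints 0 -- the failure is hidden

# In bash, `set -o pipefail` makes the pipeline report the failure instead.
set -o pipefail
( exit 1 ) | sed '/Commit point reached/d'
echo "with pipefail: $?"       # prints 1
```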

Redirecting STDOUT and STDERR might not be as good as Ben's solution.

Taryn East
Kemin Zhou
0

I had the same problem when calling sqlldr from a shell script.

ERROR:

SQL*Loader: Release 10.2.0.3.0 - Production on Tue Aug 9 20:06:38 2016
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Commit point reached - logical record count 64

SQL Loader failed for the table myschema.ORG_MOVEMENT_RULES. Exiting...

Upon investigation, I found that the control file (v_movementrules_control) had become corrupted when it was copied from a Windows to a Unix system: every line ended with the DOS/Windows line-ending character ^M.

sqlldr userid=${USER}/${PASS}@${DB} control=${v_OrgRules_Control}
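The stray ^M characters are carriage returns. A quick way to check for and remove them (the control file name here is a placeholder; dos2unix does the same job if it is installed):

```shell
# Simulate a control file copied from Windows: each line ends in \r\n.
printf 'load data\r\ninfile mydata.dat\r\n' > mycontrolfile.ctl

# Detect: count lines that contain a carriage return.
grep -c "$(printf '\r')" mycontrolfile.ctl    # prints 2

# Fix: delete every carriage return, leaving plain Unix line endings.
tr -d '\r' < mycontrolfile.ctl > mycontrolfile.fixed.ctl
```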
Martin Tournoij