Assuming this is Sybase ASE ... and with no idea what, if any, troubleshooting and performance & tuning (P&T) work the OP has performed to date ... some observations, comments and suggestions:
NOTE: Most (all?) of these are going to apply to any program generating a moderate-to-high volume of DB write activity.
Where are the delays?
Have your DBA monitor wait events while you run your process; the wait events should be able to provide details on where the delays are occurring: during parse/compile? IO waits? the dataserver waiting on the client (ie, network and/or client/application delays)?
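If the MDA tables are available, a query along these lines can show where a given session is spending its time; this is just a sketch, and the SPID (123) is a placeholder for your process's actual spid:

    -- top wait events for one session; SPID 123 is a placeholder
    -- assumes the MDA tables are enabled ('enable monitoring',
    -- 'wait event timing', 'process wait events') and you have mon_role
    select w.WaitEventID, i.Description, w.Waits, w.WaitTime
    from   master..monProcessWaits   w,
           master..monWaitEventInfo  i
    where  w.WaitEventID = i.WaitEventID
    and    w.SPID        = 123
    order  by w.WaitTime desc
    go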
Statement caching
The optimizer has to parse, and possibly compile, every one of those INSERT statements.
If this is ASE 15+ and each INSERT is being compiled, this can take a long time. In this case it's typically a good idea to make sure the dataserver has been configured to support statement caching (to disable the compilation phase for statements #2 to #N).
Configuring the dataserver for statement caching means 1) allocating some memory to 'statement cache size' and 2) setting 'enable literal autoparam' to 1.
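By way of example (the size here is purely illustrative; your DBA will want to pick a number appropriate for the host's memory):

    -- 'statement cache size' is allocated in 2K pages;
    -- 10000 (~20MB) is just an example value
    sp_configure 'statement cache size', 10000
    go
    -- replace literals with parameters so otherwise-identical
    -- INSERTs can share a single cached plan
    sp_configure 'enable literal autoparam', 1
    go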
Batching DML statements
Each completed transaction requires a flush of the changed log record(s) to disk before the transaction can be considered 'complete'; if each INSERT runs as its own (implicit) transaction, that's one log flush per row, ie, 100K flushes for 100K INSERTs. The number of log writes to disk can be reduced by grouping several write commands (eg, INSERTs) into a single transaction, which causes the log write(s) to be delayed until a 'commit transaction' has been issued.
While ASE 15+ should have log writes deferred for tempdb activity, it's usually a good practice to group individual DML statements into transactions.
It's not clear (to me) if you're using any sort of transaction management, so I'd suggest adding some, eg, wrapping the inner loop in a 'begin tran' / 'commit tran' pair, as sketched below.
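A rough sketch of what that looks like on the SQL side (the table and columns are made up; a few hundred to a few thousand rows per commit is a common starting point for batch size):

    begin tran
    insert into mytable (id, val) values (1, 'a')   -- hypothetical table/columns
    insert into mytable (id, val) values (2, 'b')
    -- ... more INSERTs, up to the chosen batch size ...
    commit tran

In client code the same idea applies: issue 'begin tran', loop through N INSERTs, issue 'commit tran', repeat.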
External output can be slow
Any program that generates output ... either to a console or a file ... will typically see some degradation in performance due to generating said output (more so if the output is going to a file on a 'slow' disk). Even dumping a lot of output to a console can slow things down considerably due to the OS having to constantly redraw the console (shift all lines up by one, add the new line at the bottom, repeat).
If I'm reading your code properly you're generating a 'print' statement after each insert; so we're talking about 100K 'print' statements, yes? That's a lot of IO requests being sent to the file or console.

I would want to run some timing tests with and without that 'print' statement (after the INSERT) enabled to see if this could be adding (significantly) to your overall run time.
NOTE: I know, I know, I know ... this sounds a bit silly but ... I've seen some processes sped up by 1-2 orders of magnitude simply by limiting/disabling output to a console window. Try running your program without the INSERT and just the 'print' ... how long does it take to scroll 100K lines across a console? how long does it take to print 100K lines to an output/log file?
Bulk inserts
Individual INSERTs are always going to be relatively slow compared to bulk loading capabilities. ASE has a built-in capability for (relatively) fast bulk data loads. At the OS level there is the 'bcp' program. For programming languages (eg, python?) the associated (Sybase/ASE) library should have a bulk insert/copy module.
I'd want to investigate your python/Sybase/ASE lib for some sort of bulk load module and then consider using it to perform the 100K INSERTs.
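As a point of reference, a bare-bones 'bcp' invocation looks something like the following; the database, table, file and server names are placeholders, -c requests character-mode data, -t sets the field terminator, and -b commits a batch every N rows:

    # prompts for the password unless -P is supplied
    bcp mydb..mytable in /path/to/data.txt -c -t '|' -b 1000 -Umylogin -SMYSERVER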