Based on your description in the question and your comments, the problem has to do with how garbage collection works in Firebird. Firebird is a so-called Multi-Version Concurrency Control (MVCC) database: each change you make to a row (record), including deletion, creates a new version of that record, while previous versions remain available to transactions that were started before the transaction that made the change committed.
If there are no more transactions 'interested' in a previous version of a record, that previous version becomes eligible for garbage collection. Firebird has two options for garbage collection: cooperative (supported by all server modes) and background (supported by SuperServer), and a third, combined mode, which does both (this is the default for SuperServer).
In the background mode, a dedicated thread cleans up the garbage; it is signalled by active statements when they see garbage.
In the cooperative mode, a statement that sees garbage is also the one that has to clean it up. This can be especially costly when the statement performs a full table scan just after a large update or delete: instead of just finding and returning rows, that statement also rewrites database pages to get rid of the garbage.
See also the slides *Garbage collection mechanism and sweep in details*.
There are some possible solutions:
1. If you're using SuperServer, change the policy by setting `GCPolicy` in `firebird.conf` to `background`. The downside of this solution is that it might take longer before all garbage is collected, but the big benefit is that transactions are not slowed down by doing garbage collection work.
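
   For example, a minimal sketch of the relevant `firebird.conf` line:

   ```
   # GCPolicy can be cooperative, background or combined
   # (combined is the default for SuperServer)
   GCPolicy = background
   ```
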
2. After committing a transaction that produced a lot of garbage, execute a statement that performs a full table scan (e.g. `select count(*) from table`) to trigger garbage collection, using a separate worker thread so it doesn't block the rest of your process. This option only really works if there are no other active transactions that are still interested in those old record versions.
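
   A minimal sketch (`some_table` is a placeholder; any statement that reads every row of the affected table will do):

   ```sql
   -- run on a separate worker connection, after the transaction
   -- that produced the garbage has committed
   select count(*) from some_table;
   ```
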
3. Create a backup of the database (there is no need to restore it, except to verify that the backup worked correctly). By default (unless you specify the `-g` option to disable garbage collection), the `gbak` tool will perform garbage collection during a backup. This has the same restriction as option 2, as it works because gbak does the equivalent of a `select * from table`.
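
   A sketch of such a backup run (paths and credentials are placeholders):

   ```sh
   # garbage collection is on by default; adding -g would disable it
   gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /backups/mydb.fbk
   ```
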
4. Perform a 'sweep' of the database using `gfix -sweep`. This has similar restrictions to the previous two options.
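
   For example (database path and credentials are placeholders):

   ```sh
   gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb
   ```
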
5. For connections that cannot afford the slowdown of garbage collection, specify the connection option `isc_dpb_no_garbage_collect` (details vary between drivers and connection libraries). If you specify this for all connections, and your policy is cooperative (either because it is configured that way, or because you're using the Classic or SuperClassic server mode), then no garbage collection will take place at all, which can cause an eventual slowdown as well, because the engine will have to scan ever longer chains of record versions. This can be mitigated by using the previous two options to perform a garbage collection.
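
   As an illustration only, a sketch with the Python firebird-driver, assuming its `no_gc` connection argument maps to `isc_dpb_no_garbage_collect` (check your own driver's documentation for the exact name):

   ```python
   from firebird.driver import connect

   # Assumption: 'no_gc' corresponds to isc_dpb_no_garbage_collect in this
   # driver; other drivers expose the option differently, or not at all.
   con = connect('localhost:/data/mydb.fdb', user='SYSDBA',
                 password='masterkey', no_gc=True)
   ```
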
6. Instead of really deleting records, introduce a soft delete in your application, marking records as deleted rather than actually deleting them. Either keep those records permanently, or really delete them at a later time, for example with a scheduled job that runs when the database is not under load, combined with one of the previous options to trigger a garbage collection.
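
   A minimal SQL sketch (table and column names are hypothetical):

   ```sql
   -- mark rows instead of deleting them
   alter table orders add deleted_at timestamp;
   update orders set deleted_at = current_timestamp where id = :id;

   -- later, during a quiet period, really delete the old rows ...
   delete from orders where deleted_at < dateadd(-30 day to current_timestamp);
   -- ... and trigger garbage collection with a full scan (see option 2)
   select count(*) from orders;
   ```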