Whenever I run a small statement, for example:
DELETE FROM c_ordertax WHERE (c_order_id,c_tax_id) IN ((183691598,1000862),(183691198,1000862));
It executes perfectly. But if I execute a lengthy statement to delete, say, 18755 records with these scalar values, it fails with "max_stack_depth exceeded". This option in postgresql.conf has been set to 2MB, and the query that threw the error doesn't even amount to 2MB; it's just about 300kB.
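For reference, this is how I check the current value and (as superuser) override it for a session; the 4MB figure is just an example:

SHOW max_stack_depth;        -- reports 2MB in my case
SET max_stack_depth = '4MB'; -- session-level override, superuser only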
Note: no triggers are attached to the table.
One thing I noticed about other queries: when I use single-column values in the IN clause, e.g. DELETE FROM c_ordertax WHERE (c_order_id) IN ((183691598),(183691198));
they don't have any issues, and however lengthy the query may be, it executes perfectly.
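I suspect (this is only a guess on my part) that the single-column form is rewritten internally as an array comparison, which would explain the different behaviour; its explicit equivalent would be:

DELETE FROM c_ordertax WHERE c_order_id = ANY (ARRAY[183691598, 183691198]);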
My current options are:
- I could increase the max_stack_depth value, but it is limited to 8MB on my machine, and increasing it further causes issues: the PostgreSQL server won't restart. It only restarts properly if the option is set to a value below 8MB (presumably because it must stay below the operating system's stack size limit).
- I could split those statements into batches, but that might not be a graceful solution. It also requires knowing the maximum number of scalar values that can be accommodated in a single statement, and I fear that as the number of fields per scalar value grows, the total number of values usable in one statement shrinks (see the sketch after this list).
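One alternative I'm considering instead of blind splitting (a sketch only; the numeric column types are my assumption, since I don't have the table definition at hand): load the pairs into a temporary table and delete via a join, which avoids the long IN list altogether.

BEGIN;
CREATE TEMP TABLE pairs_to_delete (c_order_id numeric, c_tax_id numeric) ON COMMIT DROP;
-- in practice the 18755 pairs would be bulk-loaded, e.g. with COPY
INSERT INTO pairs_to_delete VALUES
    (183691598, 1000862),
    (183691198, 1000862);
DELETE FROM c_ordertax t
USING pairs_to_delete p
WHERE t.c_order_id = p.c_order_id
  AND t.c_tax_id   = p.c_tax_id;
COMMIT;

But I'd still like to understand the limit itself, hence the question below.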
So my question is: what is the maximum number of scalar values that can be used in an IN clause? And if the number of fields per scalar value increases, is there a formula to determine the maximum number of scalar values that can be used? E.g.:
5 values with 2 fields => ((1,2),(1,2),(1,2),(1,2),(1,2))
2 values with 3 fields => ((1,2,3),(1,2,3))
Has any database mastermind encountered these kinds of issues? If so, how do I tackle them?