Mostly it depends on what you want to achieve; usually you have to trade one thing off against another. For example, I am deleting 3 million records that are no longer accessed by my users, using a stored procedure.
If I execute the delete all at once, the lock gets escalated to a table lock and my other users start getting timeouts in our applications, because SQL Server has locked the whole table to give the deletion process better performance (I know the question is not specific to SQL Server, but this could help debug the problem). Lock escalation is attempted once a single statement holds roughly 5,000 locks, so if you have such a case, you should never go for a batch larger than 5,000 rows (see Lock Escalation Thresholds).
With my current plan, I am deleting 3,000 rows per batch, so only key locks are taken, which is good; I commit after every half a million records processed.
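The approach above can be sketched in T-SQL roughly as follows; the table name, filter column, and cutoff value are hypothetical placeholders, and the batch size and commit interval are the ones from my setup:

```sql
-- Batched delete sketch: 3,000 rows per DELETE (below the ~5,000-lock
-- escalation threshold), committing every ~500,000 rows.
DECLARE @batchRows INT = 1;
DECLARE @sinceCommit BIGINT = 0;

BEGIN TRANSACTION;
WHILE @batchRows > 0
BEGIN
    -- dbo.ArchiveTable / LastAccessed are placeholder names
    DELETE TOP (3000)
    FROM dbo.ArchiveTable
    WHERE LastAccessed < '2020-01-01';

    SET @batchRows = @@ROWCOUNT;
    SET @sinceCommit += @batchRows;

    IF @sinceCommit >= 500000
    BEGIN
        COMMIT TRANSACTION;   -- release locks, let log truncation catch up
        SET @sinceCommit = 0;
        BEGIN TRANSACTION;
    END
END
COMMIT TRANSACTION;
```

Because each `DELETE TOP (3000)` acquires well under 5,000 locks, SQL Server keeps taking key locks instead of escalating to a table lock, and other sessions can keep working on the untouched rows.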
So, if you do not have simultaneous users hitting the table, you can delete a huge number of records in one go, provided your database server has enough log space and processing power; but 1 trillion records is a mess, and you are better off with a batch-wise deletion. If those 1 trillion records are all the records in the table and you want to delete every one of them, then I'd suggest going for a truncate instead.
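For the empty-the-whole-table case, the truncate is a one-liner (table name is a placeholder again). Note that `TRUNCATE TABLE` is minimally logged, resets any identity column, and is refused if the table is referenced by a foreign key:

```sql
-- Deallocates the table's pages instead of logging each row delete
TRUNCATE TABLE dbo.ArchiveTable;
```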