
I found a blog a while back that had the SQL to determine how expensive updating a clustered index column is. SQL Server will do a delete and an insert.

It did something like this:

BEGIN TRAN
UPDATE mytable SET myclusteredindexColumn = 'xyz' WHERE myclusteredindexColumn = 'abc'

Then query a DMV (or something similar) to show how expensive this is, and don't commit.
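
For reference, the full test I have in mind would look roughly like this (mytable and myclusteredindexColumn are just placeholder names, and the measurement query is the piece I'm missing):

BEGIN TRAN;

-- Change the clustered index key value, which is logged as a delete plus an insert
UPDATE mytable
SET myclusteredindexColumn = 'xyz'
WHERE myclusteredindexColumn = 'abc';

-- ... query a DMV here to see how much log the open transaction has generated ...

-- Undo the change so nothing is committed
ROLLBACK TRAN;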

Does anyone know how to determine how large the transaction log will be for the current uncommitted transaction?

-Randy

  • I have never worried about the size of the transaction at that granular a level. Not sure what is to be gained from that. Logically it would be the size of the original data (the entire row) plus the size of the new data (again the entire row) plus a few bytes for overhead. In the grand scheme of things the transaction log size for a single-row update is pretty much nothing unless you have a lot of MAX columns. – Sean Lange Jan 26 '16 at 22:41
  • Here is the script I came up with. Never could find that blog, but I think this is what I was looking for (a combined end-to-end sketch is below): select database_transaction_log_bytes_used from sys.dm_tran_database_transactions where transaction_id = (SELECT top 1 DTST.[transaction_id] FROM sys.[dm_tran_session_transactions] AS DTST WHERE DTST.[session_id] = @@SPID) and database_id = DB_ID() – randy Jan 28 '16 at 02:05
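
Putting that query together with the test from the question, a rough end-to-end sketch (mytable and myclusteredindexColumn remain placeholder names) would be:

BEGIN TRAN;

-- Update the clustered index key; SQL Server logs this as a delete plus an insert
UPDATE mytable
SET myclusteredindexColumn = 'xyz'
WHERE myclusteredindexColumn = 'abc';

-- Log bytes generated so far by this session's open transaction in the current database
SELECT DTDT.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS DTDT
WHERE DTDT.transaction_id =
      (SELECT TOP 1 DTST.transaction_id
       FROM sys.dm_tran_session_transactions AS DTST
       WHERE DTST.session_id = @@SPID)
  AND DTDT.database_id = DB_ID();

-- Don't keep the change
ROLLBACK TRAN;

Because nothing is committed, the rollback leaves the table unchanged while still showing how much log the key update generated.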

0 Answers