
I'm using the Extensible Storage Engine to cache a number of large records; each record is about 2 MB.

They are usually deleted within a few hours; it's rare for one to live longer than that.

I'm in a tight loop of JetBeginTransaction/JetPrepareUpdate/JetSetColumns/JetUpdate/JetCommitTransaction; each iteration of the loop writes one 2 MB record. The data being written is preallocated and already in RAM, so my producer shouldn't be costing any CPU or disk time.
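For reference, the loop looks roughly like this (a sketch, not my exact code: the instance/session/table setup and error handling are elided, and `sesid`, `tableid`, `columnid`, `buffer`, and `cbBuffer` are placeholder names):

```c
#include <windows.h>
#include <esent.h>

/* Sketch of the write loop. Assumes sesid, tableid and columnid were
   obtained from the usual JetInit/JetBeginSession/JetOpenTable setup.
   Each commit here is fully durable (grbit = 0), i.e. it forces a
   synchronous log flush per 2 MB record. */
void write_records(JET_SESID sesid, JET_TABLEID tableid,
                   JET_COLUMNID columnid,
                   const void *buffer, unsigned long cbBuffer,
                   int nRecords)
{
    for (int i = 0; i < nRecords; i++) {
        JetBeginTransaction(sesid);
        JetPrepareUpdate(sesid, tableid, JET_prepInsert);
        JetSetColumn(sesid, tableid, columnid, buffer, cbBuffer, 0, NULL);
        JetUpdate(sesid, tableid, NULL, 0, NULL);
        /* Passing JET_bitCommitLazyFlush here instead of 0 would make the
           commit lazy (no synchronous flush per iteration) -- one obvious
           tuning knob to experiment with. */
        JetCommitTransaction(sesid, 0);
    }
}
```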

I'm measuring very slow performance, on the order of 2 MB/s.

Using procmon I see lots and lots of tiny reads and writes (512 bytes, 4096 bytes, and many writes around 30 KB). The largest writes I see are 393,216 bytes, which I believe is the default for JET_paramMaxCoalesceWriteSize.
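If that parameter really is the ceiling, it can in principle be raised before the instance starts. A minimal sketch, assuming the parameter is settable on this OS/ESENT version (worth verifying against the docs) and using an arbitrary 1 MB value, not a measured recommendation:

```c
#include <windows.h>
#include <esent.h>

/* Hypothetical tuning sketch: raise the write-coalescing limit from the
   393,216-byte default before JetInit. Instance name and the 1 MB value
   are placeholders; error handling elided. */
JET_INSTANCE instance = 0;
JetCreateInstance(&instance, "cache");
JetSetSystemParameter(&instance, JET_sesidNil,
                      JET_paramMaxCoalesceWriteSize,
                      1024 * 1024, NULL);
JetInit(&instance);
```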

This feels like a tuning problem. What can I do to increase the performance of large writes? I'm almost two orders of magnitude off what this hardware should be capable of delivering.

stuck
  • You could try doing this as a parallel loop... however, I am curious why you think the performance should be better. What is your total disk read/write throughput? Seek times could be killing you. Also, is a virus scanner running in the background while you do this? GL – John Sobolewski Feb 19 '11 at 21:32
  • 2 MB/s is very, very slow for large I/Os; with a SATA disk I should be seeing much closer to 120 MB/s. I suspect a combination of seek times and small writes is killing me, and addressing those will probably yield a solution. Keep in mind I'm making huge writes; for contiguous I/O it doesn't get much better than 2 MB per write. I don't have any scanners or search indexers on, but I do have VSS turned on. – stuck Feb 19 '11 at 21:53
  • Are you using lazy transactions? – Laurion Burchall Feb 22 '11 at 16:32

0 Answers